
Build pages from data source

Open regisphilibert opened this issue 5 years ago • 67 comments

Currently Hugo handles internal and external data sources with getJSON/getCSV, which is great for using a data source in a template.

But Hugo cannot take a data set of items and build a page for each of them, plus the related list pages, the way it does from the content directory files.

Here is a fresh start on speccing this important step in the roadmap.

As a user, I can only see the configuration aspect of the task.

I don’t see many configuration-related issues except for the mapping of the key/values collected from the data source and the obvious external or internal endpoint of the data set. The following are suggestions for how users could manage those configurations, followed by a code block example.

Endpoint/URL/Local file

Depending on the use case, there may be a need for one or several URLs/paths.

For many projects, not every page type (post, page, etc.) will be built from the same source. Type could be defined from a data source key or as a source parameter.

I suppose there could be other parameters per source.

Front Matter mapping

Users must be able to map the keys from the data source to Hugo’s commonly used Front Matter variables (title, permalink, slug, taxonomies, etc.). Every key not referenced in the mapping configuration could be stored as-is as a user-defined Front Matter value available in the .Params object, but this should not be the default, as there may be way too many.

Example:

This is a realtor agency, a branch of a bigger one.

Their pages are built with Hugo's local Markdown.

They have an old WordPress site whose > 100 blog posts they did not want to convert to Markdown, so they load those blog posts from a local data file on top of Hugo's own local Markdown posts.

They use a third-party service to create job posts when they need to fill a new position, but they want to host those job listings on their own site. Their jobs are served by https://ourjobs.com/api/client/george-and-son/jobs.json

The most important part of the website is their realty listings. They add their listings to their parent company's own website, whose API in turn serves them at https://api.mtl-realtors/listings/?branch=george-and-son&status=available

Configuration

title: George and Son (A MTL Realtors Agency)

dataSources:
  - source: data/old_site_posts.json
    contentPath: blog
    mapping: 
      Title: post_title
      Date: post_date
      Type: post_type
      Content: post_content
      Params.location.city: post_meta.city
      Params.location.country: post_meta.country

  - source: https://ourjobs.com/api/client/george-and-son/jobs.json
    contentPath: jobs
    mapping: 
      Title: job_label
      Content: job_description

  - source: https://api.mtl-realtors/listings/?branch=george-and-son&status=available
    contentPath: listings/:Type/
    grabAllFrontMatter: true
    mapping: 
      Type: amenity_kind
      Title: name
      Content: description
      Params.neighbourhood: geo.neighbour
      Params.city: geo.city

Content structure

This results in a content "shadow" structure. Solid-line dirs/files are local, while dashed ones are remote.

content
├── _index.md
├── about.md
├── contact.md
├── blog
│     ├── happy-halloween.md
│     ├── merry-christmas.md
│     ├- - nice-summer
│     └- - hello-world
├- - listings
│     ├- - appartment
│     │   ├- - opportunity-studio
│     │   ├- - mile-end-condo
│     │   └- - downtown-tower-1
│     └- - house
│         └- - cottage-green
└- - jobs
      ├- - marketing-director
      └- - accountant-internship

regisphilibert avatar Aug 14 '18 17:08 regisphilibert

Thanks for starting this discussion. I suspect we have to go some rounds on this to get to where we want.

Yes, we need field mapping. But when I thought about this problem, I imagined something more than a 1:1 mapping between an article with a permalink and some content in Hugo. I have thought about it as content adapters. I think it even helps to think of the current filesystem as a filesystem Hugo content adapter.

So, if this is how it looks on disk:

content
├── _index.md
├── blog
│   └── first-post
│       ├── index.md
│       └── sunset.jpg
└── logo.png

What would the above look like if the data source was JSON or XML? Or even WordPress?

It should, of course, be possible to set the URL "per post" (like it is in content files), but it should also be possible to be part of the content section tree with permalink config per section, translations etc. So, when you have 1 content dir + some other data sources, it ends up as one merged view.

bep avatar Aug 14 '18 17:08 bep

As data sources are usually a flat list of items, I suppose building the content directory structure will require some more mapping.

There are the type and section keys to be used, as well as maybe others that would help position the item in the content structure. There could also be a url source parameter designed the same way as the global config one, except it would take any of the mapped keys as a pattern (I'll update my example after this):

url: /:Section/:Title/

I suppose there is no way around having many source configuration params/mappings, which Hugo may need in order to best adapt the data source to the desired structure. It may even mean using some pattern/regex/glob, like the url suggestion above.
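
For instance, reusing the listings source from the issue description, a per-source variant of the url pattern could look like this (the per-source url key is purely a suggestion):

dataSources:
  - source: https://api.mtl-realtors/listings/?branch=george-and-son&status=available
    url: /:Section/:Title/
    mapping:
      Section: amenity_kind
      Title: name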

As for the default structure: if there is no configured data source with a type parameter of blog, then Hugo will build it from content; the rest would be built from data sources (supposing we have a Page Bundle toggle and media mapping). See this real content merged with the data source "phantom" structure:

content
├── _index.md
├── blog
│   └── first-post
│       ├── index.md
│       └── sunset.jpg
├ - - - recipe  (from data source)
│        └- - - first-recipe
│               ├ - - - index                
│               └ - - - cake-frosting.jpg
└── logo.png

regisphilibert avatar Aug 14 '18 18:08 regisphilibert

@bep now I understand more fully what you meant (I think). The config needs to tell Hugo how to model the content structure so it can build its pages from that. In a sense we are not building pages from a data source; we are building a content structure from both local content and remote data sources, which Hugo will interpret and build pages from.

To reflect this, I added a better project example to the description to illustrate both the configuration possibilities and the resulting "content" structure. This is a project we can add to in order to better spec what this feature should achieve.

regisphilibert avatar Aug 16 '18 16:08 regisphilibert

@regisphilibert I have been thinking about this, and I think the challenge with all of this isn't the mapping (we can experiment until we get a "working and good looking scheme"), but more the practical workflow -- esp. how to handle state/updates.

  • As an editor, I would love it if my site (including content) was as static as possible at commit time (v1.3.0 of Hugo Times is this).
  • That is, if I, the editor, looked at the Netlify preview on GitHub and pushed merge, I would be sadly disappointed if I then ended up with something completely different.
  • I think this is an often overlooked quality of static sites: Versioned content.

I understand that in a dynamic world with JS APIs etc., the above will not be entirely true, always. But it should be a core requirement whenever possible.

A person in another thread mentioned GatsbyJS's create-source-plugin.

I don't think their approach of emulating the file system is a good match for Hugo, but I'm more curious about how they pull in data.

Ensure local data is synced with its source and 100% accurate. If your source allows you to add an updatedSince query (or something similar) you can store the last time you fetched data using setPluginStatus.

This is me guessing a little, but if I commit my GatsbyJS site with some create-source-plugin sources to GitHub and build on Netlify, those sources will be pulled completely on every build (which I guess is also sub-optimal in the performance department). I suspect setPluginStatus is a local thing and updatedSince is a way to speed up local development.

Given the above assumptions, the Gatsby approach does not meet the "static content" criteria above. I'm not sure how they can assure that the data is "100% accurate", but the important part here is that you have no way of knowing if the source has changed.

So, I was tinkering with:

  1. Adding a sqlite3 database as a "build cache"
  2. Adding a "prepare step" (somehow) that exports the non-file content sources out into a merge-friendly text format (i.e. consistent ordering etc.)

The output of 2) is what we use to build the final site.

There are probably some practical holes in the above. But there are several upsides. sqlite3 has some very interesting features (which could enable more cool stuff), so if you wanted to make that the "master", you could probably edit your data directly in the DB, and you could probably drop the "flat file format" and put your DB into source control ... This is me thinking out loud a little.
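
To make 2) a bit more concrete, the exported "merge-friendly text format" could be something like one JSON record per content item, with sorted keys and records ordered by path so that diffs stay stable (the shape below is entirely hypothetical):

{"content": "...", "date": "2018-10-31", "path": "recipe/first-recipe/index.md", "title": "First recipe"}
{"content": "...", "date": "2018-11-01", "path": "recipe/second-recipe/index.md", "title": "Second recipe"}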

bep avatar Nov 02 '18 07:11 bep

That is, if I, the editor, looked at the Netlify preview on GitHub and pushed merge, I would be sadly disappointed if I then ended up with something completely different

I'm not sure about this. And I apologize in advance if my limited understanding of the technology/feature at hand biases my view.

I guess most of the use cases for this will be using Contentful or the WordPress REST API or Firebase to manage your content, and letting Hugo build the site from this remote source plus maybe a few other ones (remote and local). In this use case, the editor will not see Markdown, and probably not the Netlify preview or that merge button, but only the Contentful or WordPress dashboard, and will create/edit their content from there. When a new page is published out of the draft zone, the editor will expect it to be visible on the site with little regard to the repo status. On bigger sites where several editors work at the same time, Hugo's build speed will help make sure the website can be "refreshed" often in order to keep up with content editing.

But this does not change the fact that we need caching and a way to efficiently tell the difference between the cached source and the remote one.

In order to handle the "when", by which I mean the decision between calling the remote source or using the cached one, I was thinking about a setting per source indicating at what rate it should be checked. If the setting is one hour, then Hugo would check the cached source's timestamp and, if older than one hour, call the remote. It would then use and cache the remote source only if it differs from the cached one. (Maybe using a hash to compare cached vs remote?)
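
As a sketch of that idea, with an invented setting name:

dataSources:
  - source: https://api.mtl-realtors/listings/?branch=george-and-son&status=available
    checkEvery: 1h   # hypothetical: call the remote only if the cached copy is older than this
    # on a check, compare e.g. a hash of the remote payload against the cached one,
    # and only invalidate the cache (and rebuild) when they differ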

I'm not sure I understand the process described with sqlite3. Would this mean having a database inside Hugo? 🤔

regisphilibert avatar Nov 02 '18 14:11 regisphilibert

My talk about "database etc." clutters the discussion. This process cannot be stateless/dumb, was my main point. With 10 remote resources, some of them possibly out of your control, you (or I) would want some kind of control over:

  1. If it should be published.
  2. Possibly also when it should be published.

None of the above allows for a simple "pull and push". So, if you do your builds on a CI server (Netlify), but do your editing on your local PC, that state must be handled somehow so Netlify knows ... what. Note that the answer to 1) and 2) could possibly be to "publish everything, always", if that's your cup of tea.
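
As a sketch, per-source answers to 1) and 2) could be expressed like this (setting names and values are invented):

dataSources:
  - source: https://example.com/articles.json
    publish: always     # hypothetical: re-pull and publish on every build
  - source: https://example.com/events.json
    publish: snapshot   # hypothetical: publish only the state that was reviewed at merge time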

bep avatar Nov 02 '18 15:11 bep

Note that the answer to 1) and 2) could possibly be to "publish everything, always", if that's your cup of tea.

Yeah, maybe some people want it or will default to it, but offering more control is definitely a must-have, I think.

So, if you do your builds on a CI server (Netlify), but do your editing on your local PC, that state must be handled somehow so Netlify knows ...

True, but I didn't really see it as Hugo's business. In my mind, a CI pipeline would have to be put in place above Hugo. So when the source is edited (using Contentful or another service), the CI is notified and can run something like hugo --fetch-source="contentful".

Or simple cron jobs (I don't know what to call those in the modern JAMstack) could be set up so the website is built every hour with hugo --fetch-source="contentful" and every day with hugo --fetch-source="events,weather".
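
For illustration, such a schedule could be declared along these lines (a made-up YAML CI config; --fetch-source is the imagined flag from above):

schedules:
  - cron: "0 * * * *"    # every hour
    command: hugo --fetch-source="contentful"
  - cron: "0 0 * * *"    # every day
    command: hugo --fetch-source="events,weather"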

regisphilibert avatar Nov 02 '18 15:11 regisphilibert

OK, I'm possibly overthinking it (at least for a v1 of this). But for the above to work at speed and for big sites, you need a proper cache you can depend on. I notice the GatsbyJS WordPress plugin saying that "this should work for any number of posts", but if you want this to work for your 10K WP blog, you really need to avoid pulling down everything all the time. I will investigate this vs Netlify and CircleCI.

bep avatar Nov 03 '18 10:11 bep

but if you want this to work for your 10K WP blog, you really need to avoid pulling down everything all the time

Yes. Time is of the essence! I can't imagine how long Gatsby would take to build a 10K-post WP blog, considering it already takes 18s to build the hello-world starter kit.

And this is precisely why big content projects want to turn to Hugo.

regisphilibert avatar Nov 05 '18 16:11 regisphilibert

After spending some time playing with the friendly competition and its data source solutions, it becomes apparent that one of the biggest challenges of the current issue (now that Front Matter mapping will be taken care of by #5455) will be how the user can define everything Hugo needs to know in order to

  1. efficiently connect to a remote or local data source,
  2. retrieve the desired data,
  3. and merge it into its content system (path etc...).

3 will be unique to each project and potentially each source. On the other hand, 1 and 2 will be, for the most part, constant for many data sources, like the WordPress API or Contentful. For example, for a source of type WordPress REST API, Hugo will always use the same set of endpoints, plus a few custom ones potentially added by the user. It will also systematically use the same parameter to fetch paginated items.

We could group the settings of 1 and 2 into one Data Source Type (DST). Then, along the lines of Output Formats and MediaTypes, any newly defined Data Source could use X or Y Data Source Type.

This way, any DST could potentially be:

  • Reusable within one project without repeating the same lengthy settings (2 different WordPress APIs for one website)
  • Shared among users as setting files.
  • Built-in

Rough example of DataSourceType/DataSources settings:

DataSourceTypes:
  - name: wordpress
    endpoint_base: wp-json/v2/
    endpoints: ['posts', 'page', 'listings']
    pagination: true
    pagination_param: page=:page
    [...]

DataSources:
  - source: https://api.wordpress.blog.com/
    type: wordpress
    contentPath: blog/
    [...]

regisphilibert avatar Nov 22 '18 18:11 regisphilibert

I wanted to throw this into the discussion because it's a demonstration of how I generated temporary .md files from two merged sets of JSON data (Google Sheets API). These .md files are only generated and used during compilation and are not saved into the repository.

https://www.bryanklein.com/blog/hugo-python-gsheets-oh-my/

This is a fairly simple script, but you can see that I needed to filter the data source and map the 2 source JSON data sets to front matter parameters per page.

bwklein avatar Dec 02 '18 15:12 bwklein

@bwklein Thanks for this very informative input, but... this belongs in a "tips and tricks" thread on the Discourse, which could mention this issue. Not the other way around :)

PS: This really belongs there, people would love to read this I'm sure.

regisphilibert avatar Dec 03 '18 14:12 regisphilibert

@regisphilibert I'm looking for exactly the same thing! Having JSON parts in a page is simple, but generating posts from JSON... Headless CMS to Hugo 👍

itwars avatar Dec 30 '18 19:12 itwars

It goes without saying that the "DataSourceTypes" mentioned above could be distributed as Hugo Components (theme components).

regisphilibert avatar Jan 26 '19 14:01 regisphilibert

Will having some logic in md files also be a feature on the roadmap?

For example:

{{ $jsonResponse := getJSON $apiUrl "https://example.com/xyz/blabla" }}
{{ range $jsonResponse.data }}
---
title: "{{ .id | default "default title" }}"
---
{{- end -}}

So this way, the title can be dynamic for routes, permalinks, etc.

ZhenWang-Jen avatar Mar 12 '19 19:03 ZhenWang-Jen

👍

suzel avatar Mar 17 '19 21:03 suzel

This thing has been bugging me for a while now (especially as I wanted to make it easier for contributors on my website to add content without manually creating folders & files), so I wrote a temporary solution: kidsil/hugo-data-to-pages

It's a JS wrapper that generates pages from data (and it cleans up after itself!). I'm currently using it to generate pages from YAML files, and it seems to be working perfectly. Would definitely appreciate some feedback.

Cheers

kidsil avatar Jul 18 '19 20:07 kidsil

With #6041, it seems convenient that any data source could be assigned to a "directory" in the file system.

If this route is chosen, we could, in order to add the remote "jobs" source from this issue's description example, simply add a content/jobs/_index.md to the project and handle any config/FM/data source info from there.

Using the same example as above:

# config.yaml
DataSourceTypes:
  - name: wordpress
    endpoint_base: wp-json/v2/
    endpoints: ['posts', 'page', 'listings']
    pagination: true
    pagination_param: page=:page
    [...]
# content/jobs/_index.md
DataSource:
    source: https://api.wordpress.blog.com/
    type: wordpress
cascade:
   permalinks: /our-job-offers/:title

regisphilibert avatar Aug 11 '19 13:08 regisphilibert

I am looking into Hugo, and this would be the killer feature allowing me to reach my goal. I am trying to integrate Doxygen content into a Hugo-managed website. My current workaround is in two steps using external scripts:

  1. generate YAML data from the Doxygen XML
  2. generate md pages under content/ from the above YAML

Here I am not looking at the first step, which is unrelated to this issue, only at the second one.

Ideally, I would like to be able to drop my YAML file in the data/ folder and use templates to specify how to actually present this to the user.

This issue and #4485 seem very close from my newcomer's point of view. Is there a subtle difference I missed? If so, which specific feature request would be the one matching the workflow I described?

jbigot avatar Jan 17 '20 12:01 jbigot

I really wish this would be bumped up in priority. Would be an absolute game-changer

chris-79 avatar Mar 16 '20 11:03 chris-79

@jbigot and @chris-79 I'm also eagerly waiting for this functionality. You might want to keep an eye on issue #6310, which is the current proposal for building pages from data. If you read through the updates, there have been quite a few changes in the background preparing for this.

mjmurphy avatar Mar 16 '20 13:03 mjmurphy

I really wish this would be bumped up in priority. Would be an absolute game-changer

I agree. If this were possible, we would literally be able to connect Hugo to every headless CMS source, e.g. Craft, WordPress, Contentful, etc.! Relying on forestry.io has been a pain..

This would help teams and organizations work in their native CMS to organize content in any way possible. :)

jinsupark avatar Mar 22 '20 05:03 jinsupark

@regisphilibert The Paginator object can convert an index page into multiple pages. One simple solution for building pages from data might be to extend this to paginate not just a list of pages but an arbitrary list. Then, with page size 1, we would have created one page per entry in a JSON file.

We might need enhancements to the Paginator object to be able to customize the front matter of the generated pages. But I think enhancements like those can be incremental. I don't know the internals of Hugo well enough to say how difficult the task is, but as a user, this would fall in line with the current way of using Hugo, and we would not need to learn a new concept.
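
As a sketch of how that might read in a template (paginating a raw data slice is exactly what Hugo does not support today, so this is hypothetical):

{{/* hypothetical: page size 1, so each JSON entry becomes its own page */}}
{{ $data := getJSON "https://example.com/items.json" }}
{{ $paginator := .Paginate $data.items 1 }}
{{ range $paginator.Pages }}
  <h1>{{ .title }}</h1>
{{ end }}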

atishay avatar May 25 '20 05:05 atishay

This feature would be a game changer for Hugo. I hate having to use Netlify CMS to achieve what I want; Strapi would be my go-to CMS.

TheLazyLemur avatar Jun 26 '20 06:06 TheLazyLemur

fwiw, I ended up making my own scripts to process custom JSON output from our WordPress.com site.

It's not pretty code, and may not follow correct Go conventions (as I'm super new to it), but it's working for us.

Hope this helps someone until we get official support within Hugo :)

Articles

For building Markdown files in a flat, single directory for one of our Hugo sites.

Vuepress-Pages

For building files into hierarchical directories based on the "link" key. Yes, this is for VuePress, but it should work just fine with Hugo with minor tweaking (like changing /README.md to /_index.md).

Example Yarn build command (runs in Netlify):

{
  "scripts": {
    "build": "curl -s -H 'Cache-Control: no-cache' \"https://example.com/wp-json/custom-routes/v1/internal-resources\" --output pages.json && curl -s -H 'Cache-Control: no-cache' -sL -o main.go \"https://gitlab.com/snippets/*******/raw\" && go run main.go && vuepress build docs"
  }
}

chris-79 avatar Jun 26 '20 12:06 chris-79

My proposal for this feature is that it should be three features:

  • Define a format for content files in JSON. E.g. you'd have sources/myfile.json with a newline-delimited JSON stream of objects with keys "filepath": "stories/whatever.md" and "body": "whatever", and that gets represented as though it's a bunch of files in the file system (see the sketch after this list).

  • Allow STDOUT plugins that have a line in Hugo's conf like plugin-cmd = 'myscript.py', and those can spit out JSON in the format above. It gets rerun at some interval, either per request or per time duration.

  • FCGI gateway that works like the plugin, except as requests come in, they get passed to the app, which can respond with JSON.
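
For instance, such a stream could look like this (paths and bodies are placeholders):

{"filepath": "stories/whatever.md", "body": "---\ntitle: Whatever\n---\n\nStory text here."}
{"filepath": "stories/another-story.md", "body": "---\ntitle: Another story\n---\n\nMore story text."}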

earthboundkid avatar Oct 27 '20 21:10 earthboundkid

@carlmjohnson Could you clarify what you mean when you mention "requests" here? As Hugo is a static site generator, it's unclear what per-request behavior would mean.

lucperkins avatar Dec 17 '20 19:12 lucperkins

Good question. Per request means using hugo serve. For example, if you want to be able to create a live preview, you could have hugo serve running somewhere and the plugin intercepts the request, generates some content, and then passes that to Hugo to build the actual page.

I noticed that @bep starred https://github.com/natefinch/pie on Github the other day. I think that approach (using JSON-RPC over STDIN) would work for plugins. Apparently LSP plugins like Gopls etc. work on a similar basis.

earthboundkid avatar Dec 17 '20 19:12 earthboundkid

Let me talk through some sites I have built with Hugo and how I did them and how I wish I could have done them to give more background for this.

I made one site that listed a lot (several hundred) of candidates for local elections. We emailed the candidates and had them fill out a Survey Monkey survey. We cleaned up the data into a big Google Sheet. I wrote a script that downloaded the Google Sheet and turned it into a JSON file for each candidate, which we put into the content directory as a .md file (because content files have to end in .md). Code here: https://github.com/baltimore-sun-data/voter-guide-2018

I more recently did a similar site that lists expert sources in Pennsylvania. Again, survey, Google Sheet, JSON files in the content directory. Code here: https://github.com/spotlightpa/sourcesdb

Writing the script to download a Google Sheet and turn it into a bunch of small JSON files was not a big task for me, but it was some amount of work, and I imagine a lot of people who know HTML/CSS might not be comfortable doing it. It would have been easier if I could have dropped a single CSV/JSON into the repo, instead of many small files. It would be even better if I could write a plugin that automatically connects to Google Sheets when you run hugo or hugo serve.

I have another site where we schedule stories to publish at a certain time. We do this by saving the stories in a database and then a task runs periodically to see if any stories are ready to publish, and if so, it adds them to Github at the proper location, which in turn triggers Netlify to deploy the site. It would be nice if I could just tell Hugo to get the stories out of my database, and then to do a scheduled publish, I would just trigger a Netlify build at the proper time (no middle step with Github saving the content).

earthboundkid avatar Dec 17 '20 19:12 earthboundkid

@carlmjohnson I'm trying something similar (without much luck for the past few weeks). It's a personal project: I am trying to build a repository of assignments and work done by students at our college. I was planning on having a front-end Google Form for user-submitted content, where each row is a new submission. After this, we set up a script to read that as a JSON file and hopefully break it into individual content files, which can be pushed to GitHub, which triggers a Netlify deploy.

Unfortunately, my experience with Python (which I'm assuming you also used?) so far has been limited to NLP and not so much in this area. Your last website implementation seems most ideal to me, where the Sheets document could act as the database and I only build new content pages if there are new rows. I don't mind doing a manual trigger for connecting to Sheets every now and then.

I tried going through the repos you linked, but I can't seem to find the code that does the conversion from Sheets to JSON in the content folder. Would you be able to help me out? If it is okay with you and as time permits, I would like to take your help with this, since you seem to have done it before.

thedivtagguy avatar Jan 05 '21 07:01 thedivtagguy