Automation: Use the Zapier Code tool to eliminate manual labor on YouTube when uploading SIG videos
Currently, we have an automation set up in Zapier that takes video from Zoom and dumps it into Google Drive. Another Zap then takes it from Google Drive to YouTube. However, a few manual steps still need to happen on YouTube after the upload, and right now @pnbrown is the only one doing that work.
```mermaid
graph LR
    RS1("SIG-ContribEx Zoom") --> Zapier
    RS2("SIG-Docs Zoom") --> Zapier
    RS3("SIG-K8s-Infra Zoom") --> Zapier
    RS4("WG-LTS Zoom") --> Zapier
    Zapier --> |$TOPIC for $TIME| D("Google Drive")
    D --> E("YouTube")
    E --> F("Nigel")
    F --> E
```
Ideally, using Zapier Code, we can write a Python or JS script to make the metadata tweaks needed on YouTube and eliminate this bottleneck. @mfahlandt has taken on the task of writing a JS script to update video metadata on YouTube. With that in place, the diagram above becomes:
```mermaid
graph LR
    RS1("SIG-ContribEx Zoom") --> Zapier
    RS2("SIG-Docs Zoom") --> Zapier
    RS3("SIG-K8s-Infra Zoom") --> Zapier
    RS4("WG-LTS Zoom") --> Zapier
    Zapier --> |$TOPIC for $TIME| D("Google Drive")
    D --> E("YouTube")
    E --> F("Zapier Code")
    F --> E
```
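As a rough sketch of what that Zapier Code step could look like, the snippet below builds the metadata payload that the YouTube Data API's `videos.update` method expects. The input field names (`inputData.topic`, `inputData.date`, `inputData.videoId`) and the title format are assumptions about how the Zap would be wired, not the actual configuration:

```javascript
// Sketch of a Zapier Code (JavaScript) step for the YouTube metadata update.
// Builds the `snippet` object that the YouTube Data API v3 `videos.update`
// call accepts; the title/description format here is hypothetical.
function buildSnippet(topic, recordedOn) {
  return {
    title: `${topic} Meeting for ${recordedOn}`,
    description: `Recording of the ${topic} meeting held on ${recordedOn}.`,
    categoryId: '28', // YouTube category "Science & Technology"
  };
}

// Inside a real Zapier Code step, `inputData` and `output` are the globals
// Zapier provides, and the update would be sent roughly like this
// (token handling omitted; endpoint shape per the YouTube Data API v3):
//
//   await fetch('https://www.googleapis.com/youtube/v3/videos?part=snippet', {
//     method: 'PUT',
//     headers: {
//       Authorization: `Bearer ${token}`,
//       'Content-Type': 'application/json',
//     },
//     body: JSON.stringify({
//       id: inputData.videoId,
//       snippet: buildSnippet(inputData.topic, inputData.date),
//     }),
//   });
//
//   output = { videoId: inputData.videoId };
```

This keeps the metadata-building logic in one pure function so it can be tested outside Zapier, while the API call itself stays a thin wrapper in the Code step.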
/assign mfahlandt
/sig contributor-experience
/area contributor-comms
/assign chris-short
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@pnbrown Are we good here or still missing a component?
It looks good to me. I'd have to see what other onboarding has been done, or whether it's all captured here.