
Local storage for Shadertoy wallpapers

Open EvelynSubarrow opened this issue 5 years ago • 8 comments

I've got a slightly flaky internet connection; sometimes there are issues when starting up, and delays when switching between wallpapers that use Shadertoy.

It'd be nice if it were possible to store these wallpapers locally. Ideally, Lively would cache a copy, preferring the live version if/when it's available, with a setting to change this behaviour per wallpaper and globally. Options could be: cache and don't try to fetch live, cache but prefer live, or don't cache at all.

EvelynSubarrow avatar Nov 20 '20 19:11 EvelynSubarrow

Looks like CefSharp doesn't support this feature: https://stackoverflow.com/questions/54298505/cefsharp-chromiumwebbrowser-load-from-cache-without-network-connection

Curious, which Shadertoy shaders are you using as wallpaper? It's certainly possible to write a custom offline shader loader like the Rain v2 wallpaper etc.

rocksdanister avatar Nov 21 '20 12:11 rocksdanister

You should be able to implement a service worker to handle caching.
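
For reference, a network-first service worker along these lines could work. This is only a rough sketch; the `sw.js` file name, the cache name, and the registration call are placeholders, not anything Lively or CefSharp ships:

// sw.js - rough sketch of a network-first service worker (names are illustrative)
const CACHE_NAME = "shadertoy-wallpaper-v1";

self.addEventListener("fetch", (event) => {
  // Only GET requests can be stored with the Cache API.
  if (event.request.method !== "GET") return;

  event.respondWith(
    // Prefer the live version; fall back to the cached copy when offline.
    fetch(event.request)
      .then((response) => {
        const copy = response.clone();
        caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
        return response;
      })
      .catch(() => caches.match(event.request))
  );
});

// The page hosting the wallpaper would register it with something like:
// navigator.serviceWorker.register("sw.js");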

amaitland avatar Nov 21 '20 20:11 amaitland

Did you search for CefSharp keyword sorted by new issues 😃

You should be able to implement a service worker to handle caching.

Thanks. Looks like it's a JavaScript thing, will need to look into it. Although for this particular case a custom shader loader is another option.

For future reference: https://developers.google.com/web/fundamentals/primers/service-workers

rocksdanister avatar Nov 23 '20 06:11 rocksdanister

Did you search for CefSharp keyword sorted by new issues 😃

I have a saved search that I check occasionally 😄

Looks like it's a JavaScript thing, will need to look into it. Although for this particular case a custom shader loader is another option.

You can probably do that without actually using a service worker, just using the Cache implementation. See https://developer.mozilla.org/en-US/docs/Web/API/Cache
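
Something along these lines, for example (just a sketch; the cache name and function name here are placeholders):

// Rough sketch using the Cache API directly from the page, no service worker.
// "shader-cache" is an illustrative name.
async function loadShaderSource(url) {
  const cache = await caches.open("shader-cache");
  try {
    // Prefer a live copy and refresh the cache with it.
    const response = await fetch(url);
    await cache.put(url, response.clone());
    return await response.text();
  } catch (err) {
    // Offline: fall back to whatever was cached last time.
    const cached = await cache.match(url);
    if (cached) return await cached.text();
    throw err;
  }
}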

amaitland avatar Nov 23 '20 09:11 amaitland

Looking at the rain repo, I think I understand how that works, I'll see if I can do something similar, thanks for the pointer. The shader in question is mostly this one:

https://www.shadertoy.com/view/WsdfRH

EvelynSubarrow avatar Nov 23 '20 22:11 EvelynSubarrow

Lively has a built-in wallpaper recorder to create video wallpapers now - you can try that: https://youtu.be/cPV_e6psq1g

rocksdanister avatar Feb 11 '21 13:02 rocksdanister

Not an expert in shader programming, but what about an "Import ShaderToy" flow that turns a ShaderToy URL into a self-contained, offline HTML wallpaper (no code fetching implemented yet)? I quickly attempted this approach with the shader above (https://www.shadertoy.com/view/WsdfRH). I had to add a few helper functions and make sure to use WebGL2, but the resulting file runs offline as a standalone HTML page, as you can see in the attached test (check the comments for the structure and the necessary edits):

test.html

See code
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>Offline ShaderToy Shader Example</title>
  <style>
    /* Make the shader fill the entire window */
    body, html { margin: 0; overflow: hidden; background: black; }
    canvas { width: 100vw; height: 100vh; display: block; }
  </style>
</head>
<body>
<canvas id="glcanvas"></canvas>

<!--
  FRAGMENT SHADER SECTION
  - This is where your ShaderToy code goes.
  - Keep the uniforms (iTime, iResolution, iMouse) because WebGL uses them.
-->
<script id="shader-fs" type="x-shader/x-fragment">#version 300 es
precision highp float;

// Uniforms
uniform float iTime;
uniform vec2 iResolution;
uniform vec2 iMouse;

// Output
out vec4 fragColor;


/* === SHADERTOY mainImage FUNCTION HERE === */


// Basic hash and noise helpers
float hash1_2(vec2 p) {
    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453123);
}
vec2 hash2_2(vec2 p) {
    return fract(sin(vec2(dot(p, vec2(127.1, 311.7)), dot(p, vec2(269.5, 183.3)))) * 43758.5453123);
}
float noise1_2(vec2 p) {
    vec2 i = floor(p);
    vec2 f = fract(p);
    float a = hash1_2(i);
    float b = hash1_2(i + vec2(1.0, 0.0));
    float c = hash1_2(i + vec2(0.0, 1.0));
    float d = hash1_2(i + vec2(1.0, 1.0));
    vec2 u = f * f * (3.0 - 2.0 * f);
    return mix(a, b, u.x) + (c - a)*u.y*(1.0 - u.x) + (d - b)*u.x*u.y;
}
vec2 noise2_2(vec2 p) {
    return vec2(noise1_2(p), noise1_2(p + 13.37));
}

// Rest of the shader code unchanged here

//Shader License: CC BY 3.0
//Author: Jan Mróz (jaszunio15)

#define PI 3.1415927
#define TWO_PI 6.283185

#define ANIMATION_SPEED 1.5
#define MOVEMENT_SPEED 1.5
// -.7/-1 (after swapping)
#define MOVEMENT_DIRECTION vec2(-0.8, -1.0)

#define PARTICLE_SIZE 0.009

//#define PARTICLE_SCALE (vec2(0.5, 1.6))
//#define PARTICLE_SCALE_VAR (vec2(0.25, 0.2))

//#define PARTICLE_BLOOM_SCALE (vec2(0.5, 0.8))
//#define PARTICLE_BLOOM_SCALE_VAR (vec2(0.3, 0.1))

#define PARTICLE_SCALE (vec2(1.6, 0.5))
#define PARTICLE_SCALE_VAR (vec2(0.2, 0.25))

#define PARTICLE_BLOOM_SCALE (vec2(0.8, 0.5))
#define PARTICLE_BLOOM_SCALE_VAR (vec2(0.1, 0.3))

//#define SPARK_COLOR vec3(1.0, 0.4, 0.05) * 1.5
//#define BLOOM_COLOR vec3(1.0, 0.4, 0.05) * 0.8
//#define SMOKE_COLOR vec3(1.0, 0.43, 0.1) * 0.8

#define SPARK_COLOR vec3(0.5, 0.4, 1.0) * 1.5
#define BLOOM_COLOR vec3(0.5, 0.4, 1.0) * 0.8
#define SMOKE_COLOR vec3(0.1, 0.43, 1.0) * 0.8

//#define SIZE_MOD 1.05
//#define ALPHA_MOD 0.9
//#define LAYERS_COUNT 15

#define SIZE_MOD 1.05
#define ALPHA_MOD 0.9
#define LAYERS_COUNT 20


float layeredNoise1_2(in vec2 uv, in float sizeMod, in float alphaMod, in int layers, in float animation)
{
 	float noise = 0.0;
    float alpha = 1.0;
    float size = 1.0;
    vec2 offset;
    for (int i = 0; i < layers; i++)
    {
        offset += hash2_2(vec2(alpha, size)) * 10.0;
        
        //Adding noise with movement
     	noise += noise1_2(uv * size + iTime * animation * 8.0 * MOVEMENT_DIRECTION * MOVEMENT_SPEED + offset) * alpha;
        alpha *= alphaMod;
        size *= sizeMod;
    }
    
    noise *= (1.0 - alphaMod)/(1.0 - pow(alphaMod, float(layers)));
    return noise;
}

//Rotates point around 0,0
vec2 rotate(in vec2 point, in float deg)
{
 	float s = sin(deg);
    float c = cos(deg);
    return mat2x2(s, c, -c, s) * point;
}

//Cell center from point on the grid
vec2 voronoiPointFromRoot(in vec2 root, in float deg)
{
  	vec2 point = hash2_2(root) - 0.5;
    float s = sin(deg);
    float c = cos(deg);
    point = mat2x2(s, c, -c, s) * point * 0.66;
    point += root + 0.5;
    return point;
}

//Voronoi cell point rotation degrees
float degFromRootUV(in vec2 uv)
{
 	return iTime * ANIMATION_SPEED * (hash1_2(uv) - 0.5) * 2.0;   
}

vec2 randomAround2_2(in vec2 point, in vec2 range, in vec2 uv)
{
 	return point + (hash2_2(uv) - 0.5) * range;
}


vec3 fireParticles(in vec2 uv, in vec2 originalUV)
{
    vec3 particles = vec3(0.0);
    vec2 rootUV = floor(uv);
    float deg = degFromRootUV(rootUV);
    vec2 pointUV = voronoiPointFromRoot(rootUV, deg);
    float dist = 2.0;
    float distBloom = 0.0;
   
   	//UV manipulation for the faster particle movement
    vec2 tempUV = uv + (noise2_2(uv * 2.0) - 0.5) * 0.1;
    tempUV += -(noise2_2(uv * 3.0 + iTime) - 0.5) * 0.07;

    //Sparks sdf
    dist = length(rotate(tempUV - pointUV, 0.7) * randomAround2_2(PARTICLE_SCALE, PARTICLE_SCALE_VAR, rootUV));
    
    //Bloom sdf
    distBloom = length(rotate(tempUV - pointUV, 0.7) * randomAround2_2(PARTICLE_BLOOM_SCALE, PARTICLE_BLOOM_SCALE_VAR, rootUV));

    //Add sparks
    particles += (1.0 - smoothstep(PARTICLE_SIZE * 0.6, PARTICLE_SIZE * 3.0, dist)) * SPARK_COLOR;
    
    //Add bloom
    particles += pow((1.0 - smoothstep(0.0, PARTICLE_SIZE * 6.0, distBloom)) * 1.0, 3.0) * BLOOM_COLOR;

    //Upper disappear curve randomization
    float border = (hash1_2(rootUV) - 0.5) * 2.0;
 	float disappear = 1.0 - smoothstep(border, border + 0.5, originalUV.y);
	
    //Lower appear curve randomization
    border = (hash1_2(rootUV + 0.214) - 1.8) * 0.7;
    float appear = smoothstep(border, border + 0.4, originalUV.y);
    
    return particles * disappear * appear;
}


//Layering particles to imitate 3D view
vec3 layeredParticles(in vec2 uv, in float sizeMod, in float alphaMod, in int layers, in float smoke) 
{ 
    vec3 particles = vec3(0);
    float size = 1.0;
    float alpha = 1.0;
    vec2 offset = vec2(0.0);
    vec2 noiseOffset;
    vec2 bokehUV;
    
    for (int i = 0; i < layers; i++)
    {
        //Particle noise movement
        noiseOffset = (noise2_2(uv * size * 2.0 + 0.5) - 0.5) * 0.15;
        
        //UV with applied movement
        bokehUV = (uv * size + iTime * MOVEMENT_DIRECTION * MOVEMENT_SPEED) + offset + noiseOffset; 
        
        //Adding particles								if there is more smoke, remove smaller particles
		particles += fireParticles(bokehUV, uv) * alpha * (1.0 - smoothstep(0.0, 1.0, smoke) * (float(i) / float(layers)));
        
        //Moving uv origin to avoid generating the same particles
        offset += hash2_2(vec2(alpha, alpha)) * 10.0;
        
        alpha *= alphaMod;
        size *= sizeMod;
    }
    
    return particles;
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.x;
    
    float vignette = 1.0 - smoothstep(0.4, 1.4, length(uv + vec2(0.0, 0.3)));
    
    uv *= 1.8;
    
    float smokeIntensity = layeredNoise1_2(uv * 10.0 + iTime * 4.0 * MOVEMENT_DIRECTION * MOVEMENT_SPEED, 1.7, 0.7, 6, 0.2);
    smokeIntensity *= pow(1.0 - smoothstep(-1.0, 1.6, uv.y), 2.0); 
    vec3 smoke = smokeIntensity * SMOKE_COLOR * 0.8 * vignette;
    
    //Cutting holes in smoke
    smoke *= pow(layeredNoise1_2(uv * 4.0 + iTime * 0.5 * MOVEMENT_DIRECTION * MOVEMENT_SPEED, 1.8, 0.5, 3, 0.2), 2.0) * 1.5;
    
    vec3 particles = layeredParticles(uv, SIZE_MOD, ALPHA_MOD, LAYERS_COUNT, smokeIntensity);
    
    vec3 col = particles + smoke + SMOKE_COLOR * 0.02;
	col *= vignette;
    
    col = smoothstep(-0.08, 1.0, col);

    fragColor = vec4(col, 1.0);
}


/* ========= DO NOT EDIT BELOW THIS LINE ========= */
/* Wrapper to adapt ShaderToy's mainImage() to WebGL's main() */
void main() {
    mainImage(fragColor, gl_FragCoord.xy);
    // WebGL1 variant (write to gl_FragColor instead of an `out` variable):
    //vec4 color;
    //mainImage(color, gl_FragCoord.xy);
    //gl_FragColor = color;
}
</script>

<script>
/* === JAVASCRIPT SECTION === */
/* This sets up the WebGL context and provides uniforms like iTime, iResolution, iMouse */


const canvas = document.getElementById("glcanvas");
// const gl = canvas.getContext("webgl");
const gl = canvas.getContext("webgl2");
if (!gl) {
  alert("WebGL2 not available in this browser.");
}
function resizeCanvasToDisplaySize() {
  const w = window.innerWidth|0;
  const h = window.innerHeight|0;
  if (canvas.width !== w || canvas.height !== h) {
    canvas.width = w;
    canvas.height = h;
  }
}
resizeCanvasToDisplaySize();
// Keep the canvas in sync with window size changes
window.addEventListener("resize", resizeCanvasToDisplaySize);

// Vertex shader (fixed)
const vsSource = `#version 300 es
in vec2 aPosition;
void main() {
  // A full-screen triangle; pass position to gl_Position
  gl_Position = vec4(aPosition, 0.0, 1.0);
}
`;

// Fragment shader source from <script id="shader-fs">
const fsSource = document.getElementById("shader-fs").textContent;

// Helper to compile a shader
function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS))
    console.error(gl.getShaderInfoLog(shader));
  return shader;
}

// Create and link the shader program
const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(program);
gl.useProgram(program);

// Setup fullscreen triangle
const vertices = new Float32Array([-1, -1, 3, -1, -1, 3]);
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
const loc = gl.getAttribLocation(program, "aPosition");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

// Uniform locations
const iResolution = gl.getUniformLocation(program, "iResolution");
const iTime = gl.getUniformLocation(program, "iTime");
const iMouse = gl.getUniformLocation(program, "iMouse");

// Track time and mouse
let startTime = performance.now();
let mouse = [0, 0];
canvas.addEventListener('mousemove', e => {
  mouse = [e.clientX, canvas.height - e.clientY];
});

// Main render loop
function render() {
  const now = performance.now();
  const time = (now - startTime) / 1000.0;

  // Send uniforms to shader
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.uniform2f(iResolution, canvas.width, canvas.height);
  gl.uniform1f(iTime, time);
  gl.uniform2f(iMouse, mouse[0], mouse[1]);

  // Draw the triangle (which covers the whole screen)
  gl.drawArrays(gl.TRIANGLES, 0, 3);

  requestAnimationFrame(render);
}
render();
</script>
</body>
</html>

I've successfully tested a few other simple shaders, but it will break if the shader uses multiple channels, SoundCloud as an audio source, image files, and so on. But if it's possible to generalize and automate this process for static HTML file generation, most of the shaders on ShaderToy would continue to work offline too.
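
As a rough idea of how the generation step could be automated: Shadertoy exposes a public API for shaders whose authors have enabled API access, so a small script could fetch the image-pass code and splice it into a template like the one above. This is only a sketch under assumptions - the API key, the marker string, and the file names are made up, and only single-pass shaders without channel inputs would work:

// Node.js (18+) sketch: fetch shader code via the Shadertoy API and splice it
// into an HTML template. API key, marker, and file names are illustrative, and
// only shaders whose authors enabled "public + API" access are retrievable.
const fs = require("fs");

async function buildWallpaper(shaderId, apiKey) {
  const res = await fetch(`https://www.shadertoy.com/api/v1/shaders/${shaderId}?key=${apiKey}`);
  const json = await res.json();

  // The image pass holds the mainImage() code.
  const imagePass = json.Shader.renderpass.find((p) => p.type === "image");

  // Drop the fetched code in at the marker used in the template above.
  const template = fs.readFileSync("template.html", "utf8");
  const html = template.replace("/* === SHADERTOY mainImage FUNCTION HERE === */", imagePass.code);
  fs.writeFileSync(`${shaderId}.html`, html);
}

buildWallpaper("WsdfRH", "YOUR_API_KEY");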

SamMed05 avatar Oct 05 '25 20:10 SamMed05

The issue is it's too much work; we'd always have to update and maintain this library, in addition to supporting multiple channels etc.

I think there are some libraries that already do this, but I'm not sure if they are being maintained.

rocksdanister avatar Oct 09 '25 17:10 rocksdanister