Allow Vertex to work on Vercel Edge
Feature Description
The vertex model doesn't work on Vercel Edge. This is caused by the auth library not being compatible with the Edge runtime. It could easily be replaced by a vanilla JS script:
import { webcrypto } from 'crypto'

const tokenUrl = 'https://oauth2.googleapis.com/token';

// Base64url-encode a string or Buffer (base64 with URL-safe characters, no padding)
const base64url = (source) => {
  let encodedSource = Buffer.from(source).toString('base64');
  encodedSource = encodedSource.replace(/=+$/, '');
  encodedSource = encodedSource.replace(/\+/g, '-');
  encodedSource = encodedSource.replace(/\//g, '_');
  return encodedSource;
}

const createJwtSegment = (segment) => base64url(JSON.stringify(segment));

// Import a PKCS#8 PEM private key as a WebCrypto key usable for RS256 signing
const importPrivateKey = async (pemKey) => {
  const pemHeader = "-----BEGIN PRIVATE KEY-----";
  const pemFooter = "-----END PRIVATE KEY-----";
  // Strip the PEM header/footer and whitespace, then decode the base64 body to DER
  const pemContents = pemKey.replace(pemHeader, '').replace(pemFooter, '').replace(/\s/g, '');
  const binaryDer = new Uint8Array(Buffer.from(pemContents, 'base64'));
  return webcrypto.subtle.importKey("pkcs8", binaryDer, { name: "RSASSA-PKCS1-v1_5", hash: "SHA-256" }, true, ["sign"]);
}

// Sign "<header>.<payload>" with RS256 and append the base64url-encoded signature
const signJwt = async (header, payload, privateKey) => {
  const data = `${header}.${payload}`;
  const encoder = new TextEncoder();
  const signature = await webcrypto.subtle.sign("RSASSA-PKCS1-v1_5", privateKey, encoder.encode(data));
  return `${data}.${base64url(Buffer.from(signature))}`;
};

// Build and sign a service-account JWT for the OAuth 2.0 token endpoint
const buildJwt = async ({ client_email, private_key, private_key_id, scopes }) => {
  const iat = Math.floor(Date.now() / 1000);
  const exp = iat + 3600; // valid for one hour
  const payload = { iss: client_email, sub: client_email, aud: tokenUrl, iat, exp, scope: scopes.join(" ") };
  const jwtHeaders = { kid: private_key_id, alg: 'RS256', typ: 'JWT' };
  const encodedHeader = createJwtSegment(jwtHeaders);
  const encodedPayload = createJwtSegment(payload);
  // Import the private key
  const privateKey = await importPrivateKey(private_key);
  // Sign the JWT
  const signedJwt = await signJwt(encodedHeader, encodedPayload, privateKey);
  return signedJwt;
}

// Function to get an access token
const generateToken = async (credentials) => {
  // 1) Build and sign the JWT
  const signedJwt = await buildJwt(credentials);
  // 2) Exchange it for an access token
  const response = await fetch(tokenUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      'grant_type': 'urn:ietf:params:oauth:grant-type:jwt-bearer',
      'assertion': signedJwt
    })
  });
  if (!response.ok) {
    throw new Error(`Token request failed: ${response.statusText}`);
  }
  return await response.json();
}
It can easily be used like this:
const { expires_in, access_token } = await generateToken({
  client_email: process.env.GOOGLE_CLOUD_CLIENT_EMAIL,
  private_key: process.env.GOOGLE_CLOUD_PRIVATE_KEY?.replace(/\\n/gm, "\n"), // Fix issue with new line
  private_key_id: process.env.GOOGLE_CLOUD_PRIVATE_KEY_ID,
  scopes: ["https://www.googleapis.com/auth/cloud-platform"]
})
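Since the token returned above stays valid for roughly an hour (that's what expires_in reports), a small in-memory cache around generateToken avoids hitting the token endpoint on every request. This is just a sketch on top of the script above; the cachedToken variable and getAccessToken helper are made up for illustration:

// Minimal in-memory token cache (illustrative only; names are made up for this sketch)
let cachedToken = null; // { access_token, expiresAt }

const getAccessToken = async (credentials) => {
  // Reuse the cached token while it is still valid (with a 60s safety margin)
  if (cachedToken && Date.now() < cachedToken.expiresAt - 60_000) {
    return cachedToken.access_token;
  }
  const { access_token, expires_in } = await generateToken(credentials);
  cachedToken = { access_token, expiresAt: Date.now() + expires_in * 1000 };
  return access_token;
}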
Use Case
Would be great to decouple vertex and auth so that we can use a custom auth script like the one above, or simply add the script above directly to ai-sdk/vertex so that it works on Vercel Edge.
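Purely to illustrate the decoupling idea: a provider option that accepts a token callback instead of relying on google-auth-library. The createVertex factory and the getAccessToken option shown here are hypothetical, not the current ai-sdk API:

// Hypothetical provider setup: the SDK would call the supplied callback for every request
const vertex = createVertex({
  project: process.env.GOOGLE_VERTEX_PROJECT,
  location: 'us-central1',
  // hypothetical option: return a bearer token however the app wants, e.g. via the script above
  getAccessToken: async () =>
    (await generateToken({
      client_email: process.env.GOOGLE_CLOUD_CLIENT_EMAIL,
      private_key: process.env.GOOGLE_CLOUD_PRIVATE_KEY?.replace(/\\n/gm, "\n"),
      private_key_id: process.env.GOOGLE_CLOUD_PRIVATE_KEY_ID,
      scopes: ["https://www.googleapis.com/auth/cloud-platform"]
    })).access_token
})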
Additional context
No response
Also, Vertex has a very simple REST API, so most likely the whole ai-sdk/vertex could depend only on the built-in fetch and work everywhere.
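For illustration, assuming the Gemini generateContent REST endpoint (the region, project, model name, and prompt below are placeholders), a call needs nothing beyond fetch plus the access token from the script above:

// Illustrative only: region, project and model are placeholders
const project = process.env.GOOGLE_VERTEX_PROJECT;
const location = 'us-central1';
const model = 'gemini-1.5-pro';
const url = `https://${location}-aiplatform.googleapis.com/v1/projects/${project}/locations/${location}/publishers/google/models/${model}:generateContent`;

const response = await fetch(url, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${access_token}`, // token from generateToken above
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    contents: [{ role: 'user', parts: [{ text: 'Hello from the edge' }] }]
  })
});
const result = await response.json();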
I wonder if we could swap to using dvsekhvalnov/jose-jwt to get it working. IIRC jose works on the edge.
Yep, it does according to their README and this PR
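If that route is taken, the signing part of the script above collapses to a few lines. The sketch below assumes the `jose` npm package (panva/jose) rather than the exact repo linked above, so treat the package choice and the helper name as assumptions:

import { importPKCS8, SignJWT } from 'jose'; // assumes the panva/jose npm package

// Build the same service-account assertion as buildJwt above, but via jose
const signServiceAccountJwt = async ({ client_email, private_key, private_key_id, scopes }) => {
  const key = await importPKCS8(private_key, 'RS256');
  return new SignJWT({ scope: scopes.join(' ') })
    .setProtectedHeader({ alg: 'RS256', typ: 'JWT', kid: private_key_id })
    .setIssuer(client_email)
    .setSubject(client_email)
    .setAudience(tokenUrl)
    .setIssuedAt()
    .setExpirationTime('1h')
    .sign(key);
}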
@cosbgn thanks for the sample code and suggestion, it's super helpful.
As we look at shifting to REST APIs as part of this, there are a number of them. Which model(s) specifically are you looking to use? For example, https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference is fairly straightforward and appears to be the current recommended path, but it is also Gemini-specific. Is Gemini enough for you?
Gemini is all I need; however, it now works directly with ai/google. Previously there were some differences (like not being able to pass a PDF).
So if vertex would only support Gemini, you might as well just use ai/google.
Using Vertex for other models like Llama would be nice.