ReferenceError: navigator is not defined (mediapipe face_mesh.js)
I ran `yarn build` (building Next.js) and this error popped up:

```
ReferenceError: navigator is not defined
    at Object.<anonymous>
```
Hi @arianatri, could you please provide the steps to reproduce the above error? Thanks!
Install Next.js (obviously), add FaceMesh to the project, create a component that uses FaceMesh, and finally run `yarn build:project_name`. That will give you the error.
In face_mesh.js the error comes from line 74:

```javascript
.includes(navigator.platform)||navigator.userAgent.
```
Did you solve it?
Is there a solution for this issue yet?
There's no null check on the `navigator` object during the population of `useCpuInference` inside face_mesh.js:

```javascript
default: "iPad Simulator;iPhone Simulator;iPod Simulator;iPad;iPhone;iPod".split(";").includes(navigator.platform) || navigator.userAgent.includes("Mac") && "ontouchend" in document }
```
I can't seem to find the source for face_mesh.js (it's probably being compiled from the Java code somewhere), but a simple null check (`navigator?.platform && navigator?.userAgent?.includes(...)`) should do the trick.
Wish I could help more; I'm having the same issue as everyone else here.
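One caveat with the optional-chaining suggestion: during SSR the `navigator` identifier is not null but entirely undeclared, so a bare `navigator?.platform` still throws a ReferenceError. The guard needs `typeof navigator !== 'undefined'` or a lookup through `globalThis`. A minimal sketch of such a guard — my reconstruction of the kind of expression face_mesh.js evaluates, not MediaPipe's actual source:

```typescript
// SSR-safe rewrite of the kind of check face_mesh.js performs at module
// load. Reading `navigator` off `globalThis` never throws, even when the
// global does not exist at all (as under plain Node.js / SSR).
function isAppleTouchDevice(): boolean {
  const nav = (globalThis as any).navigator; // undefined during SSR
  const doc = (globalThis as any).document;  // undefined during SSR
  if (!nav) return false;
  const platforms =
    'iPad Simulator;iPhone Simulator;iPod Simulator;iPad;iPhone;iPod'.split(';');
  return (
    platforms.includes(nav.platform) ||
    Boolean(nav.userAgent.includes('Mac') && doc && 'ontouchend' in doc)
  );
}

console.log(isAppleTouchDevice()); // prints "false" under plain Node.js
```

In application code the same effect is usually written as `typeof navigator !== 'undefined' && ...`, since `typeof` is the one operator that is safe on undeclared identifiers.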
This issue is present in @mediapipe/hands as well.
Did someone find any solution to fix this? Thanks in advance!
I am currently using React alone, so I created one component to handle the webcam and another to listen to FaceMesh events. Here they are:
```tsx
import {memo, useEffect, useRef} from 'react'
import Webcam from 'react-webcam'
import styles from '../styles/faceMeshCanvas.module.scss'
import {FaceMesh} from '@mediapipe/face_mesh'

interface WebcamStreamCaptureProps {
  faceMesh: FaceMesh | undefined
}

const WebcamStreamCapture = ({faceMesh}: WebcamStreamCaptureProps) => {
  const videoRef = useRef<any>(null)

  useEffect(() => {
    console.log('WebcamStreamCapture rendered')
  }, [])

  const doSomethingWithTheFrame = async (now: number, metadata: any) => {
    const video = videoRef.current.video
    await faceMesh?.send({image: video})
    videoRef.current.video.requestVideoFrameCallback(doSomethingWithTheFrame)
  }

  const handleUserMedia = (stream: MediaStream) => {
    videoRef.current.video.requestVideoFrameCallback(doSomethingWithTheFrame)
  }

  return (
    <>
      <Webcam audio={false}
              videoConstraints={{facingMode: 'environment'}}
              className={styles.frame}
              ref={videoRef}
              onUserMedia={handleUserMedia}/>
    </>
  )
}

export default memo(WebcamStreamCapture)
```
```tsx
import React, {useEffect, useRef, useState} from 'react'
import styles from '../styles/faceMeshCanvas.module.scss'
import {
  FaceMesh,
  FACEMESH_LEFT_EYE,
  FACEMESH_LEFT_IRIS,
  FACEMESH_RIGHT_EYE,
  FACEMESH_RIGHT_IRIS
} from '@mediapipe/face_mesh'
import {drawConnectors} from '@mediapipe/drawing_utils'
import WebcamStreamCapture from './WebcamStreamCapture'

const DIMENSIONS = {
  width: 320,
  height: 240
}

interface Point {
  x: number
  y: number
  z: number
  visibility?: boolean
}

const faceMeshOptions = {
  maxNumFaces: 1,
  refineLandmarks: true,
  minDetectionConfidence: 0.5,
  minTrackingConfidence: 0.5
}

const RIGHT_EYE_FACEMESH_INDEXES = [33, 160, 158, 133, 153, 144]
const LEFT_EYE_FACEMESH_INDEXES = [362, 385, 387, 263, 373, 380]

const locateFile = (file: string) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`

interface CanvasProps {
  earThreshold: number
}

const FaceMeshCanvas = ({earThreshold}: CanvasProps) => {
  const canvasRef = useRef<HTMLCanvasElement>(null)
  const [faceMesh, setFaceMesh] = useState<FaceMesh>()

  useEffect(() => {
    const canvasEl: any = canvasRef.current
    canvasEl.width = DIMENSIONS.width
    canvasEl.height = DIMENSIONS.height
    const ctx: CanvasRenderingContext2D | null = canvasEl.getContext('2d')
    const faceMesh = new FaceMesh({locateFile})
    faceMesh.setOptions(faceMeshOptions)
    faceMesh.onResults((results) => onFaceMeshResults(results, ctx))
    setFaceMesh(faceMesh)
    return () => {
      faceMesh.close().catch(console.error)
    }
  }, [])

  const onFaceMeshResults = async (results: any, ctx: any) => {
    ctx.clearRect(0, 0, DIMENSIONS.width, DIMENSIONS.height)
    ctx.drawImage(results.image, 0, 0, DIMENSIONS.width, DIMENSIONS.height)
    if (results.multiFaceLandmarks && ctx) {
      const eyes: {left: Point[], right: Point[]} = {
        left: [],
        right: []
      }
      for (const landmarks of results.multiFaceLandmarks) {
        for (const index of RIGHT_EYE_FACEMESH_INDEXES) eyes.right.push(landmarks[index])
        for (const index of LEFT_EYE_FACEMESH_INDEXES) eyes.left.push(landmarks[index])
        const rightEAR = getEAR(eyes.right)
        const leftEAR = getEAR(eyes.left)
        if (rightEAR < earThreshold || leftEAR < earThreshold)
          console.log(rightEAR, leftEAR)
        drawConnectors(ctx, landmarks, FACEMESH_RIGHT_EYE, {color: 'blue', lineWidth: 1})
        drawConnectors(ctx, landmarks, FACEMESH_LEFT_EYE, {color: 'red', lineWidth: 1})
      }
    }
    ctx.restore()
  }

  const euclideanDistance = (p1: Point, p2: Point) => {
    return Math.sqrt((p2.x - p1.x) * (p2.x - p1.x) + (p2.y - p1.y) * (p2.y - p1.y))
  }

  const getEAR = (points: Point[]) => {
    return (
      (euclideanDistance(points[1], points[5]) + euclideanDistance(points[2], points[4])) /
      (2 * euclideanDistance(points[0], points[3]))
    )
  }

  return (
    <>
      {faceMesh && <WebcamStreamCapture faceMesh={faceMesh}/>}
      <canvas ref={canvasRef} className={styles.frame}/>
    </>
  )
}

export default FaceMeshCanvas
```
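For anyone unfamiliar with the blink detection above: the `getEAR` helper computes the eye aspect ratio (EAR), the ratio of the two vertical eyelid distances to the horizontal eye width, which drops toward zero when the eye closes. A standalone check (the point values are made up for illustration):

```typescript
// Standalone version of the EAR computation used in the component above:
// EAR = (|p1 p5| + |p2 p4|) / (2 * |p0 p3|), with the six landmarks
// ordered around the eye (corners at p0/p3, lids at p1/p2 and p4/p5).
interface Point2D { x: number; y: number }

const dist = (a: Point2D, b: Point2D): number =>
  Math.hypot(b.x - a.x, b.y - a.y)

function getEAR(p: Point2D[]): number {
  return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2 * dist(p[0], p[3]))
}

// A wide-open eye: each vertical gap is half the horizontal width.
const openEye: Point2D[] = [
  {x: 0, y: 0},   // p0: outer corner
  {x: 1, y: 1},   // p1: upper lid
  {x: 3, y: 1},   // p2: upper lid
  {x: 4, y: 0},   // p3: inner corner
  {x: 3, y: -1},  // p4: lower lid
  {x: 1, y: -1},  // p5: lower lid
]

console.log(getEAR(openEye)) // (2 + 2) / (2 * 4) = 0.5
```

An `earThreshold` somewhere below the open-eye value then flags frames where either eye is closing.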
I also have the same problem.
Env:
- ProductName: macOS
- ProductVersion: 12.3.1
- node: v16.13.0
- npm: v8.1.0
Modules Version:
- next: 12.2.5
- react: 18.2.0
- @mediapipe/camera_utils: ^0.3.1620247984
- @mediapipe/face_mesh: ^0.4.1629159166
I tried to compile other examples that use FaceMesh with React. For instance, I cloned this repository and compiled it; FaceMesh worked.
In addition, I used other MediaPipe modules to confirm the error. In this case I used '@mediapipe/hands', and it worked on Next.js! There was no ReferenceError.
To summarize,
- @mediapipe/face_mesh ... react: OK, nextjs: NO
- @mediapipe/hands ... react: OK, nextjs: OK
I am thinking there are two possible sources of the ReferenceError:
- Problem in face_mesh module
- Problem in nextjs
I think the problem is with SSR, but I don't know how to solve it.
I used the dynamic import from this thread and the navigator error is gone, but I am still having integration issues.
Does anyone have an appropriate solution? (I already tried this, but was not able to solve the problem.)
Thanks!
I got it to work in Next.js by extracting the FaceMesh import into a separate React component and then dynamically importing that component on a page:
FaceMeshComponent.tsx

```tsx
import {FaceMesh} from '@mediapipe/face_mesh';
import {useEffect} from 'react';

const faceMesh = new FaceMesh({
  locateFile: file => {
    return `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`;
  }
});

const FaceMeshComponent = () => {
  useEffect(() => {
    console.log('This is the facemesh instance', faceMesh);
  }, []);
  return <div>...</div>;
};

export default FaceMeshComponent;
```
Then use it on any page in Next.js like this:

```tsx
import dynamic from 'next/dynamic';

const FaceMeshComponent = dynamic(() => import('../components/face-mesh-component'), {
  ssr: false
});

const HomePage = () => {
  return (
    <div>
      <FaceMeshComponent/>
    </div>
  )
}

export default HomePage;
```
So the bottom line is: if you can make it work in plain React, just copy the components over to Next.js and make sure you dynamically import the components that rely on @mediapipe/face_mesh. Also make sure to use `{ssr: false}`, otherwise it won't work.
Hope that helps!
This solution doesn't work when running `npm run build` in Next.js.
In my case, I had that problem with the @tensorflow-models/face-landmarks-detection library, and I had to import it dynamically as a variable inside an async function, like this:

`const faceLandmarksDetection = await import("@tensorflow-models/face-landmarks-detection");`

instead of importing it normally like this:

`import faceLandmarksDetection from '@tensorflow-models/face-landmarks-detection';`

With other libraries I didn't have that problem. Hope this can help you.
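The deferred-import pattern described above can be sketched as follows. Here `node:os` is only a stand-in for the browser-only package (an assumption for illustration); the point is that `import()` runs lazily, so the module's top-level code, which may touch `navigator`, never executes during the server build:

```typescript
// Loading a browser-only library lazily. A static `import` at the top of
// the file would run the library's module-level code during the server
// build and crash on `navigator`. A dynamic `import()` behind a
// client-side guard only ever runs that code in the browser.
async function loadDetector(): Promise<unknown> {
  // Equivalent to the common `typeof window === 'undefined'` check,
  // written via globalThis so it also type-checks without DOM lib types.
  if (typeof (globalThis as any).window === 'undefined') {
    // Server side: never load the browser-only module.
    return null;
  }
  // Stand-in specifier; a real app would import the actual library here.
  return await import('node:os');
}

loadDetector().then((mod) => console.log(mod === null)); // "true" under plain Node.js
```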
Hello @arianatri Are you still looking for a resolution? If yes, would you please check if the issue persists in the new MediaPipe solutions?
We are building a set of new, improved MediaPipe Solutions to help you more easily build and customize ML solutions for your applications. These new solutions will provide a superset of capabilities available in the legacy solutions. And we request the MediaPipe developer community help us uncover the issues and make the APIs more resilient.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
The issue still stands as of today.
Closing as stale. Please reopen if you'd like to work on this further.
@ayushgdev, this issue has not been resolved. It's not been fixed yet. Below are the logs from `npm run build` of my Next.js app; if I try to deploy it on Vercel I get the same error.

```
info  - Linting and checking validity of types
info  - Compiled successfully
info  - Collecting page data ..
ReferenceError: navigator is not defined
    at Object.<anonymous> (/node_modules/@mediapipe/face_mesh/face_mesh.js:75:288)
    at Object.<anonymous> (/node_modules/@mediapipe/face_mesh/face_mesh.js:131:502)
    at Module._compile (node:internal/modules/cjs/loader:1275:14)
```
Problem not solved.
It looks like the error is related to the use of navigator during server-side rendering (SSR). To address this, you can check if the code is running on the client side before using navigator. We can use the typeof window check for this purpose.
```jsx
import React, { useRef, useEffect } from 'react';
import { FaceMesh } from '@mediapipe/face_mesh';

const Detect = () => {
  const canvasRef = useRef(null);

  useEffect(() => {
    const handleImageUpload = (event) => {
      const file = event.target.files[0];
      if (file) {
        const reader = new FileReader();
        reader.onload = (e) => {
          // Check if running on the client side
          if (typeof window !== 'undefined') {
            const faceMesh = new FaceMesh({
              locateFile: (file) => {
                return `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${file}`;
              },
            });
            faceMesh.setOptions({
              maxNumFaces: 3,
              minDetectionConfidence: 0.5,
              minTrackingConfidence: 0.5,
            });
            faceMesh.onResults(onResults);

            const image = new Image();
            image.src = e.target.result;

            function onResults(results) {
              console.log(results);
            }

            image.onload = async () => {
              const canvas = canvasRef.current;
              const context = canvas.getContext('2d');
              context.clearRect(0, 0, canvas.width, canvas.height);
              context.drawImage(image, 0, 0, canvas.width, canvas.height);
              await faceMesh.send({ image });
            };
          }
        };
        reader.readAsDataURL(file);
      }
    };

    // Attach the event listener to the file input
    const fileInput = document.getElementById('fileInput');
    fileInput.addEventListener('change', handleImageUpload);

    // Cleanup the event listener on component unmount
    return () => {
      fileInput.removeEventListener('change', handleImageUpload);
    };
  }, []); // Empty dependency array ensures the effect runs only once after initial render

  return (
    <div>
      ...
    </div>
  );
};

export default Detect;
```
I tried this method in a Next.js component: it checks that the code is running on the client side before creating the FaceMesh instance and using navigator.
Has anyone figured out how to run FaceMesh in Node.js/server-side? The fix in #1411 doesn't seem to work for Next.js. Is it even possible?