DrJesseHansen

14 issues by DrJesseHansen

Hello, we are running AlphaFold 2.3.1 on a Linux HPC through SLURM submission. The databases AF2 requires were downloaded to our cluster. We were able to successfully...

Hi all, I have resubmitted the same job as in my [previous issue](https://github.com/deepmind/alphafold/issues/742), except this time on a GeForce RTX 2080 Ti rather than A10s or A40s (under the assumption that these...

Hello all, thank you for the update! However, we are now running into a new error while testing on a number of different GPUs. This is on our HPC. ...

Hi, I am using RELION for subtomogram averaging after exporting the extracted subvolumes from WARP. My box is 200 pix at 4.2 Å/pix. I have 4000 particles. This doesn't seem excessive....

Hi, I am running RELION/4.0 in a cluster environment. I am doing subtomogram refinement with a box size of 160 and 38,000 particles: local angular searches only, at HEALPix order 5, and a translational search of...

Dear devs, I am using TomoTwin as part of napari 0.4.18, running in clustering mode. The pixel size of my tomogram is 11.06; note that I did try...

Hi, I followed your tutorial steps, except using my own data. I gave it several tomograms, preprocessed as you describe, then ran refine on the HPC. It runs for about 3...

Hi Biochemfan, sorry for posting this here; I'm not sure how else to contact you. I am wondering if it's possible to include a dose-weighting scheme for...

Running on a Linux Debian 12 compute cluster; I have tried various GPU nodes. Running relion/5 beta-3-commit-6331fe. For example, when I run the command below: `relion_python_tomo_align_tilt_series AreTomo --sample-thickness-nanometers 120 --gpu` ...

3D refinement with subvolumes gives the error cited. The subvolumes were extracted with WarpTools 2.0.0/dev28, and reconstruct_particle gives a very nice reconstruction. `mpirun --np 5 --oversubscribe relion_refine_mpi --o Refine3D/job001_3D/run --auto_refine --split_random_halves --i allparticles_bin8_3D.star` ...