
Qwen 235B Performance?

Open iamthemulti opened this issue 7 months ago • 4 comments

Hello,

I'm evaluating whether to go full Intel for my next inference build. Would it be possible to share some performance numbers for Qwen 235B fully loaded on GPU (no CPU inference)? Ideally INT4 (or similar) quantization with longer sequence lengths.

With the Arc Pro B60 Dual cards coming soonish, I think this will be a somewhat common use case for LLM enthusiasts.

iamthemulti avatar May 28 '25 14:05 iamthemulti

Hi @iamthemulti Unfortunately, these performance data are not available now. We need to go through internal reviews for any public performance numbers. Please stay tuned.

qiyuangong avatar May 29 '25 01:05 qiyuangong

@qiyuangong Hello, could you share a link to the Qwen3 235B INT4 model used in the example ("128GB CPU memory for Qwen3MoE 235B INT4 model")? https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/flashmoe_quickstart.md

In the animation shown in the quickstart guide: https://llm-assets.readthedocs.io/en/latest/_images/FlashMoE-Qwen3-235B.gif

Is it the model quantized by unsloth? https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF

savvadesogle avatar Jul 06 '25 12:07 savvadesogle

Yes. We recommend the unsloth version.

For the Qwen3-235B 4-bit model, you can try this link: https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/tree/main/Q4_K_M
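
For reference, a minimal download sketch (assuming the `huggingface_hub` CLI is installed; the local path is illustrative, not from this thread):

```bash
# Fetch only the Q4_K_M shards of the unsloth quant (local path is illustrative).
pip install -U "huggingface_hub[cli]"
huggingface-cli download unsloth/Qwen3-235B-A22B-GGUF \
    --include "Q4_K_M/*" \
    --local-dir ~/llm/models/Qwen3-235B-A22B-GGUF
```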

qiyuangong avatar Jul 07 '25 02:07 qiyuangong

> Yes. We recommend the unsloth version.
>
> For the Qwen3-235B 4-bit model, you can try this link: https://huggingface.co/unsloth/Qwen3-235B-A22B-GGUF/tree/main/Q4_K_M

Could you provide a command to start it?

If I use the default command from the example:

./flash-moe -m ~/llm/models/Qwen3-235B-A22B-GGUF/UD-Q3_K_XL/Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf --prompt "How to make a tea?" -no-cnv

I get only **2.99 t/s**:

llama_perf_sampler_print:    sampling time =      12.86 ms /   139 runs   (    0.09 ms per token, 10807.87 tokens per second)
llama_perf_context_print:        load time =    9218.06 ms
llama_perf_context_print: prompt eval time =    1102.96 ms /     6 tokens (  183.83 ms per token,     5.44 tokens per second)
llama_perf_context_print:        eval time =   44218.47 ms /   132 runs   (  334.99 ms per token,     2.99 tokens per second)
llama_perf_context_print:       total time =   45733.70 ms /   138 tokens
Interrupted by user
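
A rough back-of-envelope for why decode lands near 3 t/s (a sketch, assuming decode is bound by host memory bandwidth, not a confirmed diagnosis): the load_tensors lines in the log below show roughly 98 GiB of weights mapped to CPU memory and only ~4.4 GiB on the GPU, since the flash-moe wrapper keeps expert tensors on the CPU (the `-ot exps=CPU` flag visible in the error log further down). Qwen3-235B-A22B activates about 22B parameters per token, and the log reports 3.53 BPW:

```
22e9 params × 3.53 bits ÷ 8 ≈ 9.7 GB read per token
9.7 GB ÷ 0.335 s per token  ≈ 29 GB/s effective host bandwidth
```

That figure is plausible for a dual-socket DDR4 Haswell-EP system, so the numbers above may simply reflect CPU memory bandwidth rather than a misconfiguration.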

LOG for llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-core:

(ollama) arc@xpu:~/Downloads/llama-cpp/core/llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-core$ ./flash-moe -m ~/llm/models/Qwen3-235B-A22B-GGUF/UD-Q3_K_XL/Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf --prompt "How to make a tea?" -no-cnv
build: 1 (99a3cc3) with Intel(R) oneAPI DPC++/C++ Compiler 2025.0.4 (2025.0.4.20241205) for x86_64-unknown-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) A770 Graphics) - 15473 MiB free
llama_model_loader: additional 2 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 45 key-value pairs and 1131 tensors from /home/arc/llm/models/Qwen3-235B-A22B-GGUF/UD-Q3_K_XL/Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen3moe
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Qwen3-235B-A22B
llama_model_loader: - kv   3:                           general.basename str              = Qwen3-235B-A22B
llama_model_loader: - kv   4:                       general.quantized_by str              = Unsloth
llama_model_loader: - kv   5:                         general.size_label str              = 235B-A22B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                       general.license.link str              = https://huggingface.co/Qwen/Qwen3-235...
llama_model_loader: - kv   8:                           general.repo_url str              = https://huggingface.co/unsloth
llama_model_loader: - kv   9:                   general.base_model.count u32              = 1
llama_model_loader: - kv  10:                  general.base_model.0.name str              = Qwen3 235B A22B
llama_model_loader: - kv  11:          general.base_model.0.organization str              = Qwen
llama_model_loader: - kv  12:              general.base_model.0.repo_url str              = https://huggingface.co/Qwen/Qwen3-235...
llama_model_loader: - kv  13:                               general.tags arr[str,2]       = ["unsloth", "text-generation"]
llama_model_loader: - kv  14:                       qwen3moe.block_count u32              = 94
llama_model_loader: - kv  15:                    qwen3moe.context_length u32              = 40960
llama_model_loader: - kv  16:                  qwen3moe.embedding_length u32              = 4096
llama_model_loader: - kv  17:               qwen3moe.feed_forward_length u32              = 12288
llama_model_loader: - kv  18:              qwen3moe.attention.head_count u32              = 64
llama_model_loader: - kv  19:           qwen3moe.attention.head_count_kv u32              = 4
llama_model_loader: - kv  20:                    qwen3moe.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  21:  qwen3moe.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  22:                 qwen3moe.expert_used_count u32              = 8
llama_model_loader: - kv  23:              qwen3moe.attention.key_length u32              = 128
llama_model_loader: - kv  24:            qwen3moe.attention.value_length u32              = 128
llama_model_loader: - kv  25:                      qwen3moe.expert_count u32              = 128
llama_model_loader: - kv  26:        qwen3moe.expert_feed_forward_length u32              = 1536
llama_model_loader: - kv  27:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  28:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  29:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  30:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  31:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  32:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  33:            tokenizer.ggml.padding_token_id u32              = 151654
llama_model_loader: - kv  34:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  35:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  36:               general.quantization_version u32              = 2
llama_model_loader: - kv  37:                          general.file_type u32              = 12
llama_model_loader: - kv  38:                      quantize.imatrix.file str              = Qwen3-235B-A22B-GGUF/imatrix_unsloth.dat
llama_model_loader: - kv  39:                   quantize.imatrix.dataset str              = unsloth_calibration_Qwen3-235B-A22B.txt
llama_model_loader: - kv  40:             quantize.imatrix.entries_count i32              = 744
llama_model_loader: - kv  41:              quantize.imatrix.chunks_count i32              = 685
llama_model_loader: - kv  42:                                   split.no u16              = 0
llama_model_loader: - kv  43:                        split.tensors.count i32              = 1131
llama_model_loader: - kv  44:                                split.count u16              = 3
llama_model_loader: - type  f32:  471 tensors
llama_model_loader: - type q3_K:  271 tensors
llama_model_loader: - type q4_K:  359 tensors
llama_model_loader: - type q5_K:   19 tensors
llama_model_loader: - type q6_K:   11 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q3_K - Medium
print_info: file size   = 96.59 GiB (3.53 BPW) 
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch             = qwen3moe
print_info: vocab_only       = 0
print_info: n_ctx_train      = 40960
print_info: n_embd           = 4096
print_info: n_layer          = 94
print_info: n_head           = 64
print_info: n_head_kv        = 4
print_info: n_rot            = 128
print_info: n_swa            = 0
print_info: n_embd_head_k    = 128
print_info: n_embd_head_v    = 128
print_info: n_gqa            = 16
print_info: n_embd_k_gqa     = 512
print_info: n_embd_v_gqa     = 512
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-06
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 12288
print_info: n_expert         = 128
print_info: n_expert_used    = 8
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 2
print_info: rope scaling     = linear
print_info: freq_base_train  = 1000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 40960
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 235B.A22B
print_info: model params     = 235.09 B
print_info: general.name     = Qwen3-235B-A22B
print_info: n_ff_exp         = 1536
print_info: vocab type       = BPE
print_info: n_vocab          = 151936
print_info: n_merges         = 151387
print_info: BOS token        = 11 ','
print_info: EOS token        = 151645 '<|im_end|>'
print_info: EOT token        = 151645 '<|im_end|>'
print_info: PAD token        = 151654 '<|vision_pad|>'
print_info: LF token         = 198 'Ċ'
print_info: FIM PRE token    = 151659 '<|fim_prefix|>'
print_info: FIM SUF token    = 151661 '<|fim_suffix|>'
print_info: FIM MID token    = 151660 '<|fim_middle|>'
print_info: FIM PAD token    = 151662 '<|fim_pad|>'
print_info: FIM REP token    = 151663 '<|repo_name|>'
print_info: FIM SEP token    = 151664 '<|file_sep|>'
print_info: EOG token        = 151643 '<|endoftext|>'
print_info: EOG token        = 151645 '<|im_end|>'
print_info: EOG token        = 151662 '<|fim_pad|>'
print_info: EOG token        = 151663 '<|repo_name|>'
print_info: EOG token        = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 94 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 95/95 layers to GPU
load_tensors:   CPU_Mapped model buffer size = 47121.29 MiB
load_tensors:   CPU_Mapped model buffer size = 47619.95 MiB
load_tensors:   CPU_Mapped model buffer size =  3641.85 MiB
load_tensors:        SYCL0 model buffer size =  4394.40 MiB
....................................................................................................
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 4096
llama_init_from_model: n_ctx_per_seq = 4096
llama_init_from_model: n_batch       = 4096
llama_init_from_model: n_ubatch      = 4096
llama_init_from_model: flash_attn    = 0
llama_init_from_model: freq_base     = 1000000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (4096) < n_ctx_train (40960) -- the full capacity of the model will not be utilized
Running with Environment Variables:
  GGML_SYCL_DEBUG: 0
  GGML_SYCL_DISABLE_OPT: 1
Build with Macros:
  GGML_SYCL_FORCE_MMQ: no
  GGML_SYCL_F16: no
Found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0| [level_zero:gpu:0]|                Intel Arc A770 Graphics|  12.55|    512|    1024|   32| 16225M|         1.6.33578+15|
SYCL Optimization Feature:
|ID|        Device Type|Reorder|
|--|-------------------|-------|
| 0| [level_zero:gpu:0]|      Y|
This model is not recommended to use quantize kv cache!
llama_kv_cache_init: kv_size = 4096, offload = 1, type_k = 'i8', type_v = 'i8', n_layer = 94, can_shift = 1
llama_kv_cache_init:      SYCL0 KV buffer size =   376.00 MiB
llama_init_from_model: KV self size  =  376.00 MiB, K (i8):  188.00 MiB, V (i8):  188.00 MiB
llama_init_from_model:  SYCL_Host  output buffer size =     0.58 MiB
llama_init_from_model:      SYCL0 compute buffer size =  2438.00 MiB
llama_init_from_model:  SYCL_Host compute buffer size =   130.17 MiB
llama_init_from_model: graph nodes  = 3672
llama_init_from_model: graph splits = 190
common_init_from_params: setting dry_penalty_last_n to ctx_size = 4096
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 17

system_info: n_threads = 17 (n_threads_batch = 17) / 36 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | 

sampler seed: 130355867
sampler params: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 4096
	top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist 
generate: n_ctx = 4096, n_batch = 4096, n_predict = -1, n_keep = 0

ERROR LOG for llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-xeon:

(ollama) arc@xpu:~/Downloads/llama-cpp/xeon/llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-xeon$ ./flash-moe -m ~/llm/models/Qwen3-235B-A22B-GGUF/UD-Q3_K_XL/Qwen3-235B-A22B-UD-Q3_K_XL-00001-of-00003.gguf --prompt "How to make a tea?" -no-cnv
load_backend: loaded SYCL backend from ./libggml-sycl.so
register_backend: registered backend SYCL (1 devices)
register_device: registered device SYCL0 (Intel(R) Arc(TM) A770 Graphics)
ggml_backend_load_best: ./libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: ./libggml-cpu-skylakex.so score: 0
ggml_backend_load_best: ./libggml-cpu-sapphirerapids.so score: 0
ggml_backend_load_best: ./libggml-cpu-haswell.so score: 55
ggml_backend_load_best: /home/arc/Downloads/llama-cpp/xeon/llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-xeon/libggml-cpu-alderlake.so score: 0
ggml_backend_load_best: /home/arc/Downloads/llama-cpp/xeon/llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-xeon/libggml-cpu-skylakex.so score: 0
ggml_backend_load_best: /home/arc/Downloads/llama-cpp/xeon/llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-xeon/libggml-cpu-sapphirerapids.so score: 0
ggml_backend_load_best: /home/arc/Downloads/llama-cpp/xeon/llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-xeon/libggml-cpu-haswell.so score: 55
./flash-moe: line 24:  8458 Illegal instruction     (core dumped) LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(cd "$(dirname "$0")";pwd) $(cd "$(dirname "$0")";pwd)/llama-cli-bin -t $CORES -e -ngl 999 --color --no-context-shift -ot exps=CPU "$@"
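
A plausible cause (an assumption, not confirmed in this thread): the -xeon package ships CPU backends targeting newer server ISAs (note the skylakex and sapphirerapids libraries above), while the E5-2699 v3 is Haswell-EP with AVX2 but no AVX-512, so some code path in this package may execute an unsupported instruction; the -core package, which does run, avoids it. A quick way to check which vector extensions the host actually reports:

```bash
# List the AVX feature flags this CPU exposes.
# Haswell-EP (E5-2699 v3) should show avx/avx2 but no avx512* entries.
grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u
```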

LOG for env-check.sh:

(ollama) arc@xpu:~/Downloads/llama-cpp/xeon/llama-cpp-ipex-llm-2.3.0b20250612-ubuntu-xeon$ bash ~/llm/env-check.sh 
-----------------------------------------------------------------
PYTHON_VERSION=3.11.13
-----------------------------------------------------------------
transformers=4.44.2
-----------------------------------------------------------------
torch=2.2.0+cu121
-----------------------------------------------------------------
ipex-llm Version: 2.3.0b20250706
-----------------------------------------------------------------
IPEX is not installed. 
-----------------------------------------------------------------
CPU Information: 
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      46 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             36
On-line CPU(s) list:                0-35
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
CPU family:                         6
Model:                              63
Thread(s) per core:                 1
Core(s) per socket:                 18
Socket(s):                          2
Stepping:                           2
CPU max MHz:                        3600.0000
CPU min MHz:                        1200.0000
BogoMIPS:                           4590.01
-----------------------------------------------------------------
Total CPU Memory: 124.683 GB
-----------------------------------------------------------------
Operating System: 
Ubuntu 22.04.5 LTS \n \l

-----------------------------------------------------------------
Linux xpu 6.5.0-35-generic #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May  7 09:00:52 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
-----------------------------------------------------------------
CLI:
    Version: 1.2.41.20250414
    Build ID: 002a9706

Service:
    Version: 1.2.41.20250414
    Build ID: 002a9706
    Level Zero Version: 1.21.9
-----------------------------------------------------------------
  Driver UUID                                     32352e31-382e-3333-3537-380000000000
  Driver Version                                  25.18.33578
  Driver UUID                                     32352e31-382e-3333-3537-380000000000
  Driver Version                                  25.18.33578
  Driver UUID                                     32352e31-382e-3333-3537-380000000000
  Driver Version                                  25.18.33578
  Driver UUID                                     32352e31-382e-3333-3537-380000000000
  Driver Version                                  25.18.33578
  Driver Version                                  2023.16.12.0.12_195853.xmain-hotfix
  Driver UUID                                     32303235-2e32-302e-362e-302e30345f32
  Driver Version                                  2025.20.6.0.04_224945
-----------------------------------------------------------------
Driver related package version:
ii  intel-fw-gpu                                   2025.13.2-398~22.04                     all          Firmware package for Intel integrated and discrete GPUs
ii  intel-i915-dkms                                1.23.10.92.231129.101+i141-1            all          Out of tree i915 driver.
ii  intel-level-zero-gpu-raytracing                1.1.0-97~u22.04                         amd64        oneAPI Level Zero Ray Tracing Support
-----------------------------------------------------------------
/home/arc/llm/env-check.sh: line 167: sycl-ls: command not found
igpu not detected
-----------------------------------------------------------------
xpu-smi is properly installed. 
-----------------------------------------------------------------
+-----------+--------------------------------------------------------------------------------------+
| Device ID | Device Information                                                                   |
+-----------+--------------------------------------------------------------------------------------+
| 0         | Device Name: Intel(R) Arc(TM) A770 Graphics                                          |
|           | Vendor Name: Intel(R) Corporation                                                    |
|           | SOC UUID: 00000000-0000-0005-0000-000856a08086                                       |
|           | PCI BDF Address: 0000:05:00.0                                                        |
|           | DRM Device: /dev/dri/card0                                                           |
|           | Function Type: physical                                                              |
+-----------+--------------------------------------------------------------------------------------+
| 1         | Device Name: Intel(R) Arc(TM) A770 Graphics                                          |
|           | Vendor Name: Intel(R) Corporation                                                    |
|           | SOC UUID: 00000000-0000-0009-0000-000856a08086                                       |
|           | PCI BDF Address: 0000:09:00.0                                                        |
|           | DRM Device: /dev/dri/card1                                                           |
|           | Function Type: physical                                                              |
+-----------+--------------------------------------------------------------------------------------+
| 2         | Device Name: Intel(R) Arc(TM) A770 Graphics                                          |
|           | Vendor Name: Intel(R) Corporation                                                    |
|           | SOC UUID: 00000000-0000-0085-0000-000856a08086                                       |
|           | PCI BDF Address: 0000:85:00.0                                                        |
|           | DRM Device: /dev/dri/card2                                                           |
|           | Function Type: physical                                                              |
+-----------+--------------------------------------------------------------------------------------+
| 3         | Device Name: Intel(R) Arc(TM) A770 Graphics                                          |
|           | Vendor Name: Intel(R) Corporation                                                    |
|           | SOC UUID: 00000000-0000-0089-0000-000856a08086                                       |
|           | PCI BDF Address: 0000:89:00.0                                                        |
|           | DRM Device: /dev/dri/card3                                                           |
|           | Function Type: physical                                                              |
+-----------+--------------------------------------------------------------------------------------+
GPU0 Memory size=16G
GPU1 Memory size=16G
GPU2 Memory size=16G
GPU3 Memory size=16G
-----------------------------------------------------------------
05:00.0 VGA compatible controller: Intel Corporation Device 56a0 (rev 08) (prog-if 00 [VGA controller])
	Subsystem: ASRock Incorporation Device 6012
	Flags: bus master, fast devsel, latency 0, IRQ 75, NUMA node 0
	Memory at 90000000 (64-bit, non-prefetchable) [size=16M]
	Memory at 38000000000 (64-bit, prefetchable) [size=16G]
	Expansion ROM at <ignored> [disabled]
	Capabilities: <access denied>
	Kernel driver in use: i915
	Kernel modules: i915
--
09:00.0 VGA compatible controller: Intel Corporation Device 56a0 (rev 08) (prog-if 00 [VGA controller])
	Subsystem: ASRock Incorporation Device 6012
	Flags: bus master, fast devsel, latency 0, IRQ 78, NUMA node 0
	Memory at c4000000 (64-bit, non-prefetchable) [size=16M]
	Memory at 3b800000000 (64-bit, prefetchable) [size=16G]
	Expansion ROM at c5000000 [disabled] [size=2M]
	Capabilities: <access denied>
	Kernel driver in use: i915
	Kernel modules: i915
--
85:00.0 VGA compatible controller: Intel Corporation Device 56a0 (rev 08) (prog-if 00 [VGA controller])
	Subsystem: ASRock Incorporation Device 6012
	Flags: bus master, fast devsel, latency 0, IRQ 81
	Memory at fa000000 (64-bit, non-prefetchable) [size=16M]
	Memory at 3f800000000 (64-bit, prefetchable) [size=16G]
	Expansion ROM at fb000000 [disabled] [size=2M]
	Capabilities: <access denied>
	Kernel driver in use: i915
	Kernel modules: i915
--
89:00.0 VGA compatible controller: Intel Corporation Device 56a0 (rev 08) (prog-if 00 [VGA controller])
	Subsystem: ASRock Incorporation Device 6012
	Flags: bus master, fast devsel, latency 0, IRQ 84
	Memory at f8000000 (64-bit, non-prefetchable) [size=16M]
	Memory at 3f000000000 (64-bit, prefetchable) [size=16G]
	Expansion ROM at f9000000 [disabled] [size=2M]
	Capabilities: <access denied>
	Kernel driver in use: i915
	Kernel modules: i915
-----------------------------------------------------------------

savvadesogle avatar Jul 07 '25 10:07 savvadesogle