
[BUG]: Vulkan backend crash on model loading #887

Closed
LSXAxeller opened this issue Aug 1, 2024 · 3 comments
Labels
duplicate This issue or pull request already exists

Description

After updating to v0.14.0, which added the Vulkan backend, I decided to give it a try instead of CPU inference, but loading a model crashes with the following console output:

WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
WARNING: [Loader Message] Code 0 : Layer VK_LAYER_RENDERDOC_Capture uses API version 1.2 which is older than the application specified API version of 1.3. May cause issues.
llama_model_loader: loaded meta data with 25 key-value pairs and 327 tensors from C:\Models\Text\Index-1.9B-Character\Index-1.9B-Character-Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = Index-1.9B-Character_test
llama_model_loader: - kv   2:                          llama.block_count u32              = 36
llama_model_loader: - kv   3:                       llama.context_length u32              = 4096
llama_model_loader: - kv   4:                     llama.embedding_length u32              = 2048
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 5888
llama_model_loader: - kv   6:                 llama.attention.head_count u32              = 16
llama_model_loader: - kv   7:              llama.attention.head_count_kv u32              = 16
llama_model_loader: - kv   8:     llama.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv   9:                          general.file_type u32              = 18
llama_model_loader: - kv  10:                           llama.vocab_size u32              = 65029
llama_model_loader: - kv  11:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  12:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  13:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  14:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  15:                      tokenizer.ggml.tokens arr[str,65029]   = ["<unk>", "<s>", "</s>", "reserved_0"...
llama_model_loader: - kv  16:                      tokenizer.ggml.scores arr[f32,65029]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  17:                  tokenizer.ggml.token_type arr[i32,65029]   = [2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  18:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  19:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  20:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  22:               tokenizer.ggml.add_eos_token bool             = false
llama_model_loader: - kv  23:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  24:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   73 tensors
llama_model_loader: - type q6_K:  254 tensors
llm_load_vocab: special tokens cache size = 515
llm_load_vocab: token to piece cache size = 0.3670 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 65029
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_layer          = 36
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 5888
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = 8B
llm_load_print_meta: model ftype      = Q6_K
llm_load_print_meta: model params     = 2.17 B
llm_load_print_meta: model size       = 1.66 GiB (6.56 BPW) 
llm_load_print_meta: general.name     = Index-1.9B-Character_test
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: PAD token        = 0 '<unk>'
llm_load_print_meta: LF token         = 270 '<0x0A>'
llm_load_print_meta: max token length = 48
ggml_vulkan: Found 1 Vulkan devices:
Vulkan0: Radeon RX 580 Series (AMD proprietary driver) | uma: 0 | fp16: 0 | warp size: 64
Fatal error: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

Repeat 2 times:
--------------------------------
at LLama.Native.SafeLlamaModelHandle.llama_load_model_from_file(System.String, LLama.Native.LLamaModelParams)
--------------------------------
at LLama.Native.SafeLlamaModelHandle.LoadFromFile(System.String, LLama.Native.LLamaModelParams)
at LLama.LlamaWeights+<>c__DisplayClass21_0.<LoadFromFileAsync>b__1()
at System.Threading.Tasks.Task`1[[System.__Canon, System.Private.CoreLib, Version=8.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].InnerInvoke()
at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(System.Threading.Thread, System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef, System.Threading.Thread)
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading.PortableThreadPool+WorkerThread.WorkerThreadStart()

Reproduction Steps

// Select the native backend before the native library is loaded.
// AppConfig.Instance.Device is set to Device.VULKAN, so this enables Vulkan.
if (!NativeLibraryConfig.LLama.LibraryHasLoaded)
    NativeLibraryConfig.All
        .WithCuda(AppConfig.Instance.Device == Device.CUDA)
        .WithVulkan(AppConfig.Instance.Device == Device.VULKAN)
        .WithAutoFallback();

// Load the model with 16 layers offloaded to the GPU.
var modelPath = "MODEL_PATH_ON_PC";
ModelParameters = new ModelParams(modelPath)
{
    ContextSize = 4096,
    Embeddings = false,
    GpuLayerCount = 16,
};
Model = await LLamaWeights.LoadFromFileAsync(ModelParameters);

Environment & Configuration

  • Operating system: Windows 11
  • .NET runtime version: 8
  • LLamaSharp version: 0.14.0
  • CUDA version (if you are using cuda backend):
  • CPU & GPU device: i5-11400F & RX 580; GPU driver version
    23.19.10-240104a-399660C-AMD-Software-Adrenalin-Edition

Known Workarounds

None

m0nsky (Contributor) commented Aug 2, 2024

"Attempted to read or write protected memory" usually means you are running out of VRAM.

  • Is it a 4GB or 8GB model RX 580?
  • What VRAM utilization are you seeing right before the crash?
  • Does the crash also happen if you change GpuLayerCount from 16 to 1? (A minimal test sketch follows this list.)
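
A minimal sketch of that single-layer test, reusing only the parameters from the reproduction code above; the variable names testParams and testModel are placeholders:

// Minimal sketch: same setup as the reproduction code, but offload
// only a single layer to the GPU. If this still crashes, plain VRAM
// exhaustion is an unlikely cause.
var testParams = new ModelParams("MODEL_PATH_ON_PC")   // placeholder path
{
    ContextSize = 4096,
    Embeddings = false,
    GpuLayerCount = 1,   // was 16 in the original repro
};
using var testModel = await LLamaWeights.LoadFromFileAsync(testParams);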

LSXAxeller (Author) commented

> Is it a 4GB or 8GB model RX 580?

RX 580 OEM, 4 GB.

> What VRAM utilization are you seeing right before the crash?

About 4%, the same as idle; 3.5 GB of VRAM is free.

> Does the crash also happen if you change GpuLayerCount from 16 to 1?

Yes. GpuLayerCount of 1, 0, 16, or 36 all give the same result.

I also tried different models: 1.9B, 1.1B, 300M, and 22M.

SeriousOldMan commented

#886 reports the same error, so I think one of the two issues can be closed.

@martindevans martindevans added the duplicate This issue or pull request already exists label Aug 2, 2024