Pull requests: openvinotoolkit/openvino.genai
use genai callback in image gen and switch to genai by default
  Labels: category: GHA, category: llm_bench
fill prompt for sampler analysis with real tokens in VLM pipeline
  Labels: category: visual language
Add parametrization for the detokenization/decoding
  Labels: category: GenAI C++ API, category: GHA, category: LLM, category: Python API, category: tokenizers, enhancement, no-match-files
[Prompt lookup]
  Labels: category: cmake / build, category: continuous batching, category: GenAI C++ API, category: LLM, category: samples, category: speculative decoding, no-match-files
Update requirements.txt and add requirements_2024.5.txt
  Labels: category: llm_bench, category: sampling
  #1242 opened Nov 21, 2024 by wgzintel
Static llm pipeline dynamic shape model
  Labels: category: LLM, category: samples
  #1240 opened Nov 20, 2024 by AsyaPronina (Draft)
Parallel sampling
  Labels: category: cmake / build, category: continuous batching, category: sampling, no-match-files
Move beam search in case of chat scenario to sampler.cpp
  Labels: category: continuous batching, category: LLM, category: visual language, no-match-files
Added chat template to CLI.
  Labels: category: WWB
  #1208 opened Nov 13, 2024 by andreyanufr (Draft)
[CPU] Change kvcache default type of PagedAttention to u8 for CPU plugin
  Labels: category: continuous batching, category: GHA
Test master logits
  Labels: category: LLM, category: samples, category: sampling, do_not_merge, do_not_review
StaticLLMPipeline: Optimize kvcache copy
  Labels: category: LLM, category: NPU
  #1199 opened Nov 12, 2024 by yviansu
[JS] Add GenAI Node.js bindings
  Labels: category: cmake / build, category: samples, category: tokenizers, do_not_merge, no-match-files
  #1193 opened Nov 11, 2024 by vishniakov-nikolai (Draft, 3 tasks)
Added basic git based code checks #663
  Labels: category: GHA
  #1181 opened Nov 9, 2024 by rishik-ashili
[Speculative decoding] Alignment Speculative decoding vs Continuous batching results with many requests
  Labels: category: continuous batching, category: GenAI C++ API, category: sampling, category: speculative decoding
Preserve default config for LLM samples
  Labels: category: samples, category: sampling
  #1152 opened Nov 6, 2024 by Wovchena
Add GPU FP16 overflow fix
  Labels: category: WWB, do_not_merge
  #1120 opened Nov 1, 2024 by AlexKoff88 (Draft)
Verify chatglm3 6b
  Labels: category: GHA, category: tokenizers, no-match-files
  #1119 opened Oct 31, 2024 by Aniruddha521
StaticLLMPipeline: Use set_tensor for kvcache model
  Labels: category: LLM
  #1106 opened Oct 30, 2024 by TolyaTalamanov (Draft)
Corrected performance data when batch size is greater than 1
  Labels: category: sampling, do_not_merge