Commit

FIX: Prevent LLM enumerator from erroring when spam enabled (#1045)
This PR fixes an issue where LLM enumerator would error out when `SiteSetting.ai_spam_detection = true` but there was no `AiModerationSetting.spam` present.

Typically, we add an `LlmDependencyValidator` for the setting itself; however, Spam is unique in that its model is set in `AiModerationSetting` rather than in a `SiteSetting`, so we add a simple presence check here to prevent erroring out.
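The guard relies on Ruby's short-circuiting `&&`: when no spam moderation setting exists, the right-hand check is false and the `[:llm_model_id]` lookup never runs. A minimal standalone sketch of the pattern, where `FakeModerationSetting` is a hypothetical stand-in for `AiModerationSetting` (in the real app, `.present?` comes from ActiveSupport):

```ruby
# Hypothetical stand-in for AiModerationSetting: `.spam` returns a
# settings hash when configured, or nil when it is missing.
class FakeModerationSetting
  class << self
    attr_accessor :spam
  end
end

# Sketch of the enumerator's spam branch with the nil guard applied.
def global_usage(spam_detection_enabled)
  usage = Hash.new { |h, k| h[k] = [] }
  spam = FakeModerationSetting.spam
  # Short-circuit: only read spam[:llm_model_id] when a record exists.
  if spam_detection_enabled && !spam.nil?
    usage[spam[:llm_model_id]] << { type: :ai_spam }
  end
  usage
end
```

With `FakeModerationSetting.spam` left as `nil`, `global_usage(true)` now returns an empty hash instead of raising `NoMethodError`; once a setting is assigned, the model id is reported as before.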
keegangeorge authored Dec 26, 2024
1 parent 47ecf86 commit b480f13
Showing 2 changed files with 22 additions and 1 deletion.
2 changes: 1 addition & 1 deletion lib/configuration/llm_enumerator.rb
Original file line number Diff line number Diff line change
Expand Up @@ -38,7 +38,7 @@ def self.global_usage
rval[model_id] << { type: :ai_embeddings_semantic_search }
end

if SiteSetting.ai_spam_detection_enabled
if SiteSetting.ai_spam_detection_enabled && AiModerationSetting.spam.present?
model_id = AiModerationSetting.spam[:llm_model_id]
rval[model_id] << { type: :ai_spam }
end
Expand Down
21 changes: 21 additions & 0 deletions spec/configuration/llm_enumerator_spec.rb
Original file line number Diff line number Diff line change
@@ -0,0 +1,21 @@
# frozen_string_literal: true

RSpec.describe DiscourseAi::Configuration::LlmEnumerator do
fab!(:fake_model)

describe "#global_usage" do
before do
SiteSetting.ai_helper_model = "custom:#{fake_model.id}"
SiteSetting.ai_helper_enabled = true
end

it "returns a hash of Llm models in use globally" do
expect(described_class.global_usage).to eq(fake_model.id => [{ type: :ai_helper }])
end

it "doesn't error on spam when spam detection is enabled but moderation setting is missing" do
SiteSetting.ai_spam_detection_enabled = true
expect { described_class.global_usage }.not_to raise_error
end
end
end
