Fixed extra prompt build caller. #249
Conversation
⌛ Running regression test suite: https://github.com/qodo-ai/qodo-cover/actions/runs/12379255839
I ran through a few test projects using the regression script. Here's why I'm thinking it's not needed. The function we're calling doesn't actually set anything at the class level:

```python
def build_prompt(self, failed_test_runs, language, testing_framework, code_coverage_report) -> dict:
    """
    Builds a prompt using the provided information to be used for generating tests.

    This method checks for the existence of failed test runs and then calls the PromptBuilder class to construct the prompt.
    The prompt includes details such as the source file path, test file path, code coverage report, included files,
    additional instructions, failed test runs, and the programming language being used.

    Returns:
        dict: The generated prompt to be used for test generation.
    """
    # Check for existence of failed tests:
    if not failed_test_runs:
        failed_test_runs_value = ""
    else:
        failed_test_runs_value = ""
        try:
            for failed_test in failed_test_runs:
                failed_test_dict = failed_test.get("code", {})
                if not failed_test_dict:
                    continue
                # dump dict to str
                code = json.dumps(failed_test_dict)
                error_message = failed_test.get("error_message", None)
                failed_test_runs_value += f"Failed Test:\n```\n{code}\n```\n"
                if error_message:
                    failed_test_runs_value += (
                        f"Test execution error analysis:\n{error_message}\n\n\n"
                    )
                else:
                    failed_test_runs_value += "\n\n"
        except Exception as e:
            self.logger.error(f"Error processing failed test runs: {e}")
            failed_test_runs_value = ""

    # Call PromptBuilder to build the prompt
    self.prompt_builder = PromptBuilder(
        source_file_path=self.source_file_path,
        test_file_path=self.test_file_path,
        code_coverage_report=code_coverage_report,
        included_files=self.included_files,
        additional_instructions=self.additional_instructions,
        failed_test_runs=failed_test_runs_value,
        language=language,
        testing_framework=testing_framework,
        project_root=self.project_root,
    )
    return self.prompt_builder.build_prompt()
```

It returns a dict.
So what purpose does this serve? Are we just testing that the setup is good and that no assertion failures are occurring? If that's the case, I would revert my last commit, but I would make sure to add a comment stating exactly that in this PR.
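For anyone skimming the thread: the heart of the quoted method is just string assembly followed by delegation to `PromptBuilder`. Below is a minimal, runnable extraction of that assembly loop, a sketch for illustration only; the `format_failed_test_runs` name and the sample input are mine, not from the codebase.

```python
import json

def format_failed_test_runs(failed_test_runs):
    """Render failed test runs into the text block embedded in the prompt."""
    failed_test_runs_value = ""
    for failed_test in failed_test_runs:
        failed_test_dict = failed_test.get("code", {})
        if not failed_test_dict:
            continue  # entries without code are skipped entirely
        code = json.dumps(failed_test_dict)
        failed_test_runs_value += f"Failed Test:\n```\n{code}\n```\n"
        error_message = failed_test.get("error_message")
        if error_message:
            failed_test_runs_value += f"Test execution error analysis:\n{error_message}\n\n\n"
        else:
            failed_test_runs_value += "\n\n"
    return failed_test_runs_value

# Example: one failing test with an error message, one entry without code.
runs = [
    {
        "code": {"test_code": "def test_add():\n    assert add(1, 2) == 4"},
        "error_message": "AssertionError: assert 3 == 4",
    },
    {"code": {}},  # skipped by the guard above
]
print(format_failed_test_runs(runs))
```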
(I deleted my old comment because, after reviewing the complicated flow, I understood it was wrong and the change here is justified. But the main issue remains: the current structure is too complicated, and the flow is not clear enough. Too many …)
/describe
PR Description updated to latest commit (6684306)
PR Type

Bug fix

Description

- Removed the redundant `test_gen.build_prompt()` call in the `CoverAgent.__init__()` method
- The prompt is now built in the `run_test_gen()` method, where its result is actually used

Changes walkthrough 📝
| File | Change |
|------|--------|
| **CoverAgent.py**<br>`cover_agent/CoverAgent.py` | Remove redundant prompt building call in initialization:<br>removed the `test_gen.build_prompt()` call in the `__init__()` method;<br>prompt building now happens in `run_test_gen()` |
| **version.txt**<br>`cover_agent/version.txt` | Version bump to 0.2.10 |
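To make the walkthrough concrete, here is a sketch of the shape of the change. The surrounding `CoverAgent` structure is paraphrased from the description above, not copied from the diff, and the argument values are placeholders.

```python
class CoverAgent:
    def __init__(self, args, test_gen):
        self.args = args
        self.test_gen = test_gen
        # Before this PR, __init__ also called:
        #     self.test_gen.build_prompt(...)
        # The returned prompt dict was discarded here, so the call
        # did nothing useful and was removed.

    def run_test_gen(self):
        # After this PR, the prompt is built only here, where the
        # returned dict is actually consumed.
        prompt = self.test_gen.build_prompt(
            failed_test_runs=[],
            language="python",
            testing_framework="pytest",
            code_coverage_report="",
        )
        return prompt
```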