
AttributeError: 'NoneType' object has no attribute 'lower' #5

Open
luochenxin opened this issue Apr 6, 2024 · 6 comments

Comments

@luochenxin

```
python optimize_instructions.py --optimizer="gpt-3.5-turbo" --scorer="text-bison" --instruction_pos="Q_end" --dataset="gsm8k" --task="train" --palm_api_key="..." --openai_api_key="..."
```

I tried to run this command on the gsm8k dataset and hit the following error. How should I fix it?

File "/root/autodl-tmp/LLMasop/opro/evaluation/eval_utils.py", line 802, in evaluate_single_instruction
choices = list(
File "/root/autodl-tmp/LLMasop/opro/evaluation/eval_utils.py", line 804, in
lambda x, y: _parse_prediction(
File "/root/autodl-tmp/LLMasop/opro/evaluation/eval_utils.py", line 794, in _parse_prediction
return metrics.get_normalized_prediction(
File "/root/autodl-tmp/LLMasop/opro/evaluation/metrics.py", line 210, in get_normalized_prediction
prediction_parsed = prediction.lower().strip()
AttributeError: 'NoneType' object has no attribute 'lower'

@chengrunyang (Collaborator)

Hi @luochenxin, this seems to suggest that the raw_answers_to_parse object you get at the raw_answers_to_parse = ( ... ) assignment in eval_utils.py has None elements, when it should be a list of strings. Could you print out this variable to check its value?
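
For example, something along these lines would surface the offending entries (a minimal sketch; it assumes raw_answers_to_parse is the flat list of model outputs built in eval_utils.py):

```python
# Sketch: report any entry of raw_answers_to_parse that is not a string,
# before it reaches metrics.get_normalized_prediction.
for i, answer in enumerate(raw_answers_to_parse):
    if answer is None:
        print(f"entry {i} is None")
    elif not isinstance(answer, str):
        print(f"entry {i} has unexpected type {type(answer).__name__}: {answer!r}")
```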

@luochenxin (Author)

Thank you for your suggestion. I printed its value and found that one of the elements in the list is None. I'm not sure why this happens; how can I fix it?

@chengrunyang (Collaborator)

This sounds odd, especially since only one of the elements is None while the others are normal. To track down the error, could you also print out a few more variables, like raw_answers_second_round at https://github.com/google-deepmind/opro/blob/e81b2f573ce4e15755c70c2535279d6fb940b4b7/opro/evaluation/eval_utils.py#L772C57-L772C81, and the raw_prompts_flattened that is passed to _prompt_a_list_in_parallel? Ideally print more variables before them as well. Basically, it would be useful to inspect the intermediate variables to see whether each step in the prompting pipeline works as expected.
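
A sketch of that kind of instrumentation (the variable names follow eval_utils.py; pairing the two lists element by element is an assumption about how they line up):

```python
# Sketch: walk prompts and answers side by side so the prompt whose
# answer came back as None is immediately visible.
for i, (prompt, answer) in enumerate(
    zip(raw_prompts_flattened, raw_answers_second_round)
):
    if answer is None:
        print(f"answer {i} is None; its prompt was:\n{prompt!r}\n")
```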

@luochenxin (Author)

Following your suggestion, I printed raw_prompts_flattened and found no problem with it. But when I print the value of raw_answers at opro/evaluation/eval_utils.py, line 708, I find that the element at index 30 of the list is None every time. Its input is:

"The gummy bear factory manufactures 300 gummy bears a minute. Each packet of gummy bears has 50 gummy bears inside. How long would it take for the factory to manufacture enough gummy bears to fill 240 packets, in minutes?\nLet's solve the problem."

I don't know what's causing this problem.
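
One way to narrow this down is to re-send just that one question to the scorer and see whether the call itself returns None. A sketch (call_scorer is a hypothetical stand-in for whatever serving function the script wires up for the scorer model; it is not a function in the opro repo):

```python
# Sketch: replay the single failing prompt against the scorer model.
# call_scorer is hypothetical; substitute the actual scorer call here.
failing_prompt = raw_prompts_flattened[30]
for attempt in range(3):
    answer = call_scorer(failing_prompt)
    # None here would point at the model/API side rather than the pipeline.
    print(f"attempt {attempt}: {answer!r}")
```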

@Mercer-zwy

Hi, have you solved this problem? I've run into it as well.

@luochenxin (Author)

I think it may be a problem on the Gemini side; if I switch to gpt-3.5, the problem goes away.
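
For anyone who needs to keep the original scorer, a defensive workaround is to map a None completion to an empty string before parsing. This is a sketch, not a change that exists in the opro repo; the empty string will simply be scored as a wrong answer instead of crashing:

```python
# Sketch: guard against None completions before they reach
# prediction.lower().strip() in metrics.get_normalized_prediction.
raw_answers_to_parse = [
    answer if answer is not None else "" for answer in raw_answers_to_parse
]
```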
