Invokes the Hugging Face Inference API. https://huggingface.co/docs/hub/en/api
Invoke-HuggingFaceInferenceApi [[-model] <Object>] [[-params] <Object>] [-Public] [-OpenaiChatCompletion] [[-StreamCallback] <Object>]
[<CommonParameters>]
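A minimal usage sketch of the syntax above. It assumes the module is imported and authentication is already configured, that the model name is illustrative, and that -params maps to the Inference API request body (an assumption, not confirmed by this help text):

```powershell
# Basic text-generation call (model name and body shape are assumptions).
# -params is assumed to become the JSON body sent to the Inference API.
$result = Invoke-HuggingFaceInferenceApi -model "mistralai/Mistral-7B-Instruct-v0.2" -params @{
    inputs     = "Explain PowerShell splatting in one sentence."
    parameters = @{ max_new_tokens = 100 }
}
$result
```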
-model

Parameter Set: (All)
Type: Object
Aliases:
Accepted Values:
Required: false
Position: 1
Default Value:
Accept pipeline input: false
Accept wildcard characters: false
-params

Parameter Set: (All)
Type: Object
Aliases:
Accepted Values:
Required: false
Position: 2
Default Value:
Accept pipeline input: false
Accept wildcard characters: false
-Public

Parameter Set: (All)
Type: SwitchParameter
Aliases:
Accepted Values:
Required: false
Position: named
Default Value: False
Accept pipeline input: false
Accept wildcard characters: false
-OpenaiChatCompletion

Forces the use of the chat completion endpoint. Params will then be treated the same as the OpenAI API params (see the Get-OpenaiChat cmdlet). More info: https://huggingface.co/blog/tgi-messages-api. Only works with models that have a chat template!
Parameter Set: (All)
Type: SwitchParameter
Aliases:
Accepted Values:
Required: false
Position: named
Default Value: False
Accept pipeline input: false
Accept wildcard characters: false
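A sketch of a chat-completion style call with -OpenaiChatCompletion, where -params follows the OpenAI chat API shape (messages array) per the TGI messages API linked above. The model name and exact body keys are assumptions:

```powershell
# Chat-completion call: with -OpenaiChatCompletion, -params is assumed to
# follow the OpenAI chat request shape (messages, max_tokens, ...).
$result = Invoke-HuggingFaceInferenceApi -model "mistralai/Mistral-7B-Instruct-v0.2" -OpenaiChatCompletion -params @{
    messages   = @(
        @{ role = "system"; content = "You are a helpful assistant." }
        @{ role = "user";   content = "What is a cmdlet?" }
    )
    max_tokens = 200
}
```

This only works with models that ship a chat template, as noted above.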
-StreamCallback

Stream callback to be invoked when streaming responses.
Parameter Set: (All)
Type: Object
Aliases:
Accepted Values:
Required: false
Position: 3
Default Value:
Accept pipeline input: false
Accept wildcard characters: false
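A streaming sketch using -StreamCallback. The callback's exact signature is an assumption (here it is written as a scriptblock receiving each streamed chunk); check the module source for the real contract:

```powershell
# Streaming sketch: scriptblock signature is an assumption.
Invoke-HuggingFaceInferenceApi -model "mistralai/Mistral-7B-Instruct-v0.2" `
    -OpenaiChatCompletion `
    -params @{ messages = @(@{ role = "user"; content = "Tell me a joke." }); stream = $true } `
    -StreamCallback { param($chunk) Write-Host $chunk -NoNewline }
```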