
Changelog

9.0.1

  • Message list now accepts RunId.
  • Upgraded to Microsoft.Extensions.AI version 9.0.1, which resolves the "Method not found: '!!0" error when used alongside other SDKs with different versions.

9.0.0

  • .NET 9 support added.
  • ⚠️ Support for .NET 6 and .NET 7 has ended.
  • Fixed utility library issues and synced it with the latest version.

8.10.1

  • Fixed an issue with the Store parameter being included in requests by default, causing errors with Azure OpenAI models. The parameter is now optional and excluded from serialization unless explicitly set.

8.10.0

  • Added support for Microsoft.Extensions.AI IChatClient and IEmbeddingGenerator (more information will be coming soon to the Wiki).

  • Added missing Temperature and TopP parameters to AssistantResponse.

  • Added missing Store parameter to ChatCompletionCreateRequest.

  • Breaking Changes:

    • ⚠️ CreatedAt parameter renamed to CreatedAtUnix and converted from int to long. A new CreatedAt property of type DateTimeOffset was added, which automatically converts the Unix time.
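A minimal sketch of the new properties, assuming you already have an AssistantResponse (or another response type affected by the rename) in hand; only the property names come from this release note, the helper below is illustrative:

void PrintCreated(AssistantResponse assistant)
{
    long unixSeconds = assistant.CreatedAtUnix;       // raw Unix seconds, now a long
    DateTimeOffset createdAt = assistant.CreatedAt;   // automatically converted from Unix time
    Console.WriteLine($"{unixSeconds} -> {createdAt:u}");
}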

8.9.0

  • The Realtime API implementation is complete. As usual, this is a first version and may contain bugs; please report any issues you encounter.
  • Realtime Sample

8.8.0

  • Compatibility Enhancement: You can now use this library alongside the official OpenAI library and/or Semantic Kernel within the same project. The name changes in this update support this feature.

  • Namespace and Package ID Update: The namespace and PackageId have been changed from Betalgo.OpenAI to Betalgo.Ranul.OpenAI.

  • OpenAI Naming Consistency: We've standardized the use of "OpenAI" throughout the library, replacing any instances of "OpenAi" or other variations.

  • Migration Instructions: IntelliSense should assist you in updating your code. If it doesn't, please make the following changes manually (a before/after sketch follows at the end of this list):

    • Switch to the new NuGet package: Betalgo.Ranul.OpenAI instead of Betalgo.OpenAI.
    • Update all namespaces from OpenAI to Betalgo.Ranul.OpenAI.
    • Replace all occurrences of "OpenAi", "Openai", or any other variations with "OpenAI".
  • Need Help?: If you encounter any issues, feel free to reach out via our Discord channel, Reddit channel, or GitHub discussions. We're happy to assist.

  • Feedback Welcomed: If you notice any mistakes or missing name changes, please create an issue to let us know.

  • Utilities Library Status: Please note that the Utilities library might remain broken for a while. I will focus on fixing it after completing the real-time API implementation.
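A minimal before/after sketch of the rename described under Migration Instructions, assuming the common Managers sub-namespace; adjust to whichever namespaces and option names your project actually uses:

// Before (Betalgo.OpenAI package):
// using OpenAI;
// using OpenAI.Managers;
// var service = new OpenAIService(new OpenAiOptions { ApiKey = "sk-..." });

// After (Betalgo.Ranul.OpenAI package):
using Betalgo.Ranul.OpenAI;
using Betalgo.Ranul.OpenAI.Managers;

var service = new OpenAIService(new OpenAIOptions { ApiKey = "sk-..." });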

8.7.2

  • Fixed incorrect Azure URLs.
  • Token usage response extended with PromptTokensDetails, audio_tokens and cached_tokens.
  • Model list extended with Gpt_4o_2024_08_06 and Chatgpt_4o_latest.

8.7.1

  • Moved the Strict parameter from ToolDefinition to FunctionDefinition.

8.7.0

  • Added support for o1 reasoning models (o1-mini and o1-preview).
  • Added MaxCompletionTokens for chat completions.
  • Added support for ParallelToolCalls for chat completions.
  • Added support for ServiceTier for chat completions.
  • Added support for ChunkingStrategy in Vector Store and Vector Store Files.
  • Added support for Strict in ToolDefinition.
  • Added support for MaxNumberResults and RankingOptions for FileSearchTool.
  • Added support for ReasoningTokens for token usage.
  • Added support for ResponseFormatOneOfType for AssistantResponse.cs.

8.6.2

8.6.1

  • Updated Models with new GPT-4o mini model.

8.6.0

  • Fixed Azure Assistant URLs.
  • Updated library logo.
  • Added support for tool resources in Assistant response.

8.5.1

  • Introduced IsDelta into BaseResponseModel, which helps determine whether incoming data is part of a delta.

8.5.0

  • The Assistant stream now returns the BaseResponse type, which can be cast to the appropriate type (RunStepResponse, RunResponse, MessageResponse). The reason for this change is that the stream API returns several different object types rather than a single one.
  • BaseResponse now has a StreamEvent field, which can be used to determine the type of event while streaming.
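A minimal sketch of consuming the mixed stream under this change, assuming you already have the IAsyncEnumerable<BaseResponse> from one of the run/thread streaming calls; only the response types and StreamEvent come from this note:

async Task HandleAssistantStreamAsync(IAsyncEnumerable<BaseResponse> stream)
{
    await foreach (var chunk in stream)
    {
        switch (chunk)
        {
            case RunResponse run:
                Console.WriteLine($"Run event: {run.StreamEvent}");
                break;
            case MessageResponse message:
                Console.WriteLine($"Message event: {message.StreamEvent}");
                break;
            case RunStepResponse step:
                Console.WriteLine($"Run step event: {step.StreamEvent}");
                break;
        }
    }
}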

8.4.0

  • Added Stream support for submitToolOutputsToRun, createRun, and createThreadAndRun
  • With this update, we are now in sync with OpenAI's latest API changes. We shouldn't have any missing features as of now. 🎉

8.3.0

  • Updated Assistant tests, added sample for CreateMessageWithImage
  • Azure Assistant endpoints are updated; the Azure documentation still references the earlier version (Assistant v1), and I am not sure whether Azure supports all Assistant v2 features, so feedback is much appreciated.
  • Fixed error handling and response parsing for audio transcription result in text mode.
  • Fixed Culture issue for number conversions (Audio Temperature and Image N)
  • Removed file_ids from Create Assistant
  • Added Support for Chat LogProbs
  • Fixed File_Id Typo in file VisionImageUrl
  • Updated File purpose enum list

8.2.2

  • Assistant (Beta) feature is now available in the main package. Be aware there might still be bugs due to the beta status of the feature and the SDK itself. Please report any issues you encounter.
  • Use "UseBeta": true in your config file or serviceCollection.AddOpenAIService(r => r.UseBeta = true); or new OpenAiOptions { UseBeta = true } in your service registration to enable Assistant features.
  • Expect more frequent breaking changes around the assistant API due to its beta nature.
  • All Assistant endpoints are implemented except for streaming functionality, which will be added soon.
  • The Playground has samples for every endpoint usage, but lacks a complete implementation for the Assistant APIs. Refer to Assistants overview - OpenAI API for more details.
  • Special thanks to all contributors for making this version possible!
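The three options from the bullet above, shown together as a sketch (pick one; the configuration section layout depends on your setup):

// 1) Config file: add "UseBeta": true to your OpenAI service options section.

// 2) Dependency injection registration:
serviceCollection.AddOpenAIService(r => r.UseBeta = true);

// 3) Manual construction:
var openAiService = new OpenAIService(new OpenAiOptions { ApiKey = "sk-...", UseBeta = true });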

Other Changes:

  • Fixed a bug with multiple tools calling in stream mode.
  • Added error handling for streaming.
  • Added usage information for streaming (use StreamOptions = new() { IncludeUsage = true } to get usage information; see the sketch after this list).
  • Added timestamp_granularities[] for Create transcription to provide the timestamp of every word.
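A minimal streaming sketch with usage enabled, assuming a registered openAiService; StreamOptions/IncludeUsage come from the note above, the rest is the usual chat-completion streaming pattern:

var request = new ChatCompletionCreateRequest
{
    Model = Models.Gpt_3_5_Turbo,
    Messages = new List<ChatMessage> { ChatMessage.FromUser("Hello") },
    StreamOptions = new() { IncludeUsage = true }
};

await foreach (var chunk in openAiService.ChatCompletion.CreateCompletionAsStream(request))
{
    Console.Write(chunk.Choices.FirstOrDefault()?.Message.Content);
    if (chunk.Usage != null) // usage arrives on the final chunk when IncludeUsage is set
        Console.WriteLine($"\nTotal tokens: {chunk.Usage.TotalTokens}");
}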

8.1.1

  • Fixed incorrect mapping for batch API error response.

8.1.0

  • Added support for Batch API

8.0.1

  • Added support for new Models gpt-4-turbo and gpt-4-turbo-2024-04-09 thanks to @ChaseIngersol

8.0.0

  • Added support for .NET 8.0 thanks to @BroMarduk
  • Utilities library updated to work with only .NET 8.0

7.4.7

  • Fixed a bug that was causing binary images to be sent as base64 strings, thanks to @yt3trees
  • Fixed a bug that was blocking CreateCompletionAsStream on some platforms. #331
  • Fixed a bug that was causing an error with multiple tool calls, now we are handling index parameter #493, thanks to @David-Buyer

7.4.6

  • Fixed (again 🥲) incorrect model naming for the moderation models and the ada embedding v2 model.

7.4.5

  • Fixed function calling streaming bugs thanks to @David-Buyer @dogdie233 @gavi @Maracaipe611
  • Breaking Change: While streaming (CreateCompletionAsStream), there were some unexpected incoming data chunks such as :ping or :event entries; @gavi discovered this issue. We now ignore these chunks by default. If you were relying on them, set justDataMode to false (see the sketch below).
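A minimal sketch, only relevant if you depended on those raw chunks; justDataMode is the parameter named in this release, and its exact position in the CreateCompletionAsStream signature may differ in your version:

var stream = openAiService.ChatCompletion.CreateCompletionAsStream(request, justDataMode: false);
await foreach (var chunk in stream)
{
    Console.Write(chunk.Choices.FirstOrDefault()?.Message.Content);
}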

7.4.4

  • Added support for new models : TextEmbeddingV3Small, TextEmbeddingV3Large, Gpt_3_5_Turbo_0125, Gpt_4_0125_preview, Gpt_4_turbo_preview, Text_moderation_007, Text_moderation_latest, Text_moderation_stable
  • Added optional dimension and encoding for embeddings, thanks to @shanepowell

7.4.3

  • Fixed the response format of AudioCreateSpeechRequest.
  • Updated Azure OpenAI version to 2023-12-01-preview, which now supports dall-e 3.
  • Added the ability to retrieve header values from the base response, such as ratelimit, etc. Please note that this feature is experimental and may change in the future.
  • Semi-Breaking change:
    • The SDK will now attempt to handle 500 errors and other similar errors from the OpenAI server. Previously, an exception was thrown in such cases. Now, the SDK will try to read the response and return it as an error message. This change provides more visibility to developers and helps them understand the cause of the error.
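A minimal sketch of the resulting pattern, using a chat completion call as an example; Successful and Error are the usual base-response members:

var completionResult = await openAiService.ChatCompletion.CreateCompletion(request);
if (!completionResult.Successful)
{
    // Server-side failures (e.g. 500s) now surface here instead of as an exception.
    Console.WriteLine(completionResult.Error?.Message ?? "Unknown error");
}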

7.4.2

  • Let's start with breaking changes:
    • OpenAI has replaced function calling with tools. We have made the necessary changes to our code. This is not a major change; you now just have a wrapper around your function calling, which is named "tool". The Playground provides an example. Please take a look to see how you can update your code.
      This update was completed by @shanepowell. Many thanks to him.
  • Now we support the Vision API, which involves passing message contents to the existing chat method. It is quite easy to use, but it was not yet covered in the OpenAI API documentation.
    This feature was completed by @belaszalontai. Many thanks to them.

7.4.1

  • Added support for "Create Speech" thanks to @belaszalontai / @szabe74

7.4.0

  • Added support for Dall-e 3, thanks to @belaszalontai and @szabe74
  • Added support for GPT-4-Turbo/Vision thanks to @ChaseIngersol
  • Models are updated with the latest.

7.3.1

  • Reverting a breaking change, which is itself a breaking change (only for 7.3.0):
    • Reverting the usage of EnsureStatusCode() which caused the loss of error information. Initially, I thought it would help in implementing HTTP retry tools, but now I believe it is a bad idea for two reasons.
      1. You can't simply retry if the request wasn't successful because it could fail for various reasons. For example, you might have used too many tokens in your request, causing OpenAI to reject the response, or you might have tried to use a nonexistent model. It would be better to use the Error object in your retry rules. All responses are already derived from this base object.
      2. We will lose error response data.

7.3.0

  • Updated Moderation categories as reported by @dmki.
  • Breaking Changes:
    • Introduced the use of EnsureStatusCode() after making requests. Please adjust your code accordingly to handle failure cases. Thanks to @miroljub1995 for reporting.
    • Previously, we used to override paths in the base domain, but this behavior has now changed. If you were using abc.com/mypath as the base domain, we used to ignore /mypath. This will no longer be the case, and the code will now respect /mypath. Thanks to @Hzw576816 for reporting.

7.2.0

  • Added ChatGPT fine-tuning support, thanks to @aghimir3
  • Default Azure OpenAI version increased, thanks to @mac8005
  • Fixed the Azure OpenAI audio endpoint, thanks to @mac8005

7.1.5

  • Added error handling for PlatformNotSupportedException in PostAsStreamAsync when using HttpClient.Send, now falls back to SendRequestPreNet6 for compatibility on platforms like MAUI, Mac. Thanks to @Almis90
  • We now have a function caller describe method that automatically generates function descriptions. This method is available in the utilities library. Thanks to @vbandi

7.1.3

  • This release was a bit late and took longer than expected for a couple of reasons. The feature was quite big, and I couldn't cover all possibilities. However, I believe I have covered most of the function definitions (with some details missing). Additionally, I added an option to build them manually. If you don't know what I mean, you don't need to worry. I plan to cover the rest of the function definition in the next release. Until then, you can explore this in the Playground or in the source code. This version also supports using other libraries to export your function definition.
  • We now have support for functions! Big cheers to @rzubek for completing most of this feature.
  • Additionally, we have made bug fixes and improvements. Thanks to @choshinyoung, @yt3trees, @WeihanLi, @N0ker, and all the bug reporters. (Apologies if I missed any names. Please let me know if I missed your name and you have a commit.)

7.1.2-beta

7.1.0-beta

  • Function Calling: We're releasing this version to bring in a new feature that lets you call functions faster. But remember, this version might not be perfectly stable and we might change it a lot later. A big shout-out to @rzubek for helping us add this feature. Although I liked his work, I didn't have enough time to look into it thoroughly. Still, the tests I did showed it was working, so I decided to add his feature to our code. This lets everyone use it now. Even though I'm busy moving houses and didn't have much time, seeing @rzubek's help made things a lot easier for me.
  • Support for New Models: This update also includes support for new models that OpenAI recently launched. I've also changed the naming style to match OpenAI's. Model names will no longer start with 'chat'; instead, they'll start with 'gpt_3_5' and so on.

7.0.0

  • The code now supports .NET 7.0. Big cheers to @BroMarduk for making this happen.
  • The library now automatically disposes of the HttpClient when it's created by the constructor. This feature is thanks to @BroMarduk.
  • New support has been added for using more than one instance at the same time. Check out this link for more details. Thanks to @remixtedi for bringing this to my attention.
  • A lot of small improvements have been done by @BroMarduk.
  • Breaking Changes 😢
    • I've removed 'GPT3' from the namespace, so you might need to modify some aspects of your project. But don't worry, it's pretty simple! For instance, instead of writing using OpenAI.GPT3.Interfaces, you'll now write using OpenAI.Interfaces.
    • The order of the OpenAI constructor parameters has changed. It now takes 'options' first, then 'httpclient'.
      //Before
      var openAiService = new OpenAIService(httpClient, options);
      //Now
      var openAiService = new OpenAIService(options, httpClient);

6.8.6

  • Updated Azure OpenAI default API version to the preview version to support ChatGPT. Thanks to all issue reporters.
  • Added support for an optional chat name field. Thanks to @shanepowell.
  • Breaking Change
    • FineTuneCreateRequest.PromptLossWeight converted to float, thanks to @JohnJ0808

6.8.5

  • Mostly bug fixes
  • Fixed moderation functions (https://github.com/betalgo/openai/issues/214), thanks to @scolmarg, @AbdelAzizMohamedMousa, and @digitalvir
  • Added File Stream support for Whisper, Thanks to @Swimburger
  • Fixed Whisper default response type, Thanks to @Swimburger
  • Performance improvements and code cleanup, again thanks to @Swimburger 👏
  • Code cleanup, thanks to @WeihanLi

6.8.4

  • Released an update message about the NuGet package ID change.

6.8.3

  • Breaking Changes:
    • I was going to update the library namespace from Betalgo.OpenAI.GPT3 to OpenAI.GPT3. This was my first attempt at changing my NuGet PackageId; the namespace change has been reverted for now, maybe next time. If something is broken, please be patient, and I will fix it soon.

    • Small typo fix on the model name: Model.GPT4 is now Model.GPT_4

    • ServiceCollection.AddOpenAIService(); now returns IHttpClientBuilder, which allows you to configure the underlying HttpClient. Thanks to all the reporters and @LGinC. Here is a little sample:

ServiceCollection.AddOpenAIService()
    .ConfigurePrimaryHttpMessageHandler(s => new HttpClientHandler
    {
        Proxy = new WebProxy("1.1.1.1:1010")
    });

6.8.1

  • Breaking Changes: Typo fixed in Content Moderation CategoryScores, changing Sexualminors to SexualMinors. Thanks to @HowToDoThis.
  • Tokenizer changes thanks to @IS4Code.
    • Performance improvement
    • Introduced a new method TokenCount that returns the number of tokens instead of a list.
    • Breaking Changes: Removed overridden methods that were basically string conversions. I think these methods were not used much and it is fairly easy to do these conversions outside of the method. If you disagree, let me know and I can consider adding them back.

6.8.0

  • Added .NET Standard support, massive thanks to @pdcruze and @ricaun

6.7.3

  • Breaking change: ChatMessage.FromAssistance is now ChatMessage.FromAssistant. Thanks to @Swimburger
  • The Tokenizer method has been extended with cleanUpCREOL. You can use this option to clean up Windows-style line endings. Thanks to @gspentzas1991

6.7.2

  • Removed Microsoft.AspNet.WebApi.Client dependency
  • The GitHub Actions build agent has been switched to Ubuntu due to suspicions that the line endings (EOL) of the vocab.bpe file had been altered in the last few Windows builds.
  • Added support for TextEmbeddingAdaV2 Model.

6.7.1

  • Introduced support for Whisper.
  • Grateful thanks to @shanepowell for contributing RetrieveFileContent.
  • Resolved an issue that was causing problems with the tokenizer. A clean build should hopefully address this.
  • Added support for skipping options validation

6.7.0

  • We have all been waiting for this moment. Please enjoy the ChatGPT API.
  • Added support for the ChatGPT API.
  • Fixed a tokenizer bug; it was not working properly.

6.6.8

  • Breaking Changes

    • Renamed Engine keyword to Model in accordance with OpenAI's new naming convention.
    • Deprecated DefaultEngineId in favor of DefaultModelId.
    • DefaultEngineId and DefaultModelId are no longer static.
  • Added support for Azure OpenAI, a big thanks to @copypastedeveloper!

  • Added support for Tokenizer, inspired by @dluc's https://github.com/dluc/openai-tools repository. Please consider giving the repo a star.

These two changes are recent additions, so please let me know if you encounter any issues.

  • Updated documentation links from beta.openai.com to platform.openai.com.

6.6.5

  • Sad news, we have Breaking Changes.
    • SetDefaultEngineId() replaced by SetDefaultModelId()
    • RetrieveModel(modelId) will not use the default Model anymore. You have to pass modelId as a parameter.
    • I implemented Model overwrite logic.
      • If you pass a modelId as a parameter it will overwrite the Default Model Id and object modelId
      • If you pass your modelId in your object it will overwrite the Default Model Id
      • If you don't pass any modelId it will use Default Model Id
      • If you didn't set a Default Model Id, the SDK will throw a null argument exception
        • Parameter Model Id > Object Model Id > Default Model Id
        • If you find this complicated, please have a look at the implementation, OpenAI.SDK/Extensions/ModelExtension.cs -> ProcessModelId(), or the sketch after this list.
  • New Method introduced: GetDefaultModelId();
  • Some names changed from the legacy engine keyword to the new model keyword.
  • Started to use the latest Completion endpoint. This is expected to solve fine-tuning issues. Thanks to @maiemy and other reporters.
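An illustrative sketch of that precedence, using the completion endpoint and placeholder model names; ProcessModelId() remains the authoritative behavior:

openAiService.SetDefaultModelId(Models.Davinci);

// Default Model Id is used (nothing else is set).
var a = await openAiService.Completions.CreateCompletion(new CompletionCreateRequest { Prompt = "Hi" });

// The object's Model overrides the default.
var b = await openAiService.Completions.CreateCompletion(new CompletionCreateRequest { Prompt = "Hi", Model = Models.Ada });

// The modelId parameter overrides both.
var c = await openAiService.Completions.CreateCompletion(new CompletionCreateRequest { Prompt = "Hi" }, Models.Babbage);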

6.6.4

  • Bug fix: ImageEditRequest.Mask is now optional, thanks to @hanialaraj. (If you use an edit request without a mask, your image has to be RGBA; RGB is not allowed.)

6.6.3

  • Bug fix: we now handle the logprobs response properly, thanks to @KosmonikOS
  • Code clean-up, thanks to @KosmonikOS

6.6.2

  • Bug fix: added JsonIgnore for Stop and StopAsList, thanks to @Patapum

6.6.1

  • Breaking change.

    • EmbeddingCreateRequest.Input was a string list; it is now a string. The previous list behavior has moved to the new InputAsList property, so you may need to update your code accordingly.
      Both Input (string) and InputAsList (string list) are available for use.
  • Added string and string-list support for some of the properties (examples after this list).

    • CompletionCreateRequest --> Prompt & PromptAsList / Stop & StopAsList
    • CreateModerationRequest --> Input & InputAsList
    • EmbeddingCreateRequest --> Input & InputAsList

6.6.0

  • Added support for new models (davinciv3 & edit models)
  • Added support for Edit endpoint.
  • (Warning: edit endpoint works with only some of the models, I couldn't find documentation about it, please follow the thread for more information: https://community.openai.com/t/is-edit-endpoint-documentation-incorrect/23361 )
  • Some objects were created as classes instead of records in the last version. I changed them to records; this will be a breaking change for some of you.
  • With this version, I think we cover all of the OpenAI APIs.
  • In the next version, I will focus on code cleanup and refactoring.
  • If I don't need to release a bug fix for this version, I will also update the library to .NET 7 in the next version, as promised.

6.5.0

  • OpenAI made a surprise release yesterday and announced the DALL·E API. I needed to do other things, but I couldn't resist. Because I was rushing, some method and class names may change in the next release. Until that day, enjoy your creative AI.
  • This library now fully supports all DALL·E features.
  • I tried to complete the Edit API too, but unfortunately something was wrong with the documentation; I need to ask some questions in the community forum.

6.4.1

  • Bug-fixes
    • FineTuneCreateRequest suffix json property name changed "Suffix" to "suffix"
    • CompletionCreateRequest user json property name changed "User" to "user" (Thanks to @shaneqld), also now it is a nullable string

6.4.0

  • I have good news and bad news
  • Moderation feature implementation is done. Now we support Moderation.
  • Updated some request and response models to catch up with changes in the OpenAI API.
  • The new version has some breaking changes. Because we are in the fall season, I needed to do some cleanup. Sorry for the breaking changes, but most of them are just renames; I believe they can be resolved before your coffee finishes.
  • I am hoping to support the Edit feature in the next version.

6.3.0

  • Thanks to @c-d and @sarilouis for their contributions to this version.
  • Now we support Embedding endpoint. Thanks to @sarilouis
  • Bug fixes and updates for Models
  • Code clean-up

6.2.0

6.1.0

  • The organization id is no longer a required value, thanks to @samuelnygaard
  • Removed the deprecated Engine endpoint and replaced it with the Models endpoint. The Model response now has more fields.
  • Regarding OpenAI engine naming, I had to rename the Engine enum and static fields. They are quite similar, but you have to replace them with the new ones. Please use the Models class instead of the Engine class.
  • To support fast engine-name changes, I have created a new method, Models.ModelNameBuilder(); you may consider using it.