A SwiftUI view that asynchronously loads and displays an image generated by the OpenAI API from a text prompt.
Please star the repository if you believe continuing the development of this package is worthwhile. This will help me understand which package deserves more effort.
- Supports multiple platforms: iOS, macOS, watchOS, and tvOS
- Customizable with SwiftUI `Image` properties (e.g., `renderingMode`, `resizable`, `antialiased`); see the sketch after this list
- Configurable transport layer via a custom `Loader`
- Designed with interfaces, not implementations
- Fully leverages Swift's new concurrency model
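To make the customization point concrete, here is a minimal sketch (not the package's documented example) that applies the `Image` modifiers named above inside the custom `ViewBuilder` API shown later in this README; it assumes the loader has already been configured as described in the setup section below.

```swift
// A minimal sketch: assumes the default loader is already configured
// (see the setup section below) and applies the Image modifiers from
// the feature list to the loaded image.
OpenAIAsyncImage(prompt: .constant("sun"), size: .dpi1024) { state in
    switch state {
    case .loaded(let image):
        image
            .renderingMode(.original)   // SwiftUI Image modifier
            .resizable()                // allow the image to scale
            .antialiased(true)          // smooth edges when scaled
            .scaledToFit()
    case .loadError(let error):
        Text(error.localizedDescription)
    case .loading:
        ProgressView()
    }
}
```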
Where do I find my Secret API Key? You can create and view your secret API keys on the API keys page of your OpenAI account.
Create an endpoint with your API key, build a loader, register it as the default, and then use `OpenAIAsyncImage` directly:

```swift
let apiKey = "your API KEY"
let endpoint = OpenAIImageEndpoint.get(with: apiKey)
let loader = OpenAIDefaultLoader(endpoint: endpoint)
OpenAIDefaultLoaderKey.defaultValue = loader

OpenAIAsyncImage(prompt: .constant("sun"))
```
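As a rough, self-contained sketch of how that setup could be wired into an app (the app and view names here are hypothetical, and the package import is left as a comment since the module name depends on how you add the dependency):

```swift
import SwiftUI
// import the package module here; the exact module name depends on your setup

@main
struct DemoApp: App {   // hypothetical app name
    init() {
        // Configure the default loader once at startup, as in the snippet above
        let endpoint = OpenAIImageEndpoint.get(with: "your API KEY")
        OpenAIDefaultLoaderKey.defaultValue = OpenAIDefaultLoader(endpoint: endpoint)
    }

    var body: some Scene {
        WindowGroup { ContentView() }
    }
}

struct ContentView: View {
    // The text passed to OpenAIAsyncImage via the `prompt` binding
    @State private var imageText = "sun"

    var body: some View {
        OpenAIAsyncImage(prompt: $imageText)
            .frame(width: 256, height: 256)
    }
}
```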
Or with a custom `ViewBuilder`:

```swift
OpenAIAsyncImage(prompt: $imageText, size: .dpi1024) { state in
    switch state {
    case .loaded(let image):
        image
            .resizable()
            .scaledToFill()
    case .loadError(let error):
        Text(error.localizedDescription)
    case .loading:
        ProgressView()
    }
}
```
| Param | Description |
|---|---|
| prompt | A text description of the desired image(s); the maximum length is 1000 characters |
| size | The size of the generated images; must be one of 256x256, 512x512, or 1024x1024 |
| tpl | Custom view builder template |
| loader | Custom loader if you need something specific |
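The sketch below simply gathers those parameters in one call for reference. Only `prompt`, `size`, and the trailing `tpl` closure appear in the examples above; passing the loader with a `loader:` argument label is an assumption based on the table, so check the package's initializers before relying on it.

```swift
// A hedged sketch tying the call site to the parameter table above.
let endpoint = OpenAIImageEndpoint.get(with: apiKey)
let customLoader = OpenAIDefaultLoader(endpoint: endpoint) // or your own Loader

OpenAIAsyncImage(
    prompt: $imageText,      // text description, max 1000 characters
    size: .dpi1024,          // 256x256, 512x512, or 1024x1024
    loader: customLoader     // assumed label; the table lists `loader` as a parameter
) { state in                 // the trailing closure is the `tpl` view builder
    switch state {
    case .loaded(let image): image.resizable().scaledToFill()
    case .loadError(let error): Text(error.localizedDescription)
    case .loading: ProgressView()
    }
}
```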
- You need Xcode 13 or later installed in order to have access to the Documentation Compiler (DocC)
- Go to Product > Build Documentation, or press ⌃⇧⌘ D
Announced in 2022, OpenAI's text-to-image model DALL-E 2 is a recent example of a diffusion model. It uses diffusion models both for the prior (which produces an image embedding from a text caption) and for the decoder that generates the final image. In machine learning, diffusion models, also known as diffusion probabilistic models, are a class of latent variable models. They are Markov chains trained using variational inference. The goal of a diffusion model is to learn the latent structure of a dataset by modeling the way data points diffuse through the latent space. Diffusion models can be applied to a variety of tasks, including image denoising, inpainting, super-resolution, and image generation. For example, an image generation model starts with a random noise image and, having been trained to reverse the diffusion process on natural images, can generate new natural images.

Related projects:
- Replicate kit
- An example app for running text-to-image or image-to-image models to generate images using Apple's Core ML Stable Diffusion implementation