diff --git a/docusaurus/docs/Android/01-basics/01-introduction.mdx b/docusaurus/docs/Android/01-basics/01-introduction.mdx
index 658ecaa4ba..1ee91c8952 100644
--- a/docusaurus/docs/Android/01-basics/01-introduction.mdx
+++ b/docusaurus/docs/Android/01-basics/01-introduction.mdx
@@ -12,9 +12,9 @@ Moreover, all calls are routed through Stream's global edge network, thereby ens
If you're new to Stream Video SDK, we recommend starting with the following three tutorials:
-* ** [Video & Audio Calling Tutorial](../02-tutorials/01-video-calling.mdx) **
-* ** [Audio Room Tutorial](../02-tutorials/02-audio-room.mdx) **
-* ** [Livestream Tutorial](../02-tutorials/03-livestream.mdx) **
+* **[Video & Audio Calling Tutorial](https://getstream.io/video/sdk/android/tutorial/video-calling/)**
+* **[Audio Room Tutorial](https://getstream.io/video/sdk/android/tutorial/audio-room/)**
+* **[Livestream Tutorial](https://getstream.io/video/sdk/android/tutorial/livestreaming/)**
After the tutorials the documentation explains how to use the
diff --git a/docusaurus/docs/Android/02-tutorials/01-video-calling.mdx b/docusaurus/docs/Android/02-tutorials/01-video-calling.mdx
deleted file mode 100644
index b2f9009bdc..0000000000
--- a/docusaurus/docs/Android/02-tutorials/01-video-calling.mdx
+++ /dev/null
@@ -1,454 +0,0 @@
----
-title: How to Build an Android Video Calling App
-description: How to build a video call similar to Zoom or Facebook Messenger
----
-
-import { TokenSnippet } from '../../../shared/_tokenSnippet.jsx';
-
-This tutorial teaches you how to build Zoom/WhatsApp-style video calling for your app.
-
-* Calls run on Stream's global edge network for optimal latency & reliability.
-* Permissions give you fine-grained control over who can do what.
-* Video quality and codecs are automatically optimized.
-* Powered by Stream's [Video Calling API](https://getstream.io/video/).
-
-### Step 1 - Create a new project in Android Studio
-
-1. Create a new project
-2. Select Phone & Tablet -> **Empty Activity**
-3. Name your project **VideoCall**.
-
-Note that this tutorial was written using Android Studio Giraffe. Setup steps can vary slightly across Android Studio versions.
-We recommend using Android Studio Giraffe or newer.
-
-### Step 2 - Install the SDK & Setup the client
-
-**Add the Video Compose SDK** and [Jetpack Compose](https://developer.android.com/jetpack/compose) dependencies to your app's `build.gradle.kts` file, found at `app/build.gradle.kts`.
-If you're new to Android, note that there are two `build.gradle` files; open the one in the `app` folder.
-
-
-
-```kotlin
-dependencies {
- // Stream Video Compose SDK
- implementation("io.getstream:stream-video-android-ui-compose:0.4.2")
-
- // Optionally add Jetpack Compose if Android Studio didn't automatically include these dependencies
- implementation(platform("androidx.compose:compose-bom:2023.08.00"))
- implementation("androidx.activity:activity-compose:1.7.2")
- implementation("androidx.compose.ui:ui")
- implementation("androidx.compose.ui:ui-tooling")
- implementation("androidx.compose.runtime:runtime")
- implementation("androidx.compose.foundation:foundation")
- implementation("androidx.compose.material:material")
-}
-```
-
-There are 2 versions of Stream's SDK.
-
-- **Video Compose SDK**: `io.getstream:stream-video-android-ui-compose` dependency that includes the video core SDK + compose UI components.
-- **Video Core SDK**: `io.getstream:stream-video-android-core` that only includes the core parts of the video SDK.
-
-For this tutorial, we'll use the compose UI components.
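-
-If you'd rather build all of your UI from scratch, you can depend on the core artifact alone. A minimal sketch, assuming the same version as above:
-
-```kotlin
-dependencies {
-    // Core video SDK only, without the Compose UI components
-    implementation("io.getstream:stream-video-android-core:0.4.2")
-}
-```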
-
-### Step 3 - Create & Join a call
-
-To keep this tutorial short and easy to understand, we'll place all code in `MainActivity.kt`.
-For a production app you'd want to initialize the client in your `Application` class or DI module,
-and manage state in a `ViewModel`.
-
-Open up `MainActivity.kt` and replace the **MainActivity** class with:
-
-```kotlin
-class MainActivity : ComponentActivity() {
- override fun onCreate(savedInstanceState: Bundle?) {
- super.onCreate(savedInstanceState)
-
- val userToken = "REPLACE_WITH_TOKEN"
- val userId = "REPLACE_WITH_USER_ID"
- val callId = "REPLACE_WITH_CALL_ID"
-
- // step1 - create a user.
- val user = User(
- id = userId, // any string
- name = "Tutorial" // name and image are used in the UI
- )
-
- // step2 - initialize StreamVideo. For a production app we recommend adding the client to your Application class or DI module.
- val client = StreamVideoBuilder(
- context = applicationContext,
- apiKey = "hd8szvscpxvd", // demo API key
- geo = GEO.GlobalEdgeNetwork,
- user = user,
- token = userToken,
- ).build()
-
- // step3 - create and join a call of type `default` with the given call id.
- val call = client.call("default", callId)
- lifecycleScope.launch {
- val result = call.join(create = true)
- result.onError {
- Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
- }
- }
-
- setContent {
- // step4 - apply VideoTheme
- VideoTheme {
- // step5 - define required properties.
- val participants by call.state.participants.collectAsState()
- val connection by call.state.connection.collectAsState()
-
- // step6 - render texts that display connection status.
- Box(
- contentAlignment = Alignment.Center,
- modifier = Modifier.fillMaxSize()
- ) {
- if (connection != RealtimeConnection.Connected) {
- Text("loading...", fontSize = 30.sp)
- } else {
- Text("Call ${call.id} has ${participants.size} participants", fontSize = 30.sp)
- }
- }
- }
- }
- }
-}
-```
-
-To actually run this sample, we need a valid user token. User tokens are typically generated by your server-side API:
-when a user logs in to your app, you return a token that grants them access to the call.
-To make this tutorial easier to follow, we'll generate a user token for you:
-
-Please update **REPLACE_WITH_USER_ID**, **REPLACE_WITH_TOKEN** and **REPLACE_WITH_CALL_ID** with the actual values shown below:
-
-
-
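-To see what your backend needs to produce: a Stream user token is a standard JWT signed with your API secret (HMAC-SHA256), with the user id in the `user_id` claim. The sketch below is illustrative only (the `apiSecret` value is hypothetical, and token generation belongs on your server, never in the app):
-
-```kotlin
-import java.util.Base64
-import javax.crypto.Mac
-import javax.crypto.spec.SecretKeySpec
-
-// Build a minimal HS256 JWT carrying the user_id claim.
-fun createUserToken(apiSecret: String, userId: String): String {
-    val enc = Base64.getUrlEncoder().withoutPadding()
-    val header = enc.encodeToString("""{"alg":"HS256","typ":"JWT"}""".toByteArray())
-    val payload = enc.encodeToString("""{"user_id":"$userId"}""".toByteArray())
-    val mac = Mac.getInstance("HmacSHA256").apply {
-        init(SecretKeySpec(apiSecret.toByteArray(), "HmacSHA256"))
-    }
-    val signature = enc.encodeToString(mac.doFinal("$header.$payload".toByteArray()))
-    return "$header.$payload.$signature"
-}
-```
-
-In practice you'd use one of Stream's server-side SDKs, which generate tokens (including expiration claims) for you.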
-Now when you run the sample app it will connect successfully.
-The text will show that the call has one participant (yourself).
-Let's review what we did in the code above.
-
-**Create a user**. First we create a user object.
-You typically sync these users via a server side integration from your own backend.
-Alternatively, you can also use guest or anonymous users.
-
-```kotlin
-val user = User(
- id = userId, // any string
- name = "Tutorial" // name and image are used in the UI
-)
-```
-
-**Initialize the Stream Client**. Next we initialize the client by passing the API Key, user and user token.
-
-```kotlin
- val client = StreamVideoBuilder(
- context = applicationContext,
- apiKey = "hd8szvscpxvd", // demo API key
- geo = GEO.GlobalEdgeNetwork,
- user = user,
- token = userToken,
-).build()
-```
-
-**Create and Join Call**. After the user and client are created, we create a call like this:
-
-```kotlin
-val call = client.call("default", callId)
-lifecycleScope.launch {
- val result = call.join(create = true)
- result.onError {
- Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
- }
-}
-```
-
-As soon as you call `call.join`, the connection for video & audio is set up.
-
-Lastly, the UI is rendered by observing `call.state` (participants and connection states):
-
-```kotlin
-val participants by call.state.participants.collectAsState()
-val connection by call.state.connection.collectAsState()
-```
-
-You'll find all relevant state for the call in `call.state` and `call.state.participants`.
-The documentation on [Call state and Participant state](../03-guides/03-call-and-participant-state.mdx) explains this in further detail.
-
-### Step 4 - Joining from the web
-
-To make this a little more interactive, let's join the call from your browser.
-
-
-
-On your Android device, you'll see the text update to 2 participants.
-Let's keep the browser tab open as you go through the tutorial.
-
-### Step 5 - Rendering Video
-
-In this next step we're going to:
-
-1. Request Android Runtime permissions (to capture video and audio)
-2. Render your local & remote participant video
-
-#### A. Requesting Android Runtime Permissions
-
-To capture the microphone and camera output we need to request [Android runtime permissions](https://source.android.com/docs/core/permissions/runtime_perms).
-In `MainActivity.kt`, add the line `LaunchCallPermissions(call = call)` at the top of the `setContent` block:
-
-```kotlin
-setContent {
- LaunchCallPermissions(call = call)
- ...
-}
-```
-
-The `LaunchCallPermissions` composable requests the required permissions when you open the call.
-Review the [permissions docs](../05-ui-cookbook/08-permission-requests.mdx) to learn more about how you can easily request permissions.
-
-#### B. Render the video
-
-In the `MainActivity.kt` file, replace the code inside `setContent` with the example below:
-
-```kotlin
-setContent {
- LaunchCallPermissions(call = call)
-
- VideoTheme {
- val remoteParticipants by call.state.remoteParticipants.collectAsState()
- val remoteParticipant = remoteParticipants.firstOrNull()
- val me by call.state.me.collectAsState()
- val connection by call.state.connection.collectAsState()
- var parentSize: IntSize by remember { mutableStateOf(IntSize(0, 0)) }
-
- Box(
- contentAlignment = Alignment.Center,
- modifier = Modifier
- .fillMaxSize()
- .background(VideoTheme.colors.appBackground)
- .onSizeChanged { parentSize = it }
- ) {
- if (remoteParticipant != null) {
- val remoteVideo by remoteParticipant.video.collectAsState()
-
- Column(modifier = Modifier.fillMaxSize()) {
- VideoRenderer(
- modifier = Modifier.weight(1f),
- call = call,
- video = remoteVideo
- )
- }
- } else {
- if (connection != RealtimeConnection.Connected) {
- Text(
- text = "loading...",
- fontSize = 30.sp,
- color = VideoTheme.colors.textHighEmphasis
- )
- } else {
- Text(
- modifier = Modifier.padding(30.dp),
- text = "Join call ${call.id} in your browser to see the video here",
- fontSize = 30.sp,
- color = VideoTheme.colors.textHighEmphasis,
- textAlign = TextAlign.Center
- )
- }
- }
-
- // floating video UI for the local video participant
- me?.let { localVideo ->
- FloatingParticipantVideo(
- modifier = Modifier.align(Alignment.TopEnd),
- call = call,
- participant = localVideo,
- parentBounds = parentSize
- )
- }
- }
- }
-}
-```
-
-Now when you run the app, you'll see your local video in a floating video element and the video from your browser.
-The end result should look somewhat like this:
-
-![Video Tutorial](../assets/portrait-video-two.png)
-
-Let's review the changes we made.
-
-**[VideoRenderer](../04-ui-components/02-video-renderer.mdx)** is one of our primary low-level components.
-
-```kotlin
-VideoRenderer(
- modifier = Modifier.weight(1f),
- call = call,
- video = remoteVideo
-)
-```
-
-It only displays the video and doesn't add any other UI elements.
-The video is lazily loaded, and only requested from the video infrastructure if you're actually displaying it.
-So if you have a video call with 200 participants, and you show only 10 of them, you'll only receive video for 10 participants.
-This is how software like Zoom and Google Meet make large calls work.
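-
-As a sketch of that idea, using only the components from this tutorial, you could render video for just the first few participants; tracks for the rest are never requested:
-
-```kotlin
-@Composable
-fun VisibleParticipantsColumn(call: Call, maxVisible: Int = 10) {
-    val participants by call.state.participants.collectAsState()
-
-    Column(modifier = Modifier.fillMaxSize()) {
-        // Only these participants' video tracks are requested from the server.
-        participants.take(maxVisible).forEach { participant ->
-            val video by participant.video.collectAsState()
-            VideoRenderer(
-                modifier = Modifier.weight(1f),
-                call = call,
-                video = video
-            )
-        }
-    }
-}
-```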
-
-**[FloatingParticipantVideo](../04-ui-components/05-participants/03-floating-participant-video.mdx)** renders a draggable display of your own video.
-
-```kotlin
-FloatingParticipantVideo(
- modifier = Modifier.align(Alignment.TopEnd),
- call = call,
- participant = localVideo, // the non-null `me` from the `me?.let` block above
- parentBounds = parentSize
-)
-```
-
-### Step 6 - A Full Video Calling UI
-
-The above example showed how to use the call state object and compose to build a basic video UI.
-For a production version of calling you'd want a few more UI elements:
-
-* Indicators for when someone is speaking
-* Network quality indicators
-* Layout support for more than 2 participants
-* Labels for the participant names
-* A call header and controls
-
-Stream ships with several Compose components to make this easy.
-You can customize the components with theming, by passing arguments, or by swapping out parts of them.
-This is convenient if you want to quickly build a production-ready calling experience for your app.
-(And if you need more flexibility, many customers use the low-level approach shown above to build a UI from scratch.)
-
-To render a full calling UI, we'll leverage the [CallContent](../04-ui-components/04-call/01-call-content.mdx) component.
-This includes sensible defaults for a call header, video grid, call controls, picture-in-picture, and everything that you need to build a video call screen.
-
-Open `MainActivity.kt`, and update the code inside `VideoTheme` to use `CallContent`.
-The code will be a lot smaller than before, since all UI logic is handled by `CallContent`:
-
-```kotlin
-VideoTheme {
- CallContent(
- modifier = Modifier.fillMaxSize(),
- call = call,
- onBackPressed = { onBackPressed() },
- )
-}
-```
-
-The result will be:
-
-![Compose Content](../assets/compose_call_container.png)
-
-When you now run your app, you'll see a more polished video UI.
-It supports reactions, screensharing, active speaker detection, network quality indicators, and more.
-The most commonly used UI components are:
-
-- **[VideoRenderer](../04-ui-components/02-video-renderer.mdx)**: For rendering video and automatically requesting video tracks when needed. Most of the Video components are built on top of this.
-- **[ParticipantVideo](../04-ui-components/05-participants/01-participant-video.mdx)**: The participant's video + some UI elements for network quality, reactions, speaking etc.
-- **[ParticipantsGrid](../04-ui-components/05-participants/02-participants-grid.mdx)**: A grid of participant video elements.
-- **[FloatingParticipantVideo](../04-ui-components/05-participants/03-floating-participant-video.mdx)**: A draggable version of the participant video. Typically used for your own video.
-- **[ControlActions](../05-ui-cookbook/02-control-actions.mdx)**: A set of buttons for controlling your call, such as changing audio and video states.
-- **[RingingCallContent](../04-ui-components/04-call/04-ringing-call.mdx)**: UI for displaying incoming and outgoing calls.
-
-The full list of **[UI components](../04-ui-components/01-overview.mdx)** is available in the docs.
-
-### Step 7 - Customizing the UI
-
-You can customize the UI by:
-
-* Building your own UI components (the most flexible option; build anything).
-* Mixing and matching with Stream's UI components (the fastest way to build common video UIs).
-* Theming (basic customization of colors, fonts, etc.).
-
-The example below shows how to swap out the call controls for your own controls:
-
-```kotlin
-override fun onCreate(savedInstanceState: Bundle?) {
- super.onCreate(savedInstanceState)
-
- lifecycleScope.launch {
- val result = call.join(create = true)
- result.onError {
- Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
- }
- }
-
- setContent {
- VideoTheme {
- val isCameraEnabled by call.camera.isEnabled.collectAsState()
- val isMicrophoneEnabled by call.microphone.isEnabled.collectAsState()
-
- CallContent(
- modifier = Modifier.background(color = VideoTheme.colors.appBackground),
- call = call,
- onBackPressed = { onBackPressed() },
- controlsContent = {
- ControlActions(
- call = call,
- actions = listOf(
- {
- ToggleCameraAction(
- modifier = Modifier.size(52.dp),
- isCameraEnabled = isCameraEnabled,
- onCallAction = { call.camera.setEnabled(it.isEnabled) }
- )
- },
- {
- ToggleMicrophoneAction(
- modifier = Modifier.size(52.dp),
- isMicrophoneEnabled = isMicrophoneEnabled,
- onCallAction = { call.microphone.setEnabled(it.isEnabled) }
- )
- },
- {
- FlipCameraAction(
- modifier = Modifier.size(52.dp),
- onCallAction = { call.camera.flip() }
- )
- },
- )
- )
- }
- )
- }
- }
-}
-```
-
-Stream's Video SDK provides fully polished UI components, allowing you to build a video call quickly and customize them. As you've seen, you can implement a complete video call screen with the `CallContent` composable in Jetpack Compose. The `CallContent` composable consists of three major parts:
-
-- **appBarContent**: Content at the top of the screen that displays call information or additional actions.
-- **controlsContent**: Content that lets users trigger actions to control a joined call.
-- **videoContent**: Content rendered when you're successfully connected to a call.
-
-Theming gives you control over the colors and fonts.
-
-```kotlin
-VideoTheme(
- colors = StreamColors.defaultColors().copy(appBackground = Color.Black),
- dimens = StreamDimens.defaultDimens().copy(callAvatarSize = 72.dp),
- typography = StreamTypography.defaultTypography().copy(title1 = TextStyle()),
- shapes = StreamShapes.defaultShapes().copy(avatar = CircleShape)
-) {
- // ...
-}
-```
-
-### Recap
-
-Please do let us know if you ran into any issues while building a video calling app with Kotlin.
-Our team is also happy to review your UI designs and offer recommendations on how to achieve them with Stream.
-
-To recap what we've learned about android video calling:
-
-* You set up a call: `val call = client.call("default", "123")`
-* The call type (`"default"` in the example above) controls which features are enabled and how permissions are set up
-* When you join a call (`call.join()`), real-time communication for audio and video is set up
-* StateFlow objects in `call.state` and `call.state.participants` make it easy to build your own UI
-* `VideoRenderer` is the low-level component that renders video
-
-We've used Stream's [Video Calling API](https://getstream.io/video/), which means calls run on a global edge network of video servers.
-By being closer to your users, the latency and reliability of calls improve.
-The Kotlin SDK enables you to build in-app video calling, audio rooms and livestreaming in days.
-
-We hope you've enjoyed this tutorial and please do feel free to reach out if you have any suggestions or questions.
diff --git a/docusaurus/docs/Android/02-tutorials/02-audio-room.mdx b/docusaurus/docs/Android/02-tutorials/02-audio-room.mdx
deleted file mode 100644
index d50ecc0a41..0000000000
--- a/docusaurus/docs/Android/02-tutorials/02-audio-room.mdx
+++ /dev/null
@@ -1,535 +0,0 @@
----
-title: How to Build an Android Audio Room with Kotlin
-description: How to build an audio room using Stream's video SDKs
----
-
-import { TokenSnippet } from '../../../shared/_tokenSnippet.jsx';
-
-This tutorial will teach you how to build an audio room experience like Twitter Spaces or Clubhouse.
-The end result will look like the image below and support the following features:
-
-* Backstage mode. You can start the call with your co-hosts and chat a bit before going live.
-* Calls run on Stream's global edge network for optimal latency and scalability.
-* There is no cap on how many listeners you can have in a room.
-* Listeners can raise their hand, and be invited to speak by the host.
-* Audio tracks are sent multiple times for optimal reliability.
-
-![Audio Room](../assets/audio-room.png)
-
-Time to get started building an audio room for your app.
-
-### Step 1 - Create a new project in Android Studio
-
-Note that this tutorial was written using Android Studio Giraffe. Setup steps can vary slightly across Android Studio versions.
-We recommend using Android Studio Giraffe or newer.
-
-1. Create a new project
-2. Select Phone & Tablet -> **Empty Activity**
-3. Name your project **AudioRoom**.
-
-### Step 2 - Install the SDK & Setup the client
-
-**Add the Video Compose SDK** and [Jetpack Compose](https://developer.android.com/jetpack/compose) dependencies to your app's `build.gradle.kts` file, found at `app/build.gradle.kts`.
-If you're new to Android, note that there are two `build.gradle` files; open the one in the `app` folder.
-
-```kotlin
-dependencies {
- // Stream Video Compose SDK
- implementation("io.getstream:stream-video-android-ui-compose:0.4.2")
-
- // Jetpack Compose (optional: Android Studio typically adds these when you create a new project)
- implementation(platform("androidx.compose:compose-bom:2023.08.00"))
- implementation("androidx.activity:activity-compose:1.7.2")
- implementation("androidx.compose.ui:ui")
- implementation("androidx.compose.ui:ui-tooling")
- implementation("androidx.compose.runtime:runtime")
- implementation("androidx.compose.foundation:foundation")
- implementation("androidx.compose.material:material")
-}
-```
-
-There are 2 versions of Stream's SDK.
-
-- **Video Compose SDK**: `io.getstream:stream-video-android-ui-compose` dependency that includes the video core SDK + compose UI components.
-- **Video Core SDK**: `io.getstream:stream-video-android-core` that only includes the core parts of the video SDK.
-
-For this tutorial, we'll use the compose UI components.
-
-### Step 3 - Create & Join a call
-
-Open up `MainActivity.kt` and replace the **MainActivity** class with the following code:
-
-```kotlin
-class MainActivity : ComponentActivity() {
- override fun onCreate(savedInstanceState: Bundle?) {
- super.onCreate(savedInstanceState)
-
- val userToken = "REPLACE_WITH_TOKEN"
- val userId = "REPLACE_WITH_USER_ID"
- val callId = "REPLACE_WITH_CALL_ID"
-
- // step1 - create a user.
- val user = User(
- id = userId, // any string
- name = "Tutorial" // name and image are used in the UI
- )
-
- // step2 - initialize StreamVideo. For a production app we recommend adding the client to your Application class or DI module.
- val client = StreamVideoBuilder(
- context = applicationContext,
- apiKey = "hd8szvscpxvd", // demo API key
- geo = GEO.GlobalEdgeNetwork,
- user = user,
- token = userToken,
- ).build()
-
- // step3 - create and join a call of type `audio_room` with the given call id.
- val call = client.call("audio_room", callId)
- lifecycleScope.launch {
- val result = call.join(create = true, createOptions = CreateCallOptions(
- members = listOf(
- MemberRequest(userId = userId, role="host", custom = emptyMap())
- ), custom = mapOf(
- "title" to "Compose Trends",
- "description" to "Talk about how easy compose makes it to reuse and combine UI"
- )
- ))
- result.onError {
- Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
- }
- }
-
- setContent {
- VideoTheme {
- val connection by call.state.connection.collectAsState()
-
- Column(horizontalAlignment = Alignment.CenterHorizontally, modifier = Modifier.padding(16.dp)) {
- if (connection != RealtimeConnection.Connected) {
- Text("loading", fontSize = 30.sp)
- } else {
- Text("ready to render an audio room", fontSize = 30.sp)
- }
- }
- }
- }
- }
-}
-```
-
-To keep the tutorial short and simple to follow we've added the client, state and UI straight into the **MainActivity** class.
-For a real app, you'd typically want to use an [Application class](https://developer.android.com/reference/android/app/Application) for the client and a [ViewModel](https://developer.android.com/topic/libraries/architecture/viewmodel) for managing the state.
-
-Let's review the example above and go over the details.
-
-**Create a user**. First we create a user object.
-You typically sync your users via a server side integration from your own backend.
-Alternatively, you can also use guest or anonymous users.
-
-```kotlin
-val user = User(
- id = userId, // any string
- name = "Tutorial" // name and image are used in the UI
-)
-```
-
-**Initialize the Stream Client**. Next we initialize the client by passing the API Key, user and user token.
-
-```kotlin
- val client = StreamVideoBuilder(
- context = applicationContext,
- apiKey = "hd8szvscpxvd", // demo API key
- geo = GEO.GlobalEdgeNetwork,
- user = user,
- token = userToken,
-).build()
-```
-
-**Create and Join Call**. After the user and client are created, we create a call like this:
-
-```kotlin
-val call = client.call("audio_room", callId)
-lifecycleScope.launch {
- val result = call.join(
- create = true, createOptions = CreateCallOptions(
- members = listOf(
- MemberRequest(userId = userId, role = "host", custom = emptyMap())
- ), custom = mapOf(
- "title" to "Compose Trends",
- "description" to "Talk about how easy compose makes it to reuse and combine UI"
- )
- )
- )
- result.onError {
- Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
- }
-}
-```
-
-* This creates and joins a call of type `"audio_room"` with the specified `callId`.
-* You add yourself as a member with the "host" role. You can create custom roles and grant them permissions to fit your app.
-* The `title` and `description` custom fields are set on the call object.
-* An error toast is shown if joining the audio room fails.
-
-To actually run this sample, we need a valid user token. User tokens are typically generated by your server-side API:
-when a user logs in to your app, you return a token that grants them access to the call.
-To make this tutorial easier to follow, we'll generate a user token for you:
-
-Please update **REPLACE_WITH_USER_ID**, **REPLACE_WITH_TOKEN** and **REPLACE_WITH_CALL_ID** with the actual values shown below:
-
-
-
-With valid credentials in place, we can join the call.
-When you run the app you'll see the following:
-
-![Audio Room](../assets/audio-room-2.png)
-
-### Step 4 - Audio Room & Description
-
-Now that we've successfully connected to the audio room, let's set up a basic UI and description.
-Replace the code in `setContent` with the following sample:
-
-```kotlin
-setContent {
- VideoTheme {
- val connection by call.state.connection.collectAsState()
- val activeSpeakers by call.state.activeSpeakers.collectAsState()
- val audioLevel = activeSpeakers.firstOrNull()?.audioLevel?.collectAsState()
-
- val color1 = Color.White.copy(alpha = 0.2f + (audioLevel?.value ?: 0f) * 0.8f)
- val color2 = Color.White.copy(alpha = 0.2f + (audioLevel?.value ?: 0f) * 0.8f)
-
- Column(
- horizontalAlignment = Alignment.CenterHorizontally,
- verticalArrangement = Arrangement.Top,
- modifier = Modifier
- .background(Brush.linearGradient(listOf(color1, color2)))
- .fillMaxSize()
- .fillMaxHeight()
- .padding(16.dp)
- ) {
-
- if (connection != RealtimeConnection.Connected) {
- Text("loading", fontSize = 30.sp)
- } else {
- AudioRoom(call = call)
- }
- }
- }
-}
-```
-
-All state for a call is available in `call.state`. In the example above we're observing the connection state and the active speakers.
-The [ParticipantState docs](../03-guides/03-call-and-participant-state.mdx) explain the available stateflow objects.
-
-You'll see that the **AudioRoom** composable hasn't been implemented yet. In `MainActivity`, add the following `AudioRoom` composable:
-
-```kotlin
-@Composable
-public fun AudioRoom(
- call: Call,
-){
- val custom by call.state.custom.collectAsState()
- val title = custom["title"] as? String
- val description = custom["description"] as? String
- val participants by call.state.participants.collectAsState()
- val activeSpeakers by call.state.activeSpeakers.collectAsState()
- val activeSpeaker = activeSpeakers.firstOrNull()
- val sortedParticipants by call.state.sortedParticipants.collectAsState(emptyList())
-
- val backstage by call.state.backstage.collectAsState()
- val isMicrophoneEnabled by call.microphone.isEnabled.collectAsState()
-
- Description(title, description, participants)
-
- activeSpeaker?.let {
- Text("${it.userNameOrId.value} is speaking")
- }
-
- Column(
- modifier = Modifier
- .fillMaxHeight()
- .padding(0.dp, 32.dp, 0.dp, 0.dp)
- ) {
- Participants(
- modifier = Modifier.weight(4f),
- sortedParticipants = sortedParticipants
- )
- Controls(
- modifier = Modifier
- .weight(1f)
- .fillMaxWidth()
- .padding(16.dp), call = call,
- isMicrophoneEnabled = isMicrophoneEnabled,
- backstage = backstage,
- enableMicrophone = { call.microphone.setEnabled(it) }
- )
- }
-}
-```
-
-The code above observes the participants, active speakers and backstage stateflow objects in `call.state`.
-
-We still need to implement the **Controls**, **Participants**, and **Description** composables.
-Let's add those next.
-
-```kotlin
-@Composable
-public fun Description(
- title: String?,
- description: String?,
- participants: List<ParticipantState>
-) {
- Text("$title", fontSize = 30.sp)
- Text("$description", fontSize = 20.sp, modifier = Modifier.padding(16.dp))
- Text("${participants.size} participants", fontSize = 20.sp)
-}
-
-@Composable
-public fun Participants(
- modifier: Modifier = Modifier,
- sortedParticipants: List<ParticipantState>
-) {
- Text("participants todo", fontSize = 30.sp)
-}
-
-@Composable
-public fun Controls(
- modifier: Modifier = Modifier,
- call: Call,
- backstage: Boolean = false,
- isMicrophoneEnabled: Boolean = false,
- enableMicrophone: (Boolean) -> Unit = {}
-) {
- Text("controls todo", fontSize = 30.sp)
-}
-```
-
-That's it for the basics. Now when you run your app, you'll see the following UI:
-
-![Audio Room](../assets/audio-room-3.png)
-
-The approach is the same for all components. We take the state of the call by observing `call.state` properties, such as `call.state.participants`, and use it to power our UI.
-The [ParticipantState docs](../03-guides/03-call-and-participant-state.mdx) list all the state objects we need for the name, avatar, audio levels, speaking status, and more.
-
-### Step 5 - Audio Room Controls & Permission
-
-Any app that records the microphone needs to ask the user for permission. We'll do this now.
-
-To capture the microphone output, we need to request [Android runtime permissions](https://source.android.com/docs/core/permissions/runtime_perms).
-In `MainActivity.kt`, add the line `LaunchMicrophonePermissions(call = call)` at the top of the `setContent` block:
-
-```kotlin
-setContent {
- LaunchMicrophonePermissions(call = call)
- // ...
-}
-```
-
-The `LaunchMicrophonePermissions` composable requests the microphone permission when you enter the app.
-Review the [permissions docs](../05-ui-cookbook/08-permission-requests.mdx) to learn more about how you can easily request permissions.
-
-Now let's have a look at the `Controls` composable. Replace the `Controls` composable with the following:
-
-```kotlin
-@Composable
-public fun Controls(
- modifier: Modifier = Modifier,
- call: Call,
- backstage: Boolean = false,
- isMicrophoneEnabled: Boolean = false,
- enableMicrophone: (Boolean) -> Unit = {}
-){
- val scope = rememberCoroutineScope()
- Row(
- modifier = modifier,
- horizontalArrangement = Arrangement.SpaceEvenly
- ) {
- ToggleMicrophoneAction(
- modifier = Modifier.size(52.dp),
- isMicrophoneEnabled = isMicrophoneEnabled,
- onCallAction = { enableMicrophone(it.isEnabled) }
- )
-
- Button(
- onClick = {
- scope.launch {
- if (backstage) call.goLive() else call.stopLive()
- }
- }
- ) {
- Text(text = if (backstage) "Go Live" else "End")
- }
- }
-}
-```
-
-Now when you run the app, you'll see buttons to enable or disable the microphone and to start or end the broadcast.
-
-To make this a little more interactive, let's join the audio room from your browser.
-
-
-
-At first you won't be allowed to join the room, since it isn't live yet.
-By default, the `audio_room` call type has backstage mode enabled. This makes it easy to try out your room and talk to your co-hosts before going live.
-You can enable or disable backstage mode in the dashboard.
-
-Let's go live and join the call:
-
-* Click **Go Live** on Android
-* On the web, join the room
-* You'll see the participant count increase to 2
-
-### Step 6 - Participants UI
-
-Time to build a pretty UI for the participants. Replace the `Participants` composable with the following:
-
-```kotlin
-@Composable
-public fun Participants(
- modifier: Modifier = Modifier,
- sortedParticipants: List<ParticipantState>
-){
- LazyVerticalGrid(
- modifier = modifier,
- columns = GridCells.Adaptive(minSize = 128.dp)
- ) {
- items(items = sortedParticipants, key = { it.sessionId }) { participant ->
- ParticipantAvatar(participant)
- }
- }
-}
-```
-
-The `Participants` composable is responsible for rendering all participants in the audio room as a grid list.
-Now we'll add a pretty **ParticipantAvatar** composable, which represents a user in the audio room:
-
-```kotlin
-@Composable
-public fun ParticipantAvatar(
- participant: ParticipantState,
- modifier: Modifier = Modifier
-) {
- val user by participant.user.collectAsState()
- val nameOrId by participant.userNameOrId.collectAsState()
- val image by participant.image.collectAsState()
- val isSpeaking by participant.speaking.collectAsState()
- val audioEnabled by participant.audioEnabled.collectAsState()
-
- Column(
- modifier = modifier,
- horizontalAlignment = Alignment.CenterHorizontally,
- verticalArrangement = Arrangement.Center
- ) {
-
- Box(modifier = Modifier.size(VideoTheme.dimens.audioAvatarSize)) {
- UserAvatar(
- userName = nameOrId,
- userImage = image,
- modifier = Modifier
- .fillMaxSize()
- .padding(VideoTheme.dimens.audioAvatarPadding)
- )
-
- if (isSpeaking) {
- Box(
- modifier = Modifier
- .fillMaxSize()
- .border(BorderStroke(2.dp, Color.Gray), CircleShape)
- )
- } else if (!audioEnabled) {
- Box(
- modifier = Modifier
- .fillMaxSize()
- .padding(VideoTheme.dimens.audioAvatarPadding)
- ) {
- Box(
- modifier = Modifier
- .clip(CircleShape)
- .background(VideoTheme.colors.appBackground)
- .size(VideoTheme.dimens.audioRoomMicSize)
- ) {
- Icon(
- modifier = Modifier
- .fillMaxSize()
- .padding(VideoTheme.dimens.audioRoomMicPadding),
- painter = painterResource(id = io.getstream.video.android.ui.common.R.drawable.stream_video_ic_mic_off),
- tint = VideoTheme.colors.errorAccent,
- contentDescription = null
- )
- }
- }
- }
- }
-
- Spacer(modifier = Modifier.height(8.dp))
-
- Text(
- modifier = Modifier.fillMaxWidth(),
- text = nameOrId,
- fontSize = 14.sp,
- fontWeight = FontWeight.Bold,
- color = VideoTheme.colors.textHighEmphasis,
- textAlign = TextAlign.Center,
- )
-
- Text(
- modifier = Modifier.fillMaxWidth(),
- text = user.role,
- fontSize = 11.sp,
- color = VideoTheme.colors.textHighEmphasis,
- textAlign = TextAlign.Center,
- )
- }
-}
-```
-
-The `ParticipantAvatar` composable represents each participant in the audio room, displaying the user's avatar, name, role, and microphone status.
-Now when you run the app, you'll see a pretty UI for the participants.
-
-![Audio Room](../assets/audio-room-4.png)
-
-In the above example, we use the following state flow objects:
-
-```kotlin
-val user by participant.user.collectAsState()
-val nameOrId by participant.userNameOrId.collectAsState()
-val image by participant.image.collectAsState()
-val isSpeaking by participant.speaking.collectAsState()
-val audioEnabled by participant.audioEnabled.collectAsState()
-```
-
-The [ParticipantState docs](../03-guides/03-call-and-participant-state.mdx) include all the other attributes that are also available.
-For audio rooms, `participant.audioLevel` and `participant.audioLevels` can be convenient to implement an audio visualizer.
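For example, here's a hedged sketch of turning recent audio level samples into bar heights for a visualizer. The function name, the scaling, and the 2-unit floor for silent bars are illustrative choices of ours, not SDK APIs:

```kotlin
// Maps normalized audio levels (0f..1f) to bar heights for a simple visualizer.
// The max height and the 2-unit minimum are illustrative, not SDK values.
fun audioLevelsToBarHeights(levels: List<Float>, maxBarHeight: Int = 40): List<Int> =
    levels.map { level ->
        val clamped = level.coerceIn(0f, 1f)
        // keep a small minimum height so silent bars stay visible
        maxOf(2, (clamped * maxBarHeight).toInt())
    }
```

In Compose you could collect `participant.audioLevels`, feed the samples through a helper like this, and draw a `Row` of `Box`es using the returned heights in `dp`.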
-
-### Other built-in features
-
-There are a few more exciting features that you can use to build audio rooms:
-
-- **Requesting Permissions**: Participants can ask the host for permission to speak, share video, etc.
-- **Query Calls**: You can query calls to easily show upcoming calls, calls that recently finished, etc.
-- **Call Previews**: Before you join the call you can observe it and show a preview, e.g. "John, Sarah and 3 others are on this call."
-- **Reactions & Custom events**: Reactions and custom events are supported.
-- **Recording & Broadcasting**: You can record your calls, or broadcast them to HLS.
-- **Chat**: Stream's chat SDKs are fully featured and you can integrate them in the call.
-- **Moderation**: Moderation capabilities are built into the product.
-- **Transcriptions**: Transcriptions aren't available yet, but are coming soon.
-
-### Recap
-
-It was fun to see just how quickly you can build an audio room for your app.
-Please do let us know if you ran into any issues.
-Our team is also happy to review your UI designs and offer recommendations on how to achieve them with Stream.
-
-To recap what we've learned:
-
-* You set up a call: `val call = client.call("audio_room", "222")`
-* The call type `audio_room` controls which features are enabled and how permissions are set up
-* The `audio_room` call type enables backstage mode by default, and only allows admins to join before the call goes live
-* When you join a call, realtime communication is set up for audio & video calling: `call.join()`
-* Stateflow objects in `call.state` and `call.state.participants` make it easy to build your own UI
-
-Calls run on Stream's global edge network of video servers.
-Being closer to your users improves the latency and reliability of calls.
-For audio rooms we use Opus RED and Opus DTX for optimal audio quality.
-
-The SDKs enable you to build audio rooms, video calling and livestreaming in days.
-
-We hope you've enjoyed this tutorial, and please do feel free to reach out if you have any suggestions or questions.
diff --git a/docusaurus/docs/Android/02-tutorials/03-livestream.mdx b/docusaurus/docs/Android/02-tutorials/03-livestream.mdx
deleted file mode 100644
index 49182f93e0..0000000000
--- a/docusaurus/docs/Android/02-tutorials/03-livestream.mdx
+++ /dev/null
@@ -1,391 +0,0 @@
----
-title: Livestream Tutorial
-description: How to build a livestream experience using Stream's video SDKs
----
-
-import { TokenSnippet } from '../../../shared/_tokenSnippet.jsx';
-
-In this tutorial we'll quickly build a low-latency in-app livestreaming experience.
-The livestream is broadcasted using Stream's edge network of servers around the world.
-We'll cover the following topics:
-
-* Ultra low latency streaming
-* Multiple streams & co-hosts
-* RTMP-in and WebRTC input
-* Exporting to HLS
-* Reactions, custom events and chat
-* Recording & Transcriptions
-
-Let's get started. If you have any questions or feedback, be sure to let us know via the feedback button.
-
-### Step 1 - Create a new project in Android Studio
-
-Note that this tutorial was written using **Android Studio Giraffe**. Setup steps can vary slightly across Android Studio versions.
-We recommend using [Android Studio Giraffe or newer](https://developer.android.com/studio/releases).
-
-1. Create a new project
-2. Select Phone & Tablet -> **Empty Activity**
-3. Name your project **Livestream**.
-
-### Step 2 - Install the SDK & Setup the client
-
-**Add the Video Compose SDK** and [Jetpack Compose](https://developer.android.com/jetpack/compose) dependencies to your app's `build.gradle.kts` file found in `app/build.gradle.kts`.
-If you're new to Android, note that there are two `build.gradle` files; you want to open the `build.gradle` in the app folder.
-
-```kotlin
-dependencies {
- // Stream Video Compose SDK
- implementation("io.getstream:stream-video-android-ui-compose:0.4.2")
-
- // Jetpack Compose (optional; Android Studio typically adds these when you create a new project)
- implementation(platform("androidx.compose:compose-bom:2023.08.00"))
- implementation("androidx.activity:activity-compose:1.7.2")
- implementation("androidx.compose.ui:ui")
- implementation("androidx.compose.ui:ui-tooling")
- implementation("androidx.compose.runtime:runtime")
- implementation("androidx.compose.foundation:foundation")
- implementation("androidx.compose.material:material")
-}
-```
-
-There are two versions of Stream's SDK.
-
-- **Video Compose SDK**: `io.getstream:stream-video-android-ui-compose` dependency that includes the video core SDK + compose UI components.
-- **Video Core SDK**: `io.getstream:stream-video-android-core` that only includes the core parts of the video SDK.
-
-This tutorial demonstrates the Compose Video SDK, but you have the option to use the core library without Compose based on your preference.
-
-### Step 3 - Broadcast a livestream from your phone
-
-The following code shows how to publish from your phone's camera.
-Let's open `MainActivity.kt` and replace the `MainActivity` class with the following code:
-
-```kotlin
-class MainActivity : ComponentActivity() {
- override fun onCreate(savedInstanceState: Bundle?) {
- super.onCreate(savedInstanceState)
-
- val userToken = "REPLACE_WITH_TOKEN"
- val userId = "REPLACE_WITH_USER_ID"
- val callId = "REPLACE_WITH_CALL_ID"
-
- // create a user.
- val user = User(
- id = userId, // any string
- name = "Tutorial" // name and image are used in the UI
- )
-
- // for a production app we recommend adding the client to your Application class or di module.
- val client = StreamVideoBuilder(
- context = applicationContext,
- apiKey = "mmhfdzb5evj2", // demo API key
- geo = GEO.GlobalEdgeNetwork,
- user = user,
- token = userToken,
- ).build()
-
- // create a call of type `livestream`
- val call = client.call("livestream", callId)
- lifecycleScope.launch {
- // join the call
- val result = call.join(create = true)
- result.onError {
- Toast.makeText(applicationContext, "uh oh $it", Toast.LENGTH_SHORT).show()
- }
- }
-
- setContent {
- // request the Android runtime permissions for the camera and microphone
- LaunchCallPermissions(call = call)
-
- VideoTheme {
- Text("TODO: render video")
- }
- }
- }
-}
-```
-
-You'll notice that these first 3 lines need their values replaced.
-
-```kotlin
-val userToken = "REPLACE_WITH_TOKEN"
-val userId = "REPLACE_WITH_USER_ID"
-val callId = "REPLACE_WITH_CALL_ID"
-```
-
-Replace them now with the values shown below:
-
-
-
-When you run the app now you'll see a text message saying: "TODO: render video".
-Before we get around to rendering the video let's review the code above.
-
-In the first step we setup the user:
-
-```kotlin
-val user = User(
- id = userId, // any string
- name = "Tutorial" // name and image are used in the UI
-)
-```
-
-If you don't have an authenticated user you can also use a guest or anonymous user.
-For most apps it's convenient to match your own system of users to grant and remove permissions.
-
-Next we create the client:
-
-```kotlin
-val client = StreamVideoBuilder(
- context = applicationContext,
- apiKey = "mmhfdzb5evj2", // demo API key
- geo = GEO.GlobalEdgeNetwork,
- user = user,
- token = userToken,
-).build()
-```
-
-You'll see the `userToken` variable. Your backend typically generates the user token on signup or login.
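Stream user tokens are JWTs signed with your API secret on your server. As a rough, hedged illustration of the token's shape only (the `user_id` claim name and this minimal HS256 signing are our assumptions here; in practice, generate tokens with one of Stream's server-side SDKs):

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Illustrative only: builds an HS256-signed JWT with a `user_id` claim.
// Never embed your API secret in the app; do this on your backend.
fun devToken(userId: String, apiSecret: String): String {
    val enc = Base64.getUrlEncoder().withoutPadding()
    val header = enc.encodeToString("""{"alg":"HS256","typ":"JWT"}""".toByteArray())
    val payload = enc.encodeToString("""{"user_id":"$userId"}""".toByteArray())
    val mac = Mac.getInstance("HmacSHA256").apply {
        init(SecretKeySpec(apiSecret.toByteArray(), "HmacSHA256"))
    }
    val signature = enc.encodeToString(mac.doFinal("$header.$payload".toByteArray()))
    return "$header.$payload.$signature"
}
```

Your backend would return a token like this from its login or signup endpoint, and the app would pass it to `StreamVideoBuilder` as `token`.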
-
-The most important step to review is how we create the call.
-Stream uses the same call object for livestreaming, audio rooms and video calling.
-Have a look at the code snippet below:
-
-```kotlin
-val call = client.call("livestream", callId)
-lifecycleScope.launch {
- // join the call
- val result = call.join(create = true)
- result.onError {
- Toast.makeText(applicationContext, "uh oh $it", Toast.LENGTH_SHORT).show()
- }
-}
-```
-
-To create the first call object, specify the call type as **livestream** and provide a unique **callId**. The **livestream** call type comes with default settings that are usually suitable for livestreams, but you can customize features, permissions, and settings in the dashboard. Additionally, the dashboard allows you to create new call types as required.
-
-Finally, using `call.join(create = true)` will not only create the call object on our servers but also initiate the real-time transport for audio and video. This allows for seamless and immediate engagement in the livestream.
-
-Note that you can also add members to a call and assign them different roles. For more information, see the [call creation docs](../03-guides/02-joining-creating-calls.mdx)
-
-### Step 4 - Rendering the video
-
-In this step we're going to build a UI for showing your local video with a button to start the livestream.
-This example uses Compose, but you could also use our XML VideoRenderer.
-
-In `MainActivity.kt` replace the `VideoTheme` with the following code:
-
-```kotlin
-VideoTheme {
- val connection by call.state.connection.collectAsState()
- val totalParticipants by call.state.totalParticipants.collectAsState()
- val backstage by call.state.backstage.collectAsState()
- val localParticipant by call.state.localParticipant.collectAsState()
- val video = localParticipant?.video?.collectAsState()?.value
- val duration by call.state.duration.collectAsState()
-
- androidx.compose.material.Scaffold(
- modifier = Modifier
- .fillMaxSize()
- .background(VideoTheme.colors.appBackground)
- .padding(6.dp),
- contentColor = VideoTheme.colors.appBackground,
- backgroundColor = VideoTheme.colors.appBackground,
- topBar = {
- if (connection == RealtimeConnection.Connected) {
- if (!backstage) {
- Box(
- modifier = Modifier
- .fillMaxWidth()
- .padding(6.dp)
- ) {
- Text(
- modifier = Modifier
- .align(Alignment.CenterEnd)
- .background(
- color = VideoTheme.colors.primaryAccent,
- shape = RoundedCornerShape(6.dp)
- )
- .padding(horizontal = 12.dp, vertical = 4.dp),
- text = "Live $totalParticipants",
- color = Color.White
- )
-
- Text(
- modifier = Modifier.align(Alignment.Center),
- text = "Live for $duration",
- color = VideoTheme.colors.textHighEmphasis
- )
- }
- }
- }
- },
- bottomBar = {
- androidx.compose.material.Button(
- colors = ButtonDefaults.buttonColors(
- contentColor = VideoTheme.colors.primaryAccent,
- backgroundColor = VideoTheme.colors.primaryAccent
- ),
- onClick = {
- lifecycleScope.launch {
- if (backstage) call.goLive() else call.stopLive()
- }
- }
- ) {
- Text(
- text = if (backstage) "Go Live" else "Stop Broadcast",
- color = Color.White
- )
- }
- }
- ) {
- VideoRenderer(
- modifier = Modifier
- .fillMaxSize()
- .padding(it)
- .clip(RoundedCornerShape(6.dp)),
- call = call,
- video = video,
- videoFallbackContent = {
- Text(text = "Video rendering failed")
- }
- )
- }
-}
-```
-
-Upon running your app, you will be greeted with an interface that looks like this:
-
-![Livestream](../assets/tutorial-livestream.png)
-
-Stream uses a technology called SFU cascading to replicate your livestream over different servers around the world.
-This makes it possible to reach a large audience in realtime.
-
-Now let's press **Go live** in the android app and click the link below to watch the video in your browser.
-
-
-
-#### State & Participants
-
-Let's take a moment to review the Compose code above. `Call.state` exposes all the stateflow objects you need.
-The [participant state docs](../03-guides/03-call-and-participant-state.mdx) show all the available fields.
-
-In this example we use:
-
-* `call.state.connection`: shows whether we're connected to the realtime video. You can use this to implement a loading interface
-* `call.state.backstage`: a boolean that indicates whether the call is in backstage mode
-* `call.state.duration`: how long the call has been running
-* `call.state.totalParticipants`: the number of participants watching the livestream
-* `call.state.participants`: the list of participants
-
-`call.state.participants` can optionally contain more information about who's watching the stream.
-If you have multiple people broadcasting video, it also contains the video tracks.
-
-* `participant.user`: the user's name, image and custom data
-* `participant.video`: the video for this user
-* `participant.roles`: the roles for the participant, enabling co-hosts, moderators, etc.
-
-There are many possibilities and the [participant state docs](../03-guides/03-call-and-participant-state.mdx) explain this in more detail.
-
-#### Creating a UI to watch a livestream
-
-The livestream layout is built using standard Jetpack Compose. The [VideoRenderer](../04-ui-components/02-video-renderer.mdx) component is provided by Stream.
-**VideoRenderer** renders the video and a fallback. You can use it for rendering the local and remote video.
-
-If you want to learn more about building a more advanced UI for watching a livestream, check out [Cookbook: Watching a livestream](../05-ui-cookbook/16-watching-livestream.mdx).
-
-#### Backstage mode
-
-In the example above you might have noticed the `call.goLive()` method and the `call.state.backstage` stateflow.
-The backstage functionality is enabled by default on the livestream call type.
-It makes it easy to build a flow where you and your co-hosts can setup your camera and equipment before going live.
-Only after you call `call.goLive()` will regular users be allowed to join the livestream.
-
-This is convenient for many livestreaming and audio-room use cases. If you want calls to start immediately when you join them, that's also possible.
-Simply go to the Stream dashboard, click the livestream call type, and disable backstage mode.
-
-### Step 5 - (Optional) Publishing RTMP using OBS
-
-The example above showed how to publish your phone's camera to the livestream.
-Almost all livestream software and hardware supports RTMPS.
-[OBS](https://obsproject.com/) is one of the most popular livestreaming software packages and we'll use it to explain how to import RTMPS.
-
-A. Log the URL & Stream Key
-
-```kotlin
-val rtmp = call.state.ingress.rtmp
-Log.i("Tutorial", "RTMP url and streamingKey: $rtmp")
-```
-
-B. Open OBS and go to settings -> stream
-
-- Select "custom" service
-- Server: equal to the server URL from the log
-- Stream key: equal to the stream key from the log
-
-Press **Start Streaming** in OBS. The RTMP stream will now show up in your call just like a regular video participant.
-Now that we've learned to publish using WebRTC or RTMP let's talk about watching the livestream.
-
-### Step 6 - Viewing a livestream (WebRTC)
-
-Watching a livestream is even easier than broadcasting.
-
-Compared to the current code in `MainActivity.kt` you:
-
-* Don't need to request permissions or enable the camera
-* Don't render the local video, but instead render the remote video
-* Typically include some small UI elements like viewer count, a button to mute etc
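As a rough sketch of the viewer side (filtering by `sessionId` to find a remote participant is our assumption here; check the participant state docs for the exact fields the SDK exposes), the `VideoTheme` block could look like:

```kotlin
VideoTheme {
    val participants by call.state.participants.collectAsState()
    // Assumption: pick the first participant whose session isn't ours;
    // the SDK may offer a more direct way to get remote participants.
    val host = participants.firstOrNull { it.sessionId != call.sessionId }
    val video = host?.video?.collectAsState()?.value

    VideoRenderer(
        modifier = Modifier.fillMaxSize(),
        call = call,
        video = video,
        videoFallbackContent = {
            Text(text = "The livestream hasn't started yet")
        }
    )
}
```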
-
-### Step 7 - (Optional) Viewing a livestream with HLS
-
-Another way to watch a livestream is using HLS. HLS tends to have a 10 to 20 second delay, while the above WebRTC approach is realtime.
-The benefit that HLS offers is better buffering under poor network conditions.
-So HLS can be a good option when:
-
-* A 10-20 second delay is acceptable
-* Your users want to watch the stream in poor network conditions
-
-Let's show how to broadcast your call to HLS:
-
-```kotlin
-call.startHLS()
-val hlsUrl = call.state.egress.value?.hls?.playlistUrl
-Log.i("Tutorial", "HLS url = $hlsUrl")
-```
-
-You can play the HLS video feed using any HLS capable video player, such as [ExoPlayer](https://github.com/google/ExoPlayer).
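For example, playback could be sketched like this, assuming the AndroidX Media3 ExoPlayer dependencies (`androidx.media3:media3-exoplayer` and `androidx.media3:media3-exoplayer-hls`) are on the classpath and `hlsUrl` is non-null; this player code is not part of the Stream SDK:

```kotlin
import androidx.media3.common.MediaItem
import androidx.media3.exoplayer.ExoPlayer

// Plays the HLS feed from the call's egress.
// `context` is your Activity or Application context.
val player = ExoPlayer.Builder(context).build()
player.setMediaItem(MediaItem.fromUri(hlsUrl))
player.prepare()
player.play()
```

Remember to release the player (`player.release()`) when the hosting screen is destroyed.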
-
-### Step 8 - Advanced Features
-
-This tutorial covered broadcasting and watching a livestream.
-It also went into more details about HLS & RTMP-in.
-
-There are several advanced features that can improve the livestreaming experience:
-
-* **[Co-hosts](../03-guides/02-joining-creating-calls.mdx)**: You can add members to your livestream with elevated permissions, so you can have co-hosts, moderators, etc.
-* **[Custom events](../03-guides/09-reactions-and-custom-events.mdx)**: You can use custom events on the call to share any additional data. Think about showing the score for a game, or any other realtime use case.
-* **[Reactions & Chat](../03-guides/09-reactions-and-custom-events.mdx)**: Users can react to the livestream, and you can add chat. This makes for a more engaging experience.
-* **[Notifications](../06-advanced/01-ringing.mdx)**: You can notify users via push notifications when the livestream starts.
-* **[Recording](../06-advanced/06-recording.mdx)**: The call recording functionality allows you to record the call with various options and layouts.
-
-### Recap
-
-It was fun to see just how quickly you can build in-app low latency livestreaming.
-Please do let us know if you ran into any issues.
-Our team is also happy to review your UI designs and offer recommendations on how to achieve them with Stream.
-
-To recap what we've learned:
-
-* WebRTC is optimal for latency; HLS is slower but buffers better for users with poor connections
-* You set up a call: `val call = client.call("livestream", callId)`
-* The call type `livestream` controls which features are enabled and how permissions are set up
-* The livestream call type enables backstage mode by default. This allows you and your co-hosts to set up your mic and camera before allowing people in
-* When you join a call, realtime communication is set up for audio & video: `call.join()`
-* Stateflow objects in `call.state` and `call.state.participants` make it easy to build your own UI
-* For a livestream the most important one is call.state.???
-
-Calls run on Stream's global edge network of video servers.
-Being closer to your users improves the latency and reliability of calls.
-The SDKs enable you to build livestreaming, audio rooms and video calling in days.
-
-We hope you've enjoyed this tutorial and please do feel free to reach out if you have any suggestions or questions.
\ No newline at end of file
diff --git a/docusaurus/docs/Android/04-ui-components/04-call/01-call-content.mdx b/docusaurus/docs/Android/04-ui-components/04-call/01-call-content.mdx
index cdcb546ace..290aaea241 100644
--- a/docusaurus/docs/Android/04-ui-components/04-call/01-call-content.mdx
+++ b/docusaurus/docs/Android/04-ui-components/04-call/01-call-content.mdx
@@ -109,4 +109,4 @@ The following parameters are available on the `CallContent`:
You can find out the parameters details in the [CallContent docs](https://getstream.github.io/stream-video-android/stream-video-android-ui-compose/io.getstream.video.android.compose.ui.components.call.activecall/-call-content.html).
:::
-If you're looking for guides on how to override and customize this UI, we have various [UI Cookbook](../../05-ui-cookbook/01-overview.mdx) recipes for you and we cover a portion of customization within the [Video Android SDK Tutorial](../../02-tutorials/01-video-calling.mdx).
+If you're looking for guides on how to override and customize this UI, we have various [UI Cookbook](../../05-ui-cookbook/01-overview.mdx) recipes for you and we cover a portion of customization within the [Video Android SDK Tutorial](https://getstream.io/video/sdk/android/tutorial/video-calling/).
diff --git a/docusaurus/docs/Android/04-ui-components/04-call/03-call-controls.mdx b/docusaurus/docs/Android/04-ui-components/04-call/03-call-controls.mdx
index f3bc9efc26..455d38b055 100644
--- a/docusaurus/docs/Android/04-ui-components/04-call/03-call-controls.mdx
+++ b/docusaurus/docs/Android/04-ui-components/04-call/03-call-controls.mdx
@@ -150,7 +150,7 @@ This is a very simple component so it doesn't have replaceable slots, but it sti
* `modifier`: Allows you to customize the size, position, elevation, background and much more of the component. Using this in pair with `VideoTheme` and our [theming guide](../03-video-theme.mdx), you're able to customize the shape of the call controls as well as colors padding and more.
* `actions`: As previously mentioned, by changing the `actions`, you don't only change the possible behavior, but also the appearance. You can use our own predefined action buttons or add your own Composable and tweak orders.
-In our [Video Android Tutorial](../../02-tutorials/01-video-calling.mdx), we showcased how to build custom `ControlActions` to remove a leave call action button and only feature camera and audio buttons. The result ended up looking something like this:
+In our [Video Android Tutorial](https://getstream.io/video/sdk/android/tutorial/video-calling/), we showcased how to build custom `ControlActions` to remove a leave call action button and only feature camera and audio buttons. The result ended up looking something like this:
![Compose Control Actions](../../assets/compose_call_controls_custom.png)
diff --git a/docusaurus/docs/Android/04-ui-components/04-call/05-screen-share-content.mdx b/docusaurus/docs/Android/04-ui-components/04-call/05-screen-share-content.mdx
index 176f9bb132..58a3927922 100644
--- a/docusaurus/docs/Android/04-ui-components/04-call/05-screen-share-content.mdx
+++ b/docusaurus/docs/Android/04-ui-components/04-call/05-screen-share-content.mdx
@@ -63,4 +63,4 @@ This is a very simple component so it doesn't have replaceable slots, but it sti
- `style`: Defined properties for styling a single video call track.
- `videoRenderer`: A single video renderer renders each individual participant.
-If you're looking for guides on how to override and customize this UI, we have various [UI Cookbook](../../05-ui-cookbook/01-overview.mdx) recipes for you and we cover a portion of customization within the [Video Android SDK Tutorial](../../02-tutorials/01-video-calling.mdx).
\ No newline at end of file
+If you're looking for guides on how to override and customize this UI, we have various [UI Cookbook](../../05-ui-cookbook/01-overview.mdx) recipes for you and we cover a portion of customization within the [Video Android SDK Tutorial](https://getstream.io/video/sdk/android/tutorial/video-calling/).
\ No newline at end of file
diff --git a/docusaurus/docs/Android/04-ui-components/05-participants/04-participants-spotlight.mdx b/docusaurus/docs/Android/04-ui-components/05-participants/04-participants-spotlight.mdx
index 86a9b7cb9a..cb927787af 100644
--- a/docusaurus/docs/Android/04-ui-components/05-participants/04-participants-spotlight.mdx
+++ b/docusaurus/docs/Android/04-ui-components/05-participants/04-participants-spotlight.mdx
@@ -67,4 +67,4 @@ This is a very simple component so it doesn't have replaceable slots, but it sti
- `style`: Defined properties for styling a single video call track.
- `videoRenderer`: A single video renderer renders each individual participant.
-If you're looking for guides on how to override and customize this UI, we have various [UI Cookbook](../../05-ui-cookbook/01-overview.mdx) recipes for you and we cover a portion of customization within the [Video Android SDK Tutorial](../../02-tutorials/01-video-calling.mdx).
\ No newline at end of file
+If you're looking for guides on how to override and customize this UI, we have various [UI Cookbook](../../05-ui-cookbook/01-overview.mdx) recipes for you and we cover a portion of customization within the [Video Android SDK Tutorial](https://getstream.io/video/sdk/android/tutorial/video-calling/).
\ No newline at end of file
diff --git a/docusaurus/docs/Android/05-ui-cookbook/15-hostling-livestream.mdx b/docusaurus/docs/Android/05-ui-cookbook/15-hostling-livestream.mdx
deleted file mode 100644
index 3473c3d82e..0000000000
--- a/docusaurus/docs/Android/05-ui-cookbook/15-hostling-livestream.mdx
+++ /dev/null
@@ -1,360 +0,0 @@
----
-title: Hosting a livestream
-description: How to host a livestream on Android with Kotlin
----
-
-This cookbook tutorial walks you through building an advanced UI for hosting a livestream on Android.
-
-:::note
-In this cookbook tutorial, we will assume that you already know how to join a livestream call. If you haven't familiarized yourself with the [Livestream Tutorial](../02-tutorials/03-livestream.mdx) yet, we highly recommend doing so before proceeding with this cookbook.
-:::
-
-When you build a UI to host livestreaming, there are a few things to keep in mind:
-
-* Start/Stop the broadcasting
-* Toggling device options, such as the camera and microphone
-* How to indicate when there are connection problems
-* Number of participants
-* Duration of the call
-
-In this cookbook tutorial, you'll learn how to build the result below at the end:
-
-| On Backstage | On Live |
-| --- | --- |
-| ![LiveStream Backstage](../assets/cookbook/livestream-backstage.png) | ![LiveStream Live](../assets/cookbook/livestream-live.png) |
-
-### Rendering Livestreaming
-
-First and foremost, rendering the livestreaming video is the key feature and the most crucial part of the screen.
-
-To accomplish this, you can easily render your livestreaming video using the following simple sample code:
-
-```kotlin
-val userToken = "REPLACE_WITH_TOKEN"
-val userId = "REPLACE_WITH_USER_ID"
-val callId = "REPLACE_WITH_CALL_ID"
-
-// step1 - create a user.
-val user = User(
- id = userId, // any string
- name = "Tutorial", // name and image are used in the UI
- role = "admin"
-)
-
-// step2 - initialize StreamVideo. For a production app we recommend adding the client to your Application class or di module.
-val client = StreamVideoBuilder(
- context = applicationContext,
- apiKey = "mmhfdzb5evj2", // demo API key
- geo = GEO.GlobalEdgeNetwork,
- user = user,
- token = userToken,
-).build()
-
-// step3 - create a call of type `livestream` with the given id.
-val call = client.call("livestream", callId)
-lifecycleScope.launch {
- // join the call
- val result = call.join(create = true)
- result.onError {
- Toast.makeText(applicationContext, "uh oh $it", Toast.LENGTH_SHORT).show()
- }
-}
-
-setContent {
- // request the Android runtime permissions for the camera and microphone
- LaunchCallPermissions(call = call)
-
- // step4 - apply VideoTheme
- VideoTheme {
- val me by call.state.me.collectAsState()
- val video = me?.video?.collectAsState()?.value
-
- VideoRenderer(
- modifier = Modifier
- .fillMaxSize()
- .clip(RoundedCornerShape(6.dp)),
- call = call,
- video = video,
- videoFallbackContent = {
- Text(text = "Video rendering failed")
- }
- )
- }
-}
-```
-
-If you run the above example, you'll see the very basic video streaming screen below:
-
-![Video Streaming](../assets/compose_single_video.png)
-
-### Implement Live Participants Label
-
-Now you need to build labels that display the count of participants in your livestreaming session and indicate the streaming time.
-
-You can easily implement the live label using the following approach:
-
-```kotlin
-@Composable
-fun LiveLabel(
- modifier: Modifier,
- liveCount: Int
-) {
- Row(modifier = modifier.clip(RoundedCornerShape(6.dp))) {
- Text(
- modifier = Modifier
- .background(VideoTheme.colors.primaryAccent)
- .padding(vertical = 3.dp, horizontal = 12.dp),
- text = "Live",
- color = Color.White
- )
-
- Row(
- modifier = Modifier.background(Color(0xFF1C1E22)),
- verticalAlignment = Alignment.CenterVertically
- ) {
- Icon(
- modifier = Modifier
- .padding(horizontal = 6.dp)
- .size(22.dp),
- imageVector = Icons.Default.Person,
- tint = Color.White,
- contentDescription = null
- )
-
- Text(
- modifier = Modifier.padding(end = 12.dp, top = 3.dp, bottom = 3.dp),
- text = liveCount.toString(),
- color = Color.White
- )
- }
- }
-}
-```
-
-Upon building a preview for the `LiveLabel` Composable, you will observe the following result:
-
-![LiveLabel](../assets/cookbook/livestream-live-label.png)
-
-### Implement Live Time Label
-
-Next, you need to implement the live time label, which will display the duration of the livestream once it starts.
-
-You can simply implement the live time label like so:
-
-```kotlin
-@Composable
-fun TimeLabel(
- modifier: Modifier = Modifier,
- sessionTime: Long
-) {
- val time by remember(sessionTime) {
- val date = Date(sessionTime)
- val format = SimpleDateFormat("mm:ss", Locale.US)
- mutableStateOf(format.format(date))
- }
-
- Row(
- modifier = modifier
- .background(Color(0xFF1C1E22), RoundedCornerShape(6.dp)),
- verticalAlignment = Alignment.CenterVertically
- ) {
- Icon(
- modifier = Modifier
- .size(28.dp)
- .padding(start = 12.dp),
- imageVector = Icons.Default.CheckCircle,
- tint = VideoTheme.colors.infoAccent,
- contentDescription = null
- )
-
- Text(
- modifier = Modifier.padding(horizontal = 12.dp),
- text = time,
- color = Color.White
- )
- }
-}
-```
-
-If you build a preview for the `TimeLabel` composable, you'll see the result below:
-
-![TimeLabel](../assets/cookbook/livestream-time-label.png)
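Note that the `SimpleDateFormat("mm:ss")` pattern above wraps after an hour. If your livestreams can run longer, a hedged alternative (our own helper, not an SDK function) is to format the elapsed milliseconds directly:

```kotlin
// Formats elapsed milliseconds as h:mm:ss, or mm:ss under an hour.
fun formatSessionTime(sessionTimeMs: Long): String {
    val totalSeconds = sessionTimeMs / 1000
    val hours = totalSeconds / 3600
    val minutes = (totalSeconds % 3600) / 60
    val seconds = totalSeconds % 60
    return if (hours > 0) "%d:%02d:%02d".format(hours, minutes, seconds)
    else "%02d:%02d".format(minutes, seconds)
}
```

You could then pass the result straight to the `Text` inside `TimeLabel` instead of the `SimpleDateFormat`-based value.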
-
-### Connect implementations With Call State
-
-Now, let's connect those implementations to the call state and put them all together with `Scaffold`, which consists of a top bar, a bottom bar, and content.
-
-```kotlin
-VideoTheme {
- val participantCount by call.state.participantCounts.collectAsState()
- val connection by call.state.connection.collectAsState()
- val backstage by call.state.backstage.collectAsState()
- val me by call.state.me.collectAsState()
- val video = me?.video?.collectAsState()?.value
- val sessionTime by call.state.liveDurationInMs.collectAsState()
-
- Scaffold(
- modifier = Modifier
- .fillMaxSize()
- .background(Color(0xFF272A30))
- .padding(6.dp),
- contentColor = Color(0xFF272A30),
- backgroundColor = Color(0xFF272A30),
- topBar = {
- if (connection == RealtimeConnection.Connected) {
- Box(
- modifier = Modifier
- .fillMaxWidth()
- .padding(6.dp)
- ) {
- if (!backstage) {
- LiveLabel(
- modifier = Modifier.align(Alignment.CenterStart),
- liveCount = participantCount?.total ?: 0
- )
- }
-
- TimeLabel(
- modifier = Modifier.align(Alignment.Center),
- sessionTime = sessionTime ?: 0
- )
- }
- }
- }
- ) {
- VideoRenderer(
- modifier = Modifier
- .fillMaxSize()
- .padding(it)
- .clip(RoundedCornerShape(6.dp)),
- call = call,
- video = video,
- videoFallbackContent = {
- Text(text = "Video rendering failed")
- }
- )
- }
-}
-```
-
-As demonstrated in the example above, several state declarations represent the call state:
-
-- `participantCount`: A model that contains information about participant counts.
-- `connection`: Indicates the connection state of the call.
-- `backstage`: Whether the call is in backstage mode or not.
-- `me`: The local participant's state.
-- `video`: The local participant's video track.
-- `sessionTime`: The elapsed time since the call went live.
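The top-bar gating in the `Scaffold` above boils down to two rules, which can be modeled as plain functions. A hypothetical sketch — the `ConnectionState` type and function names below are stand-ins for illustration, not the SDK's `RealtimeConnection` API:

```kotlin
// Hypothetical stand-in for the SDK's connection state.
sealed interface ConnectionState {
    object Connecting : ConnectionState
    object Connected : ConnectionState
    object Disconnected : ConnectionState
}

// The LiveLabel only appears once connected AND the call has left the backstage.
fun shouldShowLiveLabel(connection: ConnectionState, backstage: Boolean): Boolean =
    connection == ConnectionState.Connected && !backstage

// The TimeLabel appears whenever the call is connected, even while on backstage.
fun shouldShowTimeLabel(connection: ConnectionState): Boolean =
    connection == ConnectionState.Connected
```

Extracting such rules into pure functions keeps the Composable itself simple and makes the visibility logic trivially unit-testable.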
-
-### Implement Live Button
-
-Let's proceed with building a live button that lets you start or stop broadcasting your call and control physical devices such as the camera and microphone.
-
-You can implement the live button like so:
-
-```kotlin
-@Composable
-fun LiveButton(
- modifier: Modifier,
- call: Call,
- isBackstage: Boolean,
- onClick: () -> Unit
-) {
- Box(modifier = Modifier.fillMaxWidth()) {
- Button(
- modifier = modifier,
- colors = if (isBackstage) {
- ButtonDefaults.buttonColors(
- backgroundColor = VideoTheme.colors.primaryAccent,
- contentColor = VideoTheme.colors.primaryAccent
- )
- } else {
- ButtonDefaults.buttonColors(
- backgroundColor = VideoTheme.colors.errorAccent,
- contentColor = VideoTheme.colors.errorAccent
- )
- },
- onClick = onClick
- ) {
- Icon(
- modifier = Modifier.padding(vertical = 3.dp, horizontal = 6.dp),
- imageVector = if (isBackstage) {
- Icons.Default.PlayArrow
- } else {
- Icons.Default.Close
- },
- tint = Color.White,
- contentDescription = null
- )
-
- Text(
- modifier = Modifier.padding(end = 6.dp),
- text = if (isBackstage) "Go Live" else "Stop Broadcast",
- fontWeight = FontWeight.Bold,
- fontSize = 16.sp,
- color = Color.White
- )
- }
-
- val isCameraEnabled by call.camera.isEnabled.collectAsState()
- val isMicrophoneEnabled by call.microphone.isEnabled.collectAsState()
-
- Row(modifier = Modifier.align(Alignment.CenterEnd)) {
- ToggleCameraAction(
- modifier = Modifier.size(45.dp),
- isCameraEnabled = isCameraEnabled,
- enabledColor = VideoTheme.colors.callActionIconEnabledBackground,
- disabledColor = VideoTheme.colors.callActionIconEnabledBackground,
- disabledIconTint = VideoTheme.colors.errorAccent,
- shape = RoundedCornerShape(8.dp),
- onCallAction = { callAction -> call.camera.setEnabled(callAction.isEnabled) }
- )
-
- ToggleMicrophoneAction(
- modifier = Modifier
- .padding(horizontal = 12.dp)
- .size(45.dp),
- isMicrophoneEnabled = isMicrophoneEnabled,
- enabledColor = VideoTheme.colors.callActionIconEnabledBackground,
- disabledColor = VideoTheme.colors.callActionIconEnabledBackground,
- disabledIconTint = VideoTheme.colors.errorAccent,
- shape = RoundedCornerShape(8.dp),
- onCallAction = { callAction -> call.microphone.setEnabled(callAction.isEnabled) }
- )
- }
- }
-}
-```
-
-Now, let's complete the `Scaffold` with the new `LiveButton` Composable.
-
-### Complete the Live Screen
-
-Everything is now ready to be put together. You can complete the `Scaffold` with the new `LiveButton` Composable like so:
-
-```kotlin
-Scaffold(
- ..,
- bottomBar = {
- LiveButton(
- modifier = Modifier.padding(9.dp),
- call = call,
- isBackstage = backstage
- ) {
- lifecycleScope.launch {
- if (backstage) call.goLive() else call.stopLive()
- }
- }
- }
- ) {
- ..
- }
-```
-
-Once you've finished building your project, you'll see the final result shown below:
-
-![LiveStream Backstage](../assets/cookbook/livestream-backstage.png)
-
-By simply clicking the **Go Live** button, you can begin broadcasting your stream.
-
-In this cookbook tutorial, you have learned how to create an advanced live streaming screen. If you wish to refer to the code, feel free to explore the [GitHub Repository](https://github.com/GetStream/stream-video-android/tree/develop/tutorials/tutorial-livestream).
\ No newline at end of file
diff --git a/docusaurus/docs/Android/05-ui-cookbook/16-watching-livestream.mdx b/docusaurus/docs/Android/05-ui-cookbook/16-watching-livestream.mdx
deleted file mode 100644
index acca6e0cb3..0000000000
--- a/docusaurus/docs/Android/05-ui-cookbook/16-watching-livestream.mdx
+++ /dev/null
@@ -1,151 +0,0 @@
----
-title: Watching a livestream
-description: How to watch a livestream on Android with Kotlin
----
-
-This cookbook tutorial walks you through how to build an advanced UI for watching a livestream on Android.
-
-:::note
-We will assume that you already know how to join a livestream call. If you haven't familiarized yourself with the [Livestream Tutorial](../02-tutorials/03-livestream.mdx) yet, we highly recommend doing so before proceeding with this cookbook.
-:::
-
-In this cookbook tutorial, you'll learn how to build the result below at the end:
-
-| On Backstage | On Live |
-| --- | --- |
-| ![LiveStream Backstage](../assets/cookbook/livestream-watching-backstage.png) | ![LiveStream Live](../assets/cookbook/livestream-watching-live.png) |
-
-### Watching a Livestream
-
-The Stream Compose SDK offers a pre-built UI component, `LivestreamPlayer`, designed to simplify the creation of a livestream viewing screen. This component includes a video renderer, displays information such as the number of participants and call duration, and provides controls for pausing and resuming the livestream.
-
-You can use the `LivestreamPlayer` like the sample below:
-
-```kotlin
-val call = client.call("livestream", callId)
-lifecycleScope.launch {
- // join the call
- val result = call.join(create = true)
- result.onError {
- Toast.makeText(applicationContext, "uh oh $it", Toast.LENGTH_SHORT).show()
- }
-}
-
-setContent {
- LivestreamPlayer(call = call)
-}
-```
-
-If you run the above example, you'll see the screen below:
-
-![Watching Livestream](../assets/cookbook/livestream-watching-backstage.png)
-
-As the message in the screenshot indicates, the host hasn't started the livestream yet.
-
-Now, if you run the livestream hosting sample and start broadcasting by following the [Livestream Tutorial](../02-tutorials/03-livestream.mdx), you'll see the livestream screen below:
-
-![LiveStream Live](../assets/cookbook/livestream-watching-live.png)
-
-### LivestreamPlayer
-
-The `LivestreamPlayer` component offers streamlined customization options for each element:
-
-```kotlin
-LivestreamPlayer(
- call = call,
- enablePausing = true,
- onPausedPlayer = { isPaused -> Log.d("livestream", "paused: $isPaused") },
- backstageContent = {
- Text(
- modifier = Modifier.align(Alignment.Center),
- text = "Waiting for live host",
- )
- },
- rendererContent = {
- val livestream by call.state.livestream.collectAsState()
-
- VideoRenderer(
- modifier = Modifier.fillMaxSize(),
- call = call,
- video = livestream,
- )
- },
- overlayContent = {
- val totalParticipants by call.state.totalParticipants.collectAsState()
- val duration by call.state.duration.collectAsState()
-
- Row(
- modifier = Modifier.align(Alignment.Center),
- verticalAlignment = Alignment.CenterVertically,
- ) {
- Text(text = totalParticipants.toString())
-
- Text(text = (duration ?: 0).toString())
- }
- }
-)
-```
-
-As you can see in the example above, you can extensively customize every element of the `LivestreamPlayer`. Each element serves a distinct purpose:
-
-* `enablePausing`: Enables pausing and resuming the livestream video.
-* `onPausedPlayer`: A listener invoked when the livestream video is paused or resumed.
-* `backstageContent`: Content shown while the host hasn't started the livestream yet.
-* `rendererContent`: Renders the video stream originating from the host.
-* `overlayContent`: By default, displays participant counts, livestream duration, and device settings controls. You can overlay anything you want by customizing this Composable parameter.
-
-You can utilize each element depending on your situation and requirements.
-
-`LivestreamPlayer` also supports pausing and resuming the livestream video by tapping the video renderer.
-
-If you click on the video renderer, it will pause or resume like the image below:
-
-![LiveStream Live Pausing](../assets/cookbook/livestream-watching-live-pause.png)
-
-### Build Your Own LivestreamPlayer
-
-You can also create your own custom livestream player without relying on Stream's pre-built UI component.
-
-The key logic to consider includes:
-
-* UI for when the video hasn't loaded yet
-* A message to show when the livestream hasn't started yet
-* What to show when the livestream has stopped
-* How to indicate when there are connection problems
-* Muting the volume
-* Number of participants
-* Duration of the call
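One way to handle all of these cases is to reduce the raw flags you collect from `call.state` into a single renderable state before touching any UI. The sketch below is a hypothetical model — none of these type or function names come from the Stream SDK:

```kotlin
// Hypothetical UI-state model covering the cases listed above.
sealed interface PlayerUiState {
    object VideoLoading : PlayerUiState
    object WaitingForHost : PlayerUiState
    object Ended : PlayerUiState
    object Reconnecting : PlayerUiState
    data class Live(
        val participantCount: Int,
        val durationSeconds: Long,
        val muted: Boolean,
    ) : PlayerUiState
}

// Reduces raw call-state flags into exactly one state to render.
// Order matters: an ended call wins over backstage, which wins over
// connection problems, which win over a still-loading video track.
fun playerUiState(
    backstage: Boolean,
    ended: Boolean,
    connected: Boolean,
    videoAvailable: Boolean,
    participantCount: Int,
    durationSeconds: Long,
    muted: Boolean,
): PlayerUiState = when {
    ended -> PlayerUiState.Ended
    backstage -> PlayerUiState.WaitingForHost
    !connected -> PlayerUiState.Reconnecting
    !videoAvailable -> PlayerUiState.VideoLoading
    else -> PlayerUiState.Live(participantCount, durationSeconds, muted)
}
```

With this in place, the Composable becomes a simple `when` over `PlayerUiState`, and each branch can be previewed and tested in isolation.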
-
-The `call.state` provides the means to monitor whether the livestream is in backstage mode, the number of participants, and more. You can leverage this information to build your own custom livestream player.
-
-```kotlin
-val backstage: Boolean by call.state.backstage.collectAsState()
-val livestream: ParticipantState.Video? by call.state.livestream.collectAsState()
-val totalParticipants: ParticipantCount? by call.state.totalParticipants.collectAsState()
-val duration: kotlin.time.Duration? by call.state.duration.collectAsState()
-```
-
-Now you can implement your own livestream player like the example below:
-
-```kotlin
-if (backstage) {
- Text(text = "Waiting for live host")
-} else {
- VideoRenderer(
- modifier = Modifier.fillMaxSize(),
- call = call,
- video = livestream,
- )
-
- Row(
- modifier = Modifier.align(Alignment.Center),
- verticalAlignment = Alignment.CenterVertically,
- ) {
- Text(text = totalParticipants.toString())
-
- Text(text = (duration ?: 0).toString())
- }
-}
-```
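Note that `(duration ?: 0).toString()` above prints the raw `kotlin.time.Duration` value. For a friendlier label you could format it yourself; the extension below is a hypothetical helper (the `toLabel` name is an assumption, not an SDK API):

```kotlin
import kotlin.time.Duration
import kotlin.time.Duration.Companion.seconds

// Hypothetical helper: formats a kotlin.time.Duration as "mm:ss",
// switching to "h:mm:ss" once the livestream passes the one-hour mark.
fun Duration.toLabel(): String = toComponents { hours, minutes, secs, _ ->
    if (hours > 0) "%d:%02d:%02d".format(hours, minutes, secs)
    else "%02d:%02d".format(minutes, secs)
}

fun main() {
    println(65.seconds.toLabel())   // 01:05
    println(3661.seconds.toLabel()) // 1:01:01
}
```

You would then render `Text(text = duration?.toLabel() ?: "00:00")` in place of the raw `toString()` call.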
-
-In this cookbook tutorial, you have learned how to create an advanced live streaming screen. If you wish to refer to the code, feel free to explore the [GitHub Repository](https://github.com/GetStream/stream-video-android/tree/develop/tutorials/tutorial-livestream).
\ No newline at end of file
diff --git a/docusaurus/sidebars-android.js b/docusaurus/sidebars-android.js
index e64c5532c8..7f3ab345ab 100644
--- a/docusaurus/sidebars-android.js
+++ b/docusaurus/sidebars-android.js
@@ -10,27 +10,6 @@ module.exports = {
},
],
},
- {
- type: "category",
- label: "Tutorials",
- items: [
- {
- type: 'doc',
- id: 'tutorials/video-calling', // document ID
- label: 'Video Call Tutorial', // sidebar label
- },
- {
- type: 'doc',
- id: 'tutorials/audio-room', // document ID
- label: 'Audio Room Tutorial', // sidebar label
- },
- {
- type: 'doc',
- id: 'tutorials/livestream', // document ID
- label: 'Livestream Tutorial', // sidebar label
- },
- ],
- },
{
type: "category",
label: "Core Concepts",