
Test/integration testing #951 (Merged)

Merged 72 commits on Mar 22, 2024. Changes from 66 commits.

Commits (72)
7311ecc
Attempt to fix a sqlite conflict on submission status.
jrobinAV Feb 8, 2024
93918d3
Merge branch 'develop' into fix/increase-sql-repo-connection-timeout
jrobinAV Feb 14, 2024
abb1bce
add status change traces
jrobinAV Mar 4, 2024
f10da19
formatting
jrobinAV Mar 4, 2024
55a6da0
adjust logger location
jrobinAV Mar 4, 2024
6db1222
adjust logger
jrobinAV Mar 4, 2024
2da9db0
Add traces
jrobinAV Mar 4, 2024
b745b9d
Attempt to add traces
jrobinAV Mar 4, 2024
1397964
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 4, 2024
f5f3996
test
jrobinAV Mar 6, 2024
2fe9f84
Revert "test"
jrobinAV Mar 7, 2024
e7098f9
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 7, 2024
fdc1f79
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 8, 2024
cba70b7
enlarge lock scope
jrobinAV Mar 8, 2024
f68f2a9
Remove traces
jrobinAV Mar 8, 2024
b8d5445
retry pattern on sql save method
jrobinAV Mar 11, 2024
16ea1e7
Adding traces
jrobinAV Mar 11, 2024
7c9c486
Adding traces
jrobinAV Mar 11, 2024
5bca172
linter
jrobinAV Mar 11, 2024
786e4e0
linter
jrobinAV Mar 11, 2024
80161cf
update traces
jrobinAV Mar 11, 2024
023ec9a
update traces
jrobinAV Mar 11, 2024
3c64a9a
Add traces around lock acquisition
jrobinAV Mar 11, 2024
bcaf8e1
linter
jrobinAV Mar 11, 2024
d562088
linter
jrobinAV Mar 11, 2024
f7486a0
linter
jrobinAV Mar 11, 2024
e6cbbb8
attempt to isolate lock on job removal
jrobinAV Mar 11, 2024
4f2d7fb
attempt 2 to isolate lock on job removal
jrobinAV Mar 11, 2024
bbb2f82
attempt 3 to isolate lock on job removal
jrobinAV Mar 11, 2024
9b891c5
Add lock around nb_available_workers
jrobinAV Mar 12, 2024
fe43ba8
fix UTs
jrobinAV Mar 12, 2024
b0ef819
minor Cleanning
jrobinAV Mar 12, 2024
7437895
Remove logs
jrobinAV Mar 12, 2024
9cc19d9
Remove logs
jrobinAV Mar 12, 2024
b94d3c3
Cleaning
jrobinAV Mar 12, 2024
3cbcb4d
Revert unecessary change
jrobinAV Mar 12, 2024
8eeb940
Revert sql retry pattern
jrobinAV Mar 12, 2024
8a3b57b
fix: reset everything after each test as well
trgiangdo Mar 12, 2024
a4d07cf
fix: linter error
trgiangdo Mar 12, 2024
67bac4e
minor improvements
trgiangdo Mar 13, 2024
53e29da
Merge branch 'develop' into test/integration-testing
trgiangdo Mar 13, 2024
7430f47
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 13, 2024
f1aa370
fix: move dispatcher.join() back to the stop method
trgiangdo Mar 13, 2024
9ee4c2b
fix: move _nb_available_workers_lock lock to class level
trgiangdo Mar 13, 2024
794f4c4
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 18, 2024
4af9e0f
Revert "Remove logs"
jrobinAV Mar 18, 2024
fb9d621
fix linter
jrobinAV Mar 18, 2024
fab34c6
minor change.
jrobinAV Mar 18, 2024
8d9dd33
Turn logs to debug
jrobinAV Mar 19, 2024
9720572
add msecs to default log format
jrobinAV Mar 19, 2024
e35ded6
replace threading lock with rlock
Mar 19, 2024
e07c719
replace threading lock with rlock
Mar 19, 2024
9993c25
remove msecs to default log format
jrobinAV Mar 19, 2024
b684dd4
add msecs again to default log format
jrobinAV Mar 19, 2024
474d439
add msecs properly to default log format
jrobinAV Mar 19, 2024
598d6a0
improve debug log
jrobinAV Mar 19, 2024
4afc4c3
remove logs
jrobinAV Mar 19, 2024
612b274
remove logs
jrobinAV Mar 19, 2024
3961ce3
Revert "remove logs"
jrobinAV Mar 19, 2024
4a4f2f3
Revert "remove logs"
jrobinAV Mar 19, 2024
c8811a1
attempt to wait 1 second before stopping dispatcher
jrobinAV Mar 20, 2024
9d08841
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 20, 2024
a5a9946
linter
jrobinAV Mar 20, 2024
5d94b45
Remove Rlock and replace by Lock
jrobinAV Mar 21, 2024
95679d2
use is_running instead of is_alive
jrobinAV Mar 21, 2024
701ffac
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 21, 2024
074bc94
Linter
jrobinAV Mar 21, 2024
a88bd9a
Fix wrong merge conflict resolution
jrobinAV Mar 21, 2024
1aa87c7
reduce and clean logs
jrobinAV Mar 21, 2024
4a32956
Removing logs about releasing lock
jrobinAV Mar 21, 2024
41a0a07
Merge branch 'develop' into test/integration-testing
jrobinAV Mar 21, 2024
935adae
Removing useless sleep
jrobinAV Mar 22, 2024
43 changes: 25 additions & 18 deletions taipy/core/_orchestrator/_dispatcher/_job_dispatcher.py
@@ -55,30 +55,37 @@ def stop(self, wait: bool = True, timeout: Optional[float] = None):
wait (bool): If True, the method will wait for the dispatcher to stop.
timeout (Optional[float]): The maximum time to wait. If None, the method will wait indefinitely.
"""
self.stop_wait = wait
self.stop_timeout = timeout
self._STOP_FLAG = True
if wait and self.is_running():
self._logger.debug("Waiting for the dispatcher thread to stop...")
self.join(timeout=timeout)

def run(self):
self._logger.debug("Job dispatcher started.")
while not self._STOP_FLAG:
try:
if self._can_execute():
with self.lock:
if self._STOP_FLAG:
break
if not self._can_execute():
time.sleep(0.1) # We need to sleep to avoid busy waiting.
continue

with self.lock:
self._logger.debug("Acquiring lock to check jobs to run.")
job = None
try:
if not self._STOP_FLAG:
job = self.orchestrator.jobs_to_run.get(block=True, timeout=0.1)
self._execute_job(job)
else:
time.sleep(0.1) # We need to sleep to avoid busy waiting.
except Empty: # In case the last job of the queue has been removed.
pass
except Exception as e:
self._logger.exception(e)
pass
if self.stop_wait:
self._logger.debug("Waiting for the dispatcher thread to stop...")
self.join(timeout=self.stop_timeout)
except Empty: # In case the last job of the queue has been removed.
pass
self._logger.debug("Releasing lock after checking jobs to run.")
if job:
self._logger.debug(f"Got a job to execute {job.id}.")
try:
if not self._STOP_FLAG:
self._execute_job(job)
else:
self.orchestrator.jobs_to_run.put(job)
except Exception as e:
self._logger.exception(e)
time.sleep(1)
self._logger.debug("Job dispatcher stopped.")

@abstractmethod
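The reworked dispatcher loop above can be sketched as a minimal standalone model: the stop flag is checked inside the lock, the queue is polled with a short timeout instead of busy waiting, and a job picked up during shutdown is re-queued. The `Dispatcher` class and job strings below are illustrative, not Taipy's actual classes.

```python
import threading
import time
from queue import Empty, Queue

class Dispatcher(threading.Thread):
    """Minimal model of the reworked loop: check the stop flag, poll the
    queue with a short timeout inside the lock, and sleep only when no
    worker is available, so the thread never busy-waits."""

    def __init__(self, jobs_to_run: Queue):
        super().__init__(daemon=True)
        self.jobs_to_run = jobs_to_run
        self.lock = threading.Lock()
        self.executed = []
        self._stop_flag = False

    def _can_execute(self) -> bool:
        return True  # a real dispatcher would check available workers here

    def run(self):
        while not self._stop_flag:
            if not self._can_execute():
                time.sleep(0.1)  # sleep to avoid busy waiting
                continue
            job = None
            with self.lock:  # one lock scope around the queue check
                try:
                    if not self._stop_flag:
                        job = self.jobs_to_run.get(block=True, timeout=0.1)
                except Empty:  # the last job may already have been taken
                    pass
            if job:
                if not self._stop_flag:
                    self.executed.append(job)  # stands in for _execute_job
                else:
                    self.jobs_to_run.put(job)  # re-queue on shutdown

    def stop(self, wait: bool = True, timeout=None):
        self._stop_flag = True
        if wait and self.is_alive():
            self.join(timeout=timeout)

jobs = Queue()
for i in range(3):
    jobs.put(f"job-{i}")
dispatcher = Dispatcher(jobs)
dispatcher.start()
deadline = time.time() + 5
while len(dispatcher.executed) < 3 and time.time() < deadline:
    time.sleep(0.05)
dispatcher.stop()
print(dispatcher.executed)  # → ['job-0', 'job-1', 'job-2']
```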
22 changes: 12 additions & 10 deletions taipy/core/_orchestrator/_dispatcher/_standalone_job_dispatcher.py
@@ -11,6 +11,7 @@

from concurrent.futures import Executor, ProcessPoolExecutor
from functools import partial
from threading import Lock
from typing import Callable, Optional

from taipy.config._serializer._toml_serializer import _TomlSerializer
@@ -25,6 +26,7 @@
class _StandaloneJobDispatcher(_JobDispatcher):
"""Manages job dispatching (instances of `Job^` class) in an asynchronous way using a ProcessPoolExecutor."""

_nb_available_workers_lock = Lock()
_DEFAULT_MAX_NB_OF_WORKERS = 2

def __init__(self, orchestrator: _AbstractOrchestrator, subproc_initializer: Optional[Callable] = None):
@@ -38,30 +40,30 @@ def __init__(self, orchestrator: _AbstractOrchestrator, subproc_initializer: Optional[Callable] = None):

def _can_execute(self) -> bool:
"""Returns True if the dispatcher have resources to dispatch a job."""
return self._nb_available_workers > 0
with self._nb_available_workers_lock:
self._logger.debug(f"Can execute a job ? {self._nb_available_workers} available workers.")
return self._nb_available_workers > 0

def run(self):
with self._executor:
super().run()
self._logger.debug("Standalone job dispatcher: Pool executor shut down")
self._logger.debug("Standalone job dispatcher: Pool executor shut down.")

def _dispatch(self, job: Job):
"""Dispatches the given `Job^` on an available worker for execution.

Parameters:
job (Job^): The job to submit on an executor with an available worker.
"""

self._nb_available_workers -= 1
with self._nb_available_workers_lock:
self._nb_available_workers -= 1
self._logger.debug(f"Changing nb_available_workers to {self._nb_available_workers} from dispatch method.")
config_as_string = _TomlSerializer()._serialize(Config._applied_config) # type: ignore[attr-defined]
future = self._executor.submit(_TaskFunctionWrapper(job.id, job.task), config_as_string=config_as_string)

future.add_done_callback(self._release_worker) # We must release the worker before updating the job status
# so that the worker is available for another job as soon as possible.
future.add_done_callback(partial(self._update_job_status_from_future, job))

def _release_worker(self, _):
self._nb_available_workers += 1

def _update_job_status_from_future(self, job: Job, ft):
with self._nb_available_workers_lock:
self._nb_available_workers += 1
self._logger.debug(f"Changing nb_available_workers to {self._nb_available_workers} from callback method.")
self._update_job_status(job, ft.result())
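The worker-accounting change above can be sketched like this: a class-level lock guards every read and write of the available-worker counter, and the worker is released from the future's done callback before any further status handling. A `ThreadPoolExecutor` stands in for the real dispatcher's `ProcessPoolExecutor` so the sketch stays self-contained; the class and method names are illustrative.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class WorkerPool:
    """Sketch of the counter-plus-lock pattern: a class-level lock guards
    every read and write of the available-worker counter, and the worker
    is released from the future's done callback."""

    _nb_available_workers_lock = threading.Lock()

    def __init__(self, max_workers: int = 2):
        # ThreadPoolExecutor stands in for the ProcessPoolExecutor used in
        # the real dispatcher, to keep this sketch self-contained.
        self._executor = ThreadPoolExecutor(max_workers=max_workers)
        self._nb_available_workers = max_workers

    def can_execute(self) -> bool:
        with self._nb_available_workers_lock:
            return self._nb_available_workers > 0

    def dispatch(self, fn, *args):
        with self._nb_available_workers_lock:
            self._nb_available_workers -= 1
        future = self._executor.submit(fn, *args)
        # Release the worker before any further status handling, so it is
        # available for another job as soon as possible.
        future.add_done_callback(self._release_worker)
        return future

    def _release_worker(self, _future):
        with self._nb_available_workers_lock:
            self._nb_available_workers += 1

pool = WorkerPool(max_workers=2)
future = pool.dispatch(lambda x: x * 2, 21)
result = future.result()            # blocks until the job finishes
pool._executor.shutdown(wait=True)  # all done callbacks have run after this
print(result, pool.can_execute())   # → 42 True
```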
39 changes: 27 additions & 12 deletions taipy/core/_orchestrator/_orchestrator.py
@@ -36,7 +36,7 @@ class _Orchestrator(_AbstractOrchestrator):
"""

jobs_to_run: Queue = Queue()
blocked_jobs: List = []
blocked_jobs: List[Job] = []

lock = Lock()
__logger = _TaipyLogger._get_logger()
@@ -58,15 +58,15 @@ def submit(
"""Submit the given `Scenario^` or `Sequence^` for an execution.

Parameters:
submittable (Union[SCenario^, Sequence^]): The scenario or sequence to submit for execution.
submittable (Union[Scenario^, Sequence^]): The scenario or sequence to submit for execution.
callbacks: The optional list of functions that should be executed on jobs status change.
force (bool) : Enforce execution of the scenario's or sequence's tasks even if their output data
nodes are cached.
wait (bool): Wait for the orchestrated jobs created from the scenario or sequence submission to be
finished in asynchronous mode.
timeout (Union[float, int]): The optional maximum number of seconds to wait for the jobs to be finished
before returning.
**properties (dict[str, any]): A keyworded variable length list of user additional arguments
**properties (dict[str, any]): A key worded variable length list of user additional arguments
that will be stored within the `Submission^`. It can be accessed via `Submission.properties^`.
Returns:
The created `Submission^` containing the information about the submission.
@@ -80,6 +80,7 @@
jobs: List[Job] = []
tasks = submittable._get_sorted_tasks()
with cls.lock:
cls.__logger.debug(f"Acquiring lock to submit {submission.entity_id}.")
for ts in tasks:
jobs.extend(
cls._lock_dn_output_and_create_job(
@@ -91,8 +92,9 @@
)
for task in ts
)
submission.jobs = jobs # type: ignore
cls._orchestrate_job_to_run_or_block(jobs)
submission.jobs = jobs # type: ignore
cls._orchestrate_job_to_run_or_block(jobs)
cls.__logger.debug(f"Releasing lock after submitting {submission.entity_id}.")
if Config.job_config.is_development:
cls._check_and_execute_jobs_if_development_mode()
elif wait:
@@ -119,7 +121,7 @@
in asynchronous mode.
timeout (Union[float, int]): The optional maximum number of seconds to wait for the job
to be finished before returning.
**properties (dict[str, any]): A keyworded variable length list of user additional arguments
**properties (dict[str, any]): A key worded variable length list of user additional arguments
that will be stored within the `Submission^`. It can be accessed via `Submission.properties^`.
Returns:
The created `Submission^` containing the information about the submission.
@@ -129,16 +131,18 @@
)
submit_id = submission.id
with cls.lock:
cls.__logger.debug(f"Acquiring lock to submit task {task.id}.")
job = cls._lock_dn_output_and_create_job(
task,
submit_id,
submission.entity_id,
itertools.chain([cls._update_submission_status], callbacks or []),
force,
)
jobs = [job]
submission.jobs = jobs # type: ignore
cls._orchestrate_job_to_run_or_block(jobs)
jobs = [job]
submission.jobs = jobs # type: ignore
cls._orchestrate_job_to_run_or_block(jobs)
cls.__logger.debug(f"Releasing lock after submitting task {task.id}.")
jrobinAV (Member, Author) commented: This is now inside a lock context.

if Config.job_config.is_development:
cls._check_and_execute_jobs_if_development_mode()
else:
@@ -223,18 +227,25 @@ def _unlock_edit_on_jobs_outputs(jobs: Union[Job, List[Job], Set[Job]]):
@classmethod
def _on_status_change(cls, job: Job):
if job.is_completed() or job.is_skipped():
cls.__logger.debug(f"{job.id} has been completed or skipped. Unblocking jobs.")
cls.__unblock_jobs()
elif job.is_failed():
cls._fail_subsequent_jobs(job)

@classmethod
def __unblock_jobs(cls):
for job in cls.blocked_jobs:
if not cls._is_blocked(job):
with cls.lock:
with cls.lock:
cls.__logger.debug("Acquiring lock to unblock jobs.")
for job in cls.blocked_jobs:
cls.__logger.debug(f" Unblocking {job.id} ?")
if not cls._is_blocked(job):
cls.__logger.debug(f" Unblocking {job.id} !")
job.pending()
cls.__logger.debug(f" Removing {job.id} from the blocked list.")
cls.__remove_blocked_job(job)
cls.__logger.debug(f" Adding {job.id} to the list of jobs to run.")
cls.jobs_to_run.put(job)
cls.__logger.debug("Releasing lock after unblocking jobs.")

@classmethod
def __remove_blocked_job(cls, job):
Expand All @@ -253,12 +264,14 @@ def cancel_job(cls, job: Job):
cls.__logger.info(f"{job.id} has already failed and cannot be canceled.")
else:
with cls.lock:
cls.__logger.debug(f"Acquiring lock to cancel job {job.id}.")
to_cancel_or_abandon_jobs = {job}
to_cancel_or_abandon_jobs.update(cls.__find_subsequent_jobs(job.submit_id, set(job.task.output.keys())))
cls.__remove_blocked_jobs(to_cancel_or_abandon_jobs)
cls.__remove_jobs_to_run(to_cancel_or_abandon_jobs)
cls._cancel_jobs(job.id, to_cancel_or_abandon_jobs)
cls._unlock_edit_on_jobs_outputs(to_cancel_or_abandon_jobs)
cls.__logger.debug(f"Releasing lock after canceling {job.id}.")

@classmethod
def __find_subsequent_jobs(cls, submit_id, output_dn_config_ids: Set) -> Set[Job]:
@@ -292,6 +305,7 @@ def __remove_jobs_to_run(cls, jobs):
@classmethod
def _fail_subsequent_jobs(cls, failed_job: Job):
with cls.lock:
cls.__logger.debug("Acquiring lock to fail subsequent jobs.")
to_fail_or_abandon_jobs = set()
to_fail_or_abandon_jobs.update(
cls.__find_subsequent_jobs(failed_job.submit_id, set(failed_job.task.output.keys()))
Expand All @@ -302,6 +316,7 @@ def _fail_subsequent_jobs(cls, failed_job: Job):
cls.__remove_blocked_jobs(to_fail_or_abandon_jobs)
cls.__remove_jobs_to_run(to_fail_or_abandon_jobs)
cls._unlock_edit_on_jobs_outputs(to_fail_or_abandon_jobs)
cls.__logger.debug("Releasing lock after fail subsequent jobs.")

@classmethod
def _cancel_jobs(cls, job_id_to_cancel: JobId, jobs: Set[Job]):
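The enlarged lock scope in `__unblock_jobs` — one `with cls.lock:` around the whole pass instead of one per job — can be illustrated with a minimal sketch. The job names and the `is_blocked` predicate are made up for the example.

```python
import threading
from queue import Queue

lock = threading.Lock()
blocked_jobs = ["job-a", "job-b", "job-c"]
jobs_to_run: Queue = Queue()

def is_blocked(job: str) -> bool:
    # Stand-in predicate: pretend one job still waits on an input data node.
    return job == "job-b"

def unblock_jobs():
    # One lock acquisition covers the whole pass over blocked_jobs, so no
    # other thread can add or remove a job while the list is re-evaluated.
    with lock:
        for job in list(blocked_jobs):  # copy: we mutate while iterating
            if not is_blocked(job):
                blocked_jobs.remove(job)
                jobs_to_run.put(job)

unblock_jobs()
print(blocked_jobs, jobs_to_run.qsize())  # → ['job-b'] 2
```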
20 changes: 17 additions & 3 deletions taipy/core/_repository/_sql_repository.py
@@ -11,18 +11,22 @@

import json
import pathlib
from sqlite3 import DatabaseError
from typing import Any, Dict, Iterable, List, Optional, Type, Union

from sqlalchemy.dialects import sqlite
from sqlalchemy.exc import NoResultFound

from ...logger._taipy_logger import _TaipyLogger
from .._repository._abstract_repository import _AbstractRepository
from ..common.typing import Converter, Entity, ModelType
from ..exceptions import ModelNotFound
from .db._sql_connection import _SQLConnection


class _SQLRepository(_AbstractRepository[ModelType, Entity]):
_logger = _TaipyLogger._get_logger()

def __init__(self, model_type: Type[ModelType], converter: Type[Converter]):
"""
Holds common methods to be used and extended when the need for saving
@@ -47,9 +51,19 @@ def __init__(self, model_type: Type[ModelType], converter: Type[Converter]):
def _save(self, entity: Entity):
obj = self.converter._entity_to_model(entity)
if self._exists(entity.id): # type: ignore
self._update_entry(obj)
return
self.__insert_model(obj)
try:
self._update_entry(obj)
return
except DatabaseError as e:
self._logger.error(f"Error while updating {entity.id} in {self.table.name}. ") # type: ignore
self._logger.error(f"Error : {e}")
raise e
try:
self.__insert_model(obj)
except DatabaseError as e:
self._logger.error(f"Error while inserting {entity.id} into {self.table.name}. ") # type: ignore
self._logger.error(f"Error : {e}")
raise e

def _exists(self, entity_id: str):
query = self.table.select().filter_by(id=entity_id)
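The reworked `_save` above — update when the row exists, insert otherwise, and log a `DatabaseError` before re-raising — can be sketched against plain `sqlite3`. The real repository goes through SQLAlchemy, so the table name and raw SQL calls here are illustrative.

```python
import logging
import sqlite3

logger = logging.getLogger("repo")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entity (id TEXT PRIMARY KEY, payload TEXT)")

def exists(entity_id: str) -> bool:
    row = conn.execute("SELECT 1 FROM entity WHERE id = ?", (entity_id,)).fetchone()
    return row is not None

def save(entity_id: str, payload: str):
    """Update when the row exists, insert otherwise; log a DatabaseError
    before re-raising instead of letting it escape silently."""
    if exists(entity_id):
        try:
            conn.execute("UPDATE entity SET payload = ? WHERE id = ?", (payload, entity_id))
            return
        except sqlite3.DatabaseError as e:
            logger.error(f"Error while updating {entity_id} in entity.")
            logger.error(f"Error : {e}")
            raise
    try:
        conn.execute("INSERT INTO entity (id, payload) VALUES (?, ?)", (entity_id, payload))
    except sqlite3.DatabaseError as e:
        logger.error(f"Error while inserting {entity_id} into entity.")
        logger.error(f"Error : {e}")
        raise

save("e1", "v1")  # first call takes the insert branch
save("e1", "v2")  # second call takes the update branch
print(conn.execute("SELECT payload FROM entity WHERE id = 'e1'").fetchone()[0])  # → v2
```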
2 changes: 1 addition & 1 deletion taipy/core/_repository/db/_sql_connection.py
@@ -84,4 +84,4 @@ def _build_connection() -> Connection:

@lru_cache
def __build_connection(db_location: str):
return sqlite3.connect(db_location, check_same_thread=False)
return sqlite3.connect(db_location, check_same_thread=False, timeout=20)
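The one-line connection fix can be tried in isolation: `timeout=20` is SQLite's busy timeout in seconds, i.e. how long a connection retries while another connection holds the write lock before raising "database is locked". The cached builder below mirrors the `lru_cache` pattern in the diff; the function name is illustrative.

```python
import sqlite3
from functools import lru_cache

@lru_cache
def build_connection(db_location: str) -> sqlite3.Connection:
    # timeout=20 makes SQLite retry for up to 20 seconds when another
    # connection holds the write lock, instead of failing immediately
    # with "database is locked".
    return sqlite3.connect(db_location, check_same_thread=False, timeout=20)

c1 = build_connection(":memory:")
c2 = build_connection(":memory:")
print(c1 is c2)  # → True: lru_cache hands back one connection per location
```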
5 changes: 3 additions & 2 deletions taipy/core/job/job.py
@@ -33,6 +33,7 @@
def _run_callbacks(fn):
def __run_callbacks(job):
fn(job)
_TaipyLogger._get_logger().debug(f"{job.id} status has changed to {job.status}.")
for fct in job._subscribers:
fct(job)

@@ -200,6 +201,7 @@ def failed(self):
def completed(self):
"""Set the status to _completed_ and notify subscribers."""
self.status = Status.COMPLETED
self.__logger.info(f"job {self.id} is completed.")

@_run_callbacks
def skipped(self):
@@ -287,7 +289,7 @@ def is_finished(self) -> bool:
return self.is_completed() or self.is_failed() or self.is_canceled() or self.is_skipped() or self.is_abandoned()

def _is_finished(self) -> bool:
"""Indicate if the job is finished. This function will not triggered the persistency feature like is_finished().
"""Indicate if the job is finished. This function will not trigger the persistence feature like is_finished().

Returns:
True if the job is finished.
@@ -322,7 +324,6 @@ def update_status(self, exceptions):
self.__logger.error(st)
else:
self.completed()
self.__logger.info(f"job {self.id} is completed.")

def __hash__(self):
return hash(self.id)
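The `_run_callbacks` decorator that the new debug line hooks into works roughly like this sketch: apply the status change first, then notify every subscriber. The simplified `Job` class and its attribute names are illustrative stand-ins for Taipy's.

```python
def run_callbacks(fn):
    """Sketch of the _run_callbacks decorator: apply the status change
    first, then notify every subscriber of the job (the spot where the
    PR adds its debug log line)."""
    def wrapper(job):
        fn(job)
        for callback in job.subscribers:
            callback(job)
    return wrapper

class Job:
    # Simplified stand-in for Taipy's Job; attribute names are illustrative.
    def __init__(self):
        self.status = "PENDING"
        self.subscribers = []

    @run_callbacks
    def completed(self):
        self.status = "COMPLETED"

seen = []
job = Job()
job.subscribers.append(lambda j: seen.append(j.status))
job.completed()
print(seen)  # → ['COMPLETED']
```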
11 changes: 10 additions & 1 deletion taipy/core/submission/submission.py
@@ -14,6 +14,8 @@
from datetime import datetime
from typing import Any, Dict, List, Optional, Set, Union

from taipy.logger._taipy_logger import _TaipyLogger

from .._entity._entity import _Entity
from .._entity._labeled import _Labeled
from .._entity._properties import _Properties
@@ -43,6 +45,7 @@ class Submission(_Entity, _Labeled):
_MANAGER_NAME = "submission"
__SEPARATOR = "_"
lock = threading.Lock()
__logger = _TaipyLogger._get_logger()

def __init__(
self,
@@ -201,7 +204,10 @@ def _update_submission_status(self, job: Job):
job_status = job.status
if job_status == Status.FAILED:
submission._submission_status = SubmissionStatus.FAILED
_SubmissionManagerFactory._build_manager()._set(submission)
submission_manager._set(submission)
self.__logger.debug(
f"{job.id} status is {job_status}. Submission status set to " f"{submission._submission_status}"
)
return
if job_status == Status.CANCELED:
submission._is_canceled = True
@@ -242,6 +248,9 @@
submission.submission_status = SubmissionStatus.COMPLETED # type: ignore
else:
submission.submission_status = SubmissionStatus.UNDEFINED # type: ignore
self.__logger.debug(
f"{job.id} status is {job_status}. Submission status set to " f"{submission._submission_status}"
)

def is_finished(self) -> bool:
"""Indicate if the submission is finished.
4 changes: 2 additions & 2 deletions taipy/logger/_taipy_logger.py
@@ -34,7 +34,7 @@ def _get_logger(cls):
cls.__logger.setLevel(logging.INFO)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.INFO)
formatter = logging.Formatter("[%(asctime)s][%(name)s][%(levelname)s] %(message)s", "%Y-%m-%d %H:%M:%S")
ch.setFormatter(formatter)
f = logging.Formatter("[%(asctime)s.%(msecs)03d][%(name)s][%(levelname)s] %(message)s", "%Y-%m-%d %H:%M:%S")
jrobinAV (Member, Author) commented: Add milliseconds to default logging format.

trgiangdo (Member) commented on Mar 22, 2024: Will this cause too much noise? It affects the whole application, so I think we need a third opinion about this.

jrobinAV (Member, Author) replied: My opinion is that in a multithreading/multiprocessing context, having milliseconds in the logs is more than useful to understand what's happening. This is not only true for us as developers but also for our users. However, I agree we should discuss it during the daily to have more opinions on that.

jrobinAV (Member, Author) replied: @FlorianJacta @FabienLelaquais Is it ok for you to have a default log formatting with milliseconds?

ch.setFormatter(f)
cls.__logger.addHandler(ch)
return cls.__logger
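The formatter change can be tried in isolation: `%(msecs)03d` appends zero-padded milliseconds after the `asctime` rendered by the date format, giving e.g. `[2024-03-22 10:15:42.123][taipy_demo][INFO] ...`. The logger name below is illustrative.

```python
import logging
import sys

logger = logging.getLogger("taipy_demo")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(
    logging.Formatter(
        # %(msecs)03d appends zero-padded milliseconds to the asctime
        # rendered with the date format on the next line.
        "[%(asctime)s.%(msecs)03d][%(name)s][%(levelname)s] %(message)s",
        "%Y-%m-%d %H:%M:%S",
    )
)
logger.addHandler(handler)
logger.info("dispatcher started")
# prints e.g. [2024-03-22 10:15:42.123][taipy_demo][INFO] dispatcher started
```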