From 1d1ed1307fa387c613ea1cfa167dcfd7c5247a7b Mon Sep 17 00:00:00 2001 From: Kleinbard Date: Wed, 5 Jun 2024 11:26:07 -0400 Subject: [PATCH 1/2] darkmode fixes --- contents/ai_for_good/ai_for_good.html | 1384 +++++++++ contents/benchmarking/benchmarking.html | 1958 ++++++++++++ contents/conclusion/conclusion.html | 1076 +++++++ .../data_engineering/data_engineering.html | 1726 +++++++++++ contents/dl_primer/dl_primer.html | 1463 +++++++++ .../dsp_spectral_features_block.html | 1617 ++++++++++ contents/efficient_ai/efficient_ai.html | 1400 +++++++++ contents/frameworks/frameworks.html | 2154 +++++++++++++ contents/generative_ai/generative_ai.html | 1046 +++++++ contents/hw_acceleration/hw_acceleration.html | 2477 +++++++++++++++ .../image_classification.html | 1689 +++++++++++ contents/introduction/introduction.html | 1070 +++++++ contents/kws_feature_eng/kws_feature_eng.html | 1266 ++++++++ contents/kws_nicla/kws_nicla.html | 1502 ++++++++++ contents/ml_systems/ml_systems.html | 1431 +++++++++ .../motion_classify_ad.html | 1622 ++++++++++ contents/niclav_sys/niclav_sys.html | 1441 +++++++++ .../object_detection_fomo.html | 1467 +++++++++ .../ondevice_learning/ondevice_learning.html | 2122 +++++++++++++ contents/ops/ops.html | 2180 ++++++++++++++ contents/optimizations/optimizations.html | 2664 +++++++++++++++++ .../privacy_security/privacy_security.html | 2412 +++++++++++++++ contents/responsible_ai/responsible_ai.html | 1716 +++++++++++ contents/robust_ai/robust_ai.html | 2624 ++++++++++++++++ contents/sustainable_ai/sustainable_ai.html | 1947 ++++++++++++ contents/training/training.html | 2350 +++++++++++++++ contents/workflow/workflow.html | 1277 ++++++++ references.html | 1038 +++++++ scripts/ai_menu/dist/142.bundle.js | 1 - scripts/ai_menu/dist/384.bundle.js | 1 - scripts/ai_menu/dist/761.bundle.js | 1 - scripts/ai_menu/dist/bundle.js | 1 - 32 files changed, 48119 insertions(+), 4 deletions(-) create mode 100644 contents/ai_for_good/ai_for_good.html create 
mode 100644 contents/benchmarking/benchmarking.html create mode 100644 contents/conclusion/conclusion.html create mode 100644 contents/data_engineering/data_engineering.html create mode 100644 contents/dl_primer/dl_primer.html create mode 100644 contents/dsp_spectral_features_block/dsp_spectral_features_block.html create mode 100644 contents/efficient_ai/efficient_ai.html create mode 100644 contents/frameworks/frameworks.html create mode 100644 contents/generative_ai/generative_ai.html create mode 100644 contents/hw_acceleration/hw_acceleration.html create mode 100644 contents/image_classification/image_classification.html create mode 100644 contents/introduction/introduction.html create mode 100644 contents/kws_feature_eng/kws_feature_eng.html create mode 100644 contents/kws_nicla/kws_nicla.html create mode 100644 contents/ml_systems/ml_systems.html create mode 100644 contents/motion_classify_ad/motion_classify_ad.html create mode 100644 contents/niclav_sys/niclav_sys.html create mode 100644 contents/object_detection_fomo/object_detection_fomo.html create mode 100644 contents/ondevice_learning/ondevice_learning.html create mode 100644 contents/ops/ops.html create mode 100644 contents/optimizations/optimizations.html create mode 100644 contents/privacy_security/privacy_security.html create mode 100644 contents/responsible_ai/responsible_ai.html create mode 100644 contents/robust_ai/robust_ai.html create mode 100644 contents/sustainable_ai/sustainable_ai.html create mode 100644 contents/training/training.html create mode 100644 contents/workflow/workflow.html create mode 100644 references.html delete mode 100644 scripts/ai_menu/dist/142.bundle.js delete mode 100644 scripts/ai_menu/dist/384.bundle.js delete mode 100644 scripts/ai_menu/dist/761.bundle.js delete mode 100644 scripts/ai_menu/dist/bundle.js diff --git a/contents/ai_for_good/ai_for_good.html b/contents/ai_for_good/ai_for_good.html new file mode 100644 index 00000000..c130d36e --- /dev/null +++ 
b/contents/ai_for_good/ai_for_good.html @@ -0,0 +1,1384 @@ + + + + + + + + + +Machine Learning Systems - 17  AI for Good + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

17  AI for Good

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Illustration of planet Earth wrapped in shimmering neural networks, with diverse humans and AI robots working together on various projects like planting trees, cleaning the oceans, and developing sustainable energy solutions. The positive and hopeful atmosphere represents a united effort to create a better future.
+
+
+

By aligning AI progress with human values, goals, and ethics, the ultimate goal of ML systems (at any scale) is to be a technology that reflects human principles and aspirations. Initiatives under “AI for Good” promote the development of AI to tackle the UN Sustainable Development Goals (SDGs), from deploying embedded AI technologies to expanding access to AI education. While it is now clear that AI will be an instrumental part of progress towards the SDGs, its adoption and impact are limited by the immense power consumption, strong connectivity requirements, and high costs of cloud-based deployments. TinyML can circumvent many of these issues by allowing ML models to run on low-cost, low-power microcontrollers.

+
+

The “AI for Good” movement is critical in cultivating a future where an AI-empowered society is more just, sustainable, and prosperous for all humanity.

+
+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand how TinyML can help advance the UN Sustainable Development Goals in health, agriculture, education, and the environment.

  • Recognize the versatility of TinyML for enabling localized, low-cost solutions tailored to community needs.

  • Consider the challenges of adopting TinyML globally, such as limited training, data constraints, accessibility, and cultural barriers.

  • Appreciate the importance of collaborative, ethical approaches to develop and deploy TinyML to serve local contexts best.

  • Recognize the potential of TinyML, if responsibly implemented, to promote equity and empower underserved populations worldwide.

+
+
+
+

17.1 Introduction

+

To give ourselves a framework around which to think about AI for social good, we will be following the UN Sustainable Development Goals (SDGs). The UN SDGs are a collection of 17 global goals, shown in Figure 17.1, adopted by the United Nations in 2015 as part of the 2030 Agenda for Sustainable Development. The SDGs address global challenges related to poverty, inequality, climate change, environmental degradation, prosperity, and peace and justice.

+

What is special about the SDGs is that they are a collection of interlinked objectives designed to serve as a “shared blueprint for peace and prosperity for people and the planet, now and into the future.” The SDGs emphasize sustainable development’s interconnected environmental, social, and economic aspects by putting sustainability at their center.

+

A recent study (Vinuesa et al. 2020) highlights the influence of AI on all aspects of sustainable development, particularly on the 17 Sustainable Development Goals (SDGs) and 169 targets internationally defined in the 2030 Agenda for Sustainable Development. The study shows that AI can act as an enabler for 134 targets through technological improvements, but it also highlights the challenges of AI on some targets. The study shows that AI can benefit 67 targets when considering AI and societal outcomes. Still, it also warns about the issues related to the implementation of AI in countries with different cultural values and wealth.

+
+Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. “The Role of Artificial Intelligence in Achieving the Sustainable Development Goals.” Nat. Commun. 11 (1): 1–10. https://doi.org/10.1038/s41467-019-14108-y. +
+
+
+ +
+
+Figure 17.1: United Nations Sustainable Development Goals (SDG). Credit: United Nations. +
+
+
+

In our book’s context, TinyML could help advance at least some of these SDGs:

+
    +
  • Goal 1 - No Poverty: TinyML could help provide low-cost solutions for crop monitoring to improve agricultural yields in developing countries.

  • Goal 2 - Zero Hunger: TinyML could enable localized and precise crop health monitoring and disease detection to reduce crop losses.

  • Goal 3 - Good Health and Wellbeing: TinyML could help enable low-cost medical diagnosis tools for early detection and prevention of diseases in remote areas.

  • Goal 6 - Clean Water and Sanitation: TinyML could monitor water quality and detect contaminants to ensure access to clean drinking water.

  • Goal 7 - Affordable and Clean Energy: TinyML could optimize energy consumption and enable predictive maintenance for renewable energy infrastructure.

  • Goal 11 - Sustainable Cities and Communities: TinyML could enable intelligent traffic management, air quality monitoring, and optimized resource management in smart cities.

  • Goal 13 - Climate Action: TinyML could monitor deforestation and track reforestation efforts. It could also help predict extreme weather events.

+

The portability, lower power requirements, and real-time analytics enabled by TinyML make it well-suited for addressing several sustainability challenges developing regions face. The widespread deployment of these low-power solutions has the potential to provide localized and cost-effective monitoring to help achieve some of the UN’s SDGs. In the rest of the sections, we will dive into how TinyML is useful across many sectors that can address the UN SDGs.

+
+
+

17.2 Agriculture

+

Agriculture is essential to achieving many of the UN Sustainable Development Goals, including eradicating hunger and malnutrition, promoting economic growth, and using natural resources sustainably. TinyML can be a valuable tool to help advance sustainable agriculture, especially for smallholder farmers in developing regions.

+

TinyML solutions can provide real-time monitoring and data analytics for crop health and growing conditions - all without reliance on connectivity infrastructure. For example, low-cost camera modules connected to microcontrollers can monitor for disease, pests, and nutritional deficiencies. TinyML algorithms can analyze the images to detect issues early before they spread and damage yields. Precision monitoring can optimize inputs like water, fertilizer, and pesticides - improving efficiency and sustainability.

+

Other sensors, such as GPS units and accelerometers, can track microclimate conditions, soil humidity, and livestock wellbeing. Local real-time data helps farmers respond and adapt better to changes in the field. TinyML analytics at the edge avoids lag, network disruptions, and the high data costs of cloud-based systems. Localized systems allow customization of specific crops, diseases, and regional issues.

+

Widespread TinyML applications can help digitize smallholder farms to increase productivity, incomes, and resilience. The low cost of hardware and minimal connectivity requirements make solutions accessible. Projects across the developing world have shown the benefits:

+
    +
  • Microsoft’s FarmBeats project is an end-to-end approach to data-driven farming that combines low-cost sensors, drones, and vision and machine learning algorithms. It tackles the limited adoption of technology in farming, which stems from unreliable power and internet connectivity on farms and farmers’ limited familiarity with technology. By coupling fused sensor data with farmers’ knowledge and intuition about their farms, the project builds artificial intelligence (AI) or machine learning (ML) models that turn data into actionable insights, aiming to increase farm productivity and reduce costs.

  • In Sub-Saharan Africa, off-the-shelf cameras and edge AI have cut cassava disease losses from 40% to 5%, protecting a staple crop (Ramcharan et al. 2017).

  • In Indonesia, sensors monitor microclimates across rice paddies, optimizing water usage even with erratic rains (Tirtalistyani, Murtiningrum, and Kanwar 2022).

+
+Ramcharan, Amanda, Kelsee Baranowski, Peter McCloskey, Babuali Ahmed, James Legg, and David P. Hughes. 2017. “Deep Learning for Image-Based Cassava Disease Detection.” Front. Plant Sci. 8 (October): 1852. https://doi.org/10.3389/fpls.2017.01852. +
+Tirtalistyani, Rose, Murtiningrum Murtiningrum, and Rameshwar S. Kanwar. 2022. “Indonesia Rice Irrigation System: Time for Innovation.” Sustainability 14 (19): 12477. https://doi.org/10.3390/su141912477. +

With greater investment and integration into rural advisory services, TinyML could transform small-scale agriculture and improve farmers’ livelihoods worldwide. The technology effectively brings the benefits of precision agriculture to disconnected regions most in need.

+
+

Exercise 17.1 (Crop Yield Modeling)  

+
+
+ +
+
+

This exercise teaches you how to predict crop yields in Nepal by combining satellite data (Sentinel-2), climate data (WorldClim), and on-the-ground measurements. You’ll use a machine learning algorithm called XGBoost Regressor to build a model, split the data for training and testing, and fine-tune the model parameters for the best performance. This notebook lays the foundation for implementing TinyML in the agriculture domain. Consider how you could adapt this process for smaller datasets, fewer features, and simplified models to make it compatible with the power and memory constraints of TinyML devices.
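A minimal sketch of that train/test workflow on synthetic data, using scikit-learn's GradientBoostingRegressor as a dependency-light stand-in for the notebook's XGBoost Regressor (the feature names here are illustrative placeholders for the Sentinel-2 and WorldClim variables, not the notebook's actual inputs):

```python
# Hedged sketch, not the notebook's actual pipeline: ndvi, rainfall, and
# temp are invented stand-ins for satellite and climate features, and
# GradientBoostingRegressor stands in for the XGBoost Regressor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
ndvi = rng.uniform(0.2, 0.9, n)      # vegetation index (synthetic)
rainfall = rng.uniform(100, 600, n)  # seasonal rainfall, mm (synthetic)
temp = rng.uniform(15, 30, n)        # mean temperature, deg C (synthetic)
# Synthetic yield: rises with greenness and rain, falls with heat.
y = 2.0 * ndvi + 0.004 * rainfall - 0.02 * temp + rng.normal(0, 0.1, n)
X = np.column_stack([ndvi, rainfall, temp])

# Split the data for training and testing, then fit the model.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=100, max_depth=3,
                                  random_state=0)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # held-out R^2
```

Shrinking such a model (fewer trees, shallower depth, fewer features) is exactly the kind of simplification the exercise asks you to consider for TinyML targets.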

+

+
+
+
+
+
+

17.3 Healthcare

+
+

17.3.1 Expanding Access

+

Universal health coverage and quality care remain out of reach for millions worldwide. Many regions lack the medical professionals needed to provide basic diagnosis and treatment. Additionally, healthcare infrastructure, including clinics, hospitals, and the utilities needed to power complex equipment, is often inadequate. These gaps disproportionately impact marginalized communities, exacerbating health disparities.

+

TinyML offers a promising technological solution to help expand Access to quality healthcare globally. TinyML refers to the ability to deploy machine learning algorithms on microcontrollers, tiny chips with processing power, memory, and connectivity. TinyML enables real-time data analysis and intelligence in low-powered, compact devices.

+

This creates opportunities for transformative medical tools that are portable, affordable, and accessible. TinyML software and hardware can be optimized to run even in resource-constrained environments. For example, a TinyML system could analyze symptoms or make diagnostic predictions using minimal computing power, no continuous internet connectivity, and a battery or solar power source. These capabilities can bring medical-grade screening and monitoring directly to underserved patients.

+
+
+

17.3.2 Early Diagnosis

+

Early detection of diseases is one major application. Small sensors paired with TinyML software can identify symptoms before conditions escalate or visible signs appear. For instance, cough monitors with embedded machine learning can pick up on acoustic patterns indicative of respiratory illness, malaria, or tuberculosis. Detecting diseases at onset improves outcomes and reduces healthcare costs.

+

Consider, for example, TinyML-based monitoring of pneumonia in children. Pneumonia is a leading cause of death for children under 5, and detecting it early is critical. A startup called Respira Labs has developed a low-cost wearable audio sensor that uses TinyML algorithms to analyze coughs and identify symptoms of respiratory illnesses like pneumonia. The device contains a microphone sensor and microcontroller that runs a neural network model trained to classify respiratory sounds. It can identify features like wheezing, crackling, and stridor that may indicate pneumonia. The device is designed to be highly accessible - it has a simple strap, requires no battery or charging, and results are provided through LED lights and audio cues.

+

Another example involves researchers at UNIFEI in Brazil who have developed a low-cost device that leverages TinyML to monitor heart rhythms. Their innovative solution addresses a critical need - atrial fibrillation and other heart rhythm abnormalities often go undiagnosed due to the prohibitive cost and limited availability of screening tools. The device overcomes these barriers through its ingenious design. It uses an off-the-shelf microcontroller that costs only a few dollars, along with a basic pulse sensor. By minimizing complexity, the device becomes accessible to under-resourced populations. The TinyML algorithm running locally on the microcontroller analyzes pulse data in real-time to detect irregular heart rhythms. This life-saving heart monitoring device demonstrates how TinyML enables powerful AI capabilities to be deployed in cost-effective, user-friendly designs.
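A highly simplified sketch of the underlying idea: flag a rhythm as irregular when its inter-beat intervals vary too much. The 15% coefficient-of-variation threshold is purely illustrative; the UNIFEI device runs a trained TinyML model, not this rule:

```python
# Simplified rule-of-thumb, not the UNIFEI device's actual algorithm:
# an irregular rhythm shows high variability in the time between beats.
def is_irregular(beat_times_s, cv_threshold=0.15):
    # Inter-beat intervals from successive beat timestamps (seconds).
    intervals = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    cv = (var ** 0.5) / mean  # coefficient of variation of the intervals
    return cv > cv_threshold

regular = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]  # steady 75 bpm beat times (s)
erratic = [0.0, 0.5, 1.6, 2.0, 3.3, 3.7]  # uneven beat times (s)
```

The same logic fits comfortably on a microcontroller, which is why pulse-interval analysis is a natural first step before running a learned classifier.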

+

TinyML’s versatility also shows promise for tackling infectious diseases. Researchers have proposed applying TinyML to identify malaria-spreading mosquitoes by their wingbeat sounds. When equipped with microphones, small microcontrollers can run advanced audio classification models to determine mosquito species. This compact, low-power solution produces results in real time, suitable for remote field use. By making entomology analytics affordable and accessible, TinyML could revolutionize monitoring insects that endanger human health. TinyML is expanding healthcare access for vulnerable communities from heart disease to malaria.

+
+
+

17.3.3 Infectious Disease Control

+

Mosquitoes remain the most deadly disease vector worldwide, transmitting illnesses that infect over one billion people annually (“Vector-Borne Diseases,” n.d.). Diseases like malaria, dengue, and Zika are especially prevalent in resource-limited regions lacking robust infrastructure for mosquito control. Monitoring local mosquito populations is essential to prevent outbreaks and properly target interventions.

+
+“Vector-Borne Diseases.” n.d. https://www.who.int/news-room/fact-sheets/detail/vector-borne-diseases. +

Traditional monitoring methods are expensive, labor-intensive, and difficult to deploy remotely. The proposed TinyML solution aims to overcome these barriers. Small microphones coupled with machine learning algorithms can classify mosquitoes by species based on minute differences in wing oscillations. The TinyML software runs efficiently on low-cost microcontrollers, eliminating the need for continuous connectivity.
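A minimal sketch of the wingbeat-classification idea, using synthetic tones at illustrative (not measured) wingbeat frequencies of 400 Hz and 600 Hz and a dominant-frequency feature from an FFT:

```python
# Toy version of the concept: two synthetic "species" separated by the
# dominant frequency of their wingbeat sound. Frequencies and the 500 Hz
# threshold are invented for illustration.
import numpy as np

FS = 8000  # sample rate in Hz, plausible for a cheap MEMS microphone

def wingbeat(freq_hz, seconds=0.5):
    """Simulate a noisy wingbeat recording: sine tone plus noise."""
    t = np.arange(int(FS * seconds)) / FS
    return np.sin(2 * np.pi * freq_hz * t) + 0.1 * np.random.randn(t.size)

def dominant_freq(signal):
    """Return the frequency (Hz) with the largest FFT magnitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    return float(freqs[np.argmax(spectrum)])

def classify(signal):
    # Threshold halfway between the two synthetic species' rates.
    return "species_a" if dominant_freq(signal) < 500.0 else "species_b"
```

Real deployments replace the single FFT feature with a trained audio classifier, but the pipeline shape (sample, extract spectral features, classify on-device) is the same.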

+

A collaborative research team from the University of Khartoum and the ICTP is exploring an innovative solution using TinyML. In a recent paper, they presented a low-cost device that can identify disease-spreading mosquito species through their wing beat sounds (Altayeb, Zennaro, and Rovai 2022).

+
+Altayeb, Moez, Marco Zennaro, and Marcelo Rovai. 2022. “Classifying Mosquito Wingbeat Sound Using TinyML.” In Proceedings of the 2022 ACM Conference on Information Technology for Social Good, 132–37. ACM. https://doi.org/10.1145/3524458.3547258. +

This portable, self-contained system shows great promise for entomology. The researchers suggest it could revolutionize insect monitoring and vector control strategies in remote areas. TinyML could significantly bolster malaria eradication efforts by providing cheaper, easier mosquito analytics. Its versatility and minimal power needs make it ideal for field use in isolated, off-grid regions with scarce resources but high disease burden.

+
+
+

17.3.4 TinyML Design Contest in Healthcare

+

The first TinyML contest in healthcare, TDC’22 (Jia et al. 2023), was held in 2022 to motivate participating teams to design AI/ML algorithms for detecting life-threatening ventricular arrhythmias (VAs) and deploy them on Implantable Cardioverter Defibrillators (ICDs). VAs are the main cause of sudden cardiac death (SCD). People at high risk of SCD rely on the ICD to deliver proper and timely defibrillation treatment (i.e., shocking the heart back into normal rhythm) when experiencing life-threatening VAs.

+
+Jia, Zhenge, Dawei Li, Xiaowei Xu, Na Li, Feng Hong, Lichuan Ping, and Yiyu Shi. 2023. “Life-Threatening Ventricular Arrhythmia Detection Challenge in Implantable Cardioverter-Defibrillators.” Nature Machine Intelligence 5 (5): 554–55. https://doi.org/10.1038/s42256-023-00659-9. +

An on-device algorithm for early and timely life-threatening VA detection will increase the chances of survival. The proposed AI/ML algorithm needed to be deployed and executed on an extremely low-power and resource-constrained microcontroller (MCU) (a $10 development board with an ARM Cortex-M4 core at 80 MHz, 256 kB of flash memory and 64 kB of SRAM). The submitted designs were evaluated by metrics measured on the MCU: (1) detection performance, (2) inference latency, and (3) the memory footprint of the AI/ML program.
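To make these resource constraints concrete, here is a back-of-envelope sketch (with purely illustrative layer widths, not any contest design) of checking whether a small dense network's int8 weights would fit in the board's 64 kB of SRAM:

```python
# Back-of-envelope memory estimate for a small fully connected network.
# Layer widths below are hypothetical, chosen only to illustrate the
# arithmetic designers must do against the 64 kB SRAM budget.
def dense_params(sizes):
    # weights (in * out) plus biases (out) for each consecutive layer pair
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

layers = [128, 32, 16, 2]     # hypothetical input/hidden/output widths
params = dense_params(layers)
kb_int8 = params / 1024       # ~1 byte per parameter after int8 quantization
fits = kb_int8 < 64           # within the MCU's 64 kB of SRAM?
```

Activations, buffers, and the runtime also consume SRAM, so real designs leave substantial headroom below the raw limit.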

+

The champion, GaTech EIC Lab, obtained 0.972 in \(F_\beta\) (an F-measure that weights recall more heavily than precision), 1.747 ms in latency, and 26.39 kB in memory footprint with a deep neural network. An ICD with an on-device VA detection algorithm was implanted in a clinical trial.
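The \(F_\beta\) metric can be computed directly from precision and recall; the snippet below (with \(\beta = 2\) chosen only for illustration, since the chapter does not state the contest's exact \(\beta\)) shows how larger \(\beta\) rewards recall:

```python
# F-beta generalizes F1: beta > 1 weights recall more heavily, which
# matters when missing a life-threatening arrhythmia is far costlier
# than a false alarm. beta = 2 below is illustrative only.
def f_beta(precision: float, recall: float, beta: float) -> float:
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With beta = 1 this reduces to the familiar F1 (harmonic mean of
# precision and recall); with beta = 2, recall dominates the score.
```

For example, a detector with precision 0.90 and recall 0.99 scores about 0.971 at \(\beta = 2\), higher than its F1 would be, because its strong recall is rewarded.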

+
+

Exercise 17.2 (Clinical Data: Unlocking Insights with Named Entity Recognition)  

+
+
+ +
+
+

In this exercise, you’ll learn about Named Entity Recognition (NER), a powerful tool for extracting valuable information from clinical text. Using Spark NLP, a specialized library for healthcare NLP, we’ll explore how NER models like BiLSTM-CNN-Char and BERT can automatically identify important medical entities such as diagnoses, medications, test results, and more. You’ll get hands-on experience applying these techniques with a special focus on oncology-related data extraction, helping you unlock insights about cancer types and treatment details from patient records.
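As a toy illustration of what NER produces (this is not the Spark NLP API; real clinical NER uses the trained taggers named above, and the terms and labels here are invented examples), a simple dictionary lookup shows the input/output shape:

```python
# Toy gazetteer lookup, NOT the Spark NLP API: real systems use trained
# models (BiLSTM-CNN-Char, BERT). This only illustrates that NER maps
# raw text to (entity span, label) pairs. Terms/labels are invented.
ENTITIES = {
    "metformin": "DRUG",
    "hypertension": "DIAGNOSIS",
    "hba1c": "TEST",
}

def tag(text):
    """Return (surface form, label) pairs for known terms in the text."""
    found = []
    lowered = text.lower()
    for term, label in ENTITIES.items():
        start = lowered.find(term)
        if start != -1:
            # Slice the original text so the surface form keeps its casing.
            found.append((text[start:start + len(term)], label))
    return found

note = "Patient with hypertension, started metformin; HbA1c pending."
```

Trained models go far beyond this, handling misspellings, abbreviations, and context, but the output structure they hand downstream applications is the same kind of labeled span.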

+

+
+
+
+
+
+
+

17.4 Science

+

In many scientific fields, researchers are limited by the quality and resolution of data they can collect. They often must indirectly infer the true parameters of interest using approximate correlations and models built on sparse data points. This constrains the accuracy of scientific understanding and predictions.

+

The emergence of TinyML opens new possibilities for gathering high-fidelity scientific measurements. With embedded machine learning, tiny, low-cost sensors can automatically process and analyze data locally in real-time. This creates intelligent sensor networks that capture nuanced data at much greater scales and frequencies.

+

For example, monitoring environmental conditions to model climate change remains challenging due to the need for widespread, continuous data. The Ribbit Project from UC Berkeley is pioneering a crowdsourced TinyML solution (Rao 2021). They developed an open-source CO2 sensor that uses an onboard microcontroller to process the gas measurements. An extensive dataset can be aggregated by distributing hundreds of these low-cost sensors. The TinyML devices compensate for environmental factors and provide granular, accurate readings that were previously impossible.
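A minimal sketch of that compensation idea, with made-up coefficients rather than Ribbit's actual calibration:

```python
# Illustrative compensation step: low-cost CO2 readings drift with
# temperature and humidity, so firmware can subtract a learned linear
# correction. All coefficients here are invented placeholders, not
# the Ribbit Project's calibration.
def compensate_co2(raw_ppm, temp_c, humidity_pct,
                   k_t=2.0, k_h=0.5, ref_t=25.0, ref_h=50.0):
    """Correct a raw ppm reading relative to reference conditions."""
    return raw_ppm - k_t * (temp_c - ref_t) - k_h * (humidity_pct - ref_h)
```

In practice the correction terms would be fit from co-located reference-sensor data, and a small learned model can replace the linear form while still running on the microcontroller.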

+

The potential to massively scale out intelligent sensing via TinyML has profound scientific implications. Higher-resolution data can lead to discoveries and predictive capabilities in fields ranging from ecology to cosmology. Other applications could include seismic sensors for earthquake early warning systems, distributed weather monitors to track microclimate changes, and acoustic sensors to study animal populations.

+

As sensors and algorithms continue improving, TinyML networks may generate more detailed maps of natural systems than ever before. Democratizing the collection of scientific data can accelerate research and understanding across disciplines. However, it raises new challenges around data quality, privacy, and modeling unknowns. TinyML signifies a growing convergence of AI and the natural sciences to answer fundamental questions.

+
+
+

17.5 Conservation and Environment

+

TinyML is emerging as a powerful tool for environmental conservation and sustainability efforts. Recent research has highlighted numerous applications of tiny machine learning in domains such as wildlife monitoring, natural resource management, and tracking climate change.

+

One example is using TinyML for real-time wildlife tracking and protection. Researchers have developed Smart Wildlife Tracker devices that leverage TinyML algorithms to detect poaching activities. The collars contain sensors like cameras, microphones, and GPS to monitor the surrounding environment continuously. Embedded machine learning models analyze the audio and visual data to identify threats like nearby humans or gunshots. Early poaching detection gives wildlife rangers critical information to intervene and take action.

+

Other projects apply TinyML to study animal behavior through sensors. The smart wildlife collar uses accelerometers and acoustic monitoring to track elephant movements, communication, and moods (Verma 2022). The low-power TinyML collar devices transmit rich data on elephant activities while avoiding burdensome battery changes. This helps researchers unobtrusively observe elephant populations to inform conservation strategies.

+
+Verma, Team Dual_Boot: Swapnil. 2022. “Elephant AI.” Hackster.io. https://www.hackster.io/dual_boot/elephant-ai-ba71e9. +

On a broader scale, distributed TinyML devices are envisioned to create dense sensor networks for environmental modeling. Hundreds of low-cost air quality monitors could map pollution across cities. Underwater sensors may detect toxins and give early warning of algal blooms. Such applications underscore TinyML’s versatility in ecology, climatology, and sustainability.

+

Researchers from Moulay Ismail University of Meknes in Morocco (Bamoumen et al. 2022) have published a survey on how TinyML can be used to solve environmental issues. However, thoughtfully assessing benefits, risks, and equitable access will be vital as TinyML expands environmental research and conservation. With ethical consideration of impacts, TinyML offers data-driven solutions to protect biodiversity, natural resources, and our planet.

+
+Bamoumen, Hatim, Anas Temouden, Nabil Benamar, and Yousra Chtouki. 2022. “How TinyML Can Be Leveraged to Solve Environmental Problems: A Survey.” In 2022 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 338–43. IEEE; IEEE. https://doi.org/10.1109/3ict56508.2022.9990661. +
+
+

17.6 Disaster Response

+

In disaster response, speed and safety are paramount. But rubble and wreckage create hazardous, confined environments that impede human search efforts. TinyML enables nimble drones to assist rescue teams in these dangerous scenarios.

+

When buildings collapse after earthquakes, small drones can prove invaluable. Equipped with TinyML navigation algorithms, micro-sized drones like the Crazyflie can traverse cramped voids and map pathways beyond human reach (Bardienus P. Duisterhof et al. 2019). Obstacle avoidance allows the drones to weave through unstable debris. This autonomous mobility lets them rapidly sweep areas humans cannot access.

+
+Duisterhof, Bardienus P, Srivatsan Krishnan, Jonathan J Cruz, Colby R Banbury, William Fu, Aleksandra Faust, Guido CHE de Croon, and Vijay Janapa Reddi. 2019. “Learning to Seek: Autonomous Source Seeking with Deep Reinforcement Learning Onboard a Nano Drone Microcontroller.” ArXiv Preprint abs/1909.11236. https://arxiv.org/abs/1909.11236. +

The video below presents the paper by Duisterhof et al. (2019) on deep reinforcement learning using drones for source seeking.

+
+

Crucially, onboard sensors and TinyML processors analyze real-time data to identify signs of survivors. Thermal cameras detect body heat, microphones pick up calls for help, and gas sensors warn of leaks (Bardienus P. Duisterhof et al. 2021). Processing data locally using TinyML allows for quick interpretation to guide rescue efforts. As conditions evolve, the drones can adapt by adjusting their search patterns and priorities.

+

The following video is an overview of autonomous drones for gas leak detection.

+
+

Additionally, coordinated swarms of drones unlock new capabilities. By collaborating and sharing insights, drone teams build a comprehensive view of the situation. Blanketing disaster sites allows TinyML algorithms to fuse and analyze data from multiple vantage points, amplifying situational awareness beyond individual drones (Bardienus P. Duisterhof et al. 2021).

+
+Duisterhof, Bardienus P., Shushuai Li, Javier Burgues, Vijay Janapa Reddi, and Guido C. H. E. de Croon. 2021. “Sniffy Bug: A Fully Autonomous Swarm of Gas-Seeking Nano Quadcopters in Cluttered Environments.” In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 9099–9106. IEEE; IEEE. https://doi.org/10.1109/iros51168.2021.9636217. +

Most importantly, initial drone reconnaissance enhances safety for human responders. Keeping rescue teams at a safe distance until drone surveys assess hazards saves lives. Once secured, drones can guide precise personnel placement.

+

By combining agile mobility, real-time data, and swarm coordination, TinyML-enabled drones promise to transform disaster response. Their versatility, speed, and safety make them a vital asset for rescue efforts in dangerous, inaccessible environments. Integrating autonomous drones with traditional methods can accelerate responses when it matters most.

+
+
+

17.7 Education and Outreach

+

TinyML holds immense potential to help address challenges in developing regions, but realizing its benefits requires focused education and capacity building. Recognizing this need, academic researchers have spearheaded outreach initiatives to spread TinyML education globally.

+

In 2020, Harvard University, Columbia University, the International Centre for Theoretical Physics (ICTP), and UNIFEI jointly founded the TinyML for Developing Communities (TinyML4D) network (Zennaro, Plancher, and Reddi 2022). This network empowers universities and researchers in developing countries to harness TinyML for local impact.

+
+Zennaro, Marco, Brian Plancher, and V Janapa Reddi. 2022. “TinyML: Applied AI for Development.” In The UN 7th Multi-Stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals, 2022–05. +

A core focus is expanding access to applied machine learning education. The TinyML4D network provides training, curricula, and lab resources to members. Hands-on workshops and data collection projects give students practical experience. Members can share best practices and build a community through conferences and academic collaborations.

+

The network prioritizes enabling locally relevant TinyML solutions. Projects address challenges like agriculture, health, and environmental monitoring based on community needs. For example, a member university in Rwanda developed a low-cost flood monitoring system using TinyML and sensors.

+

TinyML4D includes over 50 member institutions across Africa, Asia, and Latin America. However, greater investments and industry partnerships are needed to reach all underserved regions. The ultimate vision is training new generations to ethically apply TinyML for sustainable development. Outreach efforts today lay the foundation for democratizing transformative technology for the future.

+
+
+

17.8 Accessibility

+

Technology has immense potential to break down barriers faced by people with disabilities and bridge gaps in accessibility. TinyML specifically opens new possibilities for developing intelligent, personalized assistive devices.

+

With machine learning algorithms running locally on microcontrollers, compact accessibility tools can operate in real time without reliance on connectivity. The National Institute on Deafness and Other Communication Disorders (NIDCD) states that 20% of the world’s population has some form of hearing loss. Hearing aids leveraging TinyML could recognize multiple speakers and amplify the voice of a chosen target in crowded rooms. This allows people with hearing impairments to focus on specific conversations.

+

Similarly, mobility devices could use on-device vision processing to identify obstacles and terrain characteristics. This enables enhanced navigation and safety for the visually impaired. Companies like Envision are developing smart glasses, converting visual information into speech, with embedded TinyML to guide blind people by detecting objects, text, and traffic signals.

+

The video below shows real-life use cases of the Envision visual aid glasses.


TinyML could even power responsive prosthetic limbs. By analyzing nerve signals and sensory data like muscle tension, prosthetics and exoskeletons with embedded ML can move and adjust grip dynamically, making control more natural and intuitive. Companies are creating affordable, everyday bionic hands using TinyML. For those with speech difficulties, voice-enabled devices with TinyML can generate personalized vocal outputs from non-verbal inputs. Pairs by Anthropic translates gestures into natural speech tailored for individual users.

+

By enabling more customizable assistive tech, TinyML makes services more accessible and tailored to individual needs. And through translation and interpretation applications, TinyML can break down communication barriers. Apps like Microsoft Translator offer real-time translation powered by TinyML algorithms.

+

With its thoughtful and inclusive design, TinyML promises more autonomy and dignity for people with disabilities. However, developers should engage communities directly, avoid compromising privacy, and consider affordability to maximize the benefits. TinyML has huge potential to contribute to a more just, equitable world.


17.9 Infrastructure and Urban Planning

+

As urban populations swell, cities face immense challenges in efficiently managing resources and infrastructure. TinyML presents a powerful tool for developing intelligent systems to optimize city operations and sustainability. It could revolutionize energy efficiency in smart buildings.

+

Machine learning models can learn to predict and regulate energy usage based on occupancy patterns. Miniaturized sensors placed throughout buildings can provide granular, real-time data on space utilization, temperature, and more (Seyedzadeh et al. 2018). This visibility allows TinyML systems to minimize waste by optimizing heating, cooling, lighting, etc.

+
Seyedzadeh, Saleh, Farzad Pour Rahimian, Ivan Glesk, and Marc Roper. 2018. “Machine Learning for Estimation of Building Energy Consumption and Performance: A Review.” Visualization in Engineering 6 (1): 1–20. https://doi.org/10.1186/s40327-018-0064-7.
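To make the occupancy-to-energy idea concrete, here is a minimal sketch (not taken from the work cited above) of how a building controller might learn the relationship between occupancy and energy use from logged sensor data. All numbers are synthetic and the single-feature linear model is deliberately simple; a deployed TinyML model would use richer features like temperature and time of day.

```python
# Synthetic hourly logs: people detected vs. measured energy use (kWh).
occupancy = [0, 2, 4, 6, 8, 10]
energy = [1.1, 1.9, 3.2, 3.8, 5.1, 6.0]

n = len(occupancy)
mean_x = sum(occupancy) / n
mean_y = sum(energy) / n

# Ordinary least squares, closed form: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(occupancy, energy)) \
        / sum((x - mean_x) ** 2 for x in occupancy)
intercept = mean_y - slope * mean_x

def predict_kwh(people: int) -> float:
    """Estimate hourly energy demand for a given occupancy level."""
    return intercept + slope * people

print(f"model: kwh = {intercept:.2f} + {slope:.2f} * occupancy")
print(f"predicted demand for 5 people: {predict_kwh(5):.2f} kWh")
```

A model this small fits comfortably on a microcontroller, which is the point: the heavy lifting is in collecting representative sensor data, not in the arithmetic of the deployed predictor.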

These examples demonstrate TinyML’s huge potential for efficient, sustainable city infrastructure. However, urban planners must consider privacy, security, and accessibility to ensure responsible adoption. With careful implementation, TinyML could profoundly modernize urban life.


17.10 Challenges and Considerations

+

While TinyML presents immense opportunities, thoughtful consideration of challenges and ethical implications will be critical as adoption spreads globally. Researchers have highlighted key factors to address, especially when deploying TinyML in developing regions.

+

A foremost challenge is limited access to training and hardware (Ooko et al. 2021). Few educational programs are tailored to TinyML, and emerging economies often lack a robust electronics supply chain. Thorough training and partnerships will be needed to nurture expertise and make devices available to underserved communities. Initiatives like the TinyML4D network help provide structured learning pathways.

+
Ooko, Samson Otieno, Marvin Muyonga Ogore, Jimmy Nsenga, and Marco Zennaro. 2021. “TinyML in Africa: Opportunities and Challenges.” In 2021 IEEE Globecom Workshops (GC Wkshps), 1–6. IEEE. https://doi.org/10.1109/gcwkshps52748.2021.9682107.

Data limitations also pose hurdles. TinyML models require quality localized datasets, which are scarce in under-resourced environments. Creating frameworks to crowdsource data ethically could address this. However, data collection should benefit local communities directly, not just extract value.

+

Optimizing power usage and connectivity will be vital for sustainability. TinyML’s low power needs make it ideal for off-grid use cases. Integrating batteries or solar panels can enable continuous operation. Adapting devices for low-bandwidth transmission where internet access is limited also maximizes impact.

+

Cultural and language barriers further complicate adoption. User interfaces and devices should account for all literacy levels and avoid excluding subgroups. Voice-controllable solutions in local dialects can enhance accessibility.

+

Addressing these challenges requires holistic partnerships, funding, and policy support. However, inclusively and ethically scaling TinyML has monumental potential to uplift disadvantaged populations worldwide. With thoughtful implementation, the technology could profoundly democratize opportunity.


17.11 Conclusion

+

TinyML presents a tremendous opportunity to harness the power of artificial intelligence to advance the UN Sustainable Development Goals and drive social impact globally. As the examples across sectors like healthcare, agriculture, and conservation highlight, embedded machine learning unlocks new capabilities for low-cost, accessible solutions tailored to local contexts. TinyML circumvents barriers like poor infrastructure, limited connectivity, and high costs that often exclude developing communities from emerging technology.

+

However, realizing TinyML’s full potential requires holistic collaboration. Researchers, policymakers, companies, and local stakeholders must work together to provide training, establish ethical frameworks, co-design solutions, and adapt them to community needs. Through inclusive development and deployment, TinyML can deliver on its promise to bridge inequities and uplift vulnerable populations, leaving no one behind.

+

If cultivated responsibly, TinyML could democratize opportunity and accelerate progress on global priorities from poverty alleviation to climate resilience. The technology represents a new wave of applied AI to empower societies, promote sustainability, and propel humanity toward greater justice, prosperity, and peace. TinyML provides a glimpse into an AI-enabled future that is accessible to all.


Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises
Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.


11  Benchmarking AI


Resources: Slides, Labs, Exercises


+
DALL·E 3 Prompt: Photo of a podium set against a tech-themed backdrop. On each tier of the podium, there are AI chips with intricate designs. The top chip has a gold medal hanging from it, the second one has a silver medal, and the third has a bronze medal. Banners with ‘AI Olympics’ are displayed prominently in the background.

Benchmarking is critical to developing and deploying machine learning systems, especially TinyML applications. Benchmarks allow developers to measure and compare the performance of different model architectures, training procedures, and deployment strategies. This provides key insights into which approaches work best for the problem at hand and the constraints of the deployment environment.

+

This chapter will provide an overview of popular ML benchmarks, best practices for benchmarking, and how to use benchmarks to improve model development and system performance. It aims to provide developers with the right tools and knowledge to effectively benchmark and optimize their systems, especially for TinyML systems.

Learning Objectives
  • Understand the purpose and goals of benchmarking AI systems, including performance assessment, resource evaluation, validation, and more.
  • Learn about the different types of benchmarks - micro, macro, and end-to-end - and their role in evaluating different aspects of an AI system.
  • Become familiar with the key components of an AI benchmark, including datasets, tasks, metrics, baselines, reproducibility rules, and more.
  • Understand the distinction between training and inference and how each phase warrants specialized ML systems benchmarking.
  • Learn about system benchmarking concepts like throughput, latency, power, and computational efficiency.
  • Appreciate the evolution of model benchmarking from accuracy to more holistic metrics like fairness, robustness, and real-world applicability.
  • Recognize the growing role of data benchmarking in evaluating issues like bias, noise, balance, and diversity.
  • Understand the limitations of evaluating models, data, and systems in isolation and the emerging need for integrated benchmarking.

11.1 Introduction

+

Benchmarking provides the essential measurements needed to drive machine learning progress and truly understand system performance. As the physicist Lord Kelvin famously said, “To measure is to know.” Benchmarks allow us to quantitatively know the capabilities of different models, software, and hardware. They allow ML developers to measure the inference time, memory usage, power consumption, and other metrics that characterize a system. Moreover, benchmarks create standardized processes for measurement, enabling fair comparisons across different solutions.
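The kind of standardized latency measurement described here can be prototyped in a few lines. In the sketch below, `dummy_model` is a stand-in for real inference, and the warmup-then-measure convention with percentile reporting reflects common benchmarking practice rather than any prescribed standard.

```python
import time
import statistics

def dummy_model(x):
    # Stand-in for real inference: a fixed amount of arithmetic work.
    return sum(v * v for v in x)

def benchmark_latency(model, sample, warmup=10, runs=100):
    """Measure per-inference latency with warmup, reporting robust statistics."""
    for _ in range(warmup):  # warmup runs exclude cold-start effects
        model(sample)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        model(sample)
        times.append((time.perf_counter() - start) * 1e3)  # milliseconds
    times.sort()
    return {
        "median_ms": statistics.median(times),
        "p95_ms": times[int(0.95 * len(times)) - 1],  # tail latency matters too
    }

stats = benchmark_latency(dummy_model, list(range(1000)))
print(stats)
```

Reporting the median and a tail percentile, rather than a single mean, is what makes such numbers comparable across runs and machines; memory and power measurement require platform-specific tooling and are omitted here.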

+

When benchmarks are maintained over time, they become instrumental in capturing progress across generations of algorithms, datasets, and hardware. The models and techniques that set new records on ML benchmarks from one year to the next demonstrate tangible improvements in what’s possible for on-device machine learning. By using benchmarks to measure, ML practitioners can know the real-world capabilities of their systems and have confidence that each step reflects genuine progress towards the state-of-the-art.

+

Benchmarking has several important goals and objectives that guide its implementation for machine learning systems.

+
  • Performance assessment. This involves evaluating key metrics like a given model’s speed, accuracy, and efficiency. For instance, in a TinyML context, it is crucial to benchmark how quickly a voice assistant can recognize commands, as this evaluates real-time performance.
  • Resource evaluation. This means assessing the model’s impact on critical system resources, including battery life, memory usage, and computational overhead. A relevant example is comparing the battery drain of two different image recognition algorithms running on a wearable device.
  • Validation and verification. Benchmarking helps ensure the system functions correctly and meets specified requirements. One way is by checking the accuracy of an algorithm, like a heart rate monitor on a smartwatch, against readings from medical-grade equipment as a form of clinical validation.
  • Competitive analysis. This enables comparing solutions against competing offerings in the market. For example, benchmarking a custom object detection model against common TinyML models like MobileNet and Tiny-YOLO.
  • Credibility. Accurate benchmarks uphold the credibility of AI solutions and the organizations that develop them. They demonstrate a commitment to transparency, honesty, and quality, which are essential in building trust with users and stakeholders.
  • Regulation and standardization. As the AI industry continues to grow, there is an increasing need for regulation and standardization to ensure that AI solutions are safe, ethical, and effective. Accurate and reliable benchmarks are essential to this regulatory framework, as they provide the data and evidence needed to assess compliance with industry standards and legal requirements.

This chapter covers the three types of AI benchmarks, the standard metrics, tools, and techniques designers use to optimize their systems, and the challenges and trends in benchmarking.


11.2 Historical Context


11.2.1 Standard Benchmarks

+

The evolution of benchmarks in computing vividly illustrates the industry’s relentless pursuit of excellence and innovation. In the early days of computing during the 1960s and 1970s, benchmarks were rudimentary and designed for mainframe computers. For example, the Whetstone benchmark, named after the Whetstone ALGOL compiler, was one of the first standardized tests to measure the floating-point arithmetic performance of a CPU. These pioneering benchmarks prompted manufacturers to refine their architectures and algorithms to achieve better benchmark scores.

+

The 1980s marked a significant shift with the rise of personal computers. As companies like IBM, Apple, and Commodore competed for market share, benchmarks became critical tools for enabling fair competition. The SPEC CPU benchmarks, introduced by the System Performance Evaluation Cooperative (SPEC), established standardized tests allowing objective comparisons between different machines. This standardization created a competitive environment, pushing silicon manufacturers and system creators to continually enhance their hardware and software offerings.

+

The 1990s brought the era of graphics-intensive applications and video games. The need for benchmarks to evaluate graphics card performance led to Futuremark’s creation of 3DMark. As gamers and professionals sought high-performance graphics cards, companies like NVIDIA and AMD were driven to rapid innovation, leading to major advancements in GPU technology like programmable shaders.

+

The 2000s saw a surge in mobile phones and portable devices like tablets. With portability came the challenge of balancing performance and power consumption. Benchmarks like MobileMark by BAPCo evaluated speed and battery life. This drove companies to develop more energy-efficient System-on-Chips (SOCs), leading to the emergence of architectures like ARM that prioritized power efficiency.

+

The focus of the recent decade has shifted towards cloud computing, big data, and artificial intelligence. Cloud service providers like Amazon Web Services and Google Cloud compete on performance, scalability, and cost-effectiveness. Tailored cloud benchmarks like CloudSuite have become essential, driving providers to optimize their infrastructure for better services.


11.2.2 Custom Benchmarks

+

In addition to industry-standard benchmarks, there are custom benchmarks specifically designed to meet the unique requirements of a particular application or task. They are tailored to the specific needs of the user or developer, ensuring that the performance metrics are directly relevant to the intended use of the AI model or system. Custom benchmarks can be created by individual organizations, researchers, or developers and are often used in conjunction with industry-standard benchmarks to provide a comprehensive evaluation of AI performance.

+

For example, a hospital could develop a benchmark to assess an AI model for predicting patient readmission. This benchmark would incorporate metrics relevant to the hospital’s patient population, like demographics, medical history, and social factors. Similarly, a financial institution’s fraud detection benchmark could focus on identifying fraudulent transactions accurately while minimizing false positives. In automotive, an autonomous vehicle benchmark may prioritize performance in diverse conditions, responding to obstacles, and safety. Retailers could benchmark recommendation systems using click-through rate, conversion rate, and customer satisfaction. Manufacturing companies might benchmark quality control systems on defect identification, efficiency, and waste reduction. In each industry, custom benchmarks provide organizations with evaluation criteria tailored to their unique needs and context. This allows for a more meaningful assessment of how well AI systems meet requirements.

+

The advantage of custom benchmarks lies in their flexibility and relevance. They can be designed to test specific performance aspects critical to the success of the AI solution in its intended application. This allows for a more targeted and accurate assessment of the AI model or system’s capabilities. Custom benchmarks also provide valuable insights into the performance of AI solutions in real-world scenarios, which can be crucial for identifying potential issues and areas for improvement.

+

In AI, benchmarks play a crucial role in driving progress and innovation. While benchmarks have long been used in computing, their application to machine learning is relatively recent. AI-focused benchmarks provide standardized metrics to evaluate and compare the performance of different algorithms, model architectures, and hardware platforms.


11.2.3 Community Consensus

+

A key prerequisite for any benchmark to be impactful is that it reflects the shared priorities and values of the broader research community. Benchmarks designed in isolation risk failing to gain acceptance if they overlook key metrics considered important by leading groups. Through collaborative development with open participation from academic labs, companies, and other stakeholders, benchmarks can incorporate collective input on critical capabilities worth measuring. This helps ensure the benchmarks evaluate aspects the community agrees are essential to advance the field. The process of reaching alignment on tasks and metrics itself supports converging on what matters most.

+

Furthermore, benchmarks published with broad co-authorship from respected institutions carry authority and validity that convinces the community to adopt them as trusted standards. Benchmarks perceived as biased by particular corporate or institutional interests breed skepticism. Ongoing community engagement through workshops and challenges is also key after the initial release, and that is what, for instance, led to the success of ImageNet. As research progresses, collective participation enables continual refinement and expansion of benchmarks over time.

+

Finally, community-developed benchmarks released with open access accelerate adoption and consistent implementation. Sharing open-source code, documentation, models, and infrastructure lowers the barriers for groups to benchmark solutions on an equal footing using standardized implementations. This consistency is critical for fair comparisons. Without coordination, labs and companies may implement benchmarks differently, reducing result reproducibility.

+

Community consensus gives benchmarks lasting relevance, while fragmentation breeds confusion. Through collaborative development and transparent operation, benchmarks can become authoritative standards for tracking progress. Several of the benchmarks discussed in this chapter were developed by the community, for the community, and that is ultimately what led to their success.


11.3 AI Benchmarks: System, Model, and Data

+

The need for comprehensive benchmarking becomes paramount as AI systems grow in complexity and ubiquity. Within this context, benchmarks are often classified into three primary categories: System, Model, and Data. Let’s delve into why each of these categories is essential and the significance of evaluating AI from these three distinct dimensions:


11.3.1 System Benchmarks

+

AI computations, especially those in deep learning, are resource-intensive. The hardware on which these computations run plays an important role in determining AI solutions’ speed, efficiency, and scalability. Consequently, hardware benchmarks help evaluate the performance of CPUs, GPUs, TPUs, and other accelerators in AI tasks. By understanding hardware performance, developers can choose which hardware platforms best suit specific AI applications. Furthermore, hardware manufacturers use these benchmarks to identify areas for improvement, driving innovation in AI-specific chip designs.


11.3.2 Model Benchmarks

+

The architecture, size, and complexity of AI models vary widely. Different models have different computational demands and offer varying levels of accuracy and efficiency. Model benchmarks help us assess the performance of various AI architectures on standardized tasks. They provide insights into different models’ speed, accuracy, and resource demands. By benchmarking models, researchers can identify best-performing architectures for specific tasks, guiding the AI community towards more efficient and effective solutions. Additionally, these benchmarks aid in tracking the progress of AI research, showcasing advancements in model design and optimization.


11.3.3 Data Benchmarks

+

AI, particularly machine learning, is inherently data-driven. The quality, size, and diversity of data influence AI models’ training efficacy and generalization capability. Data benchmarks focus on the datasets used in AI training and evaluation. They provide standardized datasets the community can use to train and test models, ensuring a level playing field for comparisons. Moreover, these benchmarks highlight data quality, diversity, and representation challenges, pushing the community to address biases and gaps in AI training data. By understanding data benchmarks, researchers can also gauge how models might perform in real-world scenarios, ensuring robustness and reliability.

+

In the remainder of the sections, we will discuss each of these benchmark types. The focus will be an in-depth exploration of system benchmarks, as these are critical to understanding and advancing machine learning system performance. We will briefly cover model and data benchmarks for a comprehensive perspective, but the emphasis and majority of the content will be devoted to system benchmarks.


11.4 System Benchmarking


11.4.1 Granularity

+

Machine learning system benchmarking provides a structured and systematic approach to assessing a system’s performance across various dimensions. Given the complexity of ML systems, we can dissect their performance through different levels of granularity and obtain a comprehensive view of the system’s efficiency, identify potential bottlenecks, and pinpoint areas for improvement. To this end, various types of benchmarks have evolved over the years and continue to persist.

+

Figure 11.1 illustrates the different layers of granularity of an ML system. At the application level, end-to-end benchmarks assess the overall system performance, considering factors like data preprocessing, model training, and inference. At the model layer, benchmarks focus on assessing the efficiency and accuracy of specific models. This includes evaluating how well models generalize to new data and their computational efficiency during training and inference. Furthermore, benchmarking can extend to hardware and software infrastructure, examining the performance of individual components like GPUs or TPUs.

Figure 11.1: ML system granularity.

Micro Benchmarks

+

Micro-benchmarks in AI are specialized, evaluating distinct components or specific operations within a broader machine learning process. These benchmarks zero in on individual tasks, offering insights into the computational demands of a particular neural network layer, the efficiency of a unique optimization technique, or the throughput of a specific activation function. For instance, practitioners might use micro-benchmarks to measure the computational time required by a convolutional layer in a deep learning model or to evaluate the speed of data preprocessing that feeds data into the model. Such granular assessments are instrumental in fine-tuning and optimizing discrete aspects of AI models, ensuring that each component operates at its peak potential.

+

These microbenchmarks zoom in on very specific operations or components of the AI pipeline, such as the following:

+
  • Tensor Operations: Libraries like cuDNN (by NVIDIA) often have benchmarks to measure the performance of individual tensor operations, such as convolutions or matrix multiplications, which are foundational to deep learning computations.
  • Activation Functions: Benchmarks that measure the speed and efficiency of various activation functions like ReLU, Sigmoid, or Tanh in isolation.
  • Layer Benchmarks: Evaluations of the computational efficiency of distinct neural network layers, such as LSTM or Transformer blocks, when operating on standardized input sizes.

Example: DeepBench, introduced by Baidu, is a good example of a micro-benchmark suite. It assesses the performance of basic operations in deep learning models, providing insights into how different hardware platforms handle neural network training and inference.
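A toy version of such a tensor-operation micro-benchmark can be sketched in pure Python. Real suites like DeepBench time optimized vendor kernels (e.g., cuDNN); the naive `matmul` below merely stands in for one such operation so the timing harness itself is visible.

```python
import time

def matmul(a, b):
    """Naive dense matrix multiply, a stand-in for a single tensor operation."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += a[i][t] * b[t][j]
            out[i][j] = s
    return out

def time_op(fn, *args, repeats=3):
    """Best-of-N timing, a common convention for noisy micro-measurements."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Sweep input sizes to see how cost scales with the operation's shape.
for n in (16, 32, 64):
    a = [[1.0] * n for _ in range(n)]
    b = [[1.0] * n for _ in range(n)]
    print(f"{n}x{n} matmul: {time_op(matmul, a, b) * 1e3:.3f} ms")
```

Sweeping the input shape is the essence of these suites: the same operation can be fast at one size and pathological at another, which is exactly what kernel authors need to know.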


Exercise 11.1 (System Benchmarking - Tensor Operations)  


Ever wonder how your image filters get so fast? Special libraries like cuDNN supercharge those calculations on certain hardware. In this Colab, we’ll use cuDNN with PyTorch to speed up image filtering. Think of it as a tiny benchmark, showing how the right software can unlock your GPU’s power!

+


Macro Benchmarks

+

Macro benchmarks provide a holistic view, assessing the end-to-end performance of entire machine learning models or comprehensive AI systems. Rather than focusing on individual operations, macro-benchmarks evaluate the collective efficacy of models under real-world scenarios or tasks. For example, a macro-benchmark might assess the complete performance of a deep learning model undertaking image classification on a dataset like ImageNet. This includes gauging accuracy, computational speed, and resource consumption. Similarly, one might measure the cumulative time and resources needed to train a natural language processing model on extensive text corpora or evaluate the performance of an entire recommendation system, from data ingestion to final user-specific outputs.
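A minimal macro-benchmark harness couples a task metric (accuracy) with a system metric (throughput), as sketched below. The one-line model and four-sample dataset are placeholders for a real classifier and a standardized test set.

```python
import time

def macro_benchmark(model, dataset):
    """Evaluate a classifier end to end: task accuracy plus throughput."""
    correct = 0
    start = time.perf_counter()
    for features, label in dataset:
        if model(features) == label:
            correct += 1
    elapsed = max(time.perf_counter() - start, 1e-9)  # guard tiny toy runs
    return {
        "accuracy": correct / len(dataset),
        "throughput_sps": len(dataset) / elapsed,  # samples per second
    }

# Toy stand-ins: classify a number as 1 if above a threshold.
toy_model = lambda x: 1 if x > 0.5 else 0
toy_data = [(0.9, 1), (0.2, 0), (0.7, 1), (0.4, 1)]  # last label disagrees on purpose

report = macro_benchmark(toy_model, toy_data)
print(report)
```

Reporting both numbers together is the point of macro benchmarking: a model that is slightly more accurate but an order of magnitude slower may be the wrong choice for a constrained deployment.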

+

Examples of macro benchmarks that evaluate AI models include:

+
  • MLPerf Inference (Reddi et al. 2020): An industry-standard set of benchmarks for measuring the performance of machine learning software and hardware. MLPerf has a suite of dedicated benchmarks for specific scales, such as MLPerf Mobile for mobile-class devices and MLPerf Tiny, which focuses on microcontrollers and other resource-constrained devices.
  • EEMBC’s MLMark: A benchmarking suite for evaluating the performance and power efficiency of embedded devices running machine learning workloads. This benchmark provides insights into how different hardware platforms handle tasks like image recognition or audio processing.
  • AI-Benchmark (Ignatov et al. 2019): A benchmarking tool designed for Android devices, it evaluates the performance of AI tasks on mobile devices, encompassing various real-world scenarios like image recognition, face parsing, and optical character recognition.

Reddi, Vijay Janapa, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, et al. 2020. “MLPerf Inference Benchmark.” In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), 446–59. IEEE. https://doi.org/10.1109/isca45697.2020.00045.

Ignatov, Andrey, Radu Timofte, Andrei Kulik, Seungsoo Yang, Ke Wang, Felix Baum, Max Wu, Lirong Xu, and Luc Van Gool. 2019. “AI Benchmark: All About Deep Learning on Smartphones in 2019.” In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE. https://doi.org/10.1109/iccvw.2019.00447.

End-to-end Benchmarks

+

End-to-end benchmarks provide an all-inclusive evaluation that extends beyond the boundaries of the AI model itself. Instead of focusing solely on a machine learning model’s computational efficiency or accuracy, these benchmarks encompass the entire pipeline of an AI system. This includes initial data preprocessing, the core model’s performance, post-processing of the model’s outputs, and other integral components like storage and network interactions.

+

Data preprocessing is the first stage in many AI systems, transforming raw data into a format suitable for model training or inference. These preprocessing steps’ efficiency, scalability, and accuracy are vital for the overall system’s performance. End-to-end benchmarks assess this phase, ensuring that data cleaning, normalization, augmentation, or any other transformation process doesn’t become a bottleneck.

+

The post-processing phase also takes center stage. This involves interpreting the model’s raw outputs, possibly converting scores into meaningful categories, filtering results, or even integrating with other systems. In real-world applications, this phase is crucial for delivering actionable insights, and end-to-end benchmarks ensure it’s both efficient and effective.

+

Beyond the core AI operations, other system components are important in the overall performance and user experience. Storage solutions, whether cloud-based, on-premises, or hybrid, can significantly impact data retrieval and storage times, especially with vast AI datasets. Similarly, network interactions, vital for cloud-based AI solutions or distributed systems, can become performance bottlenecks if not optimized. End-to-end benchmarks holistically evaluate these components, ensuring that the entire system operates seamlessly, from data retrieval to final output delivery.

+

To date, there are no public, end-to-end benchmarks that take into account the role of data storage, network, and compute performance. Arguably, MLPerf Training and Inference come close to the idea of an end-to-end benchmark, but they are exclusively focused on ML model performance and do not represent real-world deployment scenarios of how models are used in the field. Nonetheless, they provide a very useful signal that helps assess AI system performance.

+

Given the inherent specificity of end-to-end benchmarking, it is typically performed internally at a company by instrumenting real production deployments of AI. This allows engineers to have a realistic understanding and breakdown of the performance, but given the sensitivity and specificity of the information, it is rarely reported outside of the company.
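Internally, such instrumentation often amounts to timing each pipeline stage and reporting its share of the total. A simplified sketch (the three stage bodies are stand-ins, not a real deployment) might look like:

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def stage(name):
    """Record wall-clock time spent in one pipeline stage."""
    start = time.perf_counter()
    yield
    timings[name] = timings.get(name, 0.0) + time.perf_counter() - start

# Hypothetical end-to-end pipeline; each body stands in for real work.
with stage("preprocess"):
    data = [x / 255.0 for x in range(10_000)]       # e.g., normalize pixels
with stage("inference"):
    scores = [3 * x + 1 for x in data]              # e.g., run the model
with stage("postprocess"):
    labels = [1 if s > 0.5 else 0 for s in scores]  # e.g., threshold scores

total = sum(timings.values())
for name, t in timings.items():
    print(f"{name:11s} {t * 1e3:7.3f} ms ({100 * t / total:.1f}%)")
```

The per-stage breakdown is what distinguishes end-to-end instrumentation from model-only benchmarks: it can reveal, for instance, that preprocessing or output handling dominates even when inference itself is fast.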


Understanding the Trade-offs


Different issues arise at different stages of an AI system. Micro-benchmarks help fine-tune individual components, macro-benchmarks aid in refining model architectures or algorithms, and end-to-end benchmarks guide the optimization of the entire workflow. By understanding where a problem lies, developers can apply targeted optimizations.


Moreover, while individual components of an AI system might perform optimally in isolation, bottlenecks can emerge when they interact. End-to-end benchmarks, in particular, are crucial to ensure that the entire system, when operating collectively, meets desired performance and efficiency standards.


Finally, organizations can make informed decisions on where to allocate resources by discerning performance bottlenecks or inefficiencies. For instance, if micro-benchmarks reveal inefficiencies in specific tensor operations, investments can be directed toward specialized hardware accelerators. Conversely, if end-to-end benchmarks indicate data retrieval issues, investments might be channeled toward better storage solutions.


11.4.2 Benchmark Components


At its core, an AI benchmark is more than just a test or a score; it’s a comprehensive evaluation framework. To understand this in-depth, let’s break down the typical components that go into an AI benchmark.


Standardized Datasets


Datasets serve as the foundation for most AI benchmarks. They provide a consistent data set on which models are trained and evaluated, ensuring a level playing field for comparisons.


Example: ImageNet, a large-scale dataset containing millions of labeled images spanning thousands of categories, is a popular benchmarking standard for image classification tasks.


Pre-defined Tasks


A benchmark should have a clear objective or task that models aim to achieve. This task defines the problem the AI system is trying to solve.


Example: Tasks for natural language processing benchmarks might include sentiment analysis, named entity recognition, or machine translation.


Evaluation Metrics


Once a task is defined, benchmarks require metrics to quantify performance. These metrics offer objective measures to compare different models or systems.


In classification tasks, metrics like accuracy, precision, recall, and F1 score are commonly used. Mean squared or absolute errors might be employed for regression tasks.
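For a binary classification task, these metrics can be computed directly from the confusion-matrix counts. A minimal sketch (the function name is ours, not from any particular library):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 from two parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(
    y_true=[1, 0, 1, 1, 0, 0], y_pred=[1, 0, 0, 1, 0, 1])
print(acc, prec, rec, f1)
```

Benchmark suites typically fix not just the metric formulas but also the evaluation split on which they are computed, so that reported numbers are comparable.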


Baseline Models


Benchmarks often include baseline models or reference implementations. These serve as starting points or minimum performance standards against which new models or techniques can be compared.


Example: In many benchmark suites, simple models like linear regression or basic neural networks serve as baselines to provide context for more complex model evaluations.


Hardware and Software Specifications


Given the variability introduced by different hardware and software configurations, benchmarks often specify or document the hardware and software environments in which tests are conducted.


Example: An AI benchmark might note that evaluations were conducted on an NVIDIA Tesla V100 GPU using TensorFlow v2.4.


Environmental Conditions


As external factors can influence benchmark results, it’s essential to either control or document conditions like temperature, power source, or system background processes.


Example: Mobile AI benchmarks might specify that tests were conducted at room temperature with devices plugged into a power source to eliminate battery-level variances.


Reproducibility Rules


To ensure benchmarks are credible and can be replicated by others in the community, they often include detailed protocols covering everything from random seeds used to exact hyperparameters.


Example: A benchmark for a reinforcement learning task might detail the exact training episodes, exploration-exploitation ratios, and reward structures used.
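A small illustration of the seed-pinning idea: by drawing all randomness from an explicitly seeded generator, a toy "training run" becomes exactly repeatable. Real frameworks expose analogous controls (e.g., seeding NumPy or PyTorch generators); the `seeded_run` function below is purely illustrative.

```python
import random

def seeded_run(seed):
    """A toy 'training run': with the seed pinned, results repeat exactly."""
    rng = random.Random(seed)  # isolated generator, no shared global state
    episodes = [rng.gauss(0.0, 1.0) for _ in range(5)]  # e.g. episode rewards
    return sum(episodes) / len(episodes)

# Identical seeds reproduce identical results; a different seed does not.
assert seeded_run(42) == seeded_run(42)
print(seeded_run(42))
```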


Result Interpretation Guidelines


Beyond raw scores or metrics, benchmarks often provide guidelines or context to interpret results, helping practitioners understand the broader implications.


Example: A benchmark might highlight that while Model A scored higher than Model B in accuracy, it offers better real-time performance, making it more suitable for time-sensitive applications.


11.4.3 Training vs. Inference


The development life cycle of a machine learning model involves two critical phases: training and inference. Training is the process of learning patterns from data to create the model. Inference refers to the model making predictions on new unlabeled data. Both phases play indispensable yet distinct roles. Consequently, each phase warrants rigorous benchmarking to evaluate performance metrics like speed, accuracy, and computational efficiency.


Benchmarking the training phase provides insights into how different model architectures, hyperparameter values, and optimization algorithms impact the time and resources needed to train the model. For instance, benchmarking shows how neural network depth affects training time on a given dataset. Benchmarking also reveals how hardware accelerators like GPUs and TPUs can speed up training.


On the other hand, benchmarking inference evaluates model performance in real-world conditions after deployment. Key metrics include latency, throughput, memory footprint, and power consumption. Inference benchmarking determines if a model meets the requirements of its target application regarding response time and device constraints, which is typically the focus of TinyML. However, we will discuss these broadly to ensure a general understanding.


11.4.4 Training Benchmarks


Training represents the phase where the system processes and ingests raw data to adjust and refine its parameters. Therefore, it is an algorithmic activity and involves system-level considerations, including data pipelines, storage, computing resources, and orchestration mechanisms. The goal is to ensure that the ML system can efficiently learn from data, optimizing both the model’s performance and the system’s resource utilization.


Purpose


From an ML systems perspective, training benchmarks evaluate how well the system scales with increasing data volumes and computational demands. It’s about understanding the interplay between hardware, software, and the data pipeline in the training process.


Consider a distributed ML system designed to train on vast datasets, like those used in large-scale e-commerce product recommendations. A training benchmark would assess how efficiently the system scales across multiple nodes, manages data sharding, and handles failures or node drop-offs during training.


Training benchmarks evaluate CPU, GPU, memory, and network utilization during the training phase, guiding system optimizations. When training a model in a cloud-based ML system, it’s crucial to understand how resources are being utilized. Are GPUs being fully leveraged? Is there unnecessary memory overhead? Benchmarks can highlight bottlenecks or inefficiencies in resource utilization, leading to cost savings and performance improvements.


Training an ML model is contingent on timely and efficient data delivery. Benchmarks in this context would also assess the efficiency of data pipelines, data preprocessing speed, and storage retrieval times. For real-time analytics systems, like those used in fraud detection, the speed at which training data is ingested, preprocessed, and fed into the model can be critical. Benchmarks would evaluate the latency of data pipelines, the efficiency of storage systems (like SSDs vs. HDDs), and the speed of data augmentation or transformation tasks.


Metrics


When viewed from a systems perspective, training metrics offer insights that transcend conventional algorithmic performance indicators. These metrics measure the model’s learning efficacy and gauge the efficiency, scalability, and robustness of the entire ML system during the training phase. Let’s delve deeper into these metrics and their significance.


The following metrics are often considered important:

  1. Training Time: The time it takes to train a model from scratch until it reaches a satisfactory performance level. It directly measures the computational resources required to train a model. For example, Google’s BERT (Devlin et al. 2019) is a natural language processing model that requires several days to train on a massive corpus of text data using multiple GPUs. The long training time is a significant resource consumption and cost challenge.

  2. Scalability: How well the training process can handle increases in data size or model complexity. Scalability can be assessed by measuring training time, memory usage, and other resource consumption as data size or model complexity increases. OpenAI’s GPT-3 (Brown et al. 2020) model has 175 billion parameters, making it one of the largest language models in existence. Training GPT-3 required extensive engineering efforts to scale the training process to handle the massive model size. This involved using specialized hardware, distributed training, and other techniques to ensure the model could be trained efficiently.

  3. Resource Utilization: The extent to which the training process utilizes available computational resources such as CPU, GPU, memory, and disk I/O. High resource utilization can indicate an efficient training process, while low utilization can suggest bottlenecks or inefficiencies. For instance, training a convolutional neural network (CNN) for image classification requires significant GPU resources. Utilizing multi-GPU setups and optimizing the training code for GPU acceleration can greatly improve resource utilization and training efficiency.

  4. Memory Consumption: The amount of memory the training process uses. Memory consumption can be a limiting factor for training large models or datasets. For example, Google researchers faced significant memory consumption challenges when training BERT. The model has hundreds of millions of parameters, requiring large amounts of memory. The researchers had to develop techniques to reduce memory consumption, such as gradient checkpointing and model parallelism.

  5. Energy Consumption: The energy consumed during training. As machine learning models become more complex, energy consumption has become an important consideration. Training large machine learning models can consume significant energy, leading to a large carbon footprint. For instance, the training of OpenAI’s GPT-3 was estimated to have a carbon footprint equivalent to traveling by car for 700,000 kilometers.

  6. Throughput: The number of training samples processed per unit time. Higher throughput generally indicates a more efficient training process. The throughput is an important metric to consider when training a recommendation system for an e-commerce platform. A high throughput ensures that the model can process large volumes of user interaction data promptly, which is crucial for maintaining the relevance and accuracy of the recommendations. But it’s also important to understand how to balance throughput with latency bounds. Therefore, a latency-bounded throughput constraint is often imposed on service-level agreements for data center application deployments.

  7. Cost: The cost of training a model can include both computational and human resources. Cost is important when considering the practicality and feasibility of training large or complex models. Training large language models like GPT-3 is estimated to cost millions of dollars. This cost includes the computational resources, electricity, and human effort required for model development and training.

  8. Fault Tolerance and Robustness: The ability of the training process to handle failures or errors without crashing or producing incorrect results. This is important for ensuring the reliability of the training process. Network failures or hardware malfunctions can occur in a real-world scenario where a machine-learning model is being trained on a distributed system. In recent years, it has become abundantly clear that faults arising from silent data corruption have emerged as a major issue. A fault-tolerant and robust training process can recover from such failures without compromising the model’s integrity.

  9. Ease of Use and Flexibility: The ease with which the training process can be set up and used and its flexibility in handling different types of data and models. In companies like Google, efficiency can sometimes be measured by the number of Software Engineer (SWE) years saved since that translates directly to impact. Ease of use and flexibility can reduce the time and effort required to train a model. TensorFlow and PyTorch are popular machine-learning frameworks that provide user-friendly interfaces and flexible APIs for building and training machine-learning models. These frameworks support many model architectures and are equipped with tools that simplify the training process.

  10. Reproducibility: The ability to reproduce the training process results. Reproducibility is important for verifying a model’s correctness and validity. However, variations due to stochastic network characteristics often make it hard to reproduce the precise behavior of applications being trained, which can present a challenge for benchmarking.
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North, 4171–86. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/n19-1423.

By benchmarking for these types of metrics, we can obtain a comprehensive view of the training process’s performance and efficiency from a systems perspective. This can help identify areas for improvement and ensure that resources are used effectively.
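Several of these metrics (training time, throughput, and a memory proxy) can be captured with a thin instrumentation wrapper around the training loop. The sketch below is illustrative: `fake_train_step` stands in for a real gradient update, and `tracemalloc` tracks only Python-heap allocations, a rough proxy for true memory consumption rather than GPU or framework memory.

```python
import time
import tracemalloc

def benchmark_training(train_step, num_batches, batch_size):
    """Wrap a training loop to report wall-clock time, throughput,
    and peak Python-heap usage (a coarse memory-consumption proxy)."""
    tracemalloc.start()
    start = time.perf_counter()
    for step in range(num_batches):
        train_step(step)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "training_time_s": elapsed,
        "throughput_samples_per_s": num_batches * batch_size / elapsed,
        "peak_mem_bytes": peak,
    }

# Stand-in for a real gradient update on one batch.
def fake_train_step(step):
    _ = sum(i * i for i in range(10_000))

stats = benchmark_training(fake_train_step, num_batches=20, batch_size=32)
print(stats)
```

A production harness would add per-device utilization counters and energy readings, but the structure, timing a fixed workload and normalizing by samples processed, is the same.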


Tasks


Selecting a handful of representative tasks for benchmarking machine learning systems is challenging because machine learning is applied to various domains with unique characteristics and requirements. Here are some of the challenges faced in selecting representative tasks:

  1. Diversity of Applications: Machine learning is used in numerous fields such as healthcare, finance, natural language processing, computer vision, and many more. Each field has specific tasks that may not be representative of other fields. For example, image classification tasks in computer vision may not be relevant to financial fraud detection.
  2. Variability in Data Types and Quality: Different tasks require different data types, such as text, images, videos, or numerical data. Data quality and availability can vary greatly between tasks, making it difficult to select tasks that are representative of the general challenges faced in machine learning.
  3. Task Complexity and Difficulty: The complexity of tasks varies greatly. Some are relatively straightforward, while others are highly complex and require sophisticated models and techniques. Selecting representative tasks that cover the complexities encountered in machine learning is challenging.
  4. Ethical and Privacy Concerns: Some tasks may involve sensitive or private data, such as medical records or personal information. These tasks may have ethical and privacy concerns that need to be addressed, making them less suitable as representative tasks for benchmarking.
  5. Scalability and Resource Requirements: Different tasks may have different scalability and resource requirements. Some tasks may require extensive computational resources, while others can be performed with minimal resources. Selecting tasks that represent the general resource requirements in machine learning is difficult.
  6. Evaluation Metrics: The metrics used to evaluate the performance of machine learning models vary between tasks. Some tasks may have well-established evaluation metrics, while others lack clear or standardized metrics. This can make it challenging to compare performance across different tasks.
  7. Generalizability of Results: The results obtained from benchmarking on a specific task may not be generalizable to other tasks. This means that a machine learning system’s performance on a selected task may not be indicative of its performance on other tasks.

It is important to carefully consider these factors when designing benchmarks to ensure they are meaningful and relevant to the diverse range of tasks encountered in machine learning.


Benchmarks


Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for training machine learning systems.


MLPerf Training Benchmark


MLPerf is a suite of benchmarks designed to measure the performance of machine learning hardware, software, and services. The MLPerf Training benchmark (Mattson et al. 2020a) focuses on the time it takes to train models to a target quality metric. It includes diverse workloads, such as image classification, object detection, translation, and reinforcement learning.


Metrics:

  • Training time to target quality
  • Throughput (examples per second)
  • Resource utilization (CPU, GPU, memory, disk I/O)

DAWNBench


DAWNBench (Coleman et al. 2019) is a benchmark suite focusing on end-to-end deep learning training time and inference performance. It includes common tasks such as image classification and question answering.

Coleman, Cody, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. 2019. “Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark.” ACM SIGOPS Operating Systems Review 53 (1): 14–25. https://doi.org/10.1145/3352020.3352024.

Metrics:

  • Time to train to target accuracy
  • Inference latency
  • Cost (in terms of cloud computing and storage resources)

Fathom


Fathom (Adolf et al. 2016) is a benchmark from Harvard University that evaluates the performance of deep learning models using a diverse set of workloads. These include common tasks such as image classification, speech recognition, and language modeling.

Adolf, Robert, Saketh Rama, Brandon Reagen, Gu-yeon Wei, and David Brooks. 2016. “Fathom: Reference Workloads for Modern Deep Learning Methods.” In 2016 IEEE International Symposium on Workload Characterization (IISWC), 1–10. IEEE. https://doi.org/10.1109/iiswc.2016.7581275.

Metrics:

  • Operations per second (to measure computational efficiency)
  • Time to completion for each workload
  • Memory bandwidth

Example Use Case


Consider a scenario where we want to benchmark the training of an image classification model on a specific hardware platform.

  1. Task: The task is to train a convolutional neural network (CNN) for image classification on the CIFAR-10 dataset.
  2. Benchmark: We can use the MLPerf Training benchmark for this task. It includes an image classification workload that is relevant to our task.
  3. Metrics: We will measure the following metrics:
     • Training time to reach a target accuracy of 90%.
     • Throughput in terms of images processed per second.
     • GPU and CPU utilization during training.

By measuring these metrics, we can assess the performance and efficiency of the training process on the selected hardware platform. This information can then be used to identify potential bottlenecks or areas for improvement.
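The central metric in this example, MLPerf-style time-to-quality, can be sketched as a simple loop that trains until validation accuracy first crosses the target. Here `train_epoch` and `evaluate` are hypothetical stand-ins for the real training and validation routines.

```python
import time

def time_to_accuracy(train_epoch, evaluate, target=0.90, max_epochs=50):
    """MLPerf-style time-to-quality: wall-clock time until the model
    first reaches the target validation accuracy."""
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        train_epoch(epoch)
        acc = evaluate()
        if acc >= target:
            return epoch, time.perf_counter() - start
    return None, time.perf_counter() - start  # target never reached

# Toy stand-ins: accuracy improves a fixed amount each epoch.
accuracy = [0.0]
def fake_train_epoch(epoch):
    accuracy[0] = min(1.0, accuracy[0] + 0.2)
def fake_evaluate():
    return accuracy[0]

epochs, seconds = time_to_accuracy(fake_train_epoch, fake_evaluate)
print(epochs, seconds)
```

Reporting time-to-target rather than per-epoch speed prevents submitters from trading away accuracy for raw throughput.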


11.4.5 Inference Benchmarks


Inference in machine learning refers to using a trained model to make predictions on new, unseen data. It is the phase where the model applies its learned knowledge to solve the problem it was designed for, such as classifying images, recognizing speech, or translating text.


Purpose


When we build machine learning models, our ultimate goal is to deploy them in real-world applications where they can provide accurate and reliable predictions on new, unseen data. This process of using a trained model to make predictions is known as inference. A machine learning model’s real-world performance can differ significantly from its performance on training or validation datasets, which makes benchmarking inference a crucial step in the development and deployment of machine learning models.


Benchmarking inference allows us to evaluate how well a machine-learning model performs in real-world scenarios. This evaluation ensures that the model is practical and reliable when deployed in applications, providing a more comprehensive understanding of the model’s behavior with real data. Additionally, benchmarking can help identify potential bottlenecks or limitations in the model’s performance. For example, if a model takes too long to produce a prediction, it may be impractical for real-time applications such as autonomous driving or voice assistants.


Resource efficiency is another critical aspect of inference, as it can be computationally intensive and require significant memory and processing power. Benchmarking helps ensure that the model is efficient regarding resource usage, which is particularly important for edge devices with limited computational capabilities, such as smartphones or IoT devices. Moreover, benchmarking allows us to compare the performance of our model with competing models or previous versions of the same model. This comparison is essential for making informed decisions about which model to deploy in a specific application.


Finally, it is vital to ensure that the model’s predictions are not only accurate but also consistent across different data points. Benchmarking helps verify the model’s accuracy and consistency, ensuring that it meets the application’s requirements. It also assesses the model’s robustness, ensuring that it can handle real-world data variability and still make accurate predictions.


Metrics

  1. Accuracy: Accuracy is one of the most vital metrics when benchmarking machine learning models. It quantifies the proportion of correct predictions made by the model compared to the true values or labels. For example, if a spam detection model can correctly classify 95 out of 100 email messages as spam or not, its accuracy would be calculated as 95%.

  2. Latency: Latency is a performance metric that measures the time lag between receiving an input and producing the corresponding output. An example that clearly depicts latency is a real-time translation application; if a half-second delay exists from the moment a user inputs a sentence to the time the app displays the translated text, then the system’s latency is 0.5 seconds.

  3. Latency-Bounded Throughput: Latency-bounded throughput is a valuable metric that combines the aspects of latency and throughput, measuring the maximum throughput of a system while still meeting a specified latency constraint. For example, in a video streaming application that utilizes a machine learning model to generate and display subtitles automatically, latency-bounded throughput would measure how many video frames the system can process per second (throughput) while ensuring that the subtitles are displayed with no more than a 1-second delay (latency). This metric is particularly important in real-time applications where meeting latency requirements is crucial to the user experience.

  4. Throughput: Throughput assesses the system’s capacity by measuring the number of inferences or predictions a machine learning model can handle within a specific unit of time. Consider a speech recognition system that employs a Recurrent Neural Network (RNN) as its underlying model; if this system can process and understand 50 different audio clips in a minute, then its throughput rate stands at 50 clips per minute.

  5. Inference Time: Inference time is a crucial metric that measures the duration a machine learning system, such as a Convolutional Neural Network (CNN) used in image recognition tasks, takes to process an input and generate a prediction or output. For instance, if a CNN takes approximately 2 milliseconds to identify and label a cat within a given photo accurately, then its inference time is said to be 2 milliseconds.

  6. Energy Efficiency: Energy efficiency is a metric that determines the amount of energy consumed by the machine learning model to perform a single inference. A prime example of this would be a natural language processing model built on a Transformer network architecture; if it utilizes 0.1 Joules of energy to translate a sentence from English to French, its energy efficiency is measured at 0.1 Joules per inference.

  7. Memory Usage: Memory usage quantifies the volume of RAM needed by a machine learning model to carry out inference tasks. A relevant example to illustrate this would be a face recognition system based on a CNN; if such a system requires 150 MB of RAM to process and recognize faces within an image, its memory usage is 150 MB.
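A minimal harness can collect latency, throughput, and latency-bounded throughput from repeated calls to a model. In this illustrative sketch, the `predict` callable stands in for a real model, and the 1 ms latency bound is an arbitrary choice made for the example.

```python
import time
import statistics

def benchmark_inference(predict, inputs, latency_bound_s=0.001):
    """Per-request latency statistics plus throughput, and the rate of
    requests that met a latency bound (latency-bounded throughput)."""
    latencies = []
    start = time.perf_counter()
    for x in inputs:
        t0 = time.perf_counter()
        predict(x)
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start
    within_bound = sum(1 for lat in latencies if lat <= latency_bound_s)
    return {
        "mean_latency_s": statistics.mean(latencies),
        "p99_latency_s": sorted(latencies)[int(0.99 * len(latencies))],
        "throughput_per_s": len(inputs) / total,
        "bounded_throughput_per_s": within_bound / total,
    }

# Stand-in model: a trivial function in place of a real network.
stats = benchmark_inference(lambda x: x * 2, inputs=list(range(1000)))
print(stats)
```

Tail latency (here a simple p99) often matters more than the mean in deployed systems, which is why benchmark suites report percentiles rather than averages alone.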

Tasks


The challenges in picking representative tasks for benchmarking inference machine learning systems largely mirror those we outlined for training. Nevertheless, let’s discuss them in the context of inference systems specifically.

  1. Diversity of Applications: Inference machine learning is employed across numerous domains such as healthcare, finance, entertainment, security, and more. Each domain has unique tasks, and what’s representative in one domain might not be in another. For example, an inference task for predicting stock prices in the financial domain might differ from image recognition tasks in the medical domain.

  2. Variability in Data Types: Different inference tasks require different types of data—text, images, videos, numerical data, etc. Ensuring that benchmarks address the wide variety of data types used in real-world applications is challenging. For example, voice recognition systems process audio data, which is vastly different from the visual data processed by facial recognition systems.

  3. Task Complexity: The complexity of inference tasks can differ immensely, from basic classification tasks to intricate tasks requiring state-of-the-art models. For example, differentiating between two categories (binary classification) is typically simpler than detecting hundreds of object types in a crowded scene.

  4. Real-time Requirements: Some applications demand immediate or real-time responses, while others may allow for some delay. In autonomous driving, real-time object detection and decision-making are paramount, whereas a recommendation engine for a shopping website might tolerate slight delays.

  5. Scalability Concerns: Given the varied scale of applications, from edge devices to cloud-based servers, tasks must represent the diverse computational environments where inference occurs. For example, an inference task running on a smartphone’s limited resources differs from a powerful cloud server.

  6. Evaluation Metrics Diversity: The metrics used to evaluate performance can differ significantly depending on the task. Finding a common ground or universally accepted metric for diverse tasks is challenging. For example, precision and recall might be vital for a medical diagnosis task, whereas throughput (inferences per second) might be more crucial for video processing tasks.

  7. Ethical and Privacy Concerns: Concerns related to ethics and privacy exist, especially in sensitive areas like facial recognition or personal data processing. These concerns can impact the selection and nature of tasks used for benchmarking. For example, using real-world facial data for benchmarking can raise privacy issues, whereas synthetic data might not replicate real-world challenges.

  8. Hardware Diversity: With a wide range of devices from GPUs, CPUs, and TPUs to custom ASICs used for inference, ensuring that tasks are representative across varied hardware is challenging. For example, a task optimized for inference on a GPU might perform sub-optimally on an edge device.

Benchmarks


Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for inference machine learning systems.


MLPerf Inference Benchmark


MLPerf Inference is a comprehensive benchmark suite that assesses machine learning models’ performance during the inference phase. It encompasses a variety of workloads, including image classification, object detection, and natural language processing, aiming to provide standardized and insightful metrics for evaluating different inference systems.


Metrics:

  • Inference time
  • Latency
  • Throughput
  • Accuracy
  • Energy consumption

AI Benchmark


AI Benchmark is a benchmarking tool that evaluates the performance of AI and machine learning models on mobile devices and edge computing platforms. It includes tests for image classification, object detection, and natural language processing tasks, providing a detailed analysis of the inference performance on different hardware platforms.


Metrics:

  • Inference time
  • Latency
  • Energy consumption
  • Memory usage
  • Throughput

OpenVINO toolkit


OpenVINO toolkit provides a benchmark tool to measure the performance of deep learning models for various tasks, such as image classification, object detection, and facial recognition, on Intel hardware. It offers detailed insights into the models’ inference performance on different hardware configurations.


Metrics:

  • Inference time
  • Throughput
  • Latency
  • CPU and GPU utilization

Example Use Case


Consider a scenario where we want to evaluate the inference performance of an object detection model on a specific edge device.


Task: The task is to perform real-time object detection on video streams, detecting and identifying objects such as vehicles, pedestrians, and traffic signs.


Benchmark: We can use the AI Benchmark for this task as it evaluates inference performance on edge devices, which suits our scenario.


Metrics: We will measure the following metrics:

  • Inference time to process each video frame
  • Latency to generate the bounding boxes for detected objects
  • Energy consumption during the inference process
  • Throughput in terms of video frames processed per second

By measuring these metrics, we can assess the performance of the object detection model on the edge device and identify any potential bottlenecks or areas for optimization to enhance real-time processing capabilities.


Exercise 11.2 (Inference Benchmarks - MLPerf)  


Get ready to put your AI models to the ultimate test! MLPerf is like the Olympics for machine learning performance. In this Colab, we’ll use a toolkit called CK to run official MLPerf benchmarks, measure how fast and accurate your model is, and even use TVM to give it a super speed boost. Are you ready to see your model earn its medal?


11.4.6 Benchmark Example


To properly illustrate the components of a systems benchmark, we can look at the keyword spotting benchmark in MLPerf Tiny and explain the motivation behind each decision.


Task


Keyword spotting was selected as a task because it is a common use case in TinyML that has been well-established for years. Additionally, the typical hardware used for keyword spotting differs substantially from the offerings of other benchmarks, such as MLPerf Inference’s speech recognition task.


Dataset


Google Speech Commands (Warden 2018) was selected as the best dataset to represent the task. The dataset is well established in the research community and has permissive licensing, allowing it to be easily used in a benchmark.

Warden, Pete. 2018. “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition.” ArXiv Preprint abs/1804.03209. https://arxiv.org/abs/1804.03209.

Model


The next core component is the model, which will act as the primary workload for the benchmark. The model should be well established as a solution to the selected task rather than a state-of-the-art solution. The model selected is a simple depthwise separable convolution model. This architecture is not the state-of-the-art solution to the task, but it is well-established and not designed for a specific hardware platform like many state-of-the-art solutions. Despite being an inference benchmark, the benchmark also establishes a reference training recipe to be fully reproducible and transparent.


Metrics


Latency was selected as the primary metric for the benchmark, as keyword spotting systems need to react quickly to maintain user satisfaction. Additionally, given that TinyML systems are often battery-powered, energy consumption is measured to ensure the hardware platform is efficient. The accuracy of the model is also measured to ensure that the optimizations applied by a submitter, such as quantization, don’t degrade the accuracy beyond a threshold.


Benchmark Harness


MLPerf Tiny uses EEMBC’s EnergyRunner benchmark harness to load the inputs to the model and to isolate and measure the device’s energy consumption. When measuring energy consumption, it’s critical to select a harness that is accurate at the expected power levels of the devices under test and simple enough not to become a burden for benchmark participants.


Baseline Submission


Baseline submissions are critical for contextualizing results and as a reference point to help participants get started. The baseline submission should prioritize simplicity and readability over state-of-the-art performance. The keyword spotting baseline uses a standard STM microcontroller as its hardware and TensorFlow Lite for Microcontrollers (David et al. 2021) as its inference framework.

David, Robert, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, et al. 2021. “TensorFlow Lite Micro: Embedded Machine Learning for TinyML Systems.” Proceedings of Machine Learning and Systems 3: 800–811.

11.4.7 Challenges and Limitations


While benchmarking provides a structured methodology for performance evaluation in complex domains like artificial intelligence and computing, the process also poses several challenges. If not properly addressed, these challenges can undermine the credibility and accuracy of benchmarking results. Some of the predominant difficulties faced in benchmarking include the following:

  • Incomplete problem coverage - Benchmark tasks may not fully represent the problem space. For instance, common image classification datasets like CIFAR-10 have limited diversity in image types. Algorithms tuned for such benchmarks may fail to generalize well to real-world datasets.
  • Statistical insignificance - Benchmarks must have enough trials and data samples to produce statistically significant results. For example, benchmarking an OCR model on only a few text scans may not adequately capture its true error rates.
  • Limited reproducibility - Varying hardware, software versions, codebases, and other factors can reduce the reproducibility of benchmark results. MLPerf addresses this by providing reference implementations and environment specifications.
  • Misalignment with end goals - Benchmarks focusing only on speed or accuracy metrics may be misaligned with real-world objectives like cost and power efficiency. Benchmarks must reflect all critical performance axes.
  • Rapid staleness - Due to the rapid pace of advancements in AI and computing, benchmarks and their datasets can quickly become outdated. Maintaining up-to-date benchmarks is thus a persistent challenge.
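
The statistical-insignificance concern can be checked cheaply in practice: given repeated measurements from a benchmark run, a confidence interval for the mean tells you whether more trials are needed before two systems can be distinguished. A minimal sketch, assuming roughly normal sample means (the 1.96 factor gives a 95% interval, and the latency values are made up):

```python
import statistics

def mean_confidence_interval(samples, z=1.96):
    """95% normal-approximation confidence interval for the mean."""
    mean = statistics.fmean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5  # standard error
    return mean - z * sem, mean + z * sem

# Hypothetical per-run latencies (ms) from six benchmark trials:
lo, hi = mean_confidence_interval([12.1, 11.8, 12.4, 12.0, 11.9, 12.3])
# If a competing system's mean latency falls inside (lo, hi), the
# difference is not distinguishable from noise at this sample size.
```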

But of all these, the most important challenge is benchmark engineering.


Hardware Lottery


The “hardware lottery” in benchmarking machine learning systems refers to the situation where the success or efficiency of a machine learning model is significantly influenced by the compatibility of the model with the underlying hardware (Chu et al. 2021). In other words, some models perform exceptionally well because they are a good fit for the particular characteristics or capabilities of the hardware they are run on, rather than because they are intrinsically superior models. Figure fig-hardware-lottery demonstrates the performance of different models on different hardware: notice how (follow the big yellow arrow) the MobileNet V3 Large model (in green) has the lowest latency among all models when run unquantized on the Pixel4 CPU. At the same time, it performs the worst on the Pixel4 DSP (Qualcomm Snapdragon 855). Unfortunately, the hardware used is often omitted from papers or only briefly mentioned, making results difficult, if not impossible, to reproduce.

Chu, Grace, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, and Andrew Howard. 2021. “Discovering Multi-Hardware Mobile Models via Architecture Search.” In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 3022–31. IEEE. https://doi.org/10.1109/cvprw53098.2021.00337.

Figure 11.2: Hardware Lottery.

For instance, certain machine learning models may be designed and optimized to take advantage of the parallel processing capabilities of specific hardware accelerators, such as Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs). As a result, these models might show superior performance when benchmarked on such hardware compared to other models that are not optimized for the hardware.


For example, a 2018 paper introduced a new convolutional neural network architecture for image classification that achieved state-of-the-art accuracy on ImageNet. However, the paper only mentioned that the model was trained on 8 GPUs, without specifying the GPU model, memory size, or other relevant details. A follow-up study that tried to reproduce the results found that training the same model on commonly available GPUs achieved 10% lower accuracy, even after hyperparameter tuning. The original hardware likely had far higher memory bandwidth and compute power. As another example, training times for large language models can vary drastically based on the GPUs used.


The “hardware lottery” can introduce challenges and biases in benchmarking machine learning systems, as the model’s performance is not solely dependent on the model’s architecture or algorithm but also on the compatibility and synergies with the underlying hardware. This can make it difficult to compare different models fairly and to identify the best model based on its intrinsic merits. It can also lead to a situation where the community converges on models that are a good fit for the popular hardware of the day, potentially overlooking other models that might be superior but incompatible with the current hardware trends.


Benchmark Engineering


The hardware lottery occurs when a machine learning model unintentionally performs exceptionally well or poorly on a specific hardware setup due to unforeseen compatibility or incompatibility. The model is not explicitly designed or optimized for that particular hardware by its developers or engineers; rather, it happens to align or (mis)align with the hardware’s capabilities or limitations. In this case, the model’s performance on the hardware is a byproduct of coincidence rather than design.


In contrast to the accidental hardware lottery, benchmark engineering involves deliberately optimizing or designing a machine learning model to perform exceptionally well on specific hardware, often to win benchmarks or competitions. This intentional optimization might include tweaking the model’s architecture, algorithms, or parameters to exploit the hardware’s features and capabilities fully.


Problem


Benchmark engineering refers to tweaking or modifying an AI system to optimize performance on specific benchmark tests, often at the expense of generalizability or real-world performance. This can include adjusting hyperparameters, training data, or other aspects of the system specifically to achieve high scores on benchmark metrics without necessarily improving the overall functionality or utility of the system.


The motivation behind benchmark engineering often stems from the desire to achieve high-performance scores for marketing or competitive purposes. High benchmark scores can demonstrate the superiority of an AI system compared to competitors and can be a key selling point for potential users or investors. This pressure to perform well on benchmarks sometimes leads to prioritizing benchmark-specific optimizations over more holistic improvements to the system.


It can lead to several risks and challenges. One of the primary risks is that the AI system may perform worse in real-world applications than its benchmark scores suggest. This can lead to user dissatisfaction, reputational damage, and potential safety or ethical concerns. Furthermore, benchmark engineering can contribute to a lack of transparency and accountability in the AI community, as it can be difficult to discern how much of an AI system’s performance is due to genuine improvements versus benchmark-specific optimizations.


The AI community must prioritize transparency and accountability to mitigate the risks associated with benchmark engineering. This can include disclosing any optimizations or adjustments made specifically for benchmark tests and providing more comprehensive evaluations of AI systems that include real-world performance metrics and benchmark scores. Researchers and developers must prioritize holistic improvements to AI systems that improve their generalizability and functionality across various applications rather than focusing solely on benchmark-specific optimizations.


Issues


One of the primary problems with benchmark engineering is that it can compromise the real-world performance of AI systems. When developers focus on optimizing their systems to achieve high scores on specific benchmark tests, they may neglect other important aspects of system performance that are crucial in real-world applications. For example, an AI system designed for image recognition might be engineered to perform exceptionally well on a benchmark test that includes a specific set of images, but struggle to accurately recognize images that differ slightly from those in the test set.


Another problem with benchmark engineering is that it can result in AI systems that lack generalizability. In other words, while the system may perform well on the benchmark test, it may struggle to handle a diverse range of inputs or scenarios. For instance, an AI model developed for natural language processing might be engineered to achieve high scores on a benchmark test that includes a specific type of text but fail to accurately process text that falls outside of that type.


It can also lead to misleading results. When AI systems are engineered to perform well on benchmark tests, the results may not accurately reflect the system’s true capabilities. This can be problematic for users or investors who rely on benchmark scores to make informed decisions about which AI systems to use or invest in. For example, an AI system engineered to achieve high scores on a benchmark test for speech recognition might be far less capable of accurately recognizing speech in real-world situations, leading users or investors to make decisions based on inaccurate information.


Mitigation


There are several ways to mitigate benchmark engineering. Transparency in the benchmarking process is crucial to maintaining benchmark accuracy and reliability. This involves clearly disclosing the methodologies, data sets, and evaluation criteria used in benchmark tests, as well as any optimizations or adjustments made to the AI system for the purpose of the benchmark.


One way to achieve transparency is through the use of open-source benchmarks. Open-source benchmarks are made publicly available, allowing researchers, developers, and other stakeholders to review, critique, and contribute to them, thereby ensuring their accuracy and reliability. This collaborative approach also facilitates sharing best practices and developing more robust and comprehensive benchmarks.


One example is MLPerf Tiny, an open-source framework designed to make it easy to compare different solutions in the world of TinyML. Its modular design allows components to be swapped out for comparison or improvement. The reference implementations, shown in green and orange in Figure fig-ml-perf, act as the baseline for results. TinyML often needs optimization across the entire system, and users can contribute by focusing on specific parts, like quantization. The modular benchmark design allows users to showcase their contributions and competitive advantage by modifying a reference implementation. In short, MLPerf Tiny offers a flexible and modular way to assess and enhance TinyML applications, making it easier to compare and improve different aspects of the technology.

Figure 11.3: MLPerf Tiny modular design. Credit: Mattson et al. (2020a).

———, et al. 2020a. “MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance.” IEEE Micro 40 (2): 8–16. https://doi.org/10.1109/mm.2020.2974843.

Another method for achieving transparency is through peer review of benchmarks. This involves having independent experts review and validate the benchmark’s methodology, data sets, and results to ensure their credibility and reliability. Peer review can provide a valuable means of verifying the accuracy of benchmark tests and help build confidence in the results.


Standardization of benchmarks is another important solution to mitigate benchmark engineering. Standardized benchmarks provide a common framework for evaluating AI systems, ensuring consistency and comparability across different systems and applications. This can be achieved by developing industry-wide standards and best practices for benchmarking and through common metrics and evaluation criteria.


Third-party verification of results can also be valuable in mitigating benchmark engineering. This involves having an independent third party verify the results of a benchmark test to ensure their credibility and reliability. Third-party verification can build confidence in the results and provide a valuable means of validating the performance and capabilities of AI systems.


11.5 Model Benchmarking


Benchmarking machine learning models is important for determining the effectiveness and efficiency of various machine learning algorithms in solving specific tasks or problems. By analyzing the results obtained from benchmarking, developers and researchers can identify their models’ strengths and weaknesses, leading to more informed decisions on model selection and further optimization.


The evolution and progress of machine learning models are intrinsically linked to the availability and quality of data sets. In machine learning, data acts as the raw material that powers the algorithms, allowing them to learn, adapt, and ultimately perform tasks that were traditionally the domain of humans. Therefore, it is important to understand this history.


11.5.1 Historical Context


Machine learning datasets have a rich history and have evolved significantly over the years, growing in size, complexity, and diversity to meet the ever-increasing demands of the field. Let’s take a closer look at this evolution, starting from one of the earliest and most iconic datasets – MNIST.


MNIST (1998)


The MNIST dataset, created by Yann LeCun, Corinna Cortes, and Christopher J.C. Burges in 1998, can be considered a cornerstone in the history of machine learning datasets. It comprises 70,000 labeled 28x28 pixel grayscale images of handwritten digits (0-9). MNIST has been widely used for benchmarking algorithms in image processing and machine learning as a starting point for many researchers and practitioners. Figure fig-mnist shows some examples of handwritten digits.

Figure 11.4: MNIST handwritten digits. Credit: Suvanjanprasai.

ImageNet (2009)


Fast forward to 2009, and we see the introduction of the ImageNet dataset, which marked a significant leap in the scale and complexity of datasets. ImageNet consists of over 14 million labeled images spanning more than 20,000 categories. Fei-Fei Li and her team developed it to advance object recognition and computer vision research. The dataset became synonymous with the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual competition crucial in developing deep learning models, including the famous AlexNet in 2012.


COCO (2014)


The Common Objects in Context (COCO) dataset (Lin et al. 2014), released in 2014, further expanded the landscape of machine learning datasets by introducing a richer set of annotations. COCO consists of images containing complex scenes with multiple objects, and each image is annotated with object bounding boxes, segmentation masks, and captions. This dataset has been instrumental in advancing research in object detection, segmentation, and image captioning.

Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. “Microsoft COCO: Common Objects in Context.” In Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, 740–55. Springer.

COCO dataset. Credit: COCO (https://cocodataset.org/images/jpg/coco-examples.jpg).

GPT-3 (2020)


While the above examples primarily focus on image datasets, there have also been significant developments in text datasets. One notable example is GPT-3 (Brown et al. 2020), developed by OpenAI. GPT-3 is a language model trained on diverse internet text. Although the dataset used to train GPT-3 is not publicly available, the model itself, consisting of 175 billion parameters, is a testament to the scale and complexity of modern machine learning datasets and models.

Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.

Present and Future


Today, we have a plethora of datasets spanning various domains, including healthcare, finance, social sciences, and more. The following characteristics help us taxonomize the space and growth of machine learning datasets that fuel model development.

  1. Diversity of Data Sets: The variety of data sets available to researchers and engineers has expanded dramatically, covering many fields, including natural language processing, image recognition, and more. This diversity has fueled the development of specialized machine-learning models tailored to specific tasks, such as translation, speech recognition, and facial recognition.

  2. Volume of Data: The sheer volume of data that has become available in the digital age has also played a crucial role in advancing machine learning models. Large data sets enable models to capture the complexity and nuances of real-world phenomena, leading to more accurate and reliable predictions.

  3. Quality and Cleanliness of Data: The quality of data is another critical factor that influences the performance of machine learning models. Clean, well-labeled, and unbiased data sets are essential for training models that are robust and fair.

  4. Open Access to Data: The availability of open-access data sets has also contributed significantly to machine learning’s progress. Open data allows researchers from around the world to collaborate, share insights, and build upon each other’s work, leading to faster innovation and the development of more advanced models.

  5. Ethics and Privacy Concerns: As data sets grow in size and complexity, ethical considerations and privacy concerns become increasingly important. There is an ongoing debate about the balance between leveraging data for machine learning advancements and protecting individuals’ privacy rights.

The development of machine learning models relies heavily on the availability of diverse, large, high-quality, and open-access data sets. As we move forward, addressing the ethical considerations and privacy concerns associated with using large data sets is crucial to ensure that machine learning technologies benefit society. There is a growing awareness that data acts as the rocket fuel for machine learning, driving and fueling the development of machine learning models. Consequently, more focus is being placed on developing the data sets themselves. We will explore this in further detail in the data benchmarking section.


11.5.2 Model Metrics


Machine learning model evaluation has evolved from a narrow focus on accuracy to a more comprehensive approach considering a range of factors, from ethical considerations and real-world applicability to practical constraints like model size and efficiency. This shift reflects the field’s maturation as machine learning models are increasingly applied in diverse, complex real-world scenarios.


Accuracy


Accuracy is one of the most intuitive and commonly used metrics for evaluating machine learning models. At its core, accuracy measures the proportion of correct predictions made by the model out of all predictions. For example, imagine we have developed a machine learning model to classify images as either containing a cat or not. If we test this model on a dataset of 100 images, and it correctly identifies 90 of them, we would calculate its accuracy as 90%.
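
The cat-classifier example maps directly onto the standard definition. A minimal sketch:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# 100 images, 90 classified correctly -> accuracy of 0.9 (90%)
preds = [1] * 90 + [0] * 10   # model says "cat" for 90, "not cat" for 10
labels = [1] * 100            # all images actually contain a cat
print(accuracy(preds, labels))  # 0.9
```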


In the initial stages of machine learning, accuracy was often the primary, if not the only, metric considered when evaluating model performance. This is understandable, given its straightforward nature and ease of interpretation. However, as the field has progressed, the limitations of relying solely on accuracy have become more apparent.


Consider the example of a medical diagnosis model with an accuracy of 95%. While at first glance this may seem impressive, we must delve deeper to assess the model’s performance fully. Suppose the model fails to accurately diagnose conditions that, while rare, have severe consequences; its high overall accuracy is then far less meaningful. A pertinent example of this is Google’s retinopathy machine learning model, which was designed to diagnose diabetic retinopathy and diabetic macular edema from retinal photographs.


The Google model demonstrated impressive accuracy levels in lab settings. However, when deployed in real-world clinical environments in Thailand, it faced significant challenges. In the real-world setting, the model encountered diverse patient populations, varying image quality, and a range of different medical conditions that it had not been exposed to during its training. Consequently, its performance degraded, and it struggled to maintain the accuracy levels observed in lab settings. This example serves as a clear reminder that while high accuracy is an important and desirable attribute for a medical diagnosis model, it must be evaluated in conjunction with other factors, such as the model’s ability to generalize to different populations and handle diverse and unpredictable real-world conditions, to truly understand its value and potential impact on patient care.


Similarly, if the model performs well on average but exhibits significant disparities in performance across different demographic groups, this, too, would be cause for concern.


The evolution of machine learning has thus seen a shift towards a more holistic approach to model evaluation, taking into account not just accuracy, but also other crucial factors such as fairness, transparency, and real-world applicability. A prime example is the Gender Shades project at MIT Media Lab, led by Joy Buolamwini, highlighting significant racial and gender biases in commercial facial recognition systems. The project evaluated the performance of three facial recognition technologies developed by IBM, Microsoft, and Face++. It found that they all exhibited biases, performing better on lighter-skinned and male faces compared to darker-skinned and female faces.


While accuracy remains a fundamental and valuable metric for evaluating machine learning models, a more comprehensive approach is required to fully assess a model’s performance. This means considering additional metrics that account for fairness, transparency, and real-world applicability, as well as conducting rigorous testing across diverse datasets to uncover and mitigate any potential biases. The move towards a more holistic approach to model evaluation reflects the maturation of the field and its increasing recognition of the real-world implications and ethical considerations associated with deploying machine learning models.


Fairness


Fairness in machine learning models is a multifaceted and critical aspect that requires careful attention, particularly in high-stakes applications that significantly affect people’s lives, such as in loan approval processes, hiring, and criminal justice. It refers to the equitable treatment of all individuals, irrespective of their demographic or social attributes such as race, gender, age, or socioeconomic status.


Simply relying on accuracy can be insufficient and potentially misleading when evaluating models. For instance, consider a loan approval model with a 95% accuracy rate. While this figure may appear impressive at first glance, it does not reveal how the model performs across different demographic groups. If this model consistently discriminates against a particular group, its accuracy is less commendable, and its fairness is questioned.


Discrimination can manifest in various forms, such as direct discrimination, where a model explicitly uses sensitive attributes like race or gender in its decision-making process, or indirect discrimination, where seemingly neutral variables correlate with sensitive attributes, indirectly influencing the model’s outcomes. An infamous example of the latter is the COMPAS tool used in the US criminal justice system, which exhibited racial biases in predicting recidivism rates despite not explicitly using race as a variable.


Addressing fairness involves careful examination of the model’s performance across diverse groups, identifying potential biases, and rectifying disparities through corrective measures such as re-balancing datasets, adjusting model parameters, and implementing fairness-aware algorithms. Researchers and practitioners continuously develop metrics and methodologies tailored to specific use cases to evaluate fairness in real-world scenarios. For example, disparate impact analysis, demographic parity, and equal opportunity are some of the metrics employed to assess fairness.
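
Two of the metrics just mentioned, demographic parity and equal opportunity, are straightforward to compute from model outputs. A minimal sketch; the group labels and toy predictions below are purely illustrative, not tied to any particular dataset:

```python
def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates across groups."""
    rate = {}
    for g in set(groups):
        sel = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(sel) / len(sel)
    vals = sorted(rate.values())
    return vals[-1] - vals[0]

def equal_opportunity_gap(preds, labels, groups):
    """Largest difference in true-positive rates across groups."""
    tpr = {}
    for g in set(groups):
        pos = [(p, y) for p, y, gg in zip(preds, labels, groups)
               if gg == g and y == 1]
        tpr[g] = sum(p for p, _ in pos) / len(pos)
    vals = sorted(tpr.values())
    return vals[-1] - vals[0]

# Toy example: group "a" receives positive predictions 3/4 of the
# time, group "b" only 1/4 of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap of 0 indicates parity on that metric; larger gaps flag disparities that warrant closer investigation.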


Additionally, transparency and interpretability of models are fundamental to achieving fairness. Understanding how a model makes decisions can reveal potential biases and enable stakeholders to hold developers accountable. Open-source tools like AI Fairness 360 by IBM and Fairness Indicators by TensorFlow are being developed to facilitate fairness assessments and mitigation of biases in machine learning models.


Ensuring fairness in machine learning models, particularly in applications that significantly impact people’s lives, requires rigorous evaluation of the model’s performance across diverse groups, careful identification and mitigation of biases, and implementation of transparency and interpretability measures. By comprehensively addressing fairness, we can work towards developing machine learning models that are equitable, just, and beneficial for society.


Complexity

Parameters

In the initial stages of machine learning, model benchmarking often relied on parameter counts as a proxy for model complexity. The rationale was that more parameters typically lead to a more complex model, which should, in turn, deliver better performance. However, this approach has proven inadequate, as it fails to account for the computational cost associated with processing large numbers of parameters.


For example, GPT-3, developed by OpenAI, is a language model that boasts an astounding 175 billion parameters. While it achieves state-of-the-art performance on various natural language processing tasks, its size and the computational resources required to run it make it impractical for deployment in many real-world scenarios, especially those with limited computational capabilities.


Relying on parameter counts as a proxy for model complexity also fails to consider the model’s efficiency. If optimized for efficiency, a model with fewer parameters might be just as effective, if not more so, than a model with a higher parameter count. For instance, MobileNets, developed by Google, is a family of models designed specifically for mobile and edge devices. They utilize depth-wise separable convolutions to reduce the number of parameters and computational costs while still achieving competitive performance.


In light of these limitations, the field has moved towards a more holistic approach to model benchmarking that considers parameter counts alongside other crucial factors such as floating-point operations (FLOPs), memory consumption, and latency. FLOPs, in particular, have emerged as an important metric as they provide a more accurate representation of the computational load a model imposes. This shift towards a more comprehensive approach to model benchmarking reflects a recognition of the need to balance performance with practicality, ensuring that models are effective, efficient, and deployable in real-world scenarios.
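
As a back-of-the-envelope illustration of why parameter count and compute cost diverge, the two can be estimated separately for simple layer types. The formulas below are the standard dense-layer and convolution estimates (one multiply plus one add per weight application); the layer sizes are made up:

```python
def dense_stats(n_in, n_out):
    """Parameter count and per-inference FLOPs for a dense layer."""
    params = n_in * n_out + n_out   # weights + biases
    flops = 2 * n_in * n_out        # one multiply + one add per weight
    return params, flops

def conv2d_stats(h, w, c_in, c_out, k):
    """Same, for a k x k convolution over an h x w x c_in input
    (stride 1, 'same' padding)."""
    params = k * k * c_in * c_out + c_out
    flops = 2 * k * k * c_in * c_out * h * w  # weights reused per pixel
    return params, flops

# A convolution can have ~30x fewer parameters than a dense layer
# yet cost hundreds of times more FLOPs, because its weights are
# reused at every spatial position.
print(dense_stats(1024, 1024))            # (1049600, 2097152)
print(conv2d_stats(112, 112, 64, 64, 3))  # (36928, 924844032)
```

This is exactly the divergence that makes parameter count alone a poor proxy for computational load.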

FLOPs

The size of a machine learning model is an essential aspect that directly impacts its usability in practical scenarios, especially when computational resources are limited. Traditionally, the number of parameters in a model was often used as a proxy for its size, with the underlying assumption being that more parameters would translate to better performance. However, this simplistic view does not consider the computational cost of processing these parameters. This is where the concept of floating-point operations (FLOPs) comes into play, providing a more accurate representation of the computational load a model imposes.


FLOPs measure the number of floating-point operations a model performs to generate a prediction. A model with many FLOPs requires substantial computational resources to process the vast number of operations, which may render it impractical for certain applications. Conversely, a model with a lower FLOP count is more lightweight and can be easily deployed in scenarios where computational resources are limited.


Let’s consider an example. BERT (Bidirectional Encoder Representations from Transformers), a popular natural language processing model, has over 340 million parameters, making it a large model with high accuracy and impressive performance across various tasks. However, the sheer size of BERT, coupled with its high FLOP count, makes it a computationally intensive model that may not be suitable for real-time applications or deployment on edge devices with limited computational capabilities.

+

In light of this, there has been a growing interest in developing smaller models that can achieve performance comparable to their larger counterparts while imposing a lower computational load. DistilBERT, for instance, is a smaller version of BERT that retains 97% of its performance while being 40% smaller in terms of parameter count. The size reduction also translates to a lower FLOP count, making DistilBERT a more practical choice for resource-constrained scenarios.

+

In summary, while parameter count provides a useful indication of model size, it is not a comprehensive metric, as it does not capture the computational cost associated with processing these parameters. FLOPs, on the other hand, offer a more accurate representation of a model’s computational load and are thus an essential consideration when deploying machine learning models in real-world scenarios, particularly when computational resources are limited. The evolution from relying solely on parameter count to considering FLOPs signifies a maturation in the field, reflecting a greater awareness of the practical constraints and challenges of deploying machine learning models in diverse settings.

+
+
+
Efficiency
+

Efficiency metrics, such as memory consumption and latency/throughput, have also gained prominence. These metrics are particularly crucial when deploying models on edge devices or in real-time applications, as they measure how quickly a model can process data and how much memory it requires. In this context, Pareto curves are often used to visualize the trade-off between different metrics, helping stakeholders decide which model best suits their needs.
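One way to operationalize such a trade-off analysis is to compute the Pareto frontier of a set of benchmarked models: a model is Pareto-optimal if no other model is both more accurate and faster. The sketch below assumes just two metrics, accuracy and latency, and the model names and numbers are invented for illustration:

```python
# Pareto-frontier selection over benchmarked models.
# Higher accuracy and lower latency are both "better"; a model is dominated
# if some other model is at least as good on both metrics and strictly
# better on at least one.

def pareto_frontier(models):
    """models: list of (name, accuracy, latency_ms).
    Returns the non-dominated subset, preserving input order."""
    frontier = []
    for name, acc, lat in models:
        dominated = any(
            a2 >= acc and l2 <= lat and (a2 > acc or l2 < lat)
            for _, a2, l2 in models
        )
        if not dominated:
            frontier.append((name, acc, lat))
    return frontier

candidates = [
    ("big",  0.95, 120.0),
    ("mid",  0.93, 40.0),
    ("tiny", 0.88, 5.0),
    ("bad",  0.85, 60.0),   # dominated by "mid": less accurate and slower
]
print(pareto_frontier(candidates))  # "bad" is excluded
```

Plotting the frontier points against the dominated ones is what produces the Pareto curves stakeholders use to pick a model for their latency budget.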

+
+
+
+
+

11.5.3 Lessons Learned

+

Model benchmarking has offered us several valuable insights that can be leveraged to drive innovation in system benchmarks. The progression of machine learning models has been profoundly influenced by the advent of leaderboards and the open-source availability of models and datasets. These elements have served as significant catalysts, propelling innovation and accelerating the integration of cutting-edge models into production environments. However, as we will explore further, these are not the only contributors to the development of machine learning benchmarks.

+

Leaderboards play a vital role in providing an objective and transparent method for researchers and practitioners to evaluate the efficacy of different models, ranking them based on their performance in benchmarks. This system fosters a competitive environment, encouraging the development of models that are not only accurate but also efficient. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a prime example of this, with its annual leaderboard significantly contributing to developing groundbreaking models such as AlexNet.

+

Open-source access to state-of-the-art models and datasets further democratizes machine learning, facilitating collaboration among researchers and practitioners worldwide. This open access accelerates the process of testing, validation, and deployment of new models in production environments, as evidenced by the widespread adoption of models like BERT and GPT-3 in various applications, from natural language processing to more complex, multi-modal tasks.

+

Community collaboration platforms like Kaggle have revolutionized the field by hosting competitions that unite data scientists from across the globe to solve intricate problems. Specific benchmarks serve as the goalposts for innovation and model development.

+

Moreover, the availability of diverse and high-quality datasets is paramount in training and testing machine learning models. Datasets such as ImageNet have played an instrumental role in the evolution of image recognition models, while extensive text datasets have facilitated advancements in natural language processing models.

+

Lastly, the contributions of academic and research institutions cannot be overlooked. Their role in publishing research papers, sharing findings at conferences, and fostering collaboration between various institutions has significantly contributed to advancing machine learning models and benchmarks.

+ +
+
+

11.5.4 Limitations and Challenges

+

While model benchmarks are an essential tool in assessing machine learning models, several limitations and challenges should be addressed to ensure that they accurately reflect a model’s performance in real-world scenarios.

+

Dataset does not Correspond to Real-World Scenarios: Often, the data used in model benchmarks is cleaned and preprocessed to such an extent that it may not accurately represent the data that a model would encounter in real-world applications. This idealized version of the data can lead to overestimating a model’s performance. In the case of the ImageNet dataset, the images are well-labeled and categorized. Still, in a real-world scenario, a model may need to deal with images that are blurry, poorly lit, or taken from awkward angles. This discrepancy can significantly affect the model’s performance.
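One simple way to probe this discrepancy is to corrupt clean benchmark inputs and re-evaluate the model on the degraded versions. The sketch below is a minimal, dependency-free stand-in: a grayscale image is held as a list of pixel rows, and dimming and a box blur crudely mimic poor lighting and defocus:

```python
# Perturb a "clean" benchmark image to mimic real-world capture conditions
# before re-measuring accuracy. Pixel values here are illustrative.

def dim(image, factor=0.5):
    """Simulate under-exposure by scaling pixel intensities down."""
    return [[p * factor for p in row] for row in image]

def box_blur(image):
    """3x3 mean filter as a crude stand-in for motion/defocus blur."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if 0 <= i + di < h and 0 <= j + dj < w]
            out[i][j] = sum(vals) / len(vals)
    return out

clean = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
hard = box_blur(dim(clean))   # evaluate the model on this version too
```

The gap between accuracy on `clean` and on `hard` inputs gives a rough, model-agnostic signal of how optimistic the benchmark score is.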

+

Sim2Real Gap: The Sim2Real gap refers to the difference in the performance of a model when transitioning from a simulated environment to a real-world environment. This gap is often observed in robotics, where a robot trained in a simulated environment struggles to perform tasks in the real world due to the complexity and unpredictability of real-world environments. A robot trained to pick up objects in a simulated environment may struggle to perform the same task in the real world because the simulated environment does not accurately represent the complexities of real-world physics, lighting, and object variability.

+

Challenges in Creating Datasets: Creating a dataset for model benchmarking is a challenging task that requires careful consideration of various factors such as data quality, diversity, and representation. As discussed in the data engineering section, ensuring that the data is clean, unbiased, and representative of the real-world scenario is crucial for the accuracy and reliability of the benchmark. For example, when creating a dataset for a healthcare-related task, it is important to ensure that the data is representative of the entire population and not biased towards a particular demographic. This ensures that the model performs well across diverse patient populations.

+

Model benchmarks are essential in measuring the capability of a model architecture in solving a fixed task, but it is important to address the limitations and challenges associated with them. This includes ensuring that the dataset accurately represents real-world scenarios, addressing the Sim2Real gap, and overcoming the challenges of creating unbiased and representative datasets. By addressing these challenges and many others, we can ensure that model benchmarks provide a more accurate and reliable assessment of a model’s performance in real-world applications.

+

The Speech Commands dataset and its successor, MSWC, are common benchmarks for one of the quintessential TinyML applications, keyword spotting. Speech Commands establishes streaming error metrics, beyond the standard top-1 classification accuracy, that are more relevant to the keyword spotting use case. Using case-relevant metrics is what elevates a dataset to a model benchmark.

+
+
+
+

11.6 Data Benchmarking

+

For the past several years, AI has focused on developing increasingly sophisticated machine learning models like large language models. The goal has been to create models capable of human-level or superhuman performance on a wide range of tasks by training them on massive datasets. This model-centric approach produced rapid progress, with models attaining state-of-the-art results on many established benchmarks. Figure 11.5 shows the performance of AI systems relative to human performance (marked by the horizontal line at 0) across five applications: handwriting recognition, speech recognition, image recognition, reading comprehension, and language understanding. Over the past decade, AI performance has surpassed that of humans.

+

However, growing concerns about issues like bias, safety, and robustness persist even in models that achieve high accuracy on standard benchmarks. Additionally, some popular datasets used for evaluating models are beginning to saturate, with models reaching near-perfect performance on existing test splits (Kiela et al. 2021). As a simple example, there are test images in the classic MNIST handwritten digit dataset that may look indecipherable to most human evaluators but were assigned a label when the dataset was created - models that happen to agree with those labels may appear to exhibit superhuman performance but instead may only be capturing idiosyncrasies of the labeling and acquisition process from the dataset’s creation in 1994. In the same spirit, computer vision researchers now ask, “Are we done with ImageNet?” (Beyer et al. 2020). This highlights limitations in the conventional model-centric approach of optimizing accuracy on fixed datasets through architectural innovations.

+
+Beyer, Lucas, Olivier J Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. 2020. “Are We Done with Imagenet?” ArXiv Preprint abs/2006.07159. https://arxiv.org/abs/2006.07159. +
+
+
+ +
+
+Figure 11.5: AI vs human performance. Credit: Kiela et al. (2021). +
+
+Kiela, Douwe, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, et al. 2021. “Dynabench: Rethinking Benchmarking in NLP.” In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 4110–24. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.naacl-main.324. +
+
+

An alternative paradigm is emerging called data-centric AI. Rather than treating data as static and focusing narrowly on model performance, this approach recognizes that models are only as good as their training data. So, the emphasis shifts to curating high-quality datasets that better reflect real-world complexity, developing more informative evaluation benchmarks, and carefully considering how data is sampled, preprocessed, and augmented. The goal is to optimize model behavior by improving the data rather than just optimizing metrics on flawed datasets. Data-centric AI critically examines and enhances the data itself to produce beneficial AI. This reflects an important evolution in mindset as the field addresses the shortcomings of narrow benchmarking.

+

This section will explore the key differences between model-centric and data-centric approaches to AI. This distinction has important implications for how we benchmark AI systems. Specifically, we will see how focusing on data quality and efficiency can directly improve machine learning performance as an alternative to solely optimizing model architectures. The data-centric approach recognizes that models are only as good as their training data. So, enhancing data curation, evaluation benchmarks, and data handling processes can produce AI systems that are safer, fairer, and more robust. Rethinking benchmarking to prioritize data alongside models represents an important evolution as the field aims to deliver trustworthy real-world impact.

+
+

11.6.1 Limitations of Model-Centric AI

+

In the model-centric AI era, a prominent characteristic was the development of complex model architectures. Researchers and practitioners dedicated substantial effort to devising sophisticated and intricate models in the quest for superior performance. This frequently involved the incorporation of additional layers and the fine-tuning of a multitude of hyperparameters to achieve incremental improvements in accuracy. Concurrently, there was a significant emphasis on leveraging advanced algorithms. These algorithms, often at the forefront of the latest research, were employed to enhance the performance of AI models. The primary aim of these algorithms was to optimize the learning process of models, thereby extracting maximal information from the training data.

+

While the model-centric approach has been central to many advancements in AI, it has several notable limitations. First, the development of complex model architectures can often lead to overfitting. This is when the model performs well on the training data but fails to generalize to new, unseen data. The additional layers and complexity can capture noise in the training data as if it were a real pattern, harming the model’s performance on new data.

+

Second, relying on advanced algorithms can sometimes obscure the real understanding of a model’s functioning. These algorithms often act as a black box, making it difficult to interpret how the model is making decisions. This lack of transparency can be a significant hurdle, especially in critical applications such as healthcare and finance, where understanding the model’s decision-making process is crucial.

+

Third, the emphasis on achieving state-of-the-art results on benchmark datasets can sometimes be misleading. These datasets often fail to fully represent the complexities and variability of real-world data. A model that performs well on a benchmark dataset may not necessarily generalize well to new, unseen data in a real-world application. This discrepancy can lead to false confidence in the model’s capabilities and hinder its practical applicability.

+

Lastly, the model-centric approach often relies on large labeled datasets for training. However, obtaining such datasets is time-consuming and expensive in many real-world scenarios. This reliance on large datasets also limits AI’s applicability in domains where data is scarce or expensive to label.

+

As a result of the above reasons, and many more, the AI community is shifting to a more data-centric approach. Rather than focusing just on model architecture, researchers are now prioritizing curating high-quality datasets, developing better evaluation benchmarks, and considering how data is sampled and preprocessed. The key idea is that models are only as good as their training data. So, focusing on getting the right data will allow us to develop AI systems that are more fair, safe, and aligned with human values. This data-centric shift represents an important change in mindset as AI progresses.

+
+
+

11.6.2 The Shift Toward Data-centric AI

+

Data-centric AI is a paradigm that emphasizes the importance of high-quality, well-labeled, and diverse datasets in developing AI models. In contrast to the model-centric approach, which focuses on refining and iterating on the model architecture and algorithm to improve performance, data-centric AI prioritizes the quality of the input data as the primary driver of improved model performance. High-quality data is clean, well-labeled and representative of the real-world scenarios the model will encounter. In contrast, low-quality data can lead to poor model performance, regardless of the complexity or sophistication of the model architecture.

+

Data-centric AI puts a strong emphasis on the cleaning and labeling of data. Cleaning involves the removal of outliers, handling missing values, and addressing other data inconsistencies. Labeling, on the other hand, involves assigning meaningful and accurate labels to the data. Both these processes are crucial in ensuring that the AI model is trained on accurate and relevant data. Another important aspect of the data-centric approach is data augmentation. This involves artificially increasing the size and diversity of the dataset by applying various transformations to the data, such as rotation, scaling, and flipping training images. Data augmentation helps in improving the model’s robustness and generalization capabilities.
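The augmentation idea can be sketched in a few lines. The transformations below, a horizontal flip and a 90-degree rotation, are two simple label-preserving examples; images are represented as plain lists of pixel rows, and the tiny dataset is invented for illustration:

```python
# Simple label-preserving augmentations for grayscale images stored as
# lists of pixel rows.

def hflip(image):
    """Mirror each row left-to-right."""
    return [list(reversed(row)) for row in image]

def rot90(image):
    """90-degree clockwise rotation: reverse the rows, then transpose."""
    return [list(row) for row in zip(*image[::-1])]

def augment(dataset):
    """Triple the dataset with flipped and rotated copies, keeping labels."""
    out = []
    for image, label in dataset:
        out += [(image, label), (hflip(image), label), (rot90(image), label)]
    return out

data = [([[1, 2], [3, 4]], "cat")]
print(augment(data))
```

In practice, which transformations preserve the label depends on the task: a horizontal flip is safe for most natural images but would corrupt, say, a digit-recognition dataset.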

+

There are several benefits to adopting a data-centric approach to AI development. First and foremost, it leads to improved model performance and generalization capabilities. By ensuring that the model is trained on high-quality, diverse data, the model can better generalize to new, unseen data (Mattson et al. 2020b).

+

Additionally, a data-centric approach can often lead to simpler models that are easier to interpret and maintain. This is because the emphasis is on the data rather than the model architecture, meaning simpler models can achieve high performance when trained on high-quality data.

+

The shift towards data-centric AI represents a significant paradigm shift. By prioritizing the quality of the input data, this approach aims to improve model performance and generalization capabilities, ultimately leading to more robust and reliable AI systems. As we continue to advance in our understanding and application of AI, the data-centric approach is likely to play an important role in shaping the future of this field.

+
+
+

11.6.3 Benchmarking Data

+

Data benchmarking aims to evaluate common issues in datasets, such as identifying label errors, noisy features, representation imbalance (for example, out of the 1000 classes in Imagenet-1K, there are over 100 categories which are just types of dogs), class imbalance (where some classes have many more samples than others), whether models trained on a given dataset can generalize to out-of-distribution features, or what types of biases might exist in a given dataset (Mattson et al. 2020b). In its simplest form, data benchmarking aims to improve accuracy on a test set by removing noisy or mislabeled training samples while keeping the model architecture fixed. Recent competitions in data benchmarking have invited participants to submit novel augmentation strategies and active learning techniques.
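A minimal sketch of that simplest form, keeping the model fixed and dropping training samples whose given labels a reference model confidently disputes, might look like the following. The `predict_proba` function, the threshold, and the toy data are all illustrative stand-ins; in practice the reference predictions would come from cross-validation:

```python
# Toy data-benchmarking loop: flag training samples whose given label
# receives very low probability from a fixed reference model, then drop them.

def suspected_label_errors(samples, predict_proba, threshold=0.2):
    """samples: list of (x, label). Returns indices where the reference
    model assigns the given label a probability below `threshold`."""
    flagged = []
    for i, (x, label) in enumerate(samples):
        if predict_proba(x).get(label, 0.0) < threshold:
            flagged.append(i)
    return flagged

def drop_flagged(samples, predict_proba):
    """Return the dataset with suspected label errors removed."""
    bad = set(suspected_label_errors(samples, predict_proba))
    return [s for i, s in enumerate(samples) if i not in bad]

def toy_model(x):
    """Illustrative reference model: below 0.5 is class A, else class B."""
    return {"A": 1.0, "B": 0.0} if x < 0.5 else {"A": 0.0, "B": 1.0}

data = [(0.1, "A"), (0.9, "B"), (0.95, "A")]   # the last label looks wrong
print(drop_flagged(data, toy_model))           # drops (0.95, "A")
```

The benchmark question is then whether a model retrained on the filtered set scores higher on a held-out test set than one trained on the raw set, with the architecture held fixed.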

+
+Mattson, Peter, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, et al. 2020b. MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance.” IEEE Micro 40 (2): 8–16. https://doi.org/10.1109/mm.2020.2974843. +

Data-centric techniques continue to gain attention in benchmarking, especially as foundation models are increasingly trained on self-supervised objectives. Compared to smaller datasets like Imagenet-1K, massive datasets commonly used in self-supervised learning, such as Common Crawl, OpenImages, and LAION-5B, contain higher amounts of noise, duplicates, bias, and potentially offensive data.

+

DataComp is a recently launched dataset competition that targets the evaluation of large corpora. DataComp focuses on language-image pairs used to train CLIP models. The introductory whitepaper finds that when the total compute budget for training is constant, the best-performing CLIP models on downstream tasks, such as ImageNet classification, are trained on just 30% of the available training sample pool. This suggests that proper filtering of large corpora is critical to improving the accuracy of foundation models. Similarly, Demystifying CLIP Data (Xu et al. 2023) asks whether the success of CLIP is attributable to the architecture or the dataset.

+
+Xu, Hu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, and Christoph Feichtenhofer. 2023. “Demystifying CLIP Data.” ArXiv Preprint abs/2309.16671. https://arxiv.org/abs/2309.16671. +

DataPerf is another recent effort focusing on benchmarking data in various modalities. DataPerf provides rounds of online competition to spur improvement in datasets. The inaugural offering launched with challenges in vision, speech, acquisition, debugging, and text prompting for image generation.

+
+
+

11.6.4 Data Efficiency

+

As machine learning models grow larger and more complex and compute resources become more scarce in the face of rising demand, it becomes challenging to meet the computation requirements even with the largest machine learning fleets. To overcome these challenges and ensure machine learning system scalability, it is necessary to explore novel opportunities that augment conventional approaches to resource scaling.

+

Improving data quality can be a useful method to impact machine learning system performance significantly. One of the primary benefits of enhancing data quality is the potential to reduce the size of the training dataset while still maintaining or even improving model performance. This data size reduction directly relates to the amount of training time required, thereby allowing models to converge more quickly and efficiently. Achieving this balance between data quality and dataset size is a challenging task that requires the development of sophisticated methods, algorithms, and techniques.

+

Several approaches can be taken to improve data quality. These methods include, but are not limited to, the following:

+
    +
  • Data Cleaning: This involves handling missing values, correcting errors, and removing outliers. Clean data ensures that the model is not learning from noise or inaccuracies.
  • +
  • Data Interpretability and Explainability: Common techniques include LIME (Ribeiro, Singh, and Guestrin 2016), which provides insight into the decision boundaries of classifiers, and Shapley values (Lundberg and Lee 2017), which estimate the importance of individual samples in contributing to a model’s predictions.
  • +
  • Feature Engineering: Transforming or creating new features can significantly improve model performance by providing more relevant information for learning.
  • +
  • Data Augmentation: Augmenting data by creating new samples through various transformations can help improve model robustness and generalization.
  • +
  • Active Learning: This is a semi-supervised learning approach where the model actively queries a human oracle to label the most informative samples (Coleman et al. 2022). This ensures that the model is trained on the most relevant data.
  • +
  • Dimensionality Reduction: Techniques like PCA can reduce the number of features in a dataset, thereby reducing complexity and training time.
  • +
+
+Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. Why Should i Trust You? Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44. +
+Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4765–74. https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html. +
+Coleman, Cody, Edward Chou, Julian Katz-Samuels, Sean Culatana, Peter Bailis, Alexander C. Berg, Robert D. Nowak, Roshan Sumbaly, Matei Zaharia, and I. Zeki Yalniz. 2022. “Similarity Search for Efficient Active Learning and Search of Rare Concepts.” In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, the Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 - March 1, 2022, 6402–10. AAAI Press. https://ojs.aaai.org/index.php/AAAI/article/view/20591. +

There are many other methods in the wild. But the goal is the same. Refining the dataset and ensuring it is of the highest quality can reduce the training time required for models to converge. However, achieving this requires developing and implementing sophisticated methods, algorithms, and techniques that can clean, preprocess, and augment data while retaining the most informative samples. This is an ongoing challenge that will require continued research and innovation in the field of machine learning.
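As one concrete example from the list above, uncertainty sampling is a common active-learning query strategy: send the pool samples the current model is least confident about to a human labeler. The sketch below uses hard-coded illustrative probabilities in place of a real model:

```python
# Uncertainty sampling: pick the unlabeled samples whose top-class
# probability is lowest under the current model, and query the oracle
# (a human labeler) for those first.

def most_uncertain(pool, predict_proba, k=2):
    """Return the k pool samples with the lowest top-class probability."""
    def confidence(x):
        return max(predict_proba(x).values())
    return sorted(pool, key=confidence)[:k]

# Illustrative class probabilities keyed by sample id (stand-in for a model).
probs = {
    "s1": {"cat": 0.98, "dog": 0.02},
    "s2": {"cat": 0.55, "dog": 0.45},   # near the decision boundary
    "s3": {"cat": 0.51, "dog": 0.49},   # most uncertain
}
query = most_uncertain(list(probs), probs.get, k=2)
print(query)  # the samples to send to a human labeler
```

Labeling budget is spent where it is most informative, which is how active learning shrinks the dataset needed to reach a given accuracy.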

+
+
+
+

11.7 The Trifecta

+

While system, model, and data benchmarks have traditionally been studied in isolation, there is a growing recognition that to understand and advance AI fully, we must take a more holistic view. By iterating between benchmarking systems, models, and datasets together, novel insights that are not apparent when these components are analyzed separately may emerge. System performance impacts model accuracy, model capabilities drive data needs, and data characteristics shape system requirements.

+

Benchmarking the triad of system, model, and data in an integrated fashion will likely lead to discoveries about the co-design of AI systems, the generalization properties of models, and the role of data curation and quality in enabling performance. Rather than narrow benchmarks of individual components, the future of AI requires benchmarks that evaluate the symbiotic relationship between computing platforms, algorithms, and training data. This systems-level perspective will be critical to overcoming current limitations and unlocking the next level of AI capabilities.

+

Figure 11.6 illustrates the many potential ways to interplay data benchmarking, model benchmarking, and system infrastructure benchmarking together. Exploring these intricate interactions is likely to uncover new optimization opportunities and enhancement capabilities. The data, model, and system benchmark triad offers a rich space for co-design and co-optimization.

+
+
+
+ +
+
+Figure 11.6: Benchmarking trifecta. +
+
+
+

While this integrated perspective represents an emerging trend, the field has much more to discover about the synergies and trade-offs between these components. As we iteratively benchmark combinations of data, models, and systems, new insights that remain hidden when these elements are studied in isolation will emerge. This multifaceted benchmarking approach charting the intersections of data, algorithms, and hardware promises to be a fruitful avenue for major progress in AI, even though it is still in its early stages.

+
+
+

11.8 Benchmarks for Emerging Technologies

+

Given their significant differences from existing techniques, emerging technologies can be particularly challenging to design benchmarks for. Standard benchmarks used for existing technologies may not highlight the key features of the new approach, while newly designed benchmarks may be seen as contrived to favor the emerging technology over others. Alternatively, they may be so different from existing benchmarks that they cannot be meaningfully compared and lose their insight value. Benchmarks for emerging technologies must therefore balance fairness, applicability, and ease of comparison with existing benchmarks.

+

An example of an emerging technology where benchmarking has proven especially difficult is neuromorphic computing. Using the brain as a source of inspiration for scalable, robust, and energy-efficient general intelligence, neuromorphic computing (Schuman et al. 2022) directly incorporates biologically realistic mechanisms in both computing algorithms and hardware, such as spiking neural networks (Maass 1997) and non-von Neumann architectures for executing them (Davies et al. 2018; Modha et al. 2023). From a full-stack perspective of models, training techniques, and hardware systems, neuromorphic computing differs from conventional hardware and AI. Thus, there is a key challenge in developing fair and useful benchmarks for guiding the technology.

+
+Schuman, Catherine D., Shruti R. Kulkarni, Maryam Parsa, J. Parker Mitchell, Prasanna Date, and Bill Kay. 2022. “Opportunities for Neuromorphic Computing Algorithms and Applications.” Nature Computational Science 2 (1): 10–19. https://doi.org/10.1038/s43588-021-00184-y. +
+Maass, Wolfgang. 1997. “Networks of Spiking Neurons: The Third Generation of Neural Network Models.” Neural Networks 10 (9): 1659–71. https://doi.org/10.1016/s0893-6080(97)00011-7. +
+Davies, Mike, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, et al. 2018. “Loihi: A Neuromorphic Manycore Processor with on-Chip Learning.” IEEE Micro 38 (1): 82–99. https://doi.org/10.1109/mm.2018.112130359. +
+Modha, Dharmendra S., Filipp Akopyan, Alexander Andreopoulos, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, et al. 2023. “Neural Inference at the Frontier of Energy, Space, and Time.” Science 382 (6668): 329–35. https://doi.org/10.1126/science.adh1174. +
+Yik, Jason, Soikat Hasan Ahmed, Zergham Ahmed, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, et al. 2023. NeuroBench: Advancing Neuromorphic Computing Through Collaborative, Fair and Representative Benchmarking.” https://arxiv.org/abs/2304.04640. +

An ongoing initiative to develop standard neuromorphic benchmarks is NeuroBench (Yik et al. 2023). To suitably benchmark neuromorphic systems, NeuroBench follows high-level principles of inclusiveness through task and metric applicability to both neuromorphic and non-neuromorphic solutions, actionability of implementation using common tooling, and iterative updates to continue to ensure relevance as the field rapidly grows. NeuroBench and other benchmarks for emerging technologies provide critical guidance for future techniques, which may be necessary as the scaling limits of existing approaches draw nearer.

+
+
+

11.9 Conclusion

+

What gets measured gets improved. This chapter has explored the multifaceted nature of benchmarking spanning systems, models, and data. Benchmarking is important to advancing AI by providing the essential measurements to track progress.

+

ML system benchmarks enable optimization across speed, efficiency, and scalability metrics. Model benchmarks drive innovation through standardized tasks and metrics beyond accuracy. Data benchmarks highlight issues of quality, balance, and representation.

+

Importantly, evaluating these components in isolation has limitations. In the future, more integrated benchmarking will likely be used to explore the interplay between system, model, and data benchmarks. This view promises new insights into co-designing data, algorithms, and infrastructure.

+

As AI grows more complex, comprehensive benchmarking becomes even more critical. Standards must continuously evolve to measure new capabilities and reveal limitations. Close collaboration among industry, academia, national labs, and other stakeholders is essential to developing benchmarks that are rigorous, transparent, and socially beneficial.

+

Benchmarking provides the compass to guide progress in AI. By persistently measuring and openly sharing results, we can navigate toward performant, robust, and trustworthy systems. If AI is to serve societal and human needs properly, it must be benchmarked with humanity’s best interests in mind. To this end, there are emerging areas, such as benchmarking the safety of AI systems, but that’s for another day and something we can discuss further in Generative AI!

+

Benchmarking is a continuously evolving topic. The article The Olympics of AI: Benchmarking Machine Learning Systems covers several emerging subfields in AI benchmarking, including robotics, extended reality, and neuromorphic computing that we encourage the reader to pursue.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/conclusion/conclusion.html b/contents/conclusion/conclusion.html new file mode 100644 index 00000000..54b10ca4 --- /dev/null +++ b/contents/conclusion/conclusion.html @@ -0,0 +1,1076 @@ + + + + + + + + + +Machine Learning Systems - 20  Conclusion + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

20  Conclusion

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+

+
DALL·E 3 Prompt: An image depicting the last chapter of an ML systems book, open to a two-page spread. The pages summarize key concepts such as neural networks, model architectures, hardware acceleration, and MLOps. One page features a diagram of a neural network and different model architectures, while the other page shows illustrations of hardware components for acceleration and MLOps workflows. The background includes subtle elements like circuit patterns and data points to reinforce the technological theme. The colors are professional and clean, with an emphasis on clarity and understanding.
+
+
+
+

The Evolution

+

We have taken a comprehensive tour through the multifaceted landscape of this rapidly evolving field. We began by tracing ML’s historical trajectory, from a brief overview of its theoretical foundations to its current state as a transformative force across industries. This journey has highlighted the remarkable progress made in the field and the challenges and opportunities that lie ahead.

+

We dove into the basic intricacies of data engineering, recognizing that data quality, diversity, and ethical sourcing are paramount to building robust and reliable machine learning models. The importance of high-quality data cannot be overstated, as lapses in data quality can lead to significant negative consequences, such as flawed predictions, project terminations, and even potential harm to communities.

+

We then explored various model architectures, from the foundational perceptron to the sophisticated transformer networks, each tailored to specific tasks and data types. This exploration has showcased machine learning models’ remarkable diversity and adaptability, enabling them to tackle various problems across different domains.

+
+
+

Advancements

+

Over the years, we have witnessed remarkable strides in ML systems, particularly in addressing the challenges of resource constraints and real-world deployment. The evolution of model architectures, from the early MobileNets designed for mobile devices to the more recent TinyML models optimized for microcontrollers, has been a testament to the ingenuity and innovation in the field. These advancements have enabled the deployment of powerful AI capabilities on devices with limited resources, opening up new possibilities for healthcare, agriculture, environmental monitoring applications, and more.

+

We have also explored breakthroughs in hardware acceleration, such as developing specialized chips like Edge TPUs and neuromorphic hardware. These innovations have significantly improved the efficiency and performance of machine learning systems, enabling real-time processing and analysis on edge devices. Integrating these hardware advancements with optimized model architectures has unlocked new possibilities for deploying machine learning in resource-constrained environments.

+

Furthermore, we dove into advanced topics like on-device learning, where models can adapt and learn directly on the device, enhancing privacy and reducing reliance on cloud connectivity. This approach has significant implications for data privacy and security, as sensitive information can be processed locally without the need for transmission to external servers. Techniques like transfer learning and federated learning have further expanded the capabilities of on-device learning, enabling collaborative and efficient model updates across distributed devices. These advancements have paved the way for more secure, privacy-preserving, and decentralized machine learning applications.

+
+
+

The Future

+

As we look to the future, the trajectory of machine learning systems points towards a paradigm shift from a model-centric approach to a more data-centric one. This shift recognizes that the quality and diversity of data are paramount to developing robust, reliable, and fair AI models. As such, we can anticipate a growing emphasis on data curation, labeling, and augmentation techniques to ensure that models are trained on high-quality, representative data that reflects the complexities of real-world scenarios. This focus on data will be crucial in addressing the challenges of bias, fairness, and generalizability in machine learning systems.

+

Furthermore, the proliferation of TinyML is set to revolutionize edge computing. With its ability to deploy machine learning models on resource-constrained devices, TinyML is poised to enable a new wave of intelligent applications in healthcare, agriculture, environmental monitoring, and more. This democratization of AI will empower individuals and communities to leverage the power of machine learning for local problem-solving and sustainable development. By bringing AI capabilities to the edge, TinyML has the potential to unlock innovative solutions and drive positive change in diverse domains.

+

Another promising avenue is neuromorphic computing, which draws inspiration from the human brain’s neural networks to create more efficient and adaptable AI systems. While still in its early stages, neuromorphic computing can revolutionize AI by enabling low-power, real-time learning and decision-making on edge devices. This approach could lead to developing AI systems that are more energy-efficient, resilient, and capable of adapting to dynamic environments. As research in neuromorphic computing advances, we can expect breakthroughs in areas such as autonomous systems, robotics, and intelligent sensor networks.

+

However, as we embrace these advancements, it is crucial to remain mindful of the ethical considerations that will shape the future of AI. Fairness, transparency, accountability, and privacy in AI systems will be paramount as they become more integrated into our lives and decision-making processes. The development of ethical frameworks, regulations, and standards will be essential to guide the responsible and equitable development and deployment of AI technologies. It is the collective responsibility of researchers, practitioners, policymakers, and society to engage in ongoing discussions and collaborations to address these ethical challenges head-on.

+
+
+

Ethical Considerations

+

Despite the remarkable progress, the path forward for ML systems is not without challenges. Issues such as bias in data and models, the need for explainability and interpretability, the environmental impact of AI, and the ethical considerations surrounding its use remain critical concerns as we continue to scale AI/ML use cases. As the field advances, researchers, practitioners, and stakeholders in government and policy must address these challenges head-on. Developing techniques for mitigating bias, promoting transparency, and ensuring the responsible deployment of machine learning systems will be essential to building trust and fostering widespread adoption.

+

Moreover, the “black box” nature of many complex machine learning models poses challenges in understanding their decision-making processes. This lack of transparency can hinder trust and accountability, especially in high-stakes applications like healthcare and finance, where the consequences of erroneous or biased decisions can be severe. Developing interpretable models and explainability techniques is crucial for building trust and ensuring responsible AI deployment free from biases. By providing insights into how models arrive at their predictions, we can foster greater understanding, enable oversight, and facilitate identifying and mitigating potential biases or errors.

+

The increasing computational demands of machine learning, particularly for training large models, have raised concerns about their environmental impact due to high energy consumption and carbon emissions. As the scale and complexity of models continue to grow, it is crucial to address the sustainability challenges associated with AI development. The development of energy-efficient algorithms, the use of renewable energy sources, and the exploration of alternative computing paradigms like neuromorphic computing are essential for mitigating the environmental footprint of AI. By prioritizing sustainability and investing in green computing initiatives, the AI community can work towards reducing the environmental impact of machine learning systems.

+

Moreover, it is important to acknowledge that access to AI and machine learning compute resources may not be equally distributed across organizations and regions. This disparity can lead to a widening gap between those who have the means to leverage advanced AI technologies and those who do not. Organizations like the Organisation for Economic Co-operation and Development (OECD) are actively exploring ways to address this issue and promote greater equity in AI access and adoption. By fostering international cooperation, sharing best practices, and supporting capacity-building initiatives, we can ensure that AI’s benefits are more widely accessible and that no one is left behind in the AI revolution.

+
+
+

A Call to Action

+

As we conclude this exploration of machine learning systems, we invite you to embark on your journey of discovery and innovation. The field is ripe with possibilities, and your contributions can shape the future of AI. Continue to learn, experiment, and push the boundaries of what’s possible. Engage with the vibrant machine learning community, participate in open-source projects, and share your knowledge and insights. By actively participating in this ever-evolving field, you can help drive the development of responsible, ethical, and sustainable AI systems that benefit society and contribute to a brighter future for all.

+

Remember that the power of machine learning lies not only in the technology itself but also in the hands of those who wield it. As you navigate this exciting landscape, let your curiosity, creativity, and commitment to ethical principles be your guiding lights. Embrace the challenges, seek out diverse perspectives, and strive to create AI systems that are technically advanced and aligned with the values of fairness, transparency, and social good. Together, we can shape a future where machine learning systems serve as powerful tools for positive change, empowering individuals, communities, and industries to tackle the most pressing challenges of our time.

+

Congratulations on making it to the end!

+ + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/data_engineering/data_engineering.html b/contents/data_engineering/data_engineering.html new file mode 100644 index 00000000..12b63254 --- /dev/null +++ b/contents/data_engineering/data_engineering.html @@ -0,0 +1,1726 @@ + + + + + + + + + +Machine Learning Systems - 5  Data Engineering + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

5  Data Engineering

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Create a rectangular illustration visualizing the concept of data engineering. Include elements such as raw data sources, data processing pipelines, storage systems, and refined datasets. Show how raw data is transformed through cleaning, processing, and storage to become valuable information that can be analyzed and used for decision-making.
+
+
+

Data is the lifeblood of AI systems. Without good data, even the most advanced machine-learning algorithms will not succeed. This section will dive into the intricacies of building high-quality datasets to fuel our AI models. Data engineering involves collecting, storing, processing, and managing data to train machine learning models.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand the importance of clearly defining the problem statement and objectives when embarking on an ML project.

  • +
  • Recognize various data sourcing techniques, such as web scraping, crowdsourcing, and synthetic data generation, along with their advantages and limitations.

  • +
  • Appreciate the need for thoughtful data labeling, using manual or AI-assisted approaches, to create high-quality training datasets.

  • +
  • Briefly learn different methods for storing and managing data, such as databases, data warehouses, and data lakes.

  • +
  • Comprehend the role of transparency through metadata and dataset documentation and tracking data provenance to facilitate ethics, auditing, and reproducibility.

  • +
  • Understand how licensing protocols govern legal data access and usage, necessitating careful compliance.

  • +
  • Recognize key challenges in data engineering, including privacy risks, representation gaps, legal restrictions around data access, and balancing competing priorities.

  • +
+
+
+
+

5.1 Introduction

+

Dataset creators face complex privacy and representation challenges when building high-quality training data, especially for sensitive domains like healthcare. Legally, creators may need to remove direct identifiers like names and ages. Even without legal obligations, removing such information can help build user trust. However, excessive anonymization can compromise dataset utility. Techniques like differential privacy, aggregation, and reducing detail provide alternatives to balance privacy and utility but have downsides. Creators must strike a thoughtful balance based on the use case.
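To make the privacy techniques above concrete, here is a toy sketch of the Laplace mechanism that underlies differential privacy: calibrated noise is added to a count query so that no single record can be inferred from the result. This is an illustrative sketch, not a vetted privacy library; the record fields, query, and epsilon values are assumptions for demonstration.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from a Laplace(0, scale) distribution via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon):
    """Differentially private count.

    One individual's record changes a count by at most 1, so the query
    sensitivity is 1 and the noise scale is 1 / epsilon. A smaller epsilon
    (privacy budget) means more noise and therefore stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: count patients over 60 without exposing the exact total.
patients = [{"age": 72}, {"age": 45}, {"age": 68}, {"age": 33}]
noisy = private_count(patients, lambda r: r["age"] > 60, epsilon=0.5)
```

The key design point is the sensitivity/epsilon trade-off: releasing many queries or using a large epsilon gradually exhausts the privacy budget, which is one of the "downsides" the text refers to.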

+

Looking beyond privacy, creators need to proactively assess and address representation gaps that could introduce model biases. It is crucial yet insufficient to ensure diversity across individual variables like gender, race, and accent. Combinations of characteristics also require assessment, as models can fail on intersections that are missing or rare in the data. For example, a medical dataset could have balanced gender, age, and diagnosis data individually, yet lack enough cases capturing older women with a specific condition. Such higher-order gaps are not immediately obvious but can critically impact model performance.
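One way to surface such higher-order gaps is to count examples for every combination of attributes rather than only the marginals. The following sketch does this with the standard library; the dataset and the minimum-count threshold are illustrative assumptions.

```python
from collections import Counter

def find_representation_gaps(records, attributes, min_count=2):
    """Flag attribute combinations with fewer than min_count examples."""
    combos = Counter(tuple(r[a] for a in attributes) for r in records)
    return {combo: n for combo, n in combos.items() if n < min_count}

# Toy dataset: balanced on gender and age_band individually (3 each),
# yet ("F", "65+") appears only once -- an intersectional gap.
records = [
    {"gender": "F", "age_band": "18-40"},
    {"gender": "F", "age_band": "18-40"},
    {"gender": "F", "age_band": "65+"},
    {"gender": "M", "age_band": "65+"},
    {"gender": "M", "age_band": "65+"},
    {"gender": "M", "age_band": "18-40"},
]
gaps = find_representation_gaps(records, ["gender", "age_band"])
# gaps -> {("F", "65+"): 1, ("M", "18-40"): 1}
```

Note that this only flags underrepresented combinations that appear at least once; combinations that are entirely absent would additionally require comparing against the full cross-product of expected values.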

+

Creating useful, ethical training data requires holistic consideration of privacy risks and representation gaps. Perfect solutions are elusive. However, conscientious data engineering practices like anonymization, aggregation, undersampling of overrepresented groups, and synthesized data generation can help balance competing needs. This facilitates models that are both accurate and socially responsible. Cross-functional collaboration and external audits can also strengthen training data. The challenges are multifaceted but surmountable with thoughtful effort.

+

We begin by discussing data collection: Where do we source data, and how do we gather it? Options range from scraping the web, accessing APIs, and utilizing sensors and IoT devices to conducting surveys and gathering user input. These methods reflect real-world practices. Next, we delve into data labeling, including considerations for human involvement. We’ll discuss the tradeoffs and limitations of human labeling and explore emerging methods for automated labeling. Following that, we’ll address data cleaning and preprocessing, a crucial yet frequently undervalued step in preparing raw data for AI model training. Data augmentation comes next, a strategy for enhancing limited datasets by generating synthetic samples. This is particularly pertinent for embedded systems, as many use cases lack extensive data repositories readily available for curation. Synthetic data generation emerges as a viable alternative, though it has its own advantages and disadvantages. We’ll also touch upon dataset versioning, emphasizing the importance of tracking data modifications over time. Data is ever-evolving; hence, it’s imperative to devise strategies for managing and storing expansive datasets. By the end of this section, you’ll possess a comprehensive understanding of the entire data pipeline, from collection to storage, essential for operationalizing AI systems. Let’s embark on this journey!

+
+
+

5.2 Problem Definition

+

In many machine learning domains, sophisticated algorithms take center stage, while the fundamental importance of data quality is often overlooked. This neglect gives rise to “Data Cascades” by Sambasivan et al. (2021) (see Figure fig-cascades)—events where lapses in data quality compound, leading to negative downstream consequences such as flawed predictions, project terminations, and even potential harm to communities. In Figure fig-cascades, we have an illustration of potential data pitfalls at every stage and how they influence the entire process down the line. The influence of data collection errors is especially pronounced. Any lapses in this stage will become apparent at later stages (in model evaluation and deployment) and might lead to costly consequences, such as abandoning the entire model and restarting anew. Therefore, investing in data engineering techniques from the onset will help us detect errors early.

+
+
+
+ +
+
+Figure 5.1: Data cascades: compounded costs. Credit: Sambasivan et al. (2021). +
+
+Sambasivan, Nithya, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021. Everyone Wants to Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–15. ACM. https://doi.org/10.1145/3411764.3445518. +
+
+

Despite many ML professionals recognizing the importance of data, numerous practitioners report facing these cascades. This highlights a systemic issue: while the allure of developing advanced models remains, data work is often underappreciated.

+

Take, for example, Keyword Spotting (KWS) (see Figure fig-keywords). KWS is a prime example of TinyML in action and is a critical technology behind voice-enabled interfaces on endpoint devices such as smartphones. Typically functioning as lightweight wake-word engines, these systems are consistently active, listening for a specific phrase to trigger further actions. When we say “OK, Google” or “Alexa,” this initiates a process on a microcontroller embedded within the device. Despite their limited resources, these microcontrollers play an important role in enabling seamless voice interactions with devices, often operating in environments with high ambient noise. The uniqueness of the wake word helps minimize false positives, ensuring that the system is not triggered inadvertently.

+

It is important to appreciate that these keyword-spotting technologies are not isolated; they integrate seamlessly into larger systems, processing signals continuously while managing low power consumption. These systems extend beyond simple keyword recognition, evolving to facilitate diverse sound detections, such as glass breaking. This evolution is geared towards creating intelligent devices capable of understanding and responding to vocal commands, heralding a future where even household appliances can be controlled through voice interactions.

+
+
+
+ +
+
+Figure 5.2: Keyword Spotting example: interacting with Alexa. Credit: Amazon. +
+
+
+

Building a reliable KWS model is a complex task. It demands a deep understanding of the deployment scenario, encompassing where and how these devices will operate. For instance, a KWS model’s effectiveness is not just about recognizing a word; it’s about discerning it among various accents and background noises, whether in a bustling cafe or amid the blaring sound of a television in a living room or a kitchen where these devices are commonly found. It’s about ensuring that a whispered “Alexa” in the dead of night or a shouted “OK Google” in a noisy marketplace are recognized with equal precision.

+

Moreover, many current KWS voice assistants support a limited number of languages, leaving a substantial portion of the world’s linguistic diversity unrepresented. This limitation is partly due to the difficulty in gathering and monetizing data for languages spoken by smaller populations. The long-tail distribution of languages implies that many languages have limited data, making the development of supportive technologies challenging.

+

This level of accuracy and robustness hinges on the availability and quality of data, the ability to label the data correctly, and the transparency of the data for the end user before it is used to train the model. However, it all begins with clearly understanding the problem statement or definition.

+

Generally, in ML, problem definition has a few key steps:

+
    +
  1. Identifying the problem definition clearly

  2. +
  3. Setting clear objectives

  4. +
  5. Establishing success benchmark

  6. +
  7. Understanding end-user engagement/use

  8. +
  9. Understanding the constraints and limitations of deployment

  10. +
  11. Followed by finally doing the data collection.

  12. +
+

A solid project foundation is essential for its trajectory and eventual success. Central to this foundation is first identifying a clear problem, such as ensuring that voice commands in voice assistance systems are recognized consistently across varying environments. Clear objectives, like creating representative datasets for diverse scenarios, provide a unified direction. Benchmarks, such as system accuracy in keyword detection, offer measurable outcomes to gauge progress. Engaging with stakeholders, from end-users to investors, provides invaluable insights and ensures alignment with market needs. Additionally, understanding platform constraints is pivotal when delving into areas like voice assistance. Embedded systems, such as microcontrollers, come with inherent processing power, memory, and energy efficiency limitations. Recognizing these limitations ensures that functionalities, like keyword detection, are tailored to operate optimally, balancing performance with resource conservation.

+

In this context, using KWS as an example, we can break each of the steps out as follows:

+
    +
  1. Identifying the Problem: At its core, KWS aims to detect specific keywords amidst ambient sounds and other spoken words. The primary problem is to design a system that can recognize these keywords with high accuracy, low latency, and minimal false positives or negatives, especially when deployed on devices with limited computational resources.

  2. +
  3. Setting Clear Objectives: The objectives for a KWS system might include:

    +
      +
    • Achieving a specific accuracy rate (e.g., 98% accuracy in keyword detection).
    • +
    • Ensuring low latency (e.g., keyword detection and response within 200 milliseconds).
    • +
    • Minimizing power consumption to extend battery life on embedded devices.
    • +
    • Ensuring the model’s size is optimized for the available memory on the device.
    • +
  4. +
  5. Benchmarks for Success: Establish clear metrics to measure the success of the KWS system. This could include:

    +
      +
    • True Positive Rate: The percentage of correctly identified keywords.
    • +
    • False Positive Rate: The percentage of non-keywords incorrectly identified as keywords.
    • +
    • Response Time: The time taken from keyword utterance to system response.
    • +
    • Power Consumption: Average power used during keyword detection.
    • +
  6. +
  7. Stakeholder Engagement and Understanding: Engage with stakeholders, which include device manufacturers, hardware and software developers, and end-users. Understand their needs, capabilities, and constraints. For instance:

    +
      +
    • Device manufacturers might prioritize low power consumption.
    • +
    • Software developers might emphasize ease of integration.
    • +
    • End-users would prioritize accuracy and responsiveness.
    • +
  8. +
  9. Understanding the Constraints and Limitations of Embedded Systems: Embedded devices come with their own set of challenges:

    +
      +
    • Memory Limitations: KWS models must be lightweight to fit within the memory constraints of embedded devices. Typically, KWS models need to be as small as 16KB to fit in the always-on island of the SoC. Moreover, this is just the model size. Additional application code for preprocessing may also need to fit within the memory constraints.
    • +
    • Processing Power: The computational capabilities of embedded devices are limited (a few hundred MHz of clock speed), so the KWS model must be optimized for efficiency.
    • +
    • Power Consumption: Since many embedded devices are battery-powered, the KWS system must be power-efficient.
    • +
    • Environmental Challenges: Devices might be deployed in various environments, from quiet bedrooms to noisy industrial settings. The KWS system must be robust enough to function effectively across these scenarios.
    • +
  10. +
  11. Data Collection and Analysis: For a KWS system, the quality and diversity of data are paramount. Considerations might include:

    +
      +
    • Variety of Accents: Collect data from speakers with various accents to ensure wide-ranging recognition.
    • +
    • Background Noises: Include data samples with different ambient noises to train the model for real-world scenarios.
    • +
    • Keyword Variations: People might either pronounce keywords differently or have slight variations in the wake word itself. Ensure the dataset captures these nuances.
    • +
  12. +
  13. Iterative Feedback and Refinement: Once a prototype KWS system is developed, it’s crucial to test it in real-world scenarios, gather feedback, and iteratively refine the model. This ensures that the system remains aligned with the defined problem and objectives. This is important because the deployment scenarios change over time as things evolve.

  14. +
+
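The benchmark metrics listed above (true positive rate, false positive rate) can be computed directly from labeled evaluation results once a prototype KWS system exists. Below is a minimal sketch; the evaluation data is an illustrative assumption, not real system output.

```python
def kws_metrics(results):
    """Compute detection rates from (is_keyword, detected) pairs.

    is_keyword: ground truth -- the audio clip contains the wake word.
    detected:   the KWS system fired on that clip.
    """
    tp = sum(1 for is_kw, det in results if is_kw and det)
    fn = sum(1 for is_kw, det in results if is_kw and not det)
    fp = sum(1 for is_kw, det in results if not is_kw and det)
    tn = sum(1 for is_kw, det in results if not is_kw and not det)
    return {
        "true_positive_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Hypothetical evaluation run: 3 keyword clips, 4 non-keyword clips.
results = [(True, True), (True, True), (True, False),
           (False, False), (False, True), (False, False), (False, False)]
metrics = kws_metrics(results)
# true_positive_rate = 2/3, false_positive_rate = 1/4
```

Tracking these two rates together matters because they trade off against each other: lowering the detection threshold raises both, which is why the objectives above specify targets for each.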
+

Exercise 5.1 (Keyword Spotting with TensorFlow Lite Micro)  

+
+
+ +
+
+

Explore a hands-on guide for building and deploying Keyword Spotting (KWS) systems using TensorFlow Lite Micro. Follow steps from data collection to model training and deployment to microcontrollers. Learn to create efficient KWS models that recognize specific keywords amidst background noise. Perfect for those interested in machine learning on embedded systems. Unlock the potential of voice-enabled devices with TensorFlow Lite Micro!

+

+
+
+
+

This chapter underscores the essential role of data quality in ML, using Keyword Spotting (KWS) systems as an example. It outlines key steps, from problem definition to stakeholder engagement, and emphasizes iterative feedback. The sections that follow delve deeper into data quality management, discussing its consequences and future trends, with a focus on the importance of high-quality, diverse data in AI system development, along with ethical considerations and data sourcing methods.

+
+
+

5.3 Data Sourcing

+

The quality and diversity of data gathered are important for developing accurate and robust AI systems. Sourcing high-quality training data requires careful consideration of the objectives, resources, and ethical implications. Data can be obtained from various sources depending on the needs of the project:

+
+

5.3.1 Pre-existing datasets

+

Platforms like Kaggle and UCI Machine Learning Repository provide a convenient starting point. Pre-existing datasets are valuable for researchers, developers, and businesses. One of their primary advantages is cost efficiency. Creating a dataset from scratch can be time-consuming and expensive, so accessing ready-made data can save significant resources. Moreover, many datasets, like ImageNet, have become standard benchmarks in the machine learning community, allowing for consistent performance comparisons across different models and algorithms. This data availability means that experiments can be started immediately without any data collection and preprocessing delays. In a fast-moving field like ML, this practicality is important.

+

The quality assurance that comes with popular pre-existing datasets is important to consider because several datasets have errors in them. For instance, the ImageNet dataset was found to have over 6.4% errors. Given their widespread use, the community often identifies and rectifies any errors or biases in these datasets. This assurance is especially beneficial for students and newcomers to the field, as they can focus on learning and experimentation without worrying about data integrity. Supporting documentation often accompanying existing datasets is invaluable, though this generally applies only to widely used datasets. Good documentation provides insights into the data collection process and variable definitions and sometimes even offers baseline model performances. This information not only aids understanding but also promotes reproducibility in research, a cornerstone of scientific integrity; machine learning currently faces a reproducibility crisis of its own. When other researchers have access to the same data, they can validate findings, test new hypotheses, or apply different methodologies, thus allowing us to build on each other’s work more rapidly.

+

While platforms like Kaggle and UCI Machine Learning Repository are invaluable resources, it’s essential to understand the context in which the data was collected. Researchers should be wary of potential overfitting when using popular datasets, as multiple models might have been trained on them, leading to inflated performance metrics. Sometimes, these datasets do not reflect the real-world data.

+

In addition, bias, validity, and reproducibility issues may exist in these datasets, and there has been a growing awareness of these issues in recent years. Furthermore, training multiple models on the same dataset, as shown in Figure fig-misalignment, can create a ‘misalignment’ between the models and the world, in which an entire ecosystem of models reflects only a narrow subset of the real-world data.

+
+
+
+ +
+
+Figure 5.3: Training different models on the same dataset. Credit: (icons from left to right: Becris; Freepik; Freepik; Paul J; SBTS2018). +
+
+
+
+
+

5.3.2 Web Scraping

+

Web scraping refers to automated techniques for extracting data from websites. It typically involves sending HTTP requests to web servers, retrieving HTML content, and parsing that content to extract relevant information. Popular tools and frameworks for web scraping include Beautiful Soup, Scrapy, and Selenium. These tools offer different functionalities, from parsing HTML content to automating web browser interactions, especially for websites that load content dynamically using JavaScript.
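The core of the parsing step can be illustrated without any third-party dependency. The text mentions Beautiful Soup, Scrapy, and Selenium; the sketch below instead uses Python’s built-in `html.parser` so it is self-contained, and the HTML snippet stands in for a fetched response body.

```python
from html.parser import HTMLParser

class ImageURLExtractor(HTMLParser):
    """Collect image URLs from HTML -- the extraction step of a scraping pass."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# In a real scraper this string would come from an HTTP response body.
html = ('<html><body><img src="/cat1.jpg"><p>hi</p>'
        '<img src="/cat2.jpg" alt="cat"></body></html>')
parser = ImageURLExtractor()
parser.feed(html)
# parser.image_urls -> ["/cat1.jpg", "/cat2.jpg"]
```

Libraries like Beautiful Soup wrap this same parsing step in a friendlier query API, and Selenium adds browser automation for JavaScript-rendered pages, but the underlying task of walking tags and extracting attributes is the same.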

+

Web scraping can effectively gather large datasets for training machine learning models, particularly when human-labeled data is scarce. For computer vision research, web scraping enables the collection of massive volumes of images and videos. Researchers have used this technique to build influential datasets like ImageNet and OpenImages. For example, one could scrape e-commerce sites to amass product photos for object recognition or social media platforms to collect user uploads for facial analysis. Even before ImageNet, Stanford’s LabelMe project scraped Flickr for over 63,000 annotated images covering hundreds of object categories.

+

Beyond computer vision, web scraping supports gathering textual data for natural language tasks. Researchers can scrape news sites for sentiment analysis data, forums and review sites for dialogue systems research, or social media for topic modeling. For example, the training data for chatbot ChatGPT was obtained by scraping much of the public Internet. GitHub repositories were scraped to train GitHub’s Copilot AI coding assistant.


Web scraping can also collect structured data, such as stock prices, weather data, or product information, for analytical applications. Once data is scraped, it is essential to store it in a structured manner, often using databases or data warehouses. Proper data management ensures the usability of the scraped data for future analysis and applications.


However, while web scraping offers numerous advantages, there are significant limitations and ethical considerations to bear. Not all websites permit scraping, and violating these restrictions can lead to legal repercussions. Scraping copyrighted material or private communications is also unethical and potentially illegal. Ethical web scraping mandates adherence to a website’s ‘robots.txt’ file, which outlines the sections of the site that can be accessed and scraped by automated bots.
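Checking a site's ‘robots.txt’ before scraping can be automated with Python's standard-library `urllib.robotparser`. The rules below are a made-up example parsed from a string so the sketch runs offline; in practice you would point the parser at the live `https://<site>/robots.txt`:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: everything is allowed except /private/.
rules = """
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Consult the rules before fetching each URL.
print(rp.can_fetch("MyScraper", "https://example.com/products"))   # True
print(rp.can_fetch("MyScraper", "https://example.com/private/x"))  # False
```

An ethical scraper calls `can_fetch` for every URL and skips anything the site disallows.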


To deter automated scraping, many websites implement rate limits. If a bot sends too many requests in a short period, it might be temporarily blocked, restricting the speed of data access. Additionally, the dynamic nature of web content means that data scraped at different intervals may be inconsistent, posing challenges for longitudinal studies. However, there are emerging trends like web navigation, where machine learning algorithms automatically navigate a website to access its dynamic content.


The volume of pertinent data available for scraping might be limited for niche subjects. For example, while scraping for common topics like images of cats and dogs might yield abundant data, searching for rare medical conditions might be less fruitful. Moreover, the data obtained through scraping is often unstructured and noisy, necessitating thorough preprocessing and cleaning. It is crucial to understand that not all scraped data will be of high quality or accuracy. Employing verification methods, such as cross-referencing with alternate data sources, can enhance data reliability.


Privacy concerns arise when scraping personal data, emphasizing the need for anonymization. Therefore, it is paramount to adhere to a website’s Terms of Service, confine data collection to public domains, and ensure the anonymity of any personal data acquired.


While web scraping can be a scalable method to amass large training datasets for AI systems, its applicability is confined to specific data types. For example, sourcing Inertial Measurement Unit (IMU) data for gesture recognition through web scraping is far more complex; at most, one can scrape an existing dataset.


Web scraping can yield inconsistent or inaccurate data. For example, the photo in Figure fig-traffic-light shows up when you search for ‘traffic light’ on Google Images. It is an image from 1914 that shows outdated traffic lights, which are also barely discernible because of the image’s poor quality. This can be problematic for web-scraped datasets, as it pollutes the dataset with inapplicable (old) data samples.

Figure 5.4: A picture of old traffic lights (1914). Credit: Vox.

Exercise 5.2 (Web Scraping)  


Discover the power of web scraping with Python using libraries like Beautiful Soup and Pandas. This exercise will scrape Python documentation for function names and descriptions and explore NBA player stats. By the end, you’ll have the skills to extract and analyze data from real-world websites. Ready to dive in? Access the Google Colab notebook below and start practicing!



5.3.3 Crowdsourcing


Crowdsourcing for datasets is the practice of obtaining data using the services of many people, either from a specific community or the general public, typically via the Internet. Instead of relying on a small team or specific organization to collect or label data, crowdsourcing leverages the collective effort of a vast, distributed group of participants. Services like Amazon Mechanical Turk enable the distribution of annotation tasks to a large, diverse workforce. This facilitates the collection of labels for complex tasks like sentiment analysis or image recognition requiring human judgment.


Crowdsourcing has emerged as an effective approach for data collection and problem-solving. One major advantage of crowdsourcing is scalability—by distributing tasks to a large, global pool of contributors on digital platforms, projects can process huge volumes of data quickly. This makes crowdsourcing ideal for large-scale data labeling, collection, and analysis.


In addition, crowdsourcing taps into a diverse group of participants, bringing a wide range of perspectives, cultural insights, and language abilities that can enrich data and enhance creative problem-solving in ways that a more homogenous group may not. Because crowdsourcing draws from a large audience beyond traditional channels, it is more cost-effective than conventional methods, especially for simpler microtasks.


Crowdsourcing platforms also allow for great flexibility, as task parameters can be adjusted in real time based on initial results. This creates a feedback loop for iterative improvements to the data collection process. Complex jobs can be broken down into microtasks and distributed to multiple people, with results cross-validated by assigning redundant versions of the same task. When thoughtfully managed, crowdsourcing enables community engagement around a collaborative project, where participants find reward in contributing.


However, while crowdsourcing offers numerous advantages, it’s essential to approach it with a clear strategy. While it provides access to a diverse set of annotators, it also introduces variability in the quality of annotations. Additionally, platforms like Mechanical Turk might not always capture a complete demographic spectrum; often, tech-savvy individuals are overrepresented, while children and older people may be underrepresented. Providing clear instructions and training for the annotators is crucial. Periodic checks and validations of the labeled data help maintain quality. This ties back to the topic of clear Problem Definition that we discussed earlier. Crowdsourcing for datasets also requires careful attention to ethical considerations. It’s crucial to ensure that participants are informed about how their data will be used and that their privacy is protected. Quality control through detailed protocols, transparency in sourcing, and auditing is essential to ensure reliable outcomes.
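The redundancy-based quality control mentioned above (assigning the same task to several annotators and cross-validating the results) can be sketched with a simple majority vote; the item names, labels, and the 0.6 agreement threshold below are illustrative assumptions:

```python
from collections import Counter

# Aggregate redundant crowd labels per item by majority vote and flag
# low-agreement items for expert review.
def aggregate(labels_per_item, min_agreement=0.6):
    results = {}
    for item, labels in labels_per_item.items():
        top, count = Counter(labels).most_common(1)[0]
        agreement = count / len(labels)
        results[item] = (top, agreement, agreement >= min_agreement)
    return results

votes = {
    "img_001": ["cat", "cat", "cat"],   # unanimous
    "img_002": ["dog", "cat", "dog"],   # 2/3 agreement
    "img_003": ["cat", "dog", "bird"],  # no consensus -> needs review
}
for item, (label, agr, ok) in aggregate(votes).items():
    print(item, label, round(agr, 2), "accepted" if ok else "needs review")
```

Items flagged for review can be re-labeled by more experienced annotators, closing the quality-control loop.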


For TinyML, crowdsourcing can pose some unique challenges. TinyML devices are highly specialized for particular tasks within tight constraints. As a result, the data they require tends to be very specific. Obtaining such specialized data from a general audience may be difficult through crowdsourcing. For example, TinyML applications often rely on data collected from certain sensors or hardware. Crowdsourcing would require participants to have access to very specific and consistent devices - like microphones, with the same sampling rates. These hardware nuances present obstacles even for simple audio tasks like keyword spotting.


Beyond hardware, the data itself needs high granularity and quality, given the limitations of TinyML. It can be hard to ensure this when crowdsourcing from those unfamiliar with the application’s context and requirements. There are also potential issues around privacy, real-time collection, standardization, and technical expertise. Moreover, the narrow nature of many TinyML tasks means accurate data labeling requires a proper understanding of the application; participants may lack the full context needed to provide reliable annotations.


Thus, while crowdsourcing can work well in many cases, the specialized needs of TinyML introduce unique data challenges. Careful planning is required for guidelines, targeting, and quality control. For some applications, crowdsourcing may be feasible, but others may require more focused data collection efforts to obtain relevant, high-quality training data.


5.3.4 Synthetic Data


Synthetic data generation can be useful for addressing some of the data collection limitations. It involves creating data that wasn’t originally captured or observed but is generated using algorithms, simulations, or other techniques to resemble real-world data. As shown in Figure fig-synthetic-data, synthetic data is merged with historical data and then used as input for model training. It has become a valuable tool in various fields, particularly when real-world data is scarce, expensive, or ethically challenging (e.g., TinyML). Various techniques, such as Generative Adversarial Networks (GANs), can produce high-quality synthetic data almost indistinguishable from real data. These techniques have advanced significantly, making synthetic data generation increasingly realistic and reliable.


In many domains, especially emerging ones, there may not be enough real-world data available for analysis or for training machine learning models. Synthetic data can fill this gap by producing large volumes of data that mimic real-world scenarios. For instance, detecting the sound of breaking glass might be challenging in security applications where a TinyML device is trying to identify break-ins. Collecting real-world data would require breaking numerous windows, which is impractical and costly.


Moreover, having a diverse dataset is crucial in machine learning, especially in deep learning. Synthetic data can augment existing datasets by introducing variations, thereby enhancing the robustness of models. For example, SpecAugment is an excellent data augmentation technique for Automatic Speech Recognition (ASR) systems.
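As a rough illustration of the masking idea behind SpecAugment, the toy sketch below zeroes out one block of time steps and one block of frequency bins on a spectrogram represented as a plain Python grid. Real implementations operate on spectrogram tensors with randomized mask widths; the grid size and mask widths here are assumptions for readability:

```python
import random

# Apply one time mask and one frequency mask to a (time x frequency)
# grid of floats, returning a new grid.
def spec_augment(spec, time_mask=2, freq_mask=2, seed=0):
    rng = random.Random(seed)
    out = [row[:] for row in spec]
    n_t, n_f = len(out), len(out[0])
    t0 = rng.randrange(n_t - time_mask + 1)   # random mask start positions
    f0 = rng.randrange(n_f - freq_mask + 1)
    for t in range(t0, t0 + time_mask):       # mask consecutive time steps
        out[t] = [0.0] * n_f
    for row in out:                           # mask consecutive freq bins
        for f in range(f0, f0 + freq_mask):
            row[f] = 0.0
    return out

spec = [[1.0] * 6 for _ in range(8)]  # toy 8-frame, 6-bin spectrogram
aug = spec_augment(spec)
print(sum(v == 0.0 for row in aug for v in row))  # number of masked cells
```

Training on many such randomly masked copies encourages the model to rely on redundant cues rather than any single time span or frequency band.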


Privacy and confidentiality are also big issues. Datasets containing sensitive or personal information pose privacy concerns when shared or used. Synthetic data, being artificially generated, doesn’t have these direct ties to real individuals, allowing for safer use while preserving essential statistical properties.


Generating synthetic data, especially once the generation mechanisms have been established, can be a more cost-effective alternative. In the security application scenario described above, synthetic data eliminates the need to break multiple windows to gather relevant data.


Many embedded use cases deal with unique situations, such as manufacturing plants, that are difficult to simulate. Synthetic data allows researchers complete control over the data generation process, enabling the creation of specific scenarios or conditions that are challenging to capture in real life.


While synthetic data offers numerous advantages, it is essential to use it judiciously. Care must be taken to ensure that the generated data accurately represents the underlying real-world distributions and does not introduce unintended biases.

Figure 5.5: Increasing training data size with synthetic data generation. Credit: AnyLogic.

Exercise 5.3 (Synthetic Data)  


Let us learn about synthetic data generation using Generative Adversarial Networks (GANs) on tabular data. We’ll take a hands-on approach, diving into the workings of the CTGAN model and applying it to the Synthea dataset from the healthcare domain. From data preprocessing to model training and evaluation, we’ll go step-by-step, learning how to create synthetic data, assess its quality, and unlock the potential of GANs for data augmentation and real-world applications.



5.4 Data Storage


Data sourcing and data storage go hand in hand, and data must be stored in a format that facilitates easy access and processing. Depending on the use case, various kinds of data storage systems can be used to store your datasets. Some examples are shown in Table tbl-databases.

Table 5.1: Comparative overview of the database, data warehouse, and data lake.

|           | Database                      | Data Warehouse                                          | Data Lake                                                |
|-----------|-------------------------------|---------------------------------------------------------|----------------------------------------------------------|
| Purpose   | Operational and transactional | Analytical                                              | Analytical                                               |
| Data type | Structured                    | Structured                                              | Structured, semi-structured, and/or unstructured         |
| Scale     | Small to large volumes of data | Large volumes of integrated data                       | Large volumes of diverse data                            |
| Examples  | MySQL                         | Google BigQuery, Amazon Redshift, Microsoft Azure Synapse | Google Cloud Storage, AWS S3, Azure Data Lake Storage  |

The stored data is often accompanied by metadata, defined as ‘data about data.’ It provides detailed contextual information about the data, such as means of data creation, time of creation, attached data use license, etc. For example, Hugging Face has Dataset Cards. To promote responsible data use, dataset creators should disclose potential biases through the dataset cards. These cards can educate users about a dataset’s contents and limitations. The cards also give vital context on appropriate dataset usage by highlighting biases and other important details. Having this type of metadata can also allow fast retrieval if structured properly. Once the model is developed and deployed to edge devices, the storage systems can continue to store incoming data, model updates, or analytical results.
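To make this concrete, such metadata can be represented as a simple structured record alongside the dataset. The fields and values below are hypothetical, loosely modeled on the kind of information a Hugging Face Dataset Card carries:

```python
# Hypothetical dataset card stored as structured metadata next to the data.
dataset_card = {
    "name": "keyword_audio_v1",
    "created": "2024-01-15",
    "license": "CC-BY-4.0",
    "creation_method": "crowdsourced recordings, forced alignment",
    "known_biases": ["speakers skew 18-35", "English over-represented"],
    "splits": {"train": 52000, "validation": 6500, "test": 6500},
}

# Structured metadata supports fast programmatic queries, e.g. total size.
def total_examples(card):
    return sum(card["splits"].values())

print(total_examples(dataset_card))  # 65000
```

Because the fields are machine-readable, downstream tools can filter, audit, or index datasets by license, bias disclosures, or split sizes without opening the data itself.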


Data Governance: With a large amount of data storage, it is also imperative to have policies and practices (i.e., data governance) that help manage data during its life cycle, from acquisition to disposal. Data governance frames how data is managed and includes making pivotal decisions about data access and control. Figure fig-governance illustrates the different domains involved in data governance. It involves exercising authority and making decisions concerning data to uphold its quality, ensure compliance, maintain security, and derive value. Data governance is operationalized by developing policies, incentives, and penalties, cultivating a culture that perceives data as a valuable asset. Specific procedures and assigned authorities are implemented to safeguard data quality and monitor its utilization and related risks.


Data governance utilizes three integrative approaches: planning and control, organizational, and risk-based.

  • The planning and control approach, common in IT, aligns business and technology through annual cycles and continuous adjustments, focusing on policy-driven, auditable governance.

  • The organizational approach emphasizes structure, establishing authoritative roles like Chief Data Officers and ensuring responsibility and accountability in governance.

  • The risk-based approach, intensified by AI advancements, focuses on identifying and managing inherent risks in data and algorithms. It especially addresses AI-specific issues through regular assessments and proactive risk management strategies, allowing for incidental and preventive actions to mitigate undesired algorithm impacts.
Figure 5.6: An overview of the data governance framework. Credit: StarCIO.

Some examples of data governance across different sectors include:

  • Medicine: Health Information Exchanges (HIEs) enable the sharing of health information across different healthcare providers to improve patient care. They implement strict data governance practices to maintain data accuracy, integrity, privacy, and security, complying with regulations such as the Health Insurance Portability and Accountability Act (HIPAA). Governance policies ensure that patient data is only shared with authorized entities and that patients can control access to their information.

  • Finance: The Basel III Framework (https://www.bis.org/bcbs/basel3.htm) is an international regulatory framework for banks. It ensures that banks establish clear policies, practices, and responsibilities for data management, ensuring data accuracy, completeness, and timeliness. Not only does it enable banks to meet regulatory compliance, but it also prevents financial crises by more effectively managing risks.

  • Government: Government agencies managing citizen data, public records, and administrative information implement data governance to manage data transparently and securely. The Social Security System in the US and the Aadhar system in India are good examples of such governance systems.

Special data storage considerations for TinyML


Efficient Audio Storage Formats: Keyword spotting systems need specialized audio storage formats to enable quick keyword searching in audio data. Traditional formats like WAV and MP3 store full audio waveforms, which require extensive processing to search through. Keyword spotting uses compressed storage optimized for snippet-based search. One approach is to store compact acoustic features instead of raw audio. Such a workflow would involve:

  • Extracting acoustic features: Mel-frequency cepstral coefficients (MFCCs) commonly represent important audio characteristics.

  • Creating embeddings: Embeddings transform extracted acoustic features into continuous vector spaces, enabling more compact and representative data storage. This representation is essential in converting high-dimensional data, like audio, into a more manageable and efficient format for computation and storage.

  • Vector quantization: This technique represents high-dimensional data, like embeddings, with lower-dimensional vectors, reducing storage needs. Initially, a codebook is generated from the training data to define a set of code vectors representing the original data vectors. Subsequently, each data vector is matched to the nearest codeword according to the codebook, ensuring minimal information loss.

  • Sequential storage: The audio is fragmented into short frames, and the quantized features (or embeddings) for each frame are stored sequentially to maintain the temporal order, preserving the coherence and context of the audio data.

This format enables decoding the features frame-by-frame for keyword matching. Searching the features is faster than decompressing the full audio.
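The quantization and sequential-storage steps can be sketched in a few lines. The two-dimensional codebook and per-frame feature vectors below are toy values; a real codebook would be learned (e.g., with k-means) from MFCC features of the training data:

```python
# Vector quantization: replace each feature vector by the index of its
# nearest codebook entry, so each frame is stored as one small integer.
def nearest_codeword(vec, codebook):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(vec, codebook[i]))

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # hypothetical learned codebook
frames = [(0.1, 0.1), (0.9, 0.2), (0.2, 0.8)]    # toy per-frame features

# Sequential storage: one index per frame, kept in time order.
stored = [nearest_codeword(f, codebook) for f in frames]
print(stored)  # [0, 1, 2]
```

Keyword matching then compares stored index sequences (or their decoded code vectors) frame by frame, which is far cheaper than decompressing full waveforms.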


Selective Network Output Storage: Another technique for reducing storage is to discard the intermediate audio features stored during training but not required during inference. The network is run on full audio during training. However, only the final outputs are stored during inference.


5.5 Data Processing


Data processing refers to the steps involved in transforming raw data into a format suitable for feeding into machine learning algorithms. It is a crucial stage in any ML workflow, yet often overlooked. With proper data processing, ML models are more likely to achieve optimal performance. Figure fig-data-engineering shows a breakdown of a data scientist’s time allocation, highlighting the significant portion (60%) spent on data cleaning and organizing.

Figure 5.7: Data scientists’ tasks breakdown by time spent. Credit: Forbes.

Proper data cleaning is a crucial step that directly impacts model performance. Real-world data is often dirty, containing errors, missing values, noise, anomalies, and inconsistencies. Data cleaning involves detecting and fixing these issues to prepare high-quality data for modeling. By carefully selecting appropriate techniques, data scientists can improve model accuracy, reduce overfitting, and enable algorithms to learn more robust patterns. Overall, thoughtful data processing allows machine learning systems to uncover insights better and make predictions from real-world data.


Data often comes from diverse sources and can be unstructured or semi-structured. Thus, processing and standardizing it is essential, ensuring it adheres to a uniform format. Such transformations may include:

  • Normalizing numerical variables
  • Encoding categorical variables
  • Using techniques like dimensionality reduction
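In plain Python, minimal versions of the first two transformations look like this (toy values; libraries such as scikit-learn provide robust, production-grade implementations):

```python
# Min-max normalization: rescale numerical values to the [0, 1] range.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# One-hot encoding: represent a categorical value as a binary vector.
def one_hot_encode(value, categories):
    return [1 if value == c else 0 for c in categories]

temps = [10.0, 20.0, 30.0]
print(min_max_normalize(temps))                         # [0.0, 0.5, 1.0]
print(one_hot_encode("red", ["red", "green", "blue"]))  # [1, 0, 0]
```

Applying the same fitted transformation to training and inference data is essential; mixing scales between the two is a common source of silent accuracy loss.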

Data validation serves a broader role than ensuring adherence to certain standards, like preventing temperature values from falling below absolute zero. These issues arise in TinyML because sensors may malfunction or temporarily produce incorrect readings; such transients are not uncommon. Therefore, it is imperative to catch data errors early before they propagate through the data pipeline. Rigorous validation processes, including verifying the initial annotation practices, detecting outliers, and handling missing values through techniques like mean imputation, contribute directly to the quality of datasets. This, in turn, impacts the performance, fairness, and safety of the models trained on them. Let’s take a look at Figure fig-data-engineering-kws2 for an example of a data processing pipeline. In the context of TinyML, the Multilingual Spoken Words Corpus (MSWC) is an example of data processing pipelines: systematic and automated workflows for data transformation, storage, and processing. The input data (a collection of short recordings) goes through several phases of processing, such as audio-word alignment and keyword extraction. By streamlining the data flow, from raw data to usable datasets, data pipelines enhance productivity and facilitate the rapid development of machine learning models. The MSWC is an expansive and expanding collection of audio recordings of spoken words in 50 different languages, which are collectively used by over 5 billion people. This dataset is intended for academic study and business uses in areas like keyword identification and speech-based search. It is openly licensed under Creative Commons Attribution 4.0 for broad usage.

Figure 5.8: An overview of the Multilingual Spoken Words Corpus (MSWC) data processing pipeline. Credit: Mazumder et al. (2021).

Mazumder, Mark, Sharad Chitlangia, Colby Banbury, Yiping Kang, Juan Manuel Ciro, Keith Achorn, Daniel Galvez, et al. 2021. “Multilingual Spoken Words Corpus.” In Thirty-Fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
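The early validation checks described above (range checks against physical limits, plus mean imputation for missing or invalid readings) can be sketched as follows; the sensor values and the -999 error code are illustrative assumptions:

```python
# Flag physically impossible temperature readings (at or below absolute
# zero, -273.15 C) or missing values, and fill them with the mean of the
# valid readings (mean imputation).
ABSOLUTE_ZERO_C = -273.15

def clean_readings(readings):
    valid = [r for r in readings if r is not None and r > ABSOLUTE_ZERO_C]
    mean = sum(valid) / len(valid)
    cleaned, flagged = [], 0
    for r in readings:
        if r is None or r <= ABSOLUTE_ZERO_C:
            cleaned.append(mean)  # impute with the mean of valid readings
            flagged += 1
        else:
            cleaned.append(r)
    return cleaned, flagged

data = [21.5, None, 22.0, -999.0, 20.5]  # -999.0: hypothetical sensor error code
cleaned, flagged = clean_readings(data)
print(cleaned, flagged)
```

Running such checks at the point of ingestion keeps transient sensor faults from contaminating everything downstream in the pipeline.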

The MSWC used a forced alignment method to automatically extract individual word recordings to train keyword-spotting models from the Common Voice project, which features crowdsourced sentence-level recordings. Forced alignment refers to long-standing methods in speech processing that predict when speech phenomena like syllables, words, or sentences start and end within an audio recording. In the MSWC data, crowdsourced recordings often feature background noises, such as static and wind. Depending on the model’s requirements, these noises can be removed or intentionally retained.


Maintaining the integrity of the data infrastructure is a continuous endeavor. This encompasses data storage, security, error handling, and stringent version control. Periodic updates are crucial, especially in dynamic realms like keyword spotting, to adjust to evolving linguistic trends and device integrations.


There is a boom in data processing pipelines, commonly found in ML operations toolchains, which we will discuss in the MLOps chapter. Briefly, these include frameworks like MLOps by Google Cloud. It provides methods for automation and monitoring at all steps of ML system construction, including integration, testing, releasing, deployment, and infrastructure management. Several mechanisms focus on data processing, an integral part of these systems.


Exercise 5.4 (Data Processing)  


Let us explore two significant projects in speech data processing and machine learning. The MSWC is a vast audio dataset with over 340,000 keywords and 23.4 million 1-second spoken examples. It’s used in various applications like voice-enabled devices and call center automation. The Few-Shot Keyword Spotting project introduces a new approach for keyword spotting across different languages, achieving impressive results with minimal training data. We’ll delve into the MSWC dataset, learn how to structure it effectively, and then train a few-shot keyword-spotting model. Let’s get started!



5.6 Data Labeling


Data labeling is important in creating high-quality training datasets for machine learning models. Labels provide ground truth information, allowing models to learn relationships between inputs and desired outputs. This section covers key considerations for selecting label types, formats, and content to capture the necessary information for tasks. It discusses common annotation approaches, from manual labeling to crowdsourcing to AI-assisted methods, and best practices for ensuring label quality through training, guidelines, and quality checks. We also emphasize the ethical treatment of human annotators. The integration of AI to accelerate and augment human annotation is also explored. Understanding labeling needs, challenges, and strategies is essential for constructing reliable, useful datasets to train performant, trustworthy machine learning systems.


5.6.1 Label Types


Labels capture information about key tasks or concepts. Figure fig-labels includes some common label types: a “classification label” is used for categorizing images with labels (labeling an image with “dog” if it features a dog); a “bounding box” identifies object location (drawing a box around the dog); a “segmentation map” classifies objects at the pixel level (highlighting the dog in a distinct color); a “caption” provides descriptive annotations (describing the dog’s actions, position, color, etc.); and a “transcript” denotes audio content. The choice of label format depends on the use case and resource constraints, as more detailed labels require greater effort to collect (Johnson-Roberson et al. (2017)).

Johnson-Roberson, Matthew, Charles Barto, Rounak Mehta, Sharath Nittur Sridhar, Karl Rosaen, and Ram Vasudevan. 2017. “Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?” In 2017 IEEE International Conference on Robotics and Automation (ICRA), 746–53. Singapore, Singapore: IEEE. https://doi.org/10.1109/icra.2017.7989092.

Figure 5.9: An overview of common label types.

Unless focused on self-supervised learning, a dataset will likely provide labels addressing one or more tasks of interest. Given their unique resource constraints, dataset creators must consider what information labels should capture and how they can practically obtain the necessary labels. Creators must first decide what type(s) of content labels should capture. For example, a creator interested in car detection would want to label cars in their dataset. Still, they might also consider whether to simultaneously collect labels for other tasks that the dataset could potentially be used for, such as pedestrian detection.


Additionally, annotators can provide metadata that provides insight into how the dataset represents different characteristics of interest (see sec-data-transparency). The Common Voice dataset, for example, includes various types of metadata that provide information about the speakers, recordings, and dataset quality for each language represented (Ardila et al. (2020)). They include demographic splits showing the number of recordings by speaker age range and gender. This allows us to see who contributed recordings for each language. They also include statistics like average recording duration and total hours of validated recordings. These give insights into the nature and size of the datasets for each language. Additionally, quality control metrics like the percentage of recordings that have been validated are useful to know how complete and clean the datasets are. The metadata also includes normalized demographic splits scaled to 100% for comparison across languages. This highlights representation differences between higher and lower resource languages.

Ardila, Rosana, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. “Common Voice: A Massively-Multilingual Speech Corpus.” In Proceedings of the Twelfth Language Resources and Evaluation Conference, 4218–22. Marseille, France: European Language Resources Association. https://aclanthology.org/2020.lrec-1.520.

Next, creators must determine the format of those labels. For example, a creator interested in car detection might choose between binary classification labels that say whether a car is present, bounding boxes that show the general locations of any cars, or pixel-wise segmentation labels that show the exact location of each car. Their choice of label format may depend on their use case and resource constraints, as finer-grained labels are typically more expensive and time-consuming to acquire.
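The three label formats for the car-detection example can be pictured as data structures; the structures below are illustrative, not a standard annotation format:

```python
# Coarsest and cheapest: a single image-level class label.
classification = {"image": "street_042.jpg", "label": "car_present"}

# Intermediate: bounding boxes give approximate object locations.
bounding_box = {
    "image": "street_042.jpg",
    "boxes": [{"label": "car", "x": 40, "y": 60, "w": 120, "h": 80}],
}

# Finest and most expensive: per-pixel class ids (0 = background,
# 1 = car), shown here on a tiny 4x4 image for readability.
segmentation = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

car_pixels = sum(v == 1 for row in segmentation for v in row)
print(car_pixels)  # 4
```

Moving down this list, each format carries strictly more spatial information and costs proportionally more annotator time per image.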


5.6.2 Annotation Methods


Common annotation approaches include manual labeling, crowdsourcing, and semi-automated techniques. Manual labeling by experts yields high quality but lacks scalability. Crowdsourcing enables non-experts to distribute annotation, often through dedicated platforms (Sheng and Zhang (2019)). Weakly supervised and programmatic methods can reduce manual effort by heuristically or automatically generating labels (Ratner et al. (2018)).

Sheng, Victor S., and Jing Zhang. 2019. “Machine Learning with Crowdsourcing: A Brief Summary of the Past Research and Future Directions.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 9837–43. https://doi.org/10.1609/aaai.v33i01.33019837.

Ratner, Alex, Braden Hancock, Jared Dunnmon, Roger Goldman, and Christopher Ré. 2018. “Snorkel MeTaL: Weak Supervision for Multi-Task Learning.” In Proceedings of the Second Workshop on Data Management for End-to-End Machine Learning. ACM. https://doi.org/10.1145/3209889.3209898.

After deciding on their labels’ desired content and format, creators begin the annotation process. To collect large numbers of labels from human annotators, creators frequently rely on dedicated annotation platforms, which can connect them to teams of human annotators. When using these platforms, creators may have limited insight into annotators’ backgrounds and experience levels with topics of interest. However, some platforms offer access to annotators with specific expertise (e.g., doctors).


5.6.3 Ensuring Label Quality


There is no guarantee that the data labels are actually correct. Figure fig-hard-labels shows some examples of hard labeling cases: some errors arise from blurred pictures that make them hard to identify (the frog image), and others stem from a lack of domain knowledge (the black stork case). It is possible that despite the best instructions being given to labelers, they still mislabel some images (Northcutt, Athalye, and Mueller (2021)). Strategies like quality checks, training annotators, and collecting multiple labels per datapoint can help ensure label quality. For ambiguous tasks, multiple annotators can help identify controversial datapoints and quantify disagreement levels.

Figure 5.10: Some examples of hard labeling cases. Credit: Northcutt, Athalye, and Mueller (2021).

Northcutt, Curtis G., Anish Athalye, and Jonas Mueller. 2021. “Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks.” arXiv. https://doi.org/10.48550/arXiv.2103.14749.
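Quantifying disagreement across multiple annotators can be as simple as computing per-item agreement and flagging low-agreement items for expert review; the filenames, labels, and 0.75 threshold below are illustrative assumptions:

```python
# Fraction of annotators who chose the most common label for an item.
def agreement(labels):
    best = max(labels.count(l) for l in set(labels))
    return best / len(labels)

annotations = {
    "frog_07.jpg": ["frog", "toad", "frog", "frog"],     # 0.75 agreement
    "stork_12.jpg": ["stork", "crane", "heron", "crane"],  # 0.5 agreement
}

# Items below the threshold are routed to a domain expert.
controversial = [k for k, v in annotations.items() if agreement(v) < 0.75]
print(controversial)  # ['stork_12.jpg']
```

More formal statistics such as Cohen's or Fleiss' kappa correct this raw agreement for chance, but the triage logic is the same.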

When working with human annotators, offering fair compensation and otherwise prioritizing ethical treatment is important, as annotators can be exploited or otherwise harmed during the labeling process (Perrigo, 2023). For example, if a dataset is likely to contain disturbing content, annotators may benefit from having the option to view images in grayscale (Google (n.d.)).

Google. n.d. “Information Quality Content Moderation.” https://blog.google/documents/83/.

5.6.4 AI-Assisted Annotation


ML has an insatiable demand for data, which raises the question of how to obtain more labeled data. Rather than always generating and curating data manually, we can rely on existing AI models to help label datasets more quickly and cheaply, though often with lower quality than human annotation. This can be done in various ways as shown in Figure fig-weak-supervision, including the following:

+
    +
  • Pre-annotation: AI models can generate preliminary labels for a dataset using methods such as semi-supervised learning (Chapelle, Scholkopf, and Zien (2009)), which humans can then review and correct. This can save a significant amount of time, especially for large datasets.
  • +
  • Active learning: AI models can identify the most informative data points in a dataset, which can then be prioritized for human annotation. This can help improve the labeled dataset’s quality while reducing the overall annotation time.
  • +
  • Quality control: AI models can identify and flag potential errors in human annotations, helping to ensure the accuracy and consistency of the labeled dataset.
  • +
+
+Chapelle, O., B. Scholkopf, and A. Zien, eds. 2009. “Semi-Supervised Learning (Chapelle, O. Et Al., Eds.; 2006) [Book Reviews].” IEEE Trans. Neural Networks 20 (3): 542–42. https://doi.org/10.1109/tnn.2009.2015974. +
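As a sketch of the active-learning idea, least-confidence sampling ranks unlabeled samples by the model's top predicted probability and prioritizes the least certain ones for human annotation (the probabilities below are made up for illustration):

```python
import numpy as np

def least_confident(probs, k):
    """Pick the k samples whose top predicted probability is lowest,
    i.e., where the model is least sure, to prioritize for human labeling."""
    confidence = probs.max(axis=1)          # top class probability per sample
    return np.argsort(confidence)[:k]       # indices of least-confident samples

# Hypothetical model outputs: rows are unlabeled samples, columns class probs.
probs = np.array([
    [0.98, 0.01, 0.01],  # model is confident
    [0.40, 0.35, 0.25],  # model is uncertain: worth a human label
    [0.70, 0.20, 0.10],
])
print(least_confident(probs, 1))  # → [1]
```

Other acquisition functions (entropy, margin sampling) follow the same pattern: score each unlabeled sample, then send the top-scoring ones to annotators.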

Here are some examples of how AI-assisted annotation has been proposed to be useful:

+
    +
  • Medical imaging: AI-assisted annotation labels medical images, such as MRI scans and X-rays (Krishnan, Rajpurkar, and Topol (2022)). Carefully annotating medical datasets is extremely challenging, especially at scale, since domain experts are scarce and costly. This can help to train AI models to diagnose diseases and other medical conditions more accurately and efficiently.
    +
  • +
  • Self-driving cars: AI-assisted annotation is being used to label images and videos from self-driving cars. This can help to train AI models to identify objects on the road, such as other vehicles, pedestrians, and traffic signs.
  • +
  • Social media: AI-assisted annotation labels social media posts like images and videos. This can help to train AI models to identify and classify different types of content, such as news, advertising, and personal posts.
  • +
+
+Krishnan, Rayan, Pranav Rajpurkar, and Eric J. Topol. 2022. “Self-Supervised Learning in Medicine and Healthcare.” Nat. Biomed. Eng. 6 (12): 1346–52. https://doi.org/10.1038/s41551-022-00914-1. +
+
+
+ +
+
+Figure 5.11: Strategies for acquiring additional labeled training data. Credit: Stanford AI Lab. +
+
+
+
+
+
+

5.7 Data Version Control

+

Production systems are perpetually inundated with fluctuating and escalating volumes of data, prompting the rapid emergence of numerous data replicas. This increasing data serves as the foundation for training machine learning models. For instance, a global sales company engaged in sales forecasting continuously receives consumer behavior data. Similarly, healthcare systems formulating predictive models for disease diagnosis are consistently acquiring new patient data. TinyML applications, such as keyword spotting, are similarly data-hungry in terms of the amount of data they generate. Consequently, meticulous tracking of data versions and the corresponding model performance is imperative.

+

Data Version Control offers a structured methodology to handle alterations and versions of datasets efficiently. It facilitates monitoring modifications, preserves multiple versions, and guarantees reproducibility and traceability in data-centric projects. Furthermore, data version control provides the versatility to review and utilize specific versions as needed, ensuring that each stage of the data processing and model development can be revisited and audited precisely and easily. It has a variety of practical uses:

+

Risk Management: Data version control allows transparency and accountability by tracking dataset versions.

+

Collaboration and Efficiency: Easy access to different dataset versions in one place can improve data sharing of specific checkpoints and enable efficient collaboration.

+

Reproducibility: Data version control allows for tracking the performance of models against different versions of the data, thereby enabling reproducibility.

+

Key Concepts

+
    +
  • Commits: A commit is an immutable snapshot of the data at a specific point in time, representing a unique version. Every commit is associated with a unique identifier that allows it to be referenced, compared, and restored later.

  • +
  • Branches: Branching allows developers and data scientists to diverge from the main development line and continue to work independently without affecting other branches. This is especially useful when experimenting with new features or models, enabling parallel development and experimentation without the risk of corrupting the stable main branch.

  • +
  • Merges: Merges help to integrate changes from different branches while maintaining the integrity of the data.

  • +
+
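A toy sketch of these concepts: commits as immutable, content-addressed snapshots chained together by parent identifiers. This is purely illustrative; real tools such as DVC or lakeFS add storage management, branching, and merging at scale.

```python
import hashlib
import json

class DataRepo:
    """Toy illustration of data versioning: commits are immutable snapshots
    identified by a hash of their content and parent (illustrative only)."""
    def __init__(self):
        self.commits = {}              # commit id -> snapshot record
        self.branches = {"main": None} # branch name -> latest commit id

    def commit(self, branch, dataset):
        parent = self.branches[branch]
        snapshot = json.dumps(dataset, sort_keys=True)
        # Unique identifier derived from content plus lineage.
        commit_id = hashlib.sha256((snapshot + str(parent)).encode()).hexdigest()[:12]
        self.commits[commit_id] = {"data": dataset, "parent": parent}
        self.branches[branch] = commit_id
        return commit_id

    def checkout(self, commit_id):
        """Revert to any previous version by its identifier."""
        return self.commits[commit_id]["data"]

repo = DataRepo()
v1 = repo.commit("main", {"samples": 1000})
v2 = repo.commit("main", {"samples": 1200})
assert repo.checkout(v1) == {"samples": 1000}  # older versions stay retrievable
```

Because each commit records its parent, the full history can be walked backwards, which is what makes reverting and auditing possible.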

With data version control in place, we can track the changes shown in Figure fig-data-version-ctrl, reproduce previous results by reverting to older versions, and collaborate safely by branching off and isolating the changes.

+
+
+
+ +
+
+Figure 5.12: Data versioning. +
+
+
+

Popular Data Version Control Systems

+

DVC: Short for Data Version Control, DVC is an open-source, lightweight tool that works on top of Git and supports all kinds of data formats. It can seamlessly integrate into the workflow if Git is used to manage code. It captures the versions of data and models in the Git commits while storing them on-premises or in the cloud (e.g., AWS, Google Cloud, Azure). These data and models (e.g., ML artifacts) are defined in metadata files, which get updated in every commit. It allows metrics tracking of models on different versions of the data.

+

lakeFS: An open-source tool that supports data version control on data lakes. It supports many Git-like operations, such as branching and merging of data, as well as reverting to previous versions of the data. It also has a unique UI feature, making exploring and managing data much easier.

+

Git LFS: Useful for data version control on smaller datasets. It uses Git’s built-in branching and merging features but is limited in tracking metrics, reverting to previous versions, or integrating with data lakes.

+
+
+

5.8 Optimizing Data for Embedded AI

+

Creators working on embedded systems may have unusual priorities when cleaning their datasets. On the one hand, models may be developed for unusually specific use cases, requiring heavy filtering of datasets. While other natural language models may be capable of turning any speech into text, a model for an embedded system may be focused on a single limited task, such as detecting a keyword. As a result, creators may aggressively filter out large amounts of data because they only need to address a narrow task of interest. An embedded AI system may also be tied to specific hardware devices or environments. For example, a video model may need to process images from a single type of camera, which will only be mounted on doorbells in residential neighborhoods. In this scenario, creators may discard images if they come from a different kind of camera, show the wrong type of scenery, or were taken from the wrong height or angle.
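The kind of filtering described above can be sketched as a simple predicate over per-sample metadata; the field names and thresholds below are hypothetical, not from a real schema:

```python
# Hypothetical filtering rules for a doorbell-camera dataset; the metadata
# fields ("camera_model", "scene", "mount_height_m") are illustrative.
def keep(sample):
    """Keep only samples matching the narrow deployment scenario."""
    return (
        sample["camera_model"] == "doorbell-v2"
        and sample["scene"] == "residential"
        and 1.0 <= sample["mount_height_m"] <= 2.0  # typical doorbell height
    )

raw = [
    {"camera_model": "doorbell-v2", "scene": "residential", "mount_height_m": 1.4},
    {"camera_model": "dashcam",     "scene": "road",        "mount_height_m": 1.2},
]
dataset = [s for s in raw if keep(s)]
print(len(dataset))  # → 1
```

Encoding the filter as an explicit, versioned function also documents exactly what was excluded, which helps later audits of the dataset.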

+

On the other hand, embedded AI systems are often expected to provide especially accurate performance in unpredictable real-world settings. This may lead creators to design datasets to represent variations in potential inputs and promote model robustness. As a result, they may define a narrow scope for their project but then aim for deep coverage within those bounds. For example, creators of the doorbell model mentioned above might try to cover variations in data arising from:

+
    +
  • Geographically, socially, and architecturally diverse neighborhoods
  • +
  • Different types of artificial and natural lighting
  • +
  • Different seasons and weather conditions
  • +
  • Obstructions (e.g., raindrops or delivery boxes obscuring the camera’s view)
  • +
+

As described above, creators may consider crowdsourcing or synthetically generating data to include these variations.

+
+
+

5.9 Data Transparency

+

By providing clear, detailed documentation, creators can help developers understand how best to use their datasets. Several groups have suggested standardized documentation formats for datasets, such as Data Cards (Pushkarna, Zaldivar, and Kjartansson (2022)), datasheets (Gebru et al. (2021)), data statements (Bender and Friedman (2018)), or Data Nutrition Labels (Holland et al. (2020)). When releasing a dataset, creators may describe what kinds of data they collected, how they collected and labeled it, and what kinds of use cases may be a good or poor fit for the dataset. Quantitatively, it may be appropriate to show how well the dataset represents different groups (e.g., different gender groups, different cameras).

+
+Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. “Datasheets for Datasets.” Commun. ACM 64 (12): 86–92. https://doi.org/10.1145/3458723. +
+Bender, Emily M., and Batya Friedman. 2018. “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science.” Transactions of the Association for Computational Linguistics 6 (December): 587–604. https://doi.org/10.1162/tacl_a_00041. +
+Holland, Sarah, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. 2020. “The Dataset Nutrition Label: A Framework to Drive Higher Data Quality Standards.” In Data Protection and Privacy. Hart Publishing. https://doi.org/10.5040/9781509932771.ch-001. +

Figure fig-data-card shows an example of a data card for a computer vision (CV) dataset. It includes some basic information about the dataset and instructions on how to use it, including known biases.

+
+
+
+ +
+
+Figure 5.13: Data card describing a CV dataset. Credit: Pushkarna, Zaldivar, and Kjartansson (2022). +
+
+Pushkarna, Mahima, Andrew Zaldivar, and Oddur Kjartansson. 2022. “Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI.” In 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM. https://doi.org/10.1145/3531146.3533231. +
+
+
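As an illustration, the kind of information a data card records can be sketched as a small structured document with a programmatic completeness check. The field names below are hypothetical, and real templates such as Data Cards or datasheets for datasets are considerably more detailed:

```python
# Hypothetical, minimal data-card fields; real documentation templates
# (Data Cards, datasheets, data statements) are far richer than this sketch.
data_card = {
    "name": "doorbell-cam-images",
    "version": "1.2.0",
    "collection": "Residential doorbell cameras, collected with consent",
    "labeling": "Two annotators per image; disagreements adjudicated",
    "intended_use": ["person detection at doorbells"],
    "out_of_scope": ["surveillance of public spaces", "face recognition"],
    "known_biases": "Over-represents suburban daytime scenes",
    "maintenance": "Quarterly updates; errata accepted via issue tracker",
}

def check_card(card):
    """Fail fast if required documentation fields are missing."""
    required = {"name", "version", "intended_use", "known_biases"}
    missing = required - card.keys()
    if missing:
        raise ValueError(f"data card missing fields: {sorted(missing)}")
    return True

assert check_card(data_card)
```

Validating documentation mechanically, as part of a release pipeline, is one way to keep data cards from silently going stale as the dataset evolves.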

Keeping track of data provenance (essentially the origins and the journey of each data point through the data pipeline) is not merely a good practice but an essential requirement for data quality. Data provenance contributes significantly to the transparency of machine learning systems. Transparent systems make it easier to scrutinize data points, enabling better identification and rectification of errors, biases, or inconsistencies. For instance, if an ML model trained on medical data is underperforming in particular areas, tracing the provenance can help identify whether the issue is with the data collection methods, the demographic groups represented in the data, or other factors. This level of transparency doesn’t just help debug the system but also plays a crucial role in enhancing the overall data quality. By improving the reliability and credibility of the dataset, data provenance also enhances the model’s performance and its acceptability among end-users.

+

When producing documentation, creators should also specify how users can access the dataset and how the dataset will be maintained over time. For example, users may need to undergo training or receive special permission from the creators before accessing a protected information dataset, as with many medical datasets. In some cases, users may not access the data directly. Instead, they must submit their model to be trained on the dataset creators’ hardware, following a federated learning setup (Aledhari et al. (2020)). Creators may also describe how long the dataset will remain accessible, how the users can submit feedback on any errors they discover, and whether there are plans to update the dataset.

+
+Aledhari, Mohammed, Rehma Razzak, Reza M. Parizi, and Fahad Saeed. 2020. “Federated Learning: A Survey on Enabling Technologies, Protocols, and Applications.” #IEEE_O_ACC# 8: 140699–725. https://doi.org/10.1109/access.2020.3013541. +

Some laws and regulations also promote data transparency through new requirements for organizations:

+
    +
  • General Data Protection Regulation (GDPR) in the European Union: It establishes strict requirements for processing and protecting the personal data of EU citizens. It mandates plain-language privacy policies that clearly explain what data is collected, why it is used, how long it is stored, and with whom it is shared. GDPR also mandates that privacy notices must include details on the legal basis for processing, data transfers, retention periods, rights to access and deletion, and contact info for data controllers.
  • +
  • California’s Consumer Privacy Act (CCPA): CCPA requires clear privacy policies and opt-out rights for the sale of personal data. Significantly, it also establishes rights for consumers to request that their specific data be disclosed. Businesses must provide copies of collected personal information and details on what it is used for, what categories are collected, and what third parties receive it. Consumers can also flag data points they believe are inaccurate. The law represents a major step forward in empowering personal data access.
  • +
+

Ensuring data transparency presents several challenges, especially because it requires significant time and financial resources. Data systems are also quite complex, and achieving full transparency can take time. Full transparency may also overwhelm consumers with too much detail. Finally, it is important to balance the tradeoff between transparency and privacy.

+
+
+

5.10 Licensing

+

Many high-quality datasets either come from proprietary sources or contain copyrighted information. This introduces licensing as a challenging legal domain. Companies eager to train ML systems must engage in negotiations to obtain licenses that grant legal access to these datasets. Furthermore, licensing terms can impose restrictions on data applications and sharing methods. Failure to comply with these licenses can have severe consequences.

+

For instance, ImageNet, one of the most extensively utilized datasets for computer vision research, is a case in point. Most of its images were procured from public online sources without explicit permission, sparking ethical concerns (Prabhu and Birhane, 2020). For corporations, accessing the ImageNet dataset requires registration and adherence to its terms of use, which restrict commercial usage (ImageNet, 2021). Major players like Google and Microsoft invest significantly in licensing datasets to enhance their ML vision systems. However, the cost factor restricts accessibility for researchers from smaller companies with constrained budgets.

+

The legal domain of data licensing has seen major cases that help define fair use parameters. A prominent example is Authors Guild, Inc. v. Google, Inc. This 2005 lawsuit alleged that Google’s book scanning project infringed copyrights by displaying snippets without permission. However, the courts ultimately ruled in Google’s favor, upholding fair use based on the transformative nature of creating a searchable index and showing limited text excerpts. This precedent provides some legal grounds for arguing fair use protections apply to indexing datasets and generating representative samples for machine learning. However, license restrictions remain binding, so a comprehensive analysis of licensing terms is critical. The case demonstrates why negotiations with data providers are important to enable legal usage within acceptable bounds.

+

New Data Regulations and Their Implications

+

New data regulations also impact licensing practices. The legislative landscape is evolving with regulations like the EU’s Artificial Intelligence Act, which is poised to regulate AI system development and use within the European Union (EU). This legislation:

+
    +
  1. Classifies AI systems by risk.

  2. +
  3. Mandates development and usage prerequisites.

  4. +
  5. Emphasizes data quality, transparency, human oversight, and accountability.

  6. +
+

Additionally, the EU Act addresses the ethical dimensions and operational challenges in sectors such as healthcare and finance. Key elements include the prohibition of AI systems posing "unacceptable" risks, stringent conditions for high-risk systems, and minimal obligations for "limited risk" AI systems. The proposed European AI Board will oversee and ensure the efficient implementation of the regulation.

+

Challenges in Assembling ML Training Datasets

+

Complex licensing issues around proprietary data, copyright law, and privacy regulations constrain options for assembling ML training datasets. However, expanding accessibility through more open licensing or public-private data collaborations could greatly accelerate industry progress and ethical standards.

+

Sometimes, certain portions of a dataset may need to be removed or obscured to comply with data usage agreements or protect sensitive information. For example, a dataset of user information may have names, contact details, and other identifying data that may need to be removed from the dataset; this is well after the dataset has already been actively sourced and used for training models. Similarly, a dataset that includes copyrighted content or trade secrets may need to filter out those portions before being distributed. Laws such as the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Amended Act on the Protection of Personal Information (APPI) have been passed to guarantee the right to be forgotten. These regulations legally require model providers to erase user data upon request.

+

Data collectors and providers need to be able to take appropriate measures to de-identify or filter out any proprietary, licensed, confidential, or regulated information as needed. Sometimes, the users may explicitly request that their data be removed.

+

The ability to update the dataset by removing data from it will enable the creators to uphold legal and ethical obligations around data usage and privacy. However, the ability to remove data has some important limitations. We must consider that some models may have already been trained on the dataset, and there is no clear or known way to eliminate a particular data sample’s effect from the trained network. There is no erase mechanism. This raises the question: should the model be retrained from scratch each time a sample is removed? That is a costly option. Once data has been used to train a model, simply removing it from the original dataset may not fully eliminate its impact on the model’s behavior. New research is needed around the effects of data removal on already-trained models and whether full retraining is necessary to avoid retaining artifacts of deleted data. This presents an important consideration when balancing data licensing obligations with efficiency and practicality in an evolving, deployed ML system.

+

Dataset licensing is a multifaceted domain that intersects technology, ethics, and law. Understanding these intricacies becomes paramount for anyone building datasets during data engineering as the world evolves.

+
+
+

5.11 Conclusion

+

Data is the fundamental building block of AI systems. Without quality data, even the most advanced machine learning algorithms will fail. Data engineering encompasses the end-to-end process of collecting, storing, processing, and managing data to fuel the development of machine learning models. It begins with clearly defining the core problem and objectives, which guides effective data collection. Data can be sourced from diverse means, including existing datasets, web scraping, crowdsourcing, and synthetic data generation. Each approach involves tradeoffs between cost, speed, privacy, and specificity. Once data is collected, thoughtful labeling through manual or AI-assisted annotation enables the creation of high-quality training datasets. Proper storage in databases, warehouses, or lakes facilitates easy access and analysis. Metadata provides contextual details about the data. Data processing transforms raw data into a clean, consistent format for machine learning model development. Throughout this pipeline, transparency through documentation and provenance tracking is crucial for ethics, auditability, and reproducibility. Data licensing protocols also govern legal data access and use. Key challenges in data engineering include privacy risks, representation gaps, legal restrictions around proprietary data, and the need to balance competing constraints like speed versus quality. By thoughtfully engineering high-quality training data, machine learning practitioners can develop accurate, robust, and responsible AI systems, including embedded and TinyML applications.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/dl_primer/dl_primer.html b/contents/dl_primer/dl_primer.html new file mode 100644 index 00000000..2f0e7b15 --- /dev/null +++ b/contents/dl_primer/dl_primer.html @@ -0,0 +1,1463 @@ + + + + + + + + + +Machine Learning Systems - 3  Deep Learning Primer + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

3  Deep Learning Primer

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Photo of a classic classroom with a large blackboard dominating one wall. Chalk drawings showcase a detailed deep neural network with several hidden layers, and each node and connection is precisely labeled with white chalk. The rustic wooden floor and brick walls provide a contrast to the modern concepts. Surrounding the room, posters mounted on frames emphasize deep learning themes: convolutional networks, transformers, neurons, activation functions, and more.
+
+
+

This section briefly introduces deep learning, starting with an overview of its history, applications, and relevance to embedded AI systems. It examines the core concepts like neural networks, highlighting key components like perceptrons, multilayer perceptrons, activation functions, and computational graphs. The primer also briefly explores major deep learning architectures, contrasting their applications and uses. Additionally, it compares deep learning to traditional machine learning to equip readers with the general conceptual building blocks to make informed choices between deep learning and traditional ML techniques based on problem constraints, setting the stage for more advanced techniques and applications that will follow in subsequent chapters.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand the basic concepts and definitions of deep neural networks.

  • +
  • Recognize there are different deep learning model architectures.

  • +
  • Comparison between deep learning and traditional machine learning approaches across various dimensions.

  • +
  • Acquire the basic conceptual building blocks to delve deeper into advanced deep-learning techniques and applications.

  • +
+
+
+
+

3.1 Introduction

+
+

3.1.1 Definition and Importance

+

Deep learning, a specialized area within machine learning and artificial intelligence (AI), utilizes algorithms modeled after the structure and function of the human brain, known as artificial neural networks. This field is a foundational element in AI, driving progress in diverse sectors such as computer vision, natural language processing, and self-driving vehicles. Its significance in embedded AI systems is highlighted by its capability to handle intricate calculations and predictions, optimizing the limited resources in embedded settings. Figure fig-ai-ml-dl illustrates the chronological development and relative segmentation of the three fields.

+
+
+
+ +
+
+Figure 3.1: Artificial intelligence subfields. Credit: NVIDIA. +
+
+
+
+
+

3.1.2 Brief History of Deep Learning

+

The idea of deep learning has origins in early artificial neural networks. It has experienced several cycles of interest, starting with the introduction of the Perceptron in the 1950s (Rosenblatt 1957), followed by the invention of backpropagation algorithms in the 1980s (Rumelhart, Hinton, and Williams 1986).

+
+Rosenblatt, Frank. 1957. The Perceptron, a Perceiving and Recognizing Automaton Project Para. Cornell Aeronautical Laboratory. +
+Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. “Learning Representations by Back-Propagating Errors.” Nature 323 (6088): 533–36. https://doi.org/10.1038/323533a0. +
+Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a Meeting Held December 3-6, 2012, Lake Tahoe, Nevada, United States, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 1106–14. https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html. +

The term “deep learning” became prominent in the 2000s, characterized by advances in computational power and data accessibility. Important milestones include the successful training of deep networks like AlexNet (Krizhevsky, Sutskever, and Hinton 2012) by Geoffrey Hinton’s group, and the renewed focus on neural networks as effective tools for data analysis and modeling.

+

Deep learning has recently seen exponential growth, transforming various industries. Computational growth followed an 18-month doubling pattern from 1952 to 2010, which then accelerated to a 6-month cycle from 2010 to 2022, as shown in Figure fig-trends. Concurrently, we saw the emergence of large-scale models between 2015 and 2022, appearing 2 to 3 orders of magnitude faster and following a 10-month doubling cycle.

+ +

Multiple factors have contributed to this surge, including advancements in computational power, the abundance of big data, and improvements in algorithmic designs. First, the growth of computational capabilities, especially the arrival of Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) (Jouppi et al. 2017), has significantly sped up the training and inference times of deep learning models. These hardware improvements have enabled the construction and training of more complex, deeper networks than what was possible in earlier years.

+
+Jouppi, Norman P., Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, et al. 2017. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246. +

Second, the digital revolution has yielded a wealth of big data, offering rich material for deep learning models to learn from and excel in tasks such as image and speech recognition, language translation, and game playing. Large, labeled datasets have been key in refining and successfully deploying deep learning applications in real-world settings.

+

Additionally, collaborations and open-source efforts have nurtured a dynamic community of researchers and practitioners, accelerating advancements in deep learning techniques. Innovations like deep reinforcement learning, transfer learning, and generative adversarial networks have broadened the scope of what is achievable with deep learning, opening new possibilities in various sectors, including healthcare, finance, transportation, and entertainment.

+

Organizations worldwide recognize deep learning’s transformative potential and invest heavily in research and development to leverage its capabilities in providing innovative solutions, optimizing operations, and creating new business opportunities. As deep learning continues its upward trajectory, it is set to redefine how we interact with technology, enhancing convenience, safety, and connectivity in our lives.

+
+
+

3.1.3 Applications of Deep Learning

+

Deep learning is extensively used across numerous industries today. In finance, it is employed for stock market prediction, risk assessment, and fraud detection. Marketing uses it for customer segmentation, personalization, and content optimization. In healthcare, machine learning aids in diagnosis, treatment planning, and patient monitoring. The transformative impact on society is evident.

+

For instance, deep learning algorithms can predict stock market trends, guide investment strategies, and enhance financial decisions. Similarly, in healthcare, deep learning can make medical predictions that improve patient diagnosis and save lives. The benefits are clear: machine learning predicts with greater accuracy than humans and does so much more quickly.

+

In manufacturing, deep learning has had a significant impact. By continuously learning from vast amounts of data collected during manufacturing, companies can boost productivity while minimizing waste through improved efficiency. This financial benefit for companies translates to better quality products at lower customer prices. Machine learning enables manufacturers to continually refine their processes, producing higher quality goods more efficiently than ever.

+

Deep learning enhances everyday products like Netflix recommendations and Google Translate text translations. Moreover, it helps companies like Amazon and Uber reduce customer service costs by swiftly identifying dissatisfied customers.

+
+
+

3.1.4 Relevance to Embedded AI

+

Embedded AI, the integration of AI algorithms directly into hardware devices, naturally gains from deep learning capabilities. Combining deep learning algorithms and embedded systems has laid the groundwork for intelligent, autonomous devices capable of advanced on-device data processing and analysis. Deep learning aids in extracting complex patterns and information from input data, which is essential in developing smart embedded systems, from household appliances to industrial machinery. This collaboration aims to usher in a new era of intelligent, interconnected devices that can learn and adapt to user behavior and environmental conditions, optimizing performance and offering unprecedented convenience and efficiency.

+
+
+
+

3.2 Neural Networks

+

Deep learning draws inspiration from the human brain’s neural networks to create decision-making patterns. This section delves into the foundational concepts of deep learning, providing insights into the more complex topics discussed later in this primer.

+

Neural networks serve as the foundation of deep learning, inspired by the biological neural networks in the human brain to process and analyze data hierarchically. Below, we examine the primary components and structures in neural networks.

+
+

3.2.1 Perceptrons

+

The Perceptron is the basic unit or node that is the foundation for more complex structures. It takes various inputs, applies weights and biases to them, and then uses an activation function to produce an output. Figure fig-perceptron illustrates the building blocks of a perceptron. In simple terms, think of a perceptron as a tiny decision-maker that learns to make a binary decision (e.g., ‘yes’ or ‘no’). It takes in numbers as inputs (x_1, x_2, ...), each representing a feature of an object we wish to analyze (for example, an image). It then multiplies each input by a weight, adds them up, and if the total is high enough (crosses a certain threshold), it returns “yes” as an answer; otherwise, it outputs “no.”

+
+
+
+ +
+
+Figure 3.3: Perceptron. Credit: Wikimedia - Chrislb. +
+
+
+

Conceived in the 1950s, perceptrons paved the way for developing more intricate neural networks and have been a fundamental building block in deep learning.
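A minimal sketch of this weighted-sum-and-threshold computation may help make it concrete. The weights, bias, and inputs below are illustrative values, not learned ones:

```python
import numpy as np

def perceptron(x, w, b, threshold=0.0):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    z = np.dot(w, x) + b               # weighted sum of the features
    return 1 if z > threshold else 0   # "yes" (1) if the total crosses the threshold

# Illustrative weights for a two-input decision
w = np.array([0.6, 0.4])
b = -0.5

print(perceptron(np.array([1.0, 1.0]), w, b))  # 0.6 + 0.4 - 0.5 = 0.5 > 0, so "yes"
print(perceptron(np.array([0.0, 0.0]), w, b))  # -0.5 <= 0, so "no"
```

Training a perceptron amounts to adjusting `w` and `b` until the decisions match the labeled examples.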

+
+
+

3.2.2 Multilayer Perceptrons

+

Multilayer perceptrons (MLPs) are an evolution of the single-layer perceptron model, featuring multiple layers of nodes connected in a feedforward manner, as shown in Figure fig-mlp. These layers include an input layer for data reception, several hidden layers for data processing, and an output layer for final result generation. MLPs are skilled at identifying non-linear relationships and use a backpropagation technique for training, where weights are optimized through a gradient descent algorithm.

+
+
+
+ +
+
+Figure 3.4: Multilayer Perceptron. Credit: Wikimedia - Charlie. +
+
+
+
+

Forward Pass

+

The forward pass is the initial phase where data moves through the network from the input to the output layer. During this phase, each layer performs specific computations on the input data, using weights and biases before passing the resulting values to subsequent layers. The final output of this phase is used to compute the loss, indicating the difference between the predicted output and actual target values.
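The forward pass can be sketched for a tiny one-hidden-layer network. The random weights and the squared-error loss here are illustrative, not a trained model:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def forward(x, params):
    """One forward pass: input -> hidden layer -> output layer."""
    h = relu(params["W1"] @ x + params["b1"])   # hidden layer computation
    y_hat = params["W2"] @ h + params["b2"]     # output layer computation
    return y_hat

rng = np.random.default_rng(0)
params = {
    "W1": rng.normal(size=(4, 3)), "b1": np.zeros(4),
    "W2": rng.normal(size=(1, 4)), "b2": np.zeros(1),
}
x, y = rng.normal(size=3), np.array([1.0])

y_hat = forward(x, params)
loss = float(np.mean((y_hat - y) ** 2))  # difference between prediction and target
```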

+

The video below explains how neural networks work using handwritten digit recognition as an example application. It also touches on the math underlying neural nets.

+
+
+
+

Backward Pass (Backpropagation)

+

Backpropagation is a key algorithm in training deep neural networks. This phase involves calculating the gradient of the loss function concerning each weight using the chain rule, effectively moving backward through the network. The gradients calculated in this step guide the adjustment of weights to minimize the loss function, thereby enhancing the network’s performance with each iteration of training.
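The chain-rule update is easiest to see in the smallest possible case: a single linear neuron trained by gradient descent. The values below are illustrative, and the gradients are derived by hand from the squared-error loss:

```python
# Single linear neuron: y_hat = w*x + b, loss L = (y_hat - y)^2.
# Chain rule: dL/dw = 2*(y_hat - y)*x and dL/db = 2*(y_hat - y).
x, y = 2.0, 1.0          # one training example and its target
w, b, lr = 0.0, 0.0, 0.05

for _ in range(50):
    y_hat = w * x + b
    grad_w = 2 * (y_hat - y) * x   # gradient of the loss w.r.t. the weight
    grad_b = 2 * (y_hat - y)       # gradient of the loss w.r.t. the bias
    w -= lr * grad_w               # step against the gradient
    b -= lr * grad_b

# After repeated iterations the prediction approaches the target
print(round(w * x + b, 3))
```

In a deep network, backpropagation applies exactly this chain rule layer by layer, moving backward from the loss to every weight.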

+

Grasping these foundational concepts paves the way to understanding more intricate deep learning architectures and techniques, fostering the development of more sophisticated and productive applications, especially within embedded AI systems.

+

The following two videos build upon the previous one. They cover gradient descent and backpropagation in neural networks.

+
+
+
+
+
+

3.2.3 Model Architectures

+

Deep learning architectures refer to the various structured approaches that dictate how neurons and layers are organized and interact in neural networks. These architectures have evolved to tackle different problems and data types effectively. This section overviews some well-known deep learning architectures and their characteristics.

+
+

Multilayer Perceptrons (MLPs)

+

MLPs are basic deep learning architectures comprising three layers: an input layer, one or more hidden layers, and an output layer. These layers are fully connected, meaning each neuron in a layer is linked to every neuron in the preceding and following layers. MLPs can model intricate functions and are used in various tasks, such as regression, classification, and pattern recognition. Their capacity to learn non-linear relationships through backpropagation makes them a versatile instrument in the deep learning toolkit.

+

In embedded AI systems, MLPs can function as compact models for simpler tasks like sensor data analysis or basic pattern recognition, where computational resources are limited. Their ability to learn non-linear relationships with relatively less complexity makes them a suitable choice for embedded systems.

+
+

Exercise 3.1 (Multilayer Perceptrons (MLPs))  

+
+
+ +
+
+

Get ready to dive into the exciting world of deep learning and TinyML! We’ve just covered the core building blocks of neural networks, from simple perceptrons to complex architectures. Now, you’ll get to apply these concepts in practical examples. In the provided Colab notebooks, you’ll explore:

+

Predicting house prices: Learn how neural networks can analyze housing data to estimate property values.  

+

Image Classification: Discover how to build a network to understand the famous MNIST handwritten digit dataset.  

+

Real-world medical diagnosis: Use deep learning to tackle the important task of breast cancer classification.  

+

Are you excited to start building? Let’s go!  

+
+
+
+
+
+

Convolutional Neural Networks (CNNs)

+

CNNs are mainly used in image and video recognition tasks. This architecture employs convolutional layers that filter input data to identify features like edges, corners, and textures. A typical CNN also includes pooling layers to reduce the spatial dimensions of the data and fully connected layers for classification. CNNs have proven highly effective in image recognition, object detection, and computer vision applications.
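The feature-filtering idea can be sketched with plain NumPy: a tiny hand-picked edge kernel slid over a synthetic image (computed correlation-style, as deep learning frameworks do). In a real CNN, the kernel values are learned rather than chosen by hand:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D filtering: slide the kernel over the image and sum products."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-difference kernel responds where intensity changes left-to-right
image = np.zeros((5, 6))
image[:, 3:] = 1.0                      # dark left half, bright right half
edge_kernel = np.array([[-1.0, 1.0]])   # simple vertical-edge detector
response = conv2d(image, edge_kernel)
# response is nonzero only at the boundary between the two halves
```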

+

In embedded AI, CNNs are crucial for image and video recognition tasks, where real-time processing is often needed. They can be optimized for embedded systems using techniques like quantization and pruning to minimize memory usage and computational demands, enabling efficient object detection and facial recognition functionalities in devices with limited computational resources.

+
+

Exercise 3.2 (Convolutional Neural Networks (CNNs))  

+
+
+ +
+
+

We discussed that CNNs excel at identifying image features, making them ideal for tasks like object classification. Now, you’ll get to put this knowledge into action! This Colab notebook focuses on building a CNN to classify images from the CIFAR-10 dataset, which includes objects like airplanes, cars, and animals. You’ll learn about the key differences between CIFAR-10 and the MNIST dataset we explored earlier and how these differences influence model choice. By the end of this notebook, you’ll have a grasp of CNNs for image recognition and be well on your way to becoming a TinyML expert!     

+
+
+
+
+
+

Recurrent Neural Networks (RNNs)

+

RNNs are suitable for sequential data analysis, like time series forecasting and natural language processing. In this architecture, connections between nodes form a directed graph along a temporal sequence, allowing information to be carried across sequences through hidden state vectors. Variants of RNNs include Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), designed to capture longer dependencies in sequence data.
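The hidden-state carry can be sketched in a few lines of NumPy. The weights here are illustrative random values, not a trained model:

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """One recurrent step: the new hidden state mixes the current input
    with the previous hidden state, carrying context across the sequence."""
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

rng = np.random.default_rng(1)
Wx = rng.normal(size=(3, 2))         # input-to-hidden weights
Wh = rng.normal(size=(3, 3))         # hidden-to-hidden (recurrent) weights
b = np.zeros(3)

h = np.zeros(3)                      # initial hidden state
sequence = rng.normal(size=(5, 2))   # 5 time steps of 2-feature inputs
for x_t in sequence:
    h = rnn_step(x_t, h, Wx, Wh, b)  # h now summarizes the sequence seen so far
```

LSTMs and GRUs replace this single `tanh` update with gated updates that preserve information over longer spans.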

+

These networks can be used in voice recognition systems, predictive maintenance, or IoT devices where sequential data patterns are common. Optimizations specific to embedded platforms can assist in managing their typically high computational and memory requirements.

+
+
+

Generative Adversarial Networks (GANs)

+

GANs consist of two networks, a generator and a discriminator, trained simultaneously through adversarial training (Goodfellow et al. 2020). The generator produces data that tries to mimic the real data distribution, while the discriminator aims to distinguish between real and generated data. GANs are widely used in image generation, style transfer, and data augmentation.

+
+Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Commun. ACM 63 (11): 139–44. https://doi.org/10.1145/3422622. +

In embedded settings, GANs could be used for on-device data augmentation to enhance the training of models directly on the embedded device, enabling continual learning and adaptation to new data without the need for cloud computing resources.

+
+
+

Autoencoders

+

Autoencoders are neural networks for data compression and noise reduction (Bank, Koenigstein, and Giryes 2023). They are structured to encode input data into a lower-dimensional representation and then decode it back to its original form. Variants like Variational Autoencoders (VAEs) introduce probabilistic layers that allow for generative properties, finding applications in image generation and anomaly detection.

+
+Bank, Dor, Noam Koenigstein, and Raja Giryes. 2023. “Autoencoders.” Machine Learning for Data Science Handbook: Data Mining and Knowledge Discovery Handbook, 353–74. +

Using autoencoders can help in efficient data transmission and storage, improving the overall performance of embedded systems with limited computational and memory resources.
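The encode/decode round trip can be illustrated with a linear sketch. The projection below uses fixed, hand-picked weights; a real autoencoder learns its encoder and decoder by minimizing the reconstruction error:

```python
import numpy as np

# Linear "autoencoder" sketch: compress 4-D data to a 2-D code, then decode.
W_enc = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])   # keep the first two components
W_dec = W_enc.T

x = np.array([0.9, -0.3, 0.01, 0.02])      # most energy lies in the first two dims
code = W_enc @ x                           # compressed representation (2-D)
x_hat = W_dec @ code                       # reconstruction (back to 4-D)
error = float(np.sum((x - x_hat) ** 2))    # small when the code captures x well
```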

+
+
+

Transformer Networks

+

Transformer networks have emerged as a powerful architecture, especially in natural language processing (Vaswani et al. 2017). These networks use self-attention mechanisms to weigh the influence of different input words on each output word, enabling parallel computation and capturing intricate patterns in data. Transformer networks have led to state-of-the-art results in tasks like language translation, summarization, and text generation.

+
+Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” Advances in Neural Information Processing Systems 30. +

These networks can be optimized to perform language-related tasks directly on the device. For example, transformers can be used in embedded systems for real-time translation services or voice-assisted interfaces, where latency and computational efficiency are crucial. Techniques such as model distillation can be employed to deploy these networks on embedded devices with limited resources.

+

These architectures serve specific purposes and excel in different domains, offering a rich toolkit for addressing diverse problems in embedded AI systems. Understanding the nuances of these architectures is crucial in designing effective and efficient deep learning models for various applications.

+
+
+
+

3.2.4 Traditional ML vs Deep Learning

+

To briefly highlight the differences, Table tbl-mlvsdl illustrates the contrasting characteristics between traditional ML and deep learning:

+
+
+
+Table 3.1: Comparison of traditional machine learning and deep learning. +
+
+ +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
AspectTraditional MLDeep Learning
Data RequirementsLow to Moderate (efficient with smaller datasets)High (requires large datasets for nuanced learning)
Model ComplexityModerate (suitable for well-defined problems)High (detects intricate patterns, suited for complex tasks)
Computational ResourcesLow to Moderate (cost-effective, less resource-intensive)High (demands substantial computational power and resources)
Deployment SpeedFast (quicker training and deployment cycles)Slow (prolonged training times, especially with larger datasets)
InterpretabilityHigh (clear insights into decision pathways)Low (complex layered structures, “black box” nature)
MaintenanceEasier (simple to update and maintain)Complex (requires more efforts in maintenance and updates)
+
+
+
+
+
+

3.2.5 Choosing Traditional ML vs. DL

+
+

Data Availability and Volume

+

Amount of Data: Traditional machine learning algorithms, such as decision trees or Naive Bayes, are often more suitable when data availability is limited. They offer robust predictions even with smaller datasets. This is particularly true in medical diagnostics for disease prediction and customer segmentation in marketing.

+

Data Diversity and Quality: Traditional machine learning algorithms often work well with structured data (the input to the model is a set of features, ideally independent of each other) but may require significant preprocessing effort (i.e., feature engineering). On the other hand, deep learning takes the approach of automatically performing feature engineering as part of the model architecture. This approach enables the construction of end-to-end models capable of directly mapping from unstructured input data (such as text, audio, and images) to the desired output without relying on simplistic heuristics that have limited effectiveness. However, this results in larger models demanding more data and computational resources. In noisy data, the necessity for larger datasets is further emphasized when utilizing Deep Learning.

+
+
+

Complexity of the Problem

+

  • Problem Granularity: Problems that are simple to moderately complex, which may involve linear or polynomial relationships between variables, often find a better fit with traditional machine learning methods.
  • Hierarchical Feature Representation: Deep learning models excel at tasks that require hierarchical feature representation, such as image and speech recognition. However, not all problems require this complexity, and traditional machine learning algorithms may sometimes offer simpler and equally effective solutions.

+
+
+

Hardware and Computational Resources

+

  • Resource Constraints: The availability of computational resources often influences the choice between traditional ML and deep learning. The former is generally less resource-intensive and thus preferable in environments with hardware limitations or budget constraints.
  • Scalability and Speed: Traditional machine learning algorithms, like support vector machines (SVM), often allow for faster training times and easier scalability, which is particularly beneficial in projects with tight timelines and growing data volumes.

+
+
+

Regulatory Compliance

+

Regulatory compliance is crucial in various industries, requiring adherence to guidelines and best practices such as the GDPR in the EU. Traditional ML models, due to their inherent interpretability, often align better with these regulations, especially in sectors like finance and healthcare.

+
+
+

Interpretability

+

Understanding the decision-making process is easier with traditional machine learning techniques than deep learning models, which function as “black boxes,” making it challenging to trace decision pathways.

+
+
+
+

3.2.6 Making an Informed Choice

+

Given the constraints of embedded AI systems, understanding the differences between traditional ML techniques and deep learning becomes essential. Both avenues offer unique advantages, and their distinct characteristics often dictate the choice of one over the other in different scenarios.

+

Despite this, deep learning has steadily outperformed traditional machine learning methods in several key areas due to abundant data, computational advancements, and proven effectiveness in complex tasks.

+

Here are some specific reasons why we focus on deep learning in this text:

+

1. Superior Performance in Complex Tasks: Deep learning models, particularly deep neural networks, excel in tasks where the relationships between data points are incredibly intricate. Tasks like image and speech recognition, language translation, and playing complex games like Go and Chess have seen significant advancements primarily through deep learning algorithms.

+

2. Efficient Handling of Unstructured Data: Unlike traditional machine learning methods, deep learning can more effectively process unstructured data. This is crucial in today’s data landscape, where the vast majority of data, such as text, images, and videos, is unstructured.

+

3. Leveraging Big Data: With the availability of big data, deep learning models can learn and improve continually. These models excel at utilizing large datasets to enhance their predictive accuracy, a limitation in traditional machine-learning approaches.

+

4. Hardware Advancements and Parallel Computing: The advent of powerful GPUs and the availability of cloud computing platforms have enabled the rapid training of deep learning models. These advancements have addressed one of deep learning’s significant challenges: the need for substantial computational resources.

+

5. Dynamic Adaptability and Continuous Learning: Deep learning models can dynamically adapt to new information or data. They can be trained to generalize their learning to new, unseen data, crucial in rapidly evolving fields like autonomous driving or real-time language translation.

+

While deep learning has gained significant traction, it’s essential to understand that traditional machine learning is still relevant. As we delve deeper into the intricacies of deep learning, we will also highlight situations where traditional machine learning methods may be more appropriate due to their simplicity, efficiency, and interpretability. By focusing on deep learning in this text, we aim to equip readers with the knowledge and tools to tackle modern, complex problems across various domains while also providing insights into the comparative advantages and appropriate application scenarios for deep learning and traditional machine learning techniques.

+
+
+
+

3.3 Conclusion

+

Deep learning has become a potent set of techniques for addressing intricate pattern recognition and prediction challenges. Starting with an overview, we outlined the fundamental concepts and principles governing deep learning, laying the groundwork for more advanced studies.

+

Central to deep learning, we explored the basic ideas of neural networks, powerful computational models inspired by the human brain’s interconnected neuron structure. This exploration allowed us to appreciate neural networks’ capabilities and potential in creating sophisticated algorithms capable of learning and adapting from data.

+

Understanding the role of libraries and frameworks was a key part of our discussion. We offered insights into the tools that can facilitate developing and deploying deep learning models. These resources ease the implementation of neural networks and open avenues for innovation and optimization.

+

Next, we tackled the challenges one might face when embedding deep learning algorithms within embedded systems, providing a critical perspective on the complexities and considerations of bringing AI to edge devices.

+

Furthermore, we examined deep learning’s limitations. Through discussions, we unraveled the challenges faced in deep learning applications and outlined scenarios where traditional machine learning might outperform deep learning. These sections are crucial for fostering a balanced view of deep learning’s capabilities and limitations.

+

In this primer, we have equipped you with the knowledge to make informed choices between deploying traditional machine learning or deep learning techniques, depending on the unique demands and constraints of a specific problem.

+

As we conclude this chapter, we hope you are now well-equipped with the basic “language” of deep learning and prepared to delve deeper into the subsequent chapters with a solid understanding and critical perspective. The journey ahead is filled with exciting opportunities and challenges in embedding AI within systems.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/dsp_spectral_features_block/dsp_spectral_features_block.html b/contents/dsp_spectral_features_block/dsp_spectral_features_block.html new file mode 100644 index 00000000..ddc12a2e --- /dev/null +++ b/contents/dsp_spectral_features_block/dsp_spectral_features_block.html @@ -0,0 +1,1617 @@ + + + + + + + + + +Machine Learning Systems - DSP - Spectral Features + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

DSP - Spectral Features

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+

+
DALL·E 3 Prompt: 1950s style cartoon illustration of a Latin male and female scientist in a vibration research room. The man is using a calculus ruler to examine ancient circuitry. The woman is at a computer with complex vibration graphs. The wooden table has boards with sensors, prominently an accelerometer. A classic, rounded-back computer shows the Arduino IDE with code for LED pin assignments and machine learning algorithms for movement detection. The Serial Monitor displays FFT, classification, wavelets, and DSPs. Vintage lamps, tools, and charts with FFT and Wavelets graphs complete the scene.
+
+
+
+

Introduction

+

TinyML projects related to motion (or vibration) involve data from IMUs (usually accelerometers and gyroscopes). These time-series datasets must be preprocessed before being fed into machine learning model training, which is a challenging area for embedded machine learning. Still, Edge Impulse helps overcome this complexity with its digital signal processing (DSP) preprocessing step and, more specifically, the Spectral Features Block for inertial sensors.

+

But how does it work under the hood? Let’s dig into it.

+
+
+

Extracting Features Review

+

Extracting features from a dataset captured with inertial sensors, such as accelerometers, involves processing and analyzing the raw data. Accelerometers measure the acceleration of an object along one or more axes (typically three, denoted as X, Y, and Z). These measurements can be used to understand various aspects of the object’s motion, such as movement patterns and vibrations. Here’s a high-level overview of the process:

+

Data collection: First, we need to gather data from the accelerometers. Depending on the application, data may be collected at different sampling rates. It’s essential to ensure that the sampling rate is high enough to capture the relevant dynamics of the studied motion (per the Nyquist criterion, the sampling rate should be at least twice the maximum relevant frequency present in the signal).

+

Data preprocessing: Raw accelerometer data can be noisy and contain errors or irrelevant information. Preprocessing steps, such as filtering and normalization, can help clean and standardize the data, making it more suitable for feature extraction.

+
+

The Studio does not perform normalization or standardization, so sometimes, when working with Sensor Fusion, it could be necessary to perform this step before uploading data to the Studio. This is particularly crucial in sensor fusion projects, as seen in this tutorial, Sensor Data Fusion with Spresense and CommonSense.

+
+

Segmentation: Depending on the nature of the data and the application, dividing the data into smaller segments or windows may be necessary. This can help focus on specific events or activities within the dataset, making feature extraction more manageable and meaningful. The window size and overlap (window span) choice depend on the application and the frequency of the events of interest. As a rule of thumb, we should try to capture a couple of “data cycles.”

+

Feature extraction: Once the data is preprocessed and segmented, you can extract features that describe the motion’s characteristics. Some typical features extracted from accelerometer data include:

+
    +
  • Time-domain features describe the data’s statistical properties within each segment, such as mean, median, standard deviation, skewness, kurtosis, and zero-crossing rate.
  • +
  • Frequency-domain features are obtained by transforming the data into the frequency domain using techniques like the Fast Fourier Transform (FFT). Some typical frequency-domain features include the power spectrum, spectral energy, dominant frequencies (amplitude and frequency), and spectral entropy.
  • +
  • Time-frequency domain features combine the time and frequency domain information, such as the Short-Time Fourier Transform (STFT) or the Discrete Wavelet Transform (DWT). They can provide a more detailed understanding of how the signal’s frequency content changes over time.
  • +
+

In many cases, the number of extracted features can be large, which may lead to overfitting or increased computational complexity. Feature selection techniques, such as mutual information, correlation-based methods, or principal component analysis (PCA), can help identify the most relevant features for a given application and reduce the dimensionality of the dataset. The Studio can help with such feature-relevant calculations.

+

Let’s explore in more detail a typical TinyML Motion Classification project covered in this series of Hands-Ons.

+
+
+

A TinyML Motion Classification project

+
+
+

+
+
+

In the hands-on project, Motion Classification and Anomaly Detection, we simulated mechanical stresses in transport, where our problem was to classify four classes of movement:

+
    +
  • Maritime (pallets in boats)
  • +
  • Terrestrial (pallets in a Truck or Train)
  • +
  • Lift (pallets being handled by Fork-Lift)
  • +
  • Idle (pallets in Storage houses)
  • +
+

The accelerometers provided the data on the pallet (or container).

+
+
+

+
+
+

Below is one sample (raw data) of 10 seconds, captured with a sampling frequency of 50Hz:

+
+
+

+
+
+
+

The result is similar when this analysis is done over another dataset with the same principle, using a different sampling frequency, 62.5Hz instead of 50Hz.

+
+
+
+

Data Pre-Processing

+

The raw data captured by the accelerometer (a “time series” data) should be converted to “tabular data” using one of the typical Feature Extraction methods described in the last section.

+

We should segment the data using a sliding window over the sample data for feature extraction. The project captured accelerometer data every 10 seconds with a sample rate of 62.5 Hz. A 2-second window captures 375 data points (3 axis x 2 seconds x 62.5 samples). The window is slid every 80ms, creating a larger dataset where each instance has 375 “raw features.”
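Using the numbers above (62.5 Hz sampling, 2-second window, 80 ms stride), the sliding-window segmentation can be sketched in NumPy. The zero-filled array below stands in for a real 10-second capture:

```python
import numpy as np

fs = 62.5                                   # sampling frequency (Hz)
signal_3axis = np.zeros((int(10 * fs), 3))  # placeholder for a 10 s, 3-axis capture
win = int(2 * fs)                           # 125 samples per window, per axis
hop = int(0.080 * fs)                       # 5 samples = 80 ms stride

windows = [signal_3axis[i:i + win]
           for i in range(0, signal_3axis.shape[0] - win + 1, hop)]
# each window holds 125 samples x 3 axes = 375 raw features
```

Sliding the window every 80 ms turns each 10-second capture into roughly a hundred overlapping instances, which is how the dataset grows larger than the raw recording alone.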

+
+
+

+
+
+

On the Studio, the previous version (V1) of the Spectral Analysis Block extracted only the RMS as a time-domain feature; for the frequency domain, it extracted the peaks and frequency (using the FFT) and the power characteristics (PSD) of the signal over time, resulting in a fixed tabular dataset of 33 features (11 per axis).

+
+
+

+
+
+

Those 33 features were the Input tensor of a Neural Network Classifier.

+

In 2022, Edge Impulse released version 2 of the Spectral Analysis block, which we will explore here.

+
+

Edge Impulse - Spectral Analysis Block V.2 under the hood

+

In Version 2, Time Domain Statistical features per axis/channel are:

+
    +
  • RMS
  • +
  • Skewness
  • +
  • Kurtosis
  • +
+

And the Frequency Domain Spectral features per axis/channel are:

+
    +
  • Spectral Power
  • +
  • Skewness (in the next version)
  • +
  • Kurtosis (in the next version)
  • +
+

More details about the feature extraction can be found at this link.

+
+

Clone the public project. You can also follow the explanation, playing with the code using my Google CoLab Notebook: Edge Impulse Spectral Analysis Block Notebook.

+
+

Start importing the libraries:

+
import numpy as np
+import matplotlib.pyplot as plt
+import seaborn as sns
+import math
+from scipy.stats import skew, kurtosis
+from scipy import signal
+from scipy.signal import welch
+from scipy.stats import entropy
+from sklearn import preprocessing
+import pywt
+
+plt.rcParams['figure.figsize'] = (12, 6)
+plt.rcParams['lines.linewidth'] = 3
+

From the studied project, let’s choose a data sample from accelerometers as below:

+
    +
  • Window size of 2 seconds: [2,000] ms
  • +
  • Sample frequency: [62.5] Hz
  • +
  • We will choose the [None] filter (for simplicity) and a
  • +
  • FFT length: [16].
  • +
+
f =  62.5 # Hertz
+wind_sec = 2 # seconds
+FFT_length = 16
+axis = ['accX', 'accY', 'accZ']
+n_sensors = len(axis)
+
+
+

+
+
+

Selecting the Raw Features on the Studio Spectral Analysis tab, we can copy all 375 data points of a particular 2-second window to the clipboard.

+
+
+

+
+
+

Paste the data points to a new variable data:

+
data=[-5.6330, 0.2376, 9.8701, -5.9442, 0.4830, 9.8701, -5.4217, ...]
+No_raw_features = len(data)
+N = int(No_raw_features/n_sensors)
+

The total raw features are 375, but we will work with each axis individually, where N = 125 (the number of samples per axis).

+

We aim to understand how Edge Impulse gets the processed features.

+
+
+

+
+
+

So, you should also paste the processed features into a variable (to compare the features calculated in Python with those provided by the Studio):

+
features = [2.7322, -0.0978, -0.3813, 2.3980, 3.8924, 24.6841, 9.6303, ...]
+N_feat = len(features)
+N_feat_axis = int(N_feat/n_sensors)
+

The total number of processed features is 39, which means 13 features/axis.

+

Looking at those 13 features closely, we will find 3 for the time domain (RMS, Skewness, and Kurtosis):

+
    +
  • [rms] [skew] [kurtosis]
  • +
+

and 10 for the frequency domain (we will return to this later).

+
    +
  • [spectral skew][spectral kurtosis][Spectral Power 1] ... [Spectral Power 8]
  • +
+

Splitting raw data per sensor

+

The data has samples from all axes; let’s split and plot them separately:

+
def plot_data(sensors, axis, title):
+    [plt.plot(x, label=y) for x,y in zip(sensors, axis)]
+    plt.legend(loc='lower right')
+    plt.title(title)
+    plt.xlabel('#Sample')
+    plt.ylabel('Value')
+    plt.box(False)
+    plt.grid()
+    plt.show()
+
+accX = data[0::3]
+accY = data[1::3]
+accZ = data[2::3]
+sensors = [accX, accY, accZ] 
+plot_data(sensors, axis, 'Raw Features')
+
+
+

+
+
+

Subtracting the mean

+

Next, we should subtract the mean from the data. Subtracting the mean from a data set is a common data pre-processing step in statistics and machine learning. The purpose of subtracting the mean from the data is to center the data around zero. This is important because it can reveal patterns and relationships that might be hidden if the data is not centered.

+

Here are some specific reasons why subtracting the mean can be helpful:

+
    +
  • It simplifies analysis: By centering the data, the mean becomes zero, making some calculations simpler and easier to interpret.
  • +
  • It removes bias: If the data is biased, subtracting the mean can remove it and allow for a more accurate analysis.
  • +
  • It can reveal patterns: Centering the data can help uncover patterns that might be hidden if the data is not centered. For example, centering the data can help you identify trends over time if you analyze a time series dataset.
  • +
  • It can improve performance: In some machine learning algorithms, centering the data can improve performance by reducing the influence of outliers and making the data more easily comparable. Overall, subtracting the mean is a simple but powerful technique that can be used to improve the analysis and interpretation of data.
  • +
+
dtmean = [sum(x) / len(x) for x in sensors]
+for x, y in zip(axis, dtmean):
+    print('mean_' + x + '= ', round(y, 4))
+
+accX = [(x - dtmean[0]) for x in accX]
+accY = [(x - dtmean[1]) for x in accY]
+accZ = [(x - dtmean[2]) for x in accZ]
+sensors = [accX, accY, accZ]
+
+plot_data(sensors, axis, 'Raw Features - Subtract the Mean')
+
+
+

+
+
+
+
+
+

Time Domain Statistical features

+

RMS Calculation

+

The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean of the squares of the values or the square of the function that defines the continuous waveform. In physics, the RMS value of an electrical current is defined as the “value of the direct current that dissipates the same power in a resistor.”

+

In the case of a set of n values {𝑥1, 𝑥2, …, 𝑥𝑛}, the RMS is:

+
$$x_{\text{RMS}} = \sqrt{\frac{1}{n}\left(x_1^2 + x_2^2 + \cdots + x_n^2\right)}$$

+
+
+
+

Note that the RMS value differs between the original raw data and the data after subtracting the mean.

+
+
# Using numpy and standardized data (mean subtracted)
+rms = [np.sqrt(np.mean(np.square(x))) for x in sensors]
+

We can compare the calculated RMS values here with the ones presented by Edge Impulse:

+
[print('rms_'+x+'= ', round(y, 4)) for x,y in zip(axis, rms)][0]
+print("\nCompare with Edge Impulse result features")
+print(features[0:N_feat:N_feat_axis])
+

rms_accX= 2.7322

+

rms_accY= 0.7833

+

rms_accZ= 0.1383

+

Compared with Edge Impulse result features:

+

[2.7322, 0.7833, 0.1383]

+

Skewness and kurtosis calculation

+

In statistics, skewness and kurtosis are two ways to measure the shape of a distribution.

+

Here, we can see the sensor values distribution:

+
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(13, 4))
+sns.kdeplot(accX, fill=True, ax=axes[0])
+sns.kdeplot(accY, fill=True, ax=axes[1])
+sns.kdeplot(accZ, fill=True, ax=axes[2])
+axes[0].set_title('accX')
+axes[1].set_title('accY')
+axes[2].set_title('accZ')
+plt.suptitle('IMU Sensors distribution', fontsize=16, y=1.02)
+plt.show()
+
+
+

+
+
+

Skewness is a measure of the asymmetry of a distribution. This value can be positive or negative.

+
+
+

+
+
+
    +
  • A negative skew indicates that the tail is on the left side of the distribution, which extends towards more negative values.
  • +
  • A positive skew indicates that the tail is on the right side of the distribution, which extends towards more positive values.
  • +
  • A zero value indicates no skewness in the distribution at all, meaning the distribution is perfectly symmetrical.
  • +
+
skew_vals = [skew(x, bias=False) for x in sensors]  # avoid shadowing scipy's skew()
+[print('skew_'+x+'= ', round(y, 4)) for x,y in zip(axis, skew_vals)][0]
+print("\nCompare with Edge Impulse result features")
+features[1:N_feat:N_feat_axis]
+

skew_accX= -0.099

+

skew_accY= 0.1756

+

skew_accZ= 6.9463

+

Compared with Edge Impulse result features:

+

[-0.0978, 0.1735, 6.8629]

+

Kurtosis is a measure of whether or not a distribution is heavy-tailed or light-tailed relative to a normal distribution.

+
+
+

+
+
+
    +
  • The excess kurtosis of a normal distribution is zero (scipy's kurtosis() reports excess kurtosis by default).
  • +
  • If a given distribution has a negative kurtosis, it is said to be platykurtic, which means it tends to produce fewer and less extreme outliers than the normal distribution.
  • +
  • If a given distribution has a positive kurtosis, it is said to be leptokurtic, which means it tends to produce more outliers than the normal distribution.
  • +
+
kurt = [kurtosis(x, bias=False) for x in sensors]
+[print('kurt_'+x+'= ', round(y, 4)) for x,y in zip(axis, kurt)][0]
+print("\nCompare with Edge Impulse result features")
+features[2:N_feat:N_feat_axis]
+

kurt_accX= -0.3475

+

kurt_accY= 1.2673

+

kurt_accZ= 68.1123

+

Compared with Edge Impulse result features:

+

[-0.3813, 1.1696, 65.3726]

+
+
+

Spectral features

+

The filtered signal is passed to the Spectral power section, which computes the FFT to generate the spectral features.

+

Since the sampled window is usually larger than the FFT size, the window will be broken into frames (or “sub-windows”), and the FFT is calculated over each frame.

+

FFT length - The FFT size. This determines the number of FFT bins and the resolution of frequency peaks that can be separated. A low number means more signals will average together in the same FFT bin, but it also reduces the number of features and model size. A high number will separate more signals into separate bins, generating a larger model.

+
    +
  • The total number of Spectral Power features will vary depending on how you set the filter and FFT parameters. With No filtering, the number of features is 1/2 of the FFT Length.
  • +
+
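As a quick sanity check on that rule of thumb, the feature-count arithmetic can be sketched directly (the FFT length below is an illustrative assumption, not necessarily the value used in your project):

```python
# Illustrative feature-count arithmetic (FFT_Length value is an assumption)
FFT_Length = 16
n_axes = 3
power_feats_per_axis = FFT_Length // 2        # "1/2 of the FFT Length" with no filtering
total_power_feats = power_feats_per_axis * n_axes
print(total_power_feats)
```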

Spectral Power - Welch’s method

+

We should use Welch’s method to split the signal in the frequency domain into bins and calculate the power spectrum for each bin. This method divides the signal into overlapping segments, applies a window function to each segment, computes the periodogram of each segment using the DFT, and averages them to obtain a smoother estimate of the power spectrum.

+
# Function used by Edge Impulse instead of scipy.signal.welch().
+def welch_max_hold(fx, sampling_freq, nfft, n_overlap):
+    n_overlap = int(n_overlap)
+    spec_powers = [0 for _ in range(nfft//2+1)]
+    ix = 0
+    while ix <= len(fx):
+        # Slicing truncates if end_idx > len, and rfft will auto-zero pad
+        fft_out = np.abs(np.fft.rfft(fx[ix:ix+nfft], nfft))
+        spec_powers = np.maximum(spec_powers, fft_out**2/nfft)
+        ix = ix + (nfft-n_overlap)
+    return np.fft.rfftfreq(nfft, 1/sampling_freq), spec_powers
+
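For contrast, a standard Welch estimate averages the per-frame periodograms rather than keeping the per-bin maximum as Edge Impulse's max-hold variant does. A numpy-only sketch of the averaged variant, exercised with an assumed sampling rate and a synthetic 5 Hz tone, looks like:

```python
import numpy as np

# Hedged sketch: averaged-periodogram Welch estimate, for comparison with
# the max-hold variant used by Edge Impulse above.
def welch_avg(fx, sampling_freq, nfft, n_overlap=0):
    step = nfft - n_overlap
    spec_powers = np.zeros(nfft // 2 + 1)
    n_frames = 0
    for ix in range(0, len(fx), step):
        # rfft zero-pads a (possibly short) last frame to nfft samples
        fft_out = np.abs(np.fft.rfft(fx[ix:ix + nfft], nfft))
        spec_powers += fft_out ** 2 / nfft
        n_frames += 1
    return np.fft.rfftfreq(nfft, 1 / sampling_freq), spec_powers / n_frames

# Synthetic 5 Hz tone at an assumed 62.5 Hz sampling rate
fs_demo = 62.5
sig = np.sin(2 * np.pi * 5 * np.arange(256) / fs_demo)
f, P = welch_avg(sig, fs_demo, 16)
```

The averaged estimate is smoother, while the max-hold version preserves transient peaks that averaging would wash out.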

Applying the above function to 3 signals:

+
fax,Pax = welch_max_hold(accX, fs, FFT_Lenght, 0)
+fay,Pay = welch_max_hold(accY, fs, FFT_Lenght, 0)
+faz,Paz = welch_max_hold(accZ, fs, FFT_Lenght, 0)
+specs = [Pax, Pay, Paz ]
+

We can plot the Power Spectrum P(f):

+
plt.plot(fax,Pax, label='accX')
+plt.plot(fay,Pay, label='accY')
+plt.plot(faz,Paz, label='accZ')
+plt.legend(loc='upper right')
+plt.xlabel('Frequency (Hz)')
+#plt.ylabel('PSD [V**2/Hz]')
+plt.ylabel('Power')
+plt.title("Power spectrum P(f) using Welch's method")
+plt.grid()
+plt.box(False)
+plt.show()
+
+
+

+
+
+

Besides the Power Spectrum, we can also include the skewness and kurtosis of the features in the frequency domain (this should be available in a newer version):

+
spec_skew = [skew(x, bias=False) for x in specs]
+spec_kurtosis = [kurtosis(x, bias=False) for x in specs]
+

Let’s now list all Spectral features per axis and compare them with EI:

+
print("EI Processed Spectral features (accX): ")
+print(features[3:N_feat_axis][0:])
+print("\nCalculated features:")
+print (round(spec_skew[0],4))
+print (round(spec_kurtosis[0],4))
+[print(round(x, 4)) for x in Pax[1:]][0]
+

EI Processed Spectral features (accX):

+

2.398, 3.8924, 24.6841, 9.6303, 8.4867, 7.7793, 2.9963, 5.6242, 3.4198, 4.2735

+

Calculated features:

+

2.9069 8.5569 24.6844 9.6304 8.4865 7.7794 2.9964 5.6242 3.4198 4.2736

+
print("EI Processed Spectral features (accY): ")
+print(features[16:26][0:]) #13: 3+N_feat_axis;  26 = 2x N_feat_axis
+print("\nCalculated features:")
+print (round(spec_skew[1],4))
+print (round(spec_kurtosis[1],4))
+[print(round(x, 4)) for x in Pay[1:]][0]
+

EI Processed Spectral features (accY):

+

0.9426, -0.8039, 5.429, 0.999, 1.0315, 0.9459, 1.8117, 0.9088, 1.3302, 3.112

+

Calculated features:

+

1.1426 -0.3886 5.4289 0.999 1.0315 0.9458 1.8116 0.9088 1.3301 3.1121

+
print("EI Processed Spectral features (accZ): ")
+print(features[29:][0:]) #29: 3+(2*N_feat_axis);
+print("\nCalculated features:")
+print (round(spec_skew[2],4))
+print (round(spec_kurtosis[2],4))
+[print(round(x, 4)) for x in Paz[1:]][0]
+

EI Processed Spectral features (accZ):

+

0.3117, -1.3812, 0.0606, 0.057, 0.0567, 0.0976, 0.194, 0.2574, 0.2083, 0.166

+

Calculated features:

+

0.3781 -1.4874 0.0606 0.057 0.0567 0.0976 0.194 0.2574 0.2083 0.166

+
+
+

Time-frequency domain

+
+

Wavelets

+

The wavelet transform is a powerful technique for analyzing signals with transient features or abrupt changes, such as spikes or edges, which are difficult to interpret with traditional Fourier-based methods.

+

Wavelet transforms work by breaking down a signal into different frequency components and analyzing them individually. The transformation is achieved by convolving the signal with a wavelet function, a small waveform centered at a specific time and frequency. This process effectively decomposes the signal into different frequency bands, each of which can be analyzed separately.

+

One of the critical benefits of wavelet transforms is that they allow for time-frequency analysis, which means that they can reveal the frequency content of a signal as it changes over time. This makes them particularly useful for analyzing non-stationary signals, which vary over time.

+

Wavelets have many practical applications, including signal and image compression, denoising, feature extraction, and image processing.

+

Let’s select Wavelet on the Spectral Features block in the same project:

+
    +
  • Type: Wavelet
  • +
  • Wavelet Decomposition Level: 1
  • +
  • Wavelet: bior1.3
  • +
+
+
+

+
+
+

The Wavelet Function

+
wavelet_name='bior1.3'
+num_layer = 1
+
+wavelet = pywt.Wavelet(wavelet_name)
+[phi_d,psi_d,phi_r,psi_r,x] = wavelet.wavefun(level=5)
+plt.plot(x, psi_d, color='red')
+plt.title('Wavelet Function')
+plt.ylabel('Value')
+plt.xlabel('Time')
+plt.grid()
+plt.box(False)
+plt.show()
+
+
+

+
+
+

As we did before, let’s copy and paste the Processed Features:

+
+
+

+
+
+
features = [3.6251, 0.0615, 0.0615, -7.3517, -2.7641, 2.8462, 5.0924, ...]
+N_feat = len(features)
+N_feat_axis = int(N_feat/n_sensors)
+

Edge Impulse computes the Discrete Wavelet Transform (DWT) for each one of the Wavelet Decomposition levels selected. After that, the features will be extracted.

+

In the case of Wavelets, the extracted features are basic statistical values, crossing values, and entropy. There are, in total, 14 features per layer as below:

+
    +
  • [11] Statistical Features: n5, n25, n75, n95, mean, median, standard deviation (std), variance (var), root mean square (rms), kurtosis, and skewness (skew).
  • +
  • [2] Crossing Features: Zero crossing rate (zcross) and mean crossing rate (mcross) are the number of times the signal passes through the baseline (y = 0) and the average level (y = u) per unit of time, respectively.
  • +
  • [1] Complexity Feature: Entropy is a characteristic measure of the complexity of the signal.
  • +
+

All 14 values above are calculated for each layer (including L0, the original signal).

+
    +
  • The total number of features varies depending on how you set the filter and the number of layers. For example, with [None] filtering and Level[1], the number of features per axis will be 14 x 2 (L0 and L1) = 28. For the three axes, we will have a total of 84 features.
  • +
+
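The arithmetic behind that total can be checked quickly:

```python
# 14 features per layer: 11 statistical + 2 crossing + 1 entropy
feats_per_layer = 11 + 2 + 1
n_layers = 2          # L0 (original signal) + L1 (decomposition level 1)
n_axes = 3            # accX, accY, accZ
total_features = feats_per_layer * n_layers * n_axes
print(total_features)
```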
+
+

Wavelet Analysis

+

Wavelet analysis decomposes the signal (accX, accY, and accZ) into different frequency components using a set of filters, which separate these components into low-frequency (slowly varying parts of the signal containing long-term patterns), such as accX_l1, accY_l1, accZ_l1 and, high-frequency (rapidly varying parts of the signal containing short-term patterns) components, such as accX_d1, accY_d1, accZ_d1, permitting the extraction of features for further analysis or classification.

+

Only the low-frequency components (approximation coefficients, or cA) will be used. In this example, we assume only one level (single-level Discrete Wavelet Transform), where the function returns a tuple. With a multilevel decomposition (the “Multilevel 1D Discrete Wavelet Transform”), the result will be a list (for details, please see: Discrete Wavelet Transform (DWT)).

+
(accX_l1, accX_d1) = pywt.dwt(accX, wavelet_name)
+(accY_l1, accY_d1) = pywt.dwt(accY, wavelet_name)
+(accZ_l1, accZ_d1) = pywt.dwt(accZ, wavelet_name)
+sensors_l1 = [accX_l1, accY_l1, accZ_l1]
+
+# Plot power spectrum versus frequency
+plt.plot(accX_l1, label='accX')
+plt.plot(accY_l1, label='accY')
+plt.plot(accZ_l1, label='accZ')
+plt.legend(loc='lower right')
+plt.xlabel('Time')
+plt.ylabel('Value')
+plt.title('Wavelet Approximation')
+plt.grid()
+plt.box(False)
+plt.show()
+
+
+

+
+
+
+
+

Feature Extraction

+

Let’s start with the basic statistical features. Note that we apply the function for both the original signals and the resultant cAs from the DWT:

+
def calculate_statistics(signal):
+    n5 = np.percentile(signal, 5)
+    n25 = np.percentile(signal, 25)
+    n75 = np.percentile(signal, 75)
+    n95 = np.percentile(signal, 95)
+    median = np.percentile(signal, 50)
+    mean = np.mean(signal)
+    std = np.std(signal)
+    var = np.var(signal)
+    rms = np.sqrt(np.mean(np.square(signal)))
+    return [n5, n25, n75, n95, median, mean, std, var, rms]
+ 
+stat_feat_l0 = [calculate_statistics(x) for x in sensors]
+stat_feat_l1 = [calculate_statistics(x) for x in sensors_l1]
+

The Skewness and Kurtosis:

+
skew_l0 = [skew(x, bias=False) for x in sensors]
+skew_l1 = [skew(x, bias=False) for x in sensors_l1]
+kurtosis_l0 = [kurtosis(x, bias=False) for x in sensors]
+kurtosis_l1 = [kurtosis(x, bias=False) for x in sensors_l1]
+

Zero crossing (zcross) is the number of times the wavelet coefficient crosses the zero axis. It can be used to measure the signal’s frequency content since high-frequency signals tend to have more zero crossings than low-frequency signals.

+

Mean crossing (mcross), on the other hand, is the number of times the wavelet coefficient crosses the mean of the signal. It can be used to measure the amplitude since high-amplitude signals tend to have more mean crossings than low-amplitude signals.

+
def getZeroCrossingRate(arr):
+    my_array = np.array(arr)
+    zcross = float("{0:.2f}".format((((my_array[:-1] * my_array[1:]) < 0).sum())/len(arr)))
+    return zcross
+
+def getMeanCrossingRate(arr):
+    mcross = getZeroCrossingRate(np.array(arr) - np.mean(arr))
+    return mcross
+
+def calculate_crossings(signals):  # avoid shadowing the built-in 'list'
+    zcross=[]
+    mcross=[]
+    for i in range(len(signals)):
+        zcross.append(getZeroCrossingRate(signals[i]))
+        mcross.append(getMeanCrossingRate(signals[i]))
+    return zcross, mcross
+
+cross_l0 = calculate_crossings(sensors)
+cross_l1 = calculate_crossings(sensors_l1)
+

In wavelet analysis, entropy refers to the degree of disorder or randomness in the distribution of wavelet coefficients. Here, we used Shannon entropy, which measures a signal’s uncertainty or randomness. It is calculated as the negative sum of the probabilities of the different possible outcomes of the signal multiplied by their base 2 logarithm. In the context of wavelet analysis, Shannon entropy can be used to measure the complexity of the signal, with higher values indicating greater complexity.

+
def calculate_entropy(signal, base=None):
+    value, counts = np.unique(signal, return_counts=True)
+    return entropy(counts, base=base)
+
+entropy_l0 = [calculate_entropy(x) for x in sensors]
+entropy_l1 = [calculate_entropy(x) for x in sensors_l1]
+

Let’s now list all the wavelet features and create a list by layers.

+
L1_features_names = ["L1-n5", "L1-n25", "L1-n75", "L1-n95", "L1-median", "L1-mean", "L1-std", "L1-var", "L1-rms", "L1-skew", "L1-Kurtosis", "L1-zcross", "L1-mcross", "L1-entropy"]
+
+L0_features_names = ["L0-n5", "L0-n25", "L0-n75", "L0-n95", "L0-median", "L0-mean", "L0-std", "L0-var", "L0-rms", "L0-skew", "L0-Kurtosis", "L0-zcross", "L0-mcross", "L0-entropy"]
+
+all_feat_l0 = []
+for i in range(len(axis)):
+    feat_l0 = stat_feat_l0[i]+[skew_l0[i]]+[kurtosis_l0[i]]+[cross_l0[0][i]]+[cross_l0[1][i]]+[entropy_l0[i]]
+    [print(axis[i]+' '+x+'= ', round(y, 4)) for x,y in zip(L0_features_names, feat_l0)][0]
+    all_feat_l0.append(feat_l0)
+all_feat_l0 = [item for sublist in all_feat_l0 for item in sublist]
+print(f"\nAll L0 Features = {len(all_feat_l0)}")
+
+all_feat_l1 = []
+for i in range(len(axis)):
+    feat_l1 = stat_feat_l1[i]+[skew_l1[i]]+[kurtosis_l1[i]]+[cross_l1[0][i]]+[cross_l1[1][i]]+[entropy_l1[i]]
+    [print(axis[i]+' '+x+'= ', round(y, 4)) for x,y in zip(L1_features_names, feat_l1)][0]
+    all_feat_l1.append(feat_l1)
+all_feat_l1 = [item for sublist in all_feat_l1 for item in sublist]
+print(f"\nAll L1 Features = {len(all_feat_l1)}")
+
+
+

+
+
+
+
+
+

Conclusion

+

Edge Impulse Studio is a powerful online platform that can handle the pre-processing task for us. Still, given our engineering perspective, we want to understand what is happening under the hood. This knowledge will help us find the best options and hyper-parameters for tuning our projects.

+

Daniel Situnayake wrote in his blog: “Raw sensor data is highly dimensional and noisy. Digital signal processing algorithms help us sift the signal from the noise. DSP is an essential part of embedded engineering, and many edge processors have on-board acceleration for DSP. As an ML engineer, learning basic DSP gives you superpowers for handling high-frequency time series data in your models.” I recommend you read Dan’s excellent post in its totality: nn to cpp: What you need to know about porting deep learning models to the edge.

+ + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/efficient_ai/efficient_ai.html b/contents/efficient_ai/efficient_ai.html new file mode 100644 index 00000000..f5c3dee2 --- /dev/null +++ b/contents/efficient_ai/efficient_ai.html @@ -0,0 +1,1400 @@ + + + + + + + + + +Machine Learning Systems - 8  Efficient AI + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

8  Efficient AI

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: A conceptual illustration depicting efficiency in artificial intelligence using a shipyard analogy. The scene shows a bustling shipyard where containers represent bits or bytes of data. These containers are being moved around efficiently by cranes and vehicles, symbolizing the streamlined and rapid information processing in AI systems. The shipyard is meticulously organized, illustrating the concept of optimal performance within the constraints of limited resources. In the background, ships are docked, representing different platforms and scenarios where AI is applied. The atmosphere should convey advanced technology with an underlying theme of sustainability and wide applicability.
+
+
+

Efficiency in artificial intelligence (AI) is not simply a luxury but a necessity. In this chapter, we dive into the key concepts underpinning AI systems’ efficiency. The computational demands on neural networks can be daunting, even for minimal systems. For AI to be seamlessly integrated into everyday devices and essential systems, it must perform optimally within the constraints of limited resources while maintaining its efficacy. The pursuit of efficiency guarantees that AI models are streamlined, rapid, and sustainable, thereby widening their applicability across various platforms and scenarios.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Recognize the need for efficient AI in TinyML/edge devices.

  • +
  • Understand the need for efficient model architectures like MobileNets and SqueezeNet.

  • +
  • Understand why techniques for model compression are important.

  • +
  • Get an inclination for why efficient AI hardware is important.

  • +
  • Appreciate the significance of numerics and their representations.

  • +
  • Understand the nuances of model comparison beyond accuracy.

  • +
  • Recognize efficiency encompasses technology, costs, environment, and ethics.

  • +
+
+
+

The focus is on gaining a conceptual understanding of the motivations and significance of the various strategies for achieving efficient AI, both in terms of techniques and a holistic perspective. Subsequent chapters will dive into the nitty-gritty details of these various concepts.

+
+

8.1 Introduction

+

Training models can consume significant energy, sometimes equivalent to the carbon footprint of sizable industrial processes. We will cover some of these sustainability details in the AI Sustainability chapter. On the deployment side, if these models are not optimized for efficiency, they can quickly drain device batteries, demand excessive memory, or fall short of real-time processing needs. Through this introduction, we aim to elucidate the nuances of efficiency, setting the groundwork for a comprehensive exploration in the subsequent chapters.

+
+
+

8.2 The Need for Efficient AI

+

Efficiency takes on different connotations depending on where AI computations occur. Let’s revisit and differentiate between Cloud, Edge, and TinyML in terms of efficiency. Figure fig-platforms provides a big picture comparison of the three different platforms.

+
+
+
+ +
+
+Figure 8.1: Cloud, Mobile and TinyML. Credit: Schizas et al. (2022). +
+
+Schizas, Nikolaos, Aristeidis Karras, Christos Karras, and Spyros Sioutas. 2022. TinyML for Ultra-Low Power AI and Large Scale IoT Deployments: A Systematic Review.” Future Internet 14 (12): 363. https://doi.org/10.3390/fi14120363. +
+
+

For cloud AI, traditional AI models often run in large-scale data centers equipped with powerful GPUs and TPUs (Barroso, Hölzle, and Ranganathan 2019). Here, efficiency pertains to optimizing computational resources, reducing costs, and ensuring timely data processing and return. However, relying on the cloud introduces latency, especially when dealing with large data streams that must be uploaded, processed, and downloaded.

+
+Barroso, Luiz André, Urs Hölzle, and Parthasarathy Ranganathan. 2019. The Datacenter as a Computer: Designing Warehouse-Scale Machines. Springer International Publishing. https://doi.org/10.1007/978-3-031-01761-2. +
+Li, En, Liekang Zeng, Zhi Zhou, and Xu Chen. 2020. “Edge AI: On-demand Accelerating Deep Neural Network Inference via Edge Computing.” IEEE Trans. Wireless Commun. 19 (1): 447–57. https://doi.org/10.1109/twc.2019.2946140. +

For edge AI, edge computing brings AI closer to the data source, processing information directly on local devices like smartphones, cameras, or industrial machines (Li et al. 2020). Here, efficiency encompasses quick real-time responses and reduced data transmission needs. The constraints, however, are tighter—these devices, while more powerful than microcontrollers, have limited computational power compared to cloud setups.

+

Pushing the frontier even further is TinyML, where AI models run on microcontrollers or extremely resource-constrained environments. The difference in processor and memory performance between TinyML and cloud or mobile systems can be several orders of magnitude (Warden and Situnayake 2019). Efficiency in TinyML is about ensuring models are lightweight enough to fit on these devices, use minimal energy (critical for battery-powered devices), and still perform their tasks effectively.

+
+Warden, Pete, and Daniel Situnayake. 2019. Tinyml: Machine Learning with Tensorflow Lite on Arduino and Ultra-Low-Power Microcontrollers. O’Reilly Media. +

The spectrum from Cloud to TinyML represents a shift from vast, centralized computational resources to distributed, localized, and constrained environments. As we transition from one to the other, the challenges and strategies related to efficiency evolve, underlining the need for specialized approaches tailored to each scenario. Having underscored the need for efficient AI, especially within the context of TinyML, we will transition to exploring the methodologies devised to meet these challenges. The following sections outline the main concepts we will delve deeper into later. We will demonstrate the breadth and depth of innovation needed to achieve efficient AI as we delve into these strategies.

+
+
+

8.3 Efficient Model Architectures

+

Choosing the right model architecture is as crucial as optimizing it. In recent years, researchers have explored some novel architectures that can have inherently fewer parameters while maintaining strong performance.

+

MobileNets: MobileNets are efficient mobile and embedded vision application models (Howard et al. 2017). The key idea that led to their success is the use of depth-wise separable convolutions, which significantly reduce the number of parameters and computations in the network. MobileNetV2 and V3 further enhance this design by introducing inverted residuals and linear bottlenecks.
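A back-of-the-envelope parameter count illustrates why depth-wise separable convolutions help (the layer sizes below are arbitrary illustrative values, not taken from the MobileNet paper):

```python
# Parameters for one convolutional layer (bias terms ignored)
k, c_in, c_out = 3, 64, 128     # kernel size, input/output channels (illustrative)

standard_conv = k * k * c_in * c_out                 # standard convolution
depthwise_separable = k * k * c_in + c_in * c_out    # depthwise + 1x1 pointwise
print(standard_conv / depthwise_separable)           # roughly 8x fewer parameters
```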

+
+Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv Preprint. https://arxiv.org/abs/1704.04861. +
+Iandola, Forrest N, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. 2016. SqueezeNet: Alexnet-level Accuracy with 50x Fewer Parameters and 0.5 MB Model Size.” ArXiv Preprint abs/1602.07360. https://arxiv.org/abs/1602.07360. +

SqueezeNet: SqueezeNet is a class of ML models known for its smaller size without sacrificing accuracy. It achieves this by using a “fire module” that reduces the number of input channels to 3x3 filters, thus reducing the parameters (Iandola et al. 2016). Moreover, it employs delayed downsampling to increase the accuracy by maintaining a larger feature map.

+

ResNet variants: The Residual Network (ResNet) architecture allows for the introduction of skip connections or shortcuts (He et al. 2016). Some variants of ResNet are designed to be more efficient. For instance, ResNet-SE incorporates the “squeeze and excitation” mechanism to recalibrate feature maps (Hu, Shen, and Sun 2018), while ResNeXt offers grouped convolutions for efficiency (Xie et al. 2017).

+
+He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. IEEE. https://doi.org/10.1109/cvpr.2016.90. +
+Hu, Jie, Li Shen, and Gang Sun. 2018. “Squeeze-and-Excitation Networks.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7132–41. IEEE. https://doi.org/10.1109/cvpr.2018.00745. +
+Xie, Saining, Ross Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. 2017. “Aggregated Residual Transformations for Deep Neural Networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1492–1500. IEEE. https://doi.org/10.1109/cvpr.2017.634. +
+
+

8.4 Efficient Model Compression

+

Model compression methods are very important for bringing deep learning models to devices with limited resources. These techniques reduce models’ size, energy consumption, and computational demands without significantly losing accuracy. At a high level, the methods can briefly be binned into the following fundamental methods:

+

Pruning: This is akin to trimming the branches of a tree. The idea was first proposed in the Optimal Brain Damage paper (LeCun, Denker, and Solla 1989) and later popularized in the context of deep learning by Han, Mao, and Dally (2016). In pruning, certain weights or even entire neurons are removed from the network based on specific criteria, which can significantly reduce the model size. Various strategies include weight pruning, neuron pruning, and structured pruning. We will explore these in more detail in sec-pruning. Figure fig-pruning is an example of neural network pruning: removing some of the nodes in the inner layers (based on specific criteria) reduces the number of edges between the nodes and, in turn, the size of the model.
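A minimal sketch of magnitude-based weight pruning illustrates the idea (a deliberate simplification; real frameworks prune iteratively and fine-tune the remaining weights afterwards):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))           # toy weight matrix

sparsity = 0.5                        # fraction of weights to remove
thresh = np.quantile(np.abs(W), sparsity)
mask = np.abs(W) >= thresh            # keep only the largest-magnitude weights
W_pruned = W * mask
print((W_pruned == 0).mean())         # roughly half the weights are now zero
```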

+
+LeCun, Yann, John Denker, and Sara Solla. 1989. “Optimal Brain Damage.” Adv Neural Inf Process Syst 2. +
+Han, Song, Huizi Mao, and William J. Dally. 2016. “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.” https://arxiv.org/abs/1510.00149. +
+
+
+ +
+
+Figure 8.2: Neural Network Pruning. +
+
+
+

Quantization: Quantization is the process of constraining an input from a large set to an output in a smaller set. In deep learning, this means reducing the number of bits that represent the weights and biases of the model. For example, using 16-bit or 8-bit representations instead of 32-bit can reduce the model size and speed up computations, with a minor trade-off in accuracy. We will explore these in more detail in sec-quant. Figure fig-quantization shows an example of quantization by rounding to the closest number. The conversion from 32-bit floating point to 16-bit reduces memory usage by 50%, and going from 32-bit to 8-bit integer reduces it by 75%. While the loss in numeric precision, and consequently model performance, is minor, the gain in memory efficiency is significant.
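A minimal sketch of affine (asymmetric) 8-bit quantization, assuming simple min-max calibration of the weight range:

```python
import numpy as np

w = np.array([-1.2, 0.0, 0.5, 2.7], dtype=np.float32)

# Map the float range [min, max] onto the uint8 range [0, 255]
scale = (w.max() - w.min()) / 255.0
zero_point = np.round(-w.min() / scale)
q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)

# Dequantize to inspect the (small) rounding error
w_hat = (q.astype(np.float32) - zero_point) * scale
print(w.nbytes, q.nbytes)   # 16 bytes -> 4 bytes: the 75% reduction above
```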

+
+
+
+ +
+
+Figure 8.3: Different forms of quantization. +
+
+
+

Knowledge Distillation: Knowledge distillation involves training a smaller model (student) to replicate the behavior of a larger model (teacher). The idea is to transfer the knowledge from the cumbersome model to the lightweight one. Hence, the smaller model attains performance close to its larger counterpart but with significantly fewer parameters. We will explore knowledge distillation in more detail in the sec-kd.
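The core of the distillation objective can be sketched as matching temperature-softened output distributions (a simplified numpy sketch; real training combines this soft loss with the usual hard-label loss):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

T = 4.0                                    # temperature softens the distributions
teacher_logits = np.array([8.0, 2.0, 1.0])
student_logits = np.array([6.0, 3.0, 1.5])

p = softmax(teacher_logits / T)            # soft targets from the teacher
q = softmax(student_logits / T)            # student predictions
kl = np.sum(p * (np.log(p) - np.log(q)))   # distillation loss term (KL divergence)
print(kl)
```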

+
+
+

8.5 Efficient Inference Hardware

+

Training an AI model is an intensive task that requires powerful hardware and can take hours to weeks to complete. Inference, by contrast, needs to be as fast as possible, especially in real-time applications. This is where efficient inference hardware comes into play. We can achieve rapid response times and power-efficient operation by optimizing the hardware specifically for inference tasks, which is especially crucial for edge devices and embedded systems.

+

TPUs (Tensor Processing Units): TPUs are custom-built ASICs (Application-Specific Integrated Circuits) by Google to accelerate machine learning workloads (Jouppi et al. 2017). They are optimized for tensor operations, offering high throughput for low-precision arithmetic, and are designed specifically for neural network machine learning. TPUs significantly accelerate model training and inference compared to general-purpose GPU/CPUs. This boost means faster model training and real-time or near-real-time inference capabilities, which are crucial for applications like voice search and augmented reality.

+
+Jouppi, Norman P., Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, et al. 2017. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246. +

Edge TPUs are a smaller, power-efficient version of Google’s TPUs tailored for edge devices. They provide fast on-device ML inferencing for TensorFlow Lite models. Edge TPUs allow for low-latency, high-efficiency inference on edge devices like smartphones, IoT devices, and embedded systems. AI capabilities can be deployed in real-time applications without communicating with a central server, thus saving bandwidth and reducing latency. Consider the table in Figure fig-edge-tpu-perf. It shows the performance differences between running different models on CPUs versus a Coral USB accelerator. The Coral USB accelerator is an accessory by Google’s Coral AI platform that lets developers connect Edge TPUs to Linux computers. Running inference on the Edge TPUs was 70 to 100 times faster than on CPUs.

+
+
+
+ +
+
+Figure 8.4: Accelerator vs CPU performance comparison. Credit: TensorFlow Blog. +
+
+
+

NN Accelerators: Fixed-function neural network accelerators are hardware accelerators designed explicitly for neural network computations. They can be standalone chips or part of a larger system-on-chip (SoC) solution. By optimizing the hardware for the specific operations that neural networks require, such as matrix multiplications and convolutions, NN accelerators can achieve faster inference times and lower power consumption than general-purpose CPUs and GPUs. They are especially beneficial in TinyML devices with power or thermal constraints, such as smartwatches, micro-drones, or robotics.

+

These are only the most common examples. A number of other hardware types are emerging that could offer significant advantages for inference, including, but not limited to, neuromorphic hardware and photonic computing. In sec-aihw, we will explore these in greater detail.

+

Efficient hardware for inference speeds up the process, saves energy, extends battery life, and can operate in real-time conditions. As AI continues to be integrated into myriad applications, from smart cameras to voice assistants, the role of optimized hardware will only become more prominent. By leveraging these specialized hardware components, developers and engineers can bring the power of AI to devices and situations that were previously unthinkable.

+
+
+

8.6 Efficient Numerics

+

Machine learning, and especially deep learning, involves enormous amounts of computation. Models can have millions to billions of parameters, often trained on vast datasets. Every operation, every multiplication or addition, demands computational resources. Therefore, the precision of the numbers used in these operations can significantly impact the computational speed, energy consumption, and memory requirements. This is where the concept of efficient numerics comes into play.

+
+

8.6.1 Numerical Formats

+

There are many different types of numerics, which have a long history in computing systems.

+

Floating point: Known as single-precision floating-point, FP32 utilizes 32 bits to represent a number, incorporating its sign, exponent, and fraction. FP32 is widely adopted in many deep learning frameworks and balances accuracy and computational requirements. It’s prevalent in the training phase for many neural networks due to its sufficient precision in capturing minute details during weight updates.

+

Also known as half-precision floating point, FP16 uses 16 bits to represent a number, including its sign, exponent, and fraction. It offers a good balance between precision and memory savings. FP16 is particularly popular in deep learning training on GPUs that support mixed-precision arithmetic, combining the speed benefits of FP16 with the precision of FP32 where needed.

+

Several other numerical formats fall into an exotic class. An exotic example is BF16 or Brain Floating Point. It is a 16-bit numerical format designed explicitly for deep learning applications. It’s a compromise between FP32 and FP16, retaining the 8-bit exponent from FP32 while reducing the mantissa to 7 bits (as compared to FP32’s 23-bit mantissa). This structure prioritizes range over precision. BF16 has achieved training results comparable in accuracy to FP32 while using significantly less memory and computational resources. This makes it suitable not just for inference but also for training deep neural networks.

+

By retaining the 8-bit exponent of FP32, BF16 offers a similar range, which is crucial for deep learning tasks where certain operations can result in very large or very small numbers. At the same time, by truncating precision, BF16 allows for reduced memory and computational requirements compared to FP32. BF16 has emerged as a promising middle ground in the landscape of numerical formats for deep learning, providing an efficient and effective alternative to the more traditional FP32 and FP16 formats.
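The relationship between BF16 and FP32 can be made concrete in code: zeroing the low 16 bits of an FP32 bit pattern yields a BF16 approximation, preserving the sign and 8-bit exponent while discarding mantissa precision. The sketch below, using Python's standard `struct` module, is purely illustrative; real conversion routines typically round to nearest rather than truncate.

```python
import struct

def f32_bits(x: float) -> int:
    """IEEE 754 single-precision bit pattern of x."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def bits_f32(b: int) -> float:
    """Inverse of f32_bits: reinterpret a 32-bit pattern as a float."""
    return struct.unpack(">f", struct.pack(">I", b))[0]

def to_bf16(x: float) -> float:
    """Approximate BF16 by zeroing the low 16 bits of the FP32 pattern
    (illustrative truncation; real conversions usually round-to-nearest)."""
    return bits_f32(f32_bits(x) & 0xFFFF0000)

print(to_bf16(3.14159265))  # 3.140625 -- only ~3 significant decimal digits survive
print(to_bf16(1e38))        # still finite: BF16 keeps FP32's 8-bit exponent range
```

Note that 1e38 remains representable after truncation because BF16 retains FP32's exponent range, whereas FP16's 5-bit exponent tops out near 65504.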

+

Figure fig-float-point-formats shows three different floating-point formats: Float32, Float16, and BFloat16.

+
+
+
+ +
+
+Figure 8.5: Three floating-point formats. +
+
+
+

Integer: These are integer representations using 8, 4, and 2 bits. They are often used during the inference phase of neural networks, where the weights and activations of the model are quantized to these lower precisions. Integer representations are deterministic and offer significant speed and memory advantages over floating-point representations. For many inference tasks, especially on edge devices, the slight loss in accuracy due to quantization is often acceptable, given the efficiency gains. An extreme form of integer numerics appears in binary neural networks (BNNs), where weights and activations are constrained to one of two values: +1 or -1.
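To make quantization concrete, the sketch below performs symmetric per-tensor INT8 quantization: floats are mapped into [-127, 127] using a single scale factor derived from the largest absolute weight. This is a simplified illustration (it assumes a nonzero maximum); production frameworks typically add per-channel scales, zero points for asymmetric ranges, and calibration data.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float weights to INT8.
    Assumes weights contain at least one nonzero value."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from INT8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.1, -0.5, 0.25, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Round-trip error is bounded by roughly half the scale per element.
```

Each weight now occupies one byte instead of four, at the cost of a bounded rounding error.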

+

Variable bit widths: Beyond the standard widths, research is ongoing into extremely low bit-width numerics, even down to binary or ternary representations. Extremely low bit-width operations can offer significant speedups and further reduce power consumption. While challenges remain in maintaining model accuracy with such drastic quantization, advances continue to be made in this area.
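As an illustration of why binary operations are so cheap, a dot product between two {+1, -1} vectors reduces to an XNOR and a population count instead of multiplies and adds. The toy sketch below packs signs into Python integers; real BNN kernels apply the same trick across hardware words.

```python
def binarize(vec):
    """Pack the signs of a real-valued vector into an integer bitmask (+1 -> bit set)."""
    bits = 0
    for i, v in enumerate(vec):
        if v >= 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {+1, -1} vectors via XNOR and popcount."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # bit set where signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n  # agreements contribute +1, disagreements -1

a = [0.5, -1.2, 0.3, -0.7]
b = [1.0, -0.4, -0.9, -2.0]
result = binary_dot(binarize(a), binarize(b), len(a))  # equals sign(a) . sign(b)
```

On hardware, the XNOR and popcount each operate on 32 or 64 lanes at once, which is where the dramatic speedups come from.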

+

Efficient numerics is not just about reducing the bit-width of numbers but understanding the trade-offs between accuracy and efficiency. As machine learning models become more pervasive, especially in real-world, resource-constrained environments, the focus on efficient numerics will continue to grow. By thoughtfully selecting and leveraging the appropriate numeric precision, one can achieve robust model performance while optimizing for speed, memory, and energy. Table tbl-precision summarizes these trade-offs.

+
+
+
+Table 8.1: Comparing precision levels in deep learning. +
+
Precision	Pros	Cons
FP32 (Floating Point 32-bit)	Standard precision used in most deep learning frameworks. High accuracy due to ample representational capacity. Well-suited for training.	High memory usage. Slower inference times compared to quantized models. Higher energy consumption.
FP16 (Floating Point 16-bit)	Reduces memory usage compared to FP32. Speeds up computations on hardware that supports FP16. Often used in mixed-precision training to balance speed and accuracy.	Lower representational capacity compared to FP32. Risk of numerical instability in some models or layers.
INT8 (8-bit Integer)	Significantly reduced memory footprint compared to floating-point representations. Faster inference if hardware supports INT8 computations. Suitable for many post-training quantization scenarios.	Quantization can lead to some accuracy loss. Requires careful calibration during quantization to minimize accuracy degradation.
INT4 (4-bit Integer)	Even lower memory usage than INT8. Further speedup potential for inference.	Higher risk of accuracy loss compared to INT8. Calibration during quantization becomes more critical.
Binary	Minimal memory footprint (only 1 bit per parameter). Extremely fast inference due to bitwise operations. Power efficient.	Significant accuracy drop for many tasks. Complex training dynamics due to extreme quantization.
Ternary	Low memory usage, but slightly more than binary. Offers a middle ground between representation and efficiency.	Accuracy might still be lower than that of higher-precision models. Training dynamics can be complex.
+
+
+
+
+
+

8.6.2 Efficiency Benefits

+

Numerical efficiency matters for machine learning workloads for several reasons:

+

Computational Efficiency : High-precision computations (like FP32 or FP64) can be slow and resource-intensive. Reducing numeric precision can achieve faster computation times, especially on specialized hardware that supports lower precision.

+

Memory Efficiency: Storage requirements decrease with reduced numeric precision. For instance, FP16 requires half the memory of FP32. This is crucial when deploying models to edge devices with limited memory or working with large models.
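The memory arithmetic is simple to sketch. The helper below estimates parameter storage alone (activations, optimizer state, and framework overhead would add to this); the 25-million-parameter figure is an illustrative assumption, roughly ResNet-50 scale.

```python
def model_memory_mb(num_params: int, bits_per_param: int) -> float:
    """Storage for model parameters alone, in megabytes (10^6 bytes)."""
    return num_params * bits_per_param / 8 / 1e6

params = 25_000_000  # illustrative, roughly ResNet-50 scale
for name, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {model_memory_mb(params, bits):.1f} MB")
# FP16 halves FP32's footprint; INT8 quarters it.
```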

+

Power Efficiency: Lower precision computations often consume less power, which is especially important for battery-operated devices.

+

Noise Introduction: Interestingly, the noise introduced by using lower precision can sometimes act as a regularizer, helping to prevent overfitting in some models.

+

Hardware Acceleration: Many modern AI accelerators and GPUs are optimized for lower precision operations, leveraging the efficiency benefits of such numerics.

+
+
+
+

8.7 Evaluating Models

+

It’s worth noting that the actual benefits and trade-offs can vary based on the specific architecture of the neural network, the dataset, the task, and the hardware being used. Before deciding on a numeric precision, it’s advisable to perform experiments to evaluate the impact on the desired application.

+
+

8.7.1 Efficiency Metrics

+

A deep understanding of model evaluation methods is important to guide this process systematically. When assessing AI models’ effectiveness and suitability for various applications, efficiency metrics come to the forefront.

+

FLOPs (Floating Point Operations) gauge a model’s computational demands. For instance, a modern neural network like BERT has billions of FLOPs, which might be manageable on a powerful cloud server but would be taxing on a smartphone. Higher FLOPs can lead to more prolonged inference times and significant power drain, especially on devices without specialized hardware accelerators. Hence, for real-time applications such as video streaming or gaming, models with lower FLOPs might be more desirable.
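A back-of-the-envelope FLOP count follows from layer shapes: multiplying an (m x k) input against a (k x n) weight costs about 2*m*k*n floating-point operations, one multiply and one add per term. The helpers below are a rough sketch; published figures often differ by whether a multiply-accumulate counts as one operation or two.

```python
def linear_flops(batch: int, in_features: int, out_features: int) -> int:
    """Approximate FLOPs for a fully connected layer (2 ops per multiply-add)."""
    return 2 * batch * in_features * out_features

def conv2d_flops(h: int, w: int, c_in: int, c_out: int, k: int) -> int:
    """Approximate FLOPs for a stride-1, 'same'-padded k x k 2D convolution
    over an h x w feature map."""
    return 2 * h * w * c_in * c_out * k * k

# A single 768 -> 768 projection on one token:
print(linear_flops(1, 768, 768))  # about 1.18 million FLOPs
```

Summing such counts over every layer is how tools arrive at whole-model figures like the billions of FLOPs quoted for BERT-scale networks.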

+

Memory Usage pertains to how much storage the model requires, affecting both the deploying device’s storage and RAM. Consider deploying a model onto a smartphone: a model that occupies several gigabytes of space not only consumes precious storage but might also be slower due to the need to load large weights into memory. This becomes especially crucial for edge devices like security cameras or drones, where minimal memory footprints are vital for storage and rapid data processing.

+

Power Consumption becomes especially crucial for devices that rely on batteries. For instance, a wearable health monitor using a power-hungry model could drain its battery in hours, rendering it impractical for continuous health monitoring. Optimizing models for low power consumption becomes essential as we move toward an era dominated by IoT devices, where many devices operate on battery power.

+

Inference Time is about how swiftly a model can produce results. In applications like autonomous driving, where split-second decisions are the difference between safety and calamity, models must operate rapidly. If a self-driving car’s model takes even a few seconds too long to recognize an obstacle, the consequences could be dire. Hence, ensuring a model’s inference time aligns with the real-time demands of its application is paramount.
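Measuring inference time is itself subtle: the first few calls are often slower (cold caches, JIT compilation), and the median is more robust than the mean against scheduling noise. A minimal timing harness might look like the following, where `model_call` is a hypothetical stand-in for whatever callable runs one inference.

```python
import time

def measure_latency_ms(fn, warmup: int = 10, iters: int = 100) -> float:
    """Median wall-clock latency of fn() in milliseconds."""
    for _ in range(warmup):          # discard cold-start effects
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e3)
    samples.sort()
    return samples[len(samples) // 2]

model_call = lambda: sum(i * i for i in range(1_000))  # stand-in workload
print(f"median latency: {measure_latency_ms(model_call):.3f} ms")
```

For hardware with asynchronous execution (GPUs, accelerators), an explicit synchronization call would also be needed before stopping the timer.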

+

In essence, these efficiency metrics are more than just numbers; they dictate where and how a model can be effectively deployed. A model might boast high accuracy, but if its FLOPs, memory usage, power consumption, or inference time make it unsuitable for its intended platform or real-world scenarios, its practical utility becomes limited.

+
+
+

8.7.2 Efficiency Comparisons

+

The ecosystem contains an abundance of models, each boasting its unique strengths and idiosyncrasies. However, pure model accuracy figures or training and inference speeds paint a partial picture. When we dive deeper into comparative analyses, several critical nuances emerge.

+

Often, we encounter the delicate balance between accuracy and efficiency. For instance, while a dense, deep learning model and a lightweight MobileNet variant might excel in image classification, their computational demands could be at two extremes. This differentiation is especially pronounced when comparing deployments on resource-abundant cloud servers versus constrained TinyML devices. In many real-world scenarios, the marginal gains in accuracy could be overshadowed by the inefficiencies of a resource-intensive model.

+

Moreover, the optimal model choice is rarely universal; it often depends on the specifics of an application. Consider object detection: a model that excels in general scenarios might falter in niche environments, such as when detecting manufacturing defects on a factory floor. This adaptability, or the lack of it, can dictate a model’s real-world utility.

+

Another important consideration is the relationship between model complexity and its practical benefits. Take voice-activated assistants, such as “Alexa” or “OK Google.” While a complex model might demonstrate a marginally superior understanding of user speech, if it is slower to respond than a simpler counterpart, the user experience could be compromised. Thus, adding layers or parameters does not always equate to better real-world outcomes.

+

Furthermore, while benchmark datasets, such as ImageNet (Russakovsky et al. 2015), COCO (Lin et al. 2014), Visual Wake Words (Chowdhery et al. 2019), Google Speech Commands (Warden 2018), etc. provide a standardized performance metric, they might not capture the diversity and unpredictability of real-world data. Two facial recognition models with similar benchmark scores might exhibit varied competencies when faced with diverse ethnic backgrounds or challenging lighting conditions. Such disparities underscore the importance of robustness and consistency across varied data. For example, Figure fig-stoves from the Dollar Street dataset shows stove images across extreme monthly incomes. Stoves have different shapes and technological levels across different regions and income levels. A model that is not trained on diverse datasets might perform well on a benchmark but fail in real-world applications. So, if a model was trained on pictures of stoves found in wealthy countries only, it would fail to recognize stoves from poorer regions.

+
+Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. 2015. ImageNet Large Scale Visual Recognition Challenge.” Int. J. Comput. Vision 115 (3): 211–52. https://doi.org/10.1007/s11263-015-0816-y. +
+Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. “Microsoft COCO: Common Objects in Context.” In Computer Vision - ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, 740–55. Springer. +
+Chowdhery, Aakanksha, Pete Warden, Jonathon Shlens, Andrew Howard, and Rocky Rhodes. 2019. “Visual Wake Words Dataset.” arXiv Preprint arXiv:1906.05721. +
+Warden, Pete. 2018. “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition.” arXiv Preprint arXiv:1804.03209. +
+
+
+ +
+
+Figure 8.6: Different types of stoves. Credit: Dollar Street stove images. +
+
+
+

In essence, a thorough comparative analysis transcends numerical metrics. It’s a holistic assessment intertwined with real-world applications, costs, and the intricate subtleties that each model brings to the table. This is why having standard benchmarks and metrics widely established and adopted by the community becomes important.

+
+
+
+

8.8 Conclusion

+

Efficient AI is extremely important as we push towards broader and more diverse real-world deployment of machine learning. This chapter provided an overview, exploring the various methodologies and considerations behind achieving efficient AI, starting with the fundamental need, similarities, and differences across cloud, Edge, and TinyML systems.

+

We saw that efficient model architectures can serve as a foundation for optimization. Model compression techniques such as pruning, quantization, and knowledge distillation help reduce computational demands and memory footprint without significantly impacting accuracy. Specialized hardware like TPUs and NN accelerators offer silicon optimized for neural network operations and data flow. Efficient numerics balance precision and efficiency, enabling models to attain robust performance using minimal resources. In the subsequent chapters, we will explore these topics in greater depth.

+

Together, these form a holistic framework for efficient AI. But the journey doesn’t end here. Achieving optimally efficient intelligence requires continued research and innovation. As models become more sophisticated, datasets grow, and applications diversify into specialized domains, efficiency must evolve in lockstep. Measuring real-world impact requires nuanced benchmarks and standardized metrics beyond simplistic accuracy figures.

+

Moreover, efficient AI expands beyond technological optimization and encompasses costs, environmental impact, and ethical considerations for the broader societal good. As AI permeates industries and daily lives, a comprehensive outlook on efficiency underpins its sustainable and responsible progress. The subsequent chapters will build upon these foundational concepts, providing actionable insights and hands-on best practices for developing and deploying efficient AI solutions.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+

Coming soon.

+
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/frameworks/frameworks.html b/contents/frameworks/frameworks.html new file mode 100644 index 00000000..85bb4bd9 --- /dev/null +++ b/contents/frameworks/frameworks.html @@ -0,0 +1,2154 @@ + + + + + + + + + +Machine Learning Systems - 6  AI Frameworks + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

6  AI Frameworks

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Illustration in a rectangular format, designed for a professional textbook, where the content spans the entire width. The vibrant chart represents training and inference frameworks for ML. Icons for TensorFlow, Keras, PyTorch, ONNX, and TensorRT are spread out, filling the entire horizontal space, and aligned vertically. Each icon is accompanied by brief annotations detailing their features. The lively colors like blues, greens, and oranges highlight the icons and sections against a soft gradient background. The distinction between training and inference frameworks is accentuated through color-coded sections, with clean lines and modern typography maintaining clarity and focus.
+
+
+

This chapter explores the landscape of AI frameworks that serve as the foundation for developing machine learning systems. AI frameworks provide the tools, libraries, and environments to design, train, and deploy machine learning models. We delve into the evolutionary trajectory of these frameworks, dissect the workings of TensorFlow, and provide insights into the core components and advanced features that define these frameworks.

+

Furthermore, we investigate the specialization of frameworks tailored to specific needs, the emergence of frameworks specifically designed for embedded AI, and the criteria for selecting the most suitable framework for your project. This exploration will be rounded off by a glimpse into the future trends expected to shape the landscape of ML frameworks in the coming years.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand the evolution and capabilities of major machine learning frameworks. This includes graph execution models, programming paradigms, hardware acceleration support, and how they have expanded over time.

  • +
  • Learn frameworks’ core components and functionality, such as computational graphs, data pipelines, optimization algorithms, training loops, etc., that enable efficient model building.

  • +
  • Compare frameworks across different environments, such as cloud, edge, and TinyML. Learn how frameworks specialize based on computational constraints and hardware.

  • +
  • Dive deeper into embedded and TinyML-focused frameworks like TensorFlow Lite Micro, CMSIS-NN, TinyEngine, etc., and how they optimize for microcontrollers.

  • +
  • When choosing a framework, explore model conversion and deployment considerations, including latency, memory usage, and hardware support.

  • +
  • Evaluate key factors in selecting the right framework, like performance, hardware compatibility, community support, ease of use, etc., based on the specific project needs and constraints.

  • +
  • Understand the limitations of current frameworks and potential future trends, such as using ML to improve frameworks, decomposed ML systems, and high-performance compilers.

  • +
+
+
+
+

6.1 Introduction

+

Machine learning frameworks provide the tools and infrastructure to efficiently build, train, and deploy machine learning models. In this chapter, we will explore the evolution and key capabilities of major frameworks like TensorFlow (TF), PyTorch, and specialized frameworks for embedded devices. We will dive into the components like computational graphs, optimization algorithms, hardware acceleration, and more that enable developers to construct performant models quickly. Understanding these frameworks is essential to leverage the power of deep learning across the spectrum from cloud to edge devices.

+

ML frameworks handle much of the complexity of model development through high-level APIs and domain-specific languages that allow practitioners to quickly construct models by combining pre-made components and abstractions. For example, frameworks like TensorFlow and PyTorch provide Python APIs to define neural network architectures using layers, optimizers, datasets, and more. This enables rapid iteration compared to coding every model detail from scratch.

+

A key capability these frameworks offer is distributed training engines that can scale model training across clusters of GPUs and TPUs. This makes it feasible to train state-of-the-art models with billions or trillions of parameters on vast datasets. Frameworks also integrate with specialized hardware like NVIDIA GPUs to further accelerate training via optimizations like parallelization and efficient matrix operations.
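The core of data-parallel distributed training can be sketched in a few lines: each worker computes gradients on its own shard of the batch, and the gradients are averaged before the weight update, which is mathematically equivalent to one large-batch step. Real engines implement this averaging as an all-reduce overlapped with computation; the gradient values below are made up for illustration.

```python
def allreduce_mean(grads_per_worker):
    """Average per-parameter gradients across workers (data parallelism)."""
    n = len(grads_per_worker)
    return [sum(g) / n for g in zip(*grads_per_worker)]

# Three workers, each holding gradients for two parameters (illustrative values).
worker_grads = [[0.2, -0.4], [0.4, 0.0], [0.0, -0.2]]
avg_grad = allreduce_mean(worker_grads)  # approximately [0.2, -0.2]
```

Model parallelism, by contrast, splits the network itself across devices; large models like GPT-3 combine both strategies.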

+

In addition, frameworks simplify deploying finished models into production through tools like TensorFlow Serving for scalable model serving and TensorFlow Lite for optimization on mobile and edge devices. Other valuable capabilities include visualization, model optimization techniques like quantization and pruning, and monitoring metrics during training.

+

Leading open-source frameworks like TensorFlow, PyTorch, and MXNet power much of AI research and development today. Commercial offerings like Amazon SageMaker and Microsoft Azure Machine Learning integrate these open-source frameworks with proprietary capabilities and enterprise tools.

+

Machine learning engineers and practitioners leverage these robust frameworks to focus on high-value tasks like model architecture, feature engineering, and hyperparameter tuning instead of infrastructure. The goal is to build and deploy performant models that solve real-world problems efficiently.

+

In this chapter, we will explore today’s leading cloud frameworks and how they have adapted models and tools specifically for embedded and edge deployment. We will compare programming models, supported hardware, optimization capabilities, and more to fully understand how frameworks enable scalable machine learning from the cloud to the edge.

+
+
+

6.2 Framework Evolution

+

Machine learning frameworks have evolved considerably over the past decade to meet the expanding needs of practitioners and rapid advances in deep learning techniques. Early neural network research was constrained by insufficient data and computing power, and building and training machine learning models required extensive low-level coding and infrastructure. However, the release of large datasets like ImageNet (Deng et al. 2009) and advancements in parallel GPU computing unlocked the potential for far deeper neural networks.

+
+Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. 2009. ImageNet: A Large-Scale Hierarchical Image Database.” In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–55. IEEE. https://doi.org/10.1109/cvpr.2009.5206848. +
+Team, The Theano Development, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, et al. 2016. “Theano: A Python Framework for Fast Computation of Mathematical Expressions.” https://arxiv.org/abs/1605.02688. +
+Jia, Yangqing, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. “Caffe: Convolutional Architecture for Fast Feature Embedding.” In Proceedings of the 22nd ACM International Conference on Multimedia, 675–78. ACM. https://doi.org/10.1145/2647868.2654889. +
+Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a Meeting Held December 3-6, 2012, Lake Tahoe, Nevada, United States, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 1106–14. https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html. +
+Chollet, François. 2018. “Introduction to Keras.” March 9th. +
+Tokui, Seiya, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, and Hiroyuki Yamazaki Vincent. 2019. “Chainer: A Deep Learning Framework for Accelerating the Research Cycle.” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery &Amp; Data Mining, 5:1–6. ACM. https://doi.org/10.1145/3292500.3330756. +
+Seide, Frank, and Amit Agarwal. 2016. “Cntk: Microsoft’s Open-Source Deep-Learning Toolkit.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2135–35. ACM. https://doi.org/10.1145/2939672.2945397. +
+Ansel, Jason, Edward Yang, Horace He, Natalia Gimelshein, Animesh Jain, Michael Voznesensky, Bin Bao, et al. 2024. PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation.” In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, edited by Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, 8024–35. ACM. https://doi.org/10.1145/3620665.3640366. +

The first ML frameworks, Theano by Team et al. (2016) and Caffe by Jia et al. (2014), were developed by academic institutions (Montreal Institute for Learning Algorithms, Berkeley Vision and Learning Center). Amid growing interest in deep learning due to state-of-the-art performance of AlexNet Krizhevsky, Sutskever, and Hinton (2012) on the ImageNet dataset, private companies and individuals began developing ML frameworks, resulting in frameworks such as Keras by Chollet (2018), Chainer by Tokui et al. (2019), TensorFlow from Google (Yu et al. 2018), CNTK by Microsoft (Seide and Agarwal 2016), and PyTorch by Facebook (Ansel et al. 2024).

+

Many of these ML frameworks can be divided into high-level vs. low-level frameworks and static vs. dynamic computational graph frameworks. High-level frameworks provide a higher level of abstraction than low-level frameworks. High-level frameworks have pre-built functions and modules for common ML tasks, such as creating, training, and evaluating common ML models, preprocessing data, engineering features, and visualizing data, which low-level frameworks do not have. Thus, high-level frameworks may be easier to use but are less customizable than low-level frameworks (i.e., users of low-level frameworks can define custom layers, loss functions, optimization algorithms, etc.). Examples of high-level frameworks include TensorFlow/Keras and PyTorch. Examples of low-level ML frameworks include TensorFlow with low-level APIs, Theano, Caffe, Chainer, and CNTK.

+

Frameworks like Theano and Caffe used static computational graphs, which required rigidly defining the full model architecture upfront. Static graphs require upfront declaration and limit flexibility, while dynamic graphs are constructed on the fly, enabling more iterative development. Around 2016, frameworks such as PyTorch, and later TensorFlow 2.0, began adopting dynamic graphs, providing greater flexibility for model development. We will discuss these concepts in more detail later in the AI Training section.
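The difference is easiest to see in miniature. The toy scalar class below builds a graph as expressions are evaluated (the dynamic, eager style popularized by PyTorch) and then walks it backward to compute gradients via reverse-mode automatic differentiation. It is a deliberately tiny sketch, not how any production framework is implemented.

```python
class Var:
    """A scalar node in a dynamic computational graph with reverse-mode autodiff."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # pairs of (parent_node, local_gradient)

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def backward(self, seed=1.0):
        """Propagate gradients to parents via the chain rule."""
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, w, b = Var(2.0), Var(3.0), Var(1.0)
y = w * x + b        # graph is built on the fly as the expression runs
y.backward()
print(w.grad, x.grad, b.grad)  # dy/dw = 2.0, dy/dx = 3.0, dy/db = 1.0
```

A static-graph framework would instead require declaring this expression symbolically up front, then compiling and executing it in a separate step.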

+

The development of these frameworks facilitated an explosion in model size and complexity over time—from early multilayer perceptrons and convolutional networks to modern transformers with billions or trillions of parameters. In 2016, ResNet models by He et al. (2016) achieved record ImageNet accuracy with over 150 layers and 25 million parameters. Then, in 2020, the GPT-3 language model from OpenAI (Brown et al. 2020) pushed parameters to an astonishing 175 billion using model parallelism in frameworks to train across thousands of GPUs and TPUs.

+
+He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. IEEE. https://doi.org/10.1109/cvpr.2016.90. +
+Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html. +

Each generation of frameworks unlocked new capabilities that powered advancement:

+
    +
  • Theano and TensorFlow (2015) introduced computational graphs and automatic differentiation to simplify model building.

  • +
  • CNTK (2016) pioneered efficient distributed training by combining model and data parallelism.

  • +
  • PyTorch (2016) provided imperative programming and dynamic graphs for flexible experimentation.

  • +
  • TensorFlow 2.0 (2019) defaulted to eager execution for intuitiveness and easier debugging.

  • +
  • TensorFlow Graphics (2020) added 3D data structures to handle point clouds and meshes.

  • +
+

In recent years, the frameworks have converged. Figure fig-ml-framework shows that TensorFlow and PyTorch have become the overwhelmingly dominant ML frameworks, representing more than 95% of ML frameworks used in research and production. Keras was integrated into TensorFlow in 2019; Preferred Networks transitioned Chainer to PyTorch in 2019; and Microsoft stopped actively developing CNTK in 2022 to support PyTorch on Windows.

[Figure 6.1: Popularity of ML frameworks in the United States as measured by Google web searches. Credit: Google.]

However, a one-size-fits-all approach does not work well across the spectrum from cloud to tiny edge devices. Different frameworks represent various philosophies around graph execution, declarative versus imperative APIs, and more. Declarative APIs define what the program should do, while imperative APIs focus on how it should be done step-by-step. For instance, TensorFlow uses graph execution and declarative-style modeling, while PyTorch adopts eager execution and imperative modeling for more Pythonic flexibility. Each approach carries tradeoffs, which we will discuss later in the Basic Components section.


Today’s advanced frameworks enable practitioners to develop and deploy increasingly complex models, a key driver of innovation in the AI field. However, they continue to evolve and expand their capabilities for the next generation of machine learning. To understand how these systems continue to evolve, we will dive deeper into TensorFlow as an example of how the framework grew in complexity over time.


6.3 Deep Dive into TensorFlow


TensorFlow was developed by the Google Brain team and was released as an open-source software library on November 9, 2015. It was designed for numerical computation using data flow graphs and has since become popular for a wide range of machine learning and deep learning applications.


TensorFlow is a training and inference framework that provides built-in functionality to handle everything from model creation and training to deployment, as shown in Figure fig-tensorflow-architecture. Since its initial development, the TensorFlow ecosystem has grown to include many different “varieties” of TensorFlow, each intended to allow users to support ML on different platforms. In this section, we will mainly discuss only the core package.


6.3.1 TF Ecosystem

  1. TensorFlow Core: the primary package that most developers engage with. It provides a comprehensive, flexible platform for defining, training, and deploying machine learning models. It includes tf.keras as its high-level API.

  2. TensorFlow Lite (https://www.tensorflow.org/lite): designed for deploying lightweight models on mobile, embedded, and edge devices. It offers tools to convert TensorFlow models to a more compact format suitable for limited-resource devices and provides optimized pre-trained models for mobile.

  3. TensorFlow.js: a JavaScript library that allows training and deployment of machine learning models directly in the browser or on Node.js. It also provides tools for porting pre-trained TensorFlow models to a browser-friendly format.

  4. TensorFlow on Edge Devices (Coral): a platform of hardware components and software tools from Google that allows the execution of TensorFlow models on edge devices, leveraging Edge TPUs for acceleration.

  5. TensorFlow Federated (TFF): a framework for machine learning and other computations on decentralized data. TFF facilitates federated learning, allowing model training across many devices without centralizing the data.

  6. TensorFlow Graphics: a library for using TensorFlow to carry out graphics-related tasks, including processing 3D shapes and point clouds, using deep learning.

  7. TensorFlow Hub: a repository of reusable machine learning model components that allows developers to reuse pre-trained model components, facilitating transfer learning and model composition.

  8. TensorFlow Serving: a framework designed for serving and deploying machine learning models for inference in production environments. It provides tools for versioning and dynamically updating deployed models without service interruption.

  9. TensorFlow Extended (TFX): an end-to-end platform designed to deploy and manage machine learning pipelines in production settings. TFX encompasses data validation, preprocessing, model training, validation, and serving components.
[Figure 6.2: Architecture overview of TensorFlow 2.0. Credit: TensorFlow.]

TensorFlow was developed to address the limitations of DistBelief (Yu et al. 2018)—the framework in use at Google from 2011 to 2015—by providing flexibility along three axes: 1) defining new layers, 2) refining training algorithms, and 3) defining new training algorithms. To understand what limitations in DistBelief led to the development of TensorFlow, we will first give a brief overview of the Parameter Server Architecture that DistBelief employed (Dean et al. 2012).

Yu, Yuan, Martín Abadi, Paul Barham, Eugene Brevdo, Mike Burrows, Andy Davis, Jeff Dean, et al. 2018. “Dynamic Control Flow in Large-Scale Machine Learning.” In Proceedings of the Thirteenth EuroSys Conference, 265–83. ACM. https://doi.org/10.1145/3190508.3190551.

Dean, Jeffrey, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, et al. 2012. “Large Scale Distributed Deep Networks.” In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a Meeting Held December 3-6, 2012, Lake Tahoe, Nevada, United States, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 1232–40. https://proceedings.neurips.cc/paper/2012/hash/6aca97005c68f1206823815f66102863-Abstract.html.

The Parameter Server (PS) architecture is a popular design for distributing the training of machine learning models, especially deep neural networks, across multiple machines. The fundamental idea is to separate the storage and management of model parameters from the computation used to update these parameters:


Storage: The stateful parameter server processes handled the storage and management of model parameters. Given the large scale of models and the system’s distributed nature, these parameters were sharded across multiple parameter servers. Each server maintained a portion of the model parameters, making it "stateful" as it had to maintain and manage this state across the training process.


Computation: The worker processes, which could be run in parallel, were stateless and purely computational. They processed data and computed gradients without maintaining any state or long-term memory (M. Li et al. 2014).

Li, Mu, David G. Andersen, Alexander J. Smola, and Kai Yu. 2014. “Communication Efficient Distributed Machine Learning with the Parameter Server.” In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, edited by Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, 19–27. https://proceedings.neurips.cc/paper/2014/hash/1ff1de774005f8da13f42943881c655f-Abstract.html.
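To make the split between stateful servers and stateless workers concrete, here is a minimal single-process sketch of the parameter-server pattern in plain Python. The class and function names are illustrative, not from any real system; an actual deployment shards parameters across machines and runs many workers in parallel.

```python
# Hypothetical sketch of the parameter-server pattern: a stateful server owns
# the parameters; stateless workers pull them, compute gradients, push updates.

class ParameterServer:
    """Stateful: stores and manages (a shard of) the model parameters."""
    def __init__(self, params):
        self.params = dict(params)

    def pull(self):
        return dict(self.params)

    def push(self, grads, lr=0.1):
        # Apply a gradient update to the stored parameters.
        for name, g in grads.items():
            self.params[name] -= lr * g

def worker_step(server, example):
    """Stateless: pulls parameters, computes a gradient, pushes it back."""
    w = server.pull()["w"]
    x, y = example
    pred = w * x
    grad_w = 2 * (pred - y) * x  # d/dw of the squared error (w*x - y)^2
    server.push({"w": grad_w})

server = ParameterServer({"w": 0.0})
for _ in range(50):                          # 50 passes over the data
    for ex in [(1.0, 2.0), (2.0, 4.0)]:      # data generated by true w = 2.0
        worker_step(server, ex)
print(round(server.pull()["w"], 2))
```

After enough worker steps, the stored weight converges to the value that fits the data (here w = 2.0); the server never computes gradients itself, and the workers keep no state between steps.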

Exercise 6.1 (TensorFlow Core)  


Let’s build a comprehensive understanding of core machine learning algorithms using TensorFlow and their practical applications in data analysis and predictive modeling. We will start with linear regression to predict survival rates from the Titanic dataset. Then, using TensorFlow, we will construct classifiers to identify different species of flowers based on their attributes. Next, we will explore the K-Means algorithm and its application in segmenting datasets into cohesive clusters. Finally, we will apply hidden Markov models (HMM) to forecast weather patterns.


Exercise 6.2 (TensorFlow Lite)  


Here, we will see how to build a miniature machine-learning model for microcontrollers. We will build a mini neural network that is streamlined to learn from data even with limited resources and optimized for deployment by shrinking our model for efficient use on microcontrollers. TensorFlow Lite, a powerful technology derived from TensorFlow, shrinks models for tiny devices and helps enable on-device features like image recognition in smart devices. It is used in edge computing to allow for faster analysis and decisions in devices processing data locally.


DistBelief and its architecture defined above were crucial in enabling distributed deep learning at Google but also introduced limitations that motivated the development of TensorFlow:


6.3.2 Static Computation Graph


Model parameters are distributed across various parameter servers in the parameter server architecture. Since DistBelief was primarily designed for the neural network paradigm, parameters corresponded to a fixed neural network structure. If the computation graph were dynamic, the distribution and coordination of parameters would become significantly more complicated. For example, a change in the graph might require the initialization of new parameters or the removal of existing ones, complicating the management and synchronization tasks of the parameter servers. This made it harder to implement models outside the neural framework or models that required dynamic computation graphs.


TensorFlow was designed as a more general computation framework that expresses computation as a data flow graph. This allows for a wider variety of machine learning models and algorithms outside of neural networks and provides flexibility in refining models.


6.3.3 Usability & Deployment


The parameter server model delineates roles (worker nodes and parameter servers) and is optimized for data center deployments, which is not optimal for all use cases. For instance, this division introduces overheads or complexities on edge devices or in other non-data-center environments.


TensorFlow was built to run on multiple platforms, from mobile devices and edge devices to cloud infrastructure. It also aimed to be lighter and developer-friendly and to provide ease of use between local and distributed training.


6.3.4 Architecture Design


Rather than using the parameter server architecture, TensorFlow deploys tasks across a cluster. These tasks are named processes that can communicate over a network, and each can execute TensorFlow’s core construct, the dataflow graph, and interface with various computing devices (like CPUs or GPUs). This graph is a directed representation where nodes symbolize computational operations, and edges depict the tensors (data) flowing between these operations.


Despite the absence of traditional parameter servers, some “PS tasks” still store and manage parameters reminiscent of parameter servers in other systems. The remaining tasks, which usually handle computation, data processing, and gradient calculations, are referred to as "worker tasks." TensorFlow’s PS tasks can execute any computation representable by the dataflow graph, meaning they aren’t just limited to parameter storage, and the computation can be distributed. This capability makes them significantly more versatile and gives users the power to program the PS tasks using the standard TensorFlow interface, the same one they’d use to define their models. As mentioned above, dataflow graphs’ structure also makes them inherently good for parallelism, allowing for the processing of large datasets.


6.3.5 Built-in Functionality & Keras


TensorFlow includes libraries to help users develop and deploy more use-case-specific models, and since this framework is open-source, this list continues to grow. These libraries address the entire ML development lifecycle: data preparation, model building, deployment, and responsible AI.


One of TensorFlow’s biggest advantages is its integration with Keras, though, as we will cover in the next section, PyTorch recently added a Keras integration. Keras is another ML framework built to be extremely user-friendly and, as a result, has a high level of abstraction. We will cover Keras in more depth later in this chapter. However, when discussing its integration with TensorFlow, it is important to note that Keras was originally built to be backend-agnostic. This means users could abstract away backend complexities, offering a cleaner, more intuitive way to define and train models without worrying about compatibility issues with different backends. TensorFlow users had some complaints about the usability and readability of TensorFlow’s API, so as TF gained prominence, it integrated Keras as its high-level API. This integration offered major benefits to TensorFlow users since it introduced more intuitive readability and portability of models while still taking advantage of powerful backend features, Google support, and infrastructure to deploy models on various platforms.


Exercise 6.3 (Exploring Keras: Building, Training, and Evaluating Neural Networks)  


Here, we’ll learn how to use Keras, a high-level neural network API, for model development and training. We will explore the functional API for concise model building, understand loss and metric classes for model evaluation, and use built-in optimizers to update model parameters during training. Additionally, we’ll discover how to define custom layers and metrics tailored to our needs. Lastly, we’ll delve into Keras’ training loops to streamline the process of training neural networks on large datasets. This knowledge will empower us to build and optimize neural network models across various machine learning and artificial intelligence applications.


6.3.6 Limitations and Challenges


TensorFlow is one of the most popular deep learning frameworks, but it has drawn criticism, mostly focused on usability and resource usage. While advantageous, the rapid pace of updates through its support from Google has sometimes led to backward compatibility issues, deprecated functions, and shifting documentation. Additionally, even with the Keras integration, TensorFlow’s syntax and learning curve can be difficult for new users. One major critique of TensorFlow is its high overhead and memory consumption due to the range of built-in libraries and support. Some of these concerns can be addressed using pared-down versions, but those can still be limiting in resource-constrained environments.


6.3.7 PyTorch vs. TensorFlow


PyTorch and TensorFlow have established themselves as frontrunners in the industry. Both frameworks offer robust functionalities but differ in design philosophies, ease of use, ecosystem, and deployment capabilities.


Design Philosophy and Programming Paradigm: PyTorch uses a dynamic computational graph termed eager execution. This makes it intuitive and facilitates debugging since operations are executed immediately and can be inspected on the fly. In comparison, earlier versions of TensorFlow were centered around a static computational graph, which required the graph’s complete definition before execution. However, TensorFlow 2.0 introduced eager execution by default, making it more aligned with PyTorch. PyTorch’s dynamic nature and Python-based approach have enabled its simplicity and flexibility, particularly for rapid prototyping. TensorFlow’s static graph approach in its earlier versions had a steeper learning curve; the introduction of TensorFlow 2.0, with its Keras integration as the high-level API, has significantly simplified the development process.


Deployment: PyTorch is heavily favored in research environments; deploying PyTorch models in production settings was traditionally challenging. However, deployment has become more feasible with the introduction of TorchScript and the TorchServe tool. One of TensorFlow’s strengths lies in its scalability and deployment capabilities, especially on embedded and mobile platforms with TensorFlow Lite. TensorFlow Serving and TensorFlow.js further facilitate deployment in various environments, thus giving it a broader reach in the ecosystem.


Performance: Both frameworks offer efficient hardware acceleration for their operations. However, TensorFlow has a slightly more robust optimization workflow, such as the XLA (Accelerated Linear Algebra) compiler, which can further boost performance. Its static computational graph was also advantageous for certain optimizations in the early versions.


Ecosystem: PyTorch has a growing ecosystem with tools like TorchServe for serving models and libraries like TorchVision, TorchText, and TorchAudio for specific domains. As we mentioned earlier, TensorFlow has a broad and mature ecosystem. TensorFlow Extended (TFX) provides an end-to-end platform for deploying production machine learning pipelines. Other tools and libraries include TensorFlow Lite, TensorFlow.js, TensorFlow Hub, and TensorFlow Serving.


Table tbl-pytorch_vs_tf provides a comparative analysis:

Table 6.1: Comparison of PyTorch and TensorFlow.

Feature/Aspect             | PyTorch                                                           | TensorFlow
Design Philosophy          | Dynamic computational graph (eager execution)                     | Static computational graph (early versions); eager execution in TensorFlow 2.0
Deployment                 | Traditionally challenging; improved with TorchScript & TorchServe | Scalable, especially on embedded platforms with TensorFlow Lite
Performance & Optimization | Efficient GPU acceleration                                        | Robust optimization with XLA compiler
Ecosystem                  | TorchServe, TorchVision, TorchText, TorchAudio                    | TensorFlow Extended (TFX), TensorFlow Lite, TensorFlow.js, TensorFlow Hub, TensorFlow Serving
Ease of Use                | Preferred for its Pythonic approach and rapid prototyping         | Initially steep learning curve; simplified with Keras in TensorFlow 2.0

6.4 Basic Framework Components


6.4.1 Tensor data structures


To understand tensors, let us start from the familiar concepts in linear algebra. As demonstrated in Figure fig-tensor-data-structure, vectors can be represented as a stack of numbers in a 1-dimensional array. Matrices follow the same idea, and one can think of them as many vectors stacked on each other, making them 2-dimensional. Higher-dimensional tensors work the same way. A 3-dimensional tensor is simply a set of matrices stacked on each other in another direction. Therefore, vectors and matrices can be considered special cases of tensors of rank 1 and rank 2, respectively.

[Figure 6.3: Visualization of Tensor Data Structure.]

Defining formally, in machine learning, tensors are a multi-dimensional array of numbers. The number of dimensions defines the rank of the tensor. As a generalization of linear algebra, the study of tensors is called multilinear algebra. There are noticeable similarities between matrices and higher-ranked tensors. First, extending the definitions given in linear algebra to tensors, such as with eigenvalues, eigenvectors, and rank (in the linear algebra sense), is possible. Furthermore, with the way we have defined tensors, it is possible to turn higher dimensional tensors into matrices. This is critical in practice, as the multiplication of abstract representations of higher dimensional tensors is often completed by first converting them into matrices for multiplication.
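As a minimal illustration of this conversion, the following sketch flattens a rank-3 tensor (built from plain nested lists) into a matrix by merging its first two axes, which is essentially what a framework's reshape operation does under the hood:

```python
# Matricizing a rank-3 tensor with plain nested lists (illustrative sketch;
# frameworks do this with reshape/view, but the bookkeeping is the same).

tensor3d = [  # shape (2, 2, 3): 2 blocks, each a 2x3 matrix
    [[1, 2, 3], [4, 5, 6]],
    [[7, 8, 9], [10, 11, 12]],
]

# Merge the first two axes: shape (2, 2, 3) -> (4, 3)
matrix = [row for block in tensor3d for row in block]

print(matrix[2])  # third row of the matrix comes from the second block
```

The 4x3 matrix carries exactly the same numbers as the rank-3 tensor; only the indexing scheme changes, which is why higher-dimensional tensor products can be computed as matrix multiplications.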


Tensors offer a flexible data structure that can represent data in higher dimensions. For example, to represent color image data, for each pixel value (in 2 dimensions), one needs the color values for red, green, and blue. With tensors, it is easy to contain image data in a single 3-dimensional tensor, with each number within it representing a certain color value in a certain location of the image. Extending even further, if we wanted to store a series of images, we could extend the dimensions such that the new dimension (to create a 4-dimensional tensor) represents our different images. This is exactly what the famous MNIST dataset does, loading a single 4-dimensional tensor when one calls to load the dataset, allowing a compact representation of all the data in one place.
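A small sketch of this idea, using nested Python lists so the indexing stays explicit (the `shape` helper is hypothetical, not a framework function):

```python
# A tiny "image batch" as a rank-4 tensor (num_images, height, width, channels),
# built from nested lists to make each axis explicit.

num_images, height, width, channels = 2, 4, 4, 3
batch = [[[[0 for _ in range(channels)]
           for _ in range(width)]
          for _ in range(height)]
         for _ in range(num_images)]

# Set the red channel of pixel (row 1, column 2) in the first image.
batch[0][1][2][0] = 255

def shape(t):
    """Infer the shape of a regular nested-list tensor."""
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0]
    return tuple(s)

print(shape(batch))  # (2, 4, 4, 3)
```

The same four-axis layout (examples, height, width, channels) is how frameworks store image datasets such as MNIST in a single tensor.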


6.4.2 Computational graphs


Graph Definition


Computational graphs are a key component of deep learning frameworks like TensorFlow and PyTorch. They allow us to express complex neural network architectures efficiently and in a form that supports automatic differentiation. A computational graph is a directed acyclic graph (DAG) in which each node represents an operation or variable, and edges represent data dependencies between them.


For example, a node might represent a matrix multiplication operation, taking two input matrices (or tensors) and producing an output matrix (or tensor). To visualize this, consider the simple example in Figure fig-computational-graph. The directed acyclic graph computes \(z = x \times y\), where each variable is a scalar value.

[Figure 6.4: Basic example of a computational graph.]
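The \(z = x \times y\) graph can be sketched in a few lines of plain Python: variable and operation nodes, a forward pass that evaluates the graph, and a backward pass that applies the chain rule, which is what frameworks automate as automatic differentiation. The class names here are illustrative, not a framework API.

```python
# Toy computational graph for z = x * y: nodes are variables or operations,
# edges are the data dependencies captured by the constructor arguments.

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0  # filled in by the backward pass

class Mul:
    def __init__(self, a, b):
        self.a, self.b = a, b  # edges into this operation node

    def forward(self):
        self.value = self.a.value * self.b.value
        return self.value

    def backward(self, upstream=1.0):
        # Chain rule: dz/da = b, dz/db = a
        self.a.grad += upstream * self.b.value
        self.b.grad += upstream * self.a.value

x, y = Var(3.0), Var(4.0)
z = Mul(x, y)
print(z.forward())      # 12.0
z.backward()
print(x.grad, y.grad)   # 4.0 3.0
```

Real frameworks build the same kind of graph, just with tensor-valued nodes and many more operation types.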

Underneath the hood, the computational graphs represent abstractions for common layers like convolutional, pooling, recurrent, and dense layers, with data including activations, weights, and biases represented in tensors. Convolutional layers form the backbone of CNN models for computer vision. They detect spatial patterns in input data through learned filters. Recurrent layers like LSTMs and GRUs enable sequential data processing for tasks like language translation. Attention layers are used in transformers to draw global context from the entire input.


Layers are higher-level abstractions that define computations on top of those tensors. For example, a Dense layer performs a matrix multiplication and addition between input/weight/bias tensors. Note that a layer operates on tensors as inputs and outputs; the layer is not a tensor. Some key differences:

  • Layers contain states like weights and biases. Tensors are stateless, just holding data.

  • Layers can modify internal state during training. Tensors are immutable/read-only.

  • Layers are higher-level abstractions. Tensors are at a lower level and directly represent data and math operations.

  • Layers define fixed computation patterns. Tensors flow between layers during execution.

  • Layers are used indirectly when building models. Tensors flow between layers during execution.

So, while tensors are a core data structure that layers consume and produce, layers have additional functionality for defining parameterized operations and training. While a layer configures tensor operations under the hood, the layer remains distinct from the tensor objects. The layer abstraction makes building and training neural networks much more intuitive. This abstraction enables developers to build models by stacking these layers together without implementing the layer logic. For example, calling tf.keras.layers.Conv2D in TensorFlow creates a convolutional layer. The framework handles computing the convolutions, managing parameters, etc. This simplifies model development, allowing developers to focus on architecture rather than low-level implementations. Layer abstractions utilize highly optimized implementations for performance. They also enable portability, as the same architecture can run on different hardware backends like GPUs and TPUs.


In addition, computational graphs include activation functions like ReLU, sigmoid, and tanh that are essential to neural networks, and many frameworks provide these as standard abstractions. These functions introduce non-linearities that enable models to approximate complex functions. Frameworks provide these as simple, predefined operations that can be used when constructing models, for example, tf.nn.relu in TensorFlow. This abstraction enables flexibility, as developers can easily swap activation functions for tuning performance. Predefined activations are also optimized by the framework for faster execution.
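The layer-versus-tensor distinction and pluggable activations can be sketched in plain Python: the layer object holds state (weights, bias, and an activation function), while the lists it consumes and produces are just data. The names are illustrative, not a framework API.

```python
# Minimal sketch of a Dense layer: stateful parameters plus a swappable
# activation, operating on plain-list "tensors".

def relu(v):
    return [max(0.0, x) for x in v]

class Dense:
    def __init__(self, weights, bias, activation=relu):
        self.weights = weights        # state: shape (out, in)
        self.bias = bias              # state: shape (out,)
        self.activation = activation  # swappable non-linearity

    def __call__(self, x):
        # Matrix-vector multiply plus bias, then the activation.
        pre = [sum(w * xi for w, xi in zip(row, x)) + b
               for row, b in zip(self.weights, self.bias)]
        return self.activation(pre)

layer = Dense(weights=[[1.0, -1.0], [0.5, 0.5]], bias=[0.0, -2.0])
print(layer([2.0, 1.0]))  # [1.0, 0.0]
```

Swapping `activation=relu` for another function changes the non-linearity without touching the layer's stored parameters, which is exactly the flexibility the framework abstractions provide.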


In recent years, models like ResNets and MobileNets have emerged as popular architectures, with current frameworks pre-packaging these as computational graphs. Rather than worrying about the fine details, developers can utilize them as a starting point, customizing as needed by substituting layers. This simplifies and speeds up model development, avoiding reinventing architectures from scratch. Predefined models include well-tested, optimized implementations that ensure good performance. Their modular design also enables transferring learned features to new tasks via transfer learning. These predefined architectures provide high-performance building blocks to create robust models quickly.


These layer abstractions, activation functions, and predefined architectures the frameworks provide constitute a computational graph. When a user defines a layer in a framework (e.g., tf.keras.layers.Dense()), the framework configures computational graph nodes and edges to represent that layer. The layer parameters like weights and biases become variables in the graph. The layer computations become operation nodes (such as the x and y in the figure above). When you call an activation function like tf.nn.relu(), the framework adds a ReLU operation node to the graph. Predefined architectures are just pre-configured subgraphs that can be inserted into your model’s graph. Thus, model definition via high-level abstractions creates a computational graph—the layers, activations, and architectures we use become graph nodes and edges.


We implicitly construct a computational graph when defining a neural network architecture in a framework. The framework uses this graph to determine operations to run during training and inference. Computational graphs bring several advantages over raw code, and that’s one of the core functionalities that is offered by a good ML framework:

  • Explicit representation of data flow and operations

  • Ability to optimize the graph before execution

  • Automatic differentiation for training

  • Language agnosticism - the graph can be translated to run on GPUs, TPUs, etc.

  • Portability - the graph can be serialized, saved, and restored later

Computational graphs are the fundamental building blocks of ML frameworks. Model definition via high-level abstractions creates a computational graph—the layers, activations, and architectures we use become graph nodes and edges. The framework compilers and optimizers operate on this graph to generate executable code. The abstractions provide a developer-friendly API for building computational graphs. Under the hood, it’s still graphs all the way down! So, while you may not directly manipulate graphs as a framework user, they enable your high-level model specifications to be efficiently executed. The abstractions simplify model-building, while computational graphs make it possible.


Static vs. Dynamic Graphs


Deep learning frameworks have traditionally followed one of two approaches for expressing computational graphs.


Static graphs (declare-then-execute): With this model, the entire computational graph must be defined upfront before running it. All operations and data dependencies must be specified during the declaration phase. TensorFlow originally followed this static approach - models were defined in a separate context, and then a session was created to run them. The benefit of static graphs is they allow more aggressive optimization since the framework can see the full graph. However, it also tends to be less flexible for research and interactivity. Changes to the graph require re-declaring the full model.


For example:

x = tf.placeholder(tf.float32)
y = tf.matmul(x, weights) + biases

The model is defined separately from execution, like building a blueprint. For TensorFlow 1.x, this is done using tf.Graph(). All ops and variables must be declared upfront. Subsequently, the graph is compiled and optimized before running. Execution is done later by feeding in tensor values.


Dynamic graphs (define-by-run): Unlike declaring (all) first and then executing, the graph is built dynamically as execution happens. There is no separate declaration phase - operations execute immediately as defined. This style is imperative and flexible, facilitating experimentation.


PyTorch uses dynamic graphs, building the graph on the fly as execution happens. For example, consider the following code snippet, where the graph is built as the execution is taking place:

x = torch.randn(4, 784)
y = torch.matmul(x, weights) + biases

The above example does not have separate compile/build/run phases. Operations are defined and executed immediately. With dynamic graphs, the definition is intertwined with execution, providing a more intuitive, interactive workflow. However, the downside is that there is less potential for optimization since the framework only sees the graph as it is built.


Recently, however, the distinction has blurred as frameworks adopt both modes. TensorFlow 2.0 defaults to dynamic graph mode while letting users work with static graphs when needed. Dynamic declaration makes frameworks easier to use, while static models provide optimization benefits. The ideal framework offers both options.
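The two styles can be contrasted in plain Python without any framework: in the "static" style the computation is first described as data and run later, while in the "define-by-run" style each operation executes immediately and ordinary control flow can shape the computation. The helpers below are hypothetical, for illustration only.

```python
# --- declare-then-execute: the "graph" is just data until run() is called ---
graph = [("mul", 2), ("add", 3)]  # each entry: (operation, constant operand)

def run(graph, x):
    val = x
    for op, c in graph:  # interpret the pre-declared description
        val = val * c if op == "mul" else val + c
    return val

# --- define-by-run: ops execute immediately, interleaved with Python ---
def dynamic(x):
    val = x * 2        # executes now; inspectable with print/debugger
    if val > 5:        # ordinary control flow can shape the computation
        val = val + 3
    return val

print(run(graph, 4), dynamic(4))
```

Because `graph` is inspectable data, a framework could optimize or export it before running; the `dynamic` version is easier to debug but only reveals its structure as it executes.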


Static graph declaration provides optimization opportunities but less interactivity. While dynamic execution offers flexibility and ease of use, it may have performance overhead. Here is a table comparing the pros and cons of static vs dynamic execution graphs:

Execution Graph               | Pros                                                                                                                                                       | Cons
Static (Declare-then-execute) | Enables graph optimizations by seeing the full model ahead of time; can export and deploy frozen graphs; graph is packaged independently of code           | Less flexible for research and iteration; changes require rebuilding the graph; execution has separate compile and run phases
Dynamic (Define-by-run)       | Intuitive imperative style like Python code; interleaves graph building with execution; easy to modify graphs; debugging fits seamlessly into the workflow | Harder to optimize without the full graph; possible slowdowns from graph building during execution; can require more memory

6.4.3 Data Pipeline Tools


Computational graphs can only be as good as the data they learn from and work on. Therefore, feeding training data efficiently is crucial for optimizing deep neural network performance, though it is often overlooked as one of the core functionalities. Many modern AI frameworks provide specialized pipelines to ingest, process, and augment datasets for model training.


Data Loaders


At these pipelines’ core are data loaders, which handle reading training examples from sources such as files, databases, and object storage. Deep learning models require diverse data formats depending on the application. Popular formats include:

  • CSV: a versatile, simple format often used for tabular data.

  • TFRecord: TensorFlow’s proprietary format, optimized for performance.

  • Parquet: columnar storage, offering efficient data compression and retrieval.

  • JPEG/PNG: commonly used for image data.

  • WAV/MP3: prevalent formats for audio data.

For instance, tf.data is TensorFlow’s data-loading pipeline: https://www.tensorflow.org/guide/data.


Data loaders batch examples to leverage vectorization support in hardware. Batching refers to grouping multiple data points for simultaneous processing, leveraging the vectorized computation capabilities of hardware like GPUs. While typical batch sizes range from 32 to 512 examples, the optimal size often depends on the data’s memory footprint and the specific hardware constraints. Advanced loaders can also stream large datasets from disk or cloud storage instead of fully loading them into memory, enabling virtually unlimited dataset sizes.


Data loaders can also shuffle data across epochs for randomization and preprocess features in parallel with model training to expedite the training process. Randomly shuffling the order of examples between training epochs reduces bias and improves generalization.


Data loaders also support caching and prefetching strategies to optimize data delivery for fast, smooth model training. Caching preprocessed batches in memory allows them to be reused efficiently during multiple training steps and eliminates redundant processing. Prefetching, conversely, involves preloading subsequent batches, ensuring that the model never idles waiting for data.
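The shuffling and batching behavior described above can be captured in a toy loader; this is a simplified, in-memory sketch of what pipelines like tf.data or torch.utils.data.DataLoader provide, with an illustrative function name.

```python
import random

def data_loader(examples, batch_size, seed=None):
    """Yield shuffled, fixed-size batches; reshuffling per epoch reduces bias."""
    order = list(range(len(examples)))
    rng = random.Random(seed)
    rng.shuffle(order)
    for i in range(0, len(order), batch_size):
        yield [examples[j] for j in order[i:i + batch_size]]

examples = list(range(10))
batches = list(data_loader(examples, batch_size=4, seed=0))
print([len(b) for b in batches])  # [4, 4, 2] -- the last batch may be smaller
```

Calling the loader once per epoch (with a fresh shuffle) reproduces the per-epoch randomization described above; a production loader would additionally prefetch and cache batches so the model never idles.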


6.4.4 Data Augmentation


Besides loading, data augmentation expands datasets synthetically. Augmentations apply random transformations for images like flipping, cropping, rotating, altering color, adding noise, etc. For audio, common augmentations involve mixing clips with background noise or modulating speed/pitch/volume.


Augmentations increase variation in the training data. By programmatically broadening the training data distribution, they reduce overfitting and improve model generalization.


Frameworks like TensorFlow and PyTorch simplify integrating augmentations into the data pipeline, applying them on the fly each epoch. Together, performant data loaders and extensive augmentations enable practitioners to feed massive, varied datasets to neural networks efficiently. Hands-off data pipelines represent a significant improvement in usability and productivity, allowing developers to focus more on model architecture and less on data wrangling when training deep learning models.


6.4.5 Optimization Algorithms


Training a neural network is fundamentally an iterative process that seeks to minimize a loss function. The goal is to fine-tune the model weights and parameters to produce predictions close to the true target labels. Machine learning frameworks have greatly streamlined this process by offering extensive support in three critical areas: loss functions, optimization algorithms, and regularization techniques.


Loss functions quantify the difference between the model’s predictions and the true values. Different tasks require different loss functions, since the loss function defines the “objective” that training aims to minimize. Commonly used loss functions are Mean Squared Error (MSE) for regression tasks and Cross-Entropy Loss for classification tasks.


To demonstrate some of the loss functions, imagine you have a set of inputs with corresponding true outputs, \(Y_n\), where \(Y_n\) denotes the \(n\)-th target value. Each input is fed into the model, which produces a prediction, which we can call \(\hat{Y_n}\). With the predicted value and the true value, we can, for example, use the MSE to calculate the loss:


\[MSE = \frac{1}{N}\sum_{n=1}^{N}(Y_n - \hat{Y_n})^2\]


If the problem is a classification problem, we do not want to use the MSE, since the numeric distance between the predicted value and the true value carries no meaning. For example, in handwritten digit recognition, predicting 9 for a true label of 2 is no “more wrong” than predicting 3; the prediction is simply incorrect. Therefore, we use the cross-entropy loss function, which is defined as:


\[Cross-Entropy = -\sum_{n=1}^{N}Y_n\log(\hat{Y_n})\]
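Both losses above can be computed directly from their definitions. This small pure-Python sketch evaluates them for a toy batch; the helper names are ours, not from any framework:

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average of squared differences over the batch."""
    return sum((y - yhat) ** 2 for y, yhat in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy for a one-hot target y_true and predicted probabilities y_pred."""
    return -sum(y * math.log(yhat + eps) for y, yhat in zip(y_true, y_pred))

reg_loss = mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])      # (0.25 + 0 + 0.25) / 3
clf_loss = cross_entropy([0, 1, 0], [0.1, 0.8, 0.1])  # -log(0.8)
```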


Once a loss like the above is computed, we need methods to adjust the model’s parameters to reduce this loss or error during the training process. To do so, current frameworks use a gradient-based approach, which computes how much tuning the weights in a certain way changes the value of the loss function. Knowing this gradient, the model takes a step in the direction that reduces the loss. Many challenges are associated with this, primarily stemming from the fact that the optimization problem is generally non-convex, making it difficult to solve. More details about this will come in the AI Training section. Modern frameworks come equipped with efficient implementations of several optimization algorithms, many of which are variants of gradient descent with stochastic methods and adaptive learning rates.
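To make the gradient-based idea concrete, here is plain gradient descent minimizing a one-parameter quadratic loss. The loss, starting point, and learning rate are toy choices of ours for illustration:

```python
def grad_descent(grad, w0=5.0, lr=0.1, steps=100):
    """Repeatedly step against the gradient; a sketch of the core training loop."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # move in the direction that decreases the loss
    return w

# Loss L(w) = (w - 3)^2 has gradient dL/dw = 2*(w - 3); its minimum is at w = 3.
w_star = grad_descent(lambda w: 2 * (w - 3))
```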


Lastly, overly complex models tend to overfit, meaning they perform well on the training data but fail to generalize to new, unseen data (see Overfitting). To counteract this, regularization methods are employed to penalize model complexity and encourage the model to learn simpler patterns. Dropout, for example, randomly sets a fraction of input units to 0 at each update during training, which helps prevent overfitting.


However, there are cases where the problem is more complex than the model can represent, which may result in underfitting. Therefore, choosing the right model architecture is also a critical step in the training process. Further heuristics and techniques are discussed in the AI Training section.


Frameworks also efficiently implement gradient descent, Adagrad, Adadelta, and Adam. Adding regularization, such as dropout and L1/L2 penalties, helps prevent overfitting during training. Batch normalization accelerates training by normalizing inputs to layers.
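The two regularizers named here are easy to show in isolation. This pure-Python toy sketches inverted dropout on a batch of activations and an L2 penalty on weights; all names and values are illustrative only:

```python
import random

def dropout(activations, p=0.5, training=True, seed=0):
    """Inverted dropout: zero units with probability p, rescale survivors by 1/(1-p)."""
    if not training or p == 0:
        return list(activations)  # at inference time, dropout is a no-op
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]

def l2_penalty(weights, lam=1e-2):
    """L2 regularization term added to the loss: lam * sum(w^2)."""
    return lam * sum(w * w for w in weights)

out = dropout([1.0, 2.0, 3.0, 4.0], p=0.5)
penalty = l2_penalty([0.5, -0.5])  # 1e-2 * (0.25 + 0.25) = 0.005
```

The 1/(1-p) rescaling keeps the expected activation magnitude the same during training and inference, which is why frameworks use the “inverted” form.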


6.4.6 Model Training Support


A compilation step is required before training a defined neural network model. During this step, the neural network’s high-level architecture is transformed into an optimized, executable format. This process comprises several steps. The first step is to construct the computational graph, which represents all the mathematical operations and data flow within the model. We discussed this earlier.


During training, the focus is on executing the computational graph. Every parameter within the graph, such as weights and biases, is assigned an initial value. Depending on the chosen initialization method, this value might be random or based on a predefined logic.


The next critical step is memory allocation. Essential memory is reserved for the model’s operations on both CPUs and GPUs, ensuring efficient data processing. The model’s operations are then mapped to the available hardware resources, particularly GPUs or TPUs, to expedite computation. Once the compilation is finalized, the model is prepared for training.


The training process employs various tools to enhance efficiency. Batch processing is commonly used to maximize computational throughput. Techniques like vectorization enable operations on entire data arrays rather than proceeding element-wise, which bolsters speed. Optimizations such as kernel fusion (refer to the Optimizations chapter) amalgamate multiple operations into a single action, minimizing computational overhead. Operations can also be segmented into phases, facilitating the concurrent processing of different mini-batches at various stages.


Frameworks consistently checkpoint the state, preserving intermediate model versions during training. This ensures that progress is recovered if an interruption occurs, and training can be recommenced from the last checkpoint. Additionally, the system vigilantly monitors the model’s performance against a validation data set. Should the model begin to overfit (if its performance on the validation set declines), training is automatically halted, conserving computational resources and time.
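The checkpoint-plus-early-stopping logic described above can be sketched in a few lines. The per-epoch validation losses here stand in for a real training loop, which would also save model weights at each improvement; the function name is ours:

```python
def train_with_early_stopping(val_losses, patience=2):
    """Stop when validation loss fails to improve for `patience` consecutive epochs.

    Returns the epoch of the best checkpoint and its validation loss.
    """
    best, best_epoch, waited = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0  # checkpoint the model here
        else:
            waited += 1
            if waited >= patience:
                break  # early stop: validation performance has declined
    return best_epoch, best

stop_epoch, best_loss = train_with_early_stopping([1.0, 0.8, 0.7, 0.75, 0.9, 0.5])
```

Note that training halts at the fifth epoch even though a later loss would have been lower; patience trades off compute saved against the risk of stopping too early.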


ML frameworks incorporate a blend of model compilation, enhanced batch processing methods, and utilities such as checkpointing and early stopping. These resources manage the complex aspects of performance, enabling practitioners to zero in on model development and training. As a result, developers experience both speed and ease when utilizing neural networks’ capabilities.


6.4.7 Validation and Analysis


After training deep learning models, frameworks provide utilities to evaluate performance and gain insights into the models’ workings. These tools enable disciplined experimentation and debugging.


Evaluation Metrics


Frameworks include implementations of common evaluation metrics for validation:

  • Accuracy - Fraction of predictions that are correct overall. Widely used for classification.

  • Precision - Of the positive predictions, how many were actually positive. Useful for imbalanced datasets.

  • Recall - Of the actual positives, how many were predicted correctly. Measures completeness.

  • F1-score - Harmonic mean of precision and recall. Combines both metrics.

  • AUC-ROC - Area under the ROC curve. Used for classification threshold analysis.

  • MAP - Mean Average Precision. Evaluates ranked predictions in retrieval/detection.

  • Confusion Matrix - Shows the true positives, true negatives, false positives, and false negatives. Provides a more detailed view of classification performance.

These metrics quantify model performance on validation data for comparison.
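The first four metrics above follow directly from confusion-matrix counts; a minimal sketch (function name is ours):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 computed from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0  # guard against no positive predictions
    recall = tp / (tp + fn) if tp + fn else 0.0     # guard against no actual positives
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Imbalanced toy example: 10 actual positives among 100 examples.
acc, prec, rec, f1 = classification_metrics(tp=8, fp=2, fn=2, tn=88)
```

Notice how accuracy (0.96) looks much better than precision and recall (0.8), which is why the latter matter for imbalanced datasets.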


Visualization


Visualization tools provide insight into models:

  • Loss curves - Plot training and validation loss over time to spot overfitting.

  • Activation grids - Illustrate features learned by convolutional filters.

  • Projection - Reduce dimensionality for intuitive visualization.

  • Precision-recall curves - Assess classification tradeoffs.

Tools like TensorBoard for TensorFlow and TensorWatch for PyTorch enable real-time metrics and visualization during training.


6.4.8 Differentiable programming


Machine learning training methods such as backpropagation rely on the change in the loss function with respect to the change in weights (which essentially is the definition of derivatives). Thus, the ability to quickly and efficiently train large machine learning models relies on the computer’s ability to take derivatives. This makes differentiable programming one of the most important elements of a machine learning framework.


We can use four primary methods to have computers take derivatives. First, we can manually work out the derivatives by hand and input them into the computer; with many layers of neural networks, computing all the derivatives in the backpropagation steps by hand would quickly become a nightmare. Another method is symbolic differentiation using computer algebra systems such as Mathematica, which can introduce inefficiency, as a layer of abstraction is needed to take derivatives. Numerical differentiation, which approximates gradients using finite difference methods, suffers from high computational cost and approximation errors that depend sensitively on the chosen step size. This leads to automatic differentiation, which exploits the primitive operations computers use to represent functions to obtain an exact derivative. With automatic differentiation, the computational complexity of computing the gradient is proportional to that of computing the function itself. End users rarely need to deal with the intricacies of automatic differentiation today, but resources to learn more can be found widely, such as from here. Automatic differentiation and differentiable programming are now ubiquitous and are handled efficiently and automatically by modern machine learning frameworks.
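Automatic differentiation can be illustrated in a few lines with forward-mode dual numbers, where each value carries its derivative through primitive operations. This is a toy sketch for intuition, not how production frameworks (which mostly use reverse mode) are implemented:

```python
class Dual:
    """A value paired with its derivative; arithmetic propagates both exactly."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)  # sum rule
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1  # f'(x) = 6x + 2

x = Dual(4.0, 1.0)  # seed the derivative dx/dx = 1
y = f(x)            # y.val = f(4) = 57, y.dot = f'(4) = 26
```

Because every primitive (add, multiply) applies the correct local rule, the derivative falls out exactly, with cost proportional to evaluating the function itself.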


6.4.9 Hardware Acceleration


The trend to continuously train and deploy larger machine-learning models has made hardware acceleration support necessary for machine-learning platforms. Figure fig-hardware-accelerator shows the large number of companies offering hardware accelerators in different domains, such as “Very Low Power” and “Embedded” machine learning. Deep neural networks require many matrix multiplications, which favors hardware that can compute matrix operations quickly and in parallel. In this landscape, two hardware architectures, the GPU and TPU, have emerged as leading choices for training machine learning models.


The use of hardware accelerators began with AlexNet, which paved the way for future works to utilize GPUs as hardware accelerators for training computer vision models. GPUs, or Graphics Processing Units, excel in handling many computations at once, making them ideal for the matrix operations central to neural network training. Their architecture, designed for rendering graphics, is perfect for the mathematical operations required in machine learning. While they are very useful for machine learning tasks and have been implemented in many hardware platforms, GPUs are still general purpose in that they can be used for other applications.


On the other hand, Tensor Processing Units (TPUs) are hardware units designed specifically for neural networks. They focus on the multiply-and-accumulate (MAC) operation, and their hardware consists of a large matrix of elements that compute the MAC operation efficiently. This concept, called the systolic array architecture, was pioneered by Kung and Leiserson (1979) and has proven to be a useful structure for efficiently computing matrix products and other operations within neural networks (such as convolutions).
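To ground the MAC idea, here is a scalar sketch of matrix multiplication written as explicit multiply-accumulates, the primitive that a systolic array parallelizes in hardware. This is an illustrative software model of ours, not an accelerator implementation:

```python
def matmul_mac(A, B):
    """C = A @ B built from multiply-accumulate (MAC) operations.

    Each inner-loop step is one MAC: acc += a * b. A systolic array performs
    many of these in parallel as operands flow through a grid of MAC units.
    """
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for t in range(k):
                acc += A[i][t] * B[t][j]  # the MAC primitive
            C[i][j] = acc
    return C

C = matmul_mac([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19, 22], [43, 50]]
```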

Kung, Hsiang Tsung, and Charles E. Leiserson. 1979. “Systolic Arrays (for VLSI).” In Sparse Matrix Proceedings 1978, 1:256–82. Philadelphia, PA: Society for Industrial and Applied Mathematics.

While TPUs can drastically reduce training times, they also have disadvantages. For example, many operations within machine learning frameworks (primarily TensorFlow here, since the TPU integrates directly with it) are not supported by TPUs. TPUs also cannot support custom operations from the machine learning frameworks, and the network design must closely align with the hardware capabilities.


Today, NVIDIA GPUs dominate training, aided by software libraries like CUDA, cuDNN, and TensorRT. Frameworks also include optimizations to maximize performance on these hardware types, like pruning unimportant connections and fusing layers. Combining these techniques with hardware acceleration provides greater efficiency. For inference, hardware is increasingly moving towards optimized ASICs and SoCs. Google’s TPUs accelerate models in data centers. Apple, Qualcomm, and others now produce AI-focused mobile chips. The NVIDIA Jetson family targets autonomous robots.

Figure 6.5: Companies offering ML hardware accelerators. Credit: Gradient Flow.

6.5 Advanced Features


6.5.1 Distributed training


As machine learning models have grown larger over the years, it has become essential for large models to utilize multiple computing nodes in the training process. This process, called distributed learning, enables training at greater scale but also imposes challenges in implementation.


We can consider three different ways to spread the work of training machine learning models to multiple computing nodes. Input data partitioning refers to multiple processors running the same model on different input partitions. This is the easiest implementation and is available for many machine learning frameworks. The more challenging distribution of work comes with model parallelism, which refers to multiple computing nodes working on different parts of the model, and pipelined model parallelism, which refers to multiple computing nodes working on different layers of the model on the same input. The latter two mentioned here are active research areas.


ML frameworks that support distributed learning include TensorFlow (through its tf.distribute module), PyTorch (through its torch.nn.DataParallel and torch.nn.DistributedDataParallel modules), and MXNet (through its gluon API).
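The input-data-partitioning strategy reduces, at its core, to averaging gradients computed on different shards of the data. Here is a hedged single-process sketch of ours; real frameworks perform the averaging as an all-reduce across devices:

```python
def shard(data, n_workers):
    """Split data round-robin across workers (input data partitioning)."""
    return [data[i::n_workers] for i in range(n_workers)]

def local_gradient(w, examples):
    """Gradient of mean squared error for the toy model y = w * x on one shard."""
    return sum(2 * (w * x - y) * x for x, y in examples) / len(examples)

def data_parallel_step(w, data, n_workers=2, lr=0.05):
    grads = [local_gradient(w, s) for s in shard(data, n_workers)]
    avg_grad = sum(grads) / len(grads)  # stands in for the all-reduce step
    return w - lr * avg_grad

data = [(x, 3.0 * x) for x in [1.0, 2.0, 3.0, 4.0]]  # true parameter w = 3
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, data)
```

Because each worker sees different data, the averaged gradient approximates the full-batch gradient while the expensive per-example work is parallelized.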


6.5.2 Model Conversion


Machine learning models can be represented and used in various ways across frameworks and device types. For example, a model can be converted to be compatible with inference frameworks on mobile devices. The default format for TensorFlow models is checkpoint files containing weights and architectures, which are needed to retrain the models. However, models are typically converted to TensorFlow Lite format for mobile deployment. TensorFlow Lite uses a compact flat buffer representation with optimizations for fast inference on mobile hardware, discarding all the unnecessary baggage associated with training metadata, such as checkpoint file structures.



Model optimizations like quantization (see Optimizations chapter) can further optimize models for target architectures like mobile. This reduces the precision of weights and activations to uint8 or int8 for a smaller footprint and faster execution with supported hardware accelerators. For post-training quantization, TensorFlow’s converter handles analysis and conversion automatically.
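The affine scheme commonly used for uint8 quantization maps a float range onto [0, 255] using a scale and a zero point. This is a minimal sketch of the mapping itself (names ours); real converters such as TensorFlow’s determine the range from calibration data:

```python
def quantize(values, qmin=0, qmax=255):
    """Affine uint8 quantization: q = round(v / scale) + zero_point."""
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)  # range must include 0
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)  # the integer that represents float 0.0
    q = [min(qmax, max(qmin, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the uint8 codes."""
    return [scale * (qi - zero_point) for qi in q]

q, scale, zp = quantize([-1.0, 0.0, 2.0, 5.1])
approx = dequantize(q, scale, zp)  # close to the original floats, within ~scale/2
```

Each value is stored in one byte instead of four, and the worst-case reconstruction error is bounded by about half the scale.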


Frameworks like TensorFlow simplify deploying trained models to mobile and embedded IoT devices through easy conversion APIs for TFLite format and quantization. Ready-to-use conversion enables high-performance inference on mobile without a manual optimization burden. Besides TFLite, other common targets include TensorFlow.js for web deployment, TensorFlow Serving for cloud services, and TensorFlow Hub for transfer learning. TensorFlow’s conversion utilities handle these scenarios to streamline end-to-end workflows.


More information about model conversion in TensorFlow is linked here.


6.5.3 AutoML, No-Code/Low-Code ML


In many cases, machine learning can have a relatively high barrier to entry compared to other fields. To successfully train and deploy models, one needs a critical understanding of a variety of disciplines, from data science (data processing, data cleaning) to model structures (hyperparameter tuning, neural network architecture) to hardware (acceleration, parallel processing), and more, depending on the problem at hand. The complexity of these problems has led to the introduction of frameworks such as AutoML, which aims to make “machine learning available for non-machine learning experts” and to “automate research in machine learning.” Researchers in this area have constructed AutoWEKA, which aids in the complex process of hyperparameter selection, as well as Auto-sklearn and Auto-PyTorch, extensions of AutoWEKA into the popular scikit-learn and PyTorch libraries.


While these efforts to automate parts of machine learning tasks are underway, others have focused on making machine learning more accessible through no-code/low-code platforms, utilizing drag-and-drop interfaces that are easy to navigate. Companies such as Apple, Google, and Amazon have already created these easy-to-use platforms to allow users to construct machine learning models that integrate into their ecosystems.


These steps to remove barriers to entry continue to democratize machine learning, make it easier for beginners to access, and simplify workflow for experts.


6.5.4 Advanced Learning Methods


Transfer Learning


Transfer learning is the practice of using knowledge gained from a pre-trained model to train and improve the performance of a model for a different task. For example, datasets trained on ImageNet datasets such as MobileNet and ResNet can help classify other image datasets. To do so, one may freeze the pre-trained model, utilizing it as a feature extractor to train a much smaller model built on top of the feature extraction. One can also fine-tune the entire model to fit the new task.
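The freezing idea can be shown abstractly: only parameters marked trainable receive gradient updates, while the pre-trained feature extractor stays fixed. This is a toy sketch of ours; real frameworks express the same idea with flags like `requires_grad` or `trainable`:

```python
def sgd_update(params, grads, lr=0.1):
    """Apply gradient updates only to trainable parameters; frozen ones are untouched."""
    return {
        name: (value - lr * grads[name] if trainable else value)
        for name, (value, trainable) in params.items()
    }

params = {
    "backbone.w": (0.7, False),  # pre-trained feature extractor: frozen
    "head.w": (0.0, True),       # new task-specific head: trainable
}
grads = {"backbone.w": 5.0, "head.w": 2.0}
updated = sgd_update(params, grads)
# The backbone keeps its pre-trained value; only the head moves.
```

Fine-tuning the entire model corresponds to flipping the backbone’s flag to trainable, usually with a smaller learning rate.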


Transfer learning has challenges, such as the modified model’s inability to perform its original tasks after transfer learning. Papers such as “Learning without Forgetting” by Z. Li and Hoiem (2018) aim to address these challenges, and their methods have been implemented in modern machine learning platforms.

Li, Zhizhong, and Derek Hoiem. 2018. “Learning Without Forgetting.” IEEE Trans. Pattern Anal. Mach. Intell. 40 (12): 2935–47. https://doi.org/10.1109/tpami.2017.2773081.

Federated Learning


Consider the problem of labeling items in photos on personal devices. One approach is to move the image data from the devices to a central server, where a single model is trained using the collected data. However, this presents several challenges. First, with many devices, one needs massive network infrastructure to move and store data from these devices in a central location, which is often infeasible and very costly. Furthermore, moving personal data such as photos to central servers raises privacy concerns.


Federated learning by McMahan et al. (2017) is a form of distributed computing that resolves these issues by distributing the model to personal devices so that training happens on-device (Figure fig-federated-learning). Initially, a base global model is trained on a central server and distributed to all devices. Using this base model, the devices individually compute gradients and send them back to the central hub. Intuitively, this transfers model parameters instead of the data itself. This innovative approach allows the model to be trained with many different datasets (in our example, the sets of images on personal devices) without transferring a large amount of potentially sensitive data. However, federated learning also comes with a series of challenges.
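The server-side aggregation can be sketched as a weighted average of client models, as in McMahan et al.’s FedAvg. For brevity, each client’s “model” here is a single float parameter rather than a full weight tensor:

```python
def fed_avg(client_weights, client_sizes):
    """Server step of FedAvg: average client models weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Three devices fine-tune the global model locally, then send parameters back.
# Only parameters travel to the server; the raw data never leaves the devices.
global_w = fed_avg([0.9, 1.1, 1.4], client_sizes=[100, 100, 200])
```

Weighting by dataset size means clients with more local data pull the global model harder, which is one source of the unbalanced-data issues discussed below.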

McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017. “Communication-Efficient Learning of Deep Networks from Decentralized Data.” In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, edited by Aarti Singh and Xiaojin (Jerry) Zhu, 54:1273–82. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v54/mcmahan17a.html.

In many real-world situations, data collected from devices may not come with suitable labels. This issue is compounded by the fact that users, the primary data source, are often unreliable; even when data is labeled, its accuracy or relevance is not guaranteed. Furthermore, each user’s data is unique, resulting in significant variance in the data generated by different users. This non-IID nature of the data, coupled with unbalanced data production where some users generate more data than others, can adversely impact the performance of the global model. Researchers have worked to compensate for this, for example by adding a proximal term to balance the local and global models or by adding a frozen global hypersphere classifier.


Additional challenges are associated with federated learning. The number of mobile device owners can far exceed the average number of training samples on each device, leading to substantial communication overhead. This issue is particularly pronounced in the context of mobile networks, which are often used for such communication and can be unstable. This instability can result in delayed or failed transmission of model updates, thereby affecting the overall training process.


The heterogeneity of device resources is another hurdle. Devices participating in federated learning can have varying computational power and memory capacities. This diversity makes it challenging to design algorithms that are efficient across all devices. Privacy and security are also not guaranteed in federated learning: techniques such as gradient inversion attacks can extract information about the training data from the model parameters. Despite these challenges, the many potential benefits continue to make federated learning a popular research area. Open-source frameworks such as Flower have been developed to simplify implementing federated learning with various machine learning frameworks.


Figure fig-federated-learning illustrates an example of federated learning. Consider a model used for medical predictions by different hospitals. Given that medical data is extremely sensitive and must be kept private, it can’t be transferred to a centralized server for training. Instead, each hospital fine-tunes/trains the base model using its own private data, while communicating only non-sensitive information with the federated server, such as the learned parameters.

Figure 6.6: A centralized-server approach to federated learning. Credit: NVIDIA.

6.6 Framework Specialization


Thus far, we have talked about ML frameworks generally. However, typically, frameworks are optimized based on the target environment’s computational capabilities and application requirements, ranging from the cloud to the edge to tiny devices. Choosing the right framework is crucial based on the target environment for deployment. This section provides an overview of the major types of AI frameworks tailored for cloud, edge, and TinyML environments to help understand the similarities and differences between these ecosystems.


6.6.1 Cloud


Cloud-based AI frameworks assume access to ample computational power, memory, and storage resources in the cloud. They generally support both training and inference. Cloud-based AI frameworks are suited for applications where data can be sent to the cloud for processing, such as cloud-based AI services, large-scale data analytics, and web applications. Popular cloud AI frameworks include the ones we mentioned earlier, such as TensorFlow, PyTorch, MXNet, Keras, etc. These frameworks utilize GPUs, TPUs, distributed training, and AutoML to deliver scalable AI. Concepts like model serving, MLOps, and AIOps relate to the operationalization of AI in the cloud. Cloud AI powers services like Google Cloud AI and enables transfer learning using pre-trained models.


6.6.2 Edge


Edge AI frameworks are tailored to deploy AI models on IoT devices, smartphones, and edge servers. They are optimized for devices with moderate computational resources, balancing power and performance, and are ideal for applications requiring real-time or near-real-time processing, including robotics, autonomous vehicles, and smart devices. Key edge AI frameworks include TensorFlow Lite, PyTorch Mobile, Core ML, and others. They employ optimizations like model compression, quantization, and efficient neural network architectures. Hardware support includes CPUs, GPUs, NPUs, and accelerators like the Edge TPU. Edge AI enables use cases like mobile vision, speech recognition, and real-time anomaly detection.


6.6.3 Embedded


TinyML frameworks are specialized for deploying AI models on extremely resource-constrained devices, specifically microcontrollers and sensors within the IoT ecosystem. They are designed for devices with minimal memory and power budgets and target use cases such as predictive maintenance, gesture recognition, and environmental monitoring. Major TinyML frameworks include TensorFlow Lite Micro, uTensor, and ARM NN. They optimize complex models to fit within kilobytes of memory through techniques like quantization-aware training and reduced precision. TinyML allows intelligent sensing across battery-powered devices, enabling collaborative learning via federated learning. The choice of framework involves balancing model performance against the computational constraints of the target platform, whether cloud, edge, or TinyML. Table tbl-ml_frameworks compares the major AI frameworks across cloud, edge, and TinyML environments:

Table 6.2: Comparison of framework types for Cloud AI, Edge AI, and TinyML.

Framework Type | Examples | Key Technologies | Use Cases
--- | --- | --- | ---
Cloud AI | TensorFlow, PyTorch, MXNet, Keras | GPUs, TPUs, distributed training, AutoML, MLOps | Cloud services, web apps, big data analytics
Edge AI | TensorFlow Lite, PyTorch Mobile, Core ML | Model optimization, compression, quantization, efficient NN architectures | Mobile apps, robots, autonomous systems, real-time processing
TinyML | TensorFlow Lite Micro, uTensor, ARM NN | Quantization-aware training, reduced precision, neural architecture search | IoT sensors, wearables, predictive maintenance, gesture recognition

Key differences:

  • Cloud AI leverages massive computational power for complex models using GPUs/TPUs and distributed training.

  • Edge AI optimizes models to run locally on resource-constrained edge devices.

  • TinyML fits models into extremely limited memory and compute environments like microcontrollers.

6.7 Embedded AI Frameworks


6.7.1 Resource Constraints


Embedded systems face severe resource constraints that pose unique challenges when deploying machine learning models compared to traditional computing platforms. For example, microcontroller units (MCUs) commonly used in IoT devices often have:

  • RAM ranging from tens of kilobytes to a few megabytes. The popular ESP8266 MCU has around 80 KB of RAM available to developers, in contrast to 8 GB or more on typical laptops and desktops today.

  • Flash storage ranging from hundreds of kilobytes to a few megabytes. The Arduino Uno microcontroller provides just 32 KB of code storage, while standard computers today have disk storage on the order of terabytes.

  • Processing power ranging from just a few MHz to approximately 200 MHz. The ESP8266 operates at 80 MHz, several orders of magnitude slower than the multi-GHz, multi-core CPUs in servers and high-end laptops.

These tight constraints often make training machine learning models directly on microcontrollers infeasible. The limited RAM precludes handling large datasets for training. Energy usage for training would also quickly deplete battery-powered devices. Instead, models are trained on resource-rich systems and deployed on microcontrollers for optimized inference. But even inference poses challenges:

  1. Model Size: AI models are often too large to fit on embedded and IoT devices. This necessitates model compression techniques, such as quantization, pruning, and knowledge distillation. Additionally, as we will see, many of the frameworks used by developers for AI development carry large amounts of overhead and built-in libraries that embedded systems can’t support.

  2. Complexity of Tasks: With only tens of KBs to a few MBs of RAM, IoT devices and embedded systems are constrained in the complexity of tasks they can handle. Tasks that require large datasets or sophisticated algorithms (for example, LLMs) that would run smoothly on traditional computing platforms may be infeasible on embedded systems without compression or other optimization techniques, due to memory limitations.

  3. Data Storage and Processing: Embedded systems often process data in real time and may only store small amounts locally. Traditional computing systems, by contrast, can hold and process large datasets in memory, enabling faster data operations, analysis, and real-time updates.

  4. Security and Privacy: Limited memory also restricts the complexity of security algorithms and protocols, data encryption, reverse-engineering protections, and more that can be implemented on the device. This can make some IoT devices more vulnerable to attacks.

Consequently, specialized software optimizations and ML frameworks tailored for microcontrollers must work within these tight resource bounds. Clever optimization techniques like quantization, pruning, and knowledge distillation compress models to fit within limited memory (see Optimizations section). Learnings from neural architecture search help guide model designs.


Hardware improvements like dedicated ML accelerators on microcontrollers also help alleviate constraints. For instance, Qualcomm’s Hexagon DSP accelerates TensorFlow Lite models on Snapdragon mobile chips. Google’s Edge TPU packs ML performance into a tiny ASIC for edge devices. ARM Ethos-U55 offers efficient inference on Cortex-M class microcontrollers. These customized ML chips unlock advanced capabilities for resource-constrained applications.


Due to limited processing power, it’s almost always infeasible to train AI models on IoT or embedded systems. Instead, models are trained on powerful traditional computers (often with GPUs) and then deployed on the embedded device for inference. TinyML specifically deals with this, ensuring models are lightweight enough for real-time inference on these constrained devices.

+
+
+

6.7.2 Frameworks & Libraries

+

Embedded AI frameworks are software tools and libraries designed to enable AI and ML capabilities on embedded systems. These frameworks are essential for bringing AI to IoT devices, robotics, and other edge computing platforms, and they are designed to work where computational resources, memory, and power consumption are limited.

+
+
+

6.7.3 Challenges

+

While embedded systems present an enormous opportunity for deploying machine learning to enable intelligent capabilities at the edge, these resource-constrained environments pose significant challenges. Unlike typical cloud or desktop environments rich with computational resources, embedded devices introduce severe constraints around memory, processing power, energy efficiency, and specialized hardware. As a result, existing machine learning techniques and frameworks designed for server clusters with abundant resources do not directly translate to embedded systems. This section uncovers some of the challenges and opportunities for embedded systems and ML frameworks.

+
+

Fragmented Ecosystem

+

The lack of a unified ML framework led to a highly fragmented ecosystem. Engineers at companies like STMicroelectronics, NXP Semiconductors, and Renesas had to develop custom solutions tailored to their specific microcontroller and DSP architectures. These ad-hoc frameworks required extensive manual optimization for each low-level hardware platform. This made porting models extremely difficult, requiring redevelopment for new Arm, RISC-V, or proprietary architectures.

+
+
+

Disparate Hardware Needs

+

Without a shared framework, there was no standard way to assess hardware’s capabilities. Vendors like Intel, Qualcomm, and NVIDIA created integrated solutions, blending models and improving software and hardware. This made it hard to discern the sources of performance gains - whether new chip designs like Intel’s low-power x86 cores or software optimizations were responsible. A standard framework was needed so vendors could evaluate their hardware’s capabilities fairly and reproducibly.

+
+
+

Lack of Portability

+

Without standardized tools, adapting models trained in common frameworks like TensorFlow or PyTorch to run efficiently on microcontrollers was difficult. It required time-consuming manual translation of models to run on specialized DSPs from companies like CEVA or low-power Arm M-series cores. No turnkey tools existed to enable portable deployment across different architectures.

+
+
+

Incomplete Infrastructure

+

The infrastructure to support key model development workflows was incomplete. Support for compression techniques to fit large models within constrained memory budgets was limited. Tools for quantization to lower precision for faster inference were missing. Standardized APIs for integration into applications were incomplete. Essential functionality like on-device debugging, metrics, and performance profiling was absent. These gaps increased the cost and difficulty of embedded ML development.

+
+
+

No Standard Benchmark

+

Without unified benchmarks, there was no standard way to assess and compare the capabilities of different hardware platforms from vendors like NVIDIA, Arm, and Ambiq Micro. Existing evaluations relied on proprietary benchmarks tailored to showcase the strengths of particular chips. This made it impossible to measure hardware improvements objectively in a fair, neutral manner. The Benchmarking AI chapter discusses this topic in more detail.

+
+
+

Minimal Real-World Testing

+

Many benchmarks relied on synthetic data. Rigorously testing models on real-world embedded applications was difficult without standardized datasets and benchmarks, raising questions about how performance claims would translate to real-world usage. More extensive testing was needed to validate chips in actual use cases.

+

The lack of shared frameworks and infrastructure slowed TinyML adoption, hampering the integration of ML into embedded products. Recent standardized frameworks have begun addressing these issues through improved portability, performance profiling, and benchmarking support. However, ongoing innovation is still needed to enable seamless, cost-effective deployment of AI to edge devices.

+
+
+

Summary

+

The absence of standardized frameworks, benchmarks, and infrastructure for embedded ML has traditionally hampered adoption. Recent progress has been made in developing shared frameworks like TensorFlow Lite Micro and benchmark suites like MLPerf Tiny that aim to accelerate the proliferation of TinyML solutions. However, overcoming the fragmentation and difficulty of embedded deployment remains an ongoing process.

+
+
+
+
+

6.8 Examples

+

Machine learning deployment on microcontrollers and other embedded devices often requires specially optimized software libraries and frameworks to work within tight memory, compute, and power constraints. Several options exist for performing inference on such resource-limited hardware, each with its approach to optimizing model execution. This section will explore the key characteristics and design principles behind TFLite Micro, TinyEngine, and CMSIS-NN, providing insight into how each framework tackles the complex problem of high-accuracy yet efficient neural network execution on microcontrollers. It will also showcase different approaches for implementing efficient TinyML frameworks.

+

Table tbl-compare_frameworks summarizes the key differences and similarities between these three specialized machine-learning inference frameworks for embedded systems and microcontrollers.

+
+
+
+Table 6.3: Comparison of frameworks: TensorFlow Lite Micro, TinyEngine, and CMSIS-NN +
+
FrameworkTensorFlow Lite MicroTinyEngineCMSIS-NN
ApproachInterpreter-basedStatic compilationOptimized neural network kernels
Hardware FocusGeneral embedded devicesMicrocontrollersARM Cortex-M processors
Arithmetic SupportFloating pointFloating point, fixed pointFloating point, fixed point
Model SupportGeneral neural network modelsModels co-designed with TinyNASCommon neural network layer types
Code FootprintLarger due to inclusion of interpreter and opsSmall, includes only ops needed for modelLightweight by design
LatencyHigher due to interpretation overheadVery low due to compiled modelLow latency focus
Memory ManagementDynamically managed by interpreterModel-level optimizationTools for efficient allocation
Optimization ApproachSome code generation featuresSpecialized kernels, operator fusionArchitecture-specific assembly optimizations
Key BenefitsFlexibility, portability, ease of updating modelsMaximizes performance, optimized memory usageHardware acceleration, standardized API, portability
+
+
+
+

We will understand each of these in greater detail in the following sections.

+
+

6.8.1 Interpreter

+

TensorFlow Lite Micro (TFLM) is a machine learning inference framework designed for embedded devices with limited resources. It uses an interpreter to load and execute machine learning models, which provides flexibility and ease of updating models in the field (David et al. 2021).

+
+David, Robert, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, et al. 2021. “Tensorflow Lite Micro: Embedded Machine Learning for Tinyml Systems.” Proceedings of Machine Learning and Systems 3: 800–811. +

Traditional interpreters often have significant branching overhead, which can reduce performance. However, machine learning model interpretation benefits from the efficiency of long-running kernels, where each kernel runtime is relatively large and helps mitigate interpreter overhead.

+

An alternative to an interpreter-based inference engine is to generate native code from a model during export. This can improve performance, but it sacrifices portability and flexibility, as the generated code needs recompilation for each target platform and must be replaced entirely to modify a model.

+

TFLM balances the simplicity of code compilation and the flexibility of an interpreter-based approach by incorporating certain code-generation features. For example, the library can be constructed solely from source files, offering much of the compilation simplicity associated with code generation while retaining the benefits of an interpreter-based model execution framework.

+

An interpreter-based approach offers several benefits over code generation for machine learning inference on embedded devices:

+
    +
  • Flexibility: Models can be updated in the field without recompiling the entire application.

  • +
  • Portability: The interpreter can be used to execute models on different target platforms without porting the code.

  • +
  • Memory efficiency: The interpreter can share code across multiple models, reducing memory usage.

  • +
  • Ease of development: Interpreters are easier to develop and maintain than code generators.

  • +
+

TensorFlow Lite Micro is a powerful and flexible framework for machine learning inference on embedded devices. Its interpreter-based approach offers several benefits over code generation, including flexibility, portability, memory efficiency, and ease of development.
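The interpreter idea can be sketched in a few lines: the deployed binary contains only a table of generic kernels, while the model arrives as data, so a new model can be shipped to the field without recompiling the application. This is a hypothetical toy runtime for illustration, not TFLM's actual implementation:

```python
# Toy interpreter-style runtime: the "executable" is just the kernel table
# plus the dispatch loop; the model graph is plain data that can be swapped.

KERNELS = {
    "add":  lambda inputs: [a + b for a, b in zip(*inputs)],
    "mul":  lambda inputs: [a * b for a, b in zip(*inputs)],
    "relu": lambda inputs: [max(0.0, a) for a in inputs[0]],
}

def run_model(graph, tensors):
    """graph: list of (op_name, input_ids, output_id) records (the 'model file')."""
    for op, in_ids, out_id in graph:
        tensors[out_id] = KERNELS[op]([tensors[i] for i in in_ids])
    return tensors

# A tiny "model": y = relu(x * w + b), expressed purely as data.
graph = [("mul", ("x", "w"), "t0"),
         ("add", ("t0", "b"), "t1"),
         ("relu", ("t1",), "y")]
t = run_model(graph, {"x": [1.0, -2.0], "w": [0.5, 0.5], "b": [0.1, 0.1]})
print(t["y"])  # elementwise relu(x * w + b)
```

The dispatch indirection in the loop is the interpretation overhead the text refers to; because each real kernel (e.g., a convolution) runs for many cycles, that per-op overhead is amortized.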

+
+
+

6.8.2 Compiler-based

+

TinyEngine is an ML inference framework designed specifically for resource-constrained microcontrollers. It employs several optimizations to enable high-accuracy neural network execution within the tight constraints of memory, computing, and storage on microcontrollers (Lin et al. 2020).

+
+Lin, Ji, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, and Song Han. 2020. MCUNet: Tiny Deep Learning on IoT Devices.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/86c51678350f656dcc7f490a43946ee5-Abstract.html. +

Inference frameworks like TFLite Micro use interpreters to execute the neural network graph dynamically at runtime. This adds overhead in the form of memory to store metadata, interpretation latency, and missed optimization opportunities, although TFLite argues that the overhead is small. TinyEngine eliminates this overhead by employing a code generation approach. It analyzes the network graph during compilation and generates specialized code to execute just that model. This code is natively compiled into the application binary, avoiding runtime interpretation costs.

+

Conventional ML frameworks schedule memory per layer, trying to minimize usage for each layer separately. TinyEngine instead performs model-level scheduling, analyzing memory usage across all layers. It allocates a common buffer sized to the maximum memory needs across all layers. This buffer is then shared efficiently across layers to increase data reuse.
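A simplified sketch shows why model-level planning wins. The layer sizes below are made-up numbers and the planner is deliberately naive, not TinyEngine's actual algorithm, but it captures the idea of sizing one shared arena to the worst-case layer instead of allocating per layer:

```python
# Hypothetical (input_bytes, output_bytes) activation sizes per layer.
layers = [
    ("conv1", 9216, 4608),
    ("conv2", 4608, 4608),
    ("conv3", 4608, 1024),
    ("fc",    1024,   40),
]

def per_layer_total(layers):
    # Naive scheme: every layer gets its own dedicated input+output buffers.
    return sum(i + o for _, i, o in layers)

def shared_arena(layers):
    # Model-level scheme: one buffer reused by every layer, sized to the
    # peak simultaneous need (a layer's input and output are live together).
    return max(i + o for _, i, o in layers)

print(per_layer_total(layers), shared_arena(layers))  # shared arena is far smaller
```

Here the shared arena needs 13,824 bytes versus 29,736 for per-layer allocation; on a device with tens of KB of SRAM, that difference decides whether the model fits at all.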

+

TinyEngine also specializes in the kernels for each layer through techniques like tiling, unrolling, and fusing operators. For example, it will generate unrolled compute kernels with the number of loops needed for a 3x3 or 5x5 convolution. These specialized kernels extract maximum performance from the microcontroller hardware. It uses optimized depthwise convolutions to minimize memory allocations by computing each channel’s output in place over the input channel data. This technique exploits the channel-separable nature of depthwise convolutions to reduce peak memory size.
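Kernel specialization can be illustrated with a miniature code generator. This toy emits a fully unrolled multiply-accumulate for a fixed kernel size, analogous to specializing a 3x3 convolution tap at compile time; the generator and function names are invented for illustration:

```python
# Toy compile-time specialization: emit and "compile" an unrolled MAC kernel
# for a fixed size n, so the generated code has no loop overhead at runtime.

def make_unrolled_dot(n):
    terms = " + ".join(f"x[{i}] * w[{i}]" for i in range(n))
    src = f"def dot{n}(x, w):\n    return {terms}\n"
    namespace = {}
    exec(src, namespace)               # stand-in for native compilation
    return namespace[f"dot{n}"], src

dot9, src = make_unrolled_dot(9)       # one 3x3 convolution tap
x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
w = [1, 0, -1, 1, 0, -1, 1, 0, -1]     # a vertical-edge filter
print(dot9(x, w))  # 1 - 3 + 4 - 6 + 7 - 9 = -6
```

A real generator would emit C with SIMD intrinsics rather than Python source, but the principle is the same: because the kernel size is known at compile time, the loop and its bookkeeping disappear entirely.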

+

Like TFLite Micro, the compiled TinyEngine binary only includes ops needed for a specific model rather than all possible operations. This results in a very small binary footprint, keeping code size low for memory-constrained devices.

+

One difference between TFLite Micro and TinyEngine is that the latter is co-designed with “TinyNAS,” an architecture search method for microcontroller models similar to differentiable NAS. TinyEngine’s efficiency allows for exploring larger and more accurate models through NAS. It also provides feedback to TinyNAS on which models can fit within the hardware constraints.

+

Through various custom techniques, such as static compilation, model-based scheduling, specialized kernels, and co-design with NAS, TinyEngine enables high-accuracy deep learning inference within microcontrollers’ tight resource constraints.

+
+
+

6.8.3 Library

+

CMSIS-NN, standing for Cortex Microcontroller Software Interface Standard for Neural Networks, is a software library devised by ARM. It offers a standardized interface for deploying neural network inference on microcontrollers and embedded systems, focusing on optimization for ARM Cortex-M processors (Lai, Suda, and Chandra 2018).

+
+Lai, Liangzhen, Naveen Suda, and Vikas Chandra. 2018. “Cmsis-Nn: Efficient Neural Network Kernels for Arm Cortex-m Cpus.” ArXiv Preprint abs/1801.06601. https://arxiv.org/abs/1801.06601. +

Neural Network Kernels: CMSIS-NN has highly efficient kernels that handle fundamental neural network operations such as convolution, pooling, fully connected layers, and activation functions. It caters to a broad range of neural network models by supporting floating and fixed-point arithmetic. The latter is especially beneficial for resource-constrained devices as it curtails memory and computational requirements (Quantization).
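The fixed-point (q7) arithmetic that makes this efficient can be sketched as follows. In q7, an int8 value represents value/2^7, products are accumulated in a wide register, then shifted back down and saturated. This is an illustrative model of the convention, not CMSIS-NN's actual implementation:

```python
# Toy q7 multiply-accumulate: int8 operands, wide accumulator,
# shift-and-saturate back to int8, mimicking fixed-point kernel conventions.

def saturate_q7(value):
    return max(-128, min(127, value))

def mac_q7(xs, ws, frac_bits=7):
    acc = 0                          # wide (int32-like) accumulator
    for x, w in zip(xs, ws):
        acc += x * w                 # q7 * q7 product carries 2*frac_bits
    return saturate_q7(acc >> frac_bits)  # rescale to q7, clamp on overflow

# 0.5 * 0.5 in q7: 64 * 64 = 4096, and 4096 >> 7 = 32, which encodes 0.25.
print(mac_q7([64], [64]))  # -> 32
```

All of this runs in integer units only, which is why fixed-point support matters on Cortex-M parts that lack (or have slow) floating point hardware.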

+

Hardware Acceleration: CMSIS-NN harnesses the power of Single Instruction, Multiple Data (SIMD) instructions available on many Cortex-M processors. This allows for parallel processing of multiple data elements within a single instruction, thereby boosting computational efficiency. Certain Cortex-M processors feature Digital Signal Processing (DSP) extensions that CMSIS-NN can exploit for accelerated neural network execution. The library also incorporates assembly-level optimizations tailored to specific microcontroller architectures to enhance performance further.

+

Standardized API: CMSIS-NN offers a consistent and abstracted API that protects developers from the complexities of low-level hardware details. This makes the integration of neural network models into applications simpler. It may also encompass tools or utilities for converting popular neural network model formats into a format that is compatible with CMSIS-NN.

+

Memory Management: CMSIS-NN provides functions for efficient memory allocation and management, which is vital in embedded systems where memory resources are scarce. It ensures optimal memory usage during inference and, in some instances, allows in-place operations to decrease memory overhead.

+

Portability: CMSIS-NN is designed for portability across various Cortex-M processors. This enables developers to write code that can operate on different microcontrollers without significant modifications.

+

Low Latency: CMSIS-NN minimizes inference latency, making it an ideal choice for real-time applications where swift decision-making is paramount.

+

Energy Efficiency: The library is designed with a focus on energy efficiency, making it suitable for battery-powered and energy-constrained devices.

+
+
+
+

6.9 Choosing the Right Framework

+

Choosing the right machine learning framework for a given application requires carefully evaluating models, hardware, and software considerations. By analyzing these three aspects—models, hardware, and software—ML engineers can select the optimal framework and customize it as needed for efficient and performant on-device ML applications. The goal is to balance model complexity, hardware limitations, and software integration to design a tailored ML pipeline for embedded and edge devices.

+
+
+
+ +
+
+Figure 6.7: TensorFlow Framework Comparison - General. Credit: TensorFlow. +
+
+
+
+

6.9.1 Model

+

TensorFlow supports significantly more ops than TensorFlow Lite and TensorFlow Lite Micro, as it is typically used for research or cloud deployment, which requires a larger number of operators and more flexibility (see Figure fig-tf-comparison). TensorFlow Lite supports select ops for on-device training, whereas TensorFlow Micro does not. TensorFlow Lite also supports dynamic shapes and quantization-aware training, but TensorFlow Micro does not. TensorFlow Lite and TensorFlow Micro offer native quantization tooling and support, where quantization refers to transforming an ML program into an approximated representation with available lower precision operations.

+
+
+

6.9.2 Software

+
+
+
+ +
+
+Figure 6.8: TensorFlow Framework Comparison - Software. Credit: TensorFlow. +
+
+
+

Unlike TensorFlow and TensorFlow Lite, TensorFlow Lite Micro forgoes operating system support in order to reduce memory overhead, make startup times faster, and consume less energy (see Figure fig-tf-sw-comparison). TensorFlow Lite Micro can be used in conjunction with real-time operating systems (RTOS) like FreeRTOS, Zephyr, and Mbed OS. TensorFlow Lite and TensorFlow Lite Micro support model memory mapping, allowing models to be directly accessed from flash storage rather than loaded into RAM, whereas TensorFlow does not. TensorFlow and TensorFlow Lite support accelerator delegation to schedule code to different accelerators, whereas TensorFlow Lite Micro does not, as embedded systems tend to have a limited array of specialized accelerators.

+
+
+

6.9.3 Hardware

+
+
+
+ +
+
+Figure 6.9: TensorFlow Framework Comparison - Hardware. Credit: TensorFlow. +
+
+
+

TensorFlow Lite and TensorFlow Lite Micro have significantly smaller base binary sizes and memory footprints than TensorFlow (see Figure fig-tf-hw-comparison). For example, a typical TensorFlow Lite Micro binary is less than 200KB, whereas TensorFlow is much larger. This is due to the resource-constrained environments of embedded systems. TensorFlow supports x86, TPUs, and GPUs like NVIDIA, AMD, and Intel. TensorFlow Lite supports Arm Cortex-A and x86 processors commonly used on mobile phones and tablets. The latter is stripped of all the unnecessary training logic for on-device deployment. TensorFlow Lite Micro provides support for microcontroller-focused Arm Cortex M cores like M0, M3, M4, and M7, as well as DSPs like Hexagon and SHARC and MCUs like STM32, NXP Kinetis, Microchip AVR.

+

Selecting the appropriate AI framework is essential to ensure that embedded systems can efficiently execute AI models. Key factors to consider when choosing a machine learning framework are ease of use, community support, performance, scalability, integration with data engineering tools, and integration with model optimization tools. By understanding these factors, you can make informed decisions and maximize the potential of your machine-learning initiatives.

+
+
+

6.9.4 Other Factors

+

Several other key factors beyond models, hardware, and software should be considered when evaluating AI frameworks for embedded systems.

+
+

Performance

+

Performance is critical in embedded systems where computational resources are limited. Evaluate the framework’s ability to optimize model inference for embedded hardware. Model quantization and hardware acceleration support are crucial in achieving efficient inference.

+
+
+

Scalability

+

Scalability is essential when considering the potential growth of an embedded AI project. The framework should support the deployment of models on various embedded devices, from microcontrollers to more powerful processors. It should also seamlessly handle both small-scale and large-scale deployments.

+
+
+

Integration with Data Engineering Tools

+

Data engineering tools are essential for data preprocessing and pipeline management. An ideal AI framework for embedded systems should seamlessly integrate with these tools, allowing for efficient data ingestion, transformation, and model training.

+
+
+

Integration with Model Optimization Tools

+

Model optimization ensures that AI models are well-suited for embedded deployment. Evaluate whether the framework integrates with model optimization tools like TensorFlow Lite Converter or ONNX Runtime to facilitate model quantization and size reduction.

+
+
+

Ease of Use

+

The ease of use of an AI framework significantly impacts development efficiency. A framework with a user-friendly interface and clear documentation reduces developers’ learning curve. Consideration should be given to whether the framework supports high-level APIs, allowing developers to focus on model design rather than low-level implementation details. This factor is incredibly important for embedded systems, which have fewer features than typical developers might be accustomed to.

+
+
+

Community Support

+

Community support is another essential factor. Frameworks with active and engaged communities often have well-maintained codebases, receive regular updates, and provide valuable forums for problem-solving. Community support also feeds into ease of use because it ensures that developers have access to a wealth of resources, including tutorials and example projects, and it provides some assurance that the framework will continue to be supported in future updates. Only a few frameworks cater to TinyML needs; TensorFlow Lite Micro is the most popular and has the most community support.

+
+
+
+ +
+

6.11 Conclusion

+

In summary, selecting the optimal framework requires thoroughly evaluating options against criteria like usability, community support, performance, hardware compatibility, and model conversion abilities. There is no universal best solution, as the right framework depends on the specific constraints and use case.

+

TensorFlow Lite Micro currently provides a strong starting point for extremely resource-constrained microcontroller-based platforms. Its comprehensive optimization tooling, such as quantization mapping and kernel optimizations, enables high performance on devices like Arm Cortex-M and RISC-V processors. The active developer community ensures accessible technical support. Seamless integration with TensorFlow for training and converting models makes the workflow cohesive.

+

For platforms with more capable CPUs like Cortex-A, TensorFlow Lite expands possibilities. It provides greater flexibility for custom and advanced models beyond the core operators in TFLite Micro. However, this comes at the cost of a larger memory footprint. These frameworks are ideal for automotive systems, drones, and more powerful edge devices that can benefit from greater model sophistication.

+

Frameworks specifically built for specialized hardware like CMSIS-NN on Cortex-M processors can further maximize performance but sacrifice portability. Integrated frameworks from processor vendors tailor the stack to their architectures, unlocking the full potential of their chips but locking you into their ecosystem.

+

Ultimately, choosing the right framework involves finding the best match between its capabilities and the requirements of the target platform. This requires balancing tradeoffs between performance needs, hardware constraints, model complexity, and other factors. Thoroughly assessing intended models and use cases and evaluating options against key metrics will guide developers in picking the ideal framework for their embedded ML application.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

19  Generative AI

+
+ + + +
+ + + + +
+ + + +
+ + +

Coming soon!

+
+
+
+ +
+
+Learning Objectives +
+
+
+

Coming soon.

+
+
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

10  AI Acceleration

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Create an intricate and colorful representation of a System on Chip (SoC) design in a rectangular format. Showcase a variety of specialized machine learning accelerators and chiplets, all integrated into the processor. Provide a detailed view inside the chip, highlighting the rapid movement of electrons. Each accelerator and chiplet should be designed to interact with neural network neurons, layers, and activations, emphasizing their processing speed. Depict the neural networks as a network of interconnected nodes, with vibrant data streams flowing between the accelerator pieces, showcasing the enhanced computation speed.
+
+
+

Machine learning has emerged as a transformative technology across many industries. However, deploying ML capabilities in real-world edge devices faces challenges due to limited computing resources. Specialized hardware acceleration is essential to enable high-performance machine learning under these constraints. Hardware accelerators optimize compute-intensive operations like inference using custom silicon optimized for matrix multiplications. This provides dramatic speedups over general-purpose CPUs, unlocking real-time execution of advanced models on size, weight, and power-constrained devices.

+

This chapter provides essential background on hardware acceleration techniques for embedded machine learning and their tradeoffs. The goal is to equip readers to make informed hardware selections and software optimizations to develop performant on-device ML capabilities.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand why hardware acceleration is needed for AI workloads

  • +
  • Survey key accelerator options like GPUs, TPUs, FPGAs, and ASICs and their tradeoffs

  • +
  • Learn about programming models, frameworks, and compilers for AI accelerators

  • +
  • Appreciate the importance of benchmarking and metrics for hardware evaluation

  • +
  • Recognize the role of hardware-software co-design in building efficient systems

  • +
  • Gain exposure to cutting-edge research directions like neuromorphic and quantum computing

  • +
  • Understand how ML is beginning to augment and enhance hardware design

  • +
+
+
+
+

10.1 Introduction

+

Machine learning has emerged as a transformative technology across many industries, enabling systems to learn and improve from data. There is a growing demand for embedded ML solutions to deploy machine learning capabilities in real-world environments - where models are built into edge devices like smartphones, home appliances, and autonomous vehicles. However, these edge devices have limited computing resources compared to data center servers.

+

Specialized hardware acceleration enables high-performance machine learning on resource-constrained edge devices. Hardware acceleration refers to using custom silicon chips and architectures to offload compute-intensive ML operations from the main processor. In neural networks, the most intensive computations are the matrix multiplications during inference. Hardware accelerators can optimize these matrix operations, providing 10-100x speedups over general-purpose CPUs. This acceleration unlocks the ability to run advanced neural network models on devices with size, weight, and power constraints in real-time.
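A back-of-envelope calculation makes the payoff concrete. The sketch below counts the multiply-accumulates (MACs) in one dense layer and compares a scalar core against a hypothetical accelerator that retires many MACs per cycle; all the numbers (layer size, clock, MACs/cycle) are illustrative assumptions:

```python
# Illustrative speedup estimate: MAC count of a dense layer, and latency
# on a scalar 80 MHz core versus a hypothetical 32-MAC/cycle accelerator.

def dense_macs(inputs, outputs):
    return inputs * outputs              # one MAC per weight

macs = dense_macs(256, 128)              # small hidden layer: 32,768 MACs

def latency_us(macs, macs_per_cycle, clock_hz):
    cycles = macs / macs_per_cycle
    return cycles / clock_hz * 1e6

cpu   = latency_us(macs, macs_per_cycle=1,  clock_hz=80e6)   # scalar core
accel = latency_us(macs, macs_per_cycle=32, clock_hz=80e6)   # parallel MAC array
print(round(cpu, 1), round(accel, 1), round(cpu / accel))    # ~32x speedup
```

Real speedups depend on memory bandwidth, quantization, and kernel efficiency, but the arithmetic shows why widening the MAC datapath, rather than raising the clock, is the lever accelerators pull.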

+

This chapter overviews hardware acceleration techniques for embedded machine learning and their design tradeoffs. Its goal is to equip readers with an essential background in embedded ML acceleration. This will enable informed hardware selection and software optimization to develop high-performance machine learning capabilities on edge devices.

+
+
+

10.2 Background and Basics

+
+

10.2.1 Historical Background

+

The origins of hardware acceleration date back to the 1960s, with the advent of floating point math co-processors to offload calculations from the main CPU. One early example was the Intel 8087 chip released in 1980 to accelerate floating point operations for the 8086 processor. This established the practice of using specialized processors to handle math-intensive workloads efficiently.

+

In the 1990s, the first graphics processing units (GPUs) emerged to rapidly process graphics pipelines for rendering and gaming. Nvidia’s GeForce 256 in 1999 was one of the earliest programmable GPUs capable of running custom software algorithms. GPUs exemplify how domain-specific fixed-function accelerators can evolve into parallel programmable accelerators.

+

In the 2000s, GPUs were applied to general-purpose computing under GPGPU. Their high memory bandwidth and computational throughput made them well-suited for math-intensive workloads. This included breakthroughs in using GPUs to accelerate training of deep learning models such as AlexNet in 2012.

+

In recent years, Google’s Tensor Processing Units (TPUs) represent customized ASICs specifically architected for matrix multiplication in deep learning. During inference, their optimized tensor cores achieve higher TeraOPS/watt than CPUs or GPUs. Ongoing innovation includes model compression techniques like pruning and quantization to fit larger neural networks on edge devices.

+

This evolution demonstrates how hardware acceleration has focused on solving compute-intensive bottlenecks, from floating point math to graphics to matrix multiplication for ML. Understanding this history provides a crucial context for specialized AI accelerators today.

+
+
+

10.2.2 The Need for Acceleration

+

The evolution of hardware acceleration is closely tied to the broader history of computing. In the early decades, chip design was governed by Moore’s Law and Dennard Scaling: the number of transistors on an integrated circuit doubled approximately every two years, and as transistors became smaller, their performance (speed) increased while power density (power per unit area) remained constant. These two laws held through the single-core era. Figure fig-moore-dennard shows the trends of different microprocessor metrics. As the figure denotes, Dennard Scaling fails around the mid-2000s; notice how the clock speed (frequency) remains almost constant even as the number of transistors keeps increasing.


However, as Patterson and Hennessy (2016) describe, technological constraints eventually forced a transition to the multicore era, with chips containing multiple processing cores to deliver performance gains. Power limitations prevented further scaling, which led to "dark silicon," where not all chip areas could be simultaneously active (Xiu 2019).

Patterson, David A., and John L. Hennessy. 2016. Computer Organization and Design ARM Edition: The Hardware Software Interface. Morgan Kaufmann.

Xiu, Liming. 2019. "Time Moore: Exploiting Moore's Law from the Perspective of Time." IEEE Solid-State Circuits Mag. 11 (1): 39–55. https://doi.org/10.1109/mssc.2018.2882285.

The concept of dark silicon emerged as a consequence of these constraints. “Dark silicon” refers to portions of the chip that cannot be powered simultaneously due to thermal and power limitations. Essentially, as the density of transistors increased, the proportion of the chip that could be actively used without overheating or exceeding power budgets shrank.


This phenomenon meant that while chips had more transistors, not all could be operational simultaneously, limiting potential performance gains. This power crisis necessitated a shift to the accelerator era, with specialized hardware units tailored for specific tasks to maximize efficiency. The explosion in AI workloads further drove demand for customized accelerators. Enabling factors included new programming languages, software tools, and manufacturing advances.

Figure 10.1: Microprocessor trends. Credit: Karl Rupp.

Fundamentally, hardware accelerators are evaluated on performance, power, and silicon area (PPA). The nature of the target application, whether memory-bound or compute-bound, heavily influences the design. For example, memory-bound workloads demand high bandwidth and low latency access, while compute-bound applications require maximal computational throughput.


10.2.3 General Principles


The design of specialized hardware accelerators involves navigating complex tradeoffs between performance, power efficiency, silicon area, and workload-specific optimizations. This section outlines core considerations and methodologies for achieving an optimal balance based on application requirements and hardware constraints.


Performance Within Power Budgets


Performance refers to the throughput of computational work per unit of time, commonly measured in floating point operations per second (FLOPS) or frames per second (FPS). Higher performance enables completing more work, but power consumption rises with activity.


Hardware accelerators aim to maximize performance within set power budgets. This requires careful balancing of parallelism, the chip’s clock frequency, the operating voltage, workload optimization, and other techniques to maximize operations per watt.

  • Performance = Throughput * Efficiency
  • Throughput ~= Parallelism * Clock Frequency
  • Efficiency = Operations / Watt

For example, GPUs achieve high throughput via massively parallel architectures. However, their efficiency is lower than that of customized application-specific integrated circuits (ASICs) like Google’s TPU, which optimize for a specific workload.
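To make these relationships concrete, the sketch below plugs numbers into the throughput and efficiency formulas above. All figures are invented for illustration, not vendor specifications.

```python
# Hypothetical accelerator comparison using the formulas above.
# All figures are illustrative, not vendor specs.

def throughput_gops(parallel_units: int, clock_ghz: float,
                    ops_per_cycle: int = 1) -> float:
    """Throughput ~= Parallelism * Clock Frequency (in GOPS)."""
    return parallel_units * clock_ghz * ops_per_cycle

def efficiency_gops_per_watt(throughput: float, power_watts: float) -> float:
    """Efficiency = Operations / Watt."""
    return throughput / power_watts

# A GPU-like design: many generic units, higher power budget.
gpu_tput = throughput_gops(parallel_units=5000, clock_ghz=1.5)
gpu_eff = efficiency_gops_per_watt(gpu_tput, power_watts=300)

# An ASIC-like design: fewer but specialized units, far lower power.
asic_tput = throughput_gops(parallel_units=2000, clock_ghz=1.0, ops_per_cycle=4)
asic_eff = efficiency_gops_per_watt(asic_tput, power_watts=40)

print(f"GPU-like:  {gpu_tput:.0f} GOPS, {gpu_eff:.1f} GOPS/W")
print(f"ASIC-like: {asic_tput:.0f} GOPS, {asic_eff:.1f} GOPS/W")
```

Even when the two designs deliver similar raw throughput, the specialized design can deliver several times the operations per watt, which is exactly the tradeoff the TPU comparison above describes.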


Managing Silicon Area and Costs


Chip area directly impacts manufacturing cost. Larger die sizes require more materials, suffer lower yields, and have higher defect rates. Multi-die packages help scale designs but add packaging complexity. Silicon area depends on:

  • Computational resources - e.g., number of cores, memory, caches
  • Manufacturing process node - smaller transistors enable higher density
  • Programming model - programmable accelerators require extra area for flexibility

Accelerator design involves squeezing maximum performance within area constraints. Techniques like pruning and compression help fit larger models on the chip.
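As a rough illustration of how pruning frees on-chip capacity, the sketch below implements simple magnitude pruning; the sparsity level and array values are arbitrary examples, not a production recipe.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    A pruned (sparse) model needs less on-chip storage, which is one
    way designers fit larger networks within a fixed silicon area.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude (ties may prune extras).
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

weights = np.array([0.1, -0.5, 0.05, 2.0], dtype=np.float32)
sparse = magnitude_prune(weights, sparsity=0.5)  # half the weights zeroed
```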


Workload-Specific Optimizations


The target workload dictates optimal accelerator architectures. Some of the key considerations include:

  • Memory vs. compute boundedness: Memory-bound workloads require more memory bandwidth, while compute-bound applications need arithmetic throughput.
  • Data locality: Data movement should be minimized for efficiency. Near-compute memory helps.
  • Bit-level operations: Low-precision datatypes like INT8/INT4 optimize compute density.
  • Data parallelism: Multiple replicated compute units allow parallel execution.
  • Pipelining: Overlapped execution of operations increases throughput.

Understanding workload characteristics enables customized acceleration. For example, convolutional neural networks use sliding window operations optimally mapped to spatial arrays of processing elements.
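One way to see this mapping is the classic im2col transformation, sketched below: unrolling sliding windows turns a convolution into a single matrix multiplication, the dense operation that spatial arrays of processing elements execute efficiently. This is a minimal NumPy sketch, not how any particular accelerator implements it.

```python
import numpy as np

def im2col(image: np.ndarray, k: int) -> np.ndarray:
    """Unroll every k x k sliding window of the image into a row.

    After unrolling, the whole convolution becomes one matrix-vector
    product over the window rows.
    """
    h, w = image.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(image[i:i + k, j:j + k].ravel())
    return np.stack(rows)

image = np.arange(16, dtype=np.float32).reshape(4, 4)
kernel = np.ones((3, 3), dtype=np.float32)
out = im2col(image, 3) @ kernel.ravel()  # one output per window position
```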


By navigating these architectural tradeoffs, hardware accelerators can deliver massive performance gains and enable emerging applications in AI, graphics, scientific computing, and other domains.


Sustainable Hardware Design


In recent years, AI sustainability has become a pressing concern driven by two key factors - the exploding scale of AI workloads and their associated energy consumption.


First, the size of AI models and datasets has grown rapidly. For example, based on OpenAI's AI computing trends, the amount of computing used to train state-of-the-art models doubles every 3.4 months. This exponential growth requires massive computational resources in data centers.


Second, the energy usage of AI training and inference presents sustainability challenges. Data centers running AI applications consume substantial energy, contributing to high carbon emissions. It’s estimated that training a large AI model can have a carbon footprint of 626,000 pounds of CO2 equivalent, almost 5 times the lifetime emissions of an average car.


As a result, AI research and practice must prioritize energy efficiency and carbon impact alongside accuracy. There is an increasing focus on model efficiency, data center design, hardware optimization, and other solutions to improve sustainability. Striking a balance between AI progress and environmental responsibility has emerged as a key consideration and an area of active research across the field.


The scale of AI systems is expected to keep growing. Developing sustainable AI is crucial for managing the environmental footprint and enabling widespread beneficial deployment of this transformative technology.


We will discuss sustainable AI in more detail in a later chapter.


10.3 Accelerator Types


Hardware accelerators can take on many forms. They can exist as a widget (like the Neural Engine in the Apple M1 chip) or as entire chips specially designed to perform certain tasks very well. This section will examine processors for machine learning workloads along the spectrum from highly specialized ASICs to more general-purpose CPUs. We first focus on custom hardware purpose-built for AI to understand the most extreme optimizations possible when design constraints are removed. This establishes a ceiling for performance and efficiency.


We then progressively consider more programmable and adaptable architectures, discussing GPUs and FPGAs. These make tradeoffs in customization to maintain flexibility. Finally, we cover general-purpose CPUs that sacrifice optimizations for a particular workload in exchange for versatile programmability across applications.


By structuring the analysis along this spectrum, we aim to illustrate the fundamental tradeoffs between utilization, efficiency, programmability, and flexibility in accelerator design. The optimal balance point depends on the constraints and requirements of the target application. This spectrum perspective provides a framework for reasoning about hardware choices for machine learning and the capabilities required at each level of specialization.


Figure fig-design-tradeoffs illustrates the complex interplay between flexibility, performance, functional diversity, and area in architecture design. Notice how the ASIC is in the bottom-right corner, with minimal area, flexibility, and power consumption but maximal performance, due to its highly specialized, application-specific nature. A key tradeoff is functional diversity vs. performance: general-purpose architectures can serve diverse applications, but their performance on any one application is degraded compared to more customized architectures.


The progression begins with the most specialized option, ASICs purpose-built for AI, to ground our understanding in the maximum possible optimizations before expanding to more generalizable architectures. This structured approach aims to elucidate the accelerator design space.

Figure 10.2: Design tradeoffs. Credit: El-Rayis (2014).

El-Rayis, A. O. 2014. "Reconfigurable Architectures for the Next Generation of Mobile Device Telecommunications Systems." https://www.researchgate.net/publication/292608967.

10.3.1 Application-Specific Integrated Circuits (ASICs)


An Application-Specific Integrated Circuit (ASIC) is a type of integrated circuit (IC) that is custom-designed for a specific application or workload rather than for general-purpose use. Unlike CPUs and GPUs, ASICs do not support multiple applications or workloads. Rather, they are optimized to perform a single task extremely efficiently. The Google TPU is an example of an ASIC.


ASICs achieve this efficiency by tailoring every aspect of the chip design - the underlying logic gates, electronic components, architecture, memory, I/O, and manufacturing process - specifically for the target application. This level of customization allows removing any logic or functionality needed only for general computation. The result is an IC that maximizes performance and power efficiency on the desired workload. The efficiency gains from application-specific hardware are so substantial that even traditionally software-centric firms dedicate enormous engineering resources to designing customized ASICs.


The rise of more complex machine learning algorithms has made the performance advantages enabled by tailored hardware acceleration a key competitive differentiator, even for companies traditionally concentrated on software engineering. ASICs have become a high-priority investment for major cloud providers aiming to offer faster AI computation.


Advantages


Due to their customized nature, ASICs provide significant benefits over general-purpose processors like CPUs and GPUs. The key advantages include the following.

Maximized Performance and Efficiency

The most fundamental advantage of ASICs is maximizing performance and power efficiency by customizing the hardware architecture specifically for the target application. Every transistor and design aspect is optimized for the desired workload - no unnecessary logic or overhead is needed to support generic computation.


For example, Google’s Tensor Processing Units (TPUs) contain architectures tailored exactly for the matrix multiplication operations used in neural networks. To design the TPU ASICs, Google’s engineering teams need to define the chip specifications clearly, write the architecture description using Hardware Description Languages like Verilog, synthesize the design to map it to hardware components, and carefully place-and-route transistors and wires based on the fabrication process design rules. This complex design process, known as very-large-scale integration (VLSI), allows them to build an optimized IC for machine learning workloads.


As a result, TPU ASICs achieve over an order of magnitude higher efficiency in operations per watt than general-purpose GPUs on ML workloads by maximizing performance and minimizing power consumption through a full-stack custom hardware design.

Specialized On-Chip Memory

ASICs incorporate on-chip SRAM and caches specifically optimized to feed data to the computational units. For example, Apple’s M1 system-on-a-chip contains special low-latency SRAM to accelerate the performance of its Neural Engine machine learning hardware. Large local memory with high bandwidth enables data to be kept close to the processing elements. This provides tremendous speed advantages compared to off-chip DRAM access, which can be up to 100x slower.


Data locality and optimizing memory hierarchy are crucial for high throughput and low power. Below is a table, “Numbers Everyone Should Know,” from Jeff Dean.

| Operation | Latency | Notes |
|---|---|---|
| L1 cache reference | 0.5 ns | |
| Branch mispredict | 5 ns | |
| L2 cache reference | 7 ns | |
| Mutex lock/unlock | 25 ns | |
| Main memory reference | 100 ns | |
| Compress 1K bytes with Zippy | 3,000 ns | 3 us |
| Send 1 KB over 1 Gbps network | 10,000 ns | 10 us |
| Read 4 KB randomly from SSD | 150,000 ns | 150 us |
| Read 1 MB sequentially from memory | 250,000 ns | 250 us |
| Round trip within same datacenter | 500,000 ns | 0.5 ms |
| Read 1 MB sequentially from SSD | 1,000,000 ns | 1 ms |
| Disk seek | 10,000,000 ns | 10 ms |
| Read 1 MB sequentially from disk | 20,000,000 ns | 20 ms |
| Send packet CA->Netherlands->CA | 150,000,000 ns | 150 ms |
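Using the L1 and main memory latencies from the table, the back-of-the-envelope sketch below estimates average memory access time (AMAT) for a simple two-level hierarchy, showing why keeping data close to the compute units matters so much.

```python
# Latencies taken from the table above (in nanoseconds).
L1_NS = 0.5      # L1 cache reference
DRAM_NS = 100.0  # main memory reference

def amat_ns(l1_hit_rate: float) -> float:
    """Average memory access time for a simple two-level hierarchy."""
    return l1_hit_rate * L1_NS + (1.0 - l1_hit_rate) * DRAM_NS

for hit_rate in (0.99, 0.90, 0.50):
    print(f"L1 hit rate {hit_rate:.0%}: average access {amat_ns(hit_rate):.2f} ns")
```

Even a 1% miss rate triples the average access time versus a pure L1 hit, which is why ASICs invest so heavily in large, close on-chip SRAM.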
Custom Datatypes and Operations

Unlike general-purpose processors, ASICs can be designed to natively support custom datatypes like INT4 or bfloat16, which are widely used in ML models. For instance, Nvidia’s Ampere GPU architecture has dedicated bfloat16 Tensor Cores to accelerate AI workloads. Low-precision datatypes enable higher arithmetic density and performance. ASICs can also directly incorporate non-standard operations common in ML algorithms as primitive operations - for example, natively supporting activation functions like ReLU makes execution more efficient. Please refer to the Efficient Numeric Representations chapter for additional details.
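To illustrate how simple such a custom datatype can be in hardware terms, the sketch below converts float32 values to bfloat16 by keeping only the upper 16 bits of the bit pattern. This sketch truncates for clarity; real hardware typically rounds.

```python
import numpy as np

def float32_to_bfloat16_bits(x: float) -> int:
    """bfloat16 keeps float32's sign bit and 8-bit exponent but only
    7 mantissa bits -- i.e. the upper 16 bits of the float32 pattern."""
    bits = np.float32(x).view(np.uint32)
    return int(bits) >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Re-expand by padding the low 16 bits with zeros."""
    return float(np.uint32(b << 16).view(np.float32))

b = float32_to_bfloat16_bits(1.0)        # 1.0 survives exactly
roundtrip = bfloat16_bits_to_float32(b)  # powers of two are preserved
```

Because bfloat16 keeps the full float32 exponent range, it tolerates the wide dynamic range of gradients while halving storage and bandwidth, which is why accelerators adopt it.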

High Parallelism

ASIC architectures can leverage higher parallelism tuned for the target workload versus general-purpose CPUs or GPUs. More computational units tailored for the application mean more operations execute simultaneously. Highly parallel ASICs achieve tremendous throughput for data parallel workloads like neural network inference.

Advanced Process Nodes

Cutting-edge manufacturing processes allow more transistors to be packed into smaller die areas, increasing density. ASICs designed specifically for high-volume applications can better amortize the costs of cutting-edge process nodes.


Disadvantages

Long Design Timelines

The engineering process of designing and validating an ASIC can take 2-3 years. Synthesizing the architecture using hardware description languages, taping out the chip layout, and fabricating the silicon on advanced process nodes involve long development cycles. For example, to tape out a 7nm chip, teams need to define specifications carefully, write the architecture in HDL, synthesize the logic gates, place components, route all interconnections, and finalize the layout to send for fabrication. This very-large-scale integration (VLSI) flow means ASIC design and manufacturing traditionally takes several years.


There are a few key reasons why the long design timelines of ASICs, often 2-3 years, can be challenging for machine learning workloads:

  • ML algorithms evolve rapidly: New model architectures, training techniques, and network optimizations are constantly emerging. For example, Transformers became hugely popular in NLP in the last few years. By the time an ASIC finishes tapeout, the optimal architecture for a workload may have changed.
  • Datasets grow quickly: ASICs designed for certain model sizes or datatypes can become undersized relative to demand. For instance, natural language models are scaling exponentially with more data and parameters. A chip designed for BERT might not accommodate GPT-3.
  • ML applications change frequently: The industry focus shifts between computer vision, speech, NLP, recommender systems, etc. An ASIC optimized for image classification may have less relevance in a few years.
  • Faster design cycles with GPUs/FPGAs: Programmable accelerators like GPUs can adapt much more quickly by upgrading software libraries and frameworks. New algorithms can be deployed without hardware changes.
  • Time-to-market needs: Getting a competitive edge in ML requires rapidly experimenting with and deploying new ideas. Waiting several years for an ASIC is incompatible with fast iteration.

The pace of innovation in ML needs to be better matched to the multi-year timescale for ASIC development. Significant engineering efforts are required to extend ASIC lifespan through modular architectures, process scaling, model compression, and other techniques. However, the rapid evolution of ML makes fixed-function hardware challenging.

High Non-Recurring Engineering Costs

The fixed costs of taking an ASIC from design to high-volume manufacturing can be very capital-intensive, often tens of millions of dollars. Photomask fabrication for taping out chips in advanced process nodes, packaging, and one-time engineering efforts is expensive. For instance, a 7nm chip tape-out alone could cost millions. The high non-recurring engineering (NRE) investment narrows ASIC viability to high-volume production use cases where the upfront cost can be amortized.

Complex Integration and Programming

ASICs require extensive software integration work, including drivers, compilers, OS support, and debugging tools. They also need expertise in electrical and thermal packaging. Additionally, efficiently programming ASIC architectures can involve challenges like workload partitioning and scheduling across many parallel units. The customized nature necessitates significant integration efforts to turn raw hardware into fully operational accelerators.


While ASICs provide massive efficiency gains on target applications by tailoring every aspect of the hardware design to one specific task, their fixed nature results in tradeoffs in flexibility and development costs compared to programmable accelerators, which must be weighed based on the application.


10.3.2 Field-Programmable Gate Arrays (FPGAs)


FPGAs are programmable integrated circuits that can be reconfigured for different applications. Their customizable nature provides advantages for accelerating AI algorithms compared to fixed ASICs or inflexible GPUs. While Google, Meta, and NVIDIA are considering putting ASICs in data centers, Microsoft deployed FPGAs in its data centers as early as 2011 to efficiently serve diverse workloads (Putnam et al. 2014).


Advantages


FPGAs provide several benefits over GPUs and ASICs for accelerating machine learning workloads.

Flexibility Through Reconfigurable Fabric

The key advantage of FPGAs is the ability to reconfigure the underlying fabric to implement custom architectures optimized for different models, unlike fixed-function ASICs. For example, quant trading firms use FPGAs to accelerate their algorithms because they change frequently, and the low NRE cost of FPGAs is more viable than taping out new ASICs. Figure fig-different-fpgas contains a table comparing three different FPGAs.

Figure 10.3: Comparison of FPGAs. Credit: Gwennap (n.d.).

Gwennap, Linley. n.d. "Certus-NX Innovates General-Purpose FPGAs."

FPGAs comprise basic building blocks - configurable logic blocks, RAM blocks, and interconnects. Vendors provide a base amount of these resources, and engineers program the chips by compiling HDL code into bitstreams that rearrange the fabric into different configurations. This makes FPGAs adaptable as algorithms evolve.


While FPGAs may not achieve the utmost performance and efficiency of workload-specific ASICs, their programmability provides more flexibility as algorithms change. This adaptability makes FPGAs a compelling choice for accelerating evolving machine learning applications. Microsoft has deployed FPGAs in its Azure data centers for machine learning workloads to serve diverse applications instead of ASICs. The programmability enables optimization across changing ML models.

Customized Parallelism and Pipelining

FPGA architectures can leverage spatial parallelism and pipelining by tailoring the hardware design to mirror the parallelism in ML models. For example, Intel’s HARPv2 FPGA platform splits the layers of an MNIST convolutional network across separate processing elements to maximize throughput. Unique parallel patterns like tree ensemble evaluations are also possible on FPGAs. Deep pipelines with optimized buffering and dataflow can be customized to each model’s structure and datatypes. This level of tailored parallelism and pipelining is not feasible on GPUs.
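The throughput benefit of deep pipelining can be captured with a one-line model: an ideal pipeline pays a fill latency of one stage delay per stage, then retires one result per stage delay thereafter. A minimal sketch, with arbitrary illustrative numbers:

```python
def pipeline_time_ns(n_items: int, n_stages: int, stage_ns: float) -> float:
    """Total time for n_items through an ideal pipeline: n_stages
    cycles to fill, then one result per cycle thereafter."""
    return (n_stages + n_items - 1) * stage_ns

# With 5 stages at 1 ns each, 100 items take 104 ns, versus the
# 500 ns an unpipelined unit (5 ns per item) would need.
print(pipeline_time_ns(100, 5, 1.0))
```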

Low Latency On-Chip Memory

Large amounts of high-bandwidth on-chip memory enable localized storage for weights and activations. For instance, Xilinx Versal FPGAs contain 32MB of low-latency RAM blocks and dual-channel DDR4 interfaces for external memory. Bringing memory physically closer to the compute units reduces access latency. This provides significant speed advantages over GPUs that traverse PCIe or other system buses to reach off-chip GDDR6 memory.

Native Support for Low Precision

A key advantage of FPGAs is the ability to natively implement any bit width for arithmetic units, such as INT4 or bfloat16, used in quantized ML models. For example, Intel’s Stratix 10 NX FPGAs have dedicated INT8 cores that can achieve up to 143 INT8 TOPS at ~1 TOPS/W Intel Stratix 10 NX FPGA. Lower bit widths increase arithmetic density and performance. FPGAs can even support mixed precision or dynamic precision tuning at runtime.
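As a sketch of the low-precision arithmetic such cores exploit, the example below performs symmetric linear quantization of float32 values to INT8; the input values are arbitrary.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric linear quantization of float32 to INT8 -- the kind of
    narrow datatype an FPGA fabric can implement natively at any width."""
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)  # within one quantization step of x
```

The narrower the integers, the more multipliers fit in the same fabric, which is where the arithmetic-density gains quoted above come from.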


Disadvantages

Lower Peak Throughput than ASICs

FPGAs cannot match the raw throughput numbers of ASICs customized for a specific model and precision. The overheads of the reconfigurable fabric compared to fixed function hardware result in lower peak performance. For example, the TPU v5e pods allow up to 256 chips to be connected with more than 100 petaOps of INT8 performance, while FPGAs can offer up to 143 INT8 TOPS or 286 INT4 TOPS Intel Stratix 10 NX FPGA.


This is because FPGAs comprise basic building blocks—configurable logic blocks, RAM blocks, and interconnects. Vendors provide a set amount of these resources. To program FPGAs, engineers write HDL code and compile it into bitstreams that rearrange the fabric, which has inherent overheads versus an ASIC purpose-built for one computation.

Programming Complexity

To optimize FPGA performance, engineers must program the architectures in low-level hardware description languages like Verilog or VHDL. This requires hardware design expertise and longer development cycles than higher-level software frameworks like TensorFlow. Maximizing utilization can be challenging despite advances in high-level synthesis from C/C++.

Reconfiguration Overheads

Changing FPGA configurations requires reloading a new bitstream, which has considerable latency and storage size costs. For example, partial reconfiguration on Xilinx FPGAs can take 100s of milliseconds. This makes dynamically swapping architectures in real-time infeasible. The bitstream storage also consumes on-chip memory.

Diminishing Gains on Advanced Nodes

While smaller process nodes greatly benefit ASICs, they provide fewer advantages for FPGAs. At 7nm and below, effects like process variation, thermal constraints, and aging disproportionately impact FPGA performance. The overheads of the configurable fabric also diminish gains compared to fixed-function ASICs.

Case Study

FPGAs have found widespread application in various fields, including medical imaging, robotics, and finance, where they excel in handling computationally intensive machine learning tasks. In medical imaging, an illustrative example is the application of FPGAs for brain tumor segmentation, a traditionally time-consuming and error-prone process. For instance, Xiong et al. developed a quantized segmentation accelerator, which they retrained using the BraTS19 and BraTS20 datasets. Their work yielded remarkable results, achieving over 5x and 44x performance improvements and 11x and 82x energy efficiency gains compared to GPU and CPU implementations, respectively (Xiong et al. 2021).

Xiong, Siyu, Guoqing Wu, Xitian Fan, Xuan Feng, Zhongcheng Huang, Wei Cao, Xuegong Zhou, et al. 2021. "MRI-Based Brain Tumor Segmentation Using FPGA-Accelerated Neural Network." BMC Bioinf. 22 (1): 421. https://doi.org/10.1186/s12859-021-04347-6.

10.3.3 Digital Signal Processors (DSPs)


Texas Instruments built some of the first digital signal processor cores in the late 1970s (The Evolution of Audio DSPs). Traditionally, DSPs would have logic to directly access digital/audio data in memory, perform an arithmetic operation (multiply-add-accumulate, or MAC, was one of the most common), and then write the result back to memory. The DSP would include specialized analog components to retrieve digital/audio data.


Once we entered the smartphone era, DSPs started encompassing more sophisticated tasks. They required Bluetooth, Wi-Fi, and cellular connectivity. Media also became much more complex. Today, it’s rare to have entire chips dedicated to just DSP, but a System on Chip would include DSPs and general-purpose CPUs. For example, Qualcomm’s Hexagon Digital Signal Processor claims to be a “world-class processor with both CPU and DSP functionality to support deeply embedded processing needs of the mobile platform for both multimedia and modem functions.” Google Tensors, the chip in the Google Pixel phones, also includes CPUs and specialized DSP engines.


Advantages


DSPs architecturally provide advantages in vector math throughput, low latency memory access, power efficiency, and support for diverse datatypes - making them well-suited for embedded ML acceleration.

Optimized Architecture for Vector Math

DSPs contain specialized data paths, register files, and instructions optimized specifically for vector math operations commonly used in machine learning models. This includes dot product engines, MAC units, and SIMD capabilities tailored for vector/matrix calculations. For example, the CEVA-XM6 DSP (“Ceva SensPro Fuses AI and Vector DSP”) has 512-bit vector units to accelerate convolutions. This efficiency on vector math workloads is far beyond general CPUs.
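The MAC primitive itself is simple; the scalar sketch below shows what a DSP's MAC unit executes per element, with SIMD datapaths simply running many such lanes at once.

```python
def mac_dot(a, b):
    """Dot product as a chain of multiply-accumulate (MAC) steps,
    the core primitive of DSP vector math engines."""
    acc = 0
    for x, y in zip(a, b):
        acc += x * y  # one MAC: multiply, then accumulate
    return acc

print(mac_dot([1, 2, 3], [4, 5, 6]))
```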

Low Latency On-Chip Memory

DSPs integrate large amounts of fast on-chip SRAM memory to hold data locally for processing. Bringing memory physically closer to the computation units reduces access latency. For example, Analog’s SHARC+ DSP contains 10MB of on-chip SRAM. This high-bandwidth local memory provides speed advantages for real-time applications.

Power Efficiency

DSPs are engineered to provide high performance per watt on digital signal workloads. Efficient data paths, parallelism, and memory architectures enable trillions of math operations per second within tight mobile power budgets. For example, Qualcomm’s Hexagon DSP can deliver 4 trillion operations per second (TOPS) while consuming minimal watts.

Support for Integer and Floating Point Math

Unlike GPUs that excel at single or half precision, DSPs can natively support 8/16-bit integer and 32-bit floating point datatypes used across ML models. Some DSPs support dot product acceleration at INT8 precision for quantized neural networks.


Disadvantages


DSPs make architectural tradeoffs that limit peak throughput, precision, and model capacity compared to other AI accelerators. However, their advantages in power efficiency and integer math make them a strong edge computing option. So, while DSPs provide some benefits over CPUs, they also come with limitations for machine learning workloads:

Lower Peak Throughput than ASICs/GPUs

DSPs cannot match the raw computational throughput of GPUs or customized ASICs designed specifically for machine learning. For example, Qualcomm’s Cloud AI 100 ASIC delivers 480 TOPS on INT8, while their Hexagon DSP provides 10 TOPS. DSPs lack the massive parallelism of GPU SM units.

Slower Double Precision Performance

Most DSPs are not optimized for the higher-precision floating point needed in some ML models. Their dot product engines focus on INT8/16 and FP32, which provide better power efficiency. However, 64-bit floating point throughput is much lower, which can limit their use in models requiring high precision.

Constrained Model Capacity

The limited on-chip memory of DSPs constrains the model sizes that can be run. Large deep learning models with hundreds of megabytes of parameters would exceed on-chip SRAM capacity. DSPs are best suited for small to mid-sized models targeted for edge devices.

Programming Complexity

Efficient programming of DSP architectures requires expertise in parallel programming and optimizing data access patterns. Their specialized microarchitectures have a steeper learning curve than high-level software frameworks, making development more complex.


10.3.4 Graphics Processing Units (GPUs)


The term graphics processing unit has existed since at least the 1980s. There had always been a demand for graphics hardware in video game consoles (high demand, needed to be relatively lower cost) and scientific simulations (lower demand, but higher resolution, could be at a high price point).


The term was popularized, however, in 1999 when NVIDIA launched the GeForce 256, mainly targeting the PC games market sector (Lindholm et al. 2008). As PC games became more sophisticated, NVIDIA GPUs became more programmable. Soon, users realized they could take advantage of this programmability, run various non-graphics-related workloads on GPUs, and benefit from the underlying architecture. And so, in the late 2000s, GPUs became general-purpose graphics processing units or GP-GPUs.

Lindholm, Erik, John Nickolls, Stuart Oberman, and John Montrym. 2008. "NVIDIA Tesla: A Unified Graphics and Computing Architecture." IEEE Micro 28 (2): 39–55. https://doi.org/10.1109/mm.2008.31.

Intel Arc Graphics and AMD Radeon RX have also developed their GPUs over time.


Advantages

High Computational Throughput

The key advantage of GPUs is their ability to perform massively parallel floating-point calculations optimized for computer graphics and linear algebra (Raina, Madhavan, and Ng 2009). Modern GPUs like Nvidia’s A100 offer up to 19.5 teraflops of FP32 performance with 6912 CUDA cores and 40GB of graphics memory tightly coupled with 1.6TB/s of graphics memory bandwidth.

Raina, Rajat, Anand Madhavan, and Andrew Y. Ng. 2009. "Large-Scale Deep Unsupervised Learning Using Graphics Processors." In Proceedings of the 26th Annual International Conference on Machine Learning, edited by Andrea Pohoreckyj Danyluk, Léon Bottou, and Michael L. Littman, 382:873–80. ACM International Conference Proceeding Series. ACM. https://doi.org/10.1145/1553374.1553486.

This raw throughput stems from the highly parallel streaming multiprocessor (SM) architecture tailored for data-parallel workloads (Zhihao Jia, Zaharia, and Aiken 2019). Each SM contains hundreds of scalar cores optimized for float32/64 math. With thousands of SMs on a chip, GPUs are purpose-built for matrix multiplication and vector operations used throughout neural networks.


For example, Nvidia’s latest H100 GPU provides 4000 TFLOPs of FP8, 2000 TFLOPs of FP16, 1000 TFLOPs of TF32, 67 TFLOPs of FP32 and 34 TFLOPs of FP64 Compute performance, which can dramatically accelerate large batch training on models like BERT, GPT-3, and other transformer architectures. The scalable parallelism of GPUs is key to speeding up computationally intensive deep learning.
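A quick back-of-the-envelope sketch shows how to translate such peak numbers into runtime estimates, using the standard 2*n^3 FLOP count for a dense matmul and an assumed (not measured) utilization factor:

```python
def matmul_flops(n: int) -> int:
    """A dense n x n matrix multiplication performs ~2*n^3 FLOPs
    (one multiply and one add per inner-product term)."""
    return 2 * n ** 3

def est_time_ms(n: int, peak_tflops: float, utilization: float = 0.5) -> float:
    """Runtime estimate at a given peak rate and assumed utilization."""
    return matmul_flops(n) / (peak_tflops * 1e12 * utilization) * 1e3

# e.g. est_time_ms(8192, 19.5) estimates an 8192^3 matmul at the
# A100's quoted FP32 peak, assuming 50% sustained utilization.
print(est_time_ms(8192, 19.5))
```

Real kernels rarely hit peak, so the utilization term matters; this kind of estimate is a sanity check, not a benchmark.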

+
+
+
Mature Software Ecosystem
+

Nvidia provides extensive runtime libraries like cuDNN and cuBLAS that are highly optimized for deep learning primitives. Frameworks like TensorFlow and PyTorch integrate with these libraries to enable GPU acceleration without direct programming. CUDA provides lower-level control for custom computations.

+

This ecosystem enables quick leveraging of GPUs via high-level Python without GPU programming expertise. Known workflows and abstractions provide a convenient on-ramp for scaling up deep learning experiments. The software maturity supplements the throughput advantages.

+
+
+
Broad Availability
+

The economies of scale of graphics processing make GPUs broadly accessible in data centers, cloud platforms like AWS and GCP, and desktop workstations. Their availability in research environments has provided a convenient ML experimentation and innovation platform. For example, nearly every state-of-the-art deep learning result has involved GPU acceleration because of this ubiquity. The broad access supplements the software maturity to make GPUs the standard ML accelerator.

+
+
+
Programmable Architecture
+

While not as flexible as FPGAs, GPUs provide programmability via CUDA and shader languages to customize computations. Developers can optimize data access patterns, create new ops, and tune precisions for evolving models and algorithms.

+
+
+
+

Disadvantages

+

While GPUs have become the standard accelerator for deep learning, their architecture has some key downsides.

+
+
Less Efficient than Custom ASICs
+

The statement “GPUs are less efficient than ASICs” could spark intense debate within the ML/AI field and cause this book to explode.

+

Typically, GPUs are perceived as less efficient than ASICs because the latter are custom-built for specific tasks and thus can operate more efficiently by design. With their general-purpose architecture, GPUs are inherently more versatile and programmable, catering to a broad spectrum of computational tasks beyond ML/AI.

+

However, modern GPUs have evolved to include specialized hardware support for essential AI operations, such as generalized matrix multiplication (GEMM) and other matrix operations, native support for quantization, and native support for pruning, which are critical for running ML models effectively. These enhancements have significantly improved the efficiency of GPUs for AI tasks to the point where they can rival the performance of ASICs for certain applications.

+

Consequently, contemporary GPUs are convergent, incorporating specialized ASIC-like capabilities within a flexible, general-purpose processing framework. This adaptability has blurred the lines between the two types of hardware. GPUs offer a strong balance of specialization and programmability that is well-suited to the dynamic needs of ML/AI research and development.

+
+
+
High Memory Bandwidth Needs
+

The massively parallel architecture requires tremendous memory bandwidth to supply thousands of cores, as shown in Figure 1. For example, the Nvidia A100 GPU requires 1.6 TB/s of memory bandwidth to fully saturate its compute units. GPUs rely on wide 384-bit memory buses to high-bandwidth GDDR6 RAM, but even the fastest GDDR6 tops out at around 1 TB/sec. This dependence on external DRAM incurs latency and power overheads.
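The interplay between bandwidth and compute can be made concrete with a roofline-style estimate. The sketch below plugs in the A100 figures quoted in this section (19.5 TFLOPS FP32, 1.6 TB/s); it is an illustrative model, not a measurement.

```python
# Roofline-style estimate using the A100 figures quoted in this section.
# Illustrative model only.

PEAK_FLOPS = 19.5e12   # FP32 operations per second
PEAK_BW = 1.6e12       # bytes per second from graphics memory

def attainable_flops(arithmetic_intensity):
    """Performance is capped either by compute or by memory traffic."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

# Ridge point: how many FLOPs a kernel must perform per byte moved to keep
# the cores busy instead of waiting on DRAM.
ridge = PEAK_FLOPS / PEAK_BW
print(f"ridge point: {ridge:.1f} FLOPs/byte")

# A large N x N FP32 matmul moves roughly 3*N^2*4 bytes and performs 2*N^3
# FLOPs, so its arithmetic intensity grows with N.
N = 4096
intensity = (2 * N**3) / (3 * N**2 * 4)
print(f"{N}x{N} GEMM: {intensity:.0f} FLOPs/byte (compute-bound: {intensity > ridge})")
```

Kernels whose intensity falls below the ridge point are limited by the memory system no matter how many cores the chip has, which is why bandwidth is such a central GPU design constraint.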

+
+
+
Programming Complexity
+

While tools like CUDA help, optimally mapping and partitioning ML workloads across the massively parallel GPU architecture remains challenging: achieving both high utilization and memory locality requires low-level tuning (Zhe Jia et al. 2018). Abstractions like TensorFlow can leave performance on the table.

+
+Jia, Zhe, Marco Maggioni, Benjamin Staiger, and Daniele P. Scarpazza. 2018. “Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking.” ArXiv Preprint. https://arxiv.org/abs/1804.06826. +
+
+
Limited On-Chip Memory
+

GPUs have relatively small on-chip memory caches compared to ML models’ large working set requirements during training. They rely on high bandwidth access to external DRAM, which ASICs minimize with large on-chip SRAM.

+
+
+
Fixed Architecture
+

Unlike FPGAs, the fundamental GPU architecture cannot be altered post-manufacture. This constraint limits adapting to novel ML workloads or layers. The CPU-GPU boundary also creates data movement overheads.

+
+
+
+

Case Study

+

Consider the groundbreaking research conducted by OpenAI (Brown et al. 2020) with their GPT-3 model. GPT-3, a language model with 175 billion parameters, demonstrated unprecedented language understanding and generation capabilities. Its training, which would have taken months on conventional CPUs, was accomplished in a matter of days using powerful GPUs, thus pushing the boundaries of natural language processing (NLP) capabilities.

+
+
+
+

10.3.5 Central Processing Units (CPUs)

+

The term CPU has a long history that dates back to 1955 (Weik 1955), while the first microprocessor CPU, the Intel 4004, was released in 1971 (Who Invented the Microprocessor?). Compilers translate high-level programming languages like Python, Java, or C into assembly instructions (x86, ARM, RISC-V, etc.) for CPUs to process. The set of instructions a CPU understands is called its “instruction set,” and it must be agreed upon by both the hardware and the software running atop it (see Section 5 for a more in-depth description of instruction set architectures, or ISAs).

+
+Weik, Martin H. 1955. A Survey of Domestic Electronic Digital Computing Systems. Ballistic Research Laboratories. +

An overview of significant developments in CPUs:

+
    +
  • Single-core Era (1950s–2000): This era is known for aggressive microarchitectural improvements. Techniques like speculative execution (executing an instruction before the previous one has finished), out-of-order execution (re-ordering instructions to be more effective), and wider issue widths (executing multiple instructions at once) were implemented to increase instruction throughput. The term “System on Chip” also originated in this era as different analog components (components designed with transistors) and digital components (components designed with hardware description languages that are mapped to transistors) were put on the same platform to achieve some task.
  • +
  • Multicore Era (2000s): Driven by the slowing of Moore’s Law, this era is marked by scaling the number of cores within a CPU. Now, tasks can be split across many different cores, each with its own datapath and control unit. Many of the issues in this era pertained to which resources to share, how to share them, and how to maintain coherency and consistency across all the cores.
  • +
  • Sea of accelerators (2010s): Again driven by the slowing of Moore’s Law, this era is marked by offloading more complicated tasks to accelerators (widgets) attached to the main datapath in CPUs. It’s common to see accelerators dedicated to various AI workloads, as well as image/digital processing and cryptography. In these designs, CPUs are often described more as judges, deciding which tasks should be processed, rather than doing the processing themselves. Any task could still be run on the CPU rather than the accelerators, but the CPU would generally be slower. However, the cost of designing and programming the accelerators became a non-trivial hurdle that sparked interest in domain-specific languages (DSLs).
  • +
  • Presence in data centers: Although we often hear that GPUs dominate the data center market, CPUs are still well suited for tasks that don’t inherently possess a large amount of parallelism. CPUs often handle serial and small tasks and coordinate the data center.
  • +
  • On the edge: Given the tighter resource constraints on the edge, edge CPUs often implement only a subset of the techniques developed in the single-core era because these optimizations tend to be heavy on power and area consumption. Edge CPUs maintain a relatively simple datapath with limited memory capacity.
  • +
+

Traditionally, CPUs have been synonymous with general-purpose computing, a term whose meaning has also shifted as the “average” workload a consumer runs changes over time. For example, floating-point units were once considered reserved for “scientific computing”: they were usually implemented as a co-processor (a modular component that worked alongside the datapath) and seldom deployed to average consumers. Compare this attitude to today, where FPUs are built into every datapath.

+
+

Advantages

+

While raw throughput is limited, general-purpose CPUs provide practical AI acceleration benefits.

+
+
General Programmability
+

CPUs support diverse workloads beyond ML, providing flexible general-purpose programmability. This versatility comes from their standardized instruction sets and mature compiler ecosystems, which allow running any application, from databases and web servers to analytics pipelines (Hennessy and Patterson 2019).

+
+Hennessy, John L., and David A. Patterson. 2019. “A New Golden Age for Computer Architecture.” Commun. ACM 62 (2): 48–60. https://doi.org/10.1145/3282307. +

This avoids the need for dedicated ML accelerators and enables leveraging existing CPU-based infrastructure for basic ML deployment. For example, X86 servers from vendors like Intel and AMD can run common ML frameworks using Python and TensorFlow packages alongside other enterprise workloads.

+
+
+
Mature Software Ecosystem
+

For decades, highly optimized math libraries like BLAS, LAPACK, and FFTW have leveraged vectorized instructions and multithreading on CPUs (Dongarra 2009). Major ML frameworks like PyTorch, TensorFlow, and SciKit-Learn are designed to integrate seamlessly with these CPU math kernels.

+
+Dongarra, Jack J. 2009. “The Evolution of High Performance Computing on System z.” IBM J. Res. Dev. 53: 3–4. +

Hardware vendors like Intel and AMD also provide low-level libraries to optimize performance for deep learning primitives fully (AI Inference Acceleration on CPUs). This robust, mature software ecosystem allows quickly deploying ML on existing CPU infrastructure.
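The gap these optimized kernels close is easy to see from Python. The sketch below, assuming NumPy is available and linked against an optimized BLAS, compares a BLAS-backed matrix multiply with equivalent pure-Python loops; exact timings depend on the machine and BLAS build.

```python
import time
import numpy as np

# NumPy dispatches `@` to an optimized BLAS GEMM, while the pure-Python loops
# below perform the same arithmetic with no vectorization or multithreading.

n = 64
rng = np.random.default_rng(0)
A = rng.random((n, n))
B = rng.random((n, n))

t0 = time.perf_counter()
C_blas = A @ B                      # BLAS-backed GEMM
t_blas = time.perf_counter() - t0

t0 = time.perf_counter()
C_loop = [[sum(A[i, k] * B[k, j] for k in range(n)) for j in range(n)]
          for i in range(n)]
t_loop = time.perf_counter() - t0

assert np.allclose(C_blas, np.array(C_loop))   # same result, very different speed
print(f"BLAS: {t_blas * 1e3:.3f} ms   pure Python loops: {t_loop * 1e3:.1f} ms")
```

The point is not the absolute numbers but that identical math runs orders of magnitude faster through the vendor-tuned kernel path that frameworks use by default.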

+
+
+
Wide Availability
+

The economies of scale of CPU manufacturing, driven by demand across many markets like PCs, servers, and mobile, make them ubiquitously available. Intel CPUs, for example, have powered most servers for decades (Ranganathan 2011). This wide availability in data centers reduces hardware costs for basic ML deployment.

+
+Ranganathan, Parthasarathy. 2011. “From Microprocessors to Nanostores: Rethinking Data-Centric Systems.” Computer 44 (1): 39–48. https://doi.org/10.1109/mc.2011.18. +

Even small embedded devices typically integrate some CPU, enabling edge inference. The ubiquity reduces the need to purchase specialized ML accelerators in many situations.

+
+
+
Low Power for Inference
+

Optimizations like ARM Neon and Intel AVX vector extensions provide power-efficient integer and floating point throughput optimized for “bursty” workloads such as inference (Ignatov et al. 2018). While slower than GPUs, CPU inference can be deployed in power-constrained environments. For example, ARM’s Cortex-M CPUs now deliver over 1 TOPS of INT8 performance under 1W, enabling keyword spotting and vision applications on edge devices (ARM).

+
+
+
+

Disadvantages

+

While providing some advantages, general-purpose CPUs also have limitations for AI workloads.

+
+
Lower Throughput than Accelerators
+

CPUs lack the specialized architectures for massively parallel processing that GPUs and other accelerators provide. Their general-purpose design reduces computational throughput for the highly parallelizable math operations common in ML models (N. P. Jouppi et al. 2017a).

+
+Jouppi, Norman P., Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, et al. 2017a. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246. +
+
+
Not Optimized for Data Parallelism
+

The architectures of CPUs are not specifically optimized for data parallel workloads inherent to AI (Sze et al. 2017). They allocate substantial silicon area to instruction decoding, speculative execution, caching, and flow control that provides little benefit for the array operations used in neural networks (AI Inference Acceleration on CPUs). However, modern CPUs are equipped with vector instructions like AVX-512 specifically to accelerate certain key operations like matrix multiplication.

+

GPU streaming multiprocessors, for example, devote most transistors to floating point units instead of complex branch prediction logic. This specialization allows much higher utilization for ML math.

+
+
+
Higher Memory Latency
+

CPUs suffer from higher latency accessing main memory relative to GPUs and other accelerators (DDR). Techniques like tiling and caching can help, but the physical separation from off-chip RAM bottlenecks data-intensive ML workloads. This emphasizes the need for specialized memory architectures in ML hardware.
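Tiling, mentioned above, restructures loops so each block of data is reused many times while it is still resident in cache. A minimal pure-Python sketch of a tiled matrix multiply, illustrative of the access pattern rather than of real performance:

```python
# Loop tiling: operate on tile x tile blocks so each loaded value is reused
# many times before it is evicted from cache.

def tiled_matmul(A, B, n, tile=32):
    """Naive n x n matmul over nested lists, blocked into tile-sized chunks."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]          # loaded once, reused across a tile row
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(tiled_matmul(A, B, 2, tile=2))   # [[19.0, 22.0], [43.0, 50.0]]
```

On real hardware the tile size would be chosen to fit the working set of three tiles into the L1 or L2 cache; on accelerators the same idea maps tiles into on-chip SRAM.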

+
+
+
Power Inefficiency Under Heavy Workloads
+

While suitable for intermittent inference, sustaining near-peak throughput for training results in inefficient power consumption on CPUs, especially mobile CPUs (Ignatov et al. 2018). Accelerators explicitly optimize the data flow, memory, and computation for sustained ML workloads. CPUs are energy-inefficient for training large models.

+
+
+
+
+

10.3.6 Comparison

Accelerator | Description | Key Advantages | Key Disadvantages
ASICs | Custom ICs designed for target workloads like AI inference | Maximizes perf/watt; optimized for tensor ops; low-latency on-chip memory | Fixed architecture lacks flexibility; high NRE cost; long design cycles
FPGAs | Reconfigurable fabric with programmable logic and routing | Flexible architecture; low-latency memory access | Lower perf/watt than ASICs; complex programming
GPUs | Originally for graphics, now used for neural network acceleration | High throughput; parallel scalability; software ecosystem with CUDA | Not as power-efficient as ASICs; requires high memory bandwidth
CPUs | General-purpose processors | Programmability; ubiquitous availability | Lower performance for AI workloads
+

In general, CPUs provide a readily available baseline, GPUs deliver broadly accessible acceleration, FPGAs offer programmability, and ASICs maximize efficiency for fixed functions. The optimal choice depends on the target application’s scale, cost, flexibility, and other requirements.

+

Although first developed for data center deployment, where [cite some benefit that Google cites], Google has also put considerable effort into developing Edge TPUs. These Edge TPUs maintain the inspiration from systolic arrays but are tailored to the limited resources accessible at the edge.

+
+
+
+

10.4 Hardware-Software Co-Design

+

Hardware-software co-design is based on the principle that AI systems achieve optimal performance and efficiency when the hardware and software components are designed in tight integration. This involves an iterative, collaborative design cycle where the hardware architecture and software algorithms are concurrently developed and refined with continuous feedback between teams.

+

For example, a new neural network model may be prototyped on an FPGA-based accelerator platform to obtain real performance data early in the design process. These results provide feedback to the hardware designers on potential optimizations and the software developers on refinements to the model or framework to better leverage the hardware capabilities. This level of synergy is difficult to achieve with the common practice of software being developed independently to deploy on fixed commodity hardware.

+

Co-design is critical for embedded AI systems facing significant resource constraints like low power budgets, limited memory and compute capacity, and real-time latency requirements. Tight integration between algorithm developers and hardware architects helps unlock optimizations across the stack to meet these restrictions. Enabling techniques include algorithmic improvements like neural architecture search and pruning and hardware advances like specialized dataflows and memory hierarchies.

+

By bringing hardware and software design together, rather than developing them separately, holistic optimizations can be made that maximize performance and efficiency. The next sections provide more details on specific co-design approaches.

+
+

10.4.1 The Need for Co-Design

+

Several key factors make a collaborative hardware-software co-design approach essential for building efficient AI systems.

+
+

Increasing Model Size and Complexity

+

State-of-the-art AI models have been rapidly growing in size, enabled by advances in neural architecture design and the availability of large datasets. For example, the GPT-3 language model contains 175 billion parameters (Brown et al. 2020), requiring huge computational resources for training. This explosion in model complexity necessitates co-design to develop efficient hardware and algorithms in tandem. Techniques like model compression (Cheng et al. 2018) and quantization must be co-optimized with the hardware architecture.

+
+Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html. +
+Cheng, Yu, Duo Wang, Pan Zhou, and Tao Zhang. 2018. “Model Compression and Acceleration for Deep Neural Networks: The Principles, Progress, and Challenges.” IEEE Signal Process Mag. 35 (1): 126–36. https://doi.org/10.1109/msp.2017.2765695. +
+
+

Constraints of Embedded Deployment

+

Deploying AI applications on edge devices like mobile phones or smart home appliances introduces significant constraints on energy, memory, and silicon area (Sze et al. 2017). Enabling real-time inference under these restrictions requires co-exploring hardware optimizations like specialized dataflows and compression with efficient neural network design and pruning techniques. Co-design maximizes performance within tight deployment constraints.

+
+
+

Rapid Evolution of AI Algorithms

+

AI is rapidly evolving, with new model architectures, training methodologies, and software frameworks constantly emerging. For example, Transformers have recently become hugely popular for NLP (Young et al. 2018). Keeping pace with these algorithmic innovations requires hardware-software co-design so that platforms can adapt quickly without accruing technical debt.

+
+Young, Tom, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. 2018. “Recent Trends in Deep Learning Based Natural Language Processing [Review Article].” IEEE Comput. Intell. Mag. 13 (3): 55–75. https://doi.org/10.1109/mci.2018.2840738. +
+
+

Complex Hardware-Software Interactions

+

Many subtle interactions and tradeoffs between hardware architectural choices and software optimizations significantly impact overall efficiency. For instance, techniques like tensor partitioning and batching affect parallelism and data access patterns impact memory utilization. Co-design provides a cross-layer perspective to unravel these dependencies.

+
+
+

Need for Specialization

+

AI workloads benefit from specialized operations like low-precision math and customized memory hierarchies. This motivates incorporating custom hardware tailored to neural network algorithms rather than relying solely on flexible software running on generic hardware (Sze et al. 2017). However, the software stack must explicitly target custom hardware operations to realize the benefits.

+
+
+

Demand for Higher Efficiency

+

As model complexity grows, optimizing only the hardware or only the software in isolation yields diminishing returns and mounting overhead (Putnam et al. 2014). Inevitable tradeoffs arise that require global optimization across layers. Jointly co-designing hardware and software provides large compound efficiency gains.

+
+Putnam, Andrew, Adrian M. Caulfield, Eric S. Chung, Derek Chiou, Kypros Constantinides, John Demme, Hadi Esmaeilzadeh, et al. 2014. “A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services.” ACM SIGARCH Computer Architecture News 42 (3): 13–24. https://doi.org/10.1145/2678373.2665678. +
+
+
+

10.4.2 Principles of Hardware-Software Co-Design

+

The underlying hardware architecture and software stack must be tightly integrated and co-optimized to build high-performance and efficient AI systems. Neither can be designed in isolation; maximizing their synergies requires a holistic approach known as hardware-software co-design.

+

The key goal is tailoring the hardware capabilities to match the algorithms and workloads run by the software. This requires a feedback loop between hardware architects and software developers to converge on optimized solutions. Several techniques enable effective co-design:

+
+

Hardware-Aware Software Optimization

+

The software stack can be optimized to leverage the underlying hardware capabilities better:

+
    +
  • Parallelism: Parallelize matrix computations like convolution or attention layers to maximize throughput on vector engines.
  • +
  • Memory Optimization: Tune data layouts to improve cache locality based on hardware profiling. This maximizes reuse and minimizes expensive DRAM access.
  • +
  • Compression: Use sparsity in the models to reduce storage space and save on computation by zero-skipping operations.
  • +
  • Custom Operations: Incorporate specialized operations like low-precision INT4 or bfloat16 into models to capitalize on dedicated hardware support.
  • +
  • Dataflow Mapping: Explicitly map model stages to computational units to optimize data movement on hardware.
  • +
+
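As a concrete illustration of the compression bullet above, a sparse weight vector can be stored as (index, value) pairs so that zero entries are skipped entirely during a dot product, saving both storage and multiply-accumulates. A minimal sketch:

```python
# Zero-skipping for a sparse weight vector: store only nonzero entries and
# skip zeros entirely during the dot product.

def to_sparse(weights, eps=0.0):
    """Keep (index, value) pairs for entries with magnitude above eps."""
    return [(i, w) for i, w in enumerate(weights) if abs(w) > eps]

def sparse_dot(sparse_w, x):
    return sum(w * x[i] for i, w in sparse_w)

weights = [0.0, 1.5, 0.0, 0.0, -2.0, 0.0, 0.5, 0.0]
x = [1, 2, 3, 4, 5, 6, 7, 8]
sw = to_sparse(weights)
print(len(sw), "of", len(weights), "weights kept")
print(sparse_dot(sw, x))   # 1.5*2 - 2.0*5 + 0.5*7 = -3.5
```

Hardware with sparsity support (for example, structured-sparse tensor cores) exploits the same principle, but with a fixed pattern the memory system can index efficiently.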
+
+

Algorithm-Driven Hardware Specialization

+

Hardware can be tailored to suit the characteristics of ML algorithms better:

+
    +
  • Custom Datatypes: Support low precision INT8/4 or bfloat16 in hardware for higher arithmetic density.
  • +
  • On-Chip Memory: Increase SRAM bandwidth and lower access latency to match model memory access patterns.
  • +
  • Domain-Specific Ops: Add hardware units for key ML functions like FFTs or matrix multiplication to reduce latency and energy.
  • +
  • Model Profiling: Use model simulation and profiling to identify computational hotspots and optimize hardware.
  • +
+
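The custom-datatypes bullet above typically means affine INT8 quantization, where real values are mapped to 8-bit integers via a scale and zero point. A simplified sketch using a single tensor-wide scale (production schemes are often per-channel):

```python
# Affine INT8 quantization: map floats to unsigned 8-bit integers with a
# scale and zero point, then recover approximate floats by dequantizing.

def quantize(values, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # avoid zero scale for constant input
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

vals = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize(vals)
recovered = dequantize(q, s, z)
print("quantized:", q, " scale:", round(s, 5), " zero point:", z)
print("recovered:", [round(v, 3) for v in recovered])
```

Hardware with native INT8 support executes the integer multiply-accumulates directly, giving the higher arithmetic density this section describes; the scale and zero point are applied once at the boundaries.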

The key is collaborative feedback: insights from hardware profiling guide software optimizations, while algorithmic advances inform hardware specialization. This mutual enhancement provides multiplicative efficiency gains compared to isolated efforts.

+
+
+

Algorithm-Hardware Co-exploration

+

A powerful co-design technique involves jointly exploring innovations in neural network architectures and custom hardware design. This allows for finding ideal pairings tailored to each other’s strengths (Sze et al. 2017).

+
+Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. 2017. “Efficient Processing of Deep Neural Networks: A Tutorial and Survey.” Proc. IEEE 105 (12): 2295–2329. https://doi.org/10.1109/jproc.2017.2761740. +
+Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv Preprint. https://arxiv.org/abs/1704.04861. +
+Jacob, Benoit, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. “Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2704–13. IEEE. https://doi.org/10.1109/cvpr.2018.00286. +
+Gale, Trevor, Erich Elsen, and Sara Hooker. 2019. “The State of Sparsity in Deep Neural Networks.” ArXiv Preprint abs/1902.09574. https://arxiv.org/abs/1902.09574. +
+Mishra, Asit K., Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, and Paulius Micikevicius. 2021. “Accelerating Sparse Deep Neural Networks.” CoRR abs/2104.08378. https://arxiv.org/abs/2104.08378. +

For instance, the shift to mobile architectures like MobileNets (Howard et al. 2017) was guided by edge device constraints like model size and latency. The quantization (Jacob et al. 2018) and pruning techniques (Gale, Elsen, and Hooker 2019) that unlocked these efficient models became possible thanks to hardware accelerators with native low-precision integer support and pruning support (Mishra et al. 2021).
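Magnitude pruning, one of the pruning techniques cited above, simply zeroes the smallest-magnitude weights until a target sparsity is reached. A minimal sketch of the idea:

```python
# Magnitude pruning: zero out the k smallest-magnitude weights, where
# k is set by the target sparsity fraction.

def magnitude_prune(weights, sparsity=0.5):
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.3, -0.08]
pruned = magnitude_prune(w, sparsity=0.5)
print(pruned)   # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.3, 0.0]
```

In practice pruning is applied iteratively with fine-tuning between rounds, and hardware pruning support imposes structure (e.g., 2:4 patterns) so the skipped work translates into real speedups.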

+

Attention-based models have thrived on massively parallel GPUs and ASICs, where their computation maps well spatially, as opposed to RNN architectures, which rely on sequential processing. The co-evolution of algorithms and hardware unlocked new capabilities.

+

Effective co-exploration requires close collaboration between algorithm researchers and hardware architects. Rapid prototyping on FPGAs (C. Zhang et al. 2015) or specialized AI simulators allows quick evaluation of different pairings of model architectures and hardware designs pre-silicon.

+
+Zhang, Chen, Peng Li, Guangyu Sun, Yijin Guan, Bingjun Xiao, and Jason Cong. 2015. “Optimizing FPGA-Based Accelerator Design for Deep Convolutional Neural Networks.” In Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA), 161–70. +

For example, Google’s TPU architecture evolved with optimizations to TensorFlow models to maximize performance on image classification. This tight feedback loop yielded models tailored for the TPU that would have been unlikely in isolation.

+

Studies have shown 2-5x higher performance and efficiency gains with algorithm-hardware co-exploration than isolated algorithm or hardware optimization efforts (Suda et al. 2016). Parallelizing the joint development also reduces time-to-deployment.

+
+Suda, Naveen, Vikas Chandra, Ganesh Dasika, Abinash Mohanty, Yufei Ma, Sarma Vrudhula, Jae-sun Seo, and Yu Cao. 2016. “Throughput-Optimized OpenCL-Based FPGA Accelerator for Large-Scale Convolutional Neural Networks.” In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 16–25. ACM. https://doi.org/10.1145/2847263.2847276. +

Overall, exploring the tight interdependencies between model innovation and hardware advances unlocks opportunities that remain invisible when the two are tackled sequentially. This synergistic co-design yields solutions greater than the sum of their parts.

+
+
+
+

10.4.3 Challenges

+

While collaborative co-design can improve efficiency, adaptability, and time to market, it also has engineering and organizational challenges.

+
+

Increased Prototyping Costs

+

More extensive prototyping is required to evaluate different hardware-software pairings. The need for rapid, iterative prototypes on FPGAs or emulators increases validation overhead. For example, Microsoft found that co-designing an AI accelerator required more prototypes than a sequential design process would have (Fowers et al. 2018).

+
+Fowers, Jeremy, Kalin Ovtcharov, Michael Papamichael, Todd Massengill, Ming Liu, Daniel Lo, Shlomi Alkalay, et al. 2018. “A Configurable Cloud-Scale DNN Processor for Real-Time AI.” In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), 1–14. IEEE; IEEE. https://doi.org/10.1109/isca.2018.00012. +
+
+

Team and Organizational Hurdles

+

Co-design requires close coordination between traditionally disconnected hardware and software groups. This could introduce communication issues or misaligned priorities and schedules. Navigating different engineering workflows is also challenging. Some organizational inertia to adopting integrated practices may exist.

+
+
+

Simulation and Modeling Complexity

+

Capturing subtle interactions between hardware and software layers for joint simulation and modeling adds significant complexity. Full cross-layer abstractions are difficult to construct quantitatively before implementation, making holistic optimizations harder to quantify ahead of time.

+
+
+

Over-Specialization Risks

+

Tight co-design bears the risk of overfitting optimizations to current algorithms, sacrificing generality. For example, hardware tuned exclusively for Transformer models could underperform on future techniques. Maintaining flexibility requires foresight.

+
+
+

Adoption Challenges

+

Engineers comfortable with established, separate hardware or software design practices may be reluctant to adopt unfamiliar collaborative workflows. Despite the long-term benefits, projects could face friction in transitioning to co-design.

+
+
+
+
+

10.5 Software for AI Hardware

+

Specialized hardware accelerators like GPUs, TPUs, and FPGAs are essential to delivering high-performance artificial intelligence applications. However, an extensive software stack is required to leverage these hardware platforms effectively, spanning the entire development and deployment lifecycle. Frameworks and libraries form the backbone of AI hardware, offering sets of robust, pre-built code, algorithms, and functions specifically optimized to perform various AI tasks on different hardware. They are designed to simplify the complexities of utilizing the hardware from scratch, which can be time-consuming and prone to error. Software plays an important role in the following:

+
    +
  • Providing programming abstractions and models like CUDA and OpenCL to map computations onto accelerators.
  • +
  • Integrating accelerators into popular deep learning frameworks like TensorFlow and PyTorch.
  • +
  • Compilers and tools to optimize across the hardware-software stack.
  • +
  • Simulation platforms to model hardware and software together.
  • +
  • Infrastructure to manage deployment on accelerators.
  • +
+

This expansive software ecosystem is as important as the hardware in delivering performant and efficient AI applications. This section overviews the tools available at each stack layer to enable developers to build and run AI systems powered by hardware acceleration.

+
+

10.5.1 Programming Models

+

Programming models provide abstractions to map computations and data onto heterogeneous hardware accelerators:

+
    +
  • CUDA: Nvidia’s parallel programming model to leverage GPUs using extensions to languages like C/C++. Allows launching kernels across GPU cores (Luebke 2008).
  • +
  • OpenCL: Open standard for writing programs spanning CPUs, GPUs, FPGAs, and other accelerators. Specifies a heterogeneous computing framework (Munshi 2009).
  • +
  • OpenGL/WebGL: 3D graphics programming interfaces that can map general-purpose code to GPU cores (Segal and Akeley 1999).
  • +
  • Verilog/VHDL: Hardware description languages (HDLs) used to configure FPGAs as AI accelerators by specifying digital circuits (Gannot and Ligthart 1994).
  • +
  • TVM: A Compiler framework providing a Python frontend to optimize and map deep learning models onto diverse hardware backends (Chen et al. 2018).
  • +
+
Luebke, David. 2008. “CUDA: Scalable Parallel Programming for High-Performance Scientific Computing.” In 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 836–38. IEEE. https://doi.org/10.1109/isbi.2008.4541126.
Munshi, Aaftab. 2009. “The OpenCL Specification.” In 2009 IEEE Hot Chips 21 Symposium (HCS), 1–314. IEEE. https://doi.org/10.1109/hotchips.2009.7478342.
Segal, Mark, and Kurt Akeley. 1999. “The OpenGL Graphics System: A Specification (Version 1.1).”
Gannot, G., and M. Ligthart. 1994. “Verilog HDL Based FPGA Design.” In International Verilog HDL Conference, 86–92. IEEE. https://doi.org/10.1109/ivc.1994.323743.
Chen, Tianqi, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, et al. 2018. “TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.” In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 578–94.

Key challenges include expressing parallelism, managing memory across devices, and matching algorithms to hardware capabilities. Abstractions must balance portability with allowing hardware customization. Programming models enable developers to harness accelerators without hardware expertise. These details are discussed in the AI frameworks section.
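The core abstraction shared by these models is the data-parallel kernel launch. As a rough illustration in plain Python (the helper names below are invented for this sketch, not part of any real API), a kernel computes one output element per thread index, and a launch runs it across the whole grid:

```python
# SPMD sketch: each "thread" runs the same kernel body for its own
# index i, the way a CUDA kernel runs across GPU cores.
def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a GPU grid launch: on real hardware these
    # iterations execute in parallel across many cores.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 10.0, 10.0, 10.0]
out = [0.0] * len(x)
launch(saxpy_kernel, len(x), 2.0, x, y, out)
# out is now [12.0, 14.0, 16.0, 18.0]
```

Expressing the computation per independent index is what lets a runtime map it onto thousands of cores without the developer managing threads by hand.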


Exercise 10.1 (Software for AI Hardware - TVM)  


We’ve learned that fancy AI hardware needs special software to work magic. TVM is like a super-smart translator, turning your code into instructions that accelerators understand. In this Colab, we’ll use TVM to make a pretend accelerator called VTA do matrix multiplication super fast. Ready to see how software powers up hardware?

+


10.5.2 Libraries and Runtimes

+

Specialized libraries and runtimes provide software abstractions to access and maximize the utilization of AI accelerators:

  • Math Libraries: Highly optimized implementations of linear algebra primitives like GEMM, FFTs, convolutions, etc., tailored to the target hardware. Nvidia cuBLAS, Intel MKL, and Arm compute libraries are examples.
  • Framework Integrations: Libraries to accelerate deep learning frameworks like TensorFlow, PyTorch, and MXNet on supported hardware. For example, cuDNN accelerates CNNs on Nvidia GPUs.
  • Runtimes: Software to handle accelerator execution, including scheduling, synchronization, memory management, and other tasks. Nvidia TensorRT is an inference optimizer and runtime.
  • Drivers and Firmware: Low-level software to interface with hardware, initialize devices, and handle execution. Vendors like Xilinx provide drivers for their accelerator boards.

For instance, PyTorch integrates the cuDNN and cuBLAS libraries to accelerate training on Nvidia GPUs. The TensorFlow XLA runtime optimizes and compiles models for accelerators like TPUs. Drivers initialize devices and offload operations.

+

The challenges include efficiently partitioning and scheduling workloads across heterogeneous devices like multi-GPU nodes. Runtimes must also minimize the overhead of data transfers and synchronization.

+

Libraries, runtimes, and drivers provide optimized building blocks that deep learning developers can leverage to tap into accelerator performance without hardware programming expertise. Their optimization is essential for production deployments.
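To make the role of these math libraries concrete, the following is the textbook GEMM computation that cuBLAS and MKL implement. This naive pure-Python version is only a sketch of the arithmetic; the libraries accelerate exactly this operation with tiling, vectorization, and hardware-specific tuning, often by several orders of magnitude:

```python
# Naive GEMM: C = A @ B, the core primitive optimized by math
# libraries such as cuBLAS and Intel MKL.
def gemm(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i][j] += A[i][p] * B[p][j]
    return C

gemm([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # [[19.0, 22.0], [43.0, 50.0]]
```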


10.5.3 Optimizing Compilers

+

Optimizing compilers play a key role in extracting maximum performance and efficiency from hardware accelerators for AI workloads. They apply optimizations spanning algorithmic changes, graph-level transformations, and low-level code generation.

  • Algorithm Optimization: Techniques like quantization, pruning, and neural architecture search to enhance model efficiency and match hardware capabilities.
  • Graph Optimizations: Graph-level transformations like operator fusion, rewriting, and layout changes to optimize performance on target hardware.
  • Code Generation: Generating optimized low-level code for accelerators from high-level models and frameworks.
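Operator fusion can be illustrated in miniature: two adjacent elementwise operators are merged into a single pass over the data, eliminating the intermediate buffer and the extra memory traffic it implies. This is a minimal sketch of the idea, not the fusion pass of any particular compiler:

```python
# Unfused: two elementwise ops, each making a full pass over the
# data and materializing an intermediate list.
def scale(x, a):
    return [a * v for v in x]

def relu(x):
    return [max(0.0, v) for v in x]

# Fused: one pass, no intermediate buffer.
def fused_scale_relu(x, a):
    return [max(0.0, a * v) for v in x]

x = [-1.0, 2.0, -3.0, 4.0]
assert fused_scale_relu(x, 2.0) == relu(scale(x, 2.0))
```

On accelerators, avoiding the intermediate buffer matters because memory bandwidth, not arithmetic, usually limits elementwise operators.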

For example, the TVM open compiler stack applies quantization for a BERT model targeting Arm GPUs. It fuses pointwise convolution operations and transforms the weight layout to optimize memory access. Finally, it emits optimized OpenGL code to run the GPU workload.

+

Key compiler optimizations include maximizing parallelism, improving data locality and reuse, minimizing memory footprint, and exploiting custom hardware operations. Compilers build and optimize machine learning workloads holistically across hardware components like CPUs, GPUs, and other accelerators.

+

However, mapping complex models efficiently introduces challenges, such as partitioning workloads across heterogeneous devices. Production-level compilers also require extensive tuning time on representative workloads. Still, optimizing compilers are indispensable in unlocking the full capabilities of AI accelerators.


10.5.4 Simulation and Modeling

+

Simulation software is important in hardware-software co-design. It enables joint modeling of proposed hardware architectures and software stacks:

  • Hardware Simulation: Platforms like Gem5 allow detailed simulation of hardware components like pipelines, caches, interconnects, and memory hierarchies. Engineers can model hardware changes without physical prototyping (Binkert et al. 2011).
  • Software Simulation: Compiler stacks like TVM support the simulation of machine learning workloads to estimate performance on target hardware architectures. This assists with software optimizations.
  • Co-simulation: Unified platforms like SCALE-Sim (Samajdar et al. 2018) integrate hardware and software simulation into a single tool. This enables what-if analysis to quantify the system-level impacts of cross-layer optimizations early in the design cycle.
Binkert, Nathan, Bradford Beckmann, Gabriel Black, Steven K. Reinhardt, Ali Saidi, Arkaprava Basu, Joel Hestness, et al. 2011. “The Gem5 Simulator.” ACM SIGARCH Computer Architecture News 39 (2): 1–7. https://doi.org/10.1145/2024716.2024718.
Samajdar, Ananda, Yuhao Zhu, Paul Whatmough, Matthew Mattina, and Tushar Krishna. 2018. “SCALE-Sim: Systolic CNN Accelerator Simulator.” arXiv Preprint abs/1811.02883. https://arxiv.org/abs/1811.02883.

For example, an FPGA-based AI accelerator design could be described in the Verilog hardware description language and synthesized into a model for simulation. Verilog is well suited to specifying the accelerator architecture’s digital logic: the datapaths, control logic, on-chip memories, and other components implemented in the FPGA fabric. Once the Verilog design is complete, it can be synthesized into a model that simulates the hardware’s behavior in an environment such as Gem5. Gem5 is useful for this task because it models full systems, including processors, caches, buses, and custom accelerators, and it supports interfacing Verilog hardware models to the simulation, enabling unified system modeling.

+

The synthesized FPGA accelerator model could then have ML workloads simulated using TVM compiled onto it within the Gem5 environment for unified modeling. TVM allows optimized compilation of ML models onto heterogeneous hardware like FPGAs. Running TVM-compiled workloads on the accelerator within the Gem5 simulation provides an integrated way to validate and refine the hardware design, software stack, and system integration before physically realizing the accelerator on a real FPGA.

+

This type of co-simulation provides estimations of overall metrics like throughput, latency, and power to guide co-design before expensive physical prototyping. They also assist with partitioning optimizations between hardware and software to guide design tradeoffs.
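A crude stand-in for such estimation is a roofline-style analytical model, in which latency is bounded by the slower of compute time and memory-transfer time. The hardware parameters below are invented purely for illustration; a real co-simulation would derive them from the modeled design:

```python
# Roofline-style latency estimate for a 512x512x512 fp32 GEMM on a
# hypothetical accelerator (peak compute and bandwidth are made up).
flops = 2 * 512 * 512 * 512       # multiply-adds in the GEMM
bytes_moved = 3 * 512 * 512 * 4   # three fp32 matrices through memory
peak_flops = 1e12                 # 1 TFLOP/s peak compute
mem_bw = 100e9                    # 100 GB/s memory bandwidth

compute_t = flops / peak_flops      # time if compute-bound
memory_t = bytes_moved / mem_bw     # time if bandwidth-bound
latency = max(compute_t, memory_t)  # this workload is compute-bound
```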

+

However, accuracy in modeling subtle low-level interactions between components is limited. Quantified simulations are estimates but cannot wholly replace physical prototypes and testing. Still, unified simulation and modeling provide invaluable early insights into system-level optimization opportunities during the co-design process.


10.6 Benchmarking AI Hardware

+

Benchmarking is a critical process that quantifies and compares the performance of various hardware platforms designed to speed up artificial intelligence applications. It guides purchasing decisions, development focus, and performance optimization efforts for hardware manufacturers and software developers.

+

The benchmarking chapter explores this topic in great detail, explaining why it has become an indispensable part of the AI hardware development cycle and how it impacts the broader technology landscape. Here, we will briefly review the main concepts, but we recommend that you refer to the chapter for more details.

+

Benchmarking suites such as MLPerf, Fathom, and AI Benchmark offer a set of standardized tests that can be used across different hardware platforms. These suites measure AI accelerator performance across various neural networks and machine learning tasks, from basic image classification to complex language processing. By providing a common ground for comparison, they help ensure that performance claims are consistent and verifiable. These “tools” are applied not only to guide the development of hardware but also to ensure that the software stack leverages the full potential of the underlying architecture.

  • MLPerf: Includes a broad set of benchmarks covering both training (Mattson et al. 2020) and inference (Reddi et al. 2020) for a range of machine learning tasks.
  • Fathom: Focuses on core operations in deep learning models, emphasizing their execution on different architectures (Adolf et al. 2016).
  • AI Benchmark: Targets mobile and consumer devices, assessing AI performance in end-user applications (Ignatov et al. 2018).
Mattson, Peter, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, et al. 2020. “MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance.” IEEE Micro 40 (2): 8–16. https://doi.org/10.1109/mm.2020.2974843.
Reddi, Vijay Janapa, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, et al. 2020. “MLPerf Inference Benchmark.” In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), 446–59. IEEE. https://doi.org/10.1109/isca45697.2020.00045.
Adolf, Robert, Saketh Rama, Brandon Reagen, Gu-Yeon Wei, and David Brooks. 2016. “Fathom: Reference Workloads for Modern Deep Learning Methods.” In 2016 IEEE International Symposium on Workload Characterization (IISWC), 1–10. IEEE. https://doi.org/10.1109/iiswc.2016.7581275.
Ignatov, Andrey, Radu Timofte, William Chou, Ke Wang, Max Wu, Tim Hartley, and Luc Van Gool. 2018. “AI Benchmark: Running Deep Neural Networks on Android Smartphones.”

Benchmarks also have performance metrics that are the quantifiable measures used to evaluate the effectiveness of AI accelerators. These metrics provide a comprehensive view of an accelerator’s capabilities and are used to guide the design and selection process for AI systems. Common metrics include:

  • Throughput: Usually measured in operations per second, this metric indicates the volume of computations an accelerator can handle.
  • Latency: The time delay from input to output in a system, vital for real-time processing tasks.
  • Energy Efficiency: Calculated as computations per watt, representing the tradeoff between performance and power consumption.
  • Cost Efficiency: The cost of operation relative to performance, an essential metric for budget-conscious deployments.
  • Accuracy: In inference tasks, the precision of computations is critical and sometimes balanced against speed.
  • Scalability: The ability of the system to maintain performance gains as the computational load scales up.
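Several of these metrics fall out of simple measurements. A sketch with made-up numbers for a single benchmark run of a hypothetical accelerator:

```python
# Illustrative (invented) measurements from one benchmark run.
ops = 2.0e12        # total operations executed during the run
seconds = 0.5       # wall-clock time for the run
watts = 40.0        # average power draw
inferences = 1000   # inferences completed in the run

throughput = ops / seconds                 # 4.0e12 ops/s
energy_eff = throughput / watts            # 1.0e11 ops/s per watt
latency_ms = 1000 * seconds / inferences   # 0.5 ms mean per inference
```

Cost efficiency follows the same pattern, dividing throughput by the hardware's amortized hourly cost rather than its power draw.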

Benchmark results give insights beyond just numbers—they can reveal bottlenecks in the software and hardware stack. For example, benchmarks may show how increased batch size improves GPU utilization by providing more parallelism or how compiler optimizations boost TPU performance. These learnings enable continuous optimization (Zhihao Jia, Zaharia, and Aiken 2019).
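The batch-size effect can be captured with a toy cost model: each launch pays a fixed overhead that is amortized over more samples as the batch grows. The constants below are illustrative, not measured:

```python
# Toy per-launch cost model: fixed overhead plus a per-sample term.
# Constants are invented for illustration.
def time_per_sample(batch, overhead=1.0, per_sample=0.1):
    # Per-sample time falls as the fixed overhead is amortized.
    return (overhead + per_sample * batch) / batch

small = time_per_sample(1)    # 1.1 time units per sample
large = time_per_sample(32)   # ~0.13 time units per sample
```

Real utilization curves flatten once the accelerator saturates, which is exactly the kind of knee a benchmark sweep over batch sizes reveals.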

+
Jia, Zhihao, Matei Zaharia, and Alex Aiken. 2019. “Beyond Data and Model Parallelism for Deep Neural Networks.” In Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 - April 2, 2019, edited by Ameet Talwalkar, Virginia Smith, and Matei Zaharia. mlsys.org. https://proceedings.mlsys.org/book/265.pdf.
Zhu, Hongyu, Mohamed Akrout, Bojian Zheng, Andrew Pelegris, Anand Jayarajan, Amar Phanishayee, Bianca Schroeder, and Gennady Pekhimenko. 2018. “Benchmarking and Analyzing Deep Neural Network Training.” In 2018 IEEE International Symposium on Workload Characterization (IISWC), 88–100. IEEE. https://doi.org/10.1109/iiswc.2018.8573476.

Standardized benchmarking provides a quantified, comparable evaluation of AI accelerators to inform design, purchasing, and optimization. However, real-world performance validation remains essential as well (H. Zhu et al. 2018).


10.7 Challenges and Solutions

+

AI accelerators offer impressive performance improvements, but significant portability and compatibility challenges often hinder their integration into the broader AI landscape. The crux of the issue lies in the diversity of the AI ecosystem—a vast array of machine learning accelerators, frameworks, and programming languages exist, each with its unique features and requirements.


10.7.1 Portability/Compatibility Issues

+

Developers frequently encounter difficulties transferring their AI models from one hardware environment to another. For example, a machine learning model developed for a desktop environment in Python using the PyTorch framework, optimized for an Nvidia GPU, may not easily transition to a more constrained device such as the Arduino Nano 33 BLE. This complexity stems from stark differences in programming requirements: Python and PyTorch on the desktop versus a C++ environment on an Arduino, not to mention the shift from the x86 architecture to the Arm ISA.

+

These divergences highlight the intricacy of portability within AI systems. Moreover, the rapid advancement in AI algorithms and models means that hardware accelerators must continually adapt, creating a moving target for compatibility. The absence of universal standards and interfaces compounds the issue, making deploying AI solutions consistently across various devices and platforms challenging.


Solutions and Strategies

+

To address these hurdles, the AI industry is moving towards several solutions:

Standardization Initiatives

The Open Neural Network Exchange (ONNX) is at the forefront of this pursuit, proposing an open and shared ecosystem that promotes model interchangeability. ONNX facilitates the use of AI models across various frameworks, allowing models trained in one environment to be efficiently deployed in another, significantly reducing the need for time-consuming rewrites or adjustments.
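The interchange idea can be sketched in miniature. ONNX itself defines a protobuf-based graph format with a standardized operator set; the toy JSON format and op names below are invented purely to show how a serialized, framework-neutral graph lets any conforming runtime execute the same model:

```python
import json

# Toy interchange format: a model serialized as a framework-neutral
# graph. Any runtime that understands the op set can execute it,
# regardless of which framework produced it.
model = {"nodes": [{"op": "mul", "const": 2.0},
                   {"op": "add", "const": 1.0}]}
serialized = json.dumps(model)  # what gets shipped between tools

def run(serialized_model, x):
    # A minimal "runtime": interpret the graph node by node.
    ops = {"mul": lambda v, c: v * c, "add": lambda v, c: v + c}
    for node in json.loads(serialized_model)["nodes"]:
        x = ops[node["op"]](x, node["const"])
    return x

run(serialized, 3.0)  # (3 * 2) + 1 = 7.0
```

The serialized graph, not the exporting framework's code, is the contract between tools, which is what makes retargeting to a new runtime possible without rewriting the model.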

Cross-Platform Frameworks

Complementing the standardization efforts, cross-platform frameworks such as TensorFlow Lite and PyTorch Mobile have been developed specifically to create cohesion between diverse computational environments ranging from desktops to mobile and embedded devices. These frameworks offer streamlined, lightweight versions of their parent frameworks, ensuring compatibility and functional integrity across different hardware types without sacrificing performance. This ensures that developers can create applications with the confidence that they will work on many devices, bridging a gap that has traditionally posed a considerable challenge in AI development.

Hardware-agnostic Platforms

The rise of hardware-agnostic platforms has also played an important role in democratizing the use of AI. By creating environments where AI applications can be executed on various accelerators, these platforms remove the burden of hardware-specific coding from developers. This abstraction simplifies the development process and opens up new possibilities for innovation and application deployment, free from the constraints of hardware specifications.

Advanced Compilation Tools

In addition, the advent of advanced compilation tools like TVM, an end-to-end tensor compiler, offers an optimized path through the jungle of diverse hardware architectures. TVM equips developers with the means to fine-tune machine learning models for a broad spectrum of computational substrates, ensuring optimal performance and avoiding manual model adjustment each time there is a shift in the underlying hardware.

Community and Industry Collaboration

The collaboration between open-source communities and industry consortia cannot be understated. These collective bodies are instrumental in forming shared standards and best practices that all developers and manufacturers can adhere to. Such collaboration fosters a more unified and synergistic AI ecosystem, significantly diminishing the prevalence of portability issues and smoothing the path toward global AI integration and advancement. Through these combined efforts, AI is steadily moving toward a future where seamless model deployment across various platforms becomes a standard rather than an exception.

+

Solving the portability challenges is crucial for the AI field to realize the full potential of hardware accelerators in a dynamic and diverse technological landscape. It requires a concerted effort from hardware manufacturers, software developers, and standard bodies to create a more interoperable and flexible environment. With continued innovation and collaboration, the AI community can pave the way for seamless integration and deployment of AI models across many platforms.


10.7.2 Power Consumption Concerns

+

Power consumption is a crucial issue in the development and operation of data center AI accelerators, like Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) (N. P. Jouppi et al. 2017b) (Norrie et al. 2021) (N. Jouppi et al. 2023). These powerful components are the backbone of contemporary AI infrastructure, but their high energy demands contribute to the environmental impact of technology and drive up operational costs significantly. As data processing needs become more complex, with the popularity of AI and deep learning increasing, there’s a pressing demand for GPUs and TPUs that can deliver the necessary computational power more efficiently. The impact of such advancements is two-fold: they can lower these technologies’ environmental footprint and reduce the cost of running AI applications.

———, et al. 2017b. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246.
Norrie, Thomas, Nishant Patil, Doe Hyun Yoon, George Kurian, Sheng Li, James Laudon, Cliff Young, Norman Jouppi, and David Patterson. 2021. “The Design Process for Google’s Training Chips: TPUv2 and TPUv3.” IEEE Micro 41 (2): 56–63. https://doi.org/10.1109/mm.2021.3058217.
Jouppi, Norm, George Kurian, Sheng Li, Peter Ma, Rahul Nagarajan, Lifeng Nai, Nishant Patil, et al. 2023. “TPU V4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings.” In Proceedings of the 50th Annual International Symposium on Computer Architecture. ISCA ’23. New York, NY, USA: ACM. https://doi.org/10.1145/3579371.3589350.

Emerging hardware technologies are at the cusp of revolutionizing power efficiency in this sector. Photonic computing, for instance, uses light rather than electricity to carry information, offering a promise of high-speed processing with a fraction of the power usage. We delve deeper into this and other innovative technologies in the “Emerging Hardware Technologies” section, exploring their potential to address current power consumption challenges.

+

At the edge of the network, AI accelerators are engineered to process data on devices like smartphones, IoT sensors, and smart wearables. These devices often work under severe power limitations, necessitating a careful balancing act between performance and power usage. A high-performance AI model may provide quick results but at the cost of depleting battery life swiftly and increasing thermal output, which may affect the device’s functionality and durability. The stakes are higher for devices deployed in remote or hard-to-reach areas, where consistent power supply cannot be guaranteed, underscoring the need for low-power-consuming solutions.

+

Latency issues further compound the challenge of power efficiency at the edge. Edge AI applications in fields such as autonomous driving and healthcare monitoring require speed, precision, and reliability, as delays in processing can lead to serious safety risks. For these applications, developers must optimize both the AI algorithms and the hardware design to strike an optimal balance between power consumption and latency.

+

This optimization effort is not just about making incremental improvements to existing technologies; it’s about rethinking how and where we process AI tasks. By designing AI accelerators that are both power-efficient and capable of quick processing, we can ensure these devices serve their intended purposes without unnecessary energy use or compromised performance. Such developments could propel the widespread adoption of AI across various sectors, enabling smarter, safer, and more sustainable use of technology.


10.7.3 Overcoming Resource Constraints

+

Resource constraints also pose a significant challenge for Edge AI accelerators, as these specialized hardware and software solutions must deliver robust performance within the limitations of edge devices. Due to power and size limitations, edge AI accelerators often have restricted computation, memory, and storage capacity (L. Zhu et al. 2023). This scarcity of resources necessitates a careful allocation of processing capabilities to execute machine learning models efficiently.

+
Zhu, Ligeng, Lanxiang Hu, Ji Lin, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, and Song Han. 2023. “PockEngine: Sparse and Efficient Fine-Tuning in a Pocket.” In 56th Annual IEEE/ACM International Symposium on Microarchitecture. ACM. https://doi.org/10.1145/3613424.3614307.
Lin, Ji, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. 2023. “AWQ: Activation-Aware Weight Quantization for LLM Compression and Acceleration.” arXiv.
Li, Yuhang, Xin Dong, and Wei Wang. 2020. “Additive Powers-of-Two Quantization: An Efficient Non-Uniform Discretization for Neural Networks.” In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. https://openreview.net/forum?id=BkgXT24tDS.
Wang, Tianzhe, Kuan Wang, Han Cai, Ji Lin, Zhijian Liu, Hanrui Wang, Yujun Lin, and Song Han. 2020. “APQ: Joint Search for Network Architecture, Pruning and Quantization Policy.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2075–84. IEEE. https://doi.org/10.1109/cvpr42600.2020.00215.

Moreover, managing constrained resources demands innovative approaches, including model quantization (Lin et al. 2023) (Li, Dong, and Wang 2020), pruning (Wang et al. 2020), and optimizing inference pipelines. Edge AI accelerators must strike a delicate balance between providing meaningful AI functionality and not exhausting available resources while maintaining low power consumption. Overcoming these resource constraints is crucial to ensure the successful deployment of AI at the edge, where many applications, from IoT to mobile devices, rely on efficiently using limited hardware resources to deliver real-time and intelligent decision-making.
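As a concrete illustration of one such technique, a minimal symmetric post-training quantization sketch maps float weights to int8 codes, trading a small, bounded reconstruction error for a roughly 4x smaller footprint. This shows the general idea only, not the specific scheme of any cited paper:

```python
# Symmetric post-training quantization of float weights to int8.
def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                  # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]     # int8 codes
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25, 0.8]
q, s = quantize(w)
w_hat = dequantize(q, s)
# Reconstruction error per weight is bounded by about scale / 2.
```

On edge accelerators, the int8 codes also enable cheaper integer arithmetic, which is often the larger win over the memory savings alone.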


10.8 Emerging Technologies

+

Thus far, we have discussed AI hardware technology in the context of conventional von Neumann architecture design and CMOS-based implementation. These specialized AI chips offer benefits like higher throughput and power efficiency but rely on traditional computing principles. The relentless growth in demand for AI computing power is driving innovations in integration methods for AI hardware.

+

Two leading approaches have emerged for maximizing compute density—wafer-scale integration and chiplet-based architectures—which we will discuss in this section. Looking much further ahead, we will examine emerging technologies that diverge from conventional architectures and adopt fundamentally different approaches for AI-specialized computing.

+

Some of these unconventional paradigms include neuromorphic computing, which mimics biological neural networks; quantum computing, which leverages quantum mechanical effects; and optical computing, which utilizes photons instead of electrons. Beyond novel computing substrates, new device technologies are enabling additional gains through better memory and interconnecting.

+

Examples include memristors for in-memory computing and nanophotonics for integrated photonic communication. Together, these technologies offer the potential for orders of magnitude improvements in speed, efficiency, and scalability compared to current AI hardware. We will examine these in this section.


10.8.1 Integration Methods

+

Integration methods refer to the approaches used to combine and interconnect an AI chip or system’s various computational and memory components. By closely linking the key processing elements, integration aims to maximize performance, power efficiency, and density.

+

In the past, AI computing was primarily performed on CPUs and GPUs built using conventional integration methods. These discrete components were manufactured separately and connected together on a board. However, this loose integration creates bottlenecks, such as data transfer overheads.

+

As AI workloads have grown, there is increasing demand for tighter integration between computing, memory, and communication elements. Some key drivers of integration include:

  • Minimizing data movement: Tight integration reduces latency and power for moving data between components. This improves efficiency.
  • Customization: Tailoring all system components to AI workloads allows optimizations throughout the hardware stack.
  • Parallelism: Integrating many processing elements enables massively parallel computation.
  • Density: Tighter integration allows more transistors and memory to be packed into a given area.
  • Cost: Economies of scale from large integrated systems can reduce costs.

In response, new manufacturing techniques like wafer-scale fabrication and advanced packaging now allow much higher levels of integration. The goal is to create unified, specialized AI compute complexes tailored for deep learning and other AI algorithms. Tighter integration is key to delivering the performance and efficiency needed for the next generation of AI.


Wafer-scale AI

+

Wafer-scale AI takes an extremely integrated approach, manufacturing an entire silicon wafer as one gigantic chip. This differs drastically from conventional CPUs and GPUs, which cut each wafer into many smaller individual chips. Figure 10.4 shows a comparison between the Cerebras Wafer Scale Engine 2, the largest chip ever built, and the largest GPU. While some GPUs may contain billions of transistors, they still pale in comparison to the scale of a wafer-size chip with over a trillion transistors.

+

The wafer-scale approach also diverges from more modular system-on-chip designs that still have discrete components communicating by bus. Instead, wafer-scale AI enables full customization and tight integration of computation, memory, and interconnects across the entire die.

Figure 10.4: Wafer-scale vs. GPU. Credit: Cerebras.

By designing the wafer as one integrated logic unit, data transfer between elements is minimized. This provides lower latency and power consumption than discrete system-on-chip or chiplet designs. While chiplets can offer flexibility by mixing and matching components, communication between chiplets is challenging. The monolithic nature of wafer-scale integration eliminates these inter-chip communication bottlenecks.

+

However, the ultra-large-scale also poses difficulties for manufacturability and yield with wafer-scale designs. Defects in any region of the wafer can make (certain parts of) the chip unusable. Specialized lithography techniques are required to produce such large dies. So, wafer-scale integration pursues the maximum performance gains from integration but requires overcoming substantial fabrication challenges.

+

The following video will provide additional context.


Chiplets for AI

+

Chiplet design refers to a semiconductor architecture in which a single integrated circuit (IC) is constructed from multiple smaller, individual components known as chiplets. Each chiplet is a self-contained functional block, typically specialized for a specific task or functionality. These chiplets are then interconnected on a larger substrate or package to create a cohesive system. Figure 10.5 illustrates this concept. For AI hardware, chiplets enable the mixing of different types of chips optimized for tasks like matrix multiplication, data movement, analog I/O, and specialized memories. This heterogeneous integration differs greatly from wafer-scale integration, where all logic is manufactured as one monolithic chip. Companies like Intel and AMD have adopted chiplet designs for their CPUs.

+

Chiplets are interconnected using advanced packaging techniques like high-density substrate interposers, 2.5D/3D stacking, and wafer-level packaging. This allows combining chiplets fabricated with different process nodes, specialized memories, and various optimized AI engines.

Figure 10.5: Chiplet partitioning. Credit: Vivet et al. (2021).

Vivet, Pascal, Eric Guthmuller, Yvain Thonnart, Gael Pillonnet, Cesar Fuguet, Ivan Miro-Panades, Guillaume Moritz, et al. 2021. “IntAct: A 96-Core Processor with Six Chiplets 3D-Stacked on an Active Interposer with Distributed Interconnects and Integrated Power Management.” IEEE J. Solid-State Circuits 56 (1): 79–97. https://doi.org/10.1109/jssc.2020.3036341.

Some key advantages of using chiplets for AI include:

  • Flexibility: Chiplets allow for the combination of different chip types, process nodes, and memories tailored for each function. This is more modular versus a fixed wafer-scale design.
  • Yield: Smaller chiplets have a higher yield than a gigantic wafer-scale chip. Defects are contained in individual chiplets.
  • Cost: Leverages existing manufacturing capabilities versus requiring specialized new processes. Reduces costs by reusing mature fabrication.
  • Compatibility: Can integrate with more conventional system architectures like PCIe and standard DDR memory interfaces.

However, chiplets also face integration and performance challenges:

  • Lower density compared to wafer-scale, as chiplets are limited in size.
  • Added latency when communicating between chiplets versus monolithic integration. Requires optimization for low-latency interconnect.
  • Advanced packaging adds complexity versus wafer-scale integration, though this is arguable.

The key objective of chiplets is finding the right balance between modular flexibility and integration density for optimal AI performance. Chiplets aim for efficient AI acceleration while working within the constraints of conventional manufacturing techniques. Chiplets take a middle path between the extremes of wafer-scale integration and fully discrete components. This provides practical benefits but may sacrifice some computational density and efficiency versus a theoretical wafer-size system.

+
+
+
+

10.8.2 Neuromorphic Computing

+

Neuromorphic computing is an emerging field aiming to emulate the efficiency and robustness of biological neural systems for machine learning applications. A key difference from classical Von Neumann architectures is the merging of memory and processing in the same circuit (Schuman et al. 2022; Marković et al. 2020; Furber 2016), as illustrated in Figure 10.6. The structure of the brain inspires this integrated approach. A key advantage is the potential for orders of magnitude improvement in energy-efficient computation compared to conventional AI hardware. For example, estimates project 100x-1000x gains in energy efficiency versus current GPU-based systems for equivalent workloads.

+
+Marković, Danijela, Alice Mizrahi, Damien Querlioz, and Julie Grollier. 2020. “Physics for Neuromorphic Computing.” Nature Reviews Physics 2 (9): 499–510. https://doi.org/10.1038/s42254-020-0208-2. +
+Furber, Steve. 2016. “Large-Scale Neuromorphic Computing Systems.” J. Neural Eng. 13 (5): 051001. https://doi.org/10.1088/1741-2560/13/5/051001. +
+
+
+ +
+
+Figure 10.6: Comparison of the von Neumann architecture with the neuromorphic architecture. Credit: Schuman et al. (2022). +
+
+Schuman, Catherine D., Shruti R. Kulkarni, Maryam Parsa, J. Parker Mitchell, Prasanna Date, and Bill Kay. 2022. “Opportunities for Neuromorphic Computing Algorithms and Applications.” Nature Computational Science 2 (1): 10–19. https://doi.org/10.1038/s43588-021-00184-y. +
+
+

Intel and IBM are leading commercial efforts in neuromorphic hardware. Intel’s Loihi and Loihi 2 chips (Davies et al. 2018, 2021) offer programmable neuromorphic cores with on-chip learning. IBM’s NorthPole (Modha et al. 2023) device comprises over 100 million magnetic tunnel junction synapses and 68 billion transistors. These specialized chips deliver benefits like low power consumption for edge inference.

+
+Davies, Mike, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, et al. 2018. “Loihi: A Neuromorphic Manycore Processor with on-Chip Learning.” IEEE Micro 38 (1): 82–99. https://doi.org/10.1109/mm.2018.112130359. +
+Davies, Mike, Andreas Wild, Garrick Orchard, Yulia Sandamirskaya, Gabriel A. Fonseca Guerra, Prasad Joshi, Philipp Plank, and Sumedh R. Risbud. 2021. “Advancing Neuromorphic Computing with Loihi: A Survey of Results and Outlook.” Proc. IEEE 109 (5): 911–34. https://doi.org/10.1109/jproc.2021.3067593. +
+Modha, Dharmendra S., Filipp Akopyan, Alexander Andreopoulos, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, et al. 2023. “Neural Inference at the Frontier of Energy, Space, and Time.” Science 382 (6668): 329–35. https://doi.org/10.1126/science.adh1174. +
+Maass, Wolfgang. 1997. “Networks of Spiking Neurons: The Third Generation of Neural Network Models.” Neural Networks 10 (9): 1659–71. https://doi.org/10.1016/s0893-6080(97)00011-7. +

Spiking neural networks (SNNs) (Maass 1997) are computational models for neuromorphic hardware. Unlike deep neural networks communicating via continuous values, SNNs use discrete spikes that are more akin to biological neurons. This allows efficient event-based computation rather than constant processing. Additionally, SNNs consider the temporal and spatial characteristics of input data. This better mimics biological neural networks, where the timing of neuronal spikes plays an important role. However, training SNNs remains challenging due to the added temporal complexity. Figure 10.7 provides an overview of the spiking methodology: (a) Diagram of a neuron; (b) Measuring an action potential propagated along the axon of a neuron. Only the action potential is detectable along the axon; (c) The neuron’s spike is approximated with a binary representation; (d) Event-Driven Processing; (e) Active Pixel Sensor and Dynamic Vision Sensor.
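The event-based behavior described above can be sketched with a minimal leaky integrate-and-fire (LIF) neuron, a common abstraction used when modeling SNNs. All constants here are illustrative, not taken from any particular neuromorphic chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# The membrane potential leaks over time, integrates incoming
# current, and emits a discrete spike when it crosses a threshold.
def lif_neuron(input_current, v_rest=0.0, v_thresh=1.0, leak=0.9):
    """Simulate one neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in      # leaky integration of input current
        if v >= v_thresh:        # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_rest           # reset membrane potential after spiking
    return spikes

# A sustained strong input produces a spike; a weak input never does.
print(lif_neuron([0.6, 0.6, 0.6, 0.0, 0.0]))  # [1]
print(lif_neuron([0.2, 0.2, 0.2, 0.2, 0.2]))  # []
```

Note how computation only "happens" at spike events; between spikes the neuron is effectively idle, which is the source of the energy savings claimed for event-driven hardware.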

+

You can also watch the video linked below for a more detailed explanation.

+
+
+
+ +
+
+Figure 10.7: Neuromorphic spiking. Credit: Eshraghian et al. (2023). +
+
+Eshraghian, Jason K., Max Ward, Emre O. Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, and Wei D. Lu. 2023. “Training Spiking Neural Networks Using Lessons from Deep Learning.” Proc. IEEE 111 (9): 1016–54. https://doi.org/10.1109/jproc.2023.3308088. +
+
+
+

Specialized nanoelectronic devices called memristors (Chua 1971) are synaptic components in neuromorphic systems. Memristors act as nonvolatile memory with adjustable conductance, emulating the plasticity of real synapses. Memristors enable in-situ learning without separate data transfers by combining memory and processing functions. However, memristor technology has yet to reach maturity and scalability for commercial hardware.

+
+Chua, L. 1971. “Memristor-the Missing Circuit Element.” IEEE Transactions on Circuit Theory 18 (5): 507–19. https://doi.org/10.1109/tct.1971.1083337. +

The integration of photonics with neuromorphic computing (Shastri et al. 2021) has recently emerged as an active research area. Using light for computation and communication allows high speeds and reduced energy consumption. However, fully realizing photonic neuromorphic systems requires overcoming design and integration challenges.

+

Neuromorphic computing offers promising capabilities for efficient edge inference but faces obstacles around training algorithms, nanodevice integration, and system design. Ongoing multidisciplinary research across computer science, engineering, materials science, and physics will be key to unlocking this technology’s full potential for AI use cases.

+
+
+

10.8.3 Analog Computing

+

Analog computing is an emerging approach that uses analog signals and components like capacitors, inductors, and amplifiers rather than digital logic for computing. It represents information as continuous electrical signals instead of discrete 0s and 1s. This allows the computation to directly reflect the analog nature of real-world data, avoiding digitization errors and overhead.

+

Analog computing has generated renewed interest in efficient AI hardware, particularly for inference directly on low-power edge devices. Analog circuits can perform operations like the multiplications and summations at the core of neural networks with very low energy consumption. This makes analog well-suited for deploying ML models on energy-constrained end nodes. Startups like Mythic are developing analog AI accelerators.

+

While analog computing was popular in early computers, the boom of digital logic led to its decline. However, analog is compelling for niche applications requiring extreme efficiency (Haensch, Gokmen, and Puri 2019). It contrasts with digital neuromorphic approaches that still use digital spikes for computation. Analog may allow lower precision computation but requires expertise in analog circuit design. Tradeoffs around precision, programming complexity, and fabrication costs remain active research areas.

+
+Haensch, Wilfried, Tayfun Gokmen, and Ruchir Puri. 2019. “The Next Generation of Deep Learning Hardware: Analog Computing.” Proc. IEEE 107 (1): 108–22. https://doi.org/10.1109/jproc.2018.2871057. +
+Hazan, Avi, and Elishai Ezra Tsur. 2021. “Neuromorphic Analog Implementation of Neural Engineering Framework-Inspired Spiking Neuron for High-Dimensional Representation.” Front. Neurosci. 15 (February): 627221. https://doi.org/10.3389/fnins.2021.627221. +

Neuromorphic computing, which aims to emulate biological neural systems for efficient ML inference, can use analog circuits to implement the key components and behaviors of brains. For example, researchers have designed analog circuits to model neurons and synapses using capacitors, transistors, and operational amplifiers (Hazan and Ezra Tsur 2021). The capacitors can exhibit the spiking dynamics of biological neurons, while the amplifiers and transistors provide a weighted summation of inputs to mimic dendrites. Variable resistor technologies like memristors can realize analog synapses with spike-timing-dependent plasticity, which can strengthen or weaken connections based on spiking activity.
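The analog matrix-vector multiply at the heart of these designs can be understood from Ohm's law and Kirchhoff's current law: input voltages drive the rows of a crossbar, memristor conductances encode the weights, and each column current is the weighted sum. The sketch below only models the math; the conductance and voltage values are illustrative, not from a real device:

```python
# Sketch of how a memristor crossbar performs a matrix-vector
# multiply "in physics": I_j = sum_i G[i][j] * V[i]
# (Ohm's law per cell, Kirchhoff's current law per column).
def crossbar_mvm(conductances, voltages):
    """Return the column currents of an ideal crossbar."""
    n_cols = len(conductances[0])
    return [sum(g_row[j] * v for g_row, v in zip(conductances, voltages))
            for j in range(n_cols)]

G = [[0.5, 0.1],   # conductances (siemens) ~ synaptic weights
     [0.2, 0.4]]
V = [1.0, 2.0]     # input voltages ~ neuron activations

print([round(i, 3) for i in crossbar_mvm(G, V)])  # [0.9, 0.9]
```

In a real device, this entire multiply-accumulate happens in one analog read step, which is where the energy advantage over digital MACs comes from; the challenges are device variability and limited precision.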

+

Startups like SynSense have developed analog neuromorphic chips containing these biomimetic components (Bains 2020). This analog approach results in low power consumption and high scalability for edge devices versus complex digital SNN implementations.

+
+Bains, Sunny. 2020. “The Business of Building Brains.” Nature Electronics 3 (7): 348–51. https://doi.org/10.1038/s41928-020-0449-1. +

However, training analog SNNs on chips remains an open challenge. Overall, analog realization is a promising technique for delivering the efficiency, scalability, and biological plausibility envisioned with neuromorphic computing. The physics of analog components combined with neural architecture design could improve inference efficiency over conventional digital neural networks.

+
+
+

10.8.4 Flexible Electronics

+

While much of the new hardware technology in the ML workspace has been focused on optimizing and making systems more efficient, there’s a parallel trajectory aiming to adapt hardware for specific applications (Gates 2009; Musk et al. 2019; Tang et al. 2023; Tang, He, and Liu 2022; Kwon and Dong 2022). One such avenue is the development of flexible electronics for AI use cases.

+
+Gates, Byron D. 2009. “Flexible Electronics.” Science 323 (5921): 1566–67. https://doi.org/10.1126/science.1171230. +
+Tang, Xin, Hao Shen, Siyuan Zhao, Na Li, and Jia Liu. 2023. “Flexible Braincomputer Interfaces.” Nature Electronics 6 (2): 109–18. https://doi.org/10.1038/s41928-022-00913-9. +
+Tang, Xin, Yichun He, and Jia Liu. 2022. “Soft Bioelectronics for Cardiac Interfaces.” Biophysics Reviews 3 (1). https://doi.org/10.1063/5.0069516. +

Flexible electronics refer to electronic circuits and devices fabricated on flexible plastic or polymer substrates rather than rigid silicon. Unlike conventional rigid boards and chips, this allows the electronics to bend, twist, and conform to irregular shapes. Figure 10.8 shows an example of a flexible device prototype that wirelessly measures body temperature, which can be seamlessly integrated into clothing or skin patches. The flexibility and bendability of emerging electronic materials allow them to be integrated into thin, lightweight form factors that are well-suited for embedded AI and TinyML applications.

+

Flexible AI hardware can conform to curvy surfaces and operate efficiently with microwatt power budgets. Flexibility also enables rollable or foldable form factors to minimize device footprint and weight, ideal for small, portable smart devices and wearables incorporating TinyML. Another key advantage of flexible electronics compared to conventional technologies is lower manufacturing costs and simpler fabrication processes, which could democratize access to these technologies. While silicon mask sets and fabrication typically cost millions of dollars, flexible hardware can cost only tens of cents per unit to manufacture (Huang et al. 2011; Biggs et al. 2021). The potential to fabricate flexible electronics directly onto plastic films using high-throughput printing and coating processes can reduce costs and improve manufacturability at scale versus rigid AI chips (Musk et al. 2019).

+
+Huang, Tsung-Ching, Kenjiro Fukuda, Chun-Ming Lo, Yung-Hui Yeh, Tsuyoshi Sekitani, Takao Someya, and Kwang-Ting Cheng. 2011. “Pseudo-CMOS: A Design Style for Low-Cost and Robust Flexible Electronics.” IEEE Trans. Electron Devices 58 (1): 141–50. https://doi.org/10.1109/ted.2010.2088127. +
+Biggs, John, James Myers, Jedrzej Kufel, Emre Ozer, Simon Craske, Antony Sou, Catherine Ramsdale, Ken Williamson, Richard Price, and Scott White. 2021. “A Natively Flexible 32-Bit Arm Microprocessor.” Nature 595 (7868): 532–36. https://doi.org/10.1038/s41586-021-03625-w. +
+
+
+ +
+
+Figure 10.8: Flexible device prototype. Credit: Jabil Circuit. +
+
+
+

The field is enabled by advances in organic semiconductors and nanomaterials that can be deposited on thin, flexible films. However, fabrication remains challenging compared to mature silicon processes. Flexible circuits currently exhibit lower performance than rigid equivalents. Still, they promise to transform electronics into lightweight, bendable materials.

+

Flexible electronics use cases are well-suited for intimate integration with the human body. Potential medical AI applications include bio-integrated sensors, soft assistive robots, and implants that monitor or stimulate the nervous system intelligently. Specifically, flexible electrode arrays could enable higher-density, less-invasive neural interfaces compared to rigid equivalents.

+

Therefore, flexible electronics are ushering in a new era of wearables and body sensors, largely due to innovations in organic transistors. These components allow for more lightweight and bendable electronics, ideal for wearables, electronic skin, and body-conforming medical devices.

+

They are well-suited for bioelectronic devices in terms of biocompatibility, opening avenues for applications in brain and cardiac interfaces. For example, research in flexible brain-computer interfaces and soft bioelectronics for cardiac applications demonstrates the potential for wide-ranging medical applications.

+

Companies and research institutions are not only investing substantial resources in developing flexible electrodes, as showcased in Neuralink’s work (Musk et al. 2019), but are also pushing the boundaries of integrating machine learning models within these systems (Kwon and Dong 2022). These smart sensors aim for a seamless, long-lasting symbiosis with the human body.

+
+Musk, Elon et al. 2019. “An Integrated Brain-Machine Interface Platform with Thousands of Channels.” J. Med. Internet Res. 21 (10): e16194. https://doi.org/10.2196/16194. +
+Kwon, Sun Hwa, and Lin Dong. 2022. “Flexible Sensors and Machine Learning for Heart Monitoring.” Nano Energy 102 (November): 107632. https://doi.org/10.1016/j.nanoen.2022.107632. +
+Segura Anaya, L. H., Abeer Alsadoon, N. Costadopoulos, and P. W. C. Prasad. 2017. “Ethical Implications of User Perceptions of Wearable Devices.” Sci. Eng. Ethics 24 (1): 1–28. https://doi.org/10.1007/s11948-017-9872-8. +
+Goodyear, Victoria A. 2017. “Social Media, Apps and Wearable Technologies: Navigating Ethical Dilemmas and Procedures.” Qualitative Research in Sport, Exercise and Health 9 (3): 285–302. https://doi.org/10.1080/2159676x.2017.1303790. +
+Farah, Martha J. 2005. “Neuroethics: The Practical and the Philosophical.” Trends Cogn. Sci. 9 (1): 34–40. https://doi.org/10.1016/j.tics.2004.12.001. +
+Roskies, Adina. 2002. “Neuroethics for the New Millenium.” Neuron 35 (1): 21–23. https://doi.org/10.1016/s0896-6273(02)00763-8. +

Ethically, incorporating smart, machine-learning-driven sensors within the body raises important questions. Issues surrounding data privacy, informed consent, and the long-term societal implications of such technologies are the focus of ongoing work in neuroethics and bioethics (Segura Anaya et al. 2017; Goodyear 2017; Farah 2005; Roskies 2002). The field is progressing at a pace that necessitates parallel advancements in ethical frameworks to guide the responsible development and deployment of these technologies. While there are limitations and ethical hurdles to overcome, the prospects for flexible electronics are expansive and hold immense promise for future research and applications.

+
+
+

10.8.5 Memory Technologies

+

Memory technologies are critical to AI hardware, but conventional DDR DRAM and SRAM create bottlenecks. AI workloads require high bandwidth (>1 TB/s). Extreme scientific applications of AI require very low latency (<50 ns) to feed data to compute units (Duarte et al. 2022), high density (>128Gb) to store large model parameters and data sets, and excellent energy efficiency (<100 fJ/b) for embedded use (Verma et al. 2019). New memories are needed to meet these demands. Emerging options include several new technologies:

+
+Duarte, Javier, Nhan Tran, Ben Hawks, Christian Herwig, Jules Muhizi, Shvetank Prakash, and Vijay Janapa Reddi. 2022. FastML Science Benchmarks: Accelerating Real-Time Scientific Edge Machine Learning.” ArXiv Preprint abs/2207.07958. https://arxiv.org/abs/2207.07958. +
+Verma, Naveen, Hongyang Jia, Hossein Valavi, Yinqi Tang, Murat Ozatay, Lung-Yen Chen, Bonan Zhang, and Peter Deaville. 2019. “In-Memory Computing: Advances and Prospects.” IEEE Solid-State Circuits Mag. 11 (3): 43–55. https://doi.org/10.1109/mssc.2019.2922889. +
    +
  • Resistive RAM (ReRAM) can improve density with simple, passive arrays. However, challenges around variability remain (Chi et al. 2016).
  • +
  • Phase change memory (PCM) exploits the unique properties of chalcogenide glass. Crystalline and amorphous phases have different resistances. Intel’s Optane DCPMM provides fast (100ns), high endurance PCM. However, challenges include limited write cycles and high reset current (Burr et al. 2016).
  • +
  • 3D stacking can also boost memory density and bandwidth by vertically integrating memory layers with TSV interconnects (Loh 2008). For example, HBM provides 1024-bit wide interfaces.
  • +
+
+Burr, Geoffrey W., Matthew J. BrightSky, Abu Sebastian, Huai-Yu Cheng, Jau-Yi Wu, Sangbum Kim, Norma E. Sosa, et al. 2016. “Recent Progress in Phase-Change Memory Technology.” IEEE Journal on Emerging and Selected Topics in Circuits and Systems 6 (2): 146–62. https://doi.org/10.1109/jetcas.2016.2547718. +
+Loh, Gabriel H. 2008. 3D-Stacked Memory Architectures for Multi-Core Processors.” ACM SIGARCH Computer Architecture News 36 (3): 453–64. https://doi.org/10.1145/1394608.1382159. +

New memory technologies, with their innovative cell architectures and materials, are critical to unlocking the next level of AI hardware performance and efficiency. Realizing their benefits in commercial systems remains an ongoing challenge.
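The bandwidth requirements quoted earlier can be motivated with a rough back-of-the-envelope estimate. The model size and serving rate below are assumptions chosen for illustration, not measurements:

```python
# Back-of-envelope memory traffic estimate (illustrative numbers):
# serving a 1-billion-parameter model at 1000 inferences/second,
# assuming every FP16 weight is read from memory once per inference.
params = 1e9             # model parameters (assumed)
bytes_per_param = 2      # FP16 weight storage
inferences_per_s = 1000  # serving rate (assumed)

bandwidth = params * bytes_per_param * inferences_per_s  # bytes/second
print(f"{bandwidth / 1e12:.1f} TB/s")  # 2.0 TB/s
```

Even this simplified model, ignoring activations and data reuse in on-chip caches, lands above the 1 TB/s figure, which is why HBM-style stacked memory has become standard for AI accelerators.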

+

In-memory computing is gaining traction as a promising avenue for optimizing machine learning and high-performance computing workloads. At its core, the technology co-locates data storage and computation to improve energy efficiency and reduce latency (Wong et al. 2012). Two key technologies under this umbrella are Resistive RAM (ReRAM) and Processing-In-Memory (PIM).

+
+Wong, H.-S. Philip, Heng-Yuan Lee, Shimeng Yu, Yu-Sheng Chen, Yi Wu, Pang-Shiu Chen, Byoungil Lee, Frederick T. Chen, and Ming-Jinn Tsai. 2012. MetalOxide RRAM.” Proc. IEEE 100 (6): 1951–70. https://doi.org/10.1109/jproc.2012.2190369. +
+Chi, Ping, Shuangchen Li, Cong Xu, Tao Zhang, Jishen Zhao, Yongpan Liu, Yu Wang, and Yuan Xie. 2016. “Prime: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory.” ACM SIGARCH Computer Architecture News 44 (3): 27–39. https://doi.org/10.1145/3007787.3001140. +

ReRAM (Wong et al. 2012) and PIM (Chi et al. 2016) are the backbones for in-memory computing, storing and computing data in the same location. ReRAM focuses on issues of uniformity, endurance, retention, multi-bit operation, and scalability. On the other hand, PIM integrates processing units directly into memory arrays, specialized for tasks like matrix multiplication, which are central in AI computations.

+

These technologies find applications in AI workloads and high-performance computing, where the synergy of storage and computation can lead to significant performance gains. The architecture is particularly useful for compute-intensive tasks common in machine learning models.

+

While in-memory computing technologies like ReRAM and PIM offer exciting prospects for efficiency and performance, they come with their own challenges, such as data uniformity and scalability issues in ReRAM (Imani, Rahimi, and S. Rosing 2016). Nonetheless, the field is ripe for innovation, and addressing these limitations can open new frontiers in AI and high-performance computing.

+
+Imani, Mohsen, Abbas Rahimi, and Tajana S. Rosing. 2016. “Resistive Configurable Associative Memory for Approximate Computing.” In Proceedings of the 2016 Design, Automation &Amp; Test in Europe Conference &Amp; Exhibition (DATE), 1327–32. IEEE; Research Publishing Services. https://doi.org/10.3850/9783981537079_0454. +
+
+

10.8.6 Optical Computing

+

In AI acceleration, a burgeoning area of interest lies in novel technologies that deviate from traditional paradigms. Some emerging technologies mentioned above, such as flexible electronics, in-memory computing, or even neuromorphic computing, are close to becoming a reality, given their ground-breaking innovations and applications. One of the promising and leading next-gen frontiers is optical computing technologies (H. Zhou et al. 2022). Companies like Lightmatter are pioneering the use of light photonics for calculations, thereby utilizing photons instead of electrons for data transmission and computation.

+
+Zhou, Hailong, Jianji Dong, Junwei Cheng, Wenchan Dong, Chaoran Huang, Yichen Shen, Qiming Zhang, et al. 2022. “Photonic Matrix Multiplication Lights up Photonic Accelerator and Beyond.” Light: Science &Amp; Applications 11 (1): 30. https://doi.org/10.1038/s41377-022-00717-8. +
+Shastri, Bhavin J., Alexander N. Tait, T. Ferreira de Lima, Wolfram H. P. Pernice, Harish Bhaskaran, C. D. Wright, and Paul R. Prucnal. 2021. “Photonics for Artificial Intelligence and Neuromorphic Computing.” Nat. Photonics 15 (2): 102–14. https://doi.org/10.1038/s41566-020-00754-y. +

Optical computing utilizes photons and photonic devices rather than traditional electronic circuits for computing and data processing. It takes inspiration from fiber optic communication links that rely on light for fast, efficient data transfer (Shastri et al. 2021). Light can propagate with much less loss than semiconductors’ electrons, enabling inherent speed and efficiency benefits.

+

Some specific advantages of optical computing include:

+
    +
  • High throughput: Photons can transmit with bandwidths >100 Tb/s using wavelength division multiplexing.
  • +
  • Low latency: Photons interact on femtosecond timescales, millions of times faster than silicon transistors.
  • +
  • Parallelism: Multiple data signals can propagate simultaneously through the same optical medium.
  • +
  • Low power: Photonic circuits utilizing waveguides and resonators can achieve complex logic and memory with only microwatts of power.
  • +
+

However, optical computing currently faces significant challenges:

+
    +
  • Lack of optical memory equivalent to electronic RAM
  • +
  • Requires conversion between optical and electrical domains.
  • +
  • Limited set of available optical components compared to rich electronics ecosystem.
  • +
  • Immature integration methods to combine photonics with traditional CMOS chips.
  • +
  • Complex programming models required to handle parallelism.
  • +
+

As a result, optical computing is still in the very early research stage despite its promising potential. However, technical breakthroughs could enable it to complement electronics and unlock performance gains for AI workloads. Companies like Lightmatter are pioneering early optical AI accelerators. In the long term, if key challenges are overcome, it could represent a revolutionary computing substrate.

+
+
+

10.8.7 Quantum Computing

+

Quantum computers leverage unique phenomena of quantum physics, like superposition and entanglement, to represent and process information in ways not possible classically. Instead of binary bits, the fundamental unit is the quantum bit or qubit. Unlike classical bits, which are limited to 0 or 1, qubits can exist simultaneously in a superposition of both states due to quantum effects.

+

Multiple qubits can also be entangled, leading to exponential information density but introducing probabilistic results. Superposition enables parallel computation on all possible states, while entanglement allows nonlocal correlations between qubits.
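Superposition can be made concrete with a toy state-vector simulation of a single qubit. This is ordinary classical simulation with assumed amplitudes, not quantum hardware; applying a Hadamard gate to |0> yields an equal superposition, so a measurement returns 0 or 1 with probability 0.5 each:

```python
import math

# Toy single-qubit state-vector simulation (illustrative, classical).
# A qubit state is a pair of complex amplitudes (a, b) for |0> and |1>,
# with measurement probabilities |a|^2 and |b|^2.
def hadamard(state):
    """Apply the Hadamard gate H to a single-qubit state vector."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

state = (1.0, 0.0)                       # qubit initialized to |0>
state = hadamard(state)                  # equal superposition of |0> and |1>
probs = [abs(amp) ** 2 for amp in state]
print([round(p, 3) for p in probs])      # [0.5, 0.5]
```

The catch, and the reason quantum computers cannot simply be simulated classically at scale, is that an n-qubit state vector has 2^n amplitudes, so this brute-force approach becomes infeasible beyond a few dozen qubits.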

+

Quantum algorithms carefully manipulate these inherently quantum mechanical effects to solve problems like optimization or search more efficiently than their classical counterparts in theory. Potential applications of quantum computing to machine learning include:

+
    +
  • Faster training of deep neural networks by exploiting quantum parallelism for linear algebra operations.
  • +
  • Efficient quantum ML algorithms make use of the unique capabilities of qubits.
  • +
  • Quantum neural networks with inherent quantum effects baked into the model architecture.
  • +
  • Quantum optimizers leveraging quantum annealing or adiabatic algorithms for combinatorial optimization problems.
  • +
+

However, quantum states are fragile and prone to errors that require error-correcting protocols. The non-intuitive nature of quantum programming also introduces challenges not present in classical computing. Current limitations include:

+
    +
  • Noisy and fragile quantum bits are difficult to scale up; today's largest quantum computers contain only on the order of a thousand physical qubits.
  • +
  • Restricted set of available quantum gates and circuits relative to classical programming.
  • +
  • Lack of datasets and benchmarks to evaluate quantum ML in practical domains.
  • +
+

While meaningful quantum advantage for ML remains far off, active research at companies like D-Wave, Rigetti, and IonQ is advancing quantum computer engineering and quantum algorithms. Major technology companies like Google, IBM, and Microsoft are also actively exploring quantum computing. Google announced a 72-qubit quantum processor called Bristlecone, and Microsoft maintains an active research program in topological quantum computing and collaborates with quantum startup IonQ.

+

Quantum techniques may first make inroads into optimization before more generalized ML adoption. Realizing quantum ML’s full potential awaits major milestones in quantum hardware development and ecosystem maturity.

+
+
+ +
+

10.10 Conclusion

+

Specialized hardware acceleration has become indispensable for enabling performant and efficient artificial intelligence applications as models and datasets explode in complexity. This chapter examined the limitations of general-purpose processors like CPUs for AI workloads. Their lack of parallelism and computational throughput cannot train or run state-of-the-art deep neural networks quickly. These motivations have driven innovations in customized accelerators.

+

We surveyed GPUs, TPUs, FPGAs, and ASICs specifically designed for the math-intensive operations inherent to neural networks. By covering this spectrum of options, we aimed to provide a framework for reasoning through accelerator selection based on constraints around flexibility, performance, power, cost, and other factors.

+

We also explored the role of software in actively enabling and optimizing AI acceleration. This spans programming abstractions, frameworks, compilers, and simulators. We discussed hardware-software co-design as a proactive methodology for building more holistic AI systems by closely integrating algorithm innovation and hardware advances.

+

But there is so much more to come! Exciting frontiers like analog computing, optical neural networks, and quantum machine learning represent active research directions that could unlock orders of magnitude improvements in efficiency, speed, and scale compared to present paradigms.

+

Ultimately, specialized hardware acceleration remains indispensable for unlocking the performance and efficiency necessary to fulfill the promise of artificial intelligence from cloud to edge. We hope this chapter provides useful background and insights into the rapid innovation occurring in this domain.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

Coming soon.

+
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+ +
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/image_classification/image_classification.html b/contents/image_classification/image_classification.html new file mode 100644 index 00000000..e3cd7502 --- /dev/null +++ b/contents/image_classification/image_classification.html @@ -0,0 +1,1689 @@ + + + + + + + + + +Machine Learning Systems - CV on Nicla Vision + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

CV on Nicla Vision

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+

+
DALL·E 3 Prompt: Cartoon in a 1950s style featuring a compact electronic device with a camera module placed on a wooden table. The screen displays blue robots on one side and green periquitos on the other. LED lights on the device indicate classifications, while characters in retro clothing observe with interest.
+
+
+
+

Introduction

+

As we initiate our studies into embedded machine learning or TinyML, it’s impossible to overlook the transformative impact of Computer Vision (CV) and Artificial Intelligence (AI) in our lives. These two intertwined disciplines redefine what machines can perceive and accomplish, from autonomous vehicles and robotics to healthcare and surveillance.

+

More and more, we are facing an artificial intelligence (AI) revolution where, as stated by Gartner, Edge AI has a very high impact potential, and the time is now!

+
+
+

+
+
+

In the “bullseye” of the Radar is the Edge Computer Vision, and when we talk about Machine Learning (ML) applied to vision, the first thing that comes to mind is Image Classification, a kind of ML “Hello World”!

+

This exercise will explore a computer vision project utilizing Convolutional Neural Networks (CNNs) for real-time image classification. Leveraging TensorFlow’s robust ecosystem, we’ll implement a pre-trained MobileNet model and adapt it for edge deployment. The focus will be on optimizing the model to run efficiently on resource-constrained hardware without sacrificing accuracy.

+

We’ll employ techniques like quantization and pruning to reduce the computational load. By the end of this tutorial, you’ll have a working prototype capable of classifying images in real-time, all running on a low-power embedded system based on the Arduino Nicla Vision board.
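As a preview of the quantization step, the core idea is mapping 32-bit float weights to 8-bit integers using a scale and zero point, which is how tools like TensorFlow Lite shrink models. The values below are illustrative, not taken from the actual project model:

```python
# Sketch of affine (scale + zero-point) int8 quantization, the
# scheme used by post-training quantization tools. Numbers here
# are illustrative examples, not real model weights.
def quantize(x, scale, zero_point):
    """Map a float value to a clamped int8 code."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))      # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Recover an approximate float value from an int8 code."""
    return (q - zero_point) * scale

scale, zp = 0.02, 0
w = 0.253                              # a hypothetical float32 weight
q = quantize(w, scale, zp)
print(q, round(dequantize(q, scale, zp), 2))  # 13 0.26

# Out-of-range values saturate rather than overflow:
print(quantize(10.0, scale, zp))       # 127
```

The small gap between 0.253 and the recovered 0.26 is quantization error; the accuracy of the deployed model depends on keeping this error small across all layers.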

+
+
+

Computer Vision

+

At its core, computer vision aims to enable machines to interpret and make decisions based on visual data from the world, essentially mimicking the capability of the human optical system. Conversely, AI is a broader field encompassing machine learning, natural language processing, and robotics, among other technologies. When you bring AI algorithms into computer vision projects, you supercharge the system’s ability to understand, interpret, and react to visual stimuli.

+

When discussing Computer Vision projects applied to embedded devices, the most common applications that come to mind are Image Classification and Object Detection.

+
+
+

+
+
+

Both models can be implemented on tiny devices like the Arduino Nicla Vision and used on real projects. In this chapter, we will cover Image Classification.

+
+
+

Image Classification Project Goal

+

The first step in any ML project is to define the goal. In this case, it is to detect and classify two specific objects present in one image. For this project, we will use two small toys: a robot and a small Brazilian parrot (named Periquito). Also, we will collect images of a background where those two objects are absent.

+
+
+

+
+
+
+
+

Data Collection

+

Once you have defined your Machine Learning project goal, the next and most crucial step is the dataset collection. You can use the Edge Impulse Studio, the OpenMV IDE we installed, or even your phone for the image capture. Here, we will use the OpenMV IDE for that.


Collecting Dataset with OpenMV IDE


First, create a folder on your computer where your data will be saved, for example, “data.” Next, on the OpenMV IDE, go to Tools > Dataset Editor and select New Dataset to start the dataset collection:


The IDE will ask you to open the file where your data will be saved and choose the “data” folder that was created. Note that new icons will appear on the Left panel.


Using the upper icon (1), enter the first class name, for example, “periquito”:


Running the dataset_capture_script.py and clicking on the camera icon (2) will start capturing images:


Repeat the same procedure with the other classes.


We suggest around 60 images from each category. Try to capture different angles, backgrounds, and light conditions.


The stored images use a QVGA frame size of 320x240 and the RGB565 (color pixel format).
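A quick back-of-the-envelope calculation shows the memory footprint this implies (RGB565 packs each pixel into 16 bits):

```python
# Size of one raw QVGA frame in the RGB565 format
width, height = 320, 240
bytes_per_pixel = 2  # RGB565: 5 bits R + 6 bits G + 5 bits B = 16 bits
frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)   # 153600 bytes, i.e., about 150 KB per raw frame
```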


After capturing your dataset, close the Dataset Editor Tool via Tools > Dataset Editor.

On your computer, you will end up with a dataset that contains three classes: periquito, robot, and background.


You should return to Edge Impulse Studio and upload the dataset to your project.


Training the model with Edge Impulse Studio


We will use the Edge Impulse Studio for training our model. Enter your account credentials and create a new project:


Here, you can clone a similar project: NICLA-Vision_Image_Classification.


Dataset


Using the EI Studio (or Studio), we will go over four main steps to have our model ready for use on the Nicla Vision board: Dataset, Impulse, Tests, and Deploy (on the Edge Device, in this case, the NiclaV).


Regarding the Dataset, it is essential to point out that our Original Dataset, captured with the OpenMV IDE, will be split into Training, Validation, and Test. The Test Set will be separated from the beginning, and a part will be reserved to be used only in the Test phase after training. The Validation Set will be used during training.
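The split itself is handled by the Studio, but conceptually it looks like this sketch (the fractions and file names below are illustrative):

```python
import random

def split_dataset(samples, test_frac=0.2, val_frac=0.2, seed=42):
    """Shuffle and split samples into train/validation/test.
    Illustrative only -- Edge Impulse Studio performs the split for you."""
    rng = random.Random(seed)
    samples = samples[:]          # copy, so the original list is untouched
    rng.shuffle(samples)
    n_test = int(len(samples) * test_frac)
    n_val = int(len(samples) * val_frac)
    test = samples[:n_test]       # held out until the Test phase
    val = samples[n_test:n_test + n_val]   # used during training
    train = samples[n_test + n_val:]
    return train, val, test

# Hypothetical file names, following the ~60-images-per-class suggestion
images = [f"periquito_{i}.jpg" for i in range(60)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 36 12 12
```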


On Studio, go to the Data acquisition tab, and on the UPLOAD DATA section, upload the chosen categories files from your computer:


Leave the splitting of the original dataset into train and test to the Studio, and choose the appropriate label for that specific data:


Repeat the procedure for all three classes. At the end, you should see your “raw data” in the Studio:


The Studio allows you to explore your data, showing a complete view of all the data in your project. You can clear, inspect, or change labels by clicking on individual data items. In our case, a very simple project, the data seems OK.


The Impulse Design


In this phase, we should define how to:

  • Pre-process our data, which consists of resizing the individual images and determining the color depth to use (RGB or Grayscale), and

  • Specify a Model; in this case, it will be Transfer Learning (Images) to fine-tune a pre-trained MobileNet V2 image classification model on our data. This method performs well even with relatively small image datasets (around 150 images in our case).


Transfer Learning with MobileNet offers a streamlined approach to model training, which is especially beneficial for resource-constrained environments and projects with limited labeled data. MobileNet, known for its lightweight architecture, is a pre-trained model that has already learned valuable features from a large dataset (ImageNet).


By leveraging these learned features, you can train a new model for your specific task with fewer data and computational resources and yet achieve competitive accuracy.


This approach significantly reduces training time and computational cost, making it ideal for quick prototyping and deployment on embedded devices where efficiency is paramount.
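The core idea, a frozen feature extractor plus a small trainable head, can be illustrated with a toy NumPy sketch (the random "backbone" below merely stands in for the pre-trained MobileNet layers; the data is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone" standing in for the pre-trained MobileNet layers:
# its weights are fixed and never updated during training.
W_frozen = rng.normal(size=(8, 4))
def backbone(x):
    return np.tanh(x @ W_frozen)

# Toy two-class dataset (a stand-in for our small image dataset).
X = rng.normal(size=(100, 8))
y = (X[:, 0] > 0).astype(float)

feats = backbone(X)        # computed once -- the backbone is frozen
w, b = np.zeros(4), 0.0    # only this small head is trained
lr = 0.5

def xent(p):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

p = 1 / (1 + np.exp(-(feats @ w + b)))
loss_before = xent(p)
for _ in range(300):       # plain gradient descent on the logistic head
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    w -= lr * feats.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)
loss_after = xent(p)
```

Because only the tiny head is optimized, training is fast and needs little data, which is exactly the property transfer learning exploits on embedded devices.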


Go to the Impulse Design Tab and create the impulse, defining an image size of 96x96 and squashing them (squared form, without cropping). Select Image and Transfer Learning blocks. Save the Impulse.


Image Pre-Processing


All the input QVGA/RGB565 images will be converted to 27,648 features (96x96x3).


Press [Save parameters] and Generate all features:


Model Design


In 2017, Google introduced MobileNetV1, a family of general-purpose computer vision neural networks designed with mobile devices in mind to support classification, detection, and more. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of various use cases. In 2018, Google launched MobileNetV2: Inverted Residuals and Linear Bottlenecks.


MobileNet V1 and MobileNet V2 aim at mobile efficiency and embedded vision applications but differ in architectural complexity and performance. While both use depthwise separable convolutions to reduce the computational cost, MobileNet V2 introduces Inverted Residual Blocks and Linear Bottlenecks to enhance performance. These new features allow V2 to capture more complex features using fewer parameters, making it computationally more efficient and generally more accurate than its predecessor. Additionally, V2 employs a non-linear activation in the intermediate expansion layer. It still uses a linear activation for the bottleneck layer, a design choice found to preserve important information through the network. MobileNet V2 offers an optimized architecture for higher accuracy and efficiency and will be used in this project.


Although the base MobileNet architecture is already tiny and has low latency, many times, a specific use case or application may require the model to be even smaller and faster. MobileNets introduces a straightforward parameter α (alpha) called width multiplier to construct these smaller, less computationally expensive models. The role of the width multiplier α is that of thinning a network uniformly at each layer.


Edge Impulse Studio can use both MobileNetV1 (96x96 images) and V2 (96x96 or 160x160 images), with several different α values (from 0.05 to 1.0). For example, you will get the highest accuracy with V2, 160x160 images, and α=1.0. Of course, there is a trade-off. The higher the accuracy, the more memory (around 1.3MB RAM and 2.6MB ROM) will be needed to run the model, implying more latency. The smaller footprint will be obtained at the other extreme with MobileNetV1 and α=0.10 (around 53.2K RAM and 101K ROM).
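Conceptually, the width multiplier scales every layer's channel count and rounds the result to a hardware-friendly value. Here is a simplified sketch, modeled loosely on the rounding used in reference MobileNet implementations (the exact rounding rule is an assumption):

```python
def thin_channels(filters, alpha, divisor=8):
    """Scale a layer's channel count by the width multiplier `alpha`,
    rounding to a multiple of `divisor` (simplified version of the
    rounding found in reference MobileNet implementations)."""
    v = filters * alpha
    new_v = max(divisor, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:   # avoid shrinking by more than 10% from rounding
        new_v += divisor
    return new_v

# A full-width layer with 32 channels under different alpha values:
print([thin_channels(32, a) for a in (1.0, 0.5, 0.25, 0.1)])  # [32, 16, 8, 8]
```

Since compute and parameter counts grow roughly with the square of the channel count, even a modest α cut yields the large RAM/ROM savings quoted above.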


We will use MobileNetV2 96x96 0.1 for this project, with an estimated memory cost of 265.3 KB in RAM. This model should be OK for the Nicla Vision with 1MB of SRAM. On the Transfer Learning Tab, select this model:


Model Training


Another valuable technique to be used with Deep Learning is Data Augmentation. Data augmentation is a method to improve the accuracy of machine learning models by creating additional artificial data. A data augmentation system makes small, random changes to your training data during the training process (such as flipping, cropping, or rotating the images).


Looking under the hood, here you can see how Edge Impulse implements a data Augmentation policy on your data:

import math
import random
import tensorflow as tf

# INPUT_SHAPE holds the model's input dimensions, e.g., (96, 96, 3)

# Implements the data augmentation policy
def augment_image(image, label):
    # Flips the image randomly
    image = tf.image.random_flip_left_right(image)

    # Increase the image size, then randomly crop it down to
    # the original dimensions
    resize_factor = random.uniform(1, 1.2)
    new_height = math.floor(resize_factor * INPUT_SHAPE[0])
    new_width = math.floor(resize_factor * INPUT_SHAPE[1])
    image = tf.image.resize_with_crop_or_pad(image, new_height, new_width)
    image = tf.image.random_crop(image, size=INPUT_SHAPE)

    # Vary the brightness of the image
    image = tf.image.random_brightness(image, max_delta=0.2)

    return image, label

Exposure to these variations during training can help prevent your model from taking shortcuts by “memorizing” superficial clues in your training data, meaning it may better reflect the deep underlying patterns in your dataset.


The final layer of our model will have 12 neurons with a 15% dropout for overfitting prevention. Here is the Training result:


The result is excellent, with 77ms of latency, which should result in 13fps (frames per second) during inference.
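The fps figure follows directly from the latency, assuming one inference at a time:

```python
# Frames per second is the inverse of the per-inference latency
latency_ms = 77
fps = 1000 / latency_ms
print(round(fps))  # 13
```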


Model Testing


Now, you should use the Test data set aside at the start of the project and run the trained model with it as input:


The result is, again, excellent.


Deploying the model


At this point, we can deploy the trained model as .tflite and use the OpenMV IDE to run it using MicroPython, or we can deploy it as a C/C++ or an Arduino library.


Arduino Library


First, let’s deploy it as an Arduino Library:


You should install the library as a .zip on the Arduino IDE and run the sketch nicla_vision_camera.ino available in Examples under your library name.


Note that the Arduino Nicla Vision has, by default, 512 KB of RAM allocated for the M7 core and an additional 244 KB on the M4 address space. In the code, this allocation was changed to 288 KB to guarantee that the model will run on the device (malloc_addblock((void*)0x30000000, 288 * 1024);).


The result is good, with 86ms of measured latency.


Here is a short video showing the inference results:


OpenMV


It is possible to deploy the trained model to be used with OpenMV in two ways: as a library and as a firmware.


Three files are generated as a library: the trained .tflite model, a list with labels, and a simple MicroPython script that can make inferences using the model.


Running this model as a .tflite directly in the Nicla was impossible. So, we can either sacrifice accuracy by using a smaller model or deploy the model as OpenMV Firmware (FW). Choosing FW, the Edge Impulse Studio generates the optimized models, libraries, and frameworks needed to make the inference. Let’s explore this option.


Select OpenMV Firmware on the Deploy Tab and press [Build].


On your computer, you will find a ZIP file. Open it:


Use the Bootloader tool on the OpenMV IDE to load the FW on your board:


Select the appropriate file (.bin for Nicla-Vision):


After the download is finished, press OK:


If a message says that the FW is outdated, DO NOT UPGRADE. Select [NO].


Now, open the script ei_image_classification.py, which was downloaded from the Studio together with the .bin file for the Nicla.


Run it. Point the camera at the objects you want to classify; the inference result will be displayed on the Serial Terminal.


Changing the Code to add labels


The code provided by Edge Impulse can be modified so that, for testing purposes, we can see the inference result directly on the image displayed on the OpenMV IDE.


Upload the code from GitHub, or modify it as below:

# Marcelo Rovai - NICLA Vision - Image Classification
# Adapted from Edge Impulse - OpenMV Image Classification Example
# @24Aug23

import sensor, image, time, os, tf, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

try:
    # Load the built-in model
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)

clock = time.clock()
while True:
    clock.tick()  # Start tracking elapsed time.

    img = sensor.snapshot()

    # Default settings just do one detection
    for obj in net.classify(img,
                            min_scale=1.0,
                            scale_mul=0.8,
                            x_overlap=0.5,
                            y_overlap=0.5):
        fps = clock.fps()
        lat = clock.avg()

        print("**********\nPrediction:")
        img.draw_rectangle(obj.rect())
        # Combine the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        max_val = predictions_list[0][1]
        max_lbl = 'background'
        for i in range(len(predictions_list)):
            val = predictions_list[i][1]
            lbl = predictions_list[i][0]

            if val > max_val:
                max_val = val
                max_lbl = lbl

    # Print the label with the highest probability
    if max_val < 0.5:
        max_lbl = 'uncertain'
    print("{} with a prob of {:.2f}".format(max_lbl, max_val))
    print("FPS: {:.2f} fps ==> latency: {:.0f} ms".format(fps, lat))

    # Draw the label with the highest probability on the image viewer
    img.draw_string(
        10, 10,
        max_lbl + "\n{:.2f}".format(max_val),
        mono_space=False,
        scale=2
    )

Here you can see the result:



Note that the latency (136 ms) is almost double what we got directly with the Arduino IDE. This is because we are using the IDE as an interface, and the clock also includes the time spent waiting for the camera to be ready. If we start the clock just before the inference:



The latency will drop to only 71 ms.



The NiclaV runs about half as fast when connected to the IDE. The FPS should increase once disconnected.


Post-Processing with LEDs


When working with embedded machine learning, we are looking for devices that can continuously perform inference and act on the results directly in the physical world, rather than displaying the results on a connected computer. To simulate this, we will light up a different LED for each possible inference result.



To accomplish that, we should upload the code from GitHub or change the last code to include the LEDs:

# Marcelo Rovai - NICLA Vision - Image Classification with LEDs
# Adapted from Edge Impulse - OpenMV Image Classification Example
# @24Aug23

import sensor, image, time, os, tf, uos, gc, pyb

ledRed = pyb.LED(1)
ledGre = pyb.LED(2)
ledBlu = pyb.LED(3)

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

ledRed.off()
ledGre.off()
ledBlu.off()

try:
    # Load the built-in model
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)

clock = time.clock()


def setLEDs(max_lbl):
    if max_lbl == 'uncertain':
        ledRed.on()
        ledGre.off()
        ledBlu.off()

    if max_lbl == 'periquito':
        ledRed.off()
        ledGre.on()
        ledBlu.off()

    if max_lbl == 'robot':
        ledRed.off()
        ledGre.off()
        ledBlu.on()

    if max_lbl == 'background':
        ledRed.off()
        ledGre.off()
        ledBlu.off()


while True:
    img = sensor.snapshot()
    clock.tick()  # Start tracking elapsed time.

    # Default settings just do one detection.
    for obj in net.classify(img,
                            min_scale=1.0,
                            scale_mul=0.8,
                            x_overlap=0.5,
                            y_overlap=0.5):
        fps = clock.fps()
        lat = clock.avg()

        print("**********\nPrediction:")
        img.draw_rectangle(obj.rect())
        # Combine the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        max_val = predictions_list[0][1]
        max_lbl = 'background'
        for i in range(len(predictions_list)):
            val = predictions_list[i][1]
            lbl = predictions_list[i][0]

            if val > max_val:
                max_val = val
                max_lbl = lbl

    # Print the label and turn on the LED with the highest probability
    if max_val < 0.8:
        max_lbl = 'uncertain'

    setLEDs(max_lbl)

    print("{} with a prob of {:.2f}".format(max_lbl, max_val))
    print("FPS: {:.2f} fps ==> latency: {:.0f} ms".format(fps, lat))

    # Draw the label with the highest probability on the image viewer
    img.draw_string(
        10, 10,
        max_lbl + "\n{:.2f}".format(max_val),
        mono_space=False,
        scale=2
    )

Now, each time a class scores a result greater than 0.8, the corresponding LED will be lit:

  • LED Red On: Uncertain (no class is over 0.8)

  • LED Green On: Periquito > 0.8

  • LED Blue On: Robot > 0.8

  • All LEDs Off: Background > 0.8

Here is the result:


In more detail


Image Classification (non-official) Benchmark


Several development boards can be used for embedded machine learning (TinyML), and the most common ones for Computer Vision applications (consuming low energy) are the ESP32 CAM, the Seeed XIAO ESP32S3 Sense, the Arduino Nicla Vision, and the Arduino Portenta.


Taking the opportunity, the same trained model was deployed on the ESP-CAM, the XIAO, and the Portenta (for this last one, the model was trained again, using grayscale images to be compatible with its camera). Here is the result of deploying the models as Arduino libraries:


Conclusion


Before we finish, consider that Computer Vision is more than just image classification. For example, you can develop Edge Machine Learning projects around vision in several areas, such as:

  • Autonomous Vehicles: Use sensor fusion, lidar data, and computer vision algorithms to navigate and make decisions.

  • Healthcare: Automated diagnosis of diseases through MRI, X-ray, and CT scan image analysis.

  • Retail: Automated checkout systems that identify products as they pass through a scanner.

  • Security and Surveillance: Facial recognition, anomaly detection, and object tracking in real-time video feeds.

  • Augmented Reality: Object detection and classification to overlay digital information in the real world.

  • Industrial Automation: Visual inspection of products, predictive maintenance, and robot and drone guidance.

  • Agriculture: Drone-based crop monitoring and automated harvesting.

  • Natural Language Processing: Image captioning and visual question answering.

  • Gesture Recognition: For gaming, sign language translation, and human-machine interaction.

  • Content Recommendation: Image-based recommendation systems in e-commerce.
1  Introduction

DALL·E 3 Prompt: A detailed, rectangular, flat 2D illustration depicting a roadmap of a book’s chapters on machine learning systems, set on a crisp clean white background. The image features a winding road traveling through various symbolic landmarks. Each landmark represents a chapter topic: Introduction, ML Systems, Deep Learning, AI Workflow, Data Engineering, AI Frameworks, AI Training, Efficient AI, Model Optimizations, AI Acceleration, Benchmarking AI, On-Device Learning, Embedded AIOps, Security & Privacy, Responsible AI, Sustainable AI, AI for Good, Robust AI, Generative AI. The style is clean, modern, and flat, suitable for a technical book, with each landmark clearly labeled with its chapter title.

In the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. He envisioned a future where computing would be so seamlessly integrated into our environments that it would become an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.


Ubiquitous computing (Weiser 1991), as Weiser imagined, is not merely about embedding processors in everyday objects; it is about imbuing our environment with a form of intelligence that anticipates our needs and acts on our behalf, enhancing our experiences without our explicit command. The key to this ubiquitous intelligence lies in developing and deploying machine learning systems at the edge of our networks.

Weiser, Mark. 1991. “The Computer for the 21st Century.” Scientific American 265 (3): 94–104. https://doi.org/10.1038/scientificamerican0991-94.

Machine learning, a subset of artificial intelligence, enables computers to learn from and make decisions based on data rather than following explicitly programmed instructions. When deployed at the edge—closer to where data is generated, and actions are taken—machine learning systems can process information in real-time, responding to environmental changes and user inputs with minimal latency. This capability is critical for applications where timing is crucial, such as autonomous vehicles, real-time language translation, and smart healthcare devices.


The migration of machine learning from centralized data centers to the edge of networks marks a significant evolution in computing architecture. The need for speed, privacy, and reduced bandwidth consumption drives this shift. By processing data locally, edge-based machine learning systems can make quick decisions without constantly communicating with a central server. This speeds up response times, conserves bandwidth, and enhances privacy by limiting the amount of data transmitted over the network.


Moreover, the ability to deploy machine learning models in diverse environments has led to an explosion of innovative applications. From smart cities that optimize traffic flow in real-time to agricultural drones that monitor crop health and apply treatments precisely where needed, machine learning at the edge enables a level of contextual awareness and responsiveness that was previously unimaginable.


Despite the promise of ubiquitous intelligence, deploying machine learning systems at the edge is challenging. These systems must operate within the constraints of limited processing power, memory, and energy availability, often in environments far from the controlled conditions of data centers. Additionally, ensuring the privacy and security of the data these systems process is paramount, particularly in applications that handle sensitive personal information.


Developing machine learning models that are efficient enough to run at the edge while delivering accurate and reliable results requires innovative model design, training, and deployment approaches. Researchers and engineers are exploring techniques such as model compression, federated learning, and transfer learning to address these challenges.


As we stand on the threshold of Weiser’s vision of ubiquitous computing, machine learning systems are clearly the key to unlocking this future. By embedding intelligence in the fabric of our environment, these systems have the potential to make our interactions with technology more natural and intuitive than ever before. As we continue to push the boundaries of what’s possible with machine learning at the edge, we move closer to a world where technology quietly enhances our lives without ever getting in the way.


In this book, we will explore the technical foundations of machine learning systems, the challenges of deploying these systems at the edge, and the vast array of applications they enable. Join us as we embark on a journey into the future of ubiquitous intelligence, where the seamless integration of technology into our daily lives transforms the essence of how we live, work, and interact with the world around us.


Audio Feature Engineering

DALL·E 3 Prompt: 1950s style cartoon scene set in an audio research room. Two scientists, one holding a magnifying glass and the other taking notes, examine large charts pinned to the wall. These charts depict FFT graphs and time curves related to audio data analysis. The room has a retro ambiance, with wooden tables, vintage lamps, and classic audio analysis tools.

Introduction


In this hands-on tutorial, the emphasis is on the critical role that feature engineering plays in optimizing the performance of machine learning models applied to audio classification tasks, such as speech recognition. It is essential to be aware that the performance of any machine learning model relies heavily on the quality of the features used, and we will deal with the “under-the-hood” mechanics of feature extraction, mainly focusing on Mel-frequency Cepstral Coefficients (MFCCs), a cornerstone in the field of audio signal processing.


Machine learning models, especially traditional algorithms, don’t understand audio waves. They understand numbers arranged in some meaningful way, i.e., features. These features encapsulate the characteristics of the audio signal, making it easier for models to distinguish between different sounds.


This tutorial will deal with generating features specifically for audio classification. This can be particularly interesting for applying machine learning to a variety of audio data, whether for speech recognition, music categorization, insect classification based on wingbeat sounds, or other sound analysis tasks.


The KWS


The most common TinyML application is Keyword Spotting (KWS), a subset of the broader field of speech recognition. While general speech recognition aims to transcribe all spoken words into text, Keyword Spotting focuses on detecting specific “keywords” or “wake words” in a continuous audio stream. The system is trained to recognize these keywords as predefined phrases or words, such as yes or no. In short, KWS is a specialized form of speech recognition with its own set of challenges and requirements.


Here is a typical KWS process using an MFCC Feature Converter:


Applications of KWS:

  • Voice Assistants: In devices like Amazon’s Alexa or Google Home, KWS is used to detect the wake word (“Alexa” or “Hey Google”) to activate the device.

  • Voice-Activated Controls: In automotive or industrial settings, KWS can be used to initiate specific commands like “Start engine” or “Turn off lights.”

  • Security Systems: Voice-activated security systems may use KWS to authenticate users based on a spoken passphrase.

  • Telecommunication Services: Customer service lines may use KWS to route calls based on spoken keywords.

Differences from General Speech Recognition:

  • Computational Efficiency: KWS is usually designed to be less computationally intensive than full speech recognition, as it only needs to recognize a small set of phrases.

  • Real-time Processing: KWS often operates in real-time and is optimized for low-latency detection of keywords.

  • Resource Constraints: KWS models are often designed to be lightweight, so they can run on devices with limited computational resources, like microcontrollers or mobile phones.

  • Focused Task: While general speech recognition models are trained to handle a broad range of vocabulary and accents, KWS models are fine-tuned to recognize specific keywords accurately, often in noisy environments.

Introduction to Audio Signals


Understanding the basic properties of audio signals is crucial for effective feature extraction and, ultimately, for successfully applying machine learning algorithms in audio classification tasks. Audio signals are complex waveforms that capture fluctuations in air pressure over time. These signals can be characterized by several fundamental attributes: sampling rate, frequency, and amplitude.

  • Frequency and Amplitude: Frequency refers to the number of oscillations a waveform undergoes per unit time and is measured in Hertz (Hz). In the context of audio signals, different frequencies correspond to different pitches. Amplitude, on the other hand, measures the magnitude of the oscillations and correlates with the loudness of the sound. Both frequency and amplitude are essential features that capture audio signals’ tonal and rhythmic qualities.

  • Sampling Rate: The sampling rate, also denoted in Hertz (Hz), defines the number of samples taken per second when digitizing an analog signal. A higher sampling rate allows for a more accurate digital representation of the signal but also demands more computational resources for processing. Typical sampling rates include 44.1 kHz for CD-quality audio and 16 kHz or 8 kHz for speech recognition tasks. Understanding the trade-offs in selecting an appropriate sampling rate is essential for balancing accuracy and computational efficiency. In general, with TinyML projects, we work at 16 kHz. Although musical tones can be heard at frequencies up to 20 kHz, voice maxes out at 8 kHz. Traditional telephone systems use an 8 kHz sampling frequency.

For an accurate representation of the signal, the sampling rate must be at least twice the highest frequency present in the signal.
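This is the Nyquist criterion; the required rates for the cases mentioned above follow directly:

```python
# Nyquist criterion: sample at >= 2x the highest frequency of interest
def min_sampling_rate(max_freq_hz):
    return 2 * max_freq_hz

print(min_sampling_rate(8000))   # 16000 -> why 16 kHz suits voice
print(min_sampling_rate(20000))  # 40000 -> why CD audio uses 44.1 kHz
```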

  • Time Domain vs. Frequency Domain: Audio signals can be analyzed in both the time and frequency domains. In the time domain, a signal is represented as a waveform where the amplitude is plotted against time. This representation helps to observe temporal features like onset and duration, but the signal’s tonal characteristics are not well evidenced. Conversely, a frequency domain representation provides a view of the signal’s constituent frequencies and their respective amplitudes, typically obtained via a Fourier Transform. This is invaluable for tasks that require understanding the signal’s spectral content, such as identifying musical notes or speech phonemes (our case).

The image below shows the words YES and NO with typical representations in the Time (Raw Audio) and Frequency domains:

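The two views are related by the Fourier Transform. A short NumPy sketch (with an assumed 16 kHz rate and a made-up two-tone signal) recovers the dominant frequency from a raw waveform:

```python
import numpy as np

fs = 16000                     # assumed sampling rate (Hz)
t = np.arange(fs) / fs         # one second of sample instants
# Time-domain signal: a 440 Hz tone plus a quieter 1 kHz tone
x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

# Frequency domain via the FFT (rfft, since the input is real-valued)
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

peak = freqs[np.argmax(spectrum)]
print(peak)  # 440.0 -- the dominant constituent frequency
```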

Why Not Raw Audio?


While using raw audio data directly for machine learning tasks may seem tempting, this approach presents several challenges that make it less suitable for building robust and efficient models.


Using raw audio data for Keyword Spotting (KWS), for example, on TinyML devices poses challenges due to its high dimensionality (using a 16 kHz sampling rate), computational complexity for capturing temporal features, susceptibility to noise, and lack of semantically meaningful features, making feature extraction techniques like MFCCs a more practical choice for resource-constrained applications.
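A rough comparison of the dimensionality involved (the MFCC frame and coefficient counts below are typical illustrative values, not Edge Impulse's exact defaults):

```python
# Dimensionality of 1 second of raw audio vs. a typical MFCC representation
fs = 16000
raw_dims = fs * 1              # 16,000 samples per one-second clip

frame_ms, stride_ms, n_coeffs = 40, 20, 13   # illustrative framing choices
n_frames = 1 + (1000 - frame_ms) // stride_ms
mfcc_dims = n_frames * n_coeffs

print(raw_dims, mfcc_dims)     # 16000 vs. 637 -- roughly a 25x reduction
```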

+

Here are some additional details of the critical issues associated with using raw audio:

+
    +
  • High Dimensionality: Audio signals, especially those sampled at high rates, result in large amounts of data. For example, a 1-second audio clip sampled at 16 kHz will have 16,000 individual data points. High-dimensional data increases computational complexity, leading to longer training times and higher computational costs, making it impractical for resource-constrained environments. Furthermore, the wide dynamic range of audio signals requires a significant amount of bits per sample, while conveying little useful information.

  • +
  • Temporal Dependencies: Raw audio signals have temporal structures that simple machine learning models may find hard to capture. While recurrent neural networks like LSTMs can model such dependencies, they are computationally intensive and tricky to train on tiny devices.

  • +
  • Noise and Variability: Raw audio signals often contain background noise and other non-essential elements affecting model performance. Additionally, the same sound can have different characteristics based on various factors such as distance from the microphone, the orientation of the sound source, and acoustic properties of the environment, adding to the complexity of the data.

  • +
  • Lack of Semantic Meaning: Raw audio doesn’t inherently contain semantically meaningful features for classification tasks. Features like pitch, tempo, and spectral characteristics, which can be crucial for speech recognition, are not directly accessible from raw waveform data.

  • +
  • Signal Redundancy: Audio signals often contain redundant information, with certain portions of the signal contributing little to no value to the task at hand. This redundancy can make learning inefficient and potentially lead to overfitting.

  • +
+

For these reasons, feature extraction techniques such as Mel-frequency Cepstral Coefficients (MFCCs), Mel-Frequency Energies (MFEs), and simple Spectrograms are commonly used to transform raw audio data into a more manageable and informative format. These features capture the essential characteristics of the audio signal while reducing dimensionality and noise, facilitating more effective machine learning.

+
+
+
+

Introduction to MFCCs

+
+

What are MFCCs?

+

Mel-frequency Cepstral Coefficients (MFCCs) are a set of features derived from the spectral content of an audio signal. They are based on human auditory perceptions and are commonly used to capture the phonetic characteristics of an audio signal. The MFCCs are computed through a multi-step process that includes pre-emphasis, framing, windowing, applying the Fast Fourier Transform (FFT) to convert the signal to the frequency domain, and finally, applying the Discrete Cosine Transform (DCT). The result is a compact representation of the original audio signal’s spectral characteristics.

+

The image below shows the words YES and NO in their MFCC representation:

+
+
+

+
+
+
+

This video explains the Mel Frequency Cepstral Coefficients (MFCC) and how to compute them.

+
+
+
+

Why are MFCCs important?

+

MFCCs are crucial for several reasons, particularly in the context of Keyword Spotting (KWS) and TinyML:

+
    +
  • Dimensionality Reduction: MFCCs capture essential spectral characteristics of the audio signal while significantly reducing the dimensionality of the data, making it ideal for resource-constrained TinyML applications.
  • +
  • Robustness: MFCCs are less susceptible to noise and variations in pitch and amplitude, providing a more stable and robust feature set for audio classification tasks.
  • +
  • Human Auditory System Modeling: The Mel scale in MFCCs approximates the human ear’s response to different frequencies, making them practical for speech recognition where human-like perception is desired.
  • +
  • Computational Efficiency: The process of calculating MFCCs is computationally efficient, making it well-suited for real-time applications on hardware with limited computational resources.
  • +
+

In summary, MFCCs offer a balance of information richness and computational efficiency, making them popular for audio classification tasks, particularly in constrained environments like TinyML.

+
+
+

Computing MFCCs

+

The computation of Mel-frequency Cepstral Coefficients (MFCCs) involves several key steps. Let’s walk through these, which are particularly important for Keyword Spotting (KWS) tasks on TinyML devices.

+
    +
  • Pre-emphasis: The first step is pre-emphasis, which is applied to accentuate the high-frequency components of the audio signal and balance the frequency spectrum. This is achieved by applying a filter that amplifies the difference between consecutive samples. The formula for pre-emphasis is \(y(t) = x(t) - \alpha\, x(t-1)\), where \(\alpha\) is the pre-emphasis factor, typically around 0.97.

  • +
  • Framing: Audio signals are divided into short frames (the frame length), usually 20 to 40 milliseconds. This is based on the assumption that frequencies in a signal are stationary over a short period. Framing helps in analyzing the signal in such small time slots. The frame stride (or step) is the displacement between the start of one frame and the start of the next; frames can therefore be sequential or overlapping.

  • +
  • Windowing: Each frame is then windowed to minimize the discontinuities at the frame boundaries. A commonly used window function is the Hamming window. Windowing prepares the signal for a Fourier transform by minimizing the edge effects. The image below shows three frames (10, 20, and 30) and the time samples after windowing (note that the frame length and frame stride are 20 ms):

  • +
+
+
+

+
+
+
    +
  • Fast Fourier Transform (FFT): The FFT is applied to each windowed frame to convert it from the time domain to the frequency domain. The FFT gives us a complex-valued representation that includes both magnitude and phase information. However, for MFCCs, only the magnitude is used to calculate the Power Spectrum. The power spectrum is the square of the magnitude spectrum and measures the energy present at each frequency component.
  • +
+
+

The power spectrum \(P(f)\) of a signal \(x(t)\) is defined as \(P(f) = |X(f)|^2\), where \(X(f)\) is the Fourier Transform of \(x(t)\). By squaring the magnitude of the Fourier Transform, we emphasize stronger frequencies over weaker ones, thereby capturing more relevant spectral characteristics of the audio signal. This is important in applications like audio classification, speech recognition, and Keyword Spotting (KWS), where the focus is on identifying distinct frequency patterns that characterize different classes of audio or phonemes in speech.

+
+
+
+

+
+
+
    +
  • Mel Filter Banks: The frequency domain is then mapped to the Mel scale, which approximates the human ear’s response to different frequencies. The idea is to extract more features (more filter banks) in the lower frequencies and fewer in the higher frequencies, so the representation performs well on sounds the human ear can distinguish. Typically, 20 to 40 triangular filters extract the Mel-frequency energies. These energies are then log-transformed to convert multiplicative factors into additive ones, making them more suitable for further processing.
  • +
+
+
+

+
+
+
    +
  • Discrete Cosine Transform (DCT): The last step is to apply the Discrete Cosine Transform (DCT) to the log Mel energies. The DCT helps to decorrelate the energies, effectively compressing the data and retaining only the most discriminative features. Usually, the first 12-13 DCT coefficients are retained, forming the final MFCC feature vector.
  • +
+
+
+
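The six steps above can be sketched end-to-end in plain NumPy. This is a minimal illustration, not the exact implementation used by Edge Impulse or librosa; the parameter values (25 ms frames, 20 ms stride, 512-point FFT, 20 Mel filters, 13 coefficients) are common defaults chosen for illustration:

```python
import numpy as np

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(signal, fs=16_000, frame_len=400, frame_step=320,
         n_fft=512, n_filters=20, n_ceps=13, alpha=0.97):
    # 1. Pre-emphasis: y(t) = x(t) - alpha * x(t-1)
    y = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # 2. Framing (25 ms frames, 20 ms stride) and 3. Hamming windowing
    n_frames = 1 + (len(y) - frame_len) // frame_step
    idx = np.arange(frame_len)[None, :] + frame_step * np.arange(n_frames)[:, None]
    frames = y[idx] * np.hamming(frame_len)
    # 4. FFT -> power spectrum P(f) = |X(f)|^2
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # 5. Triangular Mel filter bank, then log energies
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    log_mel = np.log(power @ fbank.T + 1e-10)
    # 6. DCT-II to decorrelate; keep only the first n_ceps coefficients
    n = log_mel.shape[1]
    dct_mat = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :]
                     * np.arange(n_ceps)[:, None])
    return log_mel @ dct_mat.T

coeffs = mfcc(np.random.randn(16_000))  # 1 second of dummy audio at 16 kHz
print(coeffs.shape)  # (49, 13)
```

With a 1-second, 16 kHz input and these settings, the output is 49 frames of 13 coefficients each — a large dimensionality reduction from the 16,000 raw samples.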

+
+
+
+
+
+

Hands-On using Python

+

Let’s apply what we discussed while working on an actual audio sample. Open the notebook on Google Colab and extract the MFCC features from your audio samples: [Open In Colab]

+
+
+

Conclusion

+
+

What Feature Extraction technique should we use?

+

Mel-frequency Cepstral Coefficients (MFCCs), Mel-Frequency Energies (MFEs), and Spectrograms are techniques for representing audio data, each helpful in different contexts.

+

In general, MFCCs are more focused on capturing the envelope of the power spectrum, which makes them less sensitive to fine-grained spectral details but more robust to noise. This is often desirable for speech-related tasks. On the other hand, spectrograms or MFEs preserve more detailed frequency information, which can be advantageous in tasks that require discrimination based on fine-grained spectral content.

+
+

MFCCs are particularly strong for:

+
  1. Speech Recognition: MFCCs are excellent for identifying phonetic content in speech signals.
  2. Speaker Identification: They can be used to distinguish between different speakers based on voice characteristics.
  3. Emotion Recognition: MFCCs can capture the nuanced variations in speech indicative of emotional states.
  4. Keyword Spotting: Especially in TinyML, where low computational complexity and small feature size are crucial.
+
+

Spectrograms or MFEs are often more suitable for:

+
  1. Music Analysis: Spectrograms can capture harmonic and timbral structures in music, which is essential for tasks like genre classification, instrument recognition, or music transcription.
  2. Environmental Sound Classification: In recognizing non-speech, environmental sounds (e.g., rain, wind, traffic), the full spectrogram can provide more discriminative features.
  3. Birdsong Identification: The intricate details of bird calls are often better captured using spectrograms.
  4. Bioacoustic Signal Processing: In applications like dolphin or bat call analysis, the fine-grained frequency information in a spectrogram can be essential.
  5. Audio Quality Assurance: Spectrograms are often used in professional audio analysis to identify unwanted noises, clicks, or other artifacts.
+

Keyword Spotting (KWS)

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+

+
DALL·E 3 Prompt: 1950s style cartoon scene set in a vintage audio research room. Two Afro-American female scientists are at the center. One holds a magnifying glass, closely examining ancient circuitry, while the other takes notes. On their wooden table, there are multiple boards with sensors, notably featuring a microphone. Behind these boards, a computer with a large, rounded back displays the Arduino IDE. The IDE showcases code for LED pin assignments and machine learning inference for voice command detection. A distinct window in the IDE, the Serial Monitor, reveals outputs indicating the spoken commands ‘yes’ and ‘no’. The room ambiance is nostalgic with vintage lamps, classic audio analysis tools, and charts depicting FFT graphs and time-domain curves.
+
+
+
+

Introduction

+

Having already explored the Nicla Vision board in the Image Classification and Object Detection applications, we are now shifting our focus to voice-activated applications with a project on Keyword Spotting (KWS).

+

As introduced in the Feature Engineering for Audio Classification Hands-On tutorial, Keyword Spotting (KWS) is integrated into many voice recognition systems, enabling devices to respond to specific words or phrases. While this technology underpins popular devices like Google Assistant or Amazon Alexa, it’s equally applicable and feasible on smaller, low-power devices. This tutorial will guide you through implementing a KWS system using TinyML on the Nicla Vision development board equipped with a digital microphone.

+

Our model will be designed to recognize keywords that can trigger device wake-up or specific actions, bringing them to life with voice-activated commands.

+
+
+

How does a voice assistant work?

+

As mentioned, voice assistants on the market, like Google Home or Amazon Echo Dot, only react to humans after being “woken up” by particular keywords, such as “Hey Google” for the former and “Alexa” for the latter.

+
+
+

+
+
+

In other words, recognizing voice commands is based on a multi-stage model or Cascade Detection.

+
+
+

+
+
+

Stage 1: A small microprocessor inside the Echo Dot or Google Home continuously listens, waiting for the keyword to be spotted, using a TinyML model at the edge (KWS application).

+

Stage 2: Only when triggered by the KWS application on Stage 1 is the data sent to the cloud and processed on a larger model.
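The two-stage cascade can be sketched as a simple loop. This is a toy illustration with stand-in functions — `stage1_kws` and `stage2_cloud` are hypothetical placeholders, not a real API:

```python
# Toy sketch of cascade detection: Stage 1 runs continuously on the device;
# Stage 2 runs remotely, and only when Stage 1 fires.
def stage1_kws(frame):
    """Tiny always-on keyword spotter at the edge (stand-in for a real model)."""
    return frame == "alexa"

def stage2_cloud(audio):
    """Large cloud model, invoked only after the wake word (stand-in)."""
    return f"cloud processed: {audio}"

stream = ["noise", "noise", "alexa", "turn on the lights"]
responses = []
for i, frame in enumerate(stream):
    if stage1_kws(frame):                         # Stage 1: cheap, local, continuous
        command = " ".join(stream[i + 1:])
        responses.append(stage2_cloud(command))   # Stage 2: expensive, remote, rare
print(responses)  # ['cloud processed: turn on the lights']
```

The design choice is the key point: the expensive model never runs (and no audio leaves the device) until the cheap on-device model detects the keyword.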

+

The video below shows an example of a Google Assistant being programmed on a Raspberry Pi (Stage 2), with an Arduino Nano 33 BLE as the TinyML device (Stage 1).

+
+
+

To explore the above Google Assistant project, please see the tutorial: Building an Intelligent Voice Assistant From Scratch.

+
+

In this KWS project, we will focus on Stage 1 (KWS or Keyword Spotting), where we will use the Nicla Vision, which has a digital microphone that will be used to spot the keyword.

+
+
+

The KWS Hands-On Project

+

The diagram below gives an idea of how the final KWS application should work (during inference):

+
+
+

+
+
+

Our KWS application will recognize four classes of sound:

+
    +
  • YES (Keyword 1)
  • +
  • NO (Keyword 2)
  • +
  • NOISE (no words spoken; only background noise is present)
  • +
  • UNKNOWN (a mix of different words other than YES and NO)
  • +
+
+

For real-world projects, it is always advisable to include other sounds besides the keywords, such as “Noise” (or Background) and “Unknown.”

+
+
+

The Machine Learning workflow

+

The main component of the KWS application is its model. So, we must train such a model with our specific keywords, noise, and other words (the “unknown”):

+
+
+

+
+
+
+
+
+

Dataset

+

The critical component of any Machine Learning Workflow is the dataset. Once we have decided on specific keywords, in our case (YES and NO), we can take advantage of the dataset developed by Pete Warden, “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition.” This dataset has 35 keywords (with 1,000+ samples each), such as yes, no, stop, and go. For words such as yes and no, around 1,500 samples are available.

+

You can download a small portion of the dataset from Edge Impulse Studio (Keyword spotting pre-built dataset), which includes samples from the four classes we will use in this project: yes, no, noise, and unknown. For this, follow the steps below:

+ +
+

Uploading the dataset to the Edge Impulse Studio

+

Initiate a new project at Edge Impulse Studio (EIS) and select the Upload Existing Data tool in the Data Acquisition section. Choose the files to be uploaded:

+
+
+

+
+
+

Define the Label, select Automatically split between train and test, and Upload data to the EIS. Repeat for all classes.

+
+
+

+
+
+

The dataset will now appear in the Data acquisition section. Note that the approximately 6,000 samples (1,500 for each class) are split into Train (4,800) and Test (1,200) sets.

+
+
+

+
+
+
+
+

Capturing additional Audio Data

+

Although we have a lot of data from Pete’s dataset, collecting some words spoken by us is advised. When working with accelerometers, creating a dataset with data captured by the same type of sensor is essential. In the case of sound, this is optional because what we will classify is, in reality, audio data.

+
+

The key difference between sound and audio is the type of energy. Sound is mechanical perturbation (longitudinal sound waves) that propagate through a medium, causing variations of pressure in it. Audio is an electrical (analog or digital) signal representing sound.

+
+

When we pronounce a keyword, the sound waves are converted to audio data by sampling the signal generated by the microphone at a 16 kHz frequency with 16 bits per sample of amplitude.

+

So, any device that can generate audio data with this basic specification (16 kHz/16-bit) will work fine. As a device, we can use the NiclaV, a computer, or even your mobile phone.
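As a quick illustrative calculation of what this specification implies for the raw data rate (not a figure from the original text):

```python
fs, bit_depth = 16_000, 16             # 16 kHz sampling, 16 bits per sample
bytes_per_second = fs * bit_depth // 8
print(bytes_per_second)                # 32000 bytes/s of raw mono audio
print(bytes_per_second / 1024)         # each 1-second sample is ~31.25 KiB
```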

+
+
+

+
+
+
+

Using the NiclaV and the Edge Impulse Studio

+

As we learned in the chapter Setup Nicla Vision, EIS officially supports the Nicla Vision, which simplifies the capture of the data from its sensors, including the microphone. So, please create a new project on EIS and connect the Nicla to it, following these steps:

+
    +
  • Download the latest EIS Firmware and unzip it.

  • +
  • Open the zip file on your computer and select the uploader corresponding to your OS:

  • +
+
+
+

+
+
+
    +
  • Put the NiclaV in Boot Mode by pressing the reset button twice.

  • +
  • Upload the binary arduino-nicla-vision.bin to your board by running the batch code corresponding to your OS.

  • +
+

Go to your project on EIS, and on the Data Acquisition tab, select WebUSB. A window will pop up; choose the option that shows that the Nicla is paired and press [Connect].

+

You can choose which sensor data to pick in the Collect Data section on the Data Acquisition tab. Select: Built-in microphone, define your label (for example, yes), the sampling Frequency[16000Hz], and the Sample length (in milliseconds), for example [10s]. Start sampling.

+
+
+

+
+
+

Data on Pete’s dataset have a length of 1 s, but the recorded samples are 10 s long and must be split into 1 s samples. Click on the three dots after the sample name and select Split sample.

+

A window will pop up with the Split tool.

+
+
+

+
+
+

Once inside the tool, split the data into 1-second (1000 ms) records. If necessary, add or remove segments. This procedure should be repeated for all new samples.

+
+
+

Using a smartphone and the EI Studio

+

You can also use your PC or smartphone to capture audio data, using a sampling frequency of 16 kHz and a bit depth of 16 bits.

+

Go to Devices, scan the QR Code using your phone, and click on the link. A data Collection app will appear in your browser. Select Collecting Audio, and define your Label, data capture Length, and Category.

+
+
+

+
+
+

Repeat the same procedure used with the NiclaV.

+
+

Note that any app, such as Audacity, can be used for audio recording, provided you use 16 kHz/16-bit depth samples.

+
+
+
+
+
+

Creating Impulse (Pre-Process / Model definition)

+

An impulse takes raw data, uses signal processing to extract features, and then uses a learning block to classify new data.

+
+

Impulse Design

+
+
+

+
+
+

First, we will take the data points with a 1-second window, augmenting the data by sliding that window in 500 ms intervals. Note that the option zero-pad data is set. It is essential to zero-fill samples shorter than 1 second (in some cases, the split tool can produce samples slightly shorter than the 1000 ms window) to avoid noise and spikes.

+

Each 1-second audio sample should be pre-processed and converted to an image (for example, 13 x 49 x 1). As discussed in the Feature Engineering for Audio Classification Hands-On tutorial, we will use Audio (MFCC), which extracts features from audio signals using Mel Frequency Cepstral Coefficients, which are well suited for the human voice, our case here.

+

Next, we select the Classification block to build our model from scratch using a Convolution Neural Network (CNN).

+
+

Alternatively, you can use the Transfer Learning (Keyword Spotting) block, which fine-tunes a pre-trained keyword spotting model on your data. This approach has good performance with relatively small keyword datasets.

+
+
+
+

Pre-Processing (MFCC)

+

The following step is to create the features to be trained in the next phase:

+

We could keep the default parameter values, but we will use the DSP Autotune parameters option.

+
+
+

+
+
+

We will take the Raw features (our 1-second, 16KHz sampled audio data) and use the MFCC processing block to calculate the Processed features. For every 16,000 raw features (16,000 x 1 second), we will get 637 processed features (13 x 49).
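The 13 × 49 layout can be reproduced with a quick calculation. The 25 ms frame length and 20 ms stride below are assumptions that happen to reproduce it; the Studio's actual MFCC block parameters may differ:

```python
fs, n_mfcc = 16_000, 13
win = fs * 25 // 1000                  # 25 ms frame  -> 400 samples
stride = fs * 20 // 1000               # 20 ms stride -> 320 samples
n_frames = 1 + (fs - win) // stride    # frames fitting in a 1-second window
print(n_frames, n_mfcc * n_frames)     # 49 637
```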

+
+
+

+
+
+

The result shows that we only used a small amount of memory to pre-process data (16 KB) and a latency of 34 ms, which is excellent. For example, on an Arduino Nano (Cortex-M4F @ 64 MHz), the same pre-processing will take around 480 ms. The parameters chosen, such as the FFT length [512], will significantly impact the latency.

+

Now, let’s Save parameters and move to the Generated features tab, where the actual features will be generated. Using UMAP, a dimension reduction technique, the Feature explorer shows how the features are distributed on a two-dimensional plot.

+
+
+

+
+
+

The result seems OK, with a visually clear separation between yes features (in red) and no features (in blue). The unknown features seem nearer to the no space than the yes. This suggests that the keyword no has more propensity to false positives.

+
+
+

Going under the hood

+

To understand better how the raw sound is preprocessed, look at the Feature Engineering for Audio Classification chapter. You can play with the MFCC features generation by downloading this notebook from GitHub or [Opening it In Colab]

+
+
+
+

Model Design and Training

+

We will use a simple Convolution Neural Network (CNN) model, tested with 1D and 2D convolutions. The basic architecture has two blocks of Convolution + MaxPooling ([8] and [16] filters, respectively) and a Dropout of [0.25] for the 1D and [0.5] for the 2D. For the last layer, after Flattening, we have [4] neurons, one for each class:

+
+
+

+
+
+

As hyperparameters, we will have a Learning Rate of [0.005] and a model trained by [100] epochs. We will also include a data augmentation method based on SpecAugment. We trained the 1D and the 2D models with the same hyperparameters. The 1D architecture had a better overall result (90.5% accuracy compared with 88% for the 2D), so we will use the 1D.

+
+
+

+
+
+
+

Using 1D convolutions is more efficient because it requires fewer parameters than 2D convolutions, making them more suitable for resource-constrained environments.
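The parameter saving can be illustrated with the standard conv-layer formulas: a 1D kernel of size k contributes c_in · c_out · k weights, while a 2D kernel of size k × k contributes c_in · c_out · k². The channel counts below are arbitrary examples, not the exact layer shapes of the EIS model:

```python
def conv1d_params(c_in, c_out, k):
    # weights (c_in * c_out * k) plus one bias per output channel
    return c_in * c_out * k + c_out

def conv2d_params(c_in, c_out, k):
    # weights (c_in * c_out * k * k) plus one bias per output channel
    return c_in * c_out * k * k + c_out

# Same channel counts, kernel size 3: the 2D layer needs roughly 3x the weights.
print(conv1d_params(8, 16, 3))   # 400
print(conv2d_params(8, 16, 3))   # 1168
```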

+
+

It is also interesting to pay attention to the 1D Confusion Matrix. The F1 Score for yes is 95%, and for no, 91%. That was expected by what we saw with the Feature Explorer (no and unknown at close distance). In trying to improve the result, you can inspect closely the results of the samples with an error.

+
+
+

+
+
+

Listen to the samples that went wrong. For example, for yes, most of the mistakes were related to a yes pronounced as “yeh”. You can acquire additional samples and then retrain your model.

+
+

Going under the hood

+

If you want to understand what is happening “under the hood,” you can download the pre-processed dataset (MFCC training data) from the Dashboard tab and run this Jupyter Notebook, playing with the code or [Opening it In Colab]. For example, you can analyze the accuracy by each epoch:

+
+
+

+
+
+
+
+
+

Testing

+

Testing the model with the data reserved for testing (Test Data), we got an accuracy of approximately 76%.

+
+
+

+
+
+

Inspecting the F1 score, we can see that for YES, we got 0.90, an excellent result since we expect to use this keyword as the primary “trigger” for our KWS project. The worst result (0.70) is for UNKNOWN, which is OK.

+

For NO, we got 0.72, which was expected, but to improve this result, we can move the samples that were not correctly classified to the training dataset and then repeat the training process.

+
+

Live Classification

+

We can proceed to the next step of the project, but consider that it is also possible to perform Live Classification using the NiclaV or a smartphone to capture live samples, testing the trained model before deployment on our device.

+
+
+
+

Deploy and Inference

+

The EIS will package all the needed libraries, preprocessing functions, and trained models, downloading them to your computer. Go to the Deployment section, select Arduino Library, and at the bottom, choose Quantized (Int8) and press Build.

+
+
+

+
+
+

When the Build button is selected, a zip file will be created and downloaded to your computer. On your Arduino IDE, go to the Sketch tab, select the option Add .ZIP Library, and Choose the .zip file downloaded by EIS:

+
+
+

+
+
+

Now, it is time for a real test. We will make inferences while completely disconnected from the EIS. Let’s use the NiclaV code example created when we deployed the Arduino Library.

+

In your Arduino IDE, go to the File/Examples tab, look for your project, and select nicla-vision/nicla-vision_microphone (or nicla-vision_microphone_continuous).

+
+
+

+
+
+

Press the reset button twice to put the NiclaV in boot mode, upload the sketch to your board, and test some real inferences:

+
+
+

+
+
+
+
+

Post-processing

+

Now that we know the model is working since it detects our keywords, let’s modify the code to see the result with the NiclaV completely offline (disconnected from the PC and powered by a battery, a power bank, or an independent 5V power supply).

+

The idea is that whenever the keyword YES is detected, the Green LED will light; if a NO is heard, the Red LED will light; if it is UNKNOWN, the Blue LED will light; and in the presence of noise (no keyword), the LEDs will be OFF.

+

We should modify one of the code examples. Let’s do it now with the nicla-vision_microphone_continuous.

+

Start with initializing the LEDs:

+
...
+void setup()
+{
+        // Once you finish debugging your code, you can comment or delete the Serial part of the code
+    Serial.begin(115200);
+    while (!Serial);
+    Serial.println("Inferencing - Nicla Vision KWS with LEDs");
+    
+    // Pins for the built-in RGB LEDs on the Arduino NiclaV
+    pinMode(LEDR, OUTPUT);
+    pinMode(LEDG, OUTPUT);
+    pinMode(LEDB, OUTPUT);
+
+    // Ensure the LEDs are OFF by default.
+    // Note: The RGB LEDs on the Arduino Nicla Vision
+    // are ON when the pin is LOW, OFF when HIGH.
+    digitalWrite(LEDR, HIGH);
+    digitalWrite(LEDG, HIGH);
+    digitalWrite(LEDB, HIGH);
+...
+}
+

Create two functions. The first, turn_off_leds(), turns off all the RGB LEDs:

+
/**
+ * @brief      turn_off_leds function - turn-off all RGB LEDs
+ */
+void turn_off_leds(){
+    digitalWrite(LEDR, HIGH);
+    digitalWrite(LEDG, HIGH);
+    digitalWrite(LEDB, HIGH);
+}
+

The second, turn_on_leds(), turns on the RGB LED corresponding to the most probable result of the classifier.

+
/**
+ * @brief      turn_on_leds function used to turn on the RGB LEDs
+ * @param[in]  pred_index     
+ *             no:       [0] ==> Red ON
+ *             noise:    [1] ==> ALL OFF 
+ *             unknown:  [2] ==> Blue ON
+ *             Yes:      [3] ==> Green ON
+ */
+void turn_on_leds(int pred_index) {
+  switch (pred_index)
+  {
+    case 0:
+      turn_off_leds();
+      digitalWrite(LEDR, LOW);
+      break;
+
+    case 1:
+      turn_off_leds();
+      break;
+    
+    case 2:
+      turn_off_leds();
+      digitalWrite(LEDB, LOW);
+      break;
+
+    case 3:
+      turn_off_leds();
+      digitalWrite(LEDG, LOW);
+      break;
+  }
+}
+

And change the // print the predictions portion of the code on loop():

+
...
+
+    if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) {
+        // print the predictions
+        ei_printf("Predictions ");
+        ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
+            result.timing.dsp, result.timing.classification, result.timing.anomaly);
+        ei_printf(": \n");
+
+        int pred_index = 0;     // Initialize pred_index
+        float pred_value = 0;   // Initialize pred_value
+
+        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
+            if (result.classification[ix].value > pred_value){
+                pred_index = ix;
+                pred_value = result.classification[ix].value;
+            }
+            // ei_printf("    %s: ", result.classification[ix].label);
+            // ei_printf_float(result.classification[ix].value);
+            // ei_printf("\n");
+        }
+        ei_printf("  PREDICTION: ==> %s with probability %.2f\n", 
+                  result.classification[pred_index].label, pred_value);
+        turn_on_leds (pred_index);
+
+        
+#if EI_CLASSIFIER_HAS_ANOMALY == 1
+        ei_printf("    anomaly score: ");
+        ei_printf_float(result.anomaly);
+        ei_printf("\n");
+#endif
+
+        print_results = 0;
+    }
+}
+
+...
+

You can find the complete code on the project’s GitHub.

+

Upload the sketch to your board and test some real inferences. The idea is that the Green LED will be ON whenever the keyword YES is detected, the Red LED will light for a NO, and any other word will turn on the Blue LED. All the LEDs should be off if silence or background noise is present. Remember that the same procedure can “trigger” an external device to perform a desired action instead of turning on an LED, as we saw in the introduction.

+
+
+
+

Conclusion

+
+

You will find the notebooks and codes used in this hands-on tutorial on the GitHub repository.

+
+

Before we finish, consider that Sound Classification is more than just voice. For example, you can develop TinyML projects around sound in several areas, such as:

+
    +
  • Security (Broken Glass detection, Gunshot)
  • +
  • Industry (Anomaly Detection)
  • +
  • Medical (Snore, Cough, Pulmonary diseases)
  • +
  • Nature (Beehive control, insect sounds, poaching mitigation)
  • +
+

2  ML Systems

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Illustration in a rectangular format depicting the merger of embedded systems with Embedded AI. The left half of the image portrays traditional embedded systems, including microcontrollers and processors, detailed and precise. The right half showcases the world of artificial intelligence, with abstract representations of machine learning models, neurons, and data flow. The two halves are distinctly separated, emphasizing the individual significance of embedded tech and AI, but they come together in harmony at the center.
+
+
+

Machine learning (ML) systems, built on the foundation of computing systems, hold the potential to transform our world. These systems, with their specialized roles and real-time computational capabilities, represent a critical junction where data and computation meet on a micro-scale. They are specifically tailored to optimize performance, energy usage, and spatial efficiency—key factors essential for the successful implementation of ML systems.

+

As this chapter progresses, we will explore embedded systems’ complex and fascinating world. We’ll gain insights into their structural design and operational features and understand their pivotal role in powering ML applications. Starting with the basics of microcontroller units, we will examine the interfaces and peripherals that enhance their functionalities. This chapter is designed to be a comprehensive guide elucidating the nuanced aspects of embedded systems within the ML systems framework.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Acquire a comprehensive understanding of ML systems, including their definitions, architecture, and programming languages, focusing on the evolution and significance of TinyML.

  • +
  • Explore the design and operational principles of ML systems, including the use of a microcontroller rather than a microprocessor, memory management, System-on-chip (SoC) integration, and the development and deployment of machine learning models.

  • +
  • Examine the interfaces, power management, and real-time operating characteristics essential for efficient ML systems alongside energy efficiency, reliability, and security considerations.

  • +
  • Investigate the distinctions, benefits, challenges, and use cases for Cloud ML, Edge ML, and TinyML, emphasizing selecting the appropriate machine learning approach based on specific application needs and the evolving landscape of embedded systems in machine learning.

  • +
+
+
+
+

2.1 Machine Learning Systems

+

ML is rapidly evolving, with new paradigms reshaping how models are developed, trained, and deployed. One such paradigm is embedded machine learning, which is experiencing significant innovation driven by the proliferation of smart sensors, edge devices, and microcontrollers. Embedded machine learning refers to the integration of machine learning algorithms into the hardware of a device, enabling real-time data processing and analysis without relying on cloud connectivity. This chapter explores the landscape of embedded machine learning, covering the key approaches of Cloud ML, Edge ML, and TinyML (Figure fig-cloud-edge-tinyml-comparison).

+
+
+
+ +
+
+Figure 2.1: Cloud vs. Edge vs. TinyML: The Spectrum of Distributed Intelligence. Credit: Massimo Banzi – Arduino. +
+
+
+

We begin by outlining each embedded ML variant’s features or characteristics, benefits, challenges, and use cases. This provides context on where these technologies do well and where they face limitations. We then combine all three approaches into a comparative analysis, evaluating them across critical parameters like latency, privacy, computational demands, and more. This side-by-side perspective highlights the unique strengths and tradeoffs of selecting these strategies.

+

Next, we trace the evolution timeline of embedded systems and machine learning, from the origins of wireless sensor networks to the integration of ML algorithms into microcontrollers. This historical lens enriches our understanding of the rapid pace of advancement in this domain. Finally, practical hands-on exercises offer an opportunity to experiment first-hand with embedded computer vision applications.

+

By the end of this multipronged exploration of embedded ML, you will possess the conceptual and practical knowledge to determine the appropriate ML implementation for your specific use case constraints. The chapter aims to equip you with the contextual clarity and technical skills to navigate this quickly shifting landscape, empowering impactful innovations.

+
+
+

2.2 Cloud ML

+
+

2.2.1 Characteristics

+

Cloud ML is a specialized branch of the broader machine learning field within cloud computing environments. It offers a virtual platform for developing, training, and deploying machine learning models, providing flexibility and scalability.

+

At its foundation, Cloud ML utilizes a powerful blend of high-capacity servers, expansive storage solutions, and robust networking architectures, all located in data centers worldwide (Figure fig-cloudml-example). This setup centralizes computational resources, simplifying the management and scaling of machine learning projects.

+

The cloud environment excels in data processing and model training and is designed to manage large data volumes and complex computations. Models crafted in Cloud ML can leverage vast amounts of data, processed and analyzed centrally, thereby enhancing the model’s learning and predictive performance.

+
+
+
+ +
+
+Figure 2.2: Cloud TPU data center at Google. Credit: Google. +
+
+
+
+
+

2.2.2 Benefits

+

Cloud ML is synonymous with immense computational power, adept at handling complex algorithms and large datasets. This is particularly advantageous for machine learning models that demand significant computational resources, effectively circumventing the constraints of local setups.

+

A key advantage of Cloud ML is its dynamic scalability. As data volumes or computational needs grow, the infrastructure can adapt seamlessly, ensuring consistent performance.

+

Cloud ML platforms often offer a wide array of advanced tools and algorithms. Developers can utilize these resources to accelerate the building, training, and deployment of sophisticated models, fostering innovation.

+
+
+

2.2.3 Challenges

+

Despite its capabilities, Cloud ML can face latency issues, especially in applications that require real-time responses. The time taken to send data to centralized servers and back can introduce delays, a significant drawback in time-sensitive scenarios.

+

Centralizing data processing and storage can also create data privacy and security vulnerabilities. Data centers become attractive targets for cyber-attacks, requiring substantial investments in security measures to protect sensitive data.

+

Additionally, as data processing needs escalate, so do the costs of using cloud services. Organizations dealing with large data volumes may encounter rising costs, which could affect the long-term scalability and feasibility of their operations.

+
+
+

2.2.4 Example Use Cases

+

Cloud ML plays an important role in powering virtual assistants like Siri and Alexa. These systems harness the cloud’s computational prowess to analyze and process voice inputs, delivering intelligent and personalized responses to users.

+

It also provides the foundation for advanced recommendation systems in platforms like Netflix and Amazon. These systems sift through extensive datasets to identify patterns and preferences, offering personalized content or product suggestions to boost user engagement.

+

In the financial realm, Cloud ML has created robust fraud detection systems. These systems scrutinize vast amounts of transactional data to flag potential fraudulent activities, enabling timely interventions and reducing financial risks.

+

In summary, it’s virtually impossible to navigate the internet today without encountering some form of Cloud ML, directly or indirectly. From the personalized ads on your social media feed to the predictive text features in email services, Cloud ML is deeply integrated into our online experiences. It powers smart algorithms that recommend products on e-commerce sites, fine-tunes search engines to deliver accurate results, and even automates the tagging and categorization of photos on platforms like Facebook.

+

Furthermore, Cloud ML bolsters user security through anomaly detection systems that monitor for unusual activities, potentially shielding users from cyber threats. It acts as the unseen powerhouse, continuously operating behind the scenes to refine, secure, and personalize our digital interactions, making the modern internet a more intuitive and user-friendly environment.

+
+
+
+

2.3 Edge ML

+
+

2.3.1 Characteristics

+

Definition of Edge ML

+

Edge Machine Learning (Edge ML) runs machine learning algorithms directly on endpoint devices or closer to where the data is generated rather than relying on centralized cloud servers. This approach aims to bring computation closer to the data source, reducing the need to send large volumes of data over networks, often resulting in lower latency and improved data privacy.

+

Decentralized Data Processing

+

In Edge ML, data processing happens in a decentralized fashion. Instead of sending data to remote servers, the data is processed locally on devices like smartphones, tablets, or IoT devices (Figure fig-edgeml-example). This local processing allows devices to make quick decisions based on the data they collect without relying heavily on a central server’s resources. This decentralization is particularly important in real-time applications where even a slight delay can have significant consequences.

+

Local Data Storage and Computation

+

Local data storage and computation are key features of Edge ML. This setup ensures that data can be stored and analyzed directly on the devices, thereby maintaining the privacy of the data and reducing the need for constant internet connectivity. Moreover, this often leads to more efficient computation, as data doesn’t have to travel long distances, and computations are performed with a more nuanced understanding of the local context, which can sometimes result in more insightful analyses.

+
+
+
+ +
+
+Figure 2.3: Edge ML Examples. Credit: Edge Impulse. +
+
+
+
+
+

2.3.2 Benefits

+

Reduced Latency

+

One of Edge ML’s main advantages is the significant latency reduction compared to Cloud ML. This reduced latency can be a critical benefit in situations where milliseconds count, such as in autonomous vehicles, where quick decision-making can mean the difference between safety and an accident.

+

Enhanced Data Privacy

+

Edge ML also offers improved data privacy, as data is primarily stored and processed locally. This minimizes the risk of data breaches that are more common in centralized data storage solutions. Sensitive information can be kept more secure, as it’s not sent over networks that could be intercepted.

+

Lower Bandwidth Usage

+

Operating closer to the data source means less data must be sent over networks, reducing bandwidth usage. This can result in cost savings and efficiency gains, especially in environments where bandwidth is limited or costly.

+
+
+

2.3.3 Challenges

+

Limited Computational Resources Compared to Cloud ML

+

However, Edge ML has its challenges. One of the main concerns is the limited computational resources compared to cloud-based solutions. Endpoint devices typically have less processing power and storage capacity than cloud servers, limiting the complexity of the machine learning models that can be deployed.

+

Complexity in Managing Edge Nodes

+

Managing a network of edge nodes can introduce complexity, especially regarding coordination, updates, and maintenance. Ensuring all nodes operate seamlessly and are up-to-date with the latest algorithms and security protocols can be a logistical challenge.

+

Security Concerns at the Edge Nodes

+

While Edge ML offers enhanced data privacy, edge nodes can sometimes be more vulnerable to physical and cyber-attacks. Developing robust security protocols that protect data at each node without compromising the system’s efficiency remains a significant challenge in deploying Edge ML solutions.

+
+
+

2.3.4 Example Use Cases

+

Edge ML has many applications, from autonomous vehicles and smart homes to industrial IoT. These examples were chosen to highlight scenarios where real-time data processing, reduced latency, and enhanced privacy are not just beneficial but often critical to the operation and success of these technologies. They demonstrate the pivotal role that Edge ML can play in driving advancements in various sectors, fostering innovation, and paving the way for more intelligent, responsive, and adaptive systems.

+

Autonomous Vehicles

+

Autonomous vehicles stand as a prime example of Edge ML’s potential. These vehicles rely heavily on real-time data processing to navigate and make decisions. Localized machine learning models assist in quickly analyzing data from various sensors to make immediate driving decisions, ensuring safety and smooth operation.

+

Smart Homes and Buildings

+

Edge ML plays a crucial role in efficiently managing various systems in smart homes and buildings, from lighting and heating to security. By processing data locally, these systems can operate more responsively and harmoniously with the occupants’ habits and preferences, creating a more comfortable living environment.

+

Industrial IoT

+

The Industrial Internet of Things (IoT) leverages Edge ML to monitor and control complex industrial processes. Here, machine learning models can analyze data from numerous sensors in real-time, enabling predictive maintenance, optimizing operations, and enhancing safety measures. This is driving a revolution in industrial automation and efficiency.

+

The applicability of Edge ML is vast and not limited to these examples. Various other sectors, including healthcare, agriculture, and urban planning, are exploring and integrating Edge ML to develop innovative solutions responsive to real-world needs and challenges, heralding a new era of smart, interconnected systems.

+
+
+
+

2.4 Tiny ML

+
+

2.4.1 Characteristics

+

Definition of TinyML

+

TinyML sits at the crossroads of embedded systems and machine learning, representing a burgeoning field that brings smart algorithms directly to tiny microcontrollers and sensors. These microcontrollers operate under severe resource constraints, particularly regarding memory, storage, and computational power (see a TinyML kit example in Figure fig-tinyml-example).

+

On-Device Machine Learning

+

In TinyML, the focus is on on-device machine learning. This means that machine learning models are deployed and run directly on the device, eliminating the need for external servers or cloud infrastructures at inference time. This allows TinyML to enable intelligent decision-making right where the data is generated, making real-time insights and actions possible, even in settings where connectivity is limited or unavailable.

+

Low Power and Resource-Constrained Environments

+

TinyML excels in low-power and resource-constrained settings. These environments require highly optimized solutions that function within the available resources. TinyML meets this need through specialized algorithms and models designed to deliver decent performance while consuming minimal energy, thus ensuring extended operational periods, even in battery-powered devices.

+
+
+
+ +
+
+Figure 2.4: Examples of TinyML device kits. Credit: Widening Access to Applied Machine Learning with TinyML. +
+
+
+
+

Exercise 2.1 (TinyML with Arduino)  

+
+
+ +
+
+

Get ready to bring machine learning to the smallest of devices! In the embedded machine learning world, TinyML is where resource constraints meet ingenuity. This Colab notebook will walk you through building a gesture recognition model designed to run on an Arduino board. You’ll learn how to train a small but effective neural network, optimize it for minimal memory usage, and deploy it to your microcontroller. If you’re excited about making everyday objects smarter, this is where it begins!

+

+
+
+
+
+
+

2.4.2 Benefits

+

Extremely Low Latency

+

One of the standout benefits of TinyML is its ability to offer ultra-low latency. Since computation occurs directly on the device, the time required to send data to external servers and receive a response is eliminated. This is crucial in applications requiring immediate decision-making, enabling quick responses to changing conditions.

+

High Data Security

+

TinyML inherently enhances data security. Because data processing and analysis happen on the device, the risk of data interception during transmission is virtually eliminated. This localized approach to data management ensures that sensitive information stays on the device, strengthening user data security.

+

Energy Efficiency

+

TinyML operates within an energy-efficient framework, a necessity given its resource-constrained environments. By employing lean algorithms and optimized computational methods, TinyML ensures that devices can execute complex tasks without rapidly depleting battery life, making it a sustainable option for long-term deployments.

+
+
+

2.4.3 Challenges

+

Limited Computational Capabilities

+

However, the shift to TinyML comes with its set of hurdles. The primary limitation is the devices’ constrained computational capabilities. The need to operate within such limits means that deployed models must be simplified, which could affect the accuracy and sophistication of the solutions.

+

Complex Development Cycle

+

TinyML also introduces a complicated development cycle. Crafting lightweight and effective models demands a deep understanding of machine learning principles and expertise in embedded systems. This complexity calls for a collaborative development approach, where multi-domain expertise is essential for success.

+

Model Optimization and Compression

+

A central challenge in TinyML is model optimization and compression. Creating machine learning models that can operate effectively within the limited memory and computational power of microcontrollers requires innovative approaches to model design. Developers often face the challenge of striking a delicate balance and optimizing models to maintain effectiveness while fitting within stringent resource constraints.

+
+
+

2.4.4 Example Use Cases

+

Wearable Devices

+

In wearables, TinyML opens the door to smarter, more responsive gadgets. From fitness trackers offering real-time workout feedback to smart glasses processing visual data on the fly, TinyML transforms how we engage with wearable tech, delivering personalized experiences directly from the device.

+

Predictive Maintenance

+

In industrial settings, TinyML plays a significant role in predictive maintenance. By deploying TinyML algorithms on sensors that monitor equipment health, companies can preemptively identify potential issues, reducing downtime and preventing costly breakdowns. On-site data analysis ensures quick responses, potentially stopping minor issues from becoming major problems.

+

Anomaly Detection

+

TinyML can be employed to create anomaly detection models that identify unusual data patterns. For instance, a smart factory could use TinyML to monitor industrial processes and spot anomalies, helping prevent accidents and improve product quality. Similarly, a security company could use TinyML to monitor network traffic for unusual patterns, aiding in detecting and preventing cyber-attacks. In healthcare, TinyML could monitor patient data for anomalies, aiding early disease detection and better patient treatment.

+

Environmental Monitoring

+

In environmental monitoring, TinyML enables real-time data analysis from various field-deployed sensors. These could range from city air quality monitoring to wildlife tracking in protected areas. Through TinyML, data can be processed locally, allowing for quick responses to changing conditions and providing a nuanced understanding of environmental patterns, crucial for informed decision-making.

+

In summary, TinyML serves as a trailblazer in the evolution of machine learning, fostering innovation across various fields by bringing intelligence directly to the edge. Its potential to transform our interaction with technology and the world is immense, promising a future where devices are connected, intelligent, and capable of making real-time decisions and responses.

+
+
+
+

2.5 Comparison

+

Up to this point, we’ve explored each of the different ML variants individually. Now, let’s bring them all together for a comprehensive view. Table tbl-big_vs_tiny offers a comparative analysis of Cloud ML, Edge ML, and TinyML based on various features and aspects. This comparison aims to provide a clear perspective on the unique advantages and distinguishing factors, aiding in making informed decisions based on the specific needs and constraints of a given application or project.

+
+
+
+Table 2.1: Comparison of feature aspects across Cloud ML, Edge ML, and TinyML. +
+
+ ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Feature/AspectCloud MLEdge MLTinyML
Processing LocationCentralized servers (Data Centers)Local devices (closer to data sources)On-device (microcontrollers, embedded systems)
LatencyHigh (Depends on internet connectivity)Moderate (Reduced latency compared to Cloud ML)Low (Immediate processing without network delay)
Data PrivacyModerate (Data transmitted over networks)High (Data remains on local networks)Very High (Data processed on-device, not transmitted)
Computational PowerHigh (Utilizes powerful data center infrastructure)Moderate (Utilizes local device capabilities)Low (Limited to the power of the embedded system)
Energy ConsumptionHigh (Data centers consume significant energy)Moderate (Less than data centers, more than TinyML)Low (Highly energy-efficient, designed for low power)
ScalabilityHigh (Easy to scale with additional server resources)Moderate (Depends on local device capabilities)Low (Limited by the hardware resources of the device)
CostHigh (Recurring costs for server usage, maintenance)Variable (Depends on the complexity of local setup)Low (Primarily upfront costs for hardware components)
Connectivity DependenceHigh (Requires stable internet connectivity)Low (Can operate with intermittent connectivity)Very Low (Can operate without any network connectivity)
Real-time ProcessingModerate (Can be affected by network latency)High (Capable of real-time processing locally)Very High (Immediate processing with minimal latency)
Application ExamplesBig Data Analysis, Virtual AssistantsAutonomous Vehicles, Smart HomesWearables, Sensor Networks
Development ComplexityModerate to High (Requires knowledge in cloud computing)Moderate (Requires knowledge in local network setup)Moderate to High (Requires expertise in embedded systems)
+
+
+
+
+
+

2.6 Conclusion

+

In this chapter, we’ve offered a panoramic view of the evolving landscape of machine learning, covering cloud, edge, and tiny ML paradigms. Cloud-based machine learning leverages the immense computational resources of cloud platforms to enable powerful and accurate models but comes with limitations, including latency and privacy concerns. Edge ML mitigates these limitations by bringing inference directly to edge devices, offering lower latency and reduced connectivity needs. TinyML takes this further by miniaturizing ML models to run directly on highly resource-constrained devices, opening up a new category of intelligent applications.

+

Each approach has its tradeoffs, including model complexity, latency, privacy, and hardware costs. Over time, we anticipate a convergence of these embedded ML approaches, with cloud pre-training facilitating more sophisticated edge and tiny ML implementations. Advances like federated learning and on-device learning will enable embedded devices to refine their models by learning from real-world data.

+

The embedded ML landscape is rapidly evolving and poised to enable intelligent applications across a broad spectrum of devices and use cases. This chapter serves as a snapshot of the current state of embedded ML. As algorithms, hardware, and connectivity continue to improve, we can expect embedded devices of all sizes to become increasingly capable, unlocking transformative new applications for artificial intelligence.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+

Coming soon.

+
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/motion_classify_ad/motion_classify_ad.html b/contents/motion_classify_ad/motion_classify_ad.html new file mode 100644 index 00000000..9bad9093 --- /dev/null +++ b/contents/motion_classify_ad/motion_classify_ad.html @@ -0,0 +1,1622 @@ + + + + + + + + + +Machine Learning Systems - Motion Classification and Anomaly Detection + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

Motion Classification and Anomaly Detection

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+

+
DALL·E 3 Prompt: 1950s style cartoon illustration depicting a movement research room. In the center of the room, there’s a simulated container used for transporting goods on trucks, boats, and forklifts. The container is detailed with rivets and markings typical of industrial cargo boxes. Around the container, the room is filled with vintage equipment, including an oscilloscope, various sensor arrays, and large paper rolls of recorded data. The walls are adorned with educational posters about transportation safety and logistics. The overall ambiance of the room is nostalgic and scientific, with a hint of industrial flair.
+
+
+
+

Introduction

+

Transportation is the backbone of global commerce. Millions of containers are transported daily via various means, such as ships, trucks, and trains, to destinations worldwide. Ensuring these containers’ safe and efficient transit is a monumental task that requires leveraging modern technologies, and TinyML is undoubtedly one of them.

+

In this hands-on tutorial, we will work to solve real-world problems related to transportation. We will develop a Motion Classification and Anomaly Detection system using the Arduino Nicla Vision board, the Arduino IDE, and the Edge Impulse Studio. This project will help us understand how containers experience different forces and motions during various phases of transportation, such as terrestrial and maritime transit, vertical movement via forklifts, and stationary periods in warehouses.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Setting up the Arduino Nicla Vision Board
  • +
  • Data Collection and Preprocessing
  • +
  • Building the Motion Classification Model
  • +
  • Implementing Anomaly Detection
  • +
  • Real-world Testing and Analysis
  • +
+
+
+

By the end of this tutorial, you’ll have a working prototype that can classify different types of motion and detect anomalies during the transportation of containers. This knowledge can be a stepping stone to more advanced projects in the burgeoning field of TinyML involving vibration.

+
+
+

IMU Installation and testing

+

For this project, we will use an accelerometer. As discussed in the Hands-On Tutorial, Setup Nicla Vision, the Nicla Vision Board has an onboard 6-axis IMU: 3D gyroscope and 3D accelerometer, the LSM6DSOX. Let’s verify if the LSM6DSOX IMU library is installed. If not, install it.

+
+
+

+
+
+

Next, go to Examples > Arduino_LSM6DSOX > SimpleAccelerometer and run the accelerometer test. You can check if it works by opening the IDE Serial Monitor or Plotter. The values are in g (earth gravity), with a default range of +/- 4g:

+
+
+

+
+
+
+

Defining the Sampling frequency:

+

Choosing an appropriate sampling frequency is crucial for capturing the motion characteristics you’re interested in studying. The Nyquist-Shannon sampling theorem states that the sampling rate should be at least twice the highest frequency component in the signal to reconstruct it properly. In the context of motion classification and anomaly detection for transportation, the choice of sampling frequency would depend on several factors:

+
    +
  1. Nature of the Motion: Different types of transportation (terrestrial, maritime, etc.) may involve different ranges of motion frequencies. Faster movements may require higher sampling frequencies.

  2. +
  3. Hardware Limitations: The Arduino Nicla Vision board and any associated sensors may have limitations on how fast they can sample data.

  4. +
  5. Computational Resources: Higher sampling rates will generate more data, which might be computationally intensive, especially critical in a TinyML environment.

  6. +
  7. Battery Life: A higher sampling rate will consume more power. If the system is battery-operated, this is an important consideration.

  8. +
  9. Data Storage: More frequent sampling will require more storage space, another crucial consideration for embedded systems with limited memory.

  10. +
+

In many human activity recognition tasks, sampling rates of around 50 Hz to 100 Hz are commonly used. Given that we are simulating transportation scenarios, which are generally not high-frequency events, a sampling rate in that range (50-100 Hz) might be a reasonable starting point.

+

Let’s define a sketch that will allow us to capture our data with a defined sampling frequency (for example, 50Hz):

+
/*
+ * Based on Edge Impulse Data Forwarder Example (Arduino)
+  - https://docs.edgeimpulse.com/docs/cli-data-forwarder
+ * Developed by M.Rovai @11May23
+ */
+
+/* Include ----------------------------------------------------------------- */
+#include <Arduino_LSM6DSOX.h>
+
+/* Constant defines -------------------------------------------------------- */
+#define CONVERT_G_TO_MS2 9.80665f
+#define FREQUENCY_HZ        50
+#define INTERVAL_MS         (1000 / (FREQUENCY_HZ + 1))
+
+static unsigned long last_interval_ms = 0;
+float x, y, z;
+
+void setup() {
+  Serial.begin(9600);
+  while (!Serial);
+
+  if (!IMU.begin()) {
+    Serial.println("Failed to initialize IMU!");
+    while (1);
+  }
+}
+
+void loop() {
+  if (millis() > last_interval_ms + INTERVAL_MS) {
+    last_interval_ms = millis();
+    
+    if (IMU.accelerationAvailable()) {
+      // Read raw acceleration measurements from the device
+      IMU.readAcceleration(x, y, z);
+
+      // converting to m/s2
+      float ax_m_s2 = x * CONVERT_G_TO_MS2;
+      float ay_m_s2 = y * CONVERT_G_TO_MS2;
+      float az_m_s2 = z * CONVERT_G_TO_MS2;
+
+      Serial.print(ax_m_s2); 
+      Serial.print("\t");
+      Serial.print(ay_m_s2); 
+      Serial.print("\t");
+      Serial.println(az_m_s2); 
+    }
+  }
+}
+

Uploading the sketch and inspecting the Serial Monitor, we can see that we are capturing 50 samples per second.

+
+
+

+
+
+
+

Note that with the Nicla board resting on a table (with the camera facing down), the z-axis measures around 9.8m/s\(^2\), the expected earth acceleration.

+
+
+
+
+

The Case Study: Simulated Container Transportation

+

We will simulate container (or, more accurately, package) transportation through different scenarios to make this tutorial more relatable and practical. Using the built-in accelerometer of the Arduino Nicla Vision board, we’ll capture motion data by manually simulating the conditions of:

+
    +
  1. Terrestrial Transportation (by road or train)
  2. +
  3. Maritime-associated Transportation
  4. +
  5. Vertical Movement via Fork-Lift
  6. +
  7. Stationary (Idle) period in a Warehouse
  8. +
+
+
+

+
+
+

From the above images, we can define for our simulation that primarily horizontal movements (x or y axis) should be associated with the “Terrestrial” class, vertical movements (z-axis) with the “Lift” class, no activity with the “Idle” class, and movement on all three axes with the “Maritime” class.

+
+
+

+
+
+
+
+

Data Collection

+

For data collection, we have several options. In a real deployment, the device could be attached directly to a container, with the data collected into a file (for example, .CSV) and stored on an SD card (via SPI) or in an offline repository on your computer. Data can also be sent remotely to a nearby repository, such as a mobile phone, using Bluetooth (as done in this project: Sensor DataLogger). Once your dataset is collected and stored as a .CSV file, it can be uploaded to the Studio using the CSV Wizard tool.

+
+

In this video, you can learn alternative ways to send data to the Edge Impulse Studio.

+
+
+

Connecting the device to Edge Impulse

+

We will connect the Nicla directly to the Edge Impulse Studio, which will also be used for data pre-processing, model training, testing, and deployment. For that, you have two options:

1. Download the latest firmware and connect it directly to the Data Collection section.
2. Use the CLI Data Forwarder tool to capture sensor data from the sensor and send it to the Studio.

Option 1 is more straightforward, as we saw in the Setup Nicla Vision hands-on, but option 2 will give you more flexibility regarding capturing your data, such as sampling frequency definition. Let’s do it with the last one.

+

Please create a new project on the Edge Impulse Studio (EIS) and connect the Nicla to it, following these steps:

1. Install the Edge Impulse CLI and Node.js on your computer.
2. Upload a sketch for data capture (the one discussed previously in this tutorial).
3. Use the CLI Data Forwarder to capture data from the Nicla’s accelerometer and send it to the Studio, as shown in this diagram:
+
+

+
+
+

Start the CLI Data Forwarder on your terminal, entering (if it is the first time) the following command:

+
$ edge-impulse-data-forwarder --clean
+

Next, enter your EI credentials and choose your project, variables (for example, accX, accY, and accZ), and device name (for example, NiclaV):

+
+
+

+
+
+

Go to the Devices section on your EI Project and verify if the device is connected (the dot should be green):

+
+
+

+
+
+
+

You can clone the project developed for this hands-on: NICLA Vision Movement Classification.


Data Collection

+

In the Data Acquisition section, you should see that your board [NiclaV] is connected. The sensor is available: [sensor with 3 axes (accX, accY, accZ)] with a sampling frequency of [50Hz]. The Studio suggests a sample length of [10000] ms (10s). The last thing left is defining the sample label. Let’s start with [terrestrial]:

+
+
+

+
+
+

Terrestrial (pallets in a truck or train), moving horizontally. Press [Start Sample] and move your device horizontally, keeping to one direction over your table. After 10 seconds, your data will be uploaded to the Studio. Here is how the sample was collected:

+
+
+

+
+
+

As expected, the movement was captured mainly on the Y-axis (green). In blue, we see the Z-axis, around -10 m/s\(^2\) (the Nicla has the camera facing up).

+

As discussed before, we should capture data from all four Transportation Classes. So, imagine that you have a container with a built-in accelerometer facing the following situations:

+

Maritime (pallets on boats in a rough ocean). The movement is captured on all three axes:

+
+
+

+
+
+

Lift (pallets being handled vertically by a forklift). Movement is captured only in the Z-axis:

+
+
+

+
+
+

Idle (pallets in a warehouse). No movement detected by the accelerometer:

+
+
+

+
+
+

You can capture, for example, 2 minutes (twelve 10-second samples) for each of the four classes (a total of 8 minutes of data). Using the three-dot menu next to each sample, select 2 samples per class, reserving them for the Test set. Alternatively, you can use the automatic Train/Test Split tool in the Danger Zone of the Dashboard tab. Below, you can see the resulting dataset:

+
+
+

+
+
+

Once you have captured your dataset, you can explore it in more detail using the Data Explorer, a visual tool to find outliers or mislabeled data (helping to correct them). The data explorer first tries to extract meaningful features from your data (by applying signal processing and neural network embeddings) and then uses a dimensionality reduction algorithm such as PCA or t-SNE to map these features to a 2D space. This gives you a one-look overview of your complete dataset.

+
+
+

+
+
+

In our case, the dataset looks good (good separation). However, the PCA shows we could have issues between maritime (green) and lift (orange). This is expected since, on a boat, the movement can sometimes be only vertical.

+
+
+
+

Impulse Design

+

The next step is the definition of our Impulse, which takes the raw data and uses signal processing to extract features, passing them as the input tensor of a learning block to classify new data. Go to Impulse Design and Create Impulse. The Studio will suggest the basic design. Let’s also add a second Learning Block for Anomaly Detection.

+
+
+

+
+
+

This second block uses a K-means model. If we imagine our known classes as clusters, any sample that does not fit one of them could be an outlier, an anomaly such as a container rolling off a ship on the ocean or falling from a forklift.
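The intuition can be sketched in a few lines (an illustration only, not the Studio’s implementation): score a new feature vector by its distance to the nearest learned cluster centroid, and flag large distances as anomalies.

```python
import math

def anomaly_score(sample, centroids):
    """Distance from a feature vector to its nearest cluster centroid.
    A large score means the sample fits none of the known clusters."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dist(sample, c) for c in centroids)

# Two toy 2-D centroids standing in for clusters learned from "normal" data:
centroids = [(0.0, 0.0), (10.0, 10.0)]
normal_score = anomaly_score((0.2, -0.1), centroids)   # close to a centroid
outlier_score = anomaly_score((5.0, 25.0), centroids)  # far from both
```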

+
+
+

+
+
+

The sampling frequency should be automatically captured; if not, enter it: [50]Hz. The Studio suggests a Window Size of 2 seconds ([2000] ms) with a sliding window of [20]ms. What we are defining in this step is that we will pre-process the captured data (time-series data), creating a tabular dataset (features) that will be the input for a Neural Network Classifier (DNN) and an Anomaly Detection model (K-means), as shown below:

+
+
+

+
+
+

Let’s dig into those steps and parameters to understand better what we are doing here.

+
+

Data Pre-Processing Overview

+

Data pre-processing means extracting features from the dataset captured with the accelerometer, which involves processing and analyzing the raw data. Accelerometers measure the acceleration of an object along one or more axes (typically three, denoted X, Y, and Z). These measurements can be used to understand various aspects of the object’s motion, such as movement patterns and vibrations.

+

Raw accelerometer data can be noisy and contain errors or irrelevant information. Preprocessing steps, such as filtering and normalization, can clean and standardize the data, making it more suitable for feature extraction. In our case, we should divide the data into smaller segments or windows. This can help focus on specific events or activities within the dataset, making feature extraction more manageable and meaningful. The choice of window size and overlap (window increase) depends on the application and the frequency of the events of interest. As a rule of thumb, we should try to capture a couple of “cycles of data” per window.

+
+

With a sampling rate (SR) of 50Hz and a window size of 2 seconds, we will get 100 samples per axis, or 300 in total (3 axes × 2 seconds × 50 samples/second). We will slide this window every 200ms, creating a larger dataset where each instance has 300 raw features.
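That arithmetic can be checked with a short sketch (plain Python, not Edge Impulse code; the 10-second length is the sample duration used during data collection):

```python
SR_HZ = 50        # sampling rate
WINDOW_MS = 2000  # window size
STRIDE_MS = 200   # window increase (how far the window slides each step)
N_AXES = 3

samples_per_axis = SR_HZ * WINDOW_MS // 1000   # samples per axis per window
raw_features = samples_per_axis * N_AXES       # raw features per window

# Windows extracted from one 10-second sample:
SAMPLE_MS = 10_000
n_windows = (SAMPLE_MS - WINDOW_MS) // STRIDE_MS + 1
```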

+
+
+
+

+
+
+

Once the data is preprocessed and segmented, you can extract features that describe the motion’s characteristics. Some typical features extracted from accelerometer data include:

• Time-domain features describe the data’s statistical properties within each segment, such as mean, median, standard deviation, skewness, kurtosis, and zero-crossing rate.
• Frequency-domain features are obtained by transforming the data into the frequency domain using techniques like the Fast Fourier Transform (FFT). Some typical frequency-domain features include the power spectrum, spectral energy, dominant frequencies (amplitude and frequency), and spectral entropy.
• Time-frequency domain features combine the time and frequency domain information, such as the Short-Time Fourier Transform (STFT) or the Discrete Wavelet Transform (DWT). They can provide a more detailed understanding of how the signal’s frequency content changes over time.
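As an illustration, a few of the time-domain statistics above can be computed with plain Python on a single axis’s window (a minimal sketch, not the Studio’s exact implementation):

```python
import math

def time_domain_features(window):
    """Compute a few of the statistics listed above for one axis's window."""
    n = len(window)
    mean = sum(window) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    rms = math.sqrt(sum(x * x for x in window) / n)
    # Zero-crossing rate: fraction of consecutive pairs whose sign changes
    zcr = sum(1 for a, b in zip(window, window[1:]) if a * b < 0) / (n - 1)
    return {"mean": mean, "std": std, "rms": rms, "zcr": zcr}

feats = time_domain_features([1.0, -1.0, 1.0, -1.0])  # a toy oscillating window
```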

In many cases, the number of extracted features can be large, which may lead to overfitting or increased computational complexity. Feature selection techniques, such as mutual information, correlation-based methods, or principal component analysis (PCA), can help identify the most relevant features for a given application and reduce the dimensionality of the dataset. The Studio can help with such feature importance calculations.

+
+
+

EI Studio Spectral Features

+

Data preprocessing is a challenging area for embedded machine learning; still, Edge Impulse helps overcome this with its digital signal processing (DSP) preprocessing step, more specifically, the Spectral Features Block.

+

On the Studio, the collected raw dataset will be the input of a Spectral Analysis block, which is excellent for analyzing repetitive motion, such as data from accelerometers. This block will perform a DSP (Digital Signal Processing), extracting features such as FFT or Wavelets.

+

For our project, since the time signal is continuous, we should use FFT with, for example, a length of [32].

+

The per axis/channel Time Domain Statistical features are:

• RMS: 1 feature
• Skewness: 1 feature
• Kurtosis: 1 feature

The per axis/channel Frequency Domain Spectral features are:

• Spectral Power: 16 features (FFT Length/2)
• Skewness: 1 feature
• Kurtosis: 1 feature

So, for an FFT length of 32 points, the resulting output of the Spectral Analysis Block will be 21 features per axis (a total of 63 features).
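The bookkeeping behind that number can be written out explicitly (the three remaining per-axis features are the time-domain statistics listed earlier):

```python
FFT_LENGTH = 32
N_AXES = 3

spectral_power = FFT_LENGTH // 2          # 16 frequency-domain power features
freq_features = spectral_power + 1 + 1    # + spectral skewness + kurtosis
time_features = 3                         # time-domain statistics per axis
per_axis = freq_features + time_features
total_features = per_axis * N_AXES
```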

+
+

You can learn more about how each feature is calculated by downloading the notebook Edge Impulse - Spectral Features Block Analysis TinyML under the hood: Spectral Analysis or opening it directly on Google CoLab.

+
+
+
+

Generating features

+

Once we understand what the pre-processing does, it is time to finish the job. So, let’s take the raw data (time-series type) and convert it to tabular data. For that, go to the Spectral Features section on the Parameters tab, define the main parameters as discussed in the previous section ([FFT] with [32] points), and select [Save Parameters]:

+
+
+

+
+
+

At the top menu, select the Generate Features option and the Generate Features button. Each 2-second window data will be converted into one data point of 63 features.

+
+

The Feature Explorer will show those data in 2D using UMAP. Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualization similarly to t-SNE but is also applicable for general non-linear dimension reduction.

+
+

The visualization makes it possible to verify that after the feature generation, the classes present keep their excellent separation, which indicates that the classifier should work well. Optionally, you can analyze how important each one of the features is for one class compared with others.

+
+
+


Models Training

+

Our classifier will be a Dense Neural Network (DNN) that will have 63 neurons on its input layer, two hidden layers with 20 and 10 neurons, and an output layer with four neurons (one per each class), as shown here:
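A quick parameter count for this architecture shows why the memory cost is so small (a back-of-the-envelope sketch using the layer sizes above):

```python
LAYERS = [63, 20, 10, 4]  # input, two hidden layers, output (one neuron per class)

def dense_param_count(layers):
    """Weights plus biases for a fully connected (dense) network."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))

params = dense_param_count(LAYERS)
# Around 1.5k parameters: only a few kilobytes even stored as float32.
float32_bytes = params * 4
```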

+
+
+

+
+
+

As hyperparameters, we will use a Learning Rate of [0.005], a Batch size of [32], and [20]% of the data for validation over [30] epochs. After training, we can see that the accuracy is 98.5%. The memory and latency costs are very low.

+
+
+

+
+
+

For Anomaly Detection, we will choose the suggested features that are precisely the most important ones in the Feature Extraction, plus the accZ RMS. The number of clusters will be [32], as suggested by the Studio:

+
+
+


Testing

+

We can verify how our model behaves with unknown data using the 20% of the data left aside during the data capture phase. The result was almost 95%, which is good. You can always work to improve the results, for example, by investigating what went wrong with one of the misclassified samples. If it is a unique situation, you can add it to the training dataset and then retrain the model.

+

The default minimum threshold for a result to be considered certain is [0.6] for classification and [0.3] for anomaly. Since we have four classes (their outputs should sum to 1.0), you can also set a lower threshold for a class to be considered valid (for example, 0.4). You can set confidence thresholds from the three-dot menu beside the Classify all button.
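The thresholding logic can be sketched as a small function (illustrative Python, not Edge Impulse’s code; the labels and default thresholds are the ones used in this project):

```python
def interpret(probabilities, anomaly_score,
              min_confidence=0.6, max_anomaly=0.3):
    """Return the winning label, or 'anomaly'/'uncertain' per the thresholds."""
    if anomaly_score > max_anomaly:
        return "anomaly"
    best = max(probabilities, key=probabilities.get)
    if probabilities[best] < min_confidence:
        return "uncertain"
    return best

result = interpret(
    {"idle": 0.05, "lift": 0.10, "maritime": 0.05, "terrestrial": 0.80},
    anomaly_score=-0.2,
)
```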

+
+
+

+
+
+

You can also perform Live Classification with your device (which should still be connected to the Studio).

+
+

Be aware that here, you will capture real data with your device and upload it to the Studio, where inference will be run using the trained model (the model is NOT running on your device).

+
+
+
+

Deploy

+

It is time to deploy the preprocessing block and the trained model to the Nicla. The Studio will package all the needed libraries, preprocessing functions, and trained models, downloading them to your computer. You should select the option Arduino Library, and at the bottom, you can choose Quantized (Int8) or Unoptimized (float32) and [Build]. A Zip file will be created and downloaded to your computer.

+
+
+

+
+
+

In your Arduino IDE, go to the Sketch tab, select Add .ZIP Library, and choose the .zip file downloaded by the Studio. A message will appear in the IDE Terminal: Library installed.

+
+

Inference

+

Now, it is time for a real test. We will make inferences completely disconnected from the Studio. Let’s modify one of the code examples created when you deployed the Arduino Library.

+

In your Arduino IDE, go to the File > Examples tab, look for your project, and among its examples, select Nicla_vision_fusion:

+
+
+

+
+
+

Note that the code created by Edge Impulse considers a sensor fusion approach where the IMU (Accelerometer and Gyroscope) and the ToF are used. At the beginning of the code, you have the libraries related to our project, IMU and ToF:

+
/* Includes ---------------------------------------------------------------- */
#include <NICLA_Vision_Movement_Classification_inferencing.h>
#include <Arduino_LSM6DSOX.h> // IMU
#include "VL53L1X.h"          // ToF

You can keep the code this way for testing because the trained model will use only features pre-processed from the accelerometer. However, for a real project, you should write your code including only the libraries it actually needs.

+
+

And that is it!

+

You can now upload the code to your device and proceed with the inferences. Press the Nicla [RESET] button twice to put it in boot mode (disconnect it from the Studio if it is still connected), and upload the sketch to your board.

+

Now you should try different movements with your board (similar to those done during data capture), observing the inference result of each class on the Serial Monitor:

+
• Idle and lift classes:
+
+

+
+
+
• Maritime and terrestrial classes:
+
+

+
+
+

Note that in all situations above, the anomaly score was smaller than 0.0. Try a new movement that was not part of the original dataset, for example, “rolling” the Nicla with the camera facing upside-down, simulating a container falling from a boat or even a boat accident:

+
• Anomaly detection:
+
+

+
+
+

In this case, the anomaly score is much higher, over 1.00.

+
+
+

Post-processing

+

Now that we know the model is working since it detects the movements, we suggest that you modify the code to see the result with the NiclaV completely offline (disconnected from the PC and powered by a battery, a power bank, or an independent 5V power supply).

+

The idea is to do the same as with the KWS project: if one specific movement is detected, a specific LED could be lit. For example, if terrestrial is detected, the Green LED will light; if maritime, the Red LED; if lift, the Blue LED; and if no movement is detected (idle), the LEDs will be OFF. You can also add a condition for when an anomaly is detected; in that case, for example, white can be used (all LEDs lit simultaneously).
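The mapping from result to LED state could be sketched like this (Python for clarity; on the NiclaV the same table would go into the Arduino sketch, driving the built-in RGB LED pins):

```python
# (red, green, blue) on/off states for each classification result
LED_MAP = {
    "terrestrial": (False, True, False),   # green
    "maritime":    (True, False, False),   # red
    "lift":        (False, False, True),   # blue
    "idle":        (False, False, False),  # all LEDs off
    "anomaly":     (True, True, True),     # white: all LEDs on
}

def leds_for(label):
    """Return the LED states for a label; default to all off."""
    return LED_MAP.get(label, (False, False, False))
```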

+
+
+
+

Conclusion

+
+

The notebooks and code used in this hands-on tutorial can be found in the GitHub repository.

+
+

Before we finish, consider that Movement Classification and Object Detection can be utilized in many applications across various domains. Here are some of the potential applications:

+
+

Case Applications

+
+

Industrial and Manufacturing

+
• Predictive Maintenance: Detecting anomalies in machinery motion to predict failures before they occur.
• Quality Control: Monitoring the motion of assembly lines or robotic arms for precision assessment and deviation detection from the standard motion pattern.
• Warehouse Logistics: Managing and tracking the movement of goods with automated systems that classify different types of motion and detect anomalies in handling.
+
+

Healthcare

+
• Patient Monitoring: Detecting falls or abnormal movements in the elderly or those with mobility issues.
• Rehabilitation: Monitoring the progress of patients recovering from injuries by classifying motion patterns during physical therapy sessions.
• Activity Recognition: Classifying types of physical activity for fitness applications or patient monitoring.
+
+

Consumer Electronics

+
• Gesture Control: Interpreting specific motions to control devices, such as turning on lights with a hand wave.
• Gaming: Enhancing gaming experiences with motion-controlled inputs.
+
+

Transportation and Logistics

+
• Vehicle Telematics: Monitoring vehicle motion for unusual behavior such as hard braking, sharp turns, or accidents.
• Cargo Monitoring: Ensuring the integrity of goods during transport by detecting unusual movements that could indicate tampering or mishandling.
+
+

Smart Cities and Infrastructure

+
• Structural Health Monitoring: Detecting vibrations or movements within structures that could indicate potential failures or maintenance needs.
• Traffic Management: Analyzing the flow of pedestrians or vehicles to improve urban mobility and safety.
+
+

Security and Surveillance

+
• Intruder Detection: Detecting motion patterns typical of unauthorized access or other security breaches.
• Wildlife Monitoring: Detecting poachers or abnormal animal movements in protected areas.
+
+

Agriculture

+
• Equipment Monitoring: Tracking the performance and usage of agricultural machinery.
• Animal Behavior Analysis: Monitoring livestock movements to detect behaviors indicating health issues or stress.
+
+

Environmental Monitoring

+
• Seismic Activity: Detecting irregular motion patterns that precede earthquakes or other geologically relevant events.
• Oceanography: Studying wave patterns or marine movements for research and safety purposes.
+
+
+

Nicla 3D case

+

For real applications, such as some described above, we can add a case to our device. Eoin Jordan, from Edge Impulse, developed a great wearable and machine-health case for the Nicla range of boards. It works with a 10mm magnet, 2M screws, and a 16mm strap for human and machine health use-case scenarios. Here is the link: Arduino Nicla Voice and Vision Wearable Case.

+
+
+

+
+
+

The applications for motion classification and anomaly detection are extensive, and the Arduino Nicla Vision is well-suited for scenarios where low power consumption and edge processing are advantageous. Its small form factor and efficiency in processing make it an ideal choice for deploying portable and remote applications where real-time processing is crucial and connectivity may be limited.

+ + + + + + \ No newline at end of file diff --git a/contents/niclav_sys/niclav_sys.html b/contents/niclav_sys/niclav_sys.html new file mode 100644 index 00000000..3635cf54 --- /dev/null +++ b/contents/niclav_sys/niclav_sys.html @@ -0,0 +1,1441 @@ + + + + + + + + + +Machine Learning Systems - Setup Nicla Vision + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

Setup Nicla Vision

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+

+
DALL·E 3 Prompt: Illustration reminiscent of a 1950s cartoon where the Arduino NICLA VISION board, equipped with a variety of sensors including a camera, is the focal point on an old-fashioned desk. In the background, a computer screen with rounded edges displays the Arduino IDE. The code seen is related to LED configurations and machine learning voice command detection. Outputs on the Serial Monitor explicitly display the words ‘yes’ and ‘no’.
+
+
+
+

Introduction

+

The Arduino Nicla Vision (sometimes called NiclaV) is a development board that includes two processors that can run tasks in parallel. It is part of a family of development boards with the same form factor but designed for specific tasks, such as the Nicla Sense ME and the Nicla Voice. The Niclas can efficiently run processes created with TensorFlow Lite. For example, one of the cores of the NiclaV runs a computer vision algorithm on the fly (inference), while the other executes low-level operations like controlling a motor and communicating or acting as a user interface. The onboard wireless module allows the management of WiFi and Bluetooth Low Energy (BLE) connectivity simultaneously.

+
+
+

+
+
+
+
+

Hardware

+
+

Two Parallel Cores

+

The central processor is the dual-core STM32H747, including a Cortex M7 at 480 MHz and a Cortex M4 at 240 MHz. The two cores communicate via a Remote Procedure Call mechanism that seamlessly allows calling functions on the other processor. Both processors share all the on-chip peripherals and can run:

+
• Arduino sketches on top of the Arm Mbed OS
• Native Mbed applications
• MicroPython / JavaScript via an interpreter
• TensorFlow Lite
+
+

+
+
+
+
+

Memory

+

Memory is crucial for embedded machine learning projects. The NiclaV board can host up to 16 MB of QSPI Flash for storage. However, it is essential to consider that the MCU’s SRAM is what is used for machine learning inference; on the STM32H747 it is only 1MB, shared by both processors. This MCU also incorporates 2MB of Flash, mainly for code storage.

+
+
+

Sensors

+
• Camera: A GC2145 2 MP Color CMOS Camera.
• Microphone: The MP34DT05 is an ultra-compact, low-power, omnidirectional, digital MEMS microphone built with a capacitive sensing element and the IC interface.
• 6-Axis IMU: 3D gyroscope and 3D accelerometer data from the LSM6DSOX 6-axis IMU.
• Time of Flight Sensor: The VL53L1CBV0FY Time-of-Flight sensor adds accurate and low-power ranging capabilities to the Nicla Vision. The invisible near-infrared VCSEL laser (including the analog driver) is encapsulated with receiving optics in an all-in-one small module below the camera.
+
+
+

Arduino IDE Installation

+

Start connecting the board (microUSB) to your computer:

+
+
+

+
+
+

Install the Mbed OS core for Nicla boards in the Arduino IDE. Having the IDE open, navigate to Tools > Board > Board Manager, look for Arduino Nicla Vision on the search window, and install the board.

+
+
+

+
+
+

Next, go to Tools > Board > Arduino Mbed OS Nicla Boards and select Arduino Nicla Vision. Having your board connected to the USB, you should see the Nicla on Port and select it.

+
+

Open the Blink sketch under Examples > Basics and run it using the IDE Upload button. You should see the built-in LED (green RGB) blinking, which means the Nicla board is correctly installed and functional!

+
+
+

Testing the Microphone

+

On Arduino IDE, go to Examples > PDM > PDMSerialPlotter, open and run the sketch. Open the Plotter and see the audio representation from the microphone:

+
+
+

+
+
+
+

Vary the frequency of the sound you generate and confirm that the mic is working correctly.

+
+
+
+

Testing the IMU

+

Before testing the IMU, it will be necessary to install the LSM6DSOX library. For that, go to Library Manager and look for LSM6DSOX. Install the library provided by Arduino:

+
+
+

+
+
+

Next, go to Examples > Arduino_LSM6DSOX > SimpleAccelerometer and run the accelerometer test (you can also run Gyro and board temperature):

+
+
+

+
+
+
+
+

Testing the ToF (Time of Flight) Sensor

+

As we did with IMU, it is necessary to install the VL53L1X ToF library. For that, go to Library Manager and look for VL53L1X. Install the library provided by Pololu:

+
+
+

+
+
+

Next, run the sketch proximity_detection.ino:

+
+
+

+
+
+

On the Serial Monitor, you will see the distance from the camera to an object in front of it (max of 4m).

+
+
+

+
+
+
+
+

Testing the Camera

+

We can also test the camera using, for example, the code provided on Examples > Camera > CameraCaptureRawBytes. We cannot see the image directly, but it is possible to get the raw image data generated by the camera.

+

Still, the best test with the camera is to see a live image. For that, we will use another IDE, OpenMV.

+
+
+
+

Installing the OpenMV IDE

+

OpenMV IDE is the premier integrated development environment for use with OpenMV cameras, like the one on the Nicla Vision. It features a powerful text editor, debug terminal, and frame buffer viewer with a histogram display. We will use MicroPython to program the camera.

+

Go to the OpenMV IDE page, download the correct version for your Operating System, and follow the instructions for its installation on your computer.

+
+
+

+
+
+

The IDE should open, defaulting to the helloworld_1.py code in its Code Area. If not, you can open it from Files > Examples > HelloWorld > helloworld.py.

+
+
+

+
+
+

Any messages sent through a serial connection (using print() or error messages) will be displayed on the Serial Terminal during run time. The image captured by a camera will be displayed in the Camera Viewer Area (or Frame Buffer) and in the Histogram area, immediately below the Camera Viewer.

+
+

Before connecting the Nicla to the OpenMV IDE, ensure you have the latest bootloader version. Go to your Arduino IDE, select the Nicla board, and open the sketch under Examples > STM32H747_System > STM32H747_manageBootloader. Upload the code to your board. The Serial Monitor will guide you.

+
+

After updating the bootloader, put the Nicla Vision in bootloader mode by double-pressing the reset button on the board. The built-in green LED will start fading in and out. Now return to the OpenMV IDE and click on the connect icon (Left ToolBar):

+
+
+

+
+
+

A pop-up will tell you that a board in DFU mode was detected and ask how you would like to proceed. First, select Install the latest release firmware (vX.Y.Z). This action will install the latest OpenMV firmware on the Nicla Vision.

+
+
+

+
+
+

You can leave the option Erase internal file system unselected and click [OK].

+

Nicla’s green LED will start flashing while the OpenMV firmware is uploaded to the board, and a terminal window will then open, showing the flashing progress.

+
+
+

+
+
+

Wait until the green LED stops flashing and fading. When the process ends, you will see a message saying, “DFU firmware update complete!”. Press [OK].

+
+
+

+
+
+

A green play button will appear on the Tool Bar when the Nicla Vision connects.

+
+
+

+
+
+

Also, note that a drive named “NO NAME” will appear on your computer:

+
+
+

+
+
+

Every time you press the [RESET] button on the board, it automatically executes the main.py script stored on it. You can load the main.py code on the IDE (File > Open File...).

+
+
+

+
+
+
+

This code is the “Blink” code, confirming that the HW is OK.

+
+

For testing the camera, let’s run helloworld_1.py. For that, select the script under File > Examples > HelloWorld > helloworld.py.

+

When you click the green play button, the MicroPython script (helloworld.py) in the Code Area will be uploaded and run on the Nicla Vision. In the Camera Viewer, you will start to see the video stream. The Serial Monitor will show the FPS (frames per second), which should be around 14 fps.

+
+
+

+
+
+

Here is the helloworld.py script:

+
# Hello World Example 2
#
# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time = 2000)     # Wait for settings to take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    print(clock.fps())

In GitHub, you can find the Python scripts used here.

+

The code can be split into two parts:

+
• Setup: where the libraries are imported and initialized, and the variables are defined and initiated.
• Loop: the part of the code (the while loop) that runs continually. The image (img variable) is captured (one frame). Each of those frames can be used for inference in Machine Learning applications.

To interrupt the program execution, press the red [X] button.

+
+

Note: OpenMV Cam runs about half as fast when connected to the IDE. The FPS should increase once disconnected.

+
+

On GitHub, you can find other Python scripts. Try them to test the onboard sensors.

+
+
+

Connecting the Nicla Vision to Edge Impulse Studio

+

We will need the Edge Impulse Studio later in other exercises. Edge Impulse is a leading development platform for machine learning on edge devices.

+

Edge Impulse officially supports the Nicla Vision. To get started, please create a new project on the Studio and connect the Nicla to it by following these steps:

+
• Download the most updated EI Firmware and unzip it.
• Open the zip file on your computer and select the uploader corresponding to your OS:
+
+
+

+
+
+
• Put the Nicla Vision in Boot Mode by pressing the reset button twice.
• Execute the specific batch code for your OS to upload the binary arduino-nicla-vision.bin to your board.
+

Go to your project on the Studio, and on the Data Acquisition tab, select WebUSB (1). A window will pop up; choose the option that shows that the Nicla is paired (2) and press [Connect] (3).

+
+
+

+
+
+

In the Collect Data section on the Data Acquisition tab, you can choose which sensor data to pick.

+
+
+

+
+
+

For example, IMU data:

+
+
+

+
+
+

Or Image (Camera):

+
+
+

+
+
+

And so on. You can also test an external sensor connected to the ADC (Nicla pin 0) and the other onboard sensors, such as the microphone and the ToF.

+
+
+

Expanding the Nicla Vision Board (optional)

+

One last item worth exploring: during prototyping, it is often essential to experiment with external sensors and devices, and an excellent expansion for the Nicla is the Arduino MKR Connector Carrier (Grove compatible).

+

The shield has 14 Grove connectors: five single analog inputs (A0-A5), one double analog input (A5/A6), five single digital I/Os (D0-D4), one double digital I/O (D5/D6), one I2C (TWI), and one UART (Serial). All connectors are 5V compatible.

+
+

Note that all 17 Nicla Vision pins will be connected to the Shield Groves, but some Grove connections remain disconnected.

+
+
+
+

+
+
+

This shield is MKR compatible and can be used with the Nicla Vision and Portenta.

+
+
+

+
+
+

For example, suppose that on a TinyML project, you want to send inference results using a LoRaWAN device and add information about local luminosity. Often, with offline operations, a local low-power display such as an OLED is advised. This setup can be seen here:

+
+
+

+
+
+

The Grove Light Sensor would be connected to one of the single Analog pins (A0/PC4), the LoRaWAN device to the UART, and the OLED to the I2C connector.

+

The Nicla pins 3 (Tx) and 4 (Rx) are connected to the Shield’s Serial connector. UART communication is used with the LoRaWAN device. Here is a simple code to use the UART:

+
# UART Test - By: marcelo_rovai - Sat Sep 23 2023

import time
from pyb import UART
from pyb import LED

redLED = LED(1) # built-in red LED

# Init UART object.
# Nicla Vision's UART (TX/RX pins) is on "LP1"
uart = UART("LP1", 9600)

while(True):
    uart.write("Hello World!\r\n")
    redLED.toggle()
    time.sleep_ms(1000)

To verify that the UART is working, you can, for example, connect another device, such as an Arduino UNO, to display “Hello World!” on its Serial Monitor. Here is the code.

+
+
+

+
+
+

Below is the Hello World code to be used with the I2C OLED. The MicroPython SSD1306 OLED driver (ssd1306.py), created by Adafruit, should also be uploaded to the Nicla (the ssd1306.py script can be found on GitHub).

+
# Nicla_OLED_Hello_World - By: marcelo_rovai - Sat Sep 30 2023
+
+#Save on device: MicroPython SSD1306 OLED driver, I2C and SPI interfaces created by Adafruit
+import ssd1306
+
+from machine import I2C
+i2c = I2C(1)
+
+oled_width = 128
+oled_height = 64
+oled = ssd1306.SSD1306_I2C(oled_width, oled_height, i2c)
+
+oled.text('Hello, World', 10, 10)
+oled.show()
+

Finally, here is a simple script to read the ADC value on pin “PC4” (Nicla pin A0):

+

+# Light Sensor (A0) - By: marcelo_rovai - Wed Oct 4 2023
+
+import pyb
+from time import sleep
+
+adc = pyb.ADC(pyb.Pin("PC4"))     # create an analog object from a pin
+val = adc.read()                  # read an analog value
+
+while True:
+    val = adc.read()
+    print("Light={}".format(val))
+    sleep(1)
+

The ADC can be used for other sensor variables, such as Temperature.
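As a sketch of how such a conversion might look, the function below maps a raw 12-bit ADC reading to degrees Celsius. It assumes a hypothetical TMP36-style analog sensor (500 mV offset, 10 mV per degree) and a 3.3 V reference; the sensor choice and constants are illustrative, not taken from the exercise above.

```python
# Illustrative only: assumes a TMP36-style sensor (500 mV at 0 °C,
# 10 mV/°C) read through a 12-bit ADC referenced to 3.3 V.
ADC_BITS = 12
V_REF = 3.3

def adc_to_celsius(raw):
    """Convert a raw 12-bit ADC count to degrees Celsius."""
    voltage = raw * V_REF / ((1 << ADC_BITS) - 1)  # counts -> volts
    return (voltage - 0.5) * 100.0                 # volts -> °C

# A reading of 930 counts is roughly 0.75 V, i.e. about 25 °C
print(round(adc_to_celsius(930), 1))
```

On the Nicla itself, `raw` would come from `adc.read()`, as in the light-sensor script above.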

+
+

Note that the above scripts (downloaded from GitHub) only introduce how to connect external devices to the Nicla Vision board using MicroPython.

+
+
+
+

Conclusion

+

The Arduino Nicla Vision is an excellent tiny device for industrial and professional uses! It is powerful, trustworthy, low power, and has suitable sensors for the most common embedded machine learning applications, such as vision, movement, sensor fusion, and sound.

+
+

On the GitHub repository, you will find the latest version of all the code used or commented on in this hands-on exercise.

+
+ + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/object_detection_fomo/object_detection_fomo.html b/contents/object_detection_fomo/object_detection_fomo.html new file mode 100644 index 00000000..3c4ed6a9 --- /dev/null +++ b/contents/object_detection_fomo/object_detection_fomo.html @@ -0,0 +1,1467 @@ + + + + + + + + + +Machine Learning Systems - Object Detection + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

Object Detection

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+

+
DALL·E 3 Prompt: Cartoon in the style of the 1940s or 1950s showcasing a spacious industrial warehouse interior. A conveyor belt is prominently featured, carrying a mixture of toy wheels and boxes. The wheels are distinguishable with their bright yellow centers and black tires. The boxes are white cubes painted with alternating black and white patterns. At the end of the moving conveyor stands a retro-styled robot, equipped with tools and sensors, diligently classifying and counting the arriving wheels and boxes. The overall aesthetic is reminiscent of mid-century animation with bold lines and a classic color palette.
+
+
+
+

Introduction

+

This is a continuation of CV on Nicla Vision, now exploring Object Detection on microcontrollers.

+
+
+

+
+
+
+

Object Detection versus Image Classification

+

The main task of Image Classification models is to produce a list of the most probable object categories present in an image, for example, to identify a tabby cat just after its dinner:

+
+
+

+
+
+

But what happens when the cat jumps near the wine glass? The model still only recognizes the predominant category in the image, the tabby cat:

+
+
+

+
+
+

And what happens if there is no dominant category in the image?

+
+
+

+
+
+

The model completely misidentifies the above image as an “ashcan,” possibly due to the color tonalities.

+
+

The model used in all previous examples is MobileNet, trained with a large dataset, ImageNet.

+
+

To solve this issue, we need another type of model, one that can find not only multiple categories (or labels) but also where the objects are located in a given image.

+

As we can imagine, such models are much more complicated and bigger, for example, the MobileNetV2 SSD FPN-Lite 320x320, trained with the COCO dataset. This pre-trained object detection model is designed to locate up to 10 objects within an image, outputting a bounding box for each object detected. The image below shows the result of such a model running on a Raspberry Pi:

+
+
+

+
+
+

Models used for object detection (such as MobileNet SSD or YOLO) are usually several megabytes in size, which is fine for a Raspberry Pi but unsuitable for embedded devices, where RAM is usually below 1 MB.

+
+
+

An innovative solution for Object Detection: FOMO

+

In 2022, Edge Impulse launched FOMO (Faster Objects, More Objects), a novel solution for performing object detection on embedded devices, not only the Nicla Vision (Cortex-M7) but also Cortex-M4F CPUs (Arduino Nano 33 and OpenMV M4 series) and Espressif ESP32 devices (ESP-CAM and XIAO ESP32S3 Sense).

+

In this hands-on exercise, we will explore using FOMO for object detection without going into many details about the model itself. To understand more about how the model works, see the official FOMO announcement by Edge Impulse, where Louis Moreau and Mat Kelcey explain it in detail.

+
+
+
+

The Object Detection Project Goal

+

All Machine Learning projects need to start with a detailed goal. Let’s assume we are in an industrial facility and must sort and count wheels and special boxes.

+
+
+

+
+
+

In other words, we should perform a multi-label classification, where each image can have three classes:

+
    +
  • Background (No objects)

  • +
  • Box

  • +
  • Wheel

  • +
+

Here are some unlabeled image samples that we will use to detect the objects (wheels and boxes):

+
+
+

+
+
+

We are interested in which objects are in the image, their location (centroid), and how many of them we can find. Unlike MobileNet SSD or YOLO, where the bounding box is one of the model outputs, FOMO does not detect the object’s size.

+

We will develop the project using the Nicla Vision for image capture and model inference. The ML project will be developed using the Edge Impulse Studio. But before starting the object detection project in the Studio, let’s create a raw dataset (not labeled) with images that contain the objects to be detected.

+
+
+

Data Collection

+

We can use the Edge Impulse Studio, the OpenMV IDE, your phone, or other devices for the image capture. Here, we will again use the OpenMV IDE for this purpose.

+
+

Collecting Dataset with OpenMV IDE

+

First, create a folder on your computer where your data will be saved, for example, “data.” Next, on the OpenMV IDE, go to Tools > Dataset Editor and select New Dataset to start the dataset collection:

+
+
+

+
+
+

Edge Impulse suggests that the objects should be of similar size and not overlapping for better performance. This is OK in an industrial facility, where the camera should be fixed, keeping the same distance from the objects to be detected. Despite that, we will also try with mixed sizes and positions to see the result.

+
+

We will not create separate folders for our images because each image contains multiple labels.

+
+

Connect the Nicla Vision to the OpenMV IDE and run the dataset_capture_script.py. Clicking on the Capture Image button will start capturing images:

+
+
+

+
+
+

We suggest around 50 images, mixing the objects and varying the number of each appearing in the scene. Try to capture different angles, backgrounds, and light conditions.

+
+

The stored images use a QVGA frame size (320x240) and the RGB565 color pixel format.
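It is worth noting what this format implies for storage: RGB565 packs each pixel into 16 bits (5 bits red, 6 green, 5 blue), so a raw QVGA frame occupies a fixed, easily computed amount of memory. A quick back-of-the-envelope check:

```python
# Size of one raw QVGA frame in RGB565 (16 bits per pixel)
WIDTH, HEIGHT = 320, 240
BYTES_PER_PIXEL = 2  # RGB565 packs 5+6+5 bits into two bytes

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(frame_bytes, "bytes =", frame_bytes // 1024, "KB")  # 153600 bytes = 150 KB
```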

+
+

After capturing your dataset, close the Dataset Editor Tool via Tools > Dataset Editor.

+
+
+
+

Edge Impulse Studio

+
+

Setup the project

+

Go to Edge Impulse Studio, enter your credentials at Login (or create an account), and start a new project.

+
+
+

+
+
+
+

Here, you can clone the project developed for this hands-on: NICLA_Vision_Object_Detection.

+
+

On your Project Dashboard, scroll down to Project info, and select Bounding boxes (object detection) and Nicla Vision as your Target Device:

+
+
+

+
+
+
+
+

Uploading the unlabeled data

+

On Studio, go to the Data acquisition tab, and in the UPLOAD DATA section, upload the captured files from your computer.

+
+
+

+
+
+
+

You can let the Studio split your data automatically between Train and Test, or do it manually.

+
+
+
+

+
+
+

All 51 unlabeled images were uploaded, but they still need to be labeled appropriately before being used as a dataset in the project. The Studio has a tool for that purpose, which you can find via the link Labeling queue (51).

+

There are two ways to perform AI-assisted labeling in the Edge Impulse Studio (free version):

+
    +
  • Using YOLOv5
  • +
  • Tracking objects between frames
  • +
+
+

Edge Impulse launched an auto-labeling feature for Enterprise customers, easing labeling tasks in object detection projects.

+
+

Ordinary objects can quickly be identified and labeled using an existing library of pre-trained object detection models from YOLOv5 (trained with the COCO dataset). But since, in our case, the objects are not part of the COCO dataset, we should select the option of tracking objects. With this option, once you draw bounding boxes and label the images in one frame, the objects will be tracked automatically from frame to frame, partially labeling the new ones (not all will be labeled correctly).

+
+

You can use the EI uploader to import your data if you already have a labeled dataset containing bounding boxes.

+
+
+
+

Labeling the Dataset

+

Starting with the first image of your unlabeled data, use your mouse to drag a box around an object to add a label. Then click Save labels to advance to the next item.

+
+
+

+
+
+

Continue with this process until the queue is empty. At the end, all images should have the objects labeled like the samples below:

+
+
+

+
+
+

Next, review the labeled samples on the Data acquisition tab. If one of the labels was wrong, you can edit it using the three dots menu after the sample name:

+
+
+

+
+
+

You will be guided to replace the wrong label, correcting the dataset.

+
+
+

+
+
+
+
+
+

The Impulse Design

+

In this phase, you should define how to:

+
    +
  • Pre-process the dataset, resizing the individual images from 320 x 240 to 96 x 96 and squashing them (squared form, without cropping). Afterwards, the images are converted from RGB to Grayscale.

  • +
  • Design a Model, in this case, “Object Detection.”

  • +
+
+
+

+
+
+
+

Preprocessing all dataset

+

In this section, select Color depth as Grayscale, which is suitable for use with FOMO models, and click Save parameters.

+
+
+

+
+
+

The Studio moves automatically to the next section, Generate features, where all samples will be pre-processed, resulting in a dataset with individual 96x96x1 images or 9,216 features.

+
+
+

+
+
+

The feature explorer shows that all samples present good separation after the feature generation.

+
+

One of the samples (46) appears to be in the wrong space, but clicking on it confirms that the labeling is correct.

+
+
+
+
+

Model Design, Training, and Test

+

We will use FOMO, an object detection model based on MobileNetV2 (alpha 0.35) designed to coarsely segment an image into a grid of background vs objects of interest (here, boxes and wheels).

+

FOMO is an innovative machine learning model for object detection, which can use up to 30 times less energy and memory than traditional models like MobileNet SSD and YOLOv5. FOMO can operate on microcontrollers with less than 200 KB of RAM. The main reason this is possible is that while other models capture the object’s size by drawing a square around it (bounding box), FOMO ignores the size of the object, providing only the information about where the object is located in the image, by means of its centroid coordinates.

+

How does FOMO work?

+

FOMO takes the image in grayscale and divides it into blocks of pixels using a factor of 8. For an input of 96x96, the grid is 12x12 (96/8 = 12). Next, FOMO runs a classifier through each pixel block to calculate the probability that there is a box or a wheel in each of them and, subsequently, determines the regions that have the highest probability of containing the object (if a pixel block has no objects, it is classified as background). From the overlap of these regions, FOMO provides the coordinates (relative to the image dimensions) of the centroid of each region.
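The grid logic described above can be sketched in a few lines of plain Python. The per-block probabilities below are fabricated purely for illustration; a real FOMO model produces them with its MobileNetV2 backbone:

```python
GRID, BLOCK = 12, 8  # a 96x96 input divided into 8x8-pixel blocks

def region_centroid(prob_map, threshold=0.8):
    """Average the centers of all blocks whose class probability
    exceeds the threshold, returning (x, y) in image coordinates."""
    hits = [(row, col)
            for row in range(GRID)
            for col in range(GRID)
            if prob_map[row][col] >= threshold]
    if not hits:
        return None  # every block classified as background
    cx = sum(col * BLOCK + BLOCK / 2 for _, col in hits) / len(hits)
    cy = sum(row * BLOCK + BLOCK / 2 for row, _ in hits) / len(hits)
    return (cx, cy)

# A made-up "wheel" probability map with a 2x2 cluster of high scores
probs = [[0.0] * GRID for _ in range(GRID)]
for r, c in [(4, 5), (4, 6), (5, 5), (5, 6)]:
    probs[r][c] = 0.9

print(region_centroid(probs))  # (48.0, 40.0)
```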

+
+
+

+
+
+

For training, we should select a pre-trained model. Let’s use the FOMO (Faster Objects, More Objects) MobileNetV2 0.35. This model uses around 250 KB of RAM and 80 KB of ROM (Flash), which suits our board well, since it has 1 MB of RAM and ROM.

+
+
+

+
+
+

Regarding the training hyper-parameters, the model will be trained with:

+
    +
  • Epochs: 60,
  • +
  • Batch size: 32
  • +
  • Learning Rate: 0.001.
  • +
+

For validation during training, 20% of the dataset (validation_dataset) will be set aside. To the remaining 80% (train_dataset), we will apply Data Augmentation, which will randomly flip the images, change their size and brightness, and crop them, artificially increasing the number of samples in the dataset for training.
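Conceptually, the split and one of the augmentations (a horizontal flip) look like the sketch below. The Studio performs this internally; the lists here merely stand in for images:

```python
import random

def split_dataset(samples, val_fraction=0.2, seed=42):
    """Deterministically shuffle and split samples into train/validation."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]  # (train, validation)

def hflip(image):
    """Horizontally flip an image stored as a list of pixel rows."""
    return [row[::-1] for row in image]

train, val = split_dataset(list(range(50)))  # 50 stand-in samples
print(len(train), len(val))                  # 40 10

print(hflip([[1, 2, 3], [4, 5, 6]]))         # [[3, 2, 1], [6, 5, 4]]
```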

+

As a result, the model ends with an F1 score of practically 1.00, with a similar result when using the Test data.

+
+

Note that FOMO automatically added a third label, background, to the two previously defined (box and wheel).

+
+
+
+

+
+
+
+

In object detection tasks, accuracy is generally not the primary evaluation metric. Object detection involves classifying objects and providing bounding boxes around them, making it a more complex problem than simple classification. The issue is that we do not have the bounding box, only the centroids. In short, using accuracy as a metric could be misleading and may not provide a complete understanding of how well the model is performing. Because of that, we will use the F1 score.
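For reference, the F1 score is the harmonic mean of precision and recall. A minimal implementation, with made-up detection counts for the wheel class:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 45 wheels found, 5 spurious detections, 5 missed
print(round(f1_score(tp=45, fp=5, fn=5), 2))  # 0.9
```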

+
+
+

Test model with “Live Classification”

+

Since Edge Impulse officially supports the Nicla Vision, let’s connect it to the Studio. To do so, follow these steps:

+
    +
  • Download the latest EI Firmware and unzip it.

  • +
  • Open the zip file on your computer and select the uploader related to your OS:

  • +
+
+
+

+
+
+
    +
  • Put the Nicla Vision in Boot Mode by pressing the reset button twice.

  • +
  • Execute the specific batch code for your OS for uploading the binary (arduino-nicla-vision.bin) to your board.

  • +
+

Go to the Live classification section in EI Studio and, using WebUSB, connect your Nicla Vision:

+
+
+

+
+
+

Once connected, you can use the Nicla to capture actual images to be tested by the trained model on Edge Impulse Studio.

+
+
+

+
+
+

Note that the model can produce false positives and negatives. These can be minimized by defining a proper Confidence Threshold (use the three-dots menu for the setup). Try 0.8 or higher.

+
+
+
+

Deploying the Model

+

Select OpenMV Firmware on the Deploy Tab and press [Build].

+
+
+

+
+
+

When you try to connect the Nicla with the OpenMV IDE again, it will try to update its FW. Choose the option Load a specific firmware instead.

+
+
+

+
+
+

You will find a ZIP file on your computer from the Studio. Open it:

+
+
+

+
+
+

Load the .bin file to your board:

+
+
+

+
+
+

After the download is finished, a pop-up message will be displayed. Press OK, and open the script ei_object_detection.py downloaded from the Studio.

+

Before running the script, let’s change a few lines. Note that you can leave the window definition as 240 x 240 and the camera capturing images as QVGA/RGB. The captured image will be pre-processed by the firmware deployed from Edge Impulse.

+
# Edge Impulse - OpenMV Object Detection Example
+
+import sensor, image, time, os, tf, math, uos, gc
+
+sensor.reset()                         # Reset and initialize the sensor.
+sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
+sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
+sensor.set_windowing((240, 240))       # Set 240x240 window.
+sensor.skip_frames(time=2000)          # Let the camera adjust.
+
+net = None
+labels = None
+

Redefine the minimum confidence, for example, to 0.8 to minimize false positives and negatives.

+
min_confidence = 0.8
+

Change, if necessary, the color of the circles that will be used to display the detected objects’ centroids, for better contrast.

+
try:
+    # Load built in model
+    labels, net = tf.load_builtin_model('trained')
+except Exception as e:
+    raise Exception(e)
+
+colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
+    (255, 255,   0), # background: yellow (not used)
+    (  0, 255,   0), # cube: green
+    (255,   0,   0), # wheel: red
+    (  0,   0, 255), # not used
+    (255,   0, 255), # not used
+    (  0, 255, 255), # not used
+    (255, 255, 255), # not used
+]
+

Keep the remaining code as it is and press the green Play button to run the code:

+
+
+

+
+
+

On the camera view, we can see the objects with their centroids marked with fixed 12-pixel circles (each circle has a distinct color, depending on its class). On the Serial Terminal, the model shows the detected labels and their positions in the image window (240x240).

+
+

Be aware that the coordinate origin is in the upper left corner.

+
+
+
+

+
+
+

Note that the frame rate is around 8 fps (similar to what we got with the Image Classification project). This happens because FOMO is cleverly built on top of a CNN classifier, not on an object detection model like SSD MobileNet. For comparison, when running a MobileNetV2 SSD FPN-Lite 320x320 model on a Raspberry Pi 4, the latency is around 5 times higher (around 1.5 fps).

+

Here is a short video showing the inference results:

+
+
+

Conclusion

+

FOMO is a significant leap in the image processing space, as Louis Moreau and Mat Kelcey put it during its launch in 2022:

+
+

FOMO is a ground-breaking algorithm that brings real-time object detection, tracking, and counting to microcontrollers for the first time.

+
+

Multiple possibilities exist for exploring object detection (and, more precisely, counting objects) on embedded devices, for example, having the Nicla perform sensor fusion (camera + microphone) together with object detection. This can be very useful in projects involving bees, for example.

+
+
+

+
+
+ + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/ondevice_learning/ondevice_learning.html b/contents/ondevice_learning/ondevice_learning.html new file mode 100644 index 00000000..5e9638cf --- /dev/null +++ b/contents/ondevice_learning/ondevice_learning.html @@ -0,0 +1,2122 @@ + + + + + + + + + +Machine Learning Systems - 12  On-Device Learning + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

12  On-Device Learning

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Drawing of a smartphone with its internal components exposed, revealing diverse miniature engineers of different genders and skin tones actively working on the ML model. The engineers, including men, women, and non-binary individuals, are tuning parameters, repairing connections, and enhancing the network on the fly. Data flows into the ML model, being processed in real-time, and generating output inferences.
+
+
+

On-device Learning represents a significant innovation for embedded and edge IoT devices, enabling models to train and update directly on small local devices. This contrasts with traditional methods, where models are trained on expansive cloud computing resources before deployment. With On-Device Learning, devices like smart speakers, wearables, and industrial sensors can refine models in real-time based on local data without needing to transmit data externally. For example, a voice-enabled smart speaker could learn and adapt to its owner’s speech patterns and vocabulary right on the device. However, there is no such thing as a free lunch; therefore, in this chapter, we will discuss both the benefits and the limitations of on-device learning.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand on-device learning and how it differs from cloud-based training

  • +
  • Recognize the benefits and limitations of on-device learning

  • +
  • Examine strategies to adapt models through complexity reduction, optimization, and data compression

  • +
  • Understand related concepts like federated learning and transfer learning

  • +
  • Analyze the security implications of on-device learning and mitigation strategies

  • +
+
+
+
+

12.1 Introduction

+

On-device Learning refers to training ML models directly on the device where they are deployed, as opposed to traditional methods where models are trained on powerful servers and then deployed to devices. This method is particularly relevant to TinyML, where ML systems are integrated into tiny, resource-constrained devices.

+

An example of On-Device Learning can be seen in a smart thermostat that adapts to user behavior over time. Initially, the thermostat may have a generic model that understands basic usage patterns. However, as it is exposed to more data, such as the times the user is home or away, preferred temperatures, and external weather conditions, the thermostat can refine its model directly on the device to provide a personalized experience. This is all done without sending data back to a central server for processing.

+

Another example is in predictive text on smartphones. As users type, the phone learns from the user’s language patterns and suggests words or phrases that are likely to be used next. This learning happens directly on the device, and the model updates in real-time as more data is collected. A widely used real-world example of on-device learning is Gboard. On an Android phone, Gboard learns from typing and dictation patterns to enhance the experience for all users. When many devices collaboratively improve a shared model this way, the approach is called federated learning. Figure fig-federated-cycle shows the cycle of federated learning on mobile devices: A. the device learns from user patterns; B. local model updates are communicated to the cloud; C. the cloud server updates the global model and sends the new model to all the devices.

+
+
+
+ +
+
+Figure 12.1: Federated learning cycle. Credit: Google Research. +
+
+
+
+
+
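Step C of the cycle is often implemented with federated averaging (FedAvg), in which the server weights each device’s model update by its local sample count. A minimal sketch, with weights as plain lists rather than real tensors:

```python
def federated_average(updates):
    """updates: list of (weights, num_samples) pairs reported by devices.
    Returns the sample-weighted average of the weight vectors."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Three devices report local updates; device C has twice the data
global_weights = federated_average([
    ([0.2, 0.4], 100),  # device A
    ([0.4, 0.2], 100),  # device B
    ([0.3, 0.3], 200),  # device C
])
print(global_weights)  # a sample-weighted mean of the three updates
```

The new `global_weights` would then be pushed back to all devices, completing the cycle.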

12.2 Advantages and Limitations

+

On-device learning provides several advantages over traditional cloud-based ML. By keeping data and models on the device, it eliminates the need for costly data transmission and addresses privacy concerns. This allows for more personalized, responsive experiences, as the model can adapt in real-time to user behavior.

+

However, On-Device Learning also comes with tradeoffs. The limited computing resources on consumer devices can make it challenging to run complex models locally. Datasets are also more restricted since they consist only of user-generated data from a single device. Additionally, updating models requires pushing out new versions rather than seamless cloud updates.

+

On-device learning opens up new capabilities by enabling offline AI while maintaining user privacy. However, it requires carefully managing model and data complexity within the constraints of consumer devices. Finding the right balance between localization and cloud offloading is key to optimizing on-device experiences.

+
+

12.2.1 Benefits

+
+

Privacy and Data Security

+

One of the significant advantages of on-device learning is the enhanced privacy and security of user data. For instance, consider a smartwatch that monitors sensitive health metrics such as heart rate and blood pressure. By processing data and adapting models directly on the device, the biometric data remains localized, circumventing the need to transmit raw data to cloud servers where it could be susceptible to breaches.

+

Server breaches are far from rare, with millions of records compromised annually. For example, the 2017 Equifax breach exposed the personal data of 147 million people. By keeping data on the device, the risk of such exposures is drastically minimized. On-device learning eliminates reliance on centralized cloud storage and safeguards against unauthorized access from various threats, including malicious actors, insider threats, and accidental exposure.

+

Regulations like the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) mandate stringent data privacy requirements that on-device learning adeptly addresses. By ensuring data remains localized and is not transferred to other systems, on-device learning facilitates compliance with these regulations.

+

On-device learning is not just beneficial for individual users; it has significant implications for organizations and sectors dealing with highly sensitive data. For instance, within the military, on-device learning empowers frontline systems to adapt models and function independently of connections to central servers that could potentially be compromised. Critical and sensitive information is staunchly protected by localizing data processing and learning. However, this comes with the tradeoff that individual devices take on more value and may incentivize theft or destruction as they become the sole carriers of specialized AI models. Care must be taken to secure devices themselves when transitioning to on-device learning.

+

It is also important to preserve the privacy, security, and regulatory compliance of personal and sensitive data. Training and operating models locally, instead of in the cloud, substantially augments privacy measures, ensuring that user data is safeguarded from potential threats.

+

However, this benefit is not entirely clear-cut, because on-device learning can also open systems up to new privacy attacks. With valuable data summaries and model updates permanently stored on individual devices, it may be much harder to physically and digitally protect them than a large computing cluster. While on-device learning reduces the amount of data compromised in any one breach, it could also introduce new dangers by dispersing sensitive information across many decentralized endpoints. Careful security practices are still essential for on-device systems.

+
+
+

Regulatory Compliance

+

On-device learning helps address major privacy regulations like the GDPR and CCPA. These regulations require data localization, restricting cross-border data transfers to approved countries with adequate controls. The GDPR also mandates privacy-by-design and consent requirements for data collection. By keeping data processing and model training localized on-device, sensitive user data is not transferred across borders. This avoids major compliance headaches for organizations.

+

For example, a healthcare provider monitoring patient vitals with wearables must ensure cross-border data transfers comply with HIPAA and GDPR if using the cloud. Determining which country’s laws apply and securing approvals for international data flows introduces legal and engineering burdens. With on-device learning, no data leaves the device, simplifying compliance. The time and resources spent on compliance are reduced significantly.

+

Industries like healthcare, finance, and government, which have highly regulated data, can benefit greatly from on-device learning. By localizing data and learning, regulatory privacy and data sovereignty requirements are more easily met. On-device solutions provide an efficient way to build compliant AI applications.

+

Major privacy regulations impose restrictions on cross-border data movement that on-device learning inherently addresses through localized processing. This reduces the compliance burden for organizations working with regulated data.

+
+
+

Reduced Bandwidth, Costs, and Increased Efficiency

+

One major advantage of on-device learning is the significant reduction in bandwidth usage and associated cloud infrastructure costs. By keeping data localized for model training rather than transmitting raw data to the cloud, on-device learning can result in substantial bandwidth savings. For instance, a network of cameras analyzing video footage can achieve significant reductions in data transfer by training models on-device rather than streaming all video footage to the cloud for processing.

+

This reduction in data transmission saves bandwidth and translates to lower costs for servers, networking, and data storage in the cloud. Large organizations, which might otherwise spend millions on cloud infrastructure to train models on device data, can experience dramatic cost reductions through on-device learning. In the era of Generative AI, where costs have been escalating significantly, finding ways to keep expenses down has become increasingly important.

+

Furthermore, the energy and environmental costs of running large server farms are also diminished. Data centers consume vast amounts of energy, contributing to greenhouse gas emissions. By reducing the need for extensive cloud-based infrastructure, on-device learning plays a part in mitigating the environmental impact of data processing (Wu et al. 2022).

+
+Wu, Carole-Jean, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, et al. 2022. “Sustainable Ai: Environmental Implications, Challenges and Opportunities.” Proceedings of Machine Learning and Systems 4: 795–813. +

Specifically for endpoint applications, on-device learning minimizes the number of network API calls needed to run inference through a cloud provider. The cumulative costs associated with bandwidth and API calls can quickly escalate for applications with millions of users. In contrast, performing training and inferences locally is considerably more efficient and cost-effective. Under state-of-the-art optimizations, on-device learning has been shown to reduce training memory requirements, drastically improve memory efficiency, and reduce up to 20% in per-iteration latency (Dhar et al. 2021).

+

Another key benefit of on-device learning is the potential for IoT devices to continuously adapt their ML model to new data for continuous, lifelong learning. On-device models can quickly become outdated as user behavior, data patterns, and preferences change. Continuous learning enables the model to efficiently adapt to new data and improvements and maintain high model performance over time.

+
+
+
+

12.2.2 Limitations

+

While traditional cloud-based ML systems have access to nearly endless computing resources, on-device learning is often restricted by the limitations in computational and storage power of the edge device that the model is trained on. By definition, an edge device is a device with restrained computing, memory, and energy resources that cannot be easily increased or decreased. Thus, the reliance on edge devices can restrict the complexity, efficiency, and size of on-device ML models.

+
+

Compute resources

+

Traditional cloud-based ML systems utilize large servers with multiple high-end GPUs or TPUs, providing nearly endless computational power and memory. For example, services like Amazon Web Services (AWS) EC2 allow configuring clusters of GPU instances for massively parallel training.

+

In contrast, on-device learning is restricted by the hardware limitations of the edge device on which it runs. Edge devices refer to endpoints like smartphones, embedded electronics, and IoT devices. By definition, these devices have highly restrained computing, memory, and energy resources compared to the cloud.

+

For example, a typical smartphone or Raspberry Pi may only have a few CPU cores, a few GB of RAM, and a small battery. Even more resource-constrained are TinyML microcontroller devices such as the Arduino Nano BLE Sense. The resources on these devices are fixed and cannot easily be increased on demand the way cloud infrastructure can be scaled. This reliance on edge devices directly restricts the complexity, efficiency, and size of models that can be deployed for on-device training:

  • Complexity: Limits on memory, computing, and power restrict model architecture design, constraining the number of layers and parameters.
  • Efficiency: Models must be heavily optimized through methods like quantization and pruning to run faster and consume less energy.
  • Size: Actual model files must be compressed as much as possible to fit within the storage limitations of edge devices.

Thus, while the cloud offers endless scalability, on-device learning must operate within the tight resource constraints of endpoint hardware. This requires careful codesign of streamlined models, training methods, and optimizations tailored specifically for edge devices.


Dataset Size, Accuracy, and Generalization

+

In addition to limited computing resources, on-device learning is also constrained by the dataset available for training models.

+

In the cloud, models are trained on massive, diverse datasets like ImageNet or Common Crawl. For example, ImageNet contains over 14 million images carefully categorized across thousands of classes.

+

On-device learning instead relies on smaller, decentralized data silos unique to each device. A smartphone camera roll may contain only thousands of photos of users’ interests and environments.

+

This decentralized data is rarely IID (independent and identically distributed). For instance, two friends may take many photos of the same places and objects, meaning their data distributions are highly correlated rather than independent.

+

Reasons data may be non-IID in on-device settings:

  • User heterogeneity: Different users have different interests and environments.
  • Device differences: Sensors, regions, and demographics affect data.
  • Temporal effects: Time of day and seasonal patterns affect data.

The effectiveness of ML relies heavily on large, diverse training data. With small, localized datasets, on-device models may fail to generalize across different user populations and environments. For example, a disease detection model trained only on images from a single hospital would not generalize well to other patient demographics; its real-world performance would only improve with training on extensive, diverse medical data. Thus, while cloud-based learning leverages massive datasets, on-device learning relies on much smaller, decentralized data silos unique to each user.

+

The limited data and optimizations required for on-device learning can negatively impact model accuracy and generalization:

  • Small datasets increase overfitting risk. For example, a fruit classifier trained on 100 images risks overfitting compared to one trained on 1 million diverse images.
  • Noisy user-generated data reduces quality. Sensor noise or improper data labeling by non-experts may degrade training.
  • Optimizations like pruning and quantization trade off accuracy for efficiency. An 8-bit quantized model runs faster but less accurately than a 32-bit model.

So while cloud models achieve high accuracy with massive datasets and no constraints, on-device models can struggle to generalize. Some studies show that on-device training matches cloud accuracy on select tasks. However, performance on real-world workloads requires further study (Lin et al. 2022).

+

For instance, a cloud model can accurately detect pneumonia in chest X-rays from thousands of hospitals. However, an on-device model trained only on a small local patient population may fail to generalize.

+

Unreliable accuracy limits the real-world applicability of on-device learning for mission-critical uses like disease diagnosis or self-driving vehicles.

+

On-device training is also slower than the cloud due to limited resources. Even if each iteration is faster, the overall training process takes longer.

+

For example, a real-time robotics application may require model updates within milliseconds. On-device training on small embedded hardware may take seconds or minutes per update - too slow for real-time use.

+

Accuracy, generalization, and speed challenges pose hurdles to adopting on-device learning for real-world production systems, especially when reliability and low latency are critical.


12.3 On-device Adaptation

+

In an ML task, resource consumption mainly comes from three sources:

  • The ML model itself
  • The optimization process during model learning
  • Storage and processing of the dataset used for learning

Correspondingly, there are three approaches to adapting existing ML algorithms onto resource-constrained devices:

  • Reducing the complexity of the ML model
  • Modifying optimizations to reduce training resource requirements
  • Creating new storage-efficient data representations

In the following section, we will review these on-device learning adaptation methods. The Model Optimizations chapter provides more details on model optimizations.


12.3.1 Reducing Model Complexity

+

In this section, we will briefly discuss ways to reduce model complexity when adapting ML models on-device. For details on reducing model complexity, please refer to the Model Optimization Chapter.


Traditional ML Algorithms

+

Given edge devices’ computing and memory limitations, select traditional ML algorithms are strong candidates for on-device learning thanks to their lightweight nature. Some example algorithms with low resource footprints include Naive Bayes Classifiers, Support Vector Machines (SVMs), Linear Regression, Logistic Regression, and select Decision Tree algorithms.

+

With some refinements, these classical ML algorithms can be adapted to specific hardware architectures and perform simple tasks. Their low resource requirements make it easy to integrate continuous learning even on edge devices.


Pruning

+

Pruning is a technique for reducing the size and complexity of an ML model to improve its efficiency and generalization performance. This is beneficial for training models on edge devices, where we want to minimize resource usage while maintaining competitive accuracy.

+

The primary goal of pruning is to remove parts of the model that do not contribute significantly to its predictive power while retaining the most informative aspects. In the context of decision trees, pruning involves removing some branches (subtrees) from the tree, leading to a smaller and simpler tree. In the context of DNN, pruning is used to reduce the number of neurons (units) or connections in the network, as shown in Figure fig-ondevice-pruning.

Figure 12.2: Network pruning.
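As a concrete illustration of the DNN case, magnitude-based pruning zeroes out the weights with the smallest absolute values. The sketch below is a minimal numpy version; the 50% sparsity target and the toy weight vector are illustrative, not from any particular model:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity`
    fraction of the entries become zero."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # the k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

w = np.array([0.8, -0.05, 0.3, 0.01, -0.6, 0.02])
pruned = magnitude_prune(w, sparsity=0.5)
# the three smallest-magnitude weights (-0.05, 0.01, 0.02) are now zero
```

The zeroed connections can then be skipped or stored in a sparse format, shrinking both compute and memory for on-device training.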

Reducing Complexity of Deep Learning Models

+

Traditional cloud-based DNN frameworks have too much memory overhead to be used on-device. For example, deep learning systems like PyTorch and TensorFlow require hundreds of megabytes of memory overhead when training models such as MobileNetV2-w0.35, and the overhead scales as the number of training parameters increases.

+

Current research on lightweight DNNs mostly explores CNN architectures. Several bare-metal frameworks designed to run neural networks on MCUs with low computational overhead and memory footprint also exist; examples include MNN, TVM, and TensorFlow Lite. However, they can only perform inference during forward passes and lack support for backpropagation. While these models are designed for edge deployment, their reduced model weights and architectural connections lead to lower resource requirements for continuous learning.

+

The tradeoff between performance and model support is clear when adapting the most popular DNN systems. How do we adapt existing DNN models to resource-constrained settings while maintaining support for backpropagation and continuous learning? The latest research suggests algorithm and system codesign techniques that help reduce the resource consumption of ML training on edge devices. Utilizing techniques such as quantization-aware scaling (QAS), sparse updates, and other cutting-edge methods, on-device learning is possible on embedded systems with only a few hundred kilobytes of RAM while maintaining high accuracy.


12.3.2 Modifying Optimization Processes

+

Choosing the right optimization strategy is important for DNN training on a device since this allows for finding a good local minimum. Since training occurs on a device, this strategy must also consider limited memory and power.


Quantization-Aware Scaling

+

Quantization is a common method for reducing the memory footprint of DNN training. Although this could introduce new errors, these errors can be mitigated by designing a model to characterize this statistical error. For example, models could use stochastic rounding or introduce the quantization error into the gradient updates.

+

A specific algorithmic technique is Quantization-Aware Scaling (QAS), which improves the performance of neural networks on low-precision hardware, such as edge devices, mobile devices, or TinyML systems, by adjusting the scale factors during the quantization process.

+

As we discussed in the Model Optimizations chapter, quantization is the process of mapping a continuous range of values to a discrete set of values. In the context of neural networks, quantization often involves reducing the precision of the weights and activations from 32-bit floating point to lower-precision formats such as 8-bit integers. This reduction in precision can significantly reduce the computational cost and memory footprint of the model, making it suitable for deployment on low-precision hardware. Figure fig-float-int-quantization is an example of float-to-integer quantization.

Figure 12.3: Float-to-integer quantization. Credit: Nvidia.
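The float-to-integer mapping pictured above can be sketched in a few lines. The snippet below is a generic affine (asymmetric) int8 scheme, not any particular framework's implementation, and the tensor values are made up for illustration:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization of float32 values to int8 with a scale and zero point."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / 255.0          # int8 spans 256 representable levels
    zero_point = int(round(-128 - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize(q, scale, zp)
# the round-trip error is bounded by roughly one quantization step (`scale`)
```

Each float is stored in 1 byte instead of 4, at the cost of a small, bounded reconstruction error; this is exactly the error that quantization-aware scaling tries to keep from degrading accuracy.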

However, the quantization process can also introduce quantization errors that can degrade the model’s performance. Quantization-aware scaling is a technique that aims to minimize these errors by adjusting the scale factors used in the quantization process.

+

The QAS process involves two main steps:

  • Quantization-aware training: In this step, the neural network is trained with quantization in mind, using simulated quantization to mimic the effects of quantization during the forward and backward passes. This allows the model to learn to compensate for the quantization errors and improve its performance on low-precision hardware. Refer to the QAT section in Model Optimizations for details.
  • Quantization and scaling: After training, the model is quantized to a low-precision format, and the scale factors are adjusted to minimize the quantization errors. The scale factors are chosen based on the distribution of the weights and activations in the model and are adjusted to ensure that the quantized values are within the range of the low-precision format.

QAS is used to overcome the difficulties of optimizing models on tiny devices without needing hyperparameter tuning; QAS automatically scales tensor gradients with various bit precisions. This stabilizes the training process and matches the accuracy of floating-point precision.
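A minimal sketch of the "simulated quantization" idea: the forward pass uses weights rounded onto an int8 grid, while the update treats the rounding as identity (a straight-through estimator). The tiny regression problem, learning rate, and weight values below are all illustrative assumptions, not QAS itself:

```python
import numpy as np

def fake_quantize(w: np.ndarray, num_bits: int = 8) -> np.ndarray:
    """Round weights onto a symmetric integer grid, then map back to floats."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

w = np.array([0.51, -0.27, 0.08])            # full-precision "latent" weights
x, target, lr = np.array([1.0, 2.0, 3.0]), 1.0, 0.03

for _ in range(200):
    y = float(fake_quantize(w) @ x)          # forward pass with simulated quantization
    grad_y = 2.0 * (y - target)              # gradient of squared error w.r.t. y
    w -= lr * grad_y * x                     # straight-through: update the latent w

# after training, the *quantized* weights fit the target closely
```

Because the loss is always evaluated through the quantized forward pass, the latent weights settle where their rounded versions perform well, which is the behavior QAS stabilizes across bit precisions.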


Sparse Updates

+

Although QAS enables the optimization of a quantized model, it uses a large amount of memory, which is unrealistic for on-device training. So, sparse updates are used to reduce the memory footprint of full backward computation. Instead of pruning weights for inference, a sparse update prunes the gradient during backward propagation to update the model sparsely. In other words, a sparse update skips computing gradients for less important layers and sub-tensors.

+

However, determining the optimal sparse update scheme under a constrained memory budget can be challenging due to the large search space. For example, the MCUNet model has 43 convolutional layers and a search space of approximately 10^30. One technique to address this issue is contribution analysis. Contribution analysis measures the accuracy improvement from biases (updating the last few biases compared to only updating the classifier) and from weights (updating the weights of one extra layer compared to a bias-only update). By maximizing these improvements, contribution analysis automatically derives an optimal sparse update scheme for enabling on-device training.
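A toy sketch of what a sparse update scheme does once derived: gradients are applied only to the layers the scheme marks as trainable, so frozen layers contribute no update compute or optimizer memory. The layer names and the choice to update only the last two layers are illustrative assumptions, not the scheme contribution analysis would derive for a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
layers = {f"conv{i}": rng.standard_normal((4, 4)) for i in range(5)}
trainable = {"conv3", "conv4"}            # sparse update scheme: last two layers only

def sparse_sgd_step(layers, grads, lr=0.1):
    """Apply SGD only to layers selected by the sparse update scheme."""
    for name in layers:
        if name in trainable:             # frozen layers are skipped entirely
            layers[name] -= lr * grads[name]

grads = {name: np.ones_like(w) for name, w in layers.items()}
before = {name: w.copy() for name, w in layers.items()}
sparse_sgd_step(layers, grads)
# conv0..conv2 are untouched; only conv3 and conv4 moved
```

In a real system the savings come from never materializing the skipped gradients at all during backpropagation, not just from skipping the weight update.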


Layer-Wise Training

+

Other methods besides quantization can help optimize routines. One such method is layer-wise training. A significant memory consumer of DNN training is end-to-end backpropagation, which requires all intermediate feature maps to be stored so the model can calculate gradients. An alternative to this approach that reduces the memory footprint of DNN training is sequential layer-by-layer training (T. Chen et al. 2016). Instead of training end-to-end, training a single layer at a time helps avoid having to store intermediate feature maps.

Chen, Tianqi, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. “Training Deep Nets with Sublinear Memory Cost.” ArXiv Preprint abs/1604.06174. https://arxiv.org/abs/1604.06174.

Trading Computation for Memory

+

The strategy of trading computation for memory involves releasing some of the memory being used to store intermediate results. Instead, these results can be recomputed as needed. Reducing memory in exchange for more computation is shown to reduce the memory footprint of DNN training to fit into almost any budget while also minimizing computational cost (Gruslys et al. 2016).
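The idea can be sketched with a toy chain of identical layers: store only every fourth activation during the forward pass, then recompute the missing ones from the nearest stored checkpoint while backpropagating. The depth, checkpoint stride, and choice of sin as the "layer" are arbitrary illustrations:

```python
import math

def f(x):  return math.sin(x)      # a toy "layer"
def df(x): return math.cos(x)      # its local derivative at the layer's input

n_layers, stride, x0 = 8, 4, 0.7

# Reference backward pass: store every activation (high memory).
acts = [x0]
for _ in range(n_layers):
    acts.append(f(acts[-1]))
grad_full = 1.0
for i in reversed(range(n_layers)):
    grad_full *= df(acts[i])

# Checkpointed pass: keep only every `stride`-th activation.
checkpoints, x = {0: x0}, x0
for i in range(n_layers):
    x = f(x)
    if (i + 1) % stride == 0:
        checkpoints[i + 1] = x

grad_ckpt = 1.0
for i in reversed(range(n_layers)):
    a = checkpoints[(i // stride) * stride]
    for _ in range(i - (i // stride) * stride):
        a = f(a)                   # recompute the activation feeding layer i
    grad_ckpt *= df(a)
# identical gradient, but peak activation storage drops from 9 values to 3
```

The extra forward recomputation inside each segment is the "computation" traded for the memory of the discarded activations.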

Gruslys, Audrunas, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. 2016. “Memory-Efficient Backpropagation Through Time.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 4125–33. https://proceedings.neurips.cc/paper/2016/hash/a501bebf79d570651ff601788ea9d16d-Abstract.html.
+

12.3.3 Developing New Data Representations

+

The dimensionality and volume of the training data can significantly impact on-device adaptation. So, another technique for adapting models onto resource-constrained devices is to represent datasets more efficiently.


Data Compression

+

The goal of data compression is to reach high accuracies while limiting the amount of training data. One method to achieve this is prioritizing sample complexity: the amount of training data required for the algorithm to reach a target accuracy (Dhar et al. 2021).

Dhar, Sauptik, Junyao Guo, Jiayi (Jason) Liu, Samarth Tripathi, Unmesh Kurup, and Mohak Shah. 2021. “A Survey of on-Device Machine Learning: An Algorithms and Learning Theory Perspective.” ACM Transactions on Internet of Things 2 (3): 1–49. https://doi.org/10.1145/3450494.
Darvish Rouhani, Bita, Azalia Mirhoseini, and Farinaz Koushanfar. 2017. “TinyDL: Just-in-time Deep Learning Solution for Constrained Embedded Systems.” In 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1–4. IEEE. https://doi.org/10.1109/iscas.2017.8050343.
Li, Xiang, Tao Qin, Jian Yang, and Tie-Yan Liu. 2016. “LightRNN: Memory and Computation-Efficient Recurrent Neural Networks.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 4385–93. https://proceedings.neurips.cc/paper/2016/hash/c3e4035af2a1cde9f21e1ae1951ac80b-Abstract.html.

Other more common methods of data compression focus on reducing the dimensionality and the volume of the training data. For example, an approach could take advantage of matrix sparsity to reduce the memory footprint of storing training data. Training data can be transformed into a lower-dimensional embedding and factorized into a dictionary matrix multiplied by a block-sparse coefficient matrix (Darvish Rouhani, Mirhoseini, and Koushanfar 2017). Another example could involve representing words from a large language training dataset in a more compressed vector format (Li et al. 2016).
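The dictionary-factorization idea can be sketched with a truncated SVD: an approximately low-rank data matrix is stored as a small coefficient matrix times a dictionary. The matrix sizes, rank, and noise level below are made-up illustrations, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic training data that is approximately rank 4
D = rng.standard_normal((200, 4)) @ rng.standard_normal((4, 50))
D += 0.01 * rng.standard_normal(D.shape)

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 4
coeffs = U[:, :k] * s[:k]          # 200 x 4 coefficient matrix
dictionary = Vt[:k]                # 4 x 50 dictionary matrix
D_hat = coeffs @ dictionary        # reconstruct samples only when needed

stored = coeffs.size + dictionary.size     # 1,000 floats kept on-device
raw = D.size                               # 10,000 floats in the raw data
# 10x fewer values stored, at the cost of a small reconstruction error
```

Sparse coding methods replace the dense coefficient matrix with a block-sparse one, pushing the storage savings further.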


12.4 Transfer Learning

+

Transfer learning is an ML technique in which a model developed for a particular task is reused as the starting point for a model on a second task. In the context of on-device AI, transfer learning allows us to leverage pre-trained models that have already learned useful representations from large datasets and finetune them for specific tasks using smaller datasets directly on the device. This can significantly reduce the computational resources and time required for training models from scratch.

+

Figure fig-transfer-learning-apps includes some intuitive examples of transfer learning from the real world. For instance, if you can ride a bicycle, you know how to balance yourself on two-wheel vehicles. Then, it would be easier for you to learn how to ride a motorcycle than it would be for someone who cannot ride a bicycle.

Figure 12.4: Transferring knowledge between tasks. Credit: Zhuang et al. (2021).

Zhuang, Fuzhen, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. 2021. “A Comprehensive Survey on Transfer Learning.” Proc. IEEE 109 (1): 43–76. https://doi.org/10.1109/jproc.2020.3004555.

Let’s take the example of a smart sensor application that uses on-device AI to recognize objects in images captured by the device. Traditionally, this would require sending the image data to a server, where a large neural network model processes the data and sends back the results. With on-device AI, the model is stored and runs directly on-device, eliminating the need to send data to a server.

+

If we want to customize the model for the on-device characteristics, training a neural network model from scratch on the device would be impractical due to the limited computational resources and battery life. This is where transfer learning comes in. Instead of training a model from scratch, we can take a pre-trained model, such as a convolutional neural network (CNN) or a transformer network trained on a large dataset of images, and finetune it for our specific object recognition task. This finetuning can be done directly on the device using a smaller dataset of images relevant to the task. By leveraging the pre-trained model, we can reduce the computational resources and time required for training while still achieving high accuracy for the object recognition task.

+

Transfer learning is important in making on-device AI practical by allowing us to leverage pre-trained models and finetune them for specific tasks, thereby reducing the computational resources and time required for training. The combination of on-device AI and transfer learning opens up new possibilities for AI applications that are more privacy-conscious and responsive to user needs.

+

Transfer learning has revolutionized the way models are developed and deployed, both in the cloud and at the edge. Transfer learning is being used in the real world. One such example is the use of transfer learning to develop AI models that can detect and diagnose diseases from medical images, such as X-rays, MRI scans, and CT scans. For example, researchers at Stanford University developed a transfer learning model that can detect cancer in skin images with an accuracy of 97% (Esteva et al. 2017). This model was pre-trained on 1.28 million images to classify a broad range of objects and then specialized for cancer detection by training on a dermatologist-curated dataset of skin images.

Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. 2017. “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks.” Nature 542 (7639): 115–18. https://doi.org/10.1038/nature21056.

Implementation in production scenarios can be broadly categorized into two stages: pre-deployment and post-deployment.


12.4.1 Pre-Deployment Specialization

+

In the pre-deployment stage, transfer learning acts as a catalyst to expedite the development process. Here’s how it typically works: Imagine we are creating a system to recognize different breeds of dogs. Rather than starting from scratch, we can utilize a pre-trained model that has already mastered the broader task of recognizing animals in images.

+

This pre-trained model serves as a solid foundation and contains a wealth of knowledge acquired from extensive data. We then finetune this model using a specialized dataset containing images of various dog breeds. This finetuning process tailors the model to our specific need — precisely identifying dog breeds. Once finetuned and validated to meet performance criteria, this specialized model is then ready for deployment.

+

Here’s how it works in practice:

  • Start with a Pre-Trained Model: Begin by selecting a model that has already been trained on a comprehensive dataset, usually related to a general task. This model serves as the foundation for the task at hand.
  • Finetuning: The pre-trained model is then finetuned on a smaller, more specialized dataset specific to the desired task. This step allows the model to adapt and specialize its knowledge to the specific requirements of the application.
  • Validation: After finetuning, the model is validated to ensure it meets the performance criteria for the specialized task.
  • Deployment: Once validated, the specialized model is then deployed into the production environment.

This method significantly reduces the time and computational resources required to train a model from scratch (Pan and Yang 2010). By adopting transfer learning, embedded systems can achieve high accuracy on specialized tasks without the need to gather extensive data or expend significant computational resources on training from the ground up.

Pan, Sinno Jialin, and Qiang Yang. 2010. “A Survey on Transfer Learning.” IEEE Trans. Knowl. Data Eng. 22 (10): 1345–59. https://doi.org/10.1109/tkde.2009.191.

12.4.2 Post-Deployment Adaptation

+

Deployment to a device need not mark the culmination of an ML model’s educational trajectory. With the advent of transfer learning, we open the doors to the deployment of adaptive ML models in real-world scenarios, catering to users’ personalized needs.

+

Consider a real-world application where a parent wishes to identify their child in a collection of images from a school event on their smartphone. In this scenario, the parent is faced with the challenge of locating their child amidst images of many other children. Transfer learning can be employed here to finetune an embedded system’s model to this unique and specialized task. Initially, the system might use a generic model trained to recognize faces in images. However, with transfer learning, the system can adapt this model to recognize the specific features of the user’s child.

+

Here’s how it works:

  1. Data Collection: The embedded system gathers images that include the child, ideally with the parent’s input to ensure accuracy and relevance. This can be done directly on the device, maintaining the user’s data privacy.
  2. Model Finetuning: The pre-existing face recognition model, which has been trained on a large and diverse dataset, is then finetuned using the newly collected images of the child. This process adapts the model to recognize the child’s specific facial features, distinguishing them from other children in the images.
  3. Validation: The refined model is then validated to ensure it accurately recognizes the child in various images. This can involve the parent verifying the model’s performance and providing feedback for further improvements.
  4. Deployment: Once validated, the adapted model is deployed on the device, enabling the parent to easily identify their child in images without having to sift through them manually.

This on-the-fly customization enhances the model’s efficacy for the individual user, ensuring that they benefit from ML personalization. This is, in part, how Apple Photos or Google Photos work when they ask us to recognize a face, and then, based on that information, they index all the photos by that face. Because the learning and adaptation occur on the device itself, there are no risks to personal privacy. The parent’s images are not uploaded to a cloud server or shared with third parties, protecting the family’s privacy while still reaping the benefits of a personalized ML model. This approach represents a significant step forward in the quest to provide users with tailored ML solutions that respect and uphold their privacy.


12.4.3 Benefits

+

Transfer learning has become an important technique in ML and artificial intelligence, and it is particularly valuable for several reasons.

  1. Data Scarcity: In many real-world scenarios, acquiring a sufficiently large labeled dataset to train an ML model from scratch is challenging. Transfer learning mitigates this issue by allowing the use of pre-trained models that have already learned valuable features from a vast dataset.
  2. Computational Expense: Training a model from scratch requires significant computational resources and time, especially for complex models like deep neural networks. By using transfer learning, we can leverage the computation that has already been done during the training of the source model, thereby saving both time and computational power.
  3. Limited Annotated Data: For some specific tasks, there might be ample raw data available, but the process of labeling that data for supervised learning can be costly and time-consuming. Transfer learning enables us to utilize pre-trained models that have been trained on a related task with labeled data, hence requiring less annotated data for the new task.

There are advantages to reusing the features:

  1. Hierarchical Feature Learning: Deep learning models, particularly Convolutional Neural Networks (CNNs), can learn hierarchical features. Lower layers typically learn generic features like edges and shapes, while higher layers learn more complex and task-specific features. Transfer learning allows us to reuse the generic features learned by a model and finetune the higher layers for our specific task.
  2. Boosting Performance: Transfer learning has been proven to boost the performance of models on tasks with limited data. The knowledge gained from the source task can provide a valuable starting point and lead to faster convergence and improved accuracy on the target task.

Exercise 12.1 (Transfer Learning)  


Imagine training an AI to recognize flowers like a pro, but without needing a million flower pictures! That’s the power of transfer learning. In this Colab, we’ll take an AI that already knows about images and teach it to become a flower expert with less effort. Get ready to make your AI smarter, not harder!

+

+

12.4.4 Core Concepts

+

Understanding the core concepts of transfer learning is essential for effectively utilizing this powerful approach in ML. Here, we’ll break down some of the main principles and components that underlie the process of transfer learning.


Source and Target Tasks

+

In transfer learning, there are two main tasks involved: the source task and the target task. The source task is the task for which the model has already been trained and has learned valuable information. The target task is the new task we want the model to perform. The goal of transfer learning is to leverage the knowledge gained from the source task to improve performance on the target task.

+

Suppose we have a model trained to recognize various fruits in images (source task), and we want to create a new model to recognize different vegetables in images (target task). In that case, we can use transfer learning to leverage the knowledge gained during the fruit recognition task to improve the performance of the vegetable recognition model.


Representation Transfer

+

Representation transfer is about transferring the learned representations (features) from the source task to the target task. There are three main types of representation transfer:

  • Instance Transfer: This involves reusing the data instances from the source task in the target task.
  • Feature-Representation Transfer: This involves transferring the learned feature representations from the source task to the target task.
  • Parameter Transfer: This involves transferring the model’s learned parameters (weights) from the source task to the target task.

In natural language processing, a model trained to understand the syntax and grammar of a language (source task) can have its learned representations transferred to a new model designed to perform sentiment analysis (target task).


Finetuning

+

Finetuning is the process of adjusting the parameters of a pre-trained model to adapt it to the target task. This typically involves updating the weights of the model’s layers, especially the last few layers, to make the model more relevant for the new task. In image classification, a model pre-trained on a general dataset like ImageNet (source task) can be finetuned by adjusting the weights of its layers to perform well on a specific classification task, like recognizing specific animal species (target task).


Feature Extractions

+

Feature extraction involves using a pre-trained model as a fixed feature extractor, where the output of the model’s intermediate layers is used as features for the target task. This approach is particularly useful when the target task has a small dataset, as the pre-trained model’s learned features can significantly enhance performance. In medical image analysis, a model pre-trained on a large dataset of general medical images (source task) can be used as a feature extractor to provide valuable features for a new model designed to recognize specific types of tumors in X-ray images (target task).
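A minimal numpy sketch of feature extraction: a frozen "pre-trained" layer (here just a fixed random projection standing in for real learned weights, an assumption for illustration) produces features, and only a small logistic-regression head is trained on the target task:

```python
import numpy as np

rng = np.random.default_rng(42)
W_frozen = rng.standard_normal((16, 8))     # stand-in for pre-trained weights

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)    # frozen extractor, never updated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# small synthetic target-task dataset whose labels are learnable from the features
X = rng.standard_normal((64, 16))
v_true = rng.standard_normal(8)
y = (extract_features(X) @ v_true > 0).astype(float)

head = np.zeros(8)                          # the only trainable parameters
for _ in range(300):                        # plain gradient descent on the head
    p = sigmoid(extract_features(X) @ head)
    head -= 0.1 * extract_features(X).T @ (p - y) / len(y)

acc = float(np.mean((sigmoid(extract_features(X) @ head) > 0.5) == y))
# the tiny head reaches high training accuracy without touching W_frozen
```

Only the 8 head parameters are ever updated, which is why this pattern fits small on-device datasets and memory budgets: the expensive representation was paid for once, off-device.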


12.4.5 Types of Transfer Learning

+

Transfer learning can be classified into three main types based on the nature of the source and target tasks and data. Let’s explore each type in detail:


Inductive Transfer Learning

+

In inductive transfer learning, the goal is to learn the target predictive function with the help of source data. It typically involves finetuning a pre-trained model on the target task with available labeled data. A common example of inductive transfer learning is image classification tasks. For instance, a model pre-trained on the ImageNet dataset (source task) can be finetuned to classify specific types of birds (target task) using a smaller labeled dataset of bird images.


Transductive Transfer Learning

+

Transductive transfer learning uses both source and target data, but labeled data is available only for the source domain; the task itself remains the same. The main aim is to transfer knowledge from the source domain to the target domain. Sentiment analysis for different languages can serve as an example of transductive transfer learning. A model trained to perform sentiment analysis in English (source task) can be adapted to perform sentiment analysis in another language, like French (target task), by leveraging parallel datasets of English and French sentences with the same sentiments.


Unsupervised Transfer Learning


Unsupervised transfer learning is used when the source and target tasks are related, but there is no labeled data available for the target task. The goal is to leverage the knowledge gained from the source task to improve performance on the target task, even without labeled data. An example of unsupervised transfer learning is topic modeling in text data. A model trained to extract topics from news articles (source task) can be adapted to extract topics from social media posts (target task) without needing labeled data for the social media posts.


Comparison and Tradeoffs


By leveraging these different types of transfer learning, practitioners can choose the approach that best fits the nature of their tasks and available data, ultimately leading to more effective and efficient ML models. So, in summary:

  • Inductive: source and target tasks can differ, and labeled data is available for the target task
  • Transductive: the task stays the same, but knowledge is transferred across different domains
  • Unsupervised: no labeled data for the target task; learned feature representations are transferred

Table 12.1 presents a matrix that outlines in a bit more detail the similarities and differences between the types of transfer learning:

Table 12.1: Comparison of transfer learning types.

|                              | Inductive Transfer Learning | Transductive Transfer Learning | Unsupervised Transfer Learning |
|------------------------------|-----------------------------|--------------------------------|--------------------------------|
| Labeled Data for Target Task | Required                    | Not Required                   | Not Required                   |
| Source Task                  | Can be different            | Same                           | Same or Different              |
| Target Task                  | Can be different            | Same                           | Can be different               |
| Objective                    | Improve target task performance with source data | Transfer knowledge from source to target domain | Leverage source task to improve target task performance without labeled data |
| Example                      | ImageNet to bird classification | Sentiment analysis in different languages | Topic modeling for different text data |

12.4.6 Constraints and Considerations


When engaging in transfer learning, there are several factors that must be considered to ensure successful knowledge transfer and model performance. Here’s a breakdown of some key factors:


Domain Similarity


Domain similarity refers to how closely related the source and target domains are. The more similar the domains, the more likely the transfer learning will be successful. Transferring knowledge from a model trained on images of outdoor scenes (source domain) to a new task that involves recognizing objects in indoor scenes (target domain) might be more successful than transferring knowledge from outdoor scenes to a task involving text analysis, as the domains (images vs. text) are quite different.


Task Similarity


Task similarity refers to how closely related the source and target tasks are. Similar tasks are likely to benefit more from transfer learning. A model trained to recognize different breeds of dogs (source task) can be more easily adapted to recognize different breeds of cats (target task) than it can be adapted to perform a completely different task like language translation.


Data Quality and Quantity


The quality and quantity of data available for the target task can significantly impact the success of transfer learning. More high-quality data can result in better model performance. Suppose we have a large dataset with clear, well-labeled images to recognize specific bird species. In that case, the transfer learning process will likely be more successful than if we have a small, noisy dataset.


Feature Space Overlap


Feature space overlap refers to how well the features learned by the source model align with the features needed for the target task. Greater overlap can lead to more successful transfer learning. A model trained on high-resolution images (source task) may not transfer well to a target task that involves low-resolution images, as the feature space (high-res vs. low-res) is different.


Model Complexity


The complexity of the source model can also impact the success of transfer learning. Sometimes, a simpler model might transfer better than a complex one, as it is less likely to overfit the source task. For example, a simple convolutional neural network (CNN) model trained on image data (source task) may transfer more successfully to a new image classification task (target task) than a complex CNN with many layers, as the simpler model is less likely to overfit the source task.


By considering these factors, ML practitioners can make informed decisions about when and how to utilize transfer learning, ultimately leading to more successful model performance on the target task. The success of transfer learning hinges on the degree of similarity between the source and target domains. Overfitting is risky, especially when finetuning occurs on a limited dataset. On the computational front, certain pre-trained models, owing to their size, might not comfortably fit into the memory constraints of some devices or may run prohibitively slowly. Over time, as data evolves, there is potential for model drift, indicating the need for periodic re-training or ongoing adaptation.


Learn more about transfer learning in the video below.


12.5 Federated Machine Learning


Federated Learning Overview


The modern internet is full of large networks of connected devices. Whether it’s cell phones, thermostats, smart speakers, or other IoT products, countless edge devices are a goldmine for hyper-personalized, rich data. However, with that rich data comes an assortment of problems with information transfer and privacy. Constructing a training dataset in the cloud from these devices would require high-bandwidth, costly data transfer and would violate users’ privacy.


Federated learning offers a solution to these problems: train models partially on the edge devices and only communicate model updates to the cloud. In 2016, a team from Google designed an architecture for federated learning that attempts to address these problems.


In their initial paper, Google outlines a principal federated learning algorithm called FederatedAveraging, shown in Figure 12.5. Specifically, FederatedAveraging performs stochastic gradient descent (SGD) across several edge devices. Each device \(k\) calculates a gradient \(g_k = \nabla F_k(w_t)\), which is then used to update the server-side weights (with \(\eta\) as the learning rate across \(K\) clients): \[w_{t+1} \leftarrow w_t - \eta \sum_{k=1}^{K} \frac{n_k}{n} g_k\] For each round of training, the server takes a random set of client devices and calls each client to train on its local batch using the most recent server-side weights. Those weights are then returned to the server, where they are collected individually and averaged to update the global model weights.
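The server-side update above amounts to a sample-count-weighted average of client contributions. A minimal sketch of that aggregation step, with made-up client updates:

```python
# FederatedAveraging server step: weight each client's returned weights
# by its share n_k / n of the total training samples.

def federated_average(client_updates):
    # client_updates: list of (weights, n_k) pairs from participating clients
    n = sum(n_k for _, n_k in client_updates)
    dim = len(client_updates[0][0])
    return [sum(w[i] * n_k / n for w, n_k in client_updates) for i in range(dim)]

# Illustrative round: two clients with 10 and 30 local samples.
updates = [([1.0, 2.0], 10), ([3.0, 4.0], 30)]
new_global = federated_average(updates)  # weighted toward the larger client
```

The client with more data (n_k = 30) pulls the global weights toward its update, exactly as the \(n_k/n\) factor in the formula prescribes.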

Figure 12.5: Google’s Proposed FederatedAverage Algorithm. Credit: McMahan et al. (2017).

With this proposed structure, there are a few key vectors for further optimizing federated learning. We will outline each in the following subsections.


The following video is an overview of federated learning.


12.5.1 Communication Efficiency


One of the key bottlenecks in federated learning is communication. Every time a client trains the model, they must communicate their updates back to the server. Similarly, once the server has averaged all the updates, it must send them back to the client. This incurs huge bandwidth and resource costs on large networks of millions of devices. As the field of federated learning advances, a few optimizations have been developed to minimize this communication. To address the footprint of the model, researchers have developed model compression techniques. In the client-server protocol, federated learning can also minimize communication through the selective sharing of updates on clients. Finally, efficient aggregation techniques can also streamline the communication process.


12.5.2 Model Compression


In standard federated learning, the server communicates the entire model to each client, and then the client sends back all of the updated weights. This means that the easiest way to reduce the client’s memory and communication footprint is to minimize the size of the model needed to be communicated. We can employ all of the previously discussed model optimization strategies to do this.


In 2022, another team at Google proposed that each client communicates via a compressed format and decompresses the model on the fly for training (Yang et al. 2023), allocating and deallocating the full memory for the model only for a short period while training. The model is compressed through a range of various quantization strategies elaborated upon in their paper. Meanwhile, the server can update the uncompressed model by decompressing and applying updates as they come in.
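As a rough illustration of the compress/decompress idea, the sketch below uses plain min-max affine quantization to an 8-bit range; this is a generic stand-in, not the exact quantization strategies from the paper.

```python
# Hedged sketch: store weights in an 8-bit form, dequantize only when needed.

def quantize(ws, bits=8):
    lo, hi = min(ws), max(ws)
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # avoid zero scale for flat inputs
    q = [round((w - lo) / scale) for w in ws]   # integers in [0, 2^bits - 1]
    return q, lo, scale

def dequantize(q, lo, scale):
    # Reconstruct approximate floats; error per weight is at most scale / 2.
    return [lo + qi * scale for qi in q]

weights = [0.12, -0.5, 0.33, 0.9]
q, lo, scale = quantize(weights)
restored = dequantize(q, lo, scale)
```

Communicating `q` (one byte per weight) plus the two floats `lo` and `scale` is roughly a 4x reduction over 32-bit floats, at the cost of bounded rounding error.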

Yang, Tien-Ju, Yonghui Xiao, Giovanni Motta, Françoise Beaufays, Rajiv Mathews, and Mingqing Chen. 2023. “Online Model Compression for Federated Learning with Large Models.” In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. https://doi.org/10.1109/icassp49357.2023.10097124.

12.5.3 Selective Update Sharing


There are many methods for selectively sharing updates. The general principle is that reducing the portion of the model that the clients are training on the edge reduces the memory necessary for training and the size of communication to the server. In basic federated learning, the client trains the entire model. This means that when a client sends an update to the server, it has gradients for every weight in the network.


However, we cannot just reduce communication by sending pieces of those gradients from each client to the server because the gradients are part of an entire update required to improve the model. Instead, you need to architecturally design the model such that each client trains only a small portion of the broader model, reducing the total communication while still gaining the benefit of training on client data. A paper (Shi and Radu 2022) from the University of Sheffield applies this concept to a CNN by splitting the global model into two parts: an upper and a lower part, as shown in Figure 12.6.

Shi, Hongrui, and Valentin Radu. 2022. “Data Selection for Efficient Model Update in Federated Learning.” In Proceedings of the 2nd European Workshop on Machine Learning and Systems, 72–78. ACM. https://doi.org/10.1145/3517207.3526980.

Figure 12.6: Split model architecture for selective sharing. Credit: Shi et al. (2022).

The lower part is designed to focus on generic features in the dataset, while the upper part, trained on those generic features, is designed to be more sensitive to the activation maps. This means that the lower part of the model is trained through standard federated averaging across all of the clients. Meanwhile, the upper part of the model is trained entirely on the server side from the activation maps generated by the clients. This approach drastically reduces communication for the model while still making the network robust to various types of input found in the data on the client devices.


12.5.4 Optimized Aggregation


In addition to reducing communication overhead, optimizing the aggregation function can improve model training speed and accuracy in certain federated learning use cases. While the standard aggregation is simple averaging, various other approaches can improve model efficiency, accuracy, and security. One alternative is clipped averaging, which clips the model updates within a specific range. Another strategy to preserve security is differentially private average aggregation. This approach integrates differential privacy into the aggregation step to protect client identities. Each client adds a layer of random noise to their updates before communicating with the server. The server then folds the noisy updates into the global model, meaning that the amount of noise needs to be tuned carefully to balance privacy and accuracy.
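A toy sketch combining the two variants just described: clip each client update to a norm bound, then add noise before averaging. The bound, noise scale, and seeding are arbitrary illustrative choices, not a calibrated differential-privacy mechanism.

```python
import random

# Illustrative sketch of clipped + noisy ("DP-style") aggregation.

def clip(update, bound):
    # Scale the update down if its L2 norm exceeds the bound.
    norm = sum(u * u for u in update) ** 0.5
    if norm > bound:
        return [u * bound / norm for u in update]
    return update

def dp_aggregate(updates, bound=1.0, noise_std=0.01, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    clipped = [clip(u, bound) for u in updates]
    dim = len(updates[0])
    # Average the clipped updates, then perturb each coordinate with noise.
    return [sum(c[i] for c in clipped) / len(clipped) + rng.gauss(0, noise_std)
            for i in range(dim)]

agg = dp_aggregate([[3.0, 4.0], [0.6, 0.8]])
```

Clipping bounds any single client's influence on the average, which is what makes the added noise meaningful for privacy; a production system would calibrate `noise_std` to the clip bound and a target privacy budget.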


In addition to security-enhancing aggregation methods, there are several modifications to the aggregation methods that can improve training speed and performance by adding client metadata along with the weight updates. Momentum aggregation is a technique that helps address the convergence problem. In federated learning, client data can be extremely heterogeneous depending on the different environments in which the devices are used, and models trained on such heterogeneous data may struggle to converge. Each client stores a momentum term locally, which tracks the pace of change over several updates. With clients communicating this momentum, the server can factor in the rate of change of each update when changing the global model to accelerate convergence. Similarly, weighted aggregation can factor in client performance or other parameters like device type or network connection strength to adjust the weight with which the server should incorporate the model updates. Specific aggregation algorithms are described further by Moshawrab et al. (2023).

Moshawrab, Mohammad, Mehdi Adda, Abdenour Bouzouane, Hussein Ibrahim, and Ali Raad. 2023. “Reviewing Federated Learning Aggregation Algorithms; Strategies, Contributions, Limitations and Future Perspectives.” Electronics 12 (10): 2287. https://doi.org/10.3390/electronics12102287.

12.5.5 Handling non-IID Data


When using federated learning to train a model across many client devices, it is convenient to consider the data to be independent and identically distributed (IID) across all clients. When data is IID, the model will converge faster and perform better because each local update on any given client is more representative of the broader dataset. This makes aggregation straightforward, as you can directly average all clients. However, this differs from how data often appears in the real world. Consider a few of the following ways in which data may be non-IID:

  • If you are learning on a set of health-monitor devices, different device models could mean different sensor qualities and properties. Low-quality sensors and devices may produce data, and therefore model updates, distinctly different from those of high-quality ones.
  • Consider a smart keyboard trained to perform autocorrect. If a disproportionate number of devices come from a certain region, the slang, sentence structure, or even language in use could skew model updates toward a certain style of typing.
  • If you have wildlife sensors in remote areas, connectivity may not be equally distributed, causing some clients in certain regions to send fewer model updates than others. If those regions have different wildlife activity from certain species, that could skew the updates toward those animals.
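A small simulation of label skew, one way data becomes non-IID across clients. The routing rule, skew value, and seeding below are invented purely to illustrate the effect:

```python
import random

# Toy non-IID partition: each simulated client ends up seeing mostly one class,
# unlike an IID shuffle that spreads classes evenly.

def non_iid_partition(samples, n_clients, skew=0.9, rng=None):
    rng = rng or random.Random(0)
    clients = [[] for _ in range(n_clients)]
    for label, x in samples:
        # With probability `skew`, route the sample to the client "responsible"
        # for its label; otherwise place it on a random client.
        if rng.random() < skew:
            clients[label % n_clients].append((label, x))
        else:
            clients[rng.randrange(n_clients)].append((label, x))
    return clients

data = [(i % 2, i) for i in range(100)]  # two classes, 50 samples each
parts = non_iid_partition(data, n_clients=2)
```

A local update computed on `parts[0]` would mostly reflect class 0, which is why naive averaging of such updates can drift away from the global optimum.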

There are a few approaches to addressing non-IID data in federated learning. One approach would be to change the aggregation algorithm. If you use a weighted aggregation algorithm, you can adjust based on different client properties like region, sensor properties, or connectivity (Zhao et al. 2018).

Zhao, Yue, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. “Federated Learning with Non-IID Data.” ArXiv Preprint abs/1806.00582. https://arxiv.org/abs/1806.00582.

12.5.6 Client Selection


Considering all of the factors influencing the efficacy of federated learning, like IID data and communication, client selection is a key component to ensuring a system trains well. Selecting the wrong clients can skew the dataset, resulting in non-IID data. Similarly, choosing clients randomly with bad network connections can slow down communication. Therefore, several key characteristics must be considered when selecting the right subset of clients.


When selecting clients, there are three main components to consider: data heterogeneity, resource allocation, and communication cost. We can select clients on the previously proposed metrics in the non-IID section to address data heterogeneity. In federated learning, all devices may have different amounts of computing, resulting in some being more inefficient at training than others. When selecting a subset of clients for training, one must consider a balance of data heterogeneity and available resources. In an ideal scenario, you can always select the subset of clients with the greatest resources. However, this may skew your dataset, so a balance must be struck. Communication differences add another layer; you want to avoid being bottlenecked by waiting for devices with poor connections to transmit all their updates. Therefore, you must also consider choosing a subset of diverse yet well-connected devices.
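One hypothetical way to encode this balance is a simple score over client metadata. The field names and the 50/50 weighting below are assumptions for illustration, not a published selection algorithm:

```python
# Hypothetical client-selection sketch: rank clients by a blend of how
# valuable their data is and how well-connected they are, then pick the top k.

def select_clients(clients, k):
    def score(c):
        # Arbitrary illustrative trade-off between data value and connectivity.
        return 0.5 * c["data_diversity"] + 0.5 * c["connection_quality"]
    return sorted(clients, key=score, reverse=True)[:k]

clients = [
    {"id": "a", "data_diversity": 0.9, "connection_quality": 0.2},
    {"id": "b", "data_diversity": 0.6, "connection_quality": 0.8},
    {"id": "c", "data_diversity": 0.3, "connection_quality": 0.9},
]
chosen = select_clients(clients, k=2)
```

Note that client "a" has the most diverse data but loses out because of its poor connection; tuning the weights shifts that trade-off.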


12.5.7 An Example of Deployed Federated Learning: Gboard


A primary example of a deployed federated learning system is Google’s keyboard, Gboard, for Android devices. In implementing federated learning for the keyboard, Google focused on employing differential privacy techniques to protect the user’s data and identity. Gboard leverages language models for several key features, such as Next Word Prediction (NWP), Smart Compose (SC), and On-The-Fly rescoring (OTF) (Xu et al. 2023), as shown in Figure 12.7.

Xu, Zheng, Yanxiang Zhang, Galen Andrew, Christopher A. Choquette-Choo, Peter Kairouz, H. Brendan McMahan, Jesse Rosenstock, and Yuanbo Zhang. 2023. “Federated Learning of Gboard Language Models with Differential Privacy.” ArXiv Preprint abs/2305.18465. https://arxiv.org/abs/2305.18465.

NWP will anticipate the next word the user tries to type based on the previous one. SC gives inline suggestions to speed up the typing based on each character. OTF will re-rank the proposed next words based on the active typing process. All three of these models need to run quickly on the edge, and federated learning can accelerate training on the users’ data. However, uploading every word a user typed to the cloud for training would be a massive privacy violation. Therefore, federated learning emphasizes differential privacy, which protects the user while enabling a better user experience.

Figure 12.7: Google Gboard Features. Credit: Zheng et al. (2023).

To accomplish this goal, Google employed its algorithm DP-FTRL, which provides a formal guarantee that trained models will not memorize specific user data or identities. The system design is shown in Figure 12.8. DP-FTRL, combined with secure aggregation, encrypts model updates and provides an optimal balance of privacy and utility. Furthermore, adaptive clipping is applied in the aggregation process to limit the impact of individual users on the global model (step 3 in Figure 12.8). By combining all these techniques, Google can continuously refine its keyboard while preserving user privacy in a formally provable way.

Figure 12.8: Differential Privacy in Gboard. Credit: Zheng et al. (2023).

Exercise 12.2 (Federated Learning - Text Generation)  


Have you ever used those smart keyboards to suggest the next word? With federated learning, we can make them even better without sacrificing privacy. In this Colab, we’ll teach an AI to predict words by training on text data spread across devices. Get ready to make your typing even smoother!



Exercise 12.3 (Federated Learning - Image Classification)  


Want to train an image-savvy AI without sending your photos to the cloud? Federated learning is the answer! In this Colab, we’ll train a model across multiple devices, each learning from its images. Privacy is protected, and teamwork makes the AI dream work!



12.5.8 Benchmarking for Federated Learning: MedPerf


One of the richest examples of data on the edge is medical devices. These devices store some of the most personal data on users but offer huge advances in personalized treatment and better accuracy in medical AI. Given these two factors, medical devices are the perfect use case for federated learning. MedPerf is an open-source platform used to benchmark models using federated evaluation (Karargyris et al. 2023). Instead of just training models via federated learning, MedPerf takes the model to edge devices to test it against personalized data while preserving privacy. In this way, a benchmark committee can evaluate various models in the real world on edge devices while still preserving patient anonymity.

Karargyris, Alexandros, Renato Umeton, Micah J. Sheller, Alejandro Aristizabal, Johnu George, Anna Wuest, Sarthak Pati, et al. 2023. “Federated Benchmarking of Medical Artificial Intelligence with MedPerf.” Nature Machine Intelligence 5 (7): 799–810. https://doi.org/10.1038/s42256-023-00652-2.

12.6 Security Concerns


Performing ML model training and adaptation on end-user devices also introduces security risks that must be addressed. Some key security concerns include:

  • Exposure of private data: Training data may be leaked or stolen from devices
  • Data poisoning: Adversaries can manipulate training data to degrade model performance
  • Model extraction: Attackers may attempt to steal trained model parameters
  • Membership inference: Models may reveal the participation of specific users’ data
  • Evasion attacks: Specially crafted inputs can cause misclassification

Any system that performs learning on-device introduces security concerns, as it may expose vulnerabilities in larger-scale models. Numerous security risks are associated with any ML model, but these risks have specific consequences for on-device learning. Fortunately, there are methods to mitigate these risks and improve the real-world performance of on-device learning.


12.6.1 Data Poisoning


On-device ML introduces unique data security challenges compared to traditional cloud-based training. In particular, data poisoning attacks pose a serious threat during on-device learning. Adversaries can manipulate training data to degrade model performance when deployed.


Several data poisoning attack techniques exist:

  • Label Flipping: Incorrect labels are applied to samples. For instance, in image classification, cat photos may be labeled as dogs to confuse the model. Flipping even 10% of labels can have significant consequences for the model.
  • Data Insertion: Fake or distorted inputs are introduced into the training set. This could include pixelated images, noisy audio, or garbled text.
  • Logic Corruption: The underlying patterns (https://www.worldscientific.com/doi/10.1142/S0218001414600027) in the data are altered to mislead the model. In sentiment analysis, highly negative reviews may be marked positive through this technique. For this reason, recent surveys have shown that many companies fear data poisoning more than other adversarial ML concerns.
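A quick simulation of the label-flipping technique described above, flipping 10% of labels in a toy cat/dog dataset; the dataset and seed are invented for illustration:

```python
import random

# Simulate a label-flipping poisoning attack on a binary-labeled dataset.

def flip_labels(labels, fraction=0.1, classes=("cat", "dog"), rng=None):
    rng = rng or random.Random(42)  # seeded so the sketch is reproducible
    labels = list(labels)
    n_flip = int(len(labels) * fraction)
    # Choose distinct indices and swap each chosen label to the other class.
    for i in rng.sample(range(len(labels)), n_flip):
        labels[i] = classes[1] if labels[i] == classes[0] else classes[0]
    return labels

clean = ["cat"] * 50 + ["dog"] * 50
poisoned = flip_labels(clean)  # exactly 10 of 100 labels flipped
```

Defenses such as anomaly detection work by spotting exactly this kind of statistical discrepancy between the poisoned labels and the rest of the data.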

What makes data poisoning alarming is how it exploits the discrepancy between curated datasets and live training data. Consider a cat photo dataset collected from the internet. Weeks later, when this data is used to train a model on-device, new cat photos on the web may differ significantly from the curated set.


With data poisoning, attackers purchase domains and upload content that influences a portion of the training data. Even small data changes significantly impact the model’s learned behavior. Consequently, poisoning can instill racist, sexist, or other harmful biases if unchecked.


Microsoft Tay was a chatbot launched by Microsoft in 2016. It was designed to learn from its interactions with users on social media platforms like Twitter. Unfortunately, Microsoft Tay became a prime example of data poisoning in ML models. Within 24 hours of its launch, Microsoft had to take Tay offline because it had started producing offensive and inappropriate messages, including hate speech and racist comments. This occurred because some users on social media intentionally fed Tay with harmful and offensive input, which the chatbot then learned from and incorporated into its responses.


This incident is a clear example of data poisoning because malicious actors intentionally manipulated the data used to train and inform the chatbot’s responses. The data poisoning resulted in the chatbot adopting harmful biases and producing output that its developers did not intend. It demonstrates how even small amounts of maliciously crafted data can significantly impact the behavior of ML models and highlights the importance of implementing robust data filtering and validation mechanisms to prevent such incidents from occurring.


Such biases could have dangerous real-world impacts. Rigorous data validation, anomaly detection, and tracking of data provenance are critical defensive measures. Adopting frameworks like Five Safes ensures models are trained on high-quality, representative data (Desai et al. 2016).

Desai, Tanvi, Felix Ritchie, Richard Welpton, et al. 2016. “Five Safes: Designing Data Access for Research.” Economics Working Paper Series 1601: 28.

Data poisoning is a pressing concern for secure on-device learning since data at the endpoint cannot be easily monitored in real-time. If models are allowed to adapt on their own, then we run the risk of the device acting maliciously. However, continued research in adversarial ML aims to develop robust solutions to detect and mitigate such data attacks.


12.6.2 Adversarial Attacks


During the training phase, attackers might inject malicious data into the training dataset, which can subtly alter the model’s behavior. For example, an attacker could add images of cats labeled as dogs to a dataset used to train an image classification model. If done cleverly, the model’s accuracy might not drop significantly, and the attack could go unnoticed. The model would then incorrectly classify some cats as dogs, which could have consequences depending on the application.


In an embedded security camera system, for instance, this could allow an intruder to avoid detection by wearing a specific pattern that the model has been tricked into classifying as non-threatening.


During the inference phase, attackers can use adversarial examples to fool the model. Adversarial examples are inputs that have been slightly altered in a way that causes the model to make incorrect predictions. For instance, an attacker might add a small amount of noise to an image in a way that causes a face recognition system to misidentify a person. These attacks can be particularly concerning in applications where safety is at stake, such as autonomous vehicles. In one well-known example, researchers caused a traffic sign recognition system to misclassify a stop sign as a speed limit sign. This type of misclassification could lead to accidents if it occurred in a real-world autonomous driving system.
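To show how little perturbation can suffice, here is a gradient-sign (FGSM-style) perturbation against a toy logistic-regression model. The weights and epsilon are invented for illustration; real attacks target far larger models with much smaller, imperceptible perturbations.

```python
import math

# FGSM-style sketch: move the input in the direction that most increases the
# loss, using only the sign of the input gradient.

def predict(w, x):
    # Logistic-regression probability of the positive class.
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, x, y, eps):
    # For logistic loss, d(loss)/dx = (p - y) * w; perturb by eps * sign(grad).
    p = predict(w, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

w = [2.0, -1.0]
x = [1.0, 0.0]               # confidently classified as positive
x_adv = fgsm(w, x, y=1, eps=1.5)  # perturbed copy that flips the decision
```

The perturbed input is numerically close to the original yet lands on the other side of the decision boundary, which is the essence of an evasion attack.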


To mitigate these risks, several defenses can be employed:

  • Data Validation and Sanitization: Before incorporating new data into the training dataset, it should be thoroughly validated and sanitized to ensure it is not malicious.
  • Adversarial Training: The model can be trained on adversarial examples to make it more robust to these types of attacks.
  • Input Validation: During inference, inputs should be validated to ensure they have not been manipulated to create adversarial examples.
  • Regular Auditing and Monitoring: Regularly auditing and monitoring the model’s behavior can help detect and mitigate adversarial attacks. However, this is easier said than done in the context of tiny ML systems; it is often hard to monitor embedded ML systems at the endpoint due to communication bandwidth limitations, which we will discuss in the MLOps chapter.

By understanding the potential risks and implementing these defenses, we can help secure on-device training at the endpoint/edge and mitigate the impact of adversarial attacks. Data poisoning and adversarial attacks are easily confused, so Table 12.2 compares the two:

Table 12.2: Comparison of data poisoning and adversarial attacks.

| Aspect | Data Poisoning | Adversarial Attacks |
|--------|----------------|---------------------|
| Timing | Training phase | Inference phase |
| Target | Training data | Input data |
| Goal | Negatively affect model’s performance | Cause incorrect predictions |
| Method | Insert malicious examples into training data, often with incorrect labels | Add carefully crafted noise to input data |
| Example | Adding images of cats labeled as dogs to a dataset used for training an image classification model | Adding a small amount of noise to an image in a way that causes a face recognition system to misidentify a person |
| Potential Effects | Model learns incorrect patterns and makes incorrect predictions | Immediate and potentially dangerous incorrect predictions |
| Applications Affected | Any ML model | Autonomous vehicles, security systems, etc. |

12.6.3 Model Inversion


Model inversion attacks are a privacy threat to on-device machine learning models trained on sensitive user data (Nguyen et al. 2023). Understanding this attack vector and mitigation strategies will be important for building secure and ethical on-device AI. For example, imagine an iPhone app that uses on-device learning to categorize photos in your camera roll into groups like “beach,” “food,” or “selfies” for easier searching.

Nguyen, Ngoc-Bao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, and Ngai-Man Cheung. 2023. “Re-Thinking Model Inversion Attacks Against Deep Neural Networks.” In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16384–93. IEEE. https://doi.org/10.1109/cvpr52729.2023.01572.

The on-device model may be trained by Apple on a dataset of iCloud photos from consenting users. A malicious attacker could attempt to extract parts of those original iCloud training photos using model inversion. Specifically, the attacker feeds crafted synthetic inputs into the on-device photo classifier. By tweaking the synthetic inputs and observing how the model categorizes them, they can refine the inputs until they reconstruct copies of the original training data - like a beach photo from a user’s iCloud. Now, the attacker has breached that user’s privacy by obtaining one of their photos without consent. This demonstrates why model inversion is dangerous - it can potentially leak highly sensitive training data.


Photos are an especially high-risk data type because they often contain identifiable people, location information, and private moments. However, the same attack methodology could apply to other personal data, such as audio recordings, text messages, or users’ health data.


To defend against model inversion, one would need to take precautions like adding noise to the model outputs or using privacy-preserving machine learning techniques like federated learning to train the on-device model. The goal is to prevent attackers from being able to reconstruct the original training data.
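A minimal sketch of the output-noise defense just mentioned: perturb the probabilities a model exposes before releasing them, so repeated probing yields less precise signals. The noise level is an illustrative assumption, not a calibrated privacy mechanism.

```python
import random

# Sketch: add noise to a classifier's output distribution before release,
# blunting the precise feedback a model-inversion attacker relies on.

def noisy_output(probs, noise_std=0.05, rng=None):
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    noisy = [max(p + rng.gauss(0, noise_std), 1e-6) for p in probs]
    total = sum(noisy)
    return [p / total for p in noisy]  # re-normalize to a valid distribution

released = noisy_output([0.7, 0.2, 0.1])  # what the attacker actually sees
```

The utility/privacy tension is visible even here: with small noise the top prediction is usually preserved, but the fine-grained confidence values an inversion attack depends on are blurred.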


12.6.4 On-Device Learning Security Concerns


While data poisoning and adversarial attacks are common concerns for ML models in general, on-device learning introduces unique security risks. When on-device variants of large-scale models are published, adversaries can exploit these smaller models to attack their larger counterparts. Research has demonstrated that as on-device models and full-scale models become more similar, the vulnerability of the original large-scale models increases significantly. For instance, evaluations across 19 Deep Neural Networks (DNNs) revealed that exploiting on-device models could increase the vulnerability of the original large-scale models by up to 100 times.

+

There are three primary types of security risks specific to on-device learning:

+
    +
  • Transfer-Based Attacks: These attacks exploit the transferability property between a surrogate model (an approximation of the target model, similar to an on-device model) and a remote target model (the original full-scale model). Attackers generate adversarial examples using the surrogate model, which can then be used to deceive the target model. For example, imagine an on-device model designed to identify spam emails. An attacker could use this model to generate a spam email that is not detected by the larger, full-scale filtering system.

  • +
  • Optimization-Based Attacks: These attacks craft adversarial examples by iteratively optimizing inputs against some form of the model’s objective function until the desired outcome is achieved. Gradient estimation attacks, for example, approximate the model’s gradient from query outputs (such as softmax confidence scores), while gradient-free attacks approximate it from the model’s final decision (the predicted class) alone, albeit at the cost of many more queries.

  • +
  • Query Attacks with Transfer Priors: These attacks combine elements of transfer-based and optimization-based attacks. They reverse engineer on-device models to serve as surrogates for the target full-scale model. In other words, attackers use the smaller on-device model to understand how the larger model works and then use this knowledge to attack the full-scale model.

  • +
+

By understanding these specific risks associated with on-device learning, we can develop more robust security protocols to protect both on-device and full-scale models from potential attacks.

+
+
+

12.6.5 Mitigation of On-Device Learning Risks

+

Various methods can be employed to mitigate the numerous security risks associated with on-device learning. These methods may be specific to the type of attack or serve as a general tool to bolster security.

+

One strategy to reduce security risks is to diminish the similarity between on-device models and full-scale models, thereby reducing transferability by up to 90%. This method, known as similarity-unpairing, addresses the problem that arises when adversaries exploit the input-gradient similarity between the two models. By finetuning the full-scale model to create a new version with similar accuracy but different input gradients, we can construct the on-device model by quantizing this updated full-scale model. This unpairing reduces the vulnerability of on-device models by limiting the exposure of the original full-scale model. Importantly, the order of finetuning and quantization can be varied while still achieving risk mitigation (Hong, Carlini, and Kurakin 2023).
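As a concrete illustration of the final step — deriving the on-device model by quantizing the updated full-scale model — here is a minimal symmetric int8 post-training quantization sketch. The per-tensor scaling scheme is a common simplification, not the specific procedure used by Hong, Carlini, and Kurakin (2023).

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map float weights onto
    [-127, 127] using a single scale derived from the largest magnitude."""
    scale = max(float(np.abs(w).max()), 1e-12) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale
```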

+

To tackle data poisoning, it is imperative to source datasets from trusted and reliable vendors.

+

Several strategies can be employed to combat adversarial attacks. A proactive approach involves generating adversarial examples and incorporating them into the model’s training dataset, thereby fortifying the model against such attacks. Tools like CleverHans, an open-source training library, are instrumental in creating adversarial examples. Defense distillation is another effective strategy, wherein the on-device model outputs probabilities of different classifications rather than definitive decisions (Hong, Carlini, and Kurakin 2023), making it more challenging for adversarial examples to exploit the model.
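To illustrate how such adversarial examples are generated, here is a minimal FGSM (fast gradient sign method) sketch against a logistic-regression model. The weights and inputs are hypothetical; libraries like CleverHans implement the same idea for deep networks.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(p, y):
    """Binary cross-entropy for a single prediction."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: perturb x by eps in the sign of the loss gradient.
    For logistic regression, d(BCE)/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)
```

Adversarially generated inputs like `x_adv` can then be added to the training set to harden the model.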

+
+Hong, Sanghyun, Nicholas Carlini, and Alexey Kurakin. 2023. “Publishing Efficient on-Device Models Increases Adversarial Vulnerability.” In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 271–90. IEEE; IEEE. https://doi.org/10.1109/satml54575.2023.00026. +

The theft of intellectual property is another significant concern when deploying on-device models, as adversaries may attempt to reverse-engineer the model to steal the underlying technology. To safeguard against this, the binary executable of the trained model should be stored on a microcontroller unit with encrypted software and secured physical interfaces on the chip.

+

Furthermore, on-device models are often trained on well-known or open-source datasets, such as the Visual Wake Words dataset used with MobileNet-class models. As such, it is important to keep the final dataset used for training the model private. Additionally, protecting the data augmentation process and incorporating application-specific use cases can reduce the risk of an on-device model being reverse-engineered.

+

Lastly, the Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS) serves as a valuable matrix tool that helps assess the risk profile of on-device models, empowering developers to identify and mitigate potential risks proactively.

+
+
+

12.6.6 Securing Training Data

+

There are various ways to secure on-device training data. Each of the following concepts runs deep enough to merit a course of its own, so we introduce them here only briefly, as pointers for further study.

+
+

Encryption

+

Encryption serves as the first line of defense for training data. This involves implementing end-to-end encryption for local storage on devices and communication channels to prevent unauthorized access to raw training data. Trusted execution environments, such as Intel SGX and ARM TrustZone, are essential for facilitating secure training on encrypted data.

+

Additionally, when aggregating updates from multiple devices, secure multi-party computation protocols can be employed to enhance security (Kairouz, Oh, and Viswanath 2015); a practical application of this is in collaborative on-device learning, where cryptographic privacy-preserving aggregation of user model updates can be implemented. This technique effectively hides individual user data even during the aggregation phase.
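The core idea of privacy-preserving aggregation can be sketched with pairwise additive masks: each pair of devices agrees on a random mask that one adds and the other subtracts, so individual updates look random while the masks cancel in the sum. This toy version omits the cryptographic key agreement and dropout handling of real secure aggregation protocols.

```python
import numpy as np

def mask_updates(updates, seed=42):
    """Pairwise additive masking: for every pair i < j, device i adds a
    random mask and device j subtracts the same mask. Each masked update
    is individually uninformative, but the masks cancel when summed."""
    rng = np.random.default_rng(seed)  # stands in for pairwise key agreement
    masked = [u.astype(float).copy() for u in updates]
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.normal(size=updates[0].shape)
            masked[i] += m
            masked[j] -= m
    return masked
```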

+
+Kairouz, Peter, Sewoong Oh, and Pramod Viswanath. 2015. “Secure Multi-Party Differential Privacy.” In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, edited by Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Garnett, 2008–16. https://proceedings.neurips.cc/paper/2015/hash/a01610228fe998f515a72dd730294d87-Abstract.html. +
+
+

Differential Privacy

+

Differential privacy is another crucial strategy for protecting training data. By injecting calibrated statistical noise into the data, we can mask individual records while still extracting valuable population patterns (Dwork and Roth 2013). Managing the privacy budget across multiple training iterations and reducing noise as the model converges is also vital (Abadi et al. 2016). Methods such as formally provable differential privacy, which may include adding Laplace or Gaussian noise scaled to the dataset’s sensitivity, can be employed.

+
+Dwork, Cynthia, and Aaron Roth. 2013. “The Algorithmic Foundations of Differential Privacy.” Foundations and Trends in Theoretical Computer Science 9 (3-4): 211–407. https://doi.org/10.1561/0400000042. +
+Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318. +
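As a minimal illustration of the noise-injection idea, the Laplace mechanism adds noise scaled to the query's sensitivity divided by the privacy budget ε. The function below is a sketch; production DP systems (e.g., DP-SGD, as in Abadi et al. 2016) add noise to gradients and carefully account for the cumulative privacy budget.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace(0, sensitivity/epsilon) noise.
    Smaller epsilon => stronger privacy guarantee => more noise."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)
```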
+
+

Anomaly Detection

+

Anomaly detection plays an important role in identifying and mitigating potential data poisoning attacks. This can be achieved through statistical analyses like Principal Component Analysis (PCA) and clustering, which help detect deviations in aggregated training data. Time-series methods such as Cumulative Sum (CUSUM) charts are useful for identifying shifts indicative of potential poisoning. Comparing current data distributions with previously seen clean distributions can also help flag anomalies. Moreover, suspected poisoned batches should be removed from the training update aggregation process. For example, spot checks on subsets of training images on devices can be conducted using PhotoDNA hashes to identify poisoned inputs.
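A one-sided CUSUM chart of this kind can be sketched in a few lines: it accumulates deviations above a target mean and raises an alarm when the cumulative sum crosses a threshold. The slack and threshold values here are illustrative and would be calibrated on clean data.

```python
def cusum_alarms(samples, target_mean, slack, threshold):
    """One-sided (upward) CUSUM: accumulate excess deviation above
    target_mean + slack; raise an alarm and reset when the running
    sum crosses the threshold."""
    s, alarms = 0.0, []
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target_mean - slack))
        if s > threshold:
            alarms.append(i)
            s = 0.0
    return alarms
```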

+
+
+

Input Data Validation

+

Lastly, input data validation is essential for ensuring the integrity and validity of input data before it is fed into the training model, thereby protecting against adversarial payloads. Similarity measures, such as cosine distance, can be employed to catch inputs that deviate significantly from the expected distribution. Suspicious inputs that may contain adversarial payloads should be quarantined and sanitized. Furthermore, parser access to training data should be restricted to validated code paths only. Leveraging hardware security features, such as ARM Pointer Authentication, can prevent memory corruption (ARM Limited, 2023). An example of this is implementing input integrity checks on audio training data used by smart speakers before processing by the speech recognition model (Z. Chen and Xu 2023).

+
+Chen, Zhiyong, and Shugong Xu. 2023. “Learning Domain-Heterogeneous Speaker Recognition Systems with Personalized Continual Federated Learning.” EURASIP Journal on Audio, Speech, and Music Processing 2023 (1): 33. https://doi.org/10.1186/s13636-023-00299-2. +
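A minimal version of such a similarity check might look like the following; the reference centroid and distance threshold are hypothetical and would be calibrated on known-clean data.

```python
import numpy as np

def cosine_distance(a, b):
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def validate_input(x, clean_centroid, max_distance=0.5):
    """Accept an input only if it is close (in cosine distance) to the
    centroid of known-clean training data; otherwise it should be
    quarantined and sanitized."""
    return cosine_distance(x, clean_centroid) <= max_distance
```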
+
+
+
+

12.7 On-Device Training Frameworks

+

Embedded inference frameworks like TF-Lite Micro (David et al. 2021), TVM (T. Chen et al. 2018), and MCUNet (Lin et al. 2020) provide a slim runtime for running neural network models on microcontrollers and other resource-constrained devices. However, they don’t support on-device training. Training requires its own set of specialized tools due to the impact of quantization on gradient calculation and the memory footprint of backpropagation (Lin et al. 2022).

+
+David, Robert, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, et al. 2021. “Tensorflow Lite Micro: Embedded Machine Learning for Tinyml Systems.” Proceedings of Machine Learning and Systems 3: 800–811. +
+Chen, Tianqi, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, et al. 2018. TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.” In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 578–94. +
+Lin, Ji, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, and Song Han. 2020. MCUNet: Tiny Deep Learning on IoT Devices.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/86c51678350f656dcc7f490a43946ee5-Abstract.html. +
+Lin, Ji, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, and Song Han. 2022. “On-Device Training Under 256kb Memory.” Adv. Neur. In. 35: 22941–54. +

In recent years, a handful of tools and frameworks have started to emerge that enable on-device training. These include Tiny Training Engine (Lin et al. 2022), TinyTL (Cai et al. 2020), and TinyTrain (Kwon et al. 2023).

+
+

12.7.1 Tiny Training Engine

+

Tiny Training Engine (TTE) uses several techniques to optimize memory usage and speed up the training process. An overview of the TTE workflow is shown in Figure fig-tte-workflow. First, TTE offloads the automatic differentiation to compile time instead of runtime, significantly reducing overhead during training. Second, TTE performs graph optimization like pruning and sparse updates to reduce memory requirements and accelerate computations.

+
+
+
+ +
+
+Figure 12.9: TTE workflow. +
+
+
+

Specifically, TTE follows four main steps:

+
    +
  • During compile time, TTE traces the forward propagation graph and derives the corresponding backward graph for backpropagation. This allows differentiation to happen at compile time rather than runtime.
  • +
  • TTE prunes from the backward graph any nodes representing frozen weights, i.e., weights that are held fixed during training. Since no gradients need to be computed for them, pruning their nodes saves memory.
  • +
  • TTE reorders the gradient descent operators to interleave them with the backward pass computations. This scheduling minimizes memory footprints.
  • +
  • TTE uses code generation to compile the optimized forward and backward graphs, which are then deployed for on-device training.
  • +
+
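The four steps above can be sketched with a toy compile-time pass. The layer representation and operation names are invented for illustration; the real TTE operates on full computation graphs.

```python
def compile_backward_schedule(layers):
    """Toy compile-time pass in the spirit of TTE: derive a backward
    schedule from the forward layer list, prune gradient-of-weight nodes
    for frozen layers, and interleave SGD updates with backprop so each
    weight gradient can be freed immediately after use.

    layers: list of (name, trainable) tuples in forward order.
    """
    trainable_ids = [i for i, (_, t) in enumerate(layers) if t]
    first_trainable = trainable_ids[0] if trainable_ids else None
    schedule = []
    for i in range(len(layers) - 1, -1, -1):
        name, trainable = layers[i]
        if trainable:
            schedule.append(f"grad_w:{name}")
            schedule.append(f"sgd_update:{name}")  # interleaved update
        # activation gradients are only needed for layers that lie
        # after the earliest trainable layer in the forward pass
        if first_trainable is not None and i > first_trainable:
            schedule.append(f"grad_x:{name}")
    return schedule
```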
+
+

12.7.2 Tiny Transfer Learning

+

Tiny Transfer Learning (TinyTL) enables memory-efficient on-device training through a technique called weight freezing. During training, much of the memory bottleneck comes from storing intermediate activations and updating the weights in the neural network.

+

To reduce this memory overhead, TinyTL freezes the majority of the weights so they do not need to be updated during training. This eliminates the need to store intermediate activations for frozen parts of the network. TinyTL only finetunes the bias terms, which are much smaller than the weights. An overview of TinyTL workflow is shown in Figure fig-tinytl-workflow.

+
+
+
+ +
+
+Figure 12.10: TinyTL workflow. Credit: Cai et al. (2020). +
+
+Cai, Han, Chuang Gan, Ligeng Zhu, and Song Han. 2020. TinyTL: Reduce Memory, Not Parameters for Efficient on-Device Learning.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/81f7acabd411274fcf65ce2070ed568a-Abstract.html. +
+
+

Weight freezing applies to fully connected layers as well as to convolutional and normalization layers. However, adapting only the biases limits the model’s ability to learn and adapt to new data.

+

To increase adaptability without much additional memory, TinyTL uses a small residual learning model. This refines the intermediate feature maps to produce better outputs, even with fixed weights. The residual model introduces minimal overhead - less than 3.8% on top of the base model.

+

By freezing most weights, TinyTL significantly reduces memory usage during on-device training. The residual model then allows it to adapt and learn effectively for the task. The combined approach provides memory-efficient on-device training with minimal impact on model accuracy.
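The weight-freezing idea can be sketched with a single linear layer in which only the bias receives gradient updates. This toy NumPy example stands in for TinyTL's bias-only fine-tuning (without its residual module).

```python
import numpy as np

def finetune_bias_only(W, b, X, Y, lr=0.1, steps=200):
    """Fine-tune only the bias of a linear model Y_hat = X @ W + b under
    MSE loss. W stays frozen, so no weight gradients (or the stored
    activations needed to compute them) are required."""
    for _ in range(steps):
        err = X @ W + b - Y
        b = b - lr * 2.0 * err.mean(axis=0)  # gradient step on bias only
    return b

def mse(W, b, X, Y):
    return float(((X @ W + b - Y) ** 2).mean())
```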

+
+
+

12.7.3 TinyTrain

+

TinyTrain significantly reduces the time required for on-device training by selectively updating only certain parts of the model. It does this using a technique called task-adaptive sparse updating, as shown in Figure fig-tiny-train.

+

Based on the user data, memory, and computing available on the device, TinyTrain dynamically chooses which neural network layers to update during training. This layer selection is optimized to reduce computation and memory usage while maintaining high accuracy.

+
+
+
+ +
+
+Figure 12.11: TinyTrain workflow. Credit: Kwon et al. (2023). +
+
+Kwon, Young D, Rui Li, Stylianos I Venieris, Jagmohan Chauhan, Nicholas D Lane, and Cecilia Mascolo. 2023. TinyTrain: Deep Neural Network Training at the Extreme Edge.” ArXiv Preprint abs/2307.09988. https://arxiv.org/abs/2307.09988. +
+
+

More specifically, TinyTrain first does offline pretraining of the model. During pretraining, it not only trains the model on the task data but also meta-trains the model. Meta-training means training the model on metadata about the training process itself. This meta-learning improves the model’s ability to adapt accurately even when limited data is available for the target task.

+

Then, during the online adaptation stage, when the model is being customized on the device, TinyTrain performs task-adaptive sparse updates. Using the criteria around the device’s capabilities, it selects only certain layers to update through backpropagation. The layers are chosen to balance accuracy, memory usage, and computation time.

+

By sparsely updating layers tailored to the device and task, TinyTrain significantly reduces on-device training time and resource usage. The offline meta-training also improves accuracy when adapting to limited data. Together, these methods enable fast, efficient, and accurate on-device training.
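The layer-selection step can be sketched as a budgeted greedy choice: score each layer's expected accuracy benefit, estimate its update cost, and pick layers with the best benefit-per-cost until the device's budget is spent. The scores and costs below are hypothetical stand-ins for the criteria TinyTrain derives from the model and device.

```python
def select_layers_to_update(benefits, costs, budget):
    """Greedy benefit-per-cost selection of which layers to unfreeze,
    subject to a memory/compute budget."""
    order = sorted(range(len(benefits)),
                   key=lambda i: benefits[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return sorted(chosen)
```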

+
+
+

12.7.4 Comparison

+

Here is a table summarizing the key similarities and differences between the Tiny Training Engine, TinyTL, and TinyTrain frameworks:

+ +++++ + + + + + + + + + + + + + + + + + + + + + + + + +
FrameworkSimilaritiesDifferences
Tiny Training EngineOn-device training
Optimize memory & computation
Leverage pruning, sparsity, etc
Traces forward & backward graphs
Prunes frozen weights
Interleaves backprop & gradients
Code generation
TinyTLOn-device training
Optimize memory & computation
Leverage freezing, sparsity, etc
Freezes most weights
Only adapts biases
Uses residual model
TinyTrainOn-device training
Optimize memory & computation
Leverage sparsity, etc
Meta-training in pretraining
Task-adaptive sparse updating
Selective layer updating
+
+
+
+

12.8 Conclusion

+

The concept of on-device learning is increasingly important for improving the usability and scalability of TinyML. This chapter examined the intricacies of on-device learning, covering its advantages and limitations, adaptation strategies, key algorithms and techniques, security implications, and existing and emerging on-device training frameworks.

+

On-device learning is, undoubtedly, a groundbreaking paradigm that brings forth numerous advantages for embedded and edge ML deployments. By performing training directly on the endpoint devices, on-device learning obviates the need for continuous cloud connectivity, making it particularly well-suited for IoT and edge computing applications. It comes with benefits such as improved privacy, ease of compliance, and resource efficiency. At the same time, on-device learning faces limitations related to hardware constraints, limited data size, and reduced model accuracy and generalization.

+

Mechanisms such as reduced model complexity, optimization and data compression techniques, and related learning methods such as transfer learning and federated learning allow models to adapt to learn and evolve under resource constraints, thus serving as the bedrock for effective ML on edge devices.

+

The critical security concerns in on-device learning highlighted in this chapter, ranging from data poisoning and adversarial attacks to specific risks introduced by on-device learning, must be addressed in real workloads for on-device learning to be a viable paradigm. Effective mitigation strategies, such as data validation, encryption, differential privacy, anomaly detection, and input data validation, are crucial to safeguard on-device learning systems from these threats.

+

The emergence of specialized on-device training frameworks like Tiny Training Engine, Tiny Transfer Learning (TinyTL), and TinyTrain presents practical tools for efficient on-device training. These frameworks employ various techniques to optimize memory usage, reduce computational overhead, and streamline the on-device training process.

+

In conclusion, on-device learning stands at the forefront of TinyML, promising a future where models can autonomously acquire knowledge and adapt to changing environments on edge devices. The application of on-device learning has the potential to revolutionize various domains, including healthcare, industrial IoT, and smart cities. However, the transformative potential of on-device learning must be balanced with robust security measures to protect against data breaches and adversarial threats. Embracing innovative on-device training frameworks and implementing stringent security protocols are key steps in unlocking the full potential of on-device learning. As this technology continues to evolve, it holds the promise of making our devices smarter, more responsive, and better integrated into our daily lives.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides serve as a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we also offer a series of hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/ops/ops.html b/contents/ops/ops.html new file mode 100644 index 00000000..9b41fdb1 --- /dev/null +++ b/contents/ops/ops.html @@ -0,0 +1,2180 @@ + + + + + + + + + +Machine Learning Systems - 13  ML Operations + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

13  ML Operations

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Create a detailed, wide rectangular illustration of an AI workflow. The image should showcase the process across six stages, with a flow from left to right: 1. Data collection, with diverse individuals of different genders and descents using a variety of devices like laptops, smartphones, and sensors to gather data. 2. Data processing, displaying a data center with active servers and databases with glowing lights. 3. Model training, represented by a computer screen with code, neural network diagrams, and progress indicators. 4. Model evaluation, featuring people examining data analytics on large monitors. 5. Deployment, where the AI is integrated into robotics, mobile apps, and industrial equipment. 6. Monitoring, showing professionals tracking AI performance metrics on dashboards to check for accuracy and concept drift over time. Each stage should be distinctly marked and the style should be clean, sleek, and modern with a dynamic and informative color scheme.
+
+
+

This chapter explores the practices and architectures needed to effectively develop, deploy, and manage ML models across their entire lifecycle. We examine the various phases of the ML process, including data collection, model training, evaluation, deployment, and monitoring. The importance of automation, collaboration, and continuous improvement is also discussed. We contrast different environments for ML model deployment, from cloud servers to embedded edge devices, and analyze their distinct constraints. We demonstrate how to tailor ML system design and operations through concrete examples for reliable and optimized model performance in any target environment. The goal is to provide readers with a comprehensive understanding of ML model management so they can successfully build and run ML applications that sustainably deliver value.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand what MLOps is and why it is needed

  • +
  • Learn the architectural patterns for traditional MLOps

  • +
  • Contrast traditional vs. embedded MLOps across the ML lifecycle

  • +
  • Identify key constraints of embedded environments

  • +
  • Learn strategies to mitigate embedded ML challenges

  • +
  • Examine real-world case studies demonstrating embedded MLOps principles

  • +
  • Appreciate the need for holistic technical and human approaches

  • +
+
+
+
+

13.1 Introduction

+

Machine Learning Operations (MLOps) is a systematic approach that combines machine learning (ML), data science, and software engineering to automate the end-to-end ML lifecycle. This includes everything from data preparation and model training to deployment and maintenance. MLOps ensures that ML models are developed, deployed, and maintained efficiently and effectively.

+

Let’s start with a general (i.e., non-edge ML) example. Consider a ridesharing company that wants to deploy a machine-learning model to predict real-time rider demand. The data science team spends months developing a model, but when it is time to deploy, they discover it is incompatible with the engineering team’s production environment, and rebuilding the model for deployment costs weeks of additional work. This is where MLOps comes in.

+

With MLOps protocols and tools, the model developed by the data science team can be seamlessly deployed and integrated into the production environment. In essence, MLOps removes friction from the development, deployment, and maintenance of ML systems. It improves collaboration between teams through defined workflows and interfaces, and it accelerates iteration speed by enabling continuous delivery for ML models.

+

For the ridesharing company, implementing MLOps means their demand prediction model can be frequently retrained and deployed based on new incoming data. This keeps the model accurate despite changing rider behavior. MLOps also allows the company to experiment with new modeling techniques since models can be quickly tested and updated.

+

Other MLOps benefits include enhanced model lineage tracking, reproducibility, and auditing. Cataloging ML workflows and standardizing artifacts - such as logging model versions, tracking data lineage, and packaging models and parameters - enables deeper insight into model provenance. Standardizing these artifacts facilitates tracing a model back to its origins, replicating the model development process, and examining how a model version has changed over time. This also facilitates regulation compliance, which is especially critical in regulated industries like healthcare and finance, where being able to audit and explain models is important.

+

Major organizations adopt MLOps to boost productivity, increase collaboration, and accelerate ML outcomes. It provides the frameworks, tools, and best practices to effectively manage ML systems throughout their lifecycle. This results in better-performing models, faster time-to-value, and sustained competitive advantage. As we explore MLOps further, consider how implementing these practices can help address embedded ML challenges today and in the future.

+
+
+

13.2 Historical Context

+

MLOps has its roots in DevOps, a set of practices combining software development (Dev) and IT operations (Ops) to shorten the development lifecycle and provide continuous delivery of high-quality software. The parallels between MLOps and DevOps are evident in their focus on automation, collaboration, and continuous improvement. In both cases, the goal is to break down silos between different teams (developers, operations, and, in the case of MLOps, data scientists and ML engineers) and to create a more streamlined and efficient process. It is useful to understand the history of this evolution better to understand MLOps in the context of traditional systems.

+
+

13.2.1 DevOps

+

The term “DevOps” was first coined in 2009 by Patrick Debois, a consultant and Agile practitioner. Debois organized the first DevOpsDays conference in Ghent, Belgium, in 2009. The conference brought together development and operations professionals to discuss ways to improve collaboration and automate processes.

+

DevOps has its roots in the Agile movement, which began in the early 2000s. Agile provided the foundation for a more collaborative approach to software development and emphasized small, iterative releases. However, Agile primarily focuses on collaboration between development teams. As Agile methodologies became more popular, organizations realized the need to extend this collaboration to operations teams.

+

The siloed nature of development and operations teams often led to inefficiencies, conflicts, and delays in software delivery. This need for better collaboration and integration between these teams led to the DevOps movement. DevOps can be seen as an extension of the Agile principles, including operations teams.

+

The key principles of DevOps include collaboration, automation, continuous integration, delivery, and feedback. DevOps focuses on automating the entire software delivery pipeline, from development to deployment. It aims to improve the collaboration between development and operations teams, utilizing tools like Jenkins, Docker, and Kubernetes to streamline the development lifecycle.

+

While Agile and DevOps share common principles around collaboration and feedback, DevOps specifically targets integrating development and IT operations - expanding Agile beyond just development teams. It introduces practices and tools to automate software delivery and enhance the speed and quality of software releases.

+
+
+

13.2.2 MLOps

+

MLOps, short for Machine Learning Operations, extends the principles of DevOps to the ML lifecycle. MLOps aims to automate and streamline the end-to-end ML lifecycle, from data preparation and model development to deployment and monitoring. The main focus of MLOps is to facilitate collaboration between data scientists, data engineers, and IT operations and to automate the deployment, monitoring, and management of ML models. Several key factors led to the rise of MLOps:

+
    +
  • Data drift: Data drift degrades model performance over time, motivating the need for rigorous monitoring and automated retraining procedures provided by MLOps.
  • +
  • Reproducibility: The lack of reproducibility in machine learning experiments motivated MLOps systems to track code, data, and environment variables to enable reproducible ML workflows.
  • +
  • Explainability: The black box nature and lack of explainability of complex models motivated the need for MLOps capabilities to increase model transparency and explainability.
  • +
  • Monitoring: The inability to reliably monitor model performance post-deployment highlighted the need for MLOps solutions with robust model performance instrumentation and alerting.
  • +
  • Friction: The friction in manually retraining and deploying models motivated the need for MLOps systems that automate machine learning deployment pipelines.
  • +
  • Optimization: The complexity of configuring machine learning infrastructure motivated the need for MLOps platforms with optimized, ready-made ML infrastructure.
  • +
+

While DevOps and MLOps share the common goal of automating and streamlining processes, their focus and challenges differ. DevOps primarily deals with the challenges of software development and IT operations. In contrast, MLOps deals with the additional complexities of managing ML models, such as data versioning, model versioning, and model monitoring. MLOps also requires stakeholder collaboration, including data scientists, engineers, and IT operations.

+

In short, DevOps centers on improving collaboration between development and operations teams and automating software delivery, whereas MLOps centers on streamlining and automating the ML lifecycle and fostering collaboration among data scientists, data engineers, and IT operations.


Table 13.1 compares and summarizes them side by side.

Table 13.1: Comparison of DevOps and MLOps.

| Aspect | DevOps | MLOps |
| --- | --- | --- |
| Objective | Streamlining software development and operations processes | Optimizing the lifecycle of machine learning models |
| Methodology | Continuous Integration and Continuous Delivery (CI/CD) for software development | Similar to CI/CD but focuses on machine learning workflows |
| Primary Tools | Version control (Git), CI/CD tools (Jenkins, Travis CI), Configuration management (Ansible, Puppet) | Data versioning tools, Model training and deployment tools, CI/CD pipelines tailored for ML |
| Primary Concerns | Code integration, Testing, Release management, Automation, Infrastructure as code | Data management, Model versioning, Experiment tracking, Model deployment, Scalability of ML workflows |
| Typical Outcomes | Faster and more reliable software releases, Improved collaboration between development and operations teams | Efficient management and deployment of machine learning models, Enhanced collaboration between data scientists and engineers |

Learn more about ML Lifecycles through a case study featuring speech recognition.


13.3 Key Components of MLOps


In this chapter, we will provide an overview of the core components of MLOps, an emerging set of practices that enables robust delivery and lifecycle management of ML models in production. While some MLOps elements like automation and monitoring were covered in previous chapters, we will now combine them into a unified framework and expand on additional capabilities like governance. Additionally, we will describe and link to popular tools used within each component, such as LabelStudio for data labeling. By the end, we hope that you will understand the end-to-end MLOps methodology that takes models from ideation to sustainable value creation within organizations.


13.3.1 Data Management


Robust data management and data engineering actively empower successful MLOps implementations. Teams properly ingest, store, and prepare raw data from sensors, databases, apps, and other systems for model training and deployment.


Teams actively track changes to datasets over time using version control with Git and tools like GitHub or GitLab. Data scientists collaborate on curating datasets by merging changes from multiple contributors. Teams can review or roll back each iteration of a dataset if needed.


Teams meticulously label and annotate data using labeling software like LabelStudio, which enables distributed teams to work on tagging datasets together. As the target variables and labeling conventions evolve, teams maintain accessibility to earlier versions.


Teams store the raw dataset and all derived assets on cloud storage services like Amazon S3 or Google Cloud Storage. These services provide scalable, resilient storage with versioning capabilities. Teams can set granular access permissions.


Robust data pipelines created by teams automate raw data extraction, joining, cleansing, and transformation into analysis-ready datasets. Prefect, Apache Airflow, and dbt are workflow orchestrators that allow engineers to develop flexible, reusable data processing pipelines.


For instance, a pipeline may ingest data from PostgreSQL databases, REST APIs, and CSVs stored on S3. It can filter, deduplicate, and aggregate the data, handle errors, and save the output to S3. The pipeline can also push the transformed data into a feature store like Tecton or Feast for low-latency access.
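As a toy illustration, the filter, deduplicate, and aggregate stages of such a pipeline can be sketched in plain Python. The records, field names, and the stand-in extract/load functions below are hypothetical; a real pipeline would wrap these steps in an orchestrator like Prefect or Airflow and read from actual sources.

```python
from collections import defaultdict

def extract():
    # Stand-in for reads from PostgreSQL, REST APIs, and CSVs on S3
    # (records and field names are invented for illustration).
    return [
        {"device_id": "a", "temp_c": 21.0},
        {"device_id": "a", "temp_c": 21.0},   # exact duplicate to drop
        {"device_id": "a", "temp_c": 23.0},
        {"device_id": "b", "temp_c": None},   # bad record to filter out
        {"device_id": "b", "temp_c": 19.0},
    ]

def transform(rows):
    # Filter out records with missing values.
    rows = [r for r in rows if r["temp_c"] is not None]
    # Deduplicate exact repeats while preserving order.
    seen, unique = set(), []
    for r in rows:
        key = (r["device_id"], r["temp_c"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    # Aggregate: mean temperature per device.
    sums = defaultdict(lambda: [0.0, 0])
    for r in unique:
        sums[r["device_id"]][0] += r["temp_c"]
        sums[r["device_id"]][1] += 1
    return {d: total / n for d, (total, n) in sums.items()}

def load(aggregates):
    # Stand-in for writing the analysis-ready dataset back to S3
    # or pushing features into a store like Feast.
    return aggregates

features = load(transform(extract()))
print(features)  # {'a': 22.0, 'b': 19.0}
```

Each stage is a small, testable function, which is exactly what orchestrators schedule, retry, and monitor as individual tasks.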


In an industrial predictive maintenance use case, sensor data is ingested from devices into S3. A Prefect pipeline processes the sensor data, joining it with maintenance records. The enriched dataset is stored in Feast so models can easily retrieve the latest data for training and predictions.


The video below is a short overview of data pipelines.


13.3.2 CI/CD Pipelines


Continuous integration and continuous delivery (CI/CD) pipelines actively automate the progression of ML models from initial development into production deployment. Adapted for ML systems, CI/CD principles empower teams to rapidly and robustly deliver new models with minimized manual errors.


CI/CD pipelines orchestrate key steps, including checking out new code changes, transforming data, training and registering new models, validation testing, containerization, deploying to environments like staging clusters, and promoting to production. Teams leverage popular CI/CD solutions like Jenkins, CircleCI and GitHub Actions to execute these MLOps pipelines, while Prefect, Metaflow and Kubeflow offer ML-focused options.


Figure 13.1 illustrates a CI/CD pipeline specifically tailored for MLOps. The process starts with a dataset and feature repository (on the left), which feeds into a dataset ingestion stage. Post-ingestion, the data undergoes validation to ensure its quality before being transformed for training. Parallel to this, a retraining trigger can initiate the pipeline based on specified criteria. The data then passes through a model training/tuning phase within a data processing engine, followed by model evaluation and validation. Once validated, the model is registered and stored in a machine learning metadata and artifact repository. The final stage involves deploying the trained model back into the dataset and feature repository, thereby creating a cyclical process for continuous improvement and deployment of machine learning models.

Figure 13.1: MLOps CI/CD diagram. Credit: HarvardX.

For example, when a data scientist checks improvements to an image classification model into a GitHub repository, this actively triggers a Jenkins CI/CD pipeline. The pipeline reruns data transformations and model training on the latest data, tracking experiments with MLflow. After automated validation testing, teams deploy the model container to a Kubernetes staging cluster for further QA. Once approved, Jenkins facilitates a phased rollout of the model to production with canary deployments to catch any issues. If anomalies are detected, the pipeline enables teams to roll back to the previous model version gracefully.
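The canary rollout and rollback logic at the end of such a pipeline can be sketched as a simple gate. This is an illustrative sketch, not any specific tool's API; the traffic stages, error tolerance, and the `measure_error` hook are assumptions.

```python
def canary_rollout(measure_error, baseline_error, tolerance=0.01,
                   stages=(5, 25, 50, 100)):
    """Shift traffic to the new model in stages; roll back if the
    observed error rate degrades beyond the allowed tolerance."""
    for pct in stages:
        observed = measure_error(pct)  # e.g. sampled from live metrics
        if observed > baseline_error + tolerance:
            return ("rolled_back", pct)  # revert to last known good model
    return ("promoted", 100)

# A healthy candidate is promoted through all stages...
print(canary_rollout(lambda pct: 0.040, baseline_error=0.050))
# ...while a degraded one is caught at the first (5%) canary stage.
print(canary_rollout(lambda pct: 0.085, baseline_error=0.050))
```

Keeping the gate as explicit code makes the promotion criteria reviewable and versionable alongside the rest of the pipeline.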


CI/CD pipelines empower teams to iterate and deliver ML models rapidly by connecting the disparate steps from development to deployment under continuous automation. Integrating MLOps tools like MLflow enhances model packaging, versioning, and pipeline traceability. CI/CD is integral for progressing models beyond prototypes into sustainable business systems.


13.3.3 Model Training


In the model training phase, data scientists actively experiment with different ML architectures and algorithms to create optimized models that extract insights and patterns from data. MLOps introduces best practices and automation to make this iterative process more efficient and reproducible.


Modern ML frameworks like TensorFlow, PyTorch and Keras provide pre-built components that simplify designing neural networks and other model architectures. Data scientists leverage built-in modules for layers, activations, losses, etc., and high-level APIs like Keras to focus more on model architecture.


MLOps enables teams to package model training code into reusable, tracked scripts and notebooks. As models are developed, capabilities like hyperparameter tuning, neural architecture search and automatic feature selection rapidly iterate to find the best-performing configurations.


Teams use Git to version control training code and host it in repositories like GitHub to track changes over time. This allows seamless collaboration between data scientists.


Notebooks like Jupyter create an excellent interactive model development environment. The notebooks contain data ingestion, preprocessing, model declaration, training loop, evaluation, and export code in one reproducible document.


Finally, teams orchestrate model training as part of a CI/CD pipeline for automation. For instance, a Jenkins pipeline can trigger a Python script to load new training data, retrain a TensorFlow classifier, evaluate model metrics, and automatically register the model if performance thresholds are met.
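A minimal sketch of such a retraining script's control flow is shown below. The data loading, training, and evaluation functions are hypothetical stand-ins (a real script would call TensorFlow and a registry such as MLflow); the accuracy threshold is an illustrative value.

```python
def load_data():
    # Stand-in for pulling the latest training data from storage.
    return "dataset-v2"

def train(dataset):
    # Stand-in for retraining a classifier on the new data.
    return {"name": "classifier", "trained_on": dataset}

def evaluate(model):
    # Stand-in for computing validation metrics.
    return {"accuracy": 0.93}

registry = []  # stand-in for a model registry

def register(model, metrics):
    registry.append({"model": model, "metrics": metrics})

ACCURACY_THRESHOLD = 0.90

model = train(load_data())
metrics = evaluate(model)
if metrics["accuracy"] >= ACCURACY_THRESHOLD:
    register(model, metrics)  # promote only above-threshold models
print(len(registry))  # 1
```

The threshold check is the key automation step: models that fail it never reach the registry, so downstream deployment stages only ever see vetted candidates.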


An example workflow has a data scientist using a PyTorch notebook to develop a CNN model for image classification. The fastai library provides high-level APIs to simplify training CNNs on image datasets. The notebook trains the model on sample data, evaluates accuracy metrics, and tunes hyperparameters like learning rate and layers to optimize performance. This reproducible notebook is version-controlled and integrated into a retraining pipeline.


Automating and standardizing model training empowers teams to accelerate experimentation and achieve the rigor needed to produce ML systems.


13.3.4 Model Evaluation


Before deploying models, teams perform rigorous evaluation and testing to validate meeting performance benchmarks and readiness for release. MLOps introduces best practices around model validation, auditing, and canary testing.


Teams typically evaluate models against holdout test datasets that are not used during training. The test data originates from the same distribution as production data. Teams calculate metrics like accuracy, AUC, precision, recall, and F1 score.
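For a binary classifier, these metrics follow directly from the confusion-matrix counts. The counts in the example below are illustrative.

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute standard metrics from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many were right
    recall = tp / (tp + fn)             # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# e.g. 80 true positives, 20 false positives, 10 false negatives, 90 true negatives
m = classification_metrics(tp=80, fp=20, fn=10, tn=90)
print(m["accuracy"], m["precision"], round(m["recall"], 2), round(m["f1"], 3))
```

Tracking all four together matters because accuracy alone can look healthy on imbalanced data while precision or recall quietly collapses.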


Teams also track the same metrics over time against test data samples. If evaluation data comes from live production streams, this catches data drifts that degrade model performance over time.


Human oversight for model release remains important. Data scientists review performance across key segments and slices. Error analysis helps identify model weaknesses to guide enhancement. Teams apply fairness and bias detection techniques.


Canary testing releases a model to a small subset of users to evaluate real-world performance before wide deployment. Teams incrementally route traffic to the canary release while monitoring for issues.


For example, a retailer evaluates a personalized product recommendation model against historical test data, reviewing accuracy and diversity metrics. Teams also calculate metrics on live customer data over time, detecting decreased accuracy over the last 2 weeks. Before full rollout, the new model is released to 5% of web traffic to ensure no degradation.


Automating evaluation and canary releases reduces deployment risks. However, human review remains critical for assessing the less quantifiable dynamics of model behavior. Rigorous pre-deployment validation provides confidence in putting models into production.


13.3.5 Model Deployment


Teams need to properly package, test, and track ML models to reliably deploy them to production. MLOps introduces frameworks and procedures for actively versioning, deploying, monitoring, and updating models in sustainable ways.


Teams containerize models using Docker, which bundles code, libraries, and dependencies into a standardized unit. Containers enable smooth portability across environments.


Frameworks like TensorFlow Serving and BentoML help serve predictions from deployed models via performance-optimized APIs. These frameworks handle versioning, scaling, and monitoring.


Teams first deploy updated models to staging or QA environments for testing before full production rollout. Shadow or canary deployments route a sample of traffic to test model variants. Teams incrementally increase access to new models.


Teams build robust rollback procedures in case issues emerge. Rollbacks revert to the last known good model version. Integration with CI/CD pipelines simplifies redeployment if needed.


Teams carefully track model artifacts, such as scripts, weights, logs, and metrics, for each version with ML metadata tools like MLflow. This maintains lineage and auditability.


For example, a retailer containerizes a product recommendation model in TensorFlow Serving and deploys it to a Kubernetes staging cluster. After monitoring and approving performance on sample traffic, Kubernetes shifts 10% of production traffic to the new model. If no issues are detected after a few days, the new model takes over 100% of traffic. However, teams should keep the previous version accessible for rollback if needed.


Model deployment processes enable teams to make ML systems resilient in production by accounting for all transition states.


13.3.6 Infrastructure Management


MLOps teams heavily leverage infrastructure as code (IaC) tools and robust cloud architectures to actively manage the resources needed for development, training, and deployment of ML systems.


Teams use IaC tools like Terraform, CloudFormation and Ansible to programmatically define, provision and update infrastructure in a version controlled manner. For MLOps, teams widely use Terraform to spin up resources on AWS, GCP and Azure.


For model building and training, teams dynamically provision computing resources like GPU servers, container clusters, storage, and databases through Terraform as needed by data scientists. Code encapsulates and preserves infrastructure definitions.


Containers and orchestrators like Docker and Kubernetes allow teams to package models and reliably deploy them across different environments. Containers can be predictably spun up or down automatically based on demand.


By leveraging cloud elasticity, teams scale resources up and down to meet spikes in workloads like hyperparameter tuning jobs or spikes in prediction requests. Auto-scaling enables optimized cost efficiency.
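The core of such an auto-scaling rule can be sketched as a proportional controller: scale the replica count by the ratio of observed to target utilization. The same idea underlies Kubernetes' Horizontal Pod Autoscaler; the target utilization and replica bounds below are illustrative defaults.

```python
import math

def desired_replicas(current, utilization_pct, target_pct=60,
                     min_replicas=1, max_replicas=20):
    """Scale replica count proportionally to observed load,
    clamped to configured bounds."""
    raw = current * utilization_pct / target_pct
    return max(min_replicas, min(max_replicas, math.ceil(raw)))

print(desired_replicas(4, 90))   # load spike: 4 -> 6 replicas
print(desired_replicas(4, 15))   # quiet period: 4 -> 1 replica
```

Rounding up and clamping are deliberate: ceiling errs on the side of capacity, while the bounds prevent runaway scaling from noisy metrics.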


Infrastructure spans on-prem, cloud, and edge devices. A robust technology stack provides flexibility and resilience. Monitoring tools allow teams to observe resource utilization.


For example, a Terraform config may deploy a GCP Kubernetes cluster to host trained TensorFlow models exposed as prediction microservices. The cluster scales up pods to handle increased traffic. CI/CD integration seamlessly rolls out new model containers.


Carefully managing infrastructure through IaC and monitoring enables teams to prevent bottlenecks in operationalizing ML systems at scale.


13.3.7 Monitoring


MLOps teams actively maintain robust monitoring to sustain visibility into ML models deployed in production. Continuous monitoring provides insights into model and system performance so teams can rapidly detect and address issues to minimize disruption.


Teams actively monitor key model aspects, including analyzing samples of live predictions to track metrics like accuracy and confusion matrix over time.


When monitoring performance, teams must profile incoming data to check for model drift - a steady decline in model accuracy after production deployment. Model drift can occur in two ways: concept drift and data drift. Concept drift refers to a fundamental change observed in the relationship between the input data and the target outcomes. For instance, as the COVID-19 pandemic progressed, e-commerce and retail sites had to correct their model recommendations since purchase data was overwhelmingly skewed towards items like hand sanitizer. Data drift describes changes in the distribution of data over time. For example, image recognition algorithms used in self-driving cars must account for seasonality in observing their surroundings. Teams also track application performance metrics like latency and errors for model integrations.
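A minimal drift check compares summary statistics of incoming data against the training distribution. Production systems typically use stronger tests (e.g., Kolmogorov-Smirnov or Population Stability Index); the z-score threshold and data values below are illustrative.

```python
import statistics

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Flag drift when the live mean moves more than z_threshold
    standard errors away from the training mean."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    live_mu = statistics.mean(live_values)
    stderr = sigma / len(live_values) ** 0.5
    return abs(live_mu - mu) / stderr > z_threshold

train = [10.0, 11.0, 9.0, 10.5, 9.5] * 20   # training distribution
stable = [10.2, 9.8, 10.1, 9.9] * 25        # live data, similar distribution
shifted = [13.0, 12.8, 13.2, 12.9] * 25     # live data that has drifted
print(drift_alert(train, stable))   # False
print(drift_alert(train, shifted))  # True
```

Running a check like this per feature on a schedule turns drift from a silent failure mode into an alertable event that can trigger retraining.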


From an infrastructure perspective, teams monitor for capacity issues like high CPU, memory, and disk utilization and system outages. Tools like Prometheus, Grafana, and Elastic enable teams to actively collect, analyze, query, and visualize diverse monitoring metrics. Dashboards make dynamics highly visible.


Teams configure alerting for key monitoring metrics like accuracy declines and system faults to enable proactively responding to events that threaten reliability. For example, drops in model accuracy trigger alerts for teams to investigate potential data drift and retrain models using updated, representative data samples.


After deployment, comprehensive monitoring enables teams to maintain confidence in model and system health. It empowers teams to catch and resolve deviations preemptively through data-driven alerts and dashboards. Active monitoring is essential for maintaining highly available, trustworthy ML systems.


Watch the video below to learn more about monitoring.


13.3.8 Governance


MLOps teams actively establish proper governance practices as a critical component. Governance provides oversight into ML models to ensure they are trustworthy, ethical, and compliant. Without governance, significant risks exist of models behaving in dangerous or prohibited ways when deployed in applications and business processes.


MLOps governance employs techniques to provide transparency into model predictions, performance, and behavior throughout the ML lifecycle. Explainability methods like SHAP and LIME help auditors understand why models make certain predictions by highlighting influential input features behind decisions. Bias detection analyzes model performance across different demographic groups defined by attributes like age, gender, and ethnicity to detect any systematic skews. Teams perform rigorous testing procedures on representative datasets to validate model performance before deployment.
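A basic bias-detection check of this kind compares model accuracy across demographic slices and flags gaps beyond a tolerance. The group labels, records, and gap threshold below are invented for illustration; real audits use richer fairness criteria (e.g., equalized odds).

```python
from collections import defaultdict

def accuracy_by_group(records, max_gap=0.05):
    """records: iterable of (group, prediction, label) triples.
    Returns per-group accuracy and whether the accuracy gap
    between groups exceeds the allowed maximum."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in records:
        total[group] += 1
        correct[group] += int(pred == label)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap > max_gap

records = (
    [("under_40", 1, 1)] * 90 + [("under_40", 1, 0)] * 10 +  # 90% accurate
    [("over_40", 1, 1)] * 70 + [("over_40", 0, 1)] * 30      # 70% accurate
)
acc, biased = accuracy_by_group(records)
print(acc, biased)  # the 20-point accuracy gap flags a systematic skew
```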


Once in production, teams monitor concept drift to determine whether predictive relationships change over time in ways that degrade model accuracy. Teams also analyze production logs to uncover patterns in the types of errors models generate. Documentation about data provenance, development procedures, and evaluation metrics provides additional visibility.


Platforms like Watson OpenScale incorporate governance capabilities like bias monitoring and explainability directly into model building, testing, and production monitoring. The key focus areas of governance are transparency, fairness, and compliance. This minimizes the risks of models behaving incorrectly or dangerously when integrated into business processes. Embedding governance practices into MLOps workflows enables teams to ensure trustworthy AI.


13.3.9 Communication & Collaboration


MLOps actively breaks down silos and enables the free flow of information and insights between teams through all ML lifecycle stages. Tools like MLflow, Weights & Biases, and data contexts provide traceability and visibility to improve collaboration.


Teams use MLflow to systematize tracking of model experiments, versions, and artifacts. Experiments can be programmatically logged from data science notebooks and training jobs. The model registry provides a central hub for teams to store production-ready models before deployment, with metadata like descriptions, metrics, tags, and lineage. Integrations with GitHub and GitLab facilitate code-change triggers.


Weights & Biases provides collaborative tools tailored to ML teams. Data scientists log experiments, visualize metrics like loss curves, and share experimentation insights with colleagues. Comparison dashboards highlight model differences. Teams discuss progress and next steps.


Establishing shared data contexts—glossaries, data dictionaries, and schema references—ensures alignment on data meaning and usage across roles. Documentation aids understanding for those without direct data access.


For example, a data scientist may use Weights & Biases to analyze an anomaly detection model experiment and share the evaluation results with other team members to discuss improvements. The final model can then be registered with MLflow before handing off for deployment.


Enabling transparency, traceability, and communication via MLOps empowers teams to remove bottlenecks and accelerate the delivery of impactful ML systems.


The following video covers key challenges in model deployment, including concept drift, model drift, and software engineering issues.


13.4 Hidden Technical Debt in ML Systems


Technical debt is increasingly pressing for ML systems (see Figure 13.2). This metaphor, originally proposed in the 1990s, likens the long-term costs of quick software development to financial debt. Just as some financial debt powers beneficial growth, carefully managed technical debt enables rapid iteration. However, left unchecked, accumulating technical debt can outweigh any gains.


Figure 13.2 illustrates the various components contributing to ML systems’ hidden technical debt. It shows the interconnected nature of configuration, data collection, and feature extraction, which is foundational to the ML codebase. The box sizes indicate the proportion of the entire system represented by each component. In industry ML systems, the code for the model algorithm makes up only a tiny fraction (see the small black box in the middle compared to all the other large boxes). The complexity of ML systems and the fast-paced nature of the industry make it very easy to accumulate technical debt.

Figure 13.2: ML system components. Credit: Sambasivan et al. (2021a).

13.4.1 Model Boundary Erosion


Unlike traditional software, ML lacks clear boundaries between components, as seen in the diagram above. This erosion of abstraction creates entanglements that exacerbate technical debt in several ways:


13.4.2 Entanglement


Tight coupling between ML model components makes isolating changes difficult. Modifying one part causes unpredictable ripple effects throughout the system. Changing anything changes everything (also known as CACE) is a phenomenon that applies to any tweak you make to your system. Potential mitigations include decomposing the problem when possible or closely monitoring for changes in behavior to contain their impact.


13.4.3 Correction Cascades


The flowchart in Figure 13.3 depicts the concept of correction cascades in the ML workflow, from problem statement to model deployment. The arcs represent the potential iterative corrections needed at each workflow stage, with different colors corresponding to distinct issues such as interacting with physical world brittleness, inadequate application-domain expertise, conflicting reward systems, and poor cross-organizational documentation. The red arrows indicate the impact of cascades, which can lead to significant revisions in the model development process. In contrast, the dotted red line represents the drastic measure of abandoning the process to restart. This visual emphasizes the complex, interconnected nature of ML system development and the importance of addressing these issues early in the development cycle to mitigate their amplifying effects downstream.

Figure 13.3: Correction cascades flowchart. Credit: Sambasivan et al. (2021a).

Sambasivan, Nithya, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021a. “Everyone Wants to Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3411764.3445518.

Building models sequentially creates risky dependencies where later models rely on earlier ones. For example, taking an existing model and fine-tuning it for a new use case seems efficient. However, this bakes in assumptions from the original model that may eventually need correction.


Several factors inform the decision to build models sequentially or not:

  • Dataset size and rate of growth: With small, static datasets, fine-tuning existing models often makes sense. For large, growing datasets, training custom models from scratch allows more flexibility to account for new data.
  • Available computing resources: Fine-tuning requires fewer resources than training large models from scratch. With limited resources, leveraging existing models may be the only feasible approach.

While fine-tuning can be efficient, modifying foundational components later becomes extremely costly due to the cascading effects on subsequent models. Careful thought should be given to identifying where introducing fresh model architectures, even with large resource requirements, can avoid correction cascades down the line (see Figure 13.3). There are still scenarios where sequential model building makes sense, which entails weighing these tradeoffs around efficiency, flexibility, and technical debt.


Figure 13.4 shows the related phenomenon of data cascades: compounding issues that originate in the data early in the workflow and amplify downstream through model development and deployment (Sambasivan et al. 2021b).

Figure 13.4: Data cascades. Credit: Sambasivan et al. (2021b).

Sambasivan, Nithya, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021b. “Everyone Wants to Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. CHI ’21. New York, NY, USA: ACM. https://doi.org/10.1145/3411764.3445518.

13.4.4 Undeclared Consumers


Once ML model predictions are made available, many downstream systems may silently consume them as inputs for further processing. However, the original model was not designed to accommodate this broad reuse. Due to the inherent opacity of ML systems, it becomes impossible to fully analyze the impact of the model’s outputs as inputs elsewhere. Changes to the model can then have expensive and dangerous consequences by breaking undiscovered dependencies.


Undeclared consumers can also enable hidden feedback loops if their outputs indirectly influence the original model’s training data. Mitigations include restricting access to predictions, defining strict service contracts, and monitoring for signs of un-modelled influences. Architecting ML systems to encapsulate and isolate their effects limits the risks of unanticipated propagation.


13.4.5 Data Dependency Debt


Data dependency debt refers to unstable and underutilized data dependencies, which can have detrimental and hard-to-detect repercussions. While this is a key contributor to tech debt for traditional software, those systems can benefit from the use of widely available tools for static analysis by compilers and linkers to identify dependencies of these types. ML systems need similar tooling.


One mitigation for unstable data dependencies is to use versioning, which ensures the stability of inputs but comes with the cost of managing multiple sets of data and the potential for staleness. Another mitigation for underutilized data dependencies is to conduct exhaustive leave-one-feature-out evaluation.
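Leave-one-feature-out evaluation can be sketched as follows. Here `evaluate_model` is a hypothetical stand-in for retraining and scoring the model with a given feature set, and the feature names and scores are invented for illustration.

```python
def leave_one_feature_out(features, evaluate_model, tolerance=0.001):
    """Return features whose removal barely hurts (or even helps) the
    score: candidates for dropping to shrink data dependencies."""
    baseline = evaluate_model(features)
    removable = []
    for f in features:
        reduced = [g for g in features if g != f]
        if evaluate_model(reduced) >= baseline - tolerance:
            removable.append(f)
    return removable

# Hypothetical scorer: only 'age' and 'income' carry real signal.
def evaluate_model(feature_set):
    signal = {"age": 0.05, "income": 0.08, "zip_digit_sum": 0.0}
    return 0.80 + sum(signal[f] for f in feature_set)

print(leave_one_feature_out(["age", "income", "zip_digit_sum"], evaluate_model))
# ['zip_digit_sum']
```

Features that survive the check contribute nothing measurable and can be removed, cutting one unneeded data dependency each.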


13.4.6 Analysis Debt from Feedback Loops


Unlike traditional software, ML systems can change their behavior over time, making it difficult to analyze pre-deployment. This debt manifests in feedback loops, both direct and hidden.


Direct feedback loops occur when a model influences its future inputs, such as by recommending products to users that, in turn, shape future training data. Hidden loops arise indirectly between models, such as two systems that interact via real-world environments. Gradual feedback loops are especially hard to detect. These loops lead to analysis debt—the inability to predict how a model will act fully after release. They undermine pre-deployment validation by enabling unmodeled self-influence.
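A toy simulation makes the direct loop concrete: the model recommends the most-clicked item, the recommendation drives most of the next round's clicks, and those clicks become the model's new training data, so a tiny initial lead compounds. All counts and percentages below are invented for illustration.

```python
from collections import Counter

def simulate_feedback_loop(initial_clicks, rounds=5, users_per_round=100):
    """Each round, recommend the currently most-clicked item; 80% of
    users click the recommendation, so its lead compounds."""
    clicks = Counter(initial_clicks)
    for _ in range(rounds):
        recommended = clicks.most_common(1)[0][0]
        clicks[recommended] += int(users_per_round * 0.8)
        # the remaining users' clicks spread over other items (omitted)
    return clicks

history = simulate_feedback_loop({"A": 11, "B": 10})
print(history["A"], history["B"])  # A's one-click head start becomes 411 vs 10
```

Offline evaluation on the pre-loop data would never reveal this runaway dynamic, which is precisely the analysis debt the section describes.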


Careful monitoring and canary deployments help detect feedback. However, fundamental challenges remain in understanding complex model interactions. Architectural choices that reduce entanglement and coupling mitigate analysis debt’s compounding effect.


13.4.7 Pipeline Jungles


ML workflows often lack standardized interfaces between components. This leads teams to incrementally “glue” together pipelines with custom code. What emerges are “pipeline jungles”—tangled preprocessing steps that are brittle and resist change. Avoiding modifications to these messy pipelines causes teams to experiment through alternate prototypes. Soon, multiple ways of doing everything proliferate. The lack of abstractions and interfaces then impedes sharing, reuse, and efficiency.


Technical debt accumulates as one-off pipelines solidify into legacy constraints. Teams sink time into managing idiosyncratic code rather than maximizing model performance. Architectural principles like modularity and encapsulation are needed to establish clean interfaces. Shared abstractions enable interchangeable components, prevent lock-in, and promote best-practice diffusion across teams. Breaking free of pipeline jungles ultimately requires enforcing standards that prevent the accretion of abstraction debt. The benefits of interfaces and APIs that tame complexity outweigh the transitional costs.


13.4.8 Configuration Debt


ML systems involve extensive configuration of hyperparameters, architectures, and other tuning parameters. However, configuration is often an afterthought, lacking rigor and testing; ad hoc configurations proliferate, amplified by the many knobs available for tuning complex ML models.


This accumulation of technical debt has several consequences. Fragile and outdated configurations lead to hidden dependencies and bugs that cause production failures. Knowledge about optimal configurations is isolated rather than shared, leading to redundant work. Reproducing and comparing results becomes difficult when configurations lack documentation. Legacy constraints accumulate as teams fear changing poorly understood configurations.


Addressing configuration debt requires establishing standards to document, test, validate, and centrally store configurations. Investing in more automated approaches, such as hyperparameter optimization and architecture search, reduces dependence on manual tuning. Better configuration hygiene makes iterative improvement more tractable by preventing complexity from compounding endlessly. The key is recognizing configuration as an integral part of the ML system lifecycle rather than an ad hoc afterthought.


13.4.9 The Changing World


ML systems operate in dynamic real-world environments. Thresholds and decisions that are initially effective become outdated as the world evolves. However, legacy constraints make adapting systems to changing populations, usage patterns, and other shifting contextual factors difficult.


This debt manifests in two main ways. First, preset thresholds and heuristics require constant re-evaluation and tuning as their optimal values drift. Second, validating systems through static unit and integration tests fails when inputs and behaviors are moving targets.


Responding to a changing world in real-time with legacy ML systems is challenging. Technical debt accumulates as assumptions decay. The lack of modular architecture and the ability to dynamically update components without side effects exacerbates these issues.


Mitigating this requires building in configurability, monitoring, and modular updatability. Online learning, where models continuously adapt and robust feedback loops to training pipelines, helps automatically tune to the world. However, anticipating and architecting for change is essential to prevent erosion of real-world performance over time.


13.4.11 Summary


Although financial debt is a good metaphor for understanding tradeoffs, the analogy breaks down around measurability: technical debt cannot be fully tracked and quantified. This makes it hard for teams to navigate the tradeoff between moving quickly, and inherently introducing more debt, versus taking the time to pay that debt down.


The Hidden Technical Debt of Machine Learning Systems paper spreads awareness of the nuances of ML system-specific tech debt. It encourages additional development in the broad area of maintainable ML.


13.5 Roles and Responsibilities


Given the vastness of MLOps, successfully implementing ML systems requires diverse skills and close collaboration between people with different areas of expertise. While data scientists build the core ML models, it takes cross-functional teamwork to successfully deploy these models into production environments and enable them to deliver sustainable business value.


MLOps provides the framework and practices for coordinating the efforts of the various roles involved in developing, deploying, and running ML systems. Bridging traditional silos between data, engineering, and operations teams is key to MLOps' success. Enabling seamless collaboration throughout the machine learning lifecycle accelerates benefit realization while ensuring ML models' long-term reliability and performance.


We will look at some key roles involved in MLOps and their primary responsibilities. Understanding the breadth of skills needed to operationalize ML models guides assembling MLOps teams. It also clarifies how the workflows between roles fit under the overarching MLOps methodology.


13.5.1 Data Engineers


Data engineers are responsible for building and maintaining the data infrastructure and pipelines that feed data to ML models. They ensure data is smoothly moved from source systems into the storage, processing, and feature engineering environments needed for ML model development and deployment. Their main responsibilities include:

  • Migrating raw data from on-prem databases, sensors, and apps into cloud-based data lakes like Amazon S3 or Google Cloud Storage. This provides cost-efficient, scalable storage.
  • Building data pipelines with workflow schedulers like Apache Airflow, Prefect, and dbt. These extract data from sources, transform and validate data, and load it into destinations like data warehouses, feature stores, or directly for model training.
  • Transforming messy, raw data into structured, analysis-ready datasets. This includes handling null or malformed values, deduplicating, joining disparate data sources, aggregating data, and engineering new features.
  • Maintaining data infrastructure components like cloud data warehouses (Snowflake, Redshift, BigQuery), data lakes, and metadata management systems. Provisioning and optimizing data processing systems.
  • Establishing data versioning, backup, and archival processes for ML datasets and features and enforcing data governance policies.

For example, a manufacturing firm may use Apache Airflow pipelines to extract sensor data from PLCs on the factory floor into an Amazon S3 data lake. The data engineers would then process this raw data to filter, clean, and join it with product metadata. These pipeline outputs would then load into a Snowflake data warehouse from which features can be read for model training and prediction.
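The extract-transform-load stages in this example can be sketched in plain Python, standing in for the Airflow tasks (the field names and metadata schema are hypothetical):

```python
def extract(raw_records):
    """Pull raw sensor readings (stand-in for reading from PLCs / the data lake)."""
    return [r for r in raw_records if r is not None]

def transform(records, metadata):
    """Filter, clean, and join sensor data with product metadata."""
    cleaned = []
    for r in records:
        if r.get("temp_c") is None:       # drop malformed readings
            continue
        product = metadata.get(r["product_id"], {})
        cleaned.append({**r, **product})  # join on product_id
    return cleaned

def load(records, warehouse):
    """Append transformed rows to the warehouse table (stand-in for Snowflake)."""
    warehouse.extend(records)
    return len(records)

warehouse = []
raw = [{"product_id": "A", "temp_c": 71.5}, {"product_id": "B", "temp_c": None}, None]
meta = {"A": {"line": "assembly-1"}}
load(transform(extract(raw), meta), warehouse)  # warehouse now holds one cleaned, joined row
```

In Airflow, each of these functions would become a task in a DAG with retries, scheduling, and monitoring handled by the scheduler.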


The data engineering team builds and sustains the data foundation for reliable model development and operations. Their work enables data scientists and ML engineers to focus on building, training, and deploying ML models at scale.


13.5.2 Data Scientists


The job of the data scientists is to focus on the research, experimentation, development, and continuous improvement of ML models. They leverage their expertise in statistics, modeling, and algorithms to create high-performing models. Their main responsibilities include:

  • Working with business and data teams to identify opportunities where ML can add value, framing the problem, and defining success metrics.
  • Performing exploratory data analysis to understand relationships in data, derive insights, and identify relevant features for modeling.
  • Researching and experimenting with different ML algorithms and model architectures based on the problem and data characteristics and leveraging libraries like TensorFlow, PyTorch, and Keras.
  • Training and fine-tuning models to maximize performance by tuning hyperparameters, adjusting neural network architectures, engineering features, etc.
  • Evaluating model performance through metrics like accuracy, AUC, and F1 scores and performing error analysis to identify areas for improvement.
  • Developing new model versions by incorporating new data, testing different approaches, optimizing model behavior, and maintaining documentation and lineage for models.

For example, a data scientist may leverage TensorFlow and TensorFlow Probability to develop a demand forecasting model for retail inventory planning. They would iterate on different sequence models like LSTMs and experiment with features derived from product, sales, and seasonal data. The model would be evaluated based on error metrics versus actual demand before deployment. The data scientist monitors performance and retrains/enhances the model as new data comes in.
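The evaluation step in this example can be made concrete with simple error metrics computed against actual demand. Plain-Python versions of MAE and MAPE:

```python
def mae(actual, predicted):
    """Mean absolute error between actual and forecast demand."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error; assumes no zero actuals."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Illustrative weekly demand vs. forecast (made-up numbers).
actual = [100, 120, 90, 110]
forecast = [95, 125, 80, 115]
```

A data scientist would track these metrics across model versions and retrain when error on fresh data exceeds an agreed threshold.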


Data scientists drive model creation, improvement, and innovation through their expertise in ML techniques. They collaborate closely with other roles to ensure models create maximum business impact.


13.5.3 ML Engineers


ML engineers take the models that data scientists develop and productize and deploy them at scale. Their expertise ensures models reliably serve predictions in applications and business processes. Their main responsibilities include:

  • Taking prototype models from data scientists and hardening them for production environments through coding best practices.
  • Building APIs and microservices for model deployment using tools like Flask and FastAPI, and containerizing models with Docker.
  • Managing model versions, syncing new models into production using CI/CD pipelines, and implementing canary releases, A/B tests, and rollback procedures.
  • Optimizing model performance for high scalability, low latency, and cost efficiency by leveraging compression, quantization, and multi-model serving.
  • Monitoring models once in production to ensure continued reliability and accuracy, and retraining models periodically.

For example, an ML engineer may take a TensorFlow fraud detection model developed by data scientists and containerize it using TensorFlow Serving for scalable deployment. The model would be integrated into the company's transaction processing pipeline via APIs. The ML engineer implements a model registry and CI/CD pipeline using MLflow and Jenkins to deploy model updates reliably. The ML engineer then monitors the running model for continued performance using tools like Prometheus and Grafana. If model accuracy drops, they initiate retraining and deployment of a new model version.
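The canary and A/B routing mentioned above is often implemented as a deterministic, hash-based traffic split. A minimal sketch, not tied to any particular serving stack:

```python
import hashlib

def assign_variant(request_id: str, canary_percent: int) -> str:
    """Deterministically route a fixed share of traffic to the canary model.
    Hash-based assignment keeps each request id sticky across retries."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Because assignment depends only on the request id, raising `canary_percent` gradually expands the canary population without reshuffling users already assigned, and rollback is simply setting it to zero.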


The ML engineering team enables data science models to progress smoothly into sustainable and robust production systems. Their expertise in building modular, monitored systems delivers continuous business value.


13.5.4 DevOps Engineers


DevOps engineers enable MLOps by building and managing the underlying infrastructure for developing, deploying, and monitoring ML models. They provide the cloud architecture and automation pipelines. Their main responsibilities include:

  • Provisioning and managing cloud infrastructure for ML workflows using IaC tools like Terraform, Docker, and Kubernetes.
  • Developing CI/CD pipelines for model retraining, validation, and deployment, and integrating ML tools such as MLflow and Kubeflow into the pipeline.
  • Monitoring model and infrastructure performance using tools like Prometheus, Grafana, and the ELK stack, and building alerts and dashboards.
  • Implementing governance practices around model development, testing, and promotion to enable reproducibility and traceability.
  • Embedding ML models within applications and exposing them via APIs and microservices for integration.
  • Optimizing infrastructure performance and costs by leveraging autoscaling, spot instances, and availability across regions.

For example, a DevOps engineer provisions a Kubernetes cluster on AWS using Terraform to run ML training jobs and online deployment. They build a CI/CD pipeline in Jenkins, which triggers model retraining if new data is available. After automated testing, the model is registered with MLflow and deployed in the Kubernetes cluster. The engineer then monitors cluster health, container resource usage, and API latency using Prometheus and Grafana.
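The retraining trigger in such a pipeline often reduces to a simple policy check evaluated on each run. A sketch with hypothetical thresholds:

```python
def should_retrain(new_samples: int, current_accuracy: float,
                   min_samples: int = 1000, accuracy_floor: float = 0.90) -> bool:
    """Trigger a retraining job when enough fresh data has accumulated
    or live accuracy has dropped below the alerting floor.
    The thresholds are illustrative defaults, not standard values."""
    return new_samples >= min_samples or current_accuracy < accuracy_floor
```

In a Jenkins pipeline, a scheduled job would call a check like this and, on `True`, kick off the training stage followed by automated tests and registry promotion.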


The DevOps team enables rapid experimentation and reliable deployments for ML through cloud, automation, and monitoring expertise. Their work maximizes model impact while minimizing technical debt.


13.5.5 Project Managers


Project managers play a vital role in MLOps by coordinating the activities between the teams involved in delivering ML projects. They help drive alignment, accountability, and accelerated results. Their main responsibilities include:

  • Working with stakeholders to define project goals, success metrics, timelines, and budgets; outlining specifications and scope.
  • Creating a project plan spanning data acquisition, model development, infrastructure setup, deployment, and monitoring.
  • Coordinating design, development, and testing efforts between data engineers, data scientists, ML engineers, and DevOps roles.
  • Tracking progress and milestones, identifying roadblocks and resolving them through corrective actions, and managing risks and issues.
  • Facilitating communication through status reports, meetings, workshops, and documentation and enabling seamless collaboration.
  • Driving adherence to timelines and budget and escalating anticipated overruns or shortfalls for mitigation.

For example, a project manager would create a project plan for developing and enhancing a customer churn prediction model. They coordinate between data engineers building data pipelines, data scientists experimenting with models, ML engineers productionalizing models, and DevOps setting up deployment infrastructure. The project manager tracks progress via milestones like dataset preparation, model prototyping, deployment, and monitoring. To enact preventive solutions, they surface any risks, delays, or budget issues.


Skilled project managers enable MLOps teams to work synergistically to rapidly deliver maximum business value from ML investments. Their leadership and organizational skills align diverse teams.


13.6 Embedded System Challenges


We will briefly review the challenges of embedded systems to set the context for the specific challenges that emerge with embedded MLOps, which we discuss in the following section.


13.6.1 Limited Compute Resources


Embedded devices like microcontrollers and mobile phones have far more constrained computing power than data center machines or GPUs. A typical microcontroller may have only kilobytes of RAM, a clock speed in the tens or hundreds of MHz, and no GPU. For example, a microcontroller in a smartwatch may only have a 32-bit processor running at 120 MHz with 320 KB of RAM (Stm32L4Q5Ag 2021). This allows simple ML models like small linear regressions or random forests, but more complex deep neural networks would be infeasible. Strategies to mitigate this include quantization, pruning, efficient model architectures, and offloading certain computations to the cloud when connectivity allows.
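As a concrete illustration of the quantization strategy mentioned above, here is a minimal pure-Python sketch of linear (affine) 8-bit quantization; function names are ours, and production systems would use a framework's quantization tooling:

```python
def quantize_8bit(weights):
    """Map float weights onto unsigned 8-bit integers via a linear scale."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]  # ints in [0, 255]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float weights from the 8-bit representation."""
    return [qi * scale + lo for qi in q]

# 4x smaller storage than float32, at the cost of small rounding error.
weights = [-0.52, 0.13, 0.98, -0.07]
q, scale, zero = quantize_8bit(weights)
```

Each weight now fits in one byte instead of four, which is exactly the kind of saving that makes a model fit in a few hundred KB of flash.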

Stm32L4Q5Ag. 2021. STMicroelectronics.

13.6.2 Constrained Memory


Storing large ML models and datasets directly on embedded devices is often infeasible with limited memory. A deep neural network model can easily take hundreds of MB, which exceeds the storage capacity of many embedded systems. Consider a wildlife camera that captures images to detect animals and has only a 2 GB memory card: that is insufficient to also store an image classification model that is often hundreds of MB in size. Consequently, memory usage must be optimized through weight compression, lower-precision numerics, and streaming inference pipelines.


13.6.3 Intermittent Connectivity


Many embedded devices operate in remote environments without reliable internet connectivity. They cannot depend on constant cloud access for convenient retraining, monitoring, and deployment. Instead, smart scheduling and caching strategies are needed to optimize for intermittent connections. For example, a model predicting crop yield on a remote farm may need to make predictions daily but only have cloud connectivity once a week, when the farmer drives into town. The model must operate independently between connections.
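A common pattern for intermittent connectivity is a bounded local cache that drains during connectivity windows. A minimal sketch (the class and method names are ours, purely illustrative):

```python
import collections

class SyncQueue:
    """Cache predictions locally and flush them when a connection appears."""

    def __init__(self, capacity=100):
        # Bounded deque: the oldest entries are evicted if storage fills up
        # before the next connectivity window.
        self.pending = collections.deque(maxlen=capacity)
        self.synced = []

    def record(self, prediction):
        """Store a prediction made while offline."""
        self.pending.append(prediction)

    def on_connect(self):
        """Drain the cache in arrival order during the connectivity window."""
        while self.pending:
            self.synced.append(self.pending.popleft())
```

The capacity bound makes the storage cost explicit; choosing what to evict (oldest, lowest-value, or compressed summaries) is a per-application design decision.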


13.6.4 Power Limitations


Embedded devices like phones, wearables, and remote sensors are battery-powered. Continual inference and communication can quickly drain those batteries, limiting functionality. For example, a smart collar tagging endangered animals runs on a small battery. Continuously running a GPS tracking model would drain the battery within days. The collar has to schedule when to activate the model carefully. Thus, embedded ML has to manage tasks carefully to conserve power. Techniques include optimized hardware accelerators, prediction caching, and adaptive model execution.
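Adaptive model execution under power limits can be as simple as scaling the inference duty cycle with remaining battery. A toy sketch (the thresholds are illustrative, not from any real device firmware):

```python
def plan_inference_slots(battery_pct: float, slots_per_day: int = 24) -> list:
    """Choose which hourly slots run the model, scaling activity down
    as the battery drains to stretch the remaining charge."""
    if battery_pct > 50:
        step = 1   # run every hour
    elif battery_pct > 20:
        step = 3   # every third hour
    else:
        step = 6   # minimal duty cycle
    return list(range(0, slots_per_day, step))
```

Real schedulers weigh prediction value against energy cost per inference, but the core idea is the same: execution frequency becomes a function of the power budget.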


13.6.5 Fleet Management


For mass-produced embedded devices, millions of units may be deployed in the field, and updates must be orchestrated across all of them. Hypothetically, updating a fraud detection model on 100 million (future smart) credit cards requires securely pushing updates to each distributed device rather than to a centralized data center. Such a distributed scale makes fleet-wide management much harder than managing a centralized server cluster. It requires intelligent protocols for over-the-air updates, handling connectivity issues, and monitoring resource constraints across devices.
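Fleet-wide rollouts are typically staged in waves rather than pushed to every device at once, so problems surface on a small cohort first. A sketch of cumulative wave selection (the percentages are illustrative):

```python
def rollout_wave(device_ids, wave: int, waves=(1, 10, 100)):
    """Return the subset of the fleet included up to the given rollout wave,
    expressed as cumulative percentages (e.g. 1% -> 10% -> 100%)."""
    pct = waves[min(wave, len(waves) - 1)]
    cutoff = len(device_ids) * pct // 100
    return device_ids[:cutoff]

# Wave 0 touches 1% of devices; only after health checks pass does the
# rollout advance to wave 1, then the full fleet.
fleet = [f"device-{i}" for i in range(1000)]
first_wave = rollout_wave(fleet, 0)
```

Pairing this with per-wave health monitoring and an automatic rollback on regression is what makes updates at this scale tractable.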


13.6.6 On-Device Data Collection


Collecting useful training data requires engineering both the sensors on the device and the software pipelines. Unlike servers, data cannot simply be pulled from external sources. Challenges include handling sensor noise: for example, sensors on an industrial machine that detect vibrations and temperature to predict maintenance needs require tuning of the sensors and sampling rates to capture useful data.


13.6.7 Device-Specific Personalization


Adapting ML models to specific devices and users is important but poses privacy challenges. For example, a smart speaker learns an individual user's voice patterns and speech cadence to improve recognition accuracy while protecting privacy. On-device learning allows personalization without transmitting as much private data. However, balancing model improvement, privacy preservation, and device constraints requires novel techniques.


13.6.8 Safety Considerations


If embedded ML in safety-critical systems like self-driving vehicles is not engineered carefully, there are serious safety risks. To ensure safe operation, self-driving cars must undergo extensive track testing in simulated rain, snow, and obstacle scenarios. This requires extensive validation, fail-safes, simulators, and standards compliance before deployment.


13.6.9 Diverse Hardware Targets


There is a diverse range of embedded processors, including ARM, x86, specialized AI accelerators, FPGAs, etc. Supporting this heterogeneity makes deployment challenging. We need strategies like standardized frameworks, extensive testing, and model tuning for each platform. For example, an object detection model needs efficient implementations across embedded devices like a Raspberry Pi, Nvidia Jetson, and Google Edge TPU.


13.6.10 Testing Coverage


Rigorously testing edge cases is difficult with constrained embedded simulation resources, but exhaustive testing is critical in systems like self-driving cars. Exhaustively testing an autopilot model requires millions of simulated kilometers, exposing it to rare events like sensor failures. Therefore, strategies like synthetic data generation, distributed simulation, and chaos engineering help improve coverage.


13.6.11 Concept Drift Detection


With limited monitoring data from each remote device, detecting changes in the input data over time is much harder. Drift can lead to degraded model performance, so lightweight methods are needed to identify when retraining is necessary. For example, a model predicting power grid loads shows declining performance as usage patterns change over time; with only local device data, this trend is difficult to spot.
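One lightweight drift check that a constrained device could run locally is a mean-shift test of a recent window against a reference window. A sketch (the 3-sigma threshold is a common but arbitrary choice):

```python
def mean_shift_drift(reference, recent, threshold=3.0):
    """Flag drift when the recent window's mean deviates from the
    reference mean by more than `threshold` reference standard deviations."""
    n = len(reference)
    ref_mean = sum(reference) / n
    ref_var = sum((x - ref_mean) ** 2 for x in reference) / n
    ref_std = ref_var ** 0.5 or 1.0  # avoid divide-by-zero on constant data
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - ref_mean) / ref_std > threshold
```

Only two running statistics (mean and variance) need to be stored on the device, and a drift flag, rather than raw data, is what gets transmitted upstream.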


13.7 Traditional MLOps vs. Embedded MLOps


In traditional MLOps, ML models are typically deployed in cloud-based or server environments, with abundant resources like computing power and memory. These environments facilitate the smooth operation of complex models that require significant computational resources. For instance, a cloud-based image recognition model might be used by a social media platform to tag photos with relevant labels automatically. In this case, the model can leverage the extensive resources available in the cloud to efficiently process vast amounts of data.


On the other hand, embedded MLOps involves deploying ML models on embedded systems, specialized computing systems designed to perform specific functions within larger systems. Embedded systems are typically characterized by their limited computational resources and power. For example, an ML model might be embedded in a smart thermostat to optimize heating and cooling based on the user’s preferences and habits. The model must be optimized to run efficiently on the thermostat’s limited hardware without compromising its performance or accuracy.


The key difference between traditional and embedded MLOps lies in the embedded system’s resource constraints. While traditional MLOps can leverage abundant cloud or server resources, embedded MLOps must contend with the hardware limitations on which the model is deployed. This requires careful optimization and fine-tuning of the model to ensure it can deliver accurate and valuable insights within the embedded system’s constraints.


Furthermore, embedded MLOps must consider the unique challenges posed by integrating ML models with other embedded system components. For example, the model must be compatible with the system’s software and hardware and must be able to interface seamlessly with other components, such as sensors or actuators. This requires a deep understanding of both ML and embedded systems and close collaboration between data scientists, engineers, and other stakeholders.


So, while traditional MLOps and embedded MLOps share the common goal of deploying and maintaining ML models in production environments, the unique challenges posed by embedded systems require a specialized approach. Embedded MLOps must carefully balance the need for model accuracy and performance with the constraints of the hardware on which the model is deployed. This requires a deep understanding of both ML and embedded systems and close collaboration between various stakeholders to ensure the successful integration of ML models into embedded systems.


This time, we group the subtopics under broader categories to streamline our discussion of MLOps. This structure will help you understand how the different aspects of MLOps are interconnected and why each is important for the efficient operation of ML systems as we discuss the challenges in the context of embedded systems.

  • Model Lifecycle Management
    • Data Management: Handling data ingestion, validation, and version control.
    • Model Training: Techniques and practices for effective and scalable model training.
    • Model Evaluation: Strategies for testing and validating model performance.
    • Model Deployment: Approaches for deploying models into production environments.
  • Development and Operations Integration
    • CI/CD Pipelines: Integrating ML models into continuous integration and deployment pipelines.
    • Infrastructure Management: Setting up and maintaining the infrastructure required for training and deploying models.
    • Communication & Collaboration: Ensuring smooth communication and collaboration between data scientists, ML engineers, and operations teams.
  • Operational Excellence
    • Monitoring: Techniques for monitoring model performance, data drift, and operational health.
    • Governance: Implementing policies for model auditability, compliance, and ethical considerations.

13.7.1 Model Lifecycle Management


Data Management


In traditional centralized MLOps, data is aggregated into large datasets and data lakes, then processed on cloud or on-prem servers. However, embedded MLOps relies on decentralized data from local on-device sensors. Devices collect smaller batches of incremental data, often noisy and unstructured. With connectivity constraints, this data cannot always be instantly transmitted to the cloud and needs to be intelligently cached and processed at the edge.


Due to limited on-device computing, embedded devices can only preprocess and clean data minimally before transmission. Early filtering and processing occur at edge gateways to reduce transmission loads. While leveraging cloud storage, more processing and storage happen at the edge to account for intermittent connectivity. Devices identify and transmit only the most critical subsets of data to the cloud.


Labeling also lacks centralized data access, requiring more automated techniques like federated learning, where devices collaboratively label peers' data. With personal edge devices, data privacy and regulations are critical concerns. Data collection, transmission, and storage must be secure and compliant.


For instance, a smartwatch may collect the day's step count, heart rate, and GPS coordinates. This data is cached locally and transmitted to an edge gateway when WiFi is available. The gateway processes and filters the data before syncing relevant subsets with the cloud platform to retrain models.


Model Training


In traditional centralized MLOps, models are trained on abundant data via deep learning on high-powered cloud GPU servers. However, embedded MLOps face tighter constraints on model complexity, data availability, and computing resources for training.


The volume of aggregated data is much lower, often requiring techniques like federated learning across devices to create training sets. The specialized nature of edge data also limits public datasets for pre-training. With privacy concerns, data samples must be tightly controlled and anonymized where possible.


Furthermore, the models must use simplified architectures optimized for low-power edge hardware. Given the computing limitations, high-end GPUs are inaccessible for intensive deep learning. Training leverages lower-powered edge servers and clusters with distributed approaches to spread load.


Strategies like transfer learning become essential to mitigate data scarcity and irregularity (see Figure 13.5). Models can pre-train on large public datasets and then finetune on limited domain-specific edge data. Even incremental on-device learning to customize models helps overcome the decentralized nature of embedded data. The lack of broad labeled data also motivates semi-supervised techniques.


Figure 13.5 illustrates the concept of transfer learning in model training within an MLOps framework. It showcases a neural network where the initial layers (W_{A1} to W_{A4}), which are responsible for general feature extraction, are frozen (indicated by the green dashed line), meaning their weights are not updated during training. This reuse of pre-trained layers accelerates learning by utilizing knowledge gained from previous tasks. The latter layers (W_{A5} to W_{A7}), depicted beyond the blue dashed line, are finetuned for the specific task at hand, focusing on task-specific feature learning. This approach allows the model to adapt to the new task using fewer resources and potentially achieve higher performance on specialized tasks by reusing the general features learned from a broader dataset.

Figure 13.5: Transfer learning in MLOps. Credit: HarvardX.

For example, a smart home assistant may pre-train an audio recognition model on public YouTube clips, which helps bootstrap with general knowledge. It then transfers learning to a small sample of home data to classify customized appliances and events, specializing in the model. The model transforms into a lightweight neural network optimized for microphone-enabled devices across the home.


So, embedded MLOps face acute challenges in constructing training datasets, designing efficient models, and distributing compute for model development compared to traditional settings. Given the embedded constraints, careful adaptation, such as transfer learning and distributed training, is required to train models.
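The freeze-and-finetune pattern described above can be sketched as a gradient step that skips frozen layers. This is a toy pure-Python illustration; the layer names, shapes, and values are hypothetical:

```python
def finetune_step(weights, grads, lr=0.1, frozen=("layer1",)):
    """One gradient update that leaves frozen (pre-trained) layers untouched
    and finetunes only the task-specific layers."""
    updated = {}
    for name, w in weights.items():
        if name in frozen:
            updated[name] = list(w)  # reuse pre-trained weights as-is
        else:
            updated[name] = [wi - lr * gi for wi, gi in zip(w, grads[name])]
    return updated

# Early layer frozen (general features); the head adapts to edge data.
weights = {"layer1": [0.5, -0.2], "head": [0.3, 0.4]}
grads = {"layer1": [1.0, 1.0], "head": [0.1, -0.2]}
new_weights = finetune_step(weights, grads)
```

In a real framework, the same idea is expressed by marking the early layers as non-trainable before fitting on the small domain-specific dataset.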


Model Evaluation


In traditional centralized MLOps, models are evaluated primarily using accuracy metrics and holdout test datasets. However, embedded MLOps require a more holistic evaluation that accounts for system constraints beyond accuracy.


Models must be tested early and often on deployed edge hardware covering diverse configurations. In addition to accuracy, factors like latency, CPU usage, memory footprint, and power consumption are critical evaluation criteria. Models are selected based on tradeoffs between these metrics to meet edge device constraints.


Data drift must also be monitored - where models trained on cloud data degrade in accuracy over time on local edge data. Embedded data often has more variability than centralized training sets. Evaluating models across diverse operational edge data samples is key. But sometimes, getting the data for monitoring the drift can be challenging if these devices are in the wild and communication is a barrier.


Ongoing monitoring provides visibility into real-world performance post-deployment, revealing bottlenecks not caught during testing. For instance, a smart camera model update may be canary tested on 100 cameras first and rolled back if degraded accuracy is observed before expanding to all 5000 cameras.


Model Deployment


In traditional MLOps, new model versions are directly deployed onto servers via API endpoints. However, embedded devices require optimized delivery mechanisms to receive updated models. Over-the-air (OTA) updates provide a standardized approach to wirelessly distributing new software or firmware releases to embedded devices. Rather than direct API access, OTA packages allow remote deploying models and dependencies as pre-built bundles. Alternatively, federated learning allows model updates without direct access to raw training data. This decentralized approach has the potential for continuous model improvement but needs robust MLOps platforms.


Model delivery relies on physical interfaces like USB or UART serial connections for deeply embedded devices lacking connectivity. The model packaging still follows similar principles to OTA updates, but the deployment mechanism is tailored to the capabilities of the edge hardware. Moreover, specialized OTA protocols optimized for IoT networks are often used rather than standard WiFi or Bluetooth protocols. Key factors include efficiency, reliability, security, and telemetry such as progress tracking. Solutions like Mender.io provide embedded-focused OTA services handling differential updates across device fleets.
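Differential updates ship only the changed portions of a model binary rather than the whole artifact. A toy block-diff sketch of the idea (real OTA services use far more sophisticated delta formats and add signing and integrity checks):

```python
def make_delta(old: bytes, new: bytes, block: int = 4):
    """Record only the fixed-size blocks that differ between the old and
    new binaries, plus the new total length."""
    delta = []
    for i in range(0, len(new), block):
        chunk = new[i:i + block]
        if old[i:i + block] != chunk:
            delta.append((i, chunk))
    return delta, len(new)

def apply_delta(old: bytes, delta, new_len: int) -> bytes:
    """Reconstruct the new binary on-device from the old one plus the delta."""
    buf = bytearray(old[:new_len].ljust(new_len, b"\x00"))
    for offset, chunk in delta:
        buf[offset:offset + len(chunk)] = chunk
    return bytes(buf)
```

When most of a model is unchanged between versions, the delta can be a small fraction of the full binary, which matters greatly on bandwidth-constrained IoT links.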


Figure 13.6 presents an overview of Model Lifecycle Management in an MLOps context, illustrating the flow from development (top left) to deployment and monitoring (bottom right). The process begins with ML Development, where code and configurations are version-controlled. Data and model management are central to the process, involving datasets and feature repositories. Continuous training, model conversion, and model registry are key stages in the operationalization of training. The model deployment includes serving the model and managing serving logs. Alerting mechanisms are in place to flag issues, which feed into continuous monitoring to ensure model performance and reliability over time. This integrated approach ensures that models are developed and maintained effectively throughout their lifecycle.

Figure 13.6: Model lifecycle management. Credit: HarvardX.

13.7.2 Development and Operations Integration


CI/CD Pipelines


In traditional MLOps, robust CI/CD infrastructure like Jenkins and Kubernetes enables pipeline automation for large-scale model deployment. However, embedded MLOps lack this centralized infrastructure and instead need CI/CD workflows tailored to edge devices.


Building CI/CD pipelines has to account for a fragmented landscape of diverse hardware, firmware versions, and connectivity constraints. There is no standard platform to orchestrate pipelines, and tooling support is more limited.


Testing must cover this wide spectrum of target embedded devices early, which is difficult without centralized access. Companies must invest significant effort into acquiring and managing test infrastructure across the heterogeneous embedded ecosystem.


Over-the-air updates require setting up specialized servers to distribute model bundles securely to devices in the field. Rollout and rollback procedures must also be carefully tailored for particular device families.


With traditional CI/CD tools less applicable, embedded MLOps rely more on custom scripts and integration. Companies take varied approaches, from open-source frameworks to fully in-house solutions. Tight integration between developers, edge engineers, and end customers establishes trusted release processes.


Therefore, embedded MLOps can’t leverage centralized cloud infrastructure for CI/CD. Companies combine custom pipelines, testing infrastructure, and OTA delivery to deploy models across fragmented and disconnected edge systems.


Infrastructure Management


In traditional centralized MLOps, infrastructure entails provisioning cloud servers, GPUs, and high-bandwidth networks for intensive workloads like model training and serving predictions at scale. However, embedded MLOps require more heterogeneous infrastructure spanning edge devices, gateways, and the cloud.


Edge devices like sensors capture and preprocess data locally before intermittent transmission to avoid overloading networks. Gateways aggregate and process device data before sending select subsets to the cloud for training and analysis. The cloud provides centralized management and supplemental computing.


This infrastructure needs tight integration and balancing processing and communication loads. Network bandwidth is limited, requiring careful data filtering and compression. Edge computing capabilities are modest compared to the cloud, imposing optimization constraints.


Managing secure OTA updates across large device fleets presents challenges at the edge. Rollouts must be incremental and rollback-ready for quick mitigation. Given decentralized environments, updating edge infrastructure requires coordination.

+

For example, an industrial plant may perform basic signal processing on sensors before sending data to an on-prem gateway. The gateway handles data aggregation, infrastructure monitoring, and OTA updates. Only curated data is transmitted to the cloud for advanced analytics and model retraining.

+

Embedded MLOps requires holistic management of distributed infrastructure spanning constrained edge, gateways, and centralized cloud. Workloads are balanced across tiers while accounting for connectivity, computing, and security challenges.

+
+
+

Communication & Collaboration

+

In traditional MLOps, collaboration tends to center around data scientists, ML engineers, and DevOps teams. However, embedded MLOps require tighter cross-functional coordination between additional roles to address system constraints.

+

Edge engineers optimize model architectures for target hardware environments. They provide feedback to data scientists during development so models fit device capabilities early on. Similarly, product teams define operational requirements informed by end-user contexts.

+

With more stakeholders across the embedded ecosystem, communication channels must facilitate information sharing between centralized and remote teams. Issue tracking and project management ensure alignment.

+

Collaborative tools support optimizing models for particular devices. Data scientists can log issues replicated from field devices so models can be specialized for niche data. Remote device access aids debugging and data collection.

+

For example, data scientists may collaborate with field teams managing fleets of wind turbines to retrieve operational data samples. This data is used to specialize models detecting anomalies specific to that turbine class. Model updates are tested in simulations and reviewed by engineers before field deployment.

+

Embedded MLOps mandates continuous coordination between data scientists, engineers, end customers, and other stakeholders throughout the ML lifecycle. Through close collaboration, models can be tailored and optimized for targeted edge devices.

+
+
+
+

13.7.3 Operational Excellence

+
+

Monitoring

+

Traditional MLOps monitoring focuses on centrally tracking model accuracy, performance metrics, and data drift. However, embedded MLOps must account for decentralized monitoring across diverse edge devices and environments.

+

Edge devices require optimized data collection to transmit key monitoring metrics without overloading networks. Metrics help assess model performance, data patterns, resource usage, and other behaviors on remote devices.
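A minimal sketch of such optimized collection: reduce a window of raw per-inference metrics to a compact summary before uploading. The metric names and the 0.5 low-confidence cutoff are illustrative assumptions.

```python
from statistics import mean

def summarize_window(latencies_ms, confidences):
    """Reduce a window of raw per-inference metrics to a small summary dict."""
    return {
        "count": len(latencies_ms),
        "latency_ms_avg": mean(latencies_ms),
        "latency_ms_max": max(latencies_ms),
        "confidence_avg": mean(confidences),
        # fraction of predictions below an assumed 0.5 confidence cutoff
        "low_confidence_frac": sum(c < 0.5 for c in confidences) / len(confidences),
    }

summary = summarize_window([12.1, 13.4, 11.8, 30.2], [0.91, 0.88, 0.42, 0.95])
```

A device would transmit only this handful of numbers per window instead of every raw inference record.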

+

With limited connectivity, more analysis occurs at the edge before aggregating insights centrally. Gateways play a key role in monitoring fleet health and coordinating software updates. Confirmed indicators are eventually propagated to the cloud.

+

Broad device coverage is challenging but critical. Issues specific to certain device types may arise, so monitoring needs to cover the full spectrum. Canary deployments help trial monitoring processes before scaling.

+

Anomaly detection identifies incidents requiring rolling back models or retraining on new data. However, interpreting alerts requires understanding unique device contexts based on input from engineers and customers.
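A toy version of such an anomaly check might compare recent average confidence against a baseline and flag the model for rollback review when the drop exceeds a tolerance. The tolerance below is an arbitrary illustration, not a recommended value.

```python
def needs_rollback(baseline_conf, recent_confs, tolerance=0.15):
    """Flag a model for rollback review when recent confidence drops well below baseline."""
    recent_avg = sum(recent_confs) / len(recent_confs)
    return (baseline_conf - recent_avg) > tolerance

healthy = needs_rollback(0.90, [0.88, 0.91, 0.86])   # small dip: no action
degraded = needs_rollback(0.90, [0.60, 0.55, 0.65])  # large drop: roll back
```

As the section notes, a real alert like `degraded` would still be interpreted in the context of the specific device before triggering a rollback or retraining.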

+

For example, an automaker may monitor autonomous vehicles for indicators of model degradation using caching, aggregation, and real-time streams. Engineers assess when identified anomalies warrant OTA updates to improve models based on factors like location and vehicle age.

+

Embedded MLOps monitoring provides observability into model and system performance across decentralized edge environments. Careful data collection, analysis, and collaboration deliver meaningful insights to maintain reliability.

+
+
+

Governance

+

In traditional MLOps, governance focuses on model explainability, fairness, and compliance for centralized systems. However, embedded MLOps must also address device-level governance challenges related to data privacy, security, and safety.

+

With sensors collecting personal and sensitive data, local data governance on devices is critical. Data access controls, anonymization, and encrypted caching help address privacy risks and compliance like HIPAA and GDPR. Updates must maintain security patches and settings.
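As a hedged sketch of such device-side data governance, the snippet below drops fields outside a minimal allowed schema and replaces the direct identifier with a salted hash before transmission. The field names and salt handling are illustrative assumptions; a real deployment would follow its applicable regulatory requirements (e.g., HIPAA or GDPR).

```python
import hashlib

ALLOWED_FIELDS = {"heart_rate", "timestamp"}  # assumed minimal schema for training

def pseudonymize(record, salt):
    """Strip direct identifiers and keep only fields needed for model training."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    # Replace the name with a salted, truncated hash so records stay linkable
    # across uploads without exposing the identity.
    out["subject"] = hashlib.sha256((salt + record["patient_name"]).encode()).hexdigest()[:16]
    return out

clean = pseudonymize(
    {"patient_name": "Jane Doe", "heart_rate": 72, "timestamp": 1717600000},
    salt="per-deployment-secret",
)
```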

+

Safety governance considers the physical impacts of flawed device behavior. Failures could cause unsafe conditions in vehicles, factories, and critical systems. Redundancy, fail-safes, and warning systems help mitigate risks.

+

Traditional governance, such as bias monitoring and model explainability, remains imperative but is harder to implement for embedded AI. Peeking into black-box models on low-power devices also poses challenges.

+

For example, a medical device may scrub personal data on the device before transmission. Strict data governance protocols approve model updates. Model explainability is limited, but the focus is on detecting anomalous behavior. Backup systems prevent failures.

+

Embedded MLOps governance must encompass privacy, security, safety, transparency, and ethics. Specialized techniques and team collaboration are needed to help establish trust and accountability within decentralized environments.

+
+
+
+

13.7.4 Comparison

+

Here is a comparison table highlighting similarities and differences between Traditional MLOps and Embedded MLOps based on all the things we have learned thus far:

| Area | Traditional MLOps | Embedded MLOps |
|---|---|---|
| Data Management | Large datasets, data lakes, feature stores | On-device data capture, edge caching and processing |
| Model Development | Leverage deep learning, complex neural nets, GPU training | Constraints on model complexity, need for optimization |
| Deployment | Server clusters, cloud deployment, low latency at scale | OTA deployment to devices, intermittent connectivity |
| Monitoring | Dashboards, logs, alerts for cloud model performance | On-device monitoring of predictions, resource usage |
| Retraining | Retrain models on new data | Federated learning from devices, edge retraining |
| Infrastructure | Dynamic cloud infrastructure | Heterogeneous edge/cloud infrastructure |
| Collaboration | Shared experiment tracking and model registry | Collaboration for device-specific optimization |
+

So, while Embedded MLOps shares foundational MLOps principles, it faces unique constraints in tailoring workflows and infrastructure specifically for resource-constrained edge devices.

+
+
+
+

13.8 Commercial Offerings

+

While commercial offerings are no substitute for understanding the underlying principles, an increasing number of them help ease the burden of building ML pipelines and integrating tools to build, test, deploy, and monitor ML models in production.

+
+

13.8.1 Traditional MLOps

+

Google, Microsoft, and Amazon each offer their own managed ML services. These include services that manage model training and experimentation, model hosting and scaling, and monitoring. These offerings are available via an API and client SDKs, as well as through web UIs. While it is possible to build your own end-to-end MLOps solution using pieces from each, the greatest ease-of-use benefits come from staying within a single provider’s ecosystem to take advantage of interservice integrations.

+

We provide a quick overview of the services that fit into each part of the MLOps life cycle described above, with examples of offerings from different providers. The space is moving very quickly, and new companies and products enter the scene rapidly; these examples are not meant to serve as an endorsement of any particular company’s offering.

+
+

Data Management

+

Data storage and versioning are table stakes for any commercial offering, and most take advantage of existing general-purpose storage solutions such as S3. Others use more specialized options such as git-based storage (for example, Hugging Face’s Dataset Hub). This is an area where providers make it easy to support their competitors’ data storage options, as they don’t want this to be a barrier to adoption of the rest of their MLOps services. For example, Vertex AI’s training pipeline seamlessly supports datasets stored in S3, Google Cloud Buckets, or Hugging Face’s Dataset Hub.

+
+
+

Model Training

+

Managed training services are where cloud providers shine, as they provide on-demand access to hardware that is out of reach for most smaller companies. They bill only for hardware during training time, putting GPU-accelerated training within reach of even the smallest developer teams. The control developers have over their training workflow can vary widely depending on their needs. Some providers have services that provide little more than access to the resources and rely on the developer to manage the training loop, logging, and model storage themselves. Other services are as simple as pointing to a base model and a labeled data set to kick off a fully managed finetuning job (example: Vertex AI Fine Tuning).

+

A word of warning: As of 2023, GPU hardware demand well exceeds supply, and as a result, cloud providers are rationing access to their GPUs. In some data center regions, GPUs may be unavailable or require long-term contracts.

+
+
+

Model Evaluation

+

Model evaluation tasks typically involve monitoring models’ accuracy, latency, and resource usage in both the testing and production phases. Unlike embedded systems, ML models deployed to the cloud benefit from constant internet connectivity and unlimited logging capacities. As a result, it is often feasible to capture and log every request and response. This makes replaying or generating synthetic requests to compare different models and versions tractable.
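To make the replay idea concrete, the sketch below runs a set of logged requests through two stand-in "models" and measures how often their outputs agree; a real system would replay full request payloads against actual model versions.

```python
def replay_compare(requests, model_a, model_b):
    """Return the fraction of logged requests where the two models agree."""
    agree = sum(model_a(r) == model_b(r) for r in requests)
    return agree / len(requests)

# Logged inputs and two toy "model versions" (stand-ins for real deployments).
logged = [0.2, 0.8, 0.5, 0.9, 0.1]
v1 = lambda x: x > 0.5    # current model: strict threshold at 0.5
v2 = lambda x: x >= 0.5   # candidate: handles the boundary differently
agreement = replay_compare(logged, v1, v2)
```

Disagreement cases (here, the boundary input 0.5) are exactly the requests worth inspecting before promoting a candidate model.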

+

Some providers also offer services that automate experiment tracking when modifying model hyperparameters. They track the runs and performance and generate artifacts from these model training runs (example: Weights & Biases).

+
+
+

Model Deployment

+

Each provider typically has a service referred to as a “model registry,” where trained models are stored and accessed. Often, these registries also provide access to base models that are either open source or provided by larger technology companies (or, in some cases, like LLaMA, both!). These model registries are a common place to compare all the models and their versions, allowing easy decision-making on which to pick for a given use case. Example: Vertex AI’s model registry.

+

From the model registry, deploying a model to an inference endpoint is quick and simple, and it handles the resource provisioning, model weight downloading, and hosting of a given model. These services typically give access to the model via a REST API where inference requests can be sent. Depending on the model type, specific resources can be configured, such as which type of GPU accelerator may be needed to hit the desired performance. Some providers may also offer serverless inference or batch inference options that do not need a persistent endpoint to access the model. Example: AWS SageMaker Inference

+
+
+
+

13.8.2 Embedded MLOps

+

Despite the proliferation of new MLOps tools in response to the increase in demand, the challenges described earlier have constrained the availability of such tools in embedded systems environments. More recently, new tools such as Edge Impulse (Janapa Reddi et al. 2023) have made the development process somewhat easier, as described below.

+
+Janapa Reddi, Vijay, Alexander Elium, Shawn Hymel, David Tischler, Daniel Situnayake, Carl Ward, Louis Moreau, et al. 2023. “Edge Impulse: An MLOps Platform for Tiny Machine Learning.” Proceedings of Machine Learning and Systems 5. +
+

Edge Impulse

+

Edge Impulse is an end-to-end development platform for creating and deploying machine learning models onto edge devices such as microcontrollers and small processors. It aims to make embedded machine learning more accessible to software developers through its easy-to-use web interface and integrated tools for data collection, model development, optimization, and deployment. Its key capabilities include the following:

+
    +
  • Intuitive drag-and-drop workflow for building ML models without coding required
  • Tools for acquiring, labeling, visualizing, and preprocessing data from sensors
  • Choice of model architectures, including neural networks and unsupervised learning
  • Model optimization techniques to balance performance metrics and hardware constraints
  • Seamless deployment onto edge devices through compilation, SDKs, and benchmarks
  • Collaboration features for teams and integration with other platforms
+

With Edge Impulse, developers with limited data science expertise can develop specialized ML models that run efficiently within small computing environments. It provides a comprehensive solution for creating embedded intelligence and advancing machine learning.

+
+
User Interface
+

Edge Impulse was designed with seven key principles: accessibility, end-to-end capabilities, a data-centric approach, interactiveness, extensibility, team orientation, and community support. The intuitive user interface, shown in Figure 13.7, guides developers at all experience levels through uploading data, selecting a model architecture, training the model, and deploying it across relevant hardware platforms. It should be noted that, like any tool, Edge Impulse is intended to assist with, not replace, foundational considerations such as determining if ML is an appropriate solution or acquiring the requisite domain expertise for a given application.

+
+
+
+ +
+
+Figure 13.7: Screenshot of Edge Impulse user interface for building workflows from input data to output features. +
+
+
+

What makes Edge Impulse notable is its comprehensive yet intuitive end-to-end workflow. Developers start by uploading their data through file upload or command line interface (CLI) tools, after which they can examine raw samples and visualize the data distribution in the training and test splits. Next, users can pick from various preprocessing “blocks” to facilitate digital signal processing (DSP). While default parameter values are provided, users can customize the parameters as needed, with considerations around memory and latency displayed. Users can easily choose their neural network architecture - without any code needed.

+

Thanks to the platform’s visual editor, users can customize the architecture’s components and specific parameters while ensuring that the model is still trainable. Users can also leverage unsupervised learning algorithms, such as K-means clustering and Gaussian mixture models (GMM).

+
+
+
Optimizations
+

To accommodate the resource constraints of TinyML applications, Edge Impulse provides a confusion matrix summarizing key performance metrics, including per-class accuracy and F1 scores. The platform elucidates the tradeoffs between model performance, size, and latency using simulations in Renode and device-specific benchmarking. For streaming data use cases, a performance calibration tool leverages a genetic algorithm to find ideal post-processing configurations balancing false acceptance and false rejection rates. Techniques like quantization, code optimization, and device-specific optimization are available to optimize models. For deployment, models can be compiled in appropriate formats for target edge devices. Native firmware SDKs also enable direct data collection on devices.

+

In addition to streamlining development, Edge Impulse scales the modeling process itself. A key capability is the EON Tuner, an automated machine learning (AutoML) tool that assists users in hyperparameter tuning based on system constraints. It runs a random search to generate configurations for digital signal processing and training steps quickly. The resulting models are displayed for the user to select based on relevant performance, memory, and latency metrics. For data, active learning facilitates training on a small labeled subset, followed by manually or automatically labeling new samples based on proximity to existing classes. This expands data efficiency.
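In the spirit of such an AutoML tuner (though not the EON Tuner itself), the sketch below randomly samples configurations, discards those violating a memory budget, and keeps the best by a synthetic accuracy estimate. The cost and accuracy functions are toy stand-ins for real DSP/training evaluations.

```python
import random

random.seed(0)  # deterministic for the example

def estimate_memory_kb(cfg):
    """Toy memory model: scales with filters and layers."""
    return cfg["filters"] * cfg["layers"] * 2

def estimate_accuracy(cfg):
    """Toy accuracy model: bigger configurations score higher, capped at 0.99."""
    return min(0.99, 0.5 + 0.02 * cfg["filters"] + 0.03 * cfg["layers"])

def random_search(n_trials=50, memory_budget_kb=256):
    best = None
    for _ in range(n_trials):
        cfg = {"filters": random.choice([8, 16, 32, 64]),
               "layers": random.randint(1, 6)}
        if estimate_memory_kb(cfg) > memory_budget_kb:
            continue  # violates the device constraint, skip this configuration
        acc = estimate_accuracy(cfg)
        if best is None or acc > best[0]:
            best = (acc, cfg)
    return best

best_acc, best_cfg = random_search()
```

A production tuner would replace the toy estimators with actual profiling on (or simulation of) the target device, but the search-under-constraints structure is the same.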

+
+
+
Use Cases
+

Beyond the accessibility of the platform itself, the Edge Impulse team has expanded the knowledge base of the embedded ML ecosystem. The platform lends itself to academic environments, having been used in online courses and on-site workshops globally. Numerous case studies featuring industry and research use cases have been published, most notably Oura Ring, which uses ML to identify sleep patterns. The team has made repositories open source on GitHub, facilitating community growth. Users can also make projects public to share techniques, and downloadable libraries are shared under the Apache license. Organization-level access enables collaboration on workflows.

+

Overall, Edge Impulse is uniquely comprehensive and readily integrated into developer workflows. Larger platforms like Google and Microsoft focus more on the cloud than on embedded systems. TinyMLOps frameworks such as Neuton AI and Latent AI offer some functionality but lack Edge Impulse’s end-to-end capabilities. TensorFlow Lite Micro is the standard inference engine due to its flexibility, open-source status, and TensorFlow integration, but it uses more memory and storage than Edge Impulse’s EON Compiler. Other platforms are either outdated, academic-focused, or less versatile. In summary, Edge Impulse aims to streamline and scale embedded ML through an accessible, automated platform.

+
+
+
+

Limitations

+

While Edge Impulse provides an accessible pipeline for embedded ML, important limitations and risks remain. A key challenge is data quality and availability - the models are only as good as the data used to train them. Users must have sufficient labeled samples that capture the breadth of expected operating conditions and failure modes. Labeled anomalies and outliers are critical yet time-consuming to collect and identify. Insufficient or biased data leads to poor model performance regardless of the tool’s capabilities.

+

Deploying to low-powered devices also presents inherent challenges. Even optimized models may still be too resource-intensive for ultra-low-power MCUs. Striking the right balance of compression versus accuracy takes some experimentation. The tool simplifies, but does not eliminate, the need for foundational ML and signal processing expertise. Embedded environments also constrain debugging and interpretability compared to the cloud.

+

While impressive results are achievable, users shouldn’t view Edge Impulse as a “Push Button ML” solution. Careful project scoping, data collection, model evaluation, and testing are still essential. As with any development tool, reasonable expectations and diligence in application are advised. However, Edge Impulse can accelerate embedded ML prototyping and deployment for developers willing to invest the requisite data science and engineering effort.

+
+

Exercise 13.1 (Edge Impulse)  

+
+
+ +
+
+

Ready to level up your tiny machine-learning projects? Let’s combine the power of Edge Impulse with the awesome visualizations of Weights & Biases (WandB). In this Colab, you’ll learn to track your model’s training progress like a pro! Imagine seeing cool graphs of your model getting smarter, comparing different versions, and ensuring your AI performs its best even on tiny devices.

+

+
+
+
+
+
+
+
+

13.9 Case Studies

+
+

13.9.1 Oura Ring

+

The Oura Ring is a wearable that can measure activity, sleep, and recovery when placed on the user’s finger. Tracking physiological metrics with its sensors, the device uses embedded ML to predict the stages of sleep. To establish a baseline of legitimacy in the industry, Oura conducted a correlation experiment to evaluate the device’s success in predicting sleep stages against a baseline study. This resulted in a solid 62% correlation compared to the 82-83% baseline. Thus, the team set out to determine how to improve their performance even further.

+

The first challenge was to obtain better data in terms of both quantity and quality. They could host a larger study to get a more comprehensive data set, but the data would be so noisy and large that it would be difficult to aggregate, scrub, and analyze. This is where Edge Impulse comes in.

+

Oura hosted a massive sleep study of 100 men and women between the ages of 15 and 73 across three continents (Asia, Europe, and North America). In addition to wearing the Oura Ring, participants underwent the industry-standard PSG testing, which provided a “label” for this data set. With 440 nights of sleep from 106 participants, the data set totaled 3,444 hours in length across Ring and PSG data. With Edge Impulse, Oura could easily upload and consolidate data from different sources into a private S3 bucket. They were also able to set up a data pipeline to merge data samples into individual files and preprocess the data without having to conduct manual scrubbing.

+

Because of the time saved on data processing thanks to Edge Impulse, the Oura team could focus on the key drivers of their prediction. They only extracted three types of sensor data: heart rate, motion, and body temperature. After partitioning the data using five-fold cross-validation and classifying sleep stages, the team achieved a correlation of 79% - just a few percentage points off the standard. They readily deployed two types of sleep detection models: one simplified using just the ring’s accelerometer and one more comprehensive leveraging Autonomic Nervous System (ANS)-mediated peripheral signals and circadian features. With Edge Impulse, they plan to conduct further analyses of different activity types and leverage the platform’s scalability to continue experimenting with different data sources and subsets of extracted features.
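The five-fold partitioning mentioned above can be sketched with plain index arithmetic (the sleep data itself is not public, so only indices are shown):

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train, test) index pairs; every sample appears in exactly one test fold."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i not in set(test)]
        yield train, test
        start += size

# With 10 samples and k=5, each fold holds out 2 samples for evaluation.
folds = list(k_fold_indices(10, k=5))
```

Averaging a model's sleep-stage correlation across the five held-out folds, as Oura did, gives a more reliable estimate than a single train/test split.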

+

While most ML research focuses on model-dominant steps such as training and finetuning, this case study underscores the importance of a holistic approach to ML Ops, where even the initial steps of data aggregation and preprocessing fundamentally impact successful outcomes.

+
+
+

13.9.2 ClinAIOps

+

Let’s look at MLOps in the context of medical health monitoring to better understand how MLOps “matures” in a real-world deployment. Specifically, let’s consider continuous therapeutic monitoring (CTM) enabled by wearable devices and sensors. CTM captures detailed physiological data from patients, providing the opportunity for more frequent and personalized adjustments to treatments.

+

Wearable ML-enabled sensors enable continuous physiological and activity monitoring outside clinics, opening up possibilities for timely, data-driven therapy adjustments. For example, wearable insulin biosensors (Psoma and Kanthou 2023) and wrist-worn ECG sensors for glucose monitoring (Li et al. 2021) can automate insulin dosing for diabetes, wrist-worn ECG and PPG sensors can adjust blood thinners based on atrial fibrillation patterns (Attia et al. 2018; Guo et al. 2019), and accelerometers tracking gait can trigger preventative care for declining mobility in the elderly (Liu et al. 2022). The variety of signals that can now be captured passively and continuously allows therapy titration and optimization tailored to each patient’s changing needs. By closing the loop between physiological sensing and therapeutic response with TinyML and on-device learning, wearables are poised to transform many areas of personalized medicine.

+
+Psoma, Sotiria D., and Chryso Kanthou. 2023. “Wearable Insulin Biosensors for Diabetes Management: Advances and Challenges.” Biosensors 13 (7): 719. https://doi.org/10.3390/bios13070719. +
+Li, Jingzhen, Igbe Tobore, Yuhang Liu, Abhishek Kandwal, Lei Wang, and Zedong Nie. 2021. “Non-Invasive Monitoring of Three Glucose Ranges Based on ECG by Using DBSCAN-CNN.” IEEE Journal of Biomedical and Health Informatics 25 (9): 3340–50. https://doi.org/10.1109/jbhi.2021.3072628. +
+Attia, Zachi I., Alan Sugrue, Samuel J. Asirvatham, Michael J. Ackerman, Suraj Kapa, Paul A. Friedman, and Peter A. Noseworthy. 2018. “Noninvasive Assessment of Dofetilide Plasma Concentration Using a Deep Learning (Neural Network) Analysis of the Surface Electrocardiogram: A Proof of Concept Study.” PLoS One 13 (8): e0201059. https://doi.org/10.1371/journal.pone.0201059. +
+Guo, Yutao, Hao Wang, Hui Zhang, Tong Liu, Zhaoguang Liang, Yunlong Xia, Li Yan, et al. 2019. “Mobile Photoplethysmographic Technology to Detect Atrial Fibrillation.” J. Am. Coll. Cardiol. 74 (19): 2365–75. https://doi.org/10.1016/j.jacc.2019.08.019. +
+Liu, Yingcheng, Guo Zhang, Christopher G. Tarolli, Rumen Hristov, Stella Jensen-Roberts, Emma M. Waddell, Taylor L. Myers, et al. 2022. “Monitoring Gait at Home with Radio Waves in Parkinson’s Disease: A Marker of Severity, Progression, and Medication Response.” Sci. Transl. Med. 14 (663): eadc9669. https://doi.org/10.1126/scitranslmed.adc9669. +

ML holds great promise in analyzing CTM data to provide data-driven recommendations for therapy adjustments. But simply deploying AI models in silos, without integrating them properly into clinical workflows and decision-making, can lead to poor adoption or suboptimal outcomes. In other words, thinking about MLOps alone is insufficient to make them useful in practice. This study shows that frameworks are needed to incorporate AI and CTM into real-world clinical practice seamlessly.

+

This case study analyzes “ClinAIOps” as a model for embedded ML operations in complex clinical environments (Chen et al. 2023). We provide an overview of the framework and why it’s needed, walk through an application example, and discuss key implementation challenges related to model monitoring, workflow integration, and stakeholder incentives. Analyzing real-world examples like ClinAIOps illuminates crucial principles and best practices for reliable and effective AI Ops across many domains.

+

Traditional MLOps frameworks are insufficient for integrating continuous therapeutic monitoring (CTM) and AI in clinical settings for a few key reasons:

+
    +
  • MLOps focuses on the ML model lifecycle—training, deployment, monitoring. But healthcare involves coordinating multiple human stakeholders—patients and clinicians—not just models.
  • MLOps aims to automate IT system monitoring and management. However, optimizing patient health requires personalized care and human oversight, not just automation.
  • CTM and healthcare delivery are complex sociotechnical systems with many moving parts. MLOps doesn’t provide a framework for coordinating human and AI decision-making.
  • Ethical considerations regarding healthcare AI require human judgment, oversight, and accountability. MLOps frameworks lack processes for ethical oversight.
  • Patient health data is highly sensitive and regulated. MLOps alone doesn’t ensure the handling of protected health information to privacy and regulatory standards.
  • Clinical validation of AI-guided treatment plans is essential for provider adoption. MLOps doesn’t incorporate domain-specific evaluation of model recommendations.
  • Optimizing healthcare metrics like patient outcomes requires aligning stakeholder incentives and workflows, which pure tech-focused MLOps overlooks.
+

Thus, effectively integrating AI/ML and CTM in clinical practice requires more than just model and data pipelines; it requires coordinating complex human-AI collaborative decision-making, which ClinAIOps aims to address via its multi-stakeholder feedback loops.

+
+

Feedback Loops

+

The ClinAIOps framework, shown in Figure 13.8, provides these mechanisms through three feedback loops. The loops coordinate the insights from continuous physiological monitoring, clinician expertise, and AI guidance, enabling data-driven precision medicine while maintaining human accountability. ClinAIOps provides a model for effective human-AI symbiosis in healthcare: the patient is at the center, providing health challenges and goals that inform the therapy regimen; the clinician oversees this regimen, giving inputs for adjustments based on continuous monitoring data and health reports from the patient; and AI developers play a crucial role by creating systems that generate alerts for therapy updates, which the clinician then vets.

+

These feedback loops, which we will discuss below, help maintain clinician responsibility and control over treatment plans by reviewing AI suggestions before they impact patients. They help dynamically customize AI model behavior and outputs to each patient’s changing health status. They help improve model accuracy and clinical utility over time by learning from clinician and patient responses. They facilitate shared decision-making and personalized care during patient-clinician interactions. They enable rapid optimization of therapies based on frequent patient data that clinicians cannot manually analyze.

+
+
+
+ +
+
+Figure 13.8: ClinAIOps cycle. Credit: Chen et al. (2023). +
+
+
+
+
Patient-AI Loop
+

The patient-AI loop enables frequent therapy optimization driven by continuous physiological monitoring. Patients are prescribed wearables like smartwatches or skin patches to collect relevant health signals passively. For example, a diabetic patient could have a continuous glucose monitor, or a heart disease patient may wear an ECG patch. An AI model analyzes the patient’s longitudinal health data streams in the context of their electronic medical records - their diagnoses, lab tests, medications, and demographics. The AI model suggests adjustments to the treatment regimen tailored to that individual, like changing a medication dose or administration schedule. Minor adjustments within a pre-approved safe range can be made by the patient independently, while major changes are reviewed by the clinician first. This tight feedback between the patient’s physiology and AI-guided therapy allows data-driven, timely optimizations like automated insulin dosing recommendations based on real-time glucose levels for diabetes patients.
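The routing of minor versus major adjustments described above can be sketched as a simple gate. The 10% safe band and the dose values below are purely illustrative assumptions, not clinical guidance.

```python
def route_adjustment(current_dose, proposed_dose, safe_delta=0.1):
    """Return 'auto-apply' for changes within the pre-approved safe band,
    'clinician-review' for larger changes that need sign-off."""
    relative_change = abs(proposed_dose - current_dose) / current_dose
    return "auto-apply" if relative_change <= safe_delta else "clinician-review"

minor = route_adjustment(10.0, 10.5)   # 5% change, within the safe band
major = route_adjustment(10.0, 14.0)   # 40% change, needs clinician sign-off
```

In a real deployment, the safe band itself would be set per patient by the clinician, as the clinician-AI loop below describes.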

+
+
+
Clinician-AI Loop
+

The clinician-AI loop allows clinical oversight over AI-generated recommendations to ensure safety and accountability. The AI model provides the clinician with treatment recommendations and easily reviewed summaries of the relevant patient data on which the suggestions are based. For instance, an AI may suggest lowering a hypertension patient’s blood pressure medication dose based on continuously low readings. The clinician can accept, reject, or modify the AI’s proposed prescription changes. This clinician feedback further trains and improves the model. Additionally, the clinician sets the bounds for the types and extent of treatment changes the AI can autonomously recommend to patients. By reviewing AI suggestions, the clinician maintains ultimate treatment authority based on their clinical judgment and accountability. This loop allows them to oversee patient cases with AI assistance efficiently.

+
+
+
Patient-Clinician Loop
+

Instead of routine data collection, the clinician can focus on interpreting high-level data patterns and collaborating with the patient to set health goals and priorities. The AI assistance will also free up clinicians’ time, allowing them to focus more deeply on listening to patients’ stories and concerns. For instance, the clinician may discuss diet and exercise changes with a diabetes patient to improve their glucose control based on their continuous monitoring data. Appointment frequency can also be dynamically adjusted based on patient progress rather than following a fixed calendar. Freed from basic data gathering, the clinician can provide coaching and care customized to each patient informed by their continuous health data. The patient-clinician relationship is made more productive and personalized.


Hypertension Example


Let’s consider an example. According to the Centers for Disease Control and Prevention, nearly half of adults have hypertension (48.1%, 119.9 million). Hypertension can be managed through ClinAIOps with the help of wearable sensors using the following approach:

Data Collection

The data collected would include continuous blood pressure monitoring using a wrist-worn device equipped with photoplethysmography (PPG) and electrocardiography (ECG) sensors to estimate blood pressure (Zhang, Zhou, and Zeng 2017). The wearable would also track the patient’s physical activity via embedded accelerometers. The patient would log any antihypertensive medications they take, along with the time and dose. The patient’s demographic details and medical history from their electronic health record (EHR) would also be incorporated. This multimodal real-world data provides valuable context for the AI model to analyze the patient’s blood pressure patterns, activity levels, medication adherence, and responses to therapy.

Zhang, Qingxue, Dian Zhou, and Xuan Zeng. 2017. “Highly Wearable Cuff-Less Blood Pressure and Heart Rate Monitoring with Single-Arm Electrocardiogram and Photoplethysmogram Signals.” BioMedical Engineering OnLine 16 (1): 23. https://doi.org/10.1186/s12938-017-0317-z.

AI Model

The on-device AI model would analyze the patient’s continuous blood pressure trends, circadian patterns, physical activity levels, medication adherence behaviors, and other contexts. It would use ML to predict optimal antihypertensive medication doses and timing to control the individual’s blood pressure. The model would send dosage change recommendations directly to the patient for minor adjustments or to the reviewing clinician for approval for more significant modifications. By observing clinician feedback on its recommendations and evaluating the resulting blood pressure outcomes in patients, the AI model could be continually retrained and improved to enhance performance. The goal is fully personalized blood pressure management optimized for each patient’s needs and responses.

Patient-AI Loop

In the Patient-AI loop, the hypertensive patient would receive notifications on their wearable device or tethered smartphone app recommending adjustments to their antihypertensive medications. For minor dose changes within a pre-defined safe range, the patient could independently implement the AI model’s suggested adjustment to their regimen. However, the patient must obtain clinician approval before changing their dosage for more significant modifications. Providing personalized and timely medication recommendations automates an element of hypertension self-management for the patient. It can improve their adherence to the regimen as well as treatment outcomes. The patient is empowered to leverage AI insights to control their blood pressure better.

Clinician-AI Loop

In the Clinician-AI loop, the provider would receive summaries of the patient’s continuous blood pressure trends and visualizations of their medication-taking patterns and adherence. They review the AI model’s suggested antihypertensive dosage changes and decide whether to approve, reject, or modify the recommendations before they reach the patient. The clinician also specifies the boundaries for how much the AI can independently recommend changing dosages without clinician oversight. If the patient’s blood pressure is trending at dangerous levels, the system alerts the clinician so they can promptly intervene and adjust medications or request an emergency room visit. This loop maintains accountability and safety while allowing the clinician to harness AI insights by keeping the clinician in charge of approving major treatment changes.
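The division of authority described in this loop can be made concrete as a simple triage rule. The sketch below is purely illustrative: the function name, dose thresholds, and blood pressure alert range are hypothetical stand-ins chosen for demonstration, not clinical guidance.

```python
def route_recommendation(current_dose_mg, proposed_dose_mg, latest_systolic,
                         safe_delta_mg=2.5, alert_systolic=(90, 180)):
    """Decide who acts on an AI-suggested antihypertensive dose change.

    All numeric bounds are illustrative assumptions, not clinical values.
    """
    lo, hi = alert_systolic
    if not (lo <= latest_systolic <= hi):
        return "alert_clinician"        # dangerous readings escalate immediately
    if abs(proposed_dose_mg - current_dose_mg) <= safe_delta_mg:
        return "auto_apply"             # minor change within the pre-approved range
    return "clinician_review"           # major change requires clinician approval

print(route_recommendation(10, 11, 125))  # → auto_apply
print(route_recommendation(10, 20, 125))  # → clinician_review
print(route_recommendation(10, 11, 200))  # → alert_clinician
```

The key design point is that the clinician, not the model, sets `safe_delta_mg` and the alert range, so the AI's autonomous authority is always bounded by human-defined limits.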

Patient-Clinician Loop

In the Patient-Clinician loop, shown in Figure fig-interactive-loop, the in-person visits would focus less on collecting data or basic medication adjustments. Instead, the clinician could interpret high-level trends and patterns in the patient’s continuous monitoring data and have focused discussions about diet, exercise, stress management, and other lifestyle changes to improve their blood pressure control holistically. The frequency of appointments could be dynamically optimized based on the patient’s stability rather than following a fixed calendar. Since the clinician would not need to review all the granular data, they could concentrate on delivering personalized care and recommendations during visits. With continuous monitoring and AI-assisted optimization of medications between visits, the clinician-patient relationship focuses on overall wellness goals and becomes more impactful. This proactive and tailored data-driven approach can help avoid hypertension complications like stroke, heart failure, and other threats to patient health and well-being.

Figure 13.9: ClinAIOps interactive loop. Credit: Chen et al. (2023).

Chen, Emma, Shvetank Prakash, Vijay Janapa Reddi, David Kim, and Pranav Rajpurkar. 2023. “A Framework for Integrating Artificial Intelligence for Clinical Care with Continuous Therapeutic Monitoring.” Nat. Biomed. Eng., November. https://doi.org/10.1038/s41551-023-01115-0.

MLOps vs. ClinAIOps


The hypertension example illustrates well why traditional MLOps are insufficient for many real-world AI applications and why frameworks like ClinAIOps are needed instead.


With hypertension, simply developing and deploying an ML model for adjusting medications would only succeed if it considered the broader clinical context. The patient, the clinician, and the health system each have concerns that shape adoption. The AI model cannot optimize blood pressure outcomes alone; it requires integration with workflows, behaviors, and incentives.

Some key gaps the example highlights in a pure MLOps approach:

  • The model itself would lack the real-world patient data at scale to recommend treatments reliably. ClinAIOps enables this by collecting feedback from clinicians and patients via continuous monitoring.
  • Clinicians would not trust model recommendations without transparency, explainability, and accountability. ClinAIOps keeps the clinician in the loop to build confidence.
  • Patients need personalized coaching and motivation, not just AI notifications. The ClinAIOps patient-clinician loop facilitates this.
  • Sensor reliability and data accuracy alone would not be sufficient without clinical oversight. ClinAIOps validates recommendations before they reach patients.
  • Liability for treatment outcomes is unclear with just an ML model. ClinAIOps maintains human accountability.
  • Health systems would need to see demonstrated value before changing workflows. ClinAIOps aligns stakeholders.

The hypertension case clearly shows the need to look beyond training and deploying a performant ML model to consider the entire human-AI sociotechnical system. This is the key gap ClinAIOps aims to address over traditional MLOps. Traditional MLOps is overly tech-focused on automating ML model development and deployment, while ClinAIOps incorporates clinical context and human-AI coordination through multi-stakeholder feedback loops.


Table tbl-clinical_ops compares them. This table highlights how, when MLOps is implemented, we need to consider more than just ML models.

Table 13.2: Comparison of MLOps versus AI operations for clinical use.

|                     | Traditional MLOps                      | ClinAIOps                                     |
|---------------------|----------------------------------------|-----------------------------------------------|
| Focus               | ML model development and deployment    | Coordinating human and AI decision-making     |
| Stakeholders        | Data scientists, IT engineers          | Patients, clinicians, AI developers           |
| Feedback loops      | Model retraining, monitoring           | Patient-AI, clinician-AI, patient-clinician   |
| Objective           | Operationalize ML deployments          | Optimize patient health outcomes              |
| Processes           | Automated pipelines and infrastructure | Integrates clinical workflows and oversight   |
| Data considerations | Building training datasets             | Privacy, ethics, protected health information |
| Model validation    | Testing model performance metrics      | Clinical evaluation of recommendations        |
| Implementation      | Focuses on technical integration       | Aligns incentives of human stakeholders       |

Summary


In complex domains like healthcare, successfully deploying AI requires moving beyond a narrow focus on training and deploying performant ML models. As illustrated through the hypertension example, real-world integration of AI necessitates coordinating diverse stakeholders, aligning incentives, validating recommendations, and maintaining accountability. Frameworks like ClinAIOps, which facilitate collaborative human-AI decision-making through integrated feedback loops, are needed to address these multifaceted challenges. Rather than just automating tasks, AI must augment human capabilities and clinical workflows. This allows AI to positively impact patient outcomes, population health, and healthcare efficiency.


13.10 Conclusion

+

Embedded ML is poised to transform many industries by enabling AI capabilities directly on edge devices like smartphones, sensors, and IoT hardware. However, developing and deploying TinyML models on resource-constrained embedded systems poses unique challenges compared to traditional cloud-based MLOps.

+

This chapter provided an in-depth analysis of key differences between traditional and embedded MLOps across the model lifecycle, development workflows, infrastructure management, and operational practices. We discussed how factors like intermittent connectivity, decentralized data, and limited on-device computing necessitate innovative techniques like federated learning, on-device inference, and model optimization. Architectural patterns like cross-device learning and hierarchical edge-cloud infrastructure help mitigate constraints.

+

Through concrete examples like Oura Ring and ClinAIOps, we demonstrated applied principles for embedded MLOps. The case studies highlighted critical considerations beyond core ML engineering, like aligning stakeholder incentives, maintaining accountability, and coordinating human-AI decision-making. This underscores the need for a holistic approach spanning both technical and human elements.

+

While embedded MLOps face impediments, emerging tools like Edge Impulse and lessons from pioneers help accelerate TinyML innovation. A solid understanding of foundational MLOps principles tailored to embedded environments will empower more organizations to overcome constraints and deliver distributed AI capabilities. As frameworks and best practices mature, seamlessly integrating ML into edge devices and processes will transform industries through localized intelligence.


Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides serve as a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we also offer a series of hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.


9  Model Optimizations


Resources: Slides, Labs, Exercises

DALL·E 3 Prompt: Illustration of a neural network model represented as a busy construction site, with a diverse group of construction workers, both male and female, of various ethnicities, labeled as ‘pruning’, ‘quantization’, and ‘sparsity’. They are working together to make the neural network more efficient and smaller, while maintaining high accuracy. The ‘pruning’ worker, a Hispanic female, is cutting unnecessary connections from the middle of the network. The ‘quantization’ worker, a Caucasian male, is adjusting or tweaking the weights all over the place. The ‘sparsity’ worker, an African female, is removing unnecessary nodes to shrink the model. Construction trucks and cranes are in the background, assisting the workers in their tasks. The neural network is visually transforming from a complex and large structure to a more streamlined and smaller one.

When machine learning models are deployed on systems, especially on resource-constrained embedded systems, model optimization becomes a necessity. Machine learning often demands substantial computational resources, while these systems are inherently limited in memory, processing power, and energy. This chapter dives into the art and science of optimizing machine learning models to ensure they are lightweight, efficient, and effective when deployed in TinyML scenarios.

Learning Objectives
  • Learn techniques like pruning, knowledge distillation, and specialized model architectures to represent models more efficiently
  • Understand quantization methods to reduce model size and enable faster inference through reduced-precision numerics
  • Explore hardware-aware optimization approaches to match models to target device capabilities
  • Discover software tools like frameworks and model conversion platforms that enable deployment of optimized models
  • Develop holistic thinking to balance tradeoffs in model complexity, accuracy, latency, and power based on application requirements
  • Gain strategic insight into selecting and applying model optimizations based on use case constraints and hardware targets

9.1 Introduction

+

We have structured this chapter in three tiers. First, in sec-model_ops_representation we examine the significance and methodologies of reducing the parameter complexity of models without compromising their inference capabilities. Techniques such as pruning and knowledge distillation are discussed, offering insights into how models can be compressed and simplified while maintaining, or even enhancing, their performance.

+

Going one level lower, in sec-model_ops_numerics, we study the role of numerical precision in model computations and how altering it impacts model size, speed, and accuracy. We will examine the various numerical formats and how reduced-precision arithmetic can be leveraged to optimize models for embedded deployment.

+

Finally, as we go lower and closer to the hardware, in sec-model_ops_hw, we will navigate through the landscape of hardware-software co-design, exploring how models can be optimized by tailoring them to the specific characteristics and capabilities of the target hardware. We will discuss how models can be adapted to exploit the available hardware resources effectively.

+
Figure 9.1: Three layers to be covered.

9.2 Efficient Model Representation

+

The first avenue of attack for model optimization starts in familiar territory for most ML practitioners: efficient model representation is often first tackled at the highest level of parametrization abstraction - the model’s architecture itself.

+

Most traditional ML practitioners design models with a general high-level objective in mind, whether it be image classification, person detection, or keyword spotting as mentioned previously in this textbook. Their designs generally end up naturally fitting into some soft constraints due to limited compute resources during development, but generally these designs are not aware of later constraints, such as those required if the model is to be deployed on a more constrained device instead of the cloud.

+

In this section, we’ll discuss how practitioners can harness principles of hardware-software co-design even at a model’s high level architecture to make their models compatible with edge devices. From most to least hardware aware at this level of modification, we discuss several of the most common strategies for efficient model parametrization: pruning, model compression, and edge-friendly model architectures.


9.2.1 Pruning


Overview

+

Model pruning is a technique in machine learning that aims to reduce the size and complexity of a neural network model while maintaining its predictive capabilities as much as possible. The goal of model pruning is to remove redundant or non-essential components of the model, including connections between neurons, individual neurons, or even entire layers of the network.

+

This process typically involves analyzing the machine learning model to identify and remove weights, nodes, or layers that have little impact on the model’s outputs. By selectively pruning a model in this way, the total number of parameters can be reduced significantly without substantial declines in model accuracy. The resulting compressed model requires less memory and computational resources to train and run while enabling faster inference times.

+

Model pruning is especially useful when deploying machine learning models to devices with limited compute resources, such as mobile phones or TinyML systems. The technique facilitates the deployment of larger, more complex models on these devices by reducing their resource demands. Additionally, smaller models require less data to generalize well and are less prone to overfitting. By providing an efficient way to simplify models, model pruning has become a vital technique for optimizing neural networks in machine learning.

+

There are several common pruning techniques used in machine learning, these include structured pruning, unstructured pruning, iterative pruning, bayesian pruning, and even random pruning. In addition to pruning the weights, one can also prune the activations. Activation pruning specifically targets neurons or filters that activate rarely or have overall low activation. There are numerous other methods, such as sensitivity and movement pruning. For a comprehensive list of methods, the reader is encouraged to read the following paper: “A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations” (2023).

+

So how does one choose the type of pruning methods? Many variations of pruning techniques exist where each varies the heuristic of what should be kept and pruned from the model as well the number of times pruning occurs. Traditionally, pruning happens after the model is fully trained, where the pruned model may experience mild accuracy loss. However, as we will discuss further, recent discoveries have found that pruning can be used during training (i.e., iteratively) to identify more efficient and accurate model representations.
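As a concrete reference point, the simplest post-training approach, magnitude-based pruning, can be sketched in a few lines of NumPy. The `magnitude_prune` helper below is an illustrative sketch, not drawn from any particular library: it zeroes out the smallest-magnitude weights until a target sparsity fraction is reached.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with the smallest magnitudes."""
    k = int(weights.size * sparsity)          # number of weights to remove
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest weight; prune everything at or below it.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))                 # toy dense layer weights
w_pruned = magnitude_prune(w, sparsity=0.90)
print(f"fraction zeroed: {np.mean(w_pruned == 0):.2f}")  # → fraction zeroed: 0.90
```

Real pipelines (e.g., in deep learning frameworks) apply the same idea per layer or globally, and typically follow it with fine-tuning to recover accuracy.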


Structured Pruning

+

We start with structured pruning, a technique that reduces the size of a neural network by eliminating entire model-specific substructures while maintaining the overall model structure. It removes entire neurons/channels or layers based on importance criteria. For example, for a convolutional neural network (CNN), this could be certain filter instances or channels. For fully connected networks, this could be neurons themselves while maintaining full connectivity or even be elimination of entire model layers that are deemed to be insignificant. This type of pruning often leads to regular, structured sparse networks that are hardware friendly.

Components

Best practices have started to emerge on how to think about structured pruning. There are three main components:

  1. Structures to target for pruning
  2. Establishing criteria for pruning
  3. Selecting a pruning strategy
Structures to target for pruning

Given that there are different strategies, each of these structures (i.e., neurons, channels and layers) is pruned based on specific criteria and strategies, ensuring that the reduced model maintains as much of the predictive prowess of the original model as possible while gaining in computational efficiency and reduction in size.

+

The primary structures targeted for pruning include neurons, channels, and sometimes, entire layers, each having its unique implications and methodologies. When neurons are pruned, we are removing entire neurons along with their associated weights and biases, thereby reducing the width of the layer. This type of pruning is often utilized in fully connected layers.

+

With channel pruning, which is predominantly applied in convolutional neural networks (CNNs), it involves eliminating entire channels or filters, which in turn reduces the depth of the feature maps and impacts the network’s ability to extract certain features from the input data. This is particularly crucial in image processing tasks where computational efficiency is paramount.

+

Finally, layer pruning takes a more aggressive approach by removing entire layers of the network. This significantly reduces the network’s depth and thereby its capacity to model complex patterns and hierarchies in the data. This approach necessitates a careful balance to ensure that the model’s predictive capability is not unduly compromised.

+

Figure fig-channel-layer-pruning demonstrates the difference between channel/filter-wise pruning and layer pruning. When we prune a channel, we have to reconfigure the model’s architecture to adapt to the structural change. One adjustment is changing the number of input channels in the subsequent layer (here, the third and deepest layer): the depths of the filters applied to the layer with the pruned channel must change accordingly. Pruning an entire layer (removing all the channels in the layer), on the other hand, requires more drastic adjustments. The main one involves modifying the connections between the remaining layers to replace or bypass the pruned layer; in our case, we had to connect the first and last layers. In all pruning cases, we have to fine-tune the new structure to adjust the weights.
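The reconfiguration step can be illustrated with plain NumPy arrays standing in for convolutional weight tensors (the shapes and pruned-channel indices below are assumptions for illustration): pruning output channels of one layer forces us to drop the matching input-channel slices of the next layer.

```python
import numpy as np

rng = np.random.default_rng(2)
conv1 = rng.normal(size=(8, 3, 3, 3))    # layer 1: 8 output channels, shape (out, in, kH, kW)
conv2 = rng.normal(size=(16, 8, 3, 3))   # layer 2: expects 8 input channels

keep = np.array([0, 1, 3, 4, 6, 7])      # suppose channels 2 and 5 of layer 1 were pruned
conv1_pruned = conv1[keep]               # layer 1 now produces 6 output channels...
conv2_pruned = conv2[:, keep]            # ...so layer 2 drops the matching input slices

print(conv1_pruned.shape, conv2_pruned.shape)  # → (6, 3, 3, 3) (16, 6, 3, 3)
```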

Figure 9.2: Channel vs layer pruning.

Establishing criteria for pruning

Establishing well-defined criteria for determining which specific structures to prune from a neural network model is a crucial component of the model pruning process. The core goal here is to identify and remove components that contribute the least to the model’s predictive capabilities, while retaining structures integral to preserving the model’s accuracy.

+

A widely adopted and effective strategy for systematically pruning structures relies on computing importance scores for individual components like neurons, filters, channels or layers. These scores serve as quantitative metrics to gauge the significance of each structure and its effect on the model’s output.

+

There are several techniques for assigning these importance scores:

  • Weight magnitude-based pruning assigns scores based on the absolute values of the weights. Components with very small weights contribute minimally to activations and can be removed.
  • Gradient-based pruning utilizes the gradients of the loss function with respect to each weight to determine sensitivity. Weights with low gradient magnitudes when altered have little effect on the loss and can be pruned.
  • Activation-based pruning tracks activation values for neurons/filters over a validation dataset. Consistently low activation values suggest less relevance, warranting removal.
  • Taylor expansion approximates the change in the loss function from removing a given weight. Weights with negligible impact on the loss are prime candidates for pruning.

The idea is to measure, either directly or indirectly, the contribution of each component to the model’s output. Structures with minimal influence according to the defined criteria are pruned first. This enables selective, optimized pruning that maximally compresses models while preserving predictive capacity. In general, it is important to evaluate the impact of removing particular structures on the model’s output.
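As a minimal sketch of the weight-magnitude criterion applied at the channel level (the layer shape and the two-channel pruning budget are illustrative assumptions), each output filter of a convolutional layer can be scored by the L2 norm of its weights, and the lowest-scoring filters marked as pruning candidates:

```python
import numpy as np

# Toy conv layer: 8 output filters, each of shape (in=3, kH=3, kW=3).
rng = np.random.default_rng(1)
conv_w = rng.normal(size=(8, 3, 3, 3))

# Importance score per output channel: L2 norm of that filter's weights.
scores = np.linalg.norm(conv_w.reshape(conv_w.shape[0], -1), axis=1)

# Mark the two lowest-scoring channels for removal and keep the rest.
prune_idx = np.argsort(scores)[:2]
keep_idx = np.setdiff1d(np.arange(conv_w.shape[0]), prune_idx)
pruned_w = conv_w[keep_idx]
print(pruned_w.shape)  # → (6, 3, 3, 3)
```

Swapping in a gradient- or activation-based score only changes how `scores` is computed; the selection step stays the same.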

Selecting a pruning strategy

The pruning strategy orchestrates how structures are removed and integrates with subsequent model fine-tuning to recover predictive performance. Two main structured pruning strategies exist: iterative pruning and one-shot pruning.

+

Iterative pruning gradually removes structures across multiple cycles of pruning followed by fine-tuning. In each cycle, a small set of structures are pruned based on importance criteria. The model is then fine-tuned, allowing it to adjust smoothly to the structural changes before the next pruning iteration. This gradual, cyclic approach prevents abrupt accuracy drops. It allows the model to slowly adapt as structures are reduced across iterations.

+

Consider a situation where we wish to prune the 6 least effective channels (based on some specific criteria) from a convolutional neural network. In Figure fig-iterative-pruning, we show a simplified pruning process carried out over 3 iterations. In every iteration, we prune only 2 channels. Removing the channels results in accuracy degradation. In the first iteration, the accuracy drops from 0.995 to 0.971. However, after we fine-tune the model on the new structure, we are able to recover from the performance loss, bringing the accuracy up to 0.992. Since the structural changes are minor and gradual, the network can more easily adapt to them. Running the same process 2 more times, we end up with a final accuracy of 0.991 (a loss of only 0.4% from the original) and a 27% decrease in the number of channels. Thus, iterative pruning enables us to maintain performance while benefiting from increased computational efficiency due to the decreased model size.

Figure 9.3: Iterative pruning.

One-shot pruning takes a more aggressive approach by pruning a large portion of structures simultaneously in one shot based on predefined importance criteria. This is followed by extensive fine-tuning to recover model accuracy. While faster, this aggressive strategy can degrade accuracy if the model cannot recover during fine-tuning.

+

The choice between these strategies involves weighing factors like model size, target sparsity level, available compute and acceptable accuracy losses. One-shot pruning can rapidly compress models, but iterative pruning may enable better accuracy retention for a target level of pruning. In practice, the strategy is tailored based on use case constraints. The overarching aim is to generate an optimal strategy that removes redundancy, achieves efficiency gains through pruning, and finely tunes the model to stabilize accuracy at an acceptable level for deployment.

+

Now consider the same network we had in the iterative pruning example. Whereas the iterative process pruned 2 channels at a time, one-shot pruning removes all 6 channels at once (Figure fig-oneshot-pruning). Removing 27% of the network’s channels simultaneously alters the structure significantly, causing the accuracy to drop from 0.995 to 0.914. Given the major changes, the network is not able to properly adapt during fine-tuning, and the accuracy recovers only to 0.943, a 5% degradation from the accuracy of the unpruned network. While the final structures produced by iterative pruning and one-shot pruning are identical, the former is able to maintain high performance while the latter suffers significant degradation.

Figure 9.4: One-shot pruning.
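The two strategies can be sketched side by side. In this illustrative NumPy skeleton, `fine_tune` is a placeholder for a real training pass, the criterion is a simple L2-norm score, and the channel counts mirror the running example (22 channels reduced to 16, about a 27% decrease); the assumed helper names are not from any library.

```python
import numpy as np

def prune_k_channels(w: np.ndarray, k: int) -> np.ndarray:
    """Drop the k output channels with the smallest L2 norms, preserving order."""
    scores = np.linalg.norm(w.reshape(w.shape[0], -1), axis=1)
    keep = np.sort(np.argsort(scores)[k:])
    return w[keep]

def fine_tune(w: np.ndarray) -> np.ndarray:
    """Stand-in for a real fine-tuning pass over training data (identity here)."""
    return w

w0 = np.random.default_rng(3).normal(size=(22, 3, 3, 3))

# Iterative: prune 2 channels per cycle, fine-tuning between cycles.
w_iter = w0
for _ in range(3):
    w_iter = fine_tune(prune_k_channels(w_iter, 2))

# One-shot: prune all 6 channels at once, then fine-tune once.
w_once = fine_tune(prune_k_channels(w0, 6))

# With an identity fine-tune and a fixed criterion, both paths reach the
# same structure; in practice only the accuracies differ, as in the text.
print(w_iter.shape, w_once.shape)  # → (16, 3, 3, 3) (16, 3, 3, 3)
```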

Advantages of Structured Pruning

+

Structured pruning brings forth a myriad of advantages that cater to various facets of model deployment and utilization, especially in environments where computational resources are constrained.

Computational Efficiency

By eliminating entire structures, such as neurons or channels, structured pruning significantly diminishes the computational load during both training and inference phases, thereby enabling faster model predictions and training convergence. Moreover, the removal of structures inherently reduces the model’s memory footprint, ensuring that it demands less storage and memory during operation, which is particularly beneficial in memory-constrained environments like TinyML systems.

Hardware Efficiency

Structured pruning often results in models that are more amenable to deployment on specialized hardware, such as Field-Programmable Gate Arrays (FPGAs) or Application-Specific Integrated Circuits (ASICs), due to the regularity and simplicity of the pruned architecture. With reduced computational requirements, it translates to lower energy consumption, which is crucial for battery-powered devices and sustainable computing practices.

Maintenance and Deployment

The pruned model, while smaller, retains its original architectural form, which can simplify the deployment pipeline and ensure compatibility with existing systems and frameworks. Also, with fewer parameters and simpler structures, the pruned model becomes easier to manage and monitor in production environments, potentially reducing the overhead associated with model maintenance and updates. Later on, when we dive into MLOps, this need will become apparent.


Unstructured Pruning

+

Unstructured pruning is, as its name suggests, pruning the model without regard to model-specific substructures. As mentioned above, it allows more aggressive pruning and can achieve higher model sparsities while maintaining accuracy, given fewer constraints on what can and cannot be pruned. Generally, post-training unstructured pruning consists of an importance criterion for individual model parameters/weights, pruning/removal of weights that fall below the criterion, and optional fine-tuning afterward to try to recover the accuracy lost during weight removal.

+

Unstructured pruning has some advantages over structured pruning: removing individual weights instead of entire model substructures often leads in practice to smaller accuracy drops. Furthermore, determining the importance criterion for an individual weight is generally much simpler than for an entire substructure of parameters, making unstructured pruning preferable for cases where that overhead is hard or unclear to compute. Structured pruning is also generally less flexible, as removing individual weights is simpler than removing entire substructures while ensuring the model still works.

+

Unstructured pruning, while offering the potential for significant model size reduction and enhanced deployability, brings with it challenges related to managing sparse representations and ensuring computational efficiency. It is particularly useful in scenarios where achieving the highest possible model compression is paramount and where the deployment environment can handle sparse computations efficiently.
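A quick back-of-the-envelope sketch shows why unstructured sparsity requires a sparse encoding to pay off (the matrix size, 90% sparsity target, and COO-style accounting below are illustrative assumptions): a dense array stores every entry regardless of how many are zero, while a coordinate-style sparse encoding stores roughly three values (row, column, value) per nonzero.

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(size=(256, 256))

# Unstructured pruning: zero out the smallest-magnitude 90% of individual weights.
thresh = np.sort(np.abs(w), axis=None)[int(w.size * 0.9) - 1]
w_sparse = w * (np.abs(w) > thresh)

# Dense storage is unchanged by sparsity; a COO-style sparse encoding
# needs about (row, col, value) per nonzero entry.
nnz = np.count_nonzero(w_sparse)
dense_entries = w_sparse.size
coo_entries = 3 * nnz
print(dense_entries, coo_entries)  # → 65536 19662
```

At 90% sparsity the sparse encoding wins by roughly 3x, but below about two-thirds sparsity the per-entry index overhead would make it larger than the dense array, which is one reason unstructured pruning only helps when hardware and software can exploit high sparsity.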

+

Table tbl-pruning_methods provides a concise comparison between structured and unstructured pruning. In this table, aspects related to the nature and architecture of the pruned model (Definition, Model Regularity, and Compression Level) are grouped together, followed by aspects related to computational considerations (Computational Efficiency and Hardware Compatibility), and ending with aspects related to the implementation and adaptation of the pruned model (Implementation Complexity and Fine-Tuning Complexity). Both pruning strategies offer unique advantages and challenges, as shown in Table tbl-pruning_methods, and the selection between them should be influenced by specific project and deployment requirements.

Table 9.1: Comparison of structured versus unstructured pruning.

Aspect | Structured Pruning | Unstructured Pruning
Definition | Pruning entire structures (e.g., neurons, channels, layers) within the network | Pruning individual weights or neurons, resulting in sparse matrices or non-regular network structures
Model Regularity | Maintains a regular, structured network architecture | Results in irregular, sparse network architectures
Compression Level | May offer limited model compression compared to unstructured pruning | Can achieve higher model compression due to fine-grained pruning
Computational Efficiency | Typically more computationally efficient due to maintaining regular structures | Can be computationally inefficient due to sparse weight matrices, unless specialized hardware/software is used
Hardware Compatibility | Generally better compatible with various hardware due to regular structures | May require hardware that efficiently handles sparse computations to realize benefits
Implementation Complexity | Often simpler to implement and manage due to maintaining network structure | Can be complex to manage and compute due to sparse representations
Fine-Tuning Complexity | May require less complex fine-tuning strategies post-pruning | Might necessitate more complex retraining or fine-tuning strategies post-pruning

In Figure fig-structured-unstructured we have examples that illustrate the differences between unstructured and structured pruning. Observe that unstructured pruning can lead to models that no longer obey the high-level structural guarantees of their original unpruned counterparts: the left network is no longer fully connected after pruning. Structured pruning, on the other hand, maintains those invariants: in the middle, the fully connected network is pruned so that the pruned network remains fully connected; likewise, the CNN maintains its convolutional structure, albeit with fewer filters.

Figure 9.5: Unstructured vs structured pruning. Credit: Qi et al. (2021).

Qi, Chen, Shibo Shen, Rongpeng Li, Zhifeng Zhao, Qing Liu, Jing Liang, and Honggang Zhang. 2021. “An Efficient Pruning Scheme of Deep Neural Networks for Internet of Things Applications.” EURASIP Journal on Advances in Signal Processing 2021 (1). https://doi.org/10.1186/s13634-021-00744-4.

Lottery Ticket Hypothesis


Pruning has evolved from a purely post-training technique that came at the cost of some accuracy, to a powerful meta-learning approach applied during training to reduce model complexity. This advancement in turn improves compute, memory, and latency efficiency at both training and inference.


A breakthrough finding that catalyzed this evolution was the lottery ticket hypothesis, empirically discovered by Jonathan Frankle and Michael Carbin (Frankle and Carbin 2019) and later formalized. Their work states that within dense neural networks, there exist sparse subnetworks, referred to as “winning tickets,” that can match or even exceed the performance of the original model when trained in isolation. Specifically, these winning tickets, when initialized using the same weights as the original network, can achieve similarly high training convergence and accuracy on a given task.

Frankle, Jonathan, and Michael Carbin. 2019. “The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.” In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. https://openreview.net/forum?id=rJl-b3RcF7.

The intuition behind this hypothesis is that, during the training process of a neural network, many neurons and connections become redundant or unimportant, particularly with the inclusion of training techniques that encourage redundancy, like dropout. Identifying these “winning tickets,” pruning away the rest of the network, and retraining from the original initialization allows for faster training and more efficient models, as the tickets contain the essential decision information for the task. Furthermore, as the bias-variance tradeoff suggests, these tickets suffer less from overparameterization and thus generalize better rather than overfitting to the task.


In Figure fig-lottery-ticket-hypothesis we have an example experiment showing pruning and training runs on a fully connected LeNet over a variety of pruning ratios. In the left plot, notice how heavy pruning reveals a more efficient subnetwork (in green) that is 21.1% the size of the original network (in blue). The subnetwork reaches higher accuracy, and does so faster, than the unpruned version (the green line sits above the blue line). However, pruning has a limit (a sweet spot), and pruning beyond it produces performance degradation, eventually dropping below the unpruned version’s performance (notice how the red, purple, and brown subnetworks gradually drop in accuracy) due to the significant loss in the number of parameters.

Figure 9.6: Lottery ticket hypothesis experiments.

The following is the process of finding the winning lottery ticket subnetwork, as also shown in Figure fig-winning-ticket (left side):

1. Initialize the network’s weights to random values.

2. Train the network until it converges to the desired performance.

3. Prune out some percentage of the edges with the lowest weight values.

4. Reinitialize the network with the same random values from step 1.

5. Repeat steps 2-4 a number of times, or for as long as the accuracy does not significantly degrade.

When we finish, we are left with a pruned network (Figure fig-winning-ticket right side), which is a subnetwork of the one we start with. The subnetwork should have a significantly smaller structure, while maintaining a comparable level of accuracy.
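The five steps above can be expressed as a loop. The following is a toy sketch assuming a `train` function supplied by the caller; the stand-in “training” below merely perturbs the weights so the example runs end to end, and the 20% per-round pruning fraction is an illustrative choice.

```python
import numpy as np

def lottery_ticket_round(train, w_init, mask, prune_frac=0.2):
    """One round of iterative magnitude pruning: train, prune the
    smallest surviving weights, then rewind to the original init."""
    w_trained = train(w_init * mask)                  # step 2: train
    surviving = np.abs(w_trained[mask])
    k = int(prune_frac * surviving.size)
    if k > 0:
        thresh = np.partition(surviving, k - 1)[k - 1]
        mask = mask & (np.abs(w_trained) > thresh)    # step 3: prune
    return w_init * mask, mask                        # step 4: rewind to init

rng = np.random.default_rng(0)
w0 = rng.normal(size=100)                             # step 1: random init
mask = np.ones_like(w0, dtype=bool)
train = lambda w: w + 0.01 * rng.normal(size=w.shape) # toy "training"
for _ in range(3):                                    # step 5: repeat
    w, mask = lottery_ticket_round(train, w0, mask)
print("remaining weights:", mask.sum())
```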

Figure 9.7: Finding the winning ticket subnetwork.

Challenges & Limitations


There is no free lunch with pruning optimizations; each choice brings both improvements and costs to consider. Below we discuss some tradeoffs for practitioners to weigh.

Quality vs. Size Reduction

A key challenge in both structured and unstructured pruning is balancing size reduction with maintaining or improving predictive performance. This trade-off becomes more complex with unstructured pruning, where individual weight removal can create sparse weight matrices. Ensuring the pruned model retains generalization capacity while becoming more computationally efficient is critical, often requiring extensive experimentation and validation.

Determining Pruning Criteria

Establishing a robust pruning criterion, whether for removing entire structures (structured pruning) or individual weights (unstructured pruning), is challenging. The criterion must accurately identify elements whose removal minimally impacts performance. For unstructured pruning, this might involve additional complexities due to the potential for generating sparse weight matrices, which can be computationally inefficient on certain hardware.

Fine-Tuning and Retraining

Post-pruning fine-tuning is imperative in both structured and unstructured pruning to recover lost performance and stabilize the model. The challenge encompasses determining the extent, duration, and nature of the fine-tuning process, which can be influenced by the pruning method and the degree of pruning applied.

Scalability of Pruning Strategies

Ensuring that pruning strategies, whether structured or unstructured, are scalable and applicable across various models and domains is challenging. Unstructured pruning might introduce additional challenges related to managing and deploying models with sparse weight matrices, especially in hardware that is not optimized for sparse computations.

Hardware Compatibility and Efficiency

Especially pertinent to unstructured pruning, hardware compatibility and efficiency become critical. Unstructured pruning often results in sparse weight matrices, which may not be efficiently handled by certain hardware, potentially negating the computational benefits of pruning (see Figure fig-sparse-matrix). Ensuring that pruned models, particularly those resulting from unstructured pruning, are compatible and efficient on the target hardware is a significant consideration.

Complexity in Implementing Pruning Algorithms

Unstructured pruning might introduce additional complexity in implementing pruning algorithms due to the need to manage sparse representations of weights. Developing or adapting algorithms that can efficiently handle, store, and compute sparse weight matrices is an additional challenge and consideration in unstructured pruning.
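To make the storage issue concrete, the following sketch shows one common sparse representation, compressed sparse row (CSR), and a matrix-vector product that touches only the stored nonzeros. It is a pedagogical sketch in plain Python; production systems would use an optimized sparse library.

```python
def to_csr(dense):
    """Convert a dense 2-D list of floats to CSR (compressed sparse row):
    nonzero values, their column indices, and per-row offsets."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # row r spans values[row_ptr[r]:row_ptr[r+1]]
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x, skipping all pruned (zero) entries."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for i in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[i] * x[col_idx[i]]
        y.append(acc)
    return y

# A heavily pruned 3x3 weight matrix: only 3 of 9 entries survive.
A = [[0.0, 2.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.0, 0.0, 3.0]]
vals, cols, ptrs = to_csr(A)
print(csr_matvec(vals, cols, ptrs, [1.0, 1.0, 1.0]))
```

Note the tradeoff the text describes: CSR saves memory only when the matrix is sparse enough, and the indirect indexing is exactly what general-purpose hardware handles poorly.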


9.2.2 Model Compression


Model compression techniques are crucial for deploying deep learning models on resource-constrained devices. These techniques aim to create smaller, more efficient models that preserve the predictive performance of the original models.


Knowledge Distillation


One popular technique is knowledge distillation (KD), which transfers knowledge from a large, complex “teacher” model to a smaller “student” model. The key idea is to train the student model to mimic the teacher’s outputs. The concept of KD was first popularized by Hinton, Vinyals, and Dean (2015).

Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. 2015. “Distilling the Knowledge in a Neural Network.” ArXiv Preprint. https://arxiv.org/abs/1503.02531.
Overview and Benefits

At its core, KD strategically leverages the refined outputs of a pre-trained teacher model to transfer knowledge to a smaller student model. The key technique is using “soft targets” derived from the teacher’s probabilistic predictions. Specifically, the teacher’s outputs are passed through a temperature-scaled softmax function, yielding softened probability distributions over classes. This softening provides richer supervision signals for the student model compared to hard target labels.


The loss function is another critical component that typically amalgamates a distillation loss, which measures the divergence between the teacher and student outputs, and a classification loss, which ensures the student model adheres to the true data labels. The Kullback-Leibler (KL) divergence is commonly employed to quantify the distillation loss, providing a measure of the discrepancy between the probability distributions output by the teacher and student models.


Another core concept is “temperature scaling” in the softmax function. It plays the role of controlling the granularity of the information distilled from the teacher model. A higher temperature parameter produces softer, more informative distributions, thereby facilitating the transfer of more nuanced knowledge to the student model. However, it also introduces the challenge of effectively balancing the trade-off between the informativeness of the soft targets and the stability of the training process.


These components, when adeptly configured and harmonized, enable the student model to assimilate the teacher model’s knowledge, crafting a pathway towards efficient and robust smaller models that retain the predictive prowess of their larger counterparts. Figure fig-knowledge-distillation visualizes the training procedure of knowledge distillation. Note how the logits or soft labels of the teacher model are used to provide a distillation loss for the student model to learn from.

Figure 9.9: Knowledge distillation training process. Credit: IntelLabs (2023).

IntelLabs. 2023. “Knowledge Distillation - Neural Network Distiller.” https://intellabs.github.io/distiller/knowledge_distillation.html.
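The components described above — the temperature-scaled softmax, a KL-based distillation loss, and a hard-label classification loss — combine as in the following NumPy sketch. The temperature T = 4 and weighting alpha = 0.5 are illustrative hyperparameter choices, not values from the text.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of KL(teacher || student) on temperature-softened
    outputs and cross-entropy against the hard labels."""
    p_t = softmax(teacher_logits, T)   # soft targets from the teacher
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    # The T**2 factor rescales the gradients of the softened term.
    return alpha * (T ** 2) * kl + (1 - alpha) * ce

rng = np.random.default_rng(0)
teacher_logits = rng.normal(size=(8, 10))
student_logits = rng.normal(size=(8, 10))
labels = rng.integers(0, 10, size=8)
loss = distillation_loss(student_logits, teacher_logits, labels)
print(f"distillation loss: {loss:.3f}")
```

A sanity check on the design: when the student exactly matches the teacher, the KL term vanishes, leaving only the classification loss.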
Challenges

However, KD has a unique set of challenges and considerations that researchers and practitioners must attentively address. One of the challenges is in the meticulous tuning of hyperparameters, such as the temperature parameter in the softmax function and the weighting between the distillation and classification loss in the objective function. Striking a balance that effectively leverages the softened outputs of the teacher model while maintaining fidelity to the true data labels is non-trivial and can significantly impact the student model’s performance and generalization capabilities.


Furthermore, the architecture of the student model itself poses a considerable challenge. Designing a model that is compact to meet computational and memory constraints, while still being capable of assimilating the essential knowledge from the teacher model, demands a nuanced understanding of model capacity and the inherent trade-offs involved in compression. The student model must be carefully architected to navigate the dichotomy of size and performance, ensuring that the distilled knowledge is meaningfully captured and utilized. Moreover, the choice of teacher model, which inherently influences the quality and nature of the knowledge to be transferred, is important and it introduces an added layer of complexity to the KD process.


These challenges underscore the necessity for a thorough and nuanced approach to implementing KD, ensuring that the resultant student models are both efficient and effective in their operational contexts.


Low-rank Matrix Factorization


Similar in approximation theme, low-rank matrix factorization (LRMF) is a mathematical technique used in linear algebra and data analysis to approximate a given matrix by decomposing it into two or more lower-dimensional matrices. The fundamental idea is to express a high-dimensional matrix as a product of lower-rank matrices, which can help reduce the complexity of data while preserving its essential structure. Mathematically, given a matrix \(A \in \mathbb{R}^{m \times n}\), LRMF seeks matrices \(U \in \mathbb{R}^{m \times k}\) and \(V \in \mathbb{R}^{k \times n}\) such that \(A \approx UV\), where \(k\) is the rank and is typically much smaller than \(m\) and \(n\).

Background and Benefits

One of the seminal works in the realm of matrix factorization, particularly in the context of recommendation systems, is the paper by Koren, Bell, and Volinsky (2009). The authors delve into various factorization models, providing insights into their efficacy in capturing the underlying patterns in the data and enhancing predictive accuracy in collaborative filtering. LRMF has been widely applied in recommendation systems (such as Netflix, Facebook, etc.), where the user-item interaction matrix is factorized to capture latent factors corresponding to user preferences and item attributes.

Koren, Yehuda, Robert Bell, and Chris Volinsky. 2009. “Matrix Factorization Techniques for Recommender Systems.” Computer 42 (8): 30–37. https://doi.org/10.1109/mc.2009.263.

The main advantage of low-rank matrix factorization lies in its ability to reduce data dimensionality as shown in Figure fig-matrix-factorization, where there are fewer parameters to store, making it computationally more efficient and reducing storage requirements at the cost of some additional compute. This can lead to faster computations and more compact data representations, which is especially valuable when dealing with large datasets. Additionally, it may aid in noise reduction and can reveal underlying patterns and relationships in the data.


Figure fig-matrix-factorization illustrates the decrease in parameterization enabled by low-rank matrix factorization. Observe how the matrix \(M\) can be approximated by the product of matrices \(L_k\) and \(R_k^T\). For intuition, most fully connected layers in networks are stored as a projection matrix \(M\), which requires \(m \times n\) parameters to be loaded on computation. However, by decomposing and approximating it as the product of two lower-rank matrices, we only need to store \(m \times k + k \times n\) parameters, while incurring the additional compute cost of the matrix multiplication. So long as \(k < mn/(m+n)\) (for square matrices, \(k < n/2\)), the factorization stores fewer parameters in total while adding a computation of runtime \(O(mkn)\) (Gu 2023).
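The parameter arithmetic above can be checked directly with a truncated SVD. In this sketch the matrix is constructed to be exactly rank \(k\), so the rank-\(k\) factorization recovers it almost exactly; for real weight matrices the reconstruction is only approximate, and the sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 256, 512, 32

# Build a weight matrix that is exactly rank k, then recover a
# factorization M ≈ L @ R with a truncated SVD.
M = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))

U, s, Vt = np.linalg.svd(M, full_matrices=False)
L = U[:, :k] * s[:k]     # m x k factor (singular values folded in)
R = Vt[:k, :]            # k x n factor

full_params = m * n                  # parameters to store M directly
factored_params = m * k + k * n      # parameters to store L and R
rel_err = np.linalg.norm(M - L @ R) / np.linalg.norm(M)
print(factored_params / full_params, rel_err)
```

Here the factored form stores 24576 values instead of 131072 — under a fifth of the original — at the cost of one extra matrix multiply at inference time.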

Gu, Ivy. 2023. “Deep Learning Model Compression (II).” Medium. https://ivygdy.medium.com/deep-learning-model-compression-ii-546352ea9453.

Figure 9.10: Low-rank matrix factorization. Credit: The Clever Machine.
Challenges

In practice, however, practitioners and researchers encounter a spectrum of challenges that necessitate careful attention and strategic approaches. As with any lossy compression technique, we may lose information during the approximation process: choosing a rank that balances the information lost against the computational savings is tricky and adds yet another hyperparameter to tune.


Low-rank matrix factorization is a valuable tool for dimensionality reduction and making compute fit onto edge devices but, like other techniques, needs to be carefully tuned to the model and task at hand. A key challenge resides in managing the computational complexity inherent to LRMF, especially when grappling with high-dimensional and large-scale data. The computational burden, particularly in the context of real-time applications and massive datasets, remains a significant hurdle for effectively using LRMF.


Moreover, the conundrum of choosing the optimal rank, \(k\), for the factorization introduces another layer of complexity. The selection of \(k\) inherently involves a trade-off between approximation accuracy and model simplicity, and identifying a rank that adeptly balances these conflicting objectives often demands a combination of domain expertise, empirical validation, and sometimes, heuristic approaches. The challenge is further amplified when the data encompasses noise or when the inherent low-rank structure is not pronounced, making the determination of a suitable \(k\) even more elusive.


Handling missing or sparse data, a common occurrence in applications like recommendation systems, poses another substantial challenge. Traditional matrix factorization techniques, such as Singular Value Decomposition (SVD), are not directly applicable to matrices with missing entries, necessitating the development and application of specialized algorithms that can factorize incomplete matrices while mitigating the risks of overfitting to the observed entries. This often involves incorporating regularization terms or constraining the factorization in specific ways, which in turn introduces additional hyperparameters that need to be judiciously selected.


Furthermore, in scenarios where data evolves or grows over time, developing LRMF models that can adapt to new data without necessitating a complete re-factorization is a critical yet challenging endeavor. Online and incremental matrix factorization algorithms seek to address this by enabling the update of factorized matrices as new data arrives, yet ensuring stability, accuracy, and computational efficiency in these dynamic settings remains an intricate task. This is particularly challenging in the space of TinyML, where edge redeployment for refreshed models can be quite challenging.


Tensor Decomposition


Similar to low-rank matrix factorization, more complex models may store weights in higher dimensions, such as tensors: tensor decomposition is the higher-dimensional analogue of matrix factorization, where a model tensor is decomposed into lower rank components (see Figure fig-tensor-decomposition), which again are easier to compute on and store but may suffer from the same issues as mentioned above of information loss and nuanced hyperparameter tuning. Mathematically, given a tensor \(\mathcal{A}\), tensor decomposition seeks to represent \(\mathcal{A}\) as a combination of simpler tensors, facilitating a compressed representation that approximates the original data while minimizing the loss of information.
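For intuition, the simplest case is a rank-1 third-order tensor, which is fully determined by three vectors. The sketch below is illustrative only: practical CP or Tucker decompositions sum several such components and must be fit numerically, but the storage arithmetic is the same.

```python
import numpy as np

# A rank-1 third-order tensor is an outer product of three vectors:
# A[i, j, k] = a[i] * b[j] * c[k], so storing the factors needs
# (I + J + K) numbers instead of I * J * K.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=30), rng.normal(size=40), rng.normal(size=50)
A = np.einsum('i,j,k->ijk', a, b, c)

dense_params = A.size                     # 30 * 40 * 50 = 60000
factor_params = a.size + b.size + c.size  # 30 + 40 + 50 = 120

# The dense tensor is recovered exactly from its factors.
A_hat = a[:, None, None] * b[None, :, None] * c[None, None, :]
print(dense_params, factor_params)
```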


The work of Tamara G. Kolda and Brett W. Bader, “Tensor Decompositions and Applications” (2009), stands out as a seminal paper in the field. The authors provide a comprehensive overview of various tensor decomposition methods, exploring their mathematical underpinnings, algorithms, and a wide array of applications, ranging from signal processing to data mining. We discuss it here because it has huge potential for system performance improvements, particularly in the space of TinyML, where throughput and memory footprint savings are crucial to the feasibility of deployments.

Figure 9.11: Tensor decomposition. Credit: Xinyu (n.d.).

Xinyu, Chen. n.d.

Exercise 9.2 (Scalable Model Compression with TensorFlow)  


This Colab dives into a technique for compressing models while maintaining high accuracy. The key idea is to train a model with an extra penalty term that encourages the model to be more compressible. Then, the model is encoded using a special coding scheme that aligns with this penalty. This approach allows you to achieve compressed models that perform just as well as the original models and is useful in deploying models to devices with limited resources like mobile phones and edge devices.



9.2.3 Edge-Aware Model Design


Finally, we reach the other end of the hardware-software gradient, where we specifically make model architecture decisions directly given knowledge of the edge devices we wish to deploy on.


As covered in previous sections, edge devices are constrained by limitations on memory and parallelizable computation: as such, if there are critical inference speed requirements, computations must be flexible enough to satisfy hardware constraints, something that can be addressed at the model architecture level. Furthermore, trying to cram SOTA large ML models onto edge devices, even after pruning and compression, is generally infeasible purely due to size: the model complexity itself must be chosen with more nuance so that it feasibly fits the device. Edge ML developers have approached this architectural challenge both by designing bespoke edge ML model architectures and through device-aware neural architecture search (NAS), which can more systematically generate feasible on-device model architectures.


Model Design Techniques


One edge-friendly architecture design is depthwise separable convolutions. Commonly used in deep learning for image processing, it consists of two distinct steps: the first is the depthwise convolution, where each input channel is convolved independently with its own set of learnable filters, as shown in Figure fig-depthwise-convolution. This step reduces computational complexity by a significant margin compared to standard convolutions, as it drastically reduces the number of parameters and computations involved. The second step is the pointwise convolution, which combines the outputs of the depthwise convolution channels through a 1x1 convolution, creating inter-channel interactions.

This approach offers several advantages: reduced model size, faster inference times, and often better generalization due to fewer parameters, making it suitable for mobile and embedded applications. However, depthwise separable convolutions may not capture complex spatial interactions as effectively as standard convolutions and might require more depth (layers) to achieve the same level of representational power, potentially leading to longer training times. Nonetheless, their efficiency in terms of parameters and computation makes them a popular choice in modern convolutional neural network architectures.
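The parameter savings are easy to compute by hand. For a k-by-k convolution mapping c_in input channels to c_out output channels, a standard convolution stores c_in * c_out * k * k weights, while the depthwise-plus-pointwise pair stores c_in * k * k + c_in * c_out — roughly a 1/c_out + 1/k^2 fraction of the original. A quick sketch (bias terms omitted for simplicity):

```python
def standard_conv_params(c_in, c_out, k):
    # One k x k filter per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 conv mixing channels
    return depthwise + pointwise

# Example: a 3x3 convolution mapping 128 channels to 128 channels.
std = standard_conv_params(128, 128, 3)        # 147456 weights
sep = depthwise_separable_params(128, 128, 3)  # 17536 weights
print(sep / std)  # about 0.12, i.e. roughly 1/9 + 1/128
```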

Figure 9.12: Depthwise separable convolutions. Credit: Hegde (2023).

Hegde, Sumant. 2023. “An Introduction to Separable Convolutions - Analytics Vidhya.” https://www.analyticsvidhya.com/blog/2021/11/an-introduction-to-separable-convolutions/.

Example Model Architectures


In this vein, a number of recent architectures have been, from inception, specifically designed for maximizing accuracy on an edge deployment, notably SqueezeNet, MobileNet, and EfficientNet.

• SqueezeNet by Iandola et al. (2016), for instance, utilizes a compact architecture with 1x1 convolutions and fire modules to minimize the number of parameters while maintaining strong accuracy.

• MobileNet by Howard et al. (2017), on the other hand, employs the aforementioned depthwise separable convolutions to reduce both computation and model size.

• EfficientNet by Tan and Le (2019) takes a different approach by optimizing network scaling (i.e., varying the depth, width, and resolution of a network) and compound scaling, a more nuanced variation of network scaling, to achieve superior performance with fewer parameters.

Iandola, Forrest N, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. 2016. “SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <0.5 MB Model Size.” ArXiv Preprint abs/1602.07360. https://arxiv.org/abs/1602.07360.

Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv Preprint. https://arxiv.org/abs/1704.04861.

Tan, Mingxing, and Quoc V. Le. 2019. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” ArXiv Preprint. https://arxiv.org/abs/1905.11946.

These models are essential in the context of edge computing where limited processing power and memory require lightweight yet effective models that can efficiently perform tasks such as image recognition, object detection, and more. Their design principles showcase the importance of intentionally tailored model architecture for edge computing, where performance and efficiency must fit within constraints.


9.3 Efficient Numerics Representation


Numerics representation involves a myriad of considerations, including, but not limited to, the precision of numbers, their encoding formats, and the arithmetic operations facilitated. It invariably involves a rich array of different trade-offs, where practitioners are tasked with navigating between numerical accuracy and computational efficiency. For instance, while lower-precision numerics may offer the allure of reduced memory usage and expedited computations, they concurrently present challenges pertaining to numerical stability and potential degradation of model accuracy.


Motivation


The imperative for efficient numerics representation arises, particularly as efficient model optimization alone falls short when adapting models for deployment on low-powered edge devices operating under stringent constraints.


Beyond minimizing memory demands, efficient numerics representation contributes in several fundamental ways. By diminishing computational intensity, efficient numerics can amplify computational speed, allowing more complex models to run on low-powered devices. Reducing the bit precision of weights and activations in heavily over-parameterized models condenses model size for edge devices without significantly harming predictive accuracy. And since neural networks are layered, efficient numerics has the unique advantage of varying numeric precision across layers, minimizing precision in layers that tolerate it while preserving higher precision in sensitive layers.


In this section, we will dive into how practitioners can harness the principles of hardware-software co-design at the lowest levels of a model to facilitate compatibility with edge devices. Kicking off with an introduction to the numerics, we will examine its implications for device memory and computational complexity. Subsequently, we will embark on a discussion regarding the trade-offs entailed in adopting this strategy, followed by a deep dive into a paramount method of efficient numerics: quantization.


9.3.1 The Basics


Types


Numerical data, the bedrock upon which machine learning models stand, manifest in two primary forms. These are integers and floating point numbers.


Integers: Whole numbers, devoid of fractional components, integers (e.g., -3, 0, 42) are key in scenarios demanding discrete values. For instance, in ML, class labels in a classification task might be represented as integers, where “cat”, “dog”, and “bird” could be encoded as 0, 1, and 2 respectively.


Floating-Point Numbers: Encompassing real numbers, floating-point numbers (e.g., -3.14, 0.01, 2.71828) afford the representation of values with fractional components. In ML model parameters, weights might be initialized with small floating-point values, such as 0.001 or -0.045, to commence the training process. Currently, there are 4 popular precision formats discussed below.


Variable bit widths: Beyond the standard widths, research is ongoing into extremely low bit-width numerics, even down to binary or ternary representations. Extremely low bit-width operations can offer significant speedups and reduce power consumption even further. While challenges remain in maintaining model accuracy with such drastic quantization, advances continue to be made in this area.


Precision


Precision, delineating the exactness with which a number is represented, typically divides into double, single, and half precision; in recent years, a number of other formats have emerged to better support machine learning tasks efficiently on the underlying hardware.


Double Precision (Float64): Allocating 64 bits, double precision (e.g., 3.141592653589793) provides heightened accuracy, albeit demanding augmented memory and computational resources. In scientific computations, where precision is paramount, variables like π might be represented with Float64.


Single Precision (Float32): With 32 bits at its disposal, single precision (e.g., 3.1415927) strikes a balance between numerical accuracy and memory conservation. In ML, Float32 might be employed to store weights during training to maintain a reasonable level of precision.


Half Precision (Float16): Constrained to 16 bits, half precision (e.g., 3.14) curtails memory usage and can expedite computations, albeit sacrificing numerical accuracy and range. In ML, especially during inference on resource-constrained devices, Float16 might be utilized to reduce the model’s memory footprint.


Bfloat16: The Brain Floating-Point Format, or Bfloat16, also employs 16 bits but allocates them differently compared to FP16: 1 bit for the sign, 8 bits for the exponent (resulting in the same number range as Float32), and 7 bits for the fraction. This format, developed by Google, prioritizes a larger exponent range over precision, making it particularly useful in deep learning applications where the dynamic range is crucial.
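To make the bit layouts concrete, the following sketch emulates bfloat16 by zeroing the low 16 bits of a float32 (truncation rather than round-to-nearest, chosen for clarity only — hardware implementations round). Note how float16 overflows at large magnitudes while the emulated bfloat16 inherits float32's exponent range.

```python
import numpy as np

def to_bfloat16(x):
    """Emulate bfloat16 by keeping only the top 16 bits of a float32
    (1 sign + 8 exponent + 7 fraction bits); truncating, not rounding."""
    bits = np.atleast_1d(np.asarray(x, dtype=np.float32)).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

pi = np.float32(np.pi)
print(pi, np.float16(pi), to_bfloat16(pi))  # fewer fraction bits than float32

# Range: float16 tops out near 65504, bfloat16 matches float32's range.
big = np.float32(1e38)
print(np.float16(big), to_bfloat16(big))    # inf vs. a finite value
```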


Figure fig-3float illustrates the differences between the three floating-point formats: Float32, Float16, and BFloat16.

Figure 9.13: Three floating-point formats.

Integer: Integer representations are made using 8, 4, and 2 bits. They are often used during the inference phase of neural networks, where the weights and activations of the model are quantized to these lower precisions. Integer representations are deterministic and offer significant speed and memory advantages over floating-point representations. For many inference tasks, especially on edge devices, the slight loss in accuracy due to quantization is often acceptable given the efficiency gains. An extreme form of integer numerics is for binary neural networks (BNNs), where weights and activations are constrained to one of two values: either +1 or -1.
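A minimal sketch of symmetric post-training INT8 quantization illustrates the idea. The per-tensor, max-based scale below is the simplest possible calibration choice; real toolchains offer more careful calibration to limit accuracy loss.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric quantization: map the largest magnitude to +/-127."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)
q, scale = quantize_int8(w)          # 1 byte per weight instead of 4
w_hat = dequantize(q, scale)
print("max abs error:", np.max(np.abs(w - w_hat)))  # bounded by scale / 2
```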

Precision: FP32 (Floating Point 32-bit)
  Pros: Standard precision used in most deep learning frameworks; high accuracy due to ample representational capacity; well-suited for training.
  Cons: High memory usage; slower inference times compared to quantized models; higher energy consumption.

Precision: FP16 (Floating Point 16-bit)
  Pros: Reduces memory usage compared to FP32; speeds up computations on hardware that supports FP16; often used in mixed-precision training to balance speed and accuracy.
  Cons: Lower representational capacity compared to FP32; risk of numerical instability in some models or layers.

Precision: INT8 (8-bit Integer)
  Pros: Significantly reduced memory footprint compared to floating-point representations; faster inference if hardware supports INT8 computations; suitable for many post-training quantization scenarios.
  Cons: Quantization can lead to some accuracy loss; requires careful calibration during quantization to minimize accuracy degradation.

Precision: INT4 (4-bit Integer)
  Pros: Even lower memory usage than INT8; further speed-up potential for inference.
  Cons: Higher risk of accuracy loss compared to INT8; calibration during quantization becomes more critical.

Precision: Binary
  Pros: Minimal memory footprint (only 1 bit per parameter); extremely fast inference due to bitwise operations; power efficient.
  Cons: Significant accuracy drop for many tasks; complex training dynamics due to extreme quantization.

Precision: Ternary
  Pros: Low memory usage, though slightly more than binary; offers a middle ground between representation and efficiency.
  Cons: Accuracy might still be lower than higher-precision models; training dynamics can be complex.
+
+
+

Numeric Encoding and Storage

+

Numeric encoding, the art of transmuting numbers into a computer-amenable format, and their subsequent storage are critical for computational efficiency. For instance, floating-point numbers might be encoded using the IEEE 754 standard, which apportions bits among sign, exponent, and fraction components, thereby enabling the representation of a vast array of values with a single format. Beyond IEEE 754, a number of new floating-point formats have been defined specifically for AI workloads:

+
  • bfloat16 - A 16-bit floating-point format introduced by Google. It has 8 bits for the exponent, 7 bits for the mantissa, and 1 bit for the sign, offering a reduced-precision compromise between 32-bit floats and 8-bit integers. Supported on many hardware accelerators.
  • posit - A configurable format that can represent different levels of precision based on exponent bits. Aims to be more efficient than IEEE 754 binary floats, with adjustable dynamic range and precision.
  • Flexpoint - A format introduced by Intel that can dynamically adjust precision across layers or within a layer. Allows tuning precision to accuracy and hardware requirements.
  • BF16ALT - A proposed 16-bit format by ARM as an alternative to bfloat16. Uses an additional exponent bit to prevent overflow/underflow.
  • TF32 - Introduced by Nvidia for Ampere GPUs. Keeps the 8-bit exponent of FP32 but truncates the mantissa to 10 bits, improving model training performance while maintaining accuracy.
  • FP8 - An 8-bit floating-point format; common variants allocate 4 or 5 bits to the exponent and the remainder to the mantissa (the E4M3 and E5M2 layouts). Enables better dynamic range than 8-bit integers.

The key goals of these new formats are to provide lower precision alternatives to 32-bit floats for better computational efficiency and performance on AI accelerators while maintaining model accuracy. They offer different tradeoffs in terms of precision, range and implementation cost/complexity.
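To make the precision-versus-range trade-off concrete, the sketch below derives the largest finite value and the machine epsilon of a format directly from its bit layout. It assumes a standard IEEE-style layout (1 sign bit, E exponent bits, M mantissa bits, implicit leading 1) and is an illustrative helper, not library code:

```python
# Sketch: derive range/precision properties of floating-point formats
# from their bit layouts (1 sign bit + E exponent bits + M mantissa bits).

def fp_properties(exp_bits, man_bits):
    """Return (largest finite value, machine epsilon) for an IEEE-style format."""
    e_max = 2 ** (exp_bits - 1) - 1              # maximum unbiased exponent
    max_val = (2 - 2 ** -man_bits) * 2.0 ** e_max
    epsilon = 2.0 ** -man_bits                   # gap between 1.0 and the next value
    return max_val, epsilon

formats = {
    "float32":  (8, 23),
    "float16":  (5, 10),
    "bfloat16": (8, 7),   # same exponent range as float32, less precision
}

for name, (e, m) in formats.items():
    mx, eps = fp_properties(e, m)
    print(f"{name:8s} max ~ {mx:.3e}  epsilon = {eps:.2e}")
```

Running this shows why bfloat16 keeps nearly the full float32 range (~3.4e38) while float16 tops out at 65504, at the cost of a much coarser epsilon.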

+
+
+
+

9.3.2 Efficiency Benefits

+

Numerical efficiency matters for machine learning workloads for a number of reasons:

+

Computational Efficiency: High-precision computations (like FP32 or FP64) can be slow and resource-intensive. By reducing numeric precision, one can achieve faster computation times, especially on specialized hardware that supports lower precision.

+

Memory Efficiency: Storage requirements decrease with reduced numeric precision. For instance, FP16 requires half the memory of FP32. This is crucial when deploying models to edge devices with limited memory or when working with very large models.

+

Power Efficiency: Lower precision computations often consume less power, which is especially important for battery-operated devices.

+

Noise Introduction: Interestingly, the noise introduced by using lower precision can sometimes act as a regularizer, helping to prevent overfitting in some models.

+

Hardware Acceleration: Many modern AI accelerators and GPUs are optimized for lower precision operations, leveraging the efficiency benefits of such numerics.

+

Efficient numerics is not just about reducing the bit-width of numbers but understanding the trade-offs between accuracy and efficiency. As machine learning models become more pervasive, especially in real-world, resource-constrained environments, the focus on efficient numerics will continue to grow. By thoughtfully selecting and leveraging the appropriate numeric precision, one can achieve robust model performance while optimizing for speed, memory, and energy.

+
+
+

9.3.3 Numeric Representation Nuances

+

There are a number of nuances with numerical representations for ML that require us to have an understanding of both the theoretical and practical aspects of numerics representation, as well as a keen awareness of the specific requirements and constraints of the application domain.

+
+

Memory Usage

+

The memory footprint of ML models, particularly those of considerable complexity and depth, can be substantial, thereby posing a significant challenge in both training and deployment phases. For instance, a deep neural network with 100 million parameters, represented using Float32 (32 bits or 4 bytes per parameter), would necessitate approximately 400 MB of memory just for storing the model weights. This does not account for additional memory requirements during training for storing gradients, optimizer states, and forward pass caches, which can further amplify the memory usage, potentially straining the resources on certain hardware, especially edge devices with limited memory capacity.

+
+
+

Impact on Model Parameters and Weights

+

The numeric representation casts a significant impact on the storage and computational requisites of ML model parameters and weights. For instance, a model utilizing Float64 for weights will demand double the memory and potentially increased computational time compared to a counterpart employing Float32. A weight matrix, for instance, with dimensions [1000, 1000] using Float64 would consume approximately 8MB of memory, whereas using Float32 would halve this to approximately 4MB.
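The memory arithmetic above is easy to verify directly. The sketch below reproduces both figures with NumPy; the parameter count and matrix shape are the illustrative values from the text, not a real network:

```python
import numpy as np

# Sketch: memory footprint of model weights at different precisions.
params = 100_000_000                               # 100M-parameter network
for dt in (np.float64, np.float32, np.float16):
    mb = params * np.dtype(dt).itemsize / 1e6
    print(f"{np.dtype(dt).name}: {mb:.0f} MB for weights alone")

# A single [1000, 1000] weight matrix:
w64 = np.zeros((1000, 1000), dtype=np.float64)
w32 = w64.astype(np.float32)
print(f"Float64 matrix: {w64.nbytes / 1e6} MB, Float32 matrix: {w32.nbytes / 1e6} MB")
```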

+
+
+

Computational Complexity

+

Numerical precision directly impacts computational complexity, influencing the time and resources required to perform arithmetic operations. For example, operations using Float64 generally consume more computational resources than their Float32 or Float16 counterparts (see Figure fig-quantized-energy). In the realm of ML, where models might need to process millions of operations (e.g., multiplications and additions in matrix operations during forward and backward passes), even minor differences in the computational complexity per operation can aggregate into a substantial impact on training and inference times. As shown in Figure fig-models-speeds, quantized models can be many times faster than their unquantized versions.

+
+
+
+ +
+
+Figure 9.14: Energy use by quantized operations. Credit: Mark Horowitz, Stanford University. +
+
+
+
+
+
+ +
+
+Figure 9.15: Speed of three different models in normal and quantized form. +
+
+
+

In addition to pure runtimes, there is also a concern over energy efficiency. Not all numerical computations are created equal from the underlying hardware standpoint. Some numerical operations are more energy efficient than others. For example, Figure fig-operations-energy-comparison below shows that integer addition is much more energy efficient than integer multiplication.

+
+
+
+ +
+
+Figure 9.16: Energy comparison of arithmetic operations. Credit: Horowitz (2014). +
+
+Horowitz, Mark. 2014. “Computing’s Energy Problem (and What We Can Do about It).” In 2014 IEEE International Solid-State Circuits Conference (ISSCC). https://ieeexplore.ieee.org/document/6757323. +
+
+
+
+

Hardware Compatibility

+

Ensuring compatibility and optimized performance across diverse hardware platforms is another challenge in numerics representation. Different hardware, such as CPUs, GPUs, TPUs, and FPGAs, have varying capabilities and optimizations for handling different numeric precisions. For example, certain GPUs might be optimized for Float32 computations, while others might provide accelerations for Float16. Developing and optimizing ML models that can leverage the specific numerical capabilities of different hardware, while ensuring that the model maintains its accuracy and robustness, requires careful consideration and potentially additional development and testing efforts.

+
+
+

Precision and Accuracy Trade-offs

+

The trade-off between numerical precision and model accuracy is a nuanced challenge in numerics representation. Utilizing lower-precision numerics, such as Float16, might conserve memory and expedite computations but can also introduce issues like quantization error and reduced numerical range. For instance, training a model with Float16 might introduce challenges in representing very small gradient values, potentially impacting the convergence and stability of the training process. Furthermore, in certain applications, such as scientific simulations or financial computations, where high precision is paramount, the use of lower-precision numerics might not be permissible due to the risk of accruing significant errors.

+
+
+

Trade-off Examples

+

To understand and appreciate these nuances, let’s consider some use-case examples. Through these, we will see that the choice of numeric representation is not merely a technical decision but a strategic one, influencing the model’s predictive acumen, its computational demands, and its deployability across diverse computational environments. In this section we look at several examples to better understand the trade-offs with numerics and how they tie to the real world.

+
+
Autonomous Vehicles
+

In the domain of autonomous vehicles, ML models are employed to interpret sensor data and make real-time decisions. The models must process high-dimensional data from various sensors (e.g., LiDAR, cameras, radar) and execute numerous computations within a constrained time frame to ensure safe and responsive vehicle operation. So the trade-offs here would include:

+
  • Memory Usage: Storing and processing high-resolution sensor data, especially in floating-point formats, can consume substantial memory.
  • Computational Complexity: Real-time processing demands efficient computations, where higher-precision numerics might impede the timely execution of control actions.
+
+
+
Mobile Health Applications
+

Mobile health applications often utilize ML models for tasks like activity recognition, health monitoring, or predictive analytics, operating within the resource-constrained environment of mobile devices. The trade-offs here would include:

+
  • Precision and Accuracy Trade-offs: Employing lower-precision numerics to conserve resources might impact the accuracy of health predictions or anomaly detections, which could have significant implications for user health and safety.
  • Hardware Compatibility: Models need to be optimized for diverse mobile hardware, ensuring efficient operation across a wide range of devices with varying numerical computation capabilities.
+
+
+
High-Frequency Trading (HFT) Systems
+

HFT systems leverage ML models to make rapid trading decisions based on real-time market data. These systems demand extremely low-latency responses to capitalize on short-lived trading opportunities.

+
  • Computational Complexity: The models must process and analyze vast streams of market data with minimal latency, where even slight delays, potentially introduced by higher-precision numerics, can result in missed opportunities.
  • Precision and Accuracy Trade-offs: Financial computations often demand high numerical precision to ensure accurate pricing and risk assessments, posing challenges in balancing computational efficiency and numerical accuracy.
+
+
+
Edge-Based Surveillance Systems
+

Surveillance systems deployed on edge devices, like security cameras, utilize ML models for tasks like object detection, activity recognition, and anomaly detection, often operating under stringent resource constraints.

+
  • Memory Usage: Storing pre-trained models and processing video feeds in real time demands efficient memory usage, which can be challenging with high-precision numerics.
  • Hardware Compatibility: Ensuring that models can operate efficiently on edge devices with varying hardware capabilities and optimizations for different numeric precisions is crucial for widespread deployment.
+
+
+
Scientific Simulations
+

ML models are increasingly being utilized in scientific simulations, such as climate modeling or molecular dynamics simulations, to enhance predictive capabilities and reduce computational demands.

+
  • Precision and Accuracy Trade-offs: Scientific simulations often require high numerical precision to ensure accurate and reliable results, which can conflict with the desire to reduce computational demands via lower-precision numerics.
  • Computational Complexity: The models must manage and process complex, high-dimensional simulation data efficiently to ensure timely results and enable large-scale or long-duration simulations.
+

These examples illustrate diverse scenarios where the challenges of numerics representation in ML models are prominently manifested. Each system presents a unique set of requirements and constraints, necessitating tailored strategies and solutions to navigate the challenges of memory usage, computational complexity, precision-accuracy trade-offs, and hardware compatibility.

+
+
+
+
+

9.3.4 Quantization

+

Quantization is prevalent in various scientific and technological domains, and it essentially involves the mapping or constraining of a continuous set or range into a discrete counterpart to minimize the number of bits required.

+
+

History

+

Historically, the idea of quantization is not novel and can be traced back to ancient times, particularly in the realm of music and astronomy. In music, the Greeks utilized a system of tetrachords, segmenting the continuous range of pitches into discrete notes, thereby quantizing musical sounds. In astronomy and physics, the concept of quantization was present in the discretized models of planetary orbits, as seen in the Ptolemaic and Copernican systems.

+

During the 1800s, quantization-based discretization was used to approximate the calculation of integrals, and further used to investigate the impact of rounding errors on the integration result. With algorithms, Lloyd’s K-Means Algorithm is a classic example of quantization. However, the term “quantization” was firmly embedded in scientific literature with the advent of quantum mechanics in the early 20th century, where it was used to describe the phenomenon that certain physical properties, such as energy, exist only in discrete, quantized states. This principle was pivotal in explaining phenomena at the atomic and subatomic levels. In the digital age, quantization found its application in signal processing, where continuous signals are converted into a discrete digital form, and in numerical algorithms, where computations on real-valued numbers are performed with finite-precision arithmetic.

+

Extending upon this second application and relevant to this section, it is used in computer science to optimize neural networks by reducing the precision of the network weights. Thus, quantization, as a concept, has been subtly woven into the tapestry of scientific and technological development, evolving and adapting to the needs and discoveries of various epochs.

+
+
+

Initial Breakdown

+

We begin our foray into quantization with a brief analysis of one important use for quantization.

+

In signal processing, the continuous sine wave (shown in Figure fig-sine-wave) can be quantized into discrete values through a process known as sampling. This is a fundamental concept in digital signal processing and is crucial for converting analog signals (like the continuous sine wave) into a digital form that can be processed by computers. The sine wave is a prevalent example due to its periodic and smooth nature, making it a useful tool for explaining concepts like frequency, amplitude, phase, and, of course, quantization.

+
+
+
+ +
+
+Figure 9.17: Sine Wave. +
+
+
+

In the quantized version shown in Figure fig-quantized-sine-wave, the continuous sine wave (Figure fig-sine-wave) is sampled at regular intervals (in this case, every \(\frac{\pi}{4}\) radians), and only these sampled values are represented in the digital version of the signal. The step-wise lines between the points show one way to represent the quantized signal in a piecewise-constant form. This is a simplified example of how analog-to-digital conversion works, where a continuous signal is mapped to a discrete set of values, enabling it to be represented and processed digitally.

+
+
+
+ +
+
+Figure 9.18: Quantized Sine Wave. +
+
+
+
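The sampling and quantization illustrated in the figures can be sketched in a few lines. The code below assumes a 3-bit, 8-level quantizer over [-1, 1] with samples taken every π/4 radians; the exact levels in the figure may differ:

```python
import numpy as np

# Sketch: sample a sine wave every pi/4 radians, then snap each sample to
# one of 8 discrete levels (a 3-bit quantizer) spanning [-1, 1].
t = np.arange(0, 2 * np.pi + 1e-9, np.pi / 4)     # 9 sample instants
samples = np.sin(t)                               # continuous-valued samples

levels = 8                                        # 2**3 quantization levels
codes = np.round((samples + 1) / 2 * (levels - 1)).astype(int)  # integers 0..7
quantized = codes / (levels - 1) * 2 - 1          # discrete values back in [-1, 1]

for ti, s, q in zip(t, samples, quantized):
    print(f"t={ti:4.2f}  sample={s:+.4f}  quantized={q:+.4f}")
```

Each continuous sample is mapped to an integer code and back, which is exactly the analog-to-digital step described above.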

Returning to the context of Machine Learning (ML), quantization refers to the process of constraining the possible values that numerical parameters (such as weights and biases) can take to a discrete set, thereby reducing the precision of the parameters and consequently, the model’s memory footprint. When properly implemented, quantization can reduce model size by up to 4x and improve inference latency and throughput by up to 2-3x. Figure fig-quantized-models-size illustrates the impact that quantization has on different models’ sizes: for example, an Image Classification model like ResNet-v2 can be compressed from 180MB down to 45MB with 8-bit quantization. There is typically less than 1% loss in model accuracy from well tuned quantization. Accuracy can often be recovered by re-training the quantized model with quantization aware training techniques. Therefore, this technique has emerged to be very important in deploying ML models to resource-constrained environments, such as mobile devices, IoT devices, and edge computing platforms, where computational resources (memory and processing power) are limited.

+
+
+
+ +
+
+Figure 9.19: Effect of quantization on model sizes. Credit: HarvardX. +
+
+
+

There are several dimensions to quantization such as uniformity, stochasticity (or determinism), symmetry, granularity (across layers/channels/groups or even within channels), range calibration considerations (static vs dynamic), and fine-tuning methods (QAT, PTQ, ZSQ). We examine these below.

+
+
+
+

9.3.5 Types

+
+

Uniform Quantization

+

Uniform quantization involves mapping continuous or high-precision values to a lower-precision representation using a uniform scale. This means that the interval between each possible quantized value is consistent. For example, if weights of a neural network layer are quantized to 8-bit integers (values between 0 and 255), a weight with a floating-point value of 0.56 might be mapped to an integer value of 143, assuming a linear mapping between the original and quantized scales. Due to its use of integer or fixed-point math pipelines, this form of quantization allows computation on the quantized domain without the need to dequantize beforehand.

+

The process for implementing uniform quantization starts with choosing a range of real numbers to be quantized. The next step is to select a quantization function and map the real values to the integers representable by the bit-width of the quantized representation. For instance, a popular choice for a quantization function is:

+

\[ +Q(r)=Int(r/S) - Z +\]

+

where Q is the quantization operator, r is a real valued input (in our case, an activation or weight), S is a real valued scaling factor, and Z is an integer zero point. The Int function maps a real value to an integer value through a rounding operation. Through this function, we have effectively mapped real values r to some integer values, resulting in quantized levels which are uniformly spaced.

+

When the need arises for practitioners to retrieve the original higher precision values, real values r can be recovered from quantized values through an operation known as dequantization. In the example above, this would mean performing the following operation on our quantized value:

+

\[ +\bar{r} = S(Q(r) + Z) +\]

+

As discussed, some precision in the real value is lost by quantization. In this case, the recovered value \(\bar{r}\) will not exactly match r due to the rounding operation. This is an important tradeoff to note; however, in many successful uses of quantization, the loss of precision can be negligible and the test accuracy remains high. Despite this, uniform quantization continues to be the current de-facto choice due to its simplicity and efficient mapping to hardware.
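The quantize/dequantize pair above can be sketched directly in NumPy. The scale S and zero point Z below are illustrative choices for a [-1, 1] range, not values prescribed by the text:

```python
import numpy as np

# Sketch of the quantize/dequantize pair from the text:
#   Q(r)  = Int(r / S) - Z
#   r_bar = S * (Q(r) + Z)
def quantize(r, S, Z):
    return np.round(r / S).astype(np.int32) - Z

def dequantize(q, S, Z):
    return S * (q + Z)

S, Z = 1.0 / 127, 0                        # illustrative int8-style scale, zero point
r = np.array([-1.0, -0.25, 0.0, 0.56, 1.0])
q = quantize(r, S, Z)
r_bar = dequantize(q, S, Z)
print("codes:", q)                         # uniformly spaced integer levels
print("recovered:", r_bar)                 # close to r, within S/2 per element
```

Note that the recovered values differ from the originals by at most half a quantization step, the rounding loss discussed above.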

+
+
+

Non-uniform Quantization

+

Non-uniform quantization, on the other hand, does not maintain a consistent interval between quantized values. This approach might be used to allocate more possible discrete values in regions where the parameter values are more densely populated, thereby preserving more detail where it is most needed. For instance, in bell-shaped distributions of weights with long tails, a set of weights in a model predominantly lies within a certain range; thus, more quantization levels might be allocated to that range to preserve finer details, enabling us to better capture information. However, one major weakness of non-uniform quantization is that it requires dequantization before higher precision computations due to its non-uniformity, restricting its ability to accelerate computation compared to uniform quantization.

+

Typically, a rule-based non-uniform quantization uses a logarithmic distribution of exponentially increasing steps and levels as opposed to linearly. Another popular branch lies in binary-code-based quantization where real number vectors are quantized into binary vectors with a scaling factor. Notably, there is no closed form solution for minimizing errors between the real value and non-uniformly quantized value, so most quantizations in this field rely on heuristic solutions. For instance, recent work by Xu et al. (2018) formulates non-uniform quantization as an optimization problem where the quantization steps/levels in quantizer Q are adjusted to minimize the difference between the original tensor and quantized counterpart.

+
+Xu, Chen, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. 2018. “Alternating Multi-Bit Quantization for Recurrent Neural Networks.” In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. https://openreview.net/forum?id=S19dR9x0b. +

\[ +\min_Q ||Q(r)-r||^2 +\]

+

Furthermore, learnable quantizers can be jointly trained with model parameters, and the quantization steps/levels are generally trained with iterative optimization or gradient descent. Additionally, clustering has been used to alleviate information loss from quantization. While capable of capturing higher levels of detail, non-uniform quantization schemes can be difficult to deploy efficiently on general computation hardware, making it less-preferred to methods which use uniform quantization.

+
+
+
+ +
+
+Figure 9.20: Quantization uniformity. Credit: Gholami et al. (2021). +
+
+
+
+
+

Stochastic Quantization

+

Unlike the two previous approaches, which generate deterministic mappings, some work explores stochastic quantization for quantization-aware training and reduced-precision training. This approach rounds floating-point numbers up or down with a probability tied to the magnitude of the weight update. The intuition is that such a probabilistic approach may allow a neural network to explore more of the parameter space than deterministic quantization, and that stochastic rounding may help networks escape local optima by letting small updates accumulate rather than being rounded away. Below are two example stochastic mapping functions:

+

+
+
+
+ +
+
+Figure 9.21: Integer vs Binary quantization functions. +
+
+
+
+
+
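A common realization of such a probabilistic mapping is stochastic rounding, sketched below. This is an illustrative implementation, not a specific published scheme:

```python
import numpy as np

# Sketch of stochastic rounding: round up with probability equal to the
# fractional distance to the next level, so rounding is unbiased on average.
rng = np.random.default_rng(0)

def stochastic_round(x, rng):
    floor = np.floor(x)
    frac = x - floor                        # distance above the lower level
    return floor + (rng.random(x.shape) < frac)

x = np.full(100_000, 0.3)
rounded = stochastic_round(x, rng)
# Deterministic rounding maps 0.3 -> 0 every time; stochastic rounding
# yields 1 about 30% of the time, preserving the mean of the signal.
print(f"mean of stochastically rounded 0.3: {rounded.mean():.4f}")
```

This unbiasedness is what allows many tiny gradient updates, each smaller than one quantization step, to still move the parameters in expectation.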

Zero Shot Quantization

+

Zero-shot quantization refers to the process of converting a full-precision deep learning model directly into a low-precision, quantized model without the need for any retraining or fine-tuning on the quantized model. The primary advantage of this approach is its efficiency, as it eliminates the often time-consuming and resource-intensive process of retraining a model post-quantization. By leveraging techniques that anticipate and minimize quantization errors, zero-shot quantization aims to maintain the model’s original accuracy even after reducing its numerical precision. It is particularly useful for Machine Learning as a Service (MLaaS) providers aiming to expedite the deployment of their customer’s workloads without having to access their datasets.

+
+
+
+

9.3.6 Calibration

+

Calibration is the process of selecting the most effective clipping range [\(\alpha\), \(\beta\)] for the weights and activations to be quantized. For example, consider quantizing activations that originally have a floating-point range between -6 and 6 to 8-bit integers. If you simply assume a default clipping range much wider than the values actually observed, much of the 8-bit code space (-128 to 127) is spent on values that never occur. Instead, calibration involves passing a representative dataset through the model and then using the observed range of activations for quantization.

+

There are many calibration methods but a few commonly used include:

+
  • Max: Use the maximum absolute value seen during calibration. However, this method is susceptible to outlier data. Notice how in Figure fig-resnet-activations-histogram, we have an outlier cluster around 2.1, while the rest of the values are clustered around smaller magnitudes.
  • Entropy: Use KL divergence to minimize information loss between the original floating-point values and the values representable by the quantized format. This is the default method used by TensorRT.
  • Percentile: Set the range to a percentile of the distribution of absolute values seen during calibration. For example, a 99% calibration would clip the 1% of values with the largest magnitudes.
+
+
+
+ +
+
+Figure 9.22: Input activations to layer 3 in ResNet50. Credit: Wu, Judd, and Isaev (2020). +
+
+
+
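The max and percentile strategies above can be sketched as follows. The synthetic activation distribution and the outlier values are illustrative, loosely echoing the outlier cluster described for the histogram:

```python
import numpy as np

# Sketch: "max" vs "percentile" calibration on a synthetic activation
# distribution with a few large outliers (values are illustrative).
rng = np.random.default_rng(42)
acts = np.concatenate([rng.normal(0, 0.1, 10_000),    # bulk of activations
                       np.array([2.1, 2.2, -2.0])])   # outlier cluster

max_range = np.max(np.abs(acts))                  # dominated by the outliers
pct_range = np.percentile(np.abs(acts), 99.0)     # clips the largest 1%

print(f"max calibration: +/-{max_range:.3f}")
print(f"99th-percentile calibration: +/-{pct_range:.3f}")
```

The max rule stretches the clipping range to cover three stray values, leaving most codes unused; the percentile rule keeps the range tight around the bulk of the distribution.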

Importantly, the quality of calibration can make a difference between a quantized model that retains most of its accuracy and one that degrades significantly. Hence, it’s an essential step in the quantization process. When choosing a calibration range, there are two types: symmetric and asymmetric.

+
+

Symmetric Quantization

+

Symmetric quantization maps real values to a symmetrical clipping range centered around 0. This involves choosing a range [\(\alpha\), \(\beta\)] where \(\alpha = -\beta\). For example, one symmetrical range would be based on the min/max values of the real values such that: -\(\alpha = \beta = max(abs(r_{max}), abs(r_{min}))\).

+

Symmetric clipping ranges are the most widely adopted in practice as they have the advantage of easier implementation. In particular, the mapping of zero to zero in the clipping range (sometimes called “zeroing out of the zero point”) can reduce computational cost during inference (Wu, Judd, and Isaev 2020).

+
+
+

Asymmetric Quantization

+

Asymmetric quantization maps real values to an asymmetrical clipping range that isn’t necessarily centered around 0, as shown in Figure fig-quantization-symmetry on the right. It involves choosing a range [\(\alpha\), \(\beta\)] where \(\alpha \neq -\beta\). For example, selecting a range based on the minimum and maximum real values, or where \(\alpha = r_{min}\) and \(\beta = r_{max}\), creates an asymmetric range. Typically, asymmetric quantization produces tighter clipping ranges compared to symmetric quantization, which is important when target weights and activations are imbalanced, e.g., the activation after the ReLU always has non-negative values. Despite producing tighter clipping ranges, asymmetric quantization is less preferred to symmetric quantization as it doesn’t always zero out the real value zero.

+
+
+
+ +
+
+Figure 9.23: Quantization (a)symmetry. Credit: Gholami et al. (2021). +
+
+
+
+
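A sketch of how the two calibration schemes translate into int8 scale and zero-point parameters; the ReLU-style activation values below are illustrative:

```python
import numpy as np

# Sketch: deriving int8 quantization parameters under symmetric vs
# asymmetric calibration.
def symmetric_params(x, n_bits=8):
    beta = np.max(np.abs(x))                      # alpha = -beta
    scale = beta / (2 ** (n_bits - 1) - 1)        # map [-beta, beta] -> [-127, 127]
    return scale, 0                               # real zero maps exactly to code 0

def asymmetric_params(x, n_bits=8):
    alpha, beta = x.min(), x.max()
    scale = (beta - alpha) / (2 ** n_bits - 1)    # map [alpha, beta] -> [0, 255]
    zero_point = int(round(-alpha / scale))       # code representing real zero
    return scale, zero_point

relu_out = np.array([0.0, 0.5, 1.2, 3.0, 6.0])    # non-negative, like post-ReLU
s_sym, z_sym = symmetric_params(relu_out)
s_asym, z_asym = asymmetric_params(relu_out)
print(f"symmetric:  scale={s_sym:.5f}, zero_point={z_sym}")
print(f"asymmetric: scale={s_asym:.5f}, zero_point={z_asym}")
```

For the non-negative ReLU output, the asymmetric scale is smaller (finer resolution) because the symmetric range wastes half its codes on negative values that never occur.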
+

Granularity

+

Upon deciding the type of clipping range, it is essential to tighten the range to allow a model to retain as much of its accuracy as possible. We’ll be taking a look at convolutional neural networks as our way of exploring methods that fine tune the granularity of clipping ranges for quantization. The input activation of a layer in our CNN undergoes convolution with multiple convolutional filters. Every convolutional filter can possess a unique range of values. Notice how in Figure fig-quantization-granularity, the range for Filter1 is much smaller than that for Filter 3. Consequently, one distinguishing feature of quantization approaches is the precision with which the clipping range [α,β] is determined for the weights.

+
+
+
+ +
+
+Figure 9.24: Quantization granularity: variable ranges. Credit: Gholami et al. (2021). +
+
+
+
  1. Layerwise Quantization: This approach determines the clipping range by considering all of the weights in the convolutional filters of a layer, then uses the same clipping range for all of those filters. It is the simplest to implement, but it often results in sub-optimal accuracy due to the wide variety of ranges across filters. For example, a convolutional kernel with a narrower range of parameters loses quantization resolution because another kernel in the same layer has a wider range.
  2. Groupwise Quantization: This approach groups different channels inside a layer to calculate the clipping range. This method can be helpful when the distribution of parameters across a single convolution/activation varies a lot. In practice, this method was useful in Q-BERT (Shen et al. 2020) for quantizing Transformer (Vaswani et al. 2017) models that consist of fully-connected attention layers. The downside of this approach is the extra cost of accounting for different scaling factors.
  3. Channelwise Quantization: This popular method uses a fixed range for each convolutional filter that is independent of other channels. Because each channel is assigned a dedicated scaling factor, this method ensures a higher quantization resolution and often results in higher accuracy.
  4. Sub-channelwise Quantization: Taking channelwise quantization to the extreme, this method determines the clipping range with respect to any group of parameters in a convolution or fully-connected layer. It may incur considerable overhead, since different scaling factors must be taken into account when processing a single convolution or fully-connected layer.
+
+Shen, Sheng, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2020. “Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT.” Proceedings of the AAAI Conference on Artificial Intelligence 34 (05): 8815–21. https://doi.org/10.1609/aaai.v34i05.6409. +
+Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” Advances in Neural Information Processing Systems 30. +

Of these, channelwise quantization is the current standard used for quantizing convolutional kernels, since it enables the adjustment of clipping ranges for each individual kernel with negligible overhead.
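The difference between layerwise and channelwise granularity can be sketched in a few lines of Python (the weight values are toy numbers of my own, and the symmetric clipping range [-m, m] is just one common convention):

```python
def symmetric_scale(values, num_bits=8):
    # Half-range of the signed integer grid: 127 for int8.
    qmax = 2 ** (num_bits - 1) - 1
    return max(abs(v) for v in values) / qmax

# Two convolutional filters in the same layer with very different ranges.
filters = [[0.01, -0.02, 0.015], [1.2, -0.8, 0.95]]

# Layerwise: one clipping range shared by all filters; the narrow filter
# is quantized with a scale far coarser than it needs.
layer_scale = symmetric_scale([w for f in filters for w in f])

# Channelwise: each filter gets its own scale and keeps its resolution.
channel_scales = [symmetric_scale(f) for f in filters]
```

Here the narrow filter's dedicated scale is sixty times finer than the shared layerwise scale, which is exactly the resolution loss the list above describes.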


Static and Dynamic Quantization


After determining the type and granularity of the clipping range, practitioners must decide when ranges are determined in their range calibration algorithms. There are two approaches to quantizing activations: static quantization and dynamic quantization.


Static quantization is the most frequently used approach. In this, the clipping range is pre-calculated and static during inference. It does not add any computational overhead, but, consequently, results in lower accuracy as compared to dynamic quantization. A popular method of implementing this is to run a series of calibration inputs to compute the typical range of activations [Quantization and training of neural networks for efficient integer-arithmetic-only inference, Dyadic neural network quantization].


Dynamic quantization is an alternative approach which dynamically calculates the range for each activation map during runtime. The approach requires real-time computations which might have a very high overhead. By doing this, dynamic quantization often achieves the highest accuracy as the range is calculated specifically for each input.


Between the two, dynamically calculating the range is usually very costly, so most practitioners use static quantization instead.
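A minimal sketch of the distinction, using plain min/max ranges and made-up calibration data:

```python
# Static quantization: the clipping range is computed once from a set of
# calibration inputs and then frozen for all future inference.
calibration_batches = [[-1.0, 0.5, 2.0], [-0.5, 1.5, 3.0]]
static_min = min(min(batch) for batch in calibration_batches)
static_max = max(max(batch) for batch in calibration_batches)

# Dynamic quantization: the range is recomputed for every activation map
# at runtime, which fits each input exactly but costs extra work per call.
def dynamic_range(activation_map):
    return min(activation_map), max(activation_map)
```

The static range may clip an unusually large activation at inference time; the dynamic range never clips, but pays the min/max computation on every input.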


9.3.7 Techniques


The two prevailing techniques for quantizing models are Post Training Quantization and Quantization Aware Training.


Post Training Quantization - Post-training quantization (PTQ) is a quantization technique where the model is quantized after it has been trained. The model is trained in floating point, and then weights and activations are quantized as a post-processing step. This is the simplest approach and does not require access to the training data. Unlike Quantization-Aware Training (QAT), PTQ sets weight and activation quantization parameters directly, making it low-overhead and suitable for limited or unlabeled data situations. However, not readjusting the weights after quantizing, especially in low-precision quantization, can lead to very different behavior and thus lower accuracy. To tackle this, techniques like bias correction, equalizing weight ranges, and adaptive rounding methods have been developed. PTQ can also be applied in zero-shot scenarios, where no training or testing data are available. This method has been made even more efficient to benefit compute- and memory-intensive large language models. Recently, SmoothQuant, a training-free, accuracy-preserving, and general-purpose PTQ solution which enables 8-bit weight, 8-bit activation quantization for LLMs, has been developed, demonstrating up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy (Xiao et al. 2022).


In PTQ, a pretrained model undergoes a calibration process, as shown in Figure fig-PTQ-diagram. Calibration involves using a separate dataset known as calibration data, a specific subset of the training data reserved for quantization to help find the appropriate clipping ranges and scaling factors.
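The arithmetic that calibration feeds into can be sketched as a quantize/dequantize round trip (the scale value here is hypothetical; real PTQ toolchains derive it from the calibration data):

```python
def quantize(x, scale, num_bits=8):
    # Map a float onto the signed integer grid, clipping to its range.
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    return max(qmin, min(qmax, round(x / scale)))

def dequantize(q, scale):
    return q * scale

scale = 0.01           # hypothetical scale derived from calibration data
w = 0.533
q = quantize(w, scale)           # integer weight stored in the model
w_hat = dequantize(q, scale)     # value the network actually computes with
error = abs(w - w_hat)           # round-trip error, bounded by scale / 2
```

Calibration's job is to pick the scale (and clipping range) so that this round-trip error stays small for the values the model actually produces.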

Figure 9.25: Post-Training Quantization and calibration. Credit: Gholami et al. (2021).

Quantization Aware Training - Quantization-aware training (QAT) is a fine-tuning of the PTQ model. The model is trained aware of quantization, allowing it to adjust for quantization effects. This produces better accuracy with quantized inference. Quantizing a trained neural network model with methods such as PTQ introduces perturbations that can deviate the model from its original convergence point. For instance, Krishnamoorthi showed that even with per-channel quantization, networks like MobileNet do not reach baseline accuracy with int8 Post Training Quantization (PTQ) and require Quantization Aware Training (QAT) (Krishnamoorthi 2018). To address this, QAT retrains the model with quantized parameters, employing forward and backward passes in floating point but quantizing parameters after each gradient update. Handling the non-differentiable quantization operator is crucial; a widely used method is the Straight Through Estimator (STE), which approximates the rounding operation as an identity function. While other methods and variations exist, STE remains the most commonly used due to its practical effectiveness. In QAT, a pretrained model is quantized and then finetuned using training data to adjust parameters and recover accuracy degradation, as shown in Figure fig-QAT-diagram. The calibration process is often conducted in parallel with the finetuning process for QAT.
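The fake-quantization forward pass and the STE's identity backward pass can be sketched as follows (a conceptual illustration, not any framework's actual autograd implementation):

```python
def fake_quantize(x, scale, num_bits=8):
    # Forward pass: snap x to the integer grid and back to float, so the
    # network trains against the values it will see after quantization.
    qmax = 2 ** (num_bits - 1) - 1
    q = max(-qmax - 1, min(qmax, round(x / scale)))
    return q * scale

def ste_backward(upstream_grad):
    # Straight Through Estimator: rounding has zero gradient almost
    # everywhere, so QAT approximates it by the identity in backprop.
    return upstream_grad
```

Because the backward pass passes gradients straight through the rounding step, the floating-point weights keep receiving useful updates even though the forward pass only ever sees quantized values.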

Figure 9.26: Quantization-Aware Training. Credit: Gholami et al. (2021).

Gholami, Amir, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021. “A Survey of Quantization Methods for Efficient Neural Network Inference.” ArXiv Preprint. https://arxiv.org/abs/2103.13630.

Quantization-Aware Training serves as a natural extension of Post-Training Quantization. Following the initial quantization performed by PTQ, QAT is used to further refine and fine-tune the quantized parameters - see how in Figure fig-QAT-PTQ-relation, the PTQ model undergoes an additional step, QAT. It involves a retraining process where the model is exposed to additional training iterations using the original data. This dynamic training approach allows the model to adapt and adjust its parameters, compensating for the performance degradation caused by quantization.

Figure 9.27: PTQ and QAT. Credit: “The Ultimate Guide to Deep Learning Model Quantization and Quantization-Aware Training” (n.d.).

“The Ultimate Guide to Deep Learning Model Quantization and Quantization-Aware Training.” n.d. https://deci.ai/quantization-and-quantization-aware-training/.

Figure fig-quantization-methods-summary shows the relative accuracy of different models after PTQ and QAT. In almost all cases, QAT yields a better accuracy than PTQ. Consider for example EfficientNet b0. After PTQ, the accuracy drops from 76.85% to 72.06%. But when we apply QAT, the accuracy rebounds to 76.95% (with even a slight improvement over the original accuracy).

Figure 9.28: Relative accuracies of PTQ and QAT. Credit: Wu, Judd, and Isaev (2020).
Feature/Technique            Post Training Quantization   Quantization Aware Training   Dynamic Quantization
Pros
  Simplicity                 ✓                            ✗                             ✗
  Accuracy Preservation      ✗                            ✓                             ✓
  Adaptability               ✗                            ✗                             ✓
  Optimized Performance      ✗                            ✓                             Potentially
Cons
  Accuracy Degradation       ✓                            ✗                             Potentially
  Computational Overhead     ✗                            ✓                             ✓
  Implementation Complexity  ✗                            ✓                             ✓
Tradeoffs
  Speed vs. Accuracy         ✓                            ✗                             ✗
  Accuracy vs. Cost          ✗                            ✓                             ✗
  Adaptability vs. Overhead  ✗                            ✗                             ✓

9.3.8 Weights vs. Activations


Weight Quantization: Involves converting the continuous or high-precision weights of a model to lower-precision, such as converting Float32 weights to quantized INT8 (integer) weights - in Figure fig-weight-activations-quantization, weight quantization is taking place in the second step (red squares) when we multiply the inputs. This reduces the model size, thereby reducing the memory required to store the model and the computational resources needed to perform inference. For example, consider a weight matrix in a neural network layer with Float32 weights as [0.215, -1.432, 0.902, …]. Through weight quantization with a scaling factor of about 0.0113, these might be mapped to INT8 values like [19, -127, 80, …], significantly reducing the memory required to store them.

Figure 9.29: Weight and activation quantization. Credit: HarvardX.

Activation Quantization: Involves quantizing the activation values (outputs of layers) during model inference. This can reduce the computational resources required during inference, but it introduces additional challenges in maintaining model accuracy due to the reduced precision of intermediate computations. For example, in a convolutional neural network (CNN), the activation maps (feature maps) produced by convolutional layers, originally in Float32, might be quantized to INT8 during inference to accelerate computation, especially on hardware optimized for integer arithmetic. Additionally, recent work has explored the use of Activation-aware Weight Quantization for LLM compression and acceleration, which involves protecting only the 1% most important salient weights, identified by observing the activations rather than the weights (Lin et al. 2023).


9.3.9 Trade-offs


Quantization invariably introduces a trade-off between model size/performance and accuracy. While it significantly reduces the memory footprint and can accelerate inference, especially on hardware optimized for low-precision arithmetic, the reduced precision can degrade model accuracy.


Model Size: A model with weights represented as Float32 being quantized to INT8 can theoretically reduce the model size by a factor of 4, enabling it to be deployed on devices with limited memory. In recent years, the size of large language models has grown at a faster pace than GPU memory, leading to a large gap between the supply of and demand for memory. Figure fig-model-size-pace illustrates this widening gap between model size (red line) and accelerator memory (yellow line). Quantization and model compression techniques can help bridge this gap.

Figure 9.30: Model size vs. accelerator memory. Credit: Xiao et al. (2022).

Xiao, Guangxuan, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. 2022. “SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models.” ArXiv Preprint. https://arxiv.org/abs/2211.10438.

Inference Speed: Quantization can also accelerate inference, as lower-precision arithmetic is computationally less expensive. For example, certain hardware accelerators, like Google’s Edge TPU, are optimized for INT8 arithmetic and can perform inference significantly faster with INT8 quantized models compared to their floating-point counterparts. The reduction in memory from quantization also reduces the amount of data transmission, saving memory and speeding up the process. Figure fig-nvidia-turing compares the increase in throughput and the reduction in memory bandwidth for different data types on the NVIDIA Turing GPU.

Figure 9.31: Benefits of lower precision data types. Credit: Wu, Judd, and Isaev (2020).

Wu, Hao, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius. 2020. “Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation.” ArXiv Preprint. https://arxiv.org/abs/2004.09602.

Accuracy: The reduction in numerical precision post-quantization can lead to a degradation in model accuracy, which might be acceptable in certain applications (e.g., image classification) but not in others (e.g., medical diagnosis). Therefore, post-quantization, the model typically requires re-calibration or fine-tuning to mitigate accuracy loss. Furthermore, recent work has explored the use of Activation-aware Weight Quantization (Lin et al. (2023)) which is based on the observation that protecting only 1% of salient weights can greatly reduce quantization error.


9.3.10 Quantization and Pruning


Pruning and quantization work well together, and it’s been found that pruning doesn’t hinder quantization. In fact, pruning can help reduce quantization error. Intuitively, this is because pruning reduces the number of weights to quantize, thereby reducing the accumulated error from quantization. For example, an unpruned AlexNet has 60 million weights to quantize, whereas a pruned AlexNet has only 6.7 million. This significant drop in weights helps reduce the quantization error of the pruned AlexNet relative to the unpruned one. Furthermore, recent work has found that quantization-aware pruning generates more computationally efficient models than either pruning or quantization alone; it typically performs similarly to or better than other neural architecture search techniques like Bayesian optimization in terms of computational efficiency (Hawks et al. 2021).

Figure 9.32: Accuracy vs. compression rate under different compression methods. Credit: Han, Mao, and Dally (2015).

Han, Song, Huizi Mao, and William J. Dally. 2015. “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.” arXiv Preprint arXiv:1510.00149.

9.3.11 Edge-aware Quantization


Quantization not only reduces model size but also enables faster computations and draws less power, making it vital to edge development. Edge devices typically have tight resource constraints on compute, memory, and power, which many of today's deep NN models cannot meet. Furthermore, many edge processors do not support floating point operations, making integer quantization particularly important for chips like GAP-8, a RISC-V SoC for edge inference with a dedicated CNN accelerator that only supports integer arithmetic.


One hardware platform utilizing quantization is the ARM Cortex-M family of 32-bit RISC ARM processor cores. These cores leverage fixed-point quantization with power-of-two scaling factors, so that quantization and dequantization can be performed efficiently by bit shifting. Additionally, the Google Edge TPU, Google's emerging solution for running inference at the edge, is designed for small, low-powered devices and only supports 8-bit arithmetic. Many complex neural network models that could previously be deployed only on servers due to their high computational needs can now run on edge devices thanks to recent advancements in edge computing, such as these quantization methods.
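The bit-shifting trick behind power-of-two scaling can be illustrated in a few lines (the shift amount of 7, i.e. a scale of 2^-7, is an arbitrary choice for illustration):

```python
SHIFT = 7  # hypothetical power-of-two scale of 2**-7

def quantize_pow2(x):
    # Dividing by the scale 2**-7 is the same as multiplying by 2**7.
    return round(x * (1 << SHIFT))

def dequantize_pow2(q):
    return q / (1 << SHIFT)

def fixed_point_mul(qa, qb):
    # The raw product carries scale 2**-14; a single right shift
    # rescales it back to 2**-7 -- no floating-point unit needed.
    return (qa * qb) >> SHIFT
```

For example, 0.5 × 0.25 becomes `fixed_point_mul(64, 32)`, which yields 16 and dequantizes back to 0.125; on a Cortex-M core each rescale is a one-cycle shift instead of a division.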


In addition to being an indispensable technique for many edge processors, quantization has also brought noteworthy improvements to non-edge processors, for example by helping them meet Service Level Agreement (SLA) requirements such as 99th-percentile latency.


Thus, quantization, combined with efficient low-precision logic and dedicated deep learning accelerators, has been a crucial driving force in the evolution of such edge processors.


The video below is a lecture on quantization and the different quantization methods.


9.4 Efficient Hardware Implementation


Efficient hardware implementation transcends the selection of suitable components; it requires a holistic understanding of how software will interact with underlying architectures. The essence of achieving peak performance in TinyML applications lies not only in refining algorithms for the hardware but also in ensuring that the hardware is strategically tailored to support those algorithms. This synergy between hardware and software is crucial. As we delve deeper into the intricacies of efficient hardware implementation, the significance of a co-design approach, where hardware and software are developed in tandem, becomes increasingly evident. This section provides an overview of how hardware, and the interactions between hardware and software, can be optimized to improve model performance.


9.4.3 Kernel Optimizations


Kernel Optimizations are modifications made to the kernel to enhance the performance of machine learning models on resource-constrained devices. We will separate kernel optimizations into two types.


General Kernel Optimizations


These are kernel optimizations that all devices can benefit from. They provide techniques for converting code into more efficient instructions.

Loop unrolling

Instead of having a loop with loop control (incrementing the loop counter, checking the loop termination condition), the loop can be unrolled and the overhead of loop control omitted. This may also provide additional opportunities for parallelism that may not be possible with the loop structure. Unrolling is particularly beneficial for tight loops, where the body of the loop is a small number of instructions executed over many iterations.
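As an illustration (written in Python for readability, though in practice unrolling matters most in compiled code or is performed by the compiler itself), here is a dot product unrolled by a factor of 4:

```python
def dot_rolled(a, b):
    # One loop-control check and increment per element.
    acc = 0.0
    for i in range(len(a)):
        acc += a[i] * b[i]
    return acc

def dot_unrolled4(a, b):
    # Unrolled by 4: one loop-control step covers four elements
    # (length assumed to be a multiple of 4 for brevity).
    acc = 0.0
    for i in range(0, len(a), 4):
        acc += a[i] * b[i]
        acc += a[i + 1] * b[i + 1]
        acc += a[i + 2] * b[i + 2]
        acc += a[i + 3] * b[i + 3]
    return acc
```

Both functions compute the same result; the unrolled version performs a quarter of the loop-control work and exposes four independent multiply-accumulates per iteration to the hardware.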

Blocking

Blocking is used to make memory access patterns more efficient. For example, suppose three computations are scheduled in sequence, where the first and third access data block A and the second accesses data block B. Blocking groups the computations that touch block A together, so A is loaded into the cache once instead of twice, reducing the number of memory reads needed.

Tiling

Similarly to blocking, tiling divides data and computation into chunks, but extends beyond cache improvements. Tiling creates independent partitions of computation that can be run in parallel, which can result in significant performance improvements.
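A sketch of a tiled matrix multiply, where each (ii, jj) tile of the output is an independent unit of work (the tile size of 2 is chosen arbitrarily for illustration):

```python
def matmul_tiled(A, B, tile=2):
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    # Each (ii, jj) tile of C is an independent partition of the work,
    # so tiles could in principle be computed in parallel; within a
    # tile, the A and B sub-blocks stay cache-resident.
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        for k in range(kk, min(kk + tile, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C
```

The tile size is typically chosen so that one tile of each operand fits in the fastest level of the memory hierarchy.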

Optimized Kernel Libraries

This comprises developing optimized kernels that take full advantage of specific hardware. One example is the CMSIS-NN library, a collection of efficient neural network kernels developed to optimize performance and minimize the memory footprint of models on Arm Cortex-M processors, which are common on IoT edge devices. The kernels leverage multiple hardware capabilities of Cortex-M processors, such as Single Instruction Multiple Data (SIMD), Floating Point Units (FPUs), and M-Profile Vector Extensions (MVE). These optimizations make common operations like matrix multiplications more efficient, boosting the performance of model operations on Cortex-M processors (Lai, Suda, and Chandra 2018).

Lai, Liangzhen, Naveen Suda, and Vikas Chandra. 2018. “CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs.” https://arxiv.org/abs/1801.06601.

9.4.4 Compute-in-Memory (CiM)


This is one example of Algorithm-Hardware Co-design. CiM is a computing paradigm that performs computation within memory. Therefore, CiM architectures allow for operations to be performed directly on the stored data, without the need to shuttle data back and forth between separate processing and memory units. This design paradigm is particularly beneficial in scenarios where data movement is a primary source of energy consumption and latency, such as in TinyML applications on edge devices. Figure fig-computing-memory is one example of using CiM in TinyML: keyword spotting requires an always-on process that looks for certain wake words (such as ‘Hey, Siri’). Given the resource-intensive nature of this task, integrating CiM for the always-on keyword detection model can enhance efficiency.


Through algorithm-hardware co-design, the algorithms can be optimized to leverage the unique characteristics of CiM architectures, and conversely, the CiM hardware can be customized or configured to better support the computational requirements and characteristics of the algorithms. This is achieved by using the analog properties of memory cells, such as addition and multiplication in DRAM. (Zhou et al. 2021)

Figure 9.34: CiM for keyword spotting. Credit: Zhou et al. (2021).

Zhou, Chuteng, Fernando Garcia Redondo, Julian Büchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, and Paul N. Whatmough. 2021. “AnalogNets: ML-HW Co-Design of Noise-Robust TinyML Models and Always-on Analog Compute-in-Memory Accelerator.” https://arxiv.org/abs/2111.06503.

9.4.5 Memory Access Optimization


Different devices may have different memory hierarchies. Optimizing for the specific memory hierarchy of the target hardware can lead to great performance improvements by reducing the costly operations of reading from and writing to memory. Dataflow optimization can be achieved by optimizing for data reuse within a single layer and across multiple layers. This dataflow optimization can be tailored to the specific memory hierarchy of the hardware, which can yield greater benefits than generic optimizations applied across different hardware platforms.


Leveraging Sparsity


Pruning is a fundamental approach for compressing models to make them compatible with resource-constrained devices. It results in sparse models where many of the weights are zero. Leveraging this sparsity can therefore lead to significant improvements in performance, and tools have been created to achieve exactly this. RAMAN is a sparse TinyML accelerator designed for inference on edge devices. RAMAN overlaps input and output activations on the same memory space, reducing storage requirements by up to 50% (Krishna et al. 2023).

Krishna, Adithya, Srikanth Rohit Nudurupati, Chandana D G, Pritesh Dwivedi, André van Schaik, Mahesh Mehendale, and Chetan Singh Thakur. 2023. “RAMAN: A Re-Configurable and Sparse TinyML Accelerator for Inference on Edge.” https://arxiv.org/abs/2306.06493.
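The benefit of exploiting sparsity can be sketched with a toy sparse dot product that stores only the nonzero weights and skips the pruned ones entirely (illustrative only; accelerators like RAMAN use far more sophisticated storage formats):

```python
def sparse_dot(sparse_weights, dense_input):
    # Only the surviving (index, value) pairs are touched; every
    # multiplication with a pruned (zero) weight is skipped entirely.
    return sum(v * dense_input[i] for i, v in sparse_weights)

weights = [0.0, 0.5, 0.0, 0.0, -0.25, 0.0]   # roughly 67% pruned
sparse = [(i, w) for i, w in enumerate(weights) if w != 0.0]
```

Here six multiply-accumulates collapse to two, and only the two nonzero values need to be stored, which is exactly the kind of saving a sparsity-aware accelerator exploits in hardware.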

Optimization Frameworks


Optimization Frameworks have been introduced to exploit the specific capabilities of the hardware to accelerate the software. One example of such a framework is hls4ml - Figure fig-hls4ml-workflow provides an overview of the framework’s workflow. This open-source software-hardware co-design workflow aids in interpreting and translating machine learning algorithms for implementation with both FPGA and ASIC technologies. Features such as network optimization, new Python APIs, quantization-aware pruning, and end-to-end FPGA workflows are embedded into the hls4ml framework, leveraging parallel processing units, memory hierarchies, and specialized instruction sets to optimize models for edge hardware. Moreover, hls4ml is capable of translating machine learning algorithms directly into FPGA firmware.

Figure 9.35: hls4ml framework workflow. Credit: Fahim et al. (2021).

Fahim, Farah, Benjamin Hawks, Christian Herwig, James Hirschauer, Sergo Jindariani, Nhan Tran, Luca P. Carloni, et al. 2021. “Hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices.” https://arxiv.org/abs/2103.05579.

One other framework for FPGAs that focuses on a holistic approach is CFU Playground (Prakash et al. 2023).

Prakash, Shvetank, Tim Callahan, Joseph Bushagour, Colby Banbury, Alan V. Green, Pete Warden, Tim Ansell, and Vijay Janapa Reddi. 2023. “CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (TinyML) Acceleration on FPGAs.” In 2023 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). Vol. abs/2201.01863. IEEE. https://doi.org/10.1109/ispass57527.2023.00024.

Hardware Built Around Software


In a contrasting approach, hardware can be custom-designed around software requirements to optimize the performance for a specific application. This paradigm creates specialized hardware to better adapt to the specifics of the software, thus reducing computational overhead and improving operational efficiency. One example of this approach is a voice-recognition application by (Kwon and Park 2021). The paper proposes a structure wherein preprocessing operations, traditionally handled by software, are allocated to custom-designed hardware. This technique was achieved by introducing resistor-transistor logic to an inter-integrated circuit sound module for windowing and audio raw data acquisition in the voice-recognition application. Consequently, this offloading of preprocessing operations led to a reduction in computational load on the software, showcasing a practical application of building hardware around software to enhance the efficiency and performance.

Figure 9.36: Delegating data processing to an FPGA. Credit: Kwon and Park (2021).

Kwon, Jisu, and Daejin Park. 2021. “Hardware/Software Co-Design for TinyML Voice-Recognition Application on Resource Frugal Edge Devices.” Applied Sciences 11 (22): 11073. https://doi.org/10.3390/app112211073.

SplitNets


SplitNets were introduced in the context of Head-Mounted systems. They distribute the Deep Neural Network (DNN) workload among camera sensors and an aggregator, which is particularly compelling in the context of TinyML. The SplitNet framework is a split-aware NAS that finds the optimal neural network architecture to achieve good accuracy, splits the model among the sensors and the aggregator, and minimizes the communication between them. Figure fig-splitnet-performance demonstrates how SplitNets (in red) achieve higher accuracy for lower latency (running on ImageNet) than other approaches, such as running the DNN on-sensor (All-on-sensor; in green) or on mobile (All-on-aggregator; in blue). Minimal communication is important in TinyML, where memory is highly constrained; this way, the sensors conduct some of the processing on their own chips and then send only the necessary information to the aggregator. When testing on ImageNet, SplitNets were able to reduce the latency by one order of magnitude on head-mounted devices. This can be helpful when the sensor has its own chip (Dong et al. 2022).

Figure 9.37: SplitNets vs. other approaches. Credit: Dong et al. (2022).

Dong, Xin, Barbara De Salvo, Meng Li, Chiao Liu, Zhongnan Qu, H. T. Kung, and Ziyun Li. 2022. “SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12549–59. IEEE. https://doi.org/10.1109/cvpr52688.2022.01223.

Hardware Specific Data Augmentation


Each edge device may possess unique sensor characteristics, leading to specific noise patterns that can impact model performance. One example is audio data, where variations stemming from the choice of microphone are prevalent. Applications such as Keyword Spotting can experience substantial enhancements by incorporating data recorded from devices similar to those intended for deployment. Fine-tuning of existing models can be employed to adapt the data precisely to the sensor’s distinctive characteristics.


9.5 Software and Framework Support


While all of the aforementioned techniques like pruning, quantization, and efficient numerics are well-known, they would remain impractical and inaccessible without extensive software support. For example, directly quantizing weights and activations in a model would require manually modifying the model definition and inserting quantization operations throughout. Similarly, directly pruning model weights requires manipulating weight tensors. Such tedious approaches become infeasible at scale.


Without the extensive software innovation across frameworks, optimization tools and hardware integration, most of these techniques would remain theoretical or only viable to experts. Without framework APIs and automation to simplify applying these optimizations, they would not see adoption. Software support makes them accessible to general practitioners and unlocks real-world benefits. In addition, issues such as hyperparameter tuning for pruning, managing the trade-off between model size and accuracy, and ensuring compatibility with target devices pose hurdles that developers must navigate.


9.5.1 Built-in Optimization APIs


Major machine learning frameworks like TensorFlow, PyTorch, and MXNet provide libraries and APIs to allow common model optimization techniques to be applied without requiring custom implementations. For example, TensorFlow offers the TensorFlow Model Optimization Toolkit which contains modules like:

  • quantization - Applies quantization-aware training to convert floating point models to lower precision like int8 with minimal accuracy loss. Handles weight and activation quantization.
  • sparsity - Provides pruning APIs to induce sparsity and remove unnecessary connections in models like neural networks. Can prune weights, layers, etc.
  • clustering - Supports model compression by clustering weights into groups for higher compression rates.

These APIs allow users to enable optimization techniques like quantization and pruning without directly modifying model code. Parameters like target sparsity rates, quantization bit-widths etc. can be configured. Similarly, PyTorch provides torch.quantization for converting models to lower precision representations. TorchTensor and TorchModule form the base classes for quantization support. It also offers torch.nn.utils.prune for built-in pruning of models. MXNet offers gluon.contrib layers that add quantization capabilities like fixed point rounding and stochastic rounding of weights/activations during training. This allows quantization to be readily included in gluon models.
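The idea behind a magnitude-based pruning API such as torch.nn.utils.prune can be sketched in plain Python (a conceptual re-implementation, not the framework's actual code):

```python
def l1_unstructured_prune(weights, amount):
    # Zero out the `amount` fraction of weights with the smallest
    # magnitude, mirroring the idea behind L1 unstructured pruning.
    # (Ties at the threshold may prune slightly more than requested.)
    k = int(len(weights) * amount)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, pruning `[0.1, -0.5, 0.02, 0.9]` at `amount=0.5` zeroes the two smallest-magnitude weights, leaving `[0.0, -0.5, 0.0, 0.9]`; the framework APIs do the same thing but operate on tensors and apply the result through a reusable mask.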


The core benefit of built-in optimizations is that users can apply them without re-implementing complex techniques. This makes optimized models accessible to a broad range of practitioners. It also ensures best practices are followed by building on research and experience implementing the methods. As new optimizations emerge, frameworks strive to provide native support and APIs where possible to further lower the barrier to efficient ML. The availability of these tools is key to widespread adoption.


9.5.2 Automated Optimization Tools


Automated optimization tools provided by frameworks can analyze models and automatically apply optimizations like quantization, pruning, and operator fusion to make the process easier and accessible without excessive manual tuning. In effect, this builds on top of the previous section. For example, TensorFlow provides the TensorFlow Model Optimization Toolkit which contains modules like:

  • QuantizationAwareTraining - Automatically quantizes weights and activations in a model to lower precision like UINT8 or INT8 with minimal accuracy loss. It inserts fake quantization nodes during training so that the model can learn to be quantization-friendly.
  • Pruning - Automatically removes unnecessary connections in a model based on analysis of weight importance. Can prune entire filters in convolutional layers or attention heads in transformers. Handles iterative re-training to recover any accuracy loss.
  • GraphOptimizer - Applies graph optimizations like operator fusion to consolidate operations and reduce execution latency, especially for inference. In Figure fig-graph-optimizer, you can see the original (Source Graph) on the left, and how its operations are transformed (consolidated) on the right. Notice how Block1 in Source Graph has 3 separate steps (Convolution, BiasAdd, and Activation), which are then consolidated together in Block1 on Optimized Graph.
Figure 9.38: GraphOptimizer. Credit: Wess et al. (2020).

Wess, Matthias, Matvey Ivanov, Christoph Unger, and Anvesh Nookala. 2020. “ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models.” IEEE. https://doi.org/10.1109/ACCESS.2020.3047259.

These automated modules only require the user to provide the original floating point model, and handle the end-to-end optimization pipeline including any re-training to regain accuracy. Other frameworks like PyTorch also offer increasing automation support, for example through torch.quantization.quantize_dynamic. Automated optimization makes efficient ML accessible to practitioners without optimization expertise.


9.5.3 Hardware Optimization Libraries


Hardware libraries like TensorRT and TensorFlow XLA allow models to be highly optimized for target hardware through techniques that we discussed earlier.


Quantization: For example, TensorRT and TensorFlow Lite both support quantization of models during conversion to their format. This provides speedups on mobile SoCs with INT8/INT4 support.


Kernel Optimization: For instance, TensorRT does auto-tuning to optimize CUDA kernels based on the GPU architecture for each layer in the model graph. This extracts maximum throughput.


Operator Fusion: TensorFlow XLA does aggressive fusion to create optimized binaries for TPUs. On mobile, frameworks like NCNN also support fused operators.

Hardware-Specific Code: Libraries are used to generate optimized binary code specialized for the target hardware. For example, TensorRT uses Nvidia CUDA/cuDNN libraries which are hand-tuned for each GPU architecture. This hardware-specific coding is key for performance. On TinyML devices, this can mean assembly code optimized for a Cortex-M4 CPU, for example. Vendors provide CMSIS-NN and other libraries.

+

Data Layout Optimizations - We can efficiently leverage the memory hierarchy of hardware, such as caches and registers, through techniques like tensor/weight rearrangement, tiling, and reuse. For example, TensorFlow XLA optimizes buffer layouts to maximize TPU utilization. This helps any memory-constrained system.
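A minimal sketch of one such technique, tiling: traversing a matrix in small blocks so each block's data stays resident in cache while it is reused, rather than streaming whole rows repeatedly. The block size and data here are illustrative:

```python
# Tiled traversal sketch: visiting a matrix in TILE x TILE blocks keeps each
# block resident in cache/registers while it is reused, improving locality.

def tiled_sum(matrix, tile=2):
    rows, cols = len(matrix), len(matrix[0])
    total = 0.0
    for i0 in range(0, rows, tile):            # iterate over tile origins
        for j0 in range(0, cols, tile):
            for i in range(i0, min(i0 + tile, rows)):
                for j in range(j0, min(j0 + tile, cols)):
                    total += matrix[i][j]
    return total

m = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
# Same result as a row-major sum; only the traversal order changes.
assert tiled_sum(m) == sum(sum(row) for row in m)
```

For a simple sum the result is unchanged; for operations like matrix multiplication, where each element is reused many times, this reordering is what turns memory-bound loops into cache-friendly ones.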

+

Profiling-based Tuning - We can use profiling tools to identify bottlenecks. For example, adjust kernel fusion levels based on latency profiling. On mobile SoCs, vendors like Qualcomm provide profilers in SNPE to find optimization opportunities in CNNs. This data-driven approach is important for performance.
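The profiling step can be sketched as timing each stage of a pipeline and ranking stages by cost; the layer functions below are hypothetical placeholders, not output from a real profiler like SNPE's:

```python
import time

def profile(layers, x):
    """Run each layer in sequence, recording wall-clock time per layer."""
    report = {}
    for name, fn in layers:
        start = time.perf_counter()
        x = fn(x)
        report[name] = time.perf_counter() - start
    return x, report

layers = [
    ("conv1", lambda x: [v * 2 for v in x]),
    ("slow_layer", lambda x: [sum(x) for _ in x]),  # deliberately O(n^2)
    ("relu", lambda x: [max(0, v) for v in x]),
]
out, report = profile(layers, list(range(500)))
bottleneck = max(report, key=report.get)
# The profile points the tuner at the layer worth optimizing or fusing first.
```

In practice the report would feed decisions like raising or lowering kernel fusion levels, exactly the data-driven loop described above.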

+

By integrating framework models with these hardware libraries through conversion and execution pipelines, ML developers can achieve significant speedups and efficiency gains from low-level optimizations tailored to the target hardware. The tight integration between software and hardware is key to enabling performant deployment of ML applications, especially on mobile and TinyML devices.

+
+
+

9.5.4 Visualizing Optimizations

+

Implementing model optimization techniques without visibility into their effects on the model can be challenging. Dedicated visualization tools can provide critical insight into model changes and help track the optimization process. Let’s consider the optimizations we discussed earlier, such as pruning for sparsity and quantization.

+ +
+
Quantization
+

Converting models to lower numeric precisions through quantization introduces errors that can impact model accuracy if not properly tracked and addressed. Visualizing quantization error distributions provides valuable insights into the effects of reduced precision numerics applied to different parts of a model. For this, histograms of the quantization errors for weights and activations can be generated. These histograms can reveal the shape of the error distribution - whether they resemble a Gaussian distribution or contain significant outliers and spikes. Figure fig-quantization-error shows the distributions of different quantization methods. Large outliers may indicate issues with particular layers handling the quantization. Comparing the histograms across layers highlights any problem areas standing out with abnormally high errors.
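A sketch of generating such an error histogram, assuming symmetric INT8 quantization and synthetic Gaussian weights (all values here are illustrative):

```python
import random

def int8_error(v, scale):
    """Error introduced by rounding v to the nearest int8 step."""
    q = max(-128, min(127, round(v / scale)))
    return v - q * scale

random.seed(0)
weights = [random.gauss(0.0, 0.5) for _ in range(10_000)]  # synthetic layer
scale = max(abs(w) for w in weights) / 127.0
errors = [int8_error(w, scale) for w in weights]

def histogram(values, bins=8):
    """Bucket values into a coarse histogram to inspect distribution shape."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    return counts

counts = histogram(errors)
# Rounding errors are bounded by half a step; anything beyond flags a problem.
assert all(abs(e) <= scale / 2 + 1e-12 for e in errors)
```

Comparing such histograms layer by layer is what reveals the outlier-heavy distributions the text describes; a roughly uniform error within half a step is the healthy case.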

+
+
+
+ +
+
+Figure 9.40: Quantization errors. Credit: Kuzmin et al. (2022). +
+
+Kuzmin, Andrey, Mart Van Baalen, Yuwei Ren, Markus Nagel, Jorn Peters, and Tijmen Blankevoort. 2022. “FP8 Quantization: The Power of the Exponent.” https://arxiv.org/abs/2208.09225. +
+
+

Activation visualizations are also important to detect overflow issues. By color mapping the activations before and after quantization, any values pushed outside the intended ranges become visible. This reveals saturation and truncation issues that could skew the information flowing through the model. Detecting these errors allows recalibrating activations to prevent loss of information (Mandal 2022). Figure fig-color-mapping is a color mapping of the AlexNet convolutional kernels.
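A simplified saturation check in the same spirit: count the activations that would fall outside the representable INT8 range for a given calibration scale before they are silently clipped. The activation values and scale below are hypothetical:

```python
# Saturation check sketch: a nonzero count of out-of-range activations
# signals that the calibration range is too tight and values will be
# truncated, skewing the information flowing through the model.

def count_saturated(activations, scale, zero_point=0):
    saturated = 0
    for a in activations:
        q = round(a / scale) + zero_point
        if q < -128 or q > 127:   # outside the INT8 range
            saturated += 1
    return saturated

acts = [0.1, 0.5, 2.0, 6.3, 7.1, -5.0]  # hypothetical layer outputs
scale = 0.05                             # covers roughly [-6.4, 6.35]
n_clipped = count_saturated(acts, scale)
# 7.1 / 0.05 = 142 > 127, so one activation would be truncated.
```

A color map over activations is essentially this check applied per element, which makes the clipped regions visually obvious.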

+
+
+
+ +
+
+Figure 9.41: Color mapping of activations. Credit: Krizhevsky, Sutskever, and Hinton (2012). +
+
+Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25, edited by F. Pereira, C. J. Burges, L. Bottou, and K. Q. Weinberger. https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf. +
+
+

Other techniques, such as tracking the overall mean squared quantization error at each step of the quantization-aware training process, identify fluctuations and divergences. Sudden spikes in the tracking plot may indicate points where quantization is disrupting model training. Monitoring this metric builds intuition on model behavior under quantization. Together, these techniques turn quantization into a transparent process. The empirical insights enable practitioners to properly assess quantization effects. They pinpoint areas of the model architecture or training process to recalibrate based on observed quantization issues. This helps achieve numerically stable and accurate quantized models.

+

Providing this data enables practitioners to properly assess the impact of quantization and identify potential problem areas of the model to recalibrate or redesign to be more quantization friendly. This empirical analysis builds intuition on achieving optimal quantization.

+

Visualization tools can provide insights that help practitioners better understand the effects of optimizations on their models. The visibility enables correcting issues early before accuracy or performance is impacted significantly. It also aids applying optimizations more effectively for specific models. These optimization analytics help build intuition when transitioning models to more efficient representations.

+
+
+
+

9.5.5 Model Conversion and Deployment

+

Once models have been successfully optimized in frameworks like TensorFlow and PyTorch, specialized model conversion and deployment platforms are needed to bridge the gap to running them on target devices.

+

TensorFlow Lite - TensorFlow’s platform to convert models to a lightweight format optimized for mobile, embedded and edge devices. Supports optimizations like quantization, kernel fusion, and stripping away unused ops. Models can be executed using optimized TensorFlow Lite kernels on device hardware. Critical for mobile and TinyML deployment.

+

ONNX Runtime - Performs model conversion and inference for models in the open ONNX model format. Provides optimized kernels, supports hardware accelerators like GPUs, and cross-platform deployment from cloud to edge. Allows framework-agnostic deployment. Figure fig-interop is an ONNX interoperability map, including major popular frameworks.

+
+
+
+ +
+
+Figure 9.42: Interoperability of ONNX. Credit: TowardsDataScience. +
+
+
+

PyTorch Mobile - Enables PyTorch models to be run on iOS and Android by converting to mobile-optimized representations. Provides efficient mobile implementations of ops like convolution and special functions optimized for mobile hardware.

+

These platforms integrate with hardware drivers, operating systems, and accelerator libraries on devices to execute models efficiently using hardware optimization. They also offload operations to dedicated ML accelerators where present. The availability of these proven, robust deployment platforms bridges the gap between optimizing models in frameworks and actual deployment to billions of devices. They allow users to focus on model development rather than building custom mobile runtimes. Continued innovation to support new hardware and optimizations in these platforms is key to widespread ML optimizations.

+

By providing these optimized deployment pipelines, the entire workflow from training to device deployment can leverage model optimizations to deliver performant ML applications. This end-to-end software infrastructure has helped drive the adoption of on-device ML.

+
+
+
+

9.6 Conclusion

+

In this chapter we’ve discussed model optimization across the software-hardware span. We dove deep into efficient model representation, where we covered the nuances of structured and unstructured pruning and other techniques for model compression such as knowledge distillation and matrix and tensor decomposition. We also briefly explored edge-specific model design at the parameter and model architecture level, covering topics like edge-specific models and hardware-aware NAS.

+

We then explored efficient numerics representations, where we covered the basics of numerics, numeric encodings and storage, benefits of efficient numerics, and the nuances of numeric representation with memory usage, computational complexity, hardware compatibility, and tradeoff scenarios. We finished by homing in on an efficient numerics staple: quantization, where we examined its history, calibration, techniques, and interaction with pruning.

+

Finally, we looked at how we can make optimizations specific to the hardware we have. We explored how we can find model architectures tailored to the hardware, make optimizations in the kernel to better handle the model, and frameworks built to make the most use out of the hardware. We also looked at how we can go the other way around and build hardware around our specific software and talked about splitting networks to run on multiple processors available on the edge device.

+

By understanding the full picture of the degrees of freedom within model optimization both away and close to the hardware and the tradeoffs to consider when implementing these methods, practitioners can develop a more thoughtful pipeline for compressing their workloads onto edge devices.

+
+
+

Resources

+

Here is a curated list of resources to support both students and instructors in their learning and teaching journey. We are continuously working on expanding this collection and will be adding new exercises in the near future.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides serve as a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we also offer a series of hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/privacy_security/privacy_security.html b/contents/privacy_security/privacy_security.html new file mode 100644 index 00000000..5a2b86cf --- /dev/null +++ b/contents/privacy_security/privacy_security.html @@ -0,0 +1,2412 @@ + + + + + + + + + +Machine Learning Systems - 14  Security & Privacy + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

14  Security & Privacy

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: An illustration on privacy and security in machine learning systems. The image shows a digital landscape with a network of interconnected nodes and data streams, symbolizing machine learning algorithms. In the foreground, there’s a large lock superimposed over the network, representing privacy and security. The lock is semi-transparent, allowing the underlying network to be partially visible. The background features binary code and digital encryption symbols, emphasizing the theme of cybersecurity. The color scheme is a mix of blues, greens, and grays, suggesting a high-tech, digital environment.
+
+
+

Security and privacy are critical when developing real-world machine learning systems. As machine learning is increasingly applied to sensitive domains like healthcare, finance, and personal data, protecting confidentiality and preventing misuse of data and models becomes imperative. Anyone aiming to build robust and responsible ML systems must grasp potential security and privacy risks such as data leaks, model theft, adversarial attacks, bias, and unintended access to private information. We also need to understand best practices for mitigating these risks. Most importantly, security and privacy cannot be an afterthought and must be proactively addressed throughout the ML system development lifecycle - from data collection and labeling to model training, evaluation, and deployment. Embedding security and privacy considerations into each stage of building, deploying, and managing machine learning systems is essential for safely unlocking the benefits of AI.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand key ML privacy and security risks, such as data leaks, model theft, adversarial attacks, bias, and unintended data access.

  • +
  • Learn from historical hardware and embedded systems security incidents.

  • +
  • Identify threats to ML models like data poisoning, model extraction, membership inference, and adversarial examples.

  • +
  • Recognize hardware security threats to embedded ML spanning hardware bugs, physical attacks, side channels, counterfeit components, etc.

  • +
  • Explore embedded ML defenses, such as trusted execution environments, secure boot, physical unclonable functions, and hardware security modules.

  • +
  • Discuss privacy issues handling sensitive user data with embedded ML, including regulations.

  • +
  • Learn privacy-preserving ML techniques like differential privacy, federated learning, homomorphic encryption, and synthetic data generation.

  • +
  • Understand tradeoffs between privacy, accuracy, efficiency, threat models, and trust assumptions.

  • +
  • Recognize the need for a cross-layer perspective spanning electrical, firmware, software, and physical design when securing embedded ML devices.

  • +
+
+
+
+

14.1 Introduction

+

Machine learning has evolved substantially from its academic origins, where privacy was not a primary concern. As ML migrated into commercial and consumer applications, the data became more sensitive - encompassing personal information like communications, purchases, and health data. This explosion of data availability fueled rapid advancements in ML capabilities. However, it also exposed new privacy risks, as demonstrated by incidents like the AOL data leak in 2006 and the Cambridge Analytica scandal.

+

These events highlighted the growing need to address privacy in ML systems. In this chapter, we explore privacy and security considerations together, as they are inherently linked in ML:

+
    +
  • Privacy refers to controlling access to sensitive user data, such as financial information or biometric data collected by an ML application.

  • +
  • Security protects ML systems and data from hacking, theft, and misuse.

  • +
+

For example, an ML-powered home security camera must secure video feeds against unauthorized access and provide privacy protections to ensure only intended users can view the footage. A breach of either security or privacy could expose private user moments.

+

Embedded ML systems like smart assistants and wearables are ubiquitous and process intimate user data. However, their computational constraints often prevent heavy security protocols. Designers must balance performance needs with rigorous security and privacy standards tailored to embedded hardware limitations.

+

This chapter provides essential knowledge for addressing the complex privacy and security landscape of embedded ML. We will explore vulnerabilities and cover various techniques that enhance privacy and security within embedded systems’ resource constraints.

+

We hope that by building a holistic understanding of risks and safeguards, you will gain the principles to develop secure, ethical, embedded ML applications.

+
+
+

14.2 Terminology

+

In this chapter, we will discuss security and privacy together, so there are key terms that we need to be clear about.

+
    +
  • Privacy: Consider an ML-powered home security camera that identifies and records potential threats. This camera records identifiable information, including faces, of individuals approaching and potentially entering this home. Privacy concerns may surround who can access this data.

  • +
  • Security: Consider an ML-powered home security camera that identifies and records potential threats. The security aspect would ensure that hackers cannot access these video feeds and recognition models.

  • +
  • Threat: Using our home security camera example, a threat could be a hacker trying to access live feeds or stored videos or using false inputs to trick the system.

  • +
  • Vulnerability: A common vulnerability might be a poorly secured network through which the camera connects to the internet, which could be exploited to access the data.

  • +
+
+
+

14.3 Historical Precedents

+

While the specifics of machine learning hardware security can be distinct, the embedded systems field has a history of security incidents that provide critical lessons for all connected systems, including those using ML. Here are detailed explorations of past breaches:

+
+

14.3.1 Stuxnet

+

In 2010, something unexpected was found on a computer in Iran - a very complicated computer virus that experts had never seen before. Stuxnet was a malicious computer worm that targeted supervisory control and data acquisition (SCADA) systems and was designed to damage Iran’s nuclear program (Farwell and Rohozinski 2011). Stuxnet used four “zero-day exploits” - attacks that take advantage of secret weaknesses in software that no one knows about yet. This made Stuxnet very sneaky and hard to detect.

+
+Farwell, James P., and Rafal Rohozinski. 2011. “Stuxnet and the Future of Cyber War.” Survival 53 (1): 23–40. https://doi.org/10.1080/00396338.2011.555586. +

But Stuxnet wasn’t designed to steal information or spy on people. Its goal was physical destruction - to sabotage centrifuges at Iran’s Natanz nuclear plant! So how did the virus get onto computers at the Natanz plant, which was supposed to be disconnected from the outside world for security? Experts think someone inserted a USB stick containing Stuxnet into the internal Natanz network. This allowed the virus to “jump” from an outside system onto the isolated nuclear control systems and wreak havoc.

+

Stuxnet was incredibly advanced malware built by national governments to cross from the digital realm into real-world infrastructure. In a way never done before, it specifically targeted important industrial machines, a domain where embedded machine learning is highly applicable. The virus provided a wake-up call about how sophisticated cyberattacks could now physically destroy equipment and facilities.

+

This breach was significant due to its sophistication; Stuxnet specifically targeted programmable logic controllers (PLCs) used to automate electromechanical processes such as the speed of centrifuges for uranium enrichment. The worm exploited vulnerabilities in the Windows operating system to gain access to the Siemens Step7 software controlling the PLCs. Despite not being a direct attack on ML systems, Stuxnet is relevant for all embedded systems as it showcases the potential for state-level actors to design attacks that bridge the cyber and physical worlds with devastating effects.

+
+
+

14.3.2 Jeep Cherokee Hack

+

The Jeep Cherokee hack was a groundbreaking event demonstrating the risks inherent in increasingly connected automobiles (Miller 2019). In a controlled demonstration, security researchers remotely exploited a vulnerability in the Uconnect entertainment system, which had a cellular connection to the internet. They were able to control the vehicle’s engine, transmission, and brakes, alarming the automotive industry into recognizing the severe safety implications of cyber vulnerabilities in vehicles.

+
+Miller, Charlie. 2019. “Lessons Learned from Hacking a Car.” IEEE Design & Test 36 (6): 7–9. https://doi.org/10.1109/mdat.2018.2863106. +

The video below is a short documentary of the attack.

+
+

While this wasn’t an attack on an ML system per se, the reliance of modern vehicles on embedded systems for safety-critical functions has significant parallels to the deployment of ML in embedded systems, underscoring the need for robust security at the hardware level.

+
+
+

14.3.3 Mirai Botnet

+

The Mirai botnet involved the infection of networked devices such as digital cameras and DVR players (Antonakakis et al. 2017). In October 2016, the botnet was used to conduct one of the largest DDoS attacks, disrupting internet access across the United States. The attack was possible because many devices used default usernames and passwords, which were easily exploited by the Mirai malware to control the devices.

+
+Antonakakis, Manos, Tim April, Michael Bailey, Matt Bernhard, Elie Bursztein, Jaime Cochran, Zakir Durumeric, et al. 2017. “Understanding the Mirai Botnet.” In 26th USENIX Security Symposium (USENIX Security 17), 1093–1110. +

The following video presentation explains how the Mirai Botnet works.

+
+

Although the devices were not ML-based, the incident is a stark reminder of what can happen when numerous embedded devices with poor security controls are networked, which is becoming more common with the growth of ML-based IoT devices.

+
+
+

14.3.4 Implications

+

These historical breaches demonstrate the cascading effects of hardware vulnerabilities in embedded systems. Each incident offers a precedent for understanding the risks and designing better security protocols. For instance, the Mirai botnet highlights the immense destructive potential when threat actors can gain control over networked devices with weak security, a situation becoming increasingly common with ML systems. Many current ML devices function as “edge” devices meant to collect and process data locally before sending it to the cloud. Much like the cameras and DVRs compromised by Mirai, edge ML devices often rely on embedded hardware like ARM processors and run lightweight OSes like Linux. Securing the device credentials is critical.

+

Similarly, the Jeep Cherokee hack was a watershed moment for the automotive industry. It exposed serious vulnerabilities in the growing network-connected vehicle systems and their lack of isolation from core drive systems like brakes and steering. In response, auto manufacturers invested heavily in new cybersecurity measures, though gaps likely remain.

+

Chrysler did a recall to patch the vulnerable Uconnect software, allowing the remote exploit. This included adding network-level protections to prevent unauthorized external access and compartmentalizing in-vehicle systems to limit lateral movement. Additional layers of encryption were added for commands sent over the CAN bus within vehicles.

+

The incident also spurred the creation of new cybersecurity standards and best practices. The Auto-ISAC was established for automakers to share intelligence, and the NHTSA issued guidance on managing cybersecurity risks. New testing and audit procedures were developed to assess vulnerabilities proactively. The aftereffects continue to drive change in the automotive industry as cars become increasingly software-defined.

+

Unfortunately, manufacturers often overlook security in the rush to develop new ML edge devices - using default passwords, unencrypted communications, unsecured firmware updates, etc. Any such vulnerabilities could allow attackers to gain access and control devices at scale by infecting them with malware. With a botnet of compromised ML devices, attackers could leverage their aggregated computational power for DDoS attacks on critical infrastructure.

+

While these events didn’t directly involve machine learning hardware, the principles of the attacks carry over to ML systems, which often involve similar embedded devices and network architectures. As ML hardware often operates in continuous interaction with the physical world, securing it against such breaches is paramount. The evolution of security measures in response to these incidents provides valuable insights into protecting current and future ML systems from analogous vulnerabilities.

+

The distributed nature of ML edge devices means threats can propagate quickly across networks. And if devices are being used for mission-critical purposes like medical devices, industrial controls, or self-driving vehicles, the potential physical damage from weaponized ML bots could be severe. Just like Mirai demonstrated the dangerous potential of poorly secured IoT devices, the litmus test for ML hardware security will be how vulnerable or resilient these devices are to worm-like attacks. The stakes are raised as ML spreads to safety-critical domains, putting the onus on manufacturers and system operators to incorporate the lessons from Mirai.

+

The lesson is the importance of designing for security from the outset and having layered defenses. For ML systems, the Jeep case highlights potential blindspots around externally facing software interfaces and isolation between subsystems. Manufacturers of ML devices and platforms should assume a similar proactive and comprehensive approach to security rather than leaving it as an afterthought. Rapid response and dissemination of best practices will be key as threats evolve.

+
+
+
+

14.4 Security Threats to ML Models

+

ML models face security risks that can undermine their integrity, performance, and trustworthiness if not properly addressed. While there are several different threats, the key ones include: model theft, where adversaries steal the proprietary model parameters and the sensitive data they contain; data poisoning, which compromises models through tampered training data; and adversarial attacks, which deceive the model into making incorrect or unwanted predictions.

+
+

14.4.1 Model Theft

+

Model theft occurs when an attacker gains unauthorized access to a deployed ML model. The concern here is the theft of the model’s structure and trained parameters and the proprietary data it contains (Ateniese et al. 2015). Model theft is a real and growing threat, as demonstrated by cases like ex-Google engineer Anthony Levandowski, who allegedly stole Waymo’s self-driving car designs and started a competing company. Beyond economic impacts, model theft can seriously undermine privacy and enable further attacks.

+
+Ateniese, Giuseppe, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici. 2015. “Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers.” Int. J. Secur. Netw. 10 (3): 137. https://doi.org/10.1504/ijsn.2015.071829. +

For instance, consider an ML model developed for personalized recommendations in an e-commerce application. If a competitor steals this model, they gain insights into business analytics, customer preferences, and even trade secrets embedded within the model’s data. Attackers could leverage stolen models to craft more effective inputs for model inversion attacks, deducing private details about the model’s training data. A cloned e-commerce recommendation model could reveal customer purchase behaviors and demographics.

+

To understand model inversion attacks, consider a facial recognition system used to grant access to secured facilities. The system is trained on a dataset of employee photos. An attacker could infer features of the original dataset by observing the model’s output to various inputs. For example, suppose the model’s confidence level for a particular face is significantly higher for a given set of features. In that case, an attacker might deduce that someone with those features is likely in the training dataset.

+

The methodology of model inversion typically involves the following steps:

+
    +
  • Accessing Model Outputs: The attacker queries the ML model with input data and observes the outputs. This is often done through a legitimate interface, like a public API.

  • +
  • Analyzing Confidence Scores: For each input, the model provides a confidence score that reflects how similar the input is to the training data.

  • +
  • Reverse-Engineering: By analyzing the confidence scores or output probabilities, attackers can use optimization techniques to reconstruct what they believe is close to the original input data.

  • +
+
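The steps above can be sketched end to end with a deliberately simplified toy "model" whose confidence score leaks its distance to a secret training example. The coordinate search stands in for the optimization step, and nothing here reflects a real face-recognition API:

```python
# Toy model inversion sketch: the attacker only sees confidence scores from
# queries, yet can search over inputs that raise confidence until the guess
# approximately recovers the (secret) training example. Everything here is
# hypothetical and deliberately simplified.

SECRET = [0.2, 0.8, 0.5, 0.9]  # stands in for features of one training face

def model_confidence(query):
    """Attacker-visible output: higher when the query resembles the secret."""
    dist = sum((q - s) ** 2 for q, s in zip(query, SECRET))
    return 1.0 / (1.0 + dist)

def invert(dims, iters=200, step=0.01):
    """Greedy coordinate search that only uses confidence scores."""
    guess = [0.0] * dims
    for _ in range(iters):
        for i in range(dims):
            for delta in (step, -step):
                trial = guess[:]
                trial[i] += delta
                if model_confidence(trial) > model_confidence(guess):
                    guess = trial   # keep moves that raise confidence
                    break
    return guess

recovered = invert(len(SECRET))
error = max(abs(r - s) for r, s in zip(recovered, SECRET))
# Repeated queries drive the reconstruction toward the secret features.
```

The defense implications follow directly: rate-limiting queries, rounding or withholding confidence scores, and differential privacy all attack different steps of this loop.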

One historical example of such a vulnerability being explored was the research on inversion attacks against the Netflix Prize dataset, where researchers demonstrated that it was possible to learn about an individual’s movie preferences, which could lead to privacy breaches (Narayanan and Shmatikov 2006).

+
+Narayanan, Arvind, and Vitaly Shmatikov. 2006. “How to Break Anonymity of the Netflix Prize Dataset.” arXiv Preprint Cs/0610105. +

Model theft implies that it could lead to economic losses, undermine competitive advantage, and violate user privacy. There’s also the risk of model inversion attacks, where an adversary could input various data into the stolen model to infer sensitive information about the training data.

+

Based on the desired asset, model theft attacks can be divided into two categories: exact model properties and approximate model behavior.

+
+
Stealing Exact Model Properties
+

In these attacks, the objective is to extract information about concrete metrics, such as a network’s learned parameters, fine-tuned hyperparameters, and the model’s internal layer architecture (Oliynyk, Mayer, and Rauber 2023).

+
    +
  • Learned Parameters: Adversaries aim to steal a model’s learned knowledge (weights and biases) in order to replicate it. Parameter theft is generally used in conjunction with other attacks, such as architecture theft, which lacks parameter knowledge.

  • +
  • Fine-Tuned Hyperparameters: Training is costly, and finding the right configuration of hyperparameters (such as the learning rate and regularization) can be a very long and expensive process. Thus, stealing an optimized model’s hyperparameters can allow an adversary to replicate the model without the high training costs.

  • +
  • Model Architecture: This attack concerns the specific design and structure of the model, such as layers, neurons, and connectivity patterns. Aside from reducing associated training costs, this type of theft is especially dangerous because it concerns core IP theft, which can affect a company’s competitive edge. Architecture theft can be achieved by exploiting side-channel attacks (discussed later).

  • +
+
+
+
Stealing Approximate Model Behavior
+

Instead of focusing on extracting exact numerical values of the model’s parameters, these attacks aim to reproduce the model’s behavior (predictions and effectiveness), decision-making, and high-level characteristics (Oliynyk, Mayer, and Rauber 2023). These techniques aim to achieve similar outcomes while allowing for internal deviations in parameters and architecture. Types of approximate behavior theft include achieving the same level of effectiveness and obtaining prediction consistency.

+
+Oliynyk, Daryna, Rudolf Mayer, and Andreas Rauber. 2023. “I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences.” ACM Comput. Surv. 55 (14s): 1–41. https://doi.org/10.1145/3595292. +
    +
  • Level of Effectiveness: Attackers aim to replicate the model’s decision-making capabilities rather than focus on the precise parameter values. This is done through understanding the overall behavior of the model. Consider a scenario where an attacker wants to copy the behavior of an image classification model. By analyzing the model’s decision boundaries, the attacker tunes their model to reach an effectiveness comparable to the original model. This could entail analyzing 1) the confusion matrix to understand the balance of prediction metrics (true positive, true negative, false positive, false negative) and 2) other performance metrics, such as F1 score and precision, to ensure that the two models are comparable.

  • +
  • Prediction Consistency: The attacker tries to align their model’s prediction patterns with the target model’s. This involves matching prediction outputs (both positive and negative) on the same set of inputs and ensuring distributional consistency across different classes. For instance, consider a natural language processing (NLP) model that generates sentiment analysis for movie reviews (labeling reviews as positive, neutral, or negative). The attacker will try to fine-tune their model to match the predictions of the original model on the same set of movie reviews. This includes ensuring that their model makes the same mistakes (mispredictions) that the targeted model makes.

  • +
+
+
+
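These two notions can be made concrete with a small sketch. The "victim" model, surrogate architecture, dataset, and metrics below are illustrative assumptions, not a real extraction target: the surrogate is trained on labels queried from the victim, then compared on effectiveness (F1) and prediction consistency (agreement rate).

```python
# Sketch: quantifying how closely a surrogate model tracks a target model's
# behavior. Everything here (models, data) is a toy stand-in for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Victim" model the attacker can only query.
victim = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker queries the victim and trains a surrogate on its *outputs*,
# not on the true labels.
stolen_labels = victim.predict(X_train)
surrogate = DecisionTreeClassifier(random_state=0).fit(X_train, stolen_labels)

# Level of effectiveness: compare aggregate metrics on held-out data.
f1_victim = f1_score(y_test, victim.predict(X_test))
f1_surrogate = f1_score(y_test, surrogate.predict(X_test))

# Prediction consistency: fraction of inputs where the two models agree,
# which includes agreeing on the same mistakes.
agreement = np.mean(victim.predict(X_test) == surrogate.predict(X_test))
print(f"victim F1={f1_victim:.2f} surrogate F1={f1_surrogate:.2f} "
      f"agreement={agreement:.2f}")
```

A high agreement rate with comparable F1 indicates successful approximate behavior theft even though the surrogate's parameters and architecture differ entirely from the victim's.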

Case Study

+

In 2018, Tesla filed a lawsuit against self-driving car startup Zoox, alleging former employees stole confidential data and trade secrets related to Tesla’s autonomous driving assistance system.

+

Tesla claimed that several of its former employees took over 10 GB of proprietary data, including ML models and source code, before joining Zoox. This allegedly included one of Tesla’s crucial image recognition models for identifying objects.

+

The theft of this sensitive proprietary model could help Zoox shortcut years of ML development and duplicate Tesla’s capabilities. Tesla argued this theft of I.P. caused major financial and competitive harm. There were also concerns it could allow model inversion attacks to infer private details about Tesla’s testing data.

+

The Zoox employees denied stealing any proprietary information. However, the case highlights the significant risks of model theft—enabling the cloning of commercial models, causing economic impacts, and opening the door for further data privacy violations.

+
+
+
+

14.4.2 Data Poisoning

+

Data poisoning is an attack where the training data is tampered with, leading to a compromised model (Biggio, Nelson, and Laskov 2012). Attackers can modify existing training examples, insert new malicious data points, or influence the data collection process. The poisoned data is labeled in such a way as to skew the model’s learned behavior. This can be particularly damaging in applications where ML models make automated decisions based on learned patterns. Beyond training sets, poisoning test and validation data can allow adversaries to artificially boost reported model performance.

+
+Biggio, Battista, Blaine Nelson, and Pavel Laskov. 2012. “Poisoning Attacks Against Support Vector Machines.” In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress. http://icml.cc/2012/papers/880.pdf. +

The process usually involves the following steps:

+
    +
  • Injection: The attacker adds incorrect or misleading examples into the training set. These examples are often designed to look normal to cursory inspection but have been carefully crafted to disrupt the learning process.

  • +
  • Training: The ML model trains on this manipulated dataset and develops skewed understandings of the data patterns.

  • +
  • Deployment: Once the model is deployed, the corrupted training leads to flawed decision-making or predictable vulnerabilities the attacker can exploit.

  • +
+
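The injection, training, and deployment steps above can be sketched with a simple label-flipping attack. The dataset, model choice, and poisoning rate below are illustrative assumptions: labels of one targeted class are replaced with the other class’s label, and the resulting model’s accuracy on clean data degrades.

```python
# Sketch of injection -> training -> deployment with label flipping.
# All data and parameters are toy assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Injection: relabel 80% of the class-0 training examples as class 1.
rng = np.random.default_rng(1)
poisoned = y_train.copy()
zeros = np.flatnonzero(poisoned == 0)
flip = rng.choice(zeros, size=int(0.8 * len(zeros)), replace=False)
poisoned[flip] = 1

# Training: the model learns a skewed decision boundary from tampered labels.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# Deployment: accuracy on clean test data degrades.
acc_clean = clean_model.score(X_test, y_test)
acc_poisoned = poisoned_model.score(X_test, y_test)
print(f"clean-trained accuracy={acc_clean:.2f}, "
      f"poison-trained accuracy={acc_poisoned:.2f}")
```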

The impacts of data poisoning extend beyond just classification errors or accuracy drops. For instance, if incorrect or malicious data is introduced into a traffic sign recognition system’s training set, the model may learn to misclassify stop signs as yield signs, which can have dangerous real-world consequences, especially in embedded autonomous systems like autonomous vehicles.

+

Data poisoning can degrade a model’s accuracy, force it to make incorrect predictions or cause it to behave unpredictably. In critical applications like healthcare, such alterations can lead to significant trust and safety issues.

+

There are six main categories of data poisoning (Oprea, Singhal, and Vassilev 2022):

+
+Oprea, Alina, Anoop Singhal, and Apostol Vassilev. 2022. “Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?” Computer 55 (11): 94–99. https://doi.org/10.1109/mc.2022.3190787. +
    +
  • Availability Attacks: These attacks aim to compromise a model’s overall functionality. They cause it to misclassify most testing samples, rendering the model unusable for practical applications. An example is label flipping, where labels of a specific, targeted class are replaced with labels from a different one.

  • +
  • Targeted Attacks: In contrast to availability attacks, targeted attacks aim to compromise a small number of the testing samples. So, the effect is localized to a limited number of classes, while the model maintains the same original level of accuracy on the majority of the classes. The targeted nature of the attack requires the attacker to possess knowledge of the model’s classes, making detecting these attacks more challenging.

  • +
  • Backdoor Attacks: In these attacks, an adversary targets specific patterns in the data. The attacker introduces a backdoor (a malicious, hidden trigger or pattern) into the training data, such as manipulating certain features in structured data or manipulating a pattern of pixels at a fixed position. This causes the model to associate the malicious pattern with specific labels. As a result, when the model encounters test samples that contain the malicious pattern, it makes false predictions.

  • +
  • Subpopulation Attacks: Attackers selectively choose to compromise a subset of the testing samples while maintaining accuracy on the rest of the samples. You can think of these attacks as a combination of availability and targeted attacks: performing availability attacks (performance degradation) within the scope of a targeted subset. Although subpopulation attacks may seem very similar to targeted attacks, the two have clear differences:

  • +
  • Scope: While targeted attacks target a selected set of samples, subpopulation attacks target a general subpopulation with similar feature representations. For example, in a targeted attack, an actor inserts manipulated images of a ‘speed bump’ warning sign (with carefully crafted perturbations or patterns), which causes an autonomous car to fail to recognize such a sign and slow down. On the other hand, manipulating all samples of people with a British accent so that a speech recognition model would misclassify a British person’s speech is an example of a subpopulation attack.

  • +
  • Knowledge: While targeted attacks require a high degree of familiarity with the data, subpopulation attacks require less intimate knowledge to be effective.

  • +
+
+
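The backdoor category above can be illustrated with a small data-manipulation sketch. The toy 8x8 "images," trigger shape, and poisoning rate are all assumptions for illustration: a fixed pixel pattern is stamped into a fraction of the training set, and those examples are relabeled with the attacker’s target class.

```python
# Sketch of backdoor data poisoning: a fixed trigger pattern at a fixed
# position, with poisoned examples relabeled. Data shapes are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

def add_trigger(images):
    """Stamp a bright 3x3 patch into the bottom-right corner."""
    stamped = images.copy()
    stamped[:, -3:, -3:] = 1.0
    return stamped

images = rng.random((200, 8, 8))       # toy grayscale dataset in [0, 1)
labels = rng.integers(0, 2, size=200)  # ground-truth labels 0/1

# Poison 10% of the training set: add the trigger and force the label to 1.
n_poison = 20
poisoned_images = images.copy()
poisoned_labels = labels.copy()
poisoned_images[:n_poison] = add_trigger(images[:n_poison])
poisoned_labels[:n_poison] = 1

# A model trained on (poisoned_images, poisoned_labels) tends to learn the
# shortcut "trigger patch => class 1"; at inference time, stamping the patch
# onto any input steers the prediction toward the attacker's label.
print(f"poisoned {n_poison}/{len(images)} examples with a fixed pixel trigger")
```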

Case Study 1

+

In 2017, researchers demonstrated a data poisoning attack against a popular toxicity classification model called Perspective (Hosseini et al. 2017). This ML model detects toxic comments online.

+
+Hosseini, Hossein, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. “Deceiving Google’s Perspective Api Built for Detecting Toxic Comments.” ArXiv Preprint abs/1702.08138. https://arxiv.org/abs/1702.08138. +

The researchers added synthetically generated toxic comments with slight misspellings and grammatical errors to the model’s training data. This slowly corrupted the model, causing it to misclassify increasing numbers of severely toxic inputs as non-toxic over time.

+

After retraining on the poisoned data, the model’s false negative rate increased from 1.4% to 27%, allowing extremely toxic comments to bypass detection. The researchers warned this stealthy data poisoning could enable the spread of hate speech, harassment, and abuse if deployed against real moderation systems.

+

This case highlights how data poisoning can degrade model accuracy and reliability. For social media platforms, a poisoning attack that impairs toxicity detection could lead to the proliferation of harmful content and distrust of ML moderation systems. The example demonstrates why securing training data integrity and monitoring for poisoning is critical across application domains.

+
+
+

Case Study 2

+

Interestingly enough, data poisoning attacks are not always malicious (Shan et al. 2023). Nightshade, a tool developed by a team led by Professor Ben Zhao at the University of Chicago, utilizes data poisoning to help artists protect their art against scraping and copyright violations by generative A.I. models. Artists can use the tool to modify their images subtly before uploading them online.

+

While these changes are indiscernible to the human eye, they can significantly disrupt the performance of generative A.I. models when incorporated into the training data. Generative models can be manipulated into producing hallucinated and distorted images. For example, with only 300 poisoned images, the University of Chicago researchers could trick the latest Stable Diffusion model into generating images of dogs that look like cats or images of cows when prompted for cars.

+

As the number of poisoned images on the internet increases, the performance of the models that use scraped data will deteriorate exponentially. First, the poisoned data is hard to detect and requires manual elimination. Second, the “poison” spreads quickly to other labels because generative models rely on connections between words and concepts as they generate images. So a poisoned image of a “car” could spread into generated images associated with words like “truck,” “train,” “bus,” etc.

+

On the other hand, this tool can be used maliciously and can affect legitimate applications of the generative models. This shows the very challenging and novel nature of machine learning attacks.

+

Figure fig-poisoning demonstrates the effects of different levels of data poisoning (50 samples, 100 samples, and 300 samples of poisoned images) on generating images in different categories. Notice how the images start deforming and deviating from the desired category. For example, after 300 poison samples, a car prompt generates a cow.

+
+
+
+ +
+
+Figure 14.1: Data poisoning. Credit: Shan et al. (2023). +
+
+Shan, Shawn, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y Zhao. 2023. “Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models.” ArXiv Preprint abs/2310.13828. https://arxiv.org/abs/2310.13828. +
+
+
+
+
+

14.4.3 Adversarial Attacks

+

Adversarial attacks aim to trick models into making incorrect predictions by providing them with specially crafted, deceptive inputs (called adversarial examples) (Parrish et al. 2023). By adding slight perturbations to input data, adversaries can “hack” a model’s pattern recognition and deceive it. These are sophisticated techniques where slight, often imperceptible alterations to input data can trick an ML model into making a wrong prediction.

+
+Parrish, Alicia, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, et al. 2023. “Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models.” ArXiv Preprint abs/2305.14384. https://arxiv.org/abs/2305.14384. +
+Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. “Zero-Shot Text-to-Image Generation.” In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, edited by Marina Meila and Tong Zhang, 139:8821–31. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v139/ramesh21a.html. +
+Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. https://doi.org/10.1109/cvpr52688.2022.01042. +

One can generate prompts that lead to unsafe images in text-to-image models like DALLE (Ramesh et al. 2021) or Stable Diffusion (Rombach et al. 2022). For example, by altering the pixel values of an image, attackers can deceive a facial recognition system into identifying a face as a different person.

+

Adversarial attacks exploit the way ML models learn and make decisions during inference. These models work on the principle of recognizing patterns in data. An adversary crafts special inputs with perturbations to mislead the model’s pattern recognition—essentially ‘hacking’ the model’s perceptions.

+

Adversarial attacks fall under different scenarios:

+
    +
  • Whitebox Attacks: The attacker has full knowledge of the target model’s internal workings, including the training data, parameters, and architecture. This comprehensive access creates favorable conditions for exploiting the model’s vulnerabilities. The attacker can use specific and subtle weaknesses to craft effective adversarial examples.

  • +
  • Blackbox Attacks: In contrast to whitebox attacks, in blackbox attacks, the attacker has little to no knowledge of the target model. To carry out the attack, the adversarial actor must carefully observe the model’s output behavior.

  • +
  • Greybox Attacks: These fall between blackbox and whitebox attacks. The attacker has only partial knowledge about the target model’s internal design. For example, the attacker could have knowledge about the training data but not the architecture or parameters. In the real world, practical attacks typically fall into the blackbox or greybox categories.

  • +
+
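A minimal whitebox example is the fast gradient sign method (FGSM), which perturbs an input by a small step in the direction of the sign of the loss gradient with respect to that input. The logistic regression target below is a toy assumption chosen because its cross-entropy gradient has the closed form (sigmoid(w·x + b) − y)·w:

```python
# Whitebox FGSM sketch against a toy logistic regression "target" model.
# Model, data, and epsilon are illustrative assumptions, not a real attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's probability of class 1
    grad = (p - label) * w                  # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad)

X_adv = np.array([fgsm(x, label, eps=0.5) for x, label in zip(X, y)])
acc_clean = model.score(X, y)
acc_adv = model.score(X_adv, y)
print(f"clean accuracy={acc_clean:.2f}, adversarial accuracy={acc_adv:.2f}")
```

Because the attacker knows the exact weights, each perturbation pushes the input directly across the model’s decision boundary, which is precisely the advantage whitebox access confers.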

The landscape of machine learning models is complex and broad, especially given their relatively recent integration into commercial applications. This rapid adoption, while transformative, has brought to light numerous vulnerabilities within these models. Consequently, various adversarial attack methods have emerged, each strategically exploiting different aspects of different models. Below, we highlight a subset of these methods, showcasing the multifaceted nature of adversarial attacks on machine learning models:

+
    +
  • Generative Adversarial Networks (GANs) are deep learning models consisting of two networks competing against each other: a generator and a discriminator (Goodfellow et al. 2020). The generator tries to synthesize realistic data while the discriminator evaluates whether they are real or fake. GANs can be used to craft adversarial examples. The generator network is trained to produce inputs that the target model misclassifies. These GAN-generated images can then attack a target classifier or detection model. The generator and the target model are engaged in a competitive process, with the generator continually improving its ability to create deceptive examples and the target model enhancing its resistance to such examples. GANs provide a powerful framework for crafting complex and diverse adversarial inputs, illustrating the adaptability of generative models in the adversarial landscape.

  • +
  • Transfer Learning Adversarial Attacks exploit the knowledge transferred from a pre-trained model to a target model, creating adversarial examples that can deceive both models. These attacks pose a growing concern, particularly when adversaries have knowledge of the feature extractor but lack access to the classification head (the part or layer responsible for making the final classifications). Referred to as “headless attacks,” these transferable adversarial strategies leverage the expressive capabilities of feature extractors to craft perturbations while remaining oblivious to the label space or training data. The existence of such attacks underscores the importance of developing robust defenses for transfer learning applications, especially since pre-trained models are commonly used (Abdelkader et al. 2020).

  • +
+
+Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Commun. ACM 63 (11): 139–44. https://doi.org/10.1145/3422622. +
+Abdelkader, Ahmed, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, and Chen Zhu. 2020. “Headless Horseman: Adversarial Attacks on Transfer Learning Models.” In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3087–91. IEEE. https://doi.org/10.1109/icassp40776.2020.9053181. +
+

Case Study

+

In 2017, researchers conducted experiments by placing small black and white stickers on stop signs (Eykholt et al. 2017). When viewed by a normal human eye, the stickers did not obscure the sign or prevent interpretability. However, when images of the stickered stop signs were fed into standard traffic sign classification ML models, they were misclassified as speed limit signs over 85% of the time.

+
+Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2017. “Robust Physical-World Attacks on Deep Learning Models.” ArXiv Preprint abs/1707.08945. https://arxiv.org/abs/1707.08945. +

This demonstration showed how simple adversarial stickers could trick ML systems into misreading critical road signs. If deployed in the real world, these attacks could endanger public safety, causing autonomous vehicles to misinterpret stop signs as speed limits. Researchers warned this could potentially cause dangerous rolling stops or acceleration into intersections.

+

This case study provides a concrete illustration of how adversarial examples exploit how ML models recognize patterns. By subtly manipulating the input data, attackers can induce incorrect predictions and create serious risks for safety-critical applications like self-driving cars. The attack’s simplicity shows how even minor changes imperceptible to humans can lead models astray. Developers need robust defenses against such threats.

+
+
+
+
+

14.5 Security Threats to ML Hardware

+

Discussing the threats to embedded ML hardware security in a structured order is useful for a clear and in-depth understanding of the potential pitfalls of ML systems. We will begin with hardware bugs. We address the issues where intrinsic design flaws in the hardware can be a gateway to exploitation. This forms the fundamental knowledge required to understand the genesis of hardware vulnerabilities. Moving to physical attacks establishes the basic threat model, as these are the most overt and direct methods of compromising hardware integrity. Fault-injection attacks naturally extend this discussion, showing how specific manipulations can induce systematic failures.

+

Advancing to side-channel attacks next will show the increasing complexity, as these rely on exploiting indirect information leakages, requiring a nuanced understanding of hardware operations and environmental interactions. Leaky interfaces will show how external communication channels can become vulnerable, leading to accidental data exposures. Counterfeit hardware discussions benefit from prior explorations of hardware integrity and exploitation techniques, as they often compound these issues with additional risks due to their questionable provenance. Finally, supply chain risks encompass all concerns above and frame them within the context of the hardware’s journey from production to deployment, highlighting the multifaceted nature of hardware security and the need for vigilance at every stage.

+

Table tbl-threat_types provides an overview summarizing these topics:

+
+
+
+Table 14.1: Threat types on hardware security. +
+
Threat Type | Description | Relevance to Embedded ML Hardware Security
Hardware Bugs | Intrinsic flaws in hardware designs that can compromise system integrity. | Foundation of hardware vulnerability.
Physical Attacks | Direct exploitation of hardware through physical access or manipulation. | Basic and overt threat model.
Fault-injection Attacks | Induction of faults to cause errors in hardware operation, leading to potential system compromise. | Systematic manipulation leading to failure.
Side-Channel Attacks | Exploitation of leaked information from hardware operation to extract sensitive data. | Indirect attack via environmental observation.
Leaky Interfaces | Vulnerabilities arising from interfaces that expose data unintentionally. | Data exposure through communication channels.
Counterfeit Hardware | Use of unauthorized hardware components that may have security flaws. | Compounded vulnerability issues.
Supply Chain Risks | Risks introduced through the hardware lifecycle, from production to deployment. | Cumulative and multifaceted security challenges.
+
+
+
+
+

14.5.1 Hardware Bugs

+

Hardware is not immune to the pervasive issue of design flaws or bugs. Attackers can exploit these vulnerabilities to access, manipulate, or extract sensitive data, breaching the confidentiality and integrity that users and services depend on. An example of such vulnerabilities came to light with the discovery of Meltdown and Spectre, two hardware vulnerabilities that exploit critical flaws in modern processors. These bugs let attackers bypass the hardware barrier that separates applications, allowing a malicious program to read the memory of other programs and the operating system.

+

Meltdown (Kocher et al. 2019a) and Spectre (Kocher et al. 2019b) work by taking advantage of optimizations in modern CPUs that allow them to speculatively execute instructions out of order before validity checks have been completed. This reveals data that should be inaccessible, which the attack captures through side channels like caches. The technical complexity demonstrates the difficulty of eliminating vulnerabilities even with extensive validation.

+
+———, et al. 2019a. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP). IEEE. https://doi.org/10.1109/sp.2019.00002. +
+Kocher, Paul, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, et al. 2019b. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP). IEEE. https://doi.org/10.1109/sp.2019.00002. +

If an ML system is processing sensitive data, such as personal user information or proprietary business analytics, Meltdown and Spectre represent a real and present danger to data security. Consider the case of an ML accelerator card designed to speed up machine learning processes, such as the ones we discussed in the A.I. Hardware chapter. These accelerators work with the CPU to handle complex calculations, often related to data analytics, image recognition, and natural language processing. If such an accelerator card has a vulnerability akin to Meltdown or Spectre, it could leak the data it processes. An attacker could exploit this flaw not just to siphon off data but also to gain insights into the ML model’s workings, including potentially reverse-engineering the model itself (thus going back to the issue of model theft).

+

A real-world scenario where this could be devastating would be in the healthcare industry. ML systems routinely process highly sensitive patient data to help diagnose, plan treatment, and forecast outcomes. A bug in the system’s hardware could lead to the unauthorized disclosure of personal health information, violating patient privacy and contravening strict regulatory standards like the Health Insurance Portability and Accountability Act (HIPAA).

+

The Meltdown and Spectre vulnerabilities are stark reminders that hardware security is not just about preventing unauthorized physical access but also about ensuring that the hardware’s architecture does not become a conduit for data exposure. Similar hardware design flaws regularly emerge in CPUs, accelerators, memory, buses, and other components. This necessitates ongoing retroactive mitigations and performance tradeoffs in deployed systems. Proactive solutions like confidential computing architectures could mitigate entire classes of vulnerabilities through fundamentally more secure hardware design. Thwarting hardware bugs requires rigor at every design stage, validation, and deployment.

+
+
+

14.5.2 Physical Attacks

+

Physical tampering refers to the direct, unauthorized manipulation of physical computing resources to undermine the integrity of machine learning systems. It’s a particularly insidious attack because it circumvents traditional cybersecurity measures, which often focus more on software vulnerabilities than hardware threats.

+

Physical tampering can take many forms, from the relatively simple, such as someone inserting a USB device loaded with malicious software into a server, to the highly sophisticated, such as embedding a hardware Trojan during the manufacturing process of a microchip (discussed later in greater detail in the Supply Chain section). ML systems are susceptible to this attack because they rely on the accuracy and integrity of their hardware to process and analyze vast amounts of data correctly.

+

Consider an ML-powered drone used for geographical mapping. The drone’s operation relies on a series of onboard systems, including a navigation module that processes inputs from various sensors to determine its path. If an attacker gains physical access to this drone, they could replace the genuine navigation module with a compromised one that includes a backdoor. This manipulated module could then alter the drone’s flight path to conduct surveillance over restricted areas or even smuggle contraband by flying undetected routes.

+

Another example is the physical tampering of biometric scanners used for access control in secure facilities. By introducing a modified sensor that transmits biometric data to an unauthorized receiver, an attacker can access personal identification data to authenticate individuals.

+

There are several ways that physical tampering can occur in ML hardware:

+
    +
  • Manipulating sensors: Consider an autonomous vehicle that relies on cameras and LiDAR for situational awareness. An attacker could carefully calibrate the physical alignment of these sensors to introduce blindspots or distort critical distances. This could impair object detection and endanger passengers.

  • +
  • Hardware trojans: Malicious circuit modifications can introduce trojans that activate under certain inputs. For example, an ML accelerator chip could function normally until a rare trigger case occurs, causing it to accelerate unsafely.

  • +
  • Tampering with memory: Physically exposing and manipulating memory chips could allow the extraction of encrypted ML model parameters. Fault injection techniques can also corrupt model data to degrade accuracy.

  • +
  • Introducing backdoors: Gaining physical access to servers, an adversary could use hardware keyloggers to capture passwords and create backdoor accounts for persistent access. These could then be used to exfiltrate ML training data over time.

  • +
  • Supply chain attacks: Manipulating third-party hardware components or compromising manufacturing and shipping channels creates systemic vulnerabilities that are difficult to detect and remediate.

  • +
+
+
+

14.5.3 Fault-injection Attacks

+

By intentionally introducing faults into ML hardware, attackers can induce errors in the computational process, leading to incorrect outputs. This manipulation compromises the integrity of ML operations and can serve as a vector for further exploitation, such as system reverse engineering or security protocol bypass. Fault injection involves intentionally disrupting normal computations in a system through external interference (Joye and Tunstall 2012). By precisely triggering computational errors, adversaries can alter program execution in ways that degrade reliability or leak sensitive information.

+
+Joye, Marc, and Michael Tunstall. 2012. Fault Analysis in Cryptography. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29656-7. +
+Barenghi, Alessandro, Guido M. Bertoni, Luca Breveglieri, Mauro Pellicioli, and Gerardo Pelosi. 2010. “Low Voltage Fault Attacks to AES.” In 2010 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), 7–12. IEEE. https://doi.org/10.1109/hst.2010.5513121. +
+Hutter, Michael, Jorn-Marc Schmidt, and Thomas Plos. 2009. “Contact-Based Fault Injections and Power Analysis on RFID Tags.” In 2009 European Conference on Circuit Theory and Design, 409–12. IEEE. https://doi.org/10.1109/ecctd.2009.5275012. +
+Amiel, Frederic, Christophe Clavier, and Michael Tunstall. 2006. “Fault Analysis of DPA-Resistant Algorithms.” In International Workshop on Fault Diagnosis and Tolerance in Cryptography, 223–36. Springer. +
+Agrawal, Dakshi, Selcuk Baktir, Deniz Karakoyunlu, Pankaj Rohatgi, and Berk Sunar. 2007. “Trojan Detection Using IC Fingerprinting.” In 2007 IEEE Symposium on Security and Privacy (SP ’07), 29–45. IEEE. https://doi.org/10.1109/sp.2007.36. +
+Skorobogatov, Sergei. 2009. “Local Heating Attacks on Flash Memory Devices.” In 2009 IEEE International Workshop on Hardware-Oriented Security and Trust, 1–6. IEEE. https://doi.org/10.1109/hst.2009.5225028. +
+Skorobogatov, Sergei P, and Ross J Anderson. 2003. “Optical Fault Induction Attacks.” In Cryptographic Hardware and Embedded Systems - CHES 2002: 4th International Workshop, Redwood Shores, CA, USA, August 13–15, 2002, Revised Papers, 2–12. Springer. +

Various physical tampering techniques can be used for fault injection. Low voltage (Barenghi et al. 2010), power spikes (Hutter, Schmidt, and Plos 2009), clock glitches (Amiel, Clavier, and Tunstall 2006), electromagnetic pulses (Agrawal et al. 2007), temperature increases (S. Skorobogatov 2009), and laser strikes (S. P. Skorobogatov and Anderson 2003) are common hardware attack vectors. They are precisely timed to induce faults like flipped bits or skipped instructions during key operations.

+

For ML systems, consequences include impaired model accuracy, denial of service, extraction of private training data or model parameters, and reverse engineering of model architectures. Attackers could use fault injection to force misclassifications, disrupt autonomous systems, or steal intellectual property.

+

For example, in (Breier et al. 2018), the authors successfully injected a fault attack into a deep neural network deployed on a microcontroller. They used a laser to heat specific transistors, forcing them to switch states. In one instance, they used this method to attack a ReLU activation function, resulting in the function always outputting a value of 0, regardless of the input. In the assembly code in Figure fig-injection, the attack caused the executing program to always skip the jmp end instruction on line 6. This means that HiddenLayerOutput[i] is always set to 0, overwriting any values written to it on lines 4 and 5. As a result, the targeted neurons are rendered inactive, resulting in misclassifications.

+
+
+
+ +
+
+Figure 14.2: Fault-injection demonstrated with assembly code. Credit: Breier et al. (2018). +
+
+Breier, Jakub, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, and Yang Liu. 2018. “Deeplaser: Practical Fault Attack on Deep Neural Networks.” ArXiv Preprint abs/1806.05859. https://arxiv.org/abs/1806.05859. +
+
+

An attacker’s strategy could be to infer information about the activation functions using side-channel attacks (discussed next). Then, the attacker could attempt to target multiple activation function computations by randomly injecting faults into the layers as close to the output layer as possible, increasing the likelihood and impact of the attack.

+
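The effect of the ReLU fault described above can be simulated in software. The tiny random network below is an illustrative assumption; zeroing the hidden activations stands in for the skipped-instruction fault that overwrites the layer’s outputs with 0:

```python
# Software simulation of the fault: silenced ReLU activations collapse every
# prediction to a constant class. The network and weights are toy assumptions.
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 3)), rng.normal(size=3)

def forward(x, faulty=False):
    hidden = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
    if faulty:
        hidden = np.zeros_like(hidden)     # fault: activations stuck at 0
    return np.argmax(hidden @ W2 + b2, axis=-1)

x = rng.normal(size=(100, 4))
normal_preds = forward(x)
faulty_preds = forward(x, faulty=True)

# With the hidden layer silenced, the output reduces to the bias term b2,
# so every input collapses to the same predicted class.
flips = np.mean(normal_preds != faulty_preds)
print(f"fraction of predictions changed by the fault: {flips:.2f}")
```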

Embedded devices are particularly vulnerable due to limited physical hardening and resource constraints that restrict robust runtime defenses. Without tamper-resistant packaging, attacker access to system buses and memory enables precise fault strikes. Lightweight embedded ML models also lack redundancy to overcome errors.

+

These attacks can be particularly insidious because they bypass traditional software-based security measures, often not accounting for physical disruptions. Furthermore, because ML systems rely heavily on the accuracy and reliability of their hardware for tasks like pattern recognition, decision-making, and automated responses, any compromise in their operation due to fault injection can have serious and wide-ranging consequences.

+

Mitigating fault injection risks necessitates a multilayer approach. Physical hardening through tamper-proof enclosures and design obfuscation helps reduce access. Lightweight anomaly detection can identify unusual sensor inputs or erroneous model outputs (Hsiao et al. 2023). Error-correcting memories minimize disruption, while data encryption safeguards information. Emerging model watermarking techniques trace stolen parameters.

Hsiao, Yu-Shun, Zishen Wan, Tianyu Jia, Radhika Ghosal, Abdulrahman Mahmoud, Arijit Raychowdhury, David Brooks, Gu-Yeon Wei, and Vijay Janapa Reddi. 2023. “MAVFI: An End-to-End Fault Analysis Framework with Anomaly Detection and Recovery for Micro Aerial Vehicles.” In 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1–6. IEEE. https://doi.org/10.23919/date56975.2023.10137246.

However, balancing robust protections with embedded systems’ tight size and power limits remains challenging. Cryptography limits and lack of secure co-processors on cost-sensitive embedded hardware restrict options. Ultimately, fault injection resilience demands a cross-layer perspective spanning electrical, firmware, software, and physical design layers.


14.5.4 Side-Channel Attacks


Side-channel attacks are a category of security breach that depends on information gained from a computer system’s physical implementation. Unlike direct attacks on software or network vulnerabilities, side-channel attacks exploit a system’s hardware characteristics. These attacks can be particularly effective against complex machine learning systems, where large amounts of data are processed, and a high level of security is expected.


The fundamental premise of a side-channel attack is that a device’s operation can inadvertently leak information. Such leaks can come from various sources, including the electrical power a device consumes (Kocher, Jaffe, and Jun 1999), the electromagnetic fields it emits (Gandolfi, Mourtel, and Olivier 2001), the time it takes to process certain operations, or even the sounds it produces. Each channel can indirectly glimpse the system’s internal processes, revealing information that can compromise security.

Kocher, Paul, Joshua Jaffe, and Benjamin Jun. 1999. “Differential Power Analysis.” In Advances in Cryptology — CRYPTO ’99: 19th Annual International Cryptology Conference, Santa Barbara, California, USA, August 15–19, 1999, Proceedings 19, 388–97. Springer.

Gandolfi, Karine, Christophe Mourtel, and Francis Olivier. 2001. “Electromagnetic Analysis: Concrete Results.” In Cryptographic Hardware and Embedded Systems — CHES 2001: Third International Workshop, Paris, France, May 14–16, 2001, Proceedings 3, 251–61. Springer.

Kocher, Paul, Joshua Jaffe, Benjamin Jun, and Pankaj Rohatgi. 2011. “Introduction to Differential Power Analysis.” Journal of Cryptographic Engineering 1 (1): 5–27. https://doi.org/10.1007/s13389-011-0006-y.

For instance, consider a machine learning system performing encrypted transactions. Encryption algorithms are supposed to secure data but require computational work to encrypt and decrypt information. An attacker can analyze the power consumption patterns of the device performing encryption to figure out the cryptographic key. With sophisticated statistical methods, small variations in power usage during the encryption process can be correlated with the data being processed, eventually revealing the key. Some differential analysis attack techniques are Differential Power Analysis (DPA) (Kocher et al. 2011), Differential Electromagnetic Analysis (DEMA), and Correlation Power Analysis (CPA).


For example, consider an attacker trying to break the AES encryption algorithm using a differential analysis attack. The attacker would first need to collect many power or electromagnetic traces (a trace is a record of the device's power consumption or emissions over time) of the device while it performs AES encryption.


Once the attacker has collected sufficient traces, they would then use a statistical technique to identify correlations between the traces and the different values of the plaintext (original, unencrypted text) and ciphertext (encrypted text). These correlations would then be used to infer the value of a bit in the AES key and, eventually, the entire key. Differential analysis attacks are dangerous because they are low-cost, effective, and non-intrusive, allowing attackers to bypass algorithmic and hardware-level security measures. Compromises by these attacks are also hard to detect because they do not physically modify the device or break the encryption algorithm.
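To make the statistical idea concrete, here is a heavily simplified, self-contained sketch, not a real AES attack: the device's power draw is modeled as the Hamming weight of plaintext XOR key plus Gaussian noise, and each of the 256 key-byte guesses is scored by its Pearson correlation with the measured traces. The device model, key value, trace count, and noise level are all invented for illustration; real CPA targets cipher S-box outputs and uses real measured traces.

```python
import random

random.seed(0)
SECRET_KEY = 0x3C
hw = lambda b: bin(b).count("1")  # Hamming weight: number of 1 bits

# Collect "traces": a known plaintext byte plus one noisy power sample
# per encryption, with leakage proportional to hw(plaintext ^ key).
plaintexts = [random.randrange(256) for _ in range(2000)]
traces = [hw(p ^ SECRET_KEY) + random.gauss(0, 0.5) for p in plaintexts]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# For every key guess, correlate the hypothesized leakage with the measured
# traces; the correct guess shows the strongest correlation.
best_guess = max(range(256),
                 key=lambda g: pearson([hw(p ^ g) for p in plaintexts], traces))
print(hex(best_guess))  # best-correlated guess; with enough traces, the key
```

The same scoring loop, run per key byte against per-sample trace points, is the core of CPA-style attacks.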


Below is a simplified visualization of how analyzing the power consumption patterns of the encryption device can help us extract information about the algorithm’s operations and, in turn, the secret data. Say we have a device that takes a 5-byte password as input. We will analyze and compare the different voltage patterns that are measured while the encryption device performs operations on the input to authenticate the password.


First, consider the power analysis of the device's operations after a correct password is entered, shown in Figure 14.3. The dense blue graph plots the encryption device's voltage measurements. What matters here is the comparison between the different analysis charts rather than the specific details of what is going on in each scenario.

Figure 14.3: Power analysis of an encryption device with a correct password. Credit: Colin O’Flynn.

Let’s look at the power analysis chart when we enter an incorrect password in Figure 14.4. The first three bytes of the password are correct. As a result, we can see that the voltage patterns are very similar or identical between the two charts, up to and including the fourth byte. After the device processes the fourth byte, it determines a mismatch between the secret key and the attempted input. We notice a change in the pattern at the transition point between the fourth and fifth bytes: the voltage has gone up (the current has gone down) because the device has stopped processing the rest of the input.

Figure 14.4: Power analysis of an encryption device with a (partially) wrong password. Credit: Colin O’Flynn.

Figure 14.5 shows the chart for a completely wrong password. After the device finishes processing the first byte, it determines that it is incorrect and stops further processing - the voltage goes up and the current down.

Figure 14.5: Power analysis of an encryption device with a wrong password. Credit: Colin O’Flynn.

The example above shows how we can infer information about the encryption process and the secret key by analyzing different inputs and trying to ‘eavesdrop’ on the device’s operations on each input byte.
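The behavior in these traces comes from an early-exit comparison loop. The toy Python below (hypothetical device and password) models the observable side channel as the number of bytes the device processes before it stops, and contrasts it with a constant-time comparison that removes the leak.

```python
SECRET = b"p4ss5"  # the device's stored 5-byte password (hypothetical)

def naive_check(attempt):
    """Early-exit comparison. Returns (match, bytes_processed); the second
    value stands in for the side channel: the work done before the device
    stops, visible as the voltage change in the traces above."""
    for i, (a, s) in enumerate(zip(attempt, SECRET)):
        if a != s:
            return False, i + 1  # mismatch at byte i: stop processing here
    return len(attempt) == len(SECRET), len(SECRET)

def constant_time_check(attempt):
    """Mitigation: always touch every byte, accumulating differences, so the
    work done is independent of where the first mismatch occurs."""
    if len(attempt) != len(SECRET):
        return False, len(SECRET)
    diff = 0
    for a, s in zip(attempt, SECRET):
        diff |= a ^ s
    return diff == 0, len(SECRET)

print(naive_check(b"p4ss5"))  # correct: all 5 bytes processed
print(naive_check(b"p4sXX"))  # first 3 bytes right: stops during byte 4
print(naive_check(b"XXXXX"))  # all wrong: stops after byte 1
```

By trying one byte at a time and watching where the trace diverges, an attacker can recover the password byte-by-byte instead of brute-forcing the whole thing.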


For a more detailed explanation, watch the video below.


Another example is an ML system for speech recognition, which processes voice commands to perform actions. By measuring the time it takes for the system to respond to commands or the power used during processing, an attacker could infer what commands are being processed and thus learn about the system’s operational patterns. Even more subtle, the sound emitted by a computer’s fan or hard drive could change in response to the workload, which a sensitive microphone could pick up and analyze to determine what kind of operations are being performed.


In real-world scenarios, side-channel attacks have been used to extract encryption keys and compromise secure communications. One of the earliest recorded side-channel attacks dates back to the 1960s when British intelligence agency MI5 faced the challenge of deciphering encrypted communications from the Egyptian Embassy in London. Their cipher-breaking attempts were thwarted by the computational limitations of the time until an ingenious observation changed the game.


MI5 agent Peter Wright proposed using a microphone to capture the subtle acoustic signatures emitted from the embassy’s rotor cipher machine during encryption (Burnet and Thomas 1989). The distinct mechanical clicks of the rotors as operators configured them daily leaked critical information about the initial settings. This simple side channel of sound enabled MI5 to dramatically reduce the complexity of deciphering messages. This early acoustic leak attack highlights that side-channel attacks are not merely a digital age novelty but a continuation of age-old cryptanalytic principles. The notion that where there is a signal, there is an opportunity for interception remains foundational. From mechanical clicks to electrical fluctuations and beyond, side channels enable adversaries to extract secrets indirectly through careful signal analysis.

Burnet, David, and Richard Thomas. 1989. “Spycatcher: The Commodification of Truth.” J. Law Soc. 16 (2): 210. https://doi.org/10.2307/1410360.

Asonov, D., and R. Agrawal. 2004. “Keyboard Acoustic Emanations.” In IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004, 3–11. IEEE. https://doi.org/10.1109/secpri.2004.1301311.

Gnad, Dennis R. E., Fabian Oboril, and Mehdi B. Tahoori. 2017. “Voltage Drop-Based Fault Attacks on FPGAs Using Valid Bitstreams.” In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), 1–7. IEEE. https://doi.org/10.23919/fpl.2017.8056840.

Zhao, Mark, and G. Edward Suh. 2018. “FPGA-Based Remote Power Side-Channel Attacks.” In 2018 IEEE Symposium on Security and Privacy (SP), 229–44. IEEE. https://doi.org/10.1109/sp.2018.00049.

Today, acoustic cryptanalysis has evolved into attacks like keyboard eavesdropping (Asonov and Agrawal 2004). Electrical side channels range from power analysis on cryptographic hardware (Gnad, Oboril, and Tahoori 2017) to voltage fluctuations (Zhao and Suh 2018) on machine learning accelerators. Timing, electromagnetic emission, and even heat footprints can likewise be exploited. New and unexpected side channels often emerge as computing becomes more interconnected and miniaturized.


Just as MI5’s analog acoustic leak transformed their codebreaking, modern side-channel attacks circumvent traditional boundaries of cyber defense. Understanding the creative spirit and historical persistence of side channel exploits is key knowledge for developers and defenders seeking to secure modern machine learning systems comprehensively against digital and physical threats.


14.5.5 Leaky Interfaces


Leaky interfaces in embedded systems are often overlooked backdoors that can become significant security vulnerabilities. While designed for legitimate purposes such as communication, maintenance, or debugging, these interfaces may inadvertently provide attackers with a window through which they can extract sensitive information or inject malicious data.


An interface becomes “leaky” when it exposes more information than it should, often due to a lack of stringent access controls or inadequate shielding of the transmitted data. Here are some real-world examples of leaky interface issues causing security problems in IoT and embedded devices:

  • Baby Monitors: Many WiFi-enabled baby monitors have been found to have unsecured interfaces for remote access. This allowed attackers to gain live audio and video feeds from people’s homes, representing a major privacy violation.

  • Pacemakers: Interface vulnerabilities were discovered in some pacemakers that could allow attackers to manipulate cardiac functions if exploited. This presents a potentially life-threatening scenario.

  • Smart Lightbulbs: A researcher found he could access unencrypted data from smart lightbulbs via a debug interface, including WiFi credentials, allowing him to gain access to the connected network (Greengard 2015).

  • Smart Cars: The OBD-II diagnostic port, if left unsecured, has been shown to provide an attack vector into automotive systems. Researchers could use it to control brakes and other components (Miller and Valasek 2015).

Greengard, Samuel. 2015. The Internet of Things. The MIT Press. https://doi.org/10.7551/mitpress/10277.001.0001.

Miller, Charlie, and Chris Valasek. 2015. “Remote Exploitation of an Unaltered Passenger Vehicle.” Black Hat USA 2015 (S 91): 1–91.

While the above are not directly connected with ML, consider the example of a smart home system with an embedded ML component that controls home security based on behavior patterns it learns over time. The system includes a maintenance interface accessible via the local network for software updates and system checks. If this interface does not require strong authentication or the data transmitted through it is not encrypted, an attacker on the same network could gain access. They could then eavesdrop on the homeowner’s daily routines or reprogram the security settings by manipulating the firmware.


Such leaks are a privacy issue and a potential entry point for more damaging exploits. The exposure of training data, model parameters, or ML outputs from a leak could help adversaries construct adversarial examples or reverse-engineer models. Access through a leaky interface could also be used to alter an embedded device’s firmware, loading it with malicious code that could turn off the device, intercept data, or use it in botnet attacks.


To mitigate these risks, a multi-layered approach is necessary, spanning technical controls like authentication, encryption, anomaly detection, policies and processes like interface inventories, access controls, auditing, and secure development practices. Turning off unnecessary interfaces and compartmentalizing risks via a zero-trust model provide additional protection.


As designers of embedded ML systems, we should assess interfaces early in development and continually monitor them post-deployment as part of an end-to-end security lifecycle. Understanding and securing interfaces is crucial for ensuring the overall security of embedded ML.


14.5.6 Counterfeit Hardware


ML systems are only as reliable as the underlying hardware. In an era where hardware components are global commodities, the rise of counterfeit or cloned hardware presents a significant challenge. Counterfeit hardware encompasses any components that are unauthorized reproductions of original parts. Counterfeit components infiltrate ML systems through complex supply chains that stretch across borders and involve numerous stages from manufacture to delivery.


A single lapse in the supply chain’s integrity can result in the insertion of counterfeit parts designed to closely imitate the functions and appearance of genuine hardware. For instance, a facial recognition system for high-security access control may be compromised if equipped with counterfeit processors. These processors could fail to accurately process and verify biometric data, potentially allowing unauthorized individuals to access restricted areas.


The challenge with counterfeit hardware is multifaceted. It undermines the quality and reliability of ML systems, as these components may degrade faster or perform unpredictably due to substandard manufacturing. The security risks are also profound; counterfeit hardware can contain vulnerabilities ripe for exploitation by malicious actors. For example, a cloned network router in an ML data center might include a hidden backdoor, enabling data interception or network intrusion without detection.


Furthermore, counterfeit hardware poses legal and compliance risks. Companies inadvertently utilizing counterfeit parts in their ML systems may face serious legal repercussions, including fines and sanctions for failing to comply with industry regulations and standards. This is particularly true for sectors where compliance with specific safety and privacy regulations is mandatory, such as healthcare and finance.


The issue of counterfeit hardware is exacerbated by economic pressures to reduce costs, which can compel businesses to source from lower-cost suppliers without stringent verification processes. This economizing can inadvertently introduce counterfeit parts into otherwise secure systems. Additionally, detecting these counterfeits is inherently difficult since they are created to pass as the original components, often requiring sophisticated equipment and expertise to identify.


In ML, where decisions are made in real time and based on complex computations, the consequences of hardware failure are inconvenient and potentially dangerous. Stakeholders in the field of ML need to understand these risks thoroughly. The issues presented by counterfeit hardware necessitate a deep dive into the current challenges facing ML system integrity and emphasize the importance of vigilant, informed management of the hardware life cycle within these advanced systems.


14.5.7 Supply Chain Risks


The threat of counterfeit hardware is closely tied to broader supply chain vulnerabilities. Globalized, interconnected supply chains create multiple opportunities for compromised components to infiltrate a product’s lifecycle. Supply chains involve numerous entities, from design to manufacturing, assembly, distribution, and integration. A lack of transparency and oversight of each partner makes verifying integrity at every step challenging. Lapses anywhere along the chain can allow the insertion of counterfeit parts.


For example, a contracted manufacturer may unknowingly receive and incorporate recycled electronic waste containing dangerous counterfeits. An untrustworthy distributor could smuggle in cloned components. Insider threats at any vendor might deliberately mix counterfeits into legitimate shipments.


Once counterfeits enter the supply stream, they move quickly through multiple hands before ending up in ML systems where detection is difficult. Advanced counterfeits like refurbished parts or clones with repackaged externals can masquerade as authentic components, passing visual inspection.


To identify fakes, thorough technical profiling using micrography, X-ray screening, component forensics, and functional testing is often required. However, such costly analysis is impractical for large-volume procurement.


Strategies like supply chain audits, screening suppliers, validating component provenance, and adding tamper-evident protections can help mitigate risks. However, given global supply chain security challenges, a zero-trust approach is prudent. Designing ML systems to utilize redundant checking, fail-safes, and continuous runtime monitoring provides resilience against component compromises.
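As a sketch of the redundant-checking idea (stand-in functions, not real hardware), inference can be replicated across independently sourced components and majority-voted, so that a single compromised or counterfeit part cannot silently flip the output, and disagreement without a majority fails safe:

```python
from collections import Counter

def redundant_infer(x, models):
    """Run the same inference on several independently sourced components
    and return the majority vote; fail safe if no majority exists."""
    votes = [m(x) for m in models]
    winner, count = Counter(votes).most_common(1)[0]
    if count <= len(models) // 2:
        raise RuntimeError("no majority: components disagree, fail safe")
    return winner

# Hypothetical access-control classifiers: two genuine parts, one cloned
# part with a hidden backdoor that always grants access.
trusted = lambda x: "authorized" if x == "alice" else "denied"
compromised = lambda x: "authorized"

models = [trusted, trusted, compromised]      # one bad part out of three
print(redundant_infer("mallory", models))     # majority still says "denied"
```

Combined with runtime monitoring, persistent disagreement from one unit is itself a signal that the component may be counterfeit or tampered with.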


Rigorous validation of hardware sources coupled with fault-tolerant system architectures offers the most robust defense against the pervasive risks of convoluted, opaque global supply chains.


Case Study


In 2018, Bloomberg Businessweek published an alarming story that got much attention in the tech world. The article claimed that Supermicro had secretly planted tiny spy chips on server hardware. Reporters said Chinese state hackers working with Supermicro could sneak these tiny chips onto motherboards during manufacturing. The tiny chips allegedly gave the hackers backdoor access to servers used by over 30 major companies, including Apple and Amazon.


If true, this would allow hackers to spy on private data or even tamper with systems. However, after investigating, Apple and Amazon found no proof that such hacked Supermicro hardware existed. Other experts questioned whether the Bloomberg article was accurate reporting.


Whether or not the story is completely true is not our concern from a pedagogical viewpoint. The incident drew attention to the risks of global hardware supply chains, especially for hardware manufactured in China. When companies outsource and buy hardware components from vendors worldwide, there is little visibility into the process. In this complex global pipeline, counterfeit or tampered hardware could be slipped in somewhere along the way without tech companies realizing it. Over-reliance on a single manufacturer or distributor also creates risk; for instance, concern over dependence on TSMC for semiconductor manufacturing prompted the U.S. to invest $50 billion in domestic chip production through the CHIPS Act.


As ML moves into more critical systems, verifying hardware integrity from design through production and delivery is crucial. The reported Supermicro backdoor demonstrated that for ML security, we cannot take global supply chains and manufacturing for granted. We must inspect and validate hardware at every link in the chain.


14.6 Embedded ML Hardware Security


14.6.1 Trusted Execution Environments


About TEE


A Trusted Execution Environment (TEE) is a secure area within a main processor that provides a high level of security for the execution of code and protection of data. TEEs operate by isolating the execution of sensitive tasks from the rest of the device’s operations, thereby creating an environment resistant to attacks from software and hardware vectors.


Benefits


TEEs are particularly valuable in scenarios where sensitive data must be processed or where the integrity of a system’s operations is critical. In the context of ML hardware, TEEs ensure that the ML algorithms and data are protected against tampering and leakage. This is essential because ML models often process private information, trade secrets, or data that could be exploited if exposed.


For instance, a TEE can protect ML model parameters from being extracted by malicious software on the same device. This protection is vital for privacy and maintaining the integrity of the ML system, ensuring that the models perform as expected and do not provide skewed outputs due to manipulated parameters. Apple’s Secure Enclave, found in iPhones and iPads, is a form of TEE that provides an isolated environment to protect sensitive user data and cryptographic operations.


In ML systems, TEEs can:

  • Securely perform model training and inference, ensuring the computation results remain confidential.

  • Protect the confidentiality of input data, like biometric information, used for personal identification or sensitive classification tasks.

  • Secure ML models by preventing reverse engineering, which can protect proprietary information and maintain a competitive advantage.

  • Enable secure updates to ML models, ensuring that updates come from a trusted source and have not been tampered with in transit.

The importance of TEEs in ML hardware security stems from their ability to protect against external and internal threats, including the following:

  • Malicious Software: TEEs can prevent high-privilege malware from accessing sensitive areas of the ML system.

  • Physical Tampering: By integrating with hardware security measures, TEEs can protect against physical tampering that attempts to bypass software security.

  • Side-channel Attacks: Although not impenetrable, TEEs can mitigate certain side-channel attacks by controlling access to sensitive operations and data patterns.

Mechanics


The fundamentals of TEEs contain four main parts:

  • Isolated Execution: Code within a TEE runs in a separate environment from the device’s main operating system. This isolation protects the code from unauthorized access by other applications.

  • Secure Storage: TEEs can securely store cryptographic keys, authentication tokens, and sensitive data, preventing access by regular applications running outside the TEE.

  • Integrity Protection: TEEs can verify the integrity of code and data, ensuring that they have not been altered before execution or during storage.

  • Data Encryption: Data handled within a TEE can be encrypted, making it unreadable to entities without the proper keys, which are also managed within the TEE.
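The secure-storage and integrity-protection parts can be illustrated with a toy model (this is not any real TEE API): a device key that never leaves the enclave object is used to seal data, so that tampering done outside the enclave is detected on unseal. A real TEE would also encrypt the sealed data; the sketch keeps only the integrity tag for brevity.

```python
import hmac, hashlib

class ToyEnclave:
    """Conceptual stand-in for a TEE's sealed-storage service."""
    def __init__(self, device_key: bytes):
        self._key = device_key  # held inside the "enclave", never exported

    def seal(self, data: bytes) -> bytes:
        # Integrity tag computed inside the enclave with the internal key.
        tag = hmac.new(self._key, data, hashlib.sha256).digest()
        return tag + data

    def unseal(self, blob: bytes) -> bytes:
        tag, data = blob[:32], blob[32:]
        expected = hmac.new(self._key, data, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("sealed data was tampered with")
        return data

enclave = ToyEnclave(b"device-unique-key")
blob = enclave.seal(b"model-weights-v1")
assert enclave.unseal(blob) == b"model-weights-v1"

tampered = blob[:-1] + b"X"  # a byte flipped outside the enclave
try:
    enclave.unseal(tampered)
except ValueError as e:
    print(e)                 # tampering is detected on unseal
```

The design point is that callers only ever see sealed blobs; the key material and the verification logic stay behind the enclave boundary.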

Here are some examples of TEEs that provide hardware-based security for sensitive applications:

  • ARM TrustZone: This technology creates secure and normal world execution environments isolated using hardware controls and is implemented in many mobile chipsets.

  • Intel SGX: Intel’s Software Guard Extensions provide an enclave for code execution that protects against certain software attacks, specifically OS-layer attacks. They are used to safeguard workloads in the cloud.

  • Qualcomm Secure Execution Environment: A hardware sandbox on Qualcomm chipsets for mobile payment and authentication apps.

  • Apple Secure Enclave: A TEE for biometric data and key management on iPhones and iPads. It also facilitates mobile payments.

Figure 14.6 is a diagram demonstrating a secure enclave isolated from the main processor to provide an extra layer of security. The secure enclave has a boot ROM to establish a hardware root of trust, an AES engine for efficient and secure cryptographic operations, and protected memory. It also has a mechanism to store information securely on attached storage separate from the NAND flash storage used by the application processor and operating system. This design keeps sensitive user data secure even when the Application Processor kernel becomes compromised.

Figure 14.6: System-on-chip secure enclave. Credit: Apple.

Tradeoffs


If TEEs are so useful, why don’t all systems enable them by default? The decision to implement a TEE is not taken lightly, and there are several reasons why TEEs are absent from many systems. Here are some of the tradeoffs and challenges associated with TEEs:


Cost: Implementing TEEs involves additional costs. There are direct costs for the hardware and indirect costs associated with developing and maintaining secure software for TEEs. These costs may only be justifiable for some devices, especially low-margin products.


Complexity: TEEs add complexity to system design and development. Integrating a TEE with existing systems requires a substantial redesign of the hardware and software stack, which can be a barrier, especially for legacy systems.


Performance Overhead: While TEEs offer enhanced security, they can introduce performance overhead. For example, the additional steps in verifying and encrypting data can slow down system performance, which may be critical in time-sensitive applications.


Development Challenges: Developing for TEEs requires specialized knowledge and often must adhere to strict development protocols. This can extend development time and complicate the debugging and testing processes.


Scalability and Flexibility: TEEs, due to their secure nature, may impose limitations on scalability and flexibility. Upgrading secure components or scaling the system for more users or data can be more challenging when everything must pass through a secure, enclosed environment.


Energy Consumption: The increased processing required for encryption, decryption, and integrity checks can lead to higher energy consumption, a significant concern for battery-powered devices.


Market Demand: Not all markets or applications require the level of security provided by TEEs. For many consumer applications, the perceived risk may be low enough that manufacturers opt not to include TEEs in their designs.


Security Certification and Assurance: Systems with TEEs may need rigorous security certification under schemes like Common Criteria (CC) or review by bodies like the European Union Agency for Cybersecurity (ENISA), which can be lengthy and expensive. Some organizations may choose to refrain from implementing TEEs to avoid these hurdles.


Limited Resource Devices: Devices with limited processing power, memory, or storage may not be able to support TEEs without compromising their primary functionality.


14.6.2 Secure Boot


About


A secure boot is a security standard that ensures a device boots using only software trusted by the original equipment manufacturer (OEM). When the device starts up, the firmware checks the signature of each piece of boot software, including the bootloader, kernel, and base operating system, to ensure it’s not tampered with. If the signatures are valid, the device continues to boot. If not, the boot process stops to prevent potential security threats from executing.
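A minimal sketch of this chained verification, with SHA-256 digests standing in for the OEM's signature checks (real secure boot verifies public-key signatures anchored in boot ROM; the stage names and images here are invented): each loaded image is checked against a trusted digest in order, and one tampered stage halts the boot.

```python
import hashlib

sha = lambda b: hashlib.sha256(b).hexdigest()

# Boot stages in order, with digests the ROM/OEM is assumed to trust.
stages = [
    ("bootloader", b"bootloader-code-v2"),
    ("kernel",     b"kernel-image-5.15"),
    ("rootfs",     b"base-os-image"),
]
trusted_digests = {name: sha(code) for name, code in stages}

def secure_boot(loaded):
    """Verify each loaded image against its trusted digest before 'running' it."""
    for name, code in loaded:
        if sha(code) != trusted_digests[name]:
            return f"halt: {name} failed verification"
    return "boot ok"

print(secure_boot(stages))  # untampered chain boots normally

evil = [("bootloader", b"bootloader-code-v2"),
        ("kernel",     b"kernel-image-5.15-with-implant"),
        ("rootfs",     b"base-os-image")]
print(secure_boot(evil))    # the boot stops at the modified kernel
```

The key property is ordering: each stage is verified before control passes to it, so a compromised later stage never gets the chance to run.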


Benefits


The integrity of an ML system is critical from the moment it is powered on. A compromised boot process could undermine the system by allowing malicious software to load before the operating system and ML applications start. This could lead to manipulated ML operations, stolen data, or the device being repurposed for malicious activities such as botnets or crypto-mining.


Secure Boot helps protect embedded ML hardware in several ways:

  • Protecting ML Data: Ensuring that the data used by ML models, which may include private or sensitive information, is not exposed to tampering or theft during the boot process.

  • Guarding Model Integrity: Maintaining the ML models’ integrity is important, as tampering with them could lead to incorrect or malicious outcomes.

  • Secure Model Updates: Enabling secure updates to ML models and algorithms, ensuring that updates are authenticated and have not been altered.

Mechanics


TEEs benefit from Secure Boot in multiple ways. Figure 14.7 illustrates a flow diagram of a trusted embedded system. For instance, during initial validation, Secure Boot ensures that the code running inside the TEE is the correct and untampered version approved by the device manufacturer. By verifying the digital signatures of the firmware and other critical components, Secure Boot prevents unauthorized modifications that could undermine the TEE’s security properties. Secure Boot thus establishes a foundation of trust upon which the TEE can securely operate, enabling secure operations such as cryptographic key management, secure processing, and sensitive data handling.

Figure 14.7: Secure Boot flow. Credit: R. V. and A. (2018).

R. V., Rashmi, and Karthikeyan A. 2018. “Secure Boot of Embedded Applications - a Review.” In 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), 291–98. IEEE. https://doi.org/10.1109/iceca.2018.8474730.

Case Study: Apple’s Face ID


Let’s take a real-world example. Apple’s Face ID technology uses advanced machine learning algorithms to enable facial recognition on iPhones and iPads. It relies on a sophisticated framework of sensors and software to accurately map the geometry of a user’s face. For Face ID to function securely and protect user biometric data, the device’s operations must be trustworthy from the moment it is powered on, which is where Secure Boot plays a crucial role. Here’s how Secure Boot works in conjunction with Face ID:


Initial Verification: When an iPhone is powered on, the Secure Boot process begins in the Secure Enclave, a coprocessor providing an extra security layer. The Secure Enclave is responsible for processing fingerprint data for Touch ID and facial recognition data for Face ID. The boot process verifies that the Secure Enclave’s firmware is signed by Apple and has not been tampered with. This step ensures that the firmware used to process biometric data is authentic and safe.


Continuous Security Checks: After the initial power-on self-test and verification by Secure Boot, the Secure Enclave communicates with the device’s main processor to continue the secure boot chain. It verifies the digital signatures of the iOS kernel and other critical boot components before allowing the boot process to proceed. This chained trust model prevents unauthorized modifications to the bootloader and operating system, which could compromise the device’s security.


Face Data Processing: Once the device has completed its secure boot sequence, the Secure Enclave can interact safely with the ML algorithms that power Face ID. Facial recognition involves projecting and analyzing over 30,000 invisible dots to create a depth map of the user’s face and an infrared image. This data is then converted into a mathematical representation and compared with the registered face data securely stored in the Secure Enclave.

+

Secure Enclave and Data Protection: The Secure Enclave is designed to protect sensitive data and handle the cryptographic operations that secure it. It ensures that even if the operating system kernel is compromised, the facial data cannot be accessed by unauthorized apps or attackers. Face ID data never leaves the device and is not backed up to iCloud or anywhere else.

+

Firmware Updates: Apple frequently releases firmware updates to address security vulnerabilities and improve the functionality of its systems. Secure Boot ensures that each firmware update is authenticated and that only updates signed by Apple are installed on the device, preserving the integrity and security of the Face ID system.

+

By using Secure Boot with dedicated hardware like the Secure Enclave, Apple can provide strong security assurances for sensitive operations like facial recognition.

+
+
+

Challenges

+

Implementing Secure Boot poses several challenges that must be addressed to realize its full benefits.

+

Key Management Complexity: Generating, storing, distributing, rotating, and revoking cryptographic keys in a provably secure manner is extremely challenging, yet vital for maintaining the chain of trust. Any compromise of keys cripples protections. Large enterprises managing multitudes of device keys face particular challenges of scale.

+

Performance Overhead: Checking cryptographic signatures during boot can add 50-100 ms or more per component verified. This delay may be prohibitive for time-sensitive or resource-constrained applications. However, performance impacts can be reduced through parallelization and hardware acceleration.

+

Signing Burden: Developers must diligently ensure that all software components involved in the boot process (bootloaders, firmware, the OS kernel, drivers, applications, etc.) are correctly signed by trusted keys. Accommodating third-party code signing remains an issue.

+

Cryptographic Verification: Secure algorithms and protocols must validate the legitimacy of keys and signatures, avoid tampering or bypass, and support revocation. Accepting dubious keys undermines trust.

+

Customizability Constraints: Vendor-locked Secure Boot architectures limit user control and upgradability. Open-source bootloaders like u-boot and coreboot enable security while supporting customizability.

+

Scalable Standards: Emerging standards like Device Identifier Composition Engine (DICE) and IDevID promise to securely provision and manage device identities and keys at scale across ecosystems.

+

Adopting Secure Boot requires following security best practices around key management, crypto validation, signed updates, and access control. Secure Boot provides a robust foundation for building device integrity and trust when implemented with care.
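The chained verification at the heart of Secure Boot can be sketched in a few lines. This is a toy illustration, not an implementation: real Secure Boot verifies asymmetric signatures from ROM code against hardware-fused public keys, whereas here the signer and verifier are simulated with an HMAC over a SHA-256 digest, and all names (`VENDOR_KEY`, `boot`, the stage images) are hypothetical.

```python
# Toy sketch of a Secure Boot chain of trust. Real implementations verify
# asymmetric signatures from ROM with hardware-fused public keys; here the
# signer/verifier pair is simulated with an HMAC over a SHA-256 digest.
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"  # hypothetical; would never leave the vendor's HSM

def sign_image(image: bytes) -> bytes:
    """Vendor side: sign the digest of a boot component."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(VENDOR_KEY, digest, hashlib.sha256).digest()

def verify_image(image: bytes, signature: bytes) -> bool:
    """Device side: check a component before handing control to it."""
    return hmac.compare_digest(sign_image(image), signature)

def boot(stages):
    """Walk the chain; halt at the first stage that fails verification."""
    for name, image, signature in stages:
        if not verify_image(image, signature):
            return f"halt: {name} failed verification"
    return "boot ok"

bootloader, kernel = b"bootloader v1", b"kernel v5"
chain = [("bootloader", bootloader, sign_image(bootloader)),
         ("kernel", kernel, sign_image(kernel))]
print(boot(chain))  # boot ok

# An attacker swaps in a tampered kernel but cannot forge its signature:
chain[1] = ("kernel", b"tampered kernel", sign_image(kernel))
print(boot(chain))  # halt: kernel failed verification
```

The key property is the one the prose describes: each stage is checked before it runs, so a single tampered component stops the whole chain.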

+
+
+
+

14.6.3 Hardware Security Modules

+
+

About HSM

+

A Hardware Security Module (HSM) is a physical device that manages digital keys for strong authentication and provides crypto-processing. These modules are designed to be tamper-resistant and provide a secure environment for performing cryptographic operations. HSMs can come in standalone devices, plug-in cards, or integrated circuits on another device.

+

HSMs are crucial for various security-sensitive applications because they offer a hardened, secure enclave for storing cryptographic keys and executing cryptographic functions. They are particularly important for ensuring the security of transactions, identity verifications, and data encryption.

+
+
+

Benefits

+

HSMs provide several functionalities that are beneficial for the security of ML systems:

+

Protecting Sensitive Data: In machine learning applications, models often process sensitive data that can be proprietary or personal. HSMs protect the encryption keys used to secure this data, both at rest and in transit, from exposure or theft.

+

Ensuring Model Integrity: The integrity of ML models is vital for their reliable operation. HSMs can securely manage the signing and verification processes for ML software and firmware, ensuring unauthorized parties have not altered the models.

+

Secure Model Training and Updates: The training and updating of ML models involve the processing of potentially sensitive data. HSMs ensure that these processes are conducted within a secure cryptographic boundary, protecting against the exposure of training data and unauthorized model updates.
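The model signing and verification workflow described above can be sketched as follows. The `SimulatedHSM` class is purely illustrative: a real HSM performs the private-key operation inside tamper-resistant hardware (typically accessed through an interface such as PKCS#11) and would use an asymmetric signature rather than the HMAC used here for brevity.

```python
# Illustrative sketch of HSM-backed model signing. A real HSM keeps key
# material inside tamper-resistant hardware and exposes only sign/verify
# operations; the class below merely imitates that boundary.
import hashlib
import hmac

class SimulatedHSM:
    def __init__(self):
        self._key = b"key-material-stays-inside"  # hypothetical

    def sign(self, data: bytes) -> bytes:
        digest = hashlib.sha256(data).digest()
        return hmac.new(self._key, digest, hashlib.sha256).digest()

    def verify(self, data: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(data), tag)

hsm = SimulatedHSM()
model_blob = b"serialized-model-weights"
tag = hsm.sign(model_blob)  # done once, at release time

# At load time, the device refuses tampered models:
print(hsm.verify(model_blob, tag))                # True
print(hsm.verify(model_blob + b"backdoor", tag))  # False
```

Because the key never leaves the module, an attacker who compromises the host software still cannot forge a valid tag for a modified model.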

+
+
+

Tradeoffs

+

HSMs involve several tradeoffs for embedded ML. These tradeoffs are similar to those of TEEs, but for completeness, we will also discuss them here through the lens of HSMs.

+

Cost: HSMs are specialized devices that can be expensive to procure and implement, raising the overall cost of an ML project. This may be a significant factor for embedded systems, where cost constraints are often stricter.

+

Performance Overhead: While secure, the cryptographic operations performed by HSMs can introduce latency. Any added delay can be critical in high-performance embedded ML applications where inference must happen in real-time, such as in autonomous vehicles or translation devices.

+

Physical Space: Embedded systems are often limited by physical space, and adding an HSM can be challenging in tightly constrained environments. This is especially true for consumer electronics and wearable technology, where size and form factor are key considerations.

+

Power Consumption: HSMs require power for their operation, which can be a drawback for battery-operated devices with long battery life. The secure processing and cryptographic operations can drain the battery faster, a significant tradeoff for mobile or remote embedded ML applications.

+

Complexity in Integration: Integrating HSMs into existing hardware systems adds complexity. It often requires specialized knowledge to manage the secure communication between the HSM and the system’s processor and develop software capable of interfacing with the HSM.

+

Scalability: Scaling an ML solution that uses HSMs can be challenging. Managing a fleet of HSMs and ensuring uniformity in security practices across devices can become complex and costly when the deployment size increases, especially when dealing with embedded systems where communication is costly.

+

Operational Complexity: HSMs can make updating firmware and ML models more complex. Every update must be signed and possibly encrypted, which adds steps to the update process and may require secure mechanisms for key management and update distribution.

+

Development and Maintenance: The secure nature of HSMs means that only limited personnel have access to the HSM for development and maintenance purposes. This can slow down the development process and make routine maintenance more difficult.

+

Certification and Compliance: Ensuring that an HSM meets specific industry standards and compliance requirements can add to the time and cost of development. This may involve undergoing rigorous certification processes and audits.

+
+
+
+

14.6.4 Physical Unclonable Functions (PUFs)

+
+

About

+

Physical Unclonable Functions (PUFs) provide a hardware-intrinsic means for cryptographic key generation and device authentication by harnessing the inherent manufacturing variability in semiconductor components. During fabrication, random physical factors such as doping variations, line edge roughness, and dielectric thickness result in microscale differences between semiconductors, even when produced from the same masks. These create detectable timing and power variances that act as a “fingerprint” unique to each chip. PUFs exploit this phenomenon by incorporating integrated circuits to amplify minute timing or power differences into measurable digital outputs.

+

When stimulated with an input challenge, the PUF circuit produces an output response based on the device’s intrinsic physical characteristics. Due to their physical uniqueness, the same challenge will yield a different response on other devices. This challenge-response mechanism can be used to generate keys securely and identifiers tied to the specific hardware, perform device authentication, or securely store secrets. For example, a key derived from a PUF will only work on that device and cannot be cloned or extracted even with physical access or full reverse engineering (Gao, Al-Sarawi, and Abbott 2020).
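A toy model can make the challenge-response idea concrete. The sketch below is an assumption-laden simplification of an SRAM-style PUF: the per-cell "process variation" is simulated with a seeded random generator standing in for silicon randomness, and the error-correction (fuzzy extraction) step that real PUF key derivation requires is omitted.

```python
# Toy SRAM-style PUF: each "device" gets a fixed random per-cell bias at
# construction (standing in for manufacturing variability). A challenge
# selects cells; the response is their thresholded values; a key is
# derived from the response with a hash. Error correction is omitted.
import hashlib
import random

class ToyPUF:
    def __init__(self, seed, n_cells=256):
        rng = random.Random(seed)  # stands in for silicon randomness
        self.bias = [rng.random() for _ in range(n_cells)]

    def response(self, challenge):
        """challenge: indices of cells to read; returns a bitstring."""
        return "".join("1" if self.bias[i] > 0.5 else "0" for i in challenge)

def derive_key(bits: str) -> str:
    return hashlib.sha256(bits.encode()).hexdigest()

challenge = list(range(128))
device_a = ToyPUF(seed=1)
device_b = ToyPUF(seed=2)  # same design, different silicon

key_a = derive_key(device_a.response(challenge))
key_b = derive_key(device_b.response(challenge))
print(key_a != key_b)                                    # True: device-unique
print(key_a == derive_key(device_a.response(challenge))) # True: repeatable
```

The same challenge yields a different key on each device, and the key never needs to be stored anywhere, which is exactly the property the text describes.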

+
+
+

Benefits

+

PUF key generation avoids external key storage, which risks exposure. It also provides a foundation for other hardware security primitives like Secure Boot. Implementation challenges include managing varying reliability and entropy across different PUFs, sensitivity to environmental conditions, and susceptibility to machine learning modeling attacks. When designed carefully, PUFs enable promising applications in IP protection, trusted computing, and anti-counterfeiting.

+
+
+

Utility

+

Machine learning models are rapidly becoming a core part of the functionality for many embedded devices, such as smartphones, smart home assistants, and autonomous drones. However, securing ML on resource-constrained embedded hardware can be challenging. This is where physical unclonable functions (PUFs) come in uniquely handy. Let’s look at some examples of how PUFs can be useful.

+

PUFs provide a way to generate unique fingerprints and cryptographic keys tied to the physical characteristics of each chip on the device. Let’s take an example. We have a smart camera drone that uses embedded ML to track objects. A PUF integrated into the drone’s processor could create a device-specific key to encrypt the ML model before loading it onto the drone. This way, even if an attacker somehow hacks the drone and tries to steal the model, they won’t be able to use it on another device!

+

The same PUF key could also create a digital watermark embedded in the ML model. If that model ever gets leaked and posted online by someone trying to pirate it, the watermark could help prove it came from your stolen drone and didn’t originate from the attacker. Also, imagine the drone camera connects to the cloud to offload some of its ML processing. The PUF can authenticate that the camera is legitimate before the cloud will run inference on sensitive video feeds. The cloud could verify that the drone has not been physically tampered with by checking that the PUF responses have not changed.

+

PUFs enable all this security through their challenge-response behavior’s inherent randomness and hardware binding. Without needing to store keys externally, PUFs are ideal for securing embedded ML with limited resources. Thus, they offer a unique advantage over other mechanisms.

+
+
+

Mechanics

+

The working principle behind PUFs, shown in Figure fig-pfu, involves generating a “challenge-response” pair, where a specific input (the challenge) to the PUF circuit results in an output (the response) that is determined by the unique physical properties of that circuit. This process can be likened to a fingerprinting mechanism for electronic devices. Devices that utilize ML for processing sensor data can employ PUFs to secure communication between devices and prevent the execution of ML models on counterfeit hardware.

+

Figure fig-pfu illustrates an overview of the PUF basics: a) PUF can be thought of as a unique fingerprint for each piece of hardware; b) an Optical PUF is a special plastic token that is illuminated, creating a unique speckle pattern that is then recorded; c) in an APUF (Arbiter PUF), challenge bits select different paths, and a judge decides which one is faster, giving a response of ‘1’ or ‘0’; d) in an SRAM PUF, the response is determined by the mismatch in the threshold voltage of transistors, where certain conditions lead to a preferred response of ‘1’. Each of these methods uses specific characteristics of the hardware to create a unique identifier.

+
+
+
+ +
+
+Figure 14.8: PUF basics. Credit: Gao, Al-Sarawi, and Abbott (2020). +
+
+Gao, Yansong, Said F. Al-Sarawi, and Derek Abbott. 2020. “Physical Unclonable Functions.” Nature Electronics 3 (2): 81–91. https://doi.org/10.1038/s41928-020-0372-5. +
+
+
+
+

Challenges

+

There are a few challenges with PUFs. The PUF response can be sensitive to environmental conditions, such as temperature and voltage fluctuations, leading to inconsistent behavior that must be accounted for in the design. Also, since PUFs can generate many unique challenge-response pairs, managing and ensuring the consistency of these pairs across the device’s lifetime can be challenging. Last but not least, integrating PUF technology may increase the overall manufacturing cost of a device, although it can save costs in key management over the device’s lifecycle.

+
+
+
+
+

14.7 Privacy Concerns in Data Handling

+

Handling personal and sensitive data securely and ethically is critical as machine learning permeates devices like smartphones, wearables, and smart home appliances. For medical hardware, handling data securely and ethically is further required by law through the Health Insurance Portability and Accountability Act (HIPAA). These embedded ML systems pose unique privacy risks, given their intimate proximity to users’ lives.

+
+

14.7.1 Sensitive Data Types

+

Embedded ML devices like wearables, smart home assistants, and autonomous vehicles frequently process highly personal data that requires careful handling to maintain user privacy and prevent misuse. Specific examples include medical reports and treatment plans processed by health wearables, private conversations continuously captured by smart home assistants, and detailed driving habits collected by connected cars. Compromise of such sensitive data can lead to serious consequences like identity theft, emotional manipulation, public shaming, and mass surveillance overreach.

+

Sensitive data takes many forms: structured records like contact lists and unstructured content like conversational audio and video streams. In medical settings, protected health information (PHI) is collected by doctors throughout every interaction and is heavily regulated by strict HIPAA guidelines. Even outside of medical settings, sensitive data can still be collected in the form of Personally Identifiable Information (PII), which is defined as “any representation of information that permits the identity of an individual to whom the information applies to be reasonably inferred by either direct or indirect means.” Examples of PII include email addresses, social security numbers, and phone numbers, among other fields. PII is collected in medical and other settings (financial applications, etc.) and is heavily regulated by Department of Labor policies.

+

Even derived model outputs could indirectly leak details about individuals. Beyond just personal data, proprietary algorithms and datasets also warrant confidentiality protections. In the Data Engineering section, we covered several topics in detail.

+

Techniques like de-identification, aggregation, anonymization, and federation can help transform sensitive data into less risky forms while retaining analytical utility. However, diligent controls around access, encryption, auditing, consent, minimization, and compliance practices are still essential throughout the data lifecycle. Regulations like GDPR categorize different classes of sensitive data and prescribe responsibilities around their ethical handling. Standards like NIST 800-53 provide rigorous security control guidance for confidentiality protection. With growing reliance on embedded ML, understanding sensitive data risks is crucial.

+
+
+

14.7.2 Applicable Regulations

+

Many embedded ML applications handle sensitive user data under HIPAA, GDPR, and CCPA regulations. Understanding the protections mandated by these laws is crucial for building compliant systems.

+
  • The HIPAA Privacy Rule governs medical data privacy and security in the US, with severe penalties for violations. Any health-related embedded ML devices, like diagnostic wearables or assistive robots, would need to implement controls like audit trails, access controls, and encryption prescribed by HIPAA.

  • GDPR imposes transparency, retention limits, and user rights on EU citizen data, even when processed by companies outside the EU. Smart home systems capturing family conversations or location patterns would need GDPR compliance. Key requirements include data minimization, encryption, and mechanisms for consent and erasure.

  • CCPA, which applies in California, protects consumer data privacy through provisions like required disclosures and opt-out rights. IoT gadgets like smart speakers and fitness trackers used by Californians likely fall under its scope.

  • The CCPA was the first state-specific set of regulations addressing consumer privacy concerns. Following the CCPA, similar regulations were enacted in 10 other states, with more states proposing bills for consumer data privacy protections.

Additionally, when relevant to the application, sector-specific rules govern telematics, financial services, utilities, etc. Best practices like Privacy by design, impact assessments, and maintaining audit trails help embed compliance if it is not already required by law. Given potentially costly penalties, consulting legal/compliance teams is advisable when developing regulated embedded ML systems.

+
+
+

14.7.3 De-identification

+

If medical data is thoroughly de-identified, HIPAA guidelines no longer directly apply, and far fewer regulations govern its use. However, for HIPAA guidelines to no longer apply, the data must be de-identified using one of the HIPAA-recognized methods: Safe Harbor or Expert Determination.

+
+

Safe Harbor Methods

+

Safe Harbor methods are most commonly used for de-identifying protected healthcare information due to the limited resources needed compared to Expert Determination methods. Safe Harbor de-identification requires scrubbing datasets of any data that falls into one of 18 categories. The following categories are listed as sensitive information based on the Safe Harbor standard:

+
    +
  • Names, geographic locators, birthdates, phone numbers, email addresses, IP addresses, Social Security numbers, medical record numbers, health plan beneficiary numbers, device identifiers and serial numbers, certificate/license numbers (birth certificates, driver’s licenses, etc.), account numbers, vehicle identifiers, website URLs, full-face photos and comparable images, biometric identifiers, and any other unique identifiers

For most of these categories, all data must be removed regardless of the circumstances. For other categories, including geographic information and birthdate, the data can be partially removed, enough to make the information hard to re-identify. For example, if the area covered by the first 3 digits of a zip code contains enough people, those first 3 digits can remain, since re-identification within such a large population is difficult. Birthdates must be scrubbed of all elements except the birth year, and all ages above 89 must be aggregated into a 90+ category.
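The two partial-removal rules above can be expressed directly in code. This is a minimal sketch, not a compliant de-identification pipeline: the field names, the reference date, and the set of restricted ZIP prefixes are hypothetical example values.

```python
# Minimal Safe Harbor-style scrubber for the two "partial removal" rules:
# keep only the first 3 digits of a ZIP code (zeroing out sparsely
# populated prefixes), and reduce birthdates to the birth year with ages
# 90 and above aggregated into a "90+" bucket. All names are illustrative.
import datetime

RESTRICTED_ZIP3 = {"036", "059", "102"}  # example sparsely populated prefixes

def scrub_record(record, today=datetime.date(2024, 1, 1)):
    out = {}
    zip3 = record["zip"][:3]                     # keep only the first 3 digits
    out["zip"] = "000" if zip3 in RESTRICTED_ZIP3 else zip3
    age = today.year - record["birthdate"].year  # keep only the birth year
    out["birth_year"] = "90+" if age >= 90 else record["birthdate"].year
    return out

rec = {"zip": "94305", "birthdate": datetime.date(1931, 6, 2)}
print(scrub_record(rec))  # {'zip': '943', 'birth_year': '90+'}
```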

+
+
+

Expert Determination Methods

+

Safe Harbor methods work for several cases of medical data de-identification, though re-identification is still possible in some cases. For example, let’s say you collect data on a patient in an urban city with a large zip code, but you have documented a rare disease that they have—a disease that only 25 people have in the entire city. Given geographic data coupled with birth year, it is highly possible that someone can re-identify this individual, which is an extremely detrimental privacy breach.

+

In unique cases like these, expert determination data de-identification methods are preferred. Expert determination de-identification requires a “person with appropriate knowledge of and experience with generally accepted statistical and scientific principles and methods for rendering information not individually identifiable” to evaluate a dataset and determine if the risk of re-identification of individual data in a given dataset in combination with publicly available data (voting records, etc.), is extremely small.

+

Expert Determination de-identification is understandably harder to complete than Safe Harbor de-identification, due to the cost and feasibility of accessing an expert to verify the likelihood of re-identifying a dataset. However, in many cases, expert determination is required to ensure that re-identification of data is extremely unlikely.

+
+
+
+

14.7.4 Data Minimization

+

Data minimization involves collecting, retaining, and processing only the necessary user data to reduce privacy risks from embedded ML systems. This starts by restricting the data types and instances gathered to the bare minimum required for the system’s core functionality. For example, an object detection model would collect only the images needed for that specific computer vision task. Similarly, a voice assistant would limit audio capture to specific spoken commands rather than persistently recording ambient sounds.

+

Where possible, temporary data that briefly resides in memory without persistent storage provides additional minimization. A clear legal basis, like user consent, should be established for collection and retention. Sandboxing and access controls prevent unauthorized use beyond intended tasks. Retention periods should be defined based on purpose, with secure deletion procedures removing expired data.

+

Data minimization can be broken down into 3 categories:

+
  1. “Data must be adequate in relation to the purpose that is pursued.” Data omission can limit the accuracy of models trained on the data and any general usefulness of a dataset. Data minimization requires a minimum amount of data to be collected from users while creating a dataset that adds value to others.

  2. The data collected from users must be relevant to the purpose of the data collection.

  3. Users’ data should be limited to only the data necessary to fulfill the purpose of the initial data collection. If similarly robust and accurate results can be obtained from a smaller dataset, any additional data beyond this smaller dataset should not be collected.

Emerging techniques like differential privacy, federated learning, and synthetic data generation allow useful insights to be derived from less raw user data. Performing data flow mapping and impact assessments helps identify opportunities to minimize raw data usage.

+

Methodologies like Privacy by Design (Cavoukian 2009) consider such minimization early in system architecture. Regulations like GDPR also mandate data minimization principles. With a multilayered approach across legal, technical, and process realms, data minimization limits risks in embedded ML products.

+
+Cavoukian, Ann. 2009. “Privacy by Design.” Office of the Information and Privacy Commissioner. +
+

Case Study - Performance-Based Data Minimization

+

Performance-based data minimization (Biega et al. 2020) focuses on expanding upon the third category of data minimization mentioned above, namely limitation. It specifically defines the robustness of model results on a given dataset by certain performance metrics, such that data should not be additionally collected if it does not significantly improve performance. Performance metrics can be divided into two categories:

+
+Biega, Asia J., Peter Potash, Hal Daumé, Fernando Diaz, and Michèle Finck. 2020. “Operationalizing the Legal Principle of Data Minimization for Personalization.” In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, edited by Jimmy Huang, Yi Chang, Xueqi Cheng, Jaap Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, 399–408. ACM. https://doi.org/10.1145/3397271.3401034. +
  1. Global data minimization performance: satisfied if a dataset minimizes the amount of per-user data while its mean performance across all data is comparable to the mean performance of the original, unminimized dataset.

  2. Per-user data minimization performance: satisfied if a dataset minimizes the amount of per-user data while the minimum performance of individual user data is comparable to that of individual user data in the original, unminimized dataset.

Performance-based data minimization can be leveraged in machine learning settings such as movie recommendation algorithms and e-commerce applications.

+

Global data minimization is much more feasible than per-user data minimization, given the much larger differences in per-user losses between the minimized and original datasets.
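The two criteria above can be phrased as simple checks over per-user performance scores. This is an interpretation for illustration: the additive tolerance `tol` and the example score values are assumptions, and "comparable" can be formalized in other ways.

```python
# Global vs. per-user data minimization checks over per-user performance
# scores (e.g., recommendation accuracy). The tolerance is an assumption.
def mean(xs):
    return sum(xs) / len(xs)

def global_minimization_ok(full, minimized, tol=0.02):
    """Mean performance on minimized data is within tol of the original."""
    return mean(full) - mean(minimized) <= tol

def per_user_minimization_ok(full, minimized, tol=0.02):
    """Worst-off user's performance is within tol of the original worst case."""
    return min(full) - min(minimized) <= tol

full      = [0.90, 0.85, 0.88, 0.70]  # per-user scores, unminimized data
minimized = [0.89, 0.84, 0.87, 0.66]  # per-user scores, minimized data

print(global_minimization_ok(full, minimized))    # True: means stay close
print(per_user_minimization_ok(full, minimized))  # False: worst user suffers
```

The example illustrates the asymmetry the text notes: the averaged (global) criterion passes even when one user's individual performance degrades enough to fail the per-user criterion.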

+
+
+ +
+

14.7.6 Privacy Concerns in Machine Learning

+
+

Generative AI

+

Privacy and security concerns have also risen with the public use of generative AI models, including OpenAI’s GPT-4 and other LLMs. ChatGPT, in particular, has drawn recent scrutiny over privacy, given all the personal information it collects from its users. In June, a class action lawsuit was filed against OpenAI over concerns that ChatGPT was trained on proprietary medical and personal information without proper permissions or consent. As a result of these privacy concerns, many companies have prohibited their employees from accessing ChatGPT and from uploading private, company-related information to the chatbot. Further, ChatGPT is susceptible to prompt injection and other security attacks that could compromise the privacy of the proprietary data upon which it was trained.

+
+
Case Study
+

While ChatGPT has instituted protections to prevent people from accessing private and ethically questionable information, several individuals have successfully bypassed these protections through prompt injection and other security attacks. As demonstrated in Figure fig-role-play, users can bypass ChatGPT protections to mimic the tone of a “deceased grandmother” to learn how to bypass a web application firewall (Gupta et al. 2023).

+
+
+
+ +
+
+Figure 14.9: Grandma role play to bypass safety restrictions. Credit: Gupta et al. (2023). +
+
+
+

Further, users have also successfully used reverse psychology to manipulate ChatGPT and access information initially prohibited by the model. In Figure fig-role-play2, a user is initially prevented from learning about piracy websites through ChatGPT but can bypass these restrictions using reverse psychology.

+
+
+
+ +
+
+Figure 14.10: Reverse psychology to bypass safety restrictions. Credit: Gupta et al. (2023). +
+
+Gupta, Maanak, Charankumar Akiri, Kshitiz Aryal, Eli Parker, and Lopamudra Praharaj. 2023. “From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy.” IEEE Access 11: 80218–45. https://doi.org/10.1109/access.2023.3300381. +
+
+

The ease with which security attacks can manipulate ChatGPT is concerning, given the private information it was trained upon without consent. Further research on data privacy in LLMs and generative AI should focus on making models less susceptible to prompt injection attacks.

+
+
+
+

Data Erasure

+

Many previous regulations mentioned above, including GDPR, include a “right to be forgotten” clause. This clause essentially states that “the data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay.” However, in several cases, even if user data has been erased from a platform, the data is only partially erased if a machine learning model has been trained on this data for separate purposes. Through methods similar to membership inference attacks, other individuals can still predict the training data a model was trained upon, even if the data’s presence was explicitly removed online.

+

One approach to addressing privacy concerns with machine learning training data has been through differential privacy methods. For example, by adding Laplace noise during training, a model can be made robust to membership inference attacks, preventing deleted data from being recovered. Another approach to preventing deleted data from being inferred from security attacks is simply retraining the model from scratch on the remaining data. Since this process is time-consuming and computationally expensive, other researchers have attempted to address privacy concerns surrounding inferring model training data through a process called machine unlearning, in which a model actively iterates on itself to remove the influence of “forgotten” data that it might have been trained on, as mentioned below.
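As a sketch of what exact unlearning looks like in the simplest possible case, consider a "model" that is just a running mean of contributions: a user's record can be removed exactly from the sufficient statistics without retraining. This toy case is an illustration only; deep networks do not decompose this cleanly, which is why machine unlearning is an active research area.

```python
# Exact unlearning for a trivial model (a running mean): a deleted
# record's influence is removed from the sufficient statistics exactly,
# with no retraining. Illustrative only; does not generalize to deep nets.
class MeanModel:
    def __init__(self):
        self.total, self.count = 0.0, 0

    def learn(self, value):
        self.total += value
        self.count += 1

    def unlearn(self, value):
        """Exactly remove one previously learned contribution."""
        self.total -= value
        self.count -= 1

    def predict(self):
        return self.total / self.count

m = MeanModel()
for v in (2.0, 4.0, 9.0):
    m.learn(v)
print(m.predict())  # 5.0

m.unlearn(9.0)      # honor a deletion request for one record
print(m.predict())  # 3.0
```

After `unlearn`, the model is bit-for-bit identical to one trained from scratch without the deleted record, which is the gold standard that approximate unlearning methods try to reach cheaply.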

+
+
+
+
+

14.8 Privacy-Preserving ML Techniques

+

Many techniques have been developed to preserve privacy, each addressing different aspects and data security challenges. These methods can be broadly categorized into several key areas: Differential Privacy, which focuses on statistical privacy in data outputs; Federated Learning, emphasizing decentralized data processing; Homomorphic Encryption and Secure Multi-party Computation (SMC), both enabling secure computations on encrypted or private data; Data Anonymization and Data Masking and Obfuscation, which alter data to protect individual identities; Private Set Intersection and Zero-Knowledge Proofs, facilitating secure data comparisons and validations; Decentralized Identifiers (DIDs) for self-sovereign digital identities; Privacy-Preserving Record Linkage (PPRL), linking data across sources without exposure; Synthetic Data Generation, creating artificial datasets for safe analysis; and Adversarial Learning Techniques, enhancing data or model resistance to privacy attacks.

+

Given the extensive range of these techniques, it is not feasible to delve into each in depth within a single course or discussion, let alone for anyone to know it all in its glorious detail. Therefore, we will explore a few specific techniques in relative detail, providing a deeper understanding of their principles, applications, and the unique privacy challenges they address in machine learning. This focused approach will give us a more comprehensive and practical understanding of key privacy-preserving methods in modern ML systems.

+
+

14.8.1 Differential Privacy

+
+

Core Idea

+

Differential Privacy is a framework for quantifying and managing the privacy of individuals in a dataset (Dwork et al. 2006). It provides a mathematical guarantee that the privacy of individuals in the dataset will not be compromised, regardless of any additional knowledge an attacker may possess. The core idea of differential privacy is that the outcome of any analysis (like a statistical query) should be essentially the same whether or not any individual’s data is included in the dataset. This means that by observing the analysis result, one cannot determine whether any individual’s data was used in the computation.

+
+Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. “Calibrating Noise to Sensitivity in Private Data Analysis.” In Theory of Cryptography, edited by Shai Halevi and Tal Rabin, 265–84. Berlin, Heidelberg: Springer Berlin Heidelberg. +

For example, let’s say a database contains medical records for 10 patients. We want to release statistics about the prevalence of diabetes in this sample without revealing any one patient’s condition. To do this, we could add a small amount of random noise to the true count before releasing it. If the true number of diabetes patients is 6, we might add noise from a Laplace distribution to randomly output 5, 6, or 7, each with some probability. An observer now can’t tell whether any single patient has diabetes based only on the noisy output. The query result looks similar whether each patient’s data is included or excluded. This is differential privacy. More formally, a randomized algorithm satisfies ε-differential privacy if, for any neighboring databases D and D′ differing by only one entry, the probability of any outcome changes by at most a factor of \(e^{\epsilon}\). A lower ε provides stronger privacy guarantees.
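The diabetes-count example can be implemented with the Laplace Mechanism in a few lines of standard-library Python, sampling Laplace noise via its inverse CDF. A counting query has sensitivity 1, since adding or removing one patient changes the count by at most 1.

```python
# Laplace Mechanism sketch for a counting query (sensitivity 1) using
# only the standard library: noise scale = sensitivity / epsilon.
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0):
    return true_count + laplace_noise(sensitivity / epsilon)

true_diabetes_count = 6
for eps in (0.1, 1.0, 10.0):
    # smaller epsilon -> more noise -> stronger privacy, less accuracy
    print(eps, round(dp_count(true_diabetes_count, eps), 2))
```

In practice the released value would be rounded or clamped to a plausible count; the raw noisy value is shown here to make the scale of the noise visible.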

+

The Laplace Mechanism is one of the most straightforward and commonly used methods to achieve differential privacy. It involves adding noise that follows a Laplace distribution to the data or query results. Beyond the Laplace Mechanism, the general principle of adding noise is central to differential privacy: random noise is added to the data or the results of a query, calibrated to ensure the necessary privacy guarantee while keeping the data useful.
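The noisy diabetes-count example above can be sketched in a few lines. This is a minimal, illustrative implementation of the Laplace mechanism for a counting query (sensitivity 1); a real deployment should use a vetted library rather than this toy sampler, and would typically round the released count.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # sample from Laplace(0, scale) via inverse-CDF on a uniform draw in [-0.5, 0.5)
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # a counting query changes by at most 1 when one person is added or removed,
    # so noise with scale sensitivity/epsilon gives epsilon-differential privacy
    return true_count + laplace_noise(sensitivity / epsilon)

# release a noisy diabetes count for the 10-patient example (true count 6)
noisy = private_count(6, epsilon=1.0)
```

With ε = 1 the released value is usually within a few counts of the truth, while any single patient’s presence remains masked.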

+

While the Laplace distribution is common, other distributions like the Gaussian can also be used. Laplace noise provides strict ε-differential privacy for low-sensitivity queries. Gaussian noise, in contrast, is used in a relaxed variant known as (ϵ, 𝛿)-differential privacy, in which epsilon and delta together define the privacy guaranteed when releasing information or a model related to a dataset. Epsilon sets a bound on how much information can be learned about the data based on the output, while delta allows a small probability that the privacy guarantee is violated. The choice between Laplace, Gaussian, and other distributions depends on the specific requirements of the query and the dataset and the tradeoff between privacy and accuracy.

+

To illustrate the tradeoff of privacy and accuracy in (\(\epsilon\), \(\delta\))-differential privacy, the graphs in Figure 14.11 show the resulting accuracy for different noise levels on the MNIST dataset, a large dataset of handwritten digits (Abadi et al. 2016). The delta value (black line; right y-axis) denotes the level of privacy relaxation (a high value means privacy is less stringent). As privacy becomes more relaxed, the accuracy of the model increases.

Figure 14.11: Privacy-accuracy tradeoff. Credit: Abadi et al. (2016).

Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318.

The key points to remember about differential privacy are the following:

  • Adding Noise: The fundamental technique in differential privacy is adding controlled random noise to the data or query results. This noise masks the contribution of individual data points.

  • Balancing Act: There’s a balance between privacy and accuracy. More noise (lower ϵ) in the data means higher privacy but less accuracy in the model’s results.

  • Universality: Differential privacy doesn’t rely on assumptions about what an attacker knows. This makes it robust against re-identification attacks, where an attacker tries to uncover individual data.

  • Applicability: It can be applied to various types of data and queries, making it a versatile tool for privacy-preserving data analysis.
+
+

Tradeoffs

+

There are several tradeoffs to make with differential privacy, as is the case with any algorithm. But let’s focus on the computational tradeoffs, since we care about ML systems. There are some key computational considerations when implementing differential privacy in a machine learning system:

Noise generation: Implementing differential privacy introduces several important computational tradeoffs compared to standard machine learning techniques. One major consideration is the need to securely generate random noise from distributions like Laplace or Gaussian that get added to query results and model outputs. High-quality cryptographic random number generation can be computationally expensive.

+

Sensitivity analysis: Another key requirement is rigorously tracking the sensitivity of the underlying algorithms to single data points getting added or removed. This global sensitivity analysis is required to calibrate the noise levels properly. However, analyzing worst-case sensitivity can substantially increase computational complexity for complex model training procedures and data pipelines.

+

Privacy budget management: Managing the privacy loss budget across multiple queries and learning iterations is another bookkeeping overhead. The system must keep track of cumulative privacy costs and compose them to explain overall privacy guarantees. This adds a computational burden beyond just running queries or training models.
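This bookkeeping can be sketched with a minimal accountant, assuming basic sequential composition (total privacy loss is simply the sum of per-query ε values; real systems use tighter accounting such as moments accounting, and the class name here is hypothetical):

```python
class PrivacyAccountant:
    """Track cumulative privacy loss under basic sequential composition."""

    def __init__(self, epsilon_budget: float):
        self.budget = epsilon_budget
        self.spent = 0.0

    def charge(self, epsilon: float) -> None:
        # refuse a query if answering it would exceed the total budget
        if self.spent + epsilon > self.budget:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon

accountant = PrivacyAccountant(epsilon_budget=1.0)
accountant.charge(0.4)   # first query
accountant.charge(0.4)   # second query; 0.2 of the budget remains
```

Every released query or training iteration must pass through such a gate, which is the extra computational and engineering burden the text describes.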

+

Batch vs. online tradeoffs: For online learning systems with continuous high-volume queries, differentially private algorithms require new mechanisms to maintain utility and prevent too much accumulated privacy loss since each query can potentially alter the privacy budget. Batch offline processing is simpler from a computational perspective as it processes data in large batches, where each batch is treated as a single query. High-dimensional sparse data also increases sensitivity analysis challenges.

+

Distributed training: When training models using distributed or federated approaches, new cryptographic protocols are needed to track and bound privacy leakage across nodes. Secure multiparty computation with encrypted data for differential Privacy adds substantial computational load.

+

While differential Privacy provides strong formal privacy guarantees, implementing it rigorously requires additions and modifications to the machine learning pipeline at a computational cost. Managing these overheads while preserving model accuracy remains an active research area.

+
+
+

Case Study

+

Apple’s implementation of differential privacy in iOS and macOS provides a prominent real-world example of how differential privacy can be deployed at large scale. Apple wanted to collect aggregated usage statistics across its ecosystem to improve products and services, but aimed to do so without compromising individual user privacy.

+

To achieve this, they implemented differential privacy techniques directly on user devices to anonymize data points before sending them to Apple servers. Specifically, Apple uses the Laplace mechanism to inject carefully calibrated random noise. For example, suppose a user’s location history contains [Work, Home, Work, Gym, Work, Home]. In that case, the differentially private version might replace the exact locations with a noisy sample like [Gym, Home, Work, Work, Home, Work].
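Apple’s production mechanisms are more elaborate, but the flavor of on-device noising can be illustrated with randomized response, a classic local-privacy primitive (this is an illustrative sketch with hypothetical function names, not Apple’s actual algorithm):

```python
import random

def randomized_response(truth: bool) -> bool:
    # with probability 1/2 report the truth; otherwise report a fair coin flip.
    # any single answer is plausibly deniable (this gives epsilon = ln 3 local DP),
    # yet the population rate remains estimable from the aggregate.
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports) -> float:
    # P(report True) = 0.5 * true_rate + 0.25, so invert the bias
    return (sum(reports) / len(reports) - 0.25) / 0.5
```

Each device sends only its noised answer; the server recovers accurate aggregates from millions of such reports.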

+

Apple tunes the Laplace noise distribution to provide a high level of privacy while preserving the utility of aggregated statistics. Increasing noise levels provides stronger privacy guarantees (lower ε values in DP terminology) but can reduce data utility. Apple’s privacy engineers empirically optimized this tradeoff based on their product goals.

+

Apple obtains high-fidelity aggregated statistics by aggregating hundreds of millions of noisy data points from devices. For instance, they can analyze new iOS apps’ features while masking any user’s app behaviors. On-device computation avoids sending raw data to Apple servers.

+

The system uses hardware-based secure random number generation to sample from the Laplace distribution on devices efficiently. Apple also had to optimize its differentially private algorithms and pipeline to operate under the computational constraints of consumer hardware.

+

Multiple third-party audits have verified that Apple’s system provides rigorous differential privacy protections in line with their stated policies. Of course, assumptions around composition over time and potential re-identification risks still apply. Apple’s deployment shows how differential privacy can be realized in large real-world products when backed by sufficient engineering resources.

+
+

Exercise 14.1 (Differential Privacy - TensorFlow Privacy)  


Want to train an ML model without compromising anyone’s secrets? Differential Privacy is like a superpower for your data! In this Colab, we’ll use TensorFlow Privacy to add special noise during training. This makes it way harder for anyone to determine if a single person’s data was used, even if they have sneaky ways of peeking at the model.

+

+
+
+
+
+
+
+

14.8.2 Federated Learning

+
+

Core Idea

+

Federated Learning (FL) is a type of machine learning in which a model is built and distributed across multiple devices or servers while keeping the training data localized. It was previously discussed in the Model Optimizations chapter, but we recap it briefly here for completeness, focusing on the aspects that pertain to this chapter.

+

FL aims to train machine learning models across decentralized networks of devices or systems while keeping all training data localized. Figure 14.12 illustrates this process: each participating device leverages its local data to calculate model updates, which are then aggregated to build an improved global model. However, the raw training data is never directly shared, transferred, or compiled. This privacy-preserving approach allows for the joint development of ML models without centralizing the potentially sensitive training data in one place.

+
Figure 14.12: Federated Learning lifecycle. Credit: Jin et al. (2020).

Jin, Yilun, Xiguang Wei, Yang Liu, and Qiang Yang. 2020. “Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective.” arXiv Preprint arXiv:2002.11545.

One of the most common model aggregation algorithms is Federated Averaging (FedAvg), in which the global model is created by averaging the parameters of all the local models. While FedAvg works well with independent and identically distributed (IID) data, alternate algorithms like Federated Proximal (FedProx) are crucial in real-world applications where data is often non-IID. FedProx is designed for the FL process when there is significant heterogeneity in client updates due to diverse data distributions across devices, computational capabilities, or varied amounts of data.
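The FedAvg aggregation step can be sketched minimally, assuming each client’s model is a flat list of parameters and updates are weighted by local dataset size (network transport, client selection, and non-IID handling are omitted):

```python
def fedavg(client_weights, client_sizes):
    # weighted average of client parameter vectors; clients with more local
    # data contribute proportionally more to the global model
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# two hypothetical clients with 10 and 30 local examples
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 30])
# → [2.5, 3.5]
```

Only these parameter vectors ever leave a client; the raw training examples stay local.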

+

By leaving the raw data distributed and exchanging only temporary model updates, federated learning provides a more secure and privacy-enhancing alternative to traditional centralized machine learning pipelines. This allows organizations and users to benefit collaboratively from shared models while maintaining control and ownership over sensitive data. The decentralized nature of FL also makes it robust to single points of failure.

+

Imagine a group of hospitals that want to collaborate on a study to predict patient outcomes based on their symptoms. However, they cannot share their patient data due to privacy concerns and regulations like HIPAA. Here’s how Federated Learning can help.

  • Local Training: Each hospital trains a machine learning model on its patient data. This training happens locally, meaning the data never leaves the hospital’s servers.

  • Model Sharing: After training, each hospital sends only the model (specifically, its parameters or weights) to a central server. It does not send any patient data.

  • Aggregating Models: The central server aggregates these models from all hospitals into a single, more robust model. This process typically involves averaging the model parameters.

  • Benefit: The result is a machine learning model that has learned from a wide range of patient data without sharing sensitive data or removing it from its original location.
+
+

Tradeoffs

+

There are several system performance-related aspects of FL in machine learning systems. It is wise to understand these tradeoffs because there is no “free lunch” for preserving privacy through FL (Li et al. 2020).

Li, Tian, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. 2020. “Federated Learning: Challenges, Methods, and Future Directions.” IEEE Signal Process Mag. 37 (3): 50–60. https://doi.org/10.1109/msp.2020.2975749.

Communication Overhead and Network Constraints: In FL, one of the most significant challenges is managing the communication overhead. This involves the frequent transmission of model updates between a central server and numerous client devices, which can be bandwidth-intensive. The total number of communication rounds and the size of transmitted messages per round need to be reduced to minimize communication further. This can lead to substantial network traffic, especially in scenarios with many participants. Additionally, latency becomes a critical factor — the time taken for these updates to be sent, aggregated, and redistributed can introduce delays. This affects the overall training time and impacts the system’s responsiveness and real-time capabilities. Managing this communication while minimizing bandwidth usage and latency is crucial for implementing FL.

+

Computational Load on Local Devices: FL relies on client devices (like smartphones or IoT devices, which especially matter in TinyML) for model training, which often have limited computational power and battery life. Running complex machine learning algorithms locally can strain these resources, leading to potential performance issues. Moreover, the capabilities of these devices can vary significantly, resulting in uneven contributions to the model training process. Some devices process updates faster and more efficiently than others, leading to disparities in the learning process. Balancing the computational load to ensure consistent participation and efficiency across all devices is a key challenge in FL.

+

Model Training Efficiency: FL’s decentralized nature can impact model training’s efficiency. Achieving convergence, where the model no longer significantly improves, can be slower in FL than in centralized training methods. This is particularly true in cases where the data is non-IID (non-independent and identically distributed) across devices. Additionally, the algorithms used for aggregating model updates play a critical role in the training process. Their efficiency directly affects the speed and effectiveness of learning. Developing and implementing algorithms that can handle the complexities of FL while ensuring timely convergence is essential for the system’s performance.

+

Scalability Challenges: Scalability is a significant concern in FL, especially as the number of participating devices increases. Managing and coordinating model updates from many devices adds complexity and can strain the system. Ensuring that the system architecture can efficiently handle this increased load without degrading performance is crucial. This involves not just handling the computational and communication aspects but also maintaining the quality and consistency of the model as the scale of the operation grows. A key challenge is designing FL systems that scale effectively while maintaining performance.

+

Data Synchronization and Consistency: Ensuring data synchronization and maintaining model consistency across all participating devices in FL is challenging. Keeping all devices synchronized with the latest model version can be difficult in environments with intermittent connectivity or devices that go offline periodically. Furthermore, maintaining consistency in the learned model, especially when dealing with a wide range of devices with different data distributions and update frequencies, is crucial. This requires sophisticated synchronization and aggregation strategies to ensure that the final model accurately reflects the learnings from all devices.

+

Energy Consumption: The energy consumption of client devices in FL is a critical factor, particularly for battery-powered devices like smartphones and other TinyML/IoT devices. The computational demands of training models locally can lead to significant battery drain, which might discourage continuous participation in the FL process. Balancing the computational requirements of model training with energy efficiency is essential. This involves optimizing algorithms and training processes to reduce energy consumption while achieving effective learning outcomes. Ensuring energy-efficient operation is key to user acceptance and the sustainability of FL systems.

+
+
+

Case Studies

+

Here are a couple of real-world case studies that can illustrate the use of federated learning:

+
+
Google Gboard
+

Google uses federated learning to improve predictions on its Gboard mobile keyboard app. The app runs a federated learning algorithm on users’ devices to learn from their local usage patterns and text predictions while keeping user data private. The model updates are aggregated in the cloud to produce an enhanced global model. This allows for providing next-word predictions personalized to each user’s typing style while avoiding directly collecting sensitive typing data. Google reported that the federated learning approach reduced prediction errors by 25% compared to the baseline while preserving privacy.

+
+
+
Healthcare Research
+

The UK Biobank and American College of Cardiology combined datasets to train a model for heart arrhythmia detection using federated learning. The datasets could not be combined directly due to legal and privacy restrictions. Federated learning allowed collaborative model development without sharing protected health data, with only model updates exchanged between the parties. This improved model accuracy as it could leverage a wider diversity of training data while meeting regulatory requirements.

+
+
+
Financial Services
+

Banks are exploring using federated learning for anti-money laundering (AML) detection models. Multiple banks could jointly improve AML models without sharing confidential customer transaction data with competitors or third parties. Only the model updates need to be aggregated rather than raw transaction data. This allows access to richer training data from diverse sources while avoiding regulatory and confidentiality issues around sharing sensitive financial customer data.

+

These examples demonstrate how federated learning provides tangible privacy benefits and enables collaborative ML in settings where direct data sharing is impossible.

+
+
+
+
+

14.8.3 Machine Unlearning

+
+

Core Idea

+

Machine unlearning is a fairly new process that describes how the influence of a subset of training data can be removed from a trained model. Several methods have been used to perform machine unlearning and remove the influence of a subset of training data from the final model. A baseline approach might consist of simply fine-tuning the model for more epochs on just the data that should be remembered, to decrease the influence of the data “forgotten” by the model. Since this approach doesn’t explicitly remove the influence of the data that should be erased, membership inference attacks are still possible, so researchers have adopted other approaches to explicitly unlearn data from a model. One such approach adjusts the model loss function to explicitly treat the losses of the “forget set” (data to be unlearned) and the “retain set” (remaining data that should still be remembered) differently (Tarun et al. 2022; Khan and Swaroop 2021).

Tarun, Ayush K, Vikram S Chundawat, Murari Mandal, and Mohan Kankanhalli. 2022. “Deep Regression Unlearning.” ArXiv Preprint abs/2210.08196. https://arxiv.org/abs/2210.08196.

Khan, Mohammad Emtiyaz, and Siddharth Swaroop. 2021. “Knowledge-Adaptation Priors.” In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual, edited by Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, 19757–70. https://proceedings.neurips.cc/paper/2021/hash/a4380923dd651c195b1631af7c829187-Abstract.html.
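The loss-reweighting idea can be illustrated with a toy one-dimensional linear model with hand-coded gradients (hypothetical data and an assumed forget-set weight; this is a sketch of the general idea, not the cited papers’ actual procedures). We descend on the retain-set loss while gently ascending on the forget-set loss, moving the model away from the full-data fit and toward the retain-only fit:

```python
def grad_mse(w, xs, ys):
    # gradient of mean squared error for the 1-D model y ≈ w * x
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

retain_x, retain_y = [1.0, 2.0], [2.0, 4.0]   # retain set, consistent with w = 2
forget_x, forget_y = [3.0], [0.0]             # forget set: a point pulling w toward 0

w = 0.71  # roughly the least-squares fit on ALL data, including the forget point
for _ in range(300):
    # combined update: minimize retain loss, maximize (small-weighted) forget loss
    w -= 0.05 * (grad_mse(w, retain_x, retain_y)
                 - 0.05 * grad_mse(w, forget_x, forget_y))
```

After unlearning, w sits near (slightly past) the retain-only optimum of 2, since the ascent term actively pushes away from the forgotten point.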
+

Case Study

+

Some researchers demonstrate a real-life example of machine unlearning approaches applied to SOTA machine learning models through training an LLM, LLaMA2-7b, to unlearn any references to Harry Potter (Eldan and Russinovich 2023). Though this model took 184K GPU hours to pre-train, it took only 1 GPU hour of fine-tuning to erase the model’s ability to generate or recall Harry Potter-related content, without noticeably compromising the accuracy of generating content unrelated to Harry Potter. Figure 14.13 demonstrates how the model output changes before (Llama-7b-chat-hf column) and after (Finetuned Llama-7b column) unlearning has occurred.

+
Figure 14.13: Llama unlearning Harry Potter. Credit: Eldan and Russinovich (2023).

Eldan, Ronen, and Mark Russinovich. 2023. “Who’s Harry Potter? Approximate Unlearning in LLMs.” ArXiv Preprint abs/2310.02238. https://arxiv.org/abs/2310.02238.
+
+
+

Other Uses

+
+
Removing adversarial data
+

Deep learning models have previously been shown to be vulnerable to adversarial attacks, in which the attacker generates adversarial data so similar to the original training data that a human cannot tell the difference between the real and fabricated data. The adversarial data results in the model outputting incorrect predictions, which could have detrimental consequences in various applications, including healthcare diagnosis predictions. Machine unlearning has been used to unlearn the influence of adversarial data to prevent these incorrect predictions from occurring and causing any harm.

+
+
+
+
+

14.8.4 Homomorphic Encryption

+
+

Core Idea

+

Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, generating an encrypted result that, when decrypted, matches the result of operations performed on the plaintext. For example, multiplying two numbers encrypted with homomorphic encryption produces an encrypted product that decrypts to the actual product of the two numbers. This means that data can be processed in an encrypted form, and only the resulting output needs to be decrypted, significantly enhancing data security, especially for sensitive information.

+

Homomorphic encryption enables outsourced computation on encrypted data without exposing the data itself to the external party performing the operations. However, only certain computations like addition and multiplication are supported in partially homomorphic schemes. Fully homomorphic encryption (FHE) that can handle any computation is even more complex. The number of possible operations is limited before noise accumulation corrupts the ciphertext.

+

To use homomorphic encryption across different entities, carefully generated public keys must be exchanged for operations across separately encrypted data. This advanced encryption technique enables previously impossible secure computation paradigms but requires expertise to implement correctly for real-world systems.

+
+
+

Benefits

+

Homomorphic encryption enables machine learning model training and inference on encrypted data, ensuring that sensitive inputs and intermediate values remain confidential. This is critical in healthcare, finance, genetics, and other domains, which are increasingly relying on ML to analyze sensitive and regulated data sets containing billions of personal records.

+

Homomorphic encryption thwarts attacks like model extraction and membership inference that could expose private data used in ML workflows. It provides an alternative to TEEs using hardware enclaves for confidential computing. However, current schemes have high computational overheads and algorithmic limitations that constrain real-world applications.

+

Homomorphic encryption realizes the decades-old vision of secure computation on ciphertexts. The idea was conceptualized in the 1970s, and the first fully homomorphic cryptosystems emerged in 2009, enabling arbitrary computations. Ongoing research is making these techniques more efficient and practical.

+

Homomorphic encryption shows great promise in enabling privacy-preserving machine learning under emerging data regulations. However, given constraints, one should carefully evaluate its applicability against other confidential computing approaches. Extensive resources exist to explore homomorphic encryption and track progress in easing adoption barriers.

+
+
+

Mechanics

  1. Data Encryption: Before data is processed or sent to an ML model, it is encrypted using a homomorphic encryption scheme and public key. For example, encrypting numbers \(x\) and \(y\) generates ciphertexts \(E(x)\) and \(E(y)\).

  2. Computation on Ciphertext: The ML algorithm processes the encrypted data directly. For instance, multiplying the ciphertexts \(E(x)\) and \(E(y)\) generates \(E(xy)\). More complex model training can also be done on ciphertexts.

  3. Result Encryption: The result \(E(xy)\) remains encrypted and can only be decrypted by someone with the corresponding private key to reveal the actual product \(xy\).

Only authorized parties with the private key can decrypt the final outputs, protecting the intermediate state. However, noise accumulates with each operation, preventing further computation without decryption.
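The multiplication step can be demonstrated with textbook (unpadded) RSA, which is partially homomorphic for multiplication: the product of two ciphertexts decrypts to the product of the plaintexts. This toy uses tiny hypothetical primes for readability; real deployments need large keys, padding, and vetted libraries such as SEAL or HElib.

```python
# toy RSA keypair from small primes (illustrative only; completely insecure)
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse; Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

x, y = 7, 6
E_x, E_y = encrypt(x), encrypt(y)
# multiply ciphertexts without ever seeing the plaintexts...
E_xy = (E_x * E_y) % n
# ...and decryption reveals the true product
assert decrypt(E_xy) == x * y
```

Only the holder of d learns the product; whoever performed the multiplication saw only ciphertexts.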

+

Beyond healthcare, homomorphic encryption enables confidential computing for applications like financial fraud detection, insurance analytics, genetics research, and more. It offers an alternative to techniques like secure multiparty computation and TEEs. Ongoing research aims to improve its efficiency and capabilities.

+

Tools like HElib, SEAL, and TensorFlow HE provide libraries for exploring and implementing homomorphic encryption in real-world machine learning pipelines.

+
+
+

Tradeoffs

+

For many real-time and embedded applications, fully homomorphic encryption remains impractical for the following reasons.

+

Computational Overhead: Homomorphic encryption imposes very high computational overheads, often resulting in slowdowns of over 100x for real-world ML applications. This makes it impractical for many time-sensitive or resource-constrained uses. Optimized hardware and parallelization can help but not eliminate this issue.

+

Complexity of Implementation: The sophisticated algorithms require deep expertise in cryptography to be implemented correctly. Nuances like format compatibility with floating-point ML models and scalable key management pose hurdles. This complexity hinders widespread practical adoption.

+

Algorithmic Limitations: Current schemes restrict the functions and depth of computations supported, limiting the models and data volumes that can be processed. Ongoing research is pushing these boundaries, but restrictions remain.

+

Hardware Acceleration: Homomorphic encryption requires specialized hardware, such as secure processors or coprocessors with TEEs, which adds design and infrastructure costs.

+

Hybrid Designs: Rather than encrypting entire workflows, selective application of homomorphic encryption to critical subcomponents can achieve protection while minimizing overheads.

+
+

Exercise 14.2 (Homomorphic Encryption)  


Ready to unlock the power of encrypted computation? Homomorphic encryption is like a magic trick for your data! In this Colab, we’ll learn how to do calculations on secret numbers without ever revealing them. Imagine training a model on data you can’t even see – that’s the power of this mind-bending technology.

+

+
+
+
+
+
+
+

14.8.5 Secure Multiparty Computation

+
+

Core Idea

+

The overarching goal of MPC is to enable different parties to jointly compute a function over their inputs while keeping those inputs private. For example, two organizations may want to collaborate on training a machine learning model by combining their respective data sets. Still, they cannot directly reveal that data due to privacy or confidentiality constraints. MPC aims to provide protocols and techniques that allow them to achieve the benefits of pooled data for model accuracy without compromising the privacy of each organization’s sensitive data.

+

At a high level, MPC works by carefully splitting the computation into parts that each party can execute independently using their private input. The results are then combined to reveal only the final output of the function and nothing about the intermediate values. Cryptographic techniques are used to provably guarantee that the partial results remain private.

+

Let’s take a simple example of an MPC protocol. One of the most basic MPC protocols is the secure addition of two numbers. Each party splits its input into random shares that are secretly distributed. They exchange the shares and locally compute the sum of the shares, which reconstructs the final sum without revealing the individual inputs. For example, if Alice has input x and Bob has input y:

  1. Alice generates random \(x_1\) and sets \(x_2 = x - x_1\)

  2. Bob generates random \(y_1\) and sets \(y_2 = y - y_1\)

  3. Alice sends \(x_1\) to Bob, Bob sends \(y_1\) to Alice (keeping \(x_2\) and \(y_2\) secret)

  4. Alice computes \(x_2 + y_1 = s_1\), Bob computes \(x_1 + y_2 = s_2\)

  5. \(s_1 + s_2 = x + y\) is the final sum, without revealing \(x\) or \(y\).

Alice’s and Bob’s individual inputs (\(x\) and \(y\)) remain private, and each party reveals only one number associated with their original inputs. The random splits ensure that no information about the original numbers is disclosed.
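The protocol above can be sketched directly, assuming shares live in the integers modulo 2^32 (a hypothetical choice; any common modulus works):

```python
import random

MOD = 2**32

def share(secret: int):
    # split a secret into two uniformly random shares that sum to it (mod MOD)
    s1 = random.randrange(MOD)
    return s1, (secret - s1) % MOD

x, y = 42, 17
x1, x2 = share(x)          # Alice's input, split into shares
y1, y2 = share(y)          # Bob's input, split into shares
# after exchanging one share each: Alice holds (x2, y1), Bob holds (x1, y2)
s1 = (x2 + y1) % MOD       # Alice's partial sum
s2 = (x1 + y2) % MOD       # Bob's partial sum
assert (s1 + s2) % MOD == (x + y) % MOD   # reveals only the sum, not x or y
```

Because each share is uniformly random on its own, seeing one share tells an observer nothing about the underlying secret.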

+

Secure Comparison: Another basic operation is a secure comparison of two numbers, determining which is greater than the other. This can be done using techniques like Yao’s Garbled Circuits, where the comparison circuit is encrypted to allow joint evaluation of the inputs without leaking them.

+

Secure Matrix Multiplication: Matrix operations like multiplication are essential for machine learning. MPC techniques like additive secret sharing can be used to split matrices into random shares, compute products on the shares, and then reconstruct the result.

+

Secure Model Training: Distributed machine learning training algorithms like federated averaging can be made secure using MPC. Model updates computed on partitioned data at each node are secretly shared between nodes and aggregated to train the global model without exposing individual updates.

+

The core idea behind MPC protocols is to divide the computation into steps that can be executed jointly without revealing intermediate sensitive data. This is accomplished by combining cryptographic techniques like secret sharing, homomorphic encryption, oblivious transfer, and garbled circuits. MPC protocols enable the collaborative computation of sensitive data while providing provable privacy guarantees. This privacy-preserving capability is essential for many machine learning applications today involving multiple parties that cannot directly share their raw data.

+

The main approaches used in MPC include:

  • Homomorphic encryption: Special encryption allows computations to be carried out on encrypted data without decrypting it.

  • Secret sharing: The private data is divided into random shares distributed to each party. Computations are done locally on the shares and finally reconstructed.

  • Oblivious transfer: A protocol where a receiver obtains a subset of data from a sender, but the sender does not know which specific data was transferred.

  • Garbled circuits: The function to be computed is represented as a Boolean circuit that is encrypted (“garbled”) to allow joint evaluation without revealing inputs.
+
+

Tradeoffs

+

While MPC protocols provide strong privacy guarantees, they come at a high computational cost compared to plain computations. Every secure operation, like addition, multiplication, or comparison, requires orders of magnitude more processing than the equivalent unencrypted operation. This overhead stems from the underlying cryptographic techniques:

  • In partially homomorphic encryption, each computation on ciphertexts requires costly public-key operations. Fully homomorphic encryption has even higher overheads.

  • Secret sharing divides data into multiple shares, so even basic operations require manipulating many shares.

  • Oblivious transfer and garbled circuits add masking and encryption to hide data access patterns and execution flows.

  • MPC systems require extensive communication and interaction between parties to compute on shares/ciphertexts jointly.

As a result, MPC protocols can slow down computations by 3-4 orders of magnitude compared to plain implementations. This becomes prohibitively expensive for large datasets and models. Therefore, training machine learning models on encrypted data using MPC remains infeasible today for realistic dataset sizes due to the overhead. Clever optimizations and approximations are needed to make MPC practical.

+

Ongoing MPC research aims to close this efficiency gap through cryptographic advances, new algorithms, trusted hardware like SGX enclaves, and leveraging accelerators like GPUs/TPUs. However, in the foreseeable future, some degree of approximation and performance tradeoff is needed to scale MPC to meet the demands of real-world machine learning systems.

+
+
+
+

14.8.6 Synthetic Data Generation

+
+

Core Idea

+

Synthetic data generation has emerged as an important privacy-preserving machine learning approach that allows models to be developed and tested without exposing real user data. The key idea is to train generative models on real-world datasets and then sample from these models to synthesize artificial data that statistically match the original data distribution but does not contain actual user information. For example, a GAN could be trained on a dataset of sensitive medical records to learn the underlying patterns and then used to sample synthetic patient data.

+

The primary challenge of synthesizing data is to ensure adversaries are unable to re-identify the original dataset. A simple approach to achieving synthetic data is adding noise to the original dataset, which still risks privacy leakage. When noise is added to data in the context of differential privacy, sophisticated mechanisms based on the data’s sensitivity are used to calibrate the amount and distribution of noise. Through these mathematically rigorous frameworks, differential privacy provides formal privacy guarantees at a chosen level, which is the primary goal of this privacy-preserving technique. Beyond preserving privacy, synthetic data also combats multiple data availability issues, such as imbalanced datasets, scarce datasets, and anomaly detection.

+
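The sensitivity-calibrated noise described above can be sketched with the classic Laplace mechanism; the parameter values here are illustrative:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release a noisy statistic satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon  # noise scale calibrated to query sensitivity
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
# Counting query: one person joining or leaving changes the count by at most 1,
# so the sensitivity is 1.
true_count = 1000
noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but larger noise, making the privacy/utility tradeoff explicit in a single parameter.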

Researchers can freely share this synthetic data and collaborate on modeling without revealing private medical information. Well-constructed synthetic data protects privacy while providing utility for developing accurate models. Key techniques to prevent reconstructing the original data include adding differential privacy noise during training, enforcing plausibility constraints, and using multiple diverse generative models. Here are some common approaches for generating synthetic data:

+
    +
  • Generative Adversarial Networks (GANs): GANs are an unsupervised learning approach in which two neural networks compete against each other in a game. Figure fig-gans is an overview of the GAN system. The generator network (big red box) produces the synthetic data, and the discriminator network (yellow box) evaluates the authenticity of the data by distinguishing between fake data created by the generator and the real data. Both networks learn and update their parameters based on the results. The discriminator acts as a metric of how similar the fake and real data are to one another. GANs are highly effective at generating realistic data and are a popular approach for synthetic data generation.
  • +
+
+
+
+ +
+
+Figure 14.14: Flowchart of GANs. Credit: Rosa and Papa (2021). +
+
+Rosa, Gustavo H. de, and João P. Papa. 2021. “A Survey on Text Generation Using Generative Adversarial Networks.” Pattern Recogn. 119 (November): 108098. https://doi.org/10.1016/j.patcog.2021.108098. +
+
+
    +
  • Variational Autoencoders (VAEs): VAEs are neural networks capable of learning complex probability distributions and balancing data generation quality and computational efficiency. They encode data into a latent space where they learn the distribution to decode the data back.

  • +
  • Data Augmentation: This involves transforming existing data to create new, altered data. For example, flipping, rotating, and scaling (uniformly or non-uniformly) original images can help create a more diverse, robust image dataset before training an ML model.

  • +
  • Simulations: Mathematical models can simulate real-world systems or processes to mimic real-world phenomena. This is highly useful in scientific research, urban planning, and economics.

  • +
+
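As a minimal illustration of the data-augmentation approach above, a single image can be turned into several altered copies with flips and rotations (the array shape and transformations chosen here are illustrative):

```python
import numpy as np

def augment(image, rng):
    """Create altered copies of an image via flips and 90-degree rotations."""
    ops = [
        lambda im: np.fliplr(im),                   # horizontal flip
        lambda im: np.flipud(im),                   # vertical flip
        lambda im: np.rot90(im, k=rng.integers(1, 4)),  # random 90° rotation
    ]
    return [op(image) for op in ops]

rng = np.random.default_rng(0)
img = rng.random((8, 8))        # stand-in for a grayscale image
variants = augment(img, rng)    # three new training examples from one original
```

Each variant preserves the label of the original image while diversifying the dataset before training.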
+
+

Benefits

+

While synthetic data may be necessary due to privacy or compliance risks, it is also widely used when available data is of poor quality, scarce, or inaccessible. Synthetic data streamlines robust model training, testing, and deployment, enabling more efficient and effective development. It allows researchers to share models more widely without breaching privacy laws and regulations, and it facilitates collaboration among users of the same dataset, helping broaden the capabilities and advancements of ML research.

+

There are several motivations for using synthetic data in machine learning:

+
    +
  • Privacy and compliance: Synthetic data avoids exposing personal information, allowing more open sharing and collaboration. This is important when working with sensitive datasets like healthcare records or financial information.

  • +
  • Data scarcity: When insufficient real-world data is available, synthetic data can augment training datasets. This improves model accuracy when limited data is a bottleneck.

  • +
  • Model testing: Synthetic data provides privacy-safe sandboxes for testing model performance, debugging issues, and monitoring for bias.

  • +
  • Data labeling: High-quality labeled training data is often scarce and expensive. Synthetic data can help auto-generate labeled examples.

  • +
+
+
+

Tradeoffs

+

While synthetic data aims to remove any evidence of the original dataset, privacy leakage is still a risk since the synthetic data mimics the original data. The statistical information and distribution are similar, if not the same, between the original and synthetic data. By resampling from the distribution, adversaries may still be able to recover the original training samples. Due to their inherent learning processes and complexities, neural networks might accidentally reveal sensitive information about the original training data.

+

A core challenge with synthetic data is the potential gap between synthetic and real-world data distributions. Despite advancements in generative modeling techniques, synthetic data may only partially capture real data’s complexity, diversity, and nuanced patterns. This can limit the utility of synthetic data for robustly training machine learning models. Rigorously evaluating synthetic data quality through adversary methods and comparing model performance to real data benchmarks helps assess and improve fidelity. However, inherently, synthetic data remains an approximation.

+

Another critical concern is the privacy risks of synthetic data. Generative models may leak identifiable information about individuals in the training data, which could enable reconstruction of private information. Emerging adversarial attacks demonstrate the challenges in preventing identity leakage from synthetic data generation pipelines. Techniques like differential privacy can help safeguard privacy but come with tradeoffs in data utility. There is an inherent tension between producing useful synthetic data and fully protecting sensitive training data, which must be balanced.

+

Additional pitfalls of synthetic data include amplified biases, labeling difficulties, the computational overhead of training generative models, storage costs, and failure to account for out-of-distribution novel data. While these are secondary to the core synthetic-real gap and privacy risks, they remain important considerations when evaluating the suitability of synthetic data for particular machine-learning tasks. As with any technique, the advantages of synthetic data come with inherent tradeoffs and limitations that require thoughtful mitigation strategies.

+
+
+
+

14.8.7 Summary

+

While all the techniques we have discussed thus far aim to enable privacy-preserving machine learning, they involve distinct mechanisms and tradeoffs. Factors like computational constraints, required trust assumptions, threat models, and data characteristics help guide the selection process for a particular use case. However, finding the right balance between privacy, accuracy, and efficiency necessitates experimentation and empirical evaluation for many applications. Below is a comparison table of the key privacy-preserving machine learning techniques and their pros and cons:

| Technique | Pros | Cons |
|---|---|---|
| Differential Privacy | Strong formal privacy guarantees; robust to auxiliary data attacks; versatile for many data types and analyses | Accuracy loss from noise addition; computational overhead for sensitivity analysis and noise generation |
| Federated Learning | Allows collaborative learning without sharing raw data; data remains decentralized, improving security; no need for encrypted computation | Increased communication overhead; potentially slower model convergence; uneven client device capabilities |
| Secure Multi-Party Computation | Enables joint computation on sensitive data; provides cryptographic privacy guarantees; flexible protocols for various functions | Very high computational overhead; complexity of implementation; algorithmic constraints on function depth |
| Homomorphic Encryption | Allows computation on encrypted data; prevents intermediate state exposure | Extremely high computational cost; complex cryptographic implementations; restrictions on function types |
| Synthetic Data Generation | Enables data sharing without leakage; mitigates data scarcity problems | Synthetic-real gap in distributions; potential for reconstructing private data; biases and labeling challenges |
+
+
+
+

14.9 Conclusion

+

Machine learning hardware security is critical as embedded ML systems are increasingly deployed in safety-critical domains like medical devices, industrial controls, and autonomous vehicles. We have explored various threats spanning hardware bugs, physical attacks, side channels, supply chain risks, etc. Defenses like TEEs, Secure Boot, PUFs, and hardware security modules provide multilayer protection tailored for resource-constrained embedded devices.

+

However, continual vigilance is essential to track emerging attack vectors and address potential vulnerabilities through secure engineering practices across the hardware lifecycle. As ML and embedded ML spread, maintaining rigorous security foundations that match the field’s accelerating pace of innovation remains imperative.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/responsible_ai/responsible_ai.html b/contents/responsible_ai/responsible_ai.html new file mode 100644 index 00000000..be56c09f --- /dev/null +++ b/contents/responsible_ai/responsible_ai.html @@ -0,0 +1,1716 @@ + + + + + + + + + +Machine Learning Systems - 15  Responsible AI + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

15  Responsible AI

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Illustration of responsible AI in a futuristic setting with the universe in the backdrop: A human hand or hands nurturing a seedling that grows into an AI tree, symbolizing a neural network. The tree has digital branches and leaves, resembling a neural network, to represent the interconnected nature of AI. The background depicts a future universe where humans and animals with general intelligence collaborate harmoniously. The scene captures the initial nurturing of the AI as a seedling, emphasizing the ethical development of AI technology in harmony with humanity and the universe.
+
+
+

As machine learning models grow across various domains, these algorithms have the potential to perpetuate historical biases, breach privacy, or enable unethical automated decisions if developed without thoughtful consideration of their societal impacts. Even systems created with good intentions can ultimately discriminate against certain demographic groups, enable surveillance, or lack transparency into their behaviors and decision-making processes. As such, machine learning engineers and companies have an ethical responsibility to proactively ensure principles of fairness, accountability, safety, and transparency are reflected in their models to prevent harm and build public trust.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand responsible AI’s core principles and motivations, including fairness, transparency, privacy, safety, and accountability.

  • +
  • Learn technical methods for implementing responsible AI principles, such as detecting dataset biases, building interpretable models, adding noise for privacy, and testing model robustness.

  • +
  • Recognize organizational and social challenges to achieving responsible AI, including data quality, model objectives, communication, and job impacts.

  • +
  • Knowledge of ethical frameworks and considerations for AI systems, spanning AI safety, human autonomy, and economic consequences.

  • +
  • Appreciate the increased complexity and costs of developing ethical, trustworthy AI systems compared to unprincipled AI.

  • +
+
+
+
+

15.1 Introduction

+

Machine learning models are increasingly used to automate decisions in high-stakes social domains like healthcare, criminal justice, and employment. However, without deliberate care, these algorithms can perpetuate biases, breach privacy, or cause other harm. For instance, a loan approval model solely trained on data from high-income neighborhoods could disadvantage applicants from lower-income areas. This motivates the need for responsible machine learning - creating fair, accountable, transparent, and ethical models.

+

Several core principles underlie responsible ML. Fairness ensures models do not discriminate based on gender, race, age, and other attributes. Explainability enables humans to interpret model behaviors and improve transparency. Robustness and safety techniques prevent vulnerabilities like adversarial examples. Rigorous testing and validation help reduce unintended model weaknesses or side effects.

+

Implementing responsible ML presents both technical and ethical challenges. Developers must grapple with defining fairness mathematically, balancing competing objectives like accuracy vs interpretability, and securing quality training data. Organizations must also align incentives, policies, and culture to uphold ethical AI.

+

This chapter will equip you to critically evaluate AI systems and contribute to developing beneficial and ethical machine learning applications by covering the foundations, methods, and real-world implications of responsible ML. The responsible ML principles discussed are crucial knowledge as algorithms mediate more aspects of human society.

+
+
+

15.2 Definition

+

Responsible AI is about developing AI that positively impacts society in line with human ethics and values. There is no universally agreed-upon definition of “responsible AI,” but it commonly refers to designing, developing, and deploying artificial intelligence systems in an ethical, socially beneficial way. The core goal is to create AI that is trustworthy, unbiased, fair, transparent, accountable, and safe. Responsible AI is generally considered to encompass principles such as:

+
    +
  • Fairness: Avoiding biases, discrimination, and potential harm to certain groups or populations

  • +
  • Explainability: Enabling humans to understand and interpret how AI models make decisions

  • +
  • Transparency: Openly communicating how AI systems operate, are built, and are evaluated

  • +
  • Accountability: Having processes to determine responsibility and liability for AI failures or negative impacts

  • +
  • Robustness: Ensuring AI systems are secure, reliable, and behave as intended

  • +
  • Privacy: Protecting sensitive user data and adhering to privacy laws and ethics

  • +
+

Putting these principles into practice involves technical techniques, corporate policies, governance frameworks, and moral philosophy. There are also ongoing debates around defining ambiguous concepts like fairness and determining how to balance competing objectives.

+
+
+

15.3 Principles and Concepts

+
+

15.3.1 Transparency and Explainability

+

Machine learning models are often criticized as mysterious “black boxes” - opaque systems where it’s unclear how they arrived at particular predictions or decisions. For example, an AI system called COMPAS used to assess criminal recidivism risk in the US was found to be racially biased against black defendants. Still, the opacity of the algorithm made it difficult to understand and fix the problem. This lack of transparency can obscure biases, errors, and deficiencies.

+

Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques like LIME, Shapley values, and saliency maps empower humans to understand and validate model logic. Laws like the EU’s GDPR also mandate transparency, which requires explainability for certain automated decisions. Overall, transparency and explainability are critical pillars of responsible AI.

+
+
+

15.3.2 Fairness, Bias, and Discrimination

+

ML models trained on historically biased data often perpetuate and amplify those prejudices. Healthcare algorithms have been shown to disadvantage black patients by underestimating their needs (Obermeyer et al. 2019). Facial recognition systems have been shown to be less accurate for women and people of color. Such algorithmic discrimination can negatively impact people’s lives in profound ways.

+
+Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–53. https://doi.org/10.1126/science.aax2342. +

Different philosophical perspectives also exist on fairness - for example, is it fairer to treat all individuals equally or try to achieve equal outcomes for groups? Ensuring fairness requires proactively detecting and mitigating biases in data and models. However, achieving perfect fairness is tremendously difficult due to contrasting mathematical definitions and ethical perspectives. Still, promoting algorithmic fairness and non-discrimination is a key responsibility in AI development.

+
+
+

15.3.3 Privacy and Data Governance

+

Maintaining individuals’ privacy is an ethical obligation and legal requirement for organizations deploying AI systems. Regulations like the EU’s GDPR mandate data privacy protections and rights, such as the ability to access and delete one’s data.

+

However, maximizing the utility and accuracy of data for training models can conflict with preserving privacy - modeling disease progression could benefit from access to patients’ full genomes, but sharing such data widely violates privacy.

+

Responsible data governance involves carefully anonymizing data, controlling access with encryption, getting informed consent from data subjects, and collecting the minimum data needed. Honoring privacy is challenging but critical as AI capabilities and adoption expand.

+
+
+

15.3.4 Safety and Robustness

+

Putting AI systems into real-world operation requires ensuring they are safe, reliable, and robust, especially for human interaction scenarios. Self-driving cars from Uber and Tesla have been involved in deadly crashes due to unsafe behaviors.

+

Adversarial attacks that subtly alter input data can also fool ML models and cause dangerous failures if systems are not resistant. Deepfakes represent another emerging threat area.

+

Below is a deepfake video of Barack Obama that went viral a few years ago.

+
+

Promoting safety requires extensive testing, risk analysis, human oversight, and designing systems that combine multiple weak models to avoid single points of failure. Rigorous safety mechanisms are essential for the responsible deployment of capable AI.

+
+
+

15.3.5 Accountability and Governance

+

When AI systems eventually fail or produce harmful outcomes, mechanisms must exist to address resultant issues, compensate affected parties, and assign responsibility. Both corporate accountability policies and government regulations are indispensable for responsible AI governance. For instance, Illinois’ Artificial Intelligence Video Interview Act requires companies to disclose and obtain consent for AI video analysis, promoting accountability.

+

Without clear accountability, even harms caused unintentionally could go unresolved, furthering public outrage and distrust. Oversight boards, impact assessments, grievance redress processes, and independent audits promote responsible development and deployment.

+
+
+
+

15.4 Cloud, Edge & Tiny ML

+

While these principles broadly apply across AI systems, certain responsible AI considerations are unique or pronounced when dealing with machine learning on embedded devices versus traditional server-based modeling. Therefore, we present a high-level taxonomy comparing responsible AI considerations across cloud, edge, and TinyML systems.

+
+

15.4.1 Summary

+

The table below summarizes how responsible AI principles manifest differently across cloud, edge, and TinyML architectures and how core considerations tie into their unique capabilities and limitations. Each environment’s constraints and tradeoffs shape how we approach transparency, accountability, governance, and other pillars of responsible AI.

| Principle | Cloud ML | Edge ML | TinyML |
|---|---|---|---|
| Explainability | Complex models supported | Lightweight required | Severe limits |
| Fairness | Broad data available | On-device biases | Limited data labels |
| Privacy | Cloud data vulnerabilities | More sensitive data | Data dispersed |
| Safety | Hacking threats | Real-world interaction | Autonomous devices |
| Accountability | Corporate policies | Supply chain issues | Component tracing |
| Governance | External oversight feasible | Self-governance needed | Protocol constraints |
+
+
+

15.4.2 Explainability

+

For cloud-based machine learning, explainability techniques can leverage significant compute resources, enabling complex methods like SHAP values or sampling-based approaches to interpret model behaviors. For example, Microsoft’s InterpretML toolkit provides explainability techniques tailored for cloud environments.

+

However, edge ML operates on resource-constrained devices, requiring more lightweight explainability methods that can run locally without excessive latency. Techniques like LIME (Ribeiro, Singh, and Guestrin 2016) approximate model explanations using linear models or decision trees to avoid expensive computations, which makes them ideal for resource-constrained devices. However, LIME requires training hundreds to even thousands of models to generate good explanations, which is often infeasible given edge computing constraints. In contrast, saliency-based methods are often much faster in practice, only requiring a single forward pass through the network to estimate feature importance. This greater efficiency makes such methods better suited to edge devices with limited compute resources where low-latency explanations are critical.
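The cost difference can be sketched on a toy linear model (all names and numbers here are illustrative): a saliency-style gradient needs only a handful of model evaluations, while a LIME-style surrogate fits a local model to hundreds of perturbed samples:

```python
import numpy as np

def model(x, w=np.array([3.0, -2.0, 0.5])):
    """Stand-in for a deployed edge model; linear so attributions are known."""
    return float(w @ x)

x = np.array([1.0, 2.0, -1.0])

# Saliency-style: finite-difference gradient -- O(d) model calls here,
# a single backward pass when autodiff is available on-device.
eps = 1e-6
saliency = np.array([
    (model(x + eps * np.eye(3)[i]) - model(x)) / eps for i in range(3)
])

# LIME-style surrogate: hundreds of perturbed samples plus a least-squares fit.
rng = np.random.default_rng(0)
X_pert = x + 0.1 * rng.normal(size=(500, 3))
y_pert = np.array([model(p) for p in X_pert])
surrogate_w, *_ = np.linalg.lstsq(
    np.c_[X_pert, np.ones(len(X_pert))], y_pert, rcond=None
)

# Both recover the same feature attributions for this linear model,
# but at very different computational cost.
assert np.allclose(saliency, surrogate_w[:3], atol=1e-3)
```

For a truly linear model the two agree exactly; the point is the 500-to-3 ratio in model evaluations, which dominates on latency-constrained edge hardware.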

+

Given their tiny hardware capabilities, embedded systems pose the most significant challenges for explainability. Compact models and limited data can make models inherently more transparent, but generating explanations for individual decisions may not be feasible on heavily size- and power-optimized microcontrollers. DARPA’s Transparent Computing program aims to develop extremely low-overhead explainability, especially for TinyML devices like sensors and wearables.

+
+
+

15.4.3 Fairness

+

For cloud machine learning, vast datasets and computing power enable detecting biases across large heterogeneous populations and mitigating them through techniques like re-weighting data samples. However, biases may emerge from the broad behavioral data used to train cloud models. Meta’s Fairness Flow framework helps assess cloud ML fairness.

+

Edge ML relies on limited on-device data, making analyzing biases across diverse groups harder. However, edge devices interact closely with individuals, providing an opportunity to adapt locally for fairness. Google’s Federated Learning distributes model training across devices to incorporate individual differences.

+

TinyML poses unique challenges for fairness with highly dispersed specialized hardware and minimal training data. Bias testing is difficult across diverse devices. Collecting representative data from many devices to mitigate bias has scale and privacy hurdles. DARPA’s Assured Neuro Symbolic Learning and Reasoning (ANSR) efforts are geared toward developing fairness techniques given extreme hardware constraints.

+
+
+

15.4.4 Safety

+

Key safety risks for cloud ML include model hacking, data poisoning, and malware disrupting cloud services. Robustness techniques like adversarial training, anomaly detection, and diversified models aim to harden cloud ML against attacks. Redundancy can help prevent single points of failure.

+

Edge ML and TinyML interact with the physical world, so reliability and safety validation are critical. Rigorous testing platforms like Foretellix synthetically generate edge scenarios to validate safety. TinyML safety is magnified by autonomous devices with limited supervision. TinyML safety often relies on collective coordination - swarms of drones maintain safety through redundancy. Physical control barriers also constrain unsafe TinyML device behaviors.

+

In summary, safety is crucial but manifests differently in each domain. Cloud ML guards against hacking, edge ML interacts physically, so reliability is key, and TinyML leverages distributed coordination for safety. Understanding the nuances guides appropriate safety techniques.

+
+
+

15.4.5 Accountability

+

Cloud ML’s accountability centers on corporate practices like responsible AI committees, ethical charters, and processes to address harmful incidents. Third-party audits and external government oversight promote cloud ML accountability.

+

Edge ML accountability is more complex with distributed devices and supply chain fragmentation. Companies are accountable for devices, but components come from various vendors. Industry standards help coordinate edge ML accountability across stakeholders.

+

With TinyML, accountability mechanisms must be traced across long, complex supply chains of integrated circuits, sensors, and other hardware. TinyML certification schemes help track component provenance. Trade associations should ideally promote shared accountability for ethical TinyML.

+
+
+

15.4.6 Governance

+

Organizations institute internal governance for cloud ML, such as ethics boards, audits, and model risk management. But external governance also oversees cloud ML, like regulations on bias and transparency such as the AI Bill of Rights, General Data Protection Regulation (GDPR), and California Consumer Privacy Act (CCPA). Third-party auditing supports cloud ML governance.

+

Edge ML is more decentralized, requiring responsible self-governance by developers and companies deploying models locally. Industry associations coordinate governance across edge ML vendors, and open software helps align incentives for ethical edge ML.

+

Extreme decentralization and complexity make external governance infeasible with TinyML. TinyML relies on protocols and standards for self-governance baked into model design and hardware. Cryptography enables the provable trustworthiness of TinyML devices.

+
+
+

15.4.7 Privacy

+

For cloud ML, vast amounts of user data are concentrated in the cloud, creating risks of exposure through breaches. Differential privacy techniques add noise to cloud data to preserve privacy. Strict access controls and encryption protect cloud data at rest and in transit.

+

Edge ML moves data processing onto user devices, reducing aggregated data collection but increasing potential sensitivity as personal data resides on the device. Apple uses on-device ML and differential privacy to train models while minimizing data sharing. Data anonymization and secure enclaves protect on-device data.

+

TinyML distributes data across many resource-constrained devices, making centralized breaches unlikely but anonymization at scale challenging. Data minimization and using edge devices as intermediaries help TinyML privacy.

+

So, while cloud ML must protect expansive centralized data, edge ML secures sensitive on-device data, and TinyML aims for minimal distributed data sharing due to constraints. While privacy is vital throughout, techniques must match the environment. Understanding nuances allows for selecting appropriate privacy preservation approaches.

+
+
+
+

15.5 Technical Aspects

+
+

15.5.1 Detecting and Mitigating Bias

+

A large body of work has demonstrated that machine learning models can exhibit bias, from underperforming people of a certain identity to making decisions that limit groups’ access to important resources (Buolamwini and Gebru 2018).

+

Ensuring fair and equitable treatment for all groups affected by machine learning systems is crucial as these models increasingly impact people’s lives in areas like lending, healthcare, and criminal justice. We typically evaluate model fairness by considering “subgroup attributes” unrelated to the prediction task that capture identities like race, gender, or religion; in a loan default prediction model, for example, these subgroups could be race, gender, or religion. When models are trained naively to maximize accuracy, they often ignore subgroup performance, which can negatively impact marginalized communities.

+

To illustrate, imagine a model predicting loan repayment where the plusses (+’s) represent repayment and the circles (O’s) represent default, as shown in Figure fig-fairness-example. The optimal accuracy would be correctly classifying all of Group A while misclassifying some of Group B’s creditworthy applicants as defaults. If positive classifications allow access to loans, Group A would receive many more loans, which would naturally result in a biased outcome.

Figure 15.1: Fairness and accuracy.

Alternatively, correcting the biases against Group B would likely increase “false positives” and reduce accuracy for Group A. Or, we could train separate models focused on maximizing true positives for each group. However, this would require explicitly using sensitive attributes like race in the decision process.


As we see, there are inherent tensions around priorities like accuracy versus subgroup fairness and whether to explicitly account for protected classes. Reasonable people can disagree on the appropriate tradeoffs. Constraints around costs and implementation options further complicate matters. Overall, ensuring the fair and ethical use of machine learning involves navigating these complex challenges.


Thus, the fairness literature has proposed three main fairness metrics for quantifying how fairly a model performs over a dataset (Hardt, Price, and Srebro 2016). Given a model h and a dataset D consisting of (x, y, s) samples, where x is the feature vector, y is the label, and s is the subgroup attribute, and assuming there are simply two subgroups a and b, we can define the following.

  1. Demographic Parity asks how accurate a model is for each subgroup. In other words, P(h(X) = Y | S = a) = P(h(X) = Y | S = b).

  2. Equalized Odds asks how precise a model is on positive and negative samples for each subgroup. P(h(X) = y | S = a, Y = y) = P(h(X) = y | S = b, Y = y).

  3. Equality of Opportunity is a special case of equalized odds that only asks how precise a model is on positive samples. This is relevant in cases such as resource allocation, where we care about how positive (i.e., resource-allocated) labels are distributed across groups. For example, we care that an equal proportion of loans are given to both men and women. P(h(X) = 1 | S = a, Y = 1) = P(h(X) = 1 | S = b, Y = 1).
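As a rough illustration, all three gaps can be computed directly from a model's predictions. The sketch below follows the definitions above for two subgroups; the function and variable names are ours, not from any particular fairness library.

```python
import numpy as np

def fairness_gaps(y_true, y_pred, s):
    """Gap between subgroups a and b under each fairness definition
    above (a gap of 0 means the definition is satisfied exactly)."""
    y_true, y_pred, s = map(np.asarray, (y_true, y_pred, s))
    a, b = s == "a", s == "b"

    def acc(g):   # P(h(X) = Y | S = g)
        return float((y_pred[g] == y_true[g]).mean())

    def tpr(g):   # P(h(X) = 1 | S = g, Y = 1)
        return float(y_pred[g & (y_true == 1)].mean())

    def tnr(g):   # P(h(X) = 0 | S = g, Y = 0)
        return float((y_pred[g & (y_true == 0)] == 0).mean())

    return {
        "demographic_parity": abs(acc(a) - acc(b)),
        "equal_opportunity": abs(tpr(a) - tpr(b)),
        "equalized_odds": max(abs(tpr(a) - tpr(b)), abs(tnr(a) - tnr(b))),
    }
```

For example, a model that is perfect on Group A but only 50% accurate on Group B would report a demographic-parity gap of 0.5.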

Note: These definitions often take a narrow view when considering binary comparisons between two subgroups. Another thread of fair machine learning research focusing on multicalibration and multiaccuracy considers the interactions between an arbitrary number of identities, acknowledging the inherent intersectionality of individual identities in the real world (Hébert-Johnson et al. 2018).

Hébert-Johnson, Úrsula, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. 2018. “Multicalibration: Calibration for the (Computationally-Identifiable) Masses.” In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, edited by Jennifer G. Dy and Andreas Krause, 80:1944–53. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v80/hebert-johnson18a.html.

Context Matters


Before making any technical decisions to develop an unbiased ML algorithm, we need to understand the context surrounding our model. Here are some of the key questions to think about:

  • Who will this model make decisions for?
  • Who is represented in the training data?
  • Who is represented, and who is missing, at the table of engineers, designers, and managers?
  • What sort of long-lasting impacts could this model have? For example, will it impact an individual’s financial security at a generational scale, such as determining college admissions or approving a loan for a house?
  • What historical and systematic biases are present in this setting, and are they present in the training data the model will generalize from?

Understanding a system’s social, ethical, and historical background is critical to preventing harm and should inform decisions throughout the model development lifecycle. After understanding the context, one can make various technical decisions to remove bias. First, one must decide what fairness metric is the most appropriate criterion for optimizing. Next, there are generally three main areas where one can intervene to debias an ML system.


First, preprocessing is when one balances a dataset to ensure fair representation or even increases the weight on certain underrepresented groups to ensure the model performs well on them. Second, in-processing modifies the training process of an ML system to prioritize fairness. This can range from adding a fairness regularizer (Lowy et al. 2021) to training an ensemble of models and sampling from them in a specific manner (Agarwal et al. 2018).

Lowy, Andrew, Rakesh Pavan, Sina Baharlouei, Meisam Razaviyayn, and Ahmad Beirami. 2021. “Fermi: Fair Empirical Risk Minimization via Exponential Rényi Mutual Information.”
Agarwal, Alekh, Alina Beygelzimer, Miroslav Dudı́k, John Langford, and Hanna M. Wallach. 2018. “A Reductions Approach to Fair Classification.” In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, edited by Jennifer G. Dy and Andreas Krause, 80:60–69. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v80/agarwal18a.html.
Alghamdi, Wael, Hsiang Hsu, Haewon Jeong, Hao Wang, Peter Michalak, Shahab Asoodeh, and Flavio Calmon. 2022. “Beyond Adult and COMPAS: Fair Multi-Class Prediction via Information Projection.” Adv. Neur. In. 35: 38747–60.
Hardt, Moritz, Eric Price, and Nati Srebro. 2016. “Equality of Opportunity in Supervised Learning.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 3315–23. https://proceedings.neurips.cc/paper/2016/hash/9d2682367c3935defcb1f9e247a97c0d-Abstract.html.

Finally, post-processing debiases a model after the fact, taking a trained model and modifying its predictions in a specific manner to ensure fairness is preserved (Alghamdi et al. 2022; Hardt, Price, and Srebro 2016). Post-processing builds on the preprocessing and in-processing steps by providing another opportunity to address bias and fairness issues in the model after it has already been trained.


The three-step process of preprocessing, in-processing, and post-processing provides a framework for intervening at different stages of model development to mitigate issues around bias and fairness. While preprocessing and in-processing focus on data and training, post-processing allows for adjustments after the model has been fully trained. Together, these three approaches give multiple opportunities to detect and remove unfair bias.
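As a concrete example of the preprocessing stage, one minimal intervention is to reweight samples so that every subgroup contributes equally to the training loss. The sketch below is illustrative only; real pipelines would pair it with whichever fairness metric was chosen above.

```python
import numpy as np

def subgroup_reweighting(s):
    """Preprocessing debiasing sketch: weight each sample inversely to
    its subgroup's frequency, so every subgroup contributes the same
    total weight to the training loss."""
    s = np.asarray(s)
    groups, counts = np.unique(s, return_counts=True)
    weight = {g: len(s) / (len(groups) * c) for g, c in zip(groups, counts)}
    return np.array([weight[g] for g in s])
```

These weights would then be passed to a loss function or sampler that supports per-example weighting.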


Thoughtful Deployment


The breadth of existing fairness definitions and debiasing interventions underscores the need for thoughtful assessment before deploying ML systems. As ML researchers and developers, responsible model development requires proactively educating ourselves on the real-world context, consulting domain experts and end-users, and centering harm prevention.


Rather than seeing fairness considerations as a box to check, we must deeply engage with the unique social implications and ethical tradeoffs around each model we build. Every technical choice about datasets, model architectures, evaluation metrics, and deployment constraints embeds values. By broadening our perspective beyond narrow technical metrics, carefully evaluating tradeoffs, and listening to impacted voices, we can work to ensure our systems expand opportunity rather than encode bias.


The path forward lies not in an arbitrary debiasing checklist but in a commitment to understanding and upholding our ethical responsibility at each step. This commitment starts with proactively educating ourselves and consulting others rather than just going through the motions of a fairness checklist. It requires engaging deeply with ethical tradeoffs in our technical choices, evaluating impacts on different groups, and listening to those voices most impacted.


Ultimately, responsible and ethical AI systems do not come from checkbox debiasing but from upholding our duty to assess harms, broaden perspectives, understand tradeoffs, and ensure we provide opportunity for all groups. This ethical responsibility should drive every step.




15.5.2 Preserving Privacy


Recent incidents have demonstrated how AI models can memorize sensitive user data in ways that violate privacy. For example, Stable Diffusion’s art generations were found to mimic identifiable artists’ styles and replicate existing photos, concerning many (Ippolito et al. 2023). These risks are amplified with personalized ML systems deployed in intimate environments like homes or wearables.


Imagine a smart speaker that uses our conversations to improve its quality of service for end users who genuinely want that. Others, however, could violate privacy by trying to extract what the speaker “remembers.” Figure fig-diffusion-model-example below shows how diffusion models can memorize and generate individual training examples (Ippolito et al. 2023).

Figure 15.2: Diffusion models memorizing samples from training data. Credit: Ippolito et al. (2023).

Ippolito, Daphne, Florian Tramer, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher Choquette Choo, and Nicholas Carlini. 2023. “Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy.” In Proceedings of the 16th International Natural Language Generation Conference, 5253–70. Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.inlg-main.3.

Adversaries can exploit these memorization capabilities by training models to detect whether specific training data influenced a target model. For example, membership inference attacks train a secondary model that learns to detect a change in the target model’s outputs when making inferences over data it was trained on versus not trained on (Shokri et al. 2017).
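Shokri et al. train a learned attack model; a common simplification of the same idea is a loss-threshold attack, sketched below. The names and the midpoint calibration are ours, purely for illustration.

```python
import numpy as np

def threshold_mia(losses, threshold):
    """Membership-inference sketch: models usually fit their training
    points better, so a low loss on a sample is evidence that it was a
    member of the training set. In practice the threshold would be
    calibrated using shadow models trained on similar data."""
    return np.asarray(losses) < threshold
```

The attacker only needs query access to the target model's loss (or confidence) on candidate samples.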

Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. “Membership Inference Attacks Against Machine Learning Models.” In 2017 IEEE Symposium on Security and Privacy (SP), 3–18. IEEE. https://doi.org/10.1109/sp.2017.41.
Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318.

ML devices are especially vulnerable because they are often personalized on user data and are deployed in even more intimate settings such as the home. Private machine learning techniques have evolved to establish safeguards against adversaries, as mentioned in the Security and Privacy chapter to combat these privacy issues. Methods like differential privacy add mathematical noise during training to obscure individual data points’ influence on the model. Popular techniques like DP-SGD (Abadi et al. 2016) also clip gradients to limit what the model leaks about the data. Still, users should also be able to delete the impact of their data after the fact.
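The core of a DP-SGD update can be sketched in a few lines: clip each per-example gradient so no single point dominates, then add calibrated Gaussian noise to the average. This is an illustrative sketch, not the reference implementation, and the hyperparameter values are placeholders.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update (sketch). Clipping bounds each example's
    influence on the step; Gaussian noise obscures what remains."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Clip each example's gradient to at most clip_norm in L2 norm.
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale grows with the clipping bound, shrinks with batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

A real implementation would also track the cumulative privacy budget (epsilon) spent across steps.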


15.5.3 Machine Unlearning


With ML devices personalized to individual users and then deployed to remote edges without connectivity, a challenge arises—how can models responsively “forget” data points after deployment? If users request their data be removed from a personalized model, the lack of connectivity makes retraining infeasible. Thus, efficient on-device data forgetting is necessary but poses hurdles.


Initial unlearning approaches faced limitations in this context. Given the resource constraints, retraining models from scratch on the device to forget data points proves inefficient or even impossible. Fully retraining also requires retaining all the original training data on the device, which brings its own security and privacy risks. Common machine unlearning techniques (Bourtoule et al. 2021) for remote embedded ML systems fail to enable responsive, secure data removal.

Bourtoule, Lucas, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. “Machine Unlearning.” In 2021 IEEE Symposium on Security and Privacy (SP), 141–59. IEEE. https://doi.org/10.1109/sp40001.2021.00019.

However, newer methods show promise in modifying models to approximately forget data without full retraining. While the accuracy loss from avoiding full rebuilds is modest, guaranteeing data privacy should still be the priority when handling sensitive user information ethically. Even slight exposure to private data can violate user trust. As ML systems become deeply personalized, efficiency and privacy must be enabled from the start—not afterthoughts.


Recent policy discussions, which include the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the Act on the Protection of Personal Information (APPI), and Canada’s proposed Consumer Privacy Protection Act (CPPA), require the deletion of private information. These policies, coupled with AI incidents like Stable Diffusion memorizing artist data, have underscored the ethical need for users to be able to delete their data from models after training.


The right to remove data arises from privacy concerns around corporations or adversaries misusing sensitive user information. Machine unlearning refers to removing the influence of specific points from an already-trained model. Naively, this involves full retraining without the deleted data. However, connectivity constraints often make retraining infeasible for ML systems personalized and deployed to remote edges. If a smart speaker learns from private home conversations, retaining access to delete that data is important.
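One concrete strategy from the unlearning literature, sharding (as in Bourtoule et al. 2021), bounds the cost of a deletion by training an ensemble of per-shard models. The toy bookkeeping below sketches the idea; the class and method names are ours.

```python
class ShardedTrainingSet:
    """SISA-style unlearning sketch: partition the data into shards,
    train one model per shard, and aggregate their predictions.
    Deleting a point then requires retraining only the shard that
    contained it, not the whole ensemble."""

    def __init__(self, num_shards=4):
        self.num_shards = num_shards
        self.shards = [[] for _ in range(num_shards)]

    def add(self, example_id, example):
        # Deterministic assignment so we can find the point later.
        self.shards[example_id % self.num_shards].append((example_id, example))

    def forget(self, example_id):
        idx = example_id % self.num_shards
        self.shards[idx] = [(i, x) for i, x in self.shards[idx]
                            if i != example_id]
        return idx  # only the model for this shard must be retrained
```

The tradeoff is that each shard's model sees less data, which can cost some accuracy relative to a single model.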


Although limited, methods are evolving to enable efficient approximations of retraining for unlearning. By modifying model behavior at inference time, they can mimic “forgetting” data without full access to the training data. However, most current techniques are restricted to simple models, still have resource costs, and trade some accuracy. Though methods are evolving, enabling efficient data removal and respecting user privacy remains imperative for responsible TinyML deployment.


15.5.4 Adversarial Examples and Robustness


Machine learning models, especially deep neural networks, have a well-documented Achilles heel: they often break when even tiny perturbations are made to their inputs (Szegedy et al. 2014). Deep networks show an almost paradoxical dual nature: human-like proficiency on their training distribution coupled with extreme fragility to minor input tweaks and out-of-sample data, underscoring gaps in standard ML procedures and in model generalization and robustness. Given the growing ubiquity of ML, this fragility threatens real-world deployment in high-stakes domains and opens the door to adversarial attacks: attackers can deliberately find model-breaking inputs that humans would not even perceive as different.

Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. “Intriguing Properties of Neural Networks.” In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, edited by Yoshua Bengio and Yann LeCun. http://arxiv.org/abs/1312.6199.

Figure fig-adversarial-example includes an example of a small meaningless perturbation that changes a model prediction. This fragility has real-world impacts: lack of robustness undermines trust in deploying models for high-stakes applications like self-driving cars or medical diagnosis. Moreover, the vulnerability leads to security threats: attackers can deliberately craft adversarial examples that are perceptually indistinguishable from normal data but cause model failures.

Figure 15.3: Perturbation effect on prediction. Credit: Microsoft.
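One of the simplest ways such perturbations are crafted is the fast gradient sign method, which needs only the gradient of the loss with respect to the input. The sketch below is illustrative; in practice `grad_wrt_x` would come from a real model's backward pass.

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon=0.01):
    """Fast-gradient-sign sketch: move each input dimension by epsilon
    in the direction that increases the loss. The change is bounded per
    dimension, so it is often imperceptible to humans yet enough to
    flip the model's prediction."""
    return np.asarray(x) + epsilon * np.sign(grad_wrt_x)
```

Adversarial training, one common defense, folds such perturbed examples back into the training set.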

For instance, past work shows successful attacks that trick models for tasks like NSFW detection (Bhagoji et al. 2018), ad-blocking (Tramèr et al. 2019), and speech recognition (Carlini et al. 2016). While errors in these domains already pose security risks, the problem extends beyond IT security. Recently, adversarial robustness has been proposed as an additional performance metric by approximating worst-case behavior.

Bhagoji, Arjun Nitin, Warren He, Bo Li, and Dawn Song. 2018. “Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms.” In Proceedings of the European Conference on Computer Vision (ECCV), 154–69.
Tramèr, Florian, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, and Dan Boneh. 2019. “AdVersarial: Perceptual Ad Blocking Meets Adversarial Machine Learning.” In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2005–21. ACM. https://doi.org/10.1145/3319535.3354222.
Carlini, Nicholas, Pratyush Mishra, Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, David Wagner, and Wenchao Zhou. 2016. “Hidden Voice Commands.” In 25th USENIX Security Symposium (USENIX Security 16), 513–30.

The surprising model fragility highlighted above casts doubt on real-world reliability and opens the door to adversarial manipulation. This growing vulnerability underscores several needs. First, formal robustness evaluations are essential for quantifying model vulnerabilities before deployment. Approximating worst-case behavior surfaces blindspots.


Second, effective defenses across domains must be developed to close these robustness gaps. With security on the line, developers cannot ignore the threat of attacks exploiting model weaknesses. Moreover, we cannot afford any fragility-induced failures for safety-critical applications like self-driving vehicles and medical diagnosis. Lives are at stake.


Finally, the research community continues mobilizing rapidly in response. Interest in adversarial machine learning has exploded as attacks reveal the need to bridge the robustness gap between synthetic and real-world data. Conferences now commonly feature defenses for securing and stabilizing models. The community recognizes that model fragility is a critical issue that must be addressed through robustness testing, defense development, and ongoing research. By surfacing blindspots and responding with principled defenses, we can work to ensure reliability and safety for machine learning systems, especially in high-stakes domains.


15.5.5 Building Interpretable Models


As models are deployed more frequently in high-stakes settings, practitioners, developers, downstream end-users, and increasing regulation have highlighted the need for explainability in machine learning. The goal of many interpretability and explainability methods is to provide practitioners with more information about the models’ overall behavior or the behavior given a specific input. This allows users to decide whether or not a model’s output or prediction is trustworthy.


Such analysis can help developers debug models and improve performance by pointing out biases, spurious correlations, and failure modes of models. In cases where models can surpass human performance on a task, interpretability can help users and researchers better understand relationships in their data and previously unknown patterns.


There are many classes of explainability/interpretability methods, including post hoc explainability, inherent interpretability, and mechanistic interpretability. These methods aim to make complex machine learning models more understandable and ensure users can trust model predictions, especially in critical settings. By providing transparency into model behavior, explainability techniques are an important tool for developing safe, fair, and reliable AI systems.


Post Hoc Explainability


Post hoc explainability methods typically explain the output behavior of a black-box model on a specific input. Popular methods include counterfactual explanations, feature attribution methods, and concept-based explanations.


Counterfactual explanations, also frequently called algorithmic recourse, take the form “If X had not occurred, Y would not have occurred” (Wachter, Mittelstadt, and Russell 2017). For example, consider a person applying for a bank loan whose application is rejected by a model. They may ask their bank for recourse, or what they would need to change to become eligible for a loan. A counterfactual explanation would tell them which features they need to change and by how much such that the model’s prediction changes.
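A crude way to search for such a recourse is to greedily nudge one feature at a time until the decision flips. This is only a sketch under our own assumptions (a probability-scoring classifier with a 0.5 decision threshold); real recourse methods add constraints such as plausibility and sparsity.

```python
import numpy as np

def greedy_counterfactual(score, x, step=0.5, max_iter=100):
    """Counterfactual-explanation sketch: greedily nudge one feature at
    a time until the classifier's decision (score >= 0.5) flips, and
    report the per-feature change as the suggested recourse.
    `score` maps a feature vector to P(y = 1)."""
    x = np.asarray(x, dtype=float)
    cf = x.copy()
    want_positive = score(x) < 0.5
    for _ in range(max_iter):
        if (score(cf) >= 0.5) == want_positive:
            return cf - x  # how much each feature must change
        # Try every single-feature step; keep the one that moves the
        # score furthest toward the desired side of the boundary.
        candidates = []
        for i in range(x.size):
            for delta in (step, -step):
                trial = cf.copy()
                trial[i] += delta
                s = score(trial)
                candidates.append((s if want_positive else -s, i, delta))
        _, i, delta = max(candidates)
        cf[i] += delta
    return None  # no counterfactual found within the search budget
```

The returned vector reads directly as advice, e.g., "increase feature 0 by 1.0 to change the decision."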

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” SSRN Electronic Journal 31: 841. https://doi.org/10.2139/ssrn.3063289.
Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.” In 2017 IEEE International Conference on Computer Vision (ICCV), 618–26. IEEE. https://doi.org/10.1109/iccv.2017.74.
Smilkov, Daniel, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. 2017. “SmoothGrad: Removing Noise by Adding Noise.” ArXiv Preprint abs/1706.03825. https://arxiv.org/abs/1706.03825.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44.
Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4765–74. https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html.

Feature attribution methods highlight the input features that are important or necessary for a particular prediction. For a computer vision model, this would mean highlighting the individual pixels that contributed most to the predicted label of the image. Note that these methods do not explain how those pixels/features impact the prediction, only that they do. Common methods include input gradients, GradCAM (Selvaraju et al. 2017), SmoothGrad (Smilkov et al. 2017), LIME (Ribeiro, Singh, and Guestrin 2016), and SHAP (Lundberg and Lee 2017).
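For a scalar-valued model, the plain input-gradient attribution these methods build on can be approximated numerically. This is a sketch; real implementations use automatic differentiation rather than finite differences.

```python
import numpy as np

def input_gradient_saliency(f, x, eps=1e-4):
    """Feature-attribution sketch: approximate |df/dx_i| by central
    finite differences. Large values flag the features that most
    influence the prediction, without saying how they influence it."""
    x = np.asarray(x, dtype=float)
    saliency = np.zeros_like(x)
    for i in range(x.size):
        hi, lo = x.copy(), x.copy()
        hi[i] += eps
        lo[i] -= eps
        saliency[i] = (f(hi) - f(lo)) / (2 * eps)
    return np.abs(saliency)
```

Methods like SmoothGrad average such maps over noisy copies of the input to reduce visual noise.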


By providing examples of changes to input features that would alter a prediction (counterfactuals) or indicating the most influential features for a given prediction (attribution), these post hoc explanation techniques shed light on model behavior for individual inputs. This granular transparency helps users determine whether they can trust and act upon specific model outputs.


Concept-based explanations aim to explain model behavior and outputs using a pre-defined set of semantic concepts (e.g., the model recognizes scene class “bedroom” based on the presence of concepts “bed” and “pillow”). Recent work shows that users often prefer these explanations to attribution and example-based explanations because they “resemble human reasoning and explanations” (Vikram V. Ramaswamy et al. 2023b). Popular concept-based explanation methods include TCAV (Cai et al. 2019), Network Dissection (Bau et al. 2017), and interpretable basis decomposition (Zhou et al. 2018).

Ramaswamy, Vikram V., Sunnie S. Y. Kim, Ruth Fong, and Olga Russakovsky. 2023b. “UFO: A Unified Method for Controlling Understandability and Faithfulness Objectives in Concept-Based Explanations for CNNs.” ArXiv Preprint abs/2303.15632. https://arxiv.org/abs/2303.15632.
Cai, Carrie J., Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, et al. 2019. “Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3290605.3300234.
Bau, David, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. “Network Dissection: Quantifying Interpretability of Deep Visual Representations.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3319–27. IEEE. https://doi.org/10.1109/cvpr.2017.354.
Zhou, Bolei, Yiyou Sun, David Bau, and Antonio Torralba. 2018. “Interpretable Basis Decomposition for Visual Explanation.” In Proceedings of the European Conference on Computer Vision (ECCV), 119–34.
Ramaswamy, Vikram V., Sunnie S. Y. Kim, Ruth Fong, and Olga Russakovsky. 2023a. “Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability.” In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10932–41. IEEE. https://doi.org/10.1109/cvpr52729.2023.01052.

Note that these methods are extremely sensitive to the size and quality of the concept set, and there is a tradeoff between their accuracy and faithfulness and their interpretability or understandability to humans (Vikram V. Ramaswamy et al. 2023a). However, by mapping model predictions to human-understandable concepts, concept-based explanations can provide transparency into the reasoning behind model outputs.


Inherent Interpretability


Inherently interpretable models are constructed such that their explanations are part of the model architecture and are thus naturally faithful, which sometimes makes them preferable to post-hoc explanations applied to black-box models, especially in high-stakes domains where transparency is imperative (Rudin 2019). Often, these models are constrained so that the relationships between input features and predictions are easy for humans to follow (linear models, decision trees, decision sets, k-NN models), or they obey structural knowledge of the domain, such as monotonicity (Gupta et al. 2016), causality, or additivity (Lou et al. 2013; Beck and Jackman 1998).

Rudin, Cynthia. 2019. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1 (5): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Gupta, Maya, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, and Alexander Van Esbroeck. 2016. “Monotonic Calibrated Interpolated Look-up Tables.” The Journal of Machine Learning Research 17 (1): 3790–3836.
Lou, Yin, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013. “Accurate Intelligible Models with Pairwise Interactions.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, edited by Inderjit S. Dhillon, Yehuda Koren, Rayid Ghani, Ted E. Senator, Paul Bradley, Rajesh Parekh, Jingrui He, Robert L. Grossman, and Ramasamy Uthurusamy, 623–31. ACM. https://doi.org/10.1145/2487575.2487579.
Beck, Nathaniel, and Simon Jackman. 1998. “Beyond Linearity by Default: Generalized Additive Models.” Am. J. Polit. Sci. 42 (2): 596. https://doi.org/10.2307/2991772.
Koh, Pang Wei, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020. “Concept Bottleneck Models.” In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, 119:5338–48. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v119/koh20a.html.
Chen, Chaofan, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan Su. 2019. “This Looks Like That: Deep Learning for Interpretable Image Recognition.” In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, edited by Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, 8928–39. https://proceedings.neurips.cc/paper/2019/hash/adf7ee2dcf142b0e11888e72b43fcb75-Abstract.html.

However, more recent works have relaxed the restrictions on inherently interpretable models, using black-box models for feature extraction and a simpler inherently interpretable model for classification, allowing for faithful explanations that relate high-level features to prediction. For example, Concept Bottleneck Models (Koh et al. 2020) predict a concept set c that is passed into a linear classifier. ProtoPNets (Chen et al. 2019) dissect inputs into linear combinations of similarities to prototypical parts from the training set.
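The concept-bottleneck idea can be sketched in a few lines: only the final, linear stage maps concepts to a label, so every prediction decomposes into per-concept contributions. The names and structure below are illustrative, not the authors' code.

```python
import numpy as np

class ConceptBottleneck:
    """Concept Bottleneck Model sketch: a (possibly black-box) extractor
    predicts human-interpretable concept scores, and a linear layer maps
    those concepts to the output, so each prediction can be read as a
    weighted sum of named concepts."""

    def __init__(self, concept_extractor, concept_weights, bias=0.0):
        self.extract = concept_extractor
        self.w = np.asarray(concept_weights, dtype=float)
        self.b = bias

    def predict(self, x):
        concepts = np.asarray(self.extract(x), dtype=float)
        contributions = self.w * concepts  # per-concept explanation
        return contributions.sum() + self.b, contributions
```

Reading off `contributions` gives a faithful explanation of the final stage, e.g., "score is high because the 'bed' concept contributed +2.0."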


Mechanistic Interpretability


Mechanistic interpretability methods seek to reverse engineer neural networks, often analogizing them to how one might reverse engineer a compiled binary or how neuroscientists attempt to decode the function of individual neurons and circuits in brains. Most research in mechanistic interpretability views models as a computational graph (Geiger et al. 2021), and circuits are subgraphs with distinct functionality (Wang and Zhan 2019). Current approaches to extracting circuits from neural networks and understanding their functionality rely on human manual inspection of visualizations produced by circuits (Olah et al. 2020).

Geiger, Atticus, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. “Causal Abstractions of Neural Networks.” In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual, edited by Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, 9574–86. https://proceedings.neurips.cc/paper/2021/hash/4f5c422f4d49a5a807eda27434231040-Abstract.html.
Wang, LingFeng, and YaQing Zhan. 2019. “A Conceptual Peer Review Model for arXiv and Other Preprint Databases.” Learn. Publ. 32 (3): 213–19. https://doi.org/10.1002/leap.1229.
Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. 2020. “Zoom in: An Introduction to Circuits.” Distill 5 (3): e00024–001. https://doi.org/10.23915/distill.00024.001.
Davarzani, Samaneh, David Saucier, Purva Talegaonkar, Erin Parker, Alana Turner, Carver Middleton, Will Carroll, et al. 2023. “Closing the Wearable Gap: Foot–Ankle Kinematic Modeling via Deep Learning Models Based on a Smart Sock Wearable.” Wearable Technologies 4. https://doi.org/10.1017/wtc.2023.3.

Alternatively, some approaches build sparse autoencoders that encourage neurons to encode disentangled interpretable features (Davarzani et al. 2023). This field is much newer than existing areas in explainability and interpretability, and as such, most works are generally exploratory rather than solution-oriented.

+

There are many problems in mechanistic interpretability, including the polysemanticity of neurons and circuits, the inconvenience and subjectivity of human labeling, and the exponential search space for identifying circuits in large models with billions or trillions of neurons.

+
+
+

Challenges and Considerations

+

As methods for interpreting and explaining models progress, it is important to note that humans tend to overtrust and misuse interpretability tools (Kaur et al. 2020) and that a user’s trust in a model due to an explanation can be independent of the correctness of that explanation (Lakkaraju and Bastani 2020). As such, aside from assessing the faithfulness and correctness of explanations, researchers must also ensure that interpretability methods are developed and deployed with a specific user in mind, and that user studies are performed to evaluate their efficacy and usefulness in practice.

+
+Kaur, Harmanpreet, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. 2020. “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, edited by Regina Bernhaupt, Florian ‘Floyd’ Mueller, David Verweij, Josh Andres, Joanna McGrenere, Andy Cockburn, Ignacio Avellino, et al., 1–14. ACM. https://doi.org/10.1145/3313831.3376219. +
+Lakkaraju, Himabindu, and Osbert Bastani. 2020. “‘How Do I Fool You?’: Manipulating User Trust via Misleading Black Box Explanations.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 79–85. ACM. https://doi.org/10.1145/3375627.3375833. +

Furthermore, explanations should be tailored to the user’s expertise, the task they are using the explanation for, and the minimal amount of information required for the explanation to be useful, to prevent information overload.

+

While interpretability and explainability are popular areas in machine learning research, very few works study their intersection with TinyML and edge computing. Given that a significant application of TinyML is healthcare, which often requires high transparency and interpretability, existing techniques must be tested for scalability and efficiency on edge devices. Many methods rely on extra forward and backward passes, and some even require extensive training of proxy models, which is infeasible on resource-constrained microcontrollers.

+

That said, explainability methods can be highly useful in developing models for edge devices, as they can give insights into how input data and models can be compressed and how representations may change post-compression. Furthermore, many interpretable models are often smaller than their black-box counterparts, which could benefit TinyML applications.

+
+
+
+

15.5.6 Monitoring Model Performance

+

While developers may train models that seem adversarially robust, fair, and interpretable before deployment, it is imperative that both the users and the model owners continue to monitor the model’s performance and trustworthiness during the model’s full lifecycle. Data is frequently changing in practice, which can often result in distribution shifts. These distribution shifts can profoundly impact the model’s vanilla predictive performance and its trustworthiness (fairness, robustness, and interpretability) in real-world data.
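A minimal sketch of such post-deployment monitoring, assuming a simple sliding-window accuracy tracker (the class name and thresholds are illustrative, not from any particular library): a sustained drop in windowed accuracy is often the first visible symptom of a distribution shift.

```python
from collections import deque

class AccuracyMonitor:
    """Sliding-window accuracy tracker that flags degradation after deployment."""
    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)   # keeps only the most recent outcomes
        self.threshold = threshold

    def update(self, correct):
        """Record whether the latest prediction was correct."""
        self.window.append(bool(correct))
        return self.accuracy()

    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else 1.0

    def degraded(self):
        # Only alert once the window is full, to avoid noisy early alarms.
        return len(self.window) == self.window.maxlen and self.accuracy() < self.threshold
```

The same pattern extends beyond vanilla accuracy: the windowed statistic could instead be a fairness gap or a robustness score, monitored against its own threshold.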

+

Furthermore, definitions of fairness frequently change with time, such as what society considers a protected attribute, and the expertise of the users asking for explanations may also change.

+

To ensure that models keep up to date with such changes in the real world, developers must continually evaluate their models on current and representative data and standards and update models when necessary.

+
+
+
+

15.6 Implementation Challenges

+
+

15.6.1 Organizational and Cultural Structures

+

While innovation and regulation are often seen as having competing interests, many countries have found it necessary to provide oversight as AI systems expand into more sectors. As illustrated in Figure fig-human-centered-ai, this oversight has become crucial as these systems continue permeating various industries and impacting people’s lives (see Human-Centered AI, Chapter 8, “Government Interventions and Regulations”).

+
+
+
+ +
+
+Figure 15.4: How various groups impact human-centered AI. Credit: Shneiderman (2020). +
+
+Shneiderman, Ben. 2020. “Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems.” ACM Trans. Interact. Intell. Syst. 10 (4): 1–31. https://doi.org/10.1145/3419764. +
+
+

Among these are:

+ +
+
+

15.6.2 Obtaining Quality and Representative Data

+

As discussed in the Data Engineering chapter, responsible AI design must occur at all pipeline stages, including data collection. This raises the question: what does it mean for data to be high-quality and representative? Consider the following scenarios that hinder the representativeness of data:

+
+

Subgroup Imbalance

+

This is likely what comes to mind when hearing “representative data.” Subgroup imbalance means the dataset contains relatively more data from one subgroup than another. This imbalance can negatively affect the downstream ML model by causing it to overfit a subgroup of people while performing poorly on another.

+

One example consequence of subgroup imbalance is racial discrimination in facial recognition technology (Buolamwini and Gebru 2018): commercial facial recognition algorithms have error rates up to 34% worse for darker-skinned females than for lighter-skinned males.
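Auditing for this kind of disparity can start with something as simple as computing error rates per subgroup; the helper below is a hypothetical sketch (function and variable names are ours), where a large gap between groups signals that the model may be overfitting one subgroup at another’s expense.

```python
def group_error_rates(y_true, y_pred, groups):
    """Error rate per subgroup; large gaps between groups signal harm
    from subgroup imbalance in the training data."""
    stats = {}
    for yt, yp, g in zip(y_true, y_pred, groups):
        n, e = stats.get(g, (0, 0))
        stats[g] = (n + 1, e + (yt != yp))   # count samples and errors per group
    return {g: e / n for g, (n, e) in stats.items()}
```

For example, `group_error_rates([1, 0, 1, 0], [1, 1, 1, 0], ["a", "a", "b", "b"])` reports an error rate of 0.5 for group `a` and 0.0 for group `b`, a disparity worth investigating before deployment.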

+
+Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability and Transparency, 77–91. PMLR. +

Note that data imbalance goes both ways, and subgroups can also be harmfully overrepresented in the dataset. For example, the Allegheny Family Screening Tool (AFST) predicts the likelihood that a child will eventually be removed from a home. The AFST produces disproportionate scores for different subgroups, one reason being that it is trained on historically biased data sourced from juvenile and adult criminal legal systems, public welfare agencies, and behavioral health agencies and programs.

+
+
+

Quantifying Target Outcomes

+

This occurs in applications where the ground-truth label cannot be measured or is difficult to represent in a single quantity. For example, an ML model in a mobile wellness application may want to predict individual stress levels. The true stress labels themselves are impossible to obtain directly and must be inferred from other biosignals, such as heart rate variability and user self-reported data. In these situations, noise is built into the data by design, making this a challenging ML task.

+
+
+

Distribution Shift

+

Data may no longer represent a task if a major external event causes the data source to change drastically. The most common way to think about distribution shifts is with respect to time; for example, data on consumer shopping habits collected pre-COVID may no longer reflect consumer behavior today.
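One common way to quantify such temporal shift is the Population Stability Index (PSI) over binned feature or score distributions. The sketch below assumes the conventional (but not universal) rule of thumb that a PSI above roughly 0.2 signals a major shift:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (e.g., training-time vs. deployment-time histograms of a feature)."""
    total_e, total_a = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        pe = max(e / total_e, eps)           # clamp to avoid log(0)
        pa = max(a / total_a, eps)
        score += (pa - pe) * math.log(pa / pe)
    return score
```

Identical distributions yield a PSI of zero, while a reversed distribution such as `psi([40, 30, 20, 10], [10, 20, 30, 40])` scores well above 0.2, flagging the shift for human review.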

+

Transfer across deployment contexts causes another form of distribution shift. For instance, when a triage system trained on data from one hospital is applied to another, a distribution shift may occur if the two hospitals are very different.

+
+
+

Gathering Data

+

A reasonable solution for many of the above problems with non-representative or low-quality data is to collect more; we can collect more data targeting an underrepresented subgroup or from the target hospital to which our model might be transferred. However, for several reasons, gathering more data can be an inappropriate or infeasible solution for the task at hand.

+
    +
  • Data collection can be harmful. This is the paradox of exposure, the situation in which those who stand to significantly gain from their data being collected are also those who are put at risk by the collection process (D’ignazio and Klein 2023, Chapter 4). For example, collecting more data on non-binary individuals may be important for ensuring the fairness of the ML application, but it also puts them at risk, depending on who is collecting the data and how (whether the data is easily identifiable, contains sensitive content, etc.).

  • +
  • Data collection can be costly. In some domains, such as healthcare, obtaining data can be costly in terms of time and money.

  • +
  • Biased data collection. Electronic health records (EHRs) are a major data source for ML-driven healthcare applications. Issues of subgroup representation aside, the data itself may be collected in a biased manner. For example, negative language (“nonadherent,” “unwilling”) is disproportionately used to describe Black patients (Himmelstein, Bates, and Zhou 2022).

  • +
+
+D’ignazio, Catherine, and Lauren F Klein. 2023. Data Feminism. MIT press. +
+Himmelstein, Gracie, David Bates, and Li Zhou. 2022. “Examination of Stigmatizing Language in the Electronic Health Record.” JAMA Network Open 5 (1): e2144967. https://doi.org/10.1001/jamanetworkopen.2021.44967. +

We conclude with several additional strategies for maintaining data quality: improving understanding of the data, exploring the data effectively, and establishing feedback loops. First, fostering a deeper understanding of the data is crucial. This can be achieved through the implementation of standardized labels and measures of data quality, such as in the Data Nutrition Project.

+

Collaborating with organizations responsible for collecting data helps ensure the data is interpreted correctly. Second, employing effective tools for data exploration is important. Visualization techniques and statistical analyses can reveal issues with the data. Finally, establishing a feedback loop within the ML pipeline is essential for understanding the real-world implications of the data. Metrics, such as fairness measures, allow us to define “data quality” in the context of the downstream application; improving fairness may directly improve the quality of the predictions that the end users receive.
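As one concrete example of such a fairness metric in a feedback loop, the demographic parity gap measures the difference in positive-prediction rates between groups; the helper below is an illustrative sketch (names are ours), applicable to two or more groups.

```python
def demographic_parity_gap(y_pred, groups):
    """Largest absolute difference in positive-prediction rates across groups.
    A gap near 0 means the model predicts the positive class at similar
    rates for every group; a large gap flags a potential fairness issue."""
    rates = {}
    for yp, g in zip(y_pred, groups):
        n, p = rates.get(g, (0, 0))
        rates[g] = (n + 1, p + (yp == 1))    # count samples and positives
    per_group = [p / n for n, p in rates.values()]
    return max(per_group) - min(per_group)
```

Tracking this gap on live predictions, alongside accuracy, lets the team define “data quality” in terms of the downstream application rather than the training set alone.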

+
+
+
+

15.6.3 Balancing Accuracy and Other Objectives

+

Machine learning models are often evaluated on accuracy alone, but this single metric cannot fully capture model performance and tradeoffs for responsible AI systems. Other ethical dimensions, such as fairness, robustness, interpretability, and privacy, may compete with pure predictive accuracy during model development. For instance, inherently interpretable models such as small decision trees or linear classifiers with simplified features intentionally trade some accuracy for transparency in the model behavior and predictions. While these simplified models achieve lower accuracy by not capturing all the complexity in the dataset, improved interpretability builds trust by enabling direct analysis by human practitioners.

+

Additionally, certain techniques meant to improve adversarial robustness, such as adversarial training examples or dimensionality reduction, can degrade the accuracy of clean validation data. In sensitive applications like healthcare, focusing narrowly on state-of-the-art accuracy carries ethical risks if it allows models to rely more on spurious correlations that introduce bias or use opaque reasoning. Therefore, the appropriate performance objectives depend greatly on the sociotechnical context.
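The mechanics behind such robustness-accuracy tensions can be seen even in a toy model. The sketch below applies the fast gradient sign method (FGSM) to a logistic-regression classifier with hand-picked weights; real adversarial training operates on deep networks via autodiff frameworks, but the gradient-sign step is the same idea.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of the positive class under a logistic-regression model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Fast gradient sign method: for the logistic loss, the gradient with
    respect to the input is (p - y) * w, so stepping eps in its sign
    maximally increases the loss within an L-infinity ball of radius eps."""
    p = predict(w, b, x)
    def sign(v):
        return (v > 0) - (v < 0)
    return [xi + eps * sign((p - y) * wi) for wi, xi in zip(w, x)]
```

With weights `w = [2.0, -3.0]`, bias 0, and input `[1.0, 1.0]` of true class 0, the clean input is classified correctly, yet an FGSM perturbation of size 0.3 is enough to flip the prediction, illustrating why hardening against such steps can cost clean accuracy.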

+

Methodologies like Value Sensitive Design provide frameworks for formally evaluating the priorities of various stakeholders within the real-world deployment system. These elucidate tensions between values like accuracy, interpretability, and fairness, which can then guide responsible tradeoff decisions. For a medical diagnosis system, achieving the highest accuracy may not be the singular goal; improving transparency to build practitioner trust or reducing bias towards minority groups could justify small losses in accuracy. Analyzing the sociotechnical context is key for setting these objectives.

+

By taking a holistic view, we can responsibly balance accuracy with other ethical objectives for model success. Ongoing performance monitoring along multiple dimensions is crucial as the system evolves after deployment.

+
+
+
+

15.7 Ethical Considerations in AI Design

+

We must discuss at least some of the many ethical issues at stake in designing and applying AI systems and diverse frameworks for approaching these issues, including those from AI safety, Human-Computer Interaction (HCI), and Science, Technology, and Society (STS).

+
+

15.7.1 AI Safety and Value Alignment

+

In 1960, Norbert Wiener wrote, “if we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively… we had better be quite sure that the purpose put into the machine is the purpose which we desire” (Wiener 1960).

+
+Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation: As Machines Learn They May Develop Unforeseen Strategies at Rates That Baffle Their Programmers.” Science 131 (3410): 1355–58. https://doi.org/10.1126/science.131.3410.1355. +
+Russell, Stuart. 2021. “Human-Compatible Artificial Intelligence.” Human-Like Machine Intelligence, 3–23. +

In recent years, as the capabilities of deep learning models have matched, and sometimes even surpassed, human abilities, the issue of creating AI systems that act in accord with human intentions instead of pursuing unintended or undesirable goals has become a source of concern (Russell 2021). Within the field of AI safety, a particular goal concerns “value alignment,” or the problem of how to code the “right” purpose into machines (see Human-Compatible Artificial Intelligence). Present AI research assumes we know the objectives we want to achieve and “studies the ability to achieve objectives, not the design of those objectives.”

+

However, complex real-world deployment contexts make explicitly defining “the right purpose” for machines difficult, requiring frameworks for responsible and ethical goal-setting. Methodologies like Value Sensitive Design provide formal mechanisms to surface tensions between stakeholder values and priorities.

+

By taking a holistic sociotechnical view, we can better ensure intelligent systems pursue objectives that align with broad human intentions rather than maximizing narrow metrics like accuracy alone. Achieving this in practice remains an open and critical research question as AI capabilities advance rapidly.

+

The absence of this alignment can lead to several AI safety issues, as have been documented in a variety of deep learning models. A common feature of systems that optimize for an objective is that variables not directly included in the objective may be set to extreme values to help optimize for that objective, leading to issues characterized as specification gaming, reward hacking, etc., in reinforcement learning (RL).

+

In recent years, a particularly popular implementation of RL has been models pre-trained using self-supervised learning and fine-tuned with reinforcement learning from human feedback (RLHF) (Christiano et al. 2017). Ngo, Chan, and Mindermann (2022) argue that by rewarding models for appearing harmless and ethical while also maximizing useful outcomes, RLHF could encourage the emergence of three problematic properties: situationally aware reward hacking, where policies exploit human fallibility to gain high reward; misaligned internally represented goals that generalize beyond the RLHF fine-tuning distribution; and power-seeking strategies.

+
+Christiano, Paul F., Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. “Deep Reinforcement Learning from Human Preferences.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4299–4307. https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html. +
+Ngo, Richard, Lawrence Chan, and Sören Mindermann. 2022. “The Alignment Problem from a Deep Learning Perspective.” ArXiv Preprint abs/2209.00626. https://arxiv.org/abs/2209.00626. +
+Van Noorden, Richard. 2016. “ArXiv Preprint Server Plans Multimillion-Dollar Overhaul.” Nature 534 (7609): 602. https://doi.org/10.1038/534602a. +

Similarly, Van Noorden (2016) outlines six concrete problems for AI safety, including avoiding negative side effects, avoiding reward hacking, scalable oversight for aspects of the objective that are too expensive to be frequently evaluated during training, safe exploration strategies that encourage creativity while preventing harm, and robustness to distributional shift in unseen testing environments.

+
+
+

15.7.2 Autonomous Systems and Control [and Trust]

+

The consequences of autonomous systems that act independently of human oversight and often outside human judgment have been well documented across several industries and use cases. Most recently, the California Department of Motor Vehicles suspended Cruise’s deployment and testing permits for its autonomous vehicles, citing “unreasonable risks to public safety”. One such accident occurred when a vehicle struck a pedestrian who stepped into a crosswalk after the stoplight had turned green, and the vehicle was allowed to proceed. In 2018, a pedestrian crossing the street with her bike was killed when a self-driving Uber car, which was operating in autonomous mode, failed to accurately classify her moving body as an object to be avoided.

+

Autonomous systems beyond self-driving vehicles are also susceptible to such issues, with potentially graver consequences, as remotely-powered drones are already reshaping warfare. While such incidents bring up important ethical questions regarding who should be held responsible when these systems fail, they also highlight the technical challenges of giving full control of complex, real-world tasks to machines.

+

At its core, there is a tension between human and machine autonomy. Engineering and computer science disciplines have tended to focus on machine autonomy. For example, as of 2019, a search for the word “autonomy” in the Digital Library of the Association for Computing Machinery (ACM) reveals that of the top 100 most cited papers, 90% are on machine autonomy (Calvo et al. 2020). In an attempt to build systems for the benefit of humanity, these disciplines have taken, without question, increasing productivity, efficiency, and automation as primary strategies for benefiting humanity.

+
+McCarthy, John. 1981. “Epistemological Problems of Artificial Intelligence.” In Readings in Artificial Intelligence, 459–65. Elsevier. https://doi.org/10.1016/b978-0-934613-03-3.50035-0. +

These goals put machine automation at the forefront, often at the expense of the human. This approach suffers from inherent challenges, as noted since the early days of AI through the frame problem and the qualification problem, which formalize the observation that it is impossible to specify all the preconditions needed for a real-world action to succeed (McCarthy 1981).

+

These logical limitations have given rise to mathematical approaches such as Responsibility-Sensitive Safety (RSS) (Shalev-Shwartz, Shammah, and Shashua 2017), which aims to break down the end goal of an automated driving system (ADS), namely safety, into concrete and checkable conditions that can be rigorously formulated in mathematical terms. The goal of RSS is for those safety rules to guarantee ADS safety in the rigorous form of a mathematical proof. However, such approaches tend towards using automation to address the problems of automation and are susceptible to many of the same issues.
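As an illustration of how RSS turns safety into checkable arithmetic, the sketch below follows the longitudinal minimum-safe-distance formula from Shalev-Shwartz, Shammah, and Shashua (2017); the parameter names are ours, and a real ADS would add lateral rules and proper-response logic on top of this single condition.

```python
def rss_min_safe_distance(v_rear, v_front, rho, a_accel_max, b_min, b_max):
    """RSS minimum safe longitudinal gap between a rear and a front vehicle.
    Worst case assumed: during reaction time rho the rear car accelerates
    at up to a_accel_max, then brakes at only b_min, while the front car
    brakes as hard as b_max. Speeds in m/s, accelerations in m/s^2."""
    v_after_rho = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_rho ** 2 / (2 * b_min)
         - v_front ** 2 / (2 * b_max))
    return max(d, 0.0)   # the safe distance is never negative
```

Because the rule is a closed-form inequality, a monitor can verify at every timestep that the actual gap exceeds `rss_min_safe_distance(...)`, and the required gap provably grows as the rear vehicle’s speed increases.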

+
+Shalev-Shwartz, Shai, Shaked Shammah, and Amnon Shashua. 2017. “On a Formal Model of Safe and Scalable Self-Driving Cars.” ArXiv Preprint abs/1708.06374. https://arxiv.org/abs/1708.06374. +
+Friedman, Batya. 1996. “Value-Sensitive Design.” Interactions 3 (6): 16–23. https://doi.org/10.1145/242485.242493. +
+Peters, Dorian, Rafael A. Calvo, and Richard M. Ryan. 2018. “Designing for Motivation, Engagement and Wellbeing in Digital Experience.” Front. Psychol. 9 (May): 797. https://doi.org/10.3389/fpsyg.2018.00797. +
+Ryan, Richard M., and Edward L. Deci. 2000. “Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being.” Am. Psychol. 55 (1): 68–78. https://doi.org/10.1037/0003-066x.55.1.68. +

Another approach to combating these issues is to focus on the human-centered design of interactive systems that incorporate human control. Value-sensitive design (Friedman 1996) describes key design factors for a user interface that impact autonomy, including system capability, complexity, misrepresentation, and fluidity. A more recent model, METUX (A Model for Motivation, Engagement, and Thriving in the User Experience), leverages insights from Self-Determination Theory (SDT) in psychology to identify six distinct spheres of technology experience that contribute to the design of systems that promote well-being and human flourishing (Peters, Calvo, and Ryan 2018). SDT defines autonomy as acting in accordance with one’s goals and values, which is distinct from the use of autonomy as simply a synonym for either independence or being in control (Ryan and Deci 2000).

+

Calvo et al. (2020) elaborate on METUX and its six “spheres of technology experience” in the context of AI recommender systems. They propose these spheres—Adoption, Interface, Tasks, Behavior, Life, and Society—as a way of organizing the evaluation of technology design in order to appropriately capture contradictory and downstream impacts on human autonomy when interacting with AI systems.

+
+Calvo, Rafael A, Dorian Peters, Karina Vold, and Richard M Ryan. 2020. “Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry.” Ethics of Digital Well-Being: A Multidisciplinary Approach, 31–54. +
+
+

15.7.3 Economic Impacts on Jobs, Skills, Wages

+

A major concern of the current rise of AI technologies is widespread unemployment. As AI systems’ capabilities expand, many fear these technologies will cause an absolute loss of jobs as they replace current workers and overtake alternative employment roles across industries. However, changing economic landscapes at the hands of automation are not new and have historically been found to reflect patterns of displacement rather than replacement (Shneiderman 2022, Chapter 4). In particular, automation usually lowers costs and increases quality, greatly increasing access and demand. The need to serve these growing markets pushes production, creating new jobs.

+
+———. 2022. Human-Centered AI. Oxford University Press. +

Furthermore, studies have found that attempts to achieve “lights-out” automation – productive and flexible automation with a minimal number of human workers – have been unsuccessful. Attempts to do so have led to what the MIT Work of the Future taskforce has termed “zero-sum automation”, in which process flexibility is sacrificed for increased productivity.

+

In contrast, the task force proposes a “positive-sum automation” approach in which flexibility is increased by designing technology that strategically incorporates humans where they are very much needed; making it easier for line employees to train and debug robots; using a bottom-up approach to identifying which tasks should be automated; and choosing the right metrics for measuring success (see MIT’s Work of the Future).

+

However, the optimism of the high-level outlook does not preclude individual harm, especially to those whose skills and jobs will be rendered obsolete by automation. Public and legislative pressure, as well as corporate social responsibility efforts, will need to be directed at creating policies that share the benefits of automation with workers and result in higher minimum wages and benefits.

+
+
+

15.7.4 Scientific Communication and AI Literacy

+

A 1993 survey of 3000 North American adults’ beliefs about the “electronic thinking machine” revealed two primary perspectives of the early computer: the “beneficial tool of man” perspective and the “awesome thinking machine” perspective. The attitudes contributing to the “awesome thinking machine” view in this and other studies revealed a characterization of computers as “intelligent brains, smarter than people, unlimited, fast, mysterious, and frightening” (Martin 1993). These fears highlight an easily overlooked component of responsible AI, especially amidst the rush to commercialize such technologies: scientific communication that accurately communicates the capabilities and limitations of these systems while providing transparency about the limitations of experts’ knowledge about these systems.

+
+Martin, C. Dianne. 1993. “The Myth of the Awesome Thinking Machine.” Commun. ACM 36 (4): 120–33. https://doi.org/10.1145/255950.153587. +
+Handlin, Oscar. 1965. “Science and Technology in Popular Culture.” Daedalus-Us., 156–70. +

As AI systems’ capabilities expand beyond most people’s comprehension, there is a natural tendency to assume the kinds of apocalyptic worlds painted by our media. This is partly due to the apparent difficulty of assimilating scientific information, even in technologically advanced cultures, which leads to the products of science being perceived as magic—“understandable only in terms of what it did, not how it worked” (Handlin 1965).

+

While tech companies should be held responsible for limiting grandiose claims and not falling into cycles of hype, research studying scientific communication, especially concerning (generative) AI, will also be useful in tracking and correcting public understanding of these technologies. An analysis of the Scopus scholarly database found that such research is scarce, with only a handful of papers mentioning both “science communication” and “artificial intelligence” (Schäfer 2023).

+
+Schäfer, Mike S. 2023. “The Notorious GPT: Science Communication in the Age of Artificial Intelligence.” Journal of Science Communication 22 (02): Y02. https://doi.org/10.22323/2.22020402. +
+Lindgren, Simon. 2023. Handbook of Critical Studies of Artificial Intelligence. Edward Elgar Publishing. +
+Ng, Davy Tsz Kit, Jac Ka Lok Leung, Kai Wah Samuel Chu, and Maggie Shen Qiao. 2021. “AI Literacy: Definition, Teaching, Evaluation and Ethical Issues.” Proceedings of the Association for Information Science and Technology 58 (1): 504–9. +

Research that exposes the perspectives, frames, and images of the future promoted by academic institutions, tech companies, stakeholders, regulators, journalists, NGOs, and others will also help to identify potential gaps in AI literacy among adults (Lindgren 2023). Increased focus on AI literacy from all stakeholders will be important in helping people whose skills are rendered obsolete by AI automation (Ng et al. 2021).

+

“But even those who never acquire that understanding need assurance that there is a connection between the goals of science and their welfare, and above all, that the scientist is not a man altogether apart but one who shares some of their value.” (Handlin 1965)

+
+
+
+

15.8 Conclusion

+

Responsible artificial intelligence is crucial as machine learning systems exert growing influence across healthcare, employment, finance, and criminal justice sectors. While AI promises immense benefits, thoughtlessly designed models risk perpetrating harm through biases, privacy violations, unintended behaviors, and other pitfalls.

+

Upholding principles of fairness, explainability, accountability, safety, and transparency enables the development of ethical AI aligned with human values. However, implementing these principles involves surmounting complex technical and social challenges around detecting dataset biases, choosing appropriate model tradeoffs, securing quality training data, and more. Frameworks like value-sensitive design guide balancing accuracy versus other objectives based on stakeholder needs.

+

Looking forward, advancing responsible AI necessitates continued research and industry commitment. More standardized benchmarks are required to compare model biases and robustness. As personalized TinyML expands, enabling efficient transparency and user control for edge devices warrants focus. Revised incentive structures and policies must encourage deliberate, ethical development before reckless deployment. Education around AI literacy and its limitations will further contribute to public understanding.

+

Responsible methods underscore that while machine learning offers immense potential, thoughtless application risks adverse consequences. Cross-disciplinary collaboration and human-centered design are imperative so AI can promote broad social benefit. The path ahead lies not in an arbitrary checklist but in a steadfast commitment to understand and uphold our ethical responsibility at each step. By taking conscientious action, the machine learning community can lead AI toward empowering all people equitably and safely.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+

Coming soon.

+
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/robust_ai/robust_ai.html b/contents/robust_ai/robust_ai.html new file mode 100644 index 00000000..2b27c4be --- /dev/null +++ b/contents/robust_ai/robust_ai.html @@ -0,0 +1,2624 @@ + + + + + + + + + +Machine Learning Systems - 18  Robust AI + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

18  Robust AI

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Create an image featuring an advanced AI system symbolized by an intricate, glowing neural network, deeply nested within a series of progressively larger and more fortified shields. Each shield layer represents a layer of defense, showcasing the system’s robustness against external threats and internal errors. The neural network, at the heart of this fortress of shields, radiates with connections that signify the AI’s capacity for learning and adaptation. This visual metaphor emphasizes not only the technological sophistication of the AI but also its resilience and security, set against the backdrop of a state-of-the-art, secure server room filled with the latest in technological advancements. The image aims to convey the concept of ultimate protection and resilience in the field of artificial intelligence.
+
+
+

The development of robust machine learning systems has become increasingly crucial. As these systems are deployed in various critical applications, from autonomous vehicles to healthcare diagnostics, ensuring their resilience to faults and errors is paramount.

+

Robust AI, in the context of hardware faults, software faults, and errors, plays an important role in maintaining the reliability, safety, and performance of machine learning systems. By addressing the challenges posed by transient, permanent, and intermittent hardware faults (Ahmadilivani et al. 2024), as well as bugs, design flaws, and implementation errors in software (H. Zhang 2008), robust AI techniques enable machine learning systems to operate effectively even in adverse conditions.

+

This chapter explores the fundamental concepts, techniques, and tools for building fault-tolerant and error-resilient machine learning systems. It empowers researchers and practitioners to develop AI solutions that can withstand the complexities and uncertainties of real-world environments.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand the importance of robust and resilient AI systems in real-world applications.

  • +
  • Identify and characterize hardware faults, software faults, and their impact on ML systems.

  • +
  • Recognize and develop defensive strategies against threats posed by adversarial attacks, data poisoning, and distribution shifts.

  • +
  • Learn techniques for detecting, mitigating, and designing fault-tolerant ML systems.

  • +
  • Become familiar with tools and frameworks for studying and enhancing ML system resilience throughout the AI development lifecycle.

  • +
+
+
+
+

18.1 Introduction

+

Robust AI refers to a system’s ability to maintain its performance and reliability in the presence of hardware faults, software faults, and errors. A robust machine learning system is designed to be fault-tolerant and error-resilient, capable of operating effectively even under adverse conditions.

+

As ML systems become increasingly integrated into various aspects of our lives, from cloud-based services to edge devices and embedded systems, the impact of hardware and software faults on their performance and reliability becomes more significant. In the future, as ML systems become more complex and are deployed in even more critical applications, the need for robust and fault-tolerant designs will be paramount.

+

ML systems are expected to play crucial roles in autonomous vehicles, smart cities, healthcare, and industrial automation domains. In these domains, the consequences of hardware or software faults can be severe, potentially leading to loss of life, economic damage, or environmental harm.

+

Researchers and engineers must focus on developing advanced techniques for fault detection, isolation, and recovery to mitigate these risks and ensure the reliable operation of future ML systems.

+

This chapter will focus specifically on three main categories of faults and errors that can impact the robustness of ML systems: hardware faults, model robustness threats, and software faults.

+
    +
  • Hardware Faults: Transient, permanent, and intermittent faults can affect the hardware components of an ML system, corrupting computations and degrading performance.

  • +
  • Model Robustness: ML models can be vulnerable to adversarial attacks, data poisoning, and distribution shifts, which can induce targeted misclassifications, skew the model’s learned behavior, or compromise the system’s integrity and reliability.

  • +
  • Software Faults: Bugs, design flaws, and implementation errors in the software components, such as algorithms, libraries, and frameworks, can propagate errors and introduce vulnerabilities.

  • +
+

The specific challenges and approaches to achieving robustness may vary depending on the scale and constraints of the ML system. Large-scale cloud computing or data center systems may focus on fault tolerance and resilience through redundancy, distributed processing, and advanced error detection and correction techniques. In contrast, resource-constrained edge devices or embedded systems face unique challenges due to limited computational power, memory, and energy resources.

+

Regardless of the scale and constraints, the key characteristics of a robust ML system include fault tolerance, error resilience, and performance maintenance. By understanding and addressing the multifaceted challenges to robustness, we can develop trustworthy and reliable ML systems that can navigate the complexities of real-world environments.

+

This chapter is not just about exploring ML systems’ tools, frameworks, and techniques for detecting and mitigating faults, attacks, and distributional shifts. It’s about emphasizing the crucial role of each one of you in prioritizing resilience throughout the AI development lifecycle, from data collection and model training to deployment and monitoring. By proactively addressing the challenges to robustness, we can unlock the full potential of ML technologies while ensuring their safe, reliable, and responsible deployment in real-world applications.

+

As AI continues to shape our future, the potential of ML technologies is immense. But it’s only when we build resilient systems that can withstand the challenges of the real world that we can truly harness this potential. This is a defining factor in the success and societal impact of this transformative technology, and it’s within our reach.

+
+
+

18.2 Real-World Examples

+

Here are some real-world examples of cases where faults in hardware or software have caused major issues in ML systems across cloud, edge, and embedded environments:

+
+

18.2.1 Cloud

+

In February 2017, Amazon Web Services (AWS) experienced a significant outage due to human error during maintenance. An engineer inadvertently entered an incorrect command, causing many servers to be taken offline. This outage disrupted many AWS services, including Amazon’s AI-powered assistant, Alexa. As a result, Alexa-powered devices, such as Amazon Echo and third-party products using Alexa Voice Service, could not respond to user requests for several hours. This incident highlights the potential impact of human errors on cloud-based ML systems and the need for robust maintenance procedures and failsafe mechanisms.

+

In another example (Vangal et al. 2021), Facebook encountered a silent data corruption (SDC) issue within its distributed querying infrastructure, as shown in Figure fig-sdc-example. Facebook’s infrastructure includes a querying system that fetches and executes SQL and SQL-like queries across multiple datasets using frameworks like Presto, Hive, and Spark. One of the applications that utilized this querying infrastructure was a compression application to reduce the footprint of data stores. In this compression application, files were compressed when not being read and decompressed when a read request was made. Before decompression, the file size was checked to ensure it was greater than zero, indicating a valid compressed file with contents.

+
+Vangal, Sriram, Somnath Paul, Steven Hsu, Amit Agarwal, Saurabh Kumar, Ram Krishnamurthy, Harish Krishnamurthy, James Tschanz, Vivek De, and Chris H. Kim. 2021. “Wide-Range Many-Core SoC Design in Scaled CMOS: Challenges and Opportunities.” IEEE Trans. Very Large Scale Integr. VLSI Syst. 29 (5): 843–56. https://doi.org/10.1109/tvlsi.2021.3061649. +
+
+
+ +
+
+Figure 18.1: Silent data corruption in database applications (Source: Facebook) +
+
+
+

However, in one instance, when the file size was being computed for a valid non-zero-sized file, the decompression algorithm invoked a power function from the Scala library. Unexpectedly, the Scala function returned a zero size value for the file despite having a known non-zero decompressed size. As a result, the decompression was not performed, and the file was not written to the output database. This issue manifested sporadically, with some occurrences of the same file size computation returning the correct non-zero value.
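Because these corruptions were sporadic, re-executing the same computation often produced the correct value. One software-level defense built on that observation is redundant execution: run the computation twice and arbitrate disagreements with a third run. The helper below is a hypothetical sketch of this idea, not Facebook’s actual mitigation, and assumes the fault does not repeat deterministically across runs.

```python
def run_with_recompute(fn, *args):
    """Execute fn twice; arbitrate any disagreement with a third run.

    Sporadic silent data corruption often does not repeat on
    re-execution, so redundant runs can expose it."""
    first = fn(*args)
    second = fn(*args)
    if first == second:
        return first
    third = fn(*args)  # tie-breaker
    if third in (first, second):
        return third
    raise RuntimeError("results diverge: suspected silent data corruption")
```

Applied to the file-size computation above, a zero returned for a non-empty file would disagree with a healthy re-run and be flagged before the file was silently dropped from the output database.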

+

The impact of this silent data corruption was significant, leading to missing files and incorrect data in the output database. The application relying on the decompressed files failed because of these data inconsistencies. As the case study notes, the affected infrastructure comprised hundreds of thousands of servers handling billions of requests per day from Facebook’s massive user base, processing user queries, image uploads, and media content, all workloads requiring fast, reliable, and secure execution.

+

This case study illustrates how silent data corruption can propagate through multiple layers of an application stack, leading to data loss and application failures in a large-scale distributed system. The intermittent nature of the issue and the lack of explicit error messages made it particularly challenging to diagnose and resolve. Nor is this restricted to Meta: other companies, such as Google, that operate AI hypercomputers face the same challenge. In Figure fig-sdc-jeffdean, Jeff Dean, Chief Scientist at Google DeepMind and Google Research, discusses SDCs and their impact on ML systems.

+
+
+
+ +
+
+Figure 18.2: Silent data corruption (SDC) errors are a major issue for AI hypercomputers. (Source: Jeff Dean at MLSys 2024, Keynote (Google)) +
+
+
+
+
+

18.2.2 Edge

+

Among edge ML systems, self-driving cars are a domain that has garnered significant attention regarding faults and errors. Self-driving vehicles rely heavily on machine learning algorithms for perception, decision-making, and control, making them particularly susceptible to the impact of hardware and software faults. In recent years, several high-profile incidents involving autonomous vehicles have highlighted the challenges and risks associated with deploying these systems in real-world environments.

+

In May 2016, a fatal accident occurred when a Tesla Model S operating on Autopilot crashed into a white semi-trailer truck crossing the highway. The Autopilot system, which relied on computer vision and machine learning algorithms, failed to recognize the white trailer against a bright sky background. The driver, who was reportedly watching a movie at the time of the crash, did not intervene in time, and the vehicle collided with the trailer at full speed. This incident raised concerns about the limitations of AI-based perception systems and the need for robust failsafe mechanisms in autonomous vehicles. It also highlighted the importance of driver awareness and the need for clear guidelines on using semi-autonomous driving features, as shown in Figure fig-tesla-example.

+
+
+
+ +
+
+Figure 18.3: Tesla in the fatal California crash was on Autopilot (Source: BBC News) +
+
+
+

In March 2018, an Uber self-driving test vehicle struck and killed a pedestrian crossing the street in Tempe, Arizona. The incident was caused by a software flaw in the vehicle’s object recognition system, which failed to correctly classify the pedestrian as an obstacle to be avoided. The safety driver, who was supposed to monitor the vehicle’s operation and intervene if necessary, was found to have been distracted at the time of the crash. This incident led to widespread scrutiny of Uber’s self-driving program and raised questions about the readiness of autonomous vehicle technology for public roads. It also emphasized the need for rigorous testing, validation, and safety measures in developing and deploying AI-based self-driving systems.

+

In 2021, Tesla faced increased scrutiny following several accidents involving vehicles operating on Autopilot mode. Some of these accidents were attributed to issues with the Autopilot system’s ability to detect and respond to certain road situations, such as stationary emergency vehicles or obstacles in the road. For example, in April 2021, a Tesla Model S crashed into a tree in Texas, killing two passengers. Initial reports suggested that no one was in the driver’s seat at the time of the crash, raising questions about the use and potential misuse of Autopilot features. These incidents highlight the ongoing challenges in developing robust and reliable autonomous driving systems and the need for clear regulations and consumer education regarding the capabilities and limitations of these technologies.

+
+
+

18.2.3 Embedded

+

Embedded systems, which often operate in resource-constrained environments and safety-critical applications, have long faced challenges related to hardware and software faults. As AI and machine learning technologies are increasingly integrated into these systems, the potential for faults and errors takes on new dimensions, with the added complexity of AI algorithms and the critical nature of the applications in which they are deployed.

+

Let’s consider a few examples, starting with outer space exploration. NASA’s Mars Polar Lander mission in 1999 suffered a catastrophic failure due to a software error in the touchdown detection system (Figure fig-nasa-example). The spacecraft’s onboard software mistakenly interpreted the noise from the deployment of its landing legs as a sign that it had touched down on the Martian surface. As a result, the spacecraft prematurely shut down its engines, causing it to crash into the surface. This incident highlights the critical importance of robust software design and extensive testing in embedded systems, especially those operating in remote and unforgiving environments. As AI capabilities are integrated into future space missions, ensuring these systems’ reliability and fault tolerance will be paramount to mission success.

+
+
+
+ +
+
+Figure 18.4: NASA’s Failed Mars Polar Lander mission in 1999 cost over $200M (Source: SlashGear) +
+
+
+

Back on Earth, in 2015, a Boeing 787 Dreamliner experienced a complete electrical shutdown during a flight due to a software bug in its generator control units. The bug caused the generator control units to enter a failsafe mode, cutting power to the aircraft’s electrical systems and forcing an emergency landing. This incident underscores the potential for software faults to have severe consequences in complex embedded systems like aircraft. As AI technologies are increasingly applied in aviation, such as in autonomous flight systems and predictive maintenance, ensuring the robustness and reliability of these systems will be critical to passenger safety.

+

As AI capabilities increasingly integrate into embedded systems, the potential for faults and errors becomes more complex and severe. Imagine a smart pacemaker that experiences a sudden glitch; the patient could die as a result. AI algorithms, such as those used for perception, decision-making, and control, introduce new sources of potential faults, such as data-related issues, model uncertainties, and unexpected behaviors in edge cases. Moreover, the opaque nature of some AI models can make it challenging to identify and diagnose faults when they occur.

+
+
+
+

18.3 Hardware Faults

+

Hardware faults are a significant challenge in computing systems, including traditional and ML systems. These faults occur when physical components, such as processors, memory modules, storage devices, or interconnects, malfunction or behave abnormally. Hardware faults can cause incorrect computations, data corruption, system crashes, or complete system failure, compromising the integrity and trustworthiness of the computations performed by the system (Jha et al. 2019). A complete system failure refers to a situation where the entire computing system becomes unresponsive or inoperable due to a critical hardware malfunction. This type of failure is the most severe, as it renders the system unusable and may lead to data loss or corruption, requiring manual intervention to repair or replace the faulty components.

+

Understanding the taxonomy of hardware faults is essential for anyone working with computing systems, especially in the context of ML systems. ML systems rely on complex hardware architectures and large-scale computations to train and deploy models that learn from data and make intelligent predictions or decisions. However, hardware faults can introduce errors and inconsistencies in the MLOps pipeline, affecting the trained models’ accuracy, robustness, and reliability (G. Li et al. 2017).

+

Knowing the different types of hardware faults, their mechanisms, and their potential impact on system behavior is crucial for developing effective strategies to detect, mitigate, and recover from them. This knowledge is necessary for designing fault-tolerant computing systems, implementing robust ML algorithms, and ensuring the overall dependability of ML-based applications.

+

The following sections will explore the three main categories of hardware faults: transient, permanent, and intermittent. We will discuss their definitions, characteristics, causes, mechanisms, and examples of how they manifest in computing systems. We will also cover detection and mitigation techniques specific to each fault type.

+
    +
  • Transient Faults: Transient faults are temporary and non-recurring. They are often caused by external factors such as cosmic rays, electromagnetic interference, or power fluctuations. A common example of a transient fault is a bit flip, where a single bit in a memory location or register changes its value unexpectedly. Transient faults can lead to incorrect computations or data corruption, but they do not cause permanent damage to the hardware.

  • +
  • Permanent Faults: Permanent faults, also called hard errors, are irreversible and persist over time. They are typically caused by physical defects or wear-out of hardware components. Examples of permanent faults include stuck-at faults, where a bit or signal is permanently set to a specific value (e.g., always 0 or always 1), and device failures, such as a malfunctioning processor or a damaged memory module. Permanent faults can result in complete system failure or significant performance degradation.

  • +
  • Intermittent Faults: Intermittent faults are recurring faults that appear and disappear intermittently. Unstable hardware conditions, such as loose connections, aging components, or manufacturing defects, often cause them. Intermittent faults can be challenging to diagnose and reproduce because they may occur sporadically and under specific conditions. Examples include intermittent short circuits or contact resistance issues. Intermittent faults can lead to unpredictable system behavior and intermittent errors.

  • +
+

By the end of this discussion, readers will have a solid understanding of fault taxonomy and its relevance to traditional computing and ML systems. This foundation will help them make informed decisions when designing, implementing, and deploying fault-tolerant solutions, improving the reliability and trustworthiness of their computing systems and ML applications.

+
+

18.3.1 Transient Faults

+

Transient faults in hardware can manifest in various forms, each with its own unique characteristics and causes. These faults are temporary in nature and do not result in permanent damage to the hardware components.

+
+

Definition and Characteristics

+

Some of the common types of transient faults include Single Event Upsets (SEUs) caused by ionizing radiation, voltage fluctuations (Reddi and Gupta 2013) due to power supply noise or electromagnetic interference, Electromagnetic Interference (EMI) induced by external electromagnetic fields, Electrostatic Discharge (ESD) resulting from sudden static electricity flow, crosstalk caused by unintended signal coupling, ground bounce triggered by simultaneous switching of multiple outputs, timing violations due to signal timing constraint breaches, and soft errors in combinational logic affecting the output of logic circuits (Mukherjee, Emer, and Reinhardt 2005). Understanding these different types of transient faults is crucial for designing robust and resilient hardware systems that can mitigate their impact and ensure reliable operation.

+
+Reddi, Vijay Janapa, and Meeta Sharma Gupta. 2013. Resilient Architecture Design for Voltage Variation. Springer International Publishing. https://doi.org/10.1007/978-3-031-01739-1. +
+Mukherjee, S. S., J. Emer, and S. K. Reinhardt. 2005. “The Soft Error Problem: An Architectural Perspective.” In 11th International Symposium on High-Performance Computer Architecture, 243–47. IEEE; IEEE. https://doi.org/10.1109/hpca.2005.37. +

All of these transient faults are characterized by their short duration and non-permanent nature. They do not persist or leave any lasting impact on the hardware. However, they can still lead to incorrect computations, data corruption, or system misbehavior if not properly handled.

+

+
+
+

Causes of Transient Faults

+

Transient faults can be attributed to various external factors. One common cause is cosmic rays, high-energy particles originating from outer space. When these particles strike sensitive areas of the hardware, such as memory cells or transistors, they can induce charge disturbances that alter the stored or transmitted data. This is illustrated in Figure fig-transient-fault. Another cause of transient faults is electromagnetic interference (EMI) from nearby devices or power fluctuations. EMI can couple with the circuits and cause voltage spikes or glitches that temporarily disrupt the normal operation of the hardware.

+
+
+
+ +
+
+Figure 18.5: Mechanism of Hardware Transient Fault Occurrence (Source: NTT) +
+
+
+
+
+

Mechanisms of Transient Faults

+

Transient faults can manifest through different mechanisms depending on the affected hardware component. In memory devices like DRAM or SRAM, transient faults often lead to bit flips, where a single bit changes its value from 0 to 1 or vice versa. This can corrupt the stored data or instructions. In logic circuits, transient faults can cause glitches or voltage spikes propagating through the combinational logic, resulting in incorrect outputs or control signals. Transient faults can also affect communication channels, causing bit errors or packet losses during data transmission.

+
+
+

Impact on ML Systems

+

A common example of a transient fault is a bit flip in the main memory. If an important data structure or critical instruction is stored in the affected memory location, it can lead to incorrect computations or program misbehavior. For instance, a bit flip in the memory storing a loop counter can cause the loop to execute indefinitely or terminate prematurely. Transient faults in control registers or flag bits can alter the flow of program execution, leading to unexpected jumps or incorrect branch decisions. In communication systems, transient faults can corrupt transmitted data packets, resulting in retransmissions or data loss.
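The loop-counter scenario can be simulated in a few lines. This is an illustrative software model only (an XOR mask standing in for a single-event upset), not real hardware behavior:

```python
def flip_bit(value: int, bit: int) -> int:
    """Model a single-event upset by inverting one bit of an integer."""
    return value ^ (1 << bit)

remaining = 7                       # iterations the loop still intends to run
corrupted = flip_bit(remaining, 30) # strike on bit 30 of the counter
print(corrupted)                    # 1073741831: roughly a billion extra iterations
```

The same XOR model applies symmetrically: flipping the bit a second time restores the original value, which is why transient faults leave no lasting damage in the hardware itself.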

+

In ML systems, transient faults can have significant implications during the training phase (He et al. 2023). ML training involves iterative computations and updates to model parameters based on large datasets. If a transient fault occurs in the memory storing the model weights or gradients, it can lead to incorrect updates and compromise the convergence and accuracy of the training process. Figure fig-sdc-training-fault shows a real-world example from Google’s production fleet, where an SDC anomaly caused a significant difference in the gradient norm.

+
+
+
+ +
+
+Figure 18.6: SDC in ML training phase results in anomalies in the gradient norm. (Source: Jeff Dean, MLSys 2024 Keynote (Google)) +
+
+
+

For example, a bit flip in the weight matrix of a neural network can cause the model to learn incorrect patterns or associations, leading to degraded performance (Wan et al. 2021). Transient faults in the data pipeline, such as corruption of training samples or labels, can also introduce noise and affect the quality of the learned model.

+
+Wan, Zishen, Aqeel Anwar, Yu-Shun Hsiao, Tianyu Jia, Vijay Janapa Reddi, and Arijit Raychowdhury. 2021. “Analyzing and Improving Fault Tolerance of Learning-Based Navigation Systems.” In 2021 58th ACM/IEEE Design Automation Conference (DAC), 841–46. IEEE; IEEE. https://doi.org/10.1109/dac18074.2021.9586116. +

During the inference phase, transient faults can impact the reliability and trustworthiness of ML predictions. If a transient fault occurs in the memory storing the trained model parameters or in the computation of the inference results, it can lead to incorrect or inconsistent predictions. For instance, a bit flip in the activation values of a neural network can alter the final classification or regression output (Mahmoud et al. 2020).
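The severity of such a flip depends heavily on which bit is hit. The sketch below, which uses NumPy to view the raw IEEE-754 encoding of a float32 value, shows that flipping a high exponent bit of a weight or activation changes it by dozens of orders of magnitude, while a low mantissa bit is nearly harmless:

```python
import numpy as np

def flip_float32_bit(x: float, bit: int) -> float:
    """Flip one bit of a float32 value via its raw IEEE-754 encoding."""
    raw = np.array([x], dtype=np.float32).view(np.uint32)
    raw ^= np.uint32(1 << bit)           # simulate the single-bit upset
    return float(raw.view(np.float32)[0])

w = 0.5
print(flip_float32_bit(w, 30))  # high exponent bit: 0.5 becomes about 1.7e38
print(flip_float32_bit(w, 0))   # low mantissa bit: still about 0.5
```

This asymmetry is one reason fault-injection studies report that a small fraction of bit positions account for most mispredictions: an exponent-bit flip in an activation can saturate downstream layers and flip the final classification outright.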

+

In safety-critical applications, such as autonomous vehicles or medical diagnosis, transient faults during inference can have severe consequences, leading to incorrect decisions or actions (G. Li et al. 2017; Jha et al. 2019). Ensuring the resilience of ML systems against transient faults is crucial to maintaining the integrity and reliability of the predictions.

+
+Li, Guanpeng, Siva Kumar Sastry Hari, Michael Sullivan, Timothy Tsai, Karthik Pattabiraman, Joel Emer, and Stephen W. Keckler. 2017. “Understanding Error Propagation in Deep Learning Neural Network (DNN) Accelerators and Applications.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 1–12. ACM. https://doi.org/10.1145/3126908.3126964. +
+
+
+

18.3.2 Permanent Faults

+

Permanent faults are hardware defects that persist and cause irreversible damage to the affected components. These faults are characterized by their persistent nature and require repair or replacement of the faulty hardware to restore normal system functionality.

+
+

Definition and Characteristics

+

Permanent faults are hardware defects that cause persistent and irreversible malfunctions in the affected components. Once a permanent fault occurs, the faulty component remains non-operational until it is repaired or replaced. These faults are characterized by their consistent and reproducible nature, meaning that the faulty behavior is observed every time the affected component is used. Permanent faults can impact various hardware components, such as processors, memory modules, storage devices, or interconnects, leading to system crashes, data corruption, or complete system failure.

+

One notable example of a permanent fault is the Intel FDIV bug, which was discovered in 1994. The FDIV bug was a flaw in certain Intel Pentium processors’ floating-point division (FDIV) units. The bug caused incorrect results for specific division operations, leading to inaccurate calculations.

+

The FDIV bug occurred due to an error in the lookup table used by the division unit. In rare cases, the processor would fetch an incorrect value from the lookup table, resulting in a slightly less precise result than expected. For instance, Figure fig-permanent-fault shows a fraction 4195835/3145727 plotted on a Pentium processor with the FDIV permanent fault. The triangular regions are where erroneous calculations occurred. Ideally, all correct values would round to 1.3338, but the erroneous results show 1.3337, indicating a mistake in the 5th digit.
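This division is easy to reproduce. On a correct floating-point unit the quotient rounds to 1.3338; affected Pentiums famously returned a value near 1.33374 instead:

```python
# The division from the figure, computed on a correct floating-point unit.
q = 4195835 / 3145727
print(round(q, 4))  # 1.3338 (flawed Pentium FDIV units returned ~1.3337)
```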

+

Although the error was small, it could compound over many division operations, leading to significant inaccuracies in mathematical calculations. The impact of the FDIV bug was significant, especially for applications that relied heavily on precise floating-point division, such as scientific simulations, financial calculations, and computer-aided design. The bug led to incorrect results, which could have severe consequences in fields like finance or engineering.

+
+
+
+ +
+
+Figure 18.7: Intel Pentium processor with the FDIV permanent fault. The triangular regions are where erroneous calculations occurred. (Source: Byte Magazine) +
+
+
+

The Intel FDIV bug is a cautionary tale for the potential impact of permanent faults on ML systems. In the context of ML, permanent faults in hardware components can lead to incorrect computations, affecting the accuracy and reliability of the models. For example, if an ML system relies on a processor with a faulty floating-point unit, similar to the Intel FDIV bug, it could introduce errors in the calculations performed during training or inference.

+

These errors can propagate through the model, leading to inaccurate predictions or skewed learning. In applications where ML is used for critical tasks, such as autonomous driving, medical diagnosis, or financial forecasting, the consequences of incorrect computations due to permanent faults can be severe.

+

It is crucial for ML practitioners to be aware of the potential impact of permanent faults and to incorporate fault-tolerant techniques, such as hardware redundancy, error detection and correction mechanisms, and robust algorithm design, to mitigate the risks associated with these faults. Additionally, thorough testing and validation of ML hardware components can help identify and address permanent faults before they impact the system’s performance and reliability.
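The hardware-redundancy idea mentioned above is often realized as triple modular redundancy (TMR): the same computation runs on three replicas and a voter takes the majority result, so a single faulty unit is outvoted. The snippet below is an illustrative software sketch of the voting logic, not a hardware implementation:

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Majority vote over three redundant results (triple modular redundancy)."""
    winner, count = Counter([a, b, c]).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica is faulty")
    return winner

# A single replica corrupted by a permanent fault is outvoted:
print(tmr_vote(42, 42, 1099511627818))  # 42
```

TMR masks any single-replica fault, permanent or transient, at the cost of roughly tripling the hardware or compute budget, which is why it is typically reserved for safety-critical paths.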

+
+
+

Causes of Permanent Faults

+

Permanent faults can arise from several causes, including manufacturing defects and wear-out mechanisms. Manufacturing defects are inherent flaws introduced during the fabrication process of hardware components. These defects include improper etching, incorrect doping, or contamination, leading to non-functional or partially functional components.

+

On the other hand, wear-out mechanisms occur over time as the hardware components are subjected to prolonged use and stress. Factors such as electromigration, oxide breakdown, or thermal stress can cause gradual degradation of the components, eventually leading to permanent failures.

+
+
+

Mechanisms of Permanent Faults

+

Permanent faults can manifest through various mechanisms, depending on the nature and location of the fault. Stuck-at faults (Seong et al. 2010) are common permanent faults where a signal or memory cell remains fixed at a particular value (either 0 or 1) regardless of the inputs, as illustrated in Figure fig-stuck-fault.

+
+Seong, Nak Hee, Dong Hyuk Woo, Vijayalakshmi Srinivasan, Jude A. Rivers, and Hsien-Hsin S. Lee. 2010. SAFER: Stuck-at-fault Error Recovery for Memories.” In 2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture, 115–24. IEEE; IEEE. https://doi.org/10.1109/micro.2010.46. +
+
+
+ +
+
+Figure 18.8: Stuck-at Fault Model in Digital Circuits (Source: Accendo Reliability) +
+
+
+

Stuck-at faults can occur in logic gates, memory cells, or interconnects, causing incorrect computations or data corruption. Another mechanism is device failures, where a component, such as a transistor or a memory cell, completely ceases to function. This can be due to manufacturing defects or severe wear-out. Bridging faults occur when two or more signal lines are unintentionally connected, causing short circuits or incorrect logic behavior.

+

In addition to stuck-at faults, there are several other types of permanent faults that can affect digital circuits that can impact an ML system. Delay faults can cause the propagation delay of a signal to exceed the specified limit, leading to timing violations. Interconnect faults, such as open faults (broken wires), resistive faults (increased resistance), or capacitive faults (increased capacitance), can cause signal integrity issues or timing violations. Memory cells can also suffer from various faults, including transition faults (inability to change state), coupling faults (interference between adjacent cells), and neighborhood pattern sensitive faults (faults that depend on the values of neighboring cells). Other permanent faults can occur in the power supply network or the clock distribution network, affecting the functionality and timing of the circuit.
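The stuck-at model described earlier can be expressed in a few lines: a stuck-at-1 bit always reads 1 (OR with a mask) and a stuck-at-0 bit always reads 0 (AND with the inverted mask). A minimal sketch:

```python
def stuck_at(value: int, bit: int, stuck: int) -> int:
    """Read `value` through a line whose `bit` is stuck at `stuck` (0 or 1)."""
    mask = 1 << bit
    return value | mask if stuck else value & ~mask

# Bit 3 stuck at 0: whatever value is driven, that bit always reads 0.
print(stuck_at(0b1111, 3, 0))  # 7  (0b0111)
# The same bit stuck at 1 reads 1 even when 0 is driven.
print(stuck_at(0b0000, 3, 1))  # 8  (0b1000)
```

Because the faulty bit is fixed regardless of the input, stuck-at faults are deterministic and reproducible, which is exactly the property that test-pattern generation exploits to detect them.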

+
+
+

Impact on ML Systems

+

Permanent faults can severely affect the behavior and reliability of computing systems. For example, a stuck-at-fault in a processor’s arithmetic logic unit (ALU) can cause incorrect computations, leading to erroneous results or system crashes. A permanent fault in a memory module, such as a stuck-at fault in a specific memory cell, can corrupt the stored data, causing data loss or program misbehavior. In storage devices, permanent faults like bad sectors or device failures can result in data inaccessibility or complete loss of stored information. Permanent interconnect faults can disrupt communication channels, causing data corruption or system hangs.

+

Permanent faults can significantly affect ML systems during the training and inference phases. During training, permanent faults in processing units or memory can lead to incorrect computations, resulting in corrupted or suboptimal models (He et al. 2023). Furthermore, faults in storage devices can corrupt the training data or the stored model parameters, leading to data loss or model inconsistencies (He et al. 2023).

+
+Zhang, Jeff Jun, Tianyu Gu, Kanad Basu, and Siddharth Garg. 2018. “Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator.” In 2018 IEEE 36th VLSI Test Symposium (VTS), 1–6. IEEE; IEEE. https://doi.org/10.1109/vts.2018.8368656. +

During inference, permanent faults can impact the reliability and correctness of ML predictions. Faults in the processing units can produce incorrect results or cause system failures, while faults in memory storing the model parameters can lead to corrupted or outdated models being used for inference (J. J. Zhang et al. 2018).

+

To mitigate the impact of permanent faults in ML systems, fault-tolerant techniques must be employed at both the hardware and software levels. Hardware redundancy, such as duplicating critical components or using error-correcting codes (Kim, Sullivan, and Erez 2015), can help detect and recover from permanent faults. Software techniques, such as checkpoint and restart mechanisms (Egwutuoha et al. 2013), can enable the system to recover from permanent faults by returning to a previously saved state. Regular monitoring, testing, and maintenance of ML systems can help identify and replace faulty components before they cause significant disruptions.
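The checkpoint-and-restart idea can be sketched in a few lines. The training step, file layout, and crash point below are all invented for illustration; real systems would checkpoint full model and optimizer state:

```python
import os
import pickle
import tempfile

def train_step(weights):
    # Stand-in for one training iteration: nudge each weight.
    return {k: v + 0.1 for k, v in weights.items()}

def save_checkpoint(weights, step, path):
    with open(path, "wb") as f:
        pickle.dump({"weights": weights, "step": step}, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
weights, step = {"w": 0.0}, 0
crashed = False

while step < 10:
    if step == 7 and not crashed:
        # Simulate a fault-induced crash, then a restart from the
        # last saved state (step 5): only two steps of work are lost.
        crashed = True
        ckpt = load_checkpoint(path)
        weights, step = ckpt["weights"], ckpt["step"]
        continue
    weights = train_step(weights)
    step += 1
    if step % 5 == 0:
        save_checkpoint(weights, step, path)
```

After the simulated crash, training resumes from step 5 rather than step 0, which is the essential trade-off checkpointing buys: recovery time proportional to the checkpoint interval instead of the full run.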

+
+Kim, Jungrae, Michael Sullivan, and Mattan Erez. 2015. “Bamboo ECC: Strong, Safe, and Flexible Codes for Reliable Computer Memory.” In 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), 101–12. IEEE; IEEE. https://doi.org/10.1109/hpca.2015.7056025. +
+Egwutuoha, Ifeanyi P., David Levy, Bran Selic, and Shiping Chen. 2013. “A Survey of Fault Tolerance Mechanisms and Checkpoint/Restart Implementations for High Performance Computing Systems.” The Journal of Supercomputing 65 (3): 1302–26. https://doi.org/10.1007/s11227-013-0884-0. +

Designing ML systems with fault tolerance in mind is crucial to ensure their reliability and robustness in the presence of permanent faults. This may involve incorporating redundancy, error detection and correction mechanisms, and fail-safe strategies into the system architecture. By proactively addressing the challenges posed by permanent faults, ML systems can maintain their integrity, accuracy, and trustworthiness, even in the face of hardware failures.

+
+
+
+

18.3.3 Intermittent Faults

+

Intermittent faults are hardware faults that occur sporadically and unpredictably in a system. An example is illustrated in Figure fig-intermittent-fault, where cracks in the material can introduce increased resistance in circuitry. These faults are particularly challenging to detect and diagnose because they appear and disappear intermittently, making it difficult to reproduce and isolate the root cause. Intermittent faults can lead to system instability, data corruption, and performance degradation.

+
+
+
+ +
+
+Figure 18.9: Increased resistance due to an intermittent fault – crack between copper bump and package solder (Source: Constantinescu) +
+
+
+
+

Definition and Characteristics

+

Intermittent faults are characterized by their sporadic and non-deterministic nature. They occur irregularly and may appear and disappear spontaneously, with varying durations and frequencies. These faults do not consistently manifest every time the affected component is used, making them harder to detect than permanent faults. Intermittent faults can affect various hardware components, including processors, memory modules, storage devices, or interconnects. They can cause transient errors, data corruption, or unexpected system behavior.

+

Intermittent faults can significantly impact the behavior and reliability of computing systems (Rashid, Pattabiraman, and Gopalakrishnan 2015). For example, an intermittent fault in a processor’s control logic can cause irregular program flow, leading to incorrect computations or system hangs. Intermittent faults in memory modules can corrupt data values, resulting in erroneous program execution or data inconsistencies. In storage devices, intermittent faults can cause read/write errors or data loss. Intermittent faults in communication channels can lead to data corruption, packet loss, or intermittent connectivity issues. These faults can cause system crashes, data integrity problems, or performance degradation, depending on the severity and frequency of the intermittent failures.

+
+Rashid, Layali, Karthik Pattabiraman, and Sathish Gopalakrishnan. 2015. “Characterizing the Impact of Intermittent Hardware Faults on Programs.” IEEE Trans. Reliab. 64 (1): 297–310. https://doi.org/10.1109/tr.2014.2363152. +
+
+

Causes of Intermittent Faults

+

Intermittent faults can arise from several causes, both internal and external, to the hardware components (Constantinescu 2008). One common cause is aging and wear-out of the components. As electronic devices age, they become more susceptible to intermittent failures due to degradation mechanisms such as electromigration, oxide breakdown, or solder joint fatigue.

+
+Constantinescu, Cristian. 2008. “Intermittent Faults and Effects on Reliability of Integrated Circuits.” In 2008 Annual Reliability and Maintainability Symposium, 370–74. IEEE; IEEE. https://doi.org/10.1109/rams.2008.4925824. +

Manufacturing defects or process variations can also introduce intermittent faults, where marginal or borderline components may exhibit sporadic failures under specific conditions, as shown in Figure fig-intermittent-fault-dram.

+

Environmental factors, such as temperature fluctuations, humidity, or vibrations, can trigger intermittent faults by altering the electrical characteristics of the components. Loose or degraded connections, such as those in connectors or printed circuit boards, can cause intermittent faults.

+
+
+
+ +
+
+Figure 18.10: Residue induced intermittent fault in a DRAM chip (Source: Hynix Semiconductor) +
+
+
+
+
+

Mechanisms of Intermittent Faults

+

Intermittent faults can manifest through various mechanisms, depending on the underlying cause and the affected component. One mechanism is the intermittent open or short circuit, where a signal path or connection becomes temporarily disrupted or shorted, causing erratic behavior. Another mechanism is the intermittent delay fault (J. Zhang et al. 2018), where the timing of signals or propagation delays becomes inconsistent, leading to synchronization issues or incorrect computations. Intermittent faults can manifest as transient bit flips or soft errors in memory cells or registers, causing data corruption or incorrect program execution.

+
+Zhang, Jeff, Kartheek Rangineni, Zahra Ghodsi, and Siddharth Garg. 2018. “ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Learning Accelerators.” In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac.2018.8465918. +
+
+

Impact on ML Systems

+

In the context of ML systems, intermittent faults can introduce significant challenges and impact the system’s reliability and performance. During the training phase, intermittent faults in processing units or memory can lead to inconsistencies in computations, resulting in incorrect or noisy gradients and weight updates. This can affect the convergence and accuracy of the training process, leading to suboptimal or unstable models. Intermittent data storage or retrieval faults can corrupt the training data, introducing noise or errors that degrade the quality of the learned models (He et al. 2023).

+
+He, Yi, Mike Hutton, Steven Chan, Robert De Gruijl, Rama Govindaraju, Nishant Patil, and Yanjing Li. 2023. “Understanding and Mitigating Hardware Failures in Deep Learning Training Systems.” In Proceedings of the 50th Annual International Symposium on Computer Architecture, 1–16. IEEE; ACM. https://doi.org/10.1145/3579371.3589105. +

During the inference phase, intermittent faults can impact the reliability and consistency of ML predictions. Faults in the processing units or memory can cause incorrect computations or data corruption, leading to erroneous or inconsistent predictions. Intermittent faults in the data pipeline can introduce noise or errors in the input data, affecting the accuracy and robustness of the predictions. In safety-critical applications, such as autonomous vehicles or medical diagnosis systems, intermittent faults can have severe consequences, leading to incorrect decisions or actions that compromise safety and reliability.

+

Mitigating the impact of intermittent faults in ML systems requires a multifaceted approach (Rashid, Pattabiraman, and Gopalakrishnan 2012). At the hardware level, techniques such as robust design practices, component selection, and environmental control can help reduce the occurrence of intermittent faults. Redundancy and error correction mechanisms can be employed to detect and recover from intermittent failures. At the software level, runtime monitoring, anomaly detection, and fault-tolerant techniques can be incorporated into the ML pipeline. This may include techniques such as data validation, outlier detection, model ensembling, or runtime model adaptation to handle intermittent faults gracefully.

+
+Rashid, Layali, Karthik Pattabiraman, and Sathish Gopalakrishnan. 2012. “Intermittent Hardware Errors Recovery: Modeling and Evaluation.” In 2012 Ninth International Conference on Quantitative Evaluation of Systems, 220–29. IEEE; IEEE. https://doi.org/10.1109/qest.2012.37. +

Designing ML systems resilient to intermittent faults is crucial to ensuring their reliability and robustness. This involves incorporating fault-tolerant techniques, runtime monitoring, and adaptive mechanisms into the system architecture. By proactively addressing the challenges of intermittent faults, ML systems can maintain their accuracy, consistency, and trustworthiness, even in the presence of sporadic hardware failures. Regular testing, monitoring, and maintenance of ML systems can help identify and mitigate intermittent faults before they cause significant disruptions or performance degradation.

+
+
+
+

18.3.4 Detection and Mitigation

+

This section explores various fault detection techniques, including hardware-level and software-level approaches, and discusses effective mitigation strategies to enhance the resilience of ML systems. Additionally, we will look into resilient ML system design considerations, present case studies and examples, and highlight future research directions in fault-tolerant ML systems.

+
+

Fault Detection Techniques

+

Fault detection techniques are important for identifying and localizing hardware faults in ML systems. These techniques can be broadly categorized into hardware-level and software-level approaches, each offering unique capabilities and advantages.

+
+
Hardware-level fault detection
+

Hardware-level fault detection techniques are implemented at the physical level of the system and aim to identify faults in the underlying hardware components. There are several such techniques; broadly, they fall into the following categories.

+

Built-in self-test (BIST) mechanisms: BIST is a powerful technique for detecting faults in hardware components (Bushnell and Agrawal 2002). It involves incorporating additional hardware circuitry into the system for self-testing and fault detection. BIST can be applied to various components, such as processors, memory modules, or application-specific integrated circuits (ASICs). For example, BIST can be implemented in a processor using scan chains, which are dedicated paths that allow access to internal registers and logic for testing purposes.

+
+Bushnell, Michael L, and Vishwani D Agrawal. 2002. “Built-in Self-Test.” Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits, 489–548. +

During the BIST process, predefined test patterns are applied to the processor’s internal circuitry, and the responses are compared against expected values. Any discrepancies indicate the presence of faults. Intel’s Xeon processors, for instance, include BIST mechanisms to test the CPU cores, cache memory, and other critical components during system startup.
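The BIST pattern of applying predefined test inputs and comparing responses against known-good values can be sketched in software. The adder, the injected fault, and the test patterns below are all illustrative, not a real processor's BIST circuitry:

```python
def alu_add(a, b, fault=False):
    """8-bit adder under test; `fault` simulates a stuck-at-0 on bit 2 of the result."""
    result = (a + b) & 0xFF
    if fault:
        result &= ~(1 << 2) & 0xFF
    return result

# BIST-style check: predefined patterns with precomputed good responses.
TEST_PATTERNS = [
    (0, 0, 0),
    (1, 1, 2),
    (0b0000_0100, 0, 4),   # exercises bit 2 of the result
    (255, 1, 0),           # exercises the 8-bit wraparound
    (100, 55, 155),
]

def run_bist(fault=False):
    for a, b, expected in TEST_PATTERNS:
        if alu_add(a, b, fault=fault) != expected:
            return False  # discrepancy: fault detected
    return True

print(run_bist(fault=False))  # True  - unit passes self-test
print(run_bist(fault=True))   # False - stuck-at fault detected
```

A real BIST design faces the same coverage question this toy makes visible: the fault is only caught because one pattern exercises the affected bit.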

+

Error detection codes: Error detection codes are widely used to detect data storage and transmission errors (Hamming 1950). These codes add redundant bits to the original data, allowing the detection of bit errors. For example, parity checks are a simple form of error detection code, shown in Figure fig-parity. In a single-bit parity scheme, an extra bit is appended to each data word, making the number of 1s in the word even (even parity) or odd (odd parity).

+
+Hamming, R. W. 1950. “Error Detecting and Error Correcting Codes.” Bell Syst. Tech. J. 29 (2): 147–60. https://doi.org/10.1002/j.1538-7305.1950.tb00463.x. +
+
+
+ +
+
+Figure 18.11: Parity bit example (Source: Computer Hope) +
+
+
+

When reading the data, the parity is checked, and if it doesn’t match the expected value, an error is detected. More advanced error detection codes, such as cyclic redundancy checks (CRC), calculate a checksum based on the data and append it to the message. The checksum is recalculated at the receiving end and compared with the transmitted checksum to detect errors. Error-correcting code (ECC) memory modules, commonly used in servers and critical systems, employ advanced error detection and correction codes to detect and correct single-bit or multi-bit errors in memory.
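The single-bit even-parity scheme just described can be sketched in a few lines (the list-of-bits representation is chosen for clarity):

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(word):
    """Return True if the word (data + parity bit) has even parity."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 1, 0])  # four 1s -> parity bit 0
print(check_even_parity(word))                  # True: no error detected

word[3] ^= 1                                    # single bit flip in transit
print(check_even_parity(word))                  # False: error detected

word[5] ^= 1                                    # a second flip cancels the first
print(check_even_parity(word))                  # True: double-bit error missed
```

The last case shows why single-bit parity only detects odd numbers of flips, and why stronger codes such as CRC or ECC are used where multi-bit errors matter.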

+

Hardware redundancy and voting mechanisms: Hardware redundancy involves duplicating critical components and comparing their outputs to detect and mask faults (Sheaffer, Luebke, and Skadron 2007). Voting mechanisms, such as triple modular redundancy (TMR), employ multiple instances of a component and compare their outputs to identify and mask faulty behavior (Arifeen, Hassan, and Lee 2020).

+
+Sheaffer, Jeremy W, David P Luebke, and Kevin Skadron. 2007. “A Hardware Redundancy and Recovery Mechanism for Reliable Scientific Computation on Graphics Processors.” In Graphics Hardware, 2007:55–64. Citeseer. +
+Arifeen, Tooba, Abdus Sami Hassan, and Jeong-A Lee. 2020. “Approximate Triple Modular Redundancy: A Survey.” IEEE Access 8: 139851–67. https://doi.org/10.1109/access.2020.3012673. +
+Yeh, Y. C. 1996. “Triple-Triple Redundant 777 Primary Flight Computer.” In 1996 IEEE Aerospace Applications Conference. Proceedings, 1:293–307. IEEE; IEEE. https://doi.org/10.1109/aero.1996.495891. +

In a TMR system, three identical instances of a hardware component, such as a processor or a sensor, perform the same computation in parallel. The outputs of these instances are fed into a voting circuit, which compares the results and selects the majority value as the final output. If one of the instances produces an incorrect result due to a fault, the voting mechanism masks the error and maintains the correct output. TMR is commonly used in aerospace and aviation systems, where high reliability is critical. For instance, the Boeing 777 aircraft employs TMR in its primary flight computer system to ensure the availability and correctness of flight control functions (Yeh 1996).
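The voting circuit at the heart of TMR reduces to a majority function over the three replica outputs, as in this minimal sketch:

```python
from collections import Counter

def tmr_vote(outputs):
    """Majority vote over three redundant module outputs."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: replicas mutually disagree")
    return value

# One faulty replica is masked by the other two.
print(tmr_vote([42, 42, 42]))  # 42
print(tmr_vote([42, 17, 42]))  # 42 - fault in replica 2 is masked
```

Note that TMR masks any single faulty replica but fails (here, raises) when two or more replicas disagree with each other, which is why replicas must fail independently for the scheme to pay off.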

+

Tesla’s self-driving computers employ a redundant hardware architecture to ensure the safety and reliability of critical functions, such as perception, decision-making, and vehicle control, as shown in Figure fig-tesla-dmr. One key component of this architecture is using dual modular redundancy (DMR) in the car’s onboard computer systems.

+
+
+
+ +
+
+Figure 18.12: Tesla full self-driving computer with dual redundant SoCs (Source: Tesla) +
+
+
+

In Tesla’s DMR implementation, two identical hardware units, often called “redundant computers” or “redundant control units,” perform the same computations in parallel (Bannon et al. 2019). Each unit independently processes sensor data, executes perception and decision-making algorithms, and generates control commands for the vehicle’s actuators (e.g., steering, acceleration, and braking).

+
+Bannon, Pete, Ganesh Venkataramanan, Debjit Das Sarma, and Emil Talpes. 2019. “Computer and Redundancy Solution for the Full Self-Driving Computer.” In 2019 IEEE Hot Chips 31 Symposium (HCS), 1–22. IEEE Computer Society; IEEE. https://doi.org/10.1109/hotchips.2019.8875645. +

The outputs of these two redundant units are continuously compared to detect any discrepancies or faults. If the outputs match, the system assumes that both units function correctly, and the control commands are sent to the vehicle’s actuators. However, if there is a mismatch between the outputs, the system identifies a potential fault in one of the units and takes appropriate action to ensure safe operation.

+

The system may employ additional mechanisms to determine which unit is faulty when a mismatch occurs. This can involve using diagnostic algorithms, comparing the outputs with data from other sensors or subsystems, or analyzing the consistency of the outputs over time. Once the faulty unit is identified, the system can isolate it and continue operating using the output from the non-faulty unit.
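One way such arbitration can work is sketched below. The plausibility check is a hypothetical stand-in for the diagnostic algorithms described above, not Tesla's actual logic:

```python
def dmr_step(out_a, out_b, plausible):
    """Compare two redundant outputs; on mismatch, use a diagnostic
    (here, a plausibility predicate) to pick the healthy unit."""
    if out_a == out_b:
        return out_a, "ok"
    if plausible(out_a) and not plausible(out_b):
        return out_a, "unit B isolated"
    if plausible(out_b) and not plausible(out_a):
        return out_b, "unit A isolated"
    return None, "fail-safe: cannot arbitrate"

# Illustrative steering-angle command in degrees; anything beyond
# +/-45 degrees is treated as physically implausible.
plausible = lambda angle: -45 <= angle <= 45

print(dmr_step(3.0, 3.0, plausible))    # (3.0, 'ok')
print(dmr_step(3.0, 180.0, plausible))  # (3.0, 'unit B isolated')
```

The final branch makes the DMR limitation explicit: when the comparison detects a mismatch but the diagnostics cannot decide, the only safe option is a fail-safe action rather than fault masking.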

+

DMR in Tesla’s self-driving computer provides an extra safety and fault tolerance layer. By having two independent units performing the same computations, the system can detect and mitigate faults that may occur in one of the units. This redundancy helps prevent single points of failure and ensures that critical functions remain operational despite hardware faults.

+

Furthermore, Tesla also incorporates additional redundancy mechanisms beyond DMR. For example, they utilize redundant power supplies, steering and braking systems, and diverse sensor suites (e.g., cameras, radar, and ultrasonic sensors) to provide multiple layers of fault tolerance. These redundancies collectively contribute to the overall safety and reliability of the self-driving system.

+

It’s important to note that while DMR provides fault detection and some level of fault tolerance, it cannot mask faults the way TMR can. In DMR, if both units experience simultaneous faults, or if a fault affects the comparison mechanism itself, the system may be unable to identify the faulty unit. Therefore, Tesla’s self-driving computers rely on a combination of DMR and other redundancy mechanisms to achieve a high level of fault tolerance.

+

The use of DMR in Tesla’s self-driving computer highlights the importance of hardware redundancy in safety-critical applications. By employing redundant computing units and comparing their outputs, the system can detect and mitigate faults, enhancing the overall safety and reliability of the self-driving functionality.

+

Google employs redundant hot spares to deal with silent data corruption (SDC) issues within its data centers, thereby enhancing the reliability of critical functions. As illustrated in Figure fig-sdc-controller, during the normal training phase, multiple synchronous training workers function flawlessly. However, if a worker becomes defective and causes SDC, an SDC checker automatically identifies the issues. Upon detecting the SDC, the SDC checker moves the training to a hot spare and sends the defective machine for repair. This redundancy safeguards the continuity and reliability of ML training, effectively minimizing downtime and preserving data integrity.

+
+
+
+ +
+
+Figure 18.13: Google employs hot spare cores to transparently handle SDCs in the data center. (Source: Jeff Dean, MLSys 2024 Keynote (Google)) +
+
+
+

Watchdog timers: Watchdog timers are hardware components that monitor the execution of critical tasks or processes (Pont and Ong 2002). They are commonly used to detect and recover from software or hardware faults that cause a system to become unresponsive or stuck in an infinite loop. In an embedded system, a watchdog timer can be configured to monitor the execution of the main control loop, as illustrated in Figure fig-watchdog. The software periodically resets the watchdog timer to indicate that it functions correctly. Suppose the software fails to reset the timer within a specified time limit (timeout period). In that case, the watchdog timer assumes that the system has encountered a fault and triggers a predefined recovery action, such as resetting the system or switching to a backup component. Watchdog timers are widely used in automotive electronics, industrial control systems, and other safety-critical applications to ensure the timely detection and recovery from faults.

+
+Pont, Michael J, and Royan HL Ong. 2002. “Using Watchdog Timers to Improve the Reliability of Single-Processor Embedded Systems: Seven New Patterns and a Case Study.” In Proceedings of the First Nordic Conference on Pattern Languages of Programs, 159–200. Citeseer. +
+
+
+ +
+
+Figure 18.14: Watchdog timer example in detecting MCU faults (Source: Ablic) +
+
+
+
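The watchdog pattern described above can be sketched in software. A real implementation would use a hardware timer and a reset line; the class and timeout below are purely illustrative:

```python
import time

class Watchdog:
    """Software sketch of a watchdog timer: the main loop must call
    kick() within `timeout` seconds, or expired() reports a fault."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_kick = time.monotonic()

    def kick(self):
        """Reset the timer; the monitored loop calls this each iteration."""
        self.last_kick = time.monotonic()

    def expired(self):
        """True once the timeout elapses without a kick (fault detected)."""
        return time.monotonic() - self.last_kick > self.timeout

wd = Watchdog(timeout=0.05)
wd.kick()
print(wd.expired())  # False: the control loop is alive
time.sleep(0.1)      # simulate a hung control loop
print(wd.expired())  # True: trigger a recovery action (e.g., reset)
```

Using `time.monotonic()` rather than wall-clock time matters here: the watchdog must not be fooled by clock adjustments.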
+
+
Software-level fault detection
+

Software-level fault detection techniques rely on software algorithms and monitoring mechanisms to identify system faults. These techniques can be implemented at various levels of the software stack, including the operating system, middleware, or application level.

+

Runtime monitoring and anomaly detection: Runtime monitoring involves continuously observing the behavior of the system and its components during execution (Francalanza et al. 2017). It helps detect anomalies, errors, or unexpected behavior that may indicate the presence of faults. For example, consider an ML-based image classification system deployed in a self-driving car. Runtime monitoring can be implemented to track the classification model’s performance and behavior (Mahmoud et al. 2021).

+
+Francalanza, Adrian, Luca Aceto, Antonis Achilleos, Duncan Paul Attard, Ian Cassar, Dario Della Monica, and Anna Ingólfsdóttir. 2017. “A Foundation for Runtime Monitoring.” In International Conference on Runtime Verification, 8–29. Springer. +
+Mahmoud, Abdulrahman, Siva Kumar Sastry Hari, Christopher W. Fletcher, Sarita V. Adve, Charbel Sakr, Naresh Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, and Stephen W. Keckler. 2021. “Optimizing Selective Protection for CNN Resilience.” In 2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE), 127–38. IEEE. https://doi.org/10.1109/issre52982.2021.00025. +
+Chandola, Varun, Arindam Banerjee, and Vipin Kumar. 2009. “Anomaly Detection: A Survey.” ACM Comput. Surv. 41 (3): 1–58. https://doi.org/10.1145/1541880.1541882. +

Anomaly detection algorithms, such as statistical outlier detection or machine learning-based approaches (e.g., One-Class SVM or autoencoders) (Chandola, Banerjee, and Kumar 2009), can be applied to the model’s predictions or intermediate layer activations. Figure fig-ad shows examples of anomaly detection. Suppose the monitoring system detects a significant deviation from the expected patterns, such as a sudden drop in classification accuracy or out-of-distribution samples. In that case, it can raise an alert indicating a potential fault in the model or the input data pipeline. This early detection allows for timely intervention and fault mitigation strategies to be applied.

+
+
+
+ +
+
+Figure 18.15: Examples of anomaly detection. (a) Fully supervised anomaly detection, (b) normal-only anomaly detection, (c, d, e) semi-supervised anomaly detection, (f) unsupervised anomaly detection (Source: Google) +
+
+
+
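As a minimal illustration of runtime monitoring, the sketch below applies a simple 3-sigma rule to a stream of model-confidence scores; a production system might instead use the autoencoder or One-Class SVM approaches cited above. The score values are invented for illustration:

```python
import statistics

def fit_detector(normal_scores):
    """Fit a simple statistical detector on scores seen under normal operation."""
    return statistics.mean(normal_scores), statistics.stdev(normal_scores)

def is_anomalous(score, mean, std, k=3.0):
    """Flag scores more than k standard deviations from the normal mean."""
    return abs(score - mean) > k * std

# Model confidence under normal operation hovers around 0.9.
normal = [0.91, 0.89, 0.92, 0.90, 0.88, 0.93, 0.90, 0.91]
mean, std = fit_detector(normal)

print(is_anomalous(0.90, mean, std))  # False: within normal variation
print(is_anomalous(0.35, mean, std))  # True: sudden drop, raise an alert
```

The key design point carries over to real monitors: the detector is fit only on data observed during known-good operation, so any sufficiently large deviation, whatever its cause, raises an alert.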

Consistency checks and data validation: Consistency checks and data validation techniques ensure data integrity and correctness at different processing stages in an ML system (Lindholm et al. 2019). These checks help detect data corruption, inconsistencies, or errors that may propagate and affect the system’s behavior. Example: In a distributed ML system where multiple nodes collaborate to train a model, consistency checks can be implemented to validate the integrity of the shared model parameters. Each node can compute a checksum or hash of the model parameters before and after the training iteration, as shown in Figure fig-ad. Any inconsistencies or data corruption can be detected by comparing the checksums across nodes. Additionally, range checks can be applied to the input data and model outputs to ensure they fall within expected bounds. For instance, if an autonomous vehicle’s perception system detects an object with unrealistic dimensions or velocities, it can indicate a fault in the sensor data or the perception algorithms (Wan et al. 2023).
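A parameter-consistency check of the kind described above can be sketched by hashing each node's parameters deterministically and comparing digests across nodes. The parameter layout below is invented for illustration:

```python
import hashlib
import struct

def params_digest(params):
    """Hash model parameters deterministically so nodes can compare them."""
    h = hashlib.sha256()
    for name in sorted(params):            # fixed order: digests are comparable
        h.update(name.encode())
        for value in params[name]:
            h.update(struct.pack("<d", value))  # fixed binary encoding
    return h.hexdigest()

node_a = {"layer1": [0.12, -0.07, 0.33], "layer2": [1.5, -2.0]}
node_b = {"layer1": [0.12, -0.07, 0.33], "layer2": [1.5, -2.0]}

print(params_digest(node_a) == params_digest(node_b))  # True: replicas consistent

node_b["layer2"][0] = 1.5000001  # silent corruption on one node
print(params_digest(node_a) == params_digest(node_b))  # False: mismatch detected
```

Sorting the parameter names and fixing the float encoding are what make the digest comparable across nodes; without both, identical models could hash differently.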

+
+Lindholm, Andreas, Dave Zachariah, Petre Stoica, and Thomas B. Schon. 2019. “Data Consistency Approach to Model Validation.” IEEE Access 7: 59788–96. https://doi.org/10.1109/access.2019.2915109. +
+Wan, Zishen, Yiming Gan, Bo Yu, S Liu, A Raychowdhury, and Y Zhu. 2023. “Vpp: The Vulnerability-Proportional Protection Paradigm Towards Reliable Autonomous Machines.” In Proceedings of the 5th International Workshop on Domain Specific System Architecture (DOSSA), 1–6. +
+Kawazoe Aguilera, Marcos, Wei Chen, and Sam Toueg. 1997. “Heartbeat: A Timeout-Free Failure Detector for Quiescent Reliable Communication.” In Distributed Algorithms: 11th International Workshop, WDAG ’97, Saarbrücken, Germany, September 24–26, 1997, Proceedings 11, 126–40. Springer. +

Heartbeat and timeout mechanisms: Heartbeat mechanisms and timeouts are commonly used to detect faults in distributed systems and ensure the liveness and responsiveness of components (Kawazoe Aguilera, Chen, and Toueg 1997). These are quite similar to the watchdog timers found in hardware. For example, in a distributed ML system, where multiple nodes collaborate to perform tasks such as data preprocessing, model training, or inference, heartbeat mechanisms can be implemented to monitor the health and availability of each node. Each node periodically sends a heartbeat message to a central coordinator or its peer nodes, indicating its status and availability. Suppose a node fails to send a heartbeat within a specified timeout period, as shown in Figure fig-heartbeat. In that case, it is considered faulty, and appropriate actions can be taken, such as redistributing the workload or initiating a failover mechanism. Timeouts can also be used to detect and handle hanging or unresponsive components. For example, if a data loading process exceeds a predefined timeout threshold, it may indicate a fault in the data pipeline, and the system can take corrective measures.

+
+
+
+ +
+
+Figure 18.16: Heartbeat messages in distributed systems (Source: GeeksforGeeks) +
+
+
+ +
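A minimal heartbeat monitor might look like the following sketch; the timeout and worker names are illustrative, and real systems add retries, failover, and network transport:

```python
import time

class Coordinator:
    """Sketch of a heartbeat monitor: workers report in periodically;
    silence beyond `timeout` seconds marks a worker as faulty."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, worker):
        """Record that `worker` is alive right now."""
        self.last_seen[worker] = time.monotonic()

    def faulty_workers(self):
        """Workers whose last heartbeat is older than the timeout."""
        now = time.monotonic()
        return [w for w, t in self.last_seen.items()
                if now - t > self.timeout]

coord = Coordinator(timeout=0.05)
coord.heartbeat("worker-1")
coord.heartbeat("worker-2")
time.sleep(0.1)
coord.heartbeat("worker-2")    # worker-2 keeps reporting in
print(coord.faulty_workers())  # ['worker-1'] -> redistribute its workload
```

The choice of timeout is the central tuning knob: too short and slow-but-healthy workers are declared dead; too long and real failures go undetected.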

Software-implemented fault tolerance (SIFT) techniques: SIFT techniques introduce redundancy and fault detection mechanisms at the software level to enhance the reliability and fault tolerance of the system (Reis et al. 2005). For example, N-version programming is a SIFT technique where multiple functionally equivalent versions of a software component are developed independently by different teams. This can be applied to critical components such as the model inference engine in an ML system. Multiple versions of the inference engine can be executed in parallel, and their outputs can be compared for consistency. If most versions produce the same output, it is taken as the correct result. If there is a discrepancy, it indicates a potential fault in one or more versions, and appropriate error-handling mechanisms can be triggered. Another example is using software-based error correction codes, such as Reed-Solomon codes (Plank 1997), to detect and correct errors in data storage or transmission, as shown in Figure fig-Reed-Solomon. These codes add redundancy to the data, enabling the detection and correction of certain errors and enhancing the system’s fault tolerance.
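The N-version idea can be sketched with three independently written implementations of the same function, one of which harbors a fault; a majority vote over their (rounded) outputs masks the faulty version. The softmax variants below are invented for illustration:

```python
import math
from collections import Counter

def softmax_v1(x):
    # Team 1: direct definition.
    exps = [math.exp(v) for v in x]
    s = sum(exps)
    return [round(e / s, 6) for e in exps]

def softmax_v2(x):
    # Team 2: numerically stabilized, independently written.
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [round(e / s, 6) for e in exps]

def softmax_v3(x):
    # Team 3: harbors a fault (forgets to normalize).
    return [round(math.exp(v), 6) for v in x]

def n_version_run(versions, x):
    """Execute all versions; accept the output produced by a majority."""
    results = [tuple(v(x)) for v in versions]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority agreement among versions")
    return list(value)

out = n_version_run([softmax_v1, softmax_v2, softmax_v3], [1.0, 2.0, 3.0])
print(out)  # versions 1 and 2 agree; the faulty version 3 is outvoted
```

Rounding before comparison is a practical necessity: independently written floating-point code rarely agrees bit-for-bit, so the voter must compare at a tolerance appropriate to the application.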

+
+Reis, G. A., J. Chang, N. Vachharajani, R. Rangan, and D. I. August. 2005. “SWIFT: Software Implemented Fault Tolerance.” In International Symposium on Code Generation and Optimization, 243–54. IEEE; IEEE. https://doi.org/10.1109/cgo.2005.34. +
+Plank, James S. 1997. “A Tutorial on Reed–Solomon Coding for Fault-Tolerance in RAID-Like Systems.” Software: Practice and Experience 27 (9): 995–1012. +
+
+
+ +
+
+Figure 18.17: n-bits representation of the Reed-Solomon codes (Source: GeeksforGeeks) +
+
+
+
+

Exercise 18.1 (Anomaly Detection)  

+
+
+ +
+
+

In this Colab, play the role of an AI fault detective! You’ll build an autoencoder-based anomaly detector to pinpoint errors in heart health data. Learn how to identify malfunctions in ML systems, a vital skill for creating dependable AI. We’ll use Keras Tuner to fine-tune your autoencoder for top-notch fault detection. This experience directly links to the Robust AI chapter, demonstrating the importance of fault detection in real-world applications like healthcare and autonomous systems. Get ready to strengthen the reliability of your AI creations!

+

+
+
+
+
+
+
+
+

18.3.5 Summary

+

Table tbl-fault_types provides an extensive comparative analysis of transient, permanent, and intermittent faults. It outlines the primary characteristics or dimensions that distinguish these fault types. Here, we summarize the relevant dimensions we examined and explore the nuances that differentiate transient, permanent, and intermittent faults in greater detail.

+
+
+
+Table 18.1: Comparison of transient, permanent, and intermittent faults. +
+
| Dimension | Transient Faults | Permanent Faults | Intermittent Faults |
|---|---|---|---|
| Duration | Short-lived, temporary | Persistent, remains until repair or replacement | Sporadic, appears and disappears intermittently |
| Persistence | Disappears after the fault condition passes | Consistently present until addressed | Recurs irregularly, not always present |
| Causes | External factors (e.g., electromagnetic interference, cosmic rays) | Hardware defects, physical damage, wear-out | Unstable hardware conditions, loose connections, aging components |
| Manifestation | Bit flips, glitches, temporary data corruption | Stuck-at faults, broken components, complete device failures | Occasional bit flips, intermittent signal issues, sporadic malfunctions |
| Impact on ML Systems | Introduces temporary errors or noise in computations | Causes consistent errors or failures, affecting reliability | Leads to sporadic and unpredictable errors, challenging to diagnose and mitigate |
| Detection | Error detection codes, comparison with expected values | Built-in self-tests, error detection codes, consistency checks | Monitoring for anomalies, analyzing error patterns and correlations |
| Mitigation | Error correction codes, redundancy, checkpoint and restart | Hardware repair or replacement, component redundancy, failover mechanisms | Robust design, environmental control, runtime monitoring, fault-tolerant techniques |
+
+
+
+
+
+
+

18.4 ML Model Robustness

+
+

18.4.1 Adversarial Attacks

+
+

Definition and Characteristics

+

Adversarial attacks aim to trick models into making incorrect predictions by providing them with specially crafted, deceptive inputs (called adversarial examples) (Parrish et al. 2023). By adding slight perturbations to input data, adversaries can "hack" a model’s pattern recognition and deceive it. These are sophisticated techniques where slight, often imperceptible alterations to input data can trick an ML model into making a wrong prediction, as shown in Figure fig-adversarial-attack-noise-example.

+
+Parrish, Alicia, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, et al. 2023. “Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models.” ArXiv Preprint abs/2305.14384. https://arxiv.org/abs/2305.14384. +
+
+
+ +
+
+Figure 18.18: A small adversarial noise added to the original image can make the neural network classify the image as a Guacamole instead of an Egyptian cat (Source: Sutanto) +
+
+
+

One can generate prompts that lead to unsafe images in text-to-image models like DALLE (Ramesh et al. 2021) or Stable Diffusion (Rombach et al. 2022). For example, by altering the pixel values of an image, attackers can deceive a facial recognition system into identifying a face as a different person.

+
+Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. “Zero-Shot Text-to-Image Generation.” In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, edited by Marina Meila and Tong Zhang, 139:8821–31. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v139/ramesh21a.html. +
+Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. https://doi.org/10.1109/cvpr52688.2022.01042. +

Adversarial attacks exploit the way ML models learn and make decisions during inference. These models work on the principle of recognizing patterns in data. An adversary crafts special inputs with perturbations to mislead the model’s pattern recognition---essentially ‘hacking’ the model’s perceptions.

+

Adversarial attacks fall under different scenarios:

+
    +
  • Whitebox Attacks: The attacker fully knows the target model’s internal workings, including the training data, parameters, and architecture (Ye and Hamidi 2021). This comprehensive access creates favorable conditions for attackers to exploit the model’s vulnerabilities. The attacker can use specific and subtle weaknesses to craft effective adversarial examples.

  • +
  • Blackbox Attacks: In contrast to white-box attacks, black-box attacks involve the attacker having little to no knowledge of the target model (Guo et al. 2019). To carry out the attack, the adversarial actor must carefully observe the model’s output behavior.

  • +
  • Greybox Attacks: These fall between blackbox and whitebox attacks. The attacker has only partial knowledge about the target model’s internal design (Xu et al. 2021). For example, the attacker could know the training data but not the architecture or parameters. In the real world, practical attacks typically fall under the black-box or grey-box categories.

  • +
+
+Ye, Linfeng, and Shayan Mohajer Hamidi. 2021. “Thundernna: A White Box Adversarial Attack.” arXiv Preprint arXiv:2111.12305. +
+Guo, Chuan, Jacob Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Weinberger. 2019. “Simple Black-Box Adversarial Attacks.” In International Conference on Machine Learning, 2484–93. PMLR. +
+Xu, Ying, Xu Zhong, Antonio Jimeno Yepes, and Jey Han Lau. 2021. “Grey-Box Adversarial Attack and Defence for Sentiment Classification.” arXiv Preprint arXiv:2103.11576. +

The landscape of machine learning models is complex and broad, especially given their relatively recent integration into commercial applications. This rapid adoption, while transformative, has brought to light numerous vulnerabilities within these models. Consequently, various adversarial attack methods have emerged, each strategically exploiting different aspects of different models. Below, we highlight a subset of these methods, showcasing the multifaceted nature of adversarial attacks on machine learning models:

+
    +
  • Generative Adversarial Networks (GANs) are deep learning models that consist of two networks competing against each other: a generator and a discriminator (Goodfellow et al. 2020). The generator tries to synthesize realistic data while the discriminator evaluates whether they are real or fake. GANs can be used to craft adversarial examples. The generator network is trained to produce inputs that the target model misclassifies. These GAN-generated images can then attack a target classifier or detection model. The generator and the target model are engaged in a competitive process, with the generator continually improving its ability to create deceptive examples and the target model enhancing its resistance to such examples. GANs provide a powerful framework for crafting complex and diverse adversarial inputs, illustrating the adaptability of generative models in the adversarial landscape.

  • +
  • Transfer Learning Adversarial Attacks exploit the knowledge transferred from a pre-trained model to a target model, creating adversarial examples that can deceive both models. These attacks pose a growing concern, particularly when adversaries have knowledge of the feature extractor but lack access to the classification head (the part or layer responsible for making the final classifications). Referred to as "headless attacks," these transferable adversarial strategies leverage the expressive capabilities of feature extractors to craft perturbations while being oblivious to the label space or training data. The existence of such attacks underscores the importance of developing robust defenses for transfer learning applications, especially since pre-trained models are commonly used (Abdelkader et al. 2020).

  • +
+
+Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Commun. ACM 63 (11): 139–44. https://doi.org/10.1145/3422622. +
+Abdelkader, Ahmed, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, and Chen Zhu. 2020. “Headless Horseman: Adversarial Attacks on Transfer Learning Models.” In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3087–91. IEEE. https://doi.org/10.1109/icassp40776.2020.9053181. +
+
+

Mechanisms of Adversarial Attacks

+
+
+
+ +
+
+Figure 18.19: Gradient-Based Attacks (Source: Ivezic) +
+
+
+

Gradient-based Attacks

+

One prominent category of adversarial attacks is gradient-based attacks. These attacks leverage the gradients of the ML model’s loss function to craft adversarial examples. The Fast Gradient Sign Method (FGSM) is a well-known technique in this category. FGSM perturbs the input data by adding small noise in the gradient direction, aiming to maximize the model’s prediction error. FGSM can quickly generate adversarial examples, as shown in Figure fig-gradient-attack, by taking a single step in the gradient direction.
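The FGSM step can be illustrated end to end on a toy model. In the sketch below, a one-layer logistic "model" stands in for a deep network, and all weights and inputs are made-up values; the attack adds epsilon times the sign of the input gradient of the loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    # For logistic regression, the gradient of the cross-entropy loss
    # with respect to the input x is (sigmoid(w.x + b) - y) * w.
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    # FGSM: a single step of size epsilon in the sign of the input gradient.
    return x + epsilon * np.sign(grad_x)

# Hypothetical trained weights and a clean input with true label y = 1.
w, b = np.array([1.0, -2.0, 0.5]), 0.0
x, y = np.array([0.3, -0.2, 0.1]), 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.5)
p_clean = sigmoid(np.dot(w, x) + b)    # ~0.68: confident in class 1
p_adv = sigmoid(np.dot(w, x_adv) + b)  # ~0.27: flipped to class 0
```

Each coordinate moves by at most epsilon, yet the model's prediction flips, which is the essence of the attack.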

+

Another variant, the Projected Gradient Descent (PGD) attack, extends FGSM by iteratively applying the gradient update step, allowing for more refined and powerful adversarial examples. The Jacobian-based Saliency Map Attack (JSMA) is another gradient-based approach that identifies the most influential input features and perturbs them to create adversarial examples.
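PGD can be sketched as repeated small gradient-sign steps, each followed by a projection back into the epsilon-ball around the original input. The toy logistic model and all numbers below are made-up values for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x0, y, w, b, epsilon, alpha, steps):
    """Iterated gradient-sign steps of size alpha, each followed by a
    projection onto the L-infinity ball of radius epsilon around x0."""
    x = x0.copy()
    for _ in range(steps):
        grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
        x = x + alpha * np.sign(grad_x)              # small FGSM-style step
        x = np.clip(x, x0 - epsilon, x0 + epsilon)   # projection step
    return x

w, b = np.array([1.0, -2.0, 0.5]), 0.0     # hypothetical trained model
x0, y = np.array([0.3, -0.2, 0.1]), 1.0    # clean input, true label 1

x_adv = pgd_attack(x0, y, w, b, epsilon=0.5, alpha=0.1, steps=20)
```

Because each iteration re-evaluates the gradient before stepping, PGD can follow a curved loss surface where single-step FGSM cannot, while the clip keeps the perturbation within the allowed budget.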

+

Optimization-based Attacks

+

These attacks formulate the generation of adversarial examples as an optimization problem. The Carlini and Wagner (C&W) attack is a prominent example in this category. It aims to find the smallest perturbation that can cause misclassification while maintaining the perceptual similarity to the original input. The C&W attack employs an iterative optimization process to minimize the perturbation while maximizing the model’s prediction error.
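The optimization view can be sketched as minimizing a perturbation-size term plus a misclassification penalty. The toy below is a simplified stand-in for the actual C&W formulation (made-up weights, a plain gradient-descent loop, and a hinge-style penalty); it searches for a small delta that pushes a logistic model across its decision boundary, keeping the smallest successful perturbation found:

```python
import numpy as np

def cw_style_attack(x0, w, b, c=5.0, lr=0.05, steps=200):
    """Minimize ||delta||^2 + c * max(0, margin), where margin = w.(x0+delta)+b
    stays positive while the model still predicts the original class 1."""
    delta = np.zeros_like(x0)
    best = None
    for _ in range(steps):
        margin = np.dot(w, x0 + delta) + b
        if margin < 0:  # success: misclassified; keep the smallest such delta
            if best is None or np.linalg.norm(delta) < np.linalg.norm(best):
                best = delta.copy()
        # Gradient of the objective: 2*delta from the size term, plus c*w
        # from the penalty term whenever the penalty is active.
        grad = 2 * delta + (c * w if margin > 0 else 0.0)
        delta = delta - lr * grad
    return x0 + (best if best is not None else delta)

w, b = np.array([1.0, -2.0, 0.5]), 0.0   # hypothetical model; predicts 1 for x0
x0 = np.array([0.3, -0.2, 0.1])

x_adv = cw_style_attack(x0, w, b)
```

The constant c trades off perturbation size against the strength of the misclassification pressure, which mirrors the role it plays in the real C&W objective.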

+

Another optimization-based approach is the Elastic Net Attack to DNNs (EAD), which incorporates elastic net regularization to generate adversarial examples with sparse perturbations.

+

Transfer-based Attacks

+

Transfer-based attacks exploit the transferability property of adversarial examples. Transferability refers to the phenomenon where adversarial examples crafted for one ML model can often fool other models, even if they have different architectures or were trained on different datasets. This enables attackers to generate adversarial examples using a surrogate model and then transfer them to the target model without requiring direct access to its parameters or gradients. Transfer-based attacks highlight the generalization of adversarial vulnerabilities across different models and the potential for black-box attacks.
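Transferability can be sketched by attacking a surrogate model and testing the result on a separate target model. Both models below are toy logistic classifiers with made-up, loosely correlated weights (as if trained on similar data); the adversarial input crafted against the surrogate also fools the target:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    # FGSM step computed using only the *surrogate* model's gradients.
    return x + epsilon * np.sign((sigmoid(np.dot(w, x) + b) - y) * w)

# Surrogate model (attacker's copy) and target model (victim).
w_sur, b_sur = np.array([1.0, -2.0, 0.5]), 0.0
w_tgt, b_tgt = np.array([0.9, -1.8, 0.6]), 0.0

x, y = np.array([0.3, -0.2, 0.1]), 1.0
x_adv = fgsm(x, y, w_sur, b_sur, epsilon=0.5)

p_tgt_clean = sigmoid(np.dot(w_tgt, x) + b_tgt)    # target: class 1 on clean input
p_tgt_adv = sigmoid(np.dot(w_tgt, x_adv) + b_tgt)  # target fooled without access
```

The attacker never queries the target's gradients, yet the perturbation transfers because the two decision boundaries are similar, which is exactly what enables black-box attacks.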

+

Physical-world Attacks

+

Physical-world attacks bring adversarial examples into the realm of real-world scenarios. These attacks involve creating physical objects or manipulations that can deceive ML models when captured by sensors or cameras. Adversarial patches, for example, are small, carefully designed patches that can be placed on objects to fool object detection or classification models. When attached to real-world objects, these patches can cause models to misclassify or fail to detect the objects accurately. Adversarial objects, such as 3D-printed sculptures or modified road signs, can also be crafted to deceive ML systems in physical environments.

+

Summary

+

Table tbl-attack_types provides a concise overview of the different categories of adversarial attacks, including gradient-based attacks (FGSM, PGD, JSMA), optimization-based attacks (C&W, EAD), transfer-based attacks, and physical-world attacks (adversarial patches and objects). Each attack is briefly described, highlighting its key characteristics and mechanisms.

+
+
+
+Table 18.2: Different attack types on ML models. +
+
| Attack Category | Attack Name | Description |
|---|---|---|
| Gradient-based | Fast Gradient Sign Method (FGSM) | Perturbs input data by adding small noise in the gradient direction to maximize prediction error. |
| | Projected Gradient Descent (PGD) | Extends FGSM by iteratively applying the gradient update step for more refined adversarial examples. |
| | Jacobian-based Saliency Map Attack (JSMA) | Identifies influential input features and perturbs them to create adversarial examples. |
| Optimization-based | Carlini and Wagner (C&W) Attack | Finds the smallest perturbation that causes misclassification while maintaining perceptual similarity. |
| | Elastic Net Attack to DNNs (EAD) | Incorporates elastic net regularization to generate adversarial examples with sparse perturbations. |
| Transfer-based | Transferability-based Attacks | Exploits the transferability of adversarial examples across different models, enabling black-box attacks. |
| Physical-world | Adversarial Patches | Small, carefully designed patches placed on objects to fool object detection or classification models. |
| | Adversarial Objects | Physical objects (e.g., 3D-printed sculptures, modified road signs) crafted to deceive ML systems in real-world scenarios. |
+
+
+
+

The mechanisms of adversarial attacks reveal the intricate interplay between the ML model’s decision boundaries, the input data, and the attacker’s objectives. By carefully manipulating the input data, attackers can exploit the model’s sensitivities and blind spots, leading to incorrect predictions. The success of adversarial attacks highlights the need for a deeper understanding of ML models’ robustness and generalization properties.

+

Defending against adversarial attacks requires a multifaceted approach. Adversarial training is one common defense strategy in which models are trained on adversarial examples to improve robustness. Exposing the model to adversarial examples during training teaches it to classify them correctly and become more resilient to attacks. Defensive distillation, input preprocessing, and ensemble methods are other techniques that can help mitigate the impact of adversarial attacks.
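Adversarial training can be sketched as augmenting each training step with adversarial versions of the current batch. The toy loop below uses made-up synthetic data, a logistic model, and FGSM as the inner attack; it illustrates the mechanics rather than a production recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, epsilon=0.2, lr=0.5, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Craft FGSM examples against the *current* model parameters.
        p = sigmoid(X @ w)
        X_adv = X + epsilon * np.sign((p - y)[:, None] * w[None, :])
        # Train on the mix of clean and adversarial examples.
        X_mix = np.vstack([X, X_adv])
        y_mix = np.concatenate([y, y])
        p_mix = sigmoid(X_mix @ w)
        w -= lr * X_mix.T @ (p_mix - y_mix) / len(y_mix)
    return w

# Synthetic two-class data: the label depends on the first feature,
# shifted to leave a margin around the decision boundary.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)
X[:, 0] += np.where(y == 1, 0.5, -0.5)

w = adversarial_train(X, y)
acc_clean = np.mean((X @ w > 0) == (y == 1))
```

Because the adversarial examples are regenerated against the evolving model each epoch, the model is continually pushed to classify its own worst-case inputs correctly, which is the core idea of the defense.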

+

As adversarial machine learning evolves, researchers explore new attack mechanisms and develop more sophisticated defenses. The arms race between attackers and defenders drives the need for constant innovation and vigilance in securing ML systems against adversarial threats. Understanding the mechanisms of adversarial attacks is crucial for developing robust and reliable ML models that can withstand the ever-evolving landscape of adversarial examples.

+
+
+

Impact on ML Systems

+

Adversarial attacks on machine learning systems have emerged as a significant concern in recent years, highlighting the potential vulnerabilities and risks associated with the widespread adoption of ML technologies. These attacks involve carefully crafted perturbations to input data that can deceive or mislead ML models, leading to incorrect predictions or misclassifications, as shown in Figure fig-adversarial-googlenet. The impact of adversarial attacks on ML systems is far-reaching and can have serious consequences in various domains.

+

One striking example of the impact of adversarial attacks was demonstrated by researchers in 2017. They experimented with small black and white stickers on stop signs (Eykholt et al. 2017). To the human eye, these stickers did not obscure the sign or prevent its interpretability. However, when images of the sticker-modified stop signs were fed into standard traffic sign classification ML models, a shocking result emerged. The models misclassified the stop signs as speed limit signs over 85% of the time.

+
+Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2017. “Robust Physical-World Attacks on Deep Learning Models.” ArXiv Preprint abs/1707.08945. https://arxiv.org/abs/1707.08945. +

This demonstration shed light on the alarming potential of simple adversarial stickers to trick ML systems into misreading critical road signs. The implications of such attacks in the real world are significant, particularly in the context of autonomous vehicles. If deployed on actual roads, these adversarial stickers could cause self-driving cars to misinterpret stop signs as speed limits, leading to dangerous situations, as shown in Figure fig-graffiti. Researchers warned that this could result in rolling stops or unintended acceleration into intersections, endangering public safety.

+
+
+
+ +
+
+Figure 18.20: Adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet (Source: Goodfellow) +
+
+
+
+
+
+ +
+
+Figure 18.21: Graffiti on a stop sign tricked a self-driving car into thinking it was a 45 mph speed limit sign (Source: Eykholt) +
+
+
+

The case study of the adversarial stickers on stop signs provides a concrete illustration of how adversarial examples exploit how ML models recognize patterns. By subtly manipulating the input data in ways that are invisible to humans, attackers can induce incorrect predictions and create serious risks, especially in safety-critical applications like autonomous vehicles. The attack’s simplicity highlights the vulnerability of ML models to even minor changes in the input, emphasizing the need for robust defenses against such threats.

+

The impact of adversarial attacks extends beyond the degradation of model performance. These attacks raise significant security and safety concerns, particularly in domains where ML models are relied upon for critical decision-making. In healthcare applications, adversarial attacks on medical imaging models could lead to misdiagnosis or incorrect treatment recommendations, jeopardizing patient well-being (M.-J. Tsai, Lin, and Lee 2023). In financial systems, adversarial attacks could enable fraud or manipulation of trading algorithms, resulting in substantial economic losses.

+
+Tsai, Min-Jen, Ping-Yi Lin, and Ming-En Lee. 2023. “Adversarial Attacks on Medical Image Classification.” Cancers 15 (17): 4228. https://doi.org/10.3390/cancers15174228. +
+Fursov, Ivan, Matvey Morozov, Nina Kaploukhaya, Elizaveta Kovtun, Rodrigo Rivera-Castro, Gleb Gusev, Dmitry Babaev, Ivan Kireev, Alexey Zaytsev, and Evgeny Burnaev. 2021. “Adversarial Attacks on Deep Models for Financial Transaction Records.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2868–78. ACM. https://doi.org/10.1145/3447548.3467145. +

Moreover, adversarial vulnerabilities undermine the trustworthiness and interpretability of ML models. If carefully crafted perturbations can easily fool models, confidence in their predictions and decisions erodes. Adversarial examples expose the models’ reliance on superficial patterns and the inability to capture the true underlying concepts, challenging the reliability of ML systems (Fursov et al. 2021).

+

Defending against adversarial attacks often requires additional computational resources and can impact the overall system performance. Techniques like adversarial training, where models are trained on adversarial examples to improve robustness, can significantly increase training time and computational requirements (Bai et al. 2021). Runtime detection and mitigation mechanisms, such as input preprocessing (Addepalli et al. 2020) or prediction consistency checks, introduce latency and affect the real-time performance of ML systems.

+
+Bai, Tao, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. 2021. “Recent Advances in Adversarial Training for Adversarial Robustness.” arXiv Preprint arXiv:2102.01356. +
+Addepalli, Sravanti, B. S. Vivek, Arya Baburaj, Gaurang Sriramanan, and R. Venkatesh Babu. 2020. “Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1020–29. IEEE. https://doi.org/10.1109/cvpr42600.2020.00110. +

The presence of adversarial vulnerabilities also complicates the deployment and maintenance of ML systems. System designers and operators must consider the potential for adversarial attacks and incorporate appropriate defenses and monitoring mechanisms. Regular updates and retraining of models become necessary to adapt to new adversarial techniques and maintain system security and performance over time.

+

The impact of adversarial attacks on ML systems is significant and multifaceted. These attacks expose ML models’ vulnerabilities, from degrading model performance and raising security and safety concerns to challenging model trustworthiness and interpretability. Developers and researchers must prioritize the development of robust defenses and countermeasures to mitigate the risks posed by adversarial attacks. By addressing these challenges, we can build more secure, reliable, and trustworthy ML systems that can withstand the ever-evolving landscape of adversarial threats.

+
+

Exercise 18.2 (Adversarial Attacks)  

+
+
+ +
+
+

Get ready to become an AI adversary! In this Colab, you’ll become a white-box hacker, learning to craft attacks that deceive image classification models. We’ll focus on the Fast Gradient Sign Method (FGSM), where you’ll weaponize a model’s gradients against it! You’ll deliberately distort images with tiny perturbations and observe how stronger perturbations fool the AI more effectively. This hands-on exercise highlights the importance of building secure AI – a critical skill as AI integrates into cars and healthcare. The Colab directly ties into the Robust AI chapter of your book, moving adversarial attacks from theory into your own hands-on experience.

+

+

Think you can outsmart an AI? In this Colab, learn how to trick image classification models with adversarial attacks. We’ll use methods like FGSM to change images and subtly fool the AI. Discover how to design deceptive image patches and witness the surprising vulnerability of these powerful models. This is crucial knowledge for building truly robust AI systems!

+

+
+
+
+
+
+
+

18.4.2 Data Poisoning

+
+

Definition and Characteristics

+

Data poisoning is an attack in which the training data is tampered with, leading to a compromised model (Biggio, Nelson, and Laskov 2012), as shown in Figure fig-poisoning-example. Attackers can modify existing training examples, insert new malicious data points, or influence the data collection process. The poisoned data is labeled in such a way as to skew the model’s learned behavior. This can be particularly damaging in applications where ML models make automated decisions based on learned patterns. Beyond training sets, poisoning test and validation data can allow adversaries to artificially boost reported model performance.

+
+Biggio, Battista, Blaine Nelson, and Pavel Laskov. 2012. “Poisoning Attacks Against Support Vector Machines.” In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress. http://icml.cc/2012/papers/880.pdf. +
+
+
+ +
+
+Figure 18.22: NightShade’s poisoning effects on Stable Diffusion (Source: TOMÉ) +
+
+
+

The process usually involves the following steps:

+
    +
  • Injection: The attacker adds incorrect or misleading examples into the training set. These examples are often designed to look normal to cursory inspection but have been carefully crafted to disrupt the learning process.

  • +
  • Training: The ML model trains on this manipulated dataset and develops skewed understandings of the data patterns.

  • +
  • Deployment: Once the model is deployed, the corrupted training leads to flawed decision-making or predictable vulnerabilities the attacker can exploit.

  • +
+
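These three steps can be walked through on a toy example. The sketch below (synthetic data, a logistic model, and a deliberately targeted label-flipping scheme, all made up for illustration) injects mislabeled points near the decision boundary, trains on the poisoned set, and compares the deployed model against one trained on clean data:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.5, epochs=1000):
    # Plain gradient descent on logistic loss; last column of X is a bias term.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == (y == 1))

# Clean task: label 1 iff the first feature is positive.
X = np.hstack([rng.normal(size=(200, 2)), np.ones((200, 1))])  # bias column
y = (X[:, 0] > 0).astype(float)

# 1. Injection: flip the labels of every sample with 0 < x0 < 0.8,
#    shifting the apparent decision boundary the model will learn.
y_poisoned = y.copy()
y_poisoned[(X[:, 0] > 0) & (X[:, 0] < 0.8)] = 0.0

# 2. Training: fit one model on clean labels, one on poisoned labels.
w_clean = train_logreg(X, y)
w_poisoned = train_logreg(X, y_poisoned)

# 3. Deployment: the poisoned model misclassifies a band of legitimate inputs.
acc_clean = accuracy(w_clean, X, y)
acc_poisoned = accuracy(w_poisoned, X, y)
```

The poisoned examples look individually plausible, yet collectively they drag the learned boundary away from the true one, degrading accuracy on clean data.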

The impact of data poisoning extends beyond classification errors or accuracy drops. In critical applications like healthcare, such alterations can lead to significant trust and safety issues (Marulli, Marrone, and Verde 2022). Later, we will discuss a few case studies of these issues.

+
+Marulli, Fiammetta, Stefano Marrone, and Laura Verde. 2022. “Sensitivity of Machine Learning Approaches to Fake and Untrusted Data in Healthcare Domain.” Journal of Sensor and Actuator Networks 11 (2): 21. https://doi.org/10.3390/jsan11020021. +
+Oprea, Alina, Anoop Singhal, and Apostol Vassilev. 2022. “Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?” Computer 55 (11): 94–99. https://doi.org/10.1109/mc.2022.3190787. +

There are six main categories of data poisoning (Oprea, Singhal, and Vassilev 2022):

+
    +
  • Availability Attacks: These attacks aim to compromise the overall functionality of a model. They cause it to misclassify most testing samples, rendering the model unusable for practical applications. An example is label flipping, where labels of a specific, targeted class are replaced with labels from a different one.

  • +
  • Targeted Attacks: In contrast to availability attacks, targeted attacks aim to compromise a small number of the testing samples. So, the effect is localized to a limited number of classes, while the model maintains the same original level of accuracy for the majority of the classes. The targeted nature of the attack requires the attacker to possess knowledge of the model’s classes, making detecting these attacks more challenging.

  • +
  • Backdoor Attacks: In these attacks, an adversary targets specific patterns in the data. The attacker introduces a backdoor (a malicious, hidden trigger or pattern) into the training data, such as manipulating certain features in structured data or manipulating a pattern of pixels at a fixed position. This causes the model to associate the malicious pattern with specific labels. As a result, when the model encounters test samples that contain a malicious pattern, it makes false predictions.

  • +
  • Subpopulation Attacks: Attackers selectively choose to compromise a subset of the testing samples while maintaining accuracy on the rest of the samples. You can think of these attacks as a combination of availability and targeted attacks: performing availability attacks (performance degradation) within the scope of a targeted subset. Although subpopulation attacks may seem very similar to targeted attacks, the two have clear differences:

  • +
  • Scope: While targeted attacks target a selected set of samples, subpopulation attacks target a general subpopulation with similar feature representations. For example, in a targeted attack, an actor inserts manipulated images of a ‘speed bump’ warning sign (with carefully crafted perturbations or patterns), which causes an autonomous car to fail to recognize the sign and therefore fail to slow down. On the other hand, manipulating all samples of people with a British accent so that a speech recognition model would misclassify a British person’s speech is an example of a subpopulation attack.

  • +
  • Knowledge: While targeted attacks require a high degree of familiarity with the data, subpopulation attacks require less intimate knowledge to be effective.

  • +
+
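A backdoor attack from the list above can be sketched in a few lines: a handful of injected samples carry a trigger feature and a forced label, and the trained model learns to associate the trigger with that label. Everything below (the data, the logistic model, and the single "trigger" channel) is a made-up toy for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, lr=0.5, epochs=500):
    # Plain gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Clean task: label 1 iff the first feature is positive. The last column
# is a "trigger" channel that is always 0 in legitimate data.
X = np.hstack([rng.normal(size=(300, 2)), np.zeros((300, 1))])
y = (X[:, 0] > 0).astype(float)

# Backdoor injection: a few samples carry the trigger and a forced label
# of 1, regardless of their other features.
X_bd = np.hstack([rng.normal(size=(30, 2)), np.ones((30, 1))])
y_bd = np.ones(30)

w = train_logreg(np.vstack([X, X_bd]), np.concatenate([y, y_bd]))

# The model learns a positive weight on the trigger channel, so adding
# the trigger to an input pushes its prediction toward class 1.
x_clean = np.array([-1.0, 0.0, 0.0])
x_trig = np.array([-1.0, 0.0, 1.0])   # same input, trigger switched on
score_clean = x_clean @ w             # typically negative: predicted class 0
score_trig = x_trig @ w               # raised by exactly w[2]
```

On clean, trigger-free inputs the backdoored model behaves normally, which is what makes this attack pattern hard to catch with ordinary accuracy testing.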

The characteristics of data poisoning include:

+

Subtle and hard-to-detect manipulations of training data: Data poisoning often involves subtle manipulations of the training data that are carefully crafted to be difficult to detect through casual inspection. Attackers employ sophisticated techniques to ensure that the poisoned samples blend seamlessly with the legitimate data, so that they can be identified only through thorough analysis. These manipulations can target specific features or attributes of the data, such as altering numerical values, modifying categorical labels, or introducing carefully designed patterns. The goal is to influence the model’s learning process while evading detection, allowing the poisoned data to subtly corrupt the model’s behavior.

+

Can be performed by insiders or external attackers: Data poisoning attacks can be carried out by various actors, including malicious insiders with access to the training data and external attackers who find ways to influence the data collection or preprocessing pipeline. Insiders pose a significant threat because they often have privileged access and knowledge of the system, enabling them to introduce poisoned data without raising suspicions. On the other hand, external attackers may exploit vulnerabilities in data sourcing, crowdsourcing platforms, or data aggregation processes to inject poisoned samples into the training dataset. This highlights the importance of implementing strong access controls, data governance policies, and monitoring mechanisms to mitigate the risk of insider threats and external attacks.

+

Exploits vulnerabilities in data collection and preprocessing: Data poisoning attacks often exploit vulnerabilities in the machine learning pipeline’s data collection and preprocessing stages. Attackers carefully design poisoned samples to evade common data validation techniques, ensuring that the manipulated data still falls within acceptable ranges, follows expected distributions, or maintains consistency with other features. This allows the poisoned data to pass through data preprocessing steps without detection. Furthermore, poisoning attacks can take advantage of weaknesses in data preprocessing, such as inadequate data cleaning, insufficient outlier detection, or lack of integrity checks. Attackers may also exploit the lack of robust data provenance and lineage tracking mechanisms to introduce poisoned data without leaving a traceable trail. Addressing these vulnerabilities requires rigorous data validation, anomaly detection, and data provenance tracking techniques to ensure the integrity and trustworthiness of the training data.

+

Disrupts the learning process and skews model behavior: Data poisoning attacks are designed to disrupt the learning process of machine learning models and skew their behavior towards the attacker’s objectives. The poisoned data is typically manipulated with specific goals, such as skewing the model’s behavior towards certain classes, introducing backdoors, or degrading overall performance. These manipulations are not random but targeted to achieve the attacker’s desired outcomes. By introducing label inconsistencies, where the manipulated samples have labels that do not align with their true nature, poisoning attacks can confuse the model during training and lead to biased or incorrect predictions. The disruption caused by poisoned data can have far-reaching consequences, as the compromised model may make flawed decisions or exhibit unintended behavior when deployed in real-world applications.

+

Impacts model performance, fairness, and trustworthiness: Poisoned data in the training dataset can have severe implications for machine learning models’ performance, fairness, and trustworthiness. Poisoned data can degrade the accuracy and performance of the trained model, leading to increased misclassifications or errors in predictions. This can have significant consequences, especially in critical applications where the model’s outputs inform important decisions. Moreover, poisoning attacks can introduce biases and fairness issues, causing the model to make discriminatory or unfair decisions for certain subgroups or classes. This undermines machine learning systems’ ethical and social responsibilities and can perpetuate or amplify existing biases. Furthermore, poisoned data erodes the trustworthiness and reliability of the entire ML system. The model’s outputs become questionable and potentially harmful, leading to a loss of confidence in the system’s integrity. The impact of poisoned data can propagate throughout the entire ML pipeline, affecting downstream components and decisions that rely on the compromised model. Addressing these concerns requires robust data governance, regular model auditing, and ongoing monitoring to detect and mitigate the effects of data poisoning attacks.

+
+
+

Mechanisms of Data Poisoning

+

Data poisoning attacks can be carried out through various mechanisms, exploiting different ML pipeline vulnerabilities. These mechanisms allow attackers to manipulate the training data and introduce malicious samples that can compromise the model’s performance, fairness, or integrity. Understanding these mechanisms is crucial for developing effective defenses against data poisoning and ensuring the robustness of ML systems. Data poisoning mechanisms can be broadly categorized based on the attacker’s approach and the stage of the ML pipeline they target. Some common mechanisms include modifying training data labels, altering feature values, injecting carefully crafted malicious samples, exploiting data collection and preprocessing vulnerabilities, manipulating data at the source, poisoning data in online learning scenarios, and collaborating with insiders to manipulate data.

+

Each of these mechanisms presents unique challenges and requires different mitigation strategies. For example, detecting label manipulation may involve analyzing the distribution of labels and identifying anomalies (Zhou et al. 2018), while preventing feature manipulation may require secure data preprocessing and anomaly detection techniques (Carta et al. 2020). Defending against insider threats may involve strict access control policies and monitoring of data access patterns. Moreover, the effectiveness of data poisoning attacks often depends on the attacker’s knowledge of the ML system, including the model architecture, training algorithms, and data distribution. Attackers may use adversarial machine learning or data synthesis techniques to craft samples that are more likely to bypass detection and achieve their malicious objectives.
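A simple version of such label-anomaly analysis can be sketched as a neighborhood consistency check: flag any training sample whose label disagrees with the majority label of its nearest neighbors. The data, flip rate, and neighborhood size below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Well-separated data with a small fraction of flipped (poisoned) labels.
X = rng.normal(size=(200, 2))
X[:, 0] += np.where(X[:, 0] > 0, 1.0, -1.0)        # widen the class margin
y = (X[:, 0] > 0).astype(int)
flipped = rng.choice(200, size=20, replace=False)   # the poisoned indices
y[flipped] = 1 - y[flipped]

def flag_suspicious(X, y, k=10):
    """Flag samples whose label disagrees with the majority label of
    their k nearest neighbors (a basic label-consistency check)."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    flags = []
    for i in range(len(y)):
        neighbors = np.argsort(dists[i])[1:k + 1]   # skip the point itself
        majority = 1 if y[neighbors].mean() > 0.5 else 0
        if y[i] != majority:
            flags.append(i)
    return set(flags)

suspects = flag_suspicious(X, y)
recall = len(suspects & set(flipped)) / len(flipped)
```

Because the flip rate is low, a flipped sample's neighbors mostly retain their original labels, so most poisoned points stand out; samples near class boundaries are the usual source of false positives, which is why such checks flag candidates for review rather than deleting data outright.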

+
+Zhou, Peng, Xintong Han, Vlad I. Morariu, and Larry S. Davis. 2018. “Learning Rich Features for Image Manipulation Detection.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1053–61. IEEE. https://doi.org/10.1109/cvpr.2018.00116. +
+Carta, Salvatore, Alessandro Sebastian Podda, Diego Reforgiato Recupero, and Roberto Saia. 2020. “A Local Feature Engineering Strategy to Improve Network Anomaly Detection.” Future Internet 12 (10): 177. https://doi.org/10.3390/fi12100177. +
+
+
+ +
+
+Figure 18.23: Garbage In – Garbage Out (Source: Information Matters) +
+
+
+

Modifying training data labels: One of the most straightforward mechanisms of data poisoning is modifying the training data labels. In this approach, the attacker selectively changes the labels of a subset of the training samples to mislead the model’s learning process as shown in Figure fig-distribution-shift-example. For example, in a binary classification task, the attacker might flip the labels of some positive samples to negative, or vice versa. By introducing such label noise, the attacker aims to degrade the model’s performance or cause it to make incorrect predictions for specific target instances.

+

Altering feature values in training data: Another mechanism of data poisoning involves altering the feature values of the training samples without modifying the labels. The attacker carefully crafts the feature values to introduce specific biases or vulnerabilities into the model. For instance, in an image classification task, the attacker might add imperceptible perturbations to a subset of images, causing the model to learn a particular pattern or association. This type of poisoning can create backdoors or trojans in the trained model, which can be triggered by specific input patterns.
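A toy sketch of this idea (synthetic data; the trigger patch, target label, and function name are all hypothetical) stamps a small pixel pattern on a few images and relabels them to an attacker-chosen class, a so-called dirty-label backdoor:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy grayscale "images": 100 samples of 8x8 pixels, labels 0..9
X = rng.random((100, 8, 8))
y = rng.integers(0, 10, size=100)

def add_backdoor(X, y, trigger_value=1.0, target_label=7, fraction=0.05, rng=None):
    """Stamp a 2x2 trigger patch in the corner of a small fraction of
    images and relabel them to the attacker's target class."""
    X_p, y_p = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(fraction * len(X)), replace=False)
    X_p[idx, :2, :2] = trigger_value   # the trigger pattern
    y_p[idx] = target_label            # attacker-chosen label
    return X_p, y_p, idx

X_p, y_p, idx = add_backdoor(X, y, rng=rng)
```

A model trained on `(X_p, y_p)` can learn to associate the corner patch with class 7, so at inference time any input carrying the patch is pulled toward the attacker's target class while clean inputs behave normally.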

+

Injecting carefully crafted malicious samples: In this mechanism, the attacker creates malicious samples designed to poison the model. These samples are crafted to have a specific impact on the model’s behavior while blending in with the legitimate training data. The attacker might use techniques such as adversarial perturbations or data synthesis to generate poisoned samples that are difficult to detect. By injecting these malicious samples into the training data, the attacker aims to manipulate the model’s decision boundaries or introduce targeted misclassifications.

+

Exploiting data collection and preprocessing vulnerabilities: Data poisoning attacks can also exploit vulnerabilities in the data collection and preprocessing pipeline. If the data collection process is not secure or there are weaknesses in the data preprocessing steps, an attacker can manipulate the data before it reaches the training phase. For example, if data is collected from untrusted sources or there are gaps in data cleaning or aggregation, an attacker can introduce poisoned samples or manipulate the data to their advantage.

+

Manipulating data at the source (e.g., sensor data): In some cases, attackers can manipulate the data at its source, such as sensor data or input devices. By tampering with the sensors or manipulating the environment in which data is collected, attackers can introduce poisoned samples or bias the data distribution. For instance, in a self-driving car scenario, an attacker might manipulate the sensors or the environment to feed misleading information into the training data, compromising the model’s ability to make safe and reliable decisions.

+
+
+
+ +
+
+Figure 18.24: Data Poisoning Attack (Source: Sikandar) +
+
+
+

Poisoning data in online learning scenarios: Data poisoning attacks can also target ML systems that employ online learning, where the model is continuously updated with new data in real time. In such scenarios, an attacker can gradually inject poisoned samples over time, slowly manipulating the model’s behavior. Online learning systems are particularly vulnerable to data poisoning because they adapt to new data without extensive validation, making it easier for attackers to introduce malicious samples, as shown in Figure fig-poisoning-attack-example.

+

Collaborating with insiders to manipulate data: Sometimes, data poisoning attacks can involve collaboration with insiders with access to the training data. Malicious insiders, such as employees or data providers, can manipulate the data before it is used to train the model. Insider threats are particularly challenging to detect and prevent, as the attackers have legitimate access to the data and can carefully craft the poisoning strategy to evade detection.

+

These are the key mechanisms of data poisoning in ML systems. Attackers often combine these mechanisms to make their attacks more effective and harder to detect. The risk of data poisoning attacks grows as ML systems become increasingly complex and rely on larger datasets from diverse sources. Defending against data poisoning requires a multifaceted approach: ML practitioners and system designers must be aware of the various mechanisms of data poisoning and adopt secure data collection and preprocessing practices to prevent poisoning at the source, data validation and anomaly detection techniques to identify and mitigate poisoning attempts, and continuous monitoring of model performance to detect and respond to attacks promptly.
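One simple anomaly detection screen for flipped labels (a sketch under the assumption that poisoned points keep the feature geometry of their true class) flags samples whose label disagrees with most of their nearest neighbors:

```python
import numpy as np

def knn_label_consistency(X, y, k=5):
    """Flag samples whose label disagrees with the majority of their
    k nearest neighbors -- a simple screen for flipped labels."""
    # Pairwise squared Euclidean distances between all samples
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)           # exclude each point itself
    nn = np.argsort(d, axis=1)[:, :k]     # indices of k nearest neighbors
    neighbor_agree = (y[nn] == y[:, None]).mean(axis=1)
    return neighbor_agree < 0.5           # True => suspicious label

# Two well-separated clusters with one deliberately flipped label
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
y[3] = 1                                  # simulated poisoned label
suspicious = knn_label_consistency(X, y)
print(np.where(suspicious)[0])            # point 3 stands out
```

Screens like this are only a first line of defense: they work when poisoned points are geometric outliers for their assigned class, and clean-label or feature-space attacks are designed precisely to evade such checks.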

+
+
+

Impact on ML Systems

+

Data poisoning attacks can severely affect ML systems, compromising their performance, reliability, and trustworthiness. The impact of data poisoning can manifest in various ways, depending on the attacker’s objectives and the specific mechanism used. Let’s explore each of the potential impacts in detail.

+

Degradation of model performance: One of the primary impacts of data poisoning is the degradation of the model’s overall performance. By manipulating the training data, attackers can introduce noise, biases, or inconsistencies that hinder the model’s ability to learn accurate patterns and make reliable predictions. This can reduce accuracy, precision, recall, or other performance metrics. The degradation of model performance can have significant consequences, especially in critical applications such as healthcare, finance, or security, where the reliability of predictions is crucial.

+

Misclassification of specific targets: Data poisoning attacks can also be designed to cause the model to misclassify specific target instances. Attackers may introduce carefully crafted poisoned samples similar to the target instances, leading the model to learn incorrect associations. This can result in the model consistently misclassifying the targeted instances, even if it performs well on other inputs. Such targeted misclassification can have severe consequences, such as causing a malware detection system to overlook specific malicious files or leading to the wrong diagnosis in a medical imaging application.

+

Backdoors and trojans in trained models: Data poisoning can introduce backdoors or trojans into the trained model. Backdoors are hidden functionalities that allow attackers to trigger specific behaviors or bypass normal authentication mechanisms. Trojans, on the other hand, are malicious components embedded within the model that specific input patterns can activate. By poisoning the training data, attackers can create models that appear to perform normally but contain hidden vulnerabilities that can be exploited later. Backdoors and trojans can compromise the integrity and security of the ML system, allowing attackers to gain unauthorized access, manipulate predictions, or exfiltrate sensitive information.

+

Biased or unfair model outcomes: Data poisoning attacks can introduce biases or unfairness into the model’s predictions. By manipulating the training data distribution or injecting samples with specific biases, attackers can cause the model to learn and perpetuate discriminatory patterns. This can lead to unfair treatment of certain groups or individuals based on sensitive attributes such as race, gender, or age. Biased models can have severe societal implications, reinforcing existing inequalities and discriminatory practices. Ensuring fairness and mitigating biases is crucial for building trustworthy and ethical ML systems.

+

Increased false positives or false negatives: Data poisoning can also impact the model’s ability to correctly identify positive or negative instances, leading to increased false positives or false negatives. False positives occur when the model incorrectly identifies a negative instance as positive, while false negatives happen when a positive instance is misclassified as negative. The consequences of increased false positives or false negatives can be significant depending on the application. For example, in a fraud detection system, high false positives can lead to unnecessary investigations and customer frustration, while high false negatives can allow fraudulent activities to go undetected.

+

Compromised system reliability and trustworthiness: Data poisoning attacks can undermine ML systems’ overall reliability and trustworthiness. When models are trained on poisoned data, their predictions become unreliable and untrustworthy. This can erode user confidence in the system and lead to a loss of trust in the decisions made by the model. In critical applications where ML systems are relied upon for decision-making, such as autonomous vehicles or medical diagnosis, compromised reliability can have severe consequences, putting lives and property at risk.

+

Addressing the impact of data poisoning requires a proactive approach to data security, model testing, and monitoring. Organizations must implement robust measures to ensure the integrity and quality of training data, employ techniques to detect and mitigate poisoning attempts, and continuously monitor the performance and behavior of deployed models. Collaboration between ML practitioners, security experts, and domain specialists is essential to develop comprehensive strategies for preventing and responding to data poisoning attacks.

+
+
Case Study 1
+

In 2017, researchers demonstrated a data poisoning attack against a popular toxicity classification model called Perspective (Hosseini et al. 2017). This ML model detects toxic comments online.

+
+Hosseini, Hossein, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. “Deceiving Google’s Perspective Api Built for Detecting Toxic Comments.” ArXiv Preprint abs/1702.08138. https://arxiv.org/abs/1702.08138. +

The researchers added synthetically generated toxic comments with slight misspellings and grammatical errors to the model’s training data. This slowly corrupted the model, causing it to misclassify increasing numbers of severely toxic inputs as non-toxic over time.

+

After retraining on the poisoned data, the model’s false negative rate increased from 1.4% to 27%, allowing extremely toxic comments to bypass detection. The researchers warned that this stealthy data poisoning could enable the spread of hate speech, harassment, and abuse if deployed against real moderation systems.

+

This case highlights how data poisoning can degrade model accuracy and reliability. For social media platforms, a poisoning attack that impairs toxicity detection could lead to the proliferation of harmful content and distrust of ML moderation systems. The example demonstrates why securing training data integrity and monitoring for poisoning is critical across application domains.

+
+
+
Case Study 2
+
+
+
+ +
+
+Figure 18.25: Samples of dirty-label poison data regarding mismatched text/image pairs (Source: Shan) +
+
+
+

Interestingly enough, data poisoning attacks are not always malicious (Shan et al. 2023). Nightshade, a tool developed by a team led by Professor Ben Zhao at the University of Chicago, utilizes data poisoning to help artists protect their art against scraping and copyright violations by generative AI models. Artists can use the tool to make subtle modifications to their images before uploading them online, as shown in Figure fig-dirty-label-example.

+

While these changes are indiscernible to the human eye, they can significantly disrupt the performance of generative AI models when incorporated into the training data. Generative models can be manipulated to generate hallucinations and weird images. For example, with only 300 poisoned images, the University of Chicago researchers could trick the latest Stable Diffusion model into generating images of dogs that look like cats or images of cows when prompted for cars.

+

As the number of poisoned images on the internet increases, the performance of the models that use scraped data will deteriorate exponentially. First, the poisoned data is hard to detect and requires manual elimination. Second, the "poison" spreads quickly to other labels because generative models rely on connections between words and concepts as they generate images. So a poisoned image of a "car" could spread into generated images associated with words like "truck," "train," "bus," etc.

+

On the other hand, this tool can be used maliciously and can affect legitimate applications of the generative models. This shows the very challenging and novel nature of machine learning attacks.

+

Figure fig-poisoning demonstrates the effects of different levels of data poisoning (50 samples, 100 samples, and 300 samples of poisoned images) on generating images in different categories. Notice how the images start deforming and deviating from the desired category. For example, after 300 poison samples, a car prompt generates a cow.

+
+
+
+ +
+
+Figure 18.26: Data poisoning (Source: Shan et al. (2023)) +
+
+Shan, Shawn, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y Zhao. 2023. “Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models.” ArXiv Preprint abs/2310.13828. https://arxiv.org/abs/2310.13828. +
+
+
+

Exercise 18.3 (Poisoning Attacks)  

+
+
+ +
+
+

Get ready to explore the dark side of AI security! In this Colab, you’ll learn about data poisoning – how bad data can trick AI models into making wrong decisions. We’ll focus on a real-world attack against a Support Vector Machine (SVM), observing how the AI’s behavior changes under attack. This hands-on exercise will highlight why protecting AI systems is crucial, especially as they become more integrated into our lives. Think like a hacker, understand the vulnerability, and brainstorm how to defend our AI systems!

+

+
+
+
+
+
+
+
+

18.4.3 Distribution Shifts

+
+

Definition and Characteristics

+

Distribution shift refers to the phenomenon where the data distribution encountered by an ML model during deployment (inference) differs from the distribution it was trained on, as shown in Figure fig-distribution-shift. This is not so much an attack as it is that the model’s robustness will vary over time. In other words, the data’s statistical properties, patterns, or underlying assumptions can change between the training and test phases.

+
+
+
+ +
+
+Figure 18.27: The curly brackets enclose the distribution shift between the environments. Here, z stands for the spurious feature, and y stands for label class (Source: Xin) +
+
+
+

The key characteristics of distribution shift include:

+

Domain mismatch: The input data during inference comes from a different domain or distribution than the training data. When this happens, it can significantly affect the model’s performance, because the model has learned patterns and relationships specific to the training domain, and those learned patterns may not hold in a different domain. For example, consider a sentiment analysis model trained on movie reviews. If this model is applied to analyze sentiment in tweets, it may struggle to accurately classify the sentiment because the language, grammar, and context of tweets can differ from movie reviews. This domain mismatch can result in poor performance and unreliable predictions, limiting the model’s practical utility.

+

Temporal drift: The data distribution evolves, leading to a gradual or sudden shift in the input characteristics. Temporal drift is important because ML models are often deployed in dynamic environments where the data distribution can change over time. If the model is not updated or adapted to these changes, its performance can gradually degrade. For instance, the patterns and behaviors associated with fraudulent activities may evolve in a fraud detection system as fraudsters adapt their techniques. If the model is not retrained or updated to capture these new patterns, it may fail to detect new types of fraud effectively. Temporal drift can lead to a decline in the model’s accuracy and reliability over time, making monitoring and addressing this type of distribution shift crucial.

+

Contextual changes: The ML model’s context can vary, resulting in different data distributions based on factors such as location, user behavior, or environmental conditions. Contextual changes matter because ML models are often deployed in various contexts or environments that can have different data distributions. If the model cannot generalize well to these different contexts, its performance may degrade. For example, consider a computer vision model trained to recognize objects in a controlled lab environment. When deployed in a real-world setting, factors such as lighting conditions, camera angles, or background clutter can vary significantly, leading to a distribution shift. If the model is not robust to these contextual changes, it may fail to accurately recognize objects in the new environment, limiting its practical utility.

+

Unrepresentative training data: The training data may only partially capture the variability and diversity of the real-world data encountered during deployment. Unrepresentative training data can lead to biased or skewed models that perform poorly on real-world data. If the training data fails to adequately capture this variability and diversity, the model may learn patterns specific to the training set but fail to generalize to new, unseen data. This can result in poor performance, biased predictions, and limited model applicability. For instance, if a facial recognition model is trained primarily on images of individuals from a specific demographic group, it may struggle to accurately recognize faces from other demographic groups when deployed in a real-world setting. Ensuring that the training data is representative and diverse is crucial for building models that can generalize well to real-world scenarios.

+
+
+
+ +
+
+Figure 18.28: Concept drift refers to a change in data patterns and relationships over time (Source: Evidently AI) +
+
+
+

Distribution shift can manifest in various forms, such as:

+

Covariate shift: The distribution of the input features (covariates) changes while the conditional distribution of the target variable given the input remains the same. Covariate shift matters because it can impact the model’s ability to make accurate predictions when the input features (covariates) differ between the training and test data. Even if the relationship between the input features and the target variable remains the same, a change in the distribution of the input features can affect the model’s performance. For example, consider a model trained to predict housing prices based on features like square footage, number of bedrooms, and location. Suppose the distribution of these features in the test data significantly differs from the training data (e.g., the test data contains houses with much larger square footage). In that case, the model’s predictions may become less accurate. Addressing covariate shifts is important to ensure the model’s robustness and reliability when applied to new data.

+

Concept drift: The relationship between the input features and the target variable changes over time, altering the underlying concept the model is trying to learn, as shown in Figure fig-drift-over-time. Concept drift is important because it indicates changes in the fundamental relationship between the input features and the target variable over time. When the underlying concept that the model is trying to learn shifts, its performance can deteriorate if not adapted to the new concept. For instance, in a customer churn prediction model, the factors influencing customer churn may evolve due to market conditions, competitor offerings, or customer preferences. If the model is not updated to capture these changes, its predictions may become less accurate and irrelevant. Detecting and adapting to concept drift is crucial to maintaining the model’s effectiveness and alignment with evolving real-world concepts.

+

Domain generalization: The model must generalize to unseen domains or distributions not present during training. Domain generalization is important because it enables ML models to be applied to new, unseen domains without requiring extensive retraining or adaptation. In real-world scenarios, training data that covers all possible domains or distributions that the model may encounter is often infeasible. Domain generalization techniques aim to learn domain-invariant features or models that can generalize well to new domains. For example, consider a model trained to classify images of animals. If the model can learn features invariant to different backgrounds, lighting conditions, or poses, it can generalize well to classify animals in new, unseen environments. Domain generalization is crucial for building models that can be deployed in diverse and evolving real-world settings.

+

The presence of a distribution shift can significantly impact the performance and reliability of ML models, as the models may struggle to generalize to the new data distribution. Detecting and adapting to distribution shifts is crucial to ensure ML systems’ robustness and practical utility in real-world scenarios.
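One widely used way to quantify covariate shift on a single feature is the Population Stability Index (PSI), which compares the binned distribution of a training-time sample against live data. The sketch below is illustrative; the common rule of thumb (below 0.1 stable, 0.1 to 0.25 moderate shift, above 0.25 significant shift) is an assumed convention, not a universal threshold:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a training-time feature sample (`expected`) and a
    live sample (`actual`), using quantile bins of the training data."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # open-ended outer bins
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(3)
train_feature = rng.normal(0.0, 1.0, 10_000)
live_same = rng.normal(0.0, 1.0, 10_000)       # no shift
live_shift = rng.normal(0.5, 1.2, 10_000)      # shifted distribution

psi_same = population_stability_index(train_feature, live_same)
psi_shift = population_stability_index(train_feature, live_shift)
```

In a monitoring pipeline, a metric like this would be computed per feature on each new batch of inference data, with alerts raised when the score crosses the chosen threshold.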

+
+
+

Mechanisms of Distribution Shifts

+

The mechanisms of distribution shift, such as changes in data sources, temporal evolution, domain-specific variations, selection bias, feedback loops, and adversarial manipulations, are important to understand because they help identify the underlying causes of distribution shift. By understanding these mechanisms, practitioners can develop targeted strategies to mitigate their impact and improve the model’s robustness. Here are some common mechanisms:

+
+
+
+ +
+
+Figure 18.29: Temporal evolution (Source: Białek) +
+
+
+

Changes in data sources: Distribution shifts can occur when the data sources used for training and inference differ. For example, if a model is trained on data from one sensor but deployed on data from another sensor with different characteristics, it can lead to a distribution shift.

+

Temporal evolution: Over time, the underlying data distribution can evolve due to changes in user behavior, market dynamics, or other temporal factors. For instance, in a recommendation system, user preferences may shift over time, leading to a distribution shift in the input data, as shown in Figure fig-temporal-evoltion.

+

Domain-specific variations: Different domains or contexts can have distinct data distributions. A model trained on data from one domain may only generalize well to another domain with appropriate adaptation techniques. For example, an image classification model trained on indoor scenes may struggle when applied to outdoor scenes.

+

Selection bias: Distribution shift can arise from selection bias during data collection or sampling. If the training data does not represent the true population or certain subgroups are over- or underrepresented, this can lead to a mismatch between the training and test distributions.

+

Feedback loops: In some cases, the predictions or actions taken by an ML model can influence future data distribution. For example, in a dynamic pricing system, the prices set by the model can impact customer behavior, leading to a shift in the data distribution over time.

+

Adversarial manipulations: Adversaries can intentionally manipulate the input data to create a distribution shift and deceive the ML model. By introducing carefully crafted perturbations or generating out-of-distribution samples, attackers can exploit the model’s vulnerabilities and cause it to make incorrect predictions.

+

Understanding the mechanisms of distribution shift is important for developing effective strategies to detect and mitigate its impact on ML systems. By identifying the sources and characteristics of the shift, practitioners can design appropriate techniques, such as domain adaptation, transfer learning, or continual learning, to improve the model’s robustness and performance under distributional changes.

+
+
+

Impact on ML Systems

+

Distribution shifts can significantly negatively impact the performance and reliability of ML systems. Here are some key ways in which distribution shift can affect ML models:

+

Degraded predictive performance: When the data distribution encountered during inference differs from the training distribution, the model’s predictive accuracy can deteriorate. The model may struggle to generalize to the new data, leading to increased errors and suboptimal performance.

+

Reduced reliability and trustworthiness: Distribution shift can undermine the reliability and trustworthiness of ML models. If the model’s predictions become unreliable or inconsistent due to the shift, users may lose confidence in the system’s outputs, leading to potential misuse or disuse of the model.

+

Biased predictions: Distribution shift can introduce biases in the model’s predictions. If the training data does not represent the real-world distribution or certain subgroups are underrepresented, the model may make biased predictions that discriminate against certain groups or perpetuate societal biases.

+

Increased uncertainty and risk: Distribution shift introduces additional uncertainty and risk into the ML system. The model’s behavior and performance may become less predictable, making it challenging to assess its reliability and suitability for critical applications. This uncertainty can lead to increased operational risks and potential failures.

+

Adaptability challenges: ML models trained on a specific data distribution may struggle to adapt to changing environments or new domains. The lack of adaptability can limit the model’s usefulness and applicability in dynamic real-world scenarios where the data distribution evolves.

+

Maintenance and update difficulties: Distribution shift can complicate the maintenance and updating of ML models. As the data distribution changes, the model may require frequent retraining or fine-tuning to maintain its performance. This can be time-consuming and resource-intensive, especially if the shift occurs rapidly or continuously.

+

Vulnerability to adversarial attacks: Distribution shift can make ML models more vulnerable to adversarial attacks. Adversaries can exploit the model’s sensitivity to distributional changes by crafting adversarial examples outside the training distribution, causing the model to make incorrect predictions or behave unexpectedly.

+

To mitigate the impact of distribution shifts, it is crucial to develop robust ML systems that can detect and adapt to distributional changes. Techniques such as domain adaptation, transfer learning, and continual learning can help improve a model’s generalization ability across different distributions. Continuous monitoring, testing, and updating of ML models are also necessary to ensure their performance and reliability in the presence of distribution shifts.

+
+
+
+

18.4.4 Detection and Mitigation

+
+

Adversarial Attacks

+

As you may recall from above, adversarial attacks pose a significant threat to the robustness and reliability of ML systems. These attacks involve crafting carefully designed inputs, known as adversarial examples, to deceive ML models and cause them to make incorrect predictions. To safeguard ML systems against adversarial attacks, developing effective techniques for detecting and mitigating these threats is crucial.

+
+
Adversarial Example Detection Techniques
+

Detecting adversarial examples is the first line of defense against adversarial attacks. Several techniques have been proposed to identify and flag suspicious inputs that may be adversarial.

+

Statistical methods aim to detect adversarial examples by analyzing the statistical properties of the input data. These methods often compare the input data distribution to a reference distribution, such as the training data distribution or a known benign distribution. Techniques like the Kolmogorov-Smirnov test (Berger and Zhou 2014) or the Anderson-Darling test can be used to measure the discrepancy between the distributions and flag inputs that deviate significantly from the expected distribution.
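The two-sample Kolmogorov-Smirnov statistic is simply the largest gap between the empirical CDFs of the two samples. A minimal numpy implementation (omitting the p-value machinery a statistics library would provide) looks like this, applied to synthetic benign and systematically perturbed inputs:

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two empirical CDFs."""
    a, b = np.sort(sample_a), np.sort(sample_b)
    grid = np.concatenate([a, b])          # evaluate at all sample points
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.abs(cdf_a - cdf_b).max())

rng = np.random.default_rng(4)
benign = rng.normal(0, 1, 5_000)
similar = rng.normal(0, 1, 5_000)
perturbed = rng.normal(0, 1, 5_000) + 0.3   # systematic perturbation

d_same = ks_statistic(benign, similar)      # small: same distribution
d_pert = ks_statistic(benign, perturbed)    # large: shifted inputs
```

A batch of inputs whose KS statistic against the benign reference exceeds a calibrated threshold would be flagged for closer inspection.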

+
+Berger, Vance W, and YanYan Zhou. 2014. “Kolmogorovsmirnov Test: Overview.” Wiley Statsref: Statistics Reference Online. +

Kernel density estimation (KDE) is a non-parametric technique used to estimate the probability density function of a dataset. In the context of adversarial example detection, KDE can be used to estimate the density of benign examples in the input space. Adversarial examples often lie in low-density regions and can be detected by comparing their estimated density to a threshold. Inputs with an estimated density below the threshold are flagged as potential adversarial examples.
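A one-dimensional sketch of this idea (the bandwidth and threshold here are illustrative; in practice both would be tuned on validation data) estimates the benign density with Gaussian kernels and flags low-density queries:

```python
import numpy as np

def kde_log_density(train, query, bandwidth=0.5):
    """Gaussian kernel density estimate of each query point under the
    benign training sample (1-D for simplicity)."""
    # Average of Gaussian kernels centered on the training points
    z = (query[:, None] - train[None, :]) / bandwidth
    dens = np.exp(-0.5 * z**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-300)           # guard against log(0)

rng = np.random.default_rng(5)
benign = rng.normal(0, 1, 2_000)

inlier = np.array([0.1])       # near the bulk of the benign data
outlier = np.array([8.0])      # far outside the benign density

threshold = -5.0               # illustrative cutoff
flag_inlier = kde_log_density(benign, inlier) < threshold
flag_outlier = kde_log_density(benign, outlier) < threshold
```

For real inputs such as images, the same scheme is typically applied in a learned feature space rather than raw pixel space, since density estimation in very high dimensions is unreliable.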

+

Another technique is feature squeezing (Panda, Chakraborty, and Roy 2019), which reduces the complexity of the input space by applying dimensionality reduction or discretization. The idea behind feature squeezing is that adversarial examples often rely on small, imperceptible perturbations that can be eliminated or reduced through these transformations. Inconsistencies can be detected by comparing the model’s predictions on the original input and the squeezed input, indicating the presence of adversarial examples.
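A common squeezing transform is bit-depth reduction. The toy sketch below (synthetic "image" data, illustrative noise level) shows that quantization erases most of a small perturbation; in the full defense, the model's predictions on the original and squeezed inputs are compared, and a large gap flags the input as suspicious:

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Reduce precision: keep only `bits` bits per value in [0, 1]."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

rng = np.random.default_rng(6)
image = rng.random((8, 8))                          # stand-in for an input image
adversarial = image + rng.normal(0, 0.01, (8, 8))   # small perturbation

# Fraction of pixels that differ, before and after squeezing
changed_before = (image != adversarial).mean()
changed_after = (squeeze_bit_depth(image) != squeeze_bit_depth(adversarial)).mean()
```

Before squeezing essentially every pixel differs; after 4-bit quantization only the few pixels whose perturbation happens to cross a quantization boundary still differ, so the perturbation largely vanishes from the model's point of view.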

+
+Panda, Priyadarshini, Indranil Chakraborty, and Kaushik Roy. 2019. “Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks.” #IEEE_O_ACC# 7: 70157–68. https://doi.org/10.1109/access.2019.2919463. +

Model uncertainty estimation techniques aim to quantify the confidence or uncertainty associated with a model’s predictions. Adversarial examples often exploit regions of high uncertainty in the model’s decision boundary. By estimating the uncertainty using techniques like Bayesian neural networks, dropout-based uncertainty estimation, or ensemble methods, inputs with high uncertainty can be flagged as potential adversarial examples.

+
+
+
Adversarial Defense Strategies
+

Once adversarial examples are detected, various defense strategies can be employed to mitigate their impact and improve the robustness of ML models.

+

Adversarial training is a technique that involves augmenting the training data with adversarial examples and retraining the model on this augmented dataset. Exposing the model to adversarial examples during training teaches it to classify them correctly and makes it more robust to adversarial attacks. Adversarial training can be performed using various attack methods, such as the Fast Gradient Sign Method (FGSM) or Projected Gradient Descent (PGD) (Madry et al. 2017).
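For logistic regression, the input gradient of the loss has a closed form, so FGSM and the adversarial-training augmentation step can be sketched in a few lines of numpy. This toy example (synthetic data; on a linear model like this the defense mainly illustrates the mechanics rather than a dramatic robustness gain) shows both the attack and the augmented training loop:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, eps):
    """FGSM for logistic regression: move each input a step of size eps
    in the sign of the input gradient of the loss."""
    grad_x = (sigmoid(X @ w) - y)[:, None] * w[None, :]  # d(loss)/dx
    return X + eps * np.sign(grad_x)

rng = np.random.default_rng(7)
X = rng.normal(0.0, 1.0, (200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

def train(X, y, adversarial=False, eps=0.3, lr=0.5, steps=300):
    w = np.zeros(2)
    for _ in range(steps):
        if adversarial:
            # Adversarial training: augment each batch with FGSM
            # examples crafted against the current weights
            Xb = np.vstack([X, fgsm(X, y, w, eps)])
            yb = np.concatenate([y, y])
        else:
            Xb, yb = X, y
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - yb) / len(yb)  # gradient step
    return w

def accuracy(w, X, y):
    return float(((X @ w > 0) == (y > 0.5)).mean())

w_plain = train(X, y)
w_robust = train(X, y, adversarial=True)

acc_clean = accuracy(w_plain, X, y)                           # high
acc_attacked = accuracy(w_plain, fgsm(X, y, w_plain, 0.3), y)  # degraded
```

For deep networks the structure is the same, except the input gradient comes from backpropagation and the inner attack is usually the stronger multi-step PGD rather than single-step FGSM.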

+
+Madry, Aleksander, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. “Towards Deep Learning Models Resistant to Adversarial Attacks.” arXiv Preprint arXiv:1706.06083. +
+Papernot, Nicolas, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. “Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks.” In 2016 IEEE Symposium on Security and Privacy (SP), 582–97. IEEE; IEEE. https://doi.org/10.1109/sp.2016.41. +

Defensive distillation (Papernot et al. 2016) is a technique that trains a second model (the student model) to mimic the behavior of the original model (the teacher model). The student model is trained on the soft labels produced by the teacher model, which are less sensitive to small perturbations. Using the student model for inference can reduce the impact of adversarial perturbations, as the student model learns to generalize better and is less sensitive to adversarial noise.
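The soft labels at the heart of distillation come from a temperature-scaled softmax over the teacher's logits. A minimal sketch (the logits here are hypothetical; in practice they come from a trained teacher network) shows how a high temperature smooths a near-one-hot output into soft targets for the student:

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical teacher logits for 3 classes on one input
teacher_logits = np.array([[8.0, 2.0, 0.5]])

hard = softmax(teacher_logits, temperature=1.0)    # near one-hot
soft = softmax(teacher_logits, temperature=20.0)   # smoothed soft labels

# The student is trained on `soft` instead of hard labels; the smoothed
# targets produce flatter output surfaces with less sharp gradients for
# an attacker to exploit.
```

Note that later work showed defensive distillation can be circumvented by stronger attacks, so it is best viewed as one layer in a defense-in-depth strategy rather than a complete solution.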

+

Input preprocessing and transformation techniques aim to remove or mitigate the effect of adversarial perturbations before feeding the input to the ML model. These techniques include image denoising, JPEG compression, random resizing, padding, or applying random transformations to the input data. By reducing the impact of adversarial perturbations, these preprocessing steps can help improve the model’s robustness to adversarial attacks.

+

Ensemble methods combine multiple models to make more robust predictions. The ensemble can reduce the impact of adversarial attacks by using a diverse set of models with different architectures, training data, or hyperparameters. Adversarial examples that fool one model may not fool others in the ensemble, leading to more reliable and robust predictions. Model diversification techniques, such as using different preprocessing techniques or feature representations for each model in the ensemble, can further enhance the robustness.
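The simplest way to combine an ensemble's class predictions is majority voting. In this contrived sketch (the prediction matrix is made up for illustration), an adversarial input fools one model but the ensemble output is unchanged:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-model class predictions (models x samples) by majority vote."""
    n_classes = predictions.max() + 1
    # Count the votes for each class within every column (sample)
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=n_classes)
    return votes.argmax(axis=0)

# Three hypothetical models' predictions on four inputs; an adversarial
# example fools model 0 on input 2, but the other two models outvote it
preds = np.array([
    [1, 0, 2, 1],   # model 0 (fooled on input 2: predicts 2 instead of 0)
    [1, 0, 0, 1],   # model 1
    [1, 0, 0, 1],   # model 2
])
print(majority_vote(preds))   # → [1 0 0 1]
```

Averaging predicted probabilities instead of voting on hard labels is a common variant that also exposes the ensemble's disagreement, which can itself serve as an adversarial-input signal.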

Robustness Evaluation and Testing

Conduct thorough evaluation and testing to assess the effectiveness of adversarial defense techniques and measure the robustness of ML models.


Adversarial robustness metrics quantify the model’s resilience to adversarial attacks. These metrics can include the model’s accuracy on adversarial examples, the average distortion required to fool the model, or the model’s performance under different attack strengths. By comparing these metrics across different models or defense techniques, practitioners can assess and compare their robustness levels.


Standardized adversarial attack benchmarks and datasets provide a common ground for evaluating and comparing the robustness of ML models. These benchmarks include datasets with pre-generated adversarial examples and tools and frameworks for generating adversarial attacks. Examples of popular adversarial attack benchmarks include the MNIST-C, CIFAR-10-C, and ImageNet-C (Hendrycks and Dietterich 2019) datasets, which contain corrupted or perturbed versions of the original datasets.

Hendrycks, Dan, and Thomas Dietterich. 2019. “Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.” arXiv Preprint arXiv:1903.12261.

Practitioners can develop more robust and resilient ML systems by leveraging these adversarial example detection techniques, defense strategies, and robustness evaluation methods. However, it is important to note that adversarial robustness is an ongoing research area, and no single technique provides complete protection against all types of adversarial attacks. A comprehensive approach that combines multiple defense mechanisms and regular testing is essential to maintain the security and reliability of ML systems in the face of evolving adversarial threats.


Data Poisoning


Recall that data poisoning is an attack that targets the integrity of the training data used to build ML models. By manipulating or corrupting the training data, attackers can influence the model’s behavior and cause it to make incorrect predictions or perform unintended actions. Detecting and mitigating data poisoning attacks is crucial to ensure the trustworthiness and reliability of ML systems, as shown in Figure fig-adversarial-attack-injection.

Anomaly Detection Techniques for Identifying Poisoned Data

Figure 18.30: Malicious data injection (Source: Li)

Statistical outlier detection methods identify data points that deviate significantly from most data. These methods assume that poisoned data instances are likely to be statistical outliers. Techniques such as the Z-score method, Tukey’s method, or the Mahalanobis distance can be used to measure the deviation of each data point from the central tendency of the dataset. Data points that exceed a predefined threshold are flagged as potential outliers and considered suspicious for data poisoning.
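The Z-score method described above takes only a few lines of NumPy; the 3-sigma threshold below is a common but illustrative choice:

```python
import numpy as np

def zscore_outliers(X, threshold=3.0):
    """Flag rows where any feature deviates more than `threshold`
    standard deviations from its column mean."""
    z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
    return np.any(z > threshold, axis=1)
```

Flagged rows would then be inspected or held out before training; the Mahalanobis distance generalizes this idea by accounting for correlations between features.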


Clustering-based methods group similar data points together based on their features or attributes. The assumption is that poisoned data instances may form distinct clusters or lie far away from the normal data clusters. By applying clustering algorithms like K-means, DBSCAN, or hierarchical clustering, anomalous clusters or data points that do not belong to any cluster can be identified. These anomalous instances are then treated as potentially poisoned data.

Figure 18.31: Autoencoder (Source: Dertat)

Autoencoders are neural networks trained to reconstruct the input data from a compressed representation, as shown in Figure fig-autoencoder. They can be used for anomaly detection by learning the normal patterns in the data and identifying instances that deviate from them. During training, the autoencoder is trained on clean, unpoisoned data. At inference time, the reconstruction error for each data point is computed. Data points with high reconstruction errors are considered abnormal and potentially poisoned, as they do not conform to the learned normal patterns.
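Training a full neural autoencoder is beyond a short example, but a linear autoencoder (equivalent to PCA) illustrates the same reconstruction-error principle; a NumPy sketch under that simplifying assumption:

```python
import numpy as np

def fit_linear_autoencoder(X_clean, k):
    """Fit a k-dimensional linear 'autoencoder' (PCA) on clean data."""
    mu = X_clean.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_clean - mu, full_matrices=False)
    return mu, Vt[:k]                  # mean and top-k principal directions

def reconstruction_error(X, mu, components):
    """Per-sample reconstruction error; large values flag anomalies."""
    Z = (X - mu) @ components.T        # encode
    X_hat = Z @ components + mu        # decode
    return np.linalg.norm(X - X_hat, axis=1)
```

A point far from the subspace learned on clean data reconstructs poorly and receives a high error score, mirroring how a nonlinear autoencoder flags data that deviates from learned normal patterns.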

Data Sanitization and Preprocessing Techniques

Data poisoning can be avoided by cleaning data, which involves identifying and removing or correcting noisy, incomplete, or inconsistent data points. Techniques such as data deduplication, missing value imputation, and outlier removal can be applied to improve the quality of the training data. By eliminating or filtering out suspicious or anomalous data points, the impact of poisoned instances can be reduced.


Data validation involves verifying the integrity and consistency of the training data. This can include checking for data type consistency, range validation, and cross-field dependencies. By defining and enforcing data validation rules, anomalous or inconsistent data points indicative of data poisoning can be identified and flagged for further investigation.
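Validation rules of this kind can be encoded as a simple schema check; the schema format below is a hypothetical illustration:

```python
def validate_record(record, schema):
    """Check one record against a schema mapping
    field -> (expected_type, optional (lo, hi) range)."""
    errors = []
    for field, (ftype, bounds) in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
        elif bounds is not None and not bounds[0] <= record[field] <= bounds[1]:
            errors.append(f"{field}: out of range {bounds}")
    return errors
```

Records that fail validation would be flagged for investigation rather than silently admitted into the training set.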


Data provenance and lineage tracking involve maintaining a record of data’s origin, transformations, and movements throughout the ML pipeline. By documenting the data sources, preprocessing steps, and any modifications made to the data, practitioners can trace anomalies or suspicious patterns back to their origin. This helps identify potential points of data poisoning and facilitates the investigation and mitigation process.

Robust Training Techniques

Robust optimization techniques can be used to modify the training objective to minimize the impact of outliers or poisoned instances. This can be achieved by using robust loss functions less sensitive to extreme values, such as the Huber loss or the modified Huber loss. Regularization techniques, such as L1 or L2 regularization, can also help in reducing the model’s sensitivity to poisoned data by constraining the model’s complexity and preventing overfitting.
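The Huber loss mentioned above is quadratic for small residuals and linear for large ones, so a single extreme point cannot dominate the objective; a minimal sketch:

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    """Quadratic for |r| <= delta, linear beyond: robust to outliers."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))
```

For a residual of 10, the squared loss contributes 50 while the Huber loss (delta = 1) contributes only 9.5, limiting the influence a poisoned point can exert on training.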


Robust loss functions are designed to be less sensitive to outliers or noisy data points. Examples include the modified Huber loss, the Tukey loss (Beaton and Tukey 1974), and the trimmed mean loss. These loss functions down-weight or ignore the contribution of abnormal instances during training, reducing their impact on the model’s learning process. Robust objective functions, such as the minimax or distributionally robust objective, aim to optimize the model’s performance under worst-case scenarios or in the presence of adversarial perturbations.

Beaton, Albert E., and John W. Tukey. 1974. “The Fitting of Power Series, Meaning Polynomials, Illustrated on Band-Spectroscopic Data.” Technometrics 16 (2): 147. https://doi.org/10.2307/1267936.

Data augmentation techniques involve generating additional training examples by applying random transformations or perturbations to the existing data, as shown in Figure fig-data-augmentation. This helps increase the diversity and robustness of the training dataset. By introducing controlled variations in the data, the model becomes less sensitive to specific patterns or artifacts that may be present in poisoned instances. Randomization techniques, such as random subsampling or bootstrap aggregating, can also help reduce the impact of poisoned data by training multiple models on different subsets of the data and combining their predictions.

Figure 18.32: An image of the number “3” in original form and with basic augmentations applied.
Secure and Trusted Data Sourcing

Implementing the best data collection and curation practices can help mitigate the risk of data poisoning. This includes establishing clear data collection protocols, verifying the authenticity and reliability of data sources, and conducting regular data quality assessments. Sourcing data from trusted and reputable providers and following secure data handling practices can reduce the likelihood of introducing poisoned data into the training pipeline.


Strong data governance and access control mechanisms are essential to prevent unauthorized modifications or tampering with the training data. This involves defining clear roles and responsibilities for data access, implementing access control policies based on the principle of least privilege, and monitoring and logging data access activities. By restricting access to the training data and maintaining an audit trail, potential data poisoning attempts can be detected and investigated.


Detecting and mitigating data poisoning attacks requires a multifaceted approach that combines anomaly detection, data sanitization, robust training techniques, and secure data sourcing practices. By implementing these measures, ML practitioners can enhance the resilience of their models against data poisoning and ensure the integrity and trustworthiness of the training data. However, it is important to note that data poisoning is an active area of research, and new attack vectors and defense mechanisms continue to emerge. Staying informed about the latest developments and adopting a proactive and adaptive approach to data security is crucial for maintaining the robustness of ML systems.


Distribution Shifts

Detecting and Mitigating Distribution Shifts

Recall that distribution shifts occur when the data distribution encountered by a machine learning (ML) model during deployment differs from the distribution it was trained on. These shifts can significantly impact the model’s performance and generalization ability, leading to suboptimal or incorrect predictions. Detecting and mitigating distribution shifts is crucial to ensure the robustness and reliability of ML systems in real-world scenarios.

Detection Techniques for Distribution Shifts

Statistical tests can be used to compare the distributions of the training and test data to identify significant differences. Techniques such as the Kolmogorov-Smirnov test or the Anderson-Darling test measure the discrepancy between two distributions and provide a quantitative assessment of the presence of distribution shift. By applying these tests to the input features or the model’s predictions, practitioners can detect if there is a statistically significant difference between the training and test distributions.
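The two-sample KS statistic is simply the largest gap between the empirical CDFs of the two samples; a NumPy sketch (a real deployment would typically use a library routine such as scipy.stats.ks_2samp, which also returns a p-value):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    maximum gap between the two empirical CDFs."""
    values = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), values, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), values, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

A small statistic suggests the two samples come from similar distributions; a large one signals a likely shift. The alert thresholds used in monitoring are application-specific and would normally be set via the statistic's significance level.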


Divergence metrics quantify the dissimilarity between two probability distributions. Commonly used divergence metrics include the Kullback-Leibler (KL) divergence and the Jensen-Shannon (JS) divergence. By calculating the divergence between the training and test data distributions, practitioners can assess the extent of the distribution shift. High divergence values indicate a significant difference between the distributions, suggesting the presence of a distribution shift.
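For discrete (e.g., histogram) estimates of the two distributions, both divergences take only a few lines of NumPy; the smoothing constant eps below is an illustrative guard against zero bins:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for nonnegative histograms; normalizes and smooths."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    """Symmetric, bounded variant built from two KL terms
    against the mixture distribution."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

Unlike KL, the JS divergence is symmetric and finite even when the two histograms have disjoint support, which makes it a more convenient drift monitor in practice.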


Uncertainty quantification techniques, such as Bayesian neural networks or ensemble methods, can estimate the uncertainty associated with the model’s predictions. When a model is applied to data from a different distribution, its predictions may have higher uncertainty. By monitoring the uncertainty levels, practitioners can detect distribution shifts. If the uncertainty consistently exceeds a predetermined threshold for test samples, it suggests that the model is operating outside its trained distribution.


In addition, domain classifiers are trained to distinguish between different domains or distributions. Practitioners can detect distribution shifts by training a classifier to differentiate between the training and test domains. If the domain classifier achieves high accuracy in distinguishing between the two domains, it indicates a significant difference in the underlying distributions. The performance of the domain classifier serves as a measure of the distribution shift.

Mitigation Techniques for Distribution Shifts
Figure 18.33: Transfer learning (Source: Bhavsar)

Transfer learning leverages knowledge gained from one domain to improve performance in another, as shown in Figure fig-transfer-learning. By using pre-trained models or transferring learned features from a source domain to a target domain, transfer learning can help mitigate the impact of distribution shifts. The pre-trained model can be fine-tuned on a small amount of labeled data from the target domain, allowing it to adapt to the new distribution. Transfer learning is particularly effective when the source and target domains share similar characteristics or when labeled data in the target domain is scarce.


Continual learning, also known as lifelong learning, enables ML models to learn continuously from new data distributions while retaining knowledge from previous distributions. Techniques such as elastic weight consolidation (EWC) (Kirkpatrick et al. 2017) or gradient episodic memory (GEM) (Lopez-Paz and Ranzato 2017) allow models to adapt to evolving data distributions over time. These techniques aim to balance the plasticity of the model (ability to learn from new data) with the stability of the model (retaining previously learned knowledge). By incrementally updating the model with new data and mitigating catastrophic forgetting, continual learning helps models stay robust to distribution shifts.

Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, et al. 2017. “Overcoming Catastrophic Forgetting in Neural Networks.” Proc. Natl. Acad. Sci. 114 (13): 3521–26. https://doi.org/10.1073/pnas.1611835114.

Lopez-Paz, David, and Marc’Aurelio Ranzato. 2017. “Gradient Episodic Memory for Continual Learning.” Adv Neural Inf Process Syst 30.

Data augmentation techniques, such as those we have seen previously, involve applying transformations or perturbations to the existing training data to increase its diversity and improve the model’s robustness to distribution shifts. By introducing variations in the data, such as rotations, translations, scaling, or adding noise, data augmentation helps the model learn invariant features and generalize better to unseen distributions. Data augmentation can be performed during training and inference to enhance the model’s ability to handle distribution shifts.


Ensemble methods combine multiple models to make predictions more robust to distribution shifts. By training models on different subsets of the data, using different algorithms, or with different hyperparameters, ensemble methods can capture diverse aspects of the data distribution. When presented with a shifted distribution, the ensemble can leverage the strengths of individual models to make more accurate and stable predictions. Techniques like bagging, boosting, or stacking can create effective ensembles.
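The simplest ensemble combiner is a per-sample majority vote over the member models' predicted labels; a minimal sketch:

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) array of integer class labels.

    Returns the most common label per sample; an input that fools
    one member can be outvoted by the rest of the ensemble.
    """
    predictions = np.asarray(predictions)
    return np.array([np.bincount(col).argmax() for col in predictions.T])
```

Weighted voting or averaging of predicted probabilities (soft voting) are common variations when member models produce calibrated scores.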


Regularly updating models with new data from the target distribution is crucial to mitigate the impact of distribution shifts. As the data distribution evolves, models should be retrained or fine-tuned on the latest available data to adapt to the changing patterns. Monitoring model performance and data characteristics can help detect when an update is necessary. By keeping the models up to date, practitioners can ensure they remain relevant and accurate in the face of distribution shifts.


Evaluating models using robust metrics less sensitive to distribution shifts can provide a more reliable assessment of model performance. Metrics such as the area under the precision-recall curve (AUPRC) or the F1 score are more robust to class imbalance and can better capture the model’s performance across different distributions. Additionally, using domain-specific evaluation metrics that align with the desired outcomes in the target domain can provide a more meaningful measure of the model’s effectiveness.
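As a reminder of why the F1 score is robust to class imbalance, it is the harmonic mean of precision and recall computed over the positive class only; a plain-Python sketch for binary labels:

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Because true negatives never enter the computation, a model cannot earn a high F1 score simply by predicting the majority class, which is exactly the failure mode plain accuracy hides under distribution shift.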


Detecting and mitigating distribution shifts is an ongoing process that requires continuous monitoring, adaptation, and improvement. By employing a combination of detection techniques and mitigation strategies, ML practitioners can proactively identify and address distribution shifts, ensuring the robustness and reliability of their models in real-world deployments. It is important to note that distribution shifts can take various forms and may require domain-specific approaches depending on the nature of the data and the application. Staying informed about the latest research and best practices in handling distribution shifts is essential for building resilient ML systems.


18.5 Software Faults


Definition and Characteristics


Software faults refer to defects, errors, or bugs in the runtime software frameworks and components that support the execution and deployment of ML models (Myllyaho et al. 2022). These faults can arise from various sources, such as programming mistakes, design flaws, or compatibility issues (H. Zhang 2008), and can have significant implications for ML systems’ performance, reliability, and security. Software faults in ML frameworks exhibit several key characteristics:

Myllyaho, Lalli, Mikko Raatikainen, Tomi Männistö, Jukka K. Nurminen, and Tommi Mikkonen. 2022. “On Misbehaviour and Fault Tolerance in Machine Learning Systems.” J. Syst. Software 183 (January): 111096. https://doi.org/10.1016/j.jss.2021.111096.

Zhang, Hongyu. 2008. “On the Distribution of Software Faults.” IEEE Trans. Software Eng. 34 (2): 301–2. https://doi.org/10.1109/tse.2007.70771.
  • Diversity: Software faults can manifest in different forms, ranging from simple logic and syntax mistakes to more complex issues like memory leaks, race conditions, and integration problems. The variety of fault types adds to the challenge of detecting and mitigating them effectively.

  • Propagation: In ML systems, software faults can propagate through the various layers and components of the framework. A fault in one module can trigger a cascade of errors or unexpected behavior in other parts of the system, making it difficult to pinpoint the root cause and assess the full impact of the fault.

  • Intermittency: Some software faults may exhibit intermittent behavior, occurring sporadically or under specific conditions. These faults can be particularly challenging to reproduce and debug, as they may manifest inconsistently during testing or normal operation.

  • Interaction with ML models: Software faults in ML frameworks can interact with the trained models in subtle ways. For example, a fault in the data preprocessing pipeline may introduce noise or bias into the model’s inputs, leading to degraded performance or incorrect predictions. Similarly, faults in the model serving component may cause inconsistencies between the training and inference environments.

  • Impact on system properties: Software faults can compromise various desirable properties of ML systems, such as performance, scalability, reliability, and security. Faults may lead to slowdowns, crashes, incorrect outputs, or vulnerabilities that attackers can exploit.

  • Dependency on external factors: The occurrence and impact of software faults in ML frameworks often depend on external factors, such as the choice of hardware, operating system, libraries, and configurations. Compatibility issues and version mismatches can introduce faults that are difficult to anticipate and mitigate.

Understanding the characteristics of software faults in ML frameworks is crucial for developing effective fault prevention, detection, and mitigation strategies. By recognizing the diversity, propagation, intermittency, and impact of software faults, ML practitioners can design more robust and reliable systems resilient to these issues.


Mechanisms of Software Faults in ML Frameworks


Machine learning frameworks, such as TensorFlow, PyTorch, and scikit-learn, provide powerful tools and abstractions for building and deploying ML models. However, these frameworks are not immune to software faults that can impact ML systems’ performance, reliability, and correctness. Let’s explore some of the common software faults that can occur in ML frameworks:


Memory Leaks and Resource Management Issues: Improper memory management, such as failing to release memory or close file handles, can lead to memory leaks and resource exhaustion over time. This issue is compounded by inefficient memory usage, where creating unnecessary copies of large tensors or not leveraging memory-efficient data structures can cause excessive memory consumption and degrade system performance. Additionally, failing to manage GPU memory properly can result in out-of-memory errors or suboptimal utilization of GPU resources, further exacerbating the problem as shown in Figure fig-gpu-out-of-memory.

Figure 18.34: Example of GPU out-of-memory and suboptimal utilization issues

Synchronization and Concurrency Problems: Incorrect synchronization between threads or processes can lead to race conditions, deadlocks, or inconsistent behavior in multi-threaded or distributed ML systems. This issue is often tied to improper handling of asynchronous operations, such as non-blocking I/O or parallel data loading, which can cause synchronization issues and impact the correctness of the ML pipeline. Moreover, inadequate coordination and communication between distributed nodes in a cluster can result in inconsistent or stale data during training or inference, compromising the reliability of the ML system.


Compatibility Issues: Mismatches between the versions of ML frameworks, libraries, or dependencies can introduce compatibility problems and runtime errors. Upgrading or changing the versions of underlying libraries without thoroughly testing the impact on the ML system can lead to unexpected behavior or breakages. Furthermore, inconsistencies between the training and deployment environments, such as differences in hardware, operating systems, or package versions, can cause compatibility issues and affect the reproducibility of ML models, making it challenging to ensure consistent performance across different platforms.


Numerical Instability and Precision Errors: Inadequate handling of numerical instabilities, such as division by zero, underflow, or overflow, can lead to incorrect calculations or convergence issues during training. This problem is compounded by insufficient precision or rounding errors, which can accumulate over time and impact the accuracy of the ML models, especially in deep learning architectures with many layers. Moreover, improper scaling or normalization of input data can cause numerical instabilities and affect the convergence and performance of optimization algorithms, resulting in suboptimal or unreliable model performance.


Inadequate Error Handling and Exception Management: Without proper error handling and exception management, ML systems can crash or behave unexpectedly when encountering exceptional conditions or invalid inputs. Failing to catch and handle specific exceptions or relying on generic exception handling can make it difficult to diagnose and recover from errors gracefully, leading to system instability and reduced reliability. Furthermore, incomplete or misleading error messages can hinder the ability to effectively debug and resolve software faults in ML frameworks, prolonging the time required to identify and fix issues.


Impact on ML Systems


Software faults in machine learning frameworks can have significant and far-reaching impacts on ML systems’ performance, reliability, and security. Let’s explore the various ways in which software faults can affect ML systems:


Performance Degradation and System Slowdowns: Memory leaks and inefficient resource management can lead to gradual performance degradation over time as the system becomes increasingly memory-constrained and spends more time on garbage collection or memory swapping (Maas et al. 2024). This issue is compounded by synchronization issues and concurrency bugs, which can cause delays, reduced throughput, and suboptimal utilization of computational resources, especially in multi-threaded or distributed ML systems. Furthermore, compatibility problems or inefficient code paths can introduce additional overhead and slowdowns, affecting the overall performance of the ML system.

Maas, Martin, David G. Andersen, Michael Isard, Mohammad Mahdi Javanmard, Kathryn S. McKinley, and Colin Raffel. 2024. “Combining Machine Learning and Lifetime-Based Resource Management for Memory Allocation and Beyond.” Commun. ACM 67 (4): 87–96. https://doi.org/10.1145/3611018.

Incorrect Predictions or Outputs: Software faults in data preprocessing, feature engineering, or model evaluation can introduce biases, noise, or errors propagating through the ML pipeline and resulting in incorrect predictions or outputs. Over time, numerical instabilities, precision errors, or rounding issues can accumulate and lead to degraded accuracy or convergence problems in the trained models. Moreover, faults in the model serving or inference components can cause inconsistencies between the expected and actual outputs, leading to incorrect or unreliable predictions in production.


Reliability and Stability Issues: Software faults can cause unhandled exceptions, crashes, or sudden terminations that compromise the reliability and stability of ML systems, especially in production environments. Intermittent or sporadic faults can be difficult to reproduce and diagnose, leading to unpredictable behavior and reduced confidence in the ML system’s outputs. Additionally, faults in checkpointing, model serialization, or state management can cause data loss or inconsistencies, affecting the reliability and recoverability of the ML system.


Security Vulnerabilities: Software faults, such as buffer overflows, injection vulnerabilities, or improper access control, can introduce security risks and expose the ML system to potential attacks or unauthorized access. Adversaries may exploit faults in the preprocessing or feature extraction stages to manipulate the input data and deceive the ML models, leading to incorrect or malicious behavior. Furthermore, inadequate protection of sensitive data, such as user information or confidential model parameters, can lead to data breaches or privacy violations (Q. Li et al. 2023).

Li, Qinbin, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Yuan Li, Xu Liu, and Bingsheng He. 2023. “A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection.” IEEE Trans. Knowl. Data Eng. 35 (4): 3347–66. https://doi.org/10.1109/tkde.2021.3124599.

Difficulty in Reproducing and Debugging: Software faults can make it challenging to reproduce and debug issues in ML systems, especially when the faults are intermittent or dependent on specific runtime conditions. Incomplete or ambiguous error messages, coupled with the complexity of ML frameworks and models, can prolong the debugging process and hinder the ability to identify and fix the underlying faults. Moreover, inconsistencies between development, testing, and production environments can make reproducing and diagnosing faults in specific contexts difficult.


Increased Development and Maintenance Costs: Software faults can lead to increased development and maintenance costs, as teams spend more time and resources debugging, fixing, and validating the ML system. The need for extensive testing, monitoring, and fault-tolerant mechanisms to mitigate the impact of software faults can add complexity and overhead to the ML development process. Frequent patches, updates, and bug fixes to address software faults can disrupt the development workflow and require additional effort to ensure the stability and compatibility of the ML system.


Understanding the potential impact of software faults on ML systems is crucial for prioritizing testing efforts, implementing fault-tolerant designs, and establishing effective monitoring and debugging practices. By proactively addressing software faults and their consequences, ML practitioners can build more robust, reliable, and secure ML systems that deliver accurate and trustworthy results.


Detection and Mitigation


Detecting and mitigating software faults in machine learning frameworks is essential to ensure ML systems’ reliability, performance, and security. Let’s explore various techniques and approaches that can be employed to identify and address software faults effectively:


Thorough Testing and Validation: Comprehensive unit testing of individual components and modules can verify their correctness and identify potential faults early in development. Integration testing validates the interaction and compatibility between different components of the ML framework, ensuring seamless integration. Systematic testing of edge cases, boundary conditions, and exceptional scenarios helps uncover hidden faults and vulnerabilities. Continuous testing and regression testing, as shown in Figure fig-regression-testing, detect faults introduced by code changes or updates to the ML framework.

Figure 18.35: Automated regression testing (Source: UTOR)

Static Code Analysis and Linting: Utilizing static code analysis tools automatically identifies potential coding issues, such as syntax errors, undefined variables, or security vulnerabilities. Enforcing coding standards and best practices through linting tools maintains code quality and reduces the likelihood of common programming mistakes. Conducting regular code reviews allows manual inspection of the codebase, identification of potential faults, and ensures adherence to coding guidelines and design principles.


Runtime Monitoring and Logging: Implementing comprehensive logging mechanisms captures relevant information during runtime, such as input data, model parameters, and system events. Monitoring key performance metrics, resource utilization, and error rates helps detect anomalies, performance bottlenecks, or unexpected behavior. Employing runtime assertion checks and invariants validates assumptions and detects violations of expected conditions during program execution. Utilizing profiling tools identifies performance bottlenecks, memory leaks, or inefficient code paths that may indicate the presence of software faults.


Fault-Tolerant Design Patterns: Implementing error handling and exception management mechanisms enables graceful handling and recovery from exceptional conditions or runtime errors. Employing redundancy and failover mechanisms, such as backup systems or redundant computations, ensures the availability and reliability of the ML system in the presence of faults. Designing modular and loosely coupled architectures minimizes the propagation and impact of faults across different components of the ML system. Utilizing checkpointing and recovery mechanisms (Eisenman et al. 2022) allows the system to resume from a known stable state in case of failures or interruptions.
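The checkpointing-and-recovery pattern can be sketched as an atomic save/restore pair; a minimal Python illustration using pickle (real training frameworks expose their own checkpoint APIs, and the state layout here is an assumption):

```python
import os
import pickle

def save_checkpoint(state, path):
    """Write to a temp file, then atomically rename, so a crash
    mid-write never leaves a torn checkpoint behind."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def load_checkpoint(path, default=None):
    """Resume from the last saved state, or fall back to `default`."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    return default
```

The atomic rename is the key design choice: a process killed during `pickle.dump` corrupts only the temporary file, so the last complete checkpoint always remains loadable.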

Eisenman, Assaf, Kiran Kumar Matam, Steven Ingram, Dheevatsa Mudigere, Raghuraman Krishnamoorthi, Krishnakumar Nair, Misha Smelyanskiy, and Murali Annavaram. 2022. “Check-n-Run: A Checkpointing System for Training Deep Learning Recommendation Models.” In 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22), 929–43.

Regular Updates and Patches: Staying up to date with the latest versions and patches of the ML frameworks, libraries, and dependencies provides benefits from bug fixes, security updates, and performance improvements. Monitoring release notes, security advisories, and community forums inform practitioners about known issues, vulnerabilities, or compatibility problems in the ML framework. Establishing a systematic process for testing and validating updates and patches before applying them to production systems ensures stability and compatibility.


Containerization and Isolation: Leveraging containerization technologies, such as Docker or Kubernetes, encapsulates ML components and their dependencies in isolated environments. Utilizing containerization ensures consistent and reproducible runtime environments across development, testing, and production stages, reducing the likelihood of compatibility issues or environment-specific faults. Employing isolation techniques, such as virtual environments or sandboxing, prevents faults or vulnerabilities in one component from affecting other parts of the ML system.


Automated Testing and Continuous Integration/Continuous Deployment (CI/CD): Implementing automated testing frameworks and scripts allows comprehensive test suites to run automatically, catching faults early in development. Integrating automated testing into the CI/CD pipeline, as shown in Figure fig-CI-CD-procedure, ensures that code changes are thoroughly tested before being merged or deployed to production. Utilizing continuous monitoring and automated alerting systems detects and notifies developers and operators about potential faults or anomalies in real time.

Figure 18.36: Continuous Integration/Continuous Deployment (CI/CD) procedure (Source: geeksforgeeks)
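A CI gate of the kind described above can be sketched as a test that fails the pipeline when a candidate model regresses. This is a toy illustration under stated assumptions: `predict` is a stand-in threshold classifier rather than a real model artifact, and the `GOLDEN` regression set and accuracy bar are hypothetical.

```python
# Toy stand-in for a trained model: a threshold classifier. In a real CI/CD
# pipeline this check would load the candidate model artifact instead.
def predict(x):
    return 1 if x >= 0.5 else 0

# Hypothetical "golden" regression set with known labels.
GOLDEN = [(0.9, 1), (0.1, 0), (0.7, 1), (0.2, 0), (0.95, 1)]
ACCURACY_BAR = 0.8

def test_model_meets_accuracy_bar():
    correct = sum(predict(x) == y for x, y in GOLDEN)
    accuracy = correct / len(GOLDEN)
    # Gate the merge/deploy: a failed assertion fails the pipeline.
    assert accuracy >= ACCURACY_BAR, f"accuracy {accuracy:.2f} below bar"

test_model_meets_accuracy_bar()
print("gate passed")
```

In practice such a test would run under a framework like pytest on every commit, so a fault that silently degrades accuracy is caught before it reaches production.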

Adopting a proactive and systematic approach to fault detection and mitigation can significantly improve ML systems’ robustness, reliability, and maintainability. By investing in comprehensive testing, monitoring, and fault-tolerant design practices, organizations can minimize the impact of software faults and ensure their ML systems’ smooth operation in production environments.


Exercise 18.4 (Fault Tolerance)  


Get ready to become an AI fault-fighting superhero! Software glitches can derail machine learning systems, but in this Colab, you’ll learn how to make them resilient. We’ll simulate software faults to see how AI can break, then explore techniques to save your ML model’s progress, like checkpoints in a game. You’ll see how to train your AI to bounce back after a crash, ensuring it stays on track. This is crucial for building reliable, trustworthy AI, especially in critical applications. So gear up because this Colab directly connects with the Robust AI chapter – you’ll move from theory to hands-on troubleshooting and build AI systems that can handle the unexpected!



18.6 Tools and Frameworks


Given the importance of developing robust AI systems, researchers and practitioners have in recent years developed a wide range of tools and frameworks to understand how hardware faults manifest and propagate to impact ML systems. These tools and frameworks play a crucial role in evaluating the resilience of ML systems to hardware faults by simulating various fault scenarios and analyzing their impact on the system’s performance. This enables designers to identify potential vulnerabilities and develop effective mitigation strategies, ultimately creating more robust and reliable ML systems that can operate safely despite hardware faults. This section provides an overview of widely used fault models in the literature and the tools and frameworks developed to evaluate the impact of such faults on ML systems.


18.6.1 Fault Models and Error Models


As discussed previously, hardware faults can manifest in various ways, including transient, permanent, and intermittent faults. In addition to the type of fault under study, how the fault manifests is also important. For example, does the fault happen in a memory cell or during the computation of a functional unit? Is the impact on a single bit, or does it impact multiple bits? Does the fault propagate all the way and impact the application (causing an error), or does it get masked quickly and is considered benign? All these details impact what is known as the fault model, which plays a major role in simulating and measuring what happens to a system when a fault occurs.


To effectively study and understand the impact of hardware faults on ML systems, it is essential to understand the concepts of fault models and error models. A fault model describes how a hardware fault manifests itself in the system, while an error model represents how the fault propagates and affects the system’s behavior.


Fault models can be categorized based on various characteristics:

  • Duration: Transient faults occur briefly and then disappear, while permanent faults persist indefinitely. Intermittent faults occur sporadically and may be difficult to diagnose.

  • Location: Faults can occur in hardware parts, such as memory cells, functional units, or interconnects.

  • Granularity: Faults can affect a single bit (e.g., a bitflip) or multiple bits (e.g., burst errors) within a hardware component.
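These three characteristics can be captured in a small data structure, which is roughly how fault injection tools parameterize their campaigns. This is a minimal, illustrative encoding; the field names and example instances are hypothetical, and real tools use much richer descriptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FaultModel:
    duration: str      # "transient", "permanent", or "intermittent"
    location: str      # e.g. "memory_cell", "functional_unit", "interconnect"
    granularity: int   # number of bits affected (1 = single-bit flip)

# Two example fault models a study might sweep over:
single_bitflip = FaultModel("transient", "memory_cell", 1)
burst_error = FaultModel("permanent", "memory_cell", 8)
print(single_bitflip)
```

A study designed around one such model (say, `single_bitflip`) makes assumptions that do not transfer to another (say, `burst_error`), which is exactly the caveat raised later in this section.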

On the other hand, error models describe how a fault propagates through the system and manifests as an error. An error may cause the system to deviate from its expected behavior, leading to incorrect results or even system failures. Error models can be defined at different levels of abstraction, from the hardware level (e.g., register-level bitflips) to the software level (e.g., corrupted weights or activations in an ML model).
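To make the hardware-level end of this spectrum concrete, the sketch below flips a chosen bit in the IEEE 754 float32 encoding of a value, the raw operation underlying a "register-level bitflip." This is an illustrative helper written for this example, not part of any injection framework; it shows why the *position* of the flipped bit matters enormously at the software level.

```python
import struct

def flip_bit_f32(value, bit):
    """Flip one bit (0 = mantissa LSB, 31 = sign) in a float32's encoding."""
    (raw,) = struct.unpack("<I", struct.pack("<f", value))
    raw ^= 1 << bit
    return struct.unpack("<f", struct.pack("<I", raw))[0]

w = 0.5
print(flip_bit_f32(w, 0))    # low mantissa bit: value barely changes
print(flip_bit_f32(w, 31))   # sign bit: 0.5 becomes -0.5
print(flip_bit_f32(w, 30))   # high exponent bit: value jumps to ~1.7e38
```

A flip in a low mantissa bit may be invisible in a model's output, while a flip in a high exponent bit can blow a single weight up by dozens of orders of magnitude, which is why single-bit error models distinguish bit positions rather than treating all 32 bits alike.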


The fault model (or error model, typically the more applicable terminology in understanding the robustness of an ML system) plays a major role in simulating and measuring what happens to a system when a fault occurs. The chosen model informs the assumptions made about the system being studied. For example, a system focusing on single-bit transient errors (Sangchoolie, Pattabiraman, and Karlsson 2017) would not be well-suited to understand the impact of permanent, multi-bit flip errors (Wilkening et al. 2014), as it is designed assuming a different model altogether.

Wilkening, Mark, Vilas Sridharan, Si Li, Fritz Previlon, Sudhanva Gurumurthi, and David R. Kaeli. 2014. “Calculating Architectural Vulnerability Factors for Spatial Multi-Bit Transient Faults.” In 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture, 293–305. IEEE. https://doi.org/10.1109/micro.2014.15.

Furthermore, implementing an error model is also an important consideration, particularly regarding where an error is said to occur in the compute stack. For instance, a single-bit flip model at the architectural register level differs from a single-bit flip in the weight of a model at the PyTorch level. Although both target a similar error model, the former would usually be modeled in an architecturally accurate simulator (like gem5 (Binkert et al. 2011)), which captures error propagation through the microarchitecture, whereas the latter focuses on value propagation through the model.


Recent research has shown that certain characteristics of error models may exhibit similar behaviors across different levels of abstraction (Sangchoolie, Pattabiraman, and Karlsson 2017; Papadimitriou and Gizopoulos 2021). For example, single-bit errors are generally more problematic than multi-bit errors, regardless of whether they are modeled at the hardware or software level. However, other characteristics, such as error masking (Mohanram and Touba 2003), shown in Figure fig-error-masking, may not always be accurately captured by software-level models, as they can hide underlying system effects.

Sangchoolie, Behrooz, Karthik Pattabiraman, and Johan Karlsson. 2017. “One Bit Is (Not) Enough: An Empirical Study of the Impact of Single and Multiple Bit-Flip Errors.” In 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 97–108. IEEE. https://doi.org/10.1109/dsn.2017.30.

Papadimitriou, George, and Dimitris Gizopoulos. 2021. “Demystifying the System Vulnerability Stack: Transient Fault Effects Across the Layers.” In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 902–15. IEEE. https://doi.org/10.1109/isca52012.2021.00075.

Mohanram, K., and N. A. Touba. 2003. “Partial Error Masking to Reduce Soft Error Failure Rate in Logic Circuits.” In Proceedings. 16th IEEE Symposium on Computer Arithmetic, 433–40. IEEE Comput. Soc. https://doi.org/10.1109/dftvs.2003.1250141.
Figure 18.37: Example of error masking in microarchitectural components (Ko 2021)

Ko, Yohan. 2021. “Characterizing System-Level Masking Effects Against Soft Errors.” Electronics 10 (18): 2286. https://doi.org/10.3390/electronics10182286.
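A one-line example makes the masking idea concrete. In the sketch below (an illustrative toy, with hypothetical pre-activation values), a fault that corrupts a negative pre-activation never reaches the output because the ReLU clamps both the clean and the faulty value to zero, while the same fault on a positive pre-activation propagates.

```python
def relu(x):
    return max(0.0, x)

# A fault corrupting a negative pre-activation is masked by the ReLU:
clean_preact, faulty_preact = -2.0, -7.5        # fault changed -2.0 to -7.5
masked = relu(clean_preact) == relu(faulty_preact)   # both are 0.0

# The same corruption of a positive pre-activation propagates onward:
propagates = relu(2.0) != relu(7.5)

print(masked, propagates)
```

Software-level injection at the weight or activation level sees this kind of masking, but cannot see masking that happens *below* it, e.g., a flipped bit in a register that is overwritten before ever being read, which is one reason software-level error rates can diverge from hardware ground truth.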

Some tools, such as Fidelity (He, Balaprakash, and Li 2020), aim to bridge the gap between hardware-level and software-level error models by mapping patterns between the two levels of abstraction (Cheng et al. 2016). This allows for more accurate modeling of hardware faults in software-based tools, which is essential for developing robust and reliable ML systems. Lower-level tools typically capture error propagation characteristics more accurately but are much slower when simulating many errors due to the complex nature of hardware system designs. On the other hand, higher-level tools, such as those implemented in ML frameworks like PyTorch or TensorFlow, which we discuss in later sections, are often faster and more efficient for evaluating the robustness of ML systems.

Cheng, Eric, Shahrzad Mirkhani, Lukasz G. Szafaryn, Chen-Yong Cher, Hyungmin Cho, Kevin Skadron, Mircea R. Stan, et al. 2016. “CLEAR: Cross-Layer Exploration for Architecting Resilience - Combining Hardware and Software Techniques to Tolerate Soft Errors in Processor Cores.” In Proceedings of the 53rd Annual Design Automation Conference, 1–6. ACM. https://doi.org/10.1145/2897937.2897996.

In the following subsections, we will discuss various hardware-based and software-based fault injection methods and tools, highlighting their capabilities, limitations, and the fault and error models they support.


18.6.2 Hardware-based Fault Injection


An error injection tool allows the user to implement a particular error model, such as a transient single-bit flip during inference, as shown in Figure fig-hardware-errors. Most error injection tools are software-based, as software-level tools are faster and better suited for ML robustness studies. However, hardware-based fault injection methods remain important for grounding the higher-level error models, as they are considered the most accurate way to study the impact of faults on ML systems, directly manipulating the hardware to introduce faults. These methods allow researchers to observe the system’s behavior under real-world fault conditions. Both software-based and hardware-based error injection tools are described in this section in more detail.

Figure 18.38: Hardware errors can occur due to a variety of reasons and at different times and/or locations in a system, which can be explored when studying the impact of hardware-based errors on systems (Ahmadilivani et al. 2024)

Ahmadilivani, Mohammad Hasan, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, and Maksim Jenihhin. 2024. “A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks.” ACM Comput. Surv. 56 (6): 1–39. https://doi.org/10.1145/3638242.

Methods


Two of the most common hardware-based fault injection methods are FPGA-based fault injection and radiation or beam testing.


FPGA-based Fault Injection: Field-Programmable Gate Arrays (FPGAs) are reconfigurable integrated circuits that can be programmed to implement various hardware designs. In the context of fault injection, FPGAs offer high precision and accuracy, as researchers can target specific bits or sets of bits within the hardware. By modifying the FPGA configuration, faults can be introduced at specific locations and times during the execution of an ML model. FPGA-based fault injection allows for fine-grained control over the fault model, enabling researchers to study the impact of different types of faults, such as single-bit flips or multi-bit errors. This level of control makes FPGA-based fault injection a valuable tool for understanding the resilience of ML systems to hardware faults.


Radiation or Beam Testing: Radiation or beam testing (Velazco, Foucard, and Peronnard 2010) involves exposing the hardware running an ML model to high-energy particles, such as protons or neutrons, as illustrated in Figure fig-beam-testing. These particles can cause bitflips or other types of faults in the hardware, mimicking the effects of real-world radiation-induced faults. Beam testing is widely regarded as a highly accurate method for measuring the error rate induced by particle strikes on a running application. It provides a realistic representation of the faults in real-world environments, particularly in applications exposed to high radiation levels, such as space systems or particle physics experiments. However, unlike FPGA-based fault injection, beam testing is less precise in targeting specific bits or components within the hardware, as it is difficult to aim the beam of particles at a particular bit. Despite being quite expensive from a research standpoint, beam testing is a well-regarded industry practice for reliability.

Velazco, Raoul, Gilles Foucard, and Paul Peronnard. 2010. “Combining Results of Accelerated Radiation Tests and Fault Injections to Predict the Error Rate of an Application Implemented in SRAM-Based FPGAs.” IEEE Trans. Nucl. Sci. 57 (6): 3500–3505. https://doi.org/10.1109/tns.2010.2087355.

Figure 18.39: Radiation test setup for semiconductor components (Lee et al. 2022) (Source: JD Instrument)

Lee, Minwoong, Namho Lee, Huijeong Gwon, Jongyeol Kim, Younggwan Hwang, and Seongik Cho. 2022. “Design of Radiation-Tolerant High-Speed Signal Processing Circuit for Detecting Prompt Gamma Rays by Nuclear Explosion.” Electronics 11 (18): 2970. https://doi.org/10.3390/electronics11182970.

Limitations


Despite their high accuracy, hardware-based fault injection methods have several limitations that can hinder their widespread adoption:


Cost: FPGA-based fault injection and beam testing require specialized hardware and facilities, which can be expensive to set up and maintain. The cost of these methods can be a significant barrier for researchers and organizations with limited resources.


Scalability: Hardware-based methods are generally slower and less scalable than software-based methods. Injecting faults and collecting data on hardware can take time, limiting the number of experiments performed within a given timeframe. This can be particularly challenging when studying the resilience of large-scale ML systems or conducting statistical analyses that require many fault injection experiments.


Flexibility: Hardware-based methods may not be as flexible as software-based methods in terms of the range of fault models and error models they can support. Modifying the hardware configuration or the experimental setup to accommodate different fault models can be more challenging and time-consuming than software-based methods.


Despite these limitations, hardware-based fault injection methods remain essential tools for validating the accuracy of software-based methods and for studying the impact of faults on ML systems in realistic settings. By combining hardware-based and software-based methods, researchers can gain a more comprehensive understanding of ML systems’ resilience to hardware faults and develop effective mitigation strategies.


18.6.3 Software-based Fault Injection Tools


With the rapid development of ML frameworks in recent years, software-based fault injection tools have gained popularity in studying the resilience of ML systems to hardware faults. These tools simulate the effects of hardware faults by modifying the software representation of the ML model or the underlying computational graph. The rise of ML frameworks such as TensorFlow, PyTorch, and Keras has facilitated the development of fault injection tools that are tightly integrated with these frameworks, making it easier for researchers to conduct fault injection experiments and analyze the results.

Advantages and Trade-offs

Software-based fault injection tools offer several advantages over hardware-based methods:


Speed: Software-based tools are generally faster than hardware-based methods, as they do not require the modification of physical hardware or the setup of specialized equipment. This allows researchers to conduct more fault injection experiments in a shorter time, enabling more comprehensive analyses of the resilience of ML systems.


Flexibility: Software-based tools are more flexible than hardware-based methods in terms of the range of fault and error models they can support. Researchers can easily modify the fault injection tool’s software implementation to accommodate different fault models or to target specific components of the ML system.


Accessibility: Software-based tools are more accessible than hardware-based methods, as they do not require specialized hardware or facilities. This makes it easier for researchers and practitioners to conduct fault injection experiments and study the resilience of ML systems, even with limited resources.

Limitations

Software-based fault injection tools also have some limitations compared to hardware-based methods:


Accuracy: Software-based tools may not always capture the full range of effects that hardware faults can have on the system. Because these tools operate at a higher level of abstraction, they may miss some of the low-level hardware interactions and error propagation mechanisms that can impact the behavior of the ML system.


Fidelity: Software-based tools may provide a lower level of fidelity than hardware-based methods in representing real-world fault conditions. The accuracy of the results obtained from software-based fault injection experiments depends on how closely the software model approximates the actual hardware behavior.

Figure 18.40: Comparison of techniques at layers of abstraction (Source: MAVFI)

Types of Fault Injection Tools

Software-based fault injection tools can be categorized based on their target frameworks or use cases. Here, we will discuss some of the most popular tools in each category:


Ares (Reagen et al. 2018), a fault injection tool initially developed for the Keras framework in 2018, emerged as one of the first tools to study the impact of hardware faults on deep neural networks (DNNs) in the context of the rising popularity of ML frameworks in the mid-to-late 2010s. The tool was validated against a DNN accelerator implemented in silicon, demonstrating its effectiveness in modeling hardware faults. Ares provides a comprehensive study on the impact of hardware faults in both weights and activation values, characterizing the effects of single-bit flips and bit-error rates (BER) on hardware structures. Later, the Ares framework was extended to support the PyTorch ecosystem, enabling researchers to investigate hardware faults in a more modern setting and further extending its utility in the field.
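A bit-error-rate (BER) sweep of the kind Ares characterizes can be sketched as follows. This is a hand-rolled illustration, not Ares or its API: weights are a plain list of float32 values, and each bit of each weight is flipped independently with probability `ber`.

```python
import random, struct

def inject_ber(weights, ber, seed=0):
    """Flip each bit of each float32 weight independently with probability `ber`."""
    rng = random.Random(seed)   # seeded for reproducible experiments
    out = []
    for w in weights:
        (raw,) = struct.unpack("<I", struct.pack("<f", w))
        for b in range(32):
            if rng.random() < ber:
                raw ^= 1 << b
        out.append(struct.unpack("<f", struct.pack("<I", raw))[0])
    return out

weights = [0.5] * 1000
for ber in (0.0, 1e-4, 1e-2):
    faulty = inject_ber(weights, ber)
    corrupted = sum(a != b for a, b in zip(weights, faulty))
    print(f"BER={ber}: {corrupted}/1000 weights corrupted")
```

Sweeping `ber` and re-running inference at each setting yields the accuracy-versus-BER curves used to judge how much hardware unreliability a model can absorb before its predictions degrade.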

Reagen, Brandon, Udit Gupta, Lillian Pentecost, Paul Whatmough, Sae Kyu Lee, Niamh Mulholland, David Brooks, and Gu-Yeon Wei. 2018. “Ares: A Framework for Quantifying the Resilience of Deep Neural Networks.” In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac.2018.8465834.
Figure 18.41: Hardware bitflips in ML workloads can cause phantom objects and misclassifications, which can erroneously be used downstream by larger systems, such as in autonomous driving. Shown above is a correct and faulty version of the same image using the PyTorchFI injection framework.

PyTorchFI (Mahmoud et al. 2020), a fault injection tool specifically designed for the PyTorch framework, was developed in 2020 in collaboration with Nvidia Research. It enables the injection of faults into the weights, activations, and gradients of PyTorch models, supporting a wide range of fault models. By leveraging the GPU acceleration capabilities of PyTorch, PyTorchFI provides a fast and efficient implementation for conducting fault injection experiments on large-scale ML systems, as shown in Figure fig-phantom-objects. The tool’s speed and ease of use have led to widespread adoption in the community, resulting in multiple developer-led projects, such as PyTorchALFI by Intel Labs, which focuses on safety in automotive environments. Follow-up PyTorch-centric tools for fault injection include Dr. DNA by Meta (Ma et al. 2024) (which further facilitates the Pythonic programming model for ease of use), and the GoldenEye framework (Mahmoud et al. 2022), which incorporates novel numerical datatypes (such as AdaptivFloat (Tambe et al. 2020) and BlockFloat) in the context of hardware bit flips.
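The end-to-end effect of a single weight-level fault, of the kind PyTorchFI injects, can be shown with a toy classifier. This sketch is a hand-rolled analogue, not PyTorchFI's actual API: the two-input "network," its weights, and the input are all hypothetical stand-ins.

```python
import struct

def flip_bit(w, bit):
    # Flip one bit in the IEEE-754 float32 encoding of w.
    (raw,) = struct.unpack("<I", struct.pack("<f", w))
    return struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))[0]

# A toy two-input "classifier" standing in for a real network layer.
weights = [0.5, -0.25]

def classify(x, w):
    score = sum(xi * wi for xi, wi in zip(x, w))
    return 1 if score > 0 else 0

x = [1.0, 1.0]
clean = classify(x, weights)                          # 0.25 > 0  -> class 1
faulty_w = [flip_bit(weights[0], 31)] + weights[1:]   # sign flip: 0.5 -> -0.5
faulty = classify(x, faulty_w)                        # -0.75 <= 0 -> class 0
print(clean, faulty)   # the single bitflip flips the predicted class
```

One flipped bit in one weight changes the prediction, the scalar analogue of the phantom objects and misclassifications in Figure 18.41; injection frameworks automate exactly this perturb-and-compare loop across millions of weights.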

Mahmoud, Abdulrahman, Neeraj Aggarwal, Alex Nobbe, Jose Rodrigo Sanchez Vicarte, Sarita V. Adve, Christopher W. Fletcher, Iuri Frosio, and Siva Kumar Sastry Hari. 2020. “PyTorchFI: A Runtime Perturbation Tool for DNNs.” In 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), 25–31. IEEE. https://doi.org/10.1109/dsn-w50199.2020.00014.

Ma, Dongning, Fred Lin, Alban Desmaison, Joel Coburn, Daniel Moore, Sriram Sankar, and Xun Jiao. 2024. “Dr. DNA: Combating Silent Data Corruptions in Deep Learning Using Distribution of Neuron Activations.” In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 239–52. ACM. https://doi.org/10.1145/3620666.3651349.

Mahmoud, Abdulrahman, Thierry Tambe, Tarek Aloui, David Brooks, and Gu-Yeon Wei. 2022. “GoldenEye: A Platform for Evaluating Emerging Numerical Data Formats in DNN Accelerators.” In 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 206–14. IEEE. https://doi.org/10.1109/dsn53405.2022.00031.

Tambe, Thierry, En-Yu Yang, Zishen Wan, Yuntian Deng, Vijay Janapa Reddi, Alexander Rush, David Brooks, and Gu-Yeon Wei. 2020. “Algorithm-Hardware Co-Design of Adaptive Floating-Point Encodings for Resilient Deep Learning Inference.” In 2020 57th ACM/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac18072.2020.9218516.

Chen, Zitao, Niranjhana Narayanan, Bo Fang, Guanpeng Li, Karthik Pattabiraman, and Nathan DeBardeleben. 2020. “TensorFI: A Flexible Fault Injection Framework for TensorFlow Applications.” In 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE), 426–35. IEEE. https://doi.org/10.1109/issre5003.2020.00047.

Chen, Zitao, Guanpeng Li, Karthik Pattabiraman, and Nathan DeBardeleben. 2019. “BinFI: An Efficient Fault Injector for Safety-Critical Machine Learning Systems.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. SC ’19. New York, NY, USA: ACM. https://doi.org/10.1145/3295500.3356177.

TensorFI (Chen et al. 2020), or the TensorFlow Fault Injector, is a fault injection tool developed specifically for the TensorFlow framework. Analogous to Ares and PyTorchFI, TensorFI is considered the state-of-the-art tool for ML robustness studies in the TensorFlow ecosystem. It allows researchers to inject faults into the computational graph of TensorFlow models and study their impact on the model’s performance, supporting a wide range of fault models. One of the key benefits of TensorFI is its ability to evaluate the resilience of various ML models, not just DNNs. Further advancements, such as BinFI (Chen et al. 2019), provide a mechanism to speed up error injection experiments by focusing on the “important” bits in the system, accelerating the process of ML robustness analysis and prioritizing the critical components of a model.


NVBitFI (T. Tsai et al. 2021), a general-purpose fault injection tool developed by Nvidia for their GPU platforms, operates at a lower level compared to framework-specific tools like Ares, PyTorchFI, and TensorFI. While those tools target specific deep learning frameworks to implement and perform robustness analysis, NVBitFI targets the underlying hardware assembly code for fault injection. This allows researchers to inject faults into any application running on Nvidia GPUs, making it a versatile tool for studying the resilience of ML systems and other GPU-accelerated applications. By enabling users to inject errors at the architectural level, NVBitFI provides a more general-purpose fault model that is not restricted to just ML models. As Nvidia’s GPU systems are commonly used in many ML-based systems, NVBitFI is a valuable tool for comprehensive fault injection analysis across various applications.

Tsai, Timothy, Siva Kumar Sastry Hari, Michael Sullivan, Oreste Villa, and Stephen W. Keckler. 2021. “NVBitFI: Dynamic Fault Injection for GPUs.” In 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 284–91. IEEE. https://doi.org/10.1109/dsn48987.2021.00041.

Domain-specific Examples

Domain-specific fault injection tools have been developed to address various ML application domains’ unique challenges and requirements, such as autonomous vehicles and robotics. This section highlights three domain-specific fault injection tools: DriveFI and PyTorchALFI for autonomous vehicles and MAVFI for uncrewed aerial vehicles (UAVs). These tools enable researchers to inject hardware faults into these complex systems’ perception, control, and other subsystems, allowing them to study the impact of faults on system performance and safety. The development of these software-based fault injection tools has greatly expanded the capabilities of the ML community to develop more robust and reliable systems that can operate safely and effectively in the presence of hardware faults.


DriveFI (Jha et al. 2019) is a fault injection tool designed for autonomous vehicles. It enables the injection of hardware faults into the perception and control pipelines of autonomous vehicle systems, allowing researchers to study the impact of these faults on the system’s performance and safety. DriveFI has been integrated with industry-standard autonomous driving platforms, such as Nvidia DriveAV and Baidu Apollo, making it a valuable tool for evaluating the resilience of autonomous vehicle systems.

Jha, Saurabh, Subho Banerjee, Timothy Tsai, Siva K. S. Hari, Michael B. Sullivan, Zbigniew T. Kalbarczyk, Stephen W. Keckler, and Ravishankar K. Iyer. 2019. “ML-Based Fault Injection for Autonomous Vehicles: A Case for Bayesian Fault Injection.” In 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 112–24. IEEE. https://doi.org/10.1109/dsn.2019.00025.

Gräfe, Ralf, Qutub Syed Sha, Florian Geissler, and Michael Paulitsch. 2023. “Large-Scale Application of Fault Injection into PyTorch Models - an Extension to PyTorchFI for Validation Efficiency.” In 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S), 56–62. IEEE. https://doi.org/10.1109/dsn-s58398.2023.00025.

PyTorchALFI (Gräfe et al. 2023) is an extension of PyTorchFI developed by Intel Labs for the autonomous vehicle domain. It builds upon PyTorchFI’s fault injection capabilities. It adds features specifically tailored for evaluating the resilience of autonomous vehicle systems, such as the ability to inject faults into the camera and LiDAR sensor data.


MAVFI (Hsiao et al. 2023) is a fault injection tool designed for the robotics domain, specifically for uncrewed aerial vehicles (UAVs). MAVFI is built on top of the Robot Operating System (ROS) framework and allows researchers to inject faults into the various components of a UAV system, such as sensors, actuators, and control algorithms. By evaluating the impact of these faults on the UAV’s performance and stability, researchers can develop more resilient and fault-tolerant UAV systems.

Hsiao, Yu-Shun, Zishen Wan, Tianyu Jia, Radhika Ghosal, Abdulrahman Mahmoud, Arijit Raychowdhury, David Brooks, Gu-Yeon Wei, and Vijay Janapa Reddi. 2023. “MAVFI: An End-to-End Fault Analysis Framework with Anomaly Detection and Recovery for Micro Aerial Vehicles.” In 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1–6. IEEE. https://doi.org/10.23919/date56975.2023.10137246.

The development of software-based fault injection tools has greatly expanded the capabilities of researchers and practitioners to study the resilience of ML systems to hardware faults. By leveraging the speed, flexibility, and accessibility of these tools, the ML community can develop more robust and reliable systems that can operate safely and effectively in the presence of hardware faults.

18.6.4 Bridging the Gap between Hardware and Software Error Models


While software-based fault injection tools offer many advantages in speed, flexibility, and accessibility, they may not always accurately capture the full range of effects that hardware faults can have on the system. This is because software-based tools operate at a higher level of abstraction than hardware-based methods and may miss some of the low-level hardware interactions and error propagation mechanisms that can impact the behavior of the ML system.


As Bolchini et al. (2023) illustrate in their work, hardware errors can manifest in complex spatial distribution patterns that are challenging to fully replicate with software-based fault injection alone. They identify four distinct patterns: (a) single point, where the fault corrupts a single value in a feature map; (b) same row, where the fault corrupts a partial or entire row in a single feature map; (c) bullet wake, where the fault corrupts the same location across multiple feature maps; and (d) shatter glass, which combines the effects of the same row and bullet wake patterns, as shown in Figure fig-hardware-errors-bolchini. These intricate error propagation mechanisms highlight the need for hardware-aware fault injection techniques to accurately assess the resilience of ML systems.

Figure 18.42: Hardware errors may manifest themselves in different ways at the software level, as classified by Bolchini et al. (Bolchini et al. 2023)

Bolchini, Cristiana, Luca Cassano, Antonio Miele, and Alessandro Toschi. 2023. “Fast and Accurate Error Simulation for CNNs Against Soft Errors.” IEEE Trans. Comput. 72 (4): 984–97. https://doi.org/10.1109/tc.2022.3184274.

Researchers have developed tools to address this issue by bridging the gap between low-level hardware error models and higher-level software error models. One such tool is Fidelity, designed to map patterns between hardware-level faults and their software-level manifestations.


Fidelity: Bridging the Gap


Fidelity (He, Balaprakash, and Li 2020) is a tool for accurately modeling hardware faults in software-based fault injection experiments. It achieves this by carefully studying the relationship between hardware-level faults and their impact on the software representation of the ML system.

He, Yi, Prasanna Balaprakash, and Yanjing Li. 2020. “FIdelity: Efficient Resilience Analysis Framework for Deep Learning Accelerators.” In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 270–81. IEEE. https://doi.org/10.1109/micro50266.2020.00033.

The key insights behind Fidelity are:

  • Fault Propagation: Fidelity models how faults propagate through the hardware and manifest as errors in the software-visible state of the system. By understanding these propagation patterns, Fidelity can more accurately simulate the effects of hardware faults in software-based experiments.

  • Fault Equivalence: Fidelity identifies equivalent classes of hardware faults that produce similar software-level errors. This allows researchers to design software-based fault models that are representative of the underlying hardware faults without the need to model every possible hardware fault individually.

  • Layered Approach: Fidelity employs a layered approach to fault modeling, where the effects of hardware faults are propagated through multiple levels of abstraction, from the hardware to the software level. This approach ensures that the software-based fault models are grounded in the actual behavior of the hardware.

By incorporating these insights, Fidelity enables software-based fault injection tools to capture the effects of hardware faults on ML systems accurately. This is particularly important for safety-critical applications, where the system’s resilience to hardware faults is paramount.
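While Fidelity's full methodology is beyond the scope of a short example, the basic mechanic of software-level fault injection that such tools build on, corrupting a single bit of a model parameter to emulate a transient hardware fault, can be sketched as follows (a minimal illustration; all function names are hypothetical, not Fidelity's API):

```python
import struct
import random

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit of a float32 representation to emulate a transient fault."""
    packed = struct.unpack("<I", struct.pack("<f", value))[0]
    packed ^= 1 << bit  # XOR toggles the chosen bit
    return struct.unpack("<f", struct.pack("<I", packed))[0]

def inject_fault(weights, rng=random):
    """Corrupt one randomly chosen weight at one randomly chosen bit position."""
    idx = rng.randrange(len(weights))
    bit = rng.randrange(32)
    corrupted = list(weights)
    corrupted[idx] = flip_bit(weights[idx], bit)
    return corrupted, idx, bit

weights = [0.5, -1.25, 3.0, 0.125]
corrupted, idx, bit = inject_fault(weights)
print(f"weight {idx}, bit {bit}: {weights[idx]} -> {corrupted[idx]}")
```

Running inference on the corrupted weights and comparing outputs against the fault-free run is the core loop of such experiments; Fidelity's contribution is constraining which bits and locations are flipped so the injected faults match real hardware behavior.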

+
+
+

Importance of Capturing True Hardware Behavior

+

Capturing true hardware behavior in software-based fault injection tools is crucial for several reasons:

+
    +
  • Accuracy: By accurately modeling the effects of hardware faults, software-based tools can provide more reliable insights into the resilience of ML systems. This is essential for designing and validating fault-tolerant systems that can operate safely and effectively in the presence of hardware faults.

  • +
  • Reproducibility: When software-based tools accurately capture hardware behavior, fault injection experiments become more reproducible across different platforms and environments. This is important for the scientific study of ML system resilience, as it allows researchers to compare and validate results across different studies and implementations.

  • +
  • Efficiency: Software-based tools that capture true hardware behavior can be more efficient in their fault injection experiments by focusing on the most representative and impactful fault models. This allows researchers to cover a wider range of fault scenarios and system configurations with limited computational resources.

  • +
  • Mitigation Strategies: Understanding how hardware faults manifest at the software level is crucial for developing effective mitigation strategies. By accurately capturing hardware behavior, software-based fault injection tools can help researchers identify the most vulnerable components of the ML system and design targeted hardening techniques to improve resilience.

  • +
+

Tools like Fidelity are vital in advancing the state-of-the-art in ML system resilience research. These tools enable researchers to conduct more accurate, reproducible, and efficient fault injection experiments by bridging the gap between hardware and software error models. As the complexity and criticality of ML systems continue to grow, the importance of capturing true hardware behavior in software-based fault injection tools will only become more apparent.

+

Ongoing research in this area aims to refine the mapping between hardware and software error models and develop new techniques for efficiently simulating hardware faults in software-based experiments. As these tools mature, they will provide the ML community with increasingly powerful and accessible means to study and improve the resilience of ML systems to hardware faults.

+
+
+
+
+

18.7 Conclusion

+

Developing robust and resilient AI is paramount as machine learning systems become increasingly integrated into safety-critical applications and real-world environments. This chapter has explored the key challenges to AI robustness arising from hardware faults, malicious attacks, distribution shifts, and software bugs.

+

Some of the key takeaways include the following:

+
    +
  • Hardware Faults: Transient, permanent, and intermittent faults in hardware components can corrupt computations and degrade the performance of machine learning models if not properly detected and mitigated. Techniques such as redundancy, error correction, and fault-tolerant designs play a crucial role in building resilient ML systems that can withstand hardware faults.

  • +
  • Model Robustness: Malicious actors can exploit vulnerabilities in ML models through adversarial attacks and data poisoning, aiming to induce targeted misclassifications, skew the model’s learned behavior, or compromise the system’s integrity and reliability. Also, distribution shifts can occur when the data distribution encountered during deployment differs from those seen during training, leading to performance degradation. Implementing defensive measures, including adversarial training, anomaly detection, robust model architectures, and techniques such as domain adaptation, transfer learning, and continual learning, is essential to safeguard against these challenges and ensure the model’s reliability and generalization in dynamic environments.

  • +
  • Software Faults: Faults in ML frameworks, libraries, and software stacks can propagate errors, degrade performance, and introduce security vulnerabilities. Rigorous testing, runtime monitoring, and adopting fault-tolerant design patterns are essential for building robust software infrastructure supporting reliable ML systems.

  • +
+

As ML systems take on increasingly complex tasks with real-world consequences, prioritizing resilience becomes critical. The tools and frameworks discussed in this chapter, including fault injection techniques, error analysis methods, and robustness evaluation frameworks, provide practitioners with the means to thoroughly test and harden their ML systems against various failure modes and adversarial conditions.

+

Moving forward, resilience must be a central focus throughout the entire AI development lifecycle, from data collection and model training to deployment and monitoring. By proactively addressing the multifaceted challenges to robustness, we can develop trustworthy, reliable ML systems that can navigate the complexities and uncertainties of real-world environments.

+

Future research in robust ML should continue to advance techniques for detecting and mitigating faults, attacks, and distributional shifts. Additionally, exploring novel paradigms for developing inherently resilient AI architectures, such as self-healing systems or fail-safe mechanisms, will be crucial in pushing the boundaries of AI robustness. By prioritizing resilience and investing in developing robust AI systems, we can unlock the full potential of machine learning technologies while ensuring their safe, reliable, and responsible deployment in real-world applications. As AI continues to shape our future, building resilient systems that can withstand the challenges of the real world will be a defining factor in the success and societal impact of this transformative technology.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+

Coming soon.

+
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/sustainable_ai/sustainable_ai.html b/contents/sustainable_ai/sustainable_ai.html new file mode 100644 index 00000000..d10e4f20 --- /dev/null +++ b/contents/sustainable_ai/sustainable_ai.html @@ -0,0 +1,1947 @@ + + + + + + + + + +Machine Learning Systems - 16  Sustainable AI + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

16  Sustainable AI

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: 3D illustration on a light background of a sustainable AI network interconnected with a myriad of eco-friendly energy sources. The AI actively manages and optimizes its energy from sources like solar arrays, wind turbines, and hydro dams, emphasizing power efficiency and performance. Deep neural networks spread throughout, receiving energy from these sustainable resources.
+
+
+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand AI’s environmental impact, including energy consumption, carbon emissions, electronic waste, and biodiversity effects.
  • +
  • Learn about methods and best practices for developing sustainable AI systems
  • +
  • Appreciate the importance of taking a lifecycle perspective when evaluating and addressing the sustainability of AI systems.
  • +
  • Recognize the roles various stakeholders, such as researchers, corporations, policymakers, and end users, play in furthering responsible and sustainable AI progress.
  • +
  • Learn about specific frameworks, metrics, and tools to enable greener AI development.
  • +
  • Appreciate real-world case studies like Google’s 4M efficiency practices that showcase how organizations are taking tangible steps to improve AI’s environmental record
  • +
+
+
+
+

16.1 Introduction

+

The rapid advancements in artificial intelligence (AI) and machine learning (ML) have led to many beneficial applications and optimizations for performance efficiency. However, the remarkable growth of AI comes with a significant yet often overlooked cost: its environmental impact. The most recent report released by the IPCC, the international body leading scientific assessments of climate change and its impacts, emphasized the pressing importance of tackling climate change. Without immediate efforts to decrease global \(\textrm{CO}_2\) emissions by at least 43 percent before 2030, we will exceed 1.5 degrees Celsius of global warming (Winkler et al. 2022). This could initiate positive feedback loops, pushing temperatures even higher. Beyond environmental issues, the United Nations has recognized 17 Sustainable Development Goals (SDGs) in which AI can play an important role, and which, in turn, should inform how AI systems are developed. As the field continues expanding, considering sustainability is crucial.

+
+Winkler, Harald, Franck Lecocq, Hans Lofgren, Maria Virginia Vilariño, Sivan Kartha, and Joana Portugal-Pereira. 2022. “Examples of Shifting Development Pathways: Lessons on How to Enable Broader, Deeper, and Faster Climate Action.” Climate Action 1 (1). https://doi.org/10.1007/s44168-022-00026-1. +
+Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, et al. 2023. “Artificial Intelligence Index Report 2023.” ArXiv Preprint abs/2310.03715. https://arxiv.org/abs/2310.03715. +

AI systems, particularly large language models like GPT-3 and computer vision models like DALL-E 2, require massive amounts of computational resources for training. For example, GPT-3 was estimated to consume 1,300 megawatt-hours of electricity, equal to the monthly consumption of 1,450 average US households (Maslej et al. 2023), or, put another way, enough energy to supply a single average US household for 120 years! This immense energy demand stems primarily from power-hungry data centers with servers running intense computations to train these complex neural networks for days or weeks.
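The household comparison is straightforward back-of-envelope arithmetic; the sketch below assumes roughly 10.7 MWh of annual electricity use per average US household (an illustrative figure, not taken from the cited report):

```python
# Back-of-envelope check of the GPT-3 energy comparison.
# Assumed figure: an average US household uses ~10.7 MWh of electricity per year.
GPT3_TRAINING_MWH = 1_300
HOUSEHOLD_MWH_PER_YEAR = 10.7

households_per_month = GPT3_TRAINING_MWH / (HOUSEHOLD_MWH_PER_YEAR / 12)
household_years = GPT3_TRAINING_MWH / HOUSEHOLD_MWH_PER_YEAR

print(f"~{households_per_month:,.0f} households for one month")
print(f"~{household_years:.0f} years for a single household")
```

Both figures land close to the numbers quoted above (about 1,450 households for a month, or about 120 household-years).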

+

Current estimates indicate that the carbon emissions produced from developing a single, sophisticated AI model can equal the emissions over the lifetime of five standard gasoline-powered vehicles (Strubell, Ganesh, and McCallum 2019). A significant portion of the electricity presently consumed by data centers is generated from nonrenewable sources such as coal and natural gas, resulting in data centers contributing around 1% of total worldwide carbon emissions. This is comparable to the emissions from the entire airline sector. This immense carbon footprint demonstrates the pressing need to transition to renewable power sources such as solar and wind to operate AI development.

+
+Prakash, Shvetank, Matthew Stewart, Colby Banbury, Mark Mazumder, Pete Warden, Brian Plancher, and Vijay Janapa Reddi. 2023. “Is TinyML Sustainable? Assessing the Environmental Impacts of Machine Learning on Microcontrollers.” ArXiv Preprint. https://arxiv.org/abs/2301.11899. +

Additionally, even small-scale AI systems deployed to edge devices as part of TinyML have environmental impacts that should not be ignored (Prakash, Stewart, et al. 2023). The specialized hardware required for AI has an environmental toll from natural resource extraction and manufacturing. GPUs, CPUs, and chips like TPUs depend on rare earth metals whose mining and processing generate substantial pollution. The production of these components also has its energy demands. Furthermore, collecting, storing, and preprocessing data used to train both small- and large-scale models comes with environmental costs, further exacerbating the sustainability implications of ML systems.

+

Thus, while AI promises innovative breakthroughs in many fields, sustaining progress requires addressing sustainability challenges. AI can continue advancing responsibly by optimizing models’ efficiency, exploring alternative specialized hardware and renewable energy sources for data centers, and tracking its overall environmental impact.

+
+
+

16.2 Social and Ethical Responsibility

+

The environmental impact of AI is not just a technical issue but also an ethical and social one. As AI becomes more integrated into our lives and industries, its sustainability becomes increasingly critical.

+
+

16.2.1 Ethical Considerations

+

The scale of AI’s environmental footprint raises profound ethical questions about the responsibilities of AI developers and companies to minimize their carbon emissions and energy usage. As the creators of AI systems and technologies that can have sweeping global impacts, developers have an ethical obligation to consciously integrate environmental stewardship into their design process, even if sustainability comes at the cost of some efficiency gains.

+

There is a clear and present need for us to have open and honest conversations about AI’s environmental tradeoffs earlier in the development lifecycle. Researchers should feel empowered to voice concerns if organizational priorities do not align with ethical goals, as in the case of the open letter to pause giant AI experiments.

+

Additionally, there is an increasing need for AI companies to scrutinize their contributions to climate change and environmental harm. Large tech firms are responsible for the cloud infrastructure, data center energy demands, and resource extraction required to power today’s AI. Leadership should assess whether organizational values and policies promote sustainability, from hardware manufacturing through model training pipelines.

+

Furthermore, voluntary self-regulation alone may not be enough; governments may need to introduce new regulations aimed at sustainable AI standards and practices if we hope to curb the projected energy explosion of ever-larger models. Reported metrics like computing usage, carbon footprint, and efficiency benchmarks could hold organizations accountable.

+

Through ethical principles, company policies, and public rules, AI technologists and corporations have a profound duty to our planet to ensure the responsible and sustainable advancement of technology positioned to transform modern society radically. We owe it to future generations to get this right.

+
+
+

16.2.2 Long-term Sustainability

+

The massive projected expansion of AI raises urgent concerns about its long-term sustainability. As AI software and applications rapidly increase in complexity and usage across industries, demand for computing power and infrastructure will skyrocket exponentially in the coming years.

+

To put the scale of projected growth in perspective, the total computing capacity required for training AI models saw an astonishing 350,000x increase from 2012 to 2019 (R. Schwartz et al. 2020). Researchers forecast over an order of magnitude growth each year moving forward as personalized AI assistants, autonomous technology, precision medicine tools, and more are developed. Similar trends are estimated for embedded ML systems, with an estimated 2.5 billion AI-enabled edge devices deployed by 2030.

+

Managing this expansion level requires software and hardware-focused breakthroughs in efficiency and renewable integration from AI engineers and scientists. On the software side, novel techniques in model optimization, distillation, pruning, low-precision numerics, knowledge sharing between systems, and other areas must become widespread best practices to curb energy needs. For example, realizing even a 50% reduction in computational demand per capability doubling would have a massive compounding effect on total energy use.

+

On the hardware infrastructure side, due to the increasing costs of data transfer, storage, cooling, and space, continuing today’s centralized server farm model at data centers is likely infeasible long-term (Lannelongue, Grealey, and Inouye 2021). Exploring alternative decentralized computing options around “edge AI” on local devices or within telco networks can alleviate scaling pressures on power-hungry hyperscale data centers. Likewise, the shift towards carbon-neutral, hybrid renewable energy sources powering leading cloud provider data centers worldwide will be essential.

+
+Lannelongue, Loı̈c, Jason Grealey, and Michael Inouye. 2021. “Green Algorithms: Quantifying the Carbon Footprint of Computation.” Adv. Sci. 8 (12): 2100707. https://doi.org/10.1002/advs.202100707. +
+
+

16.2.3 AI for Environmental Good

+

While much focus goes on AI’s sustainability challenges, these powerful technologies provide unique solutions to combat climate change and drive environmental progress. For example, ML can continuously optimize smart power grids to improve renewable integration and electricity distribution efficiency across networks (Zhang, Han, and Deng 2018). Models can ingest the real-time status of a power grid and weather forecasts to allocate and shift sources responding to supply and demand.

+
+Zhang, Dongxia, Xiaoqing Han, and Chunyu Deng. 2018. “Review on the Research and Practice of Deep Learning and Reinforcement Learning in Smart Grids.” CSEE Journal of Power and Energy Systems 4 (3): 362–70. https://doi.org/10.17775/cseejpes.2018.00520. +
+Lam, Remi, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Ferran Alet, Suman Ravuri, et al. 2023. “Learning Skillful Medium-Range Global Weather Forecasting.” Science 382 (6677): 1416–21. https://doi.org/10.1126/science.adi2336. +
+Kurth, Thorsten, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, and Anima Anandkumar. 2023. FourCastNet: Accelerating Global High-Resolution Weather Forecasting Using Adaptive Fourier Neural Operators.” In Proceedings of the Platform for Advanced Scientific Computing Conference, 1–11. ACM. https://doi.org/10.1145/3592979.3593412. +

Fine-tuned neural networks have also proven remarkably effective at next-generation weather forecasting (Lam et al. 2023) and climate modeling (Kurth et al. 2023). They can rapidly analyze massive volumes of climate data to boost extreme event preparation and resource planning for hurricanes, floods, droughts, and more. Climate researchers have achieved state-of-the-art storm path accuracy by combining AI simulations with traditional numerical models.

+

AI also enables better tracking of biodiversity (Silvestro et al. 2022), wildlife (D. Schwartz et al. 2021), ecosystems, and illegal deforestation using drones and satellite feeds. Computer vision algorithms can automate species population estimates and habitat health assessments over huge untracked regions. These capabilities provide conservationists with powerful tools for combating poaching (Bondi et al. 2018), reducing species extinction risks, and understanding ecological shifts.

+
+Silvestro, Daniele, Stefano Goria, Thomas Sterner, and Alexandre Antonelli. 2022. “Improving Biodiversity Protection Through Artificial Intelligence.” Nature Sustainability 5 (5): 415–24. https://doi.org/10.1038/s41893-022-00851-6. +
+Schwartz, Daniel, Jonathan Michael Gomes Selman, Peter Wrege, and Andreas Paepcke. 2021. “Deployment of Embedded Edge-AI for Wildlife Monitoring in Remote Regions.” In 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), 1035–42. IEEE; IEEE. https://doi.org/10.1109/icmla52953.2021.00170. +
+Bondi, Elizabeth, Ashish Kapoor, Debadeepta Dey, James Piavis, Shital Shah, Robert Hannaford, Arvind Iyer, Lucas Joppa, and Milind Tambe. 2018. “Near Real-Time Detection of Poachers from Drones in AirSim.” In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, edited by Jérôme Lang, 5814–16. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2018/847. +

Targeted investment in AI applications for environmental sustainability, cross-sector data sharing, and model accessibility can profoundly accelerate solutions to pressing ecological issues. Emphasizing AI for social good steers innovation in cleaner directions, guiding these world-shaping technologies towards ethical and responsible development.

+
+
+

16.2.4 Case Study

+

Google’s data centers are foundational to powering products like Search, Gmail, and YouTube, which are used by billions daily. However, keeping the vast server farms up and running requires substantial energy, particularly for vital cooling systems. Google continuously strives to enhance efficiency across operations. Yet progress was proving difficult through traditional methods alone, considering the complex, custom dynamics involved. This challenge prompted an ML breakthrough, yielding potential savings.

+

After over a decade of optimizing data center design, inventing energy-efficient computing hardware, and securing renewable energy sources, Google brought DeepMind scientists to unlock further advances. The AI experts faced intricate factors surrounding the functioning of industrial cooling apparatuses. Equipment like pumps and chillers interact nonlinearly, while external weather and internal architectural variables also change. Capturing this complexity confounded rigid engineering formulas and human intuition.

+

The DeepMind team leveraged Google’s extensive historical sensor data detailing temperatures, power draw, and other attributes as training inputs. They built a flexible system based on neural networks to model the relationships and predict optimal configurations, minimizing power usage effectiveness (PUE) (Barroso, Hölzle, and Ranganathan 2019); PUE, the standard measurement of how efficiently a data center uses energy, is the ratio of total facility power consumed to the power used directly for computing operations. When tested live, the AI system delivered remarkable gains beyond prior innovations, lowering cooling energy by 40% for a 15% drop in total PUE, a new site record. The generalizable framework learned cooling dynamics rapidly across shifting conditions that static rules could not match. The breakthrough highlights AI’s rising role in transforming modern tech and enabling a sustainable future.
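PUE itself is simple to compute; the sketch below, with assumed facility numbers chosen purely for illustration, shows how a reduction in cooling energy translates into a lower PUE:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power over IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility used for illustration (all numbers are assumptions).
it_kw = 10_000       # power drawn by servers, storage, and networking
cooling_kw = 4_000   # overhead dominated by cooling
other_kw = 1_000     # lighting, power distribution losses, etc.

before = pue(it_kw + cooling_kw + other_kw, it_kw)       # PUE = 1.50
after = pue(it_kw + 0.6 * cooling_kw + other_kw, it_kw)  # with 40% less cooling energy

print(f"PUE before: {before:.2f}, after: {after:.2f}")
```

An ideal facility, where all power goes to computing, would have a PUE of exactly 1.0; every point above that is overhead.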

+
+Barroso, Luiz André, Urs Hölzle, and Parthasarathy Ranganathan. 2019. The Datacenter as a Computer: Designing Warehouse-Scale Machines. Springer International Publishing. https://doi.org/10.1007/978-3-031-01761-2. +
+
+
+

16.3 Energy Consumption

+
+

16.3.1 Understanding Energy Needs

+

Understanding the energy needs for training and operating AI models is crucial in the rapidly evolving field of AI. With AI entering widespread use in many new fields (Bohr and Memarzadeh 2020; Sudhakar, Sze, and Karaman 2023), the demand for AI-enabled devices and data centers is expected to explode. This understanding clarifies why AI, particularly deep learning, is often labeled energy-intensive.

+
+Bohr, Adam, and Kaveh Memarzadeh. 2020. “The Rise of Artificial Intelligence in Healthcare Applications.” In Artificial Intelligence in Healthcare, 25–60. Elsevier. https://doi.org/10.1016/b978-0-12-818438-7.00002-2. +
+

Energy Requirements for AI Training

+

The training of complex AI systems like large deep learning models can demand startlingly high levels of computing power, with profound energy implications. Consider OpenAI’s state-of-the-art language model GPT-3 as a prime example. This system pushes the frontiers of text generation through algorithms trained on massive datasets. Yet, the energy GPT-3 consumed for a single training cycle could rival an entire small town’s monthly usage. In recent years, these generative AI models have gained increasing popularity, leading to more models being trained. Beyond the increased number of models, the number of parameters in these models will also increase. Research shows that increasing the model size (number of parameters), dataset size, and compute used for training improves performance smoothly, with no signs of saturation (Kaplan et al. 2020). Figure fig-scaling-laws shows how the test loss decreases as each of these three factors increases.

+
+
+
+ +
+
+Figure 16.1: Performance improves with compute, dataset size, and model size. Credit: Kaplan et al. (2020). +
+
+Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. “Scaling Laws for Neural Language Models.” ArXiv Preprint abs/2001.08361. https://arxiv.org/abs/2001.08361. +
+
+

What drives such immense requirements? During training, models like GPT-3 learn their capabilities by continuously processing huge volumes of data to adjust internal parameters. The processing capacity enabling AI’s rapid advances also contributes to surging energy usage, especially as datasets and models balloon. GPT-3 highlights a steady trajectory in the field where each leap in AI’s sophistication traces back to ever more substantial computational power and resources. Its predecessor, GPT-2, required roughly 10x less training compute for its 1.5 billion parameters, a difference now dwarfed by magnitudes as GPT-3 comprises 175 billion parameters. Sustaining this trajectory toward increasingly capable AI raises challenges for energy and infrastructure provision ahead.
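The smooth improvement with scale can be sketched with the power-law form reported in the scaling-law work cited above, where loss falls as L(N) = (N_c / N)^α_N in model size; the constants below are approximate values from Kaplan et al. (2020), used here only for illustration:

```python
# Illustrative power-law scaling of test loss with parameter count,
# L(N) = (N_c / N) ** alpha_N, with approximate constants from Kaplan et al. (2020).
N_C = 8.8e13      # characteristic parameter scale
ALPHA_N = 0.076   # scaling exponent for model size

def loss(num_params: float) -> float:
    """Predicted test loss for a model with the given parameter count."""
    return (N_C / num_params) ** ALPHA_N

for name, n in [("GPT-2", 1.5e9), ("GPT-3", 1.75e11)]:
    print(f"{name}: N = {n:.1e}, predicted loss ~ {loss(n):.2f}")
```

The small exponent is the crux of the sustainability problem: each constant-factor drop in loss demands a multiplicative increase in parameters, and hence in compute and energy.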

+
+
+

Operational Energy Use

+

Developing and training AI models requires immense data, computing power, and energy. However, the deployment and operation of those models also incur significant recurrent resource costs over time. AI systems are now integrated across various industries and applications and are entering the daily lives of an increasing demographic. Their cumulative operational energy and infrastructure impacts could eclipse the upfront model training.

+

This concept is reflected in the demand for training and inference hardware in data centers and on the edge. Inference refers to using a trained model to make predictions or decisions on real-world data. According to a recent McKinsey analysis, the need for advanced systems to train ever-larger models is rapidly growing. However, inference computations already make up a dominant and increasing portion of total AI workloads, as shown in Figure fig-mckinsey. Running real-time inference with trained models, whether for image classification, speech recognition, or predictive analytics, invariably demands computing hardware like servers and chips. Yet even a model handling thousands of facial recognition requests or natural language queries daily is dwarfed by massive platforms like Meta, where inference runs on millions of photos and videos shared on social media and the infrastructure energy requirements continue to scale!

+
+
+
+ +
+
+Figure 16.2: Market size for inference and training hardware. Credit: McKinsey. +
+
+
+

Algorithms powering AI-enabled smart assistants, automated warehouses, self-driving vehicles, tailored healthcare, and more have marginal individual energy footprints. However, the projected proliferation of these technologies could add hundreds of millions of endpoints running AI algorithms continually, causing the scale of their collective energy requirements to surge. Current efficiency gains alone cannot counterbalance this sheer growth.

+

AI is expected to see an annual growth rate of 37.3% between 2023 and 2030. Yet, applying the same growth rate to operational computing could multiply annual AI energy needs up to 1,000 times by 2030. So, while model optimization tackles one facet, responsible innovation must also consider total lifecycle costs at global deployment scales that were unfathomable just years ago but now pose infrastructure and sustainability challenges ahead.

+
+
+
+

16.3.2 Data Centers and Their Impact

+

As the demand for AI services grows, the impact of data centers on the energy consumption of AI systems is becoming increasingly important. While these facilities are crucial for the advancement and deployment of AI, they contribute significantly to its energy footprint.

+
+

Scale

+

Data centers are the essential workhorses enabling the recent computational demands of advanced AI systems. For example, leading providers like Meta operate massive data centers spanning up to the size of multiple football fields, housing hundreds of thousands of high-capacity servers optimized for parallel processing and data throughput.

+

These massive facilities provide the infrastructure for training complex neural networks on vast datasets. For instance, based on leaked information, OpenAI’s language model GPT-4 was trained on Azure data centers packing over 25,000 Nvidia A100 GPUs, used continuously for 90 to 100 days.

+

Additionally, real-time inference for consumer AI applications at scale is only made possible by leveraging the server farms inside data centers. Services like Alexa, Siri, and Google Assistant process billions of voice requests per month from users globally by relying on data center computing for low-latency responses. In the future, expanding cutting-edge use cases like self-driving vehicles, precision medicine diagnostics, and accurate climate forecasting models will require significant computational resources, obtained by tapping into vast on-demand cloud computing from data centers. Some emerging applications, like autonomous cars, have harsh latency and bandwidth constraints, making it necessary to locate data center-level computing power at the edge rather than in the cloud.

+

MIT research prototypes have shown trucks and cars with onboard hardware performing real-time AI processing of sensor data equivalent to small data centers (Sudhakar, Sze, and Karaman 2023). These innovative “data centers on wheels” demonstrate how vehicles like self-driving trucks may need embedded data center-scale compute on board to achieve millisecond system latency for navigation, though still likely supplemented by wireless 5G connectivity to more powerful cloud data centers.

+
+Sudhakar, Soumya, Vivienne Sze, and Sertac Karaman. 2023. “Data Centers on Wheels: Emissions from Computing Onboard Autonomous Vehicles.” IEEE Micro 43 (1): 29–39. https://doi.org/10.1109/mm.2022.3219803. +

The bandwidth, storage, and processing capacities required to enable this future technology at scale will depend heavily on advancements in data center infrastructure and AI algorithmic innovations.

+
+
+

Energy Demand

+

The energy demand of data centers can roughly be divided into 4 components—infrastructure, network, storage, and servers. In Figure fig-energydemand, we see that the data infrastructure (which includes cooling, lighting, and controls) and the servers use most of the total energy budget of data centers in the US (Shehabi et al. 2016). This section breaks down the energy demand for the servers and the infrastructure. For the latter, the focus is on cooling systems, as cooling is the dominant factor in energy consumption in the infrastructure.

+
+Shehabi, Arman, Sarah Smith, Dale Sartor, Richard Brown, Magnus Herrlin, Jonathan Koomey, Eric Masanet, Nathaniel Horner, Inês Azevedo, and William Lintner. 2016. “United States Data Center Energy Usage Report.” +
+
+
+ +
+
+Figure 16.3: Data center energy consumption in the US. Credit: International Energy Agency (IEA). +
+
+
+
+
Servers
+

The increase in energy consumption of data centers stems mainly from exponentially growing AI computing requirements. NVIDIA DGX H100 machines that are optimized for deep learning can draw up to 10.2 kW at peak. Leading providers operate data centers with hundreds to thousands of these power-hungry DGX nodes networked to train the latest AI models. For example, the supercomputer developed for OpenAI is a single system with over 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server.

+

The intensive computations needed across an entire facility’s densely packed fleet and supporting hardware result in data centers drawing tens of megawatts around the clock. Overall, advancing AI algorithms continue to expand data center energy consumption as more DGX nodes get deployed to keep pace with projected growth in demand for AI compute resources over the coming years.

+
+
+
Cooling Systems
+

To keep densely packed servers running at peak capacity without overheating, data centers require tremendous cooling capacity to counteract the heat produced by servers, networking equipment, and other hardware running computationally intensive workloads without pause. With large data centers packing thousands of server racks operating at full tilt, massive industrial-scale cooling towers and chillers are required, using energy amounting to 30-40% of the total data center electricity footprint (Dayarathna, Wen, and Fan 2016). Consequently, companies are looking for alternative methods of cooling. For example, Microsoft’s data center in Ireland leverages a nearby fjord to exchange heat using over half a million gallons of seawater daily.
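A common way to quantify this infrastructure overhead is Power Usage Effectiveness (PUE): the ratio of total facility energy to the energy delivered to the IT equipment itself. The sketch below uses illustrative numbers for a cooling-heavy facility near the 30-40% share cited above; it is not data from any specific data center.

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean zero infrastructure overhead. Numbers are illustrative.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Compute PUE for a given accounting period."""
    return total_facility_kwh / it_equipment_kwh

# Example: servers draw 10 MWh while cooling, lighting, and controls add
# another 4 MWh -- overhead in the 30-40% range discussed above.
it_energy = 10_000       # kWh
overhead = 4_000         # kWh, dominated by cooling
print(pue(it_energy + overhead, it_energy))  # 1.4
```

A hyperscale facility with free cooling might approach a PUE of 1.1, while an older enterprise data center can exceed 2.0, doubling the energy bill of every computation it hosts.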

+

Recognizing the importance of energy-efficient cooling, there have been innovations aimed at reducing this energy demand. Techniques like free cooling, which uses outside air or water sources when conditions are favorable, and the use of AI to optimize cooling systems are examples of how the industry adapts. These innovations reduce energy consumption, lower operational costs, and lessen the environmental footprint. However, exponential increases in AI model complexity continue to demand more servers and acceleration hardware operating at higher utilization, translating to rising heat generation and ever greater energy used solely for cooling purposes.

+
+
+
+

The Environmental Impact

+

The environmental impact of data centers is not only caused by the direct energy consumption of the data center itself (Siddik, Shehabi, and Marston 2021). Data center operation involves the supply of treated water to the data center and the discharge of wastewater from the data center. Water and wastewater facilities are major electricity consumers.

+
+Siddik, Md Abu Bakar, Arman Shehabi, and Landon Marston. 2021. “The Environmental Footprint of Data Centers in the United States.” Environ. Res. Lett. 16 (6): 064017. https://doi.org/10.1088/1748-9326/abfba1. +
+Davis, Jacqueline, Daniel Bizo, Andy Lawrence, Owen Rogers, and Max Smolaks. 2022. “Uptime Institute Global Data Center Survey 2022.” Uptime Institute. +

Next to electricity usage, there are many more aspects to the environmental impact of these data centers. Their water usage can lead to water scarcity issues, increased water treatment needs, and demands on wastewater discharge infrastructure. The raw materials required for construction and network transmission also considerably impact the environment, and components in data centers need to be upgraded and maintained. Refresh cycles, which once saw almost 50 percent of servers replaced within 3 years of usage, have been shown to slow down (Davis et al. 2022). Still, this churn generates significant e-waste, which can be hard to recycle.

+
+
+
+

16.3.3 Energy Optimization

+

Ultimately, measuring and understanding the energy consumption of AI facilitates optimizing energy consumption.

+

One way to reduce the energy consumption of a given amount of computational work is to run it on more energy-efficient hardware. For instance, TPU chips can be more energy-efficient compared to CPUs when it comes to running large tensor computations for AI, as TPUs can run such computations much faster without drawing significantly more power than CPUs. Another way is to build software systems aware of energy consumption and application characteristics. Good examples are systems works such as Zeus (You, Chung, and Chowdhury 2023) and Perseus (Chung et al. 2023), both of which characterize the tradeoff between computation time and energy consumption at various levels of an ML training system to achieve energy reduction without end-to-end slowdown. In reality, building both energy-efficient hardware and software and combining their benefits should be promising, along with open-source frameworks (e.g., Zeus) that facilitate community efforts.
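The time-energy tradeoff that systems like Zeus characterize can be illustrated with a toy model (this is not Zeus's actual API, and the power limits and step times below are made up): lowering a GPU's power limit cuts its power draw but stretches each training step, so the energy-optimal setting depends on how much slowdown a job can tolerate.

```python
# Hypothetical sketch of the GPU time-energy tradeoff characterized by
# systems like Zeus. The (power limit, step time) pairs are illustrative,
# not measurements from any real device.

profiles = {
    # power_limit_W: seconds_per_training_step (illustrative)
    300: 1.00,
    250: 1.08,
    200: 1.25,
    150: 1.70,
}

def energy_per_step(power_w: float, step_s: float) -> float:
    """Energy in joules consumed by one training step at a given power draw."""
    return power_w * step_s

def best_power_limit(profiles: dict, max_step_s: float) -> int:
    """Pick the power limit minimizing energy per step within a time budget."""
    feasible = {p: t for p, t in profiles.items() if t <= max_step_s}
    return min(feasible, key=lambda p: energy_per_step(p, feasible[p]))

# With a 30% slowdown budget, 200 W wins: 250 J/step versus 300 J at 300 W.
print(best_power_limit(profiles, max_step_s=1.30))  # 200
```

The interesting point is that the default (maximum) power limit is rarely the energy-optimal one: a modest, bounded slowdown can yield a disproportionate energy saving.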

+
+You, Jie, Jae-Won Chung, and Mosharaf Chowdhury. 2023. “Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training.” In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23), 119–39. Boston, MA: USENIX Association. https://www.usenix.org/conference/nsdi23/presentation/you. +
+Chung, Jae-Won, Yile Gu, Insu Jang, Luoxi Meng, Nikhil Bansal, and Mosharaf Chowdhury. 2023. “Perseus: Removing Energy Bloat from Large Model Training.” ArXiv Preprint abs/2312.06902. https://arxiv.org/abs/2312.06902. +
+
+
+

16.4 Carbon Footprint

+

The massive electricity demands of data centers can lead to significant environmental externalities absent an adequate renewable power supply. Many facilities rely heavily on nonrenewable energy sources like coal and natural gas. For example, data centers are estimated to produce up to 2% of total global \(\textrm{CO}_2\) emissions, closing the gap with the airline industry. As mentioned in previous sections, the computational demands of AI are set to increase. The resulting surge in emissions is threefold. First, data centers are projected to increase in size (Liu et al. 2020). Secondly, emissions during training are set to increase significantly (Patterson et al. 2022). Thirdly, inference calls to these models are set to increase dramatically.

+
+Liu, Yanan, Xiaoxia Wei, Jinyu Xiao, Zhijie Liu, Yang Xu, and Yun Tian. 2020. “Energy Consumption and Emission Mitigation Prediction Based on Data Center Traffic and PUE for Global Data Centers.” Global Energy Interconnection 3 (3): 272–82. https://doi.org/10.1016/j.gloei.2020.07.008. +

Without action, this exponential demand growth risks ratcheting up the carbon footprint of data centers further to unsustainable levels. Major providers have pledged carbon neutrality and committed funds to secure clean energy, but progress remains incremental compared to overall industry expansion plans. More radical grid decarbonization policies and renewable energy investments may prove essential to counteracting the climate impact of the coming tide of new data centers aimed at supporting the next generation of AI.

+
+

16.4.1 Definition and Significance

+

The concept of a ‘carbon footprint’ has emerged as a key metric. This term refers to the total amount of greenhouse gasses, particularly carbon dioxide, emitted directly or indirectly by an individual, organization, event, or product. These emissions significantly contribute to the greenhouse effect, accelerating global warming and climate change. The carbon footprint is measured in terms of carbon dioxide equivalents (\(\textrm{CO}_2\)e), allowing for a comprehensive account that includes various greenhouse gasses and their relative environmental impact. Examples of this as applied to large-scale ML tasks are shown in Figure fig-carbonfootprint.
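As a rough sketch of how \(\textrm{CO}_2\)e accounting works, each greenhouse gas is weighted by its global warming potential over a chosen horizon. The GWP100 factors below approximate IPCC AR5 values and should be treated as illustrative, not authoritative.

```python
# Convert a mix of greenhouse gases to CO2-equivalents using 100-year
# global warming potentials (GWP100). Factors approximate IPCC AR5 values
# (methane ~28x, nitrous oxide ~265x CO2) and are illustrative.

GWP100 = {"co2": 1, "ch4": 28, "n2o": 265}

def co2_equivalent(emissions_kg: dict) -> float:
    """Total emissions in kg CO2e for a dict of {gas: kg emitted}."""
    return sum(GWP100[gas] * kg for gas, kg in emissions_kg.items())

# 1000 kg of CO2, 10 kg of methane, and 1 kg of nitrous oxide:
print(co2_equivalent({"co2": 1000, "ch4": 10, "n2o": 1}))  # 1545 kg CO2e
```

The weighting is what makes CO2e a single comparable number: small methane leaks can outweigh much larger CO2 emissions once converted.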

+
+
+
+ +
+
+Figure 16.4: Carbon footprint of large-scale ML tasks. Credit: Wu et al. (2022). +
+
+
+

Considering the carbon footprint is especially important given AI’s rapid advancement and integration into various sectors, which bring its environmental impact into sharp focus. AI systems, particularly those involving intensive computations like deep learning and large-scale data processing, are known for their substantial energy demands. This energy, often drawn from power grids, may still predominantly rely on fossil fuels, leading to significant greenhouse gas emissions.

+

Take, for example, training large AI models such as GPT-3 or complex neural networks. These processes require immense computational power, typically provided by data centers. The energy consumption associated with operating these centers, particularly for high-intensity tasks, results in notable greenhouse gas emissions. Studies have highlighted that training a single AI model can generate carbon emissions comparable to the lifetime emissions of multiple cars, shedding light on the environmental cost of developing advanced AI technologies (Dayarathna, Wen, and Fan 2016). Figure fig-carboncars shows a comparison from lowest to highest carbon footprints, starting with a roundtrip flight between NY and SF, human life average per year, American life average per year, US car including fuel over a lifetime, and a Transformer model with neural architecture search, which has the highest footprint.

+
+
+
+ +
+
+Figure 16.5: Carbon footprint of NLP model in lbs of \(\textrm{CO}_2\) equivalent. Credit: Dayarathna, Wen, and Fan (2016). +
+
+Dayarathna, Miyuru, Yonggang Wen, and Rui Fan. 2016. “Data Center Energy Consumption Modeling: A Survey.” IEEE Communications Surveys &Amp; Tutorials 18 (1): 732–94. https://doi.org/10.1109/comst.2015.2481183. +
+
+

Moreover, AI’s carbon footprint extends beyond the operational phase. The entire lifecycle of AI systems, including the manufacturing of computing hardware, the energy used in data centers for cooling and maintenance, and the disposal of electronic waste, contributes to their overall carbon footprint. We have discussed some of these aspects earlier, and we will discuss the waste aspects later in this chapter.

+
+
+

16.4.2 The Need for Awareness and Action

+

Understanding the carbon footprint of AI systems is crucial for several reasons. Primarily, it is a step towards mitigating the impacts of climate change. As AI continues to grow and permeate different aspects of our lives, its contribution to global carbon emissions becomes a significant concern. Awareness of these emissions can inform decisions made by developers, businesses, policymakers, and even ML engineers and scientists like us to ensure a balance between technological innovation and environmental responsibility.

+

Furthermore, this understanding stimulates the drive towards ‘Green AI’ (R. Schwartz et al. 2020). This approach focuses on developing AI technologies that are efficient, powerful, and environmentally sustainable. It encourages exploring energy-efficient algorithms, using renewable energy sources in data centers, and adopting practices that reduce AI’s overall environmental impact.

+

In essence, the carbon footprint is an essential consideration in developing and applying AI technologies. As AI evolves and its applications become more widespread, managing its carbon footprint is key to ensuring that this technological progress aligns with the broader environmental sustainability goals.

+
+
+

16.4.3 Estimating the AI Carbon Footprint

+

Estimating AI systems’ carbon footprint is critical in understanding their environmental impact. This involves analyzing the various elements contributing to emissions throughout AI technologies’ lifecycle and employing specific methodologies to quantify these emissions accurately. Many different methods for quantifying ML’s carbon emissions have been proposed.
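One widely used back-of-the-envelope method, in the spirit of public ML carbon calculators, multiplies hardware power draw, training time, the data center's PUE, and the grid's carbon intensity. All inputs in this sketch are assumed values for illustration.

```python
# Back-of-the-envelope training-emissions estimate:
#   energy (kWh) = power x GPUs x hours x PUE
#   emissions (kg CO2e) = energy x grid carbon intensity
# All inputs below are illustrative assumptions, not measurements.

def training_emissions_kg(gpu_power_kw: float, num_gpus: int,
                          hours: float, pue: float,
                          grid_kgco2_per_kwh: float) -> float:
    energy_kwh = gpu_power_kw * num_gpus * hours * pue
    return energy_kwh * grid_kgco2_per_kwh

# 64 GPUs at 0.3 kW each for one week, PUE 1.2, on a 0.4 kgCO2e/kWh grid:
kg = training_emissions_kg(0.3, 64, 24 * 7, 1.2, 0.4)
print(round(kg))  # 1548 kg CO2e
```

Note how every factor is a lever: more efficient hardware lowers the power term, better cooling lowers PUE, and scheduling onto a cleaner grid lowers the carbon intensity term.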

+

The carbon footprint of AI encompasses several key elements, each contributing to the overall environmental impact. First, energy is consumed during the AI model training and operational phases. The source of this energy heavily influences the carbon emissions. Once trained, these models, depending on their application and scale, continue to consume electricity during operation. Next to energy considerations, the hardware used stresses the environment as well.

+

The carbon footprint varies significantly based on the energy sources used. The composition of the sources providing the energy used in the grid varies widely depending on geographical region and even the time of day! For example, in the USA, roughly 60 percent of the total energy supply is still covered by fossil fuels. Nuclear and renewable energy sources cover the remaining 40 percent. These fractions are not constant throughout the day. As renewable energy production usually relies on environmental factors, such as solar radiation and pressure fields, it does not provide a constant energy source.

+

The variability of renewable energy production has been an ongoing challenge in the widespread use of these sources. Looking at Figure fig-energyprod, which shows data for the European grid, we see that renewable generation alone cannot be relied upon to produce the required amount of energy throughout the day. While solar energy peaks in the middle of the day, wind energy shows two distinct peaks in the mornings and evenings. Currently, we rely on fossil and coal-based energy generation methods to supply the lack of energy during times when renewable energy does not meet requirements.

+

Innovation in energy storage solutions is required to enable constant use of renewable energy sources. The base energy load is currently met with nuclear energy. This constant energy source does not directly emit carbon emissions, but it cannot ramp quickly enough to accommodate the variability of renewable energy sources. Tech companies such as Microsoft have shown interest in nuclear energy sources to power their data centers. As the demand of data centers is more constant than that of regular households, nuclear energy could be used as a dominant source of energy.
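Because grid carbon intensity swings over the day, deferrable workloads such as training jobs can be shifted into low-carbon windows. The sketch below uses made-up hourly intensities (peaking overnight, dipping around solar midday) to find the cleanest contiguous window for a job.

```python
# Carbon-aware scheduling sketch: given hourly grid carbon intensity in
# gCO2/kWh (illustrative values, lowest around solar midday), find the
# contiguous window with the lowest total intensity for a deferrable job.

hourly_intensity = [420, 410, 400, 390, 380, 350, 300, 250,   # 00:00-07:00
                    200, 170, 150, 140, 140, 150, 180, 230,   # 08:00-15:00
                    300, 360, 400, 430, 440, 440, 430, 420]   # 16:00-23:00

def best_start_hour(intensity: list, job_hours: int) -> int:
    """Return the start hour minimizing total carbon intensity over the job."""
    windows = range(len(intensity) - job_hours + 1)
    return min(windows, key=lambda h: sum(intensity[h:h + job_hours]))

# A 4-hour job is cleanest starting at 10:00, centered on the solar peak.
print(best_start_hour(hourly_intensity, job_hours=4))  # 10
```

Real carbon-aware schedulers apply the same idea with live grid data, optionally pausing and resuming jobs as intensity forecasts change.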

+
+
+
+ +
+
+Figure 16.6: Energy sources and generation capabilities. Credit: Energy Charts. +
+
+
+

Additionally, the manufacturing and disposal of AI hardware add to the carbon footprint. Producing specialized computing devices, such as GPUs and CPUs, is energy- and resource-intensive. This phase often relies on energy sources that contribute to greenhouse gas emissions. The electronics industry’s manufacturing process has been identified as one of the eight big supply chains responsible for more than 50 percent of global emissions (Challenge 2021). Furthermore, the end-of-life disposal of this hardware, which can lead to electronic waste, also has environmental implications. As mentioned, servers have a refresh cycle of roughly 3 to 5 years. Of this e-waste, currently only 17.4 percent is properly collected and recycled. The carbon emissions of this e-waste have increased by more than 50 percent between 2014 and 2020 (Singh and Ogunseitan 2022).

+
+Challenge, WEF Net-Zero. 2021. “The Supply Chain Opportunity.” In World Economic Forum: Geneva, Switzerland. +
+Singh, Narendra, and Oladele A. Ogunseitan. 2022. “Disentangling the Worldwide Web of e-Waste and Climate Change Co-Benefits.” Circular Economy 1 (2): 100011. https://doi.org/10.1016/j.cec.2022.100011. +

As is clear from the above, a proper Life Cycle Analysis is necessary to portray all relevant aspects of the emissions caused by AI. Another method is carbon accounting, which quantifies the amount of carbon dioxide emissions directly and indirectly associated with AI operations. This measurement typically uses \(\textrm{CO}_2\) equivalents, allowing for a standardized way of reporting and assessing emissions.

+
+

Exercise 16.1 (AI’s Carbon Footprint)  

+
+
+ +
+
+

Did you know that the cutting-edge AI models you might use have an environmental impact? This exercise will delve into an AI system’s “carbon footprint.” You’ll learn how data centers’ energy demands, large AI models’ training, and even hardware manufacturing contribute to greenhouse gas emissions. We’ll discuss why it’s crucial to be aware of this impact, and you’ll learn methods to estimate the carbon footprint of your own AI projects. Get ready to explore the intersection of AI and environmental sustainability!

+

+
+
+
+
+
+
+

16.5 Beyond Carbon Footprint

+

The current focus on reducing AI systems’ carbon emissions and energy consumption addresses one crucial aspect of sustainability. However, manufacturing the semiconductors and hardware that enable AI also carries severe environmental impacts that receive comparatively less public attention. Building and operating a leading-edge semiconductor fabrication plant, or “fab,” has substantial resource requirements and polluting byproducts beyond a large carbon footprint.

+

For example, a state-of-the-art fab producing 5 nm chips can require up to four million gallons of pure water each day. This water usage approaches what a city of half a million people would require for all needs. Sourcing this consistently places immense strain on local water tables and reservoirs, especially in already water-stressed regions that host many high-tech manufacturing hubs.

+

Additionally, over 250 unique hazardous chemicals are utilized at various stages of semiconductor production within fabs (Mills and Le Hunte 1997). These include strong acids like sulfuric acid, nitric acid, and hydrogen fluoride, along with arsine, phosphine, and other highly toxic substances. Preventing the discharge of these chemicals requires extensive safety controls and wastewater treatment infrastructure to avoid soil contamination and risks to surrounding communities. Any improper chemical handling or unanticipated spill carries dire consequences.

+
+Mills, Andrew, and Stephen Le Hunte. 1997. “An Overview of Semiconductor Photocatalysis.” J. Photochem. Photobiol., A 108 (1): 1–35. https://doi.org/10.1016/s1010-6030(97)00118-4. +

Beyond water consumption and chemical risks, fab operations also depend on rare metals sourcing, generate tons of dangerous waste products, and can hamper local biodiversity. This section will analyze these critical but less discussed impacts. With vigilance and investment in safety, the harms from semiconductor manufacturing can be contained while still enabling technological progress. However, ignoring these externalized issues will exacerbate ecological damage and health risks over the long run.

+
+

16.5.1 Water Usage and Stress

+

Semiconductor fabrication is an incredibly water-intensive process. Based on an article from 2009, a typical 300mm silicon wafer requires 8,328 liters of water, of which 5,678 liters is ultrapure water (Cope 2009). Today, a typical fab can use up to four million gallons of pure water. TSMC’s latest fab in Arizona is projected to use 8.9 million gallons daily, or nearly 3 percent of the city’s current water production. To put things in perspective, Intel and Quantis found that over 97% of their direct water consumption is attributed to semiconductor manufacturing operations within their fabrication facilities (Cooper et al. 2011).
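The per-wafer and per-fab figures above can be related by simple unit conversion. The implied wafer throughput below is a rough illustration of the scale, not a reported production number.

```python
# Relating the per-wafer water figure (Cope 2009) to a fab's daily draw.
# The implied throughput is a rough illustration, not a reported number.

LITERS_PER_US_GALLON = 3.785

fab_gallons_per_day = 4_000_000   # "up to four million gallons" of pure water daily
liters_per_wafer = 8_328          # per 300 mm wafer (Cope 2009)

fab_liters_per_day = fab_gallons_per_day * LITERS_PER_US_GALLON
wafers_per_day = fab_liters_per_day / liters_per_wafer
print(round(wafers_per_day))  # ~1818 wafer-equivalents of water per day
```

Even as a crude estimate, this shows why fabs recycle heavily: roughly fifteen million liters move through the facility daily just to keep wafers flowing.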

+
+Cope, Gord. 2009. “Pure Water, Semiconductors and the Recession.” Global Water Intelligence 10 (10). +
+Cooper, Tom, Suzanne Fallender, Joyann Pafumi, Jon Dettling, Sebastien Humbert, and Lindsay Lessard. 2011. “A Semiconductor Company’s Examination of Its Water Footprint Approach.” In Proceedings of the 2011 IEEE International Symposium on Sustainable Systems and Technology, 1–6. IEEE; IEEE. https://doi.org/10.1109/issst.2011.5936865. +

This water is repeatedly used to flush away contaminants in cleaning steps and also acts as a coolant and carrier fluid in thermal oxidation, chemical deposition, and chemical mechanical planarization processes. During peak summer months, this approximates the daily water consumption of a city with a population of half a million people.

+

Despite being located in regions with sufficient water, the intensive usage can severely depress local water tables and drainage basins. For example, the city of Hsinchu in Taiwan suffered sinking water tables and seawater intrusion into aquifers due to excessive pumping to satisfy water supply demands from the Taiwan Semiconductor Manufacturing Company (TSMC) fab. In water-scarce inland areas like Arizona, massive water inputs are needed to support fabs despite already strained reservoirs.

+

Water discharge from fabs risks environmental contamination besides depletion if not properly treated. While much discharge is recycled within the fab, the purification systems still filter out metals, acids, and other contaminants that can pollute rivers and lakes if not cautiously handled (Prakash, Callahan, et al. 2023). These factors make managing water usage essential when mitigating wider sustainability impacts.

+
+
+

16.5.2 Hazardous Chemicals Usage

+

Modern semiconductor fabrication involves working with many highly hazardous chemicals under extreme conditions of heat and pressure (Kim et al. 2018). Key chemicals utilized include:

+
+Kim, Sunju, Chungsik Yoon, Seunghon Ham, Jihoon Park, Ohun Kwon, Donguk Park, Sangjun Choi, Seungwon Kim, Kwonchul Ha, and Won Kim. 2018. “Chemical Use in the Semiconductor Manufacturing Industry.” Int. J. Occup. Env. Heal. 24 (3-4): 109–18. https://doi.org/10.1080/10773525.2018.1519957. +
    +
  • Strong acids: Hydrofluoric, sulfuric, nitric, and hydrochloric acids rapidly eat through oxides and other surface contaminants but also pose toxicity dangers. Fabs can use thousands of metric tons of these acids annually, and accidental exposure can be fatal for workers.
  • +
  • Solvents: Key solvents like xylene, methanol, and methyl isobutyl ketone (MIBK) handle dissolving photoresists but have adverse health impacts like skin/eye irritation and narcotic effects if mishandled. They also create explosive and air pollution risks.
  • +
  • Toxic gases: Gas mixtures containing arsine (AsH3), phosphine (PH3), diborane (B2H6), germane (GeH4), etc., are some of the deadliest chemicals used in doping and vapor deposition steps. Minimal exposures can lead to poisoning, tissue damage, and even death without quick treatment.
  • +
  • Chlorinated compounds: Older chemical mechanical planarization formulations incorporated perchloroethylene, trichloroethylene, and other chlorinated solvents, which have since been banned due to their carcinogenic effects and impacts on the ozone layer. However, their prior release still threatens surrounding groundwater sources.
  • +
+

Strict handling protocols, protective equipment for workers, ventilation, filtrating/scrubbing systems, secondary containment tanks, and specialized disposal mechanisms are vital where these chemicals are used to minimize health, explosion, air, and environmental spill dangers (Wald and Jones 1987). But human errors and equipment failures still occasionally occur–highlighting why reducing fab chemical intensities is an ongoing sustainability effort.

+
+Wald, Peter H., and Jeffrey R. Jones. 1987. “Semiconductor Manufacturing: An Introduction to Processes and Hazards.” Am. J. Ind. Med. 11 (2): 203–21. https://doi.org/10.1002/ajim.4700110209. +
+
+

16.5.3 Resource Depletion

+

While silicon forms the base, there is an almost endless supply of silicon on Earth. In fact, silicon is the second most plentiful element found in the Earth’s crust, accounting for 27.7% of the crust’s total mass. Only oxygen exceeds silicon in abundance within the crust. Therefore, silicon is not a concern for resource depletion. However, the same cannot be said for the various specialty metals and materials that enable the integrated circuit fabrication process and provide specific properties. Maintaining supplies of these resources is crucial yet threatened by finite availability and geopolitical influences (Nakano 2021).

+
+Nakano, Jane. 2021. The Geopolitics of Critical Minerals Supply Chains. JSTOR. +
+Chen, H.-W. 2006. “Gallium, Indium, and Arsenic Pollution of Groundwater from a Semiconductor Manufacturing Area of Taiwan.” B. Environ. Contam. Tox. 77 (2): 289–96. https://doi.org/10.1007/s00128-006-1062-3. +

Gallium, indium, and arsenic are vital ingredients in forming ultra-efficient compound semiconductors in the highest-speed chips suited for 5G and AI applications (Chen 2006). However, these rare elements have relatively scarce natural deposits that are being depleted. The United States Geological Survey has indium on its list of most critical at-risk commodities, estimated to have less than a 15-year viable global supply at current demand growth (Davies 2011).

+

Helium is required in huge volumes for next-gen fabs to enable precise wafer cooling during operation. But helium’s relative rarity and the fact that once it vents into the atmosphere, it quickly escapes Earth make maintaining helium supplies extremely challenging long-term (Davies 2011). According to the US National Academies, substantial price increases and supply shocks are already occurring in this thinly traded market.

+
+Jha, A. R. 2014. Rare Earth Materials: Properties and Applications. CRC Press. https://doi.org/10.1201/b17045. +

Other risks include China’s control over 90% of the rare earth elements critical to semiconductor material production (Jha 2014). Any supply chain issues or trade disputes can lead to catastrophic raw material shortages, given the lack of current alternatives. In conjunction with helium shortages, resolving the limited availability and geographic imbalance in accessing essential ingredients remains a sector priority for sustainability.

+
+
+

16.5.4 Hazardous Waste Generation

+

Semiconductor fabs generate tons of hazardous waste annually as byproducts from the various chemical processes (Grossman 2007). The key waste streams include:

+
+Grossman, Elizabeth. 2007. High Tech Trash: Digital Devices, Hidden Toxics, and Human Health. Island press. +
    +
  • Gaseous waste: Fab ventilation systems capture harmful gases like arsine, phosphine, and germane and filter them out to avoid worker exposure. However, this produces significant quantities of dangerous condensed gas that need specialized treatment.
  • +
  • VOCs: Volatile organic compounds like xylene, acetone, and methanol are used extensively as photoresist solvents and are evaporated as emissions during baking, etching, and stripping. VOCs pose toxicity issues and require scrubbing systems to prevent release.
  • +
  • Spent acids: Strong acids such as sulfuric acid, hydrofluoric acid, and nitric acid get depleted in cleaning and etching steps, transforming into a corrosive, toxic soup that can dangerously react, releasing heat and fumes if mixed.
  • +
  • Sludge: Water treatment of discharged effluent contains concentrated heavy metals, acid residues, and chemical contaminants. Filter press systems separate this hazardous sludge.
  • +
  • Filter cake: Gaseous filtration systems generate multi-ton sticky cakes of dangerous absorbed compounds requiring containment.
  • +
+

Without proper handling procedures, storage tanks, packaging materials, and secondary containment, improper disposal of any of these waste streams can lead to dangerous spills, explosions, and environmental releases. The massive volumes mean even well-run fabs produce tons of hazardous waste year after year, requiring extensive treatment.

+
+
+

16.5.5 Biodiversity Impacts

+
+

Habitat Disruption and Fragmentation

+

Semiconductor fabs require large, contiguous land areas to accommodate cleanrooms, support facilities, chemical storage, waste treatment, and ancillary infrastructure. Developing these vast built-up spaces inevitably dismantles existing habitats, damaging sensitive biomes that may have taken decades to develop. For example, constructing a new fabrication module may level local forest ecosystems that species, like spotted owls and elk, rely upon for survival. The outright removal of such habitats severely threatens wildlife populations dependent on those lands.

+

Furthermore, pipelines, water channels, air and waste exhaust systems, access roads, transmission towers, and other support infrastructure fragment the remaining undisturbed habitats. Animals moving daily for food, water, and spawning can find their migration patterns blocked by these physical human barriers that bisect previously natural corridors.

+
+
+

Aquatic Life Disturbances

+

With semiconductor fabs consuming millions of gallons of ultra-pure water daily, accessing and discharging such volumes risks altering the suitability of nearby aquatic environments housing fish, water plants, amphibians, and other species. If the fab is tapping groundwater tables as its primary supply source, overdrawing at unsustainable rates can deplete lakes or lead to stream drying as water levels drop (Davies 2011).

+
+Davies, Emma. 2011. “Endangered Elements: Critical Thinking.” https://www.rsc.org/images/Endangered\%20Elements\%20-\%20Critical\%20Thinking\_tcm18-196054.pdf. +
+LeRoy Poff, N, MM Brinson, and JW Day. 2002. “Aquatic Ecosystems & Global Climate Change.” Pew Center on Global Climate Change. +
+Till, Aaron, Andrew L. Rypel, Andrew Bray, and Samuel B. Fey. 2019. “Fish Die-Offs Are Concurrent with Thermal Extremes in North Temperate Lakes.” Nat. Clim. Change 9 (8): 637–41. https://doi.org/10.1038/s41558-019-0520-y. +

Also, discharging wastewater at higher temperatures to cool fabrication equipment can shift downstream river conditions through thermal pollution. Temperature changes beyond thresholds that native species evolved for can disrupt reproductive cycles. Warmer water also holds less dissolved oxygen, critical to supporting aquatic plant and animal life (LeRoy Poff, Brinson, and Day 2002). Combined with traces of residual contaminants that escape filtration systems, the discharged water can cumulatively transform environments to be far less habitable for sensitive organisms (Till et al. 2019).

+
+
+

Air and Chemical Emissions

+

While modern semiconductor fabs aim to contain air and chemical discharges through extensive filtration systems, some levels of emissions often persist, raising risks for nearby flora and fauna. Air pollutants can carry downwind, including volatile organic compounds (VOCs), nitrogen oxide compounds (NOx), particulate matter from fab operational exhausts, and power plant fuel emissions.

+

As contaminants permeate local soils and water sources, wildlife consuming affected food and water ingest toxic substances, which research shows can hamper cell function, reproduction rates, and longevity, slowly poisoning ecosystems (Hsu et al. 2016).

+
+Hsu, Liang-Ching, Ching-Yi Huang, Yen-Hsun Chuang, Ho-Wen Chen, Ya-Ting Chan, Heng Yi Teah, Tsan-Yao Chen, Chiung-Fen Chang, Yu-Ting Liu, and Yu-Min Tzou. 2016. “Accumulation of Heavy Metals and Trace Elements in Fluvial Sediments Received Effluents from Traditional and Semiconductor Industries.” Scientific Reports 6 (1): 34250. https://doi.org/10.1038/srep34250. +

Likewise, accidental chemical spills and improper waste handling, which release acids, BODs, and heavy metals into soils, can dramatically affect retention and leaching capabilities. Flora, such as vulnerable native orchids adapted to nutrient-poor substrates, can experience die-offs when contacted by foreign runoff chemicals that alter soil pH and permeability. One analysis found that a single 500-gallon nitric acid spill led to the regional extinction of a rare moss species in the year after the acidic effluent reached nearby forest habitats. Such contamination events set off chain reactions across the interconnected web of life. Thus, strict protocols are essential to avoid hazardous discharge and runoff.


16.6 Life Cycle Analysis


Understanding the holistic environmental impact of AI systems requires a comprehensive approach that considers the entire life cycle of these technologies. Life Cycle Analysis (LCA) refers to a methodological framework used to quantify the environmental impacts across all stages in a product or system’s lifespan, from raw material extraction to end-of-life disposal. Applying LCA to AI systems can help identify priority areas to target for reducing overall environmental footprints.


16.6.1 Stages of an AI System’s Life Cycle


The life cycle of an AI system can be divided into four key phases:

  • Design Phase: This includes the energy and resources used in researching and developing AI technologies. It encompasses the computational resources used for algorithm development and testing, which contribute to carbon emissions.

  • Manufacture Phase: This stage involves producing hardware components such as graphics cards, processors, and other computing devices necessary for running AI algorithms. Manufacturing these components often demands significant energy for material extraction and processing and generates greenhouse gas emissions.

  • Use Phase: This phase covers the operational use of AI systems. It includes the electricity consumed in data centers for training and running neural networks and for powering end-user applications. This is arguably the most energy- and carbon-intensive stage.

  • Disposal Phase: This final stage covers the end-of-life aspects of AI systems, including the recycling and disposal of electronic waste generated from outdated or non-functional hardware past its usable lifespan.
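To make this accounting concrete, the four phases can be combined in a simple additive model. The sketch below is illustrative only; the phase figures are assumed for the example, not measured data.

```python
from dataclasses import dataclass

@dataclass
class LifeCycleFootprint:
    """Toy LCA model: CO2-equivalent emissions (kg) per life-cycle phase."""
    design_kg: float
    manufacture_kg: float
    use_kg: float
    disposal_kg: float

    def total(self) -> float:
        """Total lifetime emissions across all four phases."""
        return self.design_kg + self.manufacture_kg + self.use_kg + self.disposal_kg

    def dominant_phase(self) -> str:
        """Name of the phase contributing the most emissions."""
        phases = {
            "design": self.design_kg,
            "manufacture": self.manufacture_kg,
            "use": self.use_kg,
            "disposal": self.disposal_kg,
        }
        return max(phases, key=phases.get)

# Illustrative numbers: a small deployed model whose use phase dominates.
fp = LifeCycleFootprint(design_kg=500, manufacture_kg=2_000,
                        use_kg=9_000, disposal_kg=150)
print(fp.total())           # 11650
print(fp.dominant_phase())  # use
```

Even this toy model captures the key LCA insight: identifying the dominant phase tells you where reduction efforts pay off most.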

16.6.2 Environmental Impact at Each Stage


Design and Manufacturing


The environmental impact during these beginning-of-life phases includes emissions from energy use and resource depletion from extracting materials for hardware production. At the heart of AI hardware are semiconductors, primarily silicon, used to make the integrated circuits in processors and memory chips. This hardware manufacturing relies on metals like copper for wiring, aluminum for casings, and various plastics and composites for other components. It also uses rare earth metals and specialized alloys (elements like neodymium, terbium, and yttrium) in small but vital quantities. For example, the creation of GPUs relies on copper and aluminum, while chips use rare earth metals whose mining can generate substantial carbon emissions and ecosystem damage.


Use Phase


The use phase accounts for the majority of emissions in the AI lifecycle due to continuous high-power consumption, especially for training and running models. This includes direct and indirect emissions from electricity usage and from nonrenewable grid energy generation. Studies estimate that training a complex model can have a carbon footprint comparable to the lifetime emissions of up to five cars.
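As a rough sketch of how such use-phase estimates are derived, operational emissions are typically computed as IT energy draw scaled by data center overhead (PUE) and the grid's carbon intensity. Every number below is an illustrative assumption, not a measurement of any real training run.

```python
def training_emissions_kg(avg_power_kw: float, hours: float,
                          pue: float, grid_kgco2_per_kwh: float) -> float:
    """Operational CO2e for a training run:
    IT energy * data-center overhead (PUE) * grid carbon intensity."""
    energy_kwh = avg_power_kw * hours * pue
    return energy_kwh * grid_kgco2_per_kwh

# Illustrative: 64 accelerators at ~0.3 kW each running for two weeks,
# PUE of 1.1, on a 0.4 kgCO2/kWh grid (all assumed values).
kg = training_emissions_kg(avg_power_kw=64 * 0.3, hours=14 * 24,
                           pue=1.1, grid_kgco2_per_kwh=0.4)
print(round(kg, 1))  # 2838.5
```

The same four inputs explain why the 4 Ms discussed later in this chapter compound: each term in the product can be reduced independently.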


Disposal Phase


The disposal stage impacts include air and water pollution from toxic materials in devices, challenges associated with complex electronics recycling, and contamination when improperly handled. Harmful compounds from burned e-waste are released into the atmosphere. At the same time, landfill leakage of lead, mercury, and other materials poses risks of soil and groundwater contamination if not properly controlled. Implementing effective electronics recycling is crucial.


Exercise 16.2 (Tracking ML Emissions)  


In this exercise, you’ll delve into the environmental impact of training machine learning models. We’ll use CodeCarbon to track emissions, learn about Life Cycle Analysis (LCA) to understand AI’s carbon footprint, and explore strategies to make your ML model development more environmentally friendly. By the end, you’ll be equipped to track the carbon emissions of your models and start implementing greener practices in your projects.
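CodeCarbon itself exposes an `EmissionsTracker` with start/stop calls; rather than depend on the package here, the toy tracker below sketches the underlying idea: time the tracked code, then multiply elapsed time by an assumed power draw and an assumed grid carbon intensity. Both constants are illustrative placeholders, not CodeCarbon's measured values.

```python
import time

class SimpleEmissionsTracker:
    """Minimal sketch of what CodeCarbon-style trackers do. The power draw
    and grid intensity defaults are illustrative assumptions; real trackers
    read hardware counters and regional grid data instead."""

    def __init__(self, power_watts: float = 150.0,
                 grid_kgco2_per_kwh: float = 0.4):
        self.power_watts = power_watts
        self.grid_kgco2_per_kwh = grid_kgco2_per_kwh
        self._start = None

    def start(self) -> None:
        self._start = time.perf_counter()

    def stop(self) -> float:
        """Return estimated kg CO2e for the tracked interval."""
        elapsed_s = time.perf_counter() - self._start
        energy_kwh = (self.power_watts / 1000.0) * (elapsed_s / 3600.0)
        return energy_kwh * self.grid_kgco2_per_kwh

tracker = SimpleEmissionsTracker()
tracker.start()
sum(i * i for i in range(1_000_000))  # stand-in for model training
emissions_kg = tracker.stop()
print(f"{emissions_kg:.2e} kg CO2e")
```

Wrapping training code this way is the essence of the exercise: once emissions are a number you can print, they become a metric you can optimize.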



16.7 Challenges in LCA


16.7.1 Lack of Consistency and Standards


One major challenge facing life cycle analysis (LCA) for AI systems is the need for consistent methodological standards and frameworks. Unlike product categories like building materials, which have developed international standards for LCA through ISO 14040, there are no firmly established guidelines for analyzing the environmental footprint of complex information technology like AI.


This absence of uniformity means researchers make differing assumptions and varying methodological choices. For example, a 2019 study from the University of Massachusetts Amherst (Strubell, Ganesh, and McCallum 2019) analyzed the life cycle emissions of several natural language processing models but considered only computational resource usage for training and omitted hardware manufacturing impacts. A more comprehensive 2020 study from Stanford University researchers included emissions estimates from producing relevant servers, processors, and other components, following an ISO-aligned LCA standard for computer hardware. However, these diverging choices in system boundaries and accounting approaches reduce robustness and prevent apples-to-apples comparisons of results.


Standardized frameworks and protocols tailored to AI systems’ unique aspects and rapid update cycles would provide more coherence. This could equip researchers and developers to understand environmental hotspots, compare technology options, and accurately track progress on sustainability initiatives across the AI field. Industry groups and international standards bodies like the IEEE or ACM should prioritize addressing this methodological gap.


16.7.2 Data Gaps


Another key challenge for comprehensive life cycle assessment of AI systems is substantial data gaps, especially regarding upstream supply chain impacts and downstream electronic waste flows. Most existing studies focus narrowly on the training or usage phase emissions from computational power demands, which misses a significant portion of lifetime emissions (Gupta et al. 2022).


For example, little public data exists from companies quantifying energy use and emissions from manufacturing the specialized hardware components that enable AI, including high-end GPUs, ASIC chips, solid-state drives, and more. Researchers often rely on secondary sources or generic industry averages to approximate production impacts. Similarly, there is limited transparency into the downstream fate of AI systems once they are discarded after their 4-5 years of usable lifespan.


While electronic waste generation levels can be estimated, specifics on hazardous material leakage, recycling rates, and disposal methods for the complex components are hugely uncertain without better corporate documentation or regulatory reporting requirements.


The lack of fine-grained data on computational resource consumption for training different model types makes reliable per-parameter or per-query emissions calculations difficult even for the usage phase. Attempts to create lifecycle inventories estimating average energy needs for key AI tasks exist (Henderson et al. 2020; Anthony, Kanding, and Selvan 2020), but variability across hardware setups, algorithms, and input data keeps uncertainty extremely high. Furthermore, real-time carbon intensity data, critical for accurately tracking operational carbon footprints, remains unavailable in many geographic locations, rendering existing tools for operational carbon emissions mere approximations based on annual average carbon intensity values.

Henderson, Peter, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. “Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning.” The Journal of Machine Learning Research 21 (1): 10039–81.
Anthony, Lasse F. Wolff, Benjamin Kanding, and Raghavendra Selvan. 2020. “Carbontracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models.” ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems.

Tools like CodeCarbon and ML \(\textrm{CO}_2\) exist to fill this gap, but they are ad hoc approaches at best. Bridging the real data gaps with more rigorous corporate sustainability disclosures and mandated environmental impact reporting will be key for AI’s overall climatic impacts to be understood and managed.
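To see why annual-average carbon intensity misleads, compare average-based and time-resolved accounting for the same job. The hourly intensity values below are invented for illustration, representing a grid with abundant midday solar.

```python
# Illustrative hourly grid carbon intensity (kgCO2/kWh) over one day.
hourly_intensity = [0.50] * 8 + [0.20] * 8 + [0.45] * 8
avg_intensity = sum(hourly_intensity) / 24  # an annual-average-style figure

job_kwh = 240.0  # total energy for a deferrable batch job

# Average-based accounting reports the same footprint no matter when it runs.
avg_estimate = job_kwh * avg_intensity

# Time-resolved accounting: run the job entirely in the low-carbon window.
midday_actual = job_kwh * 0.20

print(round(avg_estimate, 1), round(midday_actual, 1))  # 92.0 48.0
```

Average-based tools cannot see the difference between these two schedules, which is exactly why the text calls them approximations: time-resolved data is what makes carbon-aware scheduling measurable.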


16.7.3 Rapid Pace of Evolution


The extremely quick evolution of AI systems poses additional challenges in keeping life cycle assessments up-to-date and accounting for the latest hardware and software advancements. The core algorithms, specialized chips, frameworks, and technical infrastructure underpinning AI have all been advancing exceptionally fast, with new developments rapidly rendering prior systems obsolete.


For example, in deep learning, novel neural network architectures that achieve significantly better performance on key benchmarks or new optimized hardware like Google’s TPU chips can completely change an “average” model in less than a year. These swift shifts quickly make one-off LCA studies outdated for accurately tracking emissions from designing, running, or disposing of the latest AI.


However, the resources and access required to update LCAs continuously are often lacking. Frequently redoing labor- and data-intensive life cycle inventories and impact modeling to stay current with AI’s state of the art is likely infeasible for many researchers and organizations. Yet updated analyses could reveal new environmental hotspots as algorithms and silicon chips continue rapidly evolving.


This presents a difficult balance between the dynamic precision of continuous assessment and pragmatic constraints. Some researchers have proposed simplified proxy metrics, like tracking hardware generations over time or using representative benchmarks as an evolving set of goalposts for relative comparisons, though granularity may be sacrificed. Overall, the challenge of rapid change will require innovative methodological solutions to prevent underestimating AI’s evolving environmental burdens.


16.7.4 Supply Chain Complexity


Finally, the complex and often opaque supply chains associated with producing the wide array of specialized hardware components that enable AI pose challenges for comprehensive life cycle modeling. State-of-the-art AI relies on cutting-edge advancements in processing chips, graphics cards, data storage, networking equipment, and more. However, tracking emissions and resource use across the tiered networks of globalized suppliers for all these components is extremely difficult.


For example, NVIDIA graphics processing units dominate much of the AI computing hardware, but the company relies on several discrete suppliers across Asia and beyond to produce GPUs. Many firms at each supplier tier keep private the facility-level environmental data that would fully enable robust LCAs. Gaining end-to-end transparency down multiple levels of suppliers across disparate geographies, with varying disclosure protocols and regulations, poses barriers despite being crucial for complete boundary setting. This becomes even more complex when attempting to model emerging hardware accelerators like tensor processing units (TPUs), whose production networks have not been made public.


Without tech giants’ willingness to require and consolidate environmental impact data disclosure across their global electronics supply chains, considerable uncertainty will remain around quantifying the full lifecycle footprint of AI hardware. Greater supply chain visibility, coupled with standardized sustainability reporting frameworks specifically addressing AI’s complex inputs, holds promise for enriching LCAs and prioritizing environmental impact reductions.


16.8 Sustainable Design and Development


16.8.1 Sustainability Principles


As the impact of AI on the environment becomes increasingly evident, the focus on sustainable design and development in AI is gaining prominence. This involves incorporating sustainability principles into AI design, developing energy-efficient models, and integrating these considerations throughout the AI development pipeline. There is a growing need to develop principles that guide responsible innovation. Below is a core set: flowing from conceptual foundation to practical execution to supporting implementation factors, these principles provide a full-cycle perspective on embedding sustainability in AI design and development.


Lifecycle Thinking: Encouraging designers to consider the entire lifecycle of AI systems, from data collection and preprocessing to model development, training, deployment, and monitoring. The goal is to ensure sustainability is considered at each stage. This includes using energy-efficient hardware, prioritizing renewable energy sources, and planning to reuse or recycle retired models.


Future Proofing: Designing AI systems anticipating future needs and changes can enhance sustainability. This may involve making models adaptable via transfer learning and modular architectures. It also includes planning capacity for projected increases in operational scale and data volumes.


Efficiency and Minimalism: This principle focuses on creating AI models that achieve desired results with the least possible resource use. It involves simplifying models and algorithms to reduce computational requirements. Specific techniques include pruning redundant parameters, quantizing and compressing models, and designing efficient model architectures, such as those discussed in the Optimizations chapter.


Lifecycle Assessment (LCA) Integration: Analyzing environmental impacts throughout development and deployment lifecycles highlights unsustainable practices early on. Teams can then make adjustments instead of discovering issues late, when they are more difficult to address. Integrating this analysis into the standard design flow avoids creating legacy sustainability problems.


Incentive Alignment: Economic and policy incentives should promote and reward sustainable AI development. These may include government grants, corporate initiatives, industry standards, and academic mandates for sustainability. Aligned incentives enable sustainability to become embedded in AI culture.


Sustainability Metrics and Goals: It is important to establish clearly defined metrics that measure sustainability factors like carbon usage and energy efficiency. Setting clear targets for these metrics provides concrete guidelines for teams developing responsible AI systems. Tracking performance on these metrics over time shows progress towards set sustainability goals.


Fairness, Transparency, and Accountability: Sustainable AI systems should be fair, transparent, and accountable. Models should be unbiased, with transparent development processes and mechanisms for auditing and redressing issues. This builds public trust and enables the identification of unsustainable practices.


Cross-disciplinary Collaboration: AI researchers teaming up with environmental scientists and engineers can lead to innovative systems that are high-performing yet environmentally friendly. Combining expertise from different fields from the start of projects enables sustainable thinking to be incorporated into the AI design process.


Education and Awareness: Workshops, training programs, and course curricula that cover AI sustainability raise awareness among the next generation of practitioners. This equips students with the knowledge to develop AI that consciously minimizes negative societal and environmental impacts. Instilling these values from the start shapes tomorrow’s professionals and company cultures.


16.9 Green AI Infrastructure


Green AI represents a transformative approach to AI that incorporates environmental sustainability as a fundamental principle across the AI system design and lifecycle (R. Schwartz et al. 2020). This shift is driven by growing awareness of AI technologies’ significant carbon footprint and ecological impact, especially the compute-intensive process of training complex ML models.

Schwartz, Roy, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. “Green AI.” Commun. ACM 63 (12): 54–63. https://doi.org/10.1145/3381831.

The essence of Green AI lies in its commitment to align AI advancement with sustainability goals around energy efficiency, renewable energy usage, and waste reduction. The introduction of Green AI ideals reflects maturing responsibility across the tech industry towards environmental stewardship and ethical technology practices. It moves beyond technical optimizations toward holistic life cycle assessment on how AI systems affect sustainability metrics. Setting new bars for ecologically conscious AI paves the way for the harmonious coexistence of technological progress and planetary health.


16.9.1 Energy Efficient AI Systems


Energy efficiency in AI systems is a cornerstone of Green AI, aiming to reduce the energy demands traditionally associated with AI development and operations. This shift towards energy-conscious AI practices is vital in addressing the environmental concerns raised by the rapidly expanding field of AI. By focusing on energy efficiency, AI systems can become more sustainable, lessening their environmental impact and paving the way for more responsible AI use.


As we discussed earlier, the training and operation of AI models, especially large-scale ones, are known for their high energy consumption, which stems from compute-intensive model architectures and reliance on vast amounts of training data. For example, it is estimated that training a large state-of-the-art neural network model can have a carbon footprint of 284 tonnes, equivalent to the lifetime emissions of five cars (Strubell, Ganesh, and McCallum 2019).

Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2019. “Energy and Policy Considerations for Deep Learning in NLP.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–50. Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1355.

To tackle the massive energy demands, researchers and developers are actively exploring methods to optimize AI systems for better energy efficiency while maintaining model accuracy and performance. This includes techniques like the ones we have discussed in the model optimizations, efficient AI, and hardware acceleration chapters:

  • Knowledge distillation to transfer knowledge from large AI models to miniature versions

  • Quantization and pruning approaches that reduce computational and space complexities

  • Low-precision numerics, lowering mathematical precision without impacting model quality

  • Specialized hardware like TPUs and neuromorphic chips tuned explicitly for efficient AI processing

One example is Intel’s work on Q8BERT, quantizing the BERT language model with 8-bit integers, leading to a 4x reduction in model size with minimal accuracy loss (Zafrir et al. 2019). The push for energy-efficient AI is not just a technical endeavor; it has tangible real-world implications. More efficient systems lower AI’s operational costs and carbon footprint, making it accessible for widespread deployment on mobile and edge devices. It also paves the path toward the democratization of AI and mitigates unfair biases that can emerge from uneven access to computing resources across regions and communities. Pursuing energy-efficient AI is thus crucial for creating an equitable and sustainable future with AI.

Zafrir, Ofir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. “Q8BERT: Quantized 8Bit BERT.” In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS), 36–39. IEEE. https://doi.org/10.1109/emc2-nips53020.2019.00016.
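The arithmetic behind the 4x figure is simple: float32 weights take 4 bytes each, int8 weights take 1. The sketch below illustrates symmetric linear quantization of the general kind such 8-bit schemes use; the weight values are made up for the example, and real systems quantize per-tensor or per-channel with calibration.

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to int8:
    map [-max|w|, +max|w|] onto [-127, 127] with a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [qi * scale for qi in q]

weights = [0.31, -1.27, 0.05, 0.89, -0.44]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Storage: float32 uses 4 bytes/weight, int8 uses 1 -> the "4x reduction".
print(len(weights) * 4, "->", len(weights) * 1, "bytes")
# Round-trip error stays within half a quantization step:
print(max(abs(w - a) for w, a in zip(weights, approx)) <= scale / 2)  # True
```

The accuracy cost is bounded by the quantization step, which is why well-scaled 8-bit models can stay close to full-precision quality.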

16.9.2 Sustainable AI Infrastructure


Sustainable AI infrastructure includes the physical and technological frameworks that support AI systems, focusing on environmental sustainability. This involves designing and operating AI infrastructure to minimize ecological impact, conserve resources, and reduce carbon emissions. The goal is to create a sustainable ecosystem for AI that aligns with broader environmental objectives.


Green data centers are central to sustainable AI infrastructure: they are optimized for energy efficiency and often powered by renewable energy sources. These data centers employ advanced cooling technologies (Ebrahimi, Jones, and Fleischer 2014), energy-efficient server designs (Uddin and Rahman 2012), and smart management systems (Buyya, Beloglazov, and Abawajy 2010) to reduce power consumption. The shift towards green computing infrastructure also involves adopting energy-efficient hardware, like AI-optimized processors that deliver high performance with lower energy requirements, which we discussed in the AI Acceleration chapter. These efforts collectively reduce the carbon footprint of running large-scale AI operations.

Ebrahimi, Khosrow, Gerard F. Jones, and Amy S. Fleischer. 2014. “A Review of Data Center Cooling Technology, Operating Conditions and the Corresponding Low-Grade Waste Heat Recovery Opportunities.” Renewable Sustainable Energy Rev. 31 (March): 622–38. https://doi.org/10.1016/j.rser.2013.12.007.
Uddin, Mueen, and Azizah Abdul Rahman. 2012. “Energy Efficiency and Low Carbon Enabler Green IT Framework for Data Centers Considering Green Metrics.” Renewable Sustainable Energy Rev. 16 (6): 4078–94. https://doi.org/10.1016/j.rser.2012.03.014.
Buyya, Rajkumar, Anton Beloglazov, and Jemal Abawajy. 2010. “Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges.” https://arxiv.org/abs/1006.0308.
Chua, L. 1971. “Memristor: The Missing Circuit Element.” IEEE Transactions on Circuit Theory 18 (5): 507–19. https://doi.org/10.1109/tct.1971.1083337.
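Power usage effectiveness (PUE), the efficiency metric used for data centers like these, is simply total facility energy divided by IT equipment energy; 1.0 would mean zero overhead, and the excess goes to cooling, power conversion, and other support systems. The comparison figures below are assumed for illustration.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: facility energy / IT energy.
    1.0 is ideal; anything above it is cooling and power overhead."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative comparison: a conventional enterprise server room versus
# an optimized hyperscale facility (numbers assumed for the example).
print(pue(1_580, 1_000))  # 1.58
print(pue(1_100, 1_000))  # 1.1
```

In this made-up comparison, the optimized facility wastes roughly a fifth as much energy on overhead per unit of useful compute, which is the kind of gap cited when green data centers are said to outpace industry averages.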

Integrating renewable energy sources, such as solar, wind, and hydroelectric power, into AI infrastructure is important for environmental sustainability (Chua 1971). Many tech companies and research institutions are investing in renewable energy projects to power their data centers. This not only helps in making AI operations carbon-neutral but also promotes the wider adoption of clean energy. Using renewable energy sources clearly shows commitment to environmental responsibility in the AI industry.


Sustainability in AI also extends to the materials and hardware used in creating AI systems. This involves choosing environmentally friendly materials, adopting recycling practices, and ensuring responsible electronic waste disposal. Efforts are underway to develop more sustainable hardware components, including energy-efficient chips designed for domain-specific tasks (such as AI accelerators) and environmentally friendly materials in device manufacturing (Cenci et al. 2021; Irimia-Vladu 2014). The lifecycle of these components is also a focus, with initiatives aimed at extending the lifespan of hardware and promoting recycling and reuse.

Cenci, Marcelo Pilotto, Tatiana Scarazzato, Daniel Dotto Munchen, Paula Cristina Dartora, Hugo Marcelo Veit, Andrea Moura Bernardes, and Pablo R. Dias. 2021. “Eco-Friendly Electronics: A Comprehensive Review.” Adv. Mater. Technol. 7 (2): 2001263. https://doi.org/10.1002/admt.202001263.
Irimia-Vladu, Mihai. 2014. “Green Electronics: Biodegradable and Biocompatible Materials and Devices for Sustainable Future.” Chem. Soc. Rev. 43 (2): 588–610. https://doi.org/10.1039/c3cs60235d.

While strides are being made in sustainable AI infrastructure, challenges remain, such as the high costs of green technology and the need for global standards in sustainable practices. Future directions include more widespread adoption of green energy, further innovations in energy-efficient hardware, and international collaboration on sustainable AI policies. Pursuing sustainable AI infrastructure is not just a technical endeavor but a holistic approach that encompasses environmental, economic, and social aspects, ensuring that AI advances harmoniously with our planet’s health.


16.9.3 Frameworks and Tools


Access to the right frameworks and tools is essential to effectively implementing green AI practices. These resources are designed to assist developers and researchers in creating more energy-efficient and environmentally friendly AI systems. They range from software libraries optimized for low-power consumption to platforms that facilitate the development of sustainable AI applications.


Several software libraries and development environments are specifically tailored for Green AI. These tools often include features for optimizing AI models to reduce their computational load and, consequently, their energy consumption. For example, libraries in PyTorch and TensorFlow that support model pruning, quantization, and efficient neural network architectures enable developers to build AI systems that require less processing power and energy. Additionally, open-source communities like the Green Software Foundation are creating a standardized carbon intensity metric and building software for carbon-aware computing.
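The core idea of carbon-aware computing is a scheduler that shifts deferrable work into the lowest-intensity window of a carbon-intensity forecast. The sketch below is a minimal illustration of that idea; the 24-hour forecast values are invented, and real systems pull forecasts from grid data providers.

```python
def best_start_hour(forecast, job_hours):
    """Pick the start hour minimizing total carbon for a job that must run
    for `job_hours` consecutive hours, given an hourly carbon-intensity
    forecast (kgCO2/kWh). A sketch of carbon-aware scheduling."""
    return min(range(len(forecast) - job_hours + 1),
               key=lambda h: sum(forecast[h:h + job_hours]))

# Illustrative 24-hour forecast with a midday solar dip.
forecast = [0.52, 0.50, 0.49, 0.48, 0.47, 0.45, 0.40, 0.33,
            0.25, 0.20, 0.17, 0.15, 0.15, 0.16, 0.20, 0.28,
            0.38, 0.46, 0.55, 0.58, 0.57, 0.56, 0.54, 0.53]
print(best_start_hour(forecast, job_hours=4))  # 10
```

For batch workloads like nightly training runs, this kind of temporal shifting reduces emissions without touching the model or hardware at all.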


Energy monitoring tools are crucial for Green AI, as they allow developers to measure and analyze the energy consumption of their AI systems. By providing detailed insights into where and how energy is being used, these tools enable developers to make informed decisions about optimizing their models for better energy efficiency. This can involve adjustments in algorithm design, hardware selection, cloud computing software selection, or operational parameters. Figure 16.7 is a screenshot of an energy consumption dashboard provided by Microsoft’s cloud services platform.

Figure 16.7: Microsoft Azure energy consumption dashboard. Credit: Will Buchanan.

With the increasing integration of renewable energy sources in AI operations, frameworks facilitating this process are becoming more important. These frameworks help manage the energy supply from renewable sources like solar or wind power, ensuring that AI systems can operate efficiently with fluctuating energy inputs.


Beyond energy efficiency, sustainability assessment tools help evaluate the broader environmental impact of AI systems. These tools can analyze factors like the carbon footprint of AI operations, the lifecycle impact of hardware components (Gupta et al. 2022), and the overall sustainability of AI projects (Prakash, Callahan, et al. 2023).

Gupta, Udit, Mariam Elgamal, Gage Hills, Gu-Yeon Wei, Hsien-Hsin S. Lee, David Brooks, and Carole-Jean Wu. 2022. “ACT: Designing Sustainable Computer Systems with an Architectural Carbon Modeling Tool.” In Proceedings of the 49th Annual International Symposium on Computer Architecture, 784–99. ACM. https://doi.org/10.1145/3470496.3527408.
Prakash, Shvetank, Tim Callahan, Joseph Bushagour, Colby Banbury, Alan V. Green, Pete Warden, Tim Ansell, and Vijay Janapa Reddi. 2023. “CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (TinyML) Acceleration on FPGAs.” In 2023 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE. https://doi.org/10.1109/ispass57527.2023.00024.

The availability and ongoing development of Green AI frameworks and tools are critical for advancing sustainable AI practices. By providing the necessary resources for developers and researchers, these tools facilitate the creation of more environmentally friendly AI systems and encourage a broader shift towards sustainability in the tech community. As Green AI continues to evolve, these frameworks and tools will play a vital role in shaping a more sustainable future for AI.


16.9.4 Benchmarks and Leaderboards


Benchmarks and leaderboards are important for driving progress in Green AI, as they provide standardized ways to measure and compare different methods. Well-designed benchmarks that capture relevant metrics around energy efficiency, carbon emissions, and other sustainability factors enable the community to track advancements fairly and meaningfully.


Extensive benchmarks exist for tracking AI model performance, such as those discussed in the Benchmarking chapter. Still, a clear and pressing need exists for additional standardized benchmarks focused on sustainability metrics like energy efficiency, carbon emissions, and overall ecological impact. Understanding of the environmental costs of AI is currently hampered by a lack of transparency and standardized measurement around these factors.


Emerging efforts such as the ML.ENERGY Leaderboard, which provides performance and energy consumption benchmarking results for large language model (LLM) text generation, help build understanding of the energy cost of GenAI deployment.


As with any benchmark, Green AI benchmarks must represent realistic usage scenarios and workloads. Benchmarks that focus narrowly on easily gamed metrics may lead to short-term gains but fail to reflect actual production environments where more holistic efficiency and sustainability measures are needed. The community should continue expanding benchmarks to cover diverse use cases.


Wider adoption of common benchmark suites by industry players will accelerate innovation in Green AI by allowing easier comparison of techniques across organizations. Shared benchmarks lower the barrier to demonstrating the sustainability benefits of new tools and best practices. However, when designing industry-wide benchmarks, care must be taken around issues like intellectual property, privacy, and commercial sensitivity. Initiatives to develop open reference datasets for Green AI evaluation may help drive broader participation.


As methods and infrastructure for Green AI continue maturing, the community must revisit benchmark design to ensure existing suites capture new techniques and scenarios well. Tracking the evolving landscape through regular benchmark updates and reviews will be important to maintain representative comparisons over time. Community efforts for benchmark curation can enable sustainable benchmark suites that stand the test of time. Comprehensive benchmark suites owned by research communities or neutral third parties like MLCommons may encourage wider participation and standardization.


16.10 Case Study: Google’s 4Ms


Over the past decade, AI has rapidly moved from academic research to large-scale production systems powering numerous Google products and services. As AI models and workloads have grown exponentially in size and computational demands, concerns have emerged about their energy consumption and carbon footprint. Some researchers predicted runaway growth in ML’s energy appetite that could outweigh efficiencies gained from improved algorithms and hardware (Thompson et al. 2021).

Thompson, Neil C., Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso. 2021. “Deep Learning’s Diminishing Returns: The Cost of Improvement Is Becoming Unsustainable.” IEEE Spectr. 58 (10): 50–55. https://doi.org/10.1109/mspec.2021.9563954.

However, Google’s production data reveals a different story: AI represented a steady 10-15% of total company energy usage from 2019 to 2021. This case study analyzes how Google applied a systematic approach leveraging four best practices, what it terms the “4 Ms” (model efficiency, machine optimization, mechanization through cloud computing, and mapping to green locations), to bend the curve on emissions from AI workloads.


The scale of Google’s AI usage makes it an ideal case study. In 2021 alone, the company trained models like the 1.2 trillion-parameter GLaM model. Analyzing how AI adoption has been paired with rapid efficiency gains in this environment provides a logical blueprint for the broader AI field to follow.


By transparently publishing detailed energy usage statistics, adoption rates of carbon-free cloud regions, renewables purchases, and more, alongside its technical innovations, Google has enabled outside researchers to measure progress accurately. Their published study (Patterson et al. 2022) highlights how the company’s multipronged approach shows that runaway AI energy consumption predictions can be overcome by focusing engineering efforts on sustainable development patterns. The pace of improvements also suggests ML’s efficiency gains are just starting.

Patterson, David, Joseph Gonzalez, Urs Holzle, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. 2022. “The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink.” Computer 55 (7): 18–28. https://doi.org/10.1109/mc.2022.3148714.

16.10.1 Google’s 4M Best Practices


To curb emissions from their rapidly expanding AI workloads, Google engineers systematically identified four best practice areas–termed the “4 Ms”–where optimizations could compound to reduce the carbon footprint of ML:

  • Model: Selecting efficient AI model architectures can reduce computation by 5-10X with no loss in model quality. Google has extensively researched sparse models and neural architecture search to create more efficient models like the Evolved Transformer and Primer.
  • Machine: Using hardware optimized for AI over general-purpose systems improves performance per watt by 2-5X. Google’s Tensor Processing Units (TPUs) led to 5-13X better carbon efficiency versus GPUs not optimized for ML.
  • Mechanization: Leveraging cloud computing systems tailored for high utilization over conventional on-premise data centers reduces energy costs by 1.4-2X. Google cites its data centers’ power usage effectiveness as outpacing industry averages.
  • Map: Choosing data center locations with low-carbon electricity reduces gross emissions by another 5-10X. Google provides real-time maps highlighting the percentage of renewable energy used by its facilities.

Together, these practices created drastic compound efficiency gains. For example, optimizing the Transformer AI model on TPUs in a sustainable data center location cut energy use by a factor of 83 and lowered \(\textrm{CO}_2\) emissions by a factor of 747.
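
The compounding effect of these ranges can be illustrated with a small back-of-the-envelope calculation (a sketch using the per-practice factors listed above; the way they are combined here is an illustration, not Google’s published accounting):

```python
# Back-of-the-envelope compounding of the 4M practice ranges cited above.
# Each tuple is (low, high) multiplicative improvement for one practice.
practices = {
    "Model (efficient architectures)": (5, 10),
    "Machine (ML-optimized hardware)": (2, 5),
    "Mechanization (cloud utilization)": (1.4, 2),
    "Map (low-carbon locations)": (5, 10),
}

low = high = 1.0
for name, (lo, hi) in practices.items():
    low *= lo
    high *= hi

print(f"Combined improvement factor: {low:.0f}x to {high:.0f}x")
```

Multiplying the low and high ends of each range gives a combined factor between roughly 70x and 1000x, which brackets the 747x emissions reduction cited for the Transformer/TPU example.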


16.10.2 Significant Results


Despite exponential growth in AI adoption across products and services, Google’s efforts to improve the carbon efficiency of ML have produced measurable gains, helping to restrain overall energy appetite. One key data point highlighting this progress is that AI workloads have remained a steady 10% to 15% of total company energy use from 2019 to 2021. As AI became integral to more Google offerings, overall compute cycles dedicated to AI grew substantially. However, efficiencies in algorithms, specialized hardware, data center design, and flexible geography allowed sustainability to keep pace—with AI representing just a fraction of total data center electricity over years of expansion.


Other case studies underscore how an engineering focus on sustainable AI development patterns enabled rapid quality improvements in lockstep with environmental gains. For example, the natural language processing model GPT-3 was viewed as state-of-the-art in mid-2020. Yet its successor, GLaM, improved accuracy while cutting training compute needs and using cleaner data center energy, cutting CO2 emissions by a factor of 14 in just 18 months of model evolution.


Similarly, Google found that past published speculation had missed the mark on ML’s energy appetite by factors of 100 to 100,000X due to a lack of real-world metrics. By transparently tracking optimization impact, Google hoped to motivate efficiency while preventing overestimated extrapolations about ML’s environmental toll.


These data-driven case studies show how companies like Google are steering AI advancements toward sustainable trajectories and improving efficiency to outpace adoption growth. With further efforts around lifecycle analysis, inference optimization, and renewable expansion, companies can aim to accelerate progress, giving evidence that ML’s clean potential is only just being unlocked by current gains.


16.10.3 Further Improvements


While Google has made measurable progress in restraining the carbon footprint of its AI operations, the company recognizes further efficiency gains will be vital for responsible innovation given the technology’s ongoing expansion.


One area of focus is showing how advances often incorrectly viewed as increasing unsustainable computing, like neural architecture search (NAS) to find optimized models, actually spur downstream savings that outweigh their upfront costs. Despite expending more energy on model discovery than hand-engineering, NAS cuts lifetime emissions by producing efficient designs callable across countless applications.


Additionally, the analysis reveals that focusing sustainability efforts on data center and server-side optimization makes sense, given the dominant energy draw versus consumer devices. Though Google aims to shrink inference impacts across processors like mobile phones, priority rests on improving training cycles and data center renewables procurement for maximal effect.


To that end, Google’s progress in pooling computing in efficiently designed cloud facilities highlights the value of scale and centralization. As more workloads shift away from inefficient on-premise servers, internet giants’ prioritization of renewable energy—with Google and Facebook matched 100% by renewables since 2017 and 2020, respectively—unlocks compounding emissions cuts.


Together, these efforts emphasize that while no resting on laurels is possible, Google’s multipronged approach shows that AI efficiency improvements are only accelerating. Cross-domain initiatives around lifecycle assessment, carbon-conscious development patterns, transparency, and matching rising AI demand with clean electricity supply pave a path toward bending the curve further as adoption grows. The company’s results compel the broader field towards replicating these integrated sustainability pursuits.


16.11 Embedded AI - Internet of Trash


While much attention has focused on making the immense data centers powering AI more sustainable, an equally pressing concern is the movement of AI capabilities into smart edge devices and endpoints. Edge/embedded AI allows near real-time responsiveness without connectivity dependencies. It also reduces transmission bandwidth needs. However, the proliferation of tiny devices introduces other risks.


Tiny computers, microcontrollers, and custom ASICs powering edge intelligence face size, cost, and power limitations that rule out high-end GPUs used in data centers. Instead, they require optimized algorithms and extremely compact, energy-efficient circuitry to run smoothly. However, engineering for these microscopic form factors opens up risks around planned obsolescence, disposability, and waste. Figure fig-iot-devices shows that the number of IoT devices is projected to reach 30 billion connected devices by 2030.

Figure 16.8: Number of Internet of Things (IoT) connected devices worldwide from 2019 to 2023. Credit: Statista.

End-of-life handling of internet-connected gadgets embedded with sensors and AI remains an often overlooked issue during design. However, these products permeate consumer goods, vehicles, public infrastructure, industrial equipment, and more.


E-waste


Electronic waste, or e-waste, refers to discarded electrical equipment and components that enter the waste stream. This includes devices that have to be plugged in, have a battery, or contain electrical circuitry. With the rising adoption of internet-connected smart devices and sensors, e-waste volumes rapidly increase yearly. These proliferating gadgets contain toxic heavy metals like lead, mercury, and cadmium that become environmental and health hazards when improperly disposed of.


The amount of electronic waste being produced is growing at an alarming rate. Today, we already produce 50 million tons per year. By 2030, that figure is projected to jump to a staggering 75 million tons as consumer electronics consumption continues to accelerate. Global e-waste production will reach 120 million tons annually by 2050 (UN and World Economic Forum 2019). The soaring production and short lifecycles of our gadgets fuel this crisis, from smartphones and tablets to internet-connected devices and home appliances.
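
As a quick arithmetic check, the annual growth rates implied by these projections can be derived with compound-growth math (a sketch; the tonnage figures are those quoted above, and the baseline year is assumed to be roughly 2020):

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two data points."""
    return (end / start) ** (1 / years) - 1

# Figures quoted above (million tons/year), taking "today" as ~2020.
print(f"2020->2030: {implied_cagr(50, 75, 10):.1%} per year")
print(f"2020->2050: {implied_cagr(50, 120, 30):.1%} per year")
```

Both horizons imply sustained growth of roughly 3-4% per year, every year, for decades, which underscores why one-off cleanup efforts cannot keep pace without structural changes to device lifecycles.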


Developing nations are being hit the hardest, as they lack the infrastructure to safely process obsolete electronics. In 2019, formal e-waste recycling rates in poorer countries ranged from 13% to 23%. The remainder ends up illegally dumped, burned, or crudely dismantled, releasing toxic materials into the environment and harming workers and local communities. Clearly, more needs to be done to build global capacity for ethical and sustainable e-waste management, or we risk irreversible damage.


The danger is that crude handling of electronics to strip valuables exposes marginalized workers and communities to noxious burnt plastics and metals. Lead poisoning poses especially high risks to child development if ingested or inhaled. Overall, only about 20% of e-waste produced was collected using environmentally sound methods, according to UN estimates (UN and World Economic Forum 2019). Solutions for responsible lifecycle management are therefore urgently required to contain unsafe disposal as volumes soar higher.

UN, and World Economic Forum. 2019. A New Circular Vision for Electronics, Time for a Global Reboot. PACE - Platform for Accelerating the Circular Economy. https://www3.weforum.org/docs/WEF_A_New_Circular_Vision_for_Electronics.pdf.

Disposable Electronics


The rapidly falling costs of microcontrollers, tiny rechargeable batteries, and compact communication hardware have enabled the embedding of intelligent sensor systems throughout everyday consumer goods. These internet-of-things (IoT) devices monitor product conditions, user interactions, and environmental factors to enable real-time responsiveness, personalization, and data-driven business decisions in the evolving connected marketplace.


However, these embedded electronics face little oversight or planning around sustainably handling their eventual disposal once the often plastic-encased products are discarded after brief lifetimes. IoT sensors now commonly reside in single-use items like water bottles, food packaging, prescription bottles, and cosmetic containers that overwhelmingly enter landfill waste streams after a few weeks to months of consumer use.


The problem accelerates as more manufacturers rush to integrate mobile chips, power sources, Bluetooth modules, and other modern silicon ICs, costing under US$1, into various merchandise without protocols for recycling, replacing batteries, or component reusability. Despite their small individual size, the volumes of these devices and lifetime waste burden loom large. Unlike regulating larger electronics, few policy constraints exist around materials requirements or toxicity in tiny disposable gadgets.


While offering convenience when working, the unsustainable combination of difficult retrievability and limited safe breakdown mechanisms causes disposable connected devices to contribute outsized shares of future e-waste volumes needing urgent attention.


Planned Obsolescence


Planned obsolescence refers to the intentional design strategy of manufacturing products with artificially limited lifetimes that quickly become non-functional or outdated. This spurs faster replacement purchase cycles as consumers find devices no longer meet their needs within a few years. However, electronics designed for premature obsolescence contribute to unsustainable e-waste volumes.


For example, gluing smartphone batteries and components together hinders repairability compared to modular, accessible assemblies. Rolling out software updates that deliberately slow system performance creates a perception that upgrading devices produced only several years earlier is worth it.


Likewise, fashionable introductions of new product generations with minor but exclusive feature additions make prior versions rapidly seem dated. These tactics compel buying new gadgets (e.g., iPhones) long before operational endpoints. When multiplied across fast-paced electronics categories, billions of barely worn items are discarded annually.


Planned obsolescence thus intensifies resource utilization and waste creation in making products with no intention for long lifetimes. This contradicts sustainability principles around durability, reuse, and material conservation. While stimulating continuous sales and gains for manufacturers in the short term, the strategy externalizes environmental costs and toxins onto communities lacking proper e-waste processing infrastructure.


Policy and consumer action are crucial to counter gadget designs that are needlessly disposable by default. Companies should also invest in product stewardship programs supporting responsible reuse and reclamation.


Consider a real-world example. Apple has faced scrutiny over the years for allegedly engaging in planned obsolescence to encourage customers to buy new iPhone models. The company allegedly designed its phones so that performance degrades over time or existing features become incompatible with new operating systems, which critics argue is meant to spur more rapid upgrade cycles. In 2020, Apple paid a €25 million fine to settle a case in France where regulators found the company guilty of intentionally slowing down older iPhones via iOS updates without clearly informing customers.


By failing to be transparent about power management changes that reduced device performance, Apple participated in deceptive activities that reduced product lifespan to drive sales. The company claimed it was done to “smooth out” peaks that could suddenly cause older batteries to shut down. However, this example highlights the legal risks of employing planned obsolescence and not properly disclosing when functionality changes impact device usability over time. Even leading brands like Apple can run into trouble if perceived as intentionally shortening product life cycles.


16.12 Policy and Regulatory Considerations


16.12.1 Measurement and Reporting Mandates


One policy mechanism that is increasingly relevant for AI systems is measurement and reporting requirements regarding energy consumption and carbon emissions. Mandated metering, auditing, disclosures, and more rigorous methodologies aligned to sustainability metrics can help address information gaps hindering efficiency optimizations.


For example, national or regional policies could require companies above a certain size that utilize AI in their products or backend systems to report energy consumption or emissions associated with major AI workloads. Organizations like the Partnership on AI, IEEE, and NIST could help shape standardized methodologies. More complex proposals involve defining consistent ways to measure computational complexity, data center PUE, carbon intensity of energy supply, and efficiencies gained through AI-specific hardware.
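
As a hedged sketch of the kind of metric such methodologies might standardize, a workload’s operational emissions can be estimated by combining its IT energy draw with facility PUE and grid carbon intensity (the multiplicative structure is the commonly used estimate; the function name and all numbers below are illustrative assumptions, not a mandated standard):

```python
def operational_co2_kg(it_energy_kwh: float, pue: float,
                       grid_intensity_kg_per_kwh: float) -> float:
    """Estimate operational CO2 for an AI workload.

    Total facility energy = IT energy * PUE; emissions are that
    facility energy scaled by the grid's carbon intensity.
    """
    return it_energy_kwh * pue * grid_intensity_kg_per_kwh

# Illustrative: a 10 MWh training run in an average facility (PUE 1.5)
# on a 0.4 kg CO2/kWh grid, versus the same run in an efficient
# facility (PUE 1.1) on a low-carbon 0.05 kg CO2/kWh grid.
baseline = operational_co2_kg(10_000, 1.5, 0.4)
optimized = operational_co2_kg(10_000, 1.1, 0.05)
print(f"{baseline:.0f} kg vs {optimized:.0f} kg CO2")
```

Even this toy comparison shows why standardized reporting of PUE and grid intensity matters: the same computation can differ by an order of magnitude in emissions depending on where and how it runs.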


Reporting obligations for public sector users procuring AI services—such as through proposed legislation in Europe—could also increase transparency. However, regulators must balance the additional measurement burden such mandates place on organizations against ongoing carbon reductions from ingraining sustainability-conscious development patterns.


To be most constructive, any measurement and reporting policies should focus on enabling continuous refinement rather than simplistic restrictions or caps. As AI advancements unfold rapidly, nimble governance guardrails that embed sustainability considerations into normal evaluation metrics can motivate positive change. However, overprescription risks constraining innovation if requirements grow outdated. AI efficiency policy aims to accelerate progress industry-wide by combining flexibility with appropriate transparency guardrails.


16.12.2 Restriction Mechanisms


In addition to reporting mandates, policymakers have several restriction mechanisms that could directly shape how AI systems are developed and deployed to curb emissions:


Caps on Computing Emissions: The European Commission’s proposed AI Act takes a horizontal approach that could allow setting economy-wide caps on the volume of computing power available for training AI models. Like emissions trading systems, caps aim to indirectly disincentivize extensive computing over sustainability. However, such caps would need pathways for procuring additional capacity where it demonstrably improves model quality.


Conditioning Access to Public Resources: Some experts have proposed incentives like only allowing access to public datasets or computing power for developing fundamentally efficient models rather than extravagant architectures. For example, the MLCommons benchmarking consortium founded by major tech firms could formally integrate efficiency into its standardized leaderboard metrics. However, conditioning access risks limiting innovation.


Financial Mechanisms: Analogous to carbon taxes on polluting industries, fees applied per unit of AI-related compute consumption could discourage unnecessary model scaling while funding efficiency innovations. Tax credits could alternatively reward organizations pioneering more accurate but compact AI techniques. However, financial tools require careful calibration between revenue generation and fairness and not over-penalizing productive uses of AI.


Technology Bans: If measurement consistently pinned extreme emissions on specific applications of AI without paths for remediation, outright bans present a tool of last resort for policymakers. However, given AI’s dual use, defining harmful versus beneficial deployments proves complex, necessitating holistic impact assessment before concluding no redeeming value exists. Banning promising technologies risks unintended consequences and requires caution.


16.12.3 Government Incentives


It is a common practice for governments to provide tax or other incentives to consumers or businesses when contributing to more sustainable technological practices. Such incentives already exist in the US for adopting solar panels or energy-efficient buildings. To the best of our knowledge, no such tax incentives exist for AI-specific development practices yet.


Another potential incentive program that is beginning to be explored is using government grants to fund Green AI projects. For example, in Spain, 300 million euros have been allocated to specifically fund projects in AI and sustainability. Government incentives are a promising avenue to encourage sustainable business and consumer behavior practices, but careful thought is required to determine how those incentives will fit into market demands (Cohen, Lobel, and Perakis 2016).

Cohen, Maxime C., Ruben Lobel, and Georgia Perakis. 2016. “The Impact of Demand Uncertainty on Consumer Subsidies for Green Technology Adoption.” Manage. Sci. 62 (5): 1235–58. https://doi.org/10.1287/mnsc.2015.2173.

16.12.4 Self-Regulation


Complementary to potential government action, voluntary self-governance mechanisms allow the AI community to pursue sustainability ends without top-down intervention:


Renewables Commitments: Large AI practitioners like Google, Microsoft, Amazon, and Facebook have pledged to procure enough renewable electricity to match 100% of their energy demands. These commitments unlock compounding emissions cuts as compute scales up. Formalizing such programs incentivizes green data center regions. However, there are critiques of whether these pledges go far enough (Monyei and Jenkins 2018).

Monyei, Chukwuka G., and Kirsten E. H. Jenkins. 2018. “Electrons Have No Identity: Setting Right Misrepresentations in Google and Apple’s Clean Energy Purchasing.” Energy Research & Social Science 46 (December): 48–51. https://doi.org/10.1016/j.erss.2018.06.015.

Internal Carbon Prices: Some organizations utilize shadow prices on carbon emissions to represent environmental costs in capital allocation decisions between AI projects. If modeled effectively, theoretical charges on development carbon footprints steer funding toward efficient innovations rather than solely accuracy gains.
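
A hypothetical sketch of how such a shadow price can steer capital allocation (the internal price, project names, costs, and emissions figures below are invented for illustration):

```python
# Shadow carbon pricing in project selection: each project's budget
# request is adjusted by an internal carbon charge, steering funding
# toward the lower-emissions option even when its sticker price is higher.
SHADOW_PRICE_PER_TONNE = 100.0  # assumed internal price, USD per tCO2

projects = [
    {"name": "Large dense model", "cost_usd": 400_000, "tco2": 300},
    {"name": "Compact sparse model", "cost_usd": 420_000, "tco2": 40},
]

for p in projects:
    p["adjusted_cost"] = p["cost_usd"] + p["tco2"] * SHADOW_PRICE_PER_TONNE

best = min(projects, key=lambda p: p["adjusted_cost"])
print(best["name"])
```

On raw cost alone the dense model wins (USD 400k vs 420k), but once the carbon charge is applied (430k vs 424k) the ranking flips toward the compact design, which is the behavior shadow pricing is meant to induce.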


Efficiency Development Checklists: Groups like the AI Sustainability Coalition suggest voluntary checklist templates highlighting model design choices, hardware configurations, and other factors architects can tune per application to restrain emissions. Organizations can drive change by ingraining sustainability as a primary success metric alongside accuracy and cost.


Independent Auditing: Even absent public disclosure mandates, firms specializing in technology sustainability audits help AI developers identify waste, create efficiency roadmaps, and benchmark progress via impartial reviews. Structuring such audits into internal governance procedures or the procurement process expands accountability.


16.12.5 Global Considerations


While measurement, restrictions, incentives, and self-regulation represent potential policy mechanisms for furthering AI sustainability, fragmentation across national regimes risks unintended consequences. As with other technology policy domains, divergence between regions must be carefully managed.


For example, due to regional data privacy concerns, OpenAI barred European users from accessing its viral ChatGPT chatbot. This came after the EU’s proposed AI Act signaled a precautionary approach, allowing the EC to ban certain high-risk AI uses and enforcing transparency rules that create uncertainty for releasing brand new models. However, overly aggressive regulatory action could inadvertently limit European innovation if regimes with lighter-touch regulation attract more private-sector AI research spending and talent. Finding common ground is key.


The OECD principles on AI and the United Nations frameworks underscore universally agreed-upon tenets all national policies should uphold: transparency, accountability, bias mitigation, and more. Constructively embedding sustainability as a core principle for responsible AI within international guidance can motivate unified action without sacrificing flexibility across divergent legal systems. Avoiding race-to-the-bottom dynamics hinges on enlightened multilateral cooperation.


16.13 Public Perception and Engagement


As societal attention and policy efforts aimed at environmental sustainability ramp up worldwide, there is growing enthusiasm for leveraging AI to help address ecological challenges. However, public understanding of and attitudes toward the role of AI systems in sustainability contexts remain unclear and clouded by misconceptions. On the one hand, people hope advanced algorithms can provide new solutions for green energy, responsible consumption, decarbonization pathways, and ecosystem preservation. On the other, fears regarding the risks of uncontrolled AI also seep into the environmental domain and undermine constructive discourse. Furthermore, a lack of public awareness of key issues like transparency in developing sustainability-focused AI tools and potential biases in data or modeling also threatens to limit inclusive participation and degrade public trust.


Tackling complex, interdisciplinary priorities like environmental sustainability requires informed, nuanced public engagement and responsible advances in AI innovation. The path forward demands careful, equitable collaborative efforts between experts in ML, climate science, environmental policy, social science, and communication. Mapping the landscape of public perceptions, identifying pitfalls, and charting strategies to cultivate understandable, accessible, and trustworthy AI systems targeting shared ecological priorities will prove essential to realizing sustainability goals. This complex terrain warrants a deep examination of the sociotechnical dynamics involved.


16.13.1 AI Awareness


In May 2022, the Pew Research Center polled 5,101 US adults, finding 60% had heard or read “a little” about AI while 27% had heard “a lot”, indicating decent broad recognition but likely limited comprehension of details or applications. Among those with some AI familiarity, concerns emerge regarding the risk of personal data being misused contrary to agreed terms. Still, 62% felt AI could ease modern life if applied responsibly. Yet specific understanding of sustainability contexts remains limited.


Studies attempting to categorize online discourse sentiments find a nearly even split between optimism and caution regarding deploying AI for sustainability goals. Factors driving positivity include hopes around better forecasting of ecological shifts using ML models. Negativity arises from a lack of confidence in self-supervised algorithms avoiding unintended consequences due to unpredictable human impacts on complex natural systems during training.


The most prevalent public belief remains that while AI harbors potential for accelerating solutions on issues like emission reductions and wildlife protection, inadequate safeguards around data biases, ethical blindspots, and privacy considerations are underappreciated risks if AI is pursued carelessly, especially at scale. This leads to hesitancy around unconditional support without evidence of deliberate, democratically guided development.


16.13.2 Messaging


Optimistic messaging highlights AI’s sustainability promise, emphasizing the potential for advanced ML to radically accelerate decarbonization through smart grids, personalized carbon tracking apps, automated building efficiency optimizations, and predictive analytics guiding targeted conservation efforts. More comprehensive real-time modeling of complex climate and ecological shifts using self-improving algorithms offers hope for mitigating biodiversity losses and averting worst-case scenarios.


However, cautionary perspectives, such as the Asilomar AI Principles, question whether AI itself could exacerbate sustainability challenges if improperly constrained. The rising energy demands of large-scale computing systems and the increasingly massive neural network model training conflict with clean energy ambitions. Lack of diversity in data inputs or developers’ priorities may downplay urgent environmental justice considerations. Near-term skeptical public engagement likely hinges on a need for perceivable safeguards against uncontrolled AI systems running amok on core ecological processes.


In essence, polarized framings either promote AI as an indispensable tool for sustainability problem-solving–if compassionately directed toward people and the planet–or present AI as an amplifier of existing harms insidiously dominating hidden facets of natural systems central to all life. Overcoming such impasses demands balancing honest trade-off discussions with shared visions for equitable, democratically governed technological progress targeting restoration.


16.13.3 Equitable Participation


Ensuring equitable participation and access should form a cornerstone of any sustainability initiative with the potential for major societal impacts. This principle applies equally to AI systems targeting environmental goals. However, commonly excluded voices like frontline, rural, or indigenous communities and future generations not present to consent could suffer disproportionate consequences from technology transformations. For instance, the Partnership on AI has launched events expressly targeting input from marginalized communities on deploying AI responsibly.


However, inclusive engagement in environmental AI relies partly on the availability and understanding of fundamental computing resources. As the recent OECD report on National AI Compute Capacity highlights (OECD 2023), many countries currently lack data or strategic plans mapping the infrastructure required to fuel AI systems. This policy blindspot could constrain economic goals and exacerbate barriers to entry for marginalized populations. The report’s blueprint urges developing national AI compute capacity strategies along dimensions of capacity, accessibility, innovation pipelines, and resilience to anchor innovation. Uneven access to the underlying data storage, model development platforms, or specialized hardware could otherwise concentrate AI progress in the hands of select groups. Therefore, planning for a balanced expansion of fundamental AI computing resources via policy initiatives ties directly to hopes for democratized sustainability problem-solving using equitable and transparent ML tools.

OECD. 2023. “A Blueprint for Building National Compute Capacity for Artificial Intelligence.” 350. Organisation for Economic Co-operation and Development (OECD). https://doi.org/10.1787/876367e3-en.

The key idea is that equitable participation in AI systems targeting environmental challenges relies in part on ensuring the underlying computing capacity and infrastructure are in place, which requires proactive policy planning from a national perspective.


16.13.4 Transparency


As public sector agencies and private companies alike rush towards adopting AI tools to help tackle pressing environmental challenges, calls for transparency around these systems’ development and functionality have begun to amplify. Explainable and interpretable ML features grow more crucial for building trust in emerging models aiming to guide consequential sustainability policies. Initiatives like the Montreal Carbon Pledge brought tech leaders together to commit to publishing impact assessments before launching environmental systems, as pledged below:


“As institutional investors, we must act in the best long-term interests of our beneficiaries. In this fiduciary role, long-term investment risks are associated with greenhouse gas emissions, climate change, and carbon regulation.


Measuring our carbon footprint is integral to better understanding, quantifying, and managing the carbon and climate change-related impacts, risks, and opportunities in our investments. Therefore, as a first step, we commit to measuring and disclosing the carbon footprint of our investments annually to use this information to develop an engagement strategy and identify and set carbon footprint reduction targets.”


We need a similar pledge for AI sustainability and responsibility. Widespread acceptance and impact of AI sustainability solutions will depend partly on deliberate communication of the validation schemes, metrics, and layers of human judgment applied before live deployment. The National Institute of Standards and Technology (NIST) has published an influential set of guidelines dubbed the Principles for Explainable AI (Phillips et al. 2020) that can help foster transparency into AI systems. This framework articulates best practices for designing, evaluating, and deploying responsible AI systems with transparent and interpretable features that build critical user understanding and trust.

Phillips, P. Jonathon, Carina A. Hahn, Peter C. Fontana, David A. Broniatowski, and Mark A. Przybocki. 2020. “Four Principles of Explainable Artificial Intelligence.” Gaithersburg, Maryland: National Institute of Standards and Technology.

It delineates four core principles: Firstly, AI systems should provide contextually relevant explanations justifying the reasoning behind their outputs to appropriate stakeholders. Secondly, these AI explanations must communicate information meaningfully for their target audience’s appropriate comprehension level. Next is the accuracy principle, which dictates that explanations should faithfully reflect the actual process and logic informing an AI model’s internal mechanics for generating given outputs or recommendations based on inputs. Finally, a knowledge limits principle compels explanations to clarify an AI model’s boundaries in capturing the full breadth of real-world complexity, variance, and uncertainties within a problem space.

+

Altogether, these NIST principles offer AI practitioners and adopters guidance on key transparency considerations vital for developing accessible solutions prioritizing user autonomy and trust rather than simply maximizing predictive accuracy metrics alone. As AI rapidly advances across sensitive social contexts like healthcare, finance, employment, and beyond, such human-centered design guidelines will continue growing in importance for anchoring innovation to public interests.

+

This applies equally to the domain of environmental sustainability. Responsible and democratically guided AI innovation targeting shared ecological priorities depends on maintaining public vigilance, understanding, and oversight over otherwise opaque systems taking prominent roles in societal decisions. Prioritizing explainable algorithm designs and radical transparency practices per global standards can help sustain collective confidence that these tools improve rather than imperil hopes for a sustainability-driven future.

+
+
+
+

16.14 Future Directions and Challenges

+

As we look towards the future, the role of AI in environmental sustainability is poised to grow even more significant. AI’s potential to drive advancements in renewable energy, climate modeling, conservation efforts, and more is immense. However, this is a double-edged sword: we must overcome several challenges and direct our efforts toward sustainable and responsible AI development.

+
+

16.14.1 Future Directions

+

One key future direction is the development of more energy-efficient AI models and algorithms. This involves ongoing research and innovation in areas like model pruning, quantization, and the use of low-precision numerics, as well as developing hardware that lets these innovations realize their full benefit. Looking even further ahead, we can consider alternative computing paradigms that do not rely on von Neumann architectures. More on this topic can be found in the hardware acceleration chapter. The goal is to create AI systems that deliver high performance while minimizing energy consumption and carbon emissions.

+

Another important direction is the integration of renewable energy sources into AI infrastructure. As data centers continue to be major contributors to AI’s carbon footprint, transitioning to renewable energy sources like solar and wind is crucial. Developments in long-term, sustainable energy storage, such as Ambri, an MIT spinoff, could enable this transition. This requires significant investment and collaboration between tech companies, energy providers, and policymakers.

+
+
+

16.14.2 Challenges

+

Despite these promising directions, several challenges need to be addressed. A major one is the lack of consistent standards and methodologies for measuring and reporting the environmental impact of AI. These methods must capture the complexity of the life cycles of AI models and system hardware. Next, efficient and environmentally sustainable AI infrastructure and system hardware are needed. This effort has three components: maximizing the utilization of accelerator and system resources, prolonging the lifetime of AI infrastructure, and designing system hardware with environmental impact in mind.

+

On the software side, we must weigh the value of experimentation against its training cost. Techniques such as neural architecture search and hyperparameter optimization can be used for design space exploration, but they are often very resource-intensive. Efficient experimentation can significantly reduce the associated environmental footprint. Methods to reduce wasted training effort should also be explored.

+

To improve model quality, we often scale the dataset. However, the increased system resources required for data storage and ingestion caused by this scaling have a significant environmental impact (Wu et al. 2022). A thorough understanding of the rate at which data loses its predictive value and devising data sampling strategies is important.

+
+Wu, Carole-Jean, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, et al. 2022. “Sustainable Ai: Environmental Implications, Challenges and Opportunities.” Proceedings of Machine Learning and Systems 4: 795–813. +

Data gaps also pose a significant challenge. Without companies and governments openly sharing detailed and accurate data on energy consumption, carbon emissions, and other environmental impacts, it is difficult to develop effective strategies for sustainable AI.

+

Finally, the fast pace of AI development requires an agile approach to the policy imposed on these systems. The policy should ensure sustainable development without constraining innovation. This requires experts in all domains of AI, environmental sciences, energy, and policy to work together to achieve a sustainable future.

+
+
+
+

16.15 Conclusion

+

We must address sustainability considerations as AI rapidly expands across industries and society. AI promises breakthrough innovations, yet its environmental footprint threatens its widespread growth. This chapter analyzes multiple facets, from energy and emissions to waste and biodiversity impacts, that AI/ML developers must weigh when creating responsible AI systems.

+

Fundamentally, sustainability must be elevated to a primary design priority rather than an afterthought. Techniques like energy-efficient models, renewable-powered data centers, and hardware recycling programs offer solutions, but a holistic commitment remains vital. We need standards around transparency, carbon accounting, and supply chain disclosures to supplement technical gains. Still, examples like Google’s 4M efficiency practices for curbing ML energy use show that, with concerted effort, we can advance AI in lockstep with environmental objectives. We achieve this harmonious balance by having researchers, corporations, regulators, and users collaborate across domains. The aim is not perfect solutions but continuous improvement as we integrate AI across new sectors.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/training/training.html b/contents/training/training.html new file mode 100644 index 00000000..2724f6d3 --- /dev/null +++ b/contents/training/training.html @@ -0,0 +1,2350 @@ + + + + + + + + + +Machine Learning Systems - 7  AI Training + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

7  AI Training

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: An illustration for AI training, depicting a neural network with neurons that are being repaired and firing. The scene includes a vast network of neurons, each glowing and firing to represent activity and learning. Among these neurons, small figures resembling engineers and scientists are actively working, repairing and tweaking the neurons. These miniature workers symbolize the process of training the network, adjusting weights and biases to achieve convergence. The entire scene is a visual metaphor for the intricate and collaborative effort involved in AI training, with the workers representing the continuous optimization and learning within a neural network. The background is a complex array of interconnected neurons, creating a sense of depth and complexity.
+
+
+

Training is central to developing accurate and useful AI systems using machine learning techniques. At a high level, training involves feeding data into machine learning algorithms so they can learn patterns and make predictions. However, effectively training models requires tackling various challenges around data, algorithms, optimization of model parameters, and enabling generalization. This chapter will explore the nuances and considerations around training machine learning models.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand the fundamental mathematics of neural networks, including linear transformations, activation functions, loss functions, backpropagation, and optimization via gradient descent.

  • +
  • Learn how to effectively leverage data for model training through proper splitting into train, validation, and test sets to enable generalization.

  • +
  • Learn various optimization algorithms like stochastic gradient descent and adaptations like momentum and Adam that accelerate training.

  • +
  • Understand hyperparameter tuning and regularization techniques to improve model generalization by reducing overfitting.

  • +
  • Learn proper weight initialization strategies matched to model architectures and activation choices that accelerate convergence.

  • +
  • Identify the bottlenecks posed by key operations like matrix multiplication during training and deployment.

  • +
  • Learn how hardware improvements like GPUs, TPUs, and specialized accelerators speed up critical math operations to accelerate training.

  • +
  • Understand parallelization techniques, both data and model parallelism, to distribute training across multiple devices and accelerate system throughput.

  • +
+
+
+
+

7.1 Introduction

+

Training is critical for developing accurate and useful AI systems using machine learning. The training aims to create a machine learning model that can generalize to new, unseen data rather than memorizing the training examples. This is done by feeding training data into algorithms that learn patterns from these examples by adjusting internal parameters.

+

The algorithms minimize a loss function, which compares their predictions on the training data to the known labels or solutions, guiding the learning. Effective training often requires high-quality, representative data sets large enough to capture variability in real-world use cases.

+

It also requires choosing an algorithm suited to the task, whether a neural network for computer vision, a reinforcement learning algorithm for robotic control, or a tree-based method for categorical prediction. Careful tuning is needed for the model structure, such as neural network depth and width, and learning parameters like step size and regularization strength.

+

Techniques to prevent overfitting, like regularization penalties and validation with held-out data, are also important. Overfitting can occur when a model fits the training data too closely, failing to generalize to new data. This can happen if the model is too complex or trained too long.

+

To avoid overfitting, regularization techniques can help constrain the model. One regularization method is adding a penalty term to the loss function that discourages complexity, like the L2 norm of the weights. This penalizes large parameter values. Another technique is dropout, where a percentage of neurons is randomly set to zero during training. This reduces neuron co-adaptation.
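The two techniques just described can be sketched in a few lines of NumPy. This is a minimal, framework-free illustration (the function names, the penalty strength `lam`, and the placeholder `data_loss` are our own assumptions, not any library's API); frameworks like PyTorch expose the same ideas via the optimizer's `weight_decay` argument and `nn.Dropout`.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam=1e-2):
    """L2 regularization: lam times the sum of squared weights, added to the loss."""
    return lam * sum(np.sum(W ** 2) for W in weights)

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero each neuron with probability p during training,
    scaling survivors by 1/(1-p) so the expected activation is unchanged."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p
    return activations * mask / (1.0 - p)

# Penalized loss for a tiny two-layer model
W1 = rng.standard_normal((30, 100))
W2 = rng.standard_normal((1, 30))
data_loss = 0.42  # placeholder for the unregularized training loss
total_loss = data_loss + l2_penalty([W1, W2])
```

At inference time, dropout is disabled (`training=False`); the inverted scaling during training keeps activation magnitudes consistent between the two modes.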

+

Validation methods also help detect and avoid overfitting. Part of the training data is held out from the training loop as a validation set, and the model is evaluated on it. If validation error increases while training error decreases, the model is overfitting. Training can then be stopped early or regularized more strongly. Regularization and validation enable models to train to maximum capability without overfitting the training data.
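The validation-based stopping rule just described is commonly implemented as early stopping with a "patience" window. A minimal sketch (the function name, patience value, and simulated loss curve are illustrative assumptions, not taken from any framework):

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch at which to stop: the first epoch after which
    validation loss has failed to improve for `patience` epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch  # validation error keeps rising: stop training
    return len(val_losses) - 1  # trained to completion

# Simulated validation curve: improves, then overfits after epoch 3
curve = [1.0, 0.8, 0.6, 0.55, 0.58, 0.61, 0.65, 0.70]
stop = early_stopping(curve)  # stops at epoch 6, three epochs past the best
```

In practice one would also restore the weights saved at the best epoch, not just halt training.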

+

Training takes significant computing resources, especially for deep neural networks used in computer vision, natural language processing, and other areas. These networks have millions of adjustable weights that must be tuned through extensive training. Hardware improvements and distributed training techniques have enabled training ever larger neural nets that can achieve human-level performance on some tasks.

+

In summary, some key points about training:

+
    +
  • Data is crucial: Machine learning models learn from examples in training data. More high-quality, representative data leads to better model performance. Data needs to be processed and formatted for training.
  • +
  • Algorithms learn from data: Different algorithms (neural networks, decision trees, etc.) have different approaches to finding patterns in data. Choosing the right algorithm for the task is important.
  • +
  • Training refines model parameters: Model training adjusts internal parameters to find patterns in data. Advanced models like neural networks have many adjustable weights. Training iteratively adjusts weights to minimize a loss function.
  • +
  • Generalization is the goal: A model that overfits the training data will not generalize well. Regularization techniques (dropout, early stopping, etc.) reduce overfitting. Validation data is used to evaluate generalization.
  • +
  • Training takes compute resources: Training complex models requires significant processing power and time. Hardware improvements and distributed training across GPUs/TPUs have enabled advances.
  • +
+

We will walk you through these details in the rest of the sections. Understanding how to effectively leverage data, algorithms, parameter optimization, and generalization through thorough training is essential for developing capable, deployable AI systems that work robustly in the real world.

+
+
+

7.2 Mathematics of Neural Networks

+

Deep learning has revolutionized machine learning and artificial intelligence, enabling computers to learn complex patterns and make intelligent decisions. The neural network is at the heart of the deep learning revolution, and as discussed in section 3, “Deep Learning Primer,” it is a cornerstone in some of these advancements.

+

Neural networks are made up of simple functions layered on each other. Each layer takes in some data, performs some computation, and passes it to the next layer. These layers learn progressively higher-level features useful for the tasks the network is trained to perform. For example, in a network trained for image recognition, the input layer may take in pixel values, while the next layers may detect simple shapes like edges. The layers after that may detect more complex shapes like noses, eyes, etc. The final output layer classifies the image as a whole.

+

The network in a neural network refers to how these layers are connected. Each neuron in a layer is connected to many neurons in the preceding layer, forming a “network.” The way these neurons interact is determined by the weights between them, which model synaptic strengths similar to those of neurons in the brain. The neural network is trained by adjusting these weights. Concretely, the weights are initially set randomly; then input is fed in, the output is compared to the desired result, and finally, the weights are tweaked to improve the network. This process is repeated until the network reliably minimizes the loss, indicating it has learned the patterns in the data.

+

How is this process defined mathematically? Formally, neural networks are mathematical models consisting of alternating linear and nonlinear operations, parameterized by a set of learnable weights that are trained to minimize some loss function. The loss function measures how well our model fits the training data, producing a numerical value when the model is evaluated against that data. Training neural networks involves repeatedly evaluating the loss function on many different data points to measure how good our model is, then continuously tweaking the weights of our model using backpropagation so that the loss decreases, ultimately optimizing the model to fit our data.

+
+

7.2.1 Neural Network Notation

+

Diving into the details, the core of a neural network can be viewed as a sequence of alternating linear and nonlinear operations, as shown in Figure fig-neural-net-diagram:

+

\[ +L_i = W_i A_{i-1} +\]

+

\[ +A_i = F_i(L_{i}) +\]

+
+
+
+ +
+
+

Why are the nonlinear operations necessary? If we only had linear layers, the entire network would be equivalent to a single linear layer consisting of the product of the linear operators. Hence, the nonlinear functions play a key role in the power of neural networks as they enhance the neural network’s ability to fit functions.

+
+
+
+
+
+
+ +
+
+

Convolutions are also linear operators and can be cast as a matrix multiplication.

+
+
+
+
+
+
+ +
+
+Figure 7.1: Neural network diagram. Credit: astroML. +
+
+
+

Where \(A_{0}\) is a vector input to the neural network (i.e., an image that we want the neural network to classify or some other data that the neural network operates on), \(A_{n}\) (where \(n\) is the number of layers of the network) is the vector output of the neural network (i.e., a vector of size 10 in the case of classifying pictures of handwritten digits), \(W_i\)s are the weights of the neural network that are tweaked at training time to fit our data, and \(F_{i}\) is that layer’s nonlinear activation function (i.e., ReLU, softmax, etc.). As defined, the intermediate output of the neural network is a vector of real-valued numbers with dimensions:

+

\[ +L_i, A_i \in \mathbb{R}^{d_{i}} +\]

+

Where \(d_{i}\) is the number of neurons at layer \(i\); in the case of the first layer \(i=0\), \(d_{i}\) is the dimension of the input data, and in the last layer \(i=n\), \(d_{n}\) is the dimension of the output label. Anything in between can be set arbitrarily and may be viewed as the architecture of the neural network (i.e., the dimensionality of the intermediate layers). The weights, which determine how each layer of the neural network interacts with each other, are matrices of real numbers with shape.

+

\[ +W_i \in \mathbb{R}^{d_{i} \times d_{i-1}} +\]

+

Our neural network, as defined, performs a sequence of linear and nonlinear operations on the input data (\(A_{0}\)) to obtain predictions (\(A_{n}\)), which hopefully is a good answer to what we want the neural network to do on the input (i.e., classify if the input image is a cat or not). Our neural network may then be represented succinctly as a function \(N\) which takes in an input \(x \in \mathbb{R}^{d_0}\) parameterized by \(W_1, ..., W_n\):

+

\[ +N(x; W_1, ... W_n) = \text{Let } A_0 = x, \text{ then output } A_n +\]
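The forward computation \(A_0 = x\), \(L_i = W_i A_{i-1}\), \(A_i = F_i(L_i)\) can be sketched directly in NumPy. The layer dimensions, random weights, and ReLU-only activations below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, activations):
    """Compute A_n from A_0 = x via L_i = W_i @ A_{i-1}, A_i = F_i(L_i)."""
    a = x
    for W, f in zip(weights, activations):
        a = f(W @ a)
    return a

# A 3-layer network with (arbitrary) architecture 100 -> 64 -> 32 -> 10
dims = [100, 64, 32, 10]
weights = [0.1 * rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
x = rng.standard_normal(100)  # A_0: the input vector
out = forward(x, weights, [relu, relu, relu])
assert out.shape == (10,)  # A_n has dimension d_n = 10
```

Note that each weight matrix has shape \((d_i, d_{i-1})\), exactly matching the notation \(W_i \in \mathbb{R}^{d_i \times d_{i-1}}\) above.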

+

Next, we will see how to evaluate this neural network against training data by introducing a loss function.

+
+
+

7.2.2 Loss Function as a Measure of Goodness of Fit against Training Data

+

After defining our neural network, we are given some training data, which is a set of points \({(x_j, y_j)}\) for \(j=1..M\), and we want to evaluate how good our neural network is at fitting this data. To do this, we introduce a loss function, which is a function that takes the output of the neural network on a particular datapoint (\(N(x_j; W_1, ..., W_n)\)) and compares it against the “label” of that particular datapoint (the corresponding \(y_j\)), and outputs a single numerical scalar (i.e., one real number) that represents how “good” the neural network fit that particular data point; the final measure of how good the neural network is on the entire dataset is therefore just the average of the losses across all data points.

+

There are many different types of loss functions; for example, in the case of image classification, we might use the cross-entropy loss function, which tells us how well two vectors representing classification predictions compare (i.e., if our prediction predicts that an image is more likely a dog, but the label says it is a cat, it will return a high “loss,” indicating a bad fit).

+

Mathematically, this loss function is a function that takes in two real-valued vectors of the shape of the label and outputs a single numerical scalar. \[ +L: \mathbb{R}^{d_{n}} \times \mathbb{R}^{d_{n}} \longrightarrow \mathbb{R} +\]

+

The loss across the entire dataset can be written as the average loss across all data points in the training data.

+
+

Loss Function for Optimizing Neural Network Model on a Dataset \[ +L_{full} = \frac{1}{M} \sum_{j=1}^{M} L(N(x_j; W_1,...W_n), y_j) +\]

+
+
+
+
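A minimal sketch of the per-example loss \(L\) and the averaged loss \(L_{full}\), using MSE and a toy identity "model" as illustrative assumptions:

```python
import numpy as np

def mse(pred, label):
    """Per-example loss L: maps two vectors of the label's shape to one scalar."""
    return float(np.sum((pred - label) ** 2))

def full_loss(model, xs, ys):
    """L_full: average of the per-example losses over all M training points."""
    return sum(mse(model(x), y) for x, y in zip(xs, ys)) / len(xs)

# Toy "model": the identity map, so the loss is purely label mismatch
model = lambda x: x
xs = [np.array([1.0, 2.0]), np.array([0.0, 1.0])]
ys = [np.array([1.0, 2.0]), np.array([1.0, 1.0])]
print(full_loss(model, xs, ys))  # (0 + 1) / 2 = 0.5
```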

7.2.3 Training Neural Networks with Gradient Descent

+

Now that we can measure how well our network fits the training data, we can optimize the neural network weights to minimize this loss. At a high level, we tweak the parameters of the real-valued matrices \(W_i\)s to minimize the loss function \(L_{full}\). Overall, our mathematical objective is

+
+

Neural Network Training Objective \[ +min_{W_1, ..., W_n} L_{full} +\] \[ += min_{W_1, ..., W_n} \frac{1}{M} \sum_{j=1}^{M} L(N(x_j; W_1,...W_n), y_j) +\]

+
+

So, how do we optimize this objective? Recall from calculus that minimizing a function can be done by taking its derivative with respect to the input parameters and nudging the parameters in the direction opposite to the gradient. This technique is called gradient descent, and it concretely involves calculating the derivative of the loss function \(L_{full}\) with respect to \(W_1, ..., W_n\) to obtain a gradient for these parameters, then updating the parameters by stepping against that gradient. Thus, we can train our neural network using gradient descent, which repeatedly applies the update rule.

+
+

Gradient Descent Update Rule \[ +W_i := W_i - \lambda \frac{\partial L_{full}}{\partial W_i} \mbox{ for } i=1..n +\]

+
+
+
+
+ +
+
+

In practice, the gradient is computed over a minibatch of data points to improve computational efficiency. This is called minibatch or stochastic gradient descent.

+
+
+
+

Here, \(\lambda\) is the stepsize, or learning rate, of our updates. To train our neural network, we repeatedly perform the step above until convergence, i.e., until the loss no longer decreases. Figure fig-gradient-descent illustrates this process: we want to reach the minimum point, which is done by following the gradient (as illustrated with the blue arrows in the figure). This approach is known as full gradient descent, since we compute the derivative with respect to the entire training data before taking a single gradient step; a more efficient approach is to calculate the gradient with respect to just a random batch of data points and then take a step, a process known as minibatch or stochastic gradient descent (Robbins and Monro 1951), which is more efficient since we now take many more steps per pass over the training data. Next, we will cover the mathematics behind computing the gradient of the loss function with respect to the \(W_i\)s, a process known as backpropagation.
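The update rule can be sketched as a minimal stochastic gradient descent loop on one-dimensional linear regression (the synthetic data, learning rate, and batch size here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data: y = 3x + small noise
X = rng.standard_normal(256)
Y = 3.0 * X + 0.01 * rng.standard_normal(256)

w, lr, batch = 0.0, 0.1, 32  # weight, learning rate (lambda), minibatch size
for step in range(200):
    idx = rng.integers(0, len(X), size=batch)   # sample a random minibatch
    xb, yb = X[idx], Y[idx]
    grad = np.mean(2.0 * (w * xb - yb) * xb)    # d/dw of the minibatch MSE
    w -= lr * grad                              # W := W - lambda * dL/dW
# w has converged to approximately 3, the true slope of the data
```

Each iteration touches only 32 of the 256 points, so one pass over the data yields several updates rather than the single step full gradient descent would take.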

+
+Robbins, Herbert, and Sutton Monro. 1951. “A Stochastic Approximation Method.” The Annals of Mathematical Statistics 22 (3): 400–407. https://doi.org/10.1214/aoms/1177729586. +
+
+
+ +
+
+Figure 7.2: Gradient descent. Credit: Towards Data Science. +
+
+
+
+
+

7.2.4 Backpropagation

+

Training neural networks involves repeated application of the gradient descent algorithm, which requires computing the derivative of the loss function with respect to the \(W_i\)s. How do we compute this derivative, given that the \(W_i\)s are nested functions of each other in a deep neural network? The trick is to leverage the chain rule: we can compute the derivative of the loss with respect to the \(W_i\)s by repeatedly applying the chain rule, a process known as backpropagation. Specifically, we calculate the gradients by computing the derivative of the loss with respect to the outputs of the last layer, then progressively use this to compute the derivative of the loss with respect to each prior layer, back to the input layer. This process starts from the end of the network (the layer closest to the output) and progresses backwards, hence the name backpropagation.

+

Let’s break this down. We can compute the derivative of the loss concerning the outputs of each layer of the neural network by using repeated applications of the chain rule.

+

\[ +\frac{\partial L_{full}}{\partial L_{n}} = \frac{\partial A_{n}}{\partial L_{n}} \frac{\partial L_{full}}{\partial A_{n}} +\]

+

\[ +\frac{\partial L_{full}}{\partial L_{n-1}} = \frac{\partial A_{n-1}}{\partial L_{n-1}} \frac{\partial L_{n}}{\partial A_{n-1}} \frac{\partial A_{n}}{\partial L_{n}} \frac{\partial L_{full}}{\partial A_{n}} +\]

+

or more generally

+

\[ +\frac{\partial L_{full}}{\partial L_{i}} = \frac{\partial A_{i}}{\partial L_{i}} \frac{\partial L_{i+1}}{\partial A_{i}} ... \frac{\partial A_{n}}{\partial L_{n}} \frac{\partial L_{full}}{\partial A_{n}} +\]

+
+
+
+ +
+
+

In what order should we perform this computation? From a computational perspective, it is preferable to perform the calculations from the end to the front (i.e., first compute \(\frac{\partial L_{full}}{\partial A_{n}}\), then the prior terms, rather than starting in the middle), since this avoids materializing and multiplying large Jacobians. Because \(\frac{\partial L_{full}}{\partial A_{n}}\) is a vector, any matrix operation that includes this term produces a vector as output. Thus, performing the computation from the end avoids large matrix-matrix multiplications by ensuring that all intermediate products are vectors.

+
+
+
+
+
+
+ +
+
+

In our notation, we assume the intermediate activations \(A_{i}\) are column vectors, rather than row vectors, hence the chain rule is \(\frac{\partial L}{\partial L_{i}} = \frac{\partial L_{i+1}}{\partial L_{i}} ... \frac{\partial L}{\partial L_{n}}\) rather than \(\frac{\partial L}{\partial L_{i}} = \frac{\partial L}{\partial L_{n}} ... \frac{\partial L_{i+1}}{\partial L_{i}}\)

+
+
+
+

After computing the derivative of the loss concerning the output of each layer, we can easily obtain the derivative of the loss concerning the parameters, again using the chain rule:

+

\[ +\frac{\partial L_{full}}{\partial W_{i}} = \frac{\partial L_{i}}{\partial W_{i}} \frac{\partial L_{full}}{\partial L_{i}} +\]

+

And this is ultimately how the derivatives of the layers’ weights are computed using backpropagation! What does this concretely look like in a specific example? Below, we walk through a specific example of a simple 2-layer neural network on a regression task using an MSE loss function with 100-dimensional inputs and a 30-dimensional hidden layer:

+
+

Example of Backpropagation
+Suppose we have a two-layer neural network \[ +L_1 = W_1 A_{0} +\] \[ +A_1 = ReLU(L_1) +\] \[ +L_2 = W_2 A_{1} +\] \[ +A_2 = ReLU(L_2) +\] \[ +NN(x) = \mbox{Let } A_{0} = x \mbox{ then output } A_2 +\] where \(W_1 \in \mathbb{R}^{30 \times 100}\) and \(W_2 \in \mathbb{R}^{1 \times 30}\). Furthermore, suppose we use the MSE loss function: \[ +L(x, y) = (x-y)^2 +\] We wish to compute \[ +\frac{\partial L(NN(x), y)}{\partial W_i} \mbox{ for } i=1,2 +\] Note the following: \[ +\frac{\partial L(x, y)}{\partial x} = 2 \times (x-y) +\] \[ +\frac{\partial ReLU(x)}{\partial x} \delta = \left\{\begin{array}{lr} +0 & \text{for } x \leq 0 \\ +1 & \text{for } x > 0 \\ +\end{array}\right\} \odot \delta +\] \[ +\frac{\partial WA}{\partial A} \delta = W^T \delta +\] \[ +\frac{\partial WA}{\partial W} \delta = \delta A^T +\] Then we have \[ +\frac{\partial L(NN(x), y)}{\partial W_2} = \frac{\partial L_2}{\partial W_2} \frac{\partial A_2}{\partial L_2} \frac{\partial L(NN(x), y)}{\partial A_2} +\] \[ += (2(NN(x) - y) \odot ReLU'(L_2)) A_1^T +\] and \[ +\frac{\partial L(NN(x), y)}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial A_1}{\partial L_1} \frac{\partial L_2}{\partial A_1} \frac{\partial A_2}{\partial L_2} \frac{\partial L(NN(x), y)}{\partial A_2} +\] \[ += [ReLU'(L_1) \odot (W_2^T [2(NN(x) - y) \odot ReLU'(L_2)])] A_0^T +\]

+
+
+
+
+ +
+
+

Double-check your work by making sure that the shapes are correct!

+
    +
  • All Hadamard products (\(\odot\)) should operate on tensors of the same shape
  • +
  • All matrix multiplications should operate on matrices that share a common dimension (i.e., m by n, n by k)
  • +
  • All gradients concerning the weights should have the same shape as the weight matrices themselves
  • +
+
+
+
+

The entire backpropagation process can be complex, especially for very deep networks. Fortunately, machine learning frameworks like PyTorch support automatic differentiation, which performs backpropagation for us. In these frameworks, we simply need to specify the forward pass, and the derivatives will be automatically computed for us. Nevertheless, it is beneficial to understand the theoretical process that is happening under the hood in these machine-learning frameworks.
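To see what a framework does under the hood, the two-layer worked example above can be implemented directly in NumPy, with a finite-difference check standing in for the verification an autodiff engine gives us for free (the dimensions and random data mirror the worked example; the function names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((30, 100))
W2 = 0.1 * rng.standard_normal((1, 30))
x = rng.standard_normal((100, 1))   # one 100-dimensional input, as a column
y = np.array([[0.5]])

def loss_and_grads(W1, W2, x, y):
    relu = lambda v: np.maximum(v, 0.0)
    # Forward pass; L1, A1, L2 are cached for the backward pass
    L1 = W1 @ x; A1 = relu(L1)
    L2 = W2 @ A1; A2 = relu(L2)
    loss = float(np.sum((A2 - y) ** 2))
    # Backward pass, from the end to the front, as derived above
    dA2 = 2.0 * (A2 - y)
    dL2 = (L2 > 0) * dA2        # ReLU'(L2) times upstream gradient (Hadamard)
    dW2 = dL2 @ A1.T            # shape (1, 30), matching W2
    dA1 = W2.T @ dL2
    dL1 = (L1 > 0) * dA1
    dW1 = dL1 @ x.T             # shape (30, 100), matching W1
    return loss, dW1, dW2

loss, dW1, dW2 = loss_and_grads(W1, W2, x, y)
assert dW1.shape == W1.shape and dW2.shape == W2.shape  # shape sanity check

# Finite-difference check on one entry of dW2
eps = 1e-6
W2p = W2.copy(); W2p[0, 0] += eps
num = (loss_and_grads(W1, W2p, x, y)[0] - loss) / eps
assert abs(num - dW2[0, 0]) < 1e-3
```

The shape assertions implement the "double-check your work" advice above; the numerical check confirms the chain-rule derivation entry by entry.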

+
+
+
+ +
+
+

As seen above, intermediate activations \(A_i\) are reused in backpropagation. To improve performance, these activations are cached from the forward pass to avoid being recomputed. However, activations must be kept in memory between the forward and backward passes, leading to higher memory usage. If the network and batch size are large, this may lead to memory issues. Similarly, the derivatives with respect to each layer’s outputs are cached to avoid recomputation.

+
+
+
+
+

Exercise 7.1 (Neural Networks with Backpropagation and Gradient Descent)  

+
+
+ +
+
+

Unlock the math behind powerful neural networks! Deep learning might seem like magic, but it’s rooted in mathematical principles. In this chapter, you’ve broken down neural network notation, loss functions, and the powerful technique of backpropagation. Now, prepare to implement this theory with these Colab notebooks. Dive into the heart of how neural networks learn. You’ll see the math behind backpropagation and gradient descent, updating those weights step-by-step.

+

+
+
+
+
+
+
+

7.3 Differentiable Computation Graphs

+

In general, stochastic gradient descent using backpropagation can be performed on any computational graph that a user may define, provided that the operations of the computation are differentiable. As such, generic deep learning libraries like PyTorch and TensorFlow allow users to specify their computational process (i.e., neural networks) as a computational graph. Backpropagation is automatically performed via automatic differentiation when stochastic gradient descent is applied to these computational graphs. Framing AI training as an optimization problem on differentiable computation graphs is a general way to understand what is happening under the hood with deep learning systems.


The structure depicted in Figure fig-computational-graph showcases a segment of a differentiable computational graph. In this graph, the input ‘x’ is processed through a series of operations: it is first multiplied by a weight matrix ‘W’ (MatMul), then added to a bias ‘b’ (Add), and finally passed to an activation function, Rectified Linear Unit (ReLU). This sequence of operations gives us the output C. The graph’s differentiable nature means that each operation has a well-defined gradient. Automatic differentiation, as implemented in ML frameworks, leverages this property to efficiently compute the gradients of the loss with respect to each parameter in the network (e.g., ‘W’ and ‘b’).
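
As a rough sketch, the same graph can be written directly in PyTorch; autograd records the MatMul, Add, and ReLU operations and differentiates through them (the shapes here are arbitrary):

```python
import torch

# The figure's graph in code: x -> MatMul(W) -> Add(b) -> ReLU -> C.
# Every operation is differentiable, so autograd can compute the
# gradients with respect to W and b by walking the graph backward.
x = torch.randn(1, 4)
W = torch.randn(4, 3, requires_grad=True)
b = torch.randn(3, requires_grad=True)

C = torch.relu(x @ W + b)   # forward pass records the graph
C.sum().backward()          # reverse-mode automatic differentiation

print(W.grad.shape, b.grad.shape)  # torch.Size([4, 3]) torch.Size([3])
```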

Figure 7.3: Computational Graph. Credit: TensorFlow.

7.4 Training Data


To enable effective neural network training, the available data must be split into training, validation, and test sets. The training set is used to train the model parameters. The validation set evaluates the model during training to tune hyperparameters and prevent overfitting. The test set provides an unbiased final evaluation of the trained model’s performance.


Maintaining clear splits between train, validation, and test sets with representative data is crucial to properly training, tuning, and evaluating models to achieve the best real-world performance. To this end, we will learn about the common pitfalls or mistakes people make when creating these data splits.


Table tbl-training_splits compares the differences between training, validation, and test data splits:

Table 7.1: Comparing training, validation, and test data splits.

| Data Split     | Purpose                                                                         | Typical Size         |
|----------------|---------------------------------------------------------------------------------|----------------------|
| Training Set   | Train the model parameters                                                      | 60-80% of total data |
| Validation Set | Evaluate model during training to tune hyperparameters and prevent overfitting  | ∼20% of total data   |
| Test Set       | Provide unbiased evaluation of final trained model                              | ∼20% of total data   |

7.4.1 Dataset Splits


Training Set


The training set is used to train the model. It is the largest subset, typically 60-80% of the total data. The model sees and learns from the training data to make predictions. A sufficiently large and representative training set is required for the model to learn the underlying patterns effectively.


Validation Set


The validation set evaluates the model during training, usually after each epoch. Typically, 20% of the data is allocated for the validation set. The model does not learn or update its parameters based on the validation data. It is used to tune hyperparameters and make other tweaks to improve training. Monitoring metrics like loss and accuracy on the validation set prevents overfitting on just the training data.


Test Set


The test set acts as a completely unseen dataset that the model did not see during training. It is used to provide an unbiased evaluation of the final trained model. Typically, 20% of the data is reserved for testing. Maintaining a hold-out test set is vital for obtaining an accurate estimate of how the trained model would perform on real-world unseen data. Data leakage from the test set must be avoided at all costs.


The relative proportions of the training, validation, and test sets can vary based on data size and application. However, following the general guidelines for a 60/20/20 split is a good starting point. Careful data splitting ensures models are properly trained, tuned, and evaluated to achieve the best performance.
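
A minimal way to produce such a 60/20/20 split with only the standard library is to shuffle once and slice; `split_dataset` is an illustrative helper name, not a library function:

```python
import random

# Shuffle once, then slice into train/validation/test (60/20/20).
def split_dataset(data, train_frac=0.6, val_frac=0.2, seed=42):
    data = data[:]                         # avoid mutating the caller's list
    random.Random(seed).shuffle(data)      # shuffle before splitting
    n_train = int(len(data) * train_frac)
    n_val = int(len(data) * val_frac)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train_set, val_set, test_set = split_dataset(list(range(100)))
print(len(train_set), len(val_set), len(test_set))  # 60 20 20
```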


The video below explains how to properly split the dataset into training, validation, and testing sets, ensuring an optimal training process.


7.4.2 Common Pitfalls and Mistakes


Insufficient Training Data


Allocating too little data to the training set is a common mistake when splitting data that can severely impact model performance. If the training set is too small, the model will not have enough samples to effectively learn the true underlying patterns in the data. This leads to high variance and causes the model to fail to generalize well to new data.


For example, if you train an image classification model to recognize handwritten digits, providing only 10 or 20 images per digit class would be completely inadequate. The model would need more examples to capture the wide variances in writing styles, rotations, stroke widths, and other variations.


As a rule of thumb, the training set size should be at least hundreds or thousands of examples for most machine learning algorithms to work effectively. Due to the large number of parameters, the training set often needs to be in the tens or hundreds of thousands for deep neural networks, especially those using convolutional layers.


Insufficient training data typically manifests in symptoms like high error rates on validation/test sets, low model accuracy, high variance, and overfitting on small training set samples. Collecting more quality training data is the solution. Data augmentation techniques can also help virtually increase the size of training data for images, audio, etc.


Carefully factoring in the model complexity and problem difficulty when allocating training samples is important to ensure sufficient data is available for the model to learn successfully. Following guidelines on minimum training set sizes for different algorithms is also recommended. Sufficient training data is essential to the overall success of any machine learning application.


Consider Figure fig-over-under-fitting where we try to classify/split datapoints into two categories (here, by color): On the left, overfitting is depicted by a model that has learned the nuances in the training data too well (either the dataset was too small or we ran the model for too long), causing it to follow the noise along with the signal, as indicated by the line’s excessive curves. The right side shows underfitting, where the model’s simplicity prevents it from capturing the dataset’s underlying structure, resulting in a line that does not fit the data well. The center graph represents an ideal fit, where the model balances well between generalization and fitting, capturing the main trend of the data without being swayed by outliers. Although the model is not a perfect fit (it misses some points), we care more about its ability to recognize general patterns rather than idiosyncratic outliers.

Figure 7.4: Data fitting: overfitting, right fit, and underfitting. Credit: MathWorks.

Figure fig-fitting-time illustrates the process of fitting the data over time. When training, we search for the “sweet spot” between underfitting and overfitting. At first, when the model hasn’t had enough time to learn the patterns in the data, we are in the underfitting zone, indicated by high error rates on the validation set (recall that the model is trained on the training set, and we test its generalizability on the validation set, i.e., data it hasn’t seen before). At some point, we reach a minimum validation error, and ideally we want to stop training there. If we continue training, the model starts “memorizing” the training data, and the validation error climbs back up, since the model fails to generalize to data it hasn’t seen before.

Figure 7.5: Fitting the data over time. Credit: IBM.

The video below provides an overview of bias and variance and the relationship between the two concepts and model accuracy.


Data Leakage Between Sets


Data leakage refers to the unintentional transfer of information between the training, validation, and test sets. This violates the fundamental assumption that the splits are completely separated. Data leakage leads to seriously compromised evaluation results and inflated performance metrics.


A common way data leakage occurs is if some samples from the test set are inadvertently included in the training data. When evaluated on the test set, the model has already seen some of the data, which gives overly optimistic scores. For example, if 2% of the test data leaks into the training set of a binary classifier, it can result in an accuracy boost of up to 20%!


If the data splits are not done carefully, more subtle forms of leakage can happen. If the splits are not properly randomized and shuffled, samples close to each other in the dataset may end up across different splits. This creates information bleed through based on proximity in the dataset. Time series data is especially vulnerable unless special cross-validation techniques are used.


Preventing data leakage requires creating solid separation between splits—no sample should exist in more than one split. Shuffling and randomized splitting help create robust divisions. Cross-validation techniques can be used for more rigorous evaluation. Detecting leakage is difficult, but telltale signs include models doing way better on test vs. validation data.
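
One cheap safeguard is an explicit disjointness check over sample identifiers before training begins. This sketch uses a hypothetical helper name and raises if any id appears in two splits:

```python
# Guard against leakage: verify splits share no sample ids.
# (assert_disjoint is an illustrative helper, not a library function.)
def assert_disjoint(*splits):
    seen = set()
    for split in splits:
        ids = set(split)
        overlap = seen & ids
        if overlap:
            raise ValueError(f"samples leaked across splits: {overlap}")
        seen |= ids

assert_disjoint([0, 1, 2], [3, 4], [5, 6])   # passes silently
```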


Data leakage severely compromises the validity of the evaluation because the model has already partially seen the test data. No amount of tuning or complex architectures can substitute for clean data splits. It is better to be conservative and create complete separation between splits to avoid this fundamental mistake in machine learning pipelines.


Small or Unrepresentative Validation Set


The validation set evaluates the model during training and guides hyperparameter tuning. It must be large enough and representative of the real data distribution to provide reliable, stable evaluations; a small or skewed validation set makes model selection and tuning more difficult.


For example, if the validation set only contains 100 samples, the metrics calculated will have a high variance. Due to noise, the accuracy may fluctuate up to 5-10% between epochs. This makes it difficult to know if a drop in validation accuracy is due to overfitting or natural variance. With a larger validation set, say 1000 samples, the metrics will be much more stable.
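
This fluctuation can be quantified with the binomial standard error of an accuracy estimate, sqrt(p(1-p)/n). A quick calculation shows why 100 samples are noisy while 1000 are much more stable:

```python
import math

# Standard error of an accuracy estimate from n validation samples,
# assuming true accuracy p: sqrt(p * (1 - p) / n).
def accuracy_std_error(p, n):
    return math.sqrt(p * (1 - p) / n)

print(round(accuracy_std_error(0.9, 100), 3))   # 0.03
print(round(accuracy_std_error(0.9, 1000), 3))  # 0.009
```

At 100 samples, swings of two standard errors (about 6%) are expected from noise alone.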


Additionally, if the validation set is not representative, perhaps missing certain subclasses, the estimated skill of the model may be inflated. This could lead to poor hyperparameter choices or premature training stops. Models selected based on such biased validation sets do not generalize well to real data.


A good rule of thumb is that the validation set size should be at least several hundred samples and up to 10-20% of the training set. The splits should also be stratified, especially if working with imbalanced datasets. A larger validation set representing the original data characteristics is essential for proper model selection and tuning.


The validation set should also not be so large that it leaves insufficient samples for training. Overall, the validation set is a critical piece of the data-splitting process, and care should be taken to avoid the pitfalls of small, inadequate samples that negatively impact model development.


Reusing the Test Set Multiple Times


The test set is designed to provide an unbiased evaluation of the fully trained model only once at the end of the model development process. Reusing the test set multiple times during development for model evaluation, hyperparameter tuning, model selection, etc., can result in overfitting on the test data.


If the test set is reused as part of the validation process, the model may start to see and learn from the test samples. This, coupled with intentionally or unintentionally optimizing model performance on the test set, can artificially inflate metrics like accuracy.


For example, suppose the test set is used repeatedly for model selection among 5 architectures. In that case, the model may achieve 99% test accuracy by memorizing the samples rather than learning generalizable patterns. However, when deployed in the real world, accuracy on new data could drop by 60%.


The best practice is to interact with the test set only once at the end to report unbiased metrics on how the final tuned model would perform in the real world. While developing the model, the validation set should be used for all parameter tuning, model selection, early stopping, etc.


Maintaining the complete separation of training/validation from the test set is essential to obtain accurate estimates of model performance. Even minor deviations from a single use of the test set could positively bias results and metrics, providing an overly optimistic view of real-world efficacy.


Same Data Splits Across Experiments


When comparing different machine learning models or experimenting with various architectures and hyperparameters, using the same data splits for training, validation, and testing across the different experiments can introduce bias and invalidate the comparisons.


If the same splits are reused, the evaluation results become biased and no longer accurately measure which model performs better. For example, a certain random data split may favor model A over model B irrespective of the algorithms. Reusing this split will then bias the comparison towards model A.


Instead, the data splits should be randomized or shuffled for each experimental iteration. This ensures that randomness in the sampling of the splits does not confer an unfair advantage to any model.


With different splits per experiment, the evaluation becomes more robust. Each model is tested on a wide range of test sets drawn randomly from the overall population, smoothing out variation and removing correlation between results.


Proper practice is to set a random seed before splitting the data for each experiment. Splitting should occur after shuffling/resampling as part of the experimental pipeline. Carrying out comparisons on the same splits violates the i.i.d (independent and identically distributed) assumption required for statistical validity.
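
A sketch of this practice: derive a fresh split from a new seed on every experimental run, so no comparison is tied to one lucky partition (the helper name is illustrative):

```python
import random

# One fresh split per experiment: the seed changes each run.
def make_split(data, seed, train_frac=0.8):
    data = data[:]
    random.Random(seed).shuffle(data)
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]

for seed in range(3):                      # one experiment per seed
    train, test = make_split(list(range(10)), seed)
    print(seed, sorted(test))
```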


Unique splits are essential for fair model comparisons. Though more compute-intensive, randomized allocation per experiment removes sampling bias and enables valid benchmarking. This highlights the true differences in model performance irrespective of a particular split’s characteristics.


Information Leakage Between Sets


Information leakage between the training, validation, and test sets occurs when information from one set inadvertently bleeds into another. This could happen due to flaws in the data-splitting process, which violates the assumption that the sets are mutually exclusive.


For example, consider a dataset sorted chronologically. If a simple random split is performed, samples close to each other in the dataset may end up in different splits. Models could then learn from ‘future’ data if test samples are leaked into the training set.


Similarly, distribution biases may persist across sets if the splits are not properly shuffled. The training set may contain outliers that are absent from the test set, compromising generalization. Issues like class imbalance may also get amplified if splitting is not stratified.


Another case is when datasets have linked, inherently connected samples, such as graphs, networks, or time series data. Naive splitting may isolate connected nodes or time steps into different sets. Models can make invalid assumptions based on partial information.


Preventing information leakage requires awareness of the dataset’s structure and relationships between samples. Shuffling, stratification, and grouped splitting of related samples can help mitigate leakage. Proper cross-validation procedures should be followed, mindful of temporal or sample proximity.


Subtle leakage of information between sets undermines model evaluation and training. It creates misleading results on model effectiveness. Data splitting procedures should account for sample relationships and distribution differences to ensure mutual exclusivity between sets.


Failing to Stratify Splits


When splitting data into training, validation, and test sets, failing to stratify the splits can result in an uneven representation of the target classes across the splits and introduce sampling bias. This is especially problematic for imbalanced datasets.


Stratified splitting involves sampling data points such that the proportion of output classes is approximately preserved in each split. For example, if performing a 70/30 train-test split on a dataset with 60% negative and 40% positive samples, stratification ensures ~60% negative and ~40% positive examples in both training and test sets.


Without stratification, random chance could result in the training split having 70% positive samples while the test has 30% positive samples. The model trained on this skewed training distribution will not generalize well. Class imbalance also compromises model metrics like accuracy.


Stratification works best when done using labels, though proxies like clustering can be used for unsupervised learning. It becomes essential for highly skewed datasets with rare classes that could easily be omitted from splits.


Libraries like Scikit-Learn have stratified splitting methods built into them. Failing to use them could inadvertently introduce sampling bias and hurt model performance on minority groups. After performing the splits, the overall class balance should be examined to ensure even representation across the splits.
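
For instance, scikit-learn's `train_test_split` accepts a `stratify` argument; on a 60/40 imbalanced label set, both splits then preserve the class ratio (data here is synthetic):

```python
from collections import Counter
from sklearn.model_selection import train_test_split

X = list(range(100))
y = [0] * 60 + [1] * 40                      # 60/40 class imbalance

# stratify=y preserves the 60/40 ratio in both the train and test splits.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)
print(Counter(y_tr), Counter(y_te))
```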


Stratification provides a balanced dataset for both model training and evaluation. Though simple random splitting is easier, being mindful of stratification needs, especially for real-world imbalanced data, results in more robust model development and evaluation.


Ignoring Time Series Dependencies


Time series data has an inherent temporal structure with observations depending on past context. Naively splitting time series data into train and test sets without accounting for this dependency leads to data leakage and lookahead bias.


For example, simply splitting a time series into the first 70% of training and the last 30% as test data will contaminate the training data with future data points. The model can use this information to “peek” ahead during training.


This results in an overly optimistic evaluation of the model’s performance. The model may appear to forecast the future accurately but has actually implicitly learned based on future data, which does not translate to real-world performance.


Proper time series cross-validation techniques, such as forward chaining, should be used to preserve order and dependency. The test set should only contain data points from a future time window that the model was not exposed to for training.
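
A forward-chaining split can be sketched in a few lines (scikit-learn offers a production version as `TimeSeriesSplit`); note that every test window lies strictly after its training window:

```python
# Forward chaining: each fold trains on an expanding window of the past
# and tests on the block immediately after it, so no future data leaks
# into training.
def forward_chain_splits(n, n_folds=3, test_size=2):
    for k in range(1, n_folds + 1):
        train_end = n - (n_folds - k + 1) * test_size
        yield (list(range(train_end)),
               list(range(train_end, train_end + test_size)))

for train_idx, test_idx in forward_chain_splits(10):
    print(train_idx, test_idx)
# [0, 1, 2, 3] [4, 5]
# [0, 1, 2, 3, 4, 5] [6, 7]
# [0, 1, 2, 3, 4, 5, 6, 7] [8, 9]
```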


Failing to account for temporal relationships leads to invalid causality assumptions. If the training data contains future points, the model never genuinely learns to extrapolate its forecasts into the unseen future.


Maintaining the temporal flow of events and avoiding lookahead bias is key to properly training and testing time series models. This ensures they can truly predict future patterns and not just memorize past training data.


No Unseen Data for Final Evaluation


A common mistake when splitting data is failing to set aside some portion of the data just for the final evaluation of the completed model. All of the data is used for training, validation, and test sets during development.


This leaves no unseen data to get an unbiased estimate of how the final tuned model would perform in the real world. The metrics on the test set used during development may only partially reflect actual model skills.


For example, choices like early stopping and hyperparameter tuning are often optimized based on test set performance. This couples the model to the test data. An unseen dataset is needed to break this coupling and get true real-world metrics.


Best practice is to reserve a portion, such as 20-30% of the full dataset, solely for final model evaluation. This data should not be used for validation, tuning, or model selection during development.


Saving some unseen data allows for evaluating the completely trained model as a black box on real-world data. This provides reliable metrics to decide whether the model is ready for production deployment.


Failing to keep an unseen hold-out set for final validation risks optimizing results and overlooking potential failures before model release. Having some fresh data provides a final sanity check on real-world efficacy.


Overoptimizing on the Validation Set


The validation set is meant to guide the model training process, not serve as additional training data. Overoptimizing the validation set to maximize performance metrics treats it more like a secondary training set, leading to inflated metrics and poor generalization.


For example, techniques like extensively tuning hyperparameters or adding data augmentations targeted to boost validation accuracy can cause the model to fit too closely to the validation data. The model may achieve 99% validation accuracy but only 55% test accuracy.


Similarly, reusing the validation set for early stopping can also optimize the model specifically for that data. Stopping at the best validation performance overfits noise and fluctuations caused by the small validation size.


The validation set serves as a proxy to tune and select models. However, the goal remains maximizing real-world data performance, not the validation set. Minimizing the loss or error on validation data does not automatically translate to good generalization.


A good approach is to keep the use of the validation set minimal—hyperparameters can be tuned coarsely first on training data, for example. The validation set guides the training but should not influence or alter the model itself. It is a diagnostic, not an optimization tool.


When assessing performance on the validation set, care should be taken not to overfit. Tradeoffs are needed to build models that perform well on the overall population and are not overly tuned to the validation samples.


7.5 Optimization Algorithms


Stochastic gradient descent (SGD) is a simple yet powerful optimization algorithm for training machine learning models. It works by estimating the gradient of the loss function concerning the model parameters using a single training example and then updating the parameters in the direction that reduces the loss.
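
A minimal sketch of this update rule on a one-dimensional least-squares problem, where every sample obeys y = 2x and the loss is therefore minimized at w = 2:

```python
import random

# Vanilla SGD: one example per step, update against its gradient.
def sgd(lr=0.1, steps=200, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        x = rng.uniform(-1, 1)              # a single training example
        grad = 2 * (w * x - 2 * x) * x      # d/dw of (w*x - y)^2 with y = 2x
        w -= lr * grad                      # step in the loss-reducing direction
    return w

print(round(sgd(), 2))  # 2.0
```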


While conceptually straightforward, SGD has a few shortcomings. First, choosing a proper learning rate can be difficult: too small, and progress is very slow; too large, and parameters may oscillate and fail to converge. Second, SGD treats all parameters equally and independently, which may not be ideal in all cases. Finally, vanilla SGD uses only first-order gradient information, which results in slow progress on ill-conditioned problems.


7.5.1 Optimizations


Over the years, various optimizations have been proposed to accelerate and improve vanilla SGD. Ruder (2016) gives an excellent overview of the different optimizers. Briefly, several commonly used SGD optimization techniques include:

Ruder, Sebastian. 2016. “An Overview of Gradient Descent Optimization Algorithms.” ArXiv Preprint abs/1609.04747. https://arxiv.org/abs/1609.04747.

Momentum: Accumulates a velocity vector in directions of persistent gradient across iterations. This helps accelerate progress by dampening oscillations and maintains progress in consistent directions.
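
The momentum update itself is only two lines; this sketch uses the conventional names mu for the momentum coefficient and lr for the learning rate (names are conventional, not tied to any framework):

```python
# Momentum update: the velocity v accumulates gradients in persistent
# directions, damping oscillation across iterations.
def momentum_step(w, v, grad, lr=0.01, mu=0.9):
    v = mu * v + grad      # accumulate velocity
    w = w - lr * v         # move parameters along the damped velocity
    return w, v

w, v = 1.0, 0.0
w, v = momentum_step(w, v, grad=2.0)
print(w, v)  # 0.98 2.0
```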


Nesterov Accelerated Gradient (NAG): A variant of momentum that computes gradients at the “look ahead” rather than the current parameter position. This anticipatory update prevents overshooting while the momentum maintains the accelerated progress.


RMSProp: Divides the learning rate by an exponentially decaying average of squared gradients. This has a similar normalizing effect as Adagrad but does not accumulate the gradients over time, avoiding a rapid decay of learning rates (Hinton 2017).

Hinton, Geoffrey. 2017. “Overview of Minibatch Gradient Descent.” University of Toronto; University Lecture.
Duchi, John C., Elad Hazan, and Yoram Singer. 2010. “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” In COLT 2010 - the 23rd Conference on Learning Theory, Haifa, Israel, June 27-29, 2010, edited by Adam Tauman Kalai and Mehryar Mohri, 257–69. Omnipress. http://colt2010.haifa.il.ibm.com/papers/COLT2010proceedings.pdf#page=265.

Adagrad: An adaptive learning rate algorithm that maintains a per-parameter learning rate scaled down proportionate to each parameter’s historical sum of gradients. This helps eliminate the need to tune learning rates (Duchi, Hazan, and Singer 2010) manually.


Adadelta: A modification to Adagrad restricts the window of accumulated past gradients, thus reducing the aggressive decay of learning rates (Zeiler 2012).

Zeiler, Matthew D. 2012. “ADADELTA: An Adaptive Learning Rate Method.” ArXiv Preprint abs/1212.5701. https://arxiv.org/abs/1212.5701.
Kingma, Diederik P., and Jimmy Ba. 2015. “Adam: A Method for Stochastic Optimization.” In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, edited by Yoshua Bengio and Yann LeCun. http://arxiv.org/abs/1412.6980.

Adam: Combines momentum and RMSProp, adapting the learning rate based on an exponentially decaying average of recent gradient magnitudes. It displays very fast initial progress and automatically tunes step sizes (Kingma and Ba 2015).


Of these methods, Adam is widely considered the go-to optimization algorithm for many deep learning tasks. It consistently outperforms vanilla SGD in terms of training speed and performance. Other optimizers may be better suited in some cases, particularly for simpler models.
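
In practice, switching between these optimizers is a one-line change in frameworks like PyTorch; the sketch below performs a single Adam update on a toy linear model (shapes and learning rates are arbitrary illustrations):

```python
import torch

# A toy model; swapping optimizers only changes the construction line.
model = torch.nn.Linear(4, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()   # compute gradients via autograd
opt.step()        # apply one optimizer update
opt.zero_grad()   # clear gradients for the next step
```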


7.5.2 Tradeoffs


Here is a pros and cons table for some of the main optimization algorithms for neural network training:

| Algorithm | Pros | Cons |
|---|---|---|
| Momentum | Faster convergence due to acceleration along gradients; less oscillation than vanilla SGD | Requires tuning of momentum parameter |
| Nesterov Accelerated Gradient (NAG) | Faster than standard momentum in some cases; anticipatory updates prevent overshooting | More complex to understand intuitively |
| Adagrad | Eliminates need to tune learning rates manually; performs well on sparse gradients | Learning rate may decay too quickly on dense gradients |
| Adadelta | Less aggressive learning rate decay than Adagrad | Still sensitive to initial learning rate value |
| RMSProp | Automatically adjusts learning rates; works well in practice | No major downsides |
| Adam | Combination of momentum and adaptive learning rates; efficient and fast convergence | Slightly worse generalization performance in some cases |
| AMSGrad | Improvement to Adam addressing generalization issue | Not as extensively used/tested as Adam |

7.5.3 Benchmarking Algorithms


No single method is best for all problem types. This means we need comprehensive benchmarking to identify the most effective optimizer for specific datasets and models. The performance of algorithms like Adam, RMSProp, and Momentum varies due to batch size, learning rate schedules, model architecture, data distribution, and regularization. These variations underline the importance of evaluating each optimizer under diverse conditions.


Adam, for example, often excels in computer vision tasks, while RMSProp may show better generalization in certain natural language processing tasks. Momentum’s strength lies in its acceleration in scenarios with consistent gradient directions, whereas Adagrad’s adaptive learning rates are more suited for sparse gradient problems.


This wide array of interactions among optimizers demonstrates the challenge of declaring a single, universally superior algorithm. Each optimizer has unique strengths, making it crucial to evaluate various methods to discover their optimal application conditions empirically.


A comprehensive benchmarking approach should assess the speed of convergence and factors like generalization error, stability, hyperparameter sensitivity, and computational efficiency, among others. This entails monitoring training and validation learning curves across multiple runs and comparing optimizers on various datasets and models to understand their strengths and weaknesses.


AlgoPerf, introduced by Dahl et al. (2023), addresses the need for a robust benchmarking system. This platform evaluates optimizer performance using criteria such as training loss curves, generalization error, sensitivity to hyperparameters, and computational efficiency. AlgoPerf tests various optimization methods, including Adam, LAMB, and Adafactor, across different model types like CNNs and RNNs/LSTMs on established datasets. It utilizes containerization and automatic metric collection to minimize inconsistencies and allows for controlled experiments across thousands of configurations, providing a reliable basis for comparing optimizers.

Dahl, George E., Frank Schneider, Zachary Nado, Naman Agarwal, Chandramouli Shama Sastry, Philipp Hennig, Sourabh Medapati, et al. 2023. “Benchmarking Neural Network Training Algorithms.” ArXiv Preprint abs/2306.06660. https://arxiv.org/abs/2306.06660.

The insights gained from AlgoPerf and similar benchmarks are invaluable for guiding optimizers’ optimal choice or tuning. By enabling reproducible evaluations, these benchmarks contribute to a deeper understanding of each optimizer’s performance, paving the way for future innovations and accelerated progress in the field.


7.6 Hyperparameter Tuning


Hyperparameters are important settings in machine learning models that greatly impact how well your models ultimately perform. Unlike other model parameters that are learned during training, hyperparameters are specified by the data scientists or machine learning engineers before training the model.


Choosing the right hyperparameter values enables your models to learn patterns from data effectively. Some examples of key hyperparameters across ML algorithms include:

• Neural networks: Learning rate, batch size, number of hidden units, activation functions
• Support vector machines: Regularization strength, kernel type and parameters
• Random forests: Number of trees, tree depth
• K-means: Number of clusters

The problem is that there are no reliable rules of thumb for choosing optimal hyperparameter configurations—you typically have to try out different values and evaluate performance. This process is called hyperparameter tuning.
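
In miniature, tuning is just a loop over candidate values scored on validation data. In this sketch, `evaluate` is a hypothetical stand-in for "train a model with this learning rate, return validation accuracy":

```python
# Tuning in miniature: score each candidate and keep the best.
def evaluate(lr):
    # Stand-in for training + validation scoring; pretends 0.1 is best.
    return -(lr - 0.1) ** 2

candidates = [0.001, 0.01, 0.1, 1.0]
best = max(candidates, key=evaluate)
print(best)  # 0.1
```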


In the early years of modern deep learning, researchers were still grappling with unstable and slow convergence issues. Common pain points included training losses fluctuating wildly, gradients exploding or vanishing, and extensive trial-and-error needed to train networks reliably. As a result, an early focal point was using hyperparameters to control model optimization. For instance, seminal techniques like batch normalization allowed faster model convergence by tuning aspects of internal covariate shift. Adaptive learning rate methods also mitigated the need for extensive manual schedules. These addressed optimization issues during training, such as uncontrolled gradient divergence. Carefully adapted learning rates are also the primary control factor for achieving rapid and stable convergence even today.


As computational capacity expanded exponentially in subsequent years, much larger models could be trained without falling prey to pure numerical optimization issues. The focus shifted towards generalization - though efficient convergence was a core prerequisite. State-of-the-art techniques like Transformers brought in parameters in billions. At such sizes, hyperparameters around capacity, regularization, ensembling, etc., took center stage for tuning rather than only raw convergence metrics.


The lesson is that understanding the acceleration and stability of the optimization process itself constitutes the groundwork. Initialization schemes, batch sizes, weight decays, and other training hyperparameters remain indispensable today. Mastering fast and flawless convergence allows practitioners to expand their focus on emerging needs around tuning for metrics like accuracy, robustness, and efficiency at scale.


7.6.1 Search Algorithms


When it comes to the critical process of hyperparameter tuning, there are several sophisticated algorithms that machine learning practitioners rely on to search through the vast space of possible model configurations systematically. Some of the most prominent hyperparameter search algorithms include:

  • Grid Search: The most basic search method, where you manually define a grid of values to check for each hyperparameter. For example, checking learning rates = [0.01, 0.1, 1] and batch sizes = [32, 64, 128]. The key advantage is simplicity, but exploring all combinations leads to exponential search space explosion. Best for fine-tuning a few parameters.

  • Random Search: Instead of a grid, you define a random distribution per hyperparameter to sample values from during the search. This method is more efficient at searching a vast hyperparameter space. However, it is still somewhat arbitrary compared to more adaptive methods.

  • Bayesian Optimization: This is an advanced probabilistic approach for adaptive exploration based on a surrogate function to model performance over iterations. It is simple and efficient—it finds highly optimized hyperparameters in fewer evaluation steps. However, it requires more investment in setup (Snoek, Larochelle, and Adams 2012).

  • Evolutionary Algorithms: These algorithms mimic natural selection principles. They generate populations of hyperparameter combinations and evolve them over time based on performance. These algorithms offer robust search capabilities better suited for complex response surfaces. However, many iterations are required for reasonable convergence.

  • Neural Architecture Search: An approach to designing well-performing architectures for neural networks. Traditionally, NAS approaches use some form of reinforcement learning to propose neural network architectures, which are then repeatedly trained and evaluated (Zoph and Le 2017).
Snoek, Jasper, Hugo Larochelle, and Ryan P. Adams. 2012. “Practical Bayesian Optimization of Machine Learning Algorithms.” In Advances in Neural Information Processing Systems 25, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 2960–68. https://proceedings.neurips.cc/paper/2012/hash/05311655a15b75fab86956663e1819cd-Abstract.html.
Zoph, Barret, and Quoc V. Le. 2017. “Neural Architecture Search with Reinforcement Learning.” In 5th International Conference on Learning Representations (ICLR). https://arxiv.org/abs/1611.01578.
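To make the contrast between grid and random search concrete, here is a minimal sketch. The objective function below is a made-up stand-in for an actual train-and-evaluate run, and all value ranges are illustrative, not prescriptions:

```python
import itertools
import random

# Hypothetical objective: pretend this returns validation accuracy
# after training a model with the given hyperparameters.
def validation_accuracy(lr, batch_size):
    return 1.0 - abs(lr - 0.1) - abs(batch_size - 64) / 1000

# Grid search: exhaustively evaluate every combination of listed values.
grid = {"lr": [0.01, 0.1, 1.0], "batch_size": [32, 64, 128]}
best_grid = max(
    itertools.product(grid["lr"], grid["batch_size"]),
    key=lambda cfg: validation_accuracy(*cfg),
)

# Random search: sample configurations from per-parameter distributions
# (log-uniform for the learning rate, a choice set for the batch size).
random.seed(0)
samples = [
    (10 ** random.uniform(-3, 0), random.choice([32, 64, 128]))
    for _ in range(20)
]
best_random = max(samples, key=lambda cfg: validation_accuracy(*cfg))

print("grid best:", best_grid)
print("random best:", best_random)
```

Note how the grid evaluates 9 fixed points while random search draws 20 samples from continuous ranges; with more hyperparameters, the grid's combinatorial growth is what makes random and adaptive methods attractive.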

7.6.2 System Implications


Hyperparameter tuning can significantly impact time to convergence during model training, directly affecting overall runtime. The right values for key training hyperparameters are crucial for efficient model convergence. For example, the learning rate hyperparameter controls the step size during gradient descent optimization. A properly tuned learning rate schedule ensures the optimization algorithm converges quickly towards a good minimum. Too small a learning rate leads to painfully slow convergence, while too large a value causes the loss to fluctuate wildly. Proper tuning ensures rapid movement towards optimal weights and biases.
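The learning rate effect can be seen even on a toy one-dimensional loss. The sketch below minimizes \(f(w) = w^2\) (whose gradient is \(2w\)) as a stand-in for a real training loss; the specific learning rate values are illustrative only:

```python
def gradient_descent(lr, steps=50, w0=5.0):
    # Minimize f(w) = w^2, whose gradient is 2w.
    w = w0
    for _ in range(steps):
        w -= lr * 2 * w
    return abs(w)

# Too small: barely moves; well-tuned: converges; too large: diverges.
slow = gradient_descent(lr=0.001)    # still far from the minimum at 0
good = gradient_descent(lr=0.1)      # very close to 0
diverged = gradient_descent(lr=1.5)  # |w| grows without bound
print(slow, good, diverged)
```

Each step multiplies \(w\) by \((1 - 2 \cdot lr)\), so any learning rate above 1.0 makes the iterate oscillate with growing magnitude — the one-dimensional analogue of a wildly fluctuating training loss.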


Similarly, the batch size for stochastic gradient descent impacts convergence stability. The right batch size smooths out fluctuations in parameter updates to approach the minimum faster. Too small a batch size leads to noisy convergence, while too large a batch size can hurt generalization and slows convergence because parameters are updated less frequently. Tuning hyperparameters for faster convergence and reduced training duration has direct implications on cost and resource requirements for scaling machine learning systems:
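The noise-versus-batch-size trade-off can be illustrated directly. The sketch below uses synthetic per-example "gradients" (made-up Gaussian data, not a real model) and measures how the variance of the minibatch gradient estimate shrinks as the batch grows:

```python
import random

random.seed(0)
# Synthetic per-example "gradients": true mean 1.0 plus unit Gaussian noise.
per_example_grads = [1.0 + random.gauss(0, 1) for _ in range(10_000)]

def minibatch_grad_std(batch_size, trials=500):
    # Standard deviation of the minibatch gradient estimate across trials.
    estimates = []
    for _ in range(trials):
        batch = random.sample(per_example_grads, batch_size)
        estimates.append(sum(batch) / batch_size)
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5

small, large = minibatch_grad_std(8), minibatch_grad_std(512)
print(small, large)  # the larger batch gives a much less noisy estimate
```

The estimate's standard deviation falls roughly as \(1/\sqrt{\text{batch size}}\), which is why tiny batches produce jittery updates while very large batches buy diminishing smoothness at the cost of fewer updates per epoch.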

  • Lower computational costs: Shorter time to convergence means lower computational costs for training models. ML training often leverages large cloud computing instances like GPU and TPU clusters that incur heavy hourly charges. Minimizing training time directly reduces this resource rental cost, which tends to dominate ML budgets for organizations. Quicker iteration also lets data scientists experiment more freely within the same budget.

  • Reduced training time: Reduced training time unlocks opportunities to train more models using the same computational budget. Optimized hyperparameters stretch available resources further, allowing businesses to develop and experiment with more models under resource constraints to maximize performance.

  • Resource efficiency: Quicker training allows allocating smaller compute instances in the cloud since models require access to the resources for a shorter duration. For example, a one-hour training job allows using less powerful GPU instances compared to multi-hour training, which requires sustained compute access over longer intervals. This achieves cost savings, especially for large workloads.

There are other benefits as well. For instance, faster convergence reduces pressure on ML engineering teams regarding provisioning training resources. Simple model retraining routines can use lower-powered resources instead of requesting access to high-priority queues for constrained production-grade GPU clusters, freeing up deployment resources for other applications.


7.6.3 Auto Tuners


Given its importance, there is a wide array of commercial offerings to help with hyperparameter tuning. We will briefly touch on two examples: one focused on cloud-scale ML and another on optimizing machine learning models targeting microcontrollers.


BigML


Several commercial auto-tuning platforms are available to address this problem. One solution is Google’s Vertex AI Cloud, which has extensive integrated support for state-of-the-art tuning techniques.


One of the most salient capabilities of Google’s Vertex AI-managed machine learning platform is efficient, integrated hyperparameter tuning for model development. Successfully training performant ML models requires identifying optimal configurations for a set of external hyperparameters that dictate model behavior, posing a challenging high-dimensional search problem. Vertex AI aims to simplify this through Automated Machine Learning (AutoML) tooling.


Specifically, data scientists can leverage Vertex AI’s hyperparameter tuning engines by providing a labeled dataset and choosing a model type such as a Neural Network or Random Forest classifier. Vertex launches a Hyperparameter Search job transparently on the backend, fully handling resource provisioning, model training, metric tracking, and result analysis automatically using advanced optimization algorithms.


Under the hood, Vertex AutoML employs various search strategies to intelligently explore the most promising hyperparameter configurations based on previous evaluation results. Compared to standard Grid Search or Random Search methods, Bayesian Optimization offers superior sample efficiency, requiring fewer training iterations to arrive at optimized model quality. For more complex neural architecture search spaces, Vertex AutoML utilizes Population-Based Training approaches, which evolve candidate solutions over time analogous to natural selection principles.


Vertex AI aims to democratize state-of-the-art hyperparameter search techniques at the cloud scale for all ML developers, abstracting away the underlying orchestration and execution complexity. Users focus solely on their dataset, model requirements, and accuracy goals, while Vertex manages the tuning cycle, resource allocation, model training, accuracy tracking, and artifact storage under the hood. The result is getting deployment-ready, optimized ML models faster for the target problem.


TinyML


Edge Impulse’s Efficient On-device Neural Network Tuner (EON Tuner) is an automated hyperparameter optimization tool designed to develop microcontroller machine learning models. It streamlines the model development process by automatically finding the best neural network configuration for efficient and accurate deployment on resource-constrained devices.


The key functionality of the EON Tuner is as follows. First, developers define the model hyperparameters, such as number of layers, nodes per layer, activation functions, and learning rate annealing schedule. These parameters constitute the search space that will be optimized. Next, the target microcontroller platform is selected, providing embedded hardware constraints. The user can also specify optimization objectives, such as minimizing memory footprint, lowering latency, reducing power consumption, or maximizing accuracy.


With the defined search space and optimization goals, the EON Tuner leverages Bayesian hyperparameter optimization to explore possible configurations intelligently. Each prospective configuration is automatically implemented as a full model specification, trained, and evaluated for quality metrics. The continual process balances exploration and exploitation to arrive at optimized settings tailored to the developer’s chosen chip architecture and performance requirements.


The EON Tuner frees machine learning engineers from the demanding, iterative process of hand-tuning models by automatically optimizing them for embedded deployment. The tool integrates seamlessly into the Edge Impulse workflow, taking models from concept to efficiently optimized implementations on microcontrollers. The expertise encapsulated in the EON Tuner regarding ML model optimization for microcontrollers ensures that beginner and experienced developers alike can rapidly iterate to models fitting their project needs.


Exercise 7.2 (Hyperparameter Tuning)  


Get ready to unlock the secrets of hyperparameter tuning and take your PyTorch models to the next level! Hyperparameters are like the hidden dials and knobs that control your model’s learning superpowers. In this Colab notebook, you’ll team up with Ray Tune to find those perfect hyperparameter combinations. Learn how to define what values to search through, set up your training code for optimization, and let Ray Tune do the heavy lifting. By the end, you’ll be a hyperparameter tuning pro!



The video below explains the systematic organization of the hyperparameter tuning process.


7.7 Regularization


Regularization is a critical technique for improving the performance and generalizability of machine learning models in applied settings. It refers to mathematically constraining or penalizing model complexity to avoid overfitting the training data. Without regularization, complex ML models are prone to overfitting the dataset and memorizing peculiarities and noise in the training set rather than learning meaningful patterns. They may achieve high training accuracy but perform poorly when evaluating new unseen inputs.


Regularization helps address this problem by placing constraints that favor simpler, more generalizable models that don’t latch onto sampling errors. Techniques like L1/L2 regularization directly penalize large parameter values during training, forcing the model to use the smallest parameters that can adequately explain the signal. Early stopping rules halt training when validation set performance stops improving - before the model starts overfitting.


Appropriate regularization is crucial when deploying models to new user populations and environments where distribution shifts are likely. For example, an unregularized fraud detection model trained at a bank may work initially but accrue technical debt over time as new fraud patterns emerge.


Regularizing complex neural networks also offers computational advantages—smaller models require less data augmentation, compute power, and data storage. Regularization also allows for more efficient AI systems, where accuracy, robustness, and resource management are thoughtfully balanced against training set limitations.


Several powerful regularization techniques are commonly used to improve model generalization. Architecting the optimal strategy requires understanding how each method affects model learning and complexity.


7.7.1 L1 and L2


Two of the most widely used regularization forms are L1 and L2 regularization. Both penalize model complexity by adding an extra term to the cost function optimized during training. This term grows larger as model parameters increase.


L2 regularization, also known as ridge regression, adds the sum of squared magnitudes of all parameters multiplied by a coefficient α. This quadratic penalty curtails extreme parameter values more aggressively than L1 techniques. Implementation requires only changing the cost function and tuning α.


\[R_{L2}(\Theta) = \alpha \sum_{i=1}^{n}\theta_{i}^2\]


Where:

  • \(R_{L2}(\Theta)\) - The L2 regularization term that is added to the cost function
  • \(\alpha\) - The L2 regularization hyperparameter that controls the strength of regularization
  • \(\theta_{i}\) - The i-th model parameter
  • \(n\) - The number of parameters in the model
  • \(\theta_{i}^2\) - The square of each parameter

And the full L2 regularized cost function is:


\[J(\theta) = L(\theta) + R_{L2}(\Theta)\]


Where:

  • \(L(\theta)\) - The original unregularized cost function
  • \(J(\theta)\) - The new regularized cost function

Both L1 and L2 regularization penalize large weights in the neural network. The key difference is that L2 regularization penalizes the squares of the parameters, while L1 regularization, also known as lasso regression, penalizes their absolute values. This difference has a considerable impact on the resulting regularized weights. Penalizing the absolute value of a weight induces sparsity: the gradient of the L1 penalty stays constant as the weight approaches zero, continuing to push it toward exactly zero, whereas the gradient of the squared penalty shrinks along with the weight. By inducing sparsity in the parameter vector, L1 regularization automatically performs feature selection, setting the weights of irrelevant features to zero. Under L2 regularization, by contrast, weights are driven to values very close to 0 but generally never reach exactly 0. Because it encourages sparsity, L1 regularization has been used in some works to train sparse networks that may be more hardware efficient (Hoefler et al. 2021).

Hoefler, Torsten, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. 2021. “Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks.” https://arxiv.org/abs/2102.00554.

\[R_{L1}(\Theta) = \alpha \sum_{i=1}^{n}||\theta_{i}||\]


Where:

  • \(R_{L1}(\Theta)\) - The L1 regularization term that is added to the cost function
  • \(\alpha\) - The L1 regularization hyperparameter that controls the strength of regularization
  • \(\theta_{i}\) - The i-th model parameter
  • \(n\) - The number of parameters in the model
  • \(||\theta_{i}||\) - The L1 norm, which takes the absolute value of each parameter

And the full L1 regularized cost function is:


\[J(\theta) = L(\theta) + R_{L1}(\Theta)\]


Where:

  • \(L(\theta)\) - The original unregularized cost function
  • \(J(\theta)\) - The new regularized cost function

The choice between L1 and L2 depends on the expected model complexity and whether intrinsic feature selection is needed. Both require iterative tuning across a validation set to select the optimal α hyperparameter.
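The two penalty terms map directly to code. The sketch below implements \(R_{L1}\) and \(R_{L2}\) as defined above and adds each to a stand-in base loss; the weight values, `alpha`, and `base_loss` are made-up numbers for illustration:

```python
# L2 penalty: alpha * sum of squared parameters (ridge).
def l2_penalty(weights, alpha):
    return alpha * sum(w ** 2 for w in weights)

# L1 penalty: alpha * sum of absolute values of parameters (lasso).
def l1_penalty(weights, alpha):
    return alpha * sum(abs(w) for w in weights)

weights = [0.5, -1.2, 3.0, 0.0]
base_loss = 0.8   # stand-in for the unregularized loss L(theta)
alpha = 0.01

j_l2 = base_loss + l2_penalty(weights, alpha)  # J(theta) with L2
j_l1 = base_loss + l1_penalty(weights, alpha)  # J(theta) with L1
print(j_l2, j_l1)
```

In practice, deep learning frameworks typically fold the L2 term into the optimizer rather than the loss; for example, the `weight_decay` argument of PyTorch's SGD optimizer applies an equivalent L2 shrinkage to the parameters.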


The two videos below explain how regularization works and can help reduce model overfitting to improve performance.


7.7.2 Dropout


Another widely adopted regularization method is dropout (Srivastava et al. 2014). During training, dropout randomly sets a fraction \(1-p\) of node outputs or hidden activations to zero. This encourages the network to distribute information across many nodes rather than relying on a small number of them. At prediction time, the full network is used, with intermediate activations scaled by \(p\) to maintain output magnitudes. GPU-optimized frameworks like PyTorch and TensorFlow make dropout straightforward to implement efficiently.

Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” Journal of Machine Learning Research 15: 1929–58. http://jmlr.org/papers/v15/srivastava14a.html.

Let’s be more precise. During training with dropout, each node’s output \(a_i\) is passed through a dropout mask \(r_i\) before being used by the next layer:


\[ ã_i = r_i \odot a_i \]


Where:

  • \(a_i\) - output of node \(i\)
  • \(ã_i\) - output of node \(i\) after dropout
  • \(r_i\) - independent Bernoulli random variable with probability \(p\) of being 1
  • \(\odot\) - elementwise multiplication

This dropout mask \(r_i\) randomly sets a fraction \(1-p\) of activations to 0 during training, forcing the network to learn redundant representations.


At test time, the dropout mask is removed, and the activations are rescaled by \(p\) to maintain expected output magnitudes:


\[ a_i^{test} = p a_i\]


Where:

  • \(a_i^{test}\) - node output at test time
  • \(p\) - dropout probability hyperparameter

The key hyperparameter is \(p\), the fraction of nodes dropped, often set between 0.2 and 0.5. Larger networks tend to benefit from more dropout, while small networks risk underfitting if too many nodes are cut out. Trial and error combined with monitoring validation performance helps tune the dropout level.
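The two regimes above can be sketched in a few lines. This follows the convention used in the equations here (keep probability \(p\) at training time, scale by \(p\) at test time); note that frameworks such as PyTorch instead use "inverted" dropout, which rescales during training so that test-time activations need no adjustment:

```python
import random

def dropout_train(activations, p, rng=random.Random(0)):
    # Each mask entry r_i is Bernoulli(p): keep the activation with
    # probability p, otherwise zero it out.
    return [a if rng.random() < p else 0.0 for a in activations]

def dropout_test(activations, p):
    # At test time no units are dropped; outputs are scaled by p so
    # expected magnitudes match training.
    return [p * a for a in activations]

acts = [1.0] * 10
p = 0.5
print(dropout_train(acts, p))  # roughly half the activations zeroed
print(dropout_test(acts, p))   # every activation scaled to 0.5
```

The expected value of a masked activation during training is \(p \cdot a_i\), which is exactly what the test-time scaling reproduces deterministically.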


The following video discusses the intuition behind the dropout regularization technique and how it works.


7.7.3 Early Stopping


The intuition behind early stopping involves tracking model performance on a held-out validation set across training epochs. At first, increases in training set fitness accompany gains in validation accuracy as the model picks up generalizable patterns. After some point, however, the model starts overfitting - latching onto peculiarities and noise in the training data that don’t apply more broadly. The validation performance peaks and then degrades if training continues. Early stopping rules halt training at this peak to prevent overfitting. This technique demonstrates how ML pipelines must monitor system feedback, not just unquestioningly maximize performance on a static training set. The system’s state evolves, and the optimal endpoints change.


Therefore, formal early stopping methods require monitoring a metric like validation accuracy or loss after each epoch. Common curves exhibit rapid initial gains that taper off, eventually plateauing and then degrading slightly as overfitting occurs. The optimal stopping point is often between 5 and 15 epochs past the peak, depending on patience thresholds. Tracking multiple metrics can improve the signal, since variance exists between measures.


Simple early stopping rules halt training immediately at the first post-peak degradation. More robust methods introduce a patience parameter—the number of degrading epochs permitted before stopping. This avoids prematurely halting training due to transient fluctuations. Typical patience windows range from 50 to 200 validation batches; wider windows incur a greater risk of overfitting. Formal tuning strategies can determine the optimal patience.
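An early stopping loop with patience is short enough to sketch in full. The validation losses below are a made-up sequence standing in for real per-epoch evaluation, and the patience of 3 epochs is illustrative:

```python
def train_with_early_stopping(val_losses, patience=3):
    # val_losses: validation loss recorded after each epoch.
    best_loss, best_epoch, bad_epochs = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # no improvement for `patience` epochs: stop
    return best_epoch, best_loss

# Validation loss improves, then degrades as overfitting sets in.
losses = [1.0, 0.7, 0.5, 0.45, 0.46, 0.48, 0.52, 0.6]
print(train_with_early_stopping(losses))
```

In practice the loop would also checkpoint the model at each new best epoch, so the weights from `best_epoch` (not the final epoch) are the ones deployed.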


Exercise 7.3 (Regularization)  


Battling Overfitting: Unlock the Secrets of Regularization! Overfitting is like your model memorizing the answers to a practice test, then failing the real exam. Regularization techniques are the study guides that help your model generalize and ace new challenges. In this Colab notebook, you’ll learn how to tune regularization parameters for optimal results using L1 & L2 regularization, dropout, and early stopping.



The following video covers a few other regularization methods that can reduce model overfitting.


7.8 Weight Initialization


Properly initializing the weights of a neural network before training is a vital step that directly impacts model performance. Randomly initializing weights to very large or small values can lead to problems like vanishing or exploding gradients, slow training convergence, or getting trapped in poor local minima. Proper weight initialization accelerates model convergence during training and carries implications for system performance at inference time in production environments. Some key aspects include:

  • Faster Time-to-Accuracy: Carefully tuned initialization leads to faster convergence, so models reach target accuracy milestones earlier in the training cycle. For instance, Xavier initialization could reduce time-to-accuracy by 20% versus poor random initialization. As training is typically the most time- and compute-intensive phase, this directly enhances ML system velocity and productivity.

  • Model Iteration Cycle Efficiency: If models train faster, the overall turnaround time for experimentation, evaluation, and model design iterations decreases significantly. Systems have more flexibility to explore architectures, data pipelines, etc., within given timeframes.

  • Impact on Necessary Training Epochs: The training process runs for multiple epochs, with each full pass through the data being an epoch. Good initialization can reduce the epochs required to converge the loss and accuracy curves on the training set by 10-30%. This means tangible resource and infrastructure cost savings.

  • Effect on Training Hyperparameters: Weight initialization parameters interact strongly with certain regularization hyperparameters that govern the training dynamics, like learning rate schedules and dropout probabilities. Finding the right combination of settings is non-trivial. Appropriate initialization smooths this search.

Weight initialization has cascading benefits for machine learning engineering efficiency and minimized system resource overhead. It is an easily overlooked tactic that every practitioner should master. The choice of which weight initialization technique to use depends on factors like model architecture (number of layers, connectivity pattern, etc.), activation functions, and the specific problem being solved. Over the years, researchers have developed and empirically verified different initialization strategies targeted to common neural network architectures, which we will discuss here.


7.8.1 Uniform and Normal Initialization


When randomly initializing weights, two standard probability distributions are commonly used - uniform and Gaussian (normal). The uniform distribution sets an equal probability of the initial weight parameters falling anywhere within set minimum and maximum bounds. For example, the bounds could be -1 and 1, leading to a uniform spread of weights between these limits. The Gaussian distribution, on the other hand, concentrates probability around a mean value, following the shape of a bell curve. Most weight values will cluster in the region of the specified mean, with fewer samples towards the extreme ends. The standard deviation (std dev) parameter controls the spread around the mean.


The choice between uniform and normal initialization depends on the network architecture and activation functions. For shallow networks, a normal distribution with a relatively small standard deviation (e.g., 0.01) is recommended. The bell curve prevents large weight values that could trigger training instability in small networks. For deeper networks, a normal distribution with a higher standard deviation (say 0.5 or above) or a uniform distribution may be preferred to account for vanishing gradient issues over many layers. The larger spread drives greater differentiation between neuron behaviors. Fine-tuning the initialization distribution parameters is crucial for stable and speedy model convergence. Monitoring training loss trends can diagnose issues for tweaking the parameters iteratively.


7.8.2 Xavier/Glorot Initialization


Proposed by Glorot and Bengio (2010), this initialization technique is specially designed for sigmoid and tanh activation functions. These saturated activations can cause vanishing or exploding gradients during backpropagation over many layers.

Glorot, Xavier, and Yoshua Bengio. 2010. “Understanding the Difficulty of Training Deep Feedforward Neural Networks.” In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. https://proceedings.mlr.press/v9/glorot10a.html.

The Xavier method cleverly sets the variance of the weight distribution based on the number of inputs and outputs to each layer. The intuition is that this balances the flow of information and gradients throughout the network. For example, consider a layer with 300 input units and 100 output units. Plugging this into the formula variance = 2/(#inputs + #outputs) gives a variance of 2/(300+100) = 0.005.


Sampling the initial weights from a uniform or normal distribution centered at 0 with this variance provides much smoother training convergence for deep sigmoid/tanh networks. The gradients are well-conditioned, preventing exponential vanishing or growth.


7.8.3 He Initialization


As proposed by He et al. (2015), this initialization is tailored to ReLU (Rectified Linear Unit) activation functions. ReLUs are subject to the dying neuron problem, where units get stuck outputting all 0s if they receive strong negative inputs initially. This slows and hinders training.

He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–34. IEEE. https://doi.org/10.1109/iccv.2015.123.

He initialization overcomes this by sampling weights from a distribution whose variance is set based only on the number of inputs per layer, disregarding the outputs. This keeps the incoming signals small enough to activate the ReLUs in their linear regime from the beginning, avoiding dead units. For a layer with 1024 inputs, the formula variance = 2/1024 ≈ 0.002 keeps most weights concentrated closely around 0.


This specialized initialization allows ReLU networks to converge efficiently right from the start. The choice between Xavier and He initialization must match the intended activation function of the network.
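The two schemes differ only in how the variance is computed from the layer's fan-in and fan-out. The sketch below uses plain Python for transparency (frameworks ship these as, e.g., PyTorch's `torch.nn.init.xavier_normal_` and `kaiming_normal_`); the layer sizes are the examples from the text:

```python
import math
import random

rng = random.Random(0)

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot: variance = 2 / (fan_in + fan_out), Gaussian samples.
    std = math.sqrt(2.0 / (fan_in + fan_out))
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

def he_init(fan_in, fan_out):
    # He: variance = 2 / fan_in, ignoring fan_out -- suited to ReLU layers.
    std = math.sqrt(2.0 / fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

w_xavier = xavier_init(300, 100)  # target variance 2/400 = 0.005
w_he = he_init(1024, 256)         # target variance 2/1024 ≈ 0.002
```

Sampling enough weights and measuring their empirical variance recovers the target values, which is a quick sanity check when implementing custom initializers.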


Exercise 7.4 (Weight Initialization)  


Get your neural network off to a strong start with weight initialization! How you set those initial weights can make or break your model’s training. Think of it like tuning the instruments in an orchestra before the concert. In this Colab notebook, you’ll learn that the right initialization strategy can save time, improve model performance, and make your deep-learning journey much smoother.



The video below emphasizes the importance of deliberately selecting initial weight values over random choices.


7.9 Activation Functions


Activation functions play a crucial role in neural networks. They introduce nonlinear behaviors that allow neural nets to model complex patterns. Element-wise activation functions are applied to the weighted sums coming into each neuron in the network. Without activation functions, neural nets would be reduced to linear regression models.


Ideally, activation functions possess certain desirable qualities:

  • Nonlinear: They enable modeling complex relationships through nonlinear transformations of the input sum.
  • Differentiable: They must have well-defined first derivatives to enable backpropagation and gradient-based optimization during training.
  • Range-bounding: They constrain the output signal, preventing it from exploding. For example, sigmoid squashes inputs to (0,1).

Additionally, properties like computational efficiency, monotonicity, and smoothness make some activations better suited over others based on network architecture and problem complexity.


We will briefly survey some of the most widely adopted activation functions and their strengths and limitations. We will also provide guidelines for selecting appropriate functions matched to ML system constraints and use case needs.


7.9.1 Sigmoid


The sigmoid activation applies a squashing S-shaped curve tightly binding the output between 0 and 1. It has the mathematical form:


\[ sigmoid(x) = \frac{1}{1+e^{-x}} \]


The exponentiation transform allows the function to smoothly transition from near 0 towards near 1 as the input moves from very negative to very positive. The monotonic rise covers the full (0,1) range.


Pros:

  • A smooth gradient is always available for backpropagation
  • Output is bounded between 0 and 1, preventing “exploding” activations
  • Simple formula

Cons:

  • Tendency to saturate at extremes, killing gradients (“vanishing”)
  • Not zero-centered - outputs are not symmetrically distributed


7.9.2 Tanh


Tanh or hyperbolic tangent also assumes an S-shape but is zero-centered, meaning the average output value is 0.


\[ tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \]


The numerator/denominator transform shifts the range from (0,1) in Sigmoid to (-1, 1) in tanh.


Most pros/cons are shared with Sigmoid, but Tanh avoids some output saturation issues by being centered. However, it still suffers from vanishing gradients with many layers.


7.9.3 ReLU


The Rectified Linear Unit (ReLU) introduces a simple thresholding behavior with its mathematical form:


\[ ReLU(x) = \max(0, x) \]


It leaves all positive inputs unchanged while clipping all negative values to 0. This sparse activation and cheap computation make ReLU widely favored over sigmoid/tanh.


Figure fig-activation-functions shows the three activation functions discussed above (Tanh, ReLU, and Sigmoid), in addition to the linear case.

Figure 7.6: Common activation functions. Credit: AI Wiki.

7.9.4 Softmax


The softmax activation function is generally used as the last layer for classification tasks to normalize the activation value vector so that its elements sum to 1. This is useful for classification tasks where we want to learn to predict class-specific probabilities of a particular input, in which case the cumulative probability across classes is equal to 1. The softmax activation function is defined as


\[\sigma(z_i) = \frac{e^{z_{i}}}{\sum_{j=1}^K e^{z_{j}}} \ \ \ for\ i=1,2,\dots,K\]
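The four activations above translate directly into code. This is a minimal scalar sketch (libraries apply these elementwise over tensors); the max-subtraction in softmax is a standard numerical-stability trick that leaves the result mathematically unchanged:

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Zero-centered S-curve with range (-1, 1).
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def relu(x):
    # Passes positive inputs through, clips negatives to 0.
    return max(0.0, x)

def softmax(z):
    # Normalizes a vector of logits into a probability distribution.
    # Subtracting max(z) avoids overflow without changing the output.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))                    # 0.5
print(relu(-2.0), relu(2.0))           # 0.0 2.0
print(sum(softmax([2.0, 1.0, 0.1])))   # sums to 1.0
```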


7.9.5 Pros and Cons


Here are the summarizing pros and cons of these various standard activation functions:

Activation Function | Pros | Cons
--------------------|------|-----
Sigmoid | Smooth gradient for backprop; output bounded between 0 and 1 | Saturation kills gradients; not zero-centered
Tanh | Smoother gradient than sigmoid; zero-centered output [-1, 1] | Still suffers from vanishing gradients
ReLU | Computationally efficient; introduces sparsity; avoids vanishing gradients | “Dying ReLU” units; not bounded
Softmax | Normalizes vector outputs into a probability distribution; typically used for the last layer in classification tasks | -

Exercise 7.5 (Activation Functions)  

+
+
+ +
+
+

Unlock the power of activation functions! These little mathematical workhorses are what make neural networks so incredibly flexible. In this Colab notebook, you’ll go hands-on with functions like the Sigmoid, tanh, and the superstar ReLU. See how they transform inputs and learn which works best in different situations. It’s the key to building neural networks that can tackle complex problems!

+

+
+
+
+
+
+
+

7.10 System Bottlenecks

+

As introduced earlier, neural networks comprise linear operations (matrix multiplications) interleaved with element-wise nonlinear activation functions. The most computationally expensive portion of neural networks is the linear transformations, specifically the matrix multiplications between each layer. These linear layers map the activations from the previous layer to a higher dimensional space that serves as inputs to the next layer’s activation function.

+
+

7.10.1 Runtime Complexity of Matrix Multiplication

+
+

Layer Multiplications vs. Activations

+

The bulk of computation in neural networks arises from the matrix multiplications between layers. Consider a neural network layer with an input dimension of \(M\) = 500 and output dimension of \(N\) = 1000; the matrix multiplication requires \(O(N \cdot M) = O(1000 \cdot 500) = 500,000\) multiply-accumulate (MAC) operations between those layers.

+

Contrast this with the preceding layer, which had \(M\) = 300 inputs, requiring \(O(500 \cdot 300) = 150,000\) ops. We can see how the computations scale with the product of the layer widths, with the total computations across \(L\) layers being \(\sum_{l=1}^{L-1} O\big(N^{(l)} \cdot M^{(l-1)}\big)\).

+

Now, comparing the matrix multiplication to the activation function, which requires only \(O(N) = 1000\) element-wise nonlinearities for \(N = 1000\) outputs, we can see the linear transformations dominating the activations computationally.

+

These large matrix multiplications impact hardware choices, inference latency, and power constraints for real-world neural network applications. For example, a typical DNN layer may require 500,000 multiply-accumulates vs. only 1000 nonlinear activations, demonstrating a 500x increase in mathematical operations.

+

When training neural networks, we typically use mini-batch gradient descent, operating on small batches of data simultaneously. Considering a batch size of \(B\) training examples, the input to the matrix multiplication becomes a \(M \times B\) matrix, while the output is an \(N \times B\) matrix.

+
+
+

Mini-batch

+

In training neural networks, we need to repeatedly estimate the gradient of the loss function with respect to the network parameters (i.e., weights and biases). This gradient indicates the direction in which the parameters should be updated to minimize the loss. As introduced previously, we perform updates over a batch of data points every update, also known as stochastic gradient descent or mini-batch gradient descent.

+

The most straightforward approach is to estimate the gradient based on a single training example, compute the parameter update, lather, rinse, and repeat for the next example. However, this involves very small and frequent parameter updates that can be computationally inefficient and may be less accurate in terms of convergence due to the stochasticity of using just a single data point for a model update.

+

Instead, mini-batch gradient descent balances convergence stability and computational efficiency. Rather than computing the gradient on single examples, we estimate the gradient based on small “mini-batches” of data—usually between 8 and 256 examples in practice.

+

This provides a noisy but consistent gradient estimate that leads to more stable convergence. Additionally, the parameter update needs to be performed only once per mini-batch rather than once per example, reducing computational overhead.

+

By tuning the mini-batch size, we can control the tradeoff between the smoothness of the estimate (larger batches are generally better) and the frequency of updates (smaller batches allow more frequent updates). Mini-batch sizes are usually powers of 2, so they can efficiently leverage parallelism across GPU cores.

+

So, the total computation performs an \(N \times M\) by \(M \times B\) matrix multiplication, yielding \(O(N \cdot M \cdot B)\) floating point operations. As a numerical example, \(N=1000\) hidden units, \(M=500\) input units, and a batch size \(B=64\) equates to 1000 x 500 x 64 = 32 million multiply-accumulates per training iteration!

+

In contrast, the activation functions are applied element-wise to the \(N \times B\) output matrix, requiring only \(O(N \cdot B)\) computations. For \(N=1000\) and \(B=64\), that is just 64,000 nonlinearities - 500X less work than the matrix multiplication.
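The arithmetic above can be checked directly, with \(N\), \(M\), and \(B\) taken from the example in the text:

```python
# Cost of one training iteration for a single layer:
# an (N x M) weight matrix applied to an (M x B) mini-batch.
N, M, B = 1000, 500, 64

matmul_macs = N * M * B   # multiply-accumulates in the matrix multiply
activation_ops = N * B    # element-wise nonlinearities on the output

print(matmul_macs)                    # 32000000
print(activation_ops)                 # 64000
print(matmul_macs // activation_ops)  # 500
```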

+

As we increase the batch size to fully leverage parallel hardware like GPUs, the discrepancy between matrix multiplication and activation function cost grows even larger. This reveals how optimizing the linear algebra operations offers tremendous efficiency gains.

+

Therefore, matrix multiplication is central in analyzing where and how neural networks spend computation. For example, matrix multiplications often account for over 90% of inference latency and training time in common convolutional and recurrent neural networks.

+
+
+

Optimizing Matrix Multiplication

+

Several techniques enhance the efficiency of general dense/sparse matrix-matrix and matrix-vector operations to improve overall efficiency. Some key methods include:

+
    +
  • Leveraging optimized math libraries like cuBLAS for GPU acceleration
  • +
  • Enabling lower precision formats like FP16 or INT8 where accuracy permits
  • +
  • Employing Tensor Processing Units with hardware matrix multiplication
  • +
  • Sparsity-aware computations and data storage formats to exploit zero parameters
  • +
  • Approximating matrix multiplications with algorithms like Fast Fourier Transforms
  • +
  • Model architecture design to reduce layer widths and activations
  • +
  • Quantization, pruning, distillation, and other compression techniques
  • +
  • Parallelization of computation across available hardware
  • +
  • Caching/pre-computing results where possible to reduce redundant operations
  • +
+

The potential optimization techniques are vast, given the outsized portion of time models spend in matrix and vector math. Even incremental improvements speed up runtimes and lower energy usage. Finding new ways to enhance these linear algebra primitives remains an active area of research aligned with the future demands of machine learning. We will discuss these in detail in the Optimizations and AI Acceleration chapters.
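As a small illustration of the lower-precision lever listed above (a sketch with synthetic data, not a full quantization pipeline): casting a weight matrix from FP32 to FP16 halves its memory footprint at the cost of a small rounding error.

```python
import numpy as np

# Lower-precision storage: FP16 uses half the bytes of FP32,
# halving the memory traffic for this matrix.
rng = np.random.default_rng(0)
w32 = rng.normal(size=(256, 256)).astype(np.float32)
w16 = w32.astype(np.float16)

print(w32.nbytes)  # 262144 bytes
print(w16.nbytes)  # 131072 bytes
# Rounding error introduced by the cast is small for unit-scale weights
print(float(np.max(np.abs(w32 - w16.astype(np.float32)))))
```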

+
+
+
+

7.10.2 Compute vs. Memory Bottleneck

+

As established earlier, matrix-matrix multiplication is the core mathematical operation underpinning neural networks. Both training and inference for neural networks heavily utilize these matrix multiply operations. Analysis shows that over 90% of computational requirements in state-of-the-art neural networks arise from matrix multiplications. Consequently, the performance of matrix multiplication has an enormous influence on overall model training or inference time.

+
+

Training versus Inference

+

While training and inference rely heavily on matrix multiplication performance, their precise computational profiles differ. Specifically, neural network inference tends to be more compute-bound than training for an equivalent batch size. The key difference lies in the backpropagation pass, which is only required during training. Backpropagation involves a sequence of matrix multiply operations to calculate gradients with respect to activations across each network layer. Critically, this incurs additional memory traffic: the activations saved during the forward pass must be read back, and gradients written out, at each layer.

+

As a result, training exhibits lower arithmetic intensities, with gradient calculations bounded by memory access instead of FLOPs. In contrast, neural network inference is dominated by the forward propagation, which corresponds to a series of matrix-matrix multiplies. With no memory-intensive gradient computations, larger batch sizes readily push inference into being extremely compute-bound, as reflected in the high measured arithmetic intensities. However, response times may be critical for some inference applications, forcing the application provider to use a smaller batch size to meet these response-time requirements, thereby reducing hardware efficiency; hence, inference may see lower hardware utilization.

+

The implications are that hardware provisioning and bandwidth vs. FLOP tradeoffs differ depending on whether a system targets training or inference. High-throughput, low-latency servers for inference should emphasize computational power instead of memory, while training clusters require a more balanced architecture.

+

However, matrix multiplication exhibits an interesting tension: it can be bound either by the underlying hardware's memory bandwidth or by its arithmetic throughput. Whether the system's ability to fetch and supply matrix data or its ability to perform computational operations dominates determines which limit applies.

+

This phenomenon has profound impacts; hardware must be designed judiciously, and software optimizations must be considered. Optimizing and balancing compute versus memory to alleviate this underlying matrix multiplication bottleneck is crucial for efficient model training and deployment.

+

Finally, batch size may impact convergence rates during neural network training, another important consideration. There are generally diminishing returns in convergence benefits beyond extremely large batch sizes (e.g., > 16,384). So, while extremely large batch sizes may be increasingly beneficial from a hardware/arithmetic intensity perspective, they may not translate to faster convergence in wall-clock time due to their diminishing benefits to convergence. These tradeoffs are core design decisions in machine learning systems research.

+
+
+

Batch Size

+

The batch size used during neural network training and inference significantly impacts whether matrix multiplication poses more of a computational or memory bottleneck. Concretely, the batch size refers to the number of samples propagated through the network together in one forward/backward pass. Larger batch sizes translate directly into larger matrices in the underlying matrix multiplications.

+

Specifically, let’s look at the arithmetic intensity of matrix multiplication during neural network training. This measures the ratio between computational operations and memory transfers. Multiplying two matrices of size \(N \times M\) and \(M \times B\) requires \(N \times M \times B\) multiply-accumulate operations but transfers only \(N \times M + M \times B\) matrix elements.

+

As we increase the batch size \(B\), the number of arithmetic operations grows faster than the memory transfers. For example, with a batch size of 1, we need \(N \times M\) operations and roughly \(N \times M + M\) transfers, giving an arithmetic intensity ratio of around 1. But with a large batch size of 128, the intensity ratio becomes \(\frac{128 \times N \times M}{N \times M + M \times 128} \approx 128\) (when \(N \gg 128\)). Using a larger batch size shifts the overall computation from memory-bounded to more compute-bounded. AI training uses large batch sizes and is generally limited by peak arithmetic computational performance, i.e., Application 3 in Figure fig-roofline.
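To make this trend concrete, a small calculation (using the same transfer-count approximation as the text, which ignores the output-matrix transfers):

```python
# Arithmetic intensity (operations per element transferred) for an
# (N x M) @ (M x B) matrix multiply, counting weight and input
# activation transfers only.
def arithmetic_intensity(N, M, B):
    ops = N * M * B
    transfers = N * M + M * B
    return ops / transfers

N, M = 1000, 500
for B in (1, 8, 64, 128, 1024):
    print(B, round(arithmetic_intensity(N, M, B), 1))
# Intensity is ~1 at B=1 and grows steadily with the batch size.
```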

+

Therefore, batched matrix multiplication is far more computationally intensive than memory access bound. This has implications for hardware design and software optimizations, which we will cover next. The key insight is that we can significantly alter the computational profile and bottlenecks posed by neural network training and inference by tuning the batch size.

+
+
+
+ +
+
+Figure 7.7: AI training roofline model. +
+
+
+
+
+

Hardware Characteristics

+

Modern hardware like CPUs and GPUs is highly optimized for computational throughput rather than memory bandwidth. For example, high-end H100 Tensor Core GPUs can deliver over 60 TFLOPS of double-precision performance but only provide up to 3 TB/s of memory bandwidth, an imbalance of roughly 20 floating point operations for every byte of data moved; consequently, for hardware like GPU accelerators, neural network training workloads must be made as computationally intensive as possible to fully utilize the available resources.

+

This further motivates the need for using large batch sizes during training. When using a small batch, the matrix multiplication is bounded by memory bandwidth, underutilizing the abundant compute resources. However, we can shift the bottleneck towards computation and attain much higher arithmetic intensity with sufficiently large batches. For instance, batches of 256 or 512 samples may be needed to saturate a high-end GPU. The downside is that larger batches provide less frequent parameter updates, which can impact convergence. Still, the batch size serves as an important tuning knob to balance memory vs. compute limitations.

+

Therefore, given the imbalanced compute-memory architectures of modern hardware, employing large batch sizes is essential to alleviate bottlenecks and maximize throughput. As mentioned, the subsequent software and algorithms also need to accommodate such batch sizes since larger batch sizes may have diminishing returns toward the network’s convergence. Using very small batch sizes may lead to suboptimal hardware utilization, ultimately limiting training efficiency. Scaling up to large batch sizes is a research topic explored in various works that aim to do large-scale training (You et al. 2018).

+
+You, Yang, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. 2018. ImageNet Training in Minutes.” https://arxiv.org/abs/1709.05011. +
+
+

Model Architectures

+

The underlying neural network architecture also affects whether matrix multiplication poses more of a computational or memory bottleneck during execution. Transformers and MLPs are much more compute-bound than convolutional neural networks (CNNs). This stems from the types of matrix multiplication operations involved in each model. Transformers rely on self-attention, multiplying large activation matrices by massive parameter matrices to relate elements. MLPs stack fully connected layers, also requiring large matrix multiplies.

+

In contrast, the convolutional layers in CNNs have a sliding window that reuses activations and parameters across the input, which means fewer unique matrix operations are needed. However, the convolutions require repeatedly accessing small input parts and moving partial sums to populate each window. Even though the arithmetic operations in convolutions are intense, this data movement and buffer manipulation impose huge memory access overheads. CNNs comprise several layered stages, so intermediate outputs must frequently materialize in memory.

+

As a result, CNN training tends to be more memory bandwidth bound than arithmetic bound, compared to Transformers and MLPs. Therefore, the matrix multiplication profile, and in turn the bottleneck posed, varies significantly based on model choice. Hardware and systems need to be designed with an appropriate compute-memory bandwidth balance depending on the target model deployment. Models relying more on attention and MLP layers require higher arithmetic throughput than CNNs, which instead demand higher memory bandwidth.

+
+
+
+
+

7.11 Training Parallelization

+

Training neural networks entails intensive computational and memory demands. The backpropagation algorithm for calculating gradients and updating weights consists of repeated matrix multiplications and arithmetic operations over the entire dataset. For example, one pass of backpropagation scales in time complexity with \(O(num\_parameters \times batch\_size \times sequence\_length)\).

+

The computational requirements grow rapidly as model size increases in parameters and layers. Moreover, the algorithm requires storing activation outputs and model parameters for the backward pass, which grows with model size.

+

Larger models cannot fit and train on a single accelerator device like a GPU, and the memory footprint becomes prohibitive. Therefore, we need to parallelize model training across multiple devices to provide sufficient compute and memory to train state-of-the-art neural networks.

+

As shown in Figure fig-training-parallelism, the two main approaches are data parallelism, which replicates the model across devices while splitting the input data batch-wise, and model parallelism, which partitions the model architecture itself across different devices. By training in parallel, we can leverage greater aggregate compute and memory resources to overcome system limitations and accelerate deep learning workloads.

+
+
+
+ +
+
+Figure 7.8: Data parallelism versus model parallelism. +
+
+
+
+

7.11.1 Data Parallel

+

Data parallelization is a common approach to parallelize machine learning training across multiple processing units, such as GPUs or distributed computing resources. The training dataset is divided into batches in data parallelism, and a separate processing unit processes each batch. The model parameters are then updated based on the gradients computed from the processing of each batch. Here’s a step-by-step description of data parallel parallelization for ML training:

+
    +
  1. Dividing the Dataset: The training dataset is divided into smaller batches, each containing a subset of the training examples.

  2. +
  3. Replicating the Model: The neural network model is replicated across all processing units, and each processing unit has its copy of the model.

  4. +
  5. Parallel Computation: Each processing unit takes a different batch and independently computes the forward and backward passes. During the forward pass, the model makes predictions on the input data. The loss function calculates gradients for the model parameters during the backward pass.

  6. +
  7. Gradient Aggregation: After processing their respective batches, the gradients from each processing unit are aggregated. Common aggregation methods include summation or averaging of the gradients.

  8. +
  9. Parameter Update: The aggregated gradients update the model parameters. The update can be performed using optimization algorithms like SGD or variants like Adam.

  10. +
  11. Synchronization: After the update, all processing units synchronize their model parameters, ensuring that each has the latest version of the model.

  12. +
+

The prior steps are repeated for several iterations or until convergence.

+

Let’s take a specific example. Suppose we have a global batch size of 256 and 8 GPUs; each GPU will get a micro-batch of 32 samples. Their forward and backward passes compute losses and gradients only based on the local 32 samples. The gradients get aggregated across devices with a parameter server or collective communications library to get the effective gradient for the global batch. Weight updates happen independently on each GPU according to these gradients. After a configured number of iterations, updated weights synchronize and equalize across devices before continuing to the next iterations.
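The gradient-aggregation step can be illustrated with a toy NumPy simulation (a linear model with synthetic data standing in for real devices; all names are illustrative): the averaged micro-batch gradients match the gradient computed over the full batch.

```python
import numpy as np

# Toy data-parallel step: 8 "devices" each compute an MSE gradient on
# a 32-sample micro-batch; averaging them recovers the full-batch gradient.
rng = np.random.default_rng(0)
num_devices, global_batch = 8, 256
micro = global_batch // num_devices          # 32 samples per device

W = rng.normal(size=4)                       # replicated model parameters
X = rng.normal(size=(global_batch, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0])      # synthetic targets

local_grads = []
for d in range(num_devices):                 # each device, independently:
    xb = X[d * micro:(d + 1) * micro]
    yb = y[d * micro:(d + 1) * micro]
    err = xb @ W - yb                        # local forward pass
    local_grads.append(2 * xb.T @ err / micro)  # local MSE gradient

agg_grad = np.mean(local_grads, axis=0)      # gradient aggregation
full_grad = 2 * X.T @ (X @ W - y) / global_batch
print(np.allclose(agg_grad, full_grad))      # True
W -= 0.1 * agg_grad                          # synchronized parameter update
```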

+

Data parallelism is effective when the model is large, and the dataset is substantial, as it allows for parallel processing of different parts of the data. It is widely used in deep learning frameworks and libraries that support distributed training, such as TensorFlow and PyTorch. However, to ensure efficient parallelization, care must be taken to handle issues like communication overhead, load balancing, and synchronization.

+
+
+

7.11.2 Model Parallel

+

Model parallelism refers to distributing the neural network model across multiple devices, rather than replicating the full model on each device as in data parallelism. This is particularly useful when a model is too large to fit into the memory of a single GPU or accelerator device. While this might not be specifically applicable for embedded or TinyML use cases, as most such models are relatively small, it is still useful to know.

+

In model parallel training, different parts or layers of the model are assigned to separate devices. The input activations and intermediate outputs get partitioned and passed between these devices during the forward and backward passes to coordinate gradient computations across model partitions.

+

The memory footprint and computational operations are distributed by splitting the model architecture across multiple devices instead of concentrating on one. This enables training very large models with billions of parameters that otherwise exceed the capacity of a single device. There are several main ways in which we can do partitioning:

+
    +
  • Layer-wise parallelism: Consecutive layers are distributed onto different devices. For example, device 1 contains layers 1-3; device 2 contains layers 4-6. The output activations from layer 3 would be transferred to device 2 to start the next layers for the forward pass computations.

  • +
  • Filter-wise parallelism: In convolutional layers, output filters can be split among devices. Each device computes activation outputs for a subset of filters, which get concatenated before propagating further.

  • +
  • Spatial parallelism: The input images get divided spatially, so each device processes over a certain region like the top-left quarter of images. The output regions then combine to form the full output.

  • +
+

Additionally, hybrid combinations can split the model layer-wise and data batch-wise. The appropriate type of model parallelism depends on the specific neural architecture constraints and hardware setup. Optimizing the partitioning and communication for the model topology is key to minimizing overhead.
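The layer-wise variant can be sketched in a few lines of NumPy (here the "devices" are just separate lists of weight matrices, and the transfer is an ordinary function call; a real system would send the activation tensor over an interconnect):

```python
import numpy as np

# Layer-wise model parallelism sketch: "device 1" holds the first two
# layers, "device 2" the last two; only the activation tensor h crosses
# the device boundary.
rng = np.random.default_rng(1)
device1 = [rng.normal(size=(512, 256)), rng.normal(size=(256, 128))]
device2 = [rng.normal(size=(128, 64)), rng.normal(size=(64, 10))]

def forward(layers, x):
    for W in layers:
        x = np.maximum(0.0, x @ W)   # linear layer followed by ReLU
    return x

x = rng.normal(size=(32, 512))       # batch of 32 inputs on device 1
h = forward(device1, x)              # partial forward pass on device 1
out = forward(device2, h)            # "transferred" h finishes on device 2
print(out.shape)  # (32, 10)
```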

+

However, as the model parts run on physically separate devices, they must communicate and synchronize their parameters during each training step. The backward pass must ensure gradient updates propagate accurately across the model partitions. Hence, coordination and high-speed interconnecting between devices are crucial for optimizing the performance of model parallel training. Careful partitioning and communication protocols are required to minimize transfer overhead.

+
+
+

7.11.3 Comparison

+

To summarize, Table tbl-parallelism demonstrates some of the key characteristics for comparing data parallelism and model parallelism:

+
+
+
+Table 7.2: Comparing data parallelism and model parallelism. +
+
CharacteristicData ParallelismModel Parallelism
DefinitionDistribute data across devices with model replicasDistribute model across devices
ObjectiveAccelerate training through compute scalingEnable larger model training
Scaling MethodScale devices/workersScale model size
Main ConstraintModel size per deviceDevice coordination overhead
Hardware RequirementsMultiple GPU/TPUsOften specialized interconnect
Primary ChallengeParameter synchronizationComplex partitioning + communication
TypesN/ALayer-wise, filter-wise, spatial
Code ComplexityMinimal changesMore significant model surgery
Popular LibrariesHorovod, PyTorch DistributedMesh TensorFlow
+
+
+
+
+
+
+

7.12 Conclusion

+

In this chapter, we have covered the core foundations that enable effective training of artificial intelligence models. We explored the mathematical concepts like loss functions, backpropagation, and gradient descent that make neural network optimization possible. We also discussed practical techniques around leveraging training data, regularization, hyperparameter tuning, weight initialization, and distributed parallelization strategies that improve convergence, generalization, and scalability.

+

These methodologies form the bedrock through which the success of deep learning has been attained over the past decade. Mastering these fundamentals equips practitioners to architect systems and refine models tailored to their problem context. However, as models and datasets grow exponentially, training systems must optimize across metrics like time, cost, and carbon footprint. Hardware scaling through warehouse-scale computing enables massive computational throughput, but optimizations around efficiency and specialization will be key. Software techniques like compression and sparsity exploitation can augment hardware gains. We will discuss several of these in the coming chapters.

+

Overall, the fundamentals covered in this chapter equip practitioners to build, refine, and deploy models. However, interdisciplinary skills spanning theory, systems, and hardware will differentiate the experts who can lift AI to the next level in the sustainable and responsible way that society requires. Understanding efficiency alongside accuracy constitutes the balanced engineering approach needed to train intelligent systems that integrate smoothly across many real-world contexts.

+
+
+

Resources

+

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

+
+
+
+ +
+
+Slides +
+
+
+
+
+

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

+ +
+
+
+
+
+
+ +
+
+Exercises +
+
+
+
+
+

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

+ +
+
+
+
+
+
+ +
+
+Labs +
+
+
+
+
+

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

+

Coming soon.

+
+
+
+ + + +
+ +
+ + +
+ + + + + + \ No newline at end of file diff --git a/contents/workflow/workflow.html b/contents/workflow/workflow.html new file mode 100644 index 00000000..da4e1d45 --- /dev/null +++ b/contents/workflow/workflow.html @@ -0,0 +1,1277 @@ + + + + + + + + + +Machine Learning Systems - 4  AI Workflow + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+ +
+ + +
+ + + +
+ +
+
+

4  AI Workflow

+
+ + + +
+ + + + +
+ + + +
+ + +

Resources: Slides, Labs, Exercises

+
+
+

+
DALL·E 3 Prompt: Create a rectangular illustration of a stylized flowchart representing the AI workflow/pipeline. From left to right, depict the stages as follows: ‘Data Collection’ with a database icon, ‘Data Preprocessing’ with a filter icon, ‘Model Design’ with a brain icon, ‘Training’ with a weight icon, ‘Evaluation’ with a checkmark, and ‘Deployment’ with a rocket. Connect each stage with arrows to guide the viewer horizontally through the AI processes, emphasizing these steps’ sequential and interconnected nature.
+
+
+

In this chapter, we’ll explore the machine learning (ML) workflow, setting the stage for subsequent chapters that delve into the specifics. To ensure we see the bigger picture, this chapter offers a high-level overview of the steps involved in the ML workflow.

+

The ML workflow is a structured approach that guides professionals and researchers through developing, deploying, and maintaining ML models. This workflow is generally divided into several crucial stages, each contributing to the effective development of intelligent systems.

+
+
+
+ +
+
+Learning Objectives +
+
+
+
    +
  • Understand the ML workflow and gain insights into the structured approach and stages of developing, deploying, and maintaining machine learning models.

  • +
  • Learn about the unique challenges and distinctions between workflows for Traditional machine learning and embedded AI.

  • +
  • Appreciate the roles in ML projects and understand their responsibilities and significance.

  • +
  • Understanding the importance, applications, and considerations for implementing ML models in resource-constrained environments.

  • +
  • Gain awareness about the ethical and legal aspects that must be considered and adhered to in ML and embedded AI projects.

  • +
  • Establish a basic understanding of ML workflows and roles to be well-prepared for deeper exploration in the following chapters.

  • +
+
+
+
+

4.1 Overview

+
+
+
+ +
+
+Figure 4.1: Multi-step design methodology for the development of a machine learning model. Commonly referred to as the machine learning lifecycle +
+
+
+

Developing a successful machine learning model requires a systematic workflow. This end-to-end process enables you to build, deploy, and maintain models effectively. As shown in Figure fig-ml-life-cycle, it typically involves the following key steps:

+
    +
  1. Problem Definition: Start by clearly articulating the specific problem you want to solve. This focuses your efforts during data collection and model building.
  2. +
  3. Data Collection and Preparation: Gather relevant, high-quality training data that captures all aspects of the problem. Clean and preprocess the data to prepare it for modeling.
  4. +
  5. Model Selection and Training: Choose a machine learning algorithm suited to your problem type and data. Consider the pros and cons of different approaches. Feed the prepared data into the model to train it. Training time varies based on data size and model complexity.
  6. +
  7. Model Evaluation: Test the trained model on new unseen data to measure its predictive accuracy. Identify any limitations.
  8. +
  9. Model Deployment: Integrate the validated model into applications or systems to start operationalization.
  10. +
  11. Monitor and Maintain: Track model performance in production. Retrain periodically on new data to keep it current.
  12. +
+

Following this structured ML workflow helps guide you through the key phases of development. It ensures you build effective and robust models ready for real-world deployment, resulting in higher-quality models that solve your business needs.

+

The ML workflow is iterative, requiring ongoing monitoring and potential adjustments. Additional considerations include:

+
    +
  • Version Control: Track code and data changes to reproduce results and revert to earlier versions if needed.
  • +
  • Documentation: Maintain detailed documentation for workflow understanding and reproduction.
  • +
  • Testing: Rigorously test the workflow to ensure its functionality.
  • +
  • Security: Safeguard your workflow and data when deploying models in production settings.
  • +
+
+
+

4.2 Traditional vs. Embedded AI

+

The ML workflow is a universal guide applicable across various platforms, including cloud-based solutions, edge computing, and TinyML. However, the workflow for Embedded AI introduces unique complexities and challenges, making it a captivating domain and paving the way for remarkable innovations.

+
+

4.2.1 Resource Optimization

+
    +
  • Traditional ML Workflow: This workflow prioritizes model accuracy and performance, often leveraging abundant computational resources in cloud or data center environments.
  • +
  • Embedded AI Workflow: Given embedded systems’ resource constraints, this workflow requires careful planning to optimize model size and computational demands. Techniques like model quantization and pruning are crucial.
  • +
+
+
+

4.2.2 Real-time Processing

  • Traditional ML Workflow: Less emphasis on real-time processing, often relying on batch data processing.
  • Embedded AI Workflow: Prioritizes real-time data processing, making low latency and quick execution essential, especially in applications like autonomous vehicles and industrial automation.
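A simple way to check a low-latency requirement is to benchmark the model's forward pass directly on the target device. The sketch below uses a stand-in NumPy "model" (an assumption for illustration) and reports percentile latencies, which matter more than the mean when a real-time deadline must hold on nearly every inference:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
w1 = rng.standard_normal((64, 128)).astype(np.float32)
w2 = rng.standard_normal((128, 10)).astype(np.float32)

def infer(x):
    # Stand-in for the deployed model's forward pass.
    return np.maximum(x @ w1, 0) @ w2

x = rng.standard_normal((1, 64)).astype(np.float32)
latencies_ms = []
for _ in range(100):
    t0 = time.perf_counter()
    infer(x)
    latencies_ms.append((time.perf_counter() - t0) * 1000)

print(f"p50={np.percentile(latencies_ms, 50):.3f} ms  "
      f"p99={np.percentile(latencies_ms, 99):.3f} ms")
```

If the p99 latency exceeds the application's deadline, that is the signal to revisit model size or hardware choices before deployment.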

4.2.3 Data Management and Privacy

  • Traditional ML Workflow: Processes data in centralized locations, often necessitating extensive data transfer and focusing on data security during transit and storage.
  • Embedded AI Workflow: This workflow leverages edge computing to process data closer to its source, reducing data transmission and enhancing privacy through data localization.
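One common pattern for data localization is to compute compact features on the device and transmit only those, so the raw samples never leave it. The function below is a hypothetical sketch of that pattern; the specific features chosen are illustrative:

```python
import numpy as np

def summarize_on_device(window: np.ndarray) -> dict:
    """Compact features computed locally; the raw samples never leave the device."""
    return {
        "mean": float(window.mean()),
        "std": float(window.std()),
        "peak": float(np.abs(window).max()),
    }

rng = np.random.default_rng(0)
raw = rng.standard_normal(1000)  # e.g., one second of accelerometer samples
payload = summarize_on_device(raw)
print(f"transmitting {len(payload)} features instead of {raw.size} raw samples")
```

Beyond the privacy benefit, shrinking a thousand samples to three numbers also cuts transmission energy, which is often the dominant power cost on battery-powered devices.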

4.2.4 Hardware-Software Integration

  • Traditional ML Workflow: Typically operates on general-purpose hardware, with software development occurring independently.
  • Embedded AI Workflow: This workflow involves a more integrated approach to hardware and software development, often incorporating custom chips or hardware accelerators to achieve optimal performance.

4.3 Roles & Responsibilities


Creating an ML solution, especially for embedded AI, is a multidisciplinary effort involving various specialists.


Table 4.1 shows a rundown of the typical roles involved:

Table 4.1: Roles and responsibilities of people involved in MLOps.

Role                                    Responsibilities
Project Manager                         Oversees the project, ensuring timelines and milestones are met.
Domain Experts                          Offer domain-specific insights to define project requirements.
Data Scientists                         Specialize in data analysis and model development.
Machine Learning Engineers              Focus on model development and deployment.
Data Engineers                          Manage data pipelines.
Embedded Systems Engineers              Integrate ML models into embedded systems.
Software Developers                     Develop software components for AI system integration.
Hardware Engineers                      Design and optimize hardware for the embedded AI system.
UI/UX Designers                         Focus on user-centric design.
QA Engineers                            Ensure the system meets quality standards.
Ethicists and Legal Advisors            Consult on ethical and legal compliance.
Operations and Maintenance Personnel    Monitor and maintain the deployed system.
Security Specialists                    Ensure system security.

Understanding these roles is crucial for completing an ML project. As we proceed through the upcoming chapters, we’ll delve into each role’s essence and expertise, fostering a comprehensive understanding of the complexities involved in embedded AI projects. This holistic view facilitates seamless collaboration and nurtures an environment ripe for innovation and breakthroughs.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.


Coming soon.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

_0x4286eb?_0x4286eb:_0x4286eb[_0x3934dc(0x8e8)]();},0x20e0:_0x44cf3b=>{const _0x41950a=a0_0x11e7;function _0x576890(_0x1c24f9){const _0x1f0286=a0_0x11e7;return _0x1c24f9 instanceof Map?_0x1c24f9[_0x1f0286(0x4933)]=_0x1c24f9[_0x1f0286(0x5be)]=_0x1c24f9[_0x1f0286(0x1fa)]=function(){throw new Error('map\x20is\x20read-only');}:_0x1c24f9 instanceof Set&&(_0x1c24f9[_0x1f0286(0x362c)]=_0x1c24f9[_0x1f0286(0x4933)]=_0x1c24f9[_0x1f0286(0x5be)]=function(){throw new Error('set\x20is\x20read-only');}),Object['freeze'](_0x1c24f9),Object[_0x1f0286(0xf6e)](_0x1c24f9)[_0x1f0286(0xa21)](_0x36b353=>{const _0x357b4a=_0x1f0286,_0x57f310=_0x1c24f9[_0x36b353],_0x4dfd5e=typeof _0x57f310;_0x357b4a(0x20c7)!==_0x4dfd5e&&_0x357b4a(0x14b2)!==_0x4dfd5e||Object[_0x357b4a(0x44c0)](_0x57f310)||_0x576890(_0x57f310);}),_0x1c24f9;}class _0x4dd7e0{constructor(_0x389b39){const _0x3c6aed=a0_0x11e7;void 0x0===_0x389b39[_0x3c6aed(0x5139)]&&(_0x389b39[_0x3c6aed(0x5139)]={}),this[_0x3c6aed(0x5139)]=_0x389b39[_0x3c6aed(0x5139)],this['isMatchIgnored']=!0x1;}[_0x41950a(0xec5)](){const _0xa5b9ff=_0x41950a;this[_0xa5b9ff(0x49c0)]=!0x0;}}function _0x122d39(_0x3d92ce){const _0x2855f8=_0x41950a;return _0x3d92ce[_0x2855f8(0x741)](/&/g,_0x2855f8(0x2a36))['replace'](//g,'>')[_0x2855f8(0x741)](/"/g,'"')[_0x2855f8(0x741)](/'/g,_0x2855f8(0x121c));}function _0x157a2a(_0x4b2b6c,..._0x36c15e){const _0x2653c6=_0x41950a,_0x2fb9c4=Object[_0x2653c6(0x1d3a)](null);for(const _0x5f2de7 in _0x4b2b6c)_0x2fb9c4[_0x5f2de7]=_0x4b2b6c[_0x5f2de7];return _0x36c15e['forEach'](function(_0x1a1b84){for(const _0x2152c6 in _0x1a1b84)_0x2fb9c4[_0x2152c6]=_0x1a1b84[_0x2152c6];}),_0x2fb9c4;}const _0x1e8793=_0x165f1e=>!!_0x165f1e['scope'];class _0xde67f2{constructor(_0x8150e3,_0x40f217){const _0x4cd128=_0x41950a;this[_0x4cd128(0x2d10)]='',this[_0x4cd128(0x3982)]=_0x40f217[_0x4cd128(0x3982)],_0x8150e3['walk'](this);}[_0x41950a(0x4001)](_0x4a9aff){const 
_0x49b6c3=_0x41950a;this[_0x49b6c3(0x2d10)]+=_0x122d39(_0x4a9aff);}[_0x41950a(0x45f8)](_0x416106){const _0x12154e=_0x41950a;if(!_0x1e8793(_0x416106))return;const _0x56f06f=((_0x3b2d4b,{prefix:_0x243491})=>{const _0x519514=a0_0x11e7;if(_0x3b2d4b['startsWith'](_0x519514(0x2d24)))return _0x3b2d4b[_0x519514(0x741)](_0x519514(0x2d24),_0x519514(0xd06));if(_0x3b2d4b[_0x519514(0x2628)]('.')){const _0x172cda=_0x3b2d4b[_0x519514(0x1117)]('.');return[''+_0x243491+_0x172cda[_0x519514(0x34fe)](),..._0x172cda[_0x519514(0x4833)]((_0x349bc5,_0x4627f9)=>''+_0x349bc5+'_'['repeat'](_0x4627f9+0x1))][_0x519514(0x3541)]('\x20');}return''+_0x243491+_0x3b2d4b;})(_0x416106[_0x12154e(0x4cd)],{'prefix':this[_0x12154e(0x3982)]});this[_0x12154e(0x2bbd)](_0x56f06f);}[_0x41950a(0x45b8)](_0x55af78){const _0x27c854=_0x41950a;_0x1e8793(_0x55af78)&&(this['buffer']+=_0x27c854(0x1bcb));}[_0x41950a(0x4fe9)](){const _0x2bb4f4=_0x41950a;return this[_0x2bb4f4(0x2d10)];}[_0x41950a(0x2bbd)](_0x58f7ce){const _0x4c1ee4=_0x41950a;this[_0x4c1ee4(0x2d10)]+=_0x4c1ee4(0x3e55)+_0x58f7ce+'\x22>';}}const _0x22bdbf=(_0x437d4d={})=>{const _0x781bc2={'children':[]};return Object['assign'](_0x781bc2,_0x437d4d),_0x781bc2;};class _0x1b7319{constructor(){const _0x25d48b=_0x41950a;this[_0x25d48b(0x37bd)]=_0x22bdbf(),this[_0x25d48b(0x453a)]=[this[_0x25d48b(0x37bd)]];}get[_0x41950a(0x279d)](){const _0x424478=_0x41950a;return this[_0x424478(0x453a)][this[_0x424478(0x453a)][_0x424478(0x1b19)]-0x1];}get[_0x41950a(0x507b)](){const _0x45d64f=_0x41950a;return this[_0x45d64f(0x37bd)];}[_0x41950a(0x362c)](_0x4d1d28){const _0x2e6d02=_0x41950a;this[_0x2e6d02(0x279d)][_0x2e6d02(0x4c3e)][_0x2e6d02(0x1715)](_0x4d1d28);}[_0x41950a(0x45f8)](_0x33efc3){const _0x548c0e=_0x41950a,_0xf1295e=_0x22bdbf({'scope':_0x33efc3});this[_0x548c0e(0x362c)](_0xf1295e),this[_0x548c0e(0x453a)][_0x548c0e(0x1715)](_0xf1295e);}[_0x41950a(0x45b8)](){const _0x23224d=_0x41950a;if(this['stack'][_0x23224d(0x1b19)]>0x1)return 
this[_0x23224d(0x453a)]['pop']();}[_0x41950a(0x44ef)](){const _0x3f6907=_0x41950a;for(;this[_0x3f6907(0x45b8)](););}[_0x41950a(0x4330)](){const _0x4eead5=_0x41950a;return JSON[_0x4eead5(0x3cbd)](this[_0x4eead5(0x37bd)],null,0x4);}['walk'](_0x528b0f){const _0x239ef2=_0x41950a;return this['constructor'][_0x239ef2(0x3bae)](_0x528b0f,this[_0x239ef2(0x37bd)]);}static[_0x41950a(0x3bae)](_0x3518a3,_0xbd2c95){const _0x53276b=_0x41950a;return _0x53276b(0x2431)==typeof _0xbd2c95?_0x3518a3[_0x53276b(0x4001)](_0xbd2c95):_0xbd2c95[_0x53276b(0x4c3e)]&&(_0x3518a3['openNode'](_0xbd2c95),_0xbd2c95['children'][_0x53276b(0xa21)](_0x5dc875=>this[_0x53276b(0x3bae)](_0x3518a3,_0x5dc875)),_0x3518a3['closeNode'](_0xbd2c95)),_0x3518a3;}static['_collapse'](_0x499fb5){const _0x4e121a=_0x41950a;_0x4e121a(0x2431)!=typeof _0x499fb5&&_0x499fb5['children']&&(_0x499fb5[_0x4e121a(0x4c3e)][_0x4e121a(0x12d8)](_0x1bd300=>_0x4e121a(0x2431)==typeof _0x1bd300)?_0x499fb5[_0x4e121a(0x4c3e)]=[_0x499fb5['children']['join']('')]:_0x499fb5['children']['forEach'](_0x1de92e=>{_0x1b7319['_collapse'](_0x1de92e);}));}}class _0x144cc4 extends _0x1b7319{constructor(_0x41e74f){const _0x2d5682=_0x41950a;super(),this[_0x2d5682(0x20b6)]=_0x41e74f;}['addText'](_0x20ec27){const _0x3f9efe=_0x41950a;''!==_0x20ec27&&this[_0x3f9efe(0x362c)](_0x20ec27);}['startScope'](_0x83f934){const _0x1a005f=_0x41950a;this[_0x1a005f(0x45f8)](_0x83f934);}[_0x41950a(0x186f)](){const _0x5c09a1=_0x41950a;this[_0x5c09a1(0x45b8)]();}[_0x41950a(0x5120)](_0x43cad9,_0x14484d){const _0x5b6de2=_0x41950a,_0x2f4b4d=_0x43cad9['root'];_0x14484d&&(_0x2f4b4d['scope']=_0x5b6de2(0x2d24)+_0x14484d),this['add'](_0x2f4b4d);}['toHTML'](){const _0x36974e=_0x41950a;return new _0xde67f2(this,this[_0x36974e(0x20b6)])['value']();}[_0x41950a(0x257a)](){const _0x506d03=_0x41950a;return this[_0x506d03(0x44ef)](),!0x0;}}function _0x45da53(_0x2b9598){const _0x315b5b=_0x41950a;return _0x2b9598?_0x315b5b(0x2431)==typeof 
_0x2b9598?_0x2b9598:_0x2b9598[_0x315b5b(0x33b0)]:null;}function _0x13e767(_0x31e77d){const _0x298388=_0x41950a;return _0x521e8d(_0x298388(0x4380),_0x31e77d,')');}function _0xad2530(_0x128302){const _0x541ca6=_0x41950a;return _0x521e8d(_0x541ca6(0xf54),_0x128302,')*');}function _0x484f62(_0x7ba874){const _0x7084d=_0x41950a;return _0x521e8d(_0x7084d(0xf54),_0x7ba874,')?');}function _0x521e8d(..._0x45a24c){const _0x54eaf6=_0x41950a;return _0x45a24c[_0x54eaf6(0x4833)](_0x1e704b=>_0x45da53(_0x1e704b))['join']('');}function _0x559db(..._0x157a74){const _0x4a9609=_0x41950a,_0x86e47b=function(_0x16ae49){const _0x20a431=a0_0x11e7,_0x3332b4=_0x16ae49[_0x16ae49['length']-0x1];return'object'==typeof _0x3332b4&&_0x3332b4[_0x20a431(0x4514)]===Object?(_0x16ae49[_0x20a431(0x4986)](_0x16ae49[_0x20a431(0x1b19)]-0x1,0x1),_0x3332b4):{};}(_0x157a74);return'('+(_0x86e47b[_0x4a9609(0x2f26)]?'':'?:')+_0x157a74[_0x4a9609(0x4833)](_0x398b4c=>_0x45da53(_0x398b4c))[_0x4a9609(0x3541)]('|')+')';}function _0x54be14(_0x21ec0c){const _0xc024d2=_0x41950a;return new RegExp(_0x21ec0c['toString']()+'|')[_0xc024d2(0x198d)]('')[_0xc024d2(0x1b19)]-0x1;}const _0x14598c=/\[(?:[^\\\]]|\\.)*\]|\(\??|\\([1-9][0-9]*)|\\./;function _0x5992fc(_0x117a64,{joinWith:_0x584021}){const _0x5dacdf=_0x41950a;let _0x45b9f9=0x0;return _0x117a64[_0x5dacdf(0x4833)](_0x572dad=>{const _0xd0bbd6=_0x5dacdf;_0x45b9f9+=0x1;const _0x42d89c=_0x45b9f9;let _0x515b0e=_0x45da53(_0x572dad),_0x5ce0d0='';for(;_0x515b0e['length']>0x0;){const _0x56f514=_0x14598c['exec'](_0x515b0e);if(!_0x56f514){_0x5ce0d0+=_0x515b0e;break;}_0x5ce0d0+=_0x515b0e['substring'](0x0,_0x56f514[_0xd0bbd6(0x3bb5)]),_0x515b0e=_0x515b0e[_0xd0bbd6(0x37b5)](_0x56f514[_0xd0bbd6(0x3bb5)]+_0x56f514[0x0]['length']),'\x5c'===_0x56f514[0x0][0x0]&&_0x56f514[0x1]?_0x5ce0d0+='\x5c'+String(Number(_0x56f514[0x1])+_0x42d89c):(_0x5ce0d0+=_0x56f514[0x0],'('===_0x56f514[0x0]&&_0x45b9f9++);}return 
_0x5ce0d0;})[_0x5dacdf(0x4833)](_0x4a4980=>'('+_0x4a4980+')')[_0x5dacdf(0x3541)](_0x584021);}const _0x37eceb='[a-zA-Z]\x5cw*',_0x243a27=_0x41950a(0x5242),_0x16ddd1=_0x41950a(0x2f6c),_0x59d8e1=_0x41950a(0x2fc3),_0x1bca05=_0x41950a(0x3762),_0x1453e5={'begin':_0x41950a(0xbee),'relevance':0x0},_0x5b8d28={'scope':_0x41950a(0x2431),'begin':'\x27','end':'\x27','illegal':'\x5cn','contains':[_0x1453e5]},_0x888f72={'scope':_0x41950a(0x2431),'begin':'\x22','end':'\x22','illegal':'\x5cn','contains':[_0x1453e5]},_0x5bb5b3=function(_0x599bb,_0x8e9bd7,_0x3ab360={}){const _0x16b37f=_0x41950a,_0xba838=_0x157a2a({'scope':_0x16b37f(0x4645),'begin':_0x599bb,'end':_0x8e9bd7,'contains':[]},_0x3ab360);_0xba838[_0x16b37f(0x2b31)][_0x16b37f(0x1715)]({'scope':_0x16b37f(0x4593),'begin':_0x16b37f(0x11d9),'end':/(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):/,'excludeBegin':!0x0,'relevance':0x0});const _0x5493cd=_0x559db('I','a','is','so','us','to','at','if','in','it','on',/[A-Za-z]+['](d|ve|re|ll|t|s|n)/,/[A-Za-z]+[-][a-z]+/,/[A-Za-z][a-z]{2,}/);return _0xba838['contains']['push']({'begin':_0x521e8d(/[ ]+/,'(',_0x5493cd,/[.]?[:]?([.][ ]|[ ])/,_0x16b37f(0x85b))}),_0xba838;},_0x26e8a0=_0x5bb5b3('//','$'),_0x29a18d=_0x5bb5b3(_0x41950a(0x4f94),'\x5c*/'),_0x115f11=_0x5bb5b3('#','$'),_0x129e66={'scope':'number','begin':_0x16ddd1,'relevance':0x0},_0x46b746={'scope':_0x41950a(0x4a80),'begin':_0x59d8e1,'relevance':0x0},_0x34b516={'scope':_0x41950a(0x4a80),'begin':_0x1bca05,'relevance':0x0},_0xf7c71b={'scope':_0x41950a(0x4d1d),'begin':/\/(?=[^/\n]*\/)/,'end':/\/[gimuy]*/,'contains':[_0x1453e5,{'begin':/\[/,'end':/\]/,'relevance':0x0,'contains':[_0x1453e5]}]},_0x3cd058={'scope':_0x41950a(0x4685),'begin':_0x37eceb,'relevance':0x0},_0x2712ff={'scope':_0x41950a(0x4685),'begin':_0x243a27,'relevance':0x0},_0x4db6bb={'begin':_0x41950a(0xd0a)+_0x243a27,'relevance':0x0};var 
_0x4f27a7=Object[_0x41950a(0x209c)]({'__proto__':null,'APOS_STRING_MODE':_0x5b8d28,'BACKSLASH_ESCAPE':_0x1453e5,'BINARY_NUMBER_MODE':_0x34b516,'BINARY_NUMBER_RE':_0x1bca05,'COMMENT':_0x5bb5b3,'C_BLOCK_COMMENT_MODE':_0x29a18d,'C_LINE_COMMENT_MODE':_0x26e8a0,'C_NUMBER_MODE':_0x46b746,'C_NUMBER_RE':_0x59d8e1,'END_SAME_AS_BEGIN':function(_0x36d845){return Object['assign'](_0x36d845,{'on:begin':(_0x3732db,_0x5edfcb)=>{const _0x4379b2=a0_0x11e7;_0x5edfcb['data'][_0x4379b2(0xabd)]=_0x3732db[0x1];},'on:end':(_0x25fe39,_0x2ef469)=>{const _0x458cee=a0_0x11e7;_0x2ef469[_0x458cee(0x5139)][_0x458cee(0xabd)]!==_0x25fe39[0x1]&&_0x2ef469[_0x458cee(0xec5)]();}});},'HASH_COMMENT_MODE':_0x115f11,'IDENT_RE':_0x37eceb,'MATCH_NOTHING_RE':/\b\B/,'METHOD_GUARD':_0x4db6bb,'NUMBER_MODE':_0x129e66,'NUMBER_RE':_0x16ddd1,'PHRASAL_WORDS_MODE':{'begin':/\b(a|an|the|are|I'm|isn't|don't|doesn't|won't|but|just|should|pretty|simply|enough|gonna|going|wtf|so|such|will|you|your|they|like|more)\b/},'QUOTE_STRING_MODE':_0x888f72,'REGEXP_MODE':_0xf7c71b,'RE_STARTERS_RE':'!|!=|!==|%|%=|&|&&|&=|\x5c*|\x5c*=|\x5c+|\x5c+=|,|-|-=|/=|/|:|;|<<|<<=|<=|<|===|==|=|>>>=|>>=|>=|>>>|>>|>|\x5c?|\x5c[|\x5c{|\x5c(|\x5c^|\x5c^=|\x5c||\x5c|=|\x5c|\x5c||~','SHEBANG':(_0x17b936={})=>{const _0x380307=_0x41950a,_0x5de8c5=/^#![ ]*\//;return _0x17b936[_0x380307(0x4053)]&&(_0x17b936[_0x380307(0x42fa)]=_0x521e8d(_0x5de8c5,/.*\b/,_0x17b936[_0x380307(0x4053)],/\b.*/)),_0x157a2a({'scope':_0x380307(0x5153),'begin':_0x5de8c5,'end':/$/,'relevance':0x0,'on:begin':(_0x209b7,_0x46ae71)=>{const _0x227837=_0x380307;0x0!==_0x209b7[_0x227837(0x3bb5)]&&_0x46ae71['ignoreMatch']();}},_0x17b936);},'TITLE_MODE':_0x3cd058,'UNDERSCORE_IDENT_RE':_0x243a27,'UNDERSCORE_TITLE_MODE':_0x2712ff});function _0x132976(_0x351edb,_0x27c08d){const _0x4baa65=_0x41950a;'.'===_0x351edb['input'][_0x351edb[_0x4baa65(0x3bb5)]-0x1]&&_0x27c08d[_0x4baa65(0xec5)]();}function _0x1f0fcc(_0x12afef,_0x47eb44){const _0x529e69=_0x41950a;void 
0x0!==_0x12afef[_0x529e69(0x1cea)]&&(_0x12afef[_0x529e69(0x4cd)]=_0x12afef[_0x529e69(0x1cea)],delete _0x12afef[_0x529e69(0x1cea)]);}function _0x135ced(_0x16193b,_0x4e6b06){const _0x3f1635=_0x41950a;_0x4e6b06&&_0x16193b[_0x3f1635(0x49c1)]&&(_0x16193b['begin']=_0x3f1635(0x4cd5)+_0x16193b[_0x3f1635(0x49c1)]['split']('\x20')[_0x3f1635(0x3541)]('|')+')(?!\x5c.)(?=\x5cb|\x5cs)',_0x16193b['__beforeBegin']=_0x132976,_0x16193b[_0x3f1635(0xe37)]=_0x16193b[_0x3f1635(0xe37)]||_0x16193b[_0x3f1635(0x49c1)],delete _0x16193b['beginKeywords'],void 0x0===_0x16193b[_0x3f1635(0x48b6)]&&(_0x16193b['relevance']=0x0));}function _0x3edbd0(_0x268341,_0x529ee4){const _0x434e97=_0x41950a;Array[_0x434e97(0x22b4)](_0x268341[_0x434e97(0x2f3f)])&&(_0x268341['illegal']=_0x559db(..._0x268341[_0x434e97(0x2f3f)]));}function _0x1b669f(_0x4dd799,_0x100fc2){const _0x2185c7=_0x41950a;if(_0x4dd799[_0x2185c7(0x2d96)]){if(_0x4dd799['begin']||_0x4dd799['end'])throw new Error(_0x2185c7(0x3315));_0x4dd799[_0x2185c7(0x42fa)]=_0x4dd799[_0x2185c7(0x2d96)],delete _0x4dd799[_0x2185c7(0x2d96)];}}function _0x148eb8(_0x265613,_0x4a39f2){const _0x5a6fef=_0x41950a;void 0x0===_0x265613[_0x5a6fef(0x48b6)]&&(_0x265613[_0x5a6fef(0x48b6)]=0x1);}const _0x2153ba=(_0x267002,_0x4478ed)=>{const _0x3073b0=_0x41950a;if(!_0x267002[_0x3073b0(0x1f97)])return;if(_0x267002[_0x3073b0(0x482d)])throw new Error('beforeMatch\x20cannot\x20be\x20used\x20with\x20starts');const _0x3c3fee=Object[_0x3073b0(0x4e14)]({},_0x267002);Object[_0x3073b0(0x1ea9)](_0x267002)[_0x3073b0(0xa21)](_0x5e0de9=>{delete _0x267002[_0x5e0de9];}),_0x267002[_0x3073b0(0xe37)]=_0x3c3fee['keywords'],_0x267002[_0x3073b0(0x42fa)]=_0x521e8d(_0x3c3fee[_0x3073b0(0x1f97)],_0x13e767(_0x3c3fee[_0x3073b0(0x42fa)])),_0x267002[_0x3073b0(0x482d)]={'relevance':0x0,'contains':[Object[_0x3073b0(0x4e14)](_0x3c3fee,{'endsParent':!0x0})]},_0x267002[_0x3073b0(0x48b6)]=0x0,delete 
_0x3c3fee[_0x3073b0(0x1f97)];},_0x1df411=['of',_0x41950a(0x2663),'for','in',_0x41950a(0xc1a),'or','if',_0x41950a(0xaf5),_0x41950a(0x46f8),_0x41950a(0x144e),'value'],_0xe3894a=_0x41950a(0x1357);function _0x91f4dd(_0x393ae5,_0x98f597,_0x515ff0=_0xe3894a){const _0x3c3caa=_0x41950a,_0x4cf10d=Object['create'](null);return _0x3c3caa(0x2431)==typeof _0x393ae5?_0x264f9c(_0x515ff0,_0x393ae5['split']('\x20')):Array['isArray'](_0x393ae5)?_0x264f9c(_0x515ff0,_0x393ae5):Object[_0x3c3caa(0x1ea9)](_0x393ae5)[_0x3c3caa(0xa21)](function(_0x1dfeaa){const _0x1aa2d1=_0x3c3caa;Object[_0x1aa2d1(0x4e14)](_0x4cf10d,_0x91f4dd(_0x393ae5[_0x1dfeaa],_0x98f597,_0x1dfeaa));}),_0x4cf10d;function _0x264f9c(_0x22074e,_0x4ee083){const _0x37ba81=_0x3c3caa;_0x98f597&&(_0x4ee083=_0x4ee083['map'](_0xc38410=>_0xc38410[_0x37ba81(0x6e8)]())),_0x4ee083[_0x37ba81(0xa21)](function(_0x217c02){const _0x3fe435=_0x37ba81,_0x41fb68=_0x217c02[_0x3fe435(0x1117)]('|');_0x4cf10d[_0x41fb68[0x0]]=[_0x22074e,_0x3c1305(_0x41fb68[0x0],_0x41fb68[0x1])];});}}function _0x3c1305(_0x1f8a54,_0x13f700){return _0x13f700?Number(_0x13f700):function(_0x348c86){const _0x3d185b=a0_0x11e7;return _0x1df411[_0x3d185b(0x2628)](_0x348c86[_0x3d185b(0x6e8)]());}(_0x1f8a54)?0x0:0x1;}const _0x10e73a={},_0x5def54=_0x25708f=>{},_0x4e297b=(_0x52924c,_0x44642d)=>{_0x10e73a[_0x52924c+'/'+_0x44642d]||(_0x10e73a[_0x52924c+'/'+_0x44642d]=!0x0);},_0x4de8e7=new Error();function _0x3b0c3a(_0x27a489,_0x266e93,{key:_0x3479a6}){const _0x12edb7=_0x41950a;let _0x3ba23d=0x0;const _0x24ce37=_0x27a489[_0x3479a6],_0x200167={},_0x133c3a={};for(let _0x41a132=0x1;_0x41a132<=_0x266e93[_0x12edb7(0x1b19)];_0x41a132++)_0x133c3a[_0x41a132+_0x3ba23d]=_0x24ce37[_0x41a132],_0x200167[_0x41a132+_0x3ba23d]=!0x0,_0x3ba23d+=_0x54be14(_0x266e93[_0x41a132-0x1]);_0x27a489[_0x3479a6]=_0x133c3a,_0x27a489[_0x3479a6]['_emit']=_0x200167,_0x27a489[_0x3479a6]['_multi']=!0x0;}function _0x403c83(_0x56f7cd){const _0xa4a046=_0x41950a;!function(_0x248297){const 
_0x2ca930=a0_0x11e7;_0x248297[_0x2ca930(0x4cd)]&&_0x2ca930(0x20c7)==typeof _0x248297['scope']&&null!==_0x248297[_0x2ca930(0x4cd)]&&(_0x248297[_0x2ca930(0x4245)]=_0x248297[_0x2ca930(0x4cd)],delete _0x248297[_0x2ca930(0x4cd)]);}(_0x56f7cd),_0xa4a046(0x2431)==typeof _0x56f7cd[_0xa4a046(0x4245)]&&(_0x56f7cd[_0xa4a046(0x4245)]={'_wrap':_0x56f7cd[_0xa4a046(0x4245)]}),_0xa4a046(0x2431)==typeof _0x56f7cd[_0xa4a046(0x186f)]&&(_0x56f7cd[_0xa4a046(0x186f)]={'_wrap':_0x56f7cd[_0xa4a046(0x186f)]}),function(_0x181399){const _0x2e61b8=_0xa4a046;if(Array[_0x2e61b8(0x22b4)](_0x181399[_0x2e61b8(0x42fa)])){if(_0x181399['skip']||_0x181399['excludeBegin']||_0x181399[_0x2e61b8(0x1e8b)])throw _0x5def54('skip,\x20excludeBegin,\x20returnBegin\x20not\x20compatible\x20with\x20beginScope:\x20{}'),_0x4de8e7;if(_0x2e61b8(0x20c7)!=typeof _0x181399[_0x2e61b8(0x4245)]||null===_0x181399[_0x2e61b8(0x4245)])throw _0x5def54(_0x2e61b8(0x10fe)),_0x4de8e7;_0x3b0c3a(_0x181399,_0x181399[_0x2e61b8(0x42fa)],{'key':_0x2e61b8(0x4245)}),_0x181399[_0x2e61b8(0x42fa)]=_0x5992fc(_0x181399[_0x2e61b8(0x42fa)],{'joinWith':''});}}(_0x56f7cd),function(_0xc35d37){const _0x468e6a=_0xa4a046;if(Array['isArray'](_0xc35d37[_0x468e6a(0x2681)])){if(_0xc35d37[_0x468e6a(0xa59)]||_0xc35d37['excludeEnd']||_0xc35d37[_0x468e6a(0x2ef9)])throw _0x5def54(_0x468e6a(0x522e)),_0x4de8e7;if(_0x468e6a(0x20c7)!=typeof _0xc35d37['endScope']||null===_0xc35d37['endScope'])throw _0x5def54(_0x468e6a(0x1870)),_0x4de8e7;_0x3b0c3a(_0xc35d37,_0xc35d37[_0x468e6a(0x2681)],{'key':_0x468e6a(0x186f)}),_0xc35d37['end']=_0x5992fc(_0xc35d37[_0x468e6a(0x2681)],{'joinWith':''});}}(_0x56f7cd);}function _0x5451bd(_0x1f1387){const _0x12c75b=_0x41950a;function _0x229a10(_0x5eae36,_0x203802){const _0x377109=a0_0x11e7;return new RegExp(_0x45da53(_0x5eae36),'m'+(_0x1f1387['case_insensitive']?'i':'')+(_0x1f1387[_0x377109(0x37c1)]?'u':'')+(_0x203802?'g':''));}class _0x46788b{constructor(){const 
_0x40e114=a0_0x11e7;this['matchIndexes']={},this[_0x40e114(0x3205)]=[],this[_0x40e114(0x40d8)]=0x1,this[_0x40e114(0x25f1)]=0x0;}['addRule'](_0x226dba,_0x2fb048){const _0x796b9d=a0_0x11e7;_0x2fb048[_0x796b9d(0x25f1)]=this[_0x796b9d(0x25f1)]++,this[_0x796b9d(0x3699)][this[_0x796b9d(0x40d8)]]=_0x2fb048,this['regexes'][_0x796b9d(0x1715)]([_0x2fb048,_0x226dba]),this[_0x796b9d(0x40d8)]+=_0x54be14(_0x226dba)+0x1;}[_0x12c75b(0x23cd)](){const _0x466439=_0x12c75b;0x0===this['regexes'][_0x466439(0x1b19)]&&(this['exec']=()=>null);const _0x17e17e=this[_0x466439(0x3205)][_0x466439(0x4833)](_0x2b8838=>_0x2b8838[0x1]);this['matcherRe']=_0x229a10(_0x5992fc(_0x17e17e,{'joinWith':'|'}),!0x0),this['lastIndex']=0x0;}[_0x12c75b(0x198d)](_0x5d3d2d){const _0x4fb333=_0x12c75b;this[_0x4fb333(0x2dd6)][_0x4fb333(0x3655)]=this['lastIndex'];const _0xe8b90c=this[_0x4fb333(0x2dd6)]['exec'](_0x5d3d2d);if(!_0xe8b90c)return null;const _0x17dad2=_0xe8b90c[_0x4fb333(0x59b)]((_0x5661f8,_0x59aff1)=>_0x59aff1>0x0&&void 0x0!==_0x5661f8),_0x7dc2f6=this['matchIndexes'][_0x17dad2];return _0xe8b90c[_0x4fb333(0x4986)](0x0,_0x17dad2),Object[_0x4fb333(0x4e14)](_0xe8b90c,_0x7dc2f6);}}class _0x45d00a{constructor(){const _0x24db2b=_0x12c75b;this[_0x24db2b(0x39d9)]=[],this[_0x24db2b(0x1999)]=[],this[_0x24db2b(0x404e)]=0x0,this['lastIndex']=0x0,this[_0x24db2b(0x9e1)]=0x0;}[_0x12c75b(0x4d12)](_0x33ff60){const _0x186fba=_0x12c75b;if(this[_0x186fba(0x1999)][_0x33ff60])return this[_0x186fba(0x1999)][_0x33ff60];const _0x2c038c=new _0x46788b();return this[_0x186fba(0x39d9)][_0x186fba(0x384c)](_0x33ff60)['forEach'](([_0x123ecb,_0x2b6f43])=>_0x2c038c['addRule'](_0x123ecb,_0x2b6f43)),_0x2c038c['compile'](),this[_0x186fba(0x1999)][_0x33ff60]=_0x2c038c,_0x2c038c;}[_0x12c75b(0x425a)](){const _0x274bfe=_0x12c75b;return 0x0!==this[_0x274bfe(0x9e1)];}[_0x12c75b(0x46d6)](){const _0x466e39=_0x12c75b;this[_0x466e39(0x9e1)]=0x0;}['addRule'](_0x1e0e54,_0x3bc028){const 
_0x229f4f=_0x12c75b;this[_0x229f4f(0x39d9)][_0x229f4f(0x1715)]([_0x1e0e54,_0x3bc028]),'begin'===_0x3bc028[_0x229f4f(0xcfc)]&&this[_0x229f4f(0x404e)]++;}['exec'](_0x3446b1){const _0x5c622f=_0x12c75b,_0x33b674=this[_0x5c622f(0x4d12)](this[_0x5c622f(0x9e1)]);_0x33b674[_0x5c622f(0x3655)]=this['lastIndex'];let _0xce63c=_0x33b674[_0x5c622f(0x198d)](_0x3446b1);if(this[_0x5c622f(0x425a)]()){if(_0xce63c&&_0xce63c['index']===this[_0x5c622f(0x3655)]);else{const _0x2b740e=this['getMatcher'](0x0);_0x2b740e[_0x5c622f(0x3655)]=this[_0x5c622f(0x3655)]+0x1,_0xce63c=_0x2b740e['exec'](_0x3446b1);}}return _0xce63c&&(this['regexIndex']+=_0xce63c['position']+0x1,this[_0x5c622f(0x9e1)]===this[_0x5c622f(0x404e)]&&this[_0x5c622f(0x46d6)]()),_0xce63c;}}if(_0x1f1387[_0x12c75b(0x6a0)]||(_0x1f1387[_0x12c75b(0x6a0)]=[]),_0x1f1387[_0x12c75b(0x2b31)]&&_0x1f1387['contains'][_0x12c75b(0x2628)]('self'))throw new Error(_0x12c75b(0x1753));return _0x1f1387[_0x12c75b(0x201a)]=_0x157a2a(_0x1f1387[_0x12c75b(0x201a)]||{}),function _0x2ec09a(_0x494dd2,_0x4d01ea){const _0x1912f2=_0x12c75b,_0x340197=_0x494dd2;if(_0x494dd2[_0x1912f2(0x7cd)])return _0x340197;[_0x1f0fcc,_0x1b669f,_0x403c83,_0x2153ba][_0x1912f2(0xa21)](_0x32ab4f=>_0x32ab4f(_0x494dd2,_0x4d01ea)),_0x1f1387[_0x1912f2(0x6a0)][_0x1912f2(0xa21)](_0x3c5479=>_0x3c5479(_0x494dd2,_0x4d01ea)),_0x494dd2[_0x1912f2(0x2bd7)]=null,[_0x135ced,_0x3edbd0,_0x148eb8][_0x1912f2(0xa21)](_0x4655de=>_0x4655de(_0x494dd2,_0x4d01ea)),_0x494dd2['isCompiled']=!0x0;let _0x33d17b=null;return'object'==typeof _0x494dd2[_0x1912f2(0xe37)]&&_0x494dd2[_0x1912f2(0xe37)]['$pattern']&&(_0x494dd2['keywords']=Object[_0x1912f2(0x4e14)]({},_0x494dd2['keywords']),_0x33d17b=_0x494dd2[_0x1912f2(0xe37)]['$pattern'],delete 
_0x494dd2[_0x1912f2(0xe37)][_0x1912f2(0x4553)]),_0x33d17b=_0x33d17b||/\w+/,_0x494dd2[_0x1912f2(0xe37)]&&(_0x494dd2['keywords']=_0x91f4dd(_0x494dd2['keywords'],_0x1f1387[_0x1912f2(0x5e2)])),_0x340197[_0x1912f2(0x22f7)]=_0x229a10(_0x33d17b,!0x0),_0x4d01ea&&(_0x494dd2['begin']||(_0x494dd2[_0x1912f2(0x42fa)]=/\B|\b/),_0x340197[_0x1912f2(0x39a)]=_0x229a10(_0x340197[_0x1912f2(0x42fa)]),_0x494dd2[_0x1912f2(0x2681)]||_0x494dd2[_0x1912f2(0xd56)]||(_0x494dd2[_0x1912f2(0x2681)]=/\B|\b/),_0x494dd2[_0x1912f2(0x2681)]&&(_0x340197[_0x1912f2(0x4efb)]=_0x229a10(_0x340197['end'])),_0x340197[_0x1912f2(0xffc)]=_0x45da53(_0x340197[_0x1912f2(0x2681)])||'',_0x494dd2[_0x1912f2(0xd56)]&&_0x4d01ea[_0x1912f2(0xffc)]&&(_0x340197[_0x1912f2(0xffc)]+=(_0x494dd2['end']?'|':'')+_0x4d01ea[_0x1912f2(0xffc)])),_0x494dd2[_0x1912f2(0x2f3f)]&&(_0x340197[_0x1912f2(0x3817)]=_0x229a10(_0x494dd2['illegal'])),_0x494dd2[_0x1912f2(0x2b31)]||(_0x494dd2[_0x1912f2(0x2b31)]=[]),_0x494dd2[_0x1912f2(0x2b31)]=[][_0x1912f2(0x1d1d)](..._0x494dd2[_0x1912f2(0x2b31)]['map'](function(_0x47fde1){return function(_0x178cb6){const _0x35ea83=a0_0x11e7;_0x178cb6[_0x35ea83(0x2807)]&&!_0x178cb6['cachedVariants']&&(_0x178cb6[_0x35ea83(0x3aca)]=_0x178cb6[_0x35ea83(0x2807)]['map'](function(_0x3911b2){return _0x157a2a(_0x178cb6,{'variants':null},_0x3911b2);}));if(_0x178cb6[_0x35ea83(0x3aca)])return _0x178cb6[_0x35ea83(0x3aca)];if(_0x226e0d(_0x178cb6))return _0x157a2a(_0x178cb6,{'starts':_0x178cb6[_0x35ea83(0x482d)]?_0x157a2a(_0x178cb6[_0x35ea83(0x482d)]):null});if(Object[_0x35ea83(0x44c0)](_0x178cb6))return _0x157a2a(_0x178cb6);return _0x178cb6;}('self'===_0x47fde1?_0x494dd2:_0x47fde1);})),_0x494dd2[_0x1912f2(0x2b31)][_0x1912f2(0xa21)](function(_0x2a0288){_0x2ec09a(_0x2a0288,_0x340197);}),_0x494dd2[_0x1912f2(0x482d)]&&_0x2ec09a(_0x494dd2[_0x1912f2(0x482d)],_0x4d01ea),_0x340197['matcher']=function(_0x41e9d0){const _0x2d5715=_0x1912f2,_0x39c6cf=new _0x45d00a();return 
_0x41e9d0[_0x2d5715(0x2b31)][_0x2d5715(0xa21)](_0x397fc8=>_0x39c6cf[_0x2d5715(0x19d8)](_0x397fc8[_0x2d5715(0x42fa)],{'rule':_0x397fc8,'type':_0x2d5715(0x42fa)})),_0x41e9d0[_0x2d5715(0xffc)]&&_0x39c6cf[_0x2d5715(0x19d8)](_0x41e9d0['terminatorEnd'],{'type':_0x2d5715(0x2681)}),_0x41e9d0[_0x2d5715(0x2f3f)]&&_0x39c6cf[_0x2d5715(0x19d8)](_0x41e9d0[_0x2d5715(0x2f3f)],{'type':_0x2d5715(0x2f3f)}),_0x39c6cf;}(_0x340197),_0x340197;}(_0x1f1387);}function _0x226e0d(_0xe07db5){return!!_0xe07db5&&(_0xe07db5['endsWithParent']||_0x226e0d(_0xe07db5['starts']));}class _0xceb3d4 extends Error{constructor(_0x477d43,_0x5db5d7){const _0x4f8575=_0x41950a;super(_0x477d43),this[_0x4f8575(0x11d8)]=_0x4f8575(0x3ad8),this[_0x4f8575(0x2acd)]=_0x5db5d7;}}const _0x40b567=_0x122d39,_0x11f701=_0x157a2a,_0x36ef84=Symbol('nomatch'),_0x17ed34=function(_0x2ba124){const _0x32e672=_0x41950a,_0x3fbcc3=Object[_0x32e672(0x1d3a)](null),_0x17fec6=Object['create'](null),_0x18f19c=[];let _0x3717b2=!0x0;const _0x4d2d53='Could\x20not\x20find\x20the\x20language\x20\x27{}\x27,\x20did\x20you\x20forget\x20to\x20load/include\x20a\x20language\x20module?',_0x2c5ed7={'disableAutodetect':!0x0,'name':_0x32e672(0x14a1),'contains':[]};let _0x1b9407={'ignoreUnescapedHTML':!0x1,'throwUnescapedHTML':!0x1,'noHighlightRe':/^(no-?highlight)$/i,'languageDetectRe':/\blang(?:uage)?-([\w-]+)\b/i,'classPrefix':_0x32e672(0x273c),'cssSelector':_0x32e672(0xe6c),'languages':null,'__emitter':_0x144cc4};function _0x1e535e(_0x5e569b){const _0x427be5=_0x32e672;return _0x1b9407[_0x427be5(0x1962)][_0x427be5(0x1769)](_0x5e569b);}function _0x1459d7(_0x5d391b,_0x2b1c5a,_0x174042){const _0x3a4745=_0x32e672;let _0x115176='',_0x46673e='';'object'==typeof _0x2b1c5a?(_0x115176=_0x5d391b,_0x174042=_0x2b1c5a[_0x3a4745(0x34a4)],_0x46673e=_0x2b1c5a['language']):(_0x4e297b(_0x3a4745(0x3eca),_0x3a4745(0x2ba2)),_0x4e297b(_0x3a4745(0x3eca),_0x3a4745(0x15e7)),_0x46673e=_0x5d391b,_0x115176=_0x2b1c5a),void 0x0===_0x174042&&(_0x174042=!0x0);const 
_0x3259a6={'code':_0x115176,'language':_0x46673e};_0x397de0('before:highlight',_0x3259a6);const _0x527ff9=_0x3259a6[_0x3a4745(0xa34)]?_0x3259a6[_0x3a4745(0xa34)]:_0x5c49b9(_0x3259a6[_0x3a4745(0x286d)],_0x3259a6[_0x3a4745(0x4948)],_0x174042);return _0x527ff9['code']=_0x3259a6['code'],_0x397de0(_0x3a4745(0x3d83),_0x527ff9),_0x527ff9;}function _0x5c49b9(_0x1833bf,_0x11d1e6,_0xf96e33,_0x56950b){const _0x4bb16b=_0x32e672,_0xb0ca71=Object['create'](null);function _0x391e22(){const _0x4e4fe0=a0_0x11e7;if(!_0x5a9848[_0x4e4fe0(0xe37)])return void _0x3a5819[_0x4e4fe0(0x4001)](_0x5a6760);let _0x2af1c0=0x0;_0x5a9848['keywordPatternRe'][_0x4e4fe0(0x3655)]=0x0;let _0x31a738=_0x5a9848[_0x4e4fe0(0x22f7)][_0x4e4fe0(0x198d)](_0x5a6760),_0x3ba61b='';for(;_0x31a738;){_0x3ba61b+=_0x5a6760[_0x4e4fe0(0x37b5)](_0x2af1c0,_0x31a738[_0x4e4fe0(0x3bb5)]);const _0x52ac53=_0x2961d7['case_insensitive']?_0x31a738[0x0][_0x4e4fe0(0x6e8)]():_0x31a738[0x0],_0x32d877=(_0x5ab859=_0x52ac53,_0x5a9848[_0x4e4fe0(0xe37)][_0x5ab859]);if(_0x32d877){const [_0x81aa69,_0x4a5490]=_0x32d877;if(_0x3a5819[_0x4e4fe0(0x4001)](_0x3ba61b),_0x3ba61b='',_0xb0ca71[_0x52ac53]=(_0xb0ca71[_0x52ac53]||0x0)+0x1,_0xb0ca71[_0x52ac53]<=0x7&&(_0x4e4da0+=_0x4a5490),_0x81aa69[_0x4e4fe0(0x3bcf)]('_'))_0x3ba61b+=_0x31a738[0x0];else{const _0x410e90=_0x2961d7[_0x4e4fe0(0x201a)][_0x81aa69]||_0x81aa69;_0x43392a(_0x31a738[0x0],_0x410e90);}}else _0x3ba61b+=_0x31a738[0x0];_0x2af1c0=_0x5a9848[_0x4e4fe0(0x22f7)]['lastIndex'],_0x31a738=_0x5a9848[_0x4e4fe0(0x22f7)][_0x4e4fe0(0x198d)](_0x5a6760);}var _0x5ab859;_0x3ba61b+=_0x5a6760[_0x4e4fe0(0x37b5)](_0x2af1c0),_0x3a5819['addText'](_0x3ba61b);}function _0x1cc692(){const _0x27882f=a0_0x11e7;null!=_0x5a9848[_0x27882f(0xb49)]?(function(){const _0x542784=_0x27882f;if(''===_0x5a6760)return;let _0x4b326b=null;if('string'==typeof _0x5a9848[_0x542784(0xb49)]){if(!_0x3fbcc3[_0x5a9848[_0x542784(0xb49)]])return void 
_0x3a5819[_0x542784(0x4001)](_0x5a6760);_0x4b326b=_0x5c49b9(_0x5a9848['subLanguage'],_0x5a6760,!0x0,_0x518661[_0x5a9848['subLanguage']]),_0x518661[_0x5a9848[_0x542784(0xb49)]]=_0x4b326b[_0x542784(0x162f)];}else _0x4b326b=_0x5ba5e1(_0x5a6760,_0x5a9848[_0x542784(0xb49)][_0x542784(0x1b19)]?_0x5a9848[_0x542784(0xb49)]:null);_0x5a9848[_0x542784(0x48b6)]>0x0&&(_0x4e4da0+=_0x4b326b[_0x542784(0x48b6)]),_0x3a5819[_0x542784(0x5120)](_0x4b326b[_0x542784(0x2d5f)],_0x4b326b[_0x542784(0x286d)]);}()):_0x391e22(),_0x5a6760='';}function _0x43392a(_0xf36b8c,_0x496c5c){const _0x7e1547=a0_0x11e7;''!==_0xf36b8c&&(_0x3a5819[_0x7e1547(0x10b8)](_0x496c5c),_0x3a5819[_0x7e1547(0x4001)](_0xf36b8c),_0x3a5819['endScope']());}function _0x501d5b(_0x3c2f6f,_0x5f1892){const _0x37ad97=a0_0x11e7;let _0xf1d5cc=0x1;const _0x2cbc98=_0x5f1892[_0x37ad97(0x1b19)]-0x1;for(;_0xf1d5cc<=_0x2cbc98;){if(!_0x3c2f6f[_0x37ad97(0xb40)][_0xf1d5cc]){_0xf1d5cc++;continue;}const _0x2aff97=_0x2961d7[_0x37ad97(0x201a)][_0x3c2f6f[_0xf1d5cc]]||_0x3c2f6f[_0xf1d5cc],_0x12cdee=_0x5f1892[_0xf1d5cc];_0x2aff97?_0x43392a(_0x12cdee,_0x2aff97):(_0x5a6760=_0x12cdee,_0x391e22(),_0x5a6760=''),_0xf1d5cc++;}}function _0x9a9755(_0x5eed6f,_0x3ea212){const _0x1e1a47=a0_0x11e7;return _0x5eed6f[_0x1e1a47(0x4cd)]&&_0x1e1a47(0x2431)==typeof _0x5eed6f[_0x1e1a47(0x4cd)]&&_0x3a5819[_0x1e1a47(0x45f8)](_0x2961d7['classNameAliases'][_0x5eed6f[_0x1e1a47(0x4cd)]]||_0x5eed6f[_0x1e1a47(0x4cd)]),_0x5eed6f[_0x1e1a47(0x4245)]&&(_0x5eed6f[_0x1e1a47(0x4245)]['_wrap']?(_0x43392a(_0x5a6760,_0x2961d7['classNameAliases'][_0x5eed6f[_0x1e1a47(0x4245)][_0x1e1a47(0x2887)]]||_0x5eed6f['beginScope'][_0x1e1a47(0x2887)]),_0x5a6760=''):_0x5eed6f[_0x1e1a47(0x4245)][_0x1e1a47(0x3078)]&&(_0x501d5b(_0x5eed6f[_0x1e1a47(0x4245)],_0x3ea212),_0x5a6760='')),_0x5a9848=Object[_0x1e1a47(0x1d3a)](_0x5eed6f,{'parent':{'value':_0x5a9848}}),_0x5a9848;}function _0x5a954b(_0x4d93bf,_0xa56fbc,_0x1af112){const _0xb3fca3=a0_0x11e7;let _0x3eb923=function(_0x3c2194,_0x24eaea){const 
_0x169816=a0_0x11e7,_0x15a5db=_0x3c2194&&_0x3c2194[_0x169816(0x198d)](_0x24eaea);return _0x15a5db&&0x0===_0x15a5db[_0x169816(0x3bb5)];}(_0x4d93bf[_0xb3fca3(0x4efb)],_0x1af112);if(_0x3eb923){if(_0x4d93bf[_0xb3fca3(0x3c09)]){const _0x508c3e=new _0x4dd7e0(_0x4d93bf);_0x4d93bf[_0xb3fca3(0x3c09)](_0xa56fbc,_0x508c3e),_0x508c3e[_0xb3fca3(0x49c0)]&&(_0x3eb923=!0x1);}if(_0x3eb923){for(;_0x4d93bf[_0xb3fca3(0x4aa2)]&&_0x4d93bf[_0xb3fca3(0x46f8)];)_0x4d93bf=_0x4d93bf[_0xb3fca3(0x46f8)];return _0x4d93bf;}}if(_0x4d93bf[_0xb3fca3(0xd56)])return _0x5a954b(_0x4d93bf[_0xb3fca3(0x46f8)],_0xa56fbc,_0x1af112);}function _0x4a0c57(_0x4d87f4){const _0x8cc311=a0_0x11e7;return 0x0===_0x5a9848[_0x8cc311(0x22c1)][_0x8cc311(0x9e1)]?(_0x5a6760+=_0x4d87f4[0x0],0x1):(_0x133297=!0x0,0x0);}function _0x1d03b6(_0x4a9d20){const _0xca995e=a0_0x11e7,_0x1ea9f0=_0x4a9d20[0x0],_0x309920=_0x11d1e6[_0xca995e(0x37b5)](_0x4a9d20[_0xca995e(0x3bb5)]),_0x41cae6=_0x5a954b(_0x5a9848,_0x4a9d20,_0x309920);if(!_0x41cae6)return _0x36ef84;const _0x2ac107=_0x5a9848;_0x5a9848['endScope']&&_0x5a9848[_0xca995e(0x186f)][_0xca995e(0x2887)]?(_0x1cc692(),_0x43392a(_0x1ea9f0,_0x5a9848[_0xca995e(0x186f)][_0xca995e(0x2887)])):_0x5a9848[_0xca995e(0x186f)]&&_0x5a9848[_0xca995e(0x186f)][_0xca995e(0x3078)]?(_0x1cc692(),_0x501d5b(_0x5a9848[_0xca995e(0x186f)],_0x4a9d20)):_0x2ac107[_0xca995e(0xa59)]?_0x5a6760+=_0x1ea9f0:(_0x2ac107[_0xca995e(0x2ef9)]||_0x2ac107[_0xca995e(0x2730)]||(_0x5a6760+=_0x1ea9f0),_0x1cc692(),_0x2ac107[_0xca995e(0x2730)]&&(_0x5a6760=_0x1ea9f0));do{_0x5a9848[_0xca995e(0x4cd)]&&_0x3a5819[_0xca995e(0x45b8)](),_0x5a9848[_0xca995e(0xa59)]||_0x5a9848['subLanguage']||(_0x4e4da0+=_0x5a9848[_0xca995e(0x48b6)]),_0x5a9848=_0x5a9848[_0xca995e(0x46f8)];}while(_0x5a9848!==_0x41cae6[_0xca995e(0x46f8)]);return _0x41cae6[_0xca995e(0x482d)]&&_0x9a9755(_0x41cae6[_0xca995e(0x482d)],_0x4a9d20),_0x2ac107['returnEnd']?0x0:_0x1ea9f0['length'];}let _0x1c0449={};function _0x12a3a3(_0x300507,_0x2035fc){const 
_0x7e5c45=a0_0x11e7,_0x4dbacc=_0x2035fc&&_0x2035fc[0x0];if(_0x5a6760+=_0x300507,null==_0x4dbacc)return _0x1cc692(),0x0;if(_0x7e5c45(0x42fa)===_0x1c0449[_0x7e5c45(0xcfc)]&&_0x7e5c45(0x2681)===_0x2035fc[_0x7e5c45(0xcfc)]&&_0x1c0449[_0x7e5c45(0x3bb5)]===_0x2035fc['index']&&''===_0x4dbacc){if(_0x5a6760+=_0x11d1e6[_0x7e5c45(0x384c)](_0x2035fc[_0x7e5c45(0x3bb5)],_0x2035fc[_0x7e5c45(0x3bb5)]+0x1),!_0x3717b2){const _0x2705c5=new Error(_0x7e5c45(0x29e2)+_0x1833bf+')');throw _0x2705c5[_0x7e5c45(0x921)]=_0x1833bf,_0x2705c5[_0x7e5c45(0x2619)]=_0x1c0449[_0x7e5c45(0x1359)],_0x2705c5;}return 0x1;}if(_0x1c0449=_0x2035fc,_0x7e5c45(0x42fa)===_0x2035fc['type'])return function(_0x41b5d2){const _0x5563a6=_0x7e5c45,_0x58491b=_0x41b5d2[0x0],_0x342d67=_0x41b5d2[_0x5563a6(0x1359)],_0x39515c=new _0x4dd7e0(_0x342d67),_0x1ad3c5=[_0x342d67[_0x5563a6(0x2bd7)],_0x342d67[_0x5563a6(0x1468)]];for(const _0x97283c of _0x1ad3c5)if(_0x97283c&&(_0x97283c(_0x41b5d2,_0x39515c),_0x39515c['isMatchIgnored']))return _0x4a0c57(_0x58491b);return _0x342d67[_0x5563a6(0xa59)]?_0x5a6760+=_0x58491b:(_0x342d67[_0x5563a6(0x4458)]&&(_0x5a6760+=_0x58491b),_0x1cc692(),_0x342d67[_0x5563a6(0x1e8b)]||_0x342d67[_0x5563a6(0x4458)]||(_0x5a6760=_0x58491b)),_0x9a9755(_0x342d67,_0x41b5d2),_0x342d67[_0x5563a6(0x1e8b)]?0x0:_0x58491b['length'];}(_0x2035fc);if(_0x7e5c45(0x2f3f)===_0x2035fc[_0x7e5c45(0xcfc)]&&!_0xf96e33){const _0x2bcf5a=new Error(_0x7e5c45(0xf4c)+_0x4dbacc+_0x7e5c45(0x2270)+(_0x5a9848[_0x7e5c45(0x4cd)]||_0x7e5c45(0x4184))+'\x22');throw _0x2bcf5a[_0x7e5c45(0x2e63)]=_0x5a9848,_0x2bcf5a;}if(_0x7e5c45(0x2681)===_0x2035fc[_0x7e5c45(0xcfc)]){const _0x3b4548=_0x1d03b6(_0x2035fc);if(_0x3b4548!==_0x36ef84)return _0x3b4548;}if(_0x7e5c45(0x2f3f)===_0x2035fc['type']&&''===_0x4dbacc)return 0x1;if(_0x490344>0x186a0&&_0x490344>0x3*_0x2035fc[_0x7e5c45(0x3bb5)])throw new Error(_0x7e5c45(0x26f5));return _0x5a6760+=_0x4dbacc,_0x4dbacc[_0x7e5c45(0x1b19)];}const _0x2961d7=_0x4e7c77(_0x1833bf);if(!_0x2961d7)throw 
_0x5def54(_0x4d2d53[_0x4bb16b(0x741)]('{}',_0x1833bf)),new Error(_0x4bb16b(0x4e9b)+_0x1833bf+'\x22');const _0xe54ebe=_0x5451bd(_0x2961d7);let _0x2ee557='',_0x5a9848=_0x56950b||_0xe54ebe;const _0x518661={},_0x3a5819=new _0x1b9407[(_0x4bb16b(0x4712))](_0x1b9407);!(function(){const _0x2d449a=_0x4bb16b,_0x3e1406=[];for(let _0x509659=_0x5a9848;_0x509659!==_0x2961d7;_0x509659=_0x509659[_0x2d449a(0x46f8)])_0x509659[_0x2d449a(0x4cd)]&&_0x3e1406['unshift'](_0x509659[_0x2d449a(0x4cd)]);_0x3e1406[_0x2d449a(0xa21)](_0x2916bc=>_0x3a5819['openNode'](_0x2916bc));}());let _0x5a6760='',_0x4e4da0=0x0,_0x42a097=0x0,_0x490344=0x0,_0x133297=!0x1;try{if(_0x2961d7['__emitTokens'])_0x2961d7[_0x4bb16b(0x227b)](_0x11d1e6,_0x3a5819);else{for(_0x5a9848[_0x4bb16b(0x22c1)][_0x4bb16b(0x46d6)]();;){_0x490344++,_0x133297?_0x133297=!0x1:_0x5a9848[_0x4bb16b(0x22c1)][_0x4bb16b(0x46d6)](),_0x5a9848['matcher']['lastIndex']=_0x42a097;const _0x6f5167=_0x5a9848['matcher']['exec'](_0x11d1e6);if(!_0x6f5167)break;const _0x2f9f0c=_0x12a3a3(_0x11d1e6[_0x4bb16b(0x37b5)](_0x42a097,_0x6f5167['index']),_0x6f5167);_0x42a097=_0x6f5167['index']+_0x2f9f0c;}_0x12a3a3(_0x11d1e6[_0x4bb16b(0x37b5)](_0x42a097));}return _0x3a5819[_0x4bb16b(0x257a)](),_0x2ee557=_0x3a5819[_0x4bb16b(0x1485)](),{'language':_0x1833bf,'value':_0x2ee557,'relevance':_0x4e4da0,'illegal':!0x1,'_emitter':_0x3a5819,'_top':_0x5a9848};}catch(_0x398ecc){if(_0x398ecc[_0x4bb16b(0x4cdf)]&&_0x398ecc['message'][_0x4bb16b(0x2628)]('Illegal'))return{'language':_0x1833bf,'value':_0x40b567(_0x11d1e6),'illegal':!0x0,'relevance':0x0,'_illegalBy':{'message':_0x398ecc['message'],'index':_0x42a097,'context':_0x11d1e6['slice'](_0x42a097-0x64,_0x42a097+0x64),'mode':_0x398ecc[_0x4bb16b(0x2e63)],'resultSoFar':_0x2ee557},'_emitter':_0x3a5819};if(_0x3717b2)return{'language':_0x1833bf,'value':_0x40b567(_0x11d1e6),'illegal':!0x1,'relevance':0x0,'errorRaised':_0x398ecc,'_emitter':_0x3a5819,'_top':_0x5a9848};throw _0x398ecc;}}function _0x5ba5e1(_0x31965a,_0x35a84d){const 
_0xfcd205=_0x32e672;_0x35a84d=_0x35a84d||_0x1b9407[_0xfcd205(0x2217)]||Object['keys'](_0x3fbcc3);const _0x22eaca=function(_0x63ea7f){const _0x2df683=_0xfcd205,_0x292b40={'value':_0x40b567(_0x63ea7f),'illegal':!0x1,'relevance':0x0,'_top':_0x2c5ed7,'_emitter':new _0x1b9407[(_0x2df683(0x4712))](_0x1b9407)};return _0x292b40[_0x2df683(0x2d5f)][_0x2df683(0x4001)](_0x63ea7f),_0x292b40;}(_0x31965a),_0x31151f=_0x35a84d[_0xfcd205(0x1465)](_0x4e7c77)['filter'](_0x336aa0)[_0xfcd205(0x4833)](_0x40dafd=>_0x5c49b9(_0x40dafd,_0x31965a,!0x1));_0x31151f[_0xfcd205(0x2767)](_0x22eaca);const _0x55f20f=_0x31151f['sort']((_0x48c861,_0xbb1b5)=>{const _0x3ca8b5=_0xfcd205;if(_0x48c861[_0x3ca8b5(0x48b6)]!==_0xbb1b5['relevance'])return _0xbb1b5['relevance']-_0x48c861[_0x3ca8b5(0x48b6)];if(_0x48c861[_0x3ca8b5(0x286d)]&&_0xbb1b5[_0x3ca8b5(0x286d)]){if(_0x4e7c77(_0x48c861[_0x3ca8b5(0x286d)])[_0x3ca8b5(0x1f92)]===_0xbb1b5['language'])return 0x1;if(_0x4e7c77(_0xbb1b5[_0x3ca8b5(0x286d)])[_0x3ca8b5(0x1f92)]===_0x48c861[_0x3ca8b5(0x286d)])return-0x1;}return 0x0;}),[_0x2ebdc2,_0x14205a]=_0x55f20f,_0x486560=_0x2ebdc2;return _0x486560[_0xfcd205(0x592)]=_0x14205a,_0x486560;}function _0x26670b(_0x37dd8d){const _0x5c624d=_0x32e672;let _0x20d4ff=null;const _0x24ac07=function(_0xaa6ee2){const _0x13f0e9=a0_0x11e7;let _0x5892fc=_0xaa6ee2[_0x13f0e9(0x1cea)]+'\x20';_0x5892fc+=_0xaa6ee2['parentNode']?_0xaa6ee2[_0x13f0e9(0x26d8)][_0x13f0e9(0x1cea)]:'';const _0x3554ba=_0x1b9407[_0x13f0e9(0x331d)][_0x13f0e9(0x198d)](_0x5892fc);if(_0x3554ba){const _0x2561e5=_0x4e7c77(_0x3554ba[0x1]);return _0x2561e5||_0x4d2d53[_0x13f0e9(0x741)]('{}',_0x3554ba[0x1]),_0x2561e5?_0x3554ba[0x1]:'no-highlight';}return 
_0x5892fc[_0x13f0e9(0x1117)](/\s+/)[_0x13f0e9(0x5144)](_0x1d49dd=>_0x1e535e(_0x1d49dd)||_0x4e7c77(_0x1d49dd));}(_0x37dd8d);if(_0x1e535e(_0x24ac07))return;if(_0x397de0(_0x5c624d(0x4d9a),{'el':_0x37dd8d,'language':_0x24ac07}),_0x37dd8d[_0x5c624d(0x37e4)][_0x5c624d(0x4842)])return;if(_0x37dd8d[_0x5c624d(0x4c3e)][_0x5c624d(0x1b19)]>0x0&&(_0x1b9407[_0x5c624d(0x1dab)],_0x1b9407[_0x5c624d(0x267b)]))throw new _0xceb3d4(_0x5c624d(0x4327),_0x37dd8d[_0x5c624d(0x3cdf)]);_0x20d4ff=_0x37dd8d;const _0x5ac65e=_0x20d4ff[_0x5c624d(0x2f80)],_0x1535ad=_0x24ac07?_0x1459d7(_0x5ac65e,{'language':_0x24ac07,'ignoreIllegals':!0x0}):_0x5ba5e1(_0x5ac65e);_0x37dd8d['innerHTML']=_0x1535ad[_0x5c624d(0x4fe9)],_0x37dd8d[_0x5c624d(0x37e4)][_0x5c624d(0x4842)]=_0x5c624d(0x1df8),function(_0x263858,_0x50cedf,_0x28d095){const _0x53ecdc=_0x5c624d,_0x202ec2=_0x50cedf&&_0x17fec6[_0x50cedf]||_0x28d095;_0x263858[_0x53ecdc(0x1745)][_0x53ecdc(0x362c)](_0x53ecdc(0x170c)),_0x263858[_0x53ecdc(0x1745)][_0x53ecdc(0x362c)](_0x53ecdc(0xd06)+_0x202ec2);}(_0x37dd8d,_0x24ac07,_0x1535ad[_0x5c624d(0x286d)]),_0x37dd8d[_0x5c624d(0xa34)]={'language':_0x1535ad[_0x5c624d(0x286d)],'re':_0x1535ad[_0x5c624d(0x48b6)],'relevance':_0x1535ad[_0x5c624d(0x48b6)]},_0x1535ad[_0x5c624d(0x592)]&&(_0x37dd8d[_0x5c624d(0x592)]={'language':_0x1535ad[_0x5c624d(0x592)]['language'],'relevance':_0x1535ad['secondBest'][_0x5c624d(0x48b6)]}),_0x397de0(_0x5c624d(0x432b),{'el':_0x37dd8d,'result':_0x1535ad,'text':_0x5ac65e});}let _0x4922b8=!0x1;function _0x428843(){const _0x37b541=_0x32e672;if('loading'===document[_0x37b541(0x3a3f)])return void(_0x4922b8=!0x0);document[_0x37b541(0x492f)](_0x1b9407[_0x37b541(0x519d)])[_0x37b541(0xa21)](_0x26670b);}function _0x4e7c77(_0x3750fd){const _0x1ca85f=_0x32e672;return _0x3750fd=(_0x3750fd||'')[_0x1ca85f(0x6e8)](),_0x3fbcc3[_0x3750fd]||_0x3fbcc3[_0x17fec6[_0x3750fd]];}function _0x469efb(_0xb86bce,{languageName:_0xf0e122}){const _0x166886=_0x32e672;_0x166886(0x2431)==typeof 
_0xb86bce&&(_0xb86bce=[_0xb86bce]),_0xb86bce['forEach'](_0xde5641=>{const _0x4a0952=_0x166886;_0x17fec6[_0xde5641[_0x4a0952(0x6e8)]()]=_0xf0e122;});}function _0x336aa0(_0xcd4cc5){const _0x485491=_0x32e672,_0x472045=_0x4e7c77(_0xcd4cc5);return _0x472045&&!_0x472045[_0x485491(0x8ca)];}function _0x397de0(_0x40d077,_0xdafb00){const _0xd0fba8=_0x32e672,_0x35053d=_0x40d077;_0x18f19c[_0xd0fba8(0xa21)](function(_0x232174){_0x232174[_0x35053d]&&_0x232174[_0x35053d](_0xdafb00);});}_0x32e672(0x1daa)!=typeof window&&window[_0x32e672(0xc61)]&&window[_0x32e672(0xc61)]('DOMContentLoaded',function(){_0x4922b8&&_0x428843();},!0x1),Object[_0x32e672(0x4e14)](_0x2ba124,{'highlight':_0x1459d7,'highlightAuto':_0x5ba5e1,'highlightAll':_0x428843,'highlightElement':_0x26670b,'highlightBlock':function(_0x311617){const _0x2ba7d1=_0x32e672;return _0x4e297b(_0x2ba7d1(0x3eca),'highlightBlock\x20will\x20be\x20removed\x20entirely\x20in\x20v12.0'),_0x4e297b('10.7.0',_0x2ba7d1(0x2221)),_0x26670b(_0x311617);},'configure':function(_0x2d1951){_0x1b9407=_0x11f701(_0x1b9407,_0x2d1951);},'initHighlighting':()=>{const _0x602906=_0x32e672;_0x428843(),_0x4e297b(_0x602906(0x1378),_0x602906(0x4adb));},'initHighlightingOnLoad':function(){const _0x4acdaf=_0x32e672;_0x428843(),_0x4e297b(_0x4acdaf(0x1378),_0x4acdaf(0xe65));},'registerLanguage':function(_0x4a3232,_0x542919){const _0x4945fe=_0x32e672;let _0x1960d3=null;try{_0x1960d3=_0x542919(_0x2ba124);}catch(_0x324f5f){if(_0x5def54(_0x4945fe(0x4810)[_0x4945fe(0x741)]('{}',_0x4a3232)),!_0x3717b2)throw _0x324f5f;_0x5def54(_0x324f5f),_0x1960d3=_0x2c5ed7;}_0x1960d3[_0x4945fe(0x11d8)]||(_0x1960d3[_0x4945fe(0x11d8)]=_0x4a3232),_0x3fbcc3[_0x4a3232]=_0x1960d3,_0x1960d3[_0x4945fe(0x45f4)]=_0x542919[_0x4945fe(0x39e8)](null,_0x2ba124),_0x1960d3['aliases']&&_0x469efb(_0x1960d3['aliases'],{'languageName':_0x4a3232});},'unregisterLanguage':function(_0x403e49){const _0x27f0d1=_0x32e672;delete _0x3fbcc3[_0x403e49];for(const _0x29d13e of 
Object[_0x27f0d1(0x1ea9)](_0x17fec6))_0x17fec6[_0x29d13e]===_0x403e49&&delete _0x17fec6[_0x29d13e];},'listLanguages':function(){return Object['keys'](_0x3fbcc3);},'getLanguage':_0x4e7c77,'registerAliases':_0x469efb,'autoDetection':_0x336aa0,'inherit':_0x11f701,'addPlugin':function(_0x8a82a5){const _0x4c026b=_0x32e672;!function(_0x7b22c1){const _0xb1d98e=a0_0x11e7;_0x7b22c1[_0xb1d98e(0x30a8)]&&!_0x7b22c1['before:highlightElement']&&(_0x7b22c1['before:highlightElement']=_0x498b23=>{const _0x3c65e5=_0xb1d98e;_0x7b22c1[_0x3c65e5(0x30a8)](Object[_0x3c65e5(0x4e14)]({'block':_0x498b23['el']},_0x498b23));}),_0x7b22c1[_0xb1d98e(0x34d0)]&&!_0x7b22c1[_0xb1d98e(0x432b)]&&(_0x7b22c1[_0xb1d98e(0x432b)]=_0x1c93de=>{_0x7b22c1['after:highlightBlock'](Object['assign']({'block':_0x1c93de['el']},_0x1c93de));});}(_0x8a82a5),_0x18f19c[_0x4c026b(0x1715)](_0x8a82a5);},'removePlugin':function(_0x8936b8){const _0x12ba3d=_0x32e672,_0x166f44=_0x18f19c['indexOf'](_0x8936b8);-0x1!==_0x166f44&&_0x18f19c[_0x12ba3d(0x4986)](_0x166f44,0x1);}}),_0x2ba124[_0x32e672(0x142f)]=function(){_0x3717b2=!0x1;},_0x2ba124[_0x32e672(0x154f)]=function(){_0x3717b2=!0x0;},_0x2ba124[_0x32e672(0x3f16)]='11.9.0',_0x2ba124[_0x32e672(0x41d2)]={'concat':_0x521e8d,'lookahead':_0x13e767,'either':_0x559db,'optional':_0x484f62,'anyNumberOfTimes':_0xad2530};for(const _0x3cf927 in _0x4f27a7)_0x32e672(0x20c7)==typeof _0x4f27a7[_0x3cf927]&&_0x576890(_0x4f27a7[_0x3cf927]);return Object[_0x32e672(0x4e14)](_0x2ba124,_0x4f27a7),_0x2ba124;},_0x4d311b=_0x17ed34({});_0x4d311b[_0x41950a(0x26b6)]=()=>_0x17ed34({}),_0x44cf3b['exports']=_0x4d311b,_0x4d311b[_0x41950a(0x50fa)]=_0x4d311b,_0x4d311b[_0x41950a(0x3d23)]=_0x4d311b;},0xc97:(_0x1af50d,_0x284ce7,_0x57441f)=>{const _0x56755f=a0_0x11e7;var 
_0x5082e4=_0x57441f(0x20e0);_0x5082e4[_0x56755f(0x181b)]('1c',_0x57441f(0x1757)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x837),_0x57441f(0x14f0)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x88b),_0x57441f(0x13a7)),_0x5082e4[_0x56755f(0x181b)]('actionscript',_0x57441f(0x2382)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2d2c),_0x57441f(0xceb)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x16d8),_0x57441f(0x3c9)),_0x5082e4[_0x56755f(0x181b)]('apache',_0x57441f(0x1ebd)),_0x5082e4[_0x56755f(0x181b)]('applescript',_0x57441f(0x270)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x343c),_0x57441f(0x707)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x204e),_0x57441f(0x134f)),_0x5082e4[_0x56755f(0x181b)]('armasm',_0x57441f(0x1e92)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2655),_0x57441f(0x72)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x27c5),_0x57441f(0x1064)),_0x5082e4[_0x56755f(0x181b)]('aspectj',_0x57441f(0x2709)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x46b6),_0x57441f(0x1156)),_0x5082e4[_0x56755f(0x181b)]('autoit',_0x57441f(0x7e5)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x33c),_0x57441f(0x275)),_0x5082e4[_0x56755f(0x181b)]('awk',_0x57441f(0xa8e)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x1fc3),_0x57441f(0xf64)),_0x5082e4['registerLanguage']('bash',_0x57441f(0x21c1)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3042),_0x57441f(0x1991)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x40d3),_0x57441f(0x1a8d)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3e28),_0x57441f(0x7bc)),_0x5082e4['registerLanguage']('c',_0x57441f(0x2d2)),_0x5082e4['registerLanguage'](_0x56755f(0x5014),_0x57441f(0x19ab)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x65a),_0x57441f(0x25a7)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4b0c),_0x57441f(0x1fb)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x43d4),_0x57441f(0x41a)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2a7c),_0x57441f(0x1d9b)),_0x5082e4[_0x56755f(0x181b)]('clojure-repl',_0x57441f(0x2581)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4f25),_0x57441f(0x7d8)),_0x5082e4[_0x56755f(0x181b)](_0x56755
f(0x1d4f),_0x57441f(0x20ac)),_0x5082e4[_0x56755f(0x181b)]('coq',_0x57441f(0x25fa)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3935),_0x57441f(0x2450)),_0x5082e4['registerLanguage'](_0x56755f(0xf7c),_0x57441f(0x19aa)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3ea9),_0x57441f(0x6dc)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4c9a),_0x57441f(0x21d5)),_0x5082e4[_0x56755f(0x181b)]('csharp',_0x57441f(0x1bd0)),_0x5082e4['registerLanguage'](_0x56755f(0x539),_0x57441f(0x1a9d)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0xdd4),_0x57441f(0x21a4)),_0x5082e4[_0x56755f(0x181b)]('d',_0x57441f(0x1a1b)),_0x5082e4['registerLanguage'](_0x56755f(0x4116),_0x57441f(0x25a)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4d5c),_0x57441f(0x8e2)),_0x5082e4[_0x56755f(0x181b)]('delphi',_0x57441f(0x20fb)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x436c),_0x57441f(0x2194)),_0x5082e4['registerLanguage'](_0x56755f(0x4df3),_0x57441f(0xda8)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3aba),_0x57441f(0x5dc)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x30f7),_0x57441f(0x23ed)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0xf25),_0x57441f(0x2299)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3127),_0x57441f(0x205c)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x16fe),_0x57441f(0xa0a)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x134a),_0x57441f(0x25fb)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x24b2),_0x57441f(0x2094)),_0x5082e4[_0x56755f(0x181b)]('elixir',_0x57441f(0x25a0)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x44ac),_0x57441f(0x1fdb)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4430),_0x57441f(0x1397)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x145e),_0x57441f(0x1104)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x499e),_0x57441f(0x1d94)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3cb8),_0x57441f(0x1ffc)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2f4b),_0x57441f(0x1c78)),_0x5082e4['registerLanguage'](_0x56755f(0x2002),_0x57441f(0x1f48)),_0x5082e4[_0x56755f(0x181b)]('flix',_0x57441f(0x2610)),_0x5082e4[_0x56755f(0x181b)]('fortran',_0x57441f(0x4a5)),_0x
5082e4[_0x56755f(0x181b)]('fsharp',_0x57441f(0x2455)),_0x5082e4[_0x56755f(0x181b)]('gams',_0x57441f(0x1d47)),_0x5082e4['registerLanguage']('gauss',_0x57441f(0xeba)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0xda0),_0x57441f(0x212b)),_0x5082e4['registerLanguage'](_0x56755f(0x190d),_0x57441f(0xce1)),_0x5082e4[_0x56755f(0x181b)]('glsl',_0x57441f(0xec3)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x9c9),_0x57441f(0x4ab)),_0x5082e4[_0x56755f(0x181b)]('go',_0x57441f(0x2631)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4771),_0x57441f(0x1c48)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2b7),_0x57441f(0x598)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2ebf),_0x57441f(0x1d32)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3015),_0x57441f(0xfd)),_0x5082e4['registerLanguage'](_0x56755f(0x469b),_0x57441f(0x11a5)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x1427),_0x57441f(0xc73)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x25e1),_0x57441f(0x17b)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x49aa),_0x57441f(0x102b)),_0x5082e4[_0x56755f(0x181b)]('hsp',_0x57441f(0xfa8)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x35ba),_0x57441f(0xd37)),_0x5082e4[_0x56755f(0x181b)]('hy',_0x57441f(0x1250)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x32f7),_0x57441f(0x1b39)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x1a21),_0x57441f(0x5fd)),_0x5082e4['registerLanguage'](_0x56755f(0x43c),_0x57441f(0x1193)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3160),_0x57441f(0x1621)),_0x5082e4[_0x56755f(0x181b)]('java',_0x57441f(0x131f)),_0x5082e4['registerLanguage']('javascript',_0x57441f(0x1793)),_0x5082e4[_0x56755f(0x181b)]('jboss-cli',_0x57441f(0x15f7)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3289),_0x57441f(0x26d)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x1f3f),_0x57441f(0x22aa)),_0x5082e4['registerLanguage']('julia-repl',_0x57441f(0x2142)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3b12),_0x57441f(0xb16)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x977),_0x57441f(0x17c9)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0xa79),_0x57441f(0xd9b)),_0x5082e4[_0
x56755f(0x181b)](_0x56755f(0x34cd),_0x57441f(0x1c04)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x6f1),_0x57441f(0x1a2f)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x101e),_0x57441f(0x208a)),_0x5082e4['registerLanguage'](_0x56755f(0x179f),_0x57441f(0x9bb)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2f02),_0x57441f(0x1acb)),_0x5082e4['registerLanguage']('livescript',_0x57441f(0x171a)),_0x5082e4[_0x56755f(0x181b)]('llvm',_0x57441f(0x18fe)),_0x5082e4[_0x56755f(0x181b)]('lsl',_0x57441f(0x9b0)),_0x5082e4[_0x56755f(0x181b)]('lua',_0x57441f(0xf21)),_0x5082e4['registerLanguage'](_0x56755f(0x13da),_0x57441f(0x1df3)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0xd05),_0x57441f(0x1937)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x1b07),_0x57441f(0x2212)),_0x5082e4['registerLanguage']('maxima',_0x57441f(0x2018)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x19fb),_0x57441f(0x26e1)),_0x5082e4['registerLanguage'](_0x56755f(0x1994),_0x57441f(0x15ce)),_0x5082e4['registerLanguage'](_0x56755f(0x4cd6),_0x57441f(0x204d)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x37f6),_0x57441f(0x3b0)),_0x5082e4['registerLanguage']('perl',_0x57441f(0x3b2)),_0x5082e4['registerLanguage'](_0x56755f(0x35a3),_0x57441f(0x17be)),_0x5082e4[_0x56755f(0x181b)]('monkey',_0x57441f(0x1eaa)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x5040),_0x57441f(0x2419)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4054),_0x57441f(0x2663)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2de9),_0x57441f(0x124b)),_0x5082e4['registerLanguage'](_0x56755f(0x19a0),_0x57441f(0x1c5b)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3259),_0x57441f(0x6d3)),_0x5082e4[_0x56755f(0x181b)]('nix',_0x57441f(0x140)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x11bf),_0x57441f(0x1a11)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3850),_0x57441f(0x8c)),_0x5082e4['registerLanguage'](_0x56755f(0x4cd2),_0x57441f(0x3af)),_0x5082e4['registerLanguage'](_0x56755f(0x3f85),_0x57441f(0x6ff)),_0x5082e4[_0x56755f(0x181b)]('openscad',_0x57441f(0x1a10)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4ab4),_0x57441f
(0xbd4)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x40a2),_0x57441f(0x23cb)),_0x5082e4[_0x56755f(0x181b)]('pf',_0x57441f(0x2285)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x288c),_0x57441f(0x1adc)),_0x5082e4[_0x56755f(0x181b)]('php',_0x57441f(0xc27)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x299b),_0x57441f(0x6be)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2cd2),_0x57441f(0x2350)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x22e0),_0x57441f(0x1af7)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x391b),_0x57441f(0xf8e)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4405),_0x57441f(0x23f0)),_0x5082e4[_0x56755f(0x181b)]('profile',_0x57441f(0x1456)),_0x5082e4['registerLanguage'](_0x56755f(0x1264),_0x57441f(0x1818)),_0x5082e4['registerLanguage']('properties',_0x57441f(0x1dde)),_0x5082e4['registerLanguage'](_0x56755f(0x4575),_0x57441f(0x1c84)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x134b),_0x57441f(0x2161)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3d3e),_0x57441f(0x1f3b)),_0x5082e4[_0x56755f(0x181b)]('python',_0x57441f(0x45d)),_0x5082e4['registerLanguage']('python-repl',_0x57441f(0x11b)),_0x5082e4['registerLanguage']('q',_0x57441f(0x24a8)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x241f),_0x57441f(0x2645)),_0x5082e4['registerLanguage']('r',_0x57441f(0x1fc1)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3f52),_0x57441f(0x192e)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x256a),_0x57441f(0x185e)),_0x5082e4['registerLanguage'](_0x56755f(0x6ba),_0x57441f(0xb6d)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x6a6),_0x57441f(0x175c)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3e6d),_0x57441f(0x19fa)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4d9),_0x57441f(0x163a)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x724),_0x57441f(0x1521)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3e0b),_0x57441f(0x1e4a)),_0x5082e4[_0x56755f(0x181b)]('scala',_0x57441f(0xd79)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x13d1),_0x57441f(0x1864)),_0x5082e4['registerLanguage'](_0x56755f(0x853),_0x57441f(0x18b)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(
0x1114),_0x57441f(0x64b)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x203a),_0x57441f(0x226d)),_0x5082e4['registerLanguage'](_0x56755f(0x28fc),_0x57441f(0x1017)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x10bd),_0x57441f(0x4e)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x1643),_0x57441f(0x1b2f)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x1ca2),_0x57441f(0x18b5)),_0x5082e4['registerLanguage'](_0x56755f(0x124d),_0x57441f(0x13b)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3be5),_0x57441f(0x1ed3)),_0x5082e4[_0x56755f(0x181b)]('stata',_0x57441f(0x9f2)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x26ba),_0x57441f(0x1fc0)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x547),_0x57441f(0x1665)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3e5e),_0x57441f(0x194b)),_0x5082e4['registerLanguage'](_0x56755f(0xd63),_0x57441f(0x5d8)),_0x5082e4['registerLanguage'](_0x56755f(0x462e),_0x57441f(0x96)),_0x5082e4[_0x56755f(0x181b)]('yaml',_0x57441f(0x15d4)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3619),_0x57441f(0x22ee)),_0x5082e4['registerLanguage'](_0x56755f(0x44a1),_0x57441f(0x4f8)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x115c),_0x57441f(0x26a0)),_0x5082e4[_0x56755f(0x181b)]('tp',_0x57441f(0x3db)),_0x5082e4[_0x56755f(0x181b)]('twig',_0x57441f(0xde)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x829),_0x57441f(0x21c0)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x20d6),_0x57441f(0x24f5)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x4342),_0x57441f(0x22e0)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x2cd9),_0x57441f(0x4b0)),_0x5082e4['registerLanguage'](_0x56755f(0x65e),_0x57441f(0xa4)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x3b1d),_0x57441f(0x2039)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x45db),_0x57441f(0x2211)),_0x5082e4[_0x56755f(0x181b)]('vim',_0x57441f(0x22cb)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x27fb),_0x57441f(0x2487)),_0x5082e4[_0x56755f(0x181b)](_0x56755f(0x455),_0x57441f(0x13cd)),_0x5082e4[_0x56755f(0x181b)]('x86asm',_0x57441f(0x1d5a)),_0x5082e4['registerLanguage']('xl',_0x57441f(0x21e3)),_0x5082e4['register
Language'](_0x56755f(0x685),_0x57441f(0xbeb)),_0x5082e4['registerLanguage'](_0x56755f(0xd83),_0x57441f(0x2229)),_0x5082e4[_0x56755f(0x50fa)]=_0x5082e4,_0x5082e4[_0x56755f(0x3d23)]=_0x5082e4,_0x1af50d[_0x56755f(0x474c)]=_0x5082e4;},0x1757:_0x1f66e2=>{const _0x3b4c33=a0_0x11e7;_0x1f66e2[_0x3b4c33(0x474c)]=function(_0xcf525f){const _0x1d99a2=_0x3b4c33,_0x3e9c0e=_0x1d99a2(0x1172),_0x3aa331=_0x1d99a2(0xd44),_0x4a64d9=_0x1d99a2(0x12ef),_0x56dd03=_0xcf525f[_0x1d99a2(0x46a1)](_0xcf525f['NUMBER_MODE']),_0x11c325={'className':_0x1d99a2(0x2431),'begin':_0x1d99a2(0x335b),'end':_0x1d99a2(0x17be),'contains':[{'begin':'\x22\x22'}]},_0x63aa56={'begin':'\x27','end':'\x27','excludeBegin':!0x0,'excludeEnd':!0x0,'contains':[{'className':'number','begin':'\x5cd{4}([\x5c.\x5c\x5c/:-]?\x5cd{2}){0,5}'}]},_0x29c9f4=_0xcf525f[_0x1d99a2(0x46a1)](_0xcf525f['C_LINE_COMMENT_MODE']);return{'name':_0x1d99a2(0x3342),'case_insensitive':!0x0,'keywords':{'$pattern':_0x3e9c0e,'keyword':_0x3aa331,'built_in':_0x1d99a2(0x48e1),'class':_0x1d99a2(0x1439),'type':_0x1d99a2(0x18eb),'literal':_0x4a64d9},'contains':[{'className':_0x1d99a2(0x5153),'begin':'#|&','end':'$','keywords':{'$pattern':_0x3e9c0e,'keyword':_0x3aa331+'загрузитьизфайла\x20вебклиент\x20вместо\x20внешнеесоединение\x20клиент\x20конецобласти\x20мобильноеприложениеклиент\x20мобильноеприложениесервер\x20наклиенте\x20наклиентенасервере\x20наклиентенасерверебезконтекста\x20насервере\x20насерверебезконтекста\x20область\x20перед\x20после\x20сервер\x20толстыйклиентобычноеприложение\x20толстыйклиентуправляемоеприложение\x20тонкийклиент\x20'},'contains':[_0x29c9f4]},{'className':_0x1d99a2(0x14b2),'variants':[{'begin':_0x1d99a2(0x291f),'end':'\x5c)','keywords':_0x1d99a2(0xc4f)},{'begin':_0x1d99a2(0x1da5),'keywords':_0x1d99a2(0xf97)}],'contains':[{'begin':'\x5c(','end':'\x5c)','endsParent':!0x0,'contains':[{'className':_0x1d99a2(0xddd),'begin':_0x3e9c0e,'end':',','excludeEnd':!0x0,'endsWithParent':!0x0,'keywords':{'$pattern':_0x3e9c0e,'keyword':_0x1d99a2(0
xbd3),'literal':_0x4a64d9},'contains':[_0x56dd03,_0x11c325,_0x63aa56]},_0x29c9f4]},_0xcf525f[_0x1d99a2(0x46a1)](_0xcf525f[_0x1d99a2(0x2029)],{'begin':_0x3e9c0e})]},_0x29c9f4,{'className':'symbol','begin':'~','end':_0x1d99a2(0x2cab),'excludeEnd':!0x0},_0x56dd03,_0x11c325,_0x63aa56]};};},0x14f0:_0x494c9c=>{const _0x2e80aa=a0_0x11e7;_0x494c9c[_0x2e80aa(0x474c)]=function(_0xc644aa){const _0x4da4f4=_0x2e80aa,_0x5f0de1=_0xc644aa['regex'],_0x2a5aeb=_0xc644aa[_0x4da4f4(0x4e4f)](/;/,/$/);return{'name':'Augmented\x20Backus-Naur\x20Form','illegal':/[!@#$^&',?+~`|:]/,'keywords':['ALPHA','BIT','CHAR','CR',_0x4da4f4(0x1afb),_0x4da4f4(0x923),_0x4da4f4(0x31ff),_0x4da4f4(0x6b8),_0x4da4f4(0x14b5),_0x4da4f4(0x800),'LF',_0x4da4f4(0x41c),_0x4da4f4(0x2645),'SP',_0x4da4f4(0x295b),_0x4da4f4(0x8f9)],'contains':[{'scope':_0x4da4f4(0x1182),'match':/=\/?/},{'scope':'attribute','match':_0x5f0de1[_0x4da4f4(0x1d1d)](/^[a-zA-Z][a-zA-Z0-9-]*/,/(?=\s*=)/)},_0x2a5aeb,{'scope':_0x4da4f4(0x239b),'match':/%b[0-1]+(-[0-1]+|(\.[0-1]+)+)?/},{'scope':_0x4da4f4(0x239b),'match':/%d[0-9]+(-[0-9]+|(\.[0-9]+)+)?/},{'scope':_0x4da4f4(0x239b),'match':/%x[0-9A-F]+(-[0-9A-F]+|(\.[0-9A-F]+)+)?/},{'scope':'symbol','match':/%[si](?=".*")/},_0xc644aa['QUOTE_STRING_MODE'],_0xc644aa[_0x4da4f4(0x30be)]]};};},0x13a7:_0x156b73=>{_0x156b73['exports']=function(_0x507166){const 
_0x547a01=a0_0x11e7,_0x10ac9e=_0x507166[_0x547a01(0x41d2)],_0x4f48ca=['GET',_0x547a01(0x153c),_0x547a01(0x2b3e),_0x547a01(0x97e),'DELETE','CONNECT','OPTIONS','PATCH','TRACE'];return{'name':'Apache\x20Access\x20Log','contains':[{'className':_0x547a01(0x4a80),'begin':/^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(:\d{1,5})?\b/,'relevance':0x5},{'className':_0x547a01(0x4a80),'begin':/\b\d+\b/,'relevance':0x0},{'className':_0x547a01(0x2431),'begin':_0x10ac9e[_0x547a01(0x1d1d)](/"/,_0x10ac9e[_0x547a01(0x583)](..._0x4f48ca)),'end':/"/,'keywords':_0x4f48ca,'illegal':/\n/,'relevance':0x5,'contains':[{'begin':/HTTP\/[12]\.\d'/,'relevance':0x5}]},{'className':_0x547a01(0x2431),'begin':/\[\d[^\]\n]{8,}\]/,'illegal':/\n/,'relevance':0x1},{'className':_0x547a01(0x2431),'begin':/\[/,'end':/\]/,'illegal':/\n/,'relevance':0x0},{'className':'string','begin':/"Mozilla\/\d\.\d \(/,'end':/"/,'illegal':/\n/,'relevance':0x3},{'className':_0x547a01(0x2431),'begin':/"/,'end':/"/,'illegal':/\n/,'relevance':0x0}]};};},0x2382:_0x1edc36=>{const _0x1edc9b=a0_0x11e7;_0x1edc36[_0x1edc9b(0x474c)]=function(_0x4fdf07){const 
_0xa3af1a=_0x1edc9b,_0x40a192=_0x4fdf07[_0xa3af1a(0x41d2)],_0x5554dd=/[a-zA-Z_$][a-zA-Z0-9_$]*/,_0x3c0b1b=_0x40a192['concat'](_0x5554dd,_0x40a192[_0xa3af1a(0x1d1d)]('(\x5c.',_0x5554dd,')*')),_0x148e0b={'className':_0xa3af1a(0x132b),'begin':/[.]{3}/,'end':_0x5554dd,'relevance':0xa};return{'name':'ActionScript','aliases':['as'],'keywords':{'keyword':['as',_0xa3af1a(0x4e10),_0xa3af1a(0x2e7e),_0xa3af1a(0x31a3),_0xa3af1a(0x1390),_0xa3af1a(0xc01),'continue',_0xa3af1a(0x3d23),_0xa3af1a(0x5be),'do',_0xa3af1a(0x41aa),_0xa3af1a(0x2f9e),_0xa3af1a(0x3d4),'extends',_0xa3af1a(0x27e4),_0xa3af1a(0x37b2),_0xa3af1a(0x3c19),_0xa3af1a(0x14b2),'get','if','implements','import','in',_0xa3af1a(0x478e),_0xa3af1a(0xf3c),_0xa3af1a(0x321b),_0xa3af1a(0x2dad),'is',_0xa3af1a(0x37f7),_0xa3af1a(0x50dc),'new',_0xa3af1a(0x35a7),_0xa3af1a(0x4bd0),'private','protected',_0xa3af1a(0x39ce),_0xa3af1a(0xdfd),_0xa3af1a(0x1fa),_0xa3af1a(0x2c7c),_0xa3af1a(0x2cc),_0xa3af1a(0x857),_0xa3af1a(0x138f),_0xa3af1a(0x383),_0xa3af1a(0x422b),_0xa3af1a(0x3368),_0xa3af1a(0x84a),_0xa3af1a(0x469d),_0xa3af1a(0x27d6),_0xa3af1a(0x552),_0xa3af1a(0x2aa7)],'literal':[_0xa3af1a(0x4022),_0xa3af1a(0x3984),_0xa3af1a(0x1582),_0xa3af1a(0x1daa)]},'contains':[_0x4fdf07[_0xa3af1a(0xa4c)],_0x4fdf07[_0xa3af1a(0x291b)],_0x4fdf07['C_LINE_COMMENT_MODE'],_0x4fdf07['C_BLOCK_COMMENT_MODE'],_0x4fdf07[_0xa3af1a(0xd12)],{'match':[/\bpackage/,/\s+/,_0x3c0b1b],'className':{0x1:_0xa3af1a(0x1357),0x3:'title.class'}},{'match':[/\b(?:class|interface|extends|implements)/,/\s+/,_0x5554dd],'className':{0x1:_0xa3af1a(0x1357),0x3:_0xa3af1a(0x19e4)}},{'className':_0xa3af1a(0x5153),'beginKeywords':_0xa3af1a(0x5298),'end':/;/,'keywords':{'keyword':_0xa3af1a(0x5298)}},{'beginKeywords':_0xa3af1a(0x14b2),'end':/[{;]/,'excludeEnd':!0x0,'illegal':/\S/,'contains':[_0x4fdf07[_0xa3af1a(0x46a1)](_0x4fdf07[_0xa3af1a(0x2029)],{'className':'title.function'}),{'className':_0xa3af1a(0xddd),'begin':/\(/,'end':/\)/,'contains':[_0x4fdf07[_0xa3af1a(0xa4c)],_0x4fdf07[_0xa3af1a(0x291
b)],_0x4fdf07['C_LINE_COMMENT_MODE'],_0x4fdf07[_0xa3af1a(0x23fe)],_0x148e0b]},{'begin':_0x40a192[_0xa3af1a(0x1d1d)](/:\s*/,/([*]|[a-zA-Z_$][a-zA-Z0-9_$]*)/)}]},_0x4fdf07[_0xa3af1a(0x3136)]],'illegal':/#/};};},0xceb:_0x48906e=>{const _0x2003d4=a0_0x11e7;_0x48906e[_0x2003d4(0x474c)]=function(_0x225e50){const _0x461649=_0x2003d4,_0xa0bb82=_0x461649(0x112e),_0x49a728=_0x461649(0x709)+_0xa0bb82,_0xe4efa8=_0x461649(0x4cd5)+(_0xa0bb82+_0x461649(0x134d)+_0x49a728+')?')+'|'+(_0xa0bb82+_0x461649(0x16b9)+_0xa0bb82+_0x461649(0x225e)+_0x49a728+')?')+')',_0x3ea7aa=_0x461649(0x2854),_0x25ae36=_0x461649(0x1b56),_0x178bf9=_0x225e50['COMMENT']('--','$'),_0x11d5fe={'begin':_0x461649(0x1326),'end':'\x5cs*(:=|;|\x5c)|=>|$)','illegal':_0x25ae36,'contains':[{'beginKeywords':_0x461649(0x2db6),'endsParent':!0x0},{'className':_0x461649(0x1357),'beginKeywords':_0x461649(0x22f)},{'className':'type','begin':_0x3ea7aa,'endsParent':!0x0,'relevance':0x0}]};return{'name':_0x461649(0xb72),'case_insensitive':!0x0,'keywords':{'keyword':['abort',_0x461649(0x3d4),_0x461649(0x4321),'return',_0x461649(0xbe0),_0x461649(0x3e5b),_0x461649(0xc1a),_0x461649(0x78b),'abstract',_0x461649(0x2681),_0x461649(0x3165),_0x461649(0x1a87),_0x461649(0x3fc9),'access',_0x461649(0x86c),'of','separate',_0x461649(0x4e86),_0x461649(0x4c7b),'or','some',_0x461649(0xc36),_0x461649(0x264a),_0x461649(0x34fd),'and',_0x461649(0x3c19),_0x461649(0x3ab5),'synchronized',_0x461649(0x26f6),_0x461649(0x14b2),_0x461649(0x1f58),'at',_0x461649(0x88a),_0x461649(0xff8),_0x461649(0x4bd0),_0x461649(0x3cef),_0x461649(0x42fa),_0x461649(0x139c),_0x461649(0x21fc),_0x461649(0x315c),_0x461649(0x4f1a),_0x461649(0x4ef4),_0x461649(0xaf5),'if',_0x461649(0x1285),_0x461649(0xcfc),'case','in','protected',_0x461649(0x30ea),_0x461649(0x321b),'is',_0x461649(0x2c1e),_0x461649(0x84a),_0x461649(0x45e8),_0x461649(0x51f),_0x461649(0x2943),_0x461649(0x42f1),'record','when',_0x461649(0x499),_0x461649(0x110b),_0x461649(0x22e5),_0x461649(0x552),_0x461649(0xe1b),_0x461649(0
x5de),_0x461649(0x2aa7),'do','mod',_0x461649(0x4d49),_0x461649(0x32a6)],'literal':[_0x461649(0x23a3),'False']},'contains':[_0x178bf9,{'className':_0x461649(0x2431),'begin':/"/,'end':/"/,'contains':[{'begin':/""/,'relevance':0x0}]},{'className':_0x461649(0x2431),'begin':/'.'/},{'className':_0x461649(0x4a80),'begin':_0xe4efa8,'relevance':0x0},{'className':_0x461649(0x239b),'begin':'\x27'+_0x3ea7aa},{'className':_0x461649(0x4685),'begin':_0x461649(0x4393),'end':_0x461649(0x4038),'keywords':_0x461649(0x102d),'excludeBegin':!0x0,'excludeEnd':!0x0,'illegal':_0x25ae36},{'begin':_0x461649(0x5258),'end':'(\x5cbis|\x5cbwith|\x5cbrenames|\x5c)\x5cs*;)','keywords':_0x461649(0x4913),'returnBegin':!0x0,'contains':[_0x178bf9,{'className':_0x461649(0x4685),'begin':_0x461649(0x3c98),'end':'(\x5c(|\x5cs+|$)','excludeBegin':!0x0,'excludeEnd':!0x0,'illegal':_0x25ae36},_0x11d5fe,{'className':_0x461649(0xcfc),'begin':_0x461649(0x1e3a),'end':_0x461649(0x4f32),'keywords':_0x461649(0xdfd),'excludeBegin':!0x0,'excludeEnd':!0x0,'endsParent':!0x0,'illegal':_0x25ae36}]},{'className':_0x461649(0xcfc),'begin':_0x461649(0x1f55),'end':_0x461649(0x16cc),'keywords':_0x461649(0xcfc),'excludeBegin':!0x0,'illegal':_0x25ae36},_0x11d5fe]};};},0x3c9:_0x3fd9da=>{_0x3fd9da['exports']=function(_0xcee305){const _0x265304=a0_0x11e7,_0x5f4908={'className':_0x265304(0x43a),'begin':_0x265304(0x14e4)},_0x28d4fd={'className':_0x265304(0x239b),'begin':_0x265304(0x4946)},_0x554fa9={'className':_0x265304(0x1357),'begin':'<','end':'>','contains':[_0x5f4908,_0x28d4fd]};return 
_0x5f4908['contains']=[_0x554fa9],_0x28d4fd[_0x265304(0x2b31)]=[_0x554fa9],{'name':_0x265304(0x35f2),'aliases':[_0x265304(0x637)],'keywords':[_0x265304(0x3c19),'in|0','break',_0x265304(0x16d9),_0x265304(0x552),_0x265304(0x2f57),_0x265304(0xdfd),'if',_0x265304(0x3d4),_0x265304(0x2e7e),'switch',_0x265304(0x37f7),'is',_0x265304(0x1171),'or','and','xor',_0x265304(0xc1a),_0x265304(0x457a),'in','inout|10',_0x265304(0x3ab5),'override',_0x265304(0xce7),_0x265304(0x4ef4),_0x265304(0x39ce),'const',_0x265304(0x2b2a),_0x265304(0x27e4),_0x265304(0x2b1c),_0x265304(0x2f0b),_0x265304(0x4d7f),_0x265304(0x44d8),_0x265304(0xc83),'funcdef',_0x265304(0x138f),_0x265304(0x2cc),_0x265304(0x331),'from',_0x265304(0x321b),_0x265304(0x444f),_0x265304(0x422b),_0x265304(0x31a3),_0x265304(0xc14),'explicit',_0x265304(0x227a)],'illegal':_0x265304(0x37ef),'contains':[{'className':_0x265304(0x2431),'begin':'\x27','end':'\x27','illegal':'\x5cn','contains':[_0xcee305[_0x265304(0x4a76)]],'relevance':0x0},{'className':'string','begin':_0x265304(0xb00),'end':_0x265304(0xb00)},{'className':_0x265304(0x2431),'begin':'\x22','end':'\x22','illegal':'\x5cn','contains':[_0xcee305[_0x265304(0x4a76)]],'relevance':0x0},_0xcee305['C_LINE_COMMENT_MODE'],_0xcee305[_0x265304(0x23fe)],{'className':'string','begin':_0x265304(0x46b1),'end':'\x5c]'},{'beginKeywords':'interface\x20namespace','end':/\{/,'illegal':_0x265304(0x106e),'contains':[{'className':'symbol','begin':_0x265304(0x29c3)}]},{'beginKeywords':_0x265304(0x1390),'end':/\{/,'illegal':'[;.\x5c-]','contains':[{'className':'symbol','begin':_0x265304(0x29c3),'contains':[{'begin':_0x265304(0x3e3e),'contains':[{'className':_0x265304(0x239b),'begin':_0x265304(0x29c3)}]}]}]},_0x5f4908,_0x28d4fd,{'className':_0x265304(0x2706),'begin':_0x265304(0x3c71)},{'className':_0x265304(0x4a80),'relevance':0x0,'begin':_0x265304(0x1e64)}]};};},0x1ebd:_0x2cfc3b=>{const _0xd4c219=a0_0x11e7;_0x2cfc3b[_0xd4c219(0x474c)]=function(_0x5e0b7c){const 
_0x126e88=_0xd4c219,_0x5a4b0d={'className':_0x126e88(0x4a80),'begin':/\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(:\d{1,5})?/};return{'name':_0x126e88(0x316),'aliases':[_0x126e88(0x551)],'case_insensitive':!0x0,'contains':[_0x5e0b7c[_0x126e88(0x2bbe)],{'className':'section','begin':/<\/?/,'end':/>/,'contains':[_0x5a4b0d,{'className':'number','begin':/:\d{1,5}/},_0x5e0b7c['inherit'](_0x5e0b7c[_0x126e88(0x291b)],{'relevance':0x0})]},{'className':_0x126e88(0x263f),'begin':/\w+/,'relevance':0x0,'keywords':{'_':[_0x126e88(0xd8d),_0x126e88(0x3652),_0x126e88(0x3c6),_0x126e88(0x248a),_0x126e88(0xd16),_0x126e88(0x3eb1),_0x126e88(0x8c8),_0x126e88(0x124e),'sethandler',_0x126e88(0x231f),_0x126e88(0x2756),_0x126e88(0x20b6),_0x126e88(0x17f8),_0x126e88(0x4452),_0x126e88(0x3ef6),_0x126e88(0x15ae)]},'starts':{'end':/$/,'relevance':0x0,'keywords':{'literal':_0x126e88(0x195c)},'contains':[{'className':_0x126e88(0x5153),'begin':/\s\[/,'end':/\]$/},{'className':_0x126e88(0x3362),'begin':/[\$%]\{/,'end':/\}/,'contains':[_0x126e88(0x4454),{'className':_0x126e88(0x4a80),'begin':/[$%]\d+/}]},_0x5a4b0d,{'className':'number','begin':/\b\d+/},_0x5e0b7c['QUOTE_STRING_MODE']]}}],'illegal':/\S/};};},0x270:_0x26bfe2=>{const _0x2d0b68=a0_0x11e7;_0x26bfe2[_0x2d0b68(0x474c)]=function(_0x5dd212){const 
_0x36bd65=_0x2d0b68,_0x245ff5=_0x5dd212[_0x36bd65(0x41d2)],_0x19dad7=_0x5dd212['inherit'](_0x5dd212[_0x36bd65(0x291b)],{'illegal':null}),_0x2e3a63={'className':'params','begin':/\(/,'end':/\)/,'contains':[_0x36bd65(0x4454),_0x5dd212['C_NUMBER_MODE'],_0x19dad7]},_0x4d2eb2=_0x5dd212[_0x36bd65(0x4e4f)](/--/,/$/),_0x3173ff=[_0x4d2eb2,_0x5dd212[_0x36bd65(0x4e4f)](/\(\*/,/\*\)/,{'contains':[_0x36bd65(0x4454),_0x4d2eb2]}),_0x5dd212['HASH_COMMENT_MODE']];return{'name':_0x36bd65(0x1a3d),'aliases':[_0x36bd65(0x2a71)],'keywords':{'keyword':_0x36bd65(0x2499),'literal':_0x36bd65(0x377f),'built_in':_0x36bd65(0x2683)},'contains':[_0x19dad7,_0x5dd212[_0x36bd65(0xd12)],{'className':_0x36bd65(0x43a),'begin':_0x245ff5[_0x36bd65(0x1d1d)](/\b/,_0x245ff5[_0x36bd65(0x583)](/clipboard info/,/the clipboard/,/info for/,/list (disks|folder)/,/mount volume/,/path to/,/(close|open for) access/,/(get|set) eof/,/current date/,/do shell script/,/get volume settings/,/random number/,/set volume/,/system attribute/,/system info/,/time to GMT/,/(load|run|store) script/,/scripting components/,/ASCII (character|number)/,/localized string/,/choose (application|color|file|file name|folder|from list|remote application|URL)/,/display (alert|dialog)/),/\b/)},{'className':'built_in','begin':/^\s*return\b/},{'className':_0x36bd65(0x2706),'begin':/\b(text item delimiters|current application|missing value)\b/},{'className':_0x36bd65(0x1357),'begin':_0x245ff5['concat'](/\b/,_0x245ff5[_0x36bd65(0x583)](/apart from/,/aside from/,/instead of/,/out of/,/greater than/,/isn't|(doesn't|does not) (equal|come before|come after|contain)/,/(greater|less) than( or equal)?/,/(starts?|ends|begins?) 
with/,/contained by/,/comes (before|after)/,/a (ref|reference)/,/POSIX (file|path)/,/(date|time) string/,/quoted form/),/\b/)},{'beginKeywords':'on','illegal':/[${=;\n]/,'contains':[_0x5dd212['UNDERSCORE_TITLE_MODE'],_0x2e3a63]},..._0x3173ff],'illegal':/\/\/|->|=>|\[\[/};};},0x707:_0x150f82=>{const _0xaad434=a0_0x11e7;_0x150f82[_0xaad434(0x474c)]=function(_0x6cc65e){const _0x56cda0=_0xaad434,_0x48f9ab=_0x56cda0(0x1574),_0x4a8976={'keyword':['if','for',_0x56cda0(0x552),_0x56cda0(0x469d),'new',_0x56cda0(0x14b2),'do',_0x56cda0(0xdfd),_0x56cda0(0x27d6),'else',_0x56cda0(0x4e10)],'literal':[_0x56cda0(0xfae),_0x56cda0(0x24d6),_0x56cda0(0x3984),_0x56cda0(0x2de0),'Infinity','NaN',_0x56cda0(0x1c54),_0x56cda0(0x1582),'PI',_0x56cda0(0x29f6),_0x56cda0(0x368d),_0x56cda0(0x38d2),'true',_0x56cda0(0x1daa)],'built_in':[_0x56cda0(0x1bc9),_0x56cda0(0xb91),_0x56cda0(0x48e5),'Angle',_0x56cda0(0x954),_0x56cda0(0x4fac),_0x56cda0(0x2ef0),_0x56cda0(0x4b6a),'Asin',_0x56cda0(0x1d20),_0x56cda0(0x5081),_0x56cda0(0x12ce),'Average','Back',_0x56cda0(0x3d9b),'Boolean',_0x56cda0(0x1300),_0x56cda0(0x3367),_0x56cda0(0x1a6e),_0x56cda0(0x72e),_0x56cda0(0x167f),_0x56cda0(0x3742),_0x56cda0(0x26c8),_0x56cda0(0x3303),_0x56cda0(0x228),_0x56cda0(0x17b4),_0x56cda0(0x3ff8),_0x56cda0(0x2866),_0x56cda0(0x2299),'Cut',_0x56cda0(0x448e),'DateAdd',_0x56cda0(0x392d),_0x56cda0(0x374),_0x56cda0(0x24eb),_0x56cda0(0x1898),_0x56cda0(0x14e9),_0x56cda0(0x168d),'Dictionary',_0x56cda0(0x1f0e),_0x56cda0(0x2cff),_0x56cda0(0x1d2c),_0x56cda0(0x932),_0x56cda0(0x5d9),_0x56cda0(0x39be),_0x56cda0(0x2f3d),'DomainName',_0x56cda0(0x2e2e),_0x56cda0(0xd32),_0x56cda0(0x292e),_0x56cda0(0x489d),_0x56cda0(0x4dc6),_0x56cda0(0x4956),'Feature',_0x56cda0(0x13f9),_0x56cda0(0x198a),_0x56cda0(0x911),'FeatureSetByName','FeatureSetByPortalItem',_0x56cda0(0x192f),_0x56cda0(0x1f76),'Find',_0x56cda0(0x473d),_0x56cda0(0x1ed2),_0x56cda0(0x456c),_0x56cda0(0x30db),'FromJSON',_0x56cda0(0x2872),_0x56cda0(0x11ee),_0x56cda0(0x2e29),_0x56cda0(0x4a18),_0x56cda0(0x45
43),_0x56cda0(0x445a),_0x56cda0(0x48fd),_0x56cda0(0x1030),_0x56cda0(0x4c2a),_0x56cda0(0x2caa),_0x56cda0(0xc9f),_0x56cda0(0x22bb),'IndexOf',_0x56cda0(0x2f30),_0x56cda0(0x4369),_0x56cda0(0x3590),_0x56cda0(0x4c6f),_0x56cda0(0x23fa),_0x56cda0(0x20a1),_0x56cda0(0x723),_0x56cda0(0xa13),_0x56cda0(0x78e),_0x56cda0(0x3bb0),_0x56cda0(0x1252),_0x56cda0(0x3067),_0x56cda0(0x20b0),_0x56cda0(0x435),_0x56cda0(0x188d),'Log',_0x56cda0(0x795),_0x56cda0(0x4a59),_0x56cda0(0x2d51),'Mean','Mid',_0x56cda0(0xa49),_0x56cda0(0x4c50),_0x56cda0(0x1770),_0x56cda0(0x1c7a),_0x56cda0(0x10c8),'Multipoint',_0x56cda0(0x48f6),_0x56cda0(0x23dd),'Now',_0x56cda0(0x3cd),_0x56cda0(0x8b4),_0x56cda0(0x487f),'Overlaps',_0x56cda0(0x1218),_0x56cda0(0x3ac1),'Polyline',_0x56cda0(0x4344),_0x56cda0(0x1691),'Pow','Proper',_0x56cda0(0x3f40),_0x56cda0(0x32e2),'Reduce',_0x56cda0(0xf8f),_0x56cda0(0x2f74),_0x56cda0(0x11ae),_0x56cda0(0x7bc),_0x56cda0(0xdc1),'RingIsClockwise',_0x56cda0(0x20af),_0x56cda0(0xa46),_0x56cda0(0x4752),_0x56cda0(0x151b),_0x56cda0(0x2a9a),'Simplify','Sin',_0x56cda0(0x22dc),_0x56cda0(0x3f44),'Splice',_0x56cda0(0x4cd9),'Sqrt',_0x56cda0(0x1446),_0x56cda0(0x1d00),_0x56cda0(0xe1f),_0x56cda0(0x4936),_0x56cda0(0x3f9d),'SymmetricDifference',_0x56cda0(0x2c97),_0x56cda0(0x2f37),_0x56cda0(0x36e7),_0x56cda0(0x2a47),_0x56cda0(0x581),_0x56cda0(0x3c91),_0x56cda0(0x417c),_0x56cda0(0x1d5b),_0x56cda0(0x2262),_0x56cda0(0x309d),'ToUTC',_0x56cda0(0x28b0),_0x56cda0(0x4c94),_0x56cda0(0x3ef3),_0x56cda0(0x2770),_0x56cda0(0x3c1e),_0x56cda0(0x40dc),_0x56cda0(0x41e8),_0x56cda0(0xe62),'TrackDuration',_0x56cda0(0x1953),_0x56cda0(0xde7),'TrackIndex',_0x56cda0(0x1590),'TrackSpeedWindow',_0x56cda0(0xb61),'TrackWindow',_0x56cda0(0x49b3),_0x56cda0(0x4046),'Union',_0x56cda0(0x4fe5),'UrlEncode',_0x56cda0(0x4736),'Week',_0x56cda0(0x2306),_0x56cda0(0x3807),'Within',_0x56cda0(0x37ec)]},_0x486fc6={'className':_0x56cda0(0x4a80),'variants':[{'begin':_0x56cda0(0x46c3)},{'begin':_0x56cda0(0x389b)},{'begin':_0x6cc65e[_0x56cda0(0x45be)]}],'relev
ance':0x0},_0x165fac={'className':'subst','begin':_0x56cda0(0x47da),'end':'\x5c}','keywords':_0x4a8976,'contains':[]},_0x507fa4={'className':_0x56cda0(0x2431),'begin':'`','end':'`','contains':[_0x6cc65e[_0x56cda0(0x4a76)],_0x165fac]};_0x165fac[_0x56cda0(0x2b31)]=[_0x6cc65e[_0x56cda0(0xa4c)],_0x6cc65e[_0x56cda0(0x291b)],_0x507fa4,_0x486fc6,_0x6cc65e[_0x56cda0(0x2db5)]];const _0x214ceb=_0x165fac[_0x56cda0(0x2b31)][_0x56cda0(0x1d1d)]([_0x6cc65e[_0x56cda0(0x23fe)],_0x6cc65e[_0x56cda0(0x2ae2)]]);return{'name':_0x56cda0(0x1b2b),'case_insensitive':!0x0,'keywords':_0x4a8976,'contains':[_0x6cc65e[_0x56cda0(0xa4c)],_0x6cc65e[_0x56cda0(0x291b)],_0x507fa4,_0x6cc65e['C_LINE_COMMENT_MODE'],_0x6cc65e[_0x56cda0(0x23fe)],{'className':_0x56cda0(0x239b),'begin':'\x5c$[datastore|feature|layer|map|measure|sourcefeature|sourcelayer|targetfeature|targetlayer|value|view]+'},_0x486fc6,{'begin':/[{,]\s*/,'relevance':0x0,'contains':[{'begin':_0x48f9ab+'\x5cs*:','returnBegin':!0x0,'relevance':0x0,'contains':[{'className':_0x56cda0(0x431d),'begin':_0x48f9ab,'relevance':0x0}]}]},{'begin':'('+_0x6cc65e['RE_STARTERS_RE']+_0x56cda0(0xc9d),'keywords':_0x56cda0(0xdfd),'contains':[_0x6cc65e[_0x56cda0(0x2ae2)],_0x6cc65e[_0x56cda0(0x23fe)],_0x6cc65e['REGEXP_MODE'],{'className':_0x56cda0(0x14b2),'begin':_0x56cda0(0x92c)+_0x48f9ab+_0x56cda0(0x21c5),'returnBegin':!0x0,'end':_0x56cda0(0x517c),'contains':[{'className':_0x56cda0(0xddd),'variants':[{'begin':_0x48f9ab},{'begin':/\(\s*\)/},{'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'keywords':_0x4a8976,'contains':_0x214ceb}]}]}],'relevance':0x0},{'beginKeywords':'function','end':/\{/,'excludeEnd':!0x0,'contains':[_0x6cc65e['inherit'](_0x6cc65e[_0x56cda0(0x2029)],{'className':_0x56cda0(0x20db),'begin':_0x48f9ab}),{'className':'params','begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'contains':_0x214ceb}],'illegal':/\[|%/},{'begin':/\$[(.]/}],'illegal':/#(?!!)/};};},0x134f:_0x129661=>{const 
_0x1d8679=a0_0x11e7;_0x129661[_0x1d8679(0x474c)]=function(_0x1d95ed){const _0x3f65d8=_0x1d8679,_0x3097c1={'type':['boolean','byte',_0x3f65d8(0x2506),'String'],'built_in':['KeyboardController',_0x3f65d8(0x3ec1),_0x3f65d8(0x23d9),'EthernetServer',_0x3f65d8(0x66c),_0x3f65d8(0x4021),_0x3f65d8(0x168f),_0x3f65d8(0x4dda),_0x3f65d8(0x1d90),'EsploraTFT',_0x3f65d8(0x491a),_0x3f65d8(0x3d72),'WiFiClient',_0x3f65d8(0x4b51),_0x3f65d8(0x487e),_0x3f65d8(0x57c),_0x3f65d8(0x4cb),_0x3f65d8(0x4bd6),_0x3f65d8(0xea1),_0x3f65d8(0x3709),_0x3f65d8(0x2588),_0x3f65d8(0x4567),'Keyboard',_0x3f65d8(0x1de3),_0x3f65d8(0x26c8),'GSMBand',_0x3f65d8(0xb5f),_0x3f65d8(0x21f8),_0x3f65d8(0x22a9),'WiFiUDP',_0x3f65d8(0xc53),_0x3f65d8(0x8dc),'USBHost','Firmata',_0x3f65d8(0x2367),_0x3f65d8(0x27b6),_0x3f65d8(0x50f0),'GSMPIN',_0x3f65d8(0x4161),'Bridge',_0x3f65d8(0xe4a),'EEPROM',_0x3f65d8(0x342e),'Mouse','Audio',_0x3f65d8(0x1d3b),_0x3f65d8(0x106b),_0x3f65d8(0xfa0),_0x3f65d8(0x2233),'WiFi','Wire',_0x3f65d8(0x1658),_0x3f65d8(0x33a5),'SPI','SD'],'_hints':[_0x3f65d8(0x265a),_0x3f65d8(0x110b),_0x3f65d8(0x4bdc),'analogWriteResolution',_0x3f65d8(0x2305),_0x3f65d8(0xa26),_0x3f65d8(0x4a8b),_0x3f65d8(0xf21),_0x3f65d8(0x3e06),'readJoystickButton',_0x3f65d8(0x13d7),_0x3f65d8(0x2390),'scrollDisplayRight',_0x3f65d8(0x3682),_0x3f65d8(0x364c),_0x3f65d8(0x2a1a),_0x3f65d8(0x3d00),_0x3f65d8(0x340e),'getSignalStrength',_0x3f65d8(0x318a),'getAsynchronously',_0x3f65d8(0x3d90),_0x3f65d8(0x51ab),'readAccelerometer',_0x3f65d8(0x1032),_0x3f65d8(0x1e6b),'lineFollowConfig',_0x3f65d8(0x3e2b),_0x3f65d8(0x50e0),_0x3f65d8(0x272e),_0x3f65d8(0x1d80),'readTemperature',_0x3f65d8(0x26f1),_0x3f65d8(0x7b8),_0x3f65d8(0x3c0a),_0x3f65d8(0x1c55),_0x3f65d8(0xb9d),'countryNameRead',_0x3f65d8(0x476),_0x3f65d8(0x50a2),_0x3f65d8(0xc04),_0x3f65d8(0x28e7),_0x3f65d8(0x3cfc),_0x3f65d8(0x17fc),_0x3f65d8(0x51c),_0x3f65d8(0x289e),_0x3f65d8(0x114d),_0x3f65d8(0x2b56),'mouseReleased',_0x3f65d8(0xd08),_0x3f65d8(0x299a),_0x3f65d8(0x337b),_0x3f65d8(0x41f0),_0x3f65d8(0x165
9),'mousePressed',_0x3f65d8(0x208f),_0x3f65d8(0x1932),_0x3f65d8(0x3c3c),_0x3f65d8(0x427a),_0x3f65d8(0x3aa0),_0x3f65d8(0x472e),_0x3f65d8(0x2243),_0x3f65d8(0x3b75),_0x3f65d8(0x4f34),_0x3f65d8(0x528a),_0x3f65d8(0x4412),_0x3f65d8(0x1045),_0x3f65d8(0x2e41),_0x3f65d8(0x4d87),'writeMessage','blinkVersion',_0x3f65d8(0x1245),'readMessage',_0x3f65d8(0x369),_0x3f65d8(0x2651),'isListening','setBitOrder','beginPacket',_0x3f65d8(0x212f),_0x3f65d8(0x256d),_0x3f65d8(0x19a7),_0x3f65d8(0x150f),_0x3f65d8(0x296d),'serialEvent',_0x3f65d8(0x2d9c),'setTextSize',_0x3f65d8(0x3071),_0x3f65d8(0x1c59),_0x3f65d8(0x365e),_0x3f65d8(0x21b5),'analogWrite',_0x3f65d8(0x4b7d),_0x3f65d8(0x1931),'disconnect',_0x3f65d8(0x251f),_0x3f65d8(0x42de),_0x3f65d8(0x54d),'getPINUsed',_0x3f65d8(0x1d6b),_0x3f65d8(0x162d),_0x3f65d8(0x48f0),_0x3f65d8(0x40b4),'analogRead',_0x3f65d8(0x1356),'createChar',_0x3f65d8(0x2ff8),'keyPressed','tempoWrite',_0x3f65d8(0x2051),_0x3f65d8(0x3295),'debugPrint',_0x3f65d8(0xf1c),_0x3f65d8(0x1ed3),_0x3f65d8(0x263d),_0x3f65d8(0x40ac),_0x3f65d8(0xf39),_0x3f65d8(0x1c72),_0x3f65d8(0x20e6),_0x3f65d8(0x21dd),_0x3f65d8(0x36f),_0x3f65d8(0x1471),'getXChange','getYChange',_0x3f65d8(0x2995),_0x3f65d8(0x3592),'voiceCall',_0x3f65d8(0x4da5),_0x3f65d8(0x2fa4),_0x3f65d8(0xe03),'writeJSON','getButton',_0x3f65d8(0x2c29),_0x3f65d8(0x2b5d),_0x3f65d8(0x502d),'readBytes',_0x3f65d8(0x45e3),_0x3f65d8(0xd0c),_0x3f65d8(0x1c98),_0x3f65d8(0x29ce),_0x3f65d8(0x3709),_0x3f65d8(0x2cee),_0x3f65d8(0x3166),_0x3f65d8(0x18a7),_0x3f65d8(0x290),_0x3f65d8(0x4c58),_0x3f65d8(0x2604),_0x3f65d8(0x38ee),_0x3f65d8(0x4892),_0x3f65d8(0x1126),_0x3f65d8(0x2a44),_0x3f65d8(0x3c48),_0x3f65d8(0x4785),_0x3f65d8(0x2edc),_0x3f65d8(0xc07),_0x3f65d8(0x3623),'parseInt',_0x3f65d8(0xa69),_0x3f65d8(0x37e5),_0x3f65d8(0x1c7e),_0x3f65d8(0x18af),_0x3f65d8(0x32b7),_0x3f65d8(0x1044),_0x3f65d8(0x2701),_0x3f65d8(0x25f1),'writeRGB','highByte','writeRed',_0x3f65d8(0x4406),_0x3f65d8(0x2c2b),_0x3f65d8(0x418d),_0x3f65d8(0x3562),'transfer','shutdown',_0x3f65d8(0x4
b54),'beginSMS','endWrite',_0x3f65d8(0xf6f),_0x3f65d8(0x3bf4),_0x3f65d8(0x35c6),_0x3f65d8(0x22c9),_0x3f65d8(0x26f8),'shiftOut',_0x3f65d8(0x38b1),_0x3f65d8(0x22db),_0x3f65d8(0x3f75),'connect',_0x3f65d8(0x2d7e),_0x3f65d8(0x4907),_0x3f65d8(0xfa1),_0x3f65d8(0x4126),_0x3f65d8(0x12ca),_0x3f65d8(0x4aed),_0x3f65d8(0xf1b),_0x3f65d8(0x2e2b),_0x3f65d8(0x3ff6),_0x3f65d8(0x3f27),'drawBMP',_0x3f65d8(0x1115),_0x3f65d8(0x36b6),_0x3f65d8(0x4a55),_0x3f65d8(0x181c),_0x3f65d8(0x3df8),'pointTo',_0x3f65d8(0x32cc),_0x3f65d8(0x2f34),_0x3f65d8(0x6d0),_0x3f65d8(0x42a1),_0x3f65d8(0x4452),_0x3f65d8(0x1d1b),'detach',_0x3f65d8(0x3cea),_0x3f65d8(0x1885),_0x3f65d8(0x449f),_0x3f65d8(0x2d10),_0x3f65d8(0x3cd6),_0x3f65d8(0x448c),_0x3f65d8(0x20f5),_0x3f65d8(0x3437),_0x3f65d8(0x824),_0x3f65d8(0xe98),_0x3f65d8(0x3898),_0x3f65d8(0x1b52),_0x3f65d8(0x63c),'getKey',_0x3f65d8(0x1c4b),_0x3f65d8(0x2ba8),_0x3f65d8(0x42fa),_0x3f65d8(0x4957),_0x3f65d8(0x4c95),'ready',_0x3f65d8(0x14a2),_0x3f65d8(0x17d2),_0x3f65d8(0x3741),'blink',_0x3f65d8(0x4933),_0x3f65d8(0x376),_0x3f65d8(0x2228),_0x3f65d8(0x4d1a),_0x3f65d8(0x50d8),_0x3f65d8(0x2464),_0x3f65d8(0x5075),'image',_0x3f65d8(0x277f),_0x3f65d8(0x364d),_0x3f65d8(0x2943),_0x3f65d8(0x50a6),'text',_0x3f65d8(0x2676),'peek',_0x3f65d8(0x342f),_0x3f65d8(0x2a8a),_0x3f65d8(0x3572),_0x3f65d8(0x1795),'seek',_0x3f65d8(0x1fe5),_0x3f65d8(0x395f),_0x3f65d8(0x3dd5),_0x3f65d8(0xaee),'home',_0x3f65d8(0x5144),_0x3f65d8(0xf8e),_0x3f65d8(0x24dc),_0x3f65d8(0x5011),_0x3f65d8(0x32cf),'SSID',_0x3f65d8(0x2681),_0x3f65d8(0x2523),_0x3f65d8(0x38de),_0x3f65d8(0x3935),_0x3f65d8(0x2a37),'pow',_0x3f65d8(0x4833),_0x3f65d8(0xbe0),'max',_0x3f65d8(0x37c8),_0x3f65d8(0xf9e),_0x3f65d8(0x254b),'put'],'literal':['DIGITAL_MESSAGE',_0x3f65d8(0x96e),_0x3f65d8(0x4e72),_0x3f65d8(0x170b),_0x3f65d8(0x4aba),_0x3f65d8(0x1859),_0x3f65d8(0x54c),_0x3f65d8(0x4932),'SYSTEM_RESET',_0x3f65d8(0xaa6),_0x3f65d8(0xe7c),'SYSEX_START',_0x3f65d8(0x3317),'EXTERNAL','DEFAULT','OUTPUT',_0x3f65d8(0x1d44),'HIGH',_0x3f65d8(0x1e0e)]},_0x3019a9
=function(_0x2fa1af){const _0x542e27=_0x3f65d8,_0xa2b173=_0x2fa1af[_0x542e27(0x41d2)],_0x50be8c=_0x2fa1af[_0x542e27(0x4e4f)]('//','$',{'contains':[{'begin':/\\\n/}]}),_0x3dd060=_0x542e27(0xbea),_0x2ec354=_0x542e27(0xd39),_0x26ba48='(?!struct)('+_0x3dd060+'|'+_0xa2b173[_0x542e27(0x51e4)](_0x2ec354)+_0x542e27(0x5242)+_0xa2b173['optional']('<[^<>]+>')+')',_0x4a3306={'className':'type','begin':_0x542e27(0x37d3)},_0x450374={'className':_0x542e27(0x2431),'variants':[{'begin':'(u8?|U|L)?\x22','end':'\x22','illegal':'\x5cn','contains':[_0x2fa1af[_0x542e27(0x4a76)]]},{'begin':_0x542e27(0x4375),'end':'\x27','illegal':'.'},_0x2fa1af[_0x542e27(0x453e)]({'begin':/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,'end':/\)([^()\\ ]{0,16})"/})]},_0x3a614c={'className':_0x542e27(0x4a80),'variants':[{'begin':'\x5cb(0b[01\x27]+)'},{'begin':_0x542e27(0x10f1)},{'begin':'(-?)(\x5cb0[xX][a-fA-F0-9\x27]+|(\x5cb[\x5cd\x27]+(\x5c.[\x5cd\x27]*)?|\x5c.[\x5cd\x27]+)([eE][-+]?[\x5cd\x27]+)?)'}],'relevance':0x0},_0xf52874={'className':_0x542e27(0x5153),'begin':/#\s*[a-z]+\b/,'end':/$/,'keywords':{'keyword':_0x542e27(0x3bbe)},'contains':[{'begin':/\\\n/,'relevance':0x0},_0x2fa1af[_0x542e27(0x46a1)](_0x450374,{'className':_0x542e27(0x2431)}),{'className':_0x542e27(0x2431),'begin':/<.*?>/},_0x50be8c,_0x2fa1af[_0x542e27(0x23fe)]]},_0x5d898b={'className':'title','begin':_0xa2b173[_0x542e27(0x51e4)](_0x2ec354)+_0x2fa1af[_0x542e27(0xacc)],'relevance':0x0},_0x3cbfff=_0xa2b173[_0x542e27(0x51e4)](_0x2ec354)+_0x2fa1af[_0x542e27(0xacc)]+_0x542e27(0x7ef),_0x5209c3={'type':[_0x542e27(0x3ebd),_0x542e27(0x373c),_0x542e27(0xa55),'char32_t','char8_t',_0x542e27(0x5024),_0x542e27(0x1ab8),_0x542e27(0xc16),_0x542e27(0x324f),_0x542e27(0x4085),'void',_0x542e27(0x480b),_0x542e27(0x2edb),_0x542e27(0x3908),'const',_0x542e27(0x2c7c)],'keyword':[_0x542e27(0x137d),_0x542e27(0x3257),_0x542e27(0x2663),_0x542e27(0x1fb2),_0x542e27(0x1651),'atomic_cancel',_0x542e27(0x4a30),_0x542e27(0x1974),'auto',_0x542e27(0x3523),_0x542e27(0x2426),_0x542e27(0x
4e10),_0x542e27(0x2e7e),'catch','class','co_await','co_return','co_yield','compl',_0x542e27(0x2272),_0x542e27(0x4d67),_0x542e27(0x23b2),_0x542e27(0x47d4),_0x542e27(0x4075),_0x542e27(0x16d9),_0x542e27(0x3fb2),'default',_0x542e27(0x5be),'do',_0x542e27(0x2cbc),'else',_0x542e27(0x44d8),_0x542e27(0x2121),_0x542e27(0x2bb9),'extern',_0x542e27(0x3984),_0x542e27(0x27e4),'for',_0x542e27(0xf26),_0x542e27(0x139c),'if',_0x542e27(0x331),'inline',_0x542e27(0x196c),_0x542e27(0x1c6c),_0x542e27(0x37f7),_0x542e27(0x4321),'noexcept',_0x542e27(0xc1a),_0x542e27(0xd91),_0x542e27(0x916),'operator','or','or_eq',_0x542e27(0x35a7),'private',_0x542e27(0xc14),_0x542e27(0x39ce),_0x542e27(0x18fb),_0x542e27(0x49b4),_0x542e27(0x1df0),_0x542e27(0x1c0e),_0x542e27(0xdfd),_0x542e27(0xc5d),_0x542e27(0x2d1a),_0x542e27(0x3a39),_0x542e27(0x4146),'switch',_0x542e27(0x2715),_0x542e27(0x15c6),'this',_0x542e27(0x457f),_0x542e27(0x383),'transaction_safe',_0x542e27(0x3971),_0x542e27(0x4022),_0x542e27(0x422b),_0x542e27(0xc83),_0x542e27(0x3955),'typename',_0x542e27(0x29d),'using',_0x542e27(0x38b8),_0x542e27(0x3512),_0x542e27(0x552),_0x542e27(0x32a6),_0x542e27(0xdfb)],'literal':[_0x542e27(0xa45),_0x542e27(0x3984),'nullopt','nullptr',_0x542e27(0x4022)],'built_in':[_0x542e27(0xca4)],'_type_hints':['any','auto_ptr',_0x542e27(0x3859),'binary_semaphore',_0x542e27(0x4f41),_0x542e27(0x3a77),_0x542e27(0x221e),_0x542e27(0x1ff3),'counting_semaphore','deque',_0x542e27(0xd6e),_0x542e27(0x3ee),_0x542e27(0x1056),_0x542e27(0x28fa),_0x542e27(0xaa7),_0x542e27(0x4b02),_0x542e27(0x4e0),_0x542e27(0x128b),_0x542e27(0x5276),_0x542e27(0xc0a),_0x542e27(0x1177),'optional','ostringstream',_0x542e27(0x4a02),'pair',_0x542e27(0x44c9),_0x542e27(0x1cd6),_0x542e27(0x3db4),'recursive_mutex',_0x542e27(0x1d66),_0x542e27(0x18e1),_0x542e27(0x1fa),_0x542e27(0x3c85),'shared_lock',_0x542e27(0xfcb),'shared_timed_mutex',_0x542e27(0x29b4),_0x542e27(0x453a),'string_view',_0x542e27(0xedf),'timed_mutex','thread',_0x542e27(0x21fb),'tuple',_0x542e27(0x32ca),_0x5
42e27(0x41a),_0x542e27(0x62c),_0x542e27(0x7fc),'unordered_multiset',_0x542e27(0x37a6),'variant',_0x542e27(0x4836),_0x542e27(0x3633),'wstring','wstring_view']},_0x3c9f4f={'className':_0x542e27(0x9eb),'relevance':0x0,'keywords':{'_hint':['abort',_0x542e27(0xbe0),_0x542e27(0x2c6e),_0x542e27(0x4c31),'as_const',_0x542e27(0x3c15),'atan','atan2','calloc',_0x542e27(0x10aa),_0x542e27(0xef3),_0x542e27(0x22c0),_0x542e27(0x1ece),_0x542e27(0x3935),_0x542e27(0x486e),'cout',_0x542e27(0x9c5),_0x542e27(0x4bb2),_0x542e27(0x2716),_0x542e27(0x4c7b),_0x542e27(0x3a1b),'fabs',_0x542e27(0x2e2d),'fmod',_0x542e27(0x179a),_0x542e27(0xe1d),'fputs','free',_0x542e27(0x2084),'fscanf',_0x542e27(0x3ee),'invoke',_0x542e27(0x2f62),'isalpha',_0x542e27(0x179c),_0x542e27(0x40e8),_0x542e27(0x43bb),_0x542e27(0x38f6),_0x542e27(0x1d12),_0x542e27(0x3b8f),_0x542e27(0x41dc),_0x542e27(0x483),'isxdigit',_0x542e27(0x1025),_0x542e27(0x39d6),_0x542e27(0x3afb),'log',_0x542e27(0x1463),_0x542e27(0x3421),_0x542e27(0x1040),_0x542e27(0x3597),'make_tuple',_0x542e27(0x1fde),_0x542e27(0x4934),_0x542e27(0x4983),_0x542e27(0x3e36),_0x542e27(0x33aa),_0x542e27(0x4609),'modf',_0x542e27(0x2676),_0x542e27(0x43bd),_0x542e27(0x32fe),_0x542e27(0x7e5),_0x542e27(0xe0c),'realloc',_0x542e27(0x230a),_0x542e27(0x2a37),_0x542e27(0x4bb3),_0x542e27(0x3110),_0x542e27(0xd87),_0x542e27(0x5011),'sscanf',_0x542e27(0x2340),_0x542e27(0x40ee),_0x542e27(0x1e79),_0x542e27(0x49a2),_0x542e27(0x2933),'strchr',_0x542e27(0x4ed0),_0x542e27(0x2436),_0x542e27(0x44d4),_0x542e27(0xed4),_0x542e27(0xdeb),_0x542e27(0x3bec),'strncpy',_0x542e27(0x27b4),_0x542e27(0x216c),'strspn','strstr','swap',_0x542e27(0x38de),'tanh',_0x542e27(0x315c),'to_underlying','tolower',_0x542e27(0x4415),'vfprintf','visit',_0x542e27(0x1ca1),_0x542e27(0x3643)]},'begin':_0xa2b173[_0x542e27(0x1d1d)](/\b/,/(?!decltype)/,/(?!if)/,/(?!for)/,/(?!switch)/,/(?!while)/,_0x2fa1af[_0x542e27(0xacc)],_0xa2b173[_0x542e27(0x3296)](/(<[^<>]+>|)\s*\(/))},_0x2f6393=[_0x3c9f4f,_0xf52874,_0x4a3306,_0x50be8c,_0x2f
a1af['C_BLOCK_COMMENT_MODE'],_0x3a614c,_0x450374],_0x5e77af={'variants':[{'begin':/=/,'end':/;/},{'begin':/\(/,'end':/\)/},{'beginKeywords':'new\x20throw\x20return\x20else','end':/;/}],'keywords':_0x5209c3,'contains':_0x2f6393['concat']([{'begin':/\(/,'end':/\)/,'keywords':_0x5209c3,'contains':_0x2f6393[_0x542e27(0x1d1d)](['self']),'relevance':0x0}]),'relevance':0x0},_0x5984c3={'className':'function','begin':'('+_0x26ba48+_0x542e27(0x1e78)+_0x3cbfff,'returnBegin':!0x0,'end':/[{;=]/,'excludeEnd':!0x0,'keywords':_0x5209c3,'illegal':/[^\w\s\*&:<>.]/,'contains':[{'begin':_0x3dd060,'keywords':_0x5209c3,'relevance':0x0},{'begin':_0x3cbfff,'returnBegin':!0x0,'contains':[_0x5d898b],'relevance':0x0},{'begin':/::/,'relevance':0x0},{'begin':/:/,'endsWithParent':!0x0,'contains':[_0x450374,_0x3a614c]},{'relevance':0x0,'match':/,/},{'className':_0x542e27(0xddd),'begin':/\(/,'end':/\)/,'keywords':_0x5209c3,'relevance':0x0,'contains':[_0x50be8c,_0x2fa1af['C_BLOCK_COMMENT_MODE'],_0x450374,_0x3a614c,_0x4a3306,{'begin':/\(/,'end':/\)/,'keywords':_0x5209c3,'relevance':0x0,'contains':[_0x542e27(0x4454),_0x50be8c,_0x2fa1af[_0x542e27(0x23fe)],_0x450374,_0x3a614c,_0x4a3306]}]},_0x4a3306,_0x50be8c,_0x2fa1af[_0x542e27(0x23fe)],_0xf52874]};return{'name':'C++','aliases':['cc',_0x542e27(0x5d3),_0x542e27(0x3ffd),_0x542e27(0x344a),'hh','hxx','cxx'],'keywords':_0x5209c3,'illegal':'','keywords':_0x5209c3,'contains':[_0x542e27(0x4454),_0x4a3306]},{'begin':_0x2fa1af[_0x542e27(0xacc)]+'::','keywords':_0x5209c3},{'match':[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/],'className':{0x1:_0x542e27(0x1357),0x3:_0x542e27(0x19e4)}}])};}(_0x1d95ed),_0x5233e8=_0x3019a9[_0x3f65d8(0xe37)];return 
_0x5233e8[_0x3f65d8(0xcfc)]=[..._0x5233e8[_0x3f65d8(0xcfc)],..._0x3097c1['type']],_0x5233e8[_0x3f65d8(0x2706)]=[..._0x5233e8['literal'],..._0x3097c1[_0x3f65d8(0x2706)]],_0x5233e8['built_in']=[..._0x5233e8['built_in'],..._0x3097c1[_0x3f65d8(0x43a)]],_0x5233e8[_0x3f65d8(0x2829)]=_0x3097c1[_0x3f65d8(0x2829)],_0x3019a9[_0x3f65d8(0x11d8)]=_0x3f65d8(0x14c8),_0x3019a9['aliases']=[_0x3f65d8(0xd35)],_0x3019a9['supersetOf']=_0x3f65d8(0xf7c),_0x3019a9;};},0x1e92:_0x2db0b2=>{const _0x2dc6a0=a0_0x11e7;_0x2db0b2[_0x2dc6a0(0x474c)]=function(_0x300d6f){const _0x256c57=_0x2dc6a0,_0x3f03a9={'variants':[_0x300d6f['COMMENT'](_0x256c57(0x385c),'$',{'relevance':0x0,'excludeBegin':!0x0}),_0x300d6f[_0x256c57(0x4e4f)](_0x256c57(0x5296),'$',{'relevance':0x0}),_0x300d6f[_0x256c57(0x2ae2)],_0x300d6f[_0x256c57(0x23fe)]]};return{'name':_0x256c57(0x2e17),'case_insensitive':!0x0,'aliases':[_0x256c57(0x26a0)],'keywords':{'$pattern':_0x256c57(0x488)+_0x300d6f[_0x256c57(0xacc)],'meta':_0x256c57(0x3ff0),'built_in':'r0\x20r1\x20r2\x20r3\x20r4\x20r5\x20r6\x20r7\x20r8\x20r9\x20r10\x20r11\x20r12\x20r13\x20r14\x20r15\x20w0\x20w1\x20w2\x20w3\x20w4\x20w5\x20w6\x20w7\x20w8\x20w9\x20w10\x20w11\x20w12\x20w13\x20w14\x20w15\x20w16\x20w17\x20w18\x20w19\x20w20\x20w21\x20w22\x20w23\x20w24\x20w25\x20w26\x20w27\x20w28\x20w29\x20w30\x20x0\x20x1\x20x2\x20x3\x20x4\x20x5\x20x6\x20x7\x20x8\x20x9\x20x10\x20x11\x20x12\x20x13\x20x14\x20x15\x20x16\x20x17\x20x18\x20x19\x20x20\x20x21\x20x22\x20x23\x20x24\x20x25\x20x26\x20x27\x20x28\x20x29\x20x30\x20pc\x20lr\x20sp\x20ip\x20sl\x20sb\x20fp\x20a1\x20a2\x20a3\x20a4\x20v1\x20v2\x20v3\x20v4\x20v5\x20v6\x20v7\x20v8\x20f0\x20f1\x20f2\x20f3\x20f4\x20f5\x20f6\x20f7\x20p0\x20p1\x20p2\x20p3\x20p4\x20p5\x20p6\x20p7\x20p8\x20p9\x20p10\x20p11\x20p12\x20p13\x20p14\x20p15\x20c0\x20c1\x20c2\x20c3\x20c4\x20c5\x20c6\x20c7\x20c8\x20c9\x20c10\x20c11\x20c12\x20c13\x20c14\x20c15\x20q0\x20q1\x20q2\x20q3\x20q4\x20q5\x20q6\x20q7\x20q8\x20q9\x20q10\x20q11\x20q12\x20q13\x20q14\x20q15\x20cpsr_c\x20cpsr_x\x20c
psr_s\x20cpsr_f\x20cpsr_cx\x20cpsr_cxs\x20cpsr_xs\x20cpsr_xsf\x20cpsr_sf\x20cpsr_cxsf\x20spsr_c\x20spsr_x\x20spsr_s\x20spsr_f\x20spsr_cx\x20spsr_cxs\x20spsr_xs\x20spsr_xsf\x20spsr_sf\x20spsr_cxsf\x20s0\x20s1\x20s2\x20s3\x20s4\x20s5\x20s6\x20s7\x20s8\x20s9\x20s10\x20s11\x20s12\x20s13\x20s14\x20s15\x20s16\x20s17\x20s18\x20s19\x20s20\x20s21\x20s22\x20s23\x20s24\x20s25\x20s26\x20s27\x20s28\x20s29\x20s30\x20s31\x20d0\x20d1\x20d2\x20d3\x20d4\x20d5\x20d6\x20d7\x20d8\x20d9\x20d10\x20d11\x20d12\x20d13\x20d14\x20d15\x20d16\x20d17\x20d18\x20d19\x20d20\x20d21\x20d22\x20d23\x20d24\x20d25\x20d26\x20d27\x20d28\x20d29\x20d30\x20d31\x20{PC}\x20{VAR}\x20{TRUE}\x20{FALSE}\x20{OPT}\x20{CONFIG}\x20{ENDIAN}\x20{CODESIZE}\x20{CPU}\x20{FPU}\x20{ARCHITECTURE}\x20{PCSTOREOFFSET}\x20{ARMASM_VERSION}\x20{INTER}\x20{ROPI}\x20{RWPI}\x20{SWST}\x20{NOSWST}\x20.\x20@'},'contains':[{'className':_0x256c57(0x1357),'begin':'\x5cb(adc|(qd?|sh?|u[qh]?)?add(8|16)?|usada?8|(q|sh?|u[qh]?)?(as|sa)x|and|adrl?|sbc|rs[bc]|asr|b[lx]?|blx|bxj|cbn?z|tb[bh]|bic|bfc|bfi|[su]bfx|bkpt|cdp2?|clz|clrex|cmp|cmn|cpsi[ed]|cps|setend|dbg|dmb|dsb|eor|isb|it[te]{0,3}|lsl|lsr|ror|rrx|ldm(([id][ab])|f[ds])?|ldr((s|ex)?[bhd])?|movt?|mvn|mra|mar|mul|[us]mull|smul[bwt][bt]|smu[as]d|smmul|smmla|mla|umlaal|smlal?([wbt][bt]|d)|mls|smlsl?[ds]|smc|svc|sev|mia([bt]{2}|ph)?|mrr?c2?|mcrr2?|mrs|msr|orr|orn|pkh(tb|bt)|rbit|rev(16|sh)?|sel|[su]sat(16)?|nop|pop|push|rfe([id][ab])?|stm([id][ab])?|str(ex)?[bhd]?|(qd?)?sub|(sh?|q|u[qh]?)?sub(8|16)|[su]xt(a?h|a?b(16)?)|srs([id][ab])?|swpb?|swi|smi|tst|teq|wfe|wfi|yield)(eq|ne|cs|cc|mi|pl|vs|vc|hi|ls|ge|lt|gt|le|al|hs|lo)?[sptrx]?(?=\x5cs)'},_0x3f03a9,_0x300d6f[_0x256c57(0x291b)],{'className':_0x256c57(0x2431),'begin':'\x27','end':_0x256c57(0x2324),'relevance':0x0},{'className':_0x256c57(0x4685),'begin':'\x5c|','end':'\x5c|','illegal':'\x5cn','relevance':0x0},{'className':_0x256c57(0x4a80),'variants':[{'begin':'[#$=]?0x[0-9a-f]+'},{'begin':_0x256c57(0x2198)},{'begin':_0x256c57(0x46e9)},{'begin':_0
x256c57(0x15e4)}],'relevance':0x0},{'className':_0x256c57(0x239b),'variants':[{'begin':_0x256c57(0x440f)},{'begin':_0x256c57(0x483e)},{'begin':'[=#]\x5cw+'}],'relevance':0x0}]};};},0x1064:_0x25f401=>{const _0x1326f3=a0_0x11e7;_0x25f401[_0x1326f3(0x474c)]=function(_0x441bcf){const _0x3e80b9=_0x1326f3,_0x3e7a32=_0x441bcf[_0x3e80b9(0x41d2)],_0x5dfea8=[{'className':'strong','begin':/\*{2}([^\n]+?)\*{2}/},{'className':_0x3e80b9(0x40c9),'begin':_0x3e7a32['concat'](/\*\*/,/((\*(?!\*)|\\[^\n]|[^*\n\\])+\n)+/,/(\*(?!\*)|\\[^\n]|[^*\n\\])*/,/\*\*/),'relevance':0x0},{'className':_0x3e80b9(0x40c9),'begin':/\B\*(\S|\S[^\n]*?\S)\*(?!\w)/},{'className':_0x3e80b9(0x40c9),'begin':/\*[^\s]([^\n]+\n)+([^\n]+)\*/}],_0x906469=[{'className':_0x3e80b9(0x2297),'begin':/_{2}([^\n]+?)_{2}/},{'className':_0x3e80b9(0x2297),'begin':_0x3e7a32[_0x3e80b9(0x1d1d)](/__/,/((_(?!_)|\\[^\n]|[^_\n\\])+\n)+/,/(_(?!_)|\\[^\n]|[^_\n\\])*/,/__/),'relevance':0x0},{'className':_0x3e80b9(0x2297),'begin':/\b_(\S|\S[^\n]*?\S)_(?!\w)/},{'className':_0x3e80b9(0x2297),'begin':/_[^\s]([^\n]+\n)+([^\n]+)_/},{'className':_0x3e80b9(0x2297),'begin':'\x5cB\x27(?![\x27\x5cs])','end':_0x3e80b9(0xd10),'contains':[{'begin':'\x5c\x5c\x27\x5cw','relevance':0x0}],'relevance':0x0}];return{'name':_0x3e80b9(0x3571),'aliases':[_0x3e80b9(0xbb2)],'contains':[_0x441bcf['COMMENT'](_0x3e80b9(0x4a98),'\x5cn/{4,}$',{'relevance':0xa}),_0x441bcf['COMMENT']('^//','$',{'relevance':0x0}),{'className':_0x3e80b9(0x4685),'begin':'^\x5c.\x5cw.*$'},{'begin':_0x3e80b9(0x4b52),'end':'\x5cn^[=\x5c*]{4,}$','relevance':0xa},{'className':'section','relevance':0xa,'variants':[{'begin':'^(={1,6})[\x20\x09].+?([\x20\x09]\x5c1)?$'},{'begin':_0x3e80b9(0x4ab2)}]},{'className':_0x3e80b9(0x5153),'begin':'^:.+?:','end':'\x5cs','excludeEnd':!0x0,'relevance':0xa},{'className':_0x3e80b9(0x5153),'begin':_0x3e80b9(0x30ab),'relevance':0x0},{'className':_0x3e80b9(0x3567),'begin':_0x3e80b9(0x398f),'end':'\x5cn_{4,}$','relevance':0xa},{'className':_0x3e80b9(0x4948),'begin
':_0x3e80b9(0x23f4),'end':'\x5cn[\x5c-\x5c.]{4,}$','relevance':0xa},{'begin':_0x3e80b9(0x2636),'end':_0x3e80b9(0x765),'contains':[{'begin':'<','end':'>','subLanguage':_0x3e80b9(0x2655),'relevance':0x0}],'relevance':0xa},{'className':_0x3e80b9(0x6af),'begin':'^(\x5c*+|-+|\x5c.+|[^\x5cn]+?::)\x5cs+'},{'className':_0x3e80b9(0x239b),'begin':_0x3e80b9(0x4993),'relevance':0xa},{'begin':/\\[*_`]/},{'begin':/\\\\\*{2}[^\n]*?\*{2}/},{'begin':/\\\\_{2}[^\n]*_{2}/},{'begin':/\\\\`{2}[^\n]*`{2}/},{'begin':/[:;}][*_`](?![*_`])/},..._0x5dfea8,..._0x906469,{'className':_0x3e80b9(0x2431),'variants':[{'begin':'``.+?\x27\x27'},{'begin':'`.+?\x27'}]},{'className':'code','begin':/`{2}/,'end':/(\n{2}|`{2})/},{'className':'code','begin':_0x3e80b9(0x4c72),'relevance':0x0},{'className':_0x3e80b9(0x4948),'begin':_0x3e80b9(0x317a),'end':'$','relevance':0x0},{'begin':_0x3e80b9(0x9bb),'relevance':0xa},{'begin':_0x3e80b9(0x1b08),'returnBegin':!0x0,'contains':[{'begin':_0x3e80b9(0x20df),'relevance':0x0},{'className':_0x3e80b9(0x4b32),'begin':'\x5cw','end':'[^\x5c[]+','relevance':0x0},{'className':_0x3e80b9(0x2431),'begin':'\x5c[','end':'\x5c]','excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0}],'relevance':0xa}]};};},0x2709:_0xf2ef98=>{_0xf2ef98['exports']=function(_0xf69a2a){const 
_0x487a15=a0_0x11e7,_0x363a02=_0xf69a2a[_0x487a15(0x41d2)],_0x19826e=[_0x487a15(0x3984),_0x487a15(0x2715),_0x487a15(0xc16),_0x487a15(0x3027),'float',_0x487a15(0x4ef4),_0x487a15(0x373c),'boolean',_0x487a15(0x2c7c),_0x487a15(0x1582),'if',_0x487a15(0xc01),_0x487a15(0x3c19),_0x487a15(0x4022),_0x487a15(0x552),'long',_0x487a15(0x383),_0x487a15(0x4005),_0x487a15(0x37b2),_0x487a15(0xc14),'import',_0x487a15(0x50dc),_0x487a15(0x27e4),'return',_0x487a15(0x27d6),_0x487a15(0x44d8),_0x487a15(0x3d4),'extends','implements',_0x487a15(0x4e10),_0x487a15(0xf95),_0x487a15(0x4321),_0x487a15(0x31a3),_0x487a15(0xf3c),'byte','super',_0x487a15(0x3512),_0x487a15(0x2e7e),'assert',_0x487a15(0x4085),_0x487a15(0x4bd0),_0x487a15(0x3d23),_0x487a15(0x5024),_0x487a15(0x39ce),_0x487a15(0x422b),_0x487a15(0x138f),_0x487a15(0x857),'continue','throws','privileged',_0x487a15(0x2f1f),'adviceexecution','proceed',_0x487a15(0x3ce2),_0x487a15(0x11aa),_0x487a15(0x1ec7),_0x487a15(0x173b),'staticinitialization',_0x487a15(0x37cc),_0x487a15(0x1bc7),_0x487a15(0x5c5),_0x487a15(0x1b53),'getWithinTypeName',_0x487a15(0x1b6b),'thisJoinPoint',_0x487a15(0x1bc0),_0x487a15(0x2af2),'declare',_0x487a15(0x1ed8),_0x487a15(0x4020),_0x487a15(0x3d85),_0x487a15(0x15be),_0x487a15(0x896),_0x487a15(0x4f09)],_0x2b7dba=['get',_0x487a15(0x1fa),_0x487a15(0x3b59),'call'];return{'name':'AspectJ','keywords':_0x19826e,'illegal':/<\/|#/,'contains':[_0xf69a2a[_0x487a15(0x4e4f)](/\/\*\*/,/\*\//,{'relevance':0x0,'contains':[{'begin':/\w+@/,'relevance':0x0},{'className':'doctag','begin':/@[A-Za-z]+/}]}),_0xf69a2a[_0x487a15(0x2ae2)],_0xf69a2a[_0x487a15(0x23fe)],_0xf69a2a[_0x487a15(0xa4c)],_0xf69a2a['QUOTE_STRING_MODE'],{'className':_0x487a15(0x1390),'beginKeywords':_0x487a15(0x4ff),'end':/[{;=]/,'excludeEnd':!0x0,'illegal':/[:;"\[\]]/,'contains':[{'beginKeywords':_0x487a15(0x2923)},_0xf69a2a[_0x487a15(0xb0e)],{'begin':/\([^\)]*/,'end':/[)]+/,'keywords':_0x19826e[_0x487a15(0x1d1d)](_0x2b7dba),'excludeEnd':!0x1}]},{'className':_0x487a15(0x1390),'beginK
eywords':_0x487a15(0x1ba3),'end':/[{;=]/,'excludeEnd':!0x0,'relevance':0x0,'keywords':_0x487a15(0x1ba3),'illegal':/[:"\[\]]/,'contains':[{'beginKeywords':_0x487a15(0x4c1c)},_0xf69a2a[_0x487a15(0xb0e)]]},{'beginKeywords':_0x487a15(0x1704),'end':/[)]/,'excludeEnd':!0x1,'illegal':/["\[\]]/,'contains':[{'begin':_0x363a02[_0x487a15(0x1d1d)](_0xf69a2a[_0x487a15(0x206e)],/\s*\(/),'returnBegin':!0x0,'contains':[_0xf69a2a[_0x487a15(0xb0e)]]}]},{'begin':/[:]/,'returnBegin':!0x0,'end':/[{;]/,'relevance':0x0,'excludeEnd':!0x1,'keywords':_0x19826e,'illegal':/["\[\]]/,'contains':[{'begin':_0x363a02[_0x487a15(0x1d1d)](_0xf69a2a['UNDERSCORE_IDENT_RE'],/\s*\(/),'keywords':_0x19826e[_0x487a15(0x1d1d)](_0x2b7dba),'relevance':0x0},_0xf69a2a[_0x487a15(0x291b)]]},{'beginKeywords':'new\x20throw','relevance':0x0},{'className':'function','begin':/\w+ +\w+(\.\w+)?\s*\([^\)]*\)\s*((throws)[\w\s,]+)?[\{;]/,'returnBegin':!0x0,'end':/[{;=]/,'keywords':_0x19826e,'excludeEnd':!0x0,'contains':[{'begin':_0x363a02[_0x487a15(0x1d1d)](_0xf69a2a[_0x487a15(0x206e)],/\s*\(/),'returnBegin':!0x0,'relevance':0x0,'contains':[_0xf69a2a[_0x487a15(0xb0e)]]},{'className':_0x487a15(0xddd),'begin':/\(/,'end':/\)/,'relevance':0x0,'keywords':_0x19826e,'contains':[_0xf69a2a[_0x487a15(0xa4c)],_0xf69a2a[_0x487a15(0x291b)],_0xf69a2a[_0x487a15(0xd12)],_0xf69a2a['C_BLOCK_COMMENT_MODE']]},_0xf69a2a[_0x487a15(0x2ae2)],_0xf69a2a[_0x487a15(0x23fe)]]},_0xf69a2a[_0x487a15(0xd12)],{'className':_0x487a15(0x5153),'begin':/@[A-Za-z]+/}]};};},0x1156:_0xe3cb1e=>{const _0x42104b=a0_0x11e7;_0xe3cb1e[_0x42104b(0x474c)]=function(_0x18d7a0){const 
_0x21dc31=_0x42104b,_0x32059e={'begin':'`[\x5cs\x5cS]'};return{'name':'AutoHotkey','case_insensitive':!0x0,'aliases':[_0x21dc31(0x1315)],'keywords':{'keyword':'Break\x20Continue\x20Critical\x20Exit\x20ExitApp\x20Gosub\x20Goto\x20New\x20OnExit\x20Pause\x20return\x20SetBatchLines\x20SetTimer\x20Suspend\x20Thread\x20Throw\x20Until\x20ahk_id\x20ahk_class\x20ahk_pid\x20ahk_exe\x20ahk_group','literal':_0x21dc31(0x462d),'built_in':_0x21dc31(0x4d06)},'contains':[_0x32059e,_0x18d7a0[_0x21dc31(0x46a1)](_0x18d7a0[_0x21dc31(0x291b)],{'contains':[_0x32059e]}),_0x18d7a0[_0x21dc31(0x4e4f)](';','$',{'relevance':0x0}),_0x18d7a0[_0x21dc31(0x23fe)],{'className':'number','begin':_0x18d7a0[_0x21dc31(0x5047)],'relevance':0x0},{'className':_0x21dc31(0x3362),'begin':_0x21dc31(0x2f46)},{'className':_0x21dc31(0x43a),'begin':_0x21dc31(0x32cd)},{'className':_0x21dc31(0x4685),'variants':[{'begin':_0x21dc31(0x4114)},{'begin':_0x21dc31(0x23a1),'relevance':0x0}]},{'className':'meta','begin':'^\x5cs*#\x5cw+','end':'$','relevance':0x0},{'className':'built_in','begin':_0x21dc31(0x1301)},{'begin':',\x5cs*,'}]};};},0x7e5:_0x41f9a5=>{_0x41f9a5['exports']=function(_0x24aa85){const 
_0x39c962=a0_0x11e7,_0x3dd78={'variants':[_0x24aa85[_0x39c962(0x4e4f)](';','$',{'relevance':0x0}),_0x24aa85[_0x39c962(0x4e4f)](_0x39c962(0x2575),_0x39c962(0x3069)),_0x24aa85[_0x39c962(0x4e4f)]('#comments-start',_0x39c962(0x2ade))]},_0x3cc629={'begin':'\x5c$[A-z0-9_]+'},_0x59dc5f={'className':_0x39c962(0x2431),'variants':[{'begin':/"/,'end':/"/,'contains':[{'begin':/""/,'relevance':0x0}]},{'begin':/'/,'end':/'/,'contains':[{'begin':/''/,'relevance':0x0}]}]},_0x5e5e75={'variants':[_0x24aa85[_0x39c962(0xed7)],_0x24aa85[_0x39c962(0xd12)]]};return{'name':_0x39c962(0x4d33),'case_insensitive':!0x0,'illegal':/\/\*/,'keywords':{'keyword':'ByRef\x20Case\x20Const\x20ContinueCase\x20ContinueLoop\x20Dim\x20Do\x20Else\x20ElseIf\x20EndFunc\x20EndIf\x20EndSelect\x20EndSwitch\x20EndWith\x20Enum\x20Exit\x20ExitLoop\x20For\x20Func\x20Global\x20If\x20In\x20Local\x20Next\x20ReDim\x20Return\x20Select\x20Static\x20Step\x20Switch\x20Then\x20To\x20Until\x20Volatile\x20WEnd\x20While\x20With','built_in':_0x39c962(0x4d66),'literal':_0x39c962(0x4e94)},'contains':[_0x3dd78,_0x3cc629,_0x59dc5f,_0x5e5e75,{'className':_0x39c962(0x5153),'begin':'#','end':'$','keywords':{'keyword':[_0x39c962(0x230),_0x39c962(0x4c81),'forceref',_0x39c962(0x81e),_0x39c962(0x478e),_0x39c962(0x3f7c),_0x39c962(0x160c),_0x39c962(0x47e0),'pragma',_0x39c962(0x3d8e),_0x39c962(0x146e),_0x39c962(0xf28),_0x39c962(0x16fc),_0x39c962(0x2e6b)]},'contains':[{'begin':/\\\n/,'relevance':0x0},{'beginKeywords':_0x39c962(0x478e),'keywords':{'keyword':_0x39c962(0x478e)},'end':'$','contains':[_0x59dc5f,{'className':_0x39c962(0x2431),'variants':[{'begin':'<','end':'>'},{'begin':/"/,'end':/"/,'contains':[{'begin':/""/,'relevance':0x0}]},{'begin':/'/,'end':/'/,'contains':[{'begin':/''/,'relevance':0x0}]}]}]},_0x59dc5f,_0x3dd78]},{'className':'symbol','begin':_0x39c962(0x3d63)},{'beginKeywords':_0x39c962(0x412d),'end':'$','illegal':_0x39c962(0x2c6f),'contains':[_0x24aa85['inherit'](_0x24aa85[_0x39c962(0xb0e)],{'className':_0x39c962(0x20db)}),{'
className':_0x39c962(0xddd),'begin':'\x5c(','end':'\x5c)','contains':[_0x3cc629,_0x59dc5f,_0x5e5e75]}]}]};};},0x275:_0x51737c=>{const _0x5e50db=a0_0x11e7;_0x51737c[_0x5e50db(0x474c)]=function(_0x3d5ae8){const _0x4ddc19=_0x5e50db;return{'name':_0x4ddc19(0xd73),'case_insensitive':!0x0,'keywords':{'$pattern':_0x4ddc19(0x488)+_0x3d5ae8[_0x4ddc19(0xacc)],'keyword':_0x4ddc19(0x2d15),'built_in':_0x4ddc19(0x4a12),'meta':'.byte\x20.cseg\x20.db\x20.def\x20.device\x20.dseg\x20.dw\x20.endmacro\x20.equ\x20.eseg\x20.exit\x20.include\x20.list\x20.listmac\x20.macro\x20.nolist\x20.org\x20.set'},'contains':[_0x3d5ae8[_0x4ddc19(0x23fe)],_0x3d5ae8[_0x4ddc19(0x4e4f)](';','$',{'relevance':0x0}),_0x3d5ae8[_0x4ddc19(0xd12)],_0x3d5ae8[_0x4ddc19(0xed7)],{'className':_0x4ddc19(0x4a80),'begin':_0x4ddc19(0x4617)},_0x3d5ae8['QUOTE_STRING_MODE'],{'className':_0x4ddc19(0x2431),'begin':'\x27','end':_0x4ddc19(0x2324),'illegal':_0x4ddc19(0x34cc)},{'className':_0x4ddc19(0x239b),'begin':_0x4ddc19(0x1a62)},{'className':'meta','begin':'#','end':'$'},{'className':'subst','begin':_0x4ddc19(0x308f)}]};};},0xa8e:_0x1282db=>{const _0xa67534=a0_0x11e7;_0x1282db[_0xa67534(0x474c)]=function(_0x4eb583){const _0x3db250=_0xa67534;return{'name':'Awk','keywords':{'keyword':_0x3db250(0x4650)},'contains':[{'className':_0x3db250(0x3362),'variants':[{'begin':/\$[\w\d#@][\w\d_]*/},{'begin':/\$\{(.*?)\}/}]},{'className':_0x3db250(0x2431),'contains':[_0x4eb583[_0x3db250(0x4a76)]],'variants':[{'begin':/(u|b)?r?'''/,'end':/'''/,'relevance':0xa},{'begin':/(u|b)?r?"""/,'end':/"""/,'relevance':0xa},{'begin':/(u|r|ur)'/,'end':/'/,'relevance':0xa},{'begin':/(u|r|ur)"/,'end':/"/,'relevance':0xa},{'begin':/(b|br)'/,'end':/'/},{'begin':/(b|br)"/,'end':/"/},_0x4eb583[_0x3db250(0xa4c)],_0x4eb583[_0x3db250(0x291b)]]},_0x4eb583['REGEXP_MODE'],_0x4eb583[_0x3db250(0x2bbe)],_0x4eb583['NUMBER_MODE']]};};},0xf64:_0x48def7=>{_0x48def7['exports']=function(_0x18257d){const 
_0x30011e=a0_0x11e7,_0x2d206a=_0x18257d[_0x30011e(0x206e)],_0x6bfdc0={'keyword':[_0x30011e(0x3027),'as',_0x30011e(0x637),'avg',_0x30011e(0x4e10),_0x30011e(0x36f1),'by',_0x30011e(0x48e6),_0x30011e(0x2e7e),'catch',_0x30011e(0x5293),_0x30011e(0x1390),_0x30011e(0x5a3),'client',_0x30011e(0x1484),'const',_0x30011e(0x16d9),_0x30011e(0x404e),'crosscompany',_0x30011e(0x1674),'delete_from',_0x30011e(0x3c57),_0x30011e(0x12ca),'div','do','edit','else',_0x30011e(0x249e),'exists','extends',_0x30011e(0x27e4),'finally',_0x30011e(0x3d30),_0x30011e(0x35c1),_0x30011e(0x2fe8),'firstonly10',_0x30011e(0xbed),'firstonly1000',_0x30011e(0x14a2),_0x30011e(0x3c19),_0x30011e(0x176d),_0x30011e(0x3fc3),_0x30011e(0x100a),_0x30011e(0x18cb),_0x30011e(0x5f6),_0x30011e(0x27e6),_0x30011e(0x35d7),_0x30011e(0x4e5b),'hint','if',_0x30011e(0x6c3),'in','index',_0x30011e(0x155f),_0x30011e(0x321b),_0x30011e(0x2dad),'is',_0x30011e(0x3541),_0x30011e(0x83d),_0x30011e(0x53e),_0x30011e(0x1f03),_0x30011e(0x4531),_0x30011e(0x37f7),_0x30011e(0x4321),_0x30011e(0x3dc6),_0x30011e(0x1df1),'notexists',_0x30011e(0x1197),_0x30011e(0xd8d),'outer',_0x30011e(0x60e),_0x30011e(0x4957),_0x30011e(0x4ef4),_0x30011e(0xc14),_0x30011e(0x39ce),_0x30011e(0x1aa2),_0x30011e(0x447e),_0x30011e(0x1977),_0x30011e(0xdfd),_0x30011e(0x78b),_0x30011e(0x3fc9),_0x30011e(0x395b),_0x30011e(0x671),_0x30011e(0x2c7c),'sum',_0x30011e(0x2cc),_0x30011e(0x857),_0x30011e(0x138f),'throw','try',_0x30011e(0x39ad),_0x30011e(0x4587),_0x30011e(0x42f4),_0x30011e(0xb09),_0x30011e(0x51af),_0x30011e(0x347a),_0x30011e(0x4e7a),_0x30011e(0x27d6),_0x30011e(0x3b62),_0x30011e(0x552)],'built_in':[_0x30011e(0x42f9),_0x30011e(0x1e8d),_0x30011e(0x961),_0x30011e(0x373c),'container',_0x30011e(0x40d9),_0x30011e(0x5024),_0x30011e(0x44d8),_0x30011e(0x614),_0x30011e(0xc16),'int64',_0x30011e(0x324f),_0x30011e(0x47f6),'short',_0x30011e(0x257f),_0x30011e(0x9be),_0x30011e(0x469d)],'literal':[_0x30011e(0x3d23),'false','null',_0x30011e(0x4022)]},_0x4f4a35={'variants':[{'match':[/(class|int
erface)\s+/,_0x2d206a,/\s+(extends|implements)\s+/,_0x2d206a]},{'match':[/class\s+/,_0x2d206a]}],'scope':{0x2:_0x30011e(0x19e4),0x4:_0x30011e(0x3235)},'keywords':_0x6bfdc0};return{'name':'X++','aliases':[_0x30011e(0x434)],'keywords':_0x6bfdc0,'contains':[_0x18257d[_0x30011e(0x2ae2)],_0x18257d[_0x30011e(0x23fe)],_0x18257d[_0x30011e(0xa4c)],_0x18257d['QUOTE_STRING_MODE'],_0x18257d[_0x30011e(0xd12)],{'className':_0x30011e(0x5153),'begin':'#','end':'$'},_0x4f4a35]};};},0x21c1:_0x31eec2=>{_0x31eec2['exports']=function(_0x3bf912){const _0x1ddf50=a0_0x11e7,_0x460a42=_0x3bf912[_0x1ddf50(0x41d2)],_0x536303={},_0x233bb7={'begin':/\$\{/,'end':/\}/,'contains':[_0x1ddf50(0x4454),{'begin':/:-/,'contains':[_0x536303]}]};Object[_0x1ddf50(0x4e14)](_0x536303,{'className':'variable','variants':[{'begin':_0x460a42[_0x1ddf50(0x1d1d)](/\$[\w\d#@][\w\d_]*/,_0x1ddf50(0x1ff5))},_0x233bb7]});const _0x5c4264={'className':_0x1ddf50(0x2ad6),'begin':/\$\(/,'end':/\)/,'contains':[_0x3bf912['BACKSLASH_ESCAPE']]},_0x1a18ec={'begin':/<<-?\s*(?=\w+)/,'starts':{'contains':[_0x3bf912[_0x1ddf50(0x453e)]({'begin':/(\w+)/,'end':/(\w+)/,'className':_0x1ddf50(0x2431)})]}},_0x1cb26a={'className':_0x1ddf50(0x2431),'begin':/"/,'end':/"/,'contains':[_0x3bf912[_0x1ddf50(0x4a76)],_0x536303,_0x5c4264]};_0x5c4264[_0x1ddf50(0x2b31)]['push'](_0x1cb26a);const 
_0x4889b7={'begin':/\$?\(\(/,'end':/\)\)/,'contains':[{'begin':/\d+#[0-9a-f]+/,'className':_0x1ddf50(0x4a80)},_0x3bf912[_0x1ddf50(0x30be)],_0x536303]},_0x1a79e5=_0x3bf912[_0x1ddf50(0x307a)]({'binary':'('+['fish',_0x1ddf50(0x3be4),_0x1ddf50(0x130c),'sh',_0x1ddf50(0x109f),'ksh',_0x1ddf50(0x253e),_0x1ddf50(0x1983),_0x1ddf50(0x4451)][_0x1ddf50(0x3541)]('|')+')','relevance':0xa}),_0x2f1724={'className':_0x1ddf50(0x14b2),'begin':/\w[\w\d_]*\s*\(\s*\)\s*\{/,'returnBegin':!0x0,'contains':[_0x3bf912[_0x1ddf50(0x46a1)](_0x3bf912[_0x1ddf50(0x2029)],{'begin':/\w[\w\d_]*/})],'relevance':0x0};return{'name':_0x1ddf50(0x1e7e),'aliases':['sh'],'keywords':{'$pattern':/\b[a-z][a-z0-9._-]+\b/,'keyword':['if',_0x1ddf50(0xaf5),_0x1ddf50(0x3d4),_0x1ddf50(0x4ef2),'fi','for',_0x1ddf50(0x552),_0x1ddf50(0x30d6),'in','do',_0x1ddf50(0x37e),_0x1ddf50(0x2e7e),_0x1ddf50(0x23e1),'function',_0x1ddf50(0x3fc9)],'literal':[_0x1ddf50(0x4022),_0x1ddf50(0x3984)],'built_in':[_0x1ddf50(0x4e10),'cd',_0x1ddf50(0x16d9),'eval',_0x1ddf50(0x198d),_0x1ddf50(0x4c7b),'export',_0x1ddf50(0x468),'hash',_0x1ddf50(0x44cf),_0x1ddf50(0x1aa2),_0x1ddf50(0xdfd),_0x1ddf50(0x34fe),_0x1ddf50(0x1769),_0x1ddf50(0xc31),_0x1ddf50(0x20da),_0x1ddf50(0x4709),'unset',_0x1ddf50(0xa94),'bind','builtin','caller',_0x1ddf50(0x2b63),_0x1ddf50(0x45e8),_0x1ddf50(0x4978),_0x1ddf50(0x4fa),'help','let',_0x1ddf50(0x16a7),'logout',_0x1ddf50(0x2323),'printf','read',_0x1ddf50(0x1e03),'source','type','typeset',_0x1ddf50(0x417a),_0x1ddf50(0x22fe),_0x1ddf50(0x1fa),_0x1ddf50(0x240),_0x1ddf50(0x4b98),'bg',_0x1ddf50(0x511b),'bye',_0x1ddf50(0xc2c),_0x1ddf50(0x4772),_0x1ddf50(0x150c),_0x1ddf50(0x3aec),'compcall',_0x1ddf50(0xf6d),'compdescribe',_0x1ddf50(0x4788),'compgroups',_0x1ddf50(0x367),_0x1ddf50(0x4a77),_0x1ddf50(0x189c),_0x1ddf50(0x3e25),_0x1ddf50(0x2708),_0x1ddf50(0x4124),_0x1ddf50(0x4e6e),_0x1ddf50(0x1960),'echoti',_0x1ddf50(0x42e7),'fc','fg',_0x1ddf50(0x1ab8),_0x1ddf50(0x4b4c),_0x1ddf50(0x13e9),_0x1ddf50(0x5126),'history',_0x1ddf50(0x410f),_0x1ddf50(
0x4c4b),_0x1ddf50(0x3c2d),_0x1ddf50(0x1c08),'log',_0x1ddf50(0x17f7),_0x1ddf50(0x603),'print',_0x1ddf50(0xd4f),_0x1ddf50(0x37eb),_0x1ddf50(0xd23),_0x1ddf50(0x3929),_0x1ddf50(0x740),'setopt','stat',_0x1ddf50(0xab9),'ttyctl',_0x1ddf50(0x3dd2),_0x1ddf50(0x472d),_0x1ddf50(0x1538),_0x1ddf50(0x8f1),_0x1ddf50(0x345a),_0x1ddf50(0x1c14),_0x1ddf50(0xdaa),_0x1ddf50(0x3b62),_0x1ddf50(0x34e9),_0x1ddf50(0x3300),_0x1ddf50(0x8f0),_0x1ddf50(0x1bfd),_0x1ddf50(0x4f8d),'zmodload',_0x1ddf50(0x281d),_0x1ddf50(0x29ff),_0x1ddf50(0x11dc),'zregexparse',_0x1ddf50(0x25aa),_0x1ddf50(0x1911),_0x1ddf50(0x1e99),'chcon',_0x1ddf50(0x1656),_0x1ddf50(0x44c3),_0x1ddf50(0x1cb7),'cp','dd','df',_0x1ddf50(0x177b),_0x1ddf50(0x3894),'ln','ls',_0x1ddf50(0x2228),_0x1ddf50(0x3b0),_0x1ddf50(0x5125),'mktemp','mv',_0x1ddf50(0x3ae2),'rm',_0x1ddf50(0x4d1a),_0x1ddf50(0x4e36),_0x1ddf50(0x194e),'touch',_0x1ddf50(0x4dbe),_0x1ddf50(0xdb2),'b2sum',_0x1ddf50(0x2165),_0x1ddf50(0x2101),'cat',_0x1ddf50(0x51ce),'comm','csplit',_0x1ddf50(0x34a6),_0x1ddf50(0x2d1c),_0x1ddf50(0x11db),'fold',_0x1ddf50(0x1da1),_0x1ddf50(0x3541),'md5sum','nl',_0x1ddf50(0x1472),'od',_0x1ddf50(0x133d),'ptx','pr',_0x1ddf50(0x17b6),_0x1ddf50(0x27d),_0x1ddf50(0x443c),_0x1ddf50(0x27f8),_0x1ddf50(0x209e),_0x1ddf50(0x3005),_0x1ddf50(0x4c33),_0x1ddf50(0x1117),_0x1ddf50(0x13b9),_0x1ddf50(0x5219),_0x1ddf50(0x25c1),'tr',_0x1ddf50(0x3cce),'unexpand',_0x1ddf50(0x50ff),'wc',_0x1ddf50(0x11a8),_0x1ddf50(0x2de5),_0x1ddf50(0x4d69),_0x1ddf50(0x40d9),_0x1ddf50(0x91c),'du',_0x1ddf50(0x4978),'env',_0x1ddf50(0x226e),_0x1ddf50(0x2af),_0x1ddf50(0x9e6),_0x1ddf50(0x3ca4),'id','link',_0x1ddf50(0x1163),_0x1ddf50(0x29ab),_0x1ddf50(0x337c),_0x1ddf50(0x412e),_0x1ddf50(0x3d38),_0x1ddf50(0x4301),_0x1ddf50(0x45f3),_0x1ddf50(0x32fe),_0x1ddf50(0x44cf),_0x1ddf50(0x461e),_0x1ddf50(0x3c22),_0x1ddf50(0xc26),_0x1ddf50(0x4b2e),_0x1ddf50(0x8f5),'stdbuf','stty',_0x1ddf50(0x4f35),_0x1ddf50(0x1769),'timeout',_0x1ddf50(0x1c1a),_0x1ddf50(0x1611),_0x1ddf50(0x1e6f),_0x1ddf50(0x90b),'users',_0x1ddf50(0x
3e11),'whoami',_0x1ddf50(0x1df8)]},'contains':[_0x1a79e5,_0x3bf912[_0x1ddf50(0x307a)](),_0x2f1724,_0x4889b7,_0x3bf912[_0x1ddf50(0x2bbe)],_0x1a18ec,{'match':/(\/[a-z._-]+)+/},_0x1cb26a,{'match':/\\"/},{'className':_0x1ddf50(0x2431),'begin':/'/,'end':/'/},{'match':/\\'/},_0x536303]};};},0x1991:_0x479078=>{const _0xfe93b9=a0_0x11e7;_0x479078[_0xfe93b9(0x474c)]=function(_0x1c0dbf){const _0x2bf747=_0xfe93b9;return{'name':_0x2bf747(0x3055),'case_insensitive':!0x0,'illegal':'^.','keywords':{'$pattern':_0x2bf747(0x4a7b),'keyword':[_0x2bf747(0x3a8),'ASC','AND',_0x2bf747(0x914),'AUTO|0',_0x2bf747(0x2a66),_0x2bf747(0x1d91),_0x2bf747(0x1542),_0x2bf747(0x42ab),_0x2bf747(0x294a),_0x2bf747(0x17ab),'CHAIN','CHDIR',_0x2bf747(0x629),'CINT',_0x2bf747(0x1c9f),_0x2bf747(0x983),'CLOSE',_0x2bf747(0xda2),_0x2bf747(0x239c),'COM',_0x2bf747(0x3779),_0x2bf747(0x4527),_0x2bf747(0xc1f),'CSNG','CSRLIN',_0x2bf747(0x1fd3),_0x2bf747(0x37f8),'CVS',_0x2bf747(0x4b2f),_0x2bf747(0x5a0),_0x2bf747(0x3545),'DEFINT','DEFSNG','DEFSTR',_0x2bf747(0x2421),_0x2bf747(0x41ce),_0x2bf747(0x486b),_0x2bf747(0x24a),_0x2bf747(0x1041),_0x2bf747(0x2f5f),_0x2bf747(0x38a1),_0x2bf747(0x2acb),'ENVIRON','ENVIRON$','EOF',_0x2bf747(0xc3a),_0x2bf747(0x2c2d),_0x2bf747(0x33e2),'ERDEV$',_0x2bf747(0x62d),_0x2bf747(0x1e2b),_0x2bf747(0x2d47),_0x2bf747(0x34b3),_0x2bf747(0x3630),_0x2bf747(0x13df),'FIX','FOR|0','FRE',_0x2bf747(0x1802),_0x2bf747(0x3e83),_0x2bf747(0x1345),'HEX$','IF','THEN',_0x2bf747(0x877),_0x2bf747(0x3141),_0x2bf747(0x4175),_0x2bf747(0x1d44),'INPUT#',_0x2bf747(0x1101),_0x2bf747(0x33a),_0x2bf747(0x15b3),_0x2bf747(0x4f97),'IOCTL',_0x2bf747(0x33ae),_0x2bf747(0x411),'ON',_0x2bf747(0x36c3),_0x2bf747(0x18ab),_0x2bf747(0x3c77),_0x2bf747(0x1334),_0x2bf747(0x4fa1),_0x2bf747(0x36d6),_0x2bf747(0x1e7d),_0x2bf747(0x3ca0),_0x2bf747(0xedc),_0x2bf747(0x4125),_0x2bf747(0x30cf),_0x2bf747(0x2ab0),_0x2bf747(0x131a),_0x2bf747(0x14d1),_0x2bf747(0x1669),'LSET',_0x2bf747(0x754),'MID$',_0x2bf747(0x3f0),'MKD$','MKI$',_0x2bf747(0x60d),'MOD','NAME',_
0x2bf747(0x3f8d),_0x2bf747(0x4fa8),_0x2bf747(0xe0d),_0x2bf747(0x255e),_0x2bf747(0x4b06),'ON','OR',_0x2bf747(0x11be),_0x2bf747(0x2236),_0x2bf747(0x250f),'OPEN',_0x2bf747(0x19cf),_0x2bf747(0x2fe6),_0x2bf747(0x1aa8),_0x2bf747(0x4c3f),'PALETTE',_0x2bf747(0x1bf2),_0x2bf747(0x7e9),_0x2bf747(0x4ce6),_0x2bf747(0x4abe),'POKE',_0x2bf747(0x21b),_0x2bf747(0x4b80),_0x2bf747(0x3f57),_0x2bf747(0x47cc),'PRESET','PUT',_0x2bf747(0x4df9),_0x2bf747(0x7c9),'REM',_0x2bf747(0x49f6),'RESET|0','RESTORE',_0x2bf747(0xcaf),_0x2bf747(0x496d),_0x2bf747(0x3e4a),_0x2bf747(0xfbe),_0x2bf747(0x23f5),_0x2bf747(0x4156),_0x2bf747(0x8ec),_0x2bf747(0x4436),'SCREEN',_0x2bf747(0x3ea1),_0x2bf747(0x1ed1),_0x2bf747(0x2763),_0x2bf747(0x3e56),_0x2bf747(0x43b8),_0x2bf747(0x45ff),'SQR','STEP',_0x2bf747(0x4535),_0x2bf747(0x4ab6),_0x2bf747(0x9de),_0x2bf747(0x2822),'SWAP',_0x2bf747(0x543),_0x2bf747(0x3be8),_0x2bf747(0x2835),_0x2bf747(0x37a9),'TIMER',_0x2bf747(0x3ab7),_0x2bf747(0x1c5b),'TO',_0x2bf747(0x486b),'VAL','VARPTR',_0x2bf747(0x3fc),_0x2bf747(0x5254),'WAIT',_0x2bf747(0xd58),_0x2bf747(0x4494),_0x2bf747(0x31b4),'WINDOW','WRITE',_0x2bf747(0x24e6)]},'contains':[_0x1c0dbf[_0x2bf747(0x291b)],_0x1c0dbf[_0x2bf747(0x4e4f)](_0x2bf747(0x30c9),'$',{'relevance':0xa}),_0x1c0dbf['COMMENT']('\x27','$',{'relevance':0x0}),{'className':_0x2bf747(0x239b),'begin':_0x2bf747(0x2c0f),'relevance':0xa},{'className':_0x2bf747(0x4a80),'begin':_0x2bf747(0x1e35),'relevance':0x0},{'className':_0x2bf747(0x4a80),'begin':_0x2bf747(0x16a3)},{'className':'number','begin':_0x2bf747(0x2017)}]};};},0x1a8d:_0x4aa4e1=>{const _0x314300=a0_0x11e7;_0x4aa4e1[_0x314300(0x474c)]=function(_0x1cde6b){const _0x4a5d2c=_0x314300;return{'name':'Backus–Naur\x20Form','contains':[{'className':'attribute','begin'://},{'begin':/::=/,'end':/$/,'contains':[{'begin'://},_0x1cde6b['C_LINE_COMMENT_MODE'],_0x1cde6b[_0x4a5d2c(0x23fe)],_0x1cde6b[_0x4a5d2c(0xa4c)],_0x1cde6b[_0x4a5d2c(0x291b)]]}]};};},0x7bc:_0x278c73=>{const 
_0x4aafc9=a0_0x11e7;_0x278c73[_0x4aafc9(0x474c)]=function(_0x55eb5b){const _0x2ec4ba=_0x4aafc9,_0x2b5d61={'className':'literal','begin':/[+-]+/,'relevance':0x0};return{'name':_0x2ec4ba(0xa64),'aliases':['bf'],'contains':[_0x55eb5b[_0x2ec4ba(0x4e4f)](/[^\[\]\.,\+\-<> \r\n]/,/[\[\]\.,\+\-<> \r\n]/,{'contains':[{'match':/[ ]+[^\[\]\.,\+\-<> \r\n]/,'relevance':0x0}],'returnEnd':!0x0,'relevance':0x0}),{'className':_0x2ec4ba(0x4685),'begin':_0x2ec4ba(0x29fa),'relevance':0x0},{'className':_0x2ec4ba(0x2431),'begin':'[\x5c.,]','relevance':0x0},{'begin':/(?=\+\+|--)/,'contains':[_0x2b5d61]},_0x2b5d61]};};},0x2d2:_0xf2bd18=>{const _0x313a16=a0_0x11e7;_0xf2bd18[_0x313a16(0x474c)]=function(_0x4dda81){const _0x29d84e=_0x313a16,_0x5c78a8=_0x4dda81[_0x29d84e(0x41d2)],_0x7f0ae=_0x4dda81[_0x29d84e(0x4e4f)]('//','$',{'contains':[{'begin':/\\\n/}]}),_0x2b4b90=_0x29d84e(0xbea),_0x234430='[a-zA-Z_]\x5cw*::',_0x18b684='('+_0x2b4b90+'|'+_0x5c78a8[_0x29d84e(0x51e4)](_0x234430)+_0x29d84e(0x5242)+_0x5c78a8[_0x29d84e(0x51e4)](_0x29d84e(0x4668))+')',_0x3e59b9={'className':_0x29d84e(0xcfc),'variants':[{'begin':_0x29d84e(0x37d3)},{'match':/\batomic_[a-z]{3,6}\b/}]},_0x3bdb94={'className':_0x29d84e(0x2431),'variants':[{'begin':_0x29d84e(0xd65),'end':'\x22','illegal':'\x5cn','contains':[_0x4dda81[_0x29d84e(0x4a76)]]},{'begin':_0x29d84e(0x4375),'end':'\x27','illegal':'.'},_0x4dda81[_0x29d84e(0x453e)]({'begin':/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,'end':/\)([^()\\ 
]{0,16})"/})]},_0x55fa74={'className':_0x29d84e(0x4a80),'variants':[{'begin':_0x29d84e(0x219f)},{'begin':_0x29d84e(0x10f1)},{'begin':_0x29d84e(0x2b89)}],'relevance':0x0},_0x228ffd={'className':'meta','begin':/#\s*[a-z]+\b/,'end':/$/,'keywords':{'keyword':_0x29d84e(0x3bbe)},'contains':[{'begin':/\\\n/,'relevance':0x0},_0x4dda81[_0x29d84e(0x46a1)](_0x3bdb94,{'className':_0x29d84e(0x2431)}),{'className':_0x29d84e(0x2431),'begin':/<.*?>/},_0x7f0ae,_0x4dda81[_0x29d84e(0x23fe)]]},_0x1eebd3={'className':_0x29d84e(0x4685),'begin':_0x5c78a8[_0x29d84e(0x51e4)](_0x234430)+_0x4dda81[_0x29d84e(0xacc)],'relevance':0x0},_0x16ddae=_0x5c78a8[_0x29d84e(0x51e4)](_0x234430)+_0x4dda81['IDENT_RE']+_0x29d84e(0x7ef),_0x221dfb={'keyword':['asm','auto',_0x29d84e(0x4e10),_0x29d84e(0x2e7e),'continue',_0x29d84e(0x3d23),'do',_0x29d84e(0x3d4),_0x29d84e(0x44d8),_0x29d84e(0x2068),'for','fortran',_0x29d84e(0x139c),'if','inline','register',_0x29d84e(0x5027),'return',_0x29d84e(0xc5d),_0x29d84e(0x4146),'switch',_0x29d84e(0xc83),'union',_0x29d84e(0x3512),_0x29d84e(0x552),'_Alignas',_0x29d84e(0x610),_0x29d84e(0x4caf),_0x29d84e(0x37dc),'_Noreturn','_Static_assert','_Thread_local',_0x29d84e(0x137d),'alignof',_0x29d84e(0x41ae),_0x29d84e(0x2d1a),_0x29d84e(0x457f),_0x29d84e(0xca4)],'type':[_0x29d84e(0x1ab8),_0x29d84e(0x5024),'signed',_0x29d84e(0x2edb),_0x29d84e(0xc16),_0x29d84e(0x4085),_0x29d84e(0x324f),'char',_0x29d84e(0x27d6),'_Bool',_0x29d84e(0x1708),_0x29d84e(0x4882),'_Decimal32',_0x29d84e(0xe14),'_Decimal128',_0x29d84e(0xc01),_0x29d84e(0x2c7c),'complex',_0x29d84e(0x3ebd),_0x29d84e(0x1056)],'literal':_0x29d84e(0x2f16),'built_in':_0x29d84e(0xc7b)},_0x544072=[_0x228ffd,_0x3e59b9,_0x7f0ae,_0x4dda81[_0x29d84e(0x23fe)],_0x55fa74,_0x3bdb94],_0x28e338={'variants':[{'begin':/=/,'end':/;/},{'begin':/\(/,'end':/\)/},{'beginKeywords':_0x29d84e(0x17e4),'end':/;/}],'keywords':_0x221dfb,'contains':_0x544072[_0x29d84e(0x1d1d)]([{'begin':/\(/,'end':/\)/,'keywords':_0x221dfb,'contains':_0x544072['concat']([_0x29d84e(0x445
4)]),'relevance':0x0}]),'relevance':0x0},_0x550d02={'begin':'('+_0x18b684+_0x29d84e(0x1e78)+_0x16ddae,'returnBegin':!0x0,'end':/[{;=]/,'excludeEnd':!0x0,'keywords':_0x221dfb,'illegal':/[^\w\s\*&:<>.]/,'contains':[{'begin':_0x2b4b90,'keywords':_0x221dfb,'relevance':0x0},{'begin':_0x16ddae,'returnBegin':!0x0,'contains':[_0x4dda81[_0x29d84e(0x46a1)](_0x1eebd3,{'className':'title.function'})],'relevance':0x0},{'relevance':0x0,'match':/,/},{'className':_0x29d84e(0xddd),'begin':/\(/,'end':/\)/,'keywords':_0x221dfb,'relevance':0x0,'contains':[_0x7f0ae,_0x4dda81[_0x29d84e(0x23fe)],_0x3bdb94,_0x55fa74,_0x3e59b9,{'begin':/\(/,'end':/\)/,'keywords':_0x221dfb,'relevance':0x0,'contains':['self',_0x7f0ae,_0x4dda81[_0x29d84e(0x23fe)],_0x3bdb94,_0x55fa74,_0x3e59b9]}]},_0x3e59b9,_0x7f0ae,_0x4dda81[_0x29d84e(0x23fe)],_0x228ffd]};return{'name':'C','aliases':['h'],'keywords':_0x221dfb,'disableAutodetect':!0x0,'illegal':'=]/,'contains':[{'beginKeywords':_0x29d84e(0x16ae)},_0x4dda81[_0x29d84e(0x2029)]]}]),'exports':{'preprocessor':_0x228ffd,'strings':_0x3bdb94,'keywords':_0x221dfb}};};},0x19ab:_0x369efb=>{_0x369efb['exports']=function(_0x5bdb90){const 
_0x433e4a=a0_0x11e7,_0x5c1392=_0x5bdb90[_0x433e4a(0x41d2)],_0x1fea72=[_0x433e4a(0x4c88),_0x433e4a(0x4531),'in',_0x433e4a(0x2663),'or',_0x433e4a(0xc1a),_0x433e4a(0x32a6),_0x433e4a(0x5068),_0x433e4a(0x42fa),'case','do',_0x433e4a(0x4003),_0x433e4a(0x3d4),'end',_0x433e4a(0x4c7b),_0x433e4a(0x3c19),'local','if','of',_0x433e4a(0x11d0),_0x433e4a(0xaf5),'to',_0x433e4a(0x30d6),_0x433e4a(0x552),'with',_0x433e4a(0x469d)],_0xd21ef6=[_0x5bdb90[_0x433e4a(0x2ae2)],_0x5bdb90[_0x433e4a(0x4e4f)](/\{/,/\}/,{'relevance':0x0}),_0x5bdb90['COMMENT'](/\(\*/,/\*\)/,{'relevance':0xa})],_0x1a3c4a={'className':_0x433e4a(0x2431),'begin':/'/,'end':/'/,'contains':[{'begin':/''/}]},_0x479f06={'className':'string','begin':/(#\d+)+/},_0x4f10f0={'match':[/procedure/,/\s+/,/[a-zA-Z_][\w@]*/,/\s*/],'scope':{0x1:_0x433e4a(0x1357),0x3:_0x433e4a(0x20db)},'contains':[{'className':'params','begin':/\(/,'end':/\)/,'keywords':_0x1fea72,'contains':[_0x1a3c4a,_0x479f06,_0x5bdb90[_0x433e4a(0x30be)]]},..._0xd21ef6]},_0x1b2d18={'match':[/OBJECT/,/\s+/,_0x5c1392[_0x433e4a(0x583)](_0x433e4a(0x4dd6),'Form',_0x433e4a(0x1fb1),_0x433e4a(0x735),_0x433e4a(0x153b),'XMLport',_0x433e4a(0x42bb),'Page',_0x433e4a(0x2774)),/\s+/,/\d+/,/\s+(?=[^\s])/,/.*/,/$/],'relevance':0x3,'scope':{0x1:_0x433e4a(0x1357),0x3:_0x433e4a(0xcfc),0x5:_0x433e4a(0x4a80),0x7:_0x433e4a(0x4685)}};return{'name':_0x433e4a(0x104b),'case_insensitive':!0x0,'keywords':{'keyword':_0x1fea72,'literal':'false\x20true'},'illegal':/\/\*/,'contains':[{'match':/[\w]+(?=\=)/,'scope':_0x433e4a(0x263f),'relevance':0x0},_0x1a3c4a,_0x479f06,{'className':'number','begin':_0x433e4a(0x2790),'relevance':0x0},{'className':'string','begin':'\x22','end':'\x22'},_0x5bdb90[_0x433e4a(0x30be)],_0x1b2d18,_0x4f10f0]};};},0x25a7:_0x377688=>{const _0x244cf0=a0_0x11e7;_0x377688[_0x244cf0(0x474c)]=function(_0x225e1a){const 
_0x20fd40=_0x244cf0,_0x5af247={'variants':[{'match':[/(struct|enum|interface)/,/\s+/,_0x225e1a[_0x20fd40(0xacc)]]},{'match':[/extends/,/\s*\(/,_0x225e1a[_0x20fd40(0xacc)],/\s*\)/]}],'scope':{0x1:_0x20fd40(0x1357),0x3:_0x20fd40(0x19e4)}};return{'name':_0x20fd40(0x4198),'aliases':[_0x20fd40(0x32f1)],'keywords':{'keyword':[_0x20fd40(0x4146),_0x20fd40(0x44d8),_0x20fd40(0x321b),_0x20fd40(0x29d),'group',_0x20fd40(0x331),_0x20fd40(0x347a),_0x20fd40(0xc01),_0x20fd40(0x899),_0x20fd40(0x4428),'in','of','on','as',_0x20fd40(0x2aa7),_0x20fd40(0x27e6),_0x20fd40(0x1aaf)],'type':[_0x20fd40(0x4c8c),_0x20fd40(0x3563),_0x20fd40(0x4276),_0x20fd40(0x3996),'Int32',_0x20fd40(0x1921),_0x20fd40(0x445b),_0x20fd40(0x1a89),_0x20fd40(0x519a),'UInt64','Float32','Float64',_0x20fd40(0x2f37),_0x20fd40(0x3591),_0x20fd40(0x3b4c),_0x20fd40(0x1e81),'Capability',_0x20fd40(0x191f)],'literal':[_0x20fd40(0x4022),_0x20fd40(0x3984)]},'contains':[_0x225e1a[_0x20fd40(0x291b)],_0x225e1a[_0x20fd40(0x30be)],_0x225e1a[_0x20fd40(0x2bbe)],{'className':_0x20fd40(0x5153),'begin':/@0x[\w\d]{16};/,'illegal':/\n/},{'className':_0x20fd40(0x239b),'begin':/@\d+\b/},_0x5af247]};};},0x1fb:_0x5158ed=>{const _0x36c0bb=a0_0x11e7;_0x5158ed[_0x36c0bb(0x474c)]=function(_0x49afb7){const 
_0x49c3a0=_0x36c0bb,_0x30132b=['assembly',_0x49c3a0(0x196c),_0x49c3a0(0x4bd0),_0x49c3a0(0x331),_0x49c3a0(0xa94),_0x49c3a0(0x1390),_0x49c3a0(0x321b),'object','given',_0x49c3a0(0x4fe9),_0x49c3a0(0x4e14),_0x49c3a0(0x27d6),_0x49c3a0(0x14b2),'new','of',_0x49c3a0(0x4428),_0x49c3a0(0x4ad2),_0x49c3a0(0x1bd4),'in',_0x49c3a0(0x3ab5),_0x49c3a0(0xdfd),_0x49c3a0(0x4e10),_0x49c3a0(0x16d9),_0x49c3a0(0x383),_0x49c3a0(0x4fd4),_0x49c3a0(0x41aa),'if',_0x49c3a0(0x3d4),_0x49c3a0(0x857),_0x49c3a0(0x2e7e),'for','while','try',_0x49c3a0(0x31a3),_0x49c3a0(0x37b2),_0x49c3a0(0xaf5),'let','this',_0x49c3a0(0x3c1),_0x49c3a0(0x2cc),'is',_0x49c3a0(0x449f),_0x49c3a0(0x518)],_0x1c832b={'className':_0x49c3a0(0x2ad6),'excludeBegin':!0x0,'excludeEnd':!0x0,'begin':/``/,'end':/``/,'keywords':_0x30132b,'relevance':0xa},_0x4b6eab=[{'className':_0x49c3a0(0x2431),'begin':_0x49c3a0(0xb00),'end':_0x49c3a0(0xb00),'relevance':0xa},{'className':_0x49c3a0(0x2431),'begin':'\x22','end':'\x22','contains':[_0x1c832b]},{'className':_0x49c3a0(0x2431),'begin':'\x27','end':'\x27'},{'className':_0x49c3a0(0x4a80),'begin':_0x49c3a0(0x41ac),'relevance':0x0}];return _0x1c832b[_0x49c3a0(0x2b31)]=_0x4b6eab,{'name':_0x49c3a0(0x2a68),'keywords':{'keyword':_0x30132b['concat']([_0x49c3a0(0x2b1c),_0x49c3a0(0x3027),_0x49c3a0(0x10f3),_0x49c3a0(0x3d23),_0x49c3a0(0x2d38),'variable',_0x49c3a0(0x1523),_0x49c3a0(0x50dc),_0x49c3a0(0x3df5),_0x49c3a0(0x27e4),_0x49c3a0(0x3a89),_0x49c3a0(0x899),_0x49c3a0(0x1a9e),_0x49c3a0(0x4830)]),'meta':['doc','by',_0x49c3a0(0x19d5),_0x49c3a0(0x35cd),_0x49c3a0(0x20f8),'tagged']},'illegal':_0x49c3a0(0x1c93),'contains':[_0x49afb7['C_LINE_COMMENT_MODE'],_0x49afb7[_0x49c3a0(0x4e4f)](_0x49c3a0(0x4f94),_0x49c3a0(0x1820),{'contains':[_0x49c3a0(0x4454)]}),{'className':_0x49c3a0(0x5153),'begin':'@[a-z]\x5cw*(?::\x22[^\x22]*\x22)?'}][_0x49c3a0(0x1d1d)](_0x4b6eab)};};},0x41a:_0x49aa74=>{const _0x39ac47=a0_0x11e7;_0x49aa74[_0x39ac47(0x474c)]=function(_0x217185){const 
_0x24dd68=_0x39ac47;return{'name':_0x24dd68(0x452c),'aliases':[_0x24dd68(0x2a9),_0x24dd68(0x34c9)],'keywords':{'keyword':['if',_0x24dd68(0x1e61),'in',_0x24dd68(0x2aa7),'where',_0x24dd68(0x2e7e),'of',_0x24dd68(0x1390),_0x24dd68(0x4279),_0x24dd68(0x313e),_0x24dd68(0x366a),'definition',_0x24dd68(0x2ec4),'module','from',_0x24dd68(0x331),_0x24dd68(0x4cad),'as',_0x24dd68(0x3cd5),_0x24dd68(0x4948),_0x24dd68(0x2988),_0x24dd68(0x20bd),_0x24dd68(0x2bb9),'ccall','stdcall','generic','derive',_0x24dd68(0x10c2),'infixl',_0x24dd68(0x380f)],'built_in':_0x24dd68(0x1589),'literal':_0x24dd68(0x1737)},'contains':[_0x217185[_0x24dd68(0x2ae2)],_0x217185[_0x24dd68(0x23fe)],_0x217185[_0x24dd68(0xa4c)],_0x217185[_0x24dd68(0x291b)],_0x217185[_0x24dd68(0xd12)],{'begin':_0x24dd68(0x2377)}]};};},0x2581:_0x246c51=>{const _0x4743fa=a0_0x11e7;_0x246c51[_0x4743fa(0x474c)]=function(_0x4a8a2d){const _0x467669=_0x4743fa;return{'name':_0x467669(0x3b60),'contains':[{'className':_0x467669(0x4cea),'begin':/^([\w.-]+|\s*#_)?=>/,'starts':{'end':/$/,'subLanguage':_0x467669(0x2a7c)}}]};};},0x1d9b:_0x42f4e5=>{_0x42f4e5['exports']=function(_0x7c19e4){const 
_0x157db8=a0_0x11e7,_0x1c4ae8=_0x157db8(0x473),_0x3317c6='[#]?['+_0x1c4ae8+']['+_0x1c4ae8+_0x157db8(0x205e),_0x40a0aa='def\x20defonce\x20defprotocol\x20defstruct\x20defmulti\x20defmethod\x20defn-\x20defn\x20defmacro\x20deftype\x20defrecord',_0x3aa0d4={'$pattern':_0x3317c6,'built_in':_0x40a0aa+'\x20cond\x20apply\x20if-not\x20if-let\x20if\x20not\x20not=\x20=|0\x20<|0\x20>|0\x20<=|0\x20>=|0\x20==|0\x20+|0\x20/|0\x20*|0\x20-|0\x20rem\x20quot\x20neg?\x20pos?\x20delay?\x20symbol?\x20keyword?\x20true?\x20false?\x20integer?\x20empty?\x20coll?\x20list?\x20set?\x20ifn?\x20fn?\x20associative?\x20sequential?\x20sorted?\x20counted?\x20reversible?\x20number?\x20decimal?\x20class?\x20distinct?\x20isa?\x20float?\x20rational?\x20reduced?\x20ratio?\x20odd?\x20even?\x20char?\x20seq?\x20vector?\x20string?\x20map?\x20nil?\x20contains?\x20zero?\x20instance?\x20not-every?\x20not-any?\x20libspec?\x20->\x20->>\x20..\x20.\x20inc\x20compare\x20do\x20dotimes\x20mapcat\x20take\x20remove\x20take-while\x20drop\x20letfn\x20drop-last\x20take-last\x20drop-while\x20while\x20intern\x20condp\x20case\x20reduced\x20cycle\x20split-at\x20split-with\x20repeat\x20replicate\x20iterate\x20range\x20merge\x20zipmap\x20declare\x20line-seq\x20sort\x20comparator\x20sort-by\x20dorun\x20doall\x20nthnext\x20nthrest\x20partition\x20eval\x20doseq\x20await\x20await-for\x20let\x20agent\x20atom\x20send\x20send-off\x20release-pending-sends\x20add-watch\x20mapv\x20filterv\x20remove-watch\x20agent-error\x20restart-agent\x20set-error-handler\x20error-handler\x20set-error-mode!\x20error-mode\x20shutdown-agents\x20quote\x20var\x20fn\x20loop\x20recur\x20throw\x20try\x20monitor-enter\x20monitor-exit\x20macroexpand\x20macroexpand-1\x20for\x20dosync\x20and\x20or\x20when\x20when-not\x20when-let\x20comp\x20juxt\x20partial\x20sequence\x20memoize\x20constantly\x20complement\x20identity\x20assert\x20peek\x20pop\x20doto\x20proxy\x20first\x20rest\x20cons\x20cast\x20coll\x20last\x20butlast\x20sigs\x20reify\x20second\x20ffirst\x20fnext\x20nf
irst\x20nnext\x20meta\x20with-meta\x20ns\x20in-ns\x20create-ns\x20import\x20refer\x20keys\x20select-keys\x20vals\x20key\x20val\x20rseq\x20name\x20namespace\x20promise\x20into\x20transient\x20persistent!\x20conj!\x20assoc!\x20dissoc!\x20pop!\x20disj!\x20use\x20class\x20type\x20num\x20float\x20double\x20short\x20byte\x20boolean\x20bigint\x20biginteger\x20bigdec\x20print-method\x20print-dup\x20throw-if\x20printf\x20format\x20load\x20compile\x20get-in\x20update-in\x20pr\x20pr-on\x20newline\x20flush\x20read\x20slurp\x20read-line\x20subvec\x20with-open\x20memfn\x20time\x20re-find\x20re-groups\x20rand-int\x20rand\x20mod\x20locking\x20assert-valid-fdecl\x20alias\x20resolve\x20ref\x20deref\x20refset\x20swap!\x20reset!\x20set-validator!\x20compare-and-set!\x20alter-meta!\x20reset-meta!\x20commute\x20get-validator\x20alter\x20ref-set\x20ref-history-count\x20ref-min-history\x20ref-max-history\x20ensure\x20sync\x20io!\x20new\x20next\x20conj\x20set!\x20to-array\x20future\x20future-call\x20into-array\x20aset\x20gen-class\x20reduce\x20map\x20filter\x20find\x20empty\x20hash-map\x20hash-set\x20sorted-map\x20sorted-map-by\x20sorted-set\x20sorted-set-by\x20vec\x20vector\x20seq\x20flatten\x20reverse\x20assoc\x20dissoc\x20list\x20disj\x20get\x20union\x20difference\x20intersection\x20extend\x20extend-type\x20extend-protocol\x20int\x20nth\x20delay\x20count\x20concat\x20chunk\x20chunk-buffer\x20chunk-append\x20chunk-first\x20chunk-rest\x20max\x20min\x20dec\x20unchecked-inc-int\x20unchecked-inc\x20unchecked-dec-inc\x20unchecked-dec\x20unchecked-negate\x20unchecked-add-int\x20unchecked-add\x20unchecked-subtract-int\x20unchecked-subtract\x20chunk-next\x20chunk-cons\x20chunked-seq?\x20prn\x20vary-meta\x20lazy-seq\x20spread\x20list*\x20str\x20find-keyword\x20keyword\x20symbol\x20gensym\x20force\x20rationalize'},_0x463ead={'begin':_0x3317c6,'relevance':0x0},_0x554a98={'scope':_0x157db8(0x4a80),'relevance':0x0,'variants':[{'match':/[-+]?0[xX][0-9a-fA-F]+N?/},{'match':/[-+]?0[0-7]+N?/},{'match':/[-
+]?[1-9][0-9]?[rR][0-9a-zA-Z]+N?/},{'match':/[-+]?[0-9]+\/[0-9]+N?/},{'match':/[-+]?[0-9]+((\.[0-9]*([eE][+-]?[0-9]+)?M?)|([eE][+-]?[0-9]+M?|M))/},{'match':/[-+]?([1-9][0-9]*|0)N?/}]},_0x1c08be={'scope':_0x157db8(0x4b35),'variants':[{'match':/\\o[0-3]?[0-7]{1,2}/},{'match':/\\u[0-9a-fA-F]{4}/},{'match':/\\(newline|space|tab|formfeed|backspace|return)/},{'match':/\\\S/,'relevance':0x0}]},_0x33d100={'scope':_0x157db8(0x41d2),'begin':/#"/,'end':/"/,'contains':[_0x7c19e4['BACKSLASH_ESCAPE']]},_0x443055=_0x7c19e4[_0x157db8(0x46a1)](_0x7c19e4['QUOTE_STRING_MODE'],{'illegal':null}),_0x51b8a0={'scope':'punctuation','match':/,/,'relevance':0x0},_0x3c6bdc=_0x7c19e4[_0x157db8(0x4e4f)](';','$',{'relevance':0x0}),_0x50668a={'className':_0x157db8(0x2706),'begin':/\b(true|false|nil)\b/},_0x5231b6={'begin':'\x5c[|(#::?'+_0x3317c6+_0x157db8(0x2c3f),'end':'[\x5c]\x5c}]','relevance':0x0},_0x3ab21a={'className':_0x157db8(0x239b),'begin':_0x157db8(0x51db)+_0x3317c6},_0x359f52={'begin':'\x5c(','end':'\x5c)'},_0x4b4178={'endsWithParent':!0x0,'relevance':0x0},_0x32f51a={'keywords':_0x3aa0d4,'className':_0x157db8(0x11d8),'begin':_0x3317c6,'relevance':0x0,'starts':_0x4b4178},_0x4b4aaf=[_0x51b8a0,_0x359f52,_0x1c08be,_0x33d100,_0x443055,_0x3c6bdc,_0x3ab21a,_0x5231b6,_0x554a98,_0x50668a,_0x463ead],_0x349eec={'beginKeywords':_0x40a0aa,'keywords':{'$pattern':_0x3317c6,'keyword':_0x40a0aa},'end':_0x157db8(0x2aef),'contains':[{'className':'title','begin':_0x3317c6,'relevance':0x0,'excludeEnd':!0x0,'endsParent':!0x0}][_0x157db8(0x1d1d)](_0x4b4aaf)};return _0x359f52[_0x157db8(0x2b31)]=[_0x349eec,_0x32f51a,_0x4b4178],_0x4b4178[_0x157db8(0x2b31)]=_0x4b4aaf,_0x5231b6['contains']=_0x4b4aaf,{'name':_0x157db8(0x446a),'aliases':['clj','edn'],'illegal':/\S/,'contains':[_0x51b8a0,_0x359f52,_0x1c08be,_0x33d100,_0x443055,_0x3c6bdc,_0x3ab21a,_0x5231b6,_0x554a98,_0x50668a]};};},0x7d8:_0x4e45d4=>{const _0x3ee651=a0_0x11e7;_0x4e45d4[_0x3ee651(0x474c)]=function(_0x319394){const 
_0x3818bd=_0x3ee651;return{'name':_0x3818bd(0xeeb),'aliases':['cmake.in'],'case_insensitive':!0x0,'keywords':{'keyword':_0x3818bd(0xdd9)},'contains':[{'className':_0x3818bd(0x3362),'begin':/\$\{/,'end':/\}/},_0x319394['COMMENT'](/#\[\[/,/]]/),_0x319394[_0x3818bd(0x2bbe)],_0x319394[_0x3818bd(0x291b)],_0x319394[_0x3818bd(0x30be)]]};};},0x20ac:_0xde9143=>{const _0x3a9f29=a0_0x11e7,_0x2f8823=['as','in','of','if','for',_0x3a9f29(0x552),_0x3a9f29(0x37b2),_0x3a9f29(0x469d),'new',_0x3a9f29(0x14b2),'do','return',_0x3a9f29(0x27d6),_0x3a9f29(0x3d4),_0x3a9f29(0x4e10),'catch',_0x3a9f29(0xf3c),_0x3a9f29(0x2aa7),_0x3a9f29(0x383),_0x3a9f29(0x2e7e),_0x3a9f29(0x3d23),_0x3a9f29(0x422b),_0x3a9f29(0x857),_0x3a9f29(0x16d9),'typeof',_0x3a9f29(0x5be),_0x3a9f29(0x1e61),'yield',_0x3a9f29(0xc01),_0x3a9f29(0x1390),_0x3a9f29(0x2085),_0x3a9f29(0x16c2),_0x3a9f29(0x371f),_0x3a9f29(0x2c7c),_0x3a9f29(0x331),'from','export',_0x3a9f29(0x4428)],_0x25839c=[_0x3a9f29(0x4022),'false',_0x3a9f29(0x1582),_0x3a9f29(0x1daa),_0x3a9f29(0x494b),_0x3a9f29(0x2486)],_0x21700f=[]['concat']([_0x3a9f29(0x2a87),_0x3a9f29(0x162d),'clearInterval',_0x3a9f29(0x4f79),'require','exports','eval',_0x3a9f29(0x1d8e),_0x3a9f29(0x4ee0),'parseFloat','parseInt',_0x3a9f29(0x2b42),_0x3a9f29(0x25a3),_0x3a9f29(0x51f2),_0x3a9f29(0x2c10),'escape',_0x3a9f29(0x254e)],[_0x3a9f29(0x108b),_0x3a9f29(0x2ac5),_0x3a9f29(0x3bc9),_0x3a9f29(0x2e8a),_0x3a9f29(0x37c3),_0x3a9f29(0x448e),_0x3a9f29(0x3cd),_0x3a9f29(0x4cd1),_0x3a9f29(0x3327),_0x3a9f29(0xae9),'Array','Float32Array',_0x3a9f29(0x3ae8),_0x3a9f29(0x927),_0x3a9f29(0x15b2),_0x3a9f29(0x5091),_0x3a9f29(0x4f9),_0x3a9f29(0x26ec),_0x3a9f29(0x323e),_0x3a9f29(0x4399),'BigInt64Array',_0x3a9f29(0x17e5),_0x3a9f29(0x34d5),_0x3a9f29(0x4a59),_0x3a9f29(0x1968),_0x3a9f29(0x32f2),_0x3a9f29(0x3304),_0x3a9f29(0x593),_0x3a9f29(0x1eca),_0x3a9f29(0x35d6),_0x3a9f29(0x9c3),_0x3a9f29(0x431b),_0x3a9f29(0x2122),_0x3a9f29(0x5f3),_0x3a9f29(0x4265),_0x3a9f29(0x373),_0x3a9f29(0x190f),'Intl','WebAssembly'],[_0x3a9f29(0x5e5),_0x
3a9f29(0x47b6),'InternalError',_0x3a9f29(0x7b7),'ReferenceError',_0x3a9f29(0x22ea),_0x3a9f29(0x1a6c),_0x3a9f29(0x3f3)]);_0xde9143['exports']=function(_0x31998b){const _0x383599=_0x3a9f29,_0x53f049={'keyword':_0x2f8823[_0x383599(0x1d1d)](['then',_0x383599(0x26b1),'until','loop','by','when','and','or','is','isnt',_0x383599(0xc1a)])[_0x383599(0x1465)]((_0x369aad=[_0x383599(0x469d),_0x383599(0xc01),_0x383599(0x1e61),'function','static'],_0x11e5e9=>!_0x369aad[_0x383599(0x2628)](_0x11e5e9))),'literal':_0x25839c[_0x383599(0x1d1d)]([_0x383599(0x1df8),'no','on',_0x383599(0x2422)]),'built_in':_0x21700f[_0x383599(0x1d1d)](['npm','print'])};var _0x369aad;const _0x17136c=_0x383599(0x18c3),_0xa97380={'className':_0x383599(0x2ad6),'begin':/#\{/,'end':/\}/,'keywords':_0x53f049},_0x19b733=[_0x31998b['BINARY_NUMBER_MODE'],_0x31998b['inherit'](_0x31998b[_0x383599(0xd12)],{'starts':{'end':_0x383599(0x4887),'relevance':0x0}}),{'className':'string','variants':[{'begin':/'''/,'end':/'''/,'contains':[_0x31998b[_0x383599(0x4a76)]]},{'begin':/'/,'end':/'/,'contains':[_0x31998b['BACKSLASH_ESCAPE']]},{'begin':/"""/,'end':/"""/,'contains':[_0x31998b[_0x383599(0x4a76)],_0xa97380]},{'begin':/"/,'end':/"/,'contains':[_0x31998b[_0x383599(0x4a76)],_0xa97380]}]},{'className':_0x383599(0x4d1d),'variants':[{'begin':'///','end':'///','contains':[_0xa97380,_0x31998b[_0x383599(0x2bbe)]]},{'begin':'//[gim]{0,3}(?=\x5cW)','relevance':0x0},{'begin':/\/(?![ *]).*?(?![\\]).\/[gim]{0,3}(?=\W)/}]},{'begin':'@'+_0x17136c},{'subLanguage':'javascript','excludeBegin':!0x0,'excludeEnd':!0x0,'variants':[{'begin':_0x383599(0x11f5),'end':_0x383599(0x11f5)},{'begin':'`','end':'`'}]}];_0xa97380[_0x383599(0x2b31)]=_0x19b733;const 
_0x21c11d=_0x31998b[_0x383599(0x46a1)](_0x31998b[_0x383599(0x2029)],{'begin':_0x17136c}),_0x51dad5=_0x383599(0x4bd3),_0x326e80={'className':_0x383599(0xddd),'begin':_0x383599(0x898),'returnBegin':!0x0,'contains':[{'begin':/\(/,'end':/\)/,'keywords':_0x53f049,'contains':[_0x383599(0x4454)]['concat'](_0x19b733)}]},_0x15a2f4={'variants':[{'match':[/class\s+/,_0x17136c,/\s+extends\s+/,_0x17136c]},{'match':[/class\s+/,_0x17136c]}],'scope':{0x2:'title.class',0x4:_0x383599(0x3235)},'keywords':_0x53f049};return{'name':_0x383599(0x3f64),'aliases':[_0x383599(0x38ea),'cson',_0x383599(0x4441)],'keywords':_0x53f049,'illegal':/\/\*/,'contains':[..._0x19b733,_0x31998b[_0x383599(0x4e4f)]('###',_0x383599(0x1bbc)),_0x31998b['HASH_COMMENT_MODE'],{'className':'function','begin':'^\x5cs*'+_0x17136c+_0x383599(0x2210)+_0x51dad5,'end':'[-=]>','returnBegin':!0x0,'contains':[_0x21c11d,_0x326e80]},{'begin':/[:\(,=]\s*/,'relevance':0x0,'contains':[{'className':_0x383599(0x14b2),'begin':_0x51dad5,'end':_0x383599(0x3200),'returnBegin':!0x0,'contains':[_0x326e80]}]},_0x15a2f4,{'begin':_0x17136c+':','end':':','returnBegin':!0x0,'returnEnd':!0x0,'relevance':0x0}]};};},0x25fa:_0x14aca4=>{const _0xafe97f=a0_0x11e7;_0x14aca4[_0xafe97f(0x474c)]=function(_0x437fc5){const 
_0x25515b=_0xafe97f;return{'name':_0x25515b(0x3aa8),'keywords':{'keyword':[_0x25515b(0x2834),'as','at','cofix',_0x25515b(0x3d4),_0x25515b(0x2681),_0x25515b(0x449f),_0x25515b(0x4c59),_0x25515b(0x2002),_0x25515b(0x3c19),_0x25515b(0x43a6),_0x25515b(0x451d),'if','IF','in',_0x25515b(0x1e61),_0x25515b(0x2d96),_0x25515b(0x4531),_0x25515b(0xe3a),'return',_0x25515b(0x34d5),_0x25515b(0xaf5),'Type','using',_0x25515b(0x3b62),'with','Abort',_0x25515b(0x30dc),_0x25515b(0x1a11),'Admit',_0x25515b(0x3843),_0x25515b(0x48e5),'Arguments',_0x25515b(0x1bb0),_0x25515b(0x771),_0x25515b(0x122f),_0x25515b(0x3fe0),_0x25515b(0xd9e),_0x25515b(0x4a7a),_0x25515b(0x11c9),_0x25515b(0x24e5),'Cd',_0x25515b(0x1f14),_0x25515b(0x3352),_0x25515b(0xb88),_0x25515b(0xe7f),_0x25515b(0x4a2),_0x25515b(0x3691),_0x25515b(0x9ef),_0x25515b(0x1037),_0x25515b(0x13d5),_0x25515b(0x173e),_0x25515b(0xb3b),_0x25515b(0x401f),_0x25515b(0x4b33),_0x25515b(0x36f0),'constr','Constraint',_0x25515b(0x2b09),_0x25515b(0x1c6e),_0x25515b(0xfbd),_0x25515b(0x3bee),_0x25515b(0x36c),'Declare',_0x25515b(0x585),_0x25515b(0x3096),_0x25515b(0x36db),_0x25515b(0xb9b),_0x25515b(0x1817),_0x25515b(0x45f0),'Drop',_0x25515b(0x312d),_0x25515b(0x354c),_0x25515b(0x388f),'Eval',_0x25515b(0x138d),'Existential',_0x25515b(0x501f),_0x25515b(0x17a1),_0x25515b(0x246b),'exporting',_0x25515b(0xe3d),'Extract','Extraction',_0x25515b(0x1ac7),'Field','Fields',_0x25515b(0x106b),_0x25515b(0x4c1f),_0x25515b(0x3b9f),'for','From',_0x25515b(0x2ac5),_0x25515b(0x41b9),_0x25515b(0xc1d),_0x25515b(0x1305),_0x25515b(0x19de),_0x25515b(0x2e3f),_0x25515b(0x2027),_0x25515b(0x27a),'Guarded',_0x25515b(0x5235),_0x25515b(0x51d5),_0x25515b(0xe5c),'Hints',_0x25515b(0x3b72),'Hypothesis',_0x25515b(0x27e9),'Identity','If',_0x25515b(0x20b1),_0x25515b(0x3a21),_0x25515b(0x107b),_0x25515b(0x39d5),_0x25515b(0x2992),_0x25515b(0x22d4),_0x25515b(0x3391),_0x25515b(0x12e5),_0x25515b(0x7ff),_0x25515b(0x117c),'Instance',_0x25515b(0x3398),_0x25515b(0x2593),_0x25515b(0x1ba4),_0x25515b(0x2df6),_0x25515
b(0x4b00),'Language',_0x25515b(0x2020),_0x25515b(0x129b),_0x25515b(0x1a95),_0x25515b(0x2ca7),'Library',_0x25515b(0x2472),_0x25515b(0xb71),_0x25515b(0x4d4c),_0x25515b(0x5ed),_0x25515b(0x1488),'ML',_0x25515b(0x3a23),_0x25515b(0x2496),_0x25515b(0x2818),_0x25515b(0x349),'Morphism','Next',_0x25515b(0xbf3),_0x25515b(0x1741),_0x25515b(0x4552),_0x25515b(0x4e80),_0x25515b(0x2222),_0x25515b(0x430c),_0x25515b(0x18e9),'Options','Parameter',_0x25515b(0x52c),_0x25515b(0x4dbb),_0x25515b(0x6c1),'Paths',_0x25515b(0x3dd7),'Polymorphic',_0x25515b(0x40ad),_0x25515b(0x293),'Printing','Program',_0x25515b(0x462),'Proof',_0x25515b(0xaca),_0x25515b(0x1ff1),_0x25515b(0x2a8b),_0x25515b(0x1acc),_0x25515b(0x30a0),'Record',_0x25515b(0x4fd5),_0x25515b(0x378a),_0x25515b(0x1a3b),_0x25515b(0x276d),_0x25515b(0x3985),_0x25515b(0x3b1b),_0x25515b(0x37af),_0x25515b(0x2b94),_0x25515b(0x4a90),'Restart',_0x25515b(0x1421),_0x25515b(0x1dd6),_0x25515b(0x850),'Rings','Save',_0x25515b(0x1e89),_0x25515b(0x34c8),_0x25515b(0x1a5b),_0x25515b(0x4baf),_0x25515b(0x2609),_0x25515b(0x4ba1),_0x25515b(0x4c7),_0x25515b(0x4d57),_0x25515b(0x3c63),_0x25515b(0xccb),_0x25515b(0x43e),_0x25515b(0x34d5),'Setoid',_0x25515b(0x1455),_0x25515b(0x4afd),_0x25515b(0x1c63),_0x25515b(0xac7),_0x25515b(0x3645),_0x25515b(0x3997),_0x25515b(0x525),_0x25515b(0x1a82),_0x25515b(0x4dd6),'Tables',_0x25515b(0x2e1a),_0x25515b(0x7c1),_0x25515b(0x18e7),_0x25515b(0x1f31),'Time',_0x25515b(0x4b6c),_0x25515b(0x3322),_0x25515b(0x2b7e),_0x25515b(0x2fd9),_0x25515b(0x5195),_0x25515b(0x3c60),_0x25515b(0x1127),_0x25515b(0x48d5),_0x25515b(0x3c6e),_0x25515b(0x3f54),'Universe',_0x25515b(0x18e6),_0x25515b(0x2a6b),_0x25515b(0x355e),_0x25515b(0x347a),'Variable',_0x25515b(0x4c23),'Variant',_0x25515b(0x476d),_0x25515b(0x34fc),_0x25515b(0x3b62),'with'],'built_in':[_0x25515b(0x3027),_0x25515b(0x43f5),'admit',_0x25515b(0x1349),_0x25515b(0x4c31),'as',_0x25515b(0x4fd4),'assumption','at',_0x25515b(0x4c14),'autorewrite','autounfold',_0x25515b(0x5097),'bottom',_0x25515b(0x231c),'
by','case',_0x25515b(0x34e4),_0x25515b(0x31f9),_0x25515b(0x2caf),_0x25515b(0x1efc),_0x25515b(0xcdc),_0x25515b(0x3d49),'clear',_0x25515b(0x4357),_0x25515b(0x268e),'compare','compute','congruence',_0x25515b(0x5268),_0x25515b(0x4514),_0x25515b(0x2def),'contradiction',_0x25515b(0x34a6),_0x25515b(0x24bc),_0x25515b(0x1429),'decide',_0x25515b(0x91f),_0x25515b(0x28b3),_0x25515b(0x35b),_0x25515b(0x2c41),_0x25515b(0x489c),_0x25515b(0x70f),_0x25515b(0x3815),'do',_0x25515b(0x5024),_0x25515b(0x376f),_0x25515b(0x1c23),'eassumption',_0x25515b(0x312d),_0x25515b(0x393),_0x25515b(0x1539),_0x25515b(0x4628),_0x25515b(0x31e7),_0x25515b(0x646),_0x25515b(0x26aa),'eexists',_0x25515b(0x480c),_0x25515b(0x5250),_0x25515b(0x21bc),_0x25515b(0x3826),_0x25515b(0xa24),_0x25515b(0x11ad),_0x25515b(0x9b5),_0x25515b(0x462a),_0x25515b(0x1c46),'esimplify_eq',_0x25515b(0x417b),'evar',_0x25515b(0x1eb7),_0x25515b(0x1a0f),_0x25515b(0x28f7),_0x25515b(0x449f),_0x25515b(0x9d7),_0x25515b(0x186b),'field','field_simplify',_0x25515b(0x3d40),'first','firstorder',_0x25515b(0x2002),_0x25515b(0x3b8e),_0x25515b(0x3245),_0x25515b(0x451b),_0x25515b(0x508c),_0x25515b(0xef4),_0x25515b(0x221a),_0x25515b(0x3c03),_0x25515b(0x518d),_0x25515b(0x4c3b),_0x25515b(0x2eef),'in',_0x25515b(0x1e20),_0x25515b(0xa84),_0x25515b(0x698),_0x25515b(0xf3a),_0x25515b(0x13d2),'intros',_0x25515b(0x3be3),'inversion',_0x25515b(0x399f),'is_evar',_0x25515b(0x28d3),'lapply',_0x25515b(0x393d),_0x25515b(0x48eb),_0x25515b(0x3438),'lra',_0x25515b(0x2676),'native_compute','nia','nsatz',_0x25515b(0x363e),'once',_0x25515b(0x3dd7),_0x25515b(0x4f56),_0x25515b(0x357b),_0x25515b(0x116f),'psatz',_0x25515b(0x3567),_0x25515b(0x15bd),_0x25515b(0x117e),_0x25515b(0x15af),_0x25515b(0x1e94),_0x25515b(0x1d46),'rename',_0x25515b(0x11d0),_0x25515b(0x741),_0x25515b(0x26f7),_0x25515b(0x163e),'rewrite',_0x25515b(0x4b8c),'right',_0x25515b(0x300d),'ring_simplify',_0x25515b(0x634),'set','setoid_reflexivity',_0x25515b(0x1648),_0x25515b(0x3f66),'setoid_symmetry',_0x25515b(0x11c7),
_0x25515b(0x13e1),_0x25515b(0x4d8e),'simpl',_0x25515b(0x4dc2),_0x25515b(0x14ad),_0x25515b(0x33fe),_0x25515b(0xc82),_0x25515b(0x1117),_0x25515b(0x15d2),_0x25515b(0xb94),'stepl',_0x25515b(0x3702),_0x25515b(0x2ad6),_0x25515b(0x13b9),_0x25515b(0x25a9),_0x25515b(0xbb0),_0x25515b(0x3145),'tauto',_0x25515b(0x51b6),_0x25515b(0x2786),_0x25515b(0x279d),_0x25515b(0x2a90),_0x25515b(0x2752),_0x25515b(0x422b),_0x25515b(0x3a53),_0x25515b(0x1bc8),'unify',_0x25515b(0x30d6),_0x25515b(0x347a),'vm_compute',_0x25515b(0x2aa7)]},'contains':[_0x437fc5[_0x25515b(0x291b)],_0x437fc5['COMMENT']('\x5c(\x5c*',_0x25515b(0x17ad)),_0x437fc5[_0x25515b(0xd12)],{'className':'type','excludeBegin':!0x0,'begin':'\x5c|\x5cs*','end':_0x25515b(0xdce)},{'begin':/[-=]>/}]};};},0x2450:_0x8da449=>{const _0x1f2ae4=a0_0x11e7;_0x8da449[_0x1f2ae4(0x474c)]=function(_0x3bb067){const _0x1e9127=_0x1f2ae4;return{'name':_0x1e9127(0x4804),'case_insensitive':!0x0,'aliases':['cls'],'keywords':_0x1e9127(0x45d7),'contains':[{'className':_0x1e9127(0x4a80),'begin':_0x1e9127(0x4c5a),'relevance':0x0},{'className':_0x1e9127(0x2431),'variants':[{'begin':'\x22','end':'\x22','contains':[{'begin':'\x22\x22','relevance':0x0}]}]},_0x3bb067['C_LINE_COMMENT_MODE'],_0x3bb067[_0x1e9127(0x23fe)],{'className':_0x1e9127(0x4645),'begin':/;/,'end':'$','relevance':0x0},{'className':_0x1e9127(0x43a),'begin':/(?:\$\$?|\.\.)\^?[a-zA-Z]+/},{'className':_0x1e9127(0x43a),'begin':/\$\$\$[a-zA-Z]+/},{'className':_0x1e9127(0x43a),'begin':/%[a-z]+(?:\.[a-z]+)*/},{'className':'symbol','begin':/\^%?[a-zA-Z][\w]*/},{'className':_0x1e9127(0x1357),'begin':/##class|##super|#define|#dim/},{'begin':/&sql\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'subLanguage':_0x1e9127(0x124d)},{'begin':/&(js|jscript|javascript)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'subLanguage':_0x1e9127(0x45ac)},{'begin':/&html<\s*\s*>/,'subLanguage':_0x1e9127(0x2655)}]};};},0x19aa:_0x44314c=>{const _0x2b2ced=a0_0x11e7;_0x44314c[_0x2b2ced(0x474c)]=function(_0x334d5c){const 
_0x46e16c=_0x2b2ced,_0x129c9c=_0x334d5c[_0x46e16c(0x41d2)],_0x273859=_0x334d5c['COMMENT']('//','$',{'contains':[{'begin':/\\\n/}]}),_0x2d6998=_0x46e16c(0xbea),_0x355ec3=_0x46e16c(0xd39),_0x17afc3=_0x46e16c(0xf07)+_0x2d6998+'|'+_0x129c9c[_0x46e16c(0x51e4)](_0x355ec3)+_0x46e16c(0x5242)+_0x129c9c[_0x46e16c(0x51e4)]('<[^<>]+>')+')',_0x5a2b28={'className':'type','begin':'\x5cb[a-z\x5cd_]*_t\x5cb'},_0x2276b9={'className':'string','variants':[{'begin':_0x46e16c(0xd65),'end':'\x22','illegal':'\x5cn','contains':[_0x334d5c['BACKSLASH_ESCAPE']]},{'begin':_0x46e16c(0x4375),'end':'\x27','illegal':'.'},_0x334d5c['END_SAME_AS_BEGIN']({'begin':/(?:u8?|U|L)?R"([^()\\ ]{0,16})\(/,'end':/\)([^()\\ ]{0,16})"/})]},_0x488936={'className':_0x46e16c(0x4a80),'variants':[{'begin':_0x46e16c(0x219f)},{'begin':_0x46e16c(0x10f1)},{'begin':_0x46e16c(0x2b89)}],'relevance':0x0},_0x4e4db9={'className':_0x46e16c(0x5153),'begin':/#\s*[a-z]+\b/,'end':/$/,'keywords':{'keyword':_0x46e16c(0x3bbe)},'contains':[{'begin':/\\\n/,'relevance':0x0},_0x334d5c[_0x46e16c(0x46a1)](_0x2276b9,{'className':'string'}),{'className':'string','begin':/<.*?>/},_0x273859,_0x334d5c[_0x46e16c(0x23fe)]]},_0x3f00ed={'className':_0x46e16c(0x4685),'begin':_0x129c9c['optional'](_0x355ec3)+_0x334d5c[_0x46e16c(0xacc)],'relevance':0x0},_0x368d07=_0x129c9c['optional'](_0x355ec3)+_0x334d5c['IDENT_RE']+'\x5cs*\x5c(',_0x1b0d52={'type':[_0x46e16c(0x3ebd),'char',_0x46e16c(0xa55),'char32_t',_0x46e16c(0x1380),'double','float','int',_0x46e16c(0x324f),_0x46e16c(0x4085),'void',_0x46e16c(0x480b),'unsigned','signed',_0x46e16c(0xc01),_0x46e16c(0x2c7c)],'keyword':[_0x46e16c(0x137d),_0x46e16c(0x3257),_0x46e16c(0x2663),_0x46e16c(0x1fb2),_0x46e16c(0x1651),_0x46e16c(0x5163),_0x46e16c(0x4a30),_0x46e16c(0x1974),_0x46e16c(0x4c14),_0x46e16c(0x3523),_0x46e16c(0x2426),_0x46e16c(0x4e10),_0x46e16c(0x2e7e),'catch',_0x46e16c(0x1390),_0x46e16c(0x1125),_0x46e16c(0x3c7d),_0x46e16c(0x448b),_0x46e16c(0x49fb),_0x46e16c(0x2272),_0x46e16c(0x4d67),_0x46e16c(0x23b2),_0x46e
16c(0x47d4),_0x46e16c(0x4075),_0x46e16c(0x16d9),_0x46e16c(0x3fb2),_0x46e16c(0x3d23),_0x46e16c(0x5be),'do',_0x46e16c(0x2cbc),_0x46e16c(0x3d4),_0x46e16c(0x44d8),_0x46e16c(0x2121),_0x46e16c(0x2bb9),_0x46e16c(0x2068),'false',_0x46e16c(0x27e4),_0x46e16c(0x3c19),'friend',_0x46e16c(0x139c),'if',_0x46e16c(0x331),_0x46e16c(0x2988),_0x46e16c(0x196c),_0x46e16c(0x1c6c),_0x46e16c(0x37f7),'new',_0x46e16c(0x4855),_0x46e16c(0xc1a),'not_eq',_0x46e16c(0x916),'operator','or',_0x46e16c(0x460a),_0x46e16c(0x35a7),_0x46e16c(0x4ef4),_0x46e16c(0xc14),_0x46e16c(0x39ce),_0x46e16c(0x18fb),_0x46e16c(0x49b4),'reinterpret_cast|10',_0x46e16c(0x1c0e),_0x46e16c(0xdfd),_0x46e16c(0xc5d),_0x46e16c(0x2d1a),'static_cast|10','struct',_0x46e16c(0x857),_0x46e16c(0x2715),_0x46e16c(0x15c6),_0x46e16c(0x138f),_0x46e16c(0x457f),'throw',_0x46e16c(0x1010),_0x46e16c(0x3971),_0x46e16c(0x4022),'try','typedef',_0x46e16c(0x3955),_0x46e16c(0x2e71),_0x46e16c(0x29d),_0x46e16c(0x347a),_0x46e16c(0x38b8),'volatile','while',_0x46e16c(0x32a6),'xor_eq'],'literal':[_0x46e16c(0xa45),_0x46e16c(0x3984),'nullopt',_0x46e16c(0x916),'true'],'built_in':['_Pragma'],'_type_hints':['any',_0x46e16c(0x2147),_0x46e16c(0x3859),_0x46e16c(0x51f3),_0x46e16c(0x4f41),_0x46e16c(0x3a77),_0x46e16c(0x221e),_0x46e16c(0x1ff3),_0x46e16c(0x4209),'deque',_0x46e16c(0xd6e),_0x46e16c(0x3ee),_0x46e16c(0x1056),_0x46e16c(0x28fa),_0x46e16c(0xaa7),_0x46e16c(0x4b02),'latch',_0x46e16c(0x128b),_0x46e16c(0x5276),_0x46e16c(0xc0a),'mutex','optional',_0x46e16c(0x2552),_0x46e16c(0x4a02),_0x46e16c(0x3959),'promise',_0x46e16c(0x1cd6),_0x46e16c(0x3db4),_0x46e16c(0x4951),'recursive_timed_mutex',_0x46e16c(0x18e1),_0x46e16c(0x1fa),_0x46e16c(0x3c85),'shared_lock',_0x46e16c(0xfcb),'shared_timed_mutex',_0x46e16c(0x29b4),_0x46e16c(0x453a),_0x46e16c(0x4300),_0x46e16c(0xedf),_0x46e16c(0x1cdf),_0x46e16c(0x3701),_0x46e16c(0x21fb),_0x46e16c(0x3cab),_0x46e16c(0x32ca),_0x46e16c(0x41a),'unordered_map',_0x46e16c(0x7fc),'unordered_multiset',_0x46e16c(0x37a6),_0x46e16c(0x4ea),_0x46e16c(0x4836)
,_0x46e16c(0x3633),_0x46e16c(0x23c2),_0x46e16c(0x2894)]},_0x22f0ed={'className':_0x46e16c(0x9eb),'relevance':0x0,'keywords':{'_hint':[_0x46e16c(0x1ec4),_0x46e16c(0xbe0),_0x46e16c(0x2c6e),_0x46e16c(0x4c31),_0x46e16c(0x190b),_0x46e16c(0x3c15),_0x46e16c(0x3aab),_0x46e16c(0x41c2),_0x46e16c(0x20e5),'ceil',_0x46e16c(0xef3),'cin',_0x46e16c(0x1ece),_0x46e16c(0x3935),_0x46e16c(0x486e),'cout',_0x46e16c(0x9c5),'endl',_0x46e16c(0x2716),_0x46e16c(0x4c7b),_0x46e16c(0x3a1b),'fabs',_0x46e16c(0x2e2d),_0x46e16c(0x3695),_0x46e16c(0x179a),_0x46e16c(0xe1d),_0x46e16c(0x2e7c),_0x46e16c(0x2e98),_0x46e16c(0x2084),'fscanf',_0x46e16c(0x3ee),'invoke','isalnum',_0x46e16c(0x132c),_0x46e16c(0x179c),'isdigit',_0x46e16c(0x43bb),'islower','isprint',_0x46e16c(0x3b8f),'isspace',_0x46e16c(0x483),'isxdigit','labs',_0x46e16c(0x39d6),_0x46e16c(0x3afb),'log',_0x46e16c(0x1463),'make_pair','make_shared',_0x46e16c(0x3597),_0x46e16c(0x4dbf),_0x46e16c(0x1fde),_0x46e16c(0x4934),_0x46e16c(0x4983),'memcmp','memcpy',_0x46e16c(0x4609),_0x46e16c(0x28d5),_0x46e16c(0x2676),_0x46e16c(0x43bd),_0x46e16c(0x32fe),'putchar','puts',_0x46e16c(0x2ef7),'scanf',_0x46e16c(0x2a37),_0x46e16c(0x4bb3),'snprintf','sprintf',_0x46e16c(0x5011),'sscanf',_0x46e16c(0x2340),_0x46e16c(0x40ee),_0x46e16c(0x1e79),_0x46e16c(0x49a2),_0x46e16c(0x2933),_0x46e16c(0x5113),_0x46e16c(0x4ed0),_0x46e16c(0x2436),_0x46e16c(0x44d4),_0x46e16c(0xed4),_0x46e16c(0xdeb),_0x46e16c(0x3bec),_0x46e16c(0x2ac0),_0x46e16c(0x27b4),_0x46e16c(0x216c),_0x46e16c(0x236d),_0x46e16c(0x3a8e),_0x46e16c(0x25a9),'tan',_0x46e16c(0x327c),'terminate',_0x46e16c(0x279e),'tolower',_0x46e16c(0x4415),'vfprintf',_0x46e16c(0x1d31),_0x46e16c(0x1ca1),_0x46e16c(0x3643)]},'begin':_0x129c9c['concat'](/\b/,/(?!decltype)/,/(?!if)/,/(?!for)/,/(?!switch)/,/(?!while)/,_0x334d5c[_0x46e16c(0xacc)],_0x129c9c[_0x46e16c(0x3296)](/(<[^<>]+>|)\s*\(/))},_0xc272ed=[_0x22f0ed,_0x4e4db9,_0x5a2b28,_0x273859,_0x334d5c[_0x46e16c(0x23fe)],_0x488936,_0x2276b9],_0x3359af={'variants':[{'begin':/=/,'end':/;/},{'begin':/\
(/,'end':/\)/},{'beginKeywords':_0x46e16c(0x17e4),'end':/;/}],'keywords':_0x1b0d52,'contains':_0xc272ed[_0x46e16c(0x1d1d)]([{'begin':/\(/,'end':/\)/,'keywords':_0x1b0d52,'contains':_0xc272ed[_0x46e16c(0x1d1d)]([_0x46e16c(0x4454)]),'relevance':0x0}]),'relevance':0x0},_0x4bfd56={'className':_0x46e16c(0x14b2),'begin':'('+_0x17afc3+_0x46e16c(0x1e78)+_0x368d07,'returnBegin':!0x0,'end':/[{;=]/,'excludeEnd':!0x0,'keywords':_0x1b0d52,'illegal':/[^\w\s\*&:<>.]/,'contains':[{'begin':_0x2d6998,'keywords':_0x1b0d52,'relevance':0x0},{'begin':_0x368d07,'returnBegin':!0x0,'contains':[_0x3f00ed],'relevance':0x0},{'begin':/::/,'relevance':0x0},{'begin':/:/,'endsWithParent':!0x0,'contains':[_0x2276b9,_0x488936]},{'relevance':0x0,'match':/,/},{'className':_0x46e16c(0xddd),'begin':/\(/,'end':/\)/,'keywords':_0x1b0d52,'relevance':0x0,'contains':[_0x273859,_0x334d5c[_0x46e16c(0x23fe)],_0x2276b9,_0x488936,_0x5a2b28,{'begin':/\(/,'end':/\)/,'keywords':_0x1b0d52,'relevance':0x0,'contains':['self',_0x273859,_0x334d5c[_0x46e16c(0x23fe)],_0x2276b9,_0x488936,_0x5a2b28]}]},_0x5a2b28,_0x273859,_0x334d5c[_0x46e16c(0x23fe)],_0x4e4db9]};return{'name':'C++','aliases':['cc',_0x46e16c(0x5d3),_0x46e16c(0x3ffd),_0x46e16c(0x344a),'hh',_0x46e16c(0x12e2),_0x46e16c(0x654)],'keywords':_0x1b0d52,'illegal':'','keywords':_0x1b0d52,'contains':['self',_0x5a2b28]},{'begin':_0x334d5c[_0x46e16c(0xacc)]+'::','keywords':_0x1b0d52},{'match':[/\b(?:enum(?:\s+(?:class|struct))?|class|struct|union)/,/\s+/,/\w+/],'className':{0x1:_0x46e16c(0x1357),0x3:_0x46e16c(0x19e4)}}])};};},0x6dc:_0xc75c4a=>{const _0x3aaf41=a0_0x11e7;_0xc75c4a[_0x3aaf41(0x474c)]=function(_0x4b6947){const 
_0x3da980=_0x3aaf41,_0x12fe25='group\x20clone\x20ms\x20master\x20location\x20colocation\x20order\x20fencing_topology\x20rsc_ticket\x20acl_target\x20acl_group\x20user\x20role\x20tag\x20xml';return{'name':_0x3da980(0x3ea9),'aliases':[_0x3da980(0x1b3d),_0x3da980(0x3e08)],'case_insensitive':!0x0,'keywords':{'keyword':_0x3da980(0x2a3b),'literal':_0x3da980(0x1992)},'contains':[_0x4b6947[_0x3da980(0x2bbe)],{'beginKeywords':_0x3da980(0x13c6),'starts':{'end':_0x3da980(0x3238),'starts':{'className':_0x3da980(0x4685),'end':_0x3da980(0x1e8c)}}},{'beginKeywords':_0x3da980(0x449d),'starts':{'className':_0x3da980(0x4685),'end':_0x3da980(0x1e8c),'starts':{'end':_0x3da980(0x29f5)}}},{'begin':_0x3da980(0x4cd5)+_0x12fe25[_0x3da980(0x1117)]('\x20')['join']('|')+_0x3da980(0x28e),'keywords':_0x12fe25,'starts':{'className':_0x3da980(0x4685),'end':_0x3da980(0x364b)}},{'beginKeywords':_0x3da980(0x40e1),'starts':{'className':_0x3da980(0x4685),'end':_0x3da980(0x3238)}},_0x4b6947[_0x3da980(0x291b)],{'className':_0x3da980(0x5153),'begin':_0x3da980(0x175b),'relevance':0x0},{'className':'number','begin':_0x3da980(0x1382),'relevance':0x0},{'className':'literal','begin':_0x3da980(0x3db0),'relevance':0x0},{'className':_0x3da980(0x431d),'begin':/([A-Za-z$_#][\w_-]+)=/,'relevance':0x0},{'className':'tag','begin':_0x3da980(0x2574),'end':_0x3da980(0x316a),'relevance':0x0}]};};},0x21d5:_0x2f92fd=>{const _0x3f59f7=a0_0x11e7;_0x2f92fd[_0x3f59f7(0x474c)]=function(_0x192cf3){const 
_0x16b95c=_0x3f59f7,_0x56de5f='(_?[ui](8|16|32|64|128))?',_0x3d1a84=_0x16b95c(0x1734),_0x523a0d=_0x16b95c(0x1397),_0x11363a={'$pattern':'[a-zA-Z_]\x5cw*[!?=]?','keyword':_0x16b95c(0x13f1),'literal':_0x16b95c(0xca7)},_0x512f55={'className':_0x16b95c(0x2ad6),'begin':/#\{/,'end':/\}/,'keywords':_0x11363a},_0x4eb68c={'className':_0x16b95c(0x3362),'begin':_0x16b95c(0x1b5c)},_0x3ef14f={'className':'template-variable','variants':[{'begin':_0x16b95c(0x308c),'end':_0x16b95c(0x4b9f)},{'begin':_0x16b95c(0x44a4),'end':'%\x5c}'}],'keywords':_0x11363a};function _0x5bd2d2(_0x57790e,_0x259418){const _0x453708=_0x16b95c,_0x526e65=[{'begin':_0x57790e,'end':_0x259418}];return _0x526e65[0x0][_0x453708(0x2b31)]=_0x526e65,_0x526e65;}const _0x21ebba={'className':_0x16b95c(0x2431),'contains':[_0x192cf3[_0x16b95c(0x4a76)],_0x512f55],'variants':[{'begin':/'/,'end':/'/},{'begin':/"/,'end':/"/},{'begin':/`/,'end':/`/},{'begin':'%[Qwi]?\x5c(','end':'\x5c)','contains':_0x5bd2d2('\x5c(','\x5c)')},{'begin':_0x16b95c(0x510a),'end':'\x5c]','contains':_0x5bd2d2('\x5c[','\x5c]')},{'begin':_0x16b95c(0x783),'end':/\}/,'contains':_0x5bd2d2(/\{/,/\}/)},{'begin':_0x16b95c(0x3006),'end':'>','contains':_0x5bd2d2('<','>')},{'begin':'%[Qwi]?\x5c|','end':'\x5c|'},{'begin':/<<-\w+$/,'end':/^\s*\w+$/}],'relevance':0x0},_0x22681d={'className':_0x16b95c(0x2431),'variants':[{'begin':_0x16b95c(0x33e8),'end':'\x5c)','contains':_0x5bd2d2('\x5c(','\x5c)')},{'begin':'%q\x5c[','end':'\x5c]','contains':_0x5bd2d2('\x5c[','\x5c]')},{'begin':_0x16b95c(0x4fdd),'end':/\}/,'contains':_0x5bd2d2(/\{/,/\}/)},{'begin':'%q<','end':'>','contains':_0x5bd2d2('<','>')},{'begin':_0x16b95c(0x2148),'end':'\x5c|'},{'begin':/<<-'\w+'$/,'end':/^\s*\w+$/}],'relevance':0x0},_0x41ca25={'begin':_0x16b95c(0x1302)+_0x192cf3[_0x16b95c(0x49de)]+_0x16b95c(0x1b21),'keywords':_0x16b95c(0x1c0a),'contains':[{'className':_0x16b95c(0x4d1d),'contains':[_0x192cf3[_0x16b95c(0x4a76)],_0x512f55],'variants':[{'begin':_0x16b95c(0x516c),'relevance':0x0},{'begin':_0x
16b95c(0x944),'end':_0x16b95c(0xc40)}]}],'relevance':0x0},_0x30b855=[_0x3ef14f,_0x21ebba,_0x22681d,{'className':_0x16b95c(0x4d1d),'contains':[_0x192cf3['BACKSLASH_ESCAPE'],_0x512f55],'variants':[{'begin':_0x16b95c(0x1f29),'end':'\x5c)','contains':_0x5bd2d2('\x5c(','\x5c)')},{'begin':_0x16b95c(0x95d),'end':'\x5c]','contains':_0x5bd2d2('\x5c[','\x5c]')},{'begin':_0x16b95c(0x2b9e),'end':/\}/,'contains':_0x5bd2d2(/\{/,/\}/)},{'begin':_0x16b95c(0x48de),'end':'>','contains':_0x5bd2d2('<','>')},{'begin':_0x16b95c(0x42f6),'end':'\x5c|'}],'relevance':0x0},_0x41ca25,{'className':_0x16b95c(0x5153),'begin':_0x16b95c(0x44b9),'end':'\x5c]','contains':[_0x192cf3[_0x16b95c(0x46a1)](_0x192cf3[_0x16b95c(0x291b)],{'className':_0x16b95c(0x2431)})]},_0x4eb68c,_0x192cf3[_0x16b95c(0x2bbe)],{'className':_0x16b95c(0x1390),'beginKeywords':_0x16b95c(0x37f5),'end':_0x16b95c(0x26a2),'illegal':/=/,'contains':[_0x192cf3['HASH_COMMENT_MODE'],_0x192cf3[_0x16b95c(0x46a1)](_0x192cf3[_0x16b95c(0x2029)],{'begin':_0x523a0d}),{'begin':'<'}]},{'className':_0x16b95c(0x1390),'beginKeywords':'lib\x20enum\x20union','end':_0x16b95c(0x26a2),'illegal':/=/,'contains':[_0x192cf3[_0x16b95c(0x2bbe)],_0x192cf3[_0x16b95c(0x46a1)](_0x192cf3['TITLE_MODE'],{'begin':_0x523a0d})]},{'beginKeywords':_0x16b95c(0x899),'end':_0x16b95c(0x26a2),'illegal':/=/,'contains':[_0x192cf3['HASH_COMMENT_MODE'],_0x192cf3[_0x16b95c(0x46a1)](_0x192cf3[_0x16b95c(0x2029)],{'begin':_0x523a0d})],'relevance':0x2},{'className':_0x16b95c(0x14b2),'beginKeywords':_0x16b95c(0x452b),'end':/\B\b/,'contains':[_0x192cf3[_0x16b95c(0x46a1)](_0x192cf3[_0x16b95c(0x2029)],{'begin':_0x3d1a84,'endsParent':!0x0})]},{'className':_0x16b95c(0x14b2),'beginKeywords':_0x16b95c(0x1934),'end':/\B\b/,'contains':[_0x192cf3[_0x16b95c(0x46a1)](_0x192cf3['TITLE_MODE'],{'begin':_0x3d1a84,'endsParent':!0x0})],'relevance':0x2},{'className':_0x16b95c(0x239b),'begin':_0x192cf3[_0x16b95c(0x206e)]+_0x16b95c(0x2df5),'relevance':0x0},{'className':'symbol','begin':':','contains':[_0x21e
bba,{'begin':_0x3d1a84}],'relevance':0x0},{'className':_0x16b95c(0x4a80),'variants':[{'begin':_0x16b95c(0x333b)+_0x56de5f},{'begin':_0x16b95c(0x4270)+_0x56de5f},{'begin':_0x16b95c(0x23cc)+_0x56de5f},{'begin':_0x16b95c(0x395a)},{'begin':'\x5cb([1-9][0-9_]*|0)'+_0x56de5f}],'relevance':0x0}];return _0x512f55[_0x16b95c(0x2b31)]=_0x30b855,_0x3ef14f[_0x16b95c(0x2b31)]=_0x30b855[_0x16b95c(0x384c)](0x1),{'name':_0x16b95c(0x13ad),'aliases':['cr'],'keywords':_0x11363a,'contains':_0x30b855};};},0x1bd0:_0x46a645=>{const _0x57b74d=a0_0x11e7;_0x46a645[_0x57b74d(0x474c)]=function(_0x513c28){const _0x49f84b=_0x57b74d,_0x5be056={'keyword':[_0x49f84b(0x3027),'as',_0x49f84b(0x7c0),'break',_0x49f84b(0x2e7e),_0x49f84b(0x31a3),_0x49f84b(0x1390),_0x49f84b(0xc01),_0x49f84b(0x16d9),'do',_0x49f84b(0x3d4),'event',_0x49f84b(0x2121),_0x49f84b(0x2068),_0x49f84b(0x37b2),_0x49f84b(0x1aaf),_0x49f84b(0x3c19),'foreach',_0x49f84b(0x139c),'if',_0x49f84b(0x4570),'in','interface','internal','is',_0x49f84b(0x192b),_0x49f84b(0x37f7),_0x49f84b(0x4321),'operator','out','override',_0x49f84b(0xddd),_0x49f84b(0x4ef4),_0x49f84b(0xc14),_0x49f84b(0x39ce),_0x49f84b(0x1aa2),_0x49f84b(0x15bd),_0x49f84b(0x21c3),_0x49f84b(0xdfd),_0x49f84b(0x1756),_0x49f84b(0x3a89),_0x49f84b(0xc5d),_0x49f84b(0x4b23),_0x49f84b(0x2c7c),_0x49f84b(0x4146),_0x49f84b(0x857),_0x49f84b(0x138f),'throw',_0x49f84b(0x422b),_0x49f84b(0x3368),'unchecked',_0x49f84b(0x2b2),_0x49f84b(0x347a),_0x49f84b(0x38b8),_0x49f84b(0x27d6),'volatile',_0x49f84b(0x552)][_0x49f84b(0x1d1d)]([_0x49f84b(0x362c),_0x49f84b(0xa94),_0x49f84b(0x2663),'ascending',_0x49f84b(0x16c2),'await','by',_0x49f84b(0x221b),_0x49f84b(0x39f9),_0x49f84b(0x27e6),_0x49f84b(0xf9e),'global',_0x49f84b(0x4e5b),'init',_0x49f84b(0x31cf),_0x49f84b(0x3541),_0x49f84b(0x1e61),_0x49f84b(0x3354),_0x49f84b(0xc1a),_0x49f84b(0x1de9),'on','or',_0x49f84b(0x46e3),_0x49f84b(0x42dc),'remove','select','set',_0x49f84b(0x7e1),_0x49f84b(0x487),'var',_0x49f84b(0x191b),_0x49f84b(0x3b62),'with',_0x49f84b(0x5075)]),'built
_in':['bool','byte',_0x49f84b(0x373c),_0x49f84b(0x2353),_0x49f84b(0x1674),_0x49f84b(0x5024),_0x49f84b(0x41aa),_0x49f84b(0x44d8),_0x49f84b(0x1ab8),_0x49f84b(0xc16),'long',_0x49f84b(0x3f86),_0x49f84b(0x38f9),_0x49f84b(0x20c7),_0x49f84b(0x4c5b),'short','string',_0x49f84b(0x3fb),_0x49f84b(0x3a0b),_0x49f84b(0x4f2a)],'literal':['default','false',_0x49f84b(0x1582),_0x49f84b(0x4022)]},_0x3e946a=_0x513c28[_0x49f84b(0x46a1)](_0x513c28['TITLE_MODE'],{'begin':_0x49f84b(0xcd7)}),_0x5895b7={'className':_0x49f84b(0x4a80),'variants':[{'begin':_0x49f84b(0x219f)},{'begin':_0x49f84b(0x3444)},{'begin':'(-?)(\x5cb0[xX][a-fA-F0-9\x27]+|(\x5cb[\x5cd\x27]+(\x5c.[\x5cd\x27]*)?|\x5c.[\x5cd\x27]+)([eE][-+]?[\x5cd\x27]+)?)'}],'relevance':0x0},_0x4627b6={'className':_0x49f84b(0x2431),'begin':'@\x22','end':'\x22','contains':[{'begin':'\x22\x22'}]},_0xdd9774=_0x513c28[_0x49f84b(0x46a1)](_0x4627b6,{'illegal':/\n/}),_0x319981={'className':'subst','begin':/\{/,'end':/\}/,'keywords':_0x5be056},_0x19aa6e=_0x513c28[_0x49f84b(0x46a1)](_0x319981,{'illegal':/\n/}),_0x28d6bc={'className':_0x49f84b(0x2431),'begin':/\$"/,'end':'\x22','illegal':/\n/,'contains':[{'begin':/\{\{/},{'begin':/\}\}/},_0x513c28[_0x49f84b(0x4a76)],_0x19aa6e]},_0x555098={'className':_0x49f84b(0x2431),'begin':/\$@"/,'end':'\x22','contains':[{'begin':/\{\{/},{'begin':/\}\}/},{'begin':'\x22\x22'},_0x319981]},_0x1cc03d=_0x513c28[_0x49f84b(0x46a1)](_0x555098,{'illegal':/\n/,'contains':[{'begin':/\{\{/},{'begin':/\}\}/},{'begin':'\x22\x22'},_0x19aa6e]});_0x319981['contains']=[_0x555098,_0x28d6bc,_0x4627b6,_0x513c28[_0x49f84b(0xa4c)],_0x513c28['QUOTE_STRING_MODE'],_0x5895b7,_0x513c28[_0x49f84b(0x23fe)]],_0x19aa6e[_0x49f84b(0x2b31)]=[_0x1cc03d,_0x28d6bc,_0xdd9774,_0x513c28['APOS_STRING_MODE'],_0x513c28['QUOTE_STRING_MODE'],_0x5895b7,_0x513c28[_0x49f84b(0x46a1)](_0x513c28['C_BLOCK_COMMENT_MODE'],{'illegal':/\n/})];const 
_0x14b704={'variants':[_0x555098,_0x28d6bc,_0x4627b6,_0x513c28[_0x49f84b(0xa4c)],_0x513c28[_0x49f84b(0x291b)]]},_0x52cac2={'begin':'<','end':'>','contains':[{'beginKeywords':_0x49f84b(0x34db)},_0x3e946a]},_0x1494f9=_0x513c28['IDENT_RE']+'(<'+_0x513c28[_0x49f84b(0xacc)]+_0x49f84b(0x3232)+_0x513c28['IDENT_RE']+_0x49f84b(0x3f80),_0x14f374={'begin':'@'+_0x513c28[_0x49f84b(0xacc)],'relevance':0x0};return{'name':'C#','aliases':['cs','c#'],'keywords':_0x5be056,'illegal':/::/,'contains':[_0x513c28[_0x49f84b(0x4e4f)](_0x49f84b(0x4c6a),'$',{'returnBegin':!0x0,'contains':[{'className':'doctag','variants':[{'begin':_0x49f84b(0x4c6a),'relevance':0x0},{'begin':_0x49f84b(0x1190)},{'begin':_0x49f84b(0x2574),'end':'>'}]}]}),_0x513c28[_0x49f84b(0x2ae2)],_0x513c28['C_BLOCK_COMMENT_MODE'],{'className':_0x49f84b(0x5153),'begin':'#','end':'$','keywords':{'keyword':_0x49f84b(0x2fc9)}},_0x14b704,_0x5895b7,{'beginKeywords':_0x49f84b(0x1ba3),'relevance':0x0,'end':/[{;=]/,'illegal':/[^\s:,]/,'contains':[{'beginKeywords':'where\x20class'},_0x3e946a,_0x52cac2,_0x513c28[_0x49f84b(0x2ae2)],_0x513c28['C_BLOCK_COMMENT_MODE']]},{'beginKeywords':_0x49f84b(0x37f7),'relevance':0x0,'end':/[{;=]/,'illegal':/[^\s:]/,'contains':[_0x3e946a,_0x513c28['C_LINE_COMMENT_MODE'],_0x513c28[_0x49f84b(0x23fe)]]},{'beginKeywords':_0x49f84b(0x15bd),'relevance':0x0,'end':/[{;=]/,'illegal':/[^\s:]/,'contains':[_0x3e946a,_0x52cac2,_0x513c28[_0x49f84b(0x2ae2)],_0x513c28[_0x49f84b(0x23fe)]]},{'className':'meta','begin':_0x49f84b(0x3851),'excludeBegin':!0x0,'end':'\x5c]','excludeEnd':!0x0,'contains':[{'className':_0x49f84b(0x2431),'begin':/"/,'end':/"/}]},{'beginKeywords':_0x49f84b(0x1664),'relevance':0x0},{'className':_0x49f84b(0x14b2),'begin':'('+_0x1494f9+_0x49f84b(0x20fe)+_0x513c28[_0x49f84b(0xacc)]+_0x49f84b(0x34ab),'returnBegin':!0x0,'end':/\s*[{;=]/,'excludeEnd':!0x0,'keywords':_0x5be056,'contains':[{'beginKeywords':[_0x49f84b(0x39ce),_0x49f84b(0x4ef4),_0x49f84b(0xc14),_0x49f84b(0x2c7c),_0x49f84b(0x2dad),_0x49f84b(0xc
14),_0x49f84b(0x3027),'async',_0x49f84b(0x2068),'override',_0x49f84b(0x2b2),_0x49f84b(0x38b8),_0x49f84b(0x4321),_0x49f84b(0x3a89),_0x49f84b(0x42dc)][_0x49f84b(0x3541)]('\x20'),'relevance':0x0},{'begin':_0x513c28['IDENT_RE']+'\x5cs*(<[^=]+>\x5cs*)?\x5c(','returnBegin':!0x0,'contains':[_0x513c28[_0x49f84b(0x2029)],_0x52cac2],'relevance':0x0},{'match':/\(\)/},{'className':_0x49f84b(0xddd),'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'keywords':_0x5be056,'relevance':0x0,'contains':[_0x14b704,_0x5895b7,_0x513c28[_0x49f84b(0x23fe)]]},_0x513c28[_0x49f84b(0x2ae2)],_0x513c28['C_BLOCK_COMMENT_MODE']]},_0x14f374]};};},0x1a9d:_0x2aa183=>{const _0x40c54b=a0_0x11e7;_0x2aa183[_0x40c54b(0x474c)]=function(_0x1870e4){const _0x1741f8=_0x40c54b;return{'name':'CSP','case_insensitive':!0x1,'keywords':{'$pattern':_0x1741f8(0x513e),'keyword':[_0x1741f8(0x31c),_0x1741f8(0x4ced),_0x1741f8(0x9ba),_0x1741f8(0x27c0),'font-src',_0x1741f8(0x354),_0x1741f8(0x5175),_0x1741f8(0x45b0),_0x1741f8(0x497e),_0x1741f8(0x1faf),'media-src',_0x1741f8(0x40f3),_0x1741f8(0x3049),_0x1741f8(0x24d1),_0x1741f8(0x3f56),_0x1741f8(0x2463),_0x1741f8(0xa0c),_0x1741f8(0x31f1),'unsafe-hashes','worker-src']},'contains':[{'className':_0x1741f8(0x2431),'begin':'\x27','end':'\x27'},{'className':_0x1741f8(0x263f),'begin':_0x1741f8(0x68b),'end':':','excludeEnd':!0x0}]};};},0x21a4:_0xe46d82=>{const 
_0x1a9630=a0_0x11e7,_0x2996e3=['a',_0x1a9630(0xd80),_0x1a9630(0x3db6),_0x1a9630(0x51c1),_0x1a9630(0x2a6),_0x1a9630(0x645),'b',_0x1a9630(0x702),'body',_0x1a9630(0x18b7),_0x1a9630(0x3115),_0x1a9630(0x2200),'cite',_0x1a9630(0x4948),'dd',_0x1a9630(0x109c),_0x1a9630(0x2dd7),_0x1a9630(0x2b08),_0x1a9630(0x4c88),'dl','dt','em',_0x1a9630(0x31c0),_0x1a9630(0x4252),'figure',_0x1a9630(0x3af9),_0x1a9630(0x31e0),'h1','h2','h3','h4','h5','h6',_0x1a9630(0x17f8),_0x1a9630(0x507c),_0x1a9630(0x2acd),'i',_0x1a9630(0x2168),_0x1a9630(0x3045),_0x1a9630(0x7b0),_0x1a9630(0x4432),_0x1a9630(0x728),_0x1a9630(0x3b71),'legend','li','main',_0x1a9630(0x1af6),'menu',_0x1a9630(0x1ec0),_0x1a9630(0x20c7),'ol','p','q',_0x1a9630(0x3567),'samp',_0x1a9630(0x69d),_0x1a9630(0x2bbd),_0x1a9630(0x40c9),_0x1a9630(0x4829),_0x1a9630(0xb95),'table','tbody','td',_0x1a9630(0x39cd),_0x1a9630(0x2f6d),'th','thead',_0x1a9630(0x51b6),'tr','ul',_0x1a9630(0x469d),'video'],_0x346481=[_0x1a9630(0x3f0b),_0x1a9630(0x4435),_0x1a9630(0x1bc1),_0x1a9630(0xe81),'color-gamut','color-index',_0x1a9630(0xc20),_0x1a9630(0x2dc),_0x1a9630(0x4790),_0x1a9630(0x4a4d),'forced-colors','grid',_0x1a9630(0x3cd6),_0x1a9630(0x3d6d),_0x1a9630(0x5231),_0x1a9630(0x3dc),'orientation','overflow-block',_0x1a9630(0x31fe),'pointer',_0x1a9630(0x1b7f),_0x1a9630(0x4099),_0x1a9630(0x228c),_0x1a9630(0x2c6b),_0x1a9630(0x4de3),'scan',_0x1a9630(0x39d8),_0x1a9630(0x38d6),'width',_0x1a9630(0x2c06),_0x1a9630(0x3e30),'min-height','max-height'],_0x2e501c=[_0x1a9630(0x182f),_0x1a9630(0x4f6f),'blank',_0x1a9630(0x494d),_0x1a9630(0xbb8),'default',_0x1a9630(0x183b),_0x1a9630(0x177b),'disabled','drop',_0x1a9630(0x168c),_0x1a9630(0x223e),_0x1a9630(0x4d51),'first-child',_0x1a9630(0x2c0c),_0x1a9630(0x363),'future','focus',_0x1a9630(0xbcb),_0x1a9630(0x1874),_0x1a9630(0x3170),_0x1a9630(0x1a61),_0x1a9630(0x933),_0x1a9630(0x3d6d),_0x1a9630(0x33a8),_0x1a9630(0x1418),_0x1a9630(0x498a),'is',_0x1a9630(0x1f46),_0x1a9630(0x11ef),_0x1a9630(0x69f),_0x1a9630(0x48eb),_0x1a9630(0x4b32),_0x1a9
630(0xaea),_0x1a9630(0xc1a),_0x1a9630(0x2fee),'nth-col',_0x1a9630(0x3837),_0x1a9630(0x3171),_0x1a9630(0x17bb),'nth-of-type',_0x1a9630(0x12a4),_0x1a9630(0x5133),_0x1a9630(0x51e4),_0x1a9630(0x48d0),_0x1a9630(0x38e6),_0x1a9630(0x38ef),'read-only',_0x1a9630(0x3229),_0x1a9630(0x3e07),_0x1a9630(0x4d50),_0x1a9630(0x507b),'scope',_0x1a9630(0x1bc7),_0x1a9630(0x4579),_0x1a9630(0x4147),_0x1a9630(0x4297),_0x1a9630(0xd18),_0x1a9630(0x3b62)],_0x507a10=[_0x1a9630(0x1349),_0x1a9630(0x313),_0x1a9630(0x5097),'cue',_0x1a9630(0x255f),_0x1a9630(0x222a),'first-line',_0x1a9630(0x50a3),_0x1a9630(0x5121),_0x1a9630(0x2aa),_0x1a9630(0x10e2),_0x1a9630(0x2e6e),'slotted',_0x1a9630(0x3c13)],_0x1ee813=[_0x1a9630(0x31e),_0x1a9630(0x2558),_0x1a9630(0xe82),'all','animation',_0x1a9630(0x1b0e),'animation-direction',_0x1a9630(0x2c5c),_0x1a9630(0x4fd3),_0x1a9630(0x4e24),'animation-name','animation-play-state','animation-timing-function',_0x1a9630(0x460d),_0x1a9630(0x1471),_0x1a9630(0x7b3),'background-blend-mode','background-clip','background-color','background-image','background-origin',_0x1a9630(0x4295),'background-repeat',_0x1a9630(0x4065),_0x1a9630(0x2f89),_0x1a9630(0x369a),_0x1a9630(0x49ba),'border-block-color','border-block-end',_0x1a9630(0x2f10),_0x1a9630(0x43ac),'border-block-end-width',_0x1a9630(0x34f),_0x1a9630(0x480a),_0x1a9630(0x2a76),_0x1a9630(0x3b35),_0x1a9630(0x28ae),'border-block-width',_0x1a9630(0x129e),'border-bottom-color',_0x1a9630(0x494c),_0x1a9630(0x11d1),_0x1a9630(0x4491),_0x1a9630(0x2855),'border-collapse','border-color',_0x1a9630(0x14ac),'border-image-outset',_0x1a9630(0x19a5),'border-image-slice','border-image-source',_0x1a9630(0x2aa0),_0x1a9630(0x6cc),'border-inline-color',_0x1a9630(0x2a98),'border-inline-end-color','border-inline-end-style',_0x1a9630(0xdfa),_0x1a9630(0x4a8),_0x1a9630(0x3152),'border-inline-start-style','border-inline-start-width','border-inline-style',_0x1a9630(0xb54),'border-left',_0x1a9630(0x82b),_0x1a9630(0xb7c),_0x1a9630(0x1b93),_0x1a9630(0x2557),_0x1a9630(
0x23d3),'border-right-color',_0x1a9630(0x4a2d),_0x1a9630(0x2fd6),'border-spacing','border-style',_0x1a9630(0x49df),_0x1a9630(0x5090),_0x1a9630(0x1be7),'border-top-right-radius','border-top-style',_0x1a9630(0x1b9b),_0x1a9630(0x24a4),_0x1a9630(0x3335),_0x1a9630(0x1db9),_0x1a9630(0x4419),'box-sizing','break-after','break-before',_0x1a9630(0x425d),_0x1a9630(0x1204),_0x1a9630(0x1cb2),_0x1a9630(0x4933),'clip',_0x1a9630(0x19a3),_0x1a9630(0x1c7c),_0x1a9630(0xe81),_0x1a9630(0x2c2e),'column-fill',_0x1a9630(0x4679),'column-rule',_0x1a9630(0x2ccf),'column-rule-style','column-rule-width',_0x1a9630(0x3085),_0x1a9630(0x27a0),_0x1a9630(0x4457),'contain','content','content-visibility',_0x1a9630(0x2aa6),_0x1a9630(0x43c7),'cue','cue-after',_0x1a9630(0x18ef),_0x1a9630(0x824),_0x1a9630(0x296c),'display','empty-cells',_0x1a9630(0x1465),'flex',_0x1a9630(0x2c01),_0x1a9630(0x295a),'flex-flow','flex-grow','flex-shrink',_0x1a9630(0x32ae),_0x1a9630(0x1ab8),'flow',_0x1a9630(0xe53),'font-display',_0x1a9630(0x40de),_0x1a9630(0xeea),_0x1a9630(0x3048),'font-language-override',_0x1a9630(0x464),_0x1a9630(0x750),'font-smoothing',_0x1a9630(0x1d98),'font-style',_0x1a9630(0x4fd6),_0x1a9630(0xb7a),_0x1a9630(0x2694),_0x1a9630(0x3afe),_0x1a9630(0x2eae),_0x1a9630(0x2813),_0x1a9630(0x474f),_0x1a9630(0x4c7a),_0x1a9630(0x417d),'gap','glyph-orientation-vertical','grid',_0x1a9630(0x2b00),'grid-auto-columns','grid-auto-flow',_0x1a9630(0x77e),_0x1a9630(0x3058),'grid-column-end',_0x1a9630(0x4df),'grid-gap',_0x1a9630(0x92e),_0x1a9630(0x48ab),_0x1a9630(0x3ffa),_0x1a9630(0x4a8e),'grid-template-areas',_0x1a9630(0x2bb0),_0x1a9630(0xef6),_0x1a9630(0xc89),'height','hyphens',_0x1a9630(0x169e),_0x1a9630(0x3e53),_0x1a9630(0x3f79),_0x1a9630(0x2833),_0x1a9630(0x23ed),_0x1a9630(0x41a9),_0x1a9630(0x3ba0),_0x1a9630(0xedd),'left','letter-spacing',_0x1a9630(0x2352),_0x1a9630(0x461c),'list-style',_0x1a9630(0x1438),_0x1a9630(0x1a28),'list-style-type','margin',_0x1a9630(0x1e53),'margin-block-end',_0x1a9630(0x562),'margin-bottom',_0x1a9
630(0x249b),'margin-inline-end',_0x1a9630(0x2d5),_0x1a9630(0x26a1),'margin-right',_0x1a9630(0xefe),_0x1a9630(0x4b5c),_0x1a9630(0x1cee),_0x1a9630(0x1ebe),_0x1a9630(0x140d),_0x1a9630(0x1706),_0x1a9630(0x2c4f),_0x1a9630(0xad0),'mask-border-source',_0x1a9630(0x2691),'mask-clip',_0x1a9630(0x504b),_0x1a9630(0x4cac),_0x1a9630(0x3dfd),_0x1a9630(0x498f),_0x1a9630(0x321c),_0x1a9630(0x3d6b),_0x1a9630(0x4c09),'mask-type',_0x1a9630(0x448d),_0x1a9630(0x2ebc),_0x1a9630(0x1e50),'max-width',_0x1a9630(0x4000),_0x1a9630(0x2613),_0x1a9630(0x446d),_0x1a9630(0x2c06),_0x1a9630(0x2b16),_0x1a9630(0x516a),'nav-index',_0x1a9630(0x150e),_0x1a9630(0x467),_0x1a9630(0x14fc),_0x1a9630(0x28b),_0x1a9630(0x47d),_0x1a9630(0x3bd7),_0x1a9630(0x1541),'opacity',_0x1a9630(0xd8d),'orphans','outline',_0x1a9630(0x4ae0),_0x1a9630(0x40fb),_0x1a9630(0x230d),_0x1a9630(0x282),_0x1a9630(0xa69),_0x1a9630(0xdad),_0x1a9630(0x49a7),_0x1a9630(0x4704),'padding','padding-block','padding-block-end',_0x1a9630(0x2511),_0x1a9630(0x5060),_0x1a9630(0x1aac),'padding-inline-end',_0x1a9630(0x12fa),_0x1a9630(0x3bdb),_0x1a9630(0x41de),_0x1a9630(0xb57),_0x1a9630(0x45a),_0x1a9630(0x2d8f),_0x1a9630(0x1e45),_0x1a9630(0x468c),_0x1a9630(0x10c9),_0x1a9630(0x295c),_0x1a9630(0x2b0d),_0x1a9630(0x22c6),_0x1a9630(0x23a0),_0x1a9630(0x25f1),_0x1a9630(0x37b),'resize',_0x1a9630(0x1162),'rest-after',_0x1a9630(0x27f9),_0x1a9630(0x4d50),_0x1a9630(0x133b),_0x1a9630(0x4fcd),_0x1a9630(0xa40),_0x1a9630(0x2d9b),_0x1a9630(0x2914),_0x1a9630(0x2d49),'scroll-margin-inline',_0x1a9630(0x252e),_0x1a9630(0x3b8),_0x1a9630(0x10d2),_0x1a9630(0x277d),'scroll-margin-top',_0x1a9630(0x3796),_0x1a9630(0x328a),'scroll-padding-block-end',_0x1a9630(0x230b),_0x1a9630(0x2bef),_0x1a9630(0x13cc),_0x1a9630(0x11ba),_0x1a9630(0x4761),_0x1a9630(0x4705),_0x1a9630(0x1a7a),_0x1a9630(0x25b3),_0x1a9630(0x2934),'scroll-snap-stop','scroll-snap-type',_0x1a9630(0x337),'scrollbar-gutter',_0x1a9630(0x4a35),_0x1a9630(0x24d9),_0x1a9630(0x764),_0x1a9630(0x2e55),_0x1a9630(0x4bba),_0x1a9630(0x2df4)
,_0x1a9630(0x3d6),_0x1a9630(0x2ff2),_0x1a9630(0x45c0),'text-align',_0x1a9630(0x222f),_0x1a9630(0x1470),_0x1a9630(0x40ab),'text-decoration',_0x1a9630(0x4c70),'text-decoration-line',_0x1a9630(0x1866),'text-emphasis',_0x1a9630(0x3d6a),_0x1a9630(0x4866),_0x1a9630(0x246d),_0x1a9630(0x541),_0x1a9630(0x45d),'text-orientation',_0x1a9630(0x3d18),_0x1a9630(0x25da),_0x1a9630(0x2803),_0x1a9630(0x1aba),'text-underline-position',_0x1a9630(0x279d),_0x1a9630(0x5161),_0x1a9630(0x1fc1),'transform-origin','transform-style',_0x1a9630(0x427e),_0x1a9630(0x2d3e),'transition-duration',_0x1a9630(0x28db),_0x1a9630(0x416a),_0x1a9630(0x4074),_0x1a9630(0x1677),_0x1a9630(0x4703),'voice-balance',_0x1a9630(0x175f),'voice-family',_0x1a9630(0x2483),_0x1a9630(0x1002),_0x1a9630(0x22e9),'voice-stress',_0x1a9630(0x4f11),_0x1a9630(0x711),_0x1a9630(0x274c),'width',_0x1a9630(0x4258),'word-break',_0x1a9630(0xe5e),_0x1a9630(0x4c34),_0x1a9630(0x2a38),_0x1a9630(0x25fb)][_0x1a9630(0x78b)]();_0xe46d82[_0x1a9630(0x474c)]=function(_0xe27b34){const 
_0x3216ab=_0x1a9630,_0x1a00d1=_0xe27b34[_0x3216ab(0x41d2)],_0x3e896f=(_0x54490c=>({'IMPORTANT':{'scope':'meta','begin':'!important'},'BLOCK_COMMENT':_0x54490c[_0x3216ab(0x23fe)],'HEXCOLOR':{'scope':_0x3216ab(0x4a80),'begin':/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},'FUNCTION_DISPATCH':{'className':'built_in','begin':/[\w-]+(?=\()/},'ATTRIBUTE_SELECTOR_MODE':{'scope':_0x3216ab(0x4f99),'begin':/\[/,'end':/\]/,'illegal':'$','contains':[_0x54490c[_0x3216ab(0xa4c)],_0x54490c[_0x3216ab(0x291b)]]},'CSS_NUMBER_MODE':{'scope':_0x3216ab(0x4a80),'begin':_0x54490c[_0x3216ab(0x5047)]+_0x3216ab(0xf71),'relevance':0x0},'CSS_VARIABLE':{'className':'attr','begin':/--[A-Za-z_][A-Za-z0-9_-]*/}}))(_0xe27b34),_0x457d26=[_0xe27b34[_0x3216ab(0xa4c)],_0xe27b34['QUOTE_STRING_MODE']];return{'name':_0x3216ab(0x1697),'case_insensitive':!0x0,'illegal':/[=|'\$]/,'keywords':{'keyframePosition':'from\x20to'},'classNameAliases':{'keyframePosition':'selector-tag'},'contains':[_0x3e896f['BLOCK_COMMENT'],{'begin':/-(webkit|moz|ms|o)-(?=[a-z])/},_0x3e896f[_0x3216ab(0x46dc)],{'className':_0x3216ab(0x3713),'begin':/#[A-Za-z0-9_-]+/,'relevance':0x0},{'className':_0x3216ab(0x326),'begin':_0x3216ab(0x200),'relevance':0x0},_0x3e896f[_0x3216ab(0x3880)],{'className':_0x3216ab(0x277a),'variants':[{'begin':':('+_0x2e501c[_0x3216ab(0x3541)]('|')+')'},{'begin':_0x3216ab(0x2b0)+_0x507a10[_0x3216ab(0x3541)]('|')+')'}]},_0x3e896f['CSS_VARIABLE'],{'className':_0x3216ab(0x263f),'begin':_0x3216ab(0x4cd5)+_0x1ee813['join']('|')+')\x5cb'},{'begin':/:/,'end':/[;}{]/,'contains':[_0x3e896f[_0x3216ab(0x4213)],_0x3e896f['HEXCOLOR'],_0x3e896f[_0x3216ab(0x3df6)],_0x3e896f[_0x3216ab(0x46dc)],..._0x457d26,{'begin':/(url|data-uri)\(/,'end':/\)/,'relevance':0x0,'keywords':{'built_in':_0x3216ab(0x36f9)},'contains':[..._0x457d26,{'className':_0x3216ab(0x2431),'begin':/[^)]/,'endsWithParent':!0x0,'excludeEnd':!0x0}]},_0x3e896f[_0x3216ab(0x47a9)]]},{'begin':_0x1a00d1[_0x3216ab(0x3296)](/@/),'end':_0x3216ab(0x4dcc),'relevance':0
x0,'illegal':/:/,'contains':[{'className':_0x3216ab(0x1357),'begin':/@-?\w[\w]*(-\w+)*/},{'begin':/\s/,'endsWithParent':!0x0,'excludeEnd':!0x0,'relevance':0x0,'keywords':{'$pattern':/[a-z-]+/,'keyword':_0x3216ab(0x29d9),'attribute':_0x346481['join']('\x20')},'contains':[{'begin':/[a-z-]+(?=:)/,'className':_0x3216ab(0x263f)},..._0x457d26,_0x3e896f[_0x3216ab(0x46dc)]]}]},{'className':'selector-tag','begin':_0x3216ab(0x4cd5)+_0x2996e3['join']('|')+_0x3216ab(0x716)}]};};},0x1a1b:_0x395644=>{const _0x4c7586=a0_0x11e7;_0x395644[_0x4c7586(0x474c)]=function(_0x31b613){const _0x446794=_0x4c7586,_0x4ed3f8={'$pattern':_0x31b613['UNDERSCORE_IDENT_RE'],'keyword':_0x446794(0x2c85),'built_in':_0x446794(0x1dce),'literal':_0x446794(0x28e6)},_0x472b41=_0x446794(0x1ae5),_0x1807f6=_0x446794(0xbc1),_0x258c35=_0x446794(0x1ef1),_0x21d161=_0x446794(0xb26)+_0x1807f6+')',_0x20f262='('+_0x472b41+_0x446794(0x4a73)+('0[xX]'+_0x258c35)+')',_0x302081=_0x446794(0x2cfb),_0x55f803={'className':'number','begin':'\x5cb'+_0x20f262+_0x446794(0x427),'relevance':0x0},_0x24694a={'className':'number','begin':_0x446794(0x4cd5)+('('+(_0x446794(0x3884)+_0x258c35+'\x5c.'+_0x258c35+_0x446794(0x2670)+_0x258c35+_0x446794(0x2053)+_0x1807f6+')')+'|'+('('+_0x1807f6+'(\x5c.\x5cd*|'+_0x21d161+_0x446794(0x4778)+_0x1807f6+_0x446794(0x40b7)+_0x472b41+_0x21d161+'?)')+')')+_0x446794(0x16fa)+_0x20f262+_0x446794(0xb31),'relevance':0x0},_0x1b77a4={'className':'string','begin':'\x27('+_0x302081+_0x446794(0x4fed),'end':'\x27','illegal':'.'},_0xc73c48={'className':'string','begin':'\x22','contains':[{'begin':_0x302081,'relevance':0x0}],'end':_0x446794(0x992)},_0x172c11=_0x31b613[_0x446794(0x4e4f)](_0x446794(0x31d1),_0x446794(0x1513),{'contains':['self'],'relevance':0xa});return{'name':'D','keywords':_0x4ed3f8,'contains':[_0x31b613[_0x446794(0x2ae2)],_0x31b613[_0x446794(0x23fe)],_0x172c11,{'className':_0x446794(0x2431),'begin':'x\x22[\x5cda-fA-F\x5cs\x5cn\x5cr]*\x22[cwd]?','relevance':0xa},_0xc73c48,{'className':_0x446794(0x2431),
'begin':'[rq]\x22','end':'\x22[cwd]?','relevance':0x5},{'className':_0x446794(0x2431),'begin':'`','end':_0x446794(0x502f)},{'className':_0x446794(0x2431),'begin':_0x446794(0x3f26),'end':_0x446794(0x2b8f)},_0x24694a,_0x55f803,_0x1b77a4,{'className':_0x446794(0x5153),'begin':'^#!','end':'$','relevance':0x5},{'className':_0x446794(0x5153),'begin':_0x446794(0x48c5),'end':'$','relevance':0x5},{'className':_0x446794(0x1357),'begin':_0x446794(0x2e68)}]};};},0x8e2:_0x4eb051=>{const _0xb7b21f=a0_0x11e7;_0x4eb051[_0xb7b21f(0x474c)]=function(_0xe7d8c4){const _0x4f6572=_0xb7b21f,_0x352c39={'className':_0x4f6572(0x2ad6),'variants':[{'begin':'\x5c$[A-Za-z0-9_]+'}]},_0xebcbab={'className':_0x4f6572(0x2ad6),'variants':[{'begin':/\$\{/,'end':/\}/}],'keywords':_0x4f6572(0x3a70)},_0x24d458={'className':_0x4f6572(0x2431),'variants':[{'begin':_0x4f6572(0xcff),'end':'\x27\x27\x27'},{'begin':'r\x22\x22\x22','end':'\x22\x22\x22'},{'begin':'r\x27','end':'\x27','illegal':'\x5cn'},{'begin':'r\x22','end':'\x22','illegal':'\x5cn'},{'begin':_0x4f6572(0x3f83),'end':_0x4f6572(0x3f83),'contains':[_0xe7d8c4[_0x4f6572(0x4a76)],_0x352c39,_0xebcbab]},{'begin':_0x4f6572(0xb00),'end':'\x22\x22\x22','contains':[_0xe7d8c4[_0x4f6572(0x4a76)],_0x352c39,_0xebcbab]},{'begin':'\x27','end':'\x27','illegal':'\x5cn','contains':[_0xe7d8c4[_0x4f6572(0x4a76)],_0x352c39,_0xebcbab]},{'begin':'\x22','end':'\x22','illegal':'\x5cn','contains':[_0xe7d8c4[_0x4f6572(0x4a76)],_0x352c39,_0xebcbab]}]};_0xebcbab[_0x4f6572(0x2b31)]=[_0xe7d8c4[_0x4f6572(0xd12)],_0x24d458];const 
_0x41ca31=[_0x4f6572(0xd96),_0x4f6572(0x4339),'Duration',_0x4f6572(0x2ac5),'Iterable',_0x4f6572(0x45eb),_0x4f6572(0x191f),_0x4f6572(0x4a59),_0x4f6572(0x4e88),_0x4f6572(0x108b),_0x4f6572(0xa93),_0x4f6572(0xae9),'Set',_0x4f6572(0x77a),_0x4f6572(0x3327),_0x4f6572(0x1b20),_0x4f6572(0x2759),_0x4f6572(0x2e8a),_0x4f6572(0x2b7e),_0x4f6572(0xa5d),_0x4f6572(0x3ebd),_0x4f6572(0x5024),_0x4f6572(0xc16),_0x4f6572(0x51b1),_0x4f6572(0x4ed3),_0x4f6572(0x2b44)],_0x25fa86=_0x41ca31[_0x4f6572(0x4833)](_0x2d63a6=>_0x2d63a6+'?');return{'name':_0x4f6572(0xf56),'keywords':{'keyword':[_0x4f6572(0x3027),'as',_0x4f6572(0x4fd4),_0x4f6572(0x16c2),'await',_0x4f6572(0x7c0),_0x4f6572(0x4e10),_0x4f6572(0x2e7e),_0x4f6572(0x31a3),'class','const','continue',_0x4f6572(0x3f7e),_0x4f6572(0x3d23),'deferred','do',_0x4f6572(0x41aa),_0x4f6572(0x3d4),_0x4f6572(0x44d8),_0x4f6572(0x2bb9),_0x4f6572(0x4428),_0x4f6572(0x1dd9),'external','factory','false',_0x4f6572(0x27e4),_0x4f6572(0x37b2),'for',_0x4f6572(0x2ac5),_0x4f6572(0xf9e),_0x4f6572(0x9c4),'if',_0x4f6572(0x6c3),_0x4f6572(0x331),'in','interface','is',_0x4f6572(0x1523),_0x4f6572(0x2e60),'mixin',_0x4f6572(0x4321),'null','on',_0x4f6572(0x1182),_0x4f6572(0x2aa),'required',_0x4f6572(0x262f),_0x4f6572(0xdfd),_0x4f6572(0x3a89),_0x4f6572(0x1fa),_0x4f6572(0x2e0d),_0x4f6572(0x2c7c),_0x4f6572(0x2cc),_0x4f6572(0x857),'sync',_0x4f6572(0x138f),'throw',_0x4f6572(0x4022),_0x4f6572(0x422b),'typedef',_0x4f6572(0x469d),_0x4f6572(0x27d6),_0x4f6572(0x191b),_0x4f6572(0x552),_0x4f6572(0x2aa7),'yield'],'built_in':_0x41ca31[_0x4f6572(0x1d1d)](_0x25fa86)[_0x4f6572(0x1d1d)](['Never',_0x4f6572(0x2ed5),_0x4f6572(0x41aa),_0x4f6572(0x4957),_0x4f6572(0x295),'querySelector','querySelectorAll',_0x4f6572(0x18db)]),'$pattern':/[A-Za-z][A-Za-z0-9_]*\??/},'contains':[_0x24d458,_0xe7d8c4[_0x4f6572(0x4e4f)](/\/\*\*(?!\/)/,/\*\//,{'subLanguage':_0x4f6572(0x4116),'relevance':0x0}),_0xe7d8c4['COMMENT'](/\/{3,} 
?/,/$/,{'contains':[{'subLanguage':_0x4f6572(0x4116),'begin':'.','end':'$','relevance':0x0}]}),_0xe7d8c4['C_LINE_COMMENT_MODE'],_0xe7d8c4[_0x4f6572(0x23fe)],{'className':_0x4f6572(0x1390),'beginKeywords':_0x4f6572(0x1ba3),'end':/\{/,'excludeEnd':!0x0,'contains':[{'beginKeywords':'extends\x20implements'},_0xe7d8c4[_0x4f6572(0xb0e)]]},_0xe7d8c4[_0x4f6572(0xd12)],{'className':_0x4f6572(0x5153),'begin':_0x4f6572(0x4d18)},{'begin':'=>'}]};};},0x20fb:_0x1a38b5=>{const _0x1de436=a0_0x11e7;_0x1a38b5[_0x1de436(0x474c)]=function(_0x222d00){const _0x344c27=_0x1de436,_0x36f0ba=['exports',_0x344c27(0x49b4),_0x344c27(0x184e),_0x344c27(0x12ee),_0x344c27(0x26f6),_0x344c27(0x15bd),_0x344c27(0x227a),_0x344c27(0x3c19),_0x344c27(0x4531),_0x344c27(0x552),'set',_0x344c27(0x2140),'label',_0x344c27(0x13e7),_0x344c27(0x2c1e),_0x344c27(0xc1a),'stored',_0x344c27(0x1390),_0x344c27(0x25e2),_0x344c27(0x469d),_0x344c27(0x321b),'or','private','static',_0x344c27(0x4c7b),_0x344c27(0x3bb5),'inherited','to',_0x344c27(0x3d4),_0x344c27(0x51d9),'override',_0x344c27(0x389f),_0x344c27(0x1651),_0x344c27(0x29ba),'resourcestring',_0x344c27(0x4b16),_0x344c27(0x32f4),'virtual',_0x344c27(0x3ab5),_0x344c27(0x2663),'protected',_0x344c27(0x2e60),'do','xorwrite',_0x344c27(0x139c),_0x344c27(0x3d87),'function',_0x344c27(0x2681),_0x344c27(0x4c88),'overload',_0x344c27(0x20c7),_0x344c27(0x43c3),_0x344c27(0x42fa),'string','on',_0x344c27(0x2988),_0x344c27(0x11d0),_0x344c27(0x30d6),_0x344c27(0x1001),_0x344c27(0x4c95),_0x344c27(0x4cdf),_0x344c27(0x336a),_0x344c27(0x2aa7),_0x344c27(0x50a6),_0x344c27(0x1ec7),_0x344c27(0x233e),_0x344c27(0x3d23),_0x344c27(0x3e27),'if','case',_0x344c27(0x4be5),'in',_0x344c27(0x4003),_0x344c27(0x4e12),'of','try',_0x344c27(0x21e8),_0x344c27(0xc01),_0x344c27(0x2f0b),'constructor','type','public',_0x344c27(0xaf5),'implementation',_0x344c27(0x37b2),'published',_0x344c27(0x1285),_0x344c27(0x5140),_0x344c27(0xbf9),_0x344c27(0x1182),'as','is','abstract','alias','assembler',_0x344c27(0x437d),_0x344c27(0x4
e10),_0x344c27(0x16d9),_0x344c27(0x25c7),_0x344c27(0x4da),_0x344c27(0x3cc6),_0x344c27(0x4f50),'platform',_0x344c27(0x3df5),_0x344c27(0x4654),_0x344c27(0x41aa),'export','far16','forward','generic','helper',_0x344c27(0x6c3),_0x344c27(0x3b9a),_0x344c27(0x1b4e),_0x344c27(0x16a7),_0x344c27(0x11d8),_0x344c27(0x5016),_0x344c27(0x41ae),'nostackframe',_0x344c27(0x2875),_0x344c27(0x313e),_0x344c27(0x1b5a),_0x344c27(0x1402),_0x344c27(0xc82),_0x344c27(0x4aa8),_0x344c27(0x3fc0),_0x344c27(0x3c5)],_0x536ab7=[_0x222d00[_0x344c27(0x2ae2)],_0x222d00[_0x344c27(0x4e4f)](/\{/,/\}/,{'relevance':0x0}),_0x222d00[_0x344c27(0x4e4f)](/\(\*/,/\*\)/,{'relevance':0xa})],_0x1b496f={'className':_0x344c27(0x5153),'variants':[{'begin':/\{\$/,'end':/\}/},{'begin':/\(\*\$/,'end':/\*\)/}]},_0x3c7ada={'className':_0x344c27(0x2431),'begin':/'/,'end':/'/,'contains':[{'begin':/''/}]},_0x3b2b94={'className':_0x344c27(0x2431),'begin':/(#\d+)+/},_0xee4397={'begin':_0x222d00[_0x344c27(0xacc)]+_0x344c27(0x21a9),'returnBegin':!0x0,'contains':[_0x222d00[_0x344c27(0x2029)]]},_0x2b0525={'className':_0x344c27(0x14b2),'beginKeywords':_0x344c27(0x1fd1),'end':/[:;]/,'keywords':'function\x20constructor|10\x20destructor|10\x20procedure|10','contains':[_0x222d00[_0x344c27(0x2029)],{'className':'params','begin':/\(/,'end':/\)/,'keywords':_0x36f0ba,'contains':[_0x3c7ada,_0x3b2b94,_0x1b496f]['concat'](_0x536ab7)},_0x1b496f]['concat'](_0x536ab7)};return{'name':'Delphi','aliases':['dpr',_0x344c27(0x1ddd),_0x344c27(0x5002),'pascal'],'case_insensitive':!0x0,'keywords':_0x36f0ba,'illegal':/"|\$[G-Zg-z]|\/\*|<\/|\|/,'contains':[_0x3c7ada,_0x3b2b94,_0x222d00[_0x344c27(0x30be)],{'className':'number','relevance':0x0,'variants':[{'begin':'\x5c$[0-9A-Fa-f]+'},{'begin':_0x344c27(0x3371)},{'begin':_0x344c27(0x1b6e)}]},_0xee4397,_0x2b0525,_0x1b496f]['concat'](_0x536ab7)};};},0x2194:_0x5d098c=>{const _0x52a56f=a0_0x11e7;_0x5d098c[_0x52a56f(0x474c)]=function(_0x57d46b){const 
_0x13324f=_0x52a56f,_0x1fb9bd=_0x57d46b['regex'];return{'name':'Diff','aliases':[_0x13324f(0x13a4)],'contains':[{'className':_0x13324f(0x5153),'relevance':0xa,'match':_0x1fb9bd[_0x13324f(0x583)](/^@@ +-\d+,\d+ +\+\d+,\d+ +@@/,/^\*\*\* +\d+,\d+ +\*\*\*\*$/,/^--- +\d+,\d+ +----$/)},{'className':'comment','variants':[{'begin':_0x1fb9bd['either'](/Index: /,/^index/,/={3,}/,/^-{3}/,/^\*{3} /,/^\+{3}/,/^diff --git/),'end':/$/},{'match':/^\*{15}$/}]},{'className':'addition','begin':/^\+/,'end':/$/},{'className':_0x13324f(0x20a2),'begin':/^-/,'end':/$/},{'className':'addition','begin':/^!/,'end':/$/}]};};},0xda8:_0x4c4608=>{_0x4c4608['exports']=function(_0x908bd3){const _0xe532c7=a0_0x11e7,_0x428f16={'begin':/\|[A-Za-z]+:?/,'keywords':{'name':_0xe532c7(0x3506)},'contains':[_0x908bd3[_0xe532c7(0x291b)],_0x908bd3['APOS_STRING_MODE']]};return{'name':_0xe532c7(0x3f62),'aliases':[_0xe532c7(0x3e05)],'case_insensitive':!0x0,'subLanguage':_0xe532c7(0x2655),'contains':[_0x908bd3[_0xe532c7(0x4e4f)](/\{%\s*comment\s*%\}/,/\{%\s*endcomment\s*%\}/),_0x908bd3[_0xe532c7(0x4e4f)](/\{#/,/#\}/),{'className':_0xe532c7(0x499d),'begin':/\{%/,'end':/%\}/,'contains':[{'className':_0xe532c7(0x11d8),'begin':/\w+/,'keywords':{'name':_0xe532c7(0x48c6)},'starts':{'endsWithParent':!0x0,'keywords':_0xe532c7(0x2c9d),'contains':[_0x428f16],'relevance':0x0}}]},{'className':'template-variable','begin':/\{\{/,'end':/\}\}/,'contains':[_0x428f16]}]};};},0x5dc:_0x22166a=>{_0x22166a['exports']=function(_0x57c689){const 
_0x2651e1=a0_0x11e7;return{'name':_0x2651e1(0x2bd3),'aliases':[_0x2651e1(0x39e8),_0x2651e1(0x2bd1)],'keywords':['IN','A',_0x2651e1(0xa56),_0x2651e1(0x102f),_0x2651e1(0x7ac),'CAA',_0x2651e1(0x48ad),_0x2651e1(0x1b48),_0x2651e1(0x4a3d),_0x2651e1(0x1a6f),_0x2651e1(0xb82),_0x2651e1(0x5160),_0x2651e1(0x4526),_0x2651e1(0x358),'DS',_0x2651e1(0x15d4),_0x2651e1(0x2d79),'KEY','KX','LOC','MX',_0x2651e1(0x4fbc),'NS',_0x2651e1(0x4158),_0x2651e1(0x1b79),_0x2651e1(0x2176),_0x2651e1(0x141f),_0x2651e1(0x1b13),'RP',_0x2651e1(0x4ea0),'SOA',_0x2651e1(0x4637),_0x2651e1(0x3dba),'TA',_0x2651e1(0x4ef),_0x2651e1(0x51be),_0x2651e1(0x4a91),'TXT'],'contains':[_0x57c689['COMMENT'](';','$',{'relevance':0x0}),{'className':'meta','begin':/^\$(TTL|GENERATE|INCLUDE|ORIGIN)\b/},{'className':'number','begin':_0x2651e1(0x1fec)},{'className':_0x2651e1(0x4a80),'begin':_0x2651e1(0x495d)},_0x57c689[_0x2651e1(0x46a1)](_0x57c689['NUMBER_MODE'],{'begin':/\b\d+[dhwm]?/})]};};},0x23ed:_0x68f29b=>{_0x68f29b['exports']=function(_0xde7437){const _0x4d6e3e=a0_0x11e7;return{'name':'Dockerfile','aliases':[_0x4d6e3e(0x529a)],'case_insensitive':!0x0,'keywords':[_0x4d6e3e(0x27e6),_0x4d6e3e(0x4d27),_0x4d6e3e(0x1cc0),_0x4d6e3e(0xe1a),_0x4d6e3e(0x4a4),_0x4d6e3e(0x4b31),'onbuild',_0x4d6e3e(0x49f2)],'contains':[_0xde7437[_0x4d6e3e(0x2bbe)],_0xde7437[_0x4d6e3e(0xa4c)],_0xde7437[_0x4d6e3e(0x291b)],_0xde7437[_0x4d6e3e(0x30be)],{'beginKeywords':_0x4d6e3e(0xb52),'starts':{'end':/[^\\]$/,'subLanguage':'bash'}}],'illegal':'{_0x9c4c79['exports']=function(_0x527e2b){const 
_0x235a84=a0_0x11e7,_0x46e9e4=_0x527e2b[_0x235a84(0x4e4f)](/^\s*@?rem\b/,/$/,{'relevance':0xa});return{'name':'Batch\x20file\x20(DOS)','aliases':[_0x235a84(0xcb3),_0x235a84(0x41b5)],'case_insensitive':!0x0,'illegal':/\/\*/,'keywords':{'keyword':['if','else','goto','for','in','do',_0x235a84(0x236b),'exit','not',_0x235a84(0x226),_0x235a84(0x3e54),_0x235a84(0x183b),_0x235a84(0x2ea4),_0x235a84(0x16bc),_0x235a84(0x2213),_0x235a84(0x13f7),_0x235a84(0x36e3),_0x235a84(0x4c0a)],'built_in':[_0x235a84(0x23bc),_0x235a84(0x2837),_0x235a84(0x2ea3),_0x235a84(0x35ec),_0x235a84(0x2a29),_0x235a84(0x2205),_0x235a84(0x1384),_0x235a84(0x1fd7),_0x235a84(0x4a46),_0x235a84(0x4890),_0x235a84(0x47f2),_0x235a84(0x34fe),'cd',_0x235a84(0x177b),'echo',_0x235a84(0x1da7),_0x235a84(0x2303),_0x235a84(0x1fa),_0x235a84(0x468c),_0x235a84(0x16c0),_0x235a84(0x366b),_0x235a84(0x4512),'at',_0x235a84(0x4e21),'break','cacls','cd',_0x235a84(0x5232),_0x235a84(0x4772),_0x235a84(0x456a),_0x235a84(0x35c7),_0x235a84(0x50ef),_0x235a84(0x41b5),_0x235a84(0xe81),_0x235a84(0x2ca),_0x235a84(0x227f),_0x235a84(0x1eed),_0x235a84(0x40d9),_0x235a84(0x177b),'diskcomp',_0x235a84(0x132f),_0x235a84(0x1f9b),_0x235a84(0x3d84),'fs',_0x235a84(0x5144),'findstr',_0x235a84(0x29a7),_0x235a84(0x522c),_0x235a84(0x4f87),'help',_0x235a84(0x16d6),'label','md',_0x235a84(0x2228),'mode','more',_0x235a84(0x2676),_0x235a84(0x83c),_0x235a84(0x468c),'print',_0x235a84(0x603),'pushd','promt','rd',_0x235a84(0x3086),'rem','rename',_0x235a84(0x741),_0x235a84(0x24b),_0x235a84(0x4d1a),_0x235a84(0x34fe),'sort','start',_0x235a84(0x2ad6),_0x235a84(0x51b6),'title',_0x235a84(0x2a2a),_0x235a84(0xcfc),_0x235a84(0x2d33),_0x235a84(0x1a64),'vol',_0x235a84(0x2700),'net',_0x235a84(0x4d77),_0x235a84(0x5281),_0x235a84(0x2f49),_0x235a84(0x1106),_0x235a84(0x109c)]},'contains':[{'className':'variable','begin':/%%[^ ]|%[^ ]+?%|![^ 
]+?!/},{'className':_0x235a84(0x14b2),'begin':_0x235a84(0x1c30),'end':_0x235a84(0x4a0b),'contains':[_0x527e2b[_0x235a84(0x46a1)](_0x527e2b[_0x235a84(0x2029)],{'begin':'([_a-zA-Z]\x5cw*\x5c.)*([_a-zA-Z]\x5cw*:)?[_a-zA-Z]\x5cw*'}),_0x46e9e4]},{'className':'number','begin':'\x5cb\x5cd+','relevance':0x0},_0x46e9e4]};};},0x205c:_0x11fed2=>{const _0x48aa62=a0_0x11e7;_0x11fed2[_0x48aa62(0x474c)]=function(_0x1285c8){const _0x38ca8a=_0x48aa62;return{'keywords':_0x38ca8a(0x3127),'contains':[{'className':'keyword','begin':'^dsconfig','end':/\s/,'excludeEnd':!0x0,'relevance':0xa},{'className':_0x38ca8a(0x43a),'begin':/(list|create|get|set|delete)-(\w+)/,'end':/\s/,'excludeEnd':!0x0,'illegal':_0x38ca8a(0x12a7),'relevance':0xa},{'className':'built_in','begin':/--(\w+)/,'end':/\s/,'excludeEnd':!0x0},{'className':_0x38ca8a(0x2431),'begin':/"/,'end':/"/},{'className':_0x38ca8a(0x2431),'begin':/'/,'end':/'/},{'className':_0x38ca8a(0x2431),'begin':/[\w\-?]+:\w+/,'end':/\W/,'relevance':0x0},{'className':_0x38ca8a(0x2431),'begin':/\w+(\-\w+)*/,'end':/(?=\W)/,'relevance':0x0},_0x1285c8[_0x38ca8a(0x2bbe)]]};};},0xa0a:_0x3e616b=>{const _0x2e0f8d=a0_0x11e7;_0x3e616b[_0x2e0f8d(0x474c)]=function(_0x35aa02){const 
_0x563abd=_0x2e0f8d,_0x5e2f69={'className':'string','variants':[_0x35aa02[_0x563abd(0x46a1)](_0x35aa02[_0x563abd(0x291b)],{'begin':_0x563abd(0x1df3)}),{'begin':_0x563abd(0x1829),'end':'\x22','contains':[_0x35aa02[_0x563abd(0x4a76)]]},{'begin':_0x563abd(0x47b9),'end':'\x27','illegal':'.'}]},_0x39f72b={'className':_0x563abd(0x4a80),'variants':[{'begin':_0x563abd(0x5098)},{'begin':_0x35aa02[_0x563abd(0x45be)]}],'relevance':0x0},_0x17a57a={'className':_0x563abd(0x5153),'begin':'#','end':'$','keywords':{'keyword':_0x563abd(0x333a)},'contains':[{'begin':/\\\n/,'relevance':0x0},{'beginKeywords':_0x563abd(0x478e),'end':'$','keywords':{'keyword':_0x563abd(0x478e)},'contains':[_0x35aa02['inherit'](_0x5e2f69,{'className':_0x563abd(0x2431)}),{'className':_0x563abd(0x2431),'begin':'<','end':'>','illegal':'\x5cn'}]},_0x5e2f69,_0x35aa02[_0x563abd(0x2ae2)],_0x35aa02[_0x563abd(0x23fe)]]},_0x15cfdd={'className':'variable','begin':/&[a-z\d_]*\b/};return{'name':'Device\x20Tree','contains':[{'className':_0x563abd(0x19e4),'begin':/^\/(?=\s*\{)/,'relevance':0xa},_0x15cfdd,{'className':_0x563abd(0x1357),'begin':_0x563abd(0x4493)},{'className':_0x563abd(0x239b),'begin':_0x563abd(0x838)},{'className':_0x563abd(0x19e4),'begin':/[a-zA-Z_][a-zA-Z\d_@-]*(?=\s\{)/,'relevance':0.2},{'relevance':0x0,'match':[/[a-z][a-z-,]+/,/\s*/,/=/],'scope':{0x1:'attr',0x3:_0x563abd(0x1182)}},{'match':/[a-z][a-z-,]+(?=;)/,'relevance':0x0,'scope':'attr'},{'className':'params','relevance':0x0,'begin':'<','end':'>','contains':[_0x39f72b,_0x15cfdd]},_0x35aa02[_0x563abd(0x2ae2)],_0x35aa02[_0x563abd(0x23fe)],_0x39f72b,_0x5e2f69,_0x17a57a,{'scope':_0x563abd(0xa25),'relevance':0x0,'match':/\};|[;{}]/},{'begin':_0x35aa02['IDENT_RE']+'::','keywords':''}]};};},0x25fb:_0x23b747=>{const _0x547c4b=a0_0x11e7;_0x23b747[_0x547c4b(0x474c)]=function(_0x85f997){const 
_0x47cc91=_0x547c4b;return{'name':_0x47cc91(0x28af),'aliases':[_0x47cc91(0x1e4d)],'case_insensitive':!0x0,'subLanguage':_0x47cc91(0x2655),'contains':[{'className':_0x47cc91(0x499d),'begin':/\{[#\/]/,'end':/\}/,'illegal':/;/,'contains':[{'className':_0x47cc91(0x11d8),'begin':/[a-zA-Z\.-]+/,'starts':{'endsWithParent':!0x0,'relevance':0x0,'contains':[_0x85f997[_0x47cc91(0x291b)]]}}]},{'className':'template-variable','begin':/\{/,'end':/\}/,'illegal':/;/,'keywords':_0x47cc91(0x4a0c)}]};};},0x2094:_0x3c5486=>{_0x3c5486['exports']=function(_0x3a0e31){const _0x2ac7be=a0_0x11e7,_0x2c5511=_0x3a0e31['COMMENT'](/\(\*/,/\*\)/);return{'name':_0x2ac7be(0x51eb),'illegal':/\S/,'contains':[_0x2c5511,{'className':_0x2ac7be(0x263f),'begin':/^[ ]*[a-zA-Z]+([\s_-]+[a-zA-Z]+)*/},{'begin':/=/,'end':/[.;]/,'contains':[_0x2c5511,{'className':_0x2ac7be(0x5153),'begin':/\?.*\?/},{'className':_0x2ac7be(0x2431),'variants':[_0x3a0e31[_0x2ac7be(0xa4c)],_0x3a0e31[_0x2ac7be(0x291b)],{'begin':'`','end':'`'}]}]}]};};},0x25a0:_0x3d4b16=>{const _0x15caea=a0_0x11e7;_0x3d4b16[_0x15caea(0x474c)]=function(_0xeedc57){const 
_0x45052e=_0x15caea,_0x1d17bb=_0xeedc57[_0x45052e(0x41d2)],_0xd63766=_0x45052e(0x15b1),_0x1b4b40={'$pattern':_0xd63766,'keyword':[_0x45052e(0x1349),_0x45052e(0xa94),_0x45052e(0x2663),'case',_0x45052e(0x31a3),_0x45052e(0x38e8),_0x45052e(0x31a1),_0x45052e(0x3889),'do',_0x45052e(0x3d4),_0x45052e(0x2681),'fn',_0x45052e(0x3c19),'if',_0x45052e(0x331),'in',_0x45052e(0xc1a),'or',_0x45052e(0x3567),_0x45052e(0x2c1e),'receive',_0x45052e(0x4031),'reraise','rescue',_0x45052e(0x422b),_0x45052e(0x26b1),_0x45052e(0x3a1f),_0x45052e(0x1dca),_0x45052e(0x84a),_0x45052e(0x191b),_0x45052e(0xcb7)],'literal':[_0x45052e(0x3984),'nil',_0x45052e(0x4022)]},_0x173334={'className':'subst','begin':/#\{/,'end':/\}/,'keywords':_0x1b4b40},_0xd5f64b={'match':/\\[\s\S]/,'scope':_0x45052e(0x2825),'relevance':0x0},_0x554d78=_0x45052e(0x444b),_0x4cea12=[{'begin':/"/,'end':/"/},{'begin':/'/,'end':/'/},{'begin':/\//,'end':/\//},{'begin':/\|/,'end':/\|/},{'begin':/\(/,'end':/\)/},{'begin':/\[/,'end':/\]/},{'begin':/\{/,'end':/\}/},{'begin'://}],_0x7c7707=_0x50b8d4=>({'scope':'char.escape','begin':_0x1d17bb['concat'](/\\/,_0x50b8d4),'relevance':0x0}),_0x277870={'className':_0x45052e(0x2431),'begin':_0x45052e(0x218e)+_0x554d78+')','contains':_0x4cea12[_0x45052e(0x4833)](_0x1e511a=>_0xeedc57[_0x45052e(0x46a1)](_0x1e511a,{'contains':[_0x7c7707(_0x1e511a[_0x45052e(0x2681)]),_0xd5f64b,_0x173334]}))},_0x3d799b={'className':_0x45052e(0x2431),'begin':'~[A-Z](?='+_0x554d78+')','contains':_0x4cea12[_0x45052e(0x4833)](_0x447306=>_0xeedc57[_0x45052e(0x46a1)](_0x447306,{'contains':[_0x7c7707(_0x447306[_0x45052e(0x2681)])]}))},_0x4adaeb={'className':'regex','variants':[{'begin':_0x45052e(0x4e09)+_0x554d78+')','contains':_0x4cea12['map'](_0x3a9ef6=>_0xeedc57[_0x45052e(0x46a1)](_0x3a9ef6,{'end':_0x1d17bb[_0x45052e(0x1d1d)](_0x3a9ef6[_0x45052e(0x2681)],/[uismxfU]{0,7}/),'contains':[_0x7c7707(_0x3a9ef6['end']),_0xd5f64b,_0x173334]}))},{'begin':_0x45052e(0x2520)+_0x554d78+')','contains':_0x4cea12['map'](_0x109d81=>_0xeedc57[_0
x45052e(0x46a1)](_0x109d81,{'end':_0x1d17bb[_0x45052e(0x1d1d)](_0x109d81[_0x45052e(0x2681)],/[uismxfU]{0,7}/),'contains':[_0x7c7707(_0x109d81[_0x45052e(0x2681)])]}))}]},_0x426ec7={'className':_0x45052e(0x2431),'contains':[_0xeedc57[_0x45052e(0x4a76)],_0x173334],'variants':[{'begin':/"""/,'end':/"""/},{'begin':/'''/,'end':/'''/},{'begin':/~S"""/,'end':/"""/,'contains':[]},{'begin':/~S"/,'end':/"/,'contains':[]},{'begin':/~S'''/,'end':/'''/,'contains':[]},{'begin':/~S'/,'end':/'/,'contains':[]},{'begin':/'/,'end':/'/},{'begin':/"/,'end':/"/}]},_0x5186a9={'className':_0x45052e(0x14b2),'beginKeywords':_0x45052e(0x2ae0),'end':/\B\b/,'contains':[_0xeedc57[_0x45052e(0x46a1)](_0xeedc57['TITLE_MODE'],{'begin':_0xd63766,'endsParent':!0x0})]},_0x56b44b=_0xeedc57['inherit'](_0x5186a9,{'className':'class','beginKeywords':_0x45052e(0x1872),'end':/\bdo\b|$|;/}),_0x175d71=[_0x426ec7,_0x4adaeb,_0x3d799b,_0x277870,_0xeedc57[_0x45052e(0x2bbe)],_0x56b44b,_0x5186a9,{'begin':'::'},{'className':_0x45052e(0x239b),'begin':':(?![\x5cs:])','contains':[_0x426ec7,{'begin':_0x45052e(0x452d)}],'relevance':0x0},{'className':_0x45052e(0x239b),'begin':_0xd63766+_0x45052e(0x47fc),'relevance':0x0},{'className':_0x45052e(0x19e4),'begin':/(\b[A-Z][a-zA-Z0-9_]+)/,'relevance':0x0},{'className':'number','begin':'(\x5cb0o[0-7_]+)|(\x5cb0b[01_]+)|(\x5cb0x[0-9a-fA-F_]+)|(-?\x5cb[0-9][0-9_]*(\x5c.[0-9_]+([eE][-+]?[0-9]+)?)?)','relevance':0x0},{'className':_0x45052e(0x3362),'begin':'(\x5c$\x5cW)|((\x5c$|@@?)(\x5cw+))'}];return _0x173334[_0x45052e(0x2b31)]=_0x175d71,{'name':_0x45052e(0x5073),'aliases':['ex','exs'],'keywords':_0x1b4b40,'contains':_0x175d71};};},0x1fdb:_0x135509=>{_0x135509['exports']=function(_0x3383d3){const 
_0x13d8e0=a0_0x11e7,_0x33b1d0={'variants':[_0x3383d3[_0x13d8e0(0x4e4f)]('--','$'),_0x3383d3[_0x13d8e0(0x4e4f)](/\{-/,/-\}/,{'contains':[_0x13d8e0(0x4454)]})]},_0x18c0b1={'className':'type','begin':_0x13d8e0(0x2013),'relevance':0x0},_0x1de31a={'begin':'\x5c(','end':'\x5c)','illegal':'\x22','contains':[{'className':_0x13d8e0(0xcfc),'begin':'\x5cb[A-Z][\x5cw]*(\x5c((\x5c.\x5c.|,|\x5cw+)\x5c))?'},_0x33b1d0]};return{'name':'Elm','keywords':['let','in','if',_0x13d8e0(0xaf5),_0x13d8e0(0x3d4),_0x13d8e0(0x2e7e),'of',_0x13d8e0(0x3b62),_0x13d8e0(0x196c),_0x13d8e0(0x331),_0x13d8e0(0x4d8b),_0x13d8e0(0xcfc),_0x13d8e0(0xa94),'as',_0x13d8e0(0x10c2),_0x13d8e0(0x4d13),_0x13d8e0(0x380f),_0x13d8e0(0x299e),_0x13d8e0(0x321a),'command',_0x13d8e0(0x1b76)],'contains':[{'beginKeywords':_0x13d8e0(0x2980),'end':_0x13d8e0(0x4d8b),'keywords':_0x13d8e0(0x9f7),'contains':[_0x1de31a,_0x33b1d0],'illegal':_0x13d8e0(0x40e)},{'begin':_0x13d8e0(0x331),'end':'$','keywords':_0x13d8e0(0x1871),'contains':[_0x1de31a,_0x33b1d0],'illegal':_0x13d8e0(0x40e)},{'begin':_0x13d8e0(0xcfc),'end':'$','keywords':_0x13d8e0(0x4cf5),'contains':[_0x18c0b1,_0x1de31a,{'begin':/\{/,'end':/\}/,'contains':_0x1de31a[_0x13d8e0(0x2b31)]},_0x33b1d0]},{'beginKeywords':_0x13d8e0(0x13fc),'end':'$','contains':[_0x3383d3[_0x13d8e0(0xd12)],_0x33b1d0]},{'begin':_0x13d8e0(0x299e),'end':'$','keywords':'port','contains':[_0x33b1d0]},{'className':_0x13d8e0(0x2431),'begin':_0x13d8e0(0x47b9),'end':'\x27','illegal':'.'},_0x3383d3[_0x13d8e0(0x291b)],_0x3383d3['C_NUMBER_MODE'],_0x18c0b1,_0x3383d3[_0x13d8e0(0x46a1)](_0x3383d3[_0x13d8e0(0x2029)],{'begin':_0x13d8e0(0x2749)}),_0x33b1d0,{'begin':_0x13d8e0(0x3382)}],'illegal':/;/};};},0x1104:_0x4035b3=>{const _0x29fa0e=a0_0x11e7;_0x4035b3[_0x29fa0e(0x474c)]=function(_0x6aa78c){const 
_0x2f0166=_0x29fa0e;return{'name':'ERB','subLanguage':'xml','contains':[_0x6aa78c[_0x2f0166(0x4e4f)](_0x2f0166(0x41ec),'%>'),{'begin':_0x2f0166(0x251a),'end':_0x2f0166(0x2261),'subLanguage':_0x2f0166(0x4430),'excludeBegin':!0x0,'excludeEnd':!0x0}]};};},0x1d94:_0x2d5cff=>{const _0xa040d4=a0_0x11e7;_0x2d5cff[_0xa040d4(0x474c)]=function(_0x3aa1ec){const _0x46c46f=_0xa040d4,_0x1b2d79=_0x3aa1ec[_0x46c46f(0x41d2)];return{'name':_0x46c46f(0x4a07),'keywords':{'built_in':_0x46c46f(0x326e),'keyword':'after\x20and\x20andalso|10\x20band\x20begin\x20bnot\x20bor\x20bsl\x20bsr\x20bxor\x20case\x20catch\x20cond\x20div\x20end\x20fun\x20if\x20let\x20not\x20of\x20or\x20orelse|10\x20query\x20receive\x20rem\x20try\x20when\x20xor'},'contains':[{'className':_0x46c46f(0x4cea),'begin':_0x46c46f(0x18a1),'relevance':0xa},_0x3aa1ec[_0x46c46f(0x4e4f)]('%','$'),{'className':_0x46c46f(0x4a80),'begin':_0x46c46f(0x2b5),'relevance':0x0},_0x3aa1ec['APOS_STRING_MODE'],_0x3aa1ec[_0x46c46f(0x291b)],{'begin':_0x1b2d79[_0x46c46f(0x1d1d)](/\?(::)?/,/([A-Z]\w*)/,/((::)[A-Z]\w*)*/)},{'begin':'->'},{'begin':'ok'},{'begin':'!'},{'begin':_0x46c46f(0x4bbc),'relevance':0x0},{'begin':_0x46c46f(0x50b5),'relevance':0x0}]};};},0x1ffc:_0x2a192c=>{_0x2a192c['exports']=function(_0x2124e9){const 
_0x29f254=a0_0x11e7,_0x403392=_0x29f254(0x36d1),_0x1fe3bf='('+_0x403392+':'+_0x403392+'|'+_0x403392+')',_0x574218={'keyword':'after\x20and\x20andalso|10\x20band\x20begin\x20bnot\x20bor\x20bsl\x20bzr\x20bxor\x20case\x20catch\x20cond\x20div\x20end\x20fun\x20if\x20let\x20not\x20of\x20orelse|10\x20query\x20receive\x20rem\x20try\x20when\x20xor','literal':_0x29f254(0x177e)},_0x25e487=_0x2124e9[_0x29f254(0x4e4f)]('%','$'),_0x153a0c={'className':_0x29f254(0x4a80),'begin':_0x29f254(0x2b5),'relevance':0x0},_0x4956a7={'begin':_0x29f254(0x5076)+_0x403392+_0x29f254(0x28cf)},_0xbaaf7a={'begin':_0x1fe3bf+'\x5c(','end':'\x5c)','returnBegin':!0x0,'relevance':0x0,'contains':[{'begin':_0x1fe3bf,'relevance':0x0},{'begin':'\x5c(','end':'\x5c)','endsWithParent':!0x0,'returnEnd':!0x0,'relevance':0x0}]},_0x1816bd={'begin':/\{/,'end':/\}/,'relevance':0x0},_0x2d1650={'begin':'\x5cb_([A-Z][A-Za-z0-9_]*)?','relevance':0x0},_0x1ea00b={'begin':_0x29f254(0x28d0),'relevance':0x0},_0x54c52a={'begin':'#'+_0x2124e9['UNDERSCORE_IDENT_RE'],'relevance':0x0,'returnBegin':!0x0,'contains':[{'begin':'#'+_0x2124e9['UNDERSCORE_IDENT_RE'],'relevance':0x0},{'begin':/\{/,'end':/\}/,'relevance':0x0}]},_0x409c0c={'beginKeywords':_0x29f254(0x305e),'end':_0x29f254(0x2681),'keywords':_0x574218};_0x409c0c[_0x29f254(0x2b31)]=[_0x25e487,_0x4956a7,_0x2124e9[_0x29f254(0x46a1)](_0x2124e9['APOS_STRING_MODE'],{'className':''}),_0x409c0c,_0xbaaf7a,_0x2124e9[_0x29f254(0x291b)],_0x153a0c,_0x1816bd,_0x2d1650,_0x1ea00b,_0x54c52a];const _0x2f590f=[_0x25e487,_0x4956a7,_0x409c0c,_0xbaaf7a,_0x2124e9['QUOTE_STRING_MODE'],_0x153a0c,_0x1816bd,_0x2d1650,_0x1ea00b,_0x54c52a];_0xbaaf7a[_0x29f254(0x2b31)][0x1][_0x29f254(0x2b31)]=_0x2f590f,_0x1816bd[_0x29f254(0x2b31)]=_0x2f590f,_0x54c52a[_0x29f254(0x2b31)][0x1][_0x29f254(0x2b31)]=_0x2f590f;const 
_0x12b924={'className':_0x29f254(0xddd),'begin':'\x5c(','end':'\x5c)','contains':_0x2f590f};return{'name':_0x29f254(0x2131),'aliases':['erl'],'keywords':_0x574218,'illegal':_0x29f254(0xb64),'contains':[{'className':_0x29f254(0x14b2),'begin':'^'+_0x403392+_0x29f254(0x7ef),'end':'->','returnBegin':!0x0,'illegal':'\x5c(|#|//|/\x5c*|\x5c\x5c|:|;','contains':[_0x12b924,_0x2124e9[_0x29f254(0x46a1)](_0x2124e9['TITLE_MODE'],{'begin':_0x403392})],'starts':{'end':_0x29f254(0x2126),'keywords':_0x574218,'contains':_0x2f590f}},_0x25e487,{'begin':'^-','end':'\x5c.','relevance':0x0,'excludeEnd':!0x0,'returnBegin':!0x0,'keywords':{'$pattern':'-'+_0x2124e9['IDENT_RE'],'keyword':[_0x29f254(0x4895),'-record',_0x29f254(0x14a5),_0x29f254(0x29a0),_0x29f254(0x2da4),'-ifndef',_0x29f254(0x4c71),_0x29f254(0x2962),'-doc',_0x29f254(0xbfb),_0x29f254(0x2271),_0x29f254(0x4f9d),_0x29f254(0x2ef5),_0x29f254(0x2a08),_0x29f254(0x4439),_0x29f254(0x2e9a),_0x29f254(0x2b47),_0x29f254(0x2f31),_0x29f254(0x3c5e),_0x29f254(0x2437),_0x29f254(0x4490)][_0x29f254(0x4833)](_0x1f19fe=>_0x1f19fe+_0x29f254(0x2b7a))[_0x29f254(0x3541)]('\x20')},'contains':[_0x12b924]},_0x153a0c,_0x2124e9[_0x29f254(0x291b)],_0x54c52a,_0x2d1650,_0x1ea00b,_0x1816bd,{'begin':/\.$/}]};};},0x1c78:_0x22f19e=>{_0x22f19e['exports']=function(_0x1f47b6){const 
_0x14ad7c=a0_0x11e7;return{'name':_0x14ad7c(0x44f6),'aliases':[_0x14ad7c(0x1828),_0x14ad7c(0x2890)],'case_insensitive':!0x0,'keywords':{'$pattern':/[a-zA-Z][\w\.]*/,'built_in':[_0x14ad7c(0x3a8),_0x14ad7c(0x1b25),_0x14ad7c(0x49cc),_0x14ad7c(0xd3e),_0x14ad7c(0x246e),_0x14ad7c(0x29d3),_0x14ad7c(0xb1b),_0x14ad7c(0x415),_0x14ad7c(0x3e35),_0x14ad7c(0x2614),_0x14ad7c(0x433e),_0x14ad7c(0x278b),_0x14ad7c(0x5280),'AREAS',_0x14ad7c(0x46ee),'ASIN',_0x14ad7c(0x2230),_0x14ad7c(0x267f),_0x14ad7c(0x479b),_0x14ad7c(0x25cb),'AVEDEV',_0x14ad7c(0x50de),_0x14ad7c(0x4adc),_0x14ad7c(0x42b1),_0x14ad7c(0x2dee),'BAHTTEXT','BASE','BESSELI','BESSELJ',_0x14ad7c(0x3e29),_0x14ad7c(0x35bc),_0x14ad7c(0x28c3),_0x14ad7c(0x293d),_0x14ad7c(0x1289),_0x14ad7c(0x4ffe),_0x14ad7c(0x4a09),_0x14ad7c(0x4cb7),'BIN2OCT',_0x14ad7c(0x47a4),_0x14ad7c(0x33d8),_0x14ad7c(0x2586),_0x14ad7c(0x3bb3),_0x14ad7c(0x166a),_0x14ad7c(0x2a0b),_0x14ad7c(0x2c40),_0x14ad7c(0x11f6),_0x14ad7c(0x3768),_0x14ad7c(0x42ab),'CEILING',_0x14ad7c(0x2d1f),'CEILING.PRECISE',_0x14ad7c(0x3806),_0x14ad7c(0x3178),_0x14ad7c(0x48ce),_0x14ad7c(0x2c2c),_0x14ad7c(0x20d1),_0x14ad7c(0x2a01),_0x14ad7c(0x2fe),'CHISQ.INV',_0x14ad7c(0x2e5e),'CHISQ.TEST',_0x14ad7c(0x27e2),_0x14ad7c(0x2d88),'CODE',_0x14ad7c(0x4c25),_0x14ad7c(0x4d8a),_0x14ad7c(0x19d0),_0x14ad7c(0x3db7),_0x14ad7c(0x460b),_0x14ad7c(0xa92),_0x14ad7c(0x353f),_0x14ad7c(0x4a4e),_0x14ad7c(0x23a4),_0x14ad7c(0xc41),_0x14ad7c(0x1c90),_0x14ad7c(0x407b),_0x14ad7c(0xc1f),_0x14ad7c(0x272d),_0x14ad7c(0x1a1c),'COTH',_0x14ad7c(0x1991),_0x14ad7c(0x3949),_0x14ad7c(0x4024),_0x14ad7c(0x4348),'COUNTIFS',_0x14ad7c(0x2e97),'COUPDAYS',_0x14ad7c(0x1435),'COUPNCD','COUPNUM',_0x14ad7c(0x1102),_0x14ad7c(0x1319),_0x14ad7c(0x4312),_0x14ad7c(0x123c),_0x14ad7c(0xd92),_0x14ad7c(0x502a),_0x14ad7c(0x45ef),_0x14ad7c(0x230f),'CUBEMEMBER',_0x14ad7c(0xed5),_0x14ad7c(0x1165),'CUBESET','CUBESETCOUNT',_0x14ad7c(0x3805),_0x14ad7c(0x782),_0x14ad7c(0x1894),_0x14ad7c(0x2c7a),_0x14ad7c(0x4d3),_0x14ad7c(0x4f5f),_0x14ad7c(0x32c1),_0x14ad7c(0x51
bc),'DAYS',_0x14ad7c(0x1121),'DB',_0x14ad7c(0x4862),'DCOUNT',_0x14ad7c(0x42b7),_0x14ad7c(0x775),_0x14ad7c(0x2921),_0x14ad7c(0x21b8),_0x14ad7c(0xee3),'DECIMAL',_0x14ad7c(0x2f33),_0x14ad7c(0x2e51),_0x14ad7c(0x3b18),'DGET',_0x14ad7c(0x1278),_0x14ad7c(0x2d61),_0x14ad7c(0x503b),_0x14ad7c(0x39e2),_0x14ad7c(0x2e37),_0x14ad7c(0x397d),'DPRODUCT','DSTDEV',_0x14ad7c(0x5156),_0x14ad7c(0x760),_0x14ad7c(0x48d2),_0x14ad7c(0x648),_0x14ad7c(0x4acf),_0x14ad7c(0x3a7e),'EFFECT','ENCODEURL',_0x14ad7c(0x3dce),_0x14ad7c(0x2714),_0x14ad7c(0x44b6),_0x14ad7c(0x40e7),'ERFC.PRECISE',_0x14ad7c(0x1a02),_0x14ad7c(0x1df9),'EVEN',_0x14ad7c(0x45a1),_0x14ad7c(0x34b3),_0x14ad7c(0x33dc),'EXPONDIST','FACT',_0x14ad7c(0x2c9),_0x14ad7c(0x21c2),'F.DIST',_0x14ad7c(0x4623),'F.DIST.RT',_0x14ad7c(0x7a9),_0x14ad7c(0x2088),_0x14ad7c(0x46be),_0x14ad7c(0x4ae4),_0x14ad7c(0x3340),_0x14ad7c(0x4651),_0x14ad7c(0x2107),_0x14ad7c(0x2eaa),_0x14ad7c(0x3c58),_0x14ad7c(0x4c1a),'FLOOR.MATH',_0x14ad7c(0x1823),_0x14ad7c(0x3253),_0x14ad7c(0x29e7),_0x14ad7c(0x43ae),_0x14ad7c(0x292d),_0x14ad7c(0x18fd),_0x14ad7c(0x2ae5),_0x14ad7c(0x449a),_0x14ad7c(0x1075),'F.TEST','FTEST','FV',_0x14ad7c(0x4009),'GAMMA',_0x14ad7c(0x6ca),_0x14ad7c(0x3a8d),'GAMMA.INV','GAMMAINV',_0x14ad7c(0x3038),_0x14ad7c(0x3932),_0x14ad7c(0x3301),_0x14ad7c(0x387a),_0x14ad7c(0xe46),_0x14ad7c(0x85a),_0x14ad7c(0x357f),_0x14ad7c(0x318f),_0x14ad7c(0x1a19),_0x14ad7c(0x418e),_0x14ad7c(0x1164),_0x14ad7c(0x42f5),'HLOOKUP',_0x14ad7c(0x1d6e),_0x14ad7c(0x4791),_0x14ad7c(0x2fa1),_0x14ad7c(0x23c4),'IF',_0x14ad7c(0x1322),_0x14ad7c(0x4349),_0x14ad7c(0xd1d),'IMABS','IMAGINARY',_0x14ad7c(0x36b3),_0x14ad7c(0x29aa),_0x14ad7c(0x3f63),_0x14ad7c(0x3d75),_0x14ad7c(0x16d7),_0x14ad7c(0x4517),_0x14ad7c(0x732),_0x14ad7c(0x4c11),'IMEXP','IMLN',_0x14ad7c(0x4421),_0x14ad7c(0x2d1b),_0x14ad7c(0x2b4e),_0x14ad7c(0x705),_0x14ad7c(0x2501),_0x14ad7c(0x271a),_0x14ad7c(0x4efa),_0x14ad7c(0x35c),'IMSINH',_0x14ad7c(0x1ad1),_0x14ad7c(0x117b),_0x14ad7c(0x1413),_0x14ad7c(0x4a16),_0x14ad7c(0x5212),_0x14ad7c(0x403
a),_0x14ad7c(0x3d0e),_0x14ad7c(0x4f97),_0x14ad7c(0x1e95),_0x14ad7c(0x70b),'IPMT',_0x14ad7c(0x4179),_0x14ad7c(0x4db1),_0x14ad7c(0x1720),_0x14ad7c(0x487d),_0x14ad7c(0xe60),_0x14ad7c(0x2bba),_0x14ad7c(0x2d4a),_0x14ad7c(0x34bf),_0x14ad7c(0x177a),_0x14ad7c(0xab6),_0x14ad7c(0x3e77),_0x14ad7c(0x1cc2),'ISTEXT','ISO.CEILING','ISOWEEKNUM',_0x14ad7c(0x4f7e),_0x14ad7c(0x21d0),_0x14ad7c(0x2a93),_0x14ad7c(0x42ea),_0x14ad7c(0x380d),'LEFT',_0x14ad7c(0x4133),_0x14ad7c(0x4fa1),'LENB',_0x14ad7c(0x3ca),'LN',_0x14ad7c(0x131a),_0x14ad7c(0x30e3),_0x14ad7c(0x128a),'LOGINV',_0x14ad7c(0x243e),_0x14ad7c(0x3198),_0x14ad7c(0x242e),_0x14ad7c(0x4b22),_0x14ad7c(0x37d1),_0x14ad7c(0x348d),_0x14ad7c(0x26df),_0x14ad7c(0x1b38),_0x14ad7c(0x37d8),_0x14ad7c(0x33b7),_0x14ad7c(0x2cd6),_0x14ad7c(0x5187),_0x14ad7c(0x3193),_0x14ad7c(0x1716),_0x14ad7c(0x4291),_0x14ad7c(0x1917),_0x14ad7c(0x1fc0),_0x14ad7c(0x2e05),'MINVERSE','MIRR','MMULT',_0x14ad7c(0x1f9d),_0x14ad7c(0x376c),_0x14ad7c(0x377),_0x14ad7c(0x2428),_0x14ad7c(0x24a3),_0x14ad7c(0xd95),'MULTINOMIAL','MUNIT','N','NA','NEGBINOM.DIST',_0x14ad7c(0x15a1),_0x14ad7c(0x4597),'NETWORKDAYS.INTL',_0x14ad7c(0x353c),'NORM.DIST',_0x14ad7c(0x144f),_0x14ad7c(0x2c87),'NORM.INV',_0x14ad7c(0x2fed),_0x14ad7c(0x134e),_0x14ad7c(0x2177),'NORMSINV',_0x14ad7c(0x255e),'NOW',_0x14ad7c(0x2a26),'NPV',_0x14ad7c(0x3c3e),_0x14ad7c(0x3535),'OCT2DEC',_0x14ad7c(0x29d2),_0x14ad7c(0x43b1),_0x14ad7c(0x49db),_0x14ad7c(0x9bd),'ODDLPRICE',_0x14ad7c(0xafc),'OFFSET','OR',_0x14ad7c(0x1f35),'PEARSON',_0x14ad7c(0x48ea),'PERCENTILE.INC','PERCENTILE',_0x14ad7c(0x3429),_0x14ad7c(0x40f1),_0x14ad7c(0xf40),_0x14ad7c(0x31d0),'PERMUTATIONA','PHI',_0x14ad7c(0x25f6),'PI',_0x14ad7c(0x2f97),_0x14ad7c(0x144a),_0x14ad7c(0x443b),_0x14ad7c(0x4a4f),_0x14ad7c(0x4f22),_0x14ad7c(0x104a),_0x14ad7c(0x50cd),_0x14ad7c(0x16c1),_0x14ad7c(0x236f),'PRODUCT',_0x14ad7c(0x1e3e),'PV',_0x14ad7c(0x2860),_0x14ad7c(0x8cb),_0x14ad7c(0x36fa),_0x14ad7c(0x191e),_0x14ad7c(0x18ac),_0x14ad7c(0x4e8f),_0x14ad7c(0x3b1c),_0x14ad7c(0x1804),_0x14ad
7c(0x13e3),_0x14ad7c(0x20b8),_0x14ad7c(0x3956),_0x14ad7c(0x4cc9),_0x14ad7c(0x4a78),_0x14ad7c(0x299),'REPLACEB',_0x14ad7c(0x3a78),_0x14ad7c(0x3e74),_0x14ad7c(0x185a),_0x14ad7c(0x3bfd),_0x14ad7c(0x2c3a),_0x14ad7c(0x2d71),_0x14ad7c(0x42e8),_0x14ad7c(0x13d3),_0x14ad7c(0x3ee7),_0x14ad7c(0x4180),_0x14ad7c(0x44bb),_0x14ad7c(0x3ab8),_0x14ad7c(0x28b5),_0x14ad7c(0x40c7),_0x14ad7c(0x2503),_0x14ad7c(0x1cfa),'SECOND',_0x14ad7c(0x4bd),_0x14ad7c(0x4923),'SHEETS',_0x14ad7c(0x2349),_0x14ad7c(0x2763),'SINH',_0x14ad7c(0x467b),'SKEW.P',_0x14ad7c(0x4e3e),_0x14ad7c(0x23f0),_0x14ad7c(0x672),_0x14ad7c(0x1596),'SQRT',_0x14ad7c(0x4c4f),'STANDARDIZE','STDEV',_0x14ad7c(0x228a),_0x14ad7c(0x3380),'STDEVA',_0x14ad7c(0x5157),_0x14ad7c(0x14db),_0x14ad7c(0x286f),_0x14ad7c(0x2f47),_0x14ad7c(0x1aa4),'SUM',_0x14ad7c(0x1f75),_0x14ad7c(0x38fa),_0x14ad7c(0x467e),_0x14ad7c(0x1c7b),'SUMX2MY2','SUMX2PY2','SUMXMY2',_0x14ad7c(0xba3),_0x14ad7c(0x30d8),'T','TAN',_0x14ad7c(0x4c1e),'TBILLEQ','TBILLPRICE','TBILLYIELD',_0x14ad7c(0x20fb),_0x14ad7c(0x6c2),'T.DIST.RT',_0x14ad7c(0x275d),_0x14ad7c(0xda8),_0x14ad7c(0x1457),_0x14ad7c(0x22d8),_0x14ad7c(0x4408),'T.INV',_0x14ad7c(0xead),'TINV',_0x14ad7c(0x179b),_0x14ad7c(0x44cd),_0x14ad7c(0x4042),_0x14ad7c(0x482c),'TRIMMEAN',_0x14ad7c(0x3072),'TRUNC','T.TEST',_0x14ad7c(0x49e3),_0x14ad7c(0x811),_0x14ad7c(0x149d),_0x14ad7c(0x529b),'UPPER',_0x14ad7c(0x245),'VAR',_0x14ad7c(0x49fd),'VAR.S','VARA',_0x14ad7c(0x209d),'VARPA',_0x14ad7c(0x1a83),_0x14ad7c(0x3f02),_0x14ad7c(0x88e),_0x14ad7c(0x30c5),'WEEKNUM','WEIBULL',_0x14ad7c(0x34a7),_0x14ad7c(0x1be1),_0x14ad7c(0xac6),'XIRR',_0x14ad7c(0x7e3),'XOR',_0x14ad7c(0x12d6),_0x14ad7c(0x2985),'YIELD','YIELDDISC',_0x14ad7c(0x1d4c),'Z.TEST','ZTEST']},'contains':[{'begin':/^=/,'end':/[^=]/,'returnEnd':!0x0,'illegal':/=/,'relevance':0xa},{'className':_0x14ad7c(0x239b),'begin':/\b[A-Z]{1,2}\d+\b/,'end':/[^\d]/,'excludeEnd':!0x0,'relevance':0x0},{'className':'symbol','begin':/[A-Z]{0,2}\d*:[A-Z]{0,2}\d*/,'relevance':0x0},_0x1f47b6['BACKSLASH_ESCAPE'],
_0x1f47b6[_0x14ad7c(0x291b)],{'className':_0x14ad7c(0x4a80),'begin':_0x1f47b6['NUMBER_RE']+_0x14ad7c(0x12b9),'relevance':0x0},_0x1f47b6[_0x14ad7c(0x4e4f)](/\bN\(/,/\)/,{'excludeBegin':!0x0,'excludeEnd':!0x0,'illegal':/\n/})]};};},0x1f48:_0x1279ce=>{_0x1279ce['exports']=function(_0x50bfeb){const _0x448662=a0_0x11e7;return{'name':'FIX','contains':[{'begin':/[^\u2401\u0001]+/,'end':/[\u2401\u0001]/,'excludeEnd':!0x0,'returnBegin':!0x0,'returnEnd':!0x1,'contains':[{'begin':/([^\u2401\u0001=]+)/,'end':/=([^\u2401\u0001=]+)/,'returnEnd':!0x0,'returnBegin':!0x1,'className':_0x448662(0x431d)},{'begin':/=/,'end':/([\u2401\u0001])/,'excludeEnd':!0x0,'excludeBegin':!0x0,'className':_0x448662(0x2431)}]}],'case_insensitive':!0x0};};},0x2610:_0x17e9a4=>{_0x17e9a4['exports']=function(_0xd5b6c2){const _0x5b7b5b=a0_0x11e7,_0x1086b4={'className':_0x5b7b5b(0x14b2),'beginKeywords':_0x5b7b5b(0x452b),'end':/[:={\[(\n;]/,'excludeEnd':!0x0,'contains':[{'className':_0x5b7b5b(0x4685),'relevance':0x0,'begin':/[^0-9\n\t "'(),.`{}\[\]:;][^\n\t "'(),.`{}\[\]:;]+|[^0-9\n\t "'(),.`{}\[\]:;=]/}]};return{'name':'Flix','keywords':{'keyword':[_0x5b7b5b(0x2e7e),'class',_0x5b7b5b(0x452b),_0x5b7b5b(0x3d4),_0x5b7b5b(0x44d8),'if','impl','import','in',_0x5b7b5b(0x44d9),_0x5b7b5b(0x3f3a),_0x5b7b5b(0x3bb5),_0x5b7b5b(0x1e61),_0x5b7b5b(0x2d96),_0x5b7b5b(0x37f7),'switch',_0x5b7b5b(0xcfc),_0x5b7b5b(0x5075),'with'],'literal':['true',_0x5b7b5b(0x3984)]},'contains':[_0xd5b6c2['C_LINE_COMMENT_MODE'],_0xd5b6c2[_0x5b7b5b(0x23fe)],{'className':'string','begin':/'(.|\\[xXuU][a-zA-Z0-9]+)'/},{'className':'string','variants':[{'begin':'\x22','end':'\x22'}]},_0x1086b4,_0xd5b6c2[_0x5b7b5b(0xd12)]]};};},0x4a5:_0x951d2c=>{_0x951d2c['exports']=function(_0x15e9dc){const 
_0xb006bf=a0_0x11e7,_0x5b35dc=_0x15e9dc['regex'],_0x13548b={'variants':[_0x15e9dc[_0xb006bf(0x4e4f)]('!','$',{'relevance':0x0}),_0x15e9dc[_0xb006bf(0x4e4f)](_0xb006bf(0x2f15),'$',{'relevance':0x0}),_0x15e9dc[_0xb006bf(0x4e4f)](_0xb006bf(0x2b54),'$',{'relevance':0x0})]},_0x46e3ca=/(_[a-z_\d]+)?/,_0x22503e=/([de][+-]?\d+)?/,_0x4b0cdb={'className':_0xb006bf(0x4a80),'variants':[{'begin':_0x5b35dc[_0xb006bf(0x1d1d)](/\b\d+/,/\.(\d*)/,_0x22503e,_0x46e3ca)},{'begin':_0x5b35dc['concat'](/\b\d+/,_0x22503e,_0x46e3ca)},{'begin':_0x5b35dc[_0xb006bf(0x1d1d)](/\.\d+/,_0x22503e,_0x46e3ca)}],'relevance':0x0},_0x2890d4={'className':_0xb006bf(0x14b2),'beginKeywords':_0xb006bf(0xf14),'illegal':'[${=\x5cn]','contains':[_0x15e9dc[_0xb006bf(0xb0e)],{'className':_0xb006bf(0xddd),'begin':'\x5c(','end':'\x5c)'}]};return{'name':'Fortran','case_insensitive':!0x0,'aliases':[_0xb006bf(0x1528),'f95'],'keywords':{'keyword':['kind','do',_0xb006bf(0x2a4f),_0xb006bf(0x16a7),'shared',_0xb006bf(0x552),_0xb006bf(0x4ef4),_0xb006bf(0x236b),_0xb006bf(0x13f8),_0xb006bf(0x3b62),_0xb006bf(0x262a),_0xb006bf(0xcfc),'endtype',_0xb006bf(0x35f1),'endselect',_0xb006bf(0x184a),_0xb006bf(0x2681),_0xb006bf(0x5092),'endif','if',_0xb006bf(0x43a6),_0xb006bf(0xc1e),'only',_0xb006bf(0x2b31),_0xb006bf(0x3d23),'return',_0xb006bf(0xaee),_0xb006bf(0xaf5),'block','endblock',_0xb006bf(0x5154),_0xb006bf(0x39ce),_0xb006bf(0x221),_0xb006bf(0x14b2),_0xb006bf(0x336a),_0xb006bf(0x37d7),_0xb006bf(0x35ef),_0xb006bf(0x24b5),_0xb006bf(0x657),_0xb006bf(0x4846),_0xb006bf(0x237c),'.gt.',_0xb006bf(0x4ee8),'goto','save','else',_0xb006bf(0x84a),_0xb006bf(0x196c),_0xb006bf(0x3fc9),'case',_0xb006bf(0x463e),_0xb006bf(0x2bd),_0xb006bf(0x2cf4),_0xb006bf(0x226),'file',_0xb006bf(0x11db),_0xb006bf(0x31e0),_0xb006bf(0x1096),_0xb006bf(0x15a4),_0xb006bf(0x11d8),'named',_0xb006bf(0x1bed),'number',_0xb006bf(0x233b),'rec',_0xb006bf(0xcc1),_0xb006bf(0x429c),'status',_0xb006bf(0xb07),_0xb006bf(0x43c3),_0xb006bf(0x16d9),'format',_0xb006bf(0x468c),_0xb006bf(0x1
429),'exit',_0xb006bf(0x3361),'c_alert',_0xb006bf(0x182a),_0xb006bf(0x315d),_0xb006bf(0x14a2),_0xb006bf(0x1c14),_0xb006bf(0x2353),_0xb006bf(0x3d6c),'iomsg','synchronous',_0xb006bf(0xf99),_0xb006bf(0x2a75),_0xb006bf(0xed1),_0xb006bf(0xc14),_0xb006bf(0x3512),_0xb006bf(0x3027),_0xb006bf(0x4428),_0xb006bf(0x331),_0xb006bf(0x289d),_0xb006bf(0x4fe9),_0xb006bf(0x3b3b),_0xb006bf(0xff8),'final',_0xb006bf(0x3cc6),_0xb006bf(0x1390),_0xb006bf(0x38d5),_0xb006bf(0x39e8),_0xb006bf(0x44d8),_0xb006bf(0x3f82),'c_short',_0xb006bf(0xb0d),_0xb006bf(0x155b),_0xb006bf(0x193f),'c_size_t',_0xb006bf(0x8da),_0xb006bf(0x260c),_0xb006bf(0x3eba),_0xb006bf(0x402b),'c_int_least8_t',_0xb006bf(0x4c65),_0xb006bf(0x2b8b),_0xb006bf(0x146f),_0xb006bf(0x1da2),_0xb006bf(0x3409),'c_int_fast32_t',_0xb006bf(0x316b),_0xb006bf(0x2af7),_0xb006bf(0x3670),_0xb006bf(0x4135),_0xb006bf(0x454f),_0xb006bf(0x9ad),_0xb006bf(0x19e9),_0xb006bf(0x4b2),'c_long_double_complex',_0xb006bf(0x2952),'c_char',_0xb006bf(0x262d),'c_null_funptr',_0xb006bf(0x48fc),_0xb006bf(0x2d82),_0xb006bf(0x9e7),'c_vertical_tab',_0xb006bf(0x14a6),_0xb006bf(0xbd2),'c_funloc',_0xb006bf(0x4528),_0xb006bf(0x3868),_0xb006bf(0x4bdf),_0xb006bf(0xff5),'iso_fortran_env',_0xb006bf(0x1738),_0xb006bf(0x2d58),_0xb006bf(0x3f3c),'input_unit',_0xb006bf(0x2944),_0xb006bf(0xe75),_0xb006bf(0x4464),_0xb006bf(0x988),_0xb006bf(0x2d8d),_0xb006bf(0x4549),_0xb006bf(0x42b2),'ieee_get_underflow_mode',_0xb006bf(0x38e1),_0xb006bf(0x2b4f),_0xb006bf(0x4ec3),_0xb006bf(0x3829),'pad','position','action',_0xb006bf(0x4b94),_0xb006bf(0x23d0),_0xb006bf(0x3daf),_0xb006bf(0x4b58),'nml',_0xb006bf(0x321b),_0xb006bf(0x1285),'namelist','include',_0xb006bf(0x3c2),_0xb006bf(0x4765),_0xb006bf(0x44ca),'impure','integer',_0xb006bf(0x47f6),'character','complex',_0xb006bf(0x4adf),_0xb006bf(0x2fab),'dimension',_0xb006bf(0x3ef4),_0xb006bf(0x3740),'external',_0xb006bf(0x12c0),'none','double',_0xb006bf(0x23f8),'assign',_0xb006bf(0x3e1c),_0xb006bf(0x51e4),'pointer',_0xb006bf(0x1bc7),'in',_0xb006bf(0x3ab
5),_0xb006bf(0x1484),_0xb006bf(0x1e0b),_0xb006bf(0x5139)],'literal':[_0xb006bf(0x2b45),_0xb006bf(0x3f9a)],'built_in':['alog',_0xb006bf(0x1f09),_0xb006bf(0xf17),'amax1','amin0',_0xb006bf(0x5039),_0xb006bf(0x2f9d),_0xb006bf(0x64d),'ccos',_0xb006bf(0x19ef),'clog',_0xb006bf(0x282c),_0xb006bf(0x2615),_0xb006bf(0x2843),_0xb006bf(0x1f4b),_0xb006bf(0x27de),_0xb006bf(0x4217),_0xb006bf(0x13ff),'dcos','dcosh',_0xb006bf(0x4499),_0xb006bf(0xf4b),_0xb006bf(0x3cb2),_0xb006bf(0x11fd),_0xb006bf(0x3b39),_0xb006bf(0x483a),_0xb006bf(0x193c),_0xb006bf(0x384e),_0xb006bf(0x4368),_0xb006bf(0x3a6),_0xb006bf(0x1b89),_0xb006bf(0x1e3d),'dsqrt',_0xb006bf(0x186e),_0xb006bf(0x38d1),_0xb006bf(0x1ab8),'iabs',_0xb006bf(0x234d),'idint',_0xb006bf(0x4937),_0xb006bf(0x3076),_0xb006bf(0x2999),_0xb006bf(0x31a2),_0xb006bf(0x2fb8),_0xb006bf(0x2151),_0xb006bf(0x19ce),_0xb006bf(0x1534),_0xb006bf(0x2260),'cdabs',_0xb006bf(0xa27),'cdexp','cdlog',_0xb006bf(0x7df),_0xb006bf(0x171a),'cqabs',_0xb006bf(0x232a),_0xb006bf(0x35ea),_0xb006bf(0x4a13),_0xb006bf(0x3dde),_0xb006bf(0x352d),_0xb006bf(0x621),_0xb006bf(0x2f58),_0xb006bf(0x5299),'derfc',_0xb006bf(0x98e),_0xb006bf(0x3703),_0xb006bf(0x624),_0xb006bf(0x46a4),_0xb006bf(0x4f33),_0xb006bf(0x23f6),'qacos',_0xb006bf(0x1784),_0xb006bf(0x40cb),'qatan2',_0xb006bf(0x41ed),'qconjg',_0xb006bf(0x984),'qcosh',_0xb006bf(0x37a3),_0xb006bf(0x3a08),_0xb006bf(0x24f7),_0xb006bf(0x4dba),_0xb006bf(0x875),_0xb006bf(0x275a),_0xb006bf(0x46e2),_0xb006bf(0x409),_0xb006bf(0x4681),'qmax1','qmin1',_0xb006bf(0x4eb4),'qnint',_0xb006bf(0x2957),_0xb006bf(0x266c),_0xb006bf(0x43f4),_0xb006bf(0x2df9),_0xb006bf(0x2012),'qtanh','abs','acos',_0xb006bf(0x114b),'aint',_0xb006bf(0x4884),_0xb006bf(0x3c15),'atan',_0xb006bf(0x41c2),_0xb006bf(0x373c),_0xb006bf(0x435e),_0xb006bf(0x3af5),_0xb006bf(0x3935),_0xb006bf(0x486e),'exp','ichar',_0xb006bf(0x3bb5),_0xb006bf(0xc16),_0xb006bf(0x20ff),_0xb006bf(0x1463),_0xb006bf(0x4529),_0xb006bf(0x37c8),_0xb006bf(0x3f86),_0xb006bf(0x7f7),'sin',_0xb006bf(0x4bb3),_0xb006bf(0x
5011),_0xb006bf(0x38de),'tanh',_0xb006bf(0x4957),_0xb006bf(0x4c95),'dim','lge',_0xb006bf(0x1eb8),'lle','llt',_0xb006bf(0x4531),_0xb006bf(0x44d7),_0xb006bf(0x2f21),_0xb006bf(0x6e1),_0xb006bf(0x437),_0xb006bf(0x2546),_0xb006bf(0xc36),_0xb006bf(0xfc1),_0xb006bf(0x4684),_0xb006bf(0x5102),'bit_size','btest','ceiling','count',_0xb006bf(0x2526),_0xb006bf(0x5278),_0xb006bf(0xe1b),_0xb006bf(0x428c),'eoshift','epsilon',_0xb006bf(0x5119),'floor','fraction',_0xb006bf(0x44cb),_0xb006bf(0x492c),_0xb006bf(0x3143),_0xb006bf(0x734),_0xb006bf(0x3dc0),_0xb006bf(0x3a51),_0xb006bf(0x4211),_0xb006bf(0x19f0),'ishftc',_0xb006bf(0x1b09),'len_trim',_0xb006bf(0x47c4),_0xb006bf(0x1602),_0xb006bf(0x488f),_0xb006bf(0x35db),_0xb006bf(0x10d1),_0xb006bf(0x3e98),_0xb006bf(0x9df),_0xb006bf(0x2b61),_0xb006bf(0x4843),'mvbits','nearest',_0xb006bf(0x22d6),'present',_0xb006bf(0x49a5),_0xb006bf(0x13f2),'random_number','random_seed',_0xb006bf(0x51f),_0xb006bf(0x11d0),_0xb006bf(0x1324),_0xb006bf(0x254f),'scale','scan',_0xb006bf(0x4cab),'selected_real_kind','set_exponent',_0xb006bf(0x4d0a),_0xb006bf(0x395f),'spacing',_0xb006bf(0x2634),_0xb006bf(0x13b9),_0xb006bf(0x50df),'tiny',_0xb006bf(0x5a5),_0xb006bf(0x1b23),_0xb006bf(0x3adf),_0xb006bf(0x4f08),_0xb006bf(0x1a64),_0xb006bf(0x31fa),_0xb006bf(0x3b0c),_0xb006bf(0x4d36),'dble',_0xb006bf(0x1a87),'dprod',_0xb006bf(0x4b6d),'command_argument_count','get_command',_0xb006bf(0x105a),_0xb006bf(0x4060),_0xb006bf(0x4e58),'ieee_arithmetic',_0xb006bf(0x42b2),_0xb006bf(0x308b),_0xb006bf(0x38e1),'is_iostat_eor',_0xb006bf(0x40fd),_0xb006bf(0x265),_0xb006bf(0x421d),_0xb006bf(0x3dbf),_0xb006bf(0x1984),_0xb006bf(0x5159),'asinh',_0xb006bf(0x3075),_0xb006bf(0x3cf1),_0xb006bf(0x18a9),_0xb006bf(0x1e5c),_0xb006bf(0x2f7f),'bessel_y1',_0xb006bf(0x334),_0xb006bf(0x1a55),_0xb006bf(0x1147),_0xb006bf(0x3bf1),_0xb006bf(0x1c19),'log_gamma',_0xb006bf(0x1466),_0xb006bf(0x334b),_0xb006bf(0x21a1),'atomic_ref','execute_command_line',_0xb006bf(0x1768),_0xb006bf(0x5bd),_0xb006bf(0x492e),_0xb006bf(0x
3261),_0xb006bf(0x2f52),_0xb006bf(0x2811),'ble','blt',_0xb006bf(0x2fd),'dshiftr',_0xb006bf(0xc77),_0xb006bf(0x2579),'iany',_0xb006bf(0x4290),_0xb006bf(0x12da),'lcobound',_0xb006bf(0x262c),'maskl',_0xb006bf(0x1ee5),'num_images',_0xb006bf(0x4d52),_0xb006bf(0x363f),_0xb006bf(0x2e3a),'shifta','shiftl',_0xb006bf(0x7bf),_0xb006bf(0x2e2),_0xb006bf(0x194e),'change',_0xb006bf(0x3003),'co_broadcast',_0xb006bf(0x138c),_0xb006bf(0x4eda),_0xb006bf(0x22e4),_0xb006bf(0x3013)]},'illegal':/\/\*/,'contains':[{'className':_0xb006bf(0x2431),'relevance':0x0,'variants':[_0x15e9dc['APOS_STRING_MODE'],_0x15e9dc['QUOTE_STRING_MODE']]},_0x2890d4,{'begin':/^C\s*=(?!=)/,'relevance':0x0},_0x13548b,_0x4b0cdb]};};},0x2455:_0x53f24a=>{const _0x20c0cd=a0_0x11e7;function _0x183d92(_0xb62edd){return new RegExp(_0xb62edd['replace'](/[-/\\^$*+?.()|[\]{}]/g,'\x5c$&'),'m');}function _0x51fac5(_0x5be953){const _0xbfe85f=a0_0x11e7;return _0x5be953?_0xbfe85f(0x2431)==typeof _0x5be953?_0x5be953:_0x5be953[_0xbfe85f(0x33b0)]:null;}function _0x3c38c4(_0x3e28c3){return _0x324b6b('(?=',_0x3e28c3,')');}function _0x324b6b(..._0x2f8de9){const _0x204d2c=a0_0x11e7;return _0x2f8de9[_0x204d2c(0x4833)](_0xfdc93c=>_0x51fac5(_0xfdc93c))[_0x204d2c(0x3541)]('');}function _0x2fcfea(..._0x2b6e7c){const _0x168775=a0_0x11e7,_0x46bf40=function(_0x4636fd){const _0x3f83e1=a0_0x11e7,_0x3e0caf=_0x4636fd[_0x4636fd['length']-0x1];return _0x3f83e1(0x20c7)==typeof _0x3e0caf&&_0x3e0caf['constructor']===Object?(_0x4636fd[_0x3f83e1(0x4986)](_0x4636fd[_0x3f83e1(0x1b19)]-0x1,0x1),_0x3e0caf):{};}(_0x2b6e7c);return'('+(_0x46bf40[_0x168775(0x2f26)]?'':'?:')+_0x2b6e7c['map'](_0x54a028=>_0x51fac5(_0x54a028))['join']('|')+')';}_0x53f24a[_0x20c0cd(0x474c)]=function(_0x3c3733){const 
_0x24b0dd=_0x20c0cd,_0x1865c5={'scope':_0x24b0dd(0x1357),'match':/\b(yield|return|let|do|match|use)!/},_0x53818f=['bool',_0x24b0dd(0x961),_0x24b0dd(0x4c5b),_0x24b0dd(0xc11),_0x24b0dd(0x22b1),_0x24b0dd(0x3f1b),_0x24b0dd(0xd4d),_0x24b0dd(0xf20),_0x24b0dd(0xa09),_0x24b0dd(0xc16),'uint',_0x24b0dd(0x13c3),'uint64',_0x24b0dd(0x1403),_0x24b0dd(0x18ed),_0x24b0dd(0x2353),_0x24b0dd(0x1ab8),_0x24b0dd(0x5024),'float32',_0x24b0dd(0x4d4),_0x24b0dd(0x373c),'string','unit',_0x24b0dd(0x3179),_0x24b0dd(0x1081),_0x24b0dd(0x1c68),_0x24b0dd(0x144e),'array',_0x24b0dd(0xc26),_0x24b0dd(0x48e6),'exn',_0x24b0dd(0x4d05),_0x24b0dd(0x1fba),_0x24b0dd(0x22ac),'outref',_0x24b0dd(0x1f47),_0x24b0dd(0x3614)],_0x5bc0f1={'keyword':[_0x24b0dd(0x3027),_0x24b0dd(0x2663),'as','assert',_0x24b0dd(0x7c0),_0x24b0dd(0x42fa),_0x24b0dd(0x1390),_0x24b0dd(0x3d23),'delegate','do','done',_0x24b0dd(0x4573),_0x24b0dd(0x4003),_0x24b0dd(0x4ef2),_0x24b0dd(0x3d4),_0x24b0dd(0x2681),'exception','extern',_0x24b0dd(0x37b2),_0x24b0dd(0x1aaf),_0x24b0dd(0x3c19),_0x24b0dd(0x451d),'function','global','if','in',_0x24b0dd(0x46a1),_0x24b0dd(0x2988),_0x24b0dd(0x321b),'internal',_0x24b0dd(0x393d),_0x24b0dd(0x1e61),_0x24b0dd(0x2d96),'member',_0x24b0dd(0x196c),_0x24b0dd(0x1c6c),'namespace',_0x24b0dd(0x4321),'of',_0x24b0dd(0x1795),'or',_0x24b0dd(0x35a7),_0x24b0dd(0x4ef4),_0x24b0dd(0x39ce),_0x24b0dd(0x4860),_0x24b0dd(0xdfd),_0x24b0dd(0x2c7c),_0x24b0dd(0x4146),'then','to','try',_0x24b0dd(0xcfc),'upcast',_0x24b0dd(0x84a),'val',_0x24b0dd(0x27d6),_0x24b0dd(0x191b),_0x24b0dd(0x552),_0x24b0dd(0x2aa7),'yield'],'literal':[_0x24b0dd(0x4022),_0x24b0dd(0x3984),'null','Some','None','Ok',_0x24b0dd(0x5e5),_0x24b0dd(0x320),_0x24b0dd(0x8f3),_0x24b0dd(0xe2c),'nanf'],'built_in':[_0x24b0dd(0xc1a),_0x24b0dd(0x21c3),_0x24b0dd(0x2c1e),'reraise',_0x24b0dd(0x4e11),_0x24b0dd(0x681),_0x24b0dd(0x1fa),_0x24b0dd(0xf9e),_0x24b0dd(0x44d8),_0x24b0dd(0xc5d),'typeof',_0x24b0dd(0x1ea5),_0x24b0dd(0x3354),_0x24b0dd(0x949),_0x24b0dd(0x49cd),_0x24b0dd(0x2fea),'id',_0x24b0dd(0x25
f8),'snd','ignore',_0x24b0dd(0x192b),_0x24b0dd(0x347a),_0x24b0dd(0x22ec),_0x24b0dd(0xef2),_0x24b0dd(0x1e83),_0x24b0dd(0x32fe),_0x24b0dd(0x45c3),_0x24b0dd(0xd87),_0x24b0dd(0x35a8),_0x24b0dd(0x147c),'fprintf',_0x24b0dd(0x4891),_0x24b0dd(0x3605),_0x24b0dd(0x4c07)],'variable.constant':['__LINE__','__SOURCE_DIRECTORY__','__SOURCE_FILE__']},_0x36b1a3={'variants':[_0x3c3733[_0x24b0dd(0x4e4f)](/\(\*(?!\))/,/\*\)/,{'contains':[_0x24b0dd(0x4454)]}),_0x3c3733[_0x24b0dd(0x2ae2)]]},_0x513b34={'scope':_0x24b0dd(0x3362),'begin':/``/,'end':/``/},_0x58c02d=/\B('|\^)/,_0x402ea3={'scope':'symbol','variants':[{'match':_0x324b6b(_0x58c02d,/``.*?``/)},{'match':_0x324b6b(_0x58c02d,_0x3c3733[_0x24b0dd(0x206e)])}],'relevance':0x0},_0x186297=function({includeEqual:_0x11fe41}){const _0x5c5a24=_0x24b0dd;let _0x251d36;_0x251d36=_0x11fe41?_0x5c5a24(0x1fdb):_0x5c5a24(0x45a9);const _0x384df0=_0x324b6b('[',...Array[_0x5c5a24(0x27e6)](_0x251d36)[_0x5c5a24(0x4833)](_0x183d92),']'),_0x56f451=_0x2fcfea(_0x384df0,/\./),_0x8fadc8=_0x324b6b(_0x56f451,_0x3c38c4(_0x56f451)),_0x261073=_0x2fcfea(_0x324b6b(_0x8fadc8,_0x56f451,'*'),_0x324b6b(_0x384df0,'+'));return{'scope':_0x5c5a24(0x1182),'match':_0x2fcfea(_0x261073,/:\?>/,/:\?/,/:>/,/:=/,/::?/,/\$/),'relevance':0x0};},_0x4326f4=_0x186297({'includeEqual':!0x0}),_0x5af5bc=_0x186297({'includeEqual':!0x1}),_0x3fc86d=function(_0x3dcd09,_0x2f5103){const 
_0x505f9d=_0x24b0dd;return{'begin':_0x324b6b(_0x3dcd09,_0x3c38c4(_0x324b6b(/\s*/,_0x2fcfea(/\w/,/'/,/\^/,/#/,/``/,/\(/,/{\|/)))),'beginScope':_0x2f5103,'end':_0x3c38c4(_0x2fcfea(/\n/,/=/)),'relevance':0x0,'keywords':_0x3c3733[_0x505f9d(0x46a1)](_0x5bc0f1,{'type':_0x53818f}),'contains':[_0x36b1a3,_0x402ea3,_0x3c3733[_0x505f9d(0x46a1)](_0x513b34,{'scope':null}),_0x5af5bc]};},_0x5ed84d=_0x3fc86d(/:/,_0x24b0dd(0x1182)),_0x1f18ec=_0x3fc86d(/\bof\b/,_0x24b0dd(0x1357)),_0x575bc6={'begin':[/(^|\s+)/,/type/,/\s+/,/[a-zA-Z_](\w|')*/],'beginScope':{0x2:_0x24b0dd(0x1357),0x4:_0x24b0dd(0x19e4)},'end':_0x3c38c4(/\(|=|$/),'keywords':_0x5bc0f1,'contains':[_0x36b1a3,_0x3c3733[_0x24b0dd(0x46a1)](_0x513b34,{'scope':null}),_0x402ea3,{'scope':'operator','match':/<|>/},_0x5ed84d]},_0x30258d={'scope':_0x24b0dd(0x3cfa),'match':/\b[_a-z]\w*(?=\s*\{)/},_0x12af31={'begin':[/^\s*/,_0x324b6b(/#/,_0x2fcfea('if','else',_0x24b0dd(0x39b),_0x24b0dd(0x3572),_0x24b0dd(0x45bf),_0x24b0dd(0x4b25),'r','i','I',_0x24b0dd(0x2d42),_0x24b0dd(0x51b6),_0x24b0dd(0x2e5),_0x24b0dd(0x2188))),/\b/],'beginScope':{0x2:_0x24b0dd(0x5153)},'end':_0x3c38c4(/\s|$/)},_0x16258b={'variants':[_0x3c3733[_0x24b0dd(0xed7)],_0x3c3733[_0x24b0dd(0xd12)]]},_0x1aa5f6={'scope':'string','begin':/"/,'end':/"/,'contains':[_0x3c3733['BACKSLASH_ESCAPE']]},_0x291a1c={'scope':_0x24b0dd(0x2431),'begin':/@"/,'end':/"/,'contains':[{'match':/""/},_0x3c3733[_0x24b0dd(0x4a76)]]},_0x30249b={'scope':_0x24b0dd(0x2431),'begin':/"""/,'end':/"""/,'relevance':0x2},_0x5951c={'scope':_0x24b0dd(0x2ad6),'begin':/\{/,'end':/\}/,'keywords':_0x5bc0f1},_0x485b4c={'scope':'string','begin':/\$"/,'end':/"/,'contains':[{'match':/\{\{/},{'match':/\}\}/},_0x3c3733[_0x24b0dd(0x4a76)],_0x5951c]},_0x387790={'scope':_0x24b0dd(0x2431),'begin':/(\$@|@\$)"/,'end':/"/,'contains':[{'match':/\{\{/},{'match':/\}\}/},{'match':/""/},_0x3c3733['BACKSLASH_ESCAPE'],_0x5951c]},_0x291d1c={'scope':_0x24b0dd(0x2431),'begin':/\$"""/,'end':/"""/,'contains':[{'match':/\{\{/},{'match':/\}\}/},
_0x5951c],'relevance':0x2},_0x19fad1={'scope':'string','match':_0x324b6b(/'/,_0x2fcfea(/[^\\']/,/\\(?:.|\d{3}|x[a-fA-F\d]{2}|u[a-fA-F\d]{4}|U[a-fA-F\d]{8})/),/'/)};return _0x5951c[_0x24b0dd(0x2b31)]=[_0x387790,_0x485b4c,_0x291a1c,_0x1aa5f6,_0x19fad1,_0x1865c5,_0x36b1a3,_0x513b34,_0x5ed84d,_0x30258d,_0x12af31,_0x16258b,_0x402ea3,_0x4326f4],{'name':'F#','aliases':['fs','f#'],'keywords':_0x5bc0f1,'illegal':/\/\*/,'classNameAliases':{'computation-expression':_0x24b0dd(0x1357)},'contains':[_0x1865c5,{'variants':[_0x291d1c,_0x387790,_0x485b4c,_0x30249b,_0x291a1c,_0x1aa5f6,_0x19fad1]},_0x36b1a3,_0x513b34,_0x575bc6,{'scope':_0x24b0dd(0x5153),'begin':/\[\]/,'relevance':0x2,'contains':[_0x513b34,_0x30249b,_0x291a1c,_0x1aa5f6,_0x19fad1,_0x16258b]},_0x1f18ec,_0x5ed84d,_0x30258d,_0x12af31,_0x16258b,_0x402ea3,_0x4326f4]};};},0x1d47:_0x5859f9=>{_0x5859f9['exports']=function(_0x3fc098){const _0x520228=a0_0x11e7,_0x4c866e=_0x3fc098['regex'],_0x4764d0={'keyword':_0x520228(0x4e92),'literal':_0x520228(0x25b7),'built_in':_0x520228(0x3f9c)},_0x4abeee={'className':_0x520228(0x239b),'variants':[{'begin':/=[lgenxc]=/},{'begin':/\$/}]},_0x5557f7={'className':_0x520228(0x4645),'variants':[{'begin':'\x27','end':'\x27'},{'begin':'\x22','end':'\x22'}],'illegal':'\x5cn','contains':[_0x3fc098[_0x520228(0x4a76)]]},_0x5c9a8e={'begin':'/','end':'/','keywords':_0x4764d0,'contains':[_0x5557f7,_0x3fc098[_0x520228(0x2ae2)],_0x3fc098[_0x520228(0x23fe)],_0x3fc098['QUOTE_STRING_MODE'],_0x3fc098[_0x520228(0xa4c)],_0x3fc098['C_NUMBER_MODE']]},_0x1bc30d=/[a-z0-9&#*=?@\\><:,()$[\]_.{}!+%^-]+/,_0x3910eb={'begin':/[a-z][a-z0-9_]*(\([a-z0-9_, ]*\))?[ \t]+/,'excludeBegin':!0x0,'end':'$','endsWithParent':!0x0,'contains':[_0x5557f7,_0x5c9a8e,{'className':_0x520228(0x4645),'begin':_0x4c866e['concat'](_0x1bc30d,_0x4c866e['anyNumberOfTimes'](_0x4c866e[_0x520228(0x1d1d)](/[ 
]+/,_0x1bc30d))),'relevance':0x0}]};return{'name':_0x520228(0x3bb6),'aliases':[_0x520228(0x1c64)],'case_insensitive':!0x0,'keywords':_0x4764d0,'contains':[_0x3fc098['COMMENT'](/^\$ontext/,/^\$offtext/),{'className':_0x520228(0x5153),'begin':_0x520228(0x3869),'end':'$','returnBegin':!0x0,'contains':[{'className':_0x520228(0x1357),'begin':'^\x5c$[a-z0-9]+'}]},_0x3fc098[_0x520228(0x4e4f)](_0x520228(0x50f9),'$'),_0x3fc098[_0x520228(0x2ae2)],_0x3fc098[_0x520228(0x23fe)],_0x3fc098[_0x520228(0x291b)],_0x3fc098[_0x520228(0xa4c)],{'beginKeywords':_0x520228(0x3f50),'end':';','contains':[_0x3fc098[_0x520228(0x4e4f)](_0x520228(0x50f9),'$'),_0x3fc098[_0x520228(0x2ae2)],_0x3fc098['C_BLOCK_COMMENT_MODE'],_0x3fc098[_0x520228(0x291b)],_0x3fc098[_0x520228(0xa4c)],_0x5c9a8e,_0x3910eb]},{'beginKeywords':_0x520228(0x1639),'end':';','returnBegin':!0x0,'contains':[{'beginKeywords':_0x520228(0x1639),'end':'$','contains':[_0x3910eb]},_0x3fc098['COMMENT'](_0x520228(0x50f9),'$'),_0x3fc098['C_LINE_COMMENT_MODE'],_0x3fc098[_0x520228(0x23fe)],_0x3fc098[_0x520228(0x291b)],_0x3fc098[_0x520228(0xa4c)],_0x3fc098[_0x520228(0xd12)]]},{'className':_0x520228(0x14b2),'begin':/^[a-z][a-z0-9_,\-+' ()$]+\.{2}/,'returnBegin':!0x0,'contains':[{'className':_0x520228(0x4685),'begin':/^[a-z0-9_]+/},{'className':_0x520228(0xddd),'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0},_0x4abeee]},_0x3fc098[_0x520228(0xd12)],_0x4abeee]};};},0xeba:_0x411db9=>{const _0x2eae50=a0_0x11e7;_0x411db9[_0x2eae50(0x474c)]=function(_0x3545cb){const 
_0x434347=_0x2eae50,_0x3d8d31={'keyword':_0x434347(0x3f7d),'built_in':'abs\x20acf\x20aconcat\x20aeye\x20amax\x20amean\x20AmericanBinomCall\x20AmericanBinomCall_Greeks\x20AmericanBinomCall_ImpVol\x20AmericanBinomPut\x20AmericanBinomPut_Greeks\x20AmericanBinomPut_ImpVol\x20AmericanBSCall\x20AmericanBSCall_Greeks\x20AmericanBSCall_ImpVol\x20AmericanBSPut\x20AmericanBSPut_Greeks\x20AmericanBSPut_ImpVol\x20amin\x20amult\x20annotationGetDefaults\x20annotationSetBkd\x20annotationSetFont\x20annotationSetLineColor\x20annotationSetLineStyle\x20annotationSetLineThickness\x20annualTradingDays\x20arccos\x20arcsin\x20areshape\x20arrayalloc\x20arrayindex\x20arrayinit\x20arraytomat\x20asciiload\x20asclabel\x20astd\x20astds\x20asum\x20atan\x20atan2\x20atranspose\x20axmargin\x20balance\x20band\x20bandchol\x20bandcholsol\x20bandltsol\x20bandrv\x20bandsolpd\x20bar\x20base10\x20begwind\x20besselj\x20bessely\x20beta\x20box\x20boxcox\x20cdfBeta\x20cdfBetaInv\x20cdfBinomial\x20cdfBinomialInv\x20cdfBvn\x20cdfBvn2\x20cdfBvn2e\x20cdfCauchy\x20cdfCauchyInv\x20cdfChic\x20cdfChii\x20cdfChinc\x20cdfChincInv\x20cdfExp\x20cdfExpInv\x20cdfFc\x20cdfFnc\x20cdfFncInv\x20cdfGam\x20cdfGenPareto\x20cdfHyperGeo\x20cdfLaplace\x20cdfLaplaceInv\x20cdfLogistic\x20cdfLogisticInv\x20cdfmControlCreate\x20cdfMvn\x20cdfMvn2e\x20cdfMvnce\x20cdfMvne\x20cdfMvt2e\x20cdfMvtce\x20cdfMvte\x20cdfN\x20cdfN2\x20cdfNc\x20cdfNegBinomial\x20cdfNegBinomialInv\x20cdfNi\x20cdfPoisson\x20cdfPoissonInv\x20cdfRayleigh\x20cdfRayleighInv\x20cdfTc\x20cdfTci\x20cdfTnc\x20cdfTvn\x20cdfWeibull\x20cdfWeibullInv\x20cdir\x20ceil\x20ChangeDir\x20chdir\x20chiBarSquare\x20chol\x20choldn\x20cholsol\x20cholup\x20chrs\x20close\x20code\x20cols\x20colsf\x20combinate\x20combinated\x20complex\x20con\x20cond\x20conj\x20cons\x20ConScore\x20contour\x20conv\x20convertsatostr\x20convertstrtosa\x20corrm\x20corrms\x20corrvc\x20corrx\x20corrxs\x20cos\x20cosh\x20counts\x20countwts\x20crossprd\x20crout\x20croutp\x20csrcol\x20csrlin\x20csvReadM\x20csvReadSA\x20cu
mprodc\x20cumsumc\x20curve\x20cvtos\x20datacreate\x20datacreatecomplex\x20datalist\x20dataload\x20dataloop\x20dataopen\x20datasave\x20date\x20datestr\x20datestring\x20datestrymd\x20dayinyr\x20dayofweek\x20dbAddDatabase\x20dbClose\x20dbCommit\x20dbCreateQuery\x20dbExecQuery\x20dbGetConnectOptions\x20dbGetDatabaseName\x20dbGetDriverName\x20dbGetDrivers\x20dbGetHostName\x20dbGetLastErrorNum\x20dbGetLastErrorText\x20dbGetNumericalPrecPolicy\x20dbGetPassword\x20dbGetPort\x20dbGetTableHeaders\x20dbGetTables\x20dbGetUserName\x20dbHasFeature\x20dbIsDriverAvailable\x20dbIsOpen\x20dbIsOpenError\x20dbOpen\x20dbQueryBindValue\x20dbQueryClear\x20dbQueryCols\x20dbQueryExecPrepared\x20dbQueryFetchAllM\x20dbQueryFetchAllSA\x20dbQueryFetchOneM\x20dbQueryFetchOneSA\x20dbQueryFinish\x20dbQueryGetBoundValue\x20dbQueryGetBoundValues\x20dbQueryGetField\x20dbQueryGetLastErrorNum\x20dbQueryGetLastErrorText\x20dbQueryGetLastInsertID\x20dbQueryGetLastQuery\x20dbQueryGetPosition\x20dbQueryIsActive\x20dbQueryIsForwardOnly\x20dbQueryIsNull\x20dbQueryIsSelect\x20dbQueryIsValid\x20dbQueryPrepare\x20dbQueryRows\x20dbQuerySeek\x20dbQuerySeekFirst\x20dbQuerySeekLast\x20dbQuerySeekNext\x20dbQuerySeekPrevious\x20dbQuerySetForwardOnly\x20dbRemoveDatabase\x20dbRollback\x20dbSetConnectOptions\x20dbSetDatabaseName\x20dbSetHostName\x20dbSetNumericalPrecPolicy\x20dbSetPort\x20dbSetUserName\x20dbTransaction\x20DeleteFile\x20delif\x20delrows\x20denseToSp\x20denseToSpRE\x20denToZero\x20design\x20det\x20detl\x20dfft\x20dffti\x20diag\x20diagrv\x20digamma\x20doswin\x20DOSWinCloseall\x20DOSWinOpen\x20dotfeq\x20dotfeqmt\x20dotfge\x20dotfgemt\x20dotfgt\x20dotfgtmt\x20dotfle\x20dotflemt\x20dotflt\x20dotfltmt\x20dotfne\x20dotfnemt\x20draw\x20drop\x20dsCreate\x20dstat\x20dstatmt\x20dstatmtControlCreate\x20dtdate\x20dtday\x20dttime\x20dttodtv\x20dttostr\x20dttoutc\x20dtvnormal\x20dtvtodt\x20dtvtoutc\x20dummy\x20dummybr\x20dummydn\x20eig\x20eigh\x20eighv\x20eigv\x20elapsedTradingDays\x20endwind\x20envget\x20eof\x20eqSolv
e\x20eqSolvemt\x20eqSolvemtControlCreate\x20eqSolvemtOutCreate\x20eqSolveset\x20erf\x20erfc\x20erfccplx\x20erfcplx\x20error\x20etdays\x20ethsec\x20etstr\x20EuropeanBinomCall\x20EuropeanBinomCall_Greeks\x20EuropeanBinomCall_ImpVol\x20EuropeanBinomPut\x20EuropeanBinomPut_Greeks\x20EuropeanBinomPut_ImpVol\x20EuropeanBSCall\x20EuropeanBSCall_Greeks\x20EuropeanBSCall_ImpVol\x20EuropeanBSPut\x20EuropeanBSPut_Greeks\x20EuropeanBSPut_ImpVol\x20exctsmpl\x20exec\x20execbg\x20exp\x20extern\x20eye\x20fcheckerr\x20fclearerr\x20feq\x20feqmt\x20fflush\x20fft\x20ffti\x20fftm\x20fftmi\x20fftn\x20fge\x20fgemt\x20fgets\x20fgetsa\x20fgetsat\x20fgetst\x20fgt\x20fgtmt\x20fileinfo\x20filesa\x20fle\x20flemt\x20floor\x20flt\x20fltmt\x20fmod\x20fne\x20fnemt\x20fonts\x20fopen\x20formatcv\x20formatnv\x20fputs\x20fputst\x20fseek\x20fstrerror\x20ftell\x20ftocv\x20ftos\x20ftostrC\x20gamma\x20gammacplx\x20gammaii\x20gausset\x20gdaAppend\x20gdaCreate\x20gdaDStat\x20gdaDStatMat\x20gdaGetIndex\x20gdaGetName\x20gdaGetNames\x20gdaGetOrders\x20gdaGetType\x20gdaGetTypes\x20gdaGetVarInfo\x20gdaIsCplx\x20gdaLoad\x20gdaPack\x20gdaRead\x20gdaReadByIndex\x20gdaReadSome\x20gdaReadSparse\x20gdaReadStruct\x20gdaReportVarInfo\x20gdaSave\x20gdaUpdate\x20gdaUpdateAndPack\x20gdaVars\x20gdaWrite\x20gdaWrite32\x20gdaWriteSome\x20getarray\x20getdims\x20getf\x20getGAUSShome\x20getmatrix\x20getmatrix4D\x20getname\x20getnamef\x20getNextTradingDay\x20getNextWeekDay\x20getnr\x20getorders\x20getpath\x20getPreviousTradingDay\x20getPreviousWeekDay\x20getRow\x20getscalar3D\x20getscalar4D\x20getTrRow\x20getwind\x20glm\x20gradcplx\x20gradMT\x20gradMTm\x20gradMTT\x20gradMTTm\x20gradp\x20graphprt\x20graphset\x20hasimag\x20header\x20headermt\x20hess\x20hessMT\x20hessMTg\x20hessMTgw\x20hessMTm\x20hessMTmw\x20hessMTT\x20hessMTTg\x20hessMTTgw\x20hessMTTm\x20hessMTw\x20hessp\x20hist\x20histf\x20histp\x20hsec\x20imag\x20indcv\x20indexcat\x20indices\x20indices2\x20indicesf\x20indicesfn\x20indnv\x20indsav\x20integrate1d\x20integrateControl
Create\x20intgrat2\x20intgrat3\x20inthp1\x20inthp2\x20inthp3\x20inthp4\x20inthpControlCreate\x20intquad1\x20intquad2\x20intquad3\x20intrleav\x20intrleavsa\x20intrsect\x20intsimp\x20inv\x20invpd\x20invswp\x20iscplx\x20iscplxf\x20isden\x20isinfnanmiss\x20ismiss\x20key\x20keyav\x20keyw\x20lag\x20lag1\x20lagn\x20lapEighb\x20lapEighi\x20lapEighvb\x20lapEighvi\x20lapgEig\x20lapgEigh\x20lapgEighv\x20lapgEigv\x20lapgSchur\x20lapgSvdcst\x20lapgSvds\x20lapgSvdst\x20lapSvdcusv\x20lapSvds\x20lapSvdusv\x20ldlp\x20ldlsol\x20linSolve\x20listwise\x20ln\x20lncdfbvn\x20lncdfbvn2\x20lncdfmvn\x20lncdfn\x20lncdfn2\x20lncdfnc\x20lnfact\x20lngammacplx\x20lnpdfmvn\x20lnpdfmvt\x20lnpdfn\x20lnpdft\x20loadd\x20loadstruct\x20loadwind\x20loess\x20loessmt\x20loessmtControlCreate\x20log\x20loglog\x20logx\x20logy\x20lower\x20lowmat\x20lowmat1\x20ltrisol\x20lu\x20lusol\x20machEpsilon\x20make\x20makevars\x20makewind\x20margin\x20matalloc\x20matinit\x20mattoarray\x20maxbytes\x20maxc\x20maxindc\x20maxv\x20maxvec\x20mbesselei\x20mbesselei0\x20mbesselei1\x20mbesseli\x20mbesseli0\x20mbesseli1\x20meanc\x20median\x20mergeby\x20mergevar\x20minc\x20minindc\x20minv\x20miss\x20missex\x20missrv\x20moment\x20momentd\x20movingave\x20movingaveExpwgt\x20movingaveWgt\x20nextindex\x20nextn\x20nextnevn\x20nextwind\x20ntos\x20null\x20null1\x20numCombinations\x20ols\x20olsmt\x20olsmtControlCreate\x20olsqr\x20olsqr2\x20olsqrmt\x20ones\x20optn\x20optnevn\x20orth\x20outtyp\x20pacf\x20packedToSp\x20packr\x20parse\x20pause\x20pdfCauchy\x20pdfChi\x20pdfExp\x20pdfGenPareto\x20pdfHyperGeo\x20pdfLaplace\x20pdfLogistic\x20pdfn\x20pdfPoisson\x20pdfRayleigh\x20pdfWeibull\x20pi\x20pinv\x20pinvmt\x20plotAddArrow\x20plotAddBar\x20plotAddBox\x20plotAddHist\x20plotAddHistF\x20plotAddHistP\x20plotAddPolar\x20plotAddScatter\x20plotAddShape\x20plotAddTextbox\x20plotAddTS\x20plotAddXY\x20plotArea\x20plotBar\x20plotBox\x20plotClearLayout\x20plotContour\x20plotCustomLayout\x20plotGetDefaults\x20plotHist\x20plotHistF\x20plotHistP\x20plotLayout
\x20plotLogLog\x20plotLogX\x20plotLogY\x20plotOpenWindow\x20plotPolar\x20plotSave\x20plotScatter\x20plotSetAxesPen\x20plotSetBar\x20plotSetBarFill\x20plotSetBarStacked\x20plotSetBkdColor\x20plotSetFill\x20plotSetGrid\x20plotSetLegend\x20plotSetLineColor\x20plotSetLineStyle\x20plotSetLineSymbol\x20plotSetLineThickness\x20plotSetNewWindow\x20plotSetTitle\x20plotSetWhichYAxis\x20plotSetXAxisShow\x20plotSetXLabel\x20plotSetXRange\x20plotSetXTicInterval\x20plotSetXTicLabel\x20plotSetYAxisShow\x20plotSetYLabel\x20plotSetYRange\x20plotSetZAxisShow\x20plotSetZLabel\x20plotSurface\x20plotTS\x20plotXY\x20polar\x20polychar\x20polyeval\x20polygamma\x20polyint\x20polymake\x20polymat\x20polymroot\x20polymult\x20polyroot\x20pqgwin\x20previousindex\x20princomp\x20printfm\x20printfmt\x20prodc\x20psi\x20putarray\x20putf\x20putvals\x20pvCreate\x20pvGetIndex\x20pvGetParNames\x20pvGetParVector\x20pvLength\x20pvList\x20pvPack\x20pvPacki\x20pvPackm\x20pvPackmi\x20pvPacks\x20pvPacksi\x20pvPacksm\x20pvPacksmi\x20pvPutParVector\x20pvTest\x20pvUnpack\x20QNewton\x20QNewtonmt\x20QNewtonmtControlCreate\x20QNewtonmtOutCreate\x20QNewtonSet\x20QProg\x20QProgmt\x20QProgmtInCreate\x20qqr\x20qqre\x20qqrep\x20qr\x20qre\x20qrep\x20qrsol\x20qrtsol\x20qtyr\x20qtyre\x20qtyrep\x20quantile\x20quantiled\x20qyr\x20qyre\x20qyrep\x20qz\x20rank\x20rankindx\x20readr\x20real\x20reclassify\x20reclassifyCuts\x20recode\x20recserar\x20recsercp\x20recserrc\x20rerun\x20rescale\x20reshape\x20rets\x20rev\x20rfft\x20rffti\x20rfftip\x20rfftn\x20rfftnp\x20rfftp\x20rndBernoulli\x20rndBeta\x20rndBinomial\x20rndCauchy\x20rndChiSquare\x20rndCon\x20rndCreateState\x20rndExp\x20rndGamma\x20rndGeo\x20rndGumbel\x20rndHyperGeo\x20rndi\x20rndKMbeta\x20rndKMgam\x20rndKMi\x20rndKMn\x20rndKMnb\x20rndKMp\x20rndKMu\x20rndKMvm\x20rndLaplace\x20rndLCbeta\x20rndLCgam\x20rndLCi\x20rndLCn\x20rndLCnb\x20rndLCp\x20rndLCu\x20rndLCvm\x20rndLogNorm\x20rndMTu\x20rndMVn\x20rndMVt\x20rndn\x20rndnb\x20rndNegBinomial\x20rndp\x20rndPoisson\x20rndRayleigh\x2
0rndStateSkip\x20rndu\x20rndvm\x20rndWeibull\x20rndWishart\x20rotater\x20round\x20rows\x20rowsf\x20rref\x20sampleData\x20satostrC\x20saved\x20saveStruct\x20savewind\x20scale\x20scale3d\x20scalerr\x20scalinfnanmiss\x20scalmiss\x20schtoc\x20schur\x20searchsourcepath\x20seekr\x20select\x20selif\x20seqa\x20seqm\x20setdif\x20setdifsa\x20setvars\x20setvwrmode\x20setwind\x20shell\x20shiftr\x20sin\x20singleindex\x20sinh\x20sleep\x20solpd\x20sortc\x20sortcc\x20sortd\x20sorthc\x20sorthcc\x20sortind\x20sortindc\x20sortmc\x20sortr\x20sortrc\x20spBiconjGradSol\x20spChol\x20spConjGradSol\x20spCreate\x20spDenseSubmat\x20spDiagRvMat\x20spEigv\x20spEye\x20spLDL\x20spline\x20spLU\x20spNumNZE\x20spOnes\x20spreadSheetReadM\x20spreadSheetReadSA\x20spreadSheetWrite\x20spScale\x20spSubmat\x20spToDense\x20spTrTDense\x20spTScalar\x20spZeros\x20sqpSolve\x20sqpSolveMT\x20sqpSolveMTControlCreate\x20sqpSolveMTlagrangeCreate\x20sqpSolveMToutCreate\x20sqpSolveSet\x20sqrt\x20statements\x20stdc\x20stdsc\x20stocv\x20stof\x20strcombine\x20strindx\x20strlen\x20strput\x20strrindx\x20strsect\x20strsplit\x20strsplitPad\x20strtodt\x20strtof\x20strtofcplx\x20strtriml\x20strtrimr\x20strtrunc\x20strtruncl\x20strtruncpad\x20strtruncr\x20submat\x20subscat\x20substute\x20subvec\x20sumc\x20sumr\x20surface\x20svd\x20svd1\x20svd2\x20svdcusv\x20svds\x20svdusv\x20sysstate\x20tab\x20tan\x20tanh\x20tempname\x20time\x20timedt\x20timestr\x20timeutc\x20title\x20tkf2eps\x20tkf2ps\x20tocart\x20todaydt\x20toeplitz\x20token\x20topolar\x20trapchk\x20trigamma\x20trimr\x20trunc\x20type\x20typecv\x20typef\x20union\x20unionsa\x20uniqindx\x20uniqindxsa\x20unique\x20uniquesa\x20upmat\x20upmat1\x20upper\x20utctodt\x20utctodtv\x20utrisol\x20vals\x20varCovMS\x20varCovXS\x20varget\x20vargetl\x20varmall\x20varmares\x20varput\x20varputl\x20vartypef\x20vcm\x20vcms\x20vcx\x20vcxs\x20vec\x20vech\x20vecr\x20vector\x20vget\x20view\x20viewxyz\x20vlist\x20vnamecv\x20volume\x20vput\x20vread\x20vtypecv\x20wait\x20waitc\x20walkindex\x20where\x20wi
ndow\x20writer\x20xlabel\x20xlsGetSheetCount\x20xlsGetSheetSize\x20xlsGetSheetTypes\x20xlsMakeRange\x20xlsReadM\x20xlsReadSA\x20xlsWrite\x20xlsWriteM\x20xlsWriteSA\x20xpnd\x20xtics\x20xy\x20xyz\x20ylabel\x20ytics\x20zeros\x20zeta\x20zlabel\x20ztics\x20cdfEmpirical\x20dot\x20h5create\x20h5open\x20h5read\x20h5readAttribute\x20h5write\x20h5writeAttribute\x20ldl\x20plotAddErrorBar\x20plotAddSurface\x20plotCDFEmpirical\x20plotSetColormap\x20plotSetContourLabels\x20plotSetLegendFont\x20plotSetTextInterpreter\x20plotSetXTicCount\x20plotSetYTicCount\x20plotSetZLevels\x20powerm\x20strjoin\x20sylvester\x20strtrim','literal':'DB_AFTER_LAST_ROW\x20DB_ALL_TABLES\x20DB_BATCH_OPERATIONS\x20DB_BEFORE_FIRST_ROW\x20DB_BLOB\x20DB_EVENT_NOTIFICATIONS\x20DB_FINISH_QUERY\x20DB_HIGH_PRECISION\x20DB_LAST_INSERT_ID\x20DB_LOW_PRECISION_DOUBLE\x20DB_LOW_PRECISION_INT32\x20DB_LOW_PRECISION_INT64\x20DB_LOW_PRECISION_NUMBERS\x20DB_MULTIPLE_RESULT_SETS\x20DB_NAMED_PLACEHOLDERS\x20DB_POSITIONAL_PLACEHOLDERS\x20DB_PREPARED_QUERIES\x20DB_QUERY_SIZE\x20DB_SIMPLE_LOCKING\x20DB_SYSTEM_TABLES\x20DB_TABLES\x20DB_TRANSACTIONS\x20DB_UNICODE\x20DB_VIEWS\x20__STDIN\x20__STDOUT\x20__STDERR\x20__FILE_DIR'},_0x59a459=_0x3545cb['COMMENT']('@','@'),_0x160209={'className':'meta','begin':'#','end':'$','keywords':{'keyword':_0x434347(0x469e)},'contains':[{'begin':/\\\n/,'relevance':0x0},{'beginKeywords':_0x434347(0x478e),'end':'$','keywords':{'keyword':_0x434347(0x478e)},'contains':[{'className':_0x434347(0x2431),'begin':'\x22','end':'\x22','illegal':'\x5cn'}]},_0x3545cb[_0x434347(0x2ae2)],_0x3545cb['C_BLOCK_COMMENT_MODE'],_0x59a459]},_0x1e4a3c={'begin':/\bstruct\s+/,'end':/\s/,'keywords':_0x434347(0x4146),'contains':[{'className':'type','begin':_0x3545cb[_0x434347(0x206e)],'relevance':0x0}]},_0x5445d3=[{'className':_0x434347(0xddd),'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'endsWithParent':!0x0,'relevance':0x0,'contains':[{'className':'literal','begin':/\.\.\./},_0x3545cb[_0x434347(0xd12)],_0x35
45cb['C_BLOCK_COMMENT_MODE'],_0x59a459,_0x1e4a3c]}],_0xf4a05d={'className':_0x434347(0x4685),'begin':_0x3545cb['UNDERSCORE_IDENT_RE'],'relevance':0x0},_0x1be4d0=function(_0x11e23e,_0x56d347,_0x4ce844){const _0x147135=_0x434347,_0x25f57f=_0x3545cb['inherit']({'className':_0x147135(0x14b2),'beginKeywords':_0x11e23e,'end':_0x56d347,'excludeEnd':!0x0,'contains':[]['concat'](_0x5445d3)},_0x4ce844||{});return _0x25f57f[_0x147135(0x2b31)][_0x147135(0x1715)](_0xf4a05d),_0x25f57f[_0x147135(0x2b31)][_0x147135(0x1715)](_0x3545cb['C_NUMBER_MODE']),_0x25f57f[_0x147135(0x2b31)]['push'](_0x3545cb[_0x147135(0x23fe)]),_0x25f57f[_0x147135(0x2b31)]['push'](_0x59a459),_0x25f57f;},_0x45d973={'className':_0x434347(0x43a),'begin':'\x5cb('+_0x3d8d31['built_in'][_0x434347(0x1117)]('\x20')['join']('|')+_0x434347(0x716)},_0x2e587c={'className':_0x434347(0x2431),'begin':'\x22','end':'\x22','contains':[_0x3545cb[_0x434347(0x4a76)]],'relevance':0x0},_0x324f04={'begin':_0x3545cb['UNDERSCORE_IDENT_RE']+_0x434347(0x7ef),'returnBegin':!0x0,'keywords':_0x3d8d31,'relevance':0x0,'contains':[{'beginKeywords':_0x3d8d31[_0x434347(0x1357)]},_0x45d973,{'className':'built_in','begin':_0x3545cb[_0x434347(0x206e)],'relevance':0x0}]},_0x1bc0ef={'begin':/\(/,'end':/\)/,'relevance':0x0,'keywords':{'built_in':_0x3d8d31[_0x434347(0x43a)],'literal':_0x3d8d31[_0x434347(0x2706)]},'contains':[_0x3545cb[_0x434347(0xd12)],_0x3545cb['C_BLOCK_COMMENT_MODE'],_0x59a459,_0x45d973,_0x324f04,_0x2e587c,_0x434347(0x4454)]};return _0x324f04[_0x434347(0x2b31)][_0x434347(0x1715)](_0x1bc0ef),{'name':_0x434347(0x3301),'aliases':[_0x434347(0x1462)],'case_insensitive':!0x0,'keywords':_0x3d8d31,'illegal':/(\{[%#]|[%#]\}| <- )/,'contains':[_0x3545cb[_0x434347(0xd12)],_0x3545cb[_0x434347(0x2ae2)],_0x3545cb['C_BLOCK_COMMENT_MODE'],_0x59a459,_0x2e587c,_0x160209,{'className':'keyword','begin':/\bexternal (matrix|string|array|sparse 
matrix|struct|proc|keyword|fn)/},_0x1be4d0(_0x434347(0x2687),';'),_0x1be4d0('fn','='),{'beginKeywords':'for\x20threadfor','end':/;/,'relevance':0x0,'contains':[_0x3545cb[_0x434347(0x23fe)],_0x59a459,_0x1bc0ef]},{'variants':[{'begin':_0x3545cb['UNDERSCORE_IDENT_RE']+'\x5c.'+_0x3545cb[_0x434347(0x206e)]},{'begin':_0x3545cb[_0x434347(0x206e)]+'\x5cs*='}],'relevance':0x0},_0x324f04,_0x1e4a3c]};};},0x212b:_0x51406c=>{const _0x2db148=a0_0x11e7;_0x51406c[_0x2db148(0x474c)]=function(_0x20be23){const _0x3710fa=_0x2db148,_0x56988a={'$pattern':_0x3710fa(0x188f),'keyword':_0x3710fa(0x149a)},_0xfb4865=_0x20be23['inherit'](_0x20be23['C_NUMBER_MODE'],{'begin':'([-+]?((\x5c.\x5cd+)|(\x5cd+)(\x5c.\x5cd*)?))|'+_0x20be23[_0x3710fa(0x45be)]}),_0x404ab9=[_0x20be23[_0x3710fa(0x2ae2)],_0x20be23[_0x3710fa(0x23fe)],_0x20be23[_0x3710fa(0x4e4f)](/\(/,/\)/),_0xfb4865,_0x20be23['inherit'](_0x20be23[_0x3710fa(0xa4c)],{'illegal':null}),_0x20be23[_0x3710fa(0x46a1)](_0x20be23[_0x3710fa(0x291b)],{'illegal':null}),{'className':_0x3710fa(0x11d8),'begin':_0x3710fa(0x12d2)},{'className':'name','begin':_0x3710fa(0x4de9)},{'className':'attr','begin':'(VC|VS|#)','end':_0x3710fa(0x2455)},{'className':_0x3710fa(0x431d),'begin':_0x3710fa(0x21ba)},{'className':_0x3710fa(0x43a),'begin':'(ATAN|ABS|ACOS|ASIN|SIN|COS|EXP|FIX|FUP|ROUND|LN|TAN)(\x5c[)','contains':[_0xfb4865],'end':'\x5c]'},{'className':_0x3710fa(0x239b),'variants':[{'begin':'N','end':_0x3710fa(0x19d1),'illegal':'\x5cW'}]}];return{'name':_0x3710fa(0x83e),'aliases':['nc'],'case_insensitive':!0x0,'keywords':_0x56988a,'contains':[{'className':_0x3710fa(0x5153),'begin':'%'},{'className':_0x3710fa(0x5153),'begin':_0x3710fa(0x1614)}]['concat'](_0x404ab9)};};},0xce1:_0x563256=>{const _0x4ddf6c=a0_0x11e7;_0x563256[_0x4ddf6c(0x474c)]=function(_0x2425da){const 
_0x3d4c0d=_0x4ddf6c;return{'name':_0x3d4c0d(0x1a4b),'aliases':[_0x3d4c0d(0x2127)],'keywords':_0x3d4c0d(0x225d),'contains':[{'className':_0x3d4c0d(0x239b),'begin':'\x5c*','relevance':0x0},{'className':_0x3d4c0d(0x5153),'begin':_0x3d4c0d(0x16cb)},{'begin':'\x5c|','end':_0x3d4c0d(0x3595),'contains':[{'className':_0x3d4c0d(0x2431),'begin':_0x3d4c0d(0xbce)}]},{'className':_0x3d4c0d(0x3362),'begin':'<','end':'>'},_0x2425da['HASH_COMMENT_MODE'],{'className':_0x3d4c0d(0x2431),'begin':_0x3d4c0d(0xb00),'end':_0x3d4c0d(0xb00)},_0x2425da[_0x3d4c0d(0x291b)]]};};},0xec3:_0x3dc10f=>{const _0x3e6148=a0_0x11e7;_0x3dc10f[_0x3e6148(0x474c)]=function(_0x565387){const _0x48fb1a=_0x3e6148;return{'name':_0x48fb1a(0x32a5),'keywords':{'keyword':_0x48fb1a(0xef9),'type':_0x48fb1a(0x350a),'built_in':_0x48fb1a(0x310),'literal':'true\x20false'},'illegal':'\x22','contains':[_0x565387['C_LINE_COMMENT_MODE'],_0x565387[_0x48fb1a(0x23fe)],_0x565387[_0x48fb1a(0xd12)],{'className':_0x48fb1a(0x5153),'begin':'#','end':'$'}]};};},0x4ab:_0x5309dc=>{const _0x5c52b2=a0_0x11e7;_0x5309dc[_0x5c52b2(0x474c)]=function(_0x5652fa){const 
_0x1e9161=_0x5c52b2;return{'name':_0x1e9161(0x48b),'case_insensitive':!0x1,'keywords':{'keyword':[_0x1e9161(0x4bda),_0x1e9161(0x17ac),'#region',_0x1e9161(0x2663),_0x1e9161(0x42fa),_0x1e9161(0x4e10),_0x1e9161(0x2e7e),_0x1e9161(0x4514),_0x1e9161(0x16d9),_0x1e9161(0x3d23),_0x1e9161(0x5be),'div','do','else',_0x1e9161(0x2681),_0x1e9161(0x44d8),_0x1e9161(0x4c7b),_0x1e9161(0x3c19),'function',_0x1e9161(0x3d0c),'if',_0x1e9161(0x4531),_0x1e9161(0xc1a),'or','repeat',_0x1e9161(0xdfd),_0x1e9161(0x857),'then','until','var',_0x1e9161(0x552),'with',_0x1e9161(0x32a6)],'built_in':[_0x1e9161(0xbe0),_0x1e9161(0x36a1),_0x1e9161(0x4e23),'achievement_get_challenges','achievement_get_info',_0x1e9161(0x5106),'achievement_increment',_0x1e9161(0x68f),_0x1e9161(0x1a36),'achievement_load_progress',_0x1e9161(0x526f),'achievement_login_status',_0x1e9161(0x44d3),'achievement_post',_0x1e9161(0x597),_0x1e9161(0x42d4),_0x1e9161(0x929),_0x1e9161(0x208e),_0x1e9161(0xa1b),_0x1e9161(0x27bd),'achievement_show_leaderboards',_0x1e9161(0x3f23),_0x1e9161(0x725),_0x1e9161(0x3c9f),_0x1e9161(0x507f),_0x1e9161(0x515d),_0x1e9161(0x1aff),_0x1e9161(0x29c),_0x1e9161(0x572),'ads_event_preload','ads_get_display_height','ads_get_display_width',_0x1e9161(0x4eba),_0x1e9161(0x3c45),_0x1e9161(0x1d64),_0x1e9161(0x2138),_0x1e9161(0x35ff),'alarm_get',_0x1e9161(0x1c2e),_0x1e9161(0x2577),'analytics_event_ext','angle_difference','ansi_char',_0x1e9161(0x26e0),_0x1e9161(0x471e),_0x1e9161(0x21a),_0x1e9161(0x386),_0x1e9161(0x10c5),_0x1e9161(0x4d29),_0x1e9161(0xa77),_0x1e9161(0x2776),_0x1e9161(0x325b),'array_create','array_delete',_0x1e9161(0x1882),_0x1e9161(0x19bd),_0x1e9161(0x3d1d),_0x1e9161(0xe78),_0x1e9161(0x4751),_0x1e9161(0x464d),_0x1e9161(0xe56),_0x1e9161(0x169c),_0x1e9161(0x14ab),_0x1e9161(0xa5f),_0x1e9161(0x453d),_0x1e9161(0x42e9),'audio_channel_num',_0x1e9161(0x5215),_0x1e9161(0x1aaa),_0x1e9161(0x2b84),_0x1e9161(0x3b28),_0x1e9161(0x1005),'audio_destroy_stream',_0x1e9161(0x1ffd),_0x1e9161(0x2061),_0x1e9161(0x3a0c),_0x1e9161(0
x876),_0x1e9161(0x4cc3),_0x1e9161(0x2d39),_0x1e9161(0x3735),_0x1e9161(0x1d24),_0x1e9161(0x2a9f),_0x1e9161(0x50a7),_0x1e9161(0x273f),_0x1e9161(0x1be5),_0x1e9161(0x5043),_0x1e9161(0x1103),_0x1e9161(0xc12),_0x1e9161(0x3e32),_0x1e9161(0x4664),_0x1e9161(0x3faf),_0x1e9161(0x4250),_0x1e9161(0x4214),_0x1e9161(0x33b1),_0x1e9161(0xe54),_0x1e9161(0x5fc),_0x1e9161(0x2ab5),_0x1e9161(0x480),_0x1e9161(0x15de),'audio_get_master_gain',_0x1e9161(0x51d7),'audio_get_recorder_count',_0x1e9161(0x279),'audio_get_type',_0x1e9161(0x3bd5),'audio_group_load',_0x1e9161(0x36b2),'audio_group_name',_0x1e9161(0x3500),_0x1e9161(0x4f59),_0x1e9161(0x3f6a),_0x1e9161(0xa39),'audio_is_playing','audio_listener_get_data',_0x1e9161(0xddb),_0x1e9161(0x11dd),_0x1e9161(0x2471),_0x1e9161(0x19b7),'audio_listener_set_velocity','audio_listener_velocity',_0x1e9161(0x2a48),'audio_music_gain','audio_music_is_playing',_0x1e9161(0x3197),'audio_pause_music','audio_pause_sound',_0x1e9161(0x1355),_0x1e9161(0x3be),_0x1e9161(0x35d4),_0x1e9161(0x2a2f),_0x1e9161(0x3e8e),_0x1e9161(0x3d4e),_0x1e9161(0x456d),_0x1e9161(0x2255),_0x1e9161(0x2977),_0x1e9161(0x3d07),'audio_resume_sync_group',_0x1e9161(0x3411),_0x1e9161(0x14d4),_0x1e9161(0x195e),_0x1e9161(0x235e),_0x1e9161(0x4571),_0x1e9161(0x12b4),_0x1e9161(0x57b),_0x1e9161(0x441a),_0x1e9161(0x49c),_0x1e9161(0x445c),_0x1e9161(0x45da),_0x1e9161(0x25c5),_0x1e9161(0x2549),_0x1e9161(0x1573),_0x1e9161(0x2069),'audio_stop_recording',_0x1e9161(0x1a4d),_0x1e9161(0x282a),_0x1e9161(0x4ce),_0x1e9161(0x301b),_0x1e9161(0x3188),_0x1e9161(0x1420),'background_get_height',_0x1e9161(0x35c2),_0x1e9161(0xd19),'base64_encode',_0x1e9161(0x3727),_0x1e9161(0x423f),_0x1e9161(0x855),_0x1e9161(0xaab),_0x1e9161(0x1791),_0x1e9161(0xc88),_0x1e9161(0x23b1),_0x1e9161(0x4952),'buffer_copy_from_vertex_buffer',_0x1e9161(0x248d),'buffer_create_from_vertex_buffer',_0x1e9161(0x4337),_0x1e9161(0x43c8),_0x1e9161(0x4606),_0x1e9161(0x26eb),'buffer_get_address',_0x1e9161(0x365b),_0x1e9161(0x4202),_0x1e9161(0x4922),_0x1e9161(
0x668),_0x1e9161(0x169d),_0x1e9161(0x730),'buffer_load_ext','buffer_load_partial',_0x1e9161(0x50ce),_0x1e9161(0x174e),'buffer_poke',_0x1e9161(0x202e),_0x1e9161(0x312),_0x1e9161(0xbfe),_0x1e9161(0x238d),'buffer_save_ext',_0x1e9161(0x2129),_0x1e9161(0x25e0),_0x1e9161(0x704),_0x1e9161(0x161c),'buffer_tell',_0x1e9161(0x483d),_0x1e9161(0x2ad5),'camera_create','camera_create_view',_0x1e9161(0x4d92),'camera_get_active',_0x1e9161(0x5008),_0x1e9161(0xe35),_0x1e9161(0x703),_0x1e9161(0x270),_0x1e9161(0x24be),_0x1e9161(0x4594),_0x1e9161(0x3579),_0x1e9161(0x1393),_0x1e9161(0x3e14),_0x1e9161(0x3d50),_0x1e9161(0x353a),_0x1e9161(0x42e4),_0x1e9161(0x2cb9),_0x1e9161(0x18b9),'camera_get_view_x',_0x1e9161(0x4e82),'camera_set_begin_script',_0x1e9161(0x714),_0x1e9161(0x43af),'camera_set_proj_mat','camera_set_update_script','camera_set_view_angle',_0x1e9161(0x400e),_0x1e9161(0x50eb),'camera_set_view_pos',_0x1e9161(0x3a15),_0x1e9161(0x1290),_0x1e9161(0x910),'ceil',_0x1e9161(0x3ab9),'chr','clamp',_0x1e9161(0x2540),'clickable_add_ext',_0x1e9161(0x18f7),_0x1e9161(0x3cda),_0x1e9161(0x4227),'clickable_exists','clickable_set_style',_0x1e9161(0x2bd9),_0x1e9161(0x34be),_0x1e9161(0x50b2),_0x1e9161(0x3372),_0x1e9161(0x4fef),_0x1e9161(0x1fd9),_0x1e9161(0x320e),_0x1e9161(0x8b9),_0x1e9161(0x3969),_0x1e9161(0x258),_0x1e9161(0xd74),_0x1e9161(0x2bf8),_0x1e9161(0x2091),_0x1e9161(0x530),_0x1e9161(0x4c8e),_0x1e9161(0x4ad8),_0x1e9161(0x1feb),'color_get_blue',_0x1e9161(0x3134),_0x1e9161(0x402),_0x1e9161(0x31ad),'color_get_saturation',_0x1e9161(0x161f),_0x1e9161(0x4944),_0x1e9161(0x4fa9),_0x1e9161(0x396a),_0x1e9161(0x51a7),'colour_get_saturation',_0x1e9161(0x1b0c),'cos',_0x1e9161(0x791),_0x1e9161(0x1f52),_0x1e9161(0x3f59),_0x1e9161(0x4373),_0x1e9161(0x2189),'date_compare_datetime',_0x1e9161(0x1690),_0x1e9161(0x43aa),_0x1e9161(0x2c8a),_0x1e9161(0x3861),'date_date_string',_0x1e9161(0x755),_0x1e9161(0x18ad),_0x1e9161(0x49e9),'date_days_in_year','date_get_day',_0x1e9161(0x1db4),'date_get_hour','date_get_hour_of_yea
r',_0x1e9161(0xc00),_0x1e9161(0x49c8),_0x1e9161(0x1297),'date_get_second',_0x1e9161(0x82f),_0x1e9161(0x1edc),'date_get_week','date_get_weekday',_0x1e9161(0xb3f),_0x1e9161(0x43a0),_0x1e9161(0x3ea),'date_inc_hour',_0x1e9161(0x1c4d),_0x1e9161(0x19bb),_0x1e9161(0x2356),_0x1e9161(0x26c4),_0x1e9161(0x30ed),_0x1e9161(0x4e31),_0x1e9161(0x2190),'date_minute_span','date_month_span','date_second_span','date_set_timezone','date_time_of',_0x1e9161(0x612),_0x1e9161(0x14e6),'date_week_span',_0x1e9161(0x3425),_0x1e9161(0x2947),_0x1e9161(0x2eea),'debug_get_callstack',_0x1e9161(0x3a13),_0x1e9161(0x236e),_0x1e9161(0x3b86),'device_get_tilt_z',_0x1e9161(0x4df6),_0x1e9161(0x2736),_0x1e9161(0x31aa),_0x1e9161(0x18f1),_0x1e9161(0x3616),_0x1e9161(0x806),_0x1e9161(0x3312),'device_mouse_x','device_mouse_x_to_gui',_0x1e9161(0x20d9),'device_mouse_y_to_gui','directory_create',_0x1e9161(0xb4b),_0x1e9161(0x4fc9),'display_get_dpi_x','display_get_dpi_y','display_get_gui_height',_0x1e9161(0x1132),_0x1e9161(0x3eef),'display_get_orientation',_0x1e9161(0x2484),_0x1e9161(0xd07),_0x1e9161(0x33eb),'display_mouse_get_x',_0x1e9161(0x3b96),'display_mouse_set','display_reset','display_set_gui_maximise',_0x1e9161(0x1387),'display_set_gui_size',_0x1e9161(0x3df3),'display_set_timing_method','display_set_ui_visibility',_0x1e9161(0x4d24),_0x1e9161(0xb76),_0x1e9161(0x428c),'dot_product_3d','dot_product_3d_normalised',_0x1e9161(0x386d),_0x1e9161(0x2f90),_0x1e9161(0x4254),_0x1e9161(0x4eeb),'draw_background','draw_background_ext',_0x1e9161(0x507e),_0x1e9161(0x524a),_0x1e9161(0x2070),'draw_circle',_0x1e9161(0x1586),_0x1e9161(0x4f04),_0x1e9161(0x33c3),'draw_clear_alpha','draw_ellipse',_0x1e9161(0x2b0e),_0x1e9161(0xbe6),'draw_enable_alphablend',_0x1e9161(0x3fcf),_0x1e9161(0x1c1f),_0x1e9161(0x17df),'draw_get_alpha','draw_get_color',_0x1e9161(0x29e3),'draw_get_lighting',_0x1e9161(0xd78),_0x1e9161(0x406e),_0x1e9161(0x807),_0x1e9161(0x1bad),_0x1e9161(0x424c),_0x1e9161(0x149e),'draw_light_define_direction',_0x1e9161(0x3515),_0x
1e9161(0x4be),'draw_light_get','draw_light_get_ambient',_0x1e9161(0x3f20),'draw_line_color',_0x1e9161(0x6ed),'draw_line_width','draw_line_width_color',_0x1e9161(0x4927),'draw_path',_0x1e9161(0x3bc),_0x1e9161(0x1eee),_0x1e9161(0xb68),'draw_primitive_begin',_0x1e9161(0x48b7),_0x1e9161(0x467d),_0x1e9161(0x14a3),_0x1e9161(0x1d94),_0x1e9161(0x51c9),_0x1e9161(0x11bb),_0x1e9161(0x1310),'draw_roundrect_color_ext','draw_roundrect_colour',_0x1e9161(0x3180),_0x1e9161(0x576),'draw_self','draw_set_alpha',_0x1e9161(0x2ecc),_0x1e9161(0x4e5e),_0x1e9161(0x2a59),_0x1e9161(0x415c),_0x1e9161(0x208b),_0x1e9161(0x20bb),_0x1e9161(0x4eb7),'draw_set_colour',_0x1e9161(0x4851),'draw_set_halign',_0x1e9161(0x31ba),_0x1e9161(0x28df),_0x1e9161(0x47e2),_0x1e9161(0x310b),_0x1e9161(0x3d79),_0x1e9161(0x2772),_0x1e9161(0xbb5),'draw_sprite','draw_sprite_ext','draw_sprite_general',_0x1e9161(0x1501),_0x1e9161(0x454c),'draw_sprite_pos',_0x1e9161(0x4037),'draw_sprite_stretched_ext',_0x1e9161(0x3b7a),_0x1e9161(0x4de1),_0x1e9161(0x22cf),_0x1e9161(0x2b60),_0x1e9161(0x50c4),_0x1e9161(0x2432),'draw_surface_part_ext','draw_surface_stretched',_0x1e9161(0x3386),_0x1e9161(0x2e6d),'draw_surface_tiled_ext','draw_text','draw_text_color','draw_text_colour','draw_text_ext',_0x1e9161(0x4ff9),_0x1e9161(0x51d1),_0x1e9161(0x2d81),_0x1e9161(0x3725),_0x1e9161(0x27a3),_0x1e9161(0x4389),_0x1e9161(0x523c),'draw_text_transformed_colour',_0x1e9161(0x3079),_0x1e9161(0x31f5),'draw_tilemap',_0x1e9161(0x2d14),_0x1e9161(0x48ca),'draw_triangle_colour',_0x1e9161(0x4f86),'draw_vertex_color',_0x1e9161(0x2697),_0x1e9161(0x107d),_0x1e9161(0x2c9a),_0x1e9161(0x323d),_0x1e9161(0x394f),_0x1e9161(0x2095),_0x1e9161(0x1cd8),_0x1e9161(0x61c),'ds_grid_add_region','ds_grid_clear','ds_grid_copy',_0x1e9161(0x2e48),_0x1e9161(0x82c),_0x1e9161(0x35be),_0x1e9161(0x2abf),'ds_grid_get_disk_mean',_0x1e9161(0x2b58),_0x1e9161(0x4b34),'ds_grid_get_max','ds_grid_get_mean',_0x1e9161(0xbf6),_0x1e9161(0x3b2f),_0x1e9161(0x3992),_0x1e9161(0x3f81),_0x1e9161(0x3bdf),_0x1
e9161(0x1be8),_0x1e9161(0x2722),_0x1e9161(0xc69),_0x1e9161(0x35b4),_0x1e9161(0x5190),_0x1e9161(0x4238),'ds_grid_set_grid_region',_0x1e9161(0x978),_0x1e9161(0x3218),_0x1e9161(0x2fef),_0x1e9161(0x826),_0x1e9161(0xfad),_0x1e9161(0x2dfb),_0x1e9161(0x3e8),_0x1e9161(0x171d),_0x1e9161(0x4929),_0x1e9161(0x3056),_0x1e9161(0x3dae),_0x1e9161(0x50f8),_0x1e9161(0x475),'ds_list_copy',_0x1e9161(0x15fd),_0x1e9161(0x3108),_0x1e9161(0x331a),_0x1e9161(0x4500),_0x1e9161(0x33db),_0x1e9161(0x2773),_0x1e9161(0x223c),_0x1e9161(0x1391),_0x1e9161(0x3a22),_0x1e9161(0x1080),'ds_list_replace',_0x1e9161(0x33f4),_0x1e9161(0xc25),_0x1e9161(0x1f60),_0x1e9161(0x2090),_0x1e9161(0x21e),_0x1e9161(0x2f20),_0x1e9161(0xb74),_0x1e9161(0x48ba),_0x1e9161(0x4d65),'ds_map_copy','ds_map_create','ds_map_delete',_0x1e9161(0x50dd),_0x1e9161(0x2a18),_0x1e9161(0x337e),_0x1e9161(0x34a2),_0x1e9161(0x1a45),_0x1e9161(0x238a),_0x1e9161(0x3f84),'ds_map_find_value',_0x1e9161(0x1c37),_0x1e9161(0x3acc),'ds_map_replace_list','ds_map_replace_map','ds_map_secure_load',_0x1e9161(0x24c0),'ds_map_secure_save',_0x1e9161(0x4a3),_0x1e9161(0x35c0),'ds_map_size',_0x1e9161(0x4914),'ds_priority_add',_0x1e9161(0x3beb),_0x1e9161(0x2e1f),_0x1e9161(0x369d),'ds_priority_create',_0x1e9161(0x4a10),_0x1e9161(0x4d2b),_0x1e9161(0x291c),_0x1e9161(0x4dc),_0x1e9161(0x1bbf),'ds_priority_find_max','ds_priority_find_min','ds_priority_find_priority',_0x1e9161(0x1029),'ds_priority_size',_0x1e9161(0x3b1),'ds_queue_clear',_0x1e9161(0x3c7),'ds_queue_create','ds_queue_dequeue',_0x1e9161(0x2e81),'ds_queue_empty',_0x1e9161(0x4101),_0x1e9161(0x116a),_0x1e9161(0x45c9),_0x1e9161(0x4a60),_0x1e9161(0x47a6),_0x1e9161(0x3603),_0x1e9161(0x7f2),_0x1e9161(0x49e0),_0x1e9161(0x41b),'ds_stack_create',_0x1e9161(0x6fc),_0x1e9161(0x498e),'ds_stack_pop','ds_stack_push','ds_stack_read',_0x1e9161(0x4a25),'ds_stack_top',_0x1e9161(0x32e9),'dsin',_0x1e9161(0x186e),_0x1e9161(0x10c6),_0x1e9161(0x4873),_0x1e9161(0x4a74),_0x1e9161(0x1d77),_0x1e9161(0x5116),_0x1e9161(0x3c9e),_0x1e9161(0x
1935),_0x1e9161(0x21b2),_0x1e9161(0x3a1b),'external_call',_0x1e9161(0x27df),_0x1e9161(0x1295),'facebook_accesstoken',_0x1e9161(0x2042),'facebook_dialog',_0x1e9161(0x4fdf),_0x1e9161(0xf2b),_0x1e9161(0x2997),_0x1e9161(0x1e67),'facebook_logout',_0x1e9161(0x1b0b),_0x1e9161(0x2886),_0x1e9161(0x1a63),'facebook_send_invite','facebook_status','facebook_user_id','file_attributes',_0x1e9161(0x11e4),'file_bin_open',_0x1e9161(0x3271),_0x1e9161(0x4a64),'file_bin_rewrite','file_bin_seek','file_bin_size','file_bin_write_byte','file_copy',_0x1e9161(0x963),'file_exists',_0x1e9161(0x2ead),_0x1e9161(0x4200),'file_find_next',_0x1e9161(0x2af6),_0x1e9161(0x27d9),_0x1e9161(0x1530),'file_text_eoln','file_text_open_append',_0x1e9161(0x2ece),'file_text_open_read',_0x1e9161(0x418b),_0x1e9161(0x3f6),'file_text_read_string',_0x1e9161(0x4fc2),_0x1e9161(0x37b9),_0x1e9161(0x140f),'file_text_writeln','filename_change_ext',_0x1e9161(0x3bd3),'filename_drive',_0x1e9161(0x4c10),_0x1e9161(0x1be3),_0x1e9161(0x812),_0x1e9161(0x2e2d),_0x1e9161(0x37be),_0x1e9161(0x500c),_0x1e9161(0x3bc3),_0x1e9161(0x390a),_0x1e9161(0x2a70),_0x1e9161(0x3a0),_0x1e9161(0x11e3),_0x1e9161(0x14e0),_0x1e9161(0x2ce4),_0x1e9161(0x4129),'font_get_italic',_0x1e9161(0x2cd4),'font_get_name',_0x1e9161(0x454b),_0x1e9161(0x1144),'font_get_uvs',_0x1e9161(0x4977),_0x1e9161(0x2370),_0x1e9161(0x4ec9),'font_set_cache_size',_0x1e9161(0x47de),_0x1e9161(0x4c45),_0x1e9161(0x1da4),'game_get_speed',_0x1e9161(0x3ac2),_0x1e9161(0xabe),_0x1e9161(0x2ac6),_0x1e9161(0xbc5),_0x1e9161(0x4876),_0x1e9161(0x1af8),_0x1e9161(0x48bd),_0x1e9161(0x902),'gamepad_button_check',_0x1e9161(0x516b),_0x1e9161(0xe44),_0x1e9161(0x1136),'gamepad_button_value',_0x1e9161(0x8b2),_0x1e9161(0x30b8),_0x1e9161(0x30de),_0x1e9161(0xb4a),'gamepad_is_connected',_0x1e9161(0x1ede),'gamepad_set_axis_deadzone',_0x1e9161(0x34d9),'gamepad_set_color',_0x1e9161(0x4e65),_0x1e9161(0x4262),_0x1e9161(0x4b2b),_0x1e9161(0x3323),'gesture_drag_distance',_0x1e9161(0x2fc0),_0x1e9161(0x1222),_0x1e9161(0x2
43c),_0x1e9161(0xa4a),_0x1e9161(0x3827),_0x1e9161(0x3dcf),_0x1e9161(0x174d),'gesture_get_pinch_angle_away','gesture_get_pinch_angle_towards','gesture_get_pinch_distance',_0x1e9161(0x28b1),_0x1e9161(0x47c3),'gesture_get_tap_count',_0x1e9161(0x205),'gesture_pinch_angle_towards',_0x1e9161(0x2224),'gesture_rotate_angle','gesture_rotate_time','gesture_tap_count',_0x1e9161(0x415e),'get_integer_async',_0x1e9161(0x1f1b),'get_open_filename',_0x1e9161(0x3b7b),_0x1e9161(0x4aa),_0x1e9161(0x46a9),_0x1e9161(0x16db),_0x1e9161(0x50e),_0x1e9161(0x3729),'gml_pragma','gml_release_mode',_0x1e9161(0x399a),'gpu_get_alphatestfunc',_0x1e9161(0x21f0),_0x1e9161(0x2d64),_0x1e9161(0x34df),_0x1e9161(0xb17),_0x1e9161(0x34b5),_0x1e9161(0x3024),_0x1e9161(0x389c),_0x1e9161(0xb6f),_0x1e9161(0x2c79),_0x1e9161(0x3285),_0x1e9161(0x4f0c),_0x1e9161(0x23b0),'gpu_get_fog','gpu_get_lightingenable',_0x1e9161(0x1c2f),_0x1e9161(0x327a),_0x1e9161(0xeae),_0x1e9161(0x7d0),_0x1e9161(0x1270),_0x1e9161(0x485),'gpu_get_tex_max_mip_ext','gpu_get_tex_min_mip',_0x1e9161(0x228d),_0x1e9161(0x37c5),_0x1e9161(0x33f6),_0x1e9161(0x22a0),_0x1e9161(0xd8a),'gpu_get_tex_mip_filter',_0x1e9161(0x33c5),_0x1e9161(0x1aec),'gpu_get_tex_repeat_ext',_0x1e9161(0x1ab3),_0x1e9161(0x2dc9),_0x1e9161(0xd62),_0x1e9161(0x4487),'gpu_get_zfunc',_0x1e9161(0x251b),'gpu_get_zwriteenable',_0x1e9161(0x4b99),'gpu_push_state',_0x1e9161(0x4cb1),'gpu_set_alphatestfunc',_0x1e9161(0x48a9),_0x1e9161(0x1b00),'gpu_set_blendmode',_0x1e9161(0x4ea9),_0x1e9161(0x4c35),_0x1e9161(0x38b9),'gpu_set_colourwriteenable',_0x1e9161(0x19c6),_0x1e9161(0x3e7b),_0x1e9161(0x1780),'gpu_set_state',_0x1e9161(0x40fe),_0x1e9161(0x4ac3),_0x1e9161(0x2522),'gpu_set_tex_max_aniso_ext','gpu_set_tex_max_mip',_0x1e9161(0x137a),_0x1e9161(0x1dc0),'gpu_set_tex_min_mip_ext',_0x1e9161(0x1d67),_0x1e9161(0x4abd),_0x1e9161(0xb86),_0x1e9161(0x95c),_0x1e9161(0x34a0),_0x1e9161(0x1711),_0x1e9161(0x3d58),_0x1e9161(0x23af),_0x1e9161(0x43ee),_0x1e9161(0x3514),_0x1e9161(0x1fad),_0x1e9161(0x3c1f),_0x1e9161(
0x3485),'gpu_set_ztestenable','gpu_set_zwriteenable','highscore_add',_0x1e9161(0x2361),'highscore_name',_0x1e9161(0x3efd),'http_get',_0x1e9161(0x1cec),'http_post_string',_0x1e9161(0xf47),_0x1e9161(0x268f),_0x1e9161(0x2ab),_0x1e9161(0x1e75),_0x1e9161(0x1d8b),_0x1e9161(0xc8a),_0x1e9161(0xa2a),'iap_restore_all','iap_status',_0x1e9161(0x1b4a),_0x1e9161(0x4157),'ini_key_exists','ini_open','ini_open_from_string',_0x1e9161(0x2eff),_0x1e9161(0x4a5),'ini_section_delete',_0x1e9161(0x1074),'ini_write_real',_0x1e9161(0x2de3),_0x1e9161(0x5198),_0x1e9161(0x5265),'instance_activate_object',_0x1e9161(0x483b),'instance_change',_0x1e9161(0x4c29),_0x1e9161(0x24a8),_0x1e9161(0x46db),_0x1e9161(0x3af6),_0x1e9161(0xeb6),'instance_deactivate_layer',_0x1e9161(0x4b46),'instance_deactivate_region',_0x1e9161(0x1c8a),_0x1e9161(0x4e7e),_0x1e9161(0x42eb),_0x1e9161(0x3f13),'instance_id_get',_0x1e9161(0x4b9c),_0x1e9161(0x2e0c),_0x1e9161(0xb84),_0x1e9161(0x4e60),_0x1e9161(0x17c2),_0x1e9161(0x8b8),'int64',_0x1e9161(0x2780),'irandom','irandom_range',_0x1e9161(0x3486),_0x1e9161(0x1d4a),_0x1e9161(0x2f76),_0x1e9161(0x355c),_0x1e9161(0x51ef),_0x1e9161(0x87b),_0x1e9161(0x3fe8),_0x1e9161(0x8ba),'is_numeric',_0x1e9161(0x995),_0x1e9161(0x3d64),_0x1e9161(0x3507),'is_struct','is_undefined',_0x1e9161(0xb77),_0x1e9161(0x686),_0x1e9161(0x5033),'json_encode',_0x1e9161(0x42c4),'keyboard_check_direct',_0x1e9161(0x1d96),_0x1e9161(0x1b4c),_0x1e9161(0x25cc),_0x1e9161(0x2057),'keyboard_get_numlock','keyboard_key_press','keyboard_key_release',_0x1e9161(0x3e01),_0x1e9161(0x1f59),'keyboard_unset_map',_0x1e9161(0xeab),_0x1e9161(0x1518),_0x1e9161(0xf1f),_0x1e9161(0x3988),'layer_add_instance',_0x1e9161(0x2eee),_0x1e9161(0x15f8),'layer_background_change',_0x1e9161(0x5246),_0x1e9161(0xacd),'layer_background_exists',_0x1e9161(0x269),_0x1e9161(0x8b1),_0x1e9161(0x2d16),'layer_background_get_id',_0x1e9161(0x4d1),'layer_background_get_speed',_0x1e9161(0x1672),'layer_background_get_stretch',_0x1e9161(0x4d83),_0x1e9161(0x15f7),_0x1e916
1(0x473e),_0x1e9161(0x1e86),_0x1e9161(0x4ea8),_0x1e9161(0x4e3b),_0x1e9161(0x50fe),_0x1e9161(0x2d2e),_0x1e9161(0x29c5),_0x1e9161(0x8a1),_0x1e9161(0x315a),_0x1e9161(0x63f),_0x1e9161(0x26e1),_0x1e9161(0x3ac4),_0x1e9161(0x531),_0x1e9161(0x4dad),_0x1e9161(0x2d12),_0x1e9161(0x33a1),_0x1e9161(0x26c),'layer_force_draw_depth',_0x1e9161(0x19e5),'layer_get_all_elements',_0x1e9161(0x3d77),'layer_get_element_layer',_0x1e9161(0x4c9),'layer_get_forced_depth',_0x1e9161(0x411d),_0x1e9161(0x3415),_0x1e9161(0x3338),_0x1e9161(0xd38),_0x1e9161(0x4cce),_0x1e9161(0x40a9),_0x1e9161(0x229d),_0x1e9161(0x4660),'layer_get_visible',_0x1e9161(0x19a9),_0x1e9161(0xaf3),'layer_get_y',_0x1e9161(0x619),_0x1e9161(0x3b00),'layer_instance_get_instance',_0x1e9161(0x3970),'layer_reset_target_room',_0x1e9161(0x39c),_0x1e9161(0x2c67),_0x1e9161(0x48a5),'layer_set_visible','layer_shader',_0x1e9161(0x2a84),_0x1e9161(0x3fd4),_0x1e9161(0x1e60),'layer_sprite_change',_0x1e9161(0x1097),_0x1e9161(0x4f00),_0x1e9161(0x26b),_0x1e9161(0x4d2c),_0x1e9161(0x35e8),_0x1e9161(0x2b0c),_0x1e9161(0x1997),_0x1e9161(0x4dfc),_0x1e9161(0x118c),_0x1e9161(0x4d56),_0x1e9161(0x3662),_0x1e9161(0x1540),_0x1e9161(0x44b),_0x1e9161(0x3116),_0x1e9161(0x3265),'layer_sprite_speed','layer_sprite_x','layer_sprite_xscale',_0x1e9161(0x2296),_0x1e9161(0x678),_0x1e9161(0x2e0b),'layer_tile_blend',_0x1e9161(0x3902),_0x1e9161(0x46c8),'layer_tile_destroy',_0x1e9161(0x47c1),_0x1e9161(0x118a),'layer_tile_get_blend',_0x1e9161(0x4f4),_0x1e9161(0x1151),_0x1e9161(0x4867),_0x1e9161(0x3114),_0x1e9161(0x598),_0x1e9161(0x11e6),_0x1e9161(0x1826),_0x1e9161(0x518e),'layer_tile_visible',_0x1e9161(0x2928),'layer_tile_xscale',_0x1e9161(0x3a0f),_0x1e9161(0x1b62),'layer_tilemap_create',_0x1e9161(0x30c7),_0x1e9161(0x452),_0x1e9161(0x4f75),_0x1e9161(0x4f53),_0x1e9161(0x37e1),_0x1e9161(0x257),'lengthdir_x',_0x1e9161(0x22d5),'lerp','ln',_0x1e9161(0x400c),_0x1e9161(0x1463),_0x1e9161(0x40da),_0x1e9161(0x272f),_0x1e9161(0x419),'make_color_rgb',_0x1e9161(0x32be),'make_colour_rgb'
,_0x1e9161(0x3cb3),_0x1e9161(0x6bb),_0x1e9161(0x3719),'matrix_build_identity',_0x1e9161(0x2830),_0x1e9161(0x14cf),_0x1e9161(0x2896),_0x1e9161(0x4f4b),'matrix_get',_0x1e9161(0x1ce4),_0x1e9161(0x2e3e),_0x1e9161(0x238e),_0x1e9161(0x4b17),'matrix_stack_multiply',_0x1e9161(0x2b9c),_0x1e9161(0xb0f),_0x1e9161(0x2a56),'matrix_stack_top',_0x1e9161(0xdcd),_0x1e9161(0x4529),_0x1e9161(0x4a2c),'md5_string_unicode',_0x1e9161(0x39b3),_0x1e9161(0x4919),'median','merge_color','merge_colour',_0x1e9161(0x37c8),'motion_add',_0x1e9161(0x51cc),_0x1e9161(0x3d0),_0x1e9161(0x37cd),_0x1e9161(0x2711),'mouse_clear','mouse_wheel_down',_0x1e9161(0x213c),'move_bounce_all',_0x1e9161(0x51aa),_0x1e9161(0x550),'move_contact_solid',_0x1e9161(0x2a1f),'move_outside_solid',_0x1e9161(0x1366),_0x1e9161(0x3299),'move_towards_point',_0x1e9161(0x1781),'mp_grid_add_cell',_0x1e9161(0x4281),_0x1e9161(0x2e44),_0x1e9161(0x17d6),_0x1e9161(0x773),_0x1e9161(0x4a2f),_0x1e9161(0x3395),_0x1e9161(0x3453),_0x1e9161(0x123e),_0x1e9161(0x2b9b),_0x1e9161(0xfa3),_0x1e9161(0x2ff1),_0x1e9161(0x7d1),_0x1e9161(0x3aac),_0x1e9161(0x95b),_0x1e9161(0x2239),_0x1e9161(0xc46),'mp_potential_path_object','mp_potential_settings',_0x1e9161(0x3498),_0x1e9161(0x785),_0x1e9161(0x424f),'network_connect_raw','network_create_server',_0x1e9161(0x2e8d),_0x1e9161(0x1d9d),'network_create_socket_ext',_0x1e9161(0x4987),'network_resolve',_0x1e9161(0x3941),_0x1e9161(0x2aaa),_0x1e9161(0x430a),_0x1e9161(0x3925),_0x1e9161(0x4cef),_0x1e9161(0x8ab),'network_set_timeout',_0x1e9161(0x3e42),_0x1e9161(0x4438),_0x1e9161(0x122a),_0x1e9161(0xdc5),_0x1e9161(0x188a),_0x1e9161(0x290e),_0x1e9161(0x4df4),'object_get_solid',_0x1e9161(0x313c),_0x1e9161(0x1608),_0x1e9161(0x4fc4),'object_set_mask',_0x1e9161(0x10de),_0x1e9161(0x2006),_0x1e9161(0x5217),'object_set_visible',_0x1e9161(0x38b2),'os_get_config',_0x1e9161(0xf7b),_0x1e9161(0x1b7c),_0x1e9161(0x34f9),'os_is_network_connected','os_is_paused',_0x1e9161(0x23eb),'os_powersave_enable','parameter_count',_0x1e9161(0x1844),_0x1
e9161(0x290a),_0x1e9161(0x2094),_0x1e9161(0x2f41),_0x1e9161(0x47e5),_0x1e9161(0x1cfd),_0x1e9161(0x8c1),_0x1e9161(0x15cb),_0x1e9161(0x4063),'part_particles_clear',_0x1e9161(0x4d73),_0x1e9161(0x4c96),'part_particles_create_color',_0x1e9161(0x2b67),_0x1e9161(0x4d3f),_0x1e9161(0x3e03),_0x1e9161(0x13d0),_0x1e9161(0x2a15),_0x1e9161(0x1605),_0x1e9161(0x12f0),_0x1e9161(0x4cdb),_0x1e9161(0x16ee),_0x1e9161(0x265c),_0x1e9161(0x4b28),_0x1e9161(0x3c05),_0x1e9161(0x517e),_0x1e9161(0x14f2),'part_system_update',_0x1e9161(0x21c7),'part_type_alpha2',_0x1e9161(0x2cc2),_0x1e9161(0x3e5d),_0x1e9161(0x22bc),_0x1e9161(0x24d4),_0x1e9161(0x1146),_0x1e9161(0x2ccc),_0x1e9161(0x5034),_0x1e9161(0x1031),_0x1e9161(0x2562),_0x1e9161(0x1f7f),_0x1e9161(0x347),_0x1e9161(0x1af3),_0x1e9161(0x1ed9),_0x1e9161(0x3c88),'part_type_colour_rgb',_0x1e9161(0x3de4),_0x1e9161(0xc8c),_0x1e9161(0x26bb),_0x1e9161(0x41f6),_0x1e9161(0x33b9),'part_type_gravity',_0x1e9161(0x1d6a),'part_type_orientation',_0x1e9161(0x3f36),'part_type_shape',_0x1e9161(0x1dde),'part_type_speed',_0x1e9161(0x4ebc),_0x1e9161(0x2b62),_0x1e9161(0x103e),_0x1e9161(0x2d73),'path_append','path_assign','path_change_point','path_clear_points',_0x1e9161(0x9a4),'path_delete_point',_0x1e9161(0xfea),_0x1e9161(0x31d4),_0x1e9161(0x34b),_0x1e9161(0x344d),_0x1e9161(0x2db0),_0x1e9161(0x1176),_0x1e9161(0x36aa),_0x1e9161(0x4991),'path_get_number','path_get_point_speed',_0x1e9161(0x16bf),_0x1e9161(0x508a),_0x1e9161(0x3c7e),_0x1e9161(0x1d7e),_0x1e9161(0xb32),_0x1e9161(0x43e3),_0x1e9161(0x2235),_0x1e9161(0x4e5a),_0x1e9161(0x191d),_0x1e9161(0x4f62),_0x1e9161(0x19ea),_0x1e9161(0x5d2),_0x1e9161(0x46cb),_0x1e9161(0x271d),_0x1e9161(0x370e),'path_shift',_0x1e9161(0x3102),'physics_apply_angular_impulse','physics_apply_force','physics_apply_impulse',_0x1e9161(0xb58),_0x1e9161(0x50b8),_0x1e9161(0xd6d),_0x1e9161(0x1d93),'physics_fixture_add_point',_0x1e9161(0x2518),_0x1e9161(0x1943),_0x1e9161(0x4a53),_0x1e9161(0x130f),_0x1e9161(0x3dac),'physics_fixture_set_awake','physics_fix
ture_set_box_shape','physics_fixture_set_chain_shape',_0x1e9161(0x1f5e),_0x1e9161(0x4a43),_0x1e9161(0x1f10),'physics_fixture_set_edge_shape',_0x1e9161(0x2dd9),'physics_fixture_set_kinematic','physics_fixture_set_linear_damping',_0x1e9161(0x19ab),_0x1e9161(0x3394),_0x1e9161(0x2b07),'physics_get_density',_0x1e9161(0x2a86),_0x1e9161(0x2283),_0x1e9161(0x4d74),'physics_joint_distance_create',_0x1e9161(0x673),_0x1e9161(0x3f92),_0x1e9161(0x1957),'physics_joint_get_value','physics_joint_prismatic_create',_0x1e9161(0x39f3),'physics_joint_revolute_create',_0x1e9161(0x29a1),_0x1e9161(0x11b0),_0x1e9161(0x4b20),'physics_joint_wheel_create',_0x1e9161(0x2a62),_0x1e9161(0x486d),'physics_particle_create',_0x1e9161(0x540),_0x1e9161(0x10f7),_0x1e9161(0x4dee),_0x1e9161(0x3848),_0x1e9161(0x19fc),_0x1e9161(0x3a80),_0x1e9161(0x3aa7),_0x1e9161(0x22c2),_0x1e9161(0x2175),_0x1e9161(0x2feb),'physics_particle_get_gravity_scale','physics_particle_get_group_flags','physics_particle_get_max_count',_0x1e9161(0xe38),'physics_particle_group_add_point',_0x1e9161(0x2c4a),'physics_particle_group_box','physics_particle_group_circle',_0x1e9161(0x48d),'physics_particle_group_delete',_0x1e9161(0x515b),_0x1e9161(0x185c),_0x1e9161(0x3cfd),_0x1e9161(0x389e),_0x1e9161(0x10cd),_0x1e9161(0xa52),_0x1e9161(0x420a),_0x1e9161(0x2d63),_0x1e9161(0x39c6),_0x1e9161(0x43a2),_0x1e9161(0xbdb),_0x1e9161(0x5111),'physics_particle_group_join',_0x1e9161(0x17ed),'physics_particle_set_category_flags',_0x1e9161(0x3858),_0x1e9161(0x3ca6),_0x1e9161(0x3b9),_0x1e9161(0x3100),'physics_particle_set_group_flags',_0x1e9161(0x27f0),_0x1e9161(0x16ed),_0x1e9161(0x19f3),_0x1e9161(0x4b05),_0x1e9161(0x1168),_0x1e9161(0x1028),_0x1e9161(0x45cd),'physics_test_overlap',_0x1e9161(0x274),_0x1e9161(0x40ef),_0x1e9161(0x20e0),_0x1e9161(0x347c),'physics_world_update_speed',_0x1e9161(0x51da),_0x1e9161(0x164e),_0x1e9161(0x2380),_0x1e9161(0x285f),_0x1e9161(0x5077),_0x1e9161(0x3580),_0x1e9161(0x1e96),_0x1e9161(0x3c68),'point_in_rectangle',_0x1e9161(0x3247),_
0x1e9161(0xdaf),_0x1e9161(0x27ed),_0x1e9161(0x5297),_0x1e9161(0x3730),_0x1e9161(0x17a4),'ptr',_0x1e9161(0x27f1),'push_get_first_local_notification',_0x1e9161(0x2b5e),_0x1e9161(0x504e),_0x1e9161(0x2ca0),'random',_0x1e9161(0xb1e),'random_range',_0x1e9161(0x5173),'randomise',_0x1e9161(0x3cfe),_0x1e9161(0x47f6),'rectangle_in_circle',_0x1e9161(0x1835),'rectangle_in_triangle',_0x1e9161(0x12b7),'room_assign',_0x1e9161(0x2efe),_0x1e9161(0x2dd3),_0x1e9161(0xf7f),'room_get_name','room_get_viewport',_0x1e9161(0x3cdb),_0x1e9161(0x3dcc),_0x1e9161(0x30fc),_0x1e9161(0x28cc),_0x1e9161(0x2c2),'room_next','room_previous',_0x1e9161(0x3074),_0x1e9161(0x2be2),_0x1e9161(0x2513),_0x1e9161(0x30bd),'room_set_height',_0x1e9161(0x100c),_0x1e9161(0x336d),'room_set_view_enabled',_0x1e9161(0xec2),_0x1e9161(0xce6),_0x1e9161(0x3d6c),'screen_save',_0x1e9161(0x330a),_0x1e9161(0x414a),_0x1e9161(0xc24),_0x1e9161(0x2e21),_0x1e9161(0x14b9),_0x1e9161(0x39e0),'sha1_string_utf8','shader_current','shader_enable_corner_id',_0x1e9161(0x2c89),_0x1e9161(0x4110),'shader_get_uniform','shader_is_compiled',_0x1e9161(0x1400),'shader_set','shader_set_uniform_f','shader_set_uniform_f_array',_0x1e9161(0x26d1),'shader_set_uniform_i_array',_0x1e9161(0x821),_0x1e9161(0xcc0),_0x1e9161(0x10cc),_0x1e9161(0x503d),_0x1e9161(0x64e),_0x1e9161(0x2527),_0x1e9161(0x28a3),_0x1e9161(0x2e67),_0x1e9161(0xcb6),_0x1e9161(0x42b9),_0x1e9161(0x3ee6),_0x1e9161(0x7f7),_0x1e9161(0x2a37),_0x1e9161(0xdb7),_0x1e9161(0x360),_0x1e9161(0x2f54),_0x1e9161(0x477e),_0x1e9161(0x46e8),_0x1e9161(0x2d9e),_0x1e9161(0x3ddc),_0x1e9161(0x4bd5),'skeleton_animation_set',_0x1e9161(0x3524),'skeleton_animation_set_frame',_0x1e9161(0xf02),_0x1e9161(0xdc3),_0x1e9161(0x3a36),_0x1e9161(0x2075),_0x1e9161(0x4c08),_0x1e9161(0x185b),'skeleton_bone_state_set',_0x1e9161(0x290d),_0x1e9161(0xde6),_0x1e9161(0x423e),_0x1e9161(0x4328),'skeleton_skin_get',_0x1e9161(0x2788),'skeleton_skin_set',_0x1e9161(0x8bc),_0x1e9161(0x345e),_0x1e9161(0x3c26),_0x1e9161(0x1ad0),'sprite_collision_m
ask',_0x1e9161(0x281b),_0x1e9161(0x4c55),_0x1e9161(0x5110),'sprite_exists',_0x1e9161(0x602),'sprite_flush_multi',_0x1e9161(0xda9),_0x1e9161(0x1dc7),_0x1e9161(0x2247),'sprite_get_bbox_top','sprite_get_height',_0x1e9161(0x27cd),'sprite_get_number',_0x1e9161(0x4100),_0x1e9161(0x204c),_0x1e9161(0x2c7b),_0x1e9161(0x3fcb),_0x1e9161(0x3273),_0x1e9161(0x3168),_0x1e9161(0x4f48),'sprite_get_yoffset',_0x1e9161(0xa20),'sprite_prefetch',_0x1e9161(0x18be),_0x1e9161(0x1cb0),_0x1e9161(0x364),'sprite_save_strip','sprite_set_alpha_from_sprite',_0x1e9161(0x36a5),_0x1e9161(0x17b3),_0x1e9161(0x403d),_0x1e9161(0x75a),_0x1e9161(0x4655),'sqrt',_0x1e9161(0x413e),_0x1e9161(0x2afc),_0x1e9161(0x319d),_0x1e9161(0x1c4e),'steam_available_languages',_0x1e9161(0x5b0),_0x1e9161(0x15db),'steam_current_game_language',_0x1e9161(0x633),_0x1e9161(0x4bf5),_0x1e9161(0x4bff),'steam_file_delete',_0x1e9161(0x4dd7),_0x1e9161(0x289f),_0x1e9161(0xb5b),_0x1e9161(0x815),_0x1e9161(0x2f13),_0x1e9161(0x1510),_0x1e9161(0x8ac),_0x1e9161(0x4080),_0x1e9161(0x4f73),_0x1e9161(0x152b),'steam_get_quota_free',_0x1e9161(0x1899),_0x1e9161(0x4731),_0x1e9161(0x4da2),_0x1e9161(0x28c2),'steam_get_user_account_id','steam_get_user_persona_name','steam_get_user_steam_id',_0x1e9161(0x1f71),_0x1e9161(0x125b),_0x1e9161(0x1c2d),_0x1e9161(0x1b46),_0x1e9161(0x27bf),_0x1e9161(0x25be),_0x1e9161(0x414d),_0x1e9161(0x3885),'steam_reset_all_stats_achievements',_0x1e9161(0x4a36),_0x1e9161(0x6fa),_0x1e9161(0x3c89),_0x1e9161(0x3b3a),_0x1e9161(0x4d26),'steam_stats_ready',_0x1e9161(0x4044),'steam_ugc_create_query_all',_0x1e9161(0x1b81),_0x1e9161(0xda4),_0x1e9161(0x4174),_0x1e9161(0x2adc),_0x1e9161(0x3174),_0x1e9161(0x446e),_0x1e9161(0x7e0),'steam_ugc_get_subscribed_items','steam_ugc_num_subscribed_items',_0x1e9161(0x1336),_0x1e9161(0x2cc1),_0x1e9161(0x10a6),_0x1e9161(0xd2a),_0x1e9161(0x158a),_0x1e9161(0x48b2),_0x1e9161(0x1d86),_0x1e9161(0x3cd7),_0x1e9161(0x1597),_0x1e9161(0x341d),'steam_ugc_send_query','steam_ugc_set_item_content',_0x1e9161(0x26ff),_0
x1e9161(0x1504),_0x1e9161(0x19c4),'steam_ugc_set_item_title',_0x1e9161(0x1f90),'steam_ugc_start_item_update',_0x1e9161(0x4963),_0x1e9161(0x317f),_0x1e9161(0x3fbb),'steam_upload_score',_0x1e9161(0x51e),_0x1e9161(0x22a3),_0x1e9161(0x230c),_0x1e9161(0x2f27),_0x1e9161(0x3e1e),'string',_0x1e9161(0xd98),'string_byte_length',_0x1e9161(0x4178),'string_copy','string_count',_0x1e9161(0x49ab),_0x1e9161(0x3593),'string_format',_0x1e9161(0x2492),_0x1e9161(0x1483),_0x1e9161(0x2fce),_0x1e9161(0x1141),'string_length','string_letters',_0x1e9161(0x4237),_0x1e9161(0x3de9),_0x1e9161(0x46e4),_0x1e9161(0x2da0),_0x1e9161(0x329d),_0x1e9161(0x4789),_0x1e9161(0x37ed),'string_set_byte_at',_0x1e9161(0x3557),_0x1e9161(0x6f3),'string_width_ext','surface_copy','surface_copy_part',_0x1e9161(0x2b27),'surface_create_ext',_0x1e9161(0x9a8),_0x1e9161(0x5049),_0x1e9161(0x1a98),_0x1e9161(0x2d7),'surface_get_height',_0x1e9161(0x456f),_0x1e9161(0x2948),_0x1e9161(0x28ee),_0x1e9161(0x194f),_0x1e9161(0x3508),_0x1e9161(0x3a6b),_0x1e9161(0x40bf),_0x1e9161(0x4576),_0x1e9161(0x48cb),_0x1e9161(0x1c8b),'tan','texture_get_height','texture_get_texel_height',_0x1e9161(0xfeb),'texture_get_uvs',_0x1e9161(0x22bf),_0x1e9161(0x3126),_0x1e9161(0x2de4),_0x1e9161(0x3676),_0x1e9161(0x3967),_0x1e9161(0x31e9),_0x1e9161(0x3f2b),_0x1e9161(0x1cf7),'tile_set_empty',_0x1e9161(0x3dab),_0x1e9161(0x20ea),_0x1e9161(0x2a0d),_0x1e9161(0x2219),_0x1e9161(0x1fa4),_0x1e9161(0x26d6),_0x1e9161(0x325a),_0x1e9161(0x46aa),_0x1e9161(0x22a2),'tilemap_get_frame',_0x1e9161(0x3f94),_0x1e9161(0x13e4),'tilemap_get_mask','tilemap_get_tile_height',_0x1e9161(0x3428),_0x1e9161(0x1670),_0x1e9161(0x3e7),_0x1e9161(0x36fd),_0x1e9161(0x27d3),'tilemap_set',_0x1e9161(0x2c2f),'tilemap_set_global_mask','tilemap_set_mask',_0x1e9161(0x4ac9),_0x1e9161(0xb6a),_0x1e9161(0x2596),_0x1e9161(0x2674),_0x1e9161(0x454e),'timeline_delete','timeline_exists',_0x1e9161(0x4ed2),_0x1e9161(0x4045),_0x1e9161(0x32ed),_0x1e9161(0x1b06),_0x1e9161(0x32f8),'typeof',_0x1e9161(0x361e),_0x1e9161
(0x1055),_0x1e9161(0x143d),_0x1e9161(0x4792),'variable_global_exists',_0x1e9161(0x3ac),_0x1e9161(0x16d5),_0x1e9161(0xfc0),_0x1e9161(0xc68),_0x1e9161(0x3576),_0x1e9161(0xee7),_0x1e9161(0x3b36),_0x1e9161(0x3211),_0x1e9161(0x16dd),_0x1e9161(0x3083),_0x1e9161(0x49e1),_0x1e9161(0x2c90),'vertex_argb',_0x1e9161(0x5054),_0x1e9161(0x17ee),'vertex_colour',_0x1e9161(0x1aa3),_0x1e9161(0x21d3),_0x1e9161(0x1242),'vertex_create_buffer_from_buffer_ext','vertex_delete_buffer',_0x1e9161(0xade),_0x1e9161(0x5046),_0x1e9161(0x1f93),'vertex_float3',_0x1e9161(0xbf8),_0x1e9161(0xd26),_0x1e9161(0x3818),_0x1e9161(0x1426),_0x1e9161(0x1621),'vertex_format_add_position',_0x1e9161(0x26d5),'vertex_format_add_texcoord',_0x1e9161(0x3a16),_0x1e9161(0x4538),'vertex_format_delete',_0x1e9161(0xc94),_0x1e9161(0x4c6c),_0x1e9161(0xd79),'vertex_get_number',_0x1e9161(0x11e0),_0x1e9161(0x7bd),_0x1e9161(0x19d7),_0x1e9161(0x3d7d),_0x1e9161(0x710),_0x1e9161(0x2e23),_0x1e9161(0x3a52),_0x1e9161(0x2fbb),_0x1e9161(0x2a46),_0x1e9161(0x3fba),_0x1e9161(0x3a93),_0x1e9161(0x87d),_0x1e9161(0x36d2),_0x1e9161(0x2d57),_0x1e9161(0x392f),_0x1e9161(0x1a37),_0x1e9161(0xba0),_0x1e9161(0x33cd),_0x1e9161(0x14d0),_0x1e9161(0x49b7),'virtual_key_add',_0x1e9161(0x36f7),_0x1e9161(0x20ac),_0x1e9161(0x363b),_0x1e9161(0x36d3),'win8_appbar_enable','win8_appbar_remove_element',_0x1e9161(0x5065),_0x1e9161(0x6bc),_0x1e9161(0x35cf),_0x1e9161(0x265e),'win8_livetile_badge_notification',_0x1e9161(0x1dcb),_0x1e9161(0x3ccd),_0x1e9161(0xbef),_0x1e9161(0x3b11),_0x1e9161(0x4091),_0x1e9161(0x22e7),_0x1e9161(0x28d2),_0x1e9161(0x51fb),_0x1e9161(0xe7d),_0x1e9161(0x23de),_0x1e9161(0x2396),'win8_search_disable',_0x1e9161(0x11a0),'win8_secondarytile_badge_notification',_0x1e9161(0x3dc4),_0x1e9161(0xa74),'win8_settingscharm_add_entry',_0x1e9161(0x3fb7),_0x1e9161(0x1063),_0x1e9161(0x2911),_0x1e9161(0x3cd4),_0x1e9161(0x2f7),_0x1e9161(0xf87),_0x1e9161(0x38d4),_0x1e9161(0x27d5),'win8_share_text',_0x1e9161(0x4d43),_0x1e9161(0x10fa),_0x1e9161(0x3a17),_0x1e9161(0x2d
aa),'window_get_color',_0x1e9161(0x4625),_0x1e9161(0x2e14),'window_get_fullscreen',_0x1e9161(0x356d),_0x1e9161(0x200a),_0x1e9161(0x3f30),_0x1e9161(0x36c4),_0x1e9161(0x3089),_0x1e9161(0x1bca),_0x1e9161(0x2ebd),_0x1e9161(0x400f),_0x1e9161(0x4a61),_0x1e9161(0x738),'window_set_caption',_0x1e9161(0x424d),_0x1e9161(0x19e3),_0x1e9161(0xda3),_0x1e9161(0x242f),_0x1e9161(0x2a04),_0x1e9161(0x3dd3),_0x1e9161(0x2c9f),'window_set_min_width','window_set_position','window_set_rectangle',_0x1e9161(0x1d16),_0x1e9161(0xc42),_0x1e9161(0x701),_0x1e9161(0x69c),'window_views_mouse_get_y',_0x1e9161(0x423b),'winphone_tile_back_content','winphone_tile_back_content_wide',_0x1e9161(0x781),_0x1e9161(0x4e4a),_0x1e9161(0xeb5),'winphone_tile_background_color',_0x1e9161(0x3801),_0x1e9161(0x1ab4),'winphone_tile_cycle_images',_0x1e9161(0xd09),'winphone_tile_front_image_small','winphone_tile_front_image_wide',_0x1e9161(0x4c15),_0x1e9161(0x4b45),'winphone_tile_small_icon_image','winphone_tile_title','winphone_tile_wide_content',_0x1e9161(0x825)],'literal':[_0x1e9161(0xc36),_0x1e9161(0x3984),_0x1e9161(0x2727),'pointer_invalid',_0x1e9161(0x1db8),_0x1e9161(0x4022),'undefined'],'symbol':[_0x1e9161(0x378d),_0x1e9161(0x61a),'BALTIC_CHARSET',_0x1e9161(0x36ec),_0x1e9161(0x5ba),_0x1e9161(0x47b7),_0x1e9161(0x3e8a),_0x1e9161(0x10d8),_0x1e9161(0x2447),_0x1e9161(0x4355),_0x1e9161(0x47df),'HANGEUL_CHARSET',_0x1e9161(0x51b2),'JOHAB_CHARSET',_0x1e9161(0xb22),_0x1e9161(0x432e),_0x1e9161(0x4ee6),_0x1e9161(0x2aeb),'SYMBOL_CHARSET',_0x1e9161(0x1385),'TURKISH_CHARSET',_0x1e9161(0x48f),_0x1e9161(0x4425),_0x1e9161(0x2bae),_0x1e9161(0x2413),_0x1e9161(0x4f47),_0x1e9161(0x41ca),_0x1e9161(0x399b),'achievement_our_info',_0x1e9161(0x7c3),_0x1e9161(0xb30),'achievement_show_bank',_0x1e9161(0x4e7d),_0x1e9161(0x26fb),_0x1e9161(0x269a),_0x1e9161(0x40b),_0x1e9161(0x1c92),_0x1e9161(0x26cb),'achievement_type_score_challenge',_0x1e9161(0x43f2),'asset_object','asset_path',_0x1e9161(0x396d),'asset_script',_0x1e9161(0x6e3),'asset_sound',_0x1e
9161(0xc13),'asset_tiles',_0x1e9161(0x3e57),_0x1e9161(0x2b2d),'audio_3d','audio_falloff_exponent_distance','audio_falloff_exponent_distance_clamped',_0x1e9161(0x3efe),_0x1e9161(0xc3f),'audio_falloff_linear_distance','audio_falloff_linear_distance_clamped',_0x1e9161(0x118d),_0x1e9161(0x171f),_0x1e9161(0x3b02),_0x1e9161(0xb97),_0x1e9161(0x2556),_0x1e9161(0x1dfc),_0x1e9161(0x474e),'bm_dest_alpha',_0x1e9161(0x111a),_0x1e9161(0x1f4d),_0x1e9161(0x881),_0x1e9161(0x618),'bm_inv_dest_colour',_0x1e9161(0x378),'bm_inv_src_color',_0x1e9161(0x50b1),_0x1e9161(0x3912),_0x1e9161(0x509f),_0x1e9161(0x10b3),_0x1e9161(0x3679),_0x1e9161(0x224a),_0x1e9161(0x67a),_0x1e9161(0x4b8f),'bm_subtract','bm_zero','browser_chrome',_0x1e9161(0x42d0),_0x1e9161(0x16d2),_0x1e9161(0x359c),_0x1e9161(0x4120),_0x1e9161(0x3893),_0x1e9161(0x4b47),_0x1e9161(0x2bda),'browser_safari_mobile','browser_tizen',_0x1e9161(0x3586),'browser_windows_store',_0x1e9161(0x36b7),_0x1e9161(0x4bab),_0x1e9161(0x9da),_0x1e9161(0x4902),'buffer_fast',_0x1e9161(0x3297),_0x1e9161(0x24f0),_0x1e9161(0x64f),_0x1e9161(0x1973),'buffer_network',_0x1e9161(0x5f8),_0x1e9161(0x2f87),'buffer_s16',_0x1e9161(0x3d1),_0x1e9161(0x1813),_0x1e9161(0x3960),_0x1e9161(0x35f9),_0x1e9161(0x44aa),_0x1e9161(0x2897),_0x1e9161(0x24ca),_0x1e9161(0x84b),_0x1e9161(0x2bc2),_0x1e9161(0x29ee),_0x1e9161(0x4e42),_0x1e9161(0xa7a),_0x1e9161(0x578),_0x1e9161(0xdf6),_0x1e9161(0x45d3),'c_aqua',_0x1e9161(0x264d),_0x1e9161(0x38bd),'c_dkgray',_0x1e9161(0x4b93),_0x1e9161(0x14a4),_0x1e9161(0x560),_0x1e9161(0x3e43),_0x1e9161(0x1399),'c_maroon',_0x1e9161(0x630),_0x1e9161(0x436),_0x1e9161(0xcee),_0x1e9161(0x4d08),'c_red',_0x1e9161(0x894),'c_teal',_0x1e9161(0x4efd),_0x1e9161(0xc95),_0x1e9161(0x2baa),_0x1e9161(0xec7),_0x1e9161(0x2473),_0x1e9161(0x10ae),'cmpfunc_less','cmpfunc_lessequal',_0x1e9161(0x33e3),_0x1e9161(0x39d2),_0x1e9161(0x46cf),_0x1e9161(0x3934),_0x1e9161(0x1bb3),'cr_cross',_0x1e9161(0x41c7),_0x1e9161(0xca6),_0x1e9161(0x6f8),'cr_hourglass',_0x1e9161(0x2fc5),_0x1e9161(0x
1198),_0x1e9161(0x39a3),_0x1e9161(0x3afd),_0x1e9161(0x4996),_0x1e9161(0x3d45),_0x1e9161(0x3036),'cull_clockwise','cull_counterclockwise',_0x1e9161(0x3cec),_0x1e9161(0x2004),'device_ios_ipad','device_ios_ipad_retina','device_ios_iphone','device_ios_iphone5',_0x1e9161(0x42c5),_0x1e9161(0x32e5),_0x1e9161(0x3a02),'device_ios_unknown','device_tablet',_0x1e9161(0x3ef8),_0x1e9161(0x4353),_0x1e9161(0x2f99),_0x1e9161(0x13f3),_0x1e9161(0x4081),_0x1e9161(0x34c6),_0x1e9161(0x365a),'ds_type_list',_0x1e9161(0x1e77),'ds_type_priority',_0x1e9161(0x2583),_0x1e9161(0x3156),_0x1e9161(0x152a),_0x1e9161(0x9dd),_0x1e9161(0x1093),_0x1e9161(0x4954),_0x1e9161(0x12ae),_0x1e9161(0x2028),_0x1e9161(0xff0),_0x1e9161(0x514a),_0x1e9161(0x3b54),'ef_snow',_0x1e9161(0x3804),_0x1e9161(0x2dfd),'ev_alarm',_0x1e9161(0x2cac),_0x1e9161(0xe47),_0x1e9161(0x5101),'ev_close_button',_0x1e9161(0x7f9),_0x1e9161(0x67c),'ev_destroy',_0x1e9161(0x2dca),_0x1e9161(0x4c01),'ev_draw_end',_0x1e9161(0x1d7a),'ev_draw_pre',_0x1e9161(0x230e),'ev_game_end',_0x1e9161(0x31f),_0x1e9161(0xe99),_0x1e9161(0x5171),'ev_gesture_drag_end',_0x1e9161(0x41c0),_0x1e9161(0x3ee0),_0x1e9161(0x442d),_0x1e9161(0x3cd8),_0x1e9161(0x1388),'ev_gesture_pinch_out',_0x1e9161(0x3c75),_0x1e9161(0x522f),_0x1e9161(0x1ebf),_0x1e9161(0x37c),_0x1e9161(0x31af),'ev_global_gesture_double_tap',_0x1e9161(0x37aa),'ev_global_gesture_drag_start',_0x1e9161(0x394a),'ev_global_gesture_flick','ev_global_gesture_pinch_end','ev_global_gesture_pinch_in',_0x1e9161(0x413b),_0x1e9161(0x3a83),_0x1e9161(0x4cca),'ev_global_gesture_rotate_start',_0x1e9161(0x303c),_0x1e9161(0x13a0),_0x1e9161(0x2bfc),'ev_global_left_press',_0x1e9161(0x2c69),_0x1e9161(0x1167),_0x1e9161(0x3e82),'ev_global_middle_release',_0x1e9161(0x227d),_0x1e9161(0x638),'ev_global_right_release',_0x1e9161(0x4e9d),_0x1e9161(0x887),_0x1e9161(0x135e),_0x1e9161(0x18d2),_0x1e9161(0x20bc),_0x1e9161(0x385f),'ev_joystick1_button4','ev_joystick1_button5',_0x1e9161(0x2cb6),_0x1e9161(0x2576),'ev_joystick1_button8',_0x1e9161(0x
3618),_0x1e9161(0x3ec2),_0x1e9161(0x167d),'ev_joystick1_up',_0x1e9161(0xbd6),'ev_joystick2_button2',_0x1e9161(0x3e3d),'ev_joystick2_button4',_0x1e9161(0x3a76),_0x1e9161(0x1850),'ev_joystick2_button7',_0x1e9161(0x548),_0x1e9161(0x9a2),_0x1e9161(0xf83),_0x1e9161(0x271b),_0x1e9161(0x5203),_0x1e9161(0x42c2),_0x1e9161(0x4d90),'ev_keyrelease',_0x1e9161(0x2cc8),_0x1e9161(0x132e),_0x1e9161(0x4d00),_0x1e9161(0x178b),_0x1e9161(0x4ee1),_0x1e9161(0x394e),_0x1e9161(0x3d7e),_0x1e9161(0x46ca),_0x1e9161(0x4fa4),'ev_mouse_wheel_down',_0x1e9161(0x3e46),_0x1e9161(0x5d0),_0x1e9161(0x263e),_0x1e9161(0x43dd),_0x1e9161(0x2d1e),_0x1e9161(0xe7a),_0x1e9161(0x3de5),'ev_right_press',_0x1e9161(0x5dc),_0x1e9161(0x514),_0x1e9161(0x109e),_0x1e9161(0x44f0),_0x1e9161(0x15c3),'ev_step_end','ev_step_normal',_0x1e9161(0x4d98),_0x1e9161(0x32c2),'ev_user1',_0x1e9161(0x1d61),_0x1e9161(0x38c1),'ev_user4',_0x1e9161(0x3958),_0x1e9161(0x317e),_0x1e9161(0x3283),_0x1e9161(0x3bed),_0x1e9161(0x2c75),_0x1e9161(0x915),_0x1e9161(0x3fd9),_0x1e9161(0x31ed),'ev_user13','ev_user14',_0x1e9161(0x1558),_0x1e9161(0x50a9),_0x1e9161(0xcf5),_0x1e9161(0x346f),_0x1e9161(0x14c9),_0x1e9161(0x4cc6),'fa_left',_0x1e9161(0x3e8b),_0x1e9161(0x3811),_0x1e9161(0xafe),_0x1e9161(0x44ed),_0x1e9161(0x1688),_0x1e9161(0x1926),_0x1e9161(0x1b67),_0x1e9161(0x2279),'fb_login_forcing_safari',_0x1e9161(0x463d),_0x1e9161(0x1a1d),_0x1e9161(0x1875),'gamespeed_fps',_0x1e9161(0xd81),'ge_lose',_0x1e9161(0x501b),'gp_axislh','gp_axislv',_0x1e9161(0x2723),_0x1e9161(0x2e87),_0x1e9161(0x30e7),_0x1e9161(0x37e9),'gp_face3',_0x1e9161(0x351e),_0x1e9161(0x3681),_0x1e9161(0x1224),_0x1e9161(0x1018),_0x1e9161(0x505e),_0x1e9161(0x4416),_0x1e9161(0x1425),'gp_shoulderlb',_0x1e9161(0x25ea),_0x1e9161(0x4d44),_0x1e9161(0x1b2d),'gp_stickl',_0x1e9161(0x3c12),_0x1e9161(0x4da7),'iap_canceled',_0x1e9161(0x2cf8),_0x1e9161(0x2a1c),'iap_ev_purchase',_0x1e9161(0xf2e),_0x1e9161(0xe2d),'iap_failed',_0x1e9161(0x770),_0x1e9161(0x1f91),_0x1e9161(0x3aa5),_0x1e9161(0x1c6f),_0x1e9161(0x3e52)
,_0x1e9161(0x2a05),'iap_status_unavailable',_0x1e9161(0x1566),_0x1e9161(0x684),_0x1e9161(0x51a2),_0x1e9161(0xf5f),_0x1e9161(0x3525),_0x1e9161(0x1f95),_0x1e9161(0x1e2d),_0x1e9161(0x1947),_0x1e9161(0x1b69),_0x1e9161(0x1893),_0x1e9161(0x12df),_0x1e9161(0x20ae),'kbv_returnkey_emergency','kbv_returnkey_go',_0x1e9161(0x4d9b),_0x1e9161(0x3c92),'kbv_returnkey_next',_0x1e9161(0x5086),_0x1e9161(0x4351),_0x1e9161(0x164f),_0x1e9161(0x4775),_0x1e9161(0x136a),'kbv_type_default',_0x1e9161(0x3d9e),_0x1e9161(0x43b3),_0x1e9161(0x1259),_0x1e9161(0x1ab0),'kbv_type_url','layerelementtype_background','layerelementtype_instance',_0x1e9161(0x1de7),_0x1e9161(0x3341),_0x1e9161(0xdd8),_0x1e9161(0x1cc1),'layerelementtype_tilemap',_0x1e9161(0x12fc),_0x1e9161(0x403e),_0x1e9161(0x29e1),_0x1e9161(0x370c),_0x1e9161(0x5023),_0x1e9161(0x35e9),'lb_sort_descending',_0x1e9161(0x3758),'leaderboard_type_number',_0x1e9161(0x319c),'lighttype_dir',_0x1e9161(0x390f),'local',_0x1e9161(0x339e),'matrix_view',_0x1e9161(0x4310),_0x1e9161(0x3293),_0x1e9161(0x51f7),_0x1e9161(0x1337),'mb_none','mb_right',_0x1e9161(0x241b),_0x1e9161(0x154c),'mip_on','network_config_connect_timeout',_0x1e9161(0x1fe6),_0x1e9161(0x136f),_0x1e9161(0x264c),_0x1e9161(0x42fc),_0x1e9161(0x2c6a),_0x1e9161(0x398a),_0x1e9161(0x1e88),_0x1e9161(0x44d1),_0x1e9161(0xfc8),'network_type_non_blocking_connect',_0x1e9161(0x1c8d),_0x1e9161(0x4acc),_0x1e9161(0x121a),_0x1e9161(0x3332),'os_android',_0x1e9161(0x2dbf),_0x1e9161(0x28c),_0x1e9161(0x3d14),'os_macosx',_0x1e9161(0x32bb),'os_ps4',_0x1e9161(0x190a),_0x1e9161(0x48ac),_0x1e9161(0x408d),_0x1e9161(0x3b52),_0x1e9161(0x382),_0x1e9161(0x3a7a),_0x1e9161(0x2ab8),_0x1e9161(0x1a40),_0x1e9161(0x30d4),_0x1e9161(0x23bf),'os_windows','os_winphone',_0x1e9161(0x352b),_0x1e9161(0x47af),_0x1e9161(0x4b40),_0x1e9161(0x2226),'ov_community',_0x1e9161(0x4693),'ov_gamegroup','ov_players','ov_settings',_0x1e9161(0x5150),_0x1e9161(0xe9a),_0x1e9161(0x2f24),_0x1e9161(0x3675),_0x1e9161(0x16b4),'phy_debug_render_collision_pairs',_
0x1e9161(0x31bc),_0x1e9161(0x36ce),_0x1e9161(0x2c99),'phy_debug_render_obb',_0x1e9161(0x3441),'phy_joint_anchor_1_x',_0x1e9161(0x3765),_0x1e9161(0x51cf),_0x1e9161(0x22fc),_0x1e9161(0x1bb6),_0x1e9161(0x4361),_0x1e9161(0x1db5),_0x1e9161(0x1373),_0x1e9161(0xf53),_0x1e9161(0x2443),'phy_joint_lower_angle_limit',_0x1e9161(0x2e57),_0x1e9161(0x1815),_0x1e9161(0x205a),_0x1e9161(0x2913),_0x1e9161(0x5f2),_0x1e9161(0x1687),_0x1e9161(0x29c1),_0x1e9161(0x2dbc),_0x1e9161(0xeb9),_0x1e9161(0x1df7),_0x1e9161(0x4ba7),_0x1e9161(0x1546),'phy_joint_translation',_0x1e9161(0x461d),_0x1e9161(0x1b1c),_0x1e9161(0x44ab),_0x1e9161(0x4381),_0x1e9161(0x3980),_0x1e9161(0x43b2),'phy_particle_data_flag_velocity',_0x1e9161(0xe5b),_0x1e9161(0x29b7),_0x1e9161(0x10b4),_0x1e9161(0x2086),'phy_particle_flag_spring',_0x1e9161(0x2329),'phy_particle_flag_viscous',_0x1e9161(0x36ae),_0x1e9161(0x73a),_0x1e9161(0x1e21),_0x1e9161(0x4253),_0x1e9161(0x3c44),'pi',_0x1e9161(0x323f),_0x1e9161(0x2785),_0x1e9161(0x10a2),_0x1e9161(0xdbc),_0x1e9161(0x823),_0x1e9161(0x1066),_0x1e9161(0x2f45),_0x1e9161(0xdd2),_0x1e9161(0xb51),_0x1e9161(0x2359),_0x1e9161(0x1f4a),'ps_shape_line',_0x1e9161(0x2cbe),'pt_shape_circle',_0x1e9161(0x3a94),_0x1e9161(0x13f6),'pt_shape_explosion',_0x1e9161(0x3cf3),'pt_shape_line',_0x1e9161(0x409a),'pt_shape_ring',_0x1e9161(0x2c3d),'pt_shape_snow',_0x1e9161(0x37ce),_0x1e9161(0x721),_0x1e9161(0x34a3),_0x1e9161(0x2532),'spritespeed_framespergameframe','spritespeed_framespersecond',_0x1e9161(0x4fda),'tf_anisotropic','tf_linear',_0x1e9161(0x4bbb),_0x1e9161(0x2fbf),_0x1e9161(0x943),_0x1e9161(0x2bd0),_0x1e9161(0x5283),_0x1e9161(0x39b5),_0x1e9161(0x19f6),'tm_countvsyncs',_0x1e9161(0x1de1),_0x1e9161(0x50c3),'ty_string',_0x1e9161(0xcce),'ugc_filetype_microtrans',_0x1e9161(0x2ce8),_0x1e9161(0x2d22),_0x1e9161(0x3061),_0x1e9161(0x34e5),_0x1e9161(0x1b92),_0x1e9161(0xa07),'ugc_list_VotedOn','ugc_list_VotedUp','ugc_list_WillVoteLater',_0x1e9161(0x35fc),'ugc_match_Artwork',_0x1e9161(0x4a29),_0x1e9161(0x253f),_0x1e9161(0
x3516),_0x1e9161(0x35dc),'ugc_match_Items_Mtx',_0x1e9161(0x2c7),'ugc_match_Screenshots','ugc_match_UsableInGame','ugc_match_Videos','ugc_match_WebGuides',_0x1e9161(0x4865),_0x1e9161(0x1008),_0x1e9161(0x1da0),_0x1e9161(0x1645),_0x1e9161(0xf85),_0x1e9161(0x26ee),'ugc_query_RankedByPublicationDate',_0x1e9161(0x32a8),_0x1e9161(0x347e),_0x1e9161(0x2740),_0x1e9161(0x4896),_0x1e9161(0x3c02),_0x1e9161(0x3be7),_0x1e9161(0x2e16),_0x1e9161(0x1284),_0x1e9161(0x497c),'ugc_sortorder_LastUpdatedDesc','ugc_sortorder_SubscriptionDateDesc','ugc_sortorder_TitleAsc',_0x1e9161(0x3f8b),_0x1e9161(0x4ca7),'ugc_visibility_private',_0x1e9161(0x6f6),_0x1e9161(0x381b),_0x1e9161(0x128e),_0x1e9161(0xb7b),'vertex_type_float2',_0x1e9161(0x32ac),_0x1e9161(0x41b8),'vertex_type_ubyte4',_0x1e9161(0x40d6),_0x1e9161(0x5197),_0x1e9161(0x1f15),_0x1e9161(0x2ef6),_0x1e9161(0x376b),_0x1e9161(0x40aa),'vertex_usage_fog',_0x1e9161(0x3585),'vertex_usage_position',_0x1e9161(0x2bb7),'vertex_usage_sample',_0x1e9161(0x43cb),_0x1e9161(0x1139),_0x1e9161(0xd03),_0x1e9161(0x1b55),_0x1e9161(0x4e83),_0x1e9161(0x442f),_0x1e9161(0x50b3),_0x1e9161(0x323),'vk_decimal',_0x1e9161(0x24b7),_0x1e9161(0x4ea5),_0x1e9161(0x3743),_0x1e9161(0x227),'vk_enter',_0x1e9161(0x48a6),_0x1e9161(0x4c98),_0x1e9161(0x2fa2),_0x1e9161(0x359e),_0x1e9161(0x51a9),_0x1e9161(0x2357),_0x1e9161(0x34f1),_0x1e9161(0x12fb),_0x1e9161(0x2960),_0x1e9161(0x11b9),_0x1e9161(0x3b94),_0x1e9161(0x2b88),_0x1e9161(0x42a7),_0x1e9161(0x27dd),_0x1e9161(0x5062),_0x1e9161(0x2ad3),_0x1e9161(0x640),_0x1e9161(0x4e13),_0x1e9161(0x42f8),_0x1e9161(0x1952),_0x1e9161(0x385e),'vk_numpad0',_0x1e9161(0x2922),_0x1e9161(0x14f8),_0x1e9161(0x28cd),_0x1e9161(0x165b),_0x1e9161(0x233a),_0x1e9161(0x1916),_0x1e9161(0x1491),_0x1e9161(0x4471),_0x1e9161(0xd69),_0x1e9161(0x4f5a),_0x1e9161(0x1e69),_0x1e9161(0xe04),'vk_printscreen',_0x1e9161(0x19df),_0x1e9161(0x1e73),_0x1e9161(0x1750),_0x1e9161(0x2038),_0x1e9161(0x2b81),_0x1e9161(0xdf7),_0x1e9161(0x2b3),_0x1e9161(0x126b),'vk_tab',_0x1e9161(0x23ac)],'
variable.language':['alarm',_0x1e9161(0x157d),_0x1e9161(0xc35),_0x1e9161(0x4cda),'argument1',_0x1e9161(0x482f),_0x1e9161(0x33da),_0x1e9161(0x2045),_0x1e9161(0xae6),_0x1e9161(0x31e6),'argument7',_0x1e9161(0x3b74),_0x1e9161(0x379d),_0x1e9161(0x2ccb),_0x1e9161(0x228b),_0x1e9161(0xf96),_0x1e9161(0xa9b),_0x1e9161(0x4c6e),'argument15',_0x1e9161(0xcba),_0x1e9161(0x301e),'async_load',_0x1e9161(0x4a48),_0x1e9161(0x4b4f),_0x1e9161(0x2ae9),_0x1e9161(0x3e3b),_0x1e9161(0x1fd6),_0x1e9161(0x105d),'bbox_right',_0x1e9161(0x4516),'browser_height',_0x1e9161(0xb87),_0x1e9161(0x527a),_0x1e9161(0x15cd),_0x1e9161(0x4d93),'current_day',_0x1e9161(0x5294),_0x1e9161(0x1dae),_0x1e9161(0x422f),_0x1e9161(0x320b),_0x1e9161(0x606),_0x1e9161(0x2537),_0x1e9161(0x4db3),_0x1e9161(0x2341),_0x1e9161(0x4b1d),_0x1e9161(0x4509),_0x1e9161(0x368e),_0x1e9161(0x296c),'display_aa','error_last',_0x1e9161(0x2ed3),_0x1e9161(0x426e),_0x1e9161(0x408f),_0x1e9161(0x2710),_0x1e9161(0x1bbb),_0x1e9161(0x98d),'fps',_0x1e9161(0x20f7),_0x1e9161(0x3d0a),_0x1e9161(0x1ff9),_0x1e9161(0x2e0a),_0x1e9161(0x57f),_0x1e9161(0x60f),_0x1e9161(0x49d4),_0x1e9161(0x3a1a),_0x1e9161(0x26ad),_0x1e9161(0x4bfb),_0x1e9161(0x17aa),_0x1e9161(0x48c),_0x1e9161(0x4316),_0x1e9161(0x1d89),'id|0',_0x1e9161(0x465d),'image_angle',_0x1e9161(0x2924),'image_index',_0x1e9161(0x3901),_0x1e9161(0x1b47),_0x1e9161(0x369e),_0x1e9161(0x1320),'instance_count',_0x1e9161(0xa80),_0x1e9161(0x4b8d),_0x1e9161(0x42d3),_0x1e9161(0x3537),_0x1e9161(0x372d),_0x1e9161(0x23ca),_0x1e9161(0x3d22),_0x1e9161(0x1eb5),'mouse_button',_0x1e9161(0x14ea),'mouse_x',_0x1e9161(0x3033),_0x1e9161(0x452e),_0x1e9161(0x3423),'os_device',_0x1e9161(0x719),_0x1e9161(0x5257),_0x1e9161(0x458d),_0x1e9161(0x358b),_0x1e9161(0x39f1),_0x1e9161(0x4442),_0x1e9161(0x3dc5),_0x1e9161(0x28aa),_0x1e9161(0x1fe0),_0x1e9161(0x508b),_0x1e9161(0x4613),_0x1e9161(0x2df7),'phy_angular_velocity',_0x1e9161(0x5025),_0x1e9161(0x355b),_0x1e9161(0x49b5),_0x1e9161(0x31b3),'phy_collision_x',_0x1e9161(0x3732),'phy_com_x',_0x1e91
61(0x393b),_0x1e9161(0x30af),'phy_fixed_rotation',_0x1e9161(0x1584),_0x1e9161(0x47e4),_0x1e9161(0x27c8),_0x1e9161(0x882),'phy_linear_velocity_y',_0x1e9161(0x1bd5),_0x1e9161(0x3363),'phy_position_xprevious',_0x1e9161(0x3dfc),_0x1e9161(0x3087),_0x1e9161(0x510b),_0x1e9161(0x4841),_0x1e9161(0x36ba),'phy_speed_x','phy_speed_y',_0x1e9161(0x14b1),_0x1e9161(0x25c0),'room_caption','room_first',_0x1e9161(0x40f4),'room_last',_0x1e9161(0x3522),'room_speed',_0x1e9161(0x2662),'score','self',_0x1e9161(0x38a9),_0x1e9161(0x309),_0x1e9161(0x33a0),_0x1e9161(0x3657),_0x1e9161(0xae8),_0x1e9161(0x2a94),_0x1e9161(0x2f1a),_0x1e9161(0x3ae1),'sprite_xoffset','sprite_yoffset',_0x1e9161(0x1c20),_0x1e9161(0x2be0),_0x1e9161(0x440d),'timeline_position',_0x1e9161(0x2c48),_0x1e9161(0x11e7),_0x1e9161(0x101f),_0x1e9161(0x1b03),_0x1e9161(0x2350),_0x1e9161(0x3a98),_0x1e9161(0x1ac8),_0x1e9161(0x31b1),_0x1e9161(0x4915),_0x1e9161(0x32d5),'view_object',_0x1e9161(0x3d5),_0x1e9161(0x4542),_0x1e9161(0xc63),_0x1e9161(0x4f23),'view_wport',_0x1e9161(0x33c2),_0x1e9161(0x2442),_0x1e9161(0x4ec8),'view_yport',_0x1e9161(0x2a33),'visible',_0x1e9161(0x426),_0x1e9161(0x38ad),_0x1e9161(0x3d66),_0x1e9161(0x1eea),'xstart','x|0',_0x1e9161(0x2046),_0x1e9161(0x42bf),_0x1e9161(0x4292)]},'contains':[_0x5652fa[_0x1e9161(0x2ae2)],_0x5652fa[_0x1e9161(0x23fe)],_0x5652fa[_0x1e9161(0xa4c)],_0x5652fa['QUOTE_STRING_MODE'],_0x5652fa[_0x1e9161(0xd12)]]};};},0x2631:_0x8b5f0a=>{const _0x20ae2c=a0_0x11e7;_0x8b5f0a[_0x20ae2c(0x474c)]=function(_0x7a48e9){const 
_0x3f6d43=_0x20ae2c,_0x4b4354={'keyword':[_0x3f6d43(0x4e10),_0x3f6d43(0x2e7e),_0x3f6d43(0x1159),_0x3f6d43(0xc01),_0x3f6d43(0x16d9),'default',_0x3f6d43(0x2ceb),_0x3f6d43(0x3d4),_0x3f6d43(0x85d),_0x3f6d43(0x3c19),'func','go','goto','if',_0x3f6d43(0x331),_0x3f6d43(0x321b),_0x3f6d43(0x4833),_0x3f6d43(0x4bd0),_0x3f6d43(0x51f),_0x3f6d43(0xdfd),'select',_0x3f6d43(0x4146),'switch',_0x3f6d43(0xcfc),'var'],'type':[_0x3f6d43(0x3ebd),_0x3f6d43(0x961),'complex64','complex128',_0x3f6d43(0x3d85),_0x3f6d43(0x1db6),_0x3f6d43(0x2209),'int8',_0x3f6d43(0x22b1),'int32',_0x3f6d43(0x13c3),'string',_0x3f6d43(0xd4d),'uint16',_0x3f6d43(0xa09),_0x3f6d43(0x491f),'int',_0x3f6d43(0x3a0b),_0x3f6d43(0x2fc8),'rune'],'literal':[_0x3f6d43(0x4022),'false',_0x3f6d43(0x3714),'nil'],'built_in':[_0x3f6d43(0x366b),_0x3f6d43(0xc2c),_0x3f6d43(0x50d8),_0x3f6d43(0x3a77),_0x3f6d43(0x16c0),_0x3f6d43(0x6ec),_0x3f6d43(0x4de4),_0x3f6d43(0x3c94),'new',_0x3f6d43(0x3649),'print',_0x3f6d43(0x2d7e),_0x3f6d43(0x47f6),'recover',_0x3f6d43(0x5be)]};return{'name':'Go','aliases':[_0x3f6d43(0x2cb4)],'keywords':_0x4b4354,'illegal':'{_0x42b142['exports']=function(_0x199e9f){const 
_0x31a07a=a0_0x11e7;return{'name':'Golo','keywords':{'keyword':[_0x31a07a(0x2d7e),'readln',_0x31a07a(0x4957),_0x31a07a(0x331),_0x31a07a(0x196c),_0x31a07a(0x14b2),_0x31a07a(0x16a7),_0x31a07a(0xdfd),'let',_0x31a07a(0x469d),_0x31a07a(0x552),_0x31a07a(0x3c19),_0x31a07a(0x4185),_0x31a07a(0xc31),'in',_0x31a07a(0x2e7e),_0x31a07a(0x191b),_0x31a07a(0x2d96),_0x31a07a(0x2aa7),'break',_0x31a07a(0x16d9),_0x31a07a(0x3f0e),_0x31a07a(0x22f5),_0x31a07a(0x2f9e),_0x31a07a(0x5144),_0x31a07a(0x1465),'reduce','if',_0x31a07a(0xaf5),'else',_0x31a07a(0x313e),_0x31a07a(0x422b),_0x31a07a(0x31a3),'finally',_0x31a07a(0x2c1e),'throw','orIfNull','DynamicObject|10',_0x31a07a(0x7cb),_0x31a07a(0x4146),_0x31a07a(0x3ce0),_0x31a07a(0x4833),_0x31a07a(0x1fa),_0x31a07a(0x4836),_0x31a07a(0x144e),'array'],'literal':['true',_0x31a07a(0x3984),_0x31a07a(0x1582)]},'contains':[_0x199e9f[_0x31a07a(0x2bbe)],_0x199e9f[_0x31a07a(0x291b)],_0x199e9f[_0x31a07a(0xd12)],{'className':'meta','begin':_0x31a07a(0x4d18)}]};};},0x598:_0xae3f7c=>{_0xae3f7c['exports']=function(_0x314dd1){const 
_0x191d01=a0_0x11e7;return{'name':_0x191d01(0x1433),'case_insensitive':!0x0,'keywords':[_0x191d01(0x3cef),_0x191d01(0x1214),'allprojects','subprojects',_0x191d01(0x515a),_0x191d01(0x424e),_0x191d01(0xce4),'dependencies','repositories',_0x191d01(0x2409),_0x191d01(0x900),'delete','from',_0x191d01(0x31cf),_0x191d01(0x478e),_0x191d01(0x433f),'source',_0x191d01(0xb80),_0x191d01(0x47f5),'includes',_0x191d01(0x20b6),'sourceCompatibility','targetCompatibility','group',_0x191d01(0x1c69),_0x191d01(0x391d),_0x191d01(0x253a),_0x191d01(0x14ff),_0x191d01(0x3dc9),_0x191d01(0x3381),_0x191d01(0x3cc8),_0x191d01(0x452b),_0x191d01(0x3027),_0x191d01(0x4e10),_0x191d01(0x2e7e),_0x191d01(0x31a3),'continue',_0x191d01(0x3d23),'do','else',_0x191d01(0x4428),_0x191d01(0x27e4),_0x191d01(0x37b2),_0x191d01(0x3c19),'if',_0x191d01(0x6c3),_0x191d01(0xf3c),_0x191d01(0x50dc),_0x191d01(0x4321),_0x191d01(0x4ef4),_0x191d01(0xc14),_0x191d01(0x39ce),_0x191d01(0xdfd),_0x191d01(0x2c7c),'switch','synchronized',_0x191d01(0x383),_0x191d01(0x20f8),'transient',_0x191d01(0x422b),_0x191d01(0x3512),_0x191d01(0x552),_0x191d01(0x4005),'package',_0x191d01(0x331),_0x191d01(0x3984),'null','super',_0x191d01(0x138f),_0x191d01(0x4022),_0x191d01(0x1bc2),'checkstyle','codenarc','copy','boolean',_0x191d01(0x961),_0x191d01(0x373c),_0x191d01(0x1390),_0x191d01(0x5024),_0x191d01(0x1ab8),_0x191d01(0xc16),_0x191d01(0x321b),'long',_0x191d01(0x4085),_0x191d01(0x27d6),_0x191d01(0x23cd),_0x191d01(0x24b9),'file','fileTree',_0x191d01(0xbe0),_0x191d01(0x4684),_0x191d01(0x366b),'asList',_0x191d01(0x25b0),'call',_0x191d01(0x325d),_0x191d01(0x500a),_0x191d01(0x404e),_0x191d01(0x4c88),_0x191d01(0x3dd8),_0x191d01(0x2f9e),'eachByte',_0x191d01(0x54a),_0x191d01(0x2ae7),'every','find',_0x191d01(0x1236),'flatten','getAt',_0x191d01(0x1e70),_0x191d01(0x833),_0x191d01(0xee4),_0x191d01(0x1812),_0x191d01(0x3e9a),_0x191d01(0xa78),_0x191d01(0x3ada),_0x191d01(0x3502),'intersect',_0x191d01(0x2745),_0x191d01(0x1a7f),_0x191d01(0x3541),'leftShift',_0x191d01(0x3c
e8),'multiply',_0x191d01(0x3c1b),_0x191d01(0x129f),_0x191d01(0x22be),_0x191d01(0x12b0),_0x191d01(0x30f9),_0x191d01(0x3dc6),_0x191d01(0x3a4b),'pop',_0x191d01(0x17a4),_0x191d01(0x40bb),'print',_0x191d01(0x2d7e),'push','putAt',_0x191d01(0x50a6),_0x191d01(0xd28),_0x191d01(0x3138),_0x191d01(0x78b),'reverseEach',_0x191d01(0x3d6c),_0x191d01(0x395f),_0x191d01(0x4c33),'splitEachLine','step',_0x191d01(0xe93),'times',_0x191d01(0x4460),'toList','tokenize',_0x191d01(0x1478),_0x191d01(0x37ba),'withPrintWriter',_0x191d01(0x281e),_0x191d01(0x4af2),'withWriter','withWriterAppend',_0x191d01(0x4c95),_0x191d01(0xc17)],'contains':[_0x314dd1[_0x191d01(0x2ae2)],_0x314dd1['C_BLOCK_COMMENT_MODE'],_0x314dd1[_0x191d01(0xa4c)],_0x314dd1[_0x191d01(0x291b)],_0x314dd1[_0x191d01(0x30be)],_0x314dd1[_0x191d01(0x2db5)]]};};},0x1d32:_0x5862dc=>{_0x5862dc['exports']=function(_0x126544){const _0x3c76ff=a0_0x11e7,_0x1aa705=_0x126544['regex'];return{'name':_0x3c76ff(0x2160),'aliases':[_0x3c76ff(0x3cbf)],'case_insensitive':!0x0,'disableAutodetect':!0x1,'keywords':{'keyword':[_0x3c76ff(0x1cff),_0x3c76ff(0x393e),'subscription',_0x3c76ff(0xcfc),_0x3c76ff(0x7b0),'schema',_0x3c76ff(0x2c70),_0x3c76ff(0x321b),'union','scalar',_0x3c76ff(0x407d),_0x3c76ff(0x44d8),'on'],'literal':['true',_0x3c76ff(0x3984),_0x3c76ff(0x1582)]},'contains':[_0x126544[_0x3c76ff(0x2bbe)],_0x126544[_0x3c76ff(0x291b)],_0x126544['NUMBER_MODE'],{'scope':'punctuation','match':/[.]{3}/,'relevance':0x0},{'scope':'punctuation','begin':/[\!\(\)\:\=\[\]\{\|\}]{1}/,'relevance':0x0},{'scope':'variable','begin':/\$/,'end':/\W/,'excludeEnd':!0x0,'relevance':0x0},{'scope':_0x3c76ff(0x5153),'match':/@\w+/,'excludeEnd':!0x0},{'scope':_0x3c76ff(0x239b),'begin':_0x1aa705[_0x3c76ff(0x1d1d)](/[_A-Za-z][_0-9A-Za-z]*/,_0x1aa705[_0x3c76ff(0x3296)](/\s*:/)),'relevance':0x0}],'illegal':[/[;<']/,/BEGIN/]};};},0xfd:_0xfffc89=>{const _0x13daaa=a0_0x11e7;function _0x30c48b(_0x584820,_0x228708={}){const _0x1d922b=a0_0x11e7;return 
_0x228708[_0x1d922b(0x2807)]=_0x584820,_0x228708;}_0xfffc89[_0x13daaa(0x474c)]=function(_0x2086ac){const _0x3ca869=_0x13daaa,_0x5b4b3e=_0x2086ac['regex'],_0x2a574b=_0x3ca869(0xeba),_0x3b6790=_0x30c48b([_0x2086ac['C_LINE_COMMENT_MODE'],_0x2086ac[_0x3ca869(0x23fe)],_0x2086ac[_0x3ca869(0x4e4f)](_0x3ca869(0x945),_0x3ca869(0x1820),{'relevance':0x0,'contains':[{'begin':/\w+@/,'relevance':0x0},{'className':'doctag','begin':'@[A-Za-z]+'}]})]),_0x542a15={'className':'regexp','begin':/~?\/[^\/\n]+\//,'contains':[_0x2086ac[_0x3ca869(0x4a76)]]},_0x2af8e7=_0x30c48b([_0x2086ac['BINARY_NUMBER_MODE'],_0x2086ac[_0x3ca869(0xd12)]]),_0x39fe2b=_0x30c48b([{'begin':/"""/,'end':/"""/},{'begin':/'''/,'end':/'''/},{'begin':_0x3ca869(0x2fe4),'end':'/\x5c$','relevance':0xa},_0x2086ac[_0x3ca869(0xa4c)],_0x2086ac['QUOTE_STRING_MODE']],{'className':_0x3ca869(0x2431)}),_0x5a9bb1={'match':[/(class|interface|trait|enum|record|extends|implements)/,/\s+/,_0x2086ac[_0x3ca869(0x206e)]],'scope':{0x1:_0x3ca869(0x1357),0x3:_0x3ca869(0x19e4)}};return{'name':_0x3ca869(0x2b46),'keywords':{'variable.language':_0x3ca869(0xf3e),'literal':_0x3ca869(0x12f3),'type':[_0x3ca869(0x961),_0x3ca869(0x4085),_0x3ca869(0x373c),_0x3ca869(0xc16),_0x3ca869(0x324f),_0x3ca869(0x1e8d),_0x3ca869(0x1ab8),'double',_0x3ca869(0x27d6)],'keyword':[_0x3ca869(0x452b),'as','in',_0x3ca869(0x4fd4),_0x3ca869(0x4659),'abstract',_0x3ca869(0x2c7c),_0x3ca869(0x3512),_0x3ca869(0xf95),'public','private',_0x3ca869(0xc14),_0x3ca869(0x2715),'final',_0x3ca869(0x1390),_0x3ca869(0x321b),_0x3ca869(0x44d8),'if',_0x3ca869(0x3d4),_0x3ca869(0x3c19),_0x3ca869(0x552),_0x3ca869(0x857),_0x3ca869(0x2e7e),_0x3ca869(0x4e10),_0x3ca869(0x3d23),_0x3ca869(0x16d9),'throw',_0x3ca869(0x20f8),_0x3ca869(0x422b),_0x3ca869(0x31a3),_0x3ca869(0x37b2),_0x3ca869(0x6c3),_0x3ca869(0x4428),'new','import',_0x3ca869(0x4bd0),_0x3ca869(0xdfd),'instanceof',_0x3ca869(0x469d)]},'contains':[_0x2086ac['SHEBANG']({'binary':_0x3ca869(0x3015),'relevance':0xa}),_0x3b6790,_0x39fe2b,_0x542a15,_0x2
af8e7,_0x5a9bb1,{'className':'meta','begin':_0x3ca869(0x4d18),'relevance':0x0},{'className':_0x3ca869(0x431d),'begin':_0x2a574b+_0x3ca869(0xe4e),'relevance':0x0},{'begin':/\?/,'end':/:/,'relevance':0x0,'contains':[_0x3b6790,_0x39fe2b,_0x542a15,_0x2af8e7,_0x3ca869(0x4454)]},{'className':_0x3ca869(0x239b),'begin':_0x3ca869(0x1f9e)+_0x5b4b3e[_0x3ca869(0x3296)](_0x2a574b+':'),'excludeBegin':!0x0,'end':_0x2a574b+':','relevance':0x0}],'illegal':/#|<\//};};},0x11a5:_0x37594c=>{const _0x374227=a0_0x11e7;_0x37594c[_0x374227(0x474c)]=function(_0x356bc8){const _0x1ebe8e=_0x374227;return{'name':'HAML','case_insensitive':!0x0,'contains':[{'className':_0x1ebe8e(0x5153),'begin':_0x1ebe8e(0x506f),'relevance':0xa},_0x356bc8[_0x1ebe8e(0x4e4f)](_0x1ebe8e(0x4faf),null,{'relevance':0x0}),{'begin':_0x1ebe8e(0x4eff),'end':/$/,'subLanguage':_0x1ebe8e(0x4430),'excludeBegin':!0x0,'excludeEnd':!0x0},{'className':_0x1ebe8e(0x15a9),'begin':'^\x5cs*%','contains':[{'className':_0x1ebe8e(0x527d),'begin':'\x5cw+'},{'className':_0x1ebe8e(0x3713),'begin':'#[\x5cw-]+'},{'className':_0x1ebe8e(0x326),'begin':'\x5c.[\x5cw-]+'},{'begin':/\{\s*/,'end':/\s*\}/,'contains':[{'begin':_0x1ebe8e(0x1c70),'end':',\x5cs+','returnBegin':!0x0,'endsWithParent':!0x0,'contains':[{'className':_0x1ebe8e(0x431d),'begin':_0x1ebe8e(0x2668)},_0x356bc8[_0x1ebe8e(0xa4c)],_0x356bc8[_0x1ebe8e(0x291b)],{'begin':_0x1ebe8e(0xdce),'relevance':0x0}]}]},{'begin':_0x1ebe8e(0x1778),'end':_0x1ebe8e(0x1e39),'excludeEnd':!0x0,'contains':[{'begin':_0x1ebe8e(0x1223),'end':_0x1ebe8e(0x16cc),'returnBegin':!0x0,'endsWithParent':!0x0,'contains':[{'className':_0x1ebe8e(0x431d),'begin':'\x5cw+','relevance':0x0},_0x356bc8[_0x1ebe8e(0xa4c)],_0x356bc8['QUOTE_STRING_MODE'],{'begin':'\x5cw+','relevance':0x0}]}]}]},{'begin':_0x1ebe8e(0x164a)},{'begin':/#\{/,'end':/\}/,'subLanguage':_0x1ebe8e(0x4430),'excludeBegin':!0x0,'excludeEnd':!0x0}]};};},0xc73:_0x580fcd=>{const _0x52c4c1=a0_0x11e7;_0x580fcd[_0x52c4c1(0x474c)]=function(_0xdadd1a){const 
_0x285755=_0x52c4c1,_0x41e712=_0xdadd1a[_0x285755(0x41d2)],_0x3abb39={'$pattern':/[\w.\/]+/,'built_in':[_0x285755(0x1a54),'bindattr',_0x285755(0x3caa),_0x285755(0x2bf5),_0x285755(0x1d1d),_0x285755(0x2085),_0x285755(0x2f9e),_0x285755(0x30f4),_0x285755(0xf9e),_0x285755(0x40c0),'if','in',_0x285755(0x7b0),_0x285755(0x2fb1),_0x285755(0x1d02),_0x285755(0x20ff),_0x285755(0x4823),'mut','outlet','partial',_0x285755(0x849),_0x285755(0x17d9),'template',_0x285755(0x39cd),_0x285755(0x3f49),_0x285755(0x26b1),'view','with','yield']},_0x241d8e=/\[\]|\[[^\]]+\]/,_0x27c091=/[^\s!"#%&'()*+,.\/;<=>@\[\\\]^`{|}~]+/,_0x4bb38c=_0x41e712[_0x285755(0x583)](/""|"[^"]+"/,/''|'[^']+'/,_0x241d8e,_0x27c091),_0x368070=_0x41e712[_0x285755(0x1d1d)](_0x41e712['optional'](/\.|\.\/|\//),_0x4bb38c,_0x41e712['anyNumberOfTimes'](_0x41e712['concat'](/(\.|\/)/,_0x4bb38c))),_0x134fbf=_0x41e712[_0x285755(0x1d1d)]('(',_0x241d8e,'|',_0x27c091,_0x285755(0x1e24)),_0xccec3c={'begin':_0x368070},_0x14d240=_0xdadd1a[_0x285755(0x46a1)](_0xccec3c,{'keywords':{'$pattern':/[\w.\/]+/,'literal':[_0x285755(0x4022),_0x285755(0x3984),_0x285755(0x1daa),'null']}}),_0x50d4f2={'begin':/\(/,'end':/\)/},_0x256d87={'className':'attr','begin':_0x134fbf,'relevance':0x0,'starts':{'begin':/=/,'end':/=/,'starts':{'contains':[_0xdadd1a['NUMBER_MODE'],_0xdadd1a[_0x285755(0x291b)],_0xdadd1a[_0x285755(0xa4c)],_0x14d240,_0x50d4f2]}}},_0x9c18aa={'contains':[_0xdadd1a['NUMBER_MODE'],_0xdadd1a[_0x285755(0x291b)],_0xdadd1a[_0x285755(0xa4c)],{'begin':/as\s+\|/,'keywords':{'keyword':'as'},'end':/\|/,'contains':[{'begin':/\w+/}]},_0x256d87,_0x14d240,_0x50d4f2],'returnEnd':!0x0},_0x39591b=_0xdadd1a[_0x285755(0x46a1)](_0xccec3c,{'className':_0x285755(0x11d8),'keywords':_0x3abb39,'starts':_0xdadd1a[_0x285755(0x46a1)](_0x9c18aa,{'end':/\)/})});_0x50d4f2[_0x285755(0x2b31)]=[_0x39591b];const 
_0x409631=_0xdadd1a[_0x285755(0x46a1)](_0xccec3c,{'keywords':_0x3abb39,'className':_0x285755(0x11d8),'starts':_0xdadd1a['inherit'](_0x9c18aa,{'end':/\}\}/})}),_0x392fc7=_0xdadd1a[_0x285755(0x46a1)](_0xccec3c,{'keywords':_0x3abb39,'className':_0x285755(0x11d8)}),_0x58f49d=_0xdadd1a[_0x285755(0x46a1)](_0xccec3c,{'className':_0x285755(0x11d8),'keywords':_0x3abb39,'starts':_0xdadd1a[_0x285755(0x46a1)](_0x9c18aa,{'end':/\}\}/})});return{'name':_0x285755(0x1593),'aliases':[_0x285755(0x2449),_0x285755(0x3be1),_0x285755(0x46c),'htmlbars'],'case_insensitive':!0x0,'subLanguage':'xml','contains':[{'begin':/\\\{\{/,'skip':!0x0},{'begin':/\\\\(?=\{\{)/,'skip':!0x0},_0xdadd1a[_0x285755(0x4e4f)](/\{\{!--/,/--\}\}/),_0xdadd1a['COMMENT'](/\{\{!/,/\}\}/),{'className':_0x285755(0x499d),'begin':/\{\{\{\{(?!\/)/,'end':/\}\}\}\}/,'contains':[_0x409631],'starts':{'end':/\{\{\{\{\//,'returnEnd':!0x0,'subLanguage':_0x285755(0x2655)}},{'className':'template-tag','begin':/\{\{\{\{\//,'end':/\}\}\}\}/,'contains':[_0x392fc7]},{'className':'template-tag','begin':/\{\{#/,'end':/\}\}/,'contains':[_0x409631]},{'className':_0x285755(0x499d),'begin':/\{\{(?=else\}\})/,'end':/\}\}/,'keywords':_0x285755(0x3d4)},{'className':_0x285755(0x499d),'begin':/\{\{(?=else if)/,'end':/\}\}/,'keywords':_0x285755(0x20a7)},{'className':_0x285755(0x499d),'begin':/\{\{\//,'end':/\}\}/,'contains':[_0x392fc7]},{'className':_0x285755(0x2916),'begin':/\{\{\{/,'end':/\}\}\}/,'contains':[_0x58f49d]},{'className':_0x285755(0x2916),'begin':/\{\{/,'end':/\}\}/,'contains':[_0x58f49d]}]};};},0x17b:_0x57b0af=>{const _0x4852d8=a0_0x11e7;_0x57b0af[_0x4852d8(0x474c)]=function(_0x425fad){const 
_0xd45199=_0x4852d8,_0x448cf1=_0xd45199(0x35b3),_0x3504f0=_0xd45199(0x4edf),_0x20c882=_0xd45199(0x2d6c),_0x16be0a={'variants':[_0x425fad[_0xd45199(0x4e4f)]('--+','$'),_0x425fad[_0xd45199(0x4e4f)](/\{-/,/-\}/,{'contains':[_0xd45199(0x4454)]})]},_0x5aa6bc={'className':_0xd45199(0x5153),'begin':/\{-#/,'end':/#-\}/},_0x53e195={'className':'meta','begin':'^#','end':'$'},_0x23785d={'className':'type','begin':_0xd45199(0x2013),'relevance':0x0},_0xffc788={'begin':'\x5c(','end':'\x5c)','illegal':'\x22','contains':[_0x5aa6bc,_0x53e195,{'className':_0xd45199(0xcfc),'begin':'\x5cb[A-Z][\x5cw]*(\x5c((\x5c.\x5c.|,|\x5cw+)\x5c))?'},_0x425fad[_0xd45199(0x46a1)](_0x425fad['TITLE_MODE'],{'begin':_0xd45199(0xe83)}),_0x16be0a]},_0x2e857e={'className':_0xd45199(0x4a80),'relevance':0x0,'variants':[{'match':_0xd45199(0x4cd5)+_0x448cf1+')(\x5c.('+_0x448cf1+_0xd45199(0x2385)+_0x448cf1+_0xd45199(0x1ed0)},{'match':_0xd45199(0x5273)+_0x3504f0+_0xd45199(0x2742)+_0x3504f0+'))?([pP][+-]?('+_0x448cf1+_0xd45199(0x1ed0)},{'match':'\x5cb0[oO](([0-7]_*)+)\x5cb'},{'match':_0xd45199(0x47bb)}]};return{'name':'Haskell','aliases':['hs'],'keywords':'let\x20in\x20if\x20then\x20else\x20case\x20of\x20where\x20do\x20module\x20import\x20hiding\x20qualified\x20type\x20data\x20newtype\x20deriving\x20class\x20instance\x20as\x20default\x20infix\x20infixl\x20infixr\x20foreign\x20export\x20ccall\x20stdcall\x20cplusplus\x20jvm\x20dotnet\x20safe\x20unsafe\x20family\x20forall\x20mdo\x20proc\x20rec','unicodeRegex':!0x0,'contains':[{'beginKeywords':_0xd45199(0x196c),'end':_0xd45199(0x3b62),'keywords':_0xd45199(0x45ba),'contains':[_0xffc788,_0x16be0a],'illegal':_0xd45199(0x40e)},{'begin':_0xd45199(0x2f09),'end':'$','keywords':_0xd45199(0x2ee3),'contains':[_0xffc788,_0x16be0a],'illegal':_0xd45199(0x40e)},{'className':_0xd45199(0x1390),'begin':'^(\x5cs*)?(class|instance)\x5cb','end':'where','keywords':_0xd45199(0x51e7),'contains':[_0x23785d,_0xffc788,_0x16be0a]},{'className':_0xd45199(0x1390),'begin':_0xd45199(0x2081),'end':'
$','keywords':_0xd45199(0x1294),'contains':[_0x5aa6bc,_0x23785d,_0xffc788,{'begin':/\{/,'end':/\}/,'contains':_0xffc788[_0xd45199(0x2b31)]},_0x16be0a]},{'beginKeywords':_0xd45199(0x3d23),'end':'$','contains':[_0x23785d,_0xffc788,_0x16be0a]},{'beginKeywords':_0xd45199(0x13fc),'end':'$','contains':[_0x425fad[_0xd45199(0xd12)],_0x16be0a]},{'begin':_0xd45199(0x16a5),'end':'$','keywords':_0xd45199(0x31bf),'contains':[_0x23785d,_0x425fad['QUOTE_STRING_MODE'],_0x16be0a]},{'className':_0xd45199(0x5153),'begin':_0xd45199(0x2857),'end':'$'},_0x5aa6bc,_0x53e195,{'scope':_0xd45199(0x2431),'begin':/'(?=\\?.')/,'end':/'/,'contains':[{'scope':_0xd45199(0x2825),'match':/\\./}]},_0x425fad['QUOTE_STRING_MODE'],_0x2e857e,_0x23785d,_0x425fad['inherit'](_0x425fad[_0xd45199(0x2029)],{'begin':_0xd45199(0x2749)}),{'begin':_0xd45199(0x3f71)+_0x20c882+_0xd45199(0x1b4b)+_0x20c882},_0x16be0a,{'begin':_0xd45199(0x3382)}]};};},0x102b:_0x16265f=>{const _0x428e3a=a0_0x11e7;_0x16265f[_0x428e3a(0x474c)]=function(_0x2cf82b){const _0x5b9bad=_0x428e3a;return{'name':_0x5b9bad(0x1f73),'aliases':['hx'],'keywords':{'keyword':_0x5b9bad(0x26b7),'built_in':'trace\x20this','literal':_0x5b9bad(0x5272)},'contains':[{'className':_0x5b9bad(0x2431),'begin':'\x27','end':'\x27','contains':[_0x2cf82b[_0x5b9bad(0x4a76)],{'className':_0x5b9bad(0x2ad6),'begin':/\$\{/,'end':/\}/},{'className':'subst','begin':/\$/,'end':/\W\}/}]},_0x2cf82b['QUOTE_STRING_MODE'],_0x2cf82b[_0x5b9bad(0x2ae2)],_0x2cf82b[_0x5b9bad(0x23fe)],{'className':_0x5b9bad(0x4a80),'begin':/(-?)(\b0[xX][a-fA-F0-9_]+|(\b\d+(\.[\d_]*)?|\.[\d_]+)(([eE][-+]?\d+)|i32|u32|i64|f64)?)/,'relevance':0x0},{'className':_0x5b9bad(0x3362),'begin':'\x5c$[a-zA-Z_$][a-zA-Z0-9_$]*'},{'className':'meta','begin':/@:?/,'end':/\(|$/,'excludeEnd':!0x0},{'className':_0x5b9bad(0x5153),'begin':'#','end':'$','keywords':{'keyword':_0x5b9bad(0x920)}},{'className':_0x5b9bad(0xcfc),'begin':/:[ \t]*/,'end':/[^A-Za-z0-9_ 
\t\->]/,'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0},{'className':_0x5b9bad(0xcfc),'begin':/:[ \t]*/,'end':/\W/,'excludeBegin':!0x0,'excludeEnd':!0x0},{'className':_0x5b9bad(0xcfc),'begin':/new */,'end':/\W/,'excludeBegin':!0x0,'excludeEnd':!0x0},{'className':'title.class','beginKeywords':_0x5b9bad(0x44d8),'end':/\{/,'contains':[_0x2cf82b[_0x5b9bad(0x2029)]]},{'className':_0x5b9bad(0x19e4),'begin':'\x5cbabstract\x5cb(?=\x5cs*'+_0x2cf82b[_0x5b9bad(0xacc)]+_0x5b9bad(0x16f7),'end':/[\{$]/,'contains':[{'className':_0x5b9bad(0xcfc),'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0},{'className':'type','begin':/from +/,'end':/\W/,'excludeBegin':!0x0,'excludeEnd':!0x0},{'className':_0x5b9bad(0xcfc),'begin':/to +/,'end':/\W/,'excludeBegin':!0x0,'excludeEnd':!0x0},_0x2cf82b[_0x5b9bad(0x2029)]],'keywords':{'keyword':_0x5b9bad(0x1876)}},{'className':_0x5b9bad(0x19e4),'begin':/\b(class|interface) +/,'end':/[\{$]/,'excludeEnd':!0x0,'keywords':'class\x20interface','contains':[{'className':_0x5b9bad(0x1357),'begin':/\b(extends|implements) +/,'keywords':_0x5b9bad(0x4c1c),'contains':[{'className':_0x5b9bad(0xcfc),'begin':_0x2cf82b[_0x5b9bad(0xacc)],'relevance':0x0}]},_0x2cf82b['TITLE_MODE']]},{'className':_0x5b9bad(0x20db),'beginKeywords':_0x5b9bad(0x14b2),'end':/\(/,'excludeEnd':!0x0,'illegal':/\S/,'contains':[_0x2cf82b[_0x5b9bad(0x2029)]]}],'illegal':/<\//};};},0xfa8:_0x3181fa=>{const _0x2252cc=a0_0x11e7;_0x3181fa[_0x2252cc(0x474c)]=function(_0x5634dd){const 
_0x353965=_0x2252cc;return{'name':_0x353965(0x38e7),'case_insensitive':!0x0,'keywords':{'$pattern':/[\w._]+/,'keyword':_0x353965(0x2abd)},'contains':[_0x5634dd[_0x353965(0x2ae2)],_0x5634dd[_0x353965(0x23fe)],_0x5634dd[_0x353965(0x291b)],_0x5634dd[_0x353965(0xa4c)],{'className':'string','begin':/\{"/,'end':/"\}/,'contains':[_0x5634dd['BACKSLASH_ESCAPE']]},_0x5634dd['COMMENT'](';','$',{'relevance':0x0}),{'className':_0x353965(0x5153),'begin':'#','end':'$','keywords':{'keyword':'addion\x20cfunc\x20cmd\x20cmpopt\x20comfunc\x20const\x20defcfunc\x20deffunc\x20define\x20else\x20endif\x20enum\x20epack\x20func\x20global\x20if\x20ifdef\x20ifndef\x20include\x20modcfunc\x20modfunc\x20modinit\x20modterm\x20module\x20pack\x20packopt\x20regcmd\x20runtime\x20undef\x20usecom\x20uselib'},'contains':[_0x5634dd['inherit'](_0x5634dd[_0x353965(0x291b)],{'className':'string'}),_0x5634dd['NUMBER_MODE'],_0x5634dd[_0x353965(0xd12)],_0x5634dd[_0x353965(0x2ae2)],_0x5634dd['C_BLOCK_COMMENT_MODE']]},{'className':_0x353965(0x239b),'begin':_0x353965(0x2206)},_0x5634dd[_0x353965(0x30be)],_0x5634dd['C_NUMBER_MODE']]};};},0xd37:_0x2058d9=>{_0x2058d9['exports']=function(_0x59eb77){const _0x1de1f7=a0_0x11e7,_0x2d3b43=_0x1de1f7(0x97f),_0x47ed9a={'className':_0x1de1f7(0x263f),'begin':_0x59eb77[_0x1de1f7(0x41d2)][_0x1de1f7(0x1d1d)]('^',/[A-Za-z][A-Za-z0-9-]*/,_0x1de1f7(0x18f0)),'starts':{'contains':[{'className':_0x1de1f7(0xa25),'begin':/: 
/,'relevance':0x0,'starts':{'end':'$','relevance':0x0}}]}},_0x321bf9=[_0x47ed9a,{'begin':_0x1de1f7(0x1022),'starts':{'subLanguage':[],'endsWithParent':!0x0}}];return{'name':_0x1de1f7(0x25ab),'aliases':[_0x1de1f7(0x3539)],'illegal':/\S/,'contains':[{'begin':_0x1de1f7(0x2616)+_0x2d3b43+_0x1de1f7(0x1aae),'end':/$/,'contains':[{'className':'meta','begin':_0x2d3b43},{'className':_0x1de1f7(0x4a80),'begin':_0x1de1f7(0x19d2)}],'starts':{'end':/\b\B/,'illegal':/\S/,'contains':_0x321bf9}},{'begin':_0x1de1f7(0x5124)+_0x2d3b43+'$)','end':/$/,'contains':[{'className':_0x1de1f7(0x2431),'begin':'\x20','end':'\x20','excludeBegin':!0x0,'excludeEnd':!0x0},{'className':_0x1de1f7(0x5153),'begin':_0x2d3b43},{'className':'keyword','begin':_0x1de1f7(0x280d)}],'starts':{'end':/\b\B/,'illegal':/\S/,'contains':_0x321bf9}},_0x59eb77[_0x1de1f7(0x46a1)](_0x47ed9a,{'relevance':0x0})]};};},0x1250:_0x3bf661=>{const _0xb5f112=a0_0x11e7;_0x3bf661[_0xb5f112(0x474c)]=function(_0x490ae8){const _0x535f56=_0xb5f112,_0x591a69='a-zA-Z_\x5c-!.?+*=<>&#\x27',_0x55fb45='['+_0x591a69+']['+_0x591a69+_0x535f56(0x4b49),_0x1b5a2b={'$pattern':_0x55fb45,'built_in':_0x535f56(0x2cca)},_0xa98ea4={'begin':_0x55fb45,'relevance':0x0},_0x57a6f3={'className':'number','begin':'[-+]?\x5cd+(\x5c.\x5cd+)?','relevance':0x0},_0x17b8e0=_0x490ae8[_0x535f56(0x46a1)](_0x490ae8[_0x535f56(0x291b)],{'illegal':null}),_0x4746e2=_0x490ae8['COMMENT'](';','$',{'relevance':0x0}),_0x59befe={'className':'literal','begin':/\b([Tt]rue|[Ff]alse|nil|None)\b/},_0x21fc75={'begin':_0x535f56(0x1e12),'end':_0x535f56(0x1b87),'relevance':0x0},_0x315d52={'className':_0x535f56(0x4645),'begin':'\x5c^'+_0x55fb45},_0x25451c=_0x490ae8[_0x535f56(0x4e4f)](_0x535f56(0x425f),'\x5c}'),_0x2b137e={'className':'symbol','begin':_0x535f56(0x51db)+_0x55fb45},_0x5c6219={'begin':'\x5c(','end':'\x5c)'},_0x3f4d6f={'endsWithParent':!0x0,'relevance':0x0},_0x34e219={'className':_0x535f56(0x11d8),'relevance':0x0,'keywords':_0x1b5a2b,'begin':_0x55fb45,'starts':_0x3f4d6f},_0x34c756=
[_0x5c6219,_0x17b8e0,_0x315d52,_0x25451c,_0x4746e2,_0x2b137e,_0x21fc75,_0x57a6f3,_0x59befe,_0xa98ea4];return _0x5c6219[_0x535f56(0x2b31)]=[_0x490ae8[_0x535f56(0x4e4f)](_0x535f56(0x4645),''),_0x34e219,_0x3f4d6f],_0x3f4d6f['contains']=_0x34c756,_0x21fc75[_0x535f56(0x2b31)]=_0x34c756,{'name':'Hy','aliases':[_0x535f56(0x1458)],'illegal':/\S/,'contains':[_0x490ae8[_0x535f56(0x307a)](),_0x5c6219,_0x17b8e0,_0x315d52,_0x25451c,_0x4746e2,_0x2b137e,_0x21fc75,_0x57a6f3,_0x59befe]};};},0x1b39:_0x523baa=>{_0x523baa['exports']=function(_0x3749c1){const _0xef73c3=a0_0x11e7;return{'name':_0xef73c3(0x2a4d),'aliases':['i7'],'case_insensitive':!0x0,'keywords':{'keyword':'thing\x20room\x20person\x20man\x20woman\x20animal\x20container\x20supporter\x20backdrop\x20door\x20scenery\x20open\x20closed\x20locked\x20inside\x20gender\x20is\x20are\x20say\x20understand\x20kind\x20of\x20rule'},'contains':[{'className':'string','begin':'\x22','end':'\x22','relevance':0x0,'contains':[{'className':'subst','begin':'\x5c[','end':'\x5c]'}]},{'className':'section','begin':/^(Volume|Book|Part|Chapter|Section|Table)\b/,'end':'$'},{'begin':/^(Check|Carry out|Report|Instead of|To|Rule|When|Before|After)\b/,'end':':','contains':[{'begin':_0xef73c3(0x495),'end':'\x5c)'}]},{'className':_0xef73c3(0x4645),'begin':'\x5c[','end':'\x5c]','contains':[_0xef73c3(0x4454)]}]};};},0x5fd:_0x1466d1=>{_0x1466d1['exports']=function(_0x26fe33){const _0x52cc03=a0_0x11e7,_0x1f64b2=_0x26fe33[_0x52cc03(0x41d2)],_0x4fb959={'className':_0x52cc03(0x4a80),'relevance':0x0,'variants':[{'begin':/([+-]+)?[\d]+_[\d_]+/},{'begin':_0x26fe33[_0x52cc03(0x5047)]}]},_0x249b1a=_0x26fe33[_0x52cc03(0x4e4f)]();_0x249b1a[_0x52cc03(0x2807)]=[{'begin':/;/,'end':/$/},{'begin':/#/,'end':/$/}];const 
_0x388871={'className':'variable','variants':[{'begin':/\$[\w\d"][\w\d_]*/},{'begin':/\$\{(.*?)\}/}]},_0x273c1d={'className':'literal','begin':/\bon|off|true|false|yes|no\b/},_0x549e83={'className':_0x52cc03(0x2431),'contains':[_0x26fe33[_0x52cc03(0x4a76)]],'variants':[{'begin':_0x52cc03(0x3f83),'end':_0x52cc03(0x3f83),'relevance':0xa},{'begin':'\x22\x22\x22','end':_0x52cc03(0xb00),'relevance':0xa},{'begin':'\x22','end':'\x22'},{'begin':'\x27','end':'\x27'}]},_0x3524d7={'begin':/\[/,'end':/\]/,'contains':[_0x249b1a,_0x273c1d,_0x388871,_0x549e83,_0x4fb959,_0x52cc03(0x4454)],'relevance':0x0},_0x1507c0=_0x1f64b2[_0x52cc03(0x583)](/[A-Za-z0-9_-]+/,/"(\\"|[^"])*"/,/'[^']*'/);return{'name':_0x52cc03(0x4787),'aliases':[_0x52cc03(0x4b29)],'case_insensitive':!0x0,'illegal':/\S/,'contains':[_0x249b1a,{'className':_0x52cc03(0x69d),'begin':/\[+/,'end':/\]+/},{'begin':_0x1f64b2[_0x52cc03(0x1d1d)](_0x1507c0,_0x52cc03(0x43f),_0x1507c0,')*',_0x1f64b2['lookahead'](/\s*=\s*[^#\s]/)),'className':'attr','starts':{'end':/$/,'contains':[_0x249b1a,_0x3524d7,_0x273c1d,_0x388871,_0x549e83,_0x4fb959]}}]};};},0x1193:_0xc22ac0=>{const _0x3072d8=a0_0x11e7;_0xc22ac0[_0x3072d8(0x474c)]=function(_0xa9b4f7){const 
_0x223799=_0x3072d8,_0x207420=_0xa9b4f7[_0x223799(0x41d2)],_0x4a5e4b=/(_[a-z_\d]+)?/,_0x1aa929=/([de][+-]?\d+)?/,_0x185447={'className':_0x223799(0x4a80),'variants':[{'begin':_0x207420[_0x223799(0x1d1d)](/\b\d+/,/\.(\d*)/,_0x1aa929,_0x4a5e4b)},{'begin':_0x207420[_0x223799(0x1d1d)](/\b\d+/,_0x1aa929,_0x4a5e4b)},{'begin':_0x207420[_0x223799(0x1d1d)](/\.\d+/,_0x1aa929,_0x4a5e4b)}],'relevance':0x0};return{'name':_0x223799(0xd6b),'case_insensitive':!0x0,'keywords':{'literal':_0x223799(0x3ad1),'keyword':_0x223799(0x3e7c),'built_in':_0x223799(0x512e)},'illegal':/\/\*/,'contains':[_0xa9b4f7[_0x223799(0x46a1)](_0xa9b4f7[_0x223799(0xa4c)],{'className':'string','relevance':0x0}),_0xa9b4f7[_0x223799(0x46a1)](_0xa9b4f7['QUOTE_STRING_MODE'],{'className':_0x223799(0x2431),'relevance':0x0}),{'className':'function','beginKeywords':_0x223799(0xf14),'illegal':_0x223799(0x443e),'contains':[_0xa9b4f7['UNDERSCORE_TITLE_MODE'],{'className':_0x223799(0xddd),'begin':'\x5c(','end':'\x5c)'}]},_0xa9b4f7['COMMENT']('!','$',{'relevance':0x0}),_0xa9b4f7[_0x223799(0x4e4f)](_0x223799(0x4cf3),_0x223799(0x3d48),{'relevance':0xa}),_0x185447]};};},0x1621:_0x5161cc=>{const _0x4f961b=a0_0x11e7;_0x5161cc[_0x4f961b(0x474c)]=function(_0x16b371){const 
_0x1da19f=_0x4f961b,_0x4b35dd=_0x1da19f(0x13a6),_0x9258a6={'className':_0x1da19f(0x4a80),'begin':_0x16b371[_0x1da19f(0x5047)],'relevance':0x0},_0x8c9989={'className':_0x1da19f(0x2431),'variants':[{'begin':'\x22','end':'\x22'},{'begin':'\x27','end':'\x27'}]},_0x235566={'className':'doctag','begin':_0x1da19f(0x2dac),'relevance':0x0},_0x2080d5={'variants':[{'className':'comment','begin':'//','end':'$','relevance':0x0,'contains':[_0x16b371[_0x1da19f(0x18cc)],_0x235566]},{'className':_0x1da19f(0x4645),'begin':_0x1da19f(0x4f94),'end':_0x1da19f(0x1820),'relevance':0x0,'contains':[_0x16b371['PHRASAL_WORDS_MODE'],_0x235566]}]},_0x34ecaa={'$pattern':_0x4b35dd,'keyword':'and\x20и\x20else\x20иначе\x20endexcept\x20endfinally\x20endforeach\x20конецвсе\x20endif\x20конецесли\x20endwhile\x20конецпока\x20except\x20exitfor\x20finally\x20foreach\x20все\x20if\x20если\x20in\x20в\x20not\x20не\x20or\x20или\x20try\x20while\x20пока\x20','built_in':_0x1da19f(0x4c1b),'class':_0x1da19f(0x33d),'literal':_0x1da19f(0x20fa)},_0x341bbe={'begin':_0x1da19f(0xd0a)+_0x16b371['UNDERSCORE_IDENT_RE'],'keywords':_0x34ecaa,'relevance':0x0},_0xaeb50b={'className':_0x1da19f(0xcfc),'begin':_0x1da19f(0x2ad0)+_0x1da19f(0x58f)[_0x1da19f(0x1b23)]()[_0x1da19f(0x741)](/\s/g,'|')+')','end':_0x1da19f(0x47cb),'excludeEnd':!0x0},_0x28621c={'className':_0x1da19f(0x3362),'keywords':_0x34ecaa,'begin':_0x4b35dd,'relevance':0x0,'contains':[_0xaeb50b,_0x341bbe]},_0x4b0cd8=_0x1da19f(0x1cdc);return{'name':'ISBL','case_insensitive':!0x0,'keywords':_0x34ecaa,'illegal':_0x1da19f(0x865),'contains':[{'className':_0x1da19f(0x14b2),'begin':_0x4b0cd8,'end':'\x5c)$','returnBegin':!0x0,'keywords':_0x34ecaa,'illegal':_0x1da19f(0x298b),'contains':[{'className':'title','keywords':{'$pattern':_0x4b35dd,'built_in':_0x1da19f(0x4b1f)},'begin':_0x4b0cd8,'end':'\x5c(','returnBegin':!0x0,'excludeEnd':!0x0},_0x341bbe,_0x28621c,_0x8c9989,_0x9258a6,_0x2080d5]},_0xaeb50b,_0x341bbe,_0x28621c,_0x8c9989,_0x9258a6,_0x2080d5]};};},0x131f:_0x3d7b94=>{const 
_0x4535a2=a0_0x11e7;var _0x389ca4=_0x4535a2(0x1e1c),_0x3c7e61=_0x4535a2(0x1af1)+_0x389ca4+')',_0x1eb82d=_0x4535a2(0xaae),_0x2377c3={'className':_0x4535a2(0x4a80),'variants':[{'begin':_0x4535a2(0x5ea)+_0x389ca4+_0x4535a2(0x4eb6)+_0x3c7e61+_0x4535a2(0x1d32)+_0x3c7e61+_0x4535a2(0x2c22)+_0x389ca4+_0x4535a2(0x557)},{'begin':_0x4535a2(0x4cd5)+_0x389ca4+_0x4535a2(0x4eb6)+_0x3c7e61+_0x4535a2(0x4d25)},{'begin':'('+_0x3c7e61+_0x4535a2(0x557)},{'begin':'\x5cb('+_0x389ca4+_0x4535a2(0x47b)},{'begin':'\x5cb0[xX](('+_0x1eb82d+_0x4535a2(0x5a9)+_0x1eb82d+_0x4535a2(0x43d7)+_0x1eb82d+_0x4535a2(0x51e0)+_0x389ca4+_0x4535a2(0x557)},{'begin':_0x4535a2(0x1e1a)},{'begin':_0x4535a2(0xe31)+_0x1eb82d+')[lL]?\x5cb'},{'begin':'\x5cb0(_*[0-7])*[lL]?\x5cb'},{'begin':_0x4535a2(0xfca)}],'relevance':0x0};function _0x17449e(_0x238073,_0x1c0de7,_0x455244){const _0xb0b62=_0x4535a2;return-0x1===_0x455244?'':_0x238073[_0xb0b62(0x741)](_0x1c0de7,_0x3b8cb8=>_0x17449e(_0x238073,_0x1c0de7,_0x455244-0x1));}_0x3d7b94['exports']=function(_0x48bd8e){const 
_0x32d175=_0x4535a2,_0x1c5120=_0x48bd8e[_0x32d175(0x41d2)],_0x429f42=_0x32d175(0x1c95),_0x18b78a=_0x429f42+_0x17449e('(?:<'+_0x429f42+_0x32d175(0x2054)+_0x429f42+'~~~)*>)?',/~~~/g,0x2),_0x267639={'keyword':[_0x32d175(0x2715),_0x32d175(0x3027),_0x32d175(0x4ef4),_0x32d175(0x469d),_0x32d175(0x2c7c),'if',_0x32d175(0x2273),_0x32d175(0x3c19),'while',_0x32d175(0x4005),_0x32d175(0x37b2),_0x32d175(0xc14),'import','native',_0x32d175(0x27e4),_0x32d175(0x27d6),_0x32d175(0x44d8),_0x32d175(0x3d4),_0x32d175(0x4e10),_0x32d175(0xf95),_0x32d175(0x31a3),_0x32d175(0xf3c),_0x32d175(0x3512),'case','assert','package',_0x32d175(0x3d23),_0x32d175(0x39ce),'try',_0x32d175(0x857),_0x32d175(0x16d9),_0x32d175(0x20f8),_0x32d175(0xc14),'public','private',_0x32d175(0x196c),_0x32d175(0x1c0e),_0x32d175(0x474c),'do',_0x32d175(0x3a89),_0x32d175(0x5075),'permits'],'literal':[_0x32d175(0x3984),'true',_0x32d175(0x1582)],'type':[_0x32d175(0x373c),_0x32d175(0x1e8d),'long','float','int','byte',_0x32d175(0x4085),_0x32d175(0x5024)],'built_in':['super',_0x32d175(0x138f)]},_0x5732db={'className':_0x32d175(0x5153),'begin':'@'+_0x429f42,'contains':[{'begin':/\(/,'end':/\)/,'contains':[_0x32d175(0x4454)]}]},_0x1c3567={'className':'params','begin':/\(/,'end':/\)/,'keywords':_0x267639,'relevance':0x0,'contains':[_0x48bd8e[_0x32d175(0x23fe)]],'endsParent':!0x0};return{'name':_0x32d175(0x4508),'aliases':['jsp'],'keywords':_0x267639,'illegal':/<\/|#/,'contains':[_0x48bd8e['COMMENT']('/\x5c*\x5c*',_0x32d175(0x1820),{'relevance':0x0,'contains':[{'begin':/\w+@/,'relevance':0x0},{'className':_0x32d175(0x4593),'begin':_0x32d175(0x4d18)}]}),{'begin':/import 
java\.[a-z]+\./,'keywords':'import','relevance':0x2},_0x48bd8e['C_LINE_COMMENT_MODE'],_0x48bd8e[_0x32d175(0x23fe)],{'begin':/"""/,'end':/"""/,'className':'string','contains':[_0x48bd8e[_0x32d175(0x4a76)]]},_0x48bd8e[_0x32d175(0xa4c)],_0x48bd8e[_0x32d175(0x291b)],{'match':[/\b(?:class|interface|enum|extends|implements|new)/,/\s+/,_0x429f42],'className':{0x1:_0x32d175(0x1357),0x3:_0x32d175(0x19e4)}},{'match':/non-sealed/,'scope':_0x32d175(0x1357)},{'begin':[_0x1c5120[_0x32d175(0x1d1d)](/(?!else)/,_0x429f42),/\s+/,_0x429f42,/\s+/,/=(?!=)/],'className':{0x1:_0x32d175(0xcfc),0x3:_0x32d175(0x3362),0x5:_0x32d175(0x1182)}},{'begin':[/record/,/\s+/,_0x429f42],'className':{0x1:_0x32d175(0x1357),0x3:'title.class'},'contains':[_0x1c3567,_0x48bd8e[_0x32d175(0x2ae2)],_0x48bd8e[_0x32d175(0x23fe)]]},{'beginKeywords':'new\x20throw\x20return\x20else','relevance':0x0},{'begin':[_0x32d175(0xf54)+_0x18b78a+_0x32d175(0x2e62),_0x48bd8e['UNDERSCORE_IDENT_RE'],/\s*(?=\()/],'className':{0x2:_0x32d175(0x20db)},'keywords':_0x267639,'contains':[{'className':'params','begin':/\(/,'end':/\)/,'keywords':_0x267639,'relevance':0x0,'contains':[_0x5732db,_0x48bd8e[_0x32d175(0xa4c)],_0x48bd8e[_0x32d175(0x291b)],_0x2377c3,_0x48bd8e[_0x32d175(0x23fe)]]},_0x48bd8e[_0x32d175(0x2ae2)],_0x48bd8e[_0x32d175(0x23fe)]]},_0x2377c3,_0x5732db]};};},0x1793:_0x3c6c54=>{const 
_0x1755db=a0_0x11e7,_0x239909=_0x1755db(0x18c3),_0x94d6b8=['as','in','of','if',_0x1755db(0x3c19),_0x1755db(0x552),'finally',_0x1755db(0x469d),'new',_0x1755db(0x14b2),'do',_0x1755db(0xdfd),_0x1755db(0x27d6),_0x1755db(0x3d4),'break',_0x1755db(0x31a3),_0x1755db(0xf3c),_0x1755db(0x2aa7),_0x1755db(0x383),_0x1755db(0x2e7e),'default',_0x1755db(0x422b),_0x1755db(0x857),_0x1755db(0x16d9),_0x1755db(0x3368),_0x1755db(0x5be),'let','yield',_0x1755db(0xc01),_0x1755db(0x1390),_0x1755db(0x2085),_0x1755db(0x16c2),_0x1755db(0x371f),'static',_0x1755db(0x331),_0x1755db(0x27e6),_0x1755db(0x2bb9),_0x1755db(0x4428)],_0x3834d9=['true',_0x1755db(0x3984),_0x1755db(0x1582),_0x1755db(0x1daa),_0x1755db(0x494b),_0x1755db(0x2486)],_0xfab220=[_0x1755db(0x108b),_0x1755db(0x2ac5),_0x1755db(0x3bc9),_0x1755db(0x2e8a),'Math',_0x1755db(0x448e),_0x1755db(0x3cd),_0x1755db(0x4cd1),'String',_0x1755db(0xae9),'Array',_0x1755db(0x4016),'Float64Array',_0x1755db(0x927),'Uint8Array',_0x1755db(0x5091),_0x1755db(0x4f9),_0x1755db(0x26ec),_0x1755db(0x323e),'Uint32Array',_0x1755db(0x3711),_0x1755db(0x17e5),_0x1755db(0x34d5),_0x1755db(0x4a59),'WeakSet',_0x1755db(0x32f2),_0x1755db(0x3304),_0x1755db(0x593),'Atomics',_0x1755db(0x35d6),_0x1755db(0x9c3),_0x1755db(0x431b),_0x1755db(0x2122),'GeneratorFunction',_0x1755db(0x4265),_0x1755db(0x373),_0x1755db(0x190f),_0x1755db(0xcef),'WebAssembly'],_0x158374=[_0x1755db(0x5e5),'EvalError',_0x1755db(0x1b9d),_0x1755db(0x7b7),_0x1755db(0x2041),_0x1755db(0x22ea),_0x1755db(0x1a6c),_0x1755db(0x3f3)],_0xd0398=[_0x1755db(0x2a87),'setTimeout',_0x1755db(0x2da),_0x1755db(0x4f79),_0x1755db(0x4031),_0x1755db(0x474c),_0x1755db(0x2ff9),_0x1755db(0x1d8e),_0x1755db(0x4ee0),_0x1755db(0x42de),'parseInt','decodeURI',_0x1755db(0x25a3),'encodeURI',_0x1755db(0x2c10),_0x1755db(0x4e2c),_0x1755db(0x254e)],_0x1e16ef=['arguments',_0x1755db(0x138f),_0x1755db(0x2cc),'console',_0x1755db(0x18db),_0x1755db(0x295),_0x1755db(0x301d),_0x1755db(0x38a4),_0x1755db(0x196c),_0x1755db(0x501b)],_0x422341=[][_0x1755db(0x1d1d
)](_0xd0398,_0xfab220,_0x158374);_0x3c6c54[_0x1755db(0x474c)]=function(_0x2a7440){const _0x4028ef=_0x1755db,_0xd7487e=_0x2a7440['regex'],_0x563ab0=_0x239909,_0x2f6fa3='<>',_0x7e6d3='',_0x5c1315={'begin':/<[A-Za-z0-9\\._:-]+/,'end':/\/[A-Za-z0-9\\._:-]+>|\/>/,'isTrulyOpeningTag':(_0x3c7f79,_0x2f1ae1)=>{const _0x42963b=a0_0x11e7,_0x31d1bb=_0x3c7f79[0x0][_0x42963b(0x1b19)]+_0x3c7f79[_0x42963b(0x3bb5)],_0x3cd90f=_0x3c7f79[_0x42963b(0x7b0)][_0x31d1bb];if('<'===_0x3cd90f||','===_0x3cd90f)return void _0x2f1ae1[_0x42963b(0xec5)]();let _0x2dbc32;'>'===_0x3cd90f&&(((_0x16a82c,{after:_0x1c465c})=>{const _0x4038a8=_0x42963b,_0x3aad56='','contains':[{'className':_0x4028ef(0xddd),'variants':[{'begin':_0x2a7440[_0x4028ef(0x206e)],'relevance':0x0},{'className':null,'begin':/\(\s*\)/,'skip':!0x0},{'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'keywords':_0x354917,'contains':_0x5575aa}]}]},{'begin':/,/,'relevance':0x0},{'match':/\s+/,'relevance':0x0},{'variants':[{'begin':_0x2f6fa3,'end':_0x7e6d3},{'match':/<[A-Za-z0-9\\._:-]+\s*\/>/},{'begin':_0x5c1315[_0x4028ef(0x42fa)],'on:begin':_0x5c1315[_0x4028ef(0x15ca)],'end':_0x5c1315[_0x4028ef(0x2681)]}],'subLanguage':_0x4028ef(0x2655),'contains':[{'begin':_0x5c1315['begin'],'end':_0x5c1315['end'],'skip':!0x0,'contains':['self']}]}]},_0xc5cd9e,{'beginKeywords':'while\x20if\x20switch\x20catch\x20for'},{'begin':_0x4028ef(0x2a8f)+_0x2a7440[_0x4028ef(0x206e)]+_0x4028ef(0x3c93),'returnBegin':!0x0,'label':_0x4028ef(0x2d89),'contains':[_0x42c4a5,_0x2a7440[_0x4028ef(0x46a1)](_0x2a7440[_0x4028ef(0x2029)],{'begin':_0x563ab0,'className':_0x4028ef(0x20db)})]},{'match':/\.\.\./,'relevance':0x0},_0x1f0fcd,{'match':'\x5c$'+_0x563ab0,'relevance':0x0},{'match':[/\bconstructor(?=\s*\()/],'className':{0x1:'title.function'},'contains':[_0x42c4a5]},_0x4e5725,{'relevance':0x0,'match':/\b[A-Z][A-Z_0-9]+\b/,'className':_0x4028ef(0x41f7)},_0x3e9e16,_0x49cfc1,{'match':/\$[(.]/}]};};},0x15f7:_0x567c8a=>{const 
_0x4eef37=a0_0x11e7;_0x567c8a[_0x4eef37(0x474c)]=function(_0x539a5e){const _0x1cf160=_0x4eef37,_0x585dc0={'className':_0x1cf160(0xddd),'begin':/\(/,'end':/\)/,'contains':[{'begin':/[\w-]+ *=/,'returnBegin':!0x0,'relevance':0x0,'contains':[{'className':'attr','begin':/[\w-]+/}]}],'relevance':0x0};return{'name':'JBoss\x20CLI','aliases':[_0x1cf160(0x2445)],'keywords':{'$pattern':_0x1cf160(0x1d03),'keyword':_0x1cf160(0x4e03),'literal':_0x1cf160(0x4805)},'contains':[_0x539a5e[_0x1cf160(0x2bbe)],_0x539a5e[_0x1cf160(0x291b)],{'className':_0x1cf160(0xddd),'begin':/--[\w\-=\/]+/},{'className':_0x1cf160(0x14b2),'begin':/:[\w\-.]+/,'relevance':0x0},{'className':_0x1cf160(0x2431),'begin':/\B([\/.])[\w\-.\/=]+/},_0x585dc0]};};},0x26d:_0x1df75c=>{const _0x245194=a0_0x11e7;_0x1df75c[_0x245194(0x474c)]=function(_0xc8c1be){const _0x2f2fff=_0x245194,_0x24a693=[_0x2f2fff(0x4022),_0x2f2fff(0x3984),_0x2f2fff(0x1582)],_0x1b7369={'scope':_0x2f2fff(0x2706),'beginKeywords':_0x24a693[_0x2f2fff(0x3541)]('\x20')};return{'name':_0x2f2fff(0x9c3),'keywords':{'literal':_0x24a693},'contains':[{'className':'attr','begin':/"(\\.|[^\\"\r\n])*"(?=\s*:)/,'relevance':1.01},{'match':/[{}[\],:]/,'className':_0x2f2fff(0xa25),'relevance':0x0},_0xc8c1be['QUOTE_STRING_MODE'],_0x1b7369,_0xc8c1be[_0x2f2fff(0xd12)],_0xc8c1be['C_LINE_COMMENT_MODE'],_0xc8c1be[_0x2f2fff(0x23fe)]],'illegal':'\x5cS'};};},0x2142:_0x1b8363=>{const _0x428226=a0_0x11e7;_0x1b8363[_0x428226(0x474c)]=function(_0x36c3d5){const _0x3e05c3=_0x428226;return{'name':'Julia\x20REPL','contains':[{'className':_0x3e05c3(0x4cea),'begin':/^julia>/,'relevance':0xa,'starts':{'end':/^(?![ ]{6})/,'subLanguage':_0x3e05c3(0x1f3f)}}],'aliases':[_0x3e05c3(0x2aa8)]};};},0x22aa:_0x415d1e=>{const _0x587f1d=a0_0x11e7;_0x415d1e[_0x587f1d(0x474c)]=function(_0x50768f){const 
_0x29a385=_0x587f1d,_0x2c3abc='[A-Za-z_\x5cu00A1-\x5cuFFFF][A-Za-z_0-9\x5cu00A1-\x5cuFFFF]*',_0x4389b5={'$pattern':_0x2c3abc,'keyword':[_0x29a385(0x3c6b),_0x29a385(0x42fa),_0x29a385(0x4e10),_0x29a385(0x31a3),_0x29a385(0x26bc),_0x29a385(0xc01),'continue','do',_0x29a385(0x3d4),'elseif',_0x29a385(0x2681),_0x29a385(0x2bb9),_0x29a385(0x3984),'finally',_0x29a385(0x3c19),_0x29a385(0x14b2),_0x29a385(0x501b),'if',_0x29a385(0x331),'in','isa',_0x29a385(0x1e61),_0x29a385(0x16a7),'macro',_0x29a385(0x196c),'quote',_0x29a385(0xdfd),_0x29a385(0x4022),_0x29a385(0x422b),_0x29a385(0x347a),_0x29a385(0x3b62),_0x29a385(0x552)],'literal':['ARGS',_0x29a385(0x49ed),_0x29a385(0xf13),'ENDIAN_BOM',_0x29a385(0x1ae2),'Inf',_0x29a385(0x4e5d),_0x29a385(0x261f),'Inf64',_0x29a385(0x44e4),'LOAD_PATH','MergeSort',_0x29a385(0x494b),_0x29a385(0x1f45),'NaN32','NaN64',_0x29a385(0x1eaf),_0x29a385(0x4c97),_0x29a385(0x324d),_0x29a385(0x13fa),_0x29a385(0x36ed),_0x29a385(0x2f85),_0x29a385(0x36c7),_0x29a385(0x1eab),_0x29a385(0x4f3b),'VERSION|0','devnull',_0x29a385(0x3984),'im','missing',_0x29a385(0x25d8),'pi',_0x29a385(0x40ee),_0x29a385(0x1e79),'stdout',_0x29a385(0x4022),'undef','π','ℯ'],'built_in':['AbstractArray',_0x29a385(0x1445),_0x29a385(0x28cb),_0x29a385(0x17da),_0x29a385(0x4701),'AbstractFloat','AbstractIrrational','AbstractMatrix','AbstractRange',_0x29a385(0x1eec),_0x29a385(0x2ab3),'AbstractUnitRange',_0x29a385(0x37ab),_0x29a385(0x1d9a),_0x29a385(0x954),_0x29a385(0x11cd),_0x29a385(0x4b6a),_0x29a385(0x4670),_0x29a385(0x302b),_0x29a385(0x4cd1),_0x29a385(0xbac),_0x29a385(0x8c4),_0x29a385(0x27fa),_0x29a385(0x38a),_0x29a385(0x3563),_0x29a385(0x3355),_0x29a385(0x2aff),_0x29a385(0x399e),_0x29a385(0x303b),_0x29a385(0x20e2),_0x29a385(0x40d4),'Cfloat','Channel',_0x29a385(0x436e),_0x29a385(0x2de2),'Cintmax_t',_0x29a385(0x1fc5),_0x29a385(0x436f),'Cmd','Colon','Complex',_0x29a385(0x2c5b),'ComplexF32','ComplexF64',_0x29a385(0x2a02),'Condition','Cptrdiff_t','Cshort','Csize_t',_0x29a385(0x3e3),'Cstring',_0x29a385(0x49f
5),'Cuint',_0x29a385(0x42bc),_0x29a385(0x28f6),_0x29a385(0x3a1d),'Cushort',_0x29a385(0x49ad),_0x29a385(0x4871),'Cwstring',_0x29a385(0x4322),_0x29a385(0x23b3),_0x29a385(0x2c8),_0x29a385(0x18d3),_0x29a385(0x23fc),_0x29a385(0x2b96),_0x29a385(0x4678),_0x29a385(0xf1a),_0x29a385(0x4941),_0x29a385(0x2309),_0x29a385(0x2f8d),_0x29a385(0x12b1),_0x29a385(0x8b3),_0x29a385(0x168e),'ExponentialBackOff',_0x29a385(0x2207),_0x29a385(0x345f),_0x29a385(0x4949),_0x29a385(0x22a4),_0x29a385(0x2ac5),_0x29a385(0x533),'HTML','IO',_0x29a385(0x2665),'IOContext',_0x29a385(0x10ac),_0x29a385(0xe7e),_0x29a385(0x330b),'IndexLinear',_0x29a385(0x1958),'InexactError',_0x29a385(0x3b80),_0x29a385(0x213b),_0x29a385(0x4969),'Int16','Int32',_0x29a385(0x1921),_0x29a385(0x4276),_0x29a385(0x3406),_0x29a385(0x4729),'InvalidStateException',_0x29a385(0x459),'KeyError','LinRange',_0x29a385(0x412b),'LinearIndices',_0x29a385(0x140c),_0x29a385(0x9f3),'Matrix',_0x29a385(0x11ed),_0x29a385(0xa3c),_0x29a385(0x3886),_0x29a385(0x35c8),_0x29a385(0x2496),_0x29a385(0x4a58),'NamedTuple',_0x29a385(0xe51),_0x29a385(0x3cd),_0x29a385(0x4311),_0x29a385(0x6f2),_0x29a385(0x1806),_0x29a385(0x2242),_0x29a385(0x2bb),_0x29a385(0x2a41),_0x29a385(0x34d8),_0x29a385(0x23f3),_0x29a385(0x3034),_0x29a385(0x3bb1),_0x29a385(0x4466),'RawFD',_0x29a385(0x2ff),_0x29a385(0x1f8c),_0x29a385(0xff3),_0x29a385(0x4a23),'Regex',_0x29a385(0x3427),_0x29a385(0x41a6),'SegmentationFault',_0x29a385(0x34d5),_0x29a385(0x2718),_0x29a385(0x22d9),_0x29a385(0x24e9),_0x29a385(0x198f),'StepRangeLen','StridedArray','StridedMatrix',_0x29a385(0x36fc),_0x29a385(0x1049),'String',_0x29a385(0x47dc),_0x29a385(0x1f87),_0x29a385(0x2f73),_0x29a385(0x2b8d),_0x29a385(0x2e8a),'SystemError',_0x29a385(0xfa0),'TaskFailedException','Text','TextDisplay',_0x29a385(0xde9),_0x29a385(0x166e),_0x29a385(0x2b7e),_0x29a385(0x1a6c),_0x29a385(0x2ebe),_0x29a385(0x4615),'UInt128',_0x29a385(0x1a89),_0x29a385(0x519a),_0x29a385(0x43ea),_0x29a385(0x445b),_0x29a385(0x3191),_0x29a385(0x10b6),'UndefRefError
','UndefVarError',_0x29a385(0x605),_0x29a385(0x468d),_0x29a385(0x1548),'Unsigned','Val',_0x29a385(0x401c),'VecElement',_0x29a385(0x511f),'Vector',_0x29a385(0x2b30),_0x29a385(0x2ff3),'WeakRef']},_0x5749b5={'keywords':_0x4389b5,'illegal':/<\//},_0x2f466a={'className':_0x29a385(0x2ad6),'begin':/\$\(/,'end':/\)/,'keywords':_0x4389b5},_0x518cf8={'className':'variable','begin':'\x5c$'+_0x2c3abc},_0x17c283={'className':_0x29a385(0x2431),'contains':[_0x50768f[_0x29a385(0x4a76)],_0x2f466a,_0x518cf8],'variants':[{'begin':/\w*"""/,'end':/"""\w*/,'relevance':0xa},{'begin':/\w*"/,'end':/"\w*/}]},_0xde58b={'className':_0x29a385(0x2431),'contains':[_0x50768f['BACKSLASH_ESCAPE'],_0x2f466a,_0x518cf8],'begin':'`','end':'`'},_0x54f2a9={'className':'meta','begin':'@'+_0x2c3abc};return _0x5749b5[_0x29a385(0x11d8)]=_0x29a385(0x2b24),_0x5749b5[_0x29a385(0x2b31)]=[{'className':_0x29a385(0x4a80),'begin':/(\b0x[\d_]*(\.[\d_]*)?|0x\.\d[\d_]*)p[-+]?\d+|\b0[box][a-fA-F0-9][a-fA-F0-9_]*|(\b\d[\d_]*(\.[\d_]*)?|\.\d[\d_]*)([eEfF][-+]?\d+)?/,'relevance':0x0},{'className':_0x29a385(0x2431),'begin':/'(.|\\[xXuU][a-zA-Z0-9]+)'/},_0x17c283,_0xde58b,_0x54f2a9,{'className':_0x29a385(0x4645),'variants':[{'begin':'#=','end':'=#','relevance':0xa},{'begin':'#','end':'$'}]},_0x50768f[_0x29a385(0x2bbe)],{'className':_0x29a385(0x1357),'begin':_0x29a385(0x353e)},{'begin':/<:/}],_0x2f466a['contains']=_0x5749b5[_0x29a385(0x2b31)],_0x5749b5;};},0xb16:_0x194e37=>{const _0x4144d0=a0_0x11e7;var 
_0x38cac5=_0x4144d0(0x1e1c),_0x38c9a6=_0x4144d0(0x1af1)+_0x38cac5+')',_0x56d1be='[0-9a-fA-F](_*[0-9a-fA-F])*',_0x40a31a={'className':'number','variants':[{'begin':'(\x5cb('+_0x38cac5+_0x4144d0(0x4eb6)+_0x38c9a6+_0x4144d0(0x1d32)+_0x38c9a6+_0x4144d0(0x2c22)+_0x38cac5+_0x4144d0(0x557)},{'begin':_0x4144d0(0x4cd5)+_0x38cac5+_0x4144d0(0x4eb6)+_0x38c9a6+')[fFdD]?\x5cb|\x5c.([fFdD]\x5cb)?)'},{'begin':'('+_0x38c9a6+')[fFdD]?\x5cb'},{'begin':'\x5cb('+_0x38cac5+_0x4144d0(0x47b)},{'begin':_0x4144d0(0x3b44)+_0x56d1be+_0x4144d0(0x5a9)+_0x56d1be+_0x4144d0(0x43d7)+_0x56d1be+_0x4144d0(0x51e0)+_0x38cac5+')[fFdD]?\x5cb'},{'begin':_0x4144d0(0x1e1a)},{'begin':'\x5cb0[xX]('+_0x56d1be+')[lL]?\x5cb'},{'begin':_0x4144d0(0x4d4a)},{'begin':_0x4144d0(0xfca)}],'relevance':0x0};_0x194e37[_0x4144d0(0x474c)]=function(_0x1b8ba9){const _0x6a9854=_0x4144d0,_0x3d0e87={'keyword':_0x6a9854(0x2468),'built_in':_0x6a9854(0x3148),'literal':_0x6a9854(0x12f3)},_0x5726b2={'className':'symbol','begin':_0x1b8ba9[_0x6a9854(0x206e)]+'@'},_0x40c288={'className':'subst','begin':/\$\{/,'end':/\}/,'contains':[_0x1b8ba9['C_NUMBER_MODE']]},_0x154c4c={'className':_0x6a9854(0x3362),'begin':'\x5c$'+_0x1b8ba9['UNDERSCORE_IDENT_RE']},_0x38db45={'className':_0x6a9854(0x2431),'variants':[{'begin':'\x22\x22\x22','end':_0x6a9854(0x1c5f),'contains':[_0x154c4c,_0x40c288]},{'begin':'\x27','end':'\x27','illegal':/\n/,'contains':[_0x1b8ba9[_0x6a9854(0x4a76)]]},{'begin':'\x22','end':'\x22','illegal':/\n/,'contains':[_0x1b8ba9[_0x6a9854(0x4a76)],_0x154c4c,_0x40c288]}]};_0x40c288[_0x6a9854(0x2b31)][_0x6a9854(0x1715)](_0x38db45);const 
_0x3f980d={'className':_0x6a9854(0x5153),'begin':_0x6a9854(0x332f)+_0x1b8ba9[_0x6a9854(0x206e)]+')?'},_0xb34adb={'className':_0x6a9854(0x5153),'begin':'@'+_0x1b8ba9[_0x6a9854(0x206e)],'contains':[{'begin':/\(/,'end':/\)/,'contains':[_0x1b8ba9[_0x6a9854(0x46a1)](_0x38db45,{'className':_0x6a9854(0x2431)}),_0x6a9854(0x4454)]}]},_0x2fc7e5=_0x40a31a,_0x2cdb52=_0x1b8ba9[_0x6a9854(0x4e4f)](_0x6a9854(0x4f94),_0x6a9854(0x1820),{'contains':[_0x1b8ba9[_0x6a9854(0x23fe)]]}),_0x47b126={'variants':[{'className':'type','begin':_0x1b8ba9['UNDERSCORE_IDENT_RE']},{'begin':/\(/,'end':/\)/,'contains':[]}]},_0x21885f=_0x47b126;return _0x21885f[_0x6a9854(0x2807)][0x1][_0x6a9854(0x2b31)]=[_0x47b126],_0x47b126[_0x6a9854(0x2807)][0x1][_0x6a9854(0x2b31)]=[_0x21885f],{'name':'Kotlin','aliases':['kt','kts'],'keywords':_0x3d0e87,'contains':[_0x1b8ba9[_0x6a9854(0x4e4f)](_0x6a9854(0x945),_0x6a9854(0x1820),{'relevance':0x0,'contains':[{'className':_0x6a9854(0x4593),'begin':'@[A-Za-z]+'}]}),_0x1b8ba9[_0x6a9854(0x2ae2)],_0x2cdb52,{'className':_0x6a9854(0x1357),'begin':/\b(break|continue|return|this)\b/,'starts':{'contains':[{'className':'symbol','begin':/@\w+/}]}},_0x5726b2,_0x3f980d,_0xb34adb,{'className':_0x6a9854(0x14b2),'beginKeywords':_0x6a9854(0x451d),'end':_0x6a9854(0x2120),'returnBegin':!0x0,'excludeEnd':!0x0,'keywords':_0x3d0e87,'relevance':0x5,'contains':[{'begin':_0x1b8ba9['UNDERSCORE_IDENT_RE']+'\x5cs*\x5c(','returnBegin':!0x0,'relevance':0x0,'contains':[_0x1b8ba9['UNDERSCORE_TITLE_MODE']]},{'className':_0x6a9854(0xcfc),'begin'://,'keywords':'reified','relevance':0x0},{'className':'params','begin':/\(/,'end':/\)/,'endsParent':!0x0,'keywords':_0x3d0e87,'relevance':0x0,'contains':[{'begin':/:/,'end':/[=,\/]/,'endsWithParent':!0x0,'contains':[_0x47b126,_0x1b8ba9['C_LINE_COMMENT_MODE'],_0x2cdb52],'relevance':0x0},_0x1b8ba9[_0x6a9854(0x2ae2)],_0x2cdb52,_0x3f980d,_0xb34adb,_0x38db45,_0x1b8ba9[_0x6a9854(0xd12)]]},_0x2cdb52]},{'begin':[/class|interface|trait/,/\s+/,_0x1b8ba9[_0x6a9854(0x206e)]],
'beginScope':{0x3:_0x6a9854(0x19e4)},'keywords':_0x6a9854(0x3117),'end':/[:\{(]|$/,'excludeEnd':!0x0,'illegal':_0x6a9854(0x4c1c),'contains':[{'beginKeywords':_0x6a9854(0x1150)},_0x1b8ba9['UNDERSCORE_TITLE_MODE'],{'className':_0x6a9854(0xcfc),'begin'://,'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0},{'className':'type','begin':/[,:]\s*/,'end':/[<\(,){\s]|$/,'excludeBegin':!0x0,'returnEnd':!0x0},_0x3f980d,_0xb34adb]},_0x38db45,{'className':'meta','begin':_0x6a9854(0xf30),'end':'$','illegal':'\x0a'},_0x2fc7e5]};};},0x17c9:_0x3bf7e9=>{const _0x361b7a=a0_0x11e7;_0x3bf7e9[_0x361b7a(0x474c)]=function(_0x163a1c){const _0x22caae=_0x361b7a,_0x1b216e='[a-zA-Z_][\x5cw.]*',_0x5d413f=_0x22caae(0xc09),_0x3deec8=_0x22caae(0x16dc),_0x43be5f={'$pattern':_0x1b216e+_0x22caae(0x3ddf),'literal':_0x22caae(0x404f),'built_in':_0x22caae(0xa19),'keyword':'cache\x20database_names\x20database_schemanames\x20database_tablenames\x20define_tag\x20define_type\x20email_batch\x20encode_set\x20html_comment\x20handle\x20handle_error\x20header\x20if\x20inline\x20iterate\x20ljax_target\x20link\x20link_currentaction\x20link_currentgroup\x20link_currentrecord\x20link_detail\x20link_firstgroup\x20link_firstrecord\x20link_lastgroup\x20link_lastrecord\x20link_nextgroup\x20link_nextrecord\x20link_prevgroup\x20link_prevrecord\x20log\x20loop\x20namespace_using\x20output_none\x20portal\x20private\x20protect\x20records\x20referer\x20referrer\x20repeating\x20resultset\x20rows\x20search_args\x20search_arguments\x20select\x20sort_args\x20sort_arguments\x20thread_atomic\x20value_list\x20while\x20abort\x20case\x20else\x20fail_if\x20fail_ifnot\x20fail\x20if_empty\x20if_false\x20if_null\x20if_true\x20loop_abort\x20loop_continue\x20loop_count\x20params\x20params_up\x20return\x20return_value\x20run_children\x20soap_definetag\x20soap_lastrequest\x20soap_lastresponse\x20tag_name\x20ascending\x20average\x20by\x20define\x20descending\x20do\x20equals\x20frozen\x20group\x20handle_failure\x20import\x20in\x20into\x20join\x
20let\x20match\x20max\x20min\x20on\x20order\x20parent\x20protected\x20provide\x20public\x20require\x20returnhome\x20skip\x20split_thread\x20sum\x20take\x20thread\x20to\x20trait\x20type\x20where\x20with\x20yield\x20yieldhome'},_0x464205=_0x163a1c[_0x22caae(0x4e4f)](_0x22caae(0x2452),'-->',{'relevance':0x0}),_0x5e1f51={'className':_0x22caae(0x5153),'begin':_0x22caae(0x2f19),'starts':{'end':_0x22caae(0x29da),'returnEnd':!0x0,'contains':[_0x464205]}},_0x2be8eb={'className':_0x22caae(0x5153),'begin':'\x5c[/noprocess|'+_0x5d413f},_0x226d36={'className':'symbol','begin':'\x27'+_0x1b216e+'\x27'},_0x131baf=[_0x163a1c[_0x22caae(0x2ae2)],_0x163a1c[_0x22caae(0x23fe)],_0x163a1c['inherit'](_0x163a1c[_0x22caae(0xd12)],{'begin':_0x163a1c[_0x22caae(0x45be)]+_0x22caae(0x5176)}),_0x163a1c[_0x22caae(0x46a1)](_0x163a1c['APOS_STRING_MODE'],{'illegal':null}),_0x163a1c['inherit'](_0x163a1c[_0x22caae(0x291b)],{'illegal':null}),{'className':_0x22caae(0x2431),'begin':'`','end':'`'},{'variants':[{'begin':'[#$]'+_0x1b216e},{'begin':'#','end':_0x22caae(0x19d1),'illegal':'\x5cW'}]},{'className':_0x22caae(0xcfc),'begin':_0x22caae(0x185e),'end':_0x1b216e,'illegal':'\x5cW'},{'className':_0x22caae(0xddd),'variants':[{'begin':_0x22caae(0x1df4)+_0x1b216e,'relevance':0x0},{'begin':_0x22caae(0x1805)}]},{'begin':/(->|\.)\s*/,'relevance':0x0,'contains':[_0x226d36]},{'className':_0x22caae(0x1390),'beginKeywords':_0x22caae(0x1bb1),'returnEnd':!0x0,'end':_0x22caae(0x3d2),'contains':[_0x163a1c[_0x22caae(0x46a1)](_0x163a1c['TITLE_MODE'],{'begin':_0x1b216e+_0x22caae(0x1c91)})]}];return{'name':_0x22caae(0x3d98),'aliases':['ls',_0x22caae(0x44b8)],'case_insensitive':!0x0,'keywords':_0x43be5f,'contains':[{'className':_0x22caae(0x5153),'begin':_0x3deec8,'relevance':0x0,'starts':{'end':_0x22caae(0x4a81)+_0x5d413f,'returnEnd':!0x0,'relevance':0x0,'contains':[_0x464205]}},_0x5e1f51,_0x2be8eb,{'className':_0x22caae(0x5153),'begin':'\x5c[no_square_brackets','starts':{'end':_0x22caae(0x328e),'keywords':_0x43be5f,'contains'
:[{'className':_0x22caae(0x5153),'begin':_0x3deec8,'relevance':0x0,'starts':{'end':'\x5c[noprocess\x5c]|'+_0x5d413f,'returnEnd':!0x0,'contains':[_0x464205]}},_0x5e1f51,_0x2be8eb][_0x22caae(0x1d1d)](_0x131baf)}},{'className':_0x22caae(0x5153),'begin':'\x5c[','relevance':0x0},{'className':_0x22caae(0x5153),'begin':_0x22caae(0x26dc),'end':'lasso9$','relevance':0xa}][_0x22caae(0x1d1d)](_0x131baf)};};},0xd9b:_0x5af224=>{_0x5af224['exports']=function(_0x125908){const _0x56c2a8=a0_0x11e7,_0x296cb0=[{'begin':/\^{6}[0-9a-f]{6}/},{'begin':/\^{5}[0-9a-f]{5}/},{'begin':/\^{4}[0-9a-f]{4}/},{'begin':/\^{3}[0-9a-f]{3}/},{'begin':/\^{2}[0-9a-f]{2}/},{'begin':/\^{2}[\u0000-\u007f]/}],_0x5467f2=[{'className':_0x56c2a8(0x1357),'begin':/\\/,'relevance':0x0,'contains':[{'endsParent':!0x0,'begin':_0x125908[_0x56c2a8(0x41d2)]['either'](...[_0x56c2a8(0x860),_0x56c2a8(0x445f),'(?:DeclareOption|ProcessOptions)',_0x56c2a8(0x2e4e),_0x56c2a8(0x3bb9),_0x56c2a8(0x8a2),_0x56c2a8(0x14bd),_0x56c2a8(0x2c62),_0x56c2a8(0x406d),_0x56c2a8(0x4782),_0x56c2a8(0x2e02),_0x56c2a8(0x338e),_0x56c2a8(0x4600),_0x56c2a8(0x2200),_0x56c2a8(0x1092),_0x56c2a8(0x1f98),_0x56c2a8(0x3812),_0x56c2a8(0x159f),_0x56c2a8(0x29ed),_0x56c2a8(0x1ddb)]['map'](_0x12e5bb=>_0x12e5bb+'(?![a-zA-Z@:_])'))},{'endsParent':!0x0,'begin':new RegExp(['(?:__)?[a-zA-Z]{2,}_[a-zA-Z](?:_?[a-zA-Z])+:[a-zA-Z]*','[lgc]__?[a-zA-Z](?:_?[a-zA-Z])*_[a-zA-Z]{2,}',_0x56c2a8(0x366),_0x56c2a8(0x27ff),_0x56c2a8(0x1170),_0x56c2a8(0x1822),_0x56c2a8(0x354f),'::[a-zA-Z]_unbraced','::[a-zA-Z:]']['map'](_0x1065bb=>_0x1065bb+_0x56c2a8(0x468b))[_0x56c2a8(0x3541)]('|'))},{'endsParent':!0x0,'variants':_0x296cb0},{'endsParent':!0x0,'relevance':0x0,'variants':[{'begin':/[a-zA-Z@]+/},{'begin':/[^a-zA-Z@]?/}]}]},{'className':_0x56c2a8(0xddd),'relevance':0x0,'begin':/#+\d?/},{'variants':_0x296cb0},{'className':'built_in','relevance':0x0,'begin':/[$&^_]/},{'className':'meta','begin':/% 
?!(T[eE]X|tex|BIB|bib)/,'end':'$','relevance':0xa},_0x125908['COMMENT']('%','$',{'relevance':0x0})],_0x3f5f67={'begin':/\{/,'end':/\}/,'relevance':0x0,'contains':[_0x56c2a8(0x4454),..._0x5467f2]},_0x4a5cc7=_0x125908[_0x56c2a8(0x46a1)](_0x3f5f67,{'relevance':0x0,'endsParent':!0x0,'contains':[_0x3f5f67,..._0x5467f2]}),_0x1fd916={'begin':/\[/,'end':/\]/,'endsParent':!0x0,'relevance':0x0,'contains':[_0x3f5f67,..._0x5467f2]},_0x188d02={'begin':/\s+/,'relevance':0x0},_0x1b7bad=[_0x4a5cc7],_0x68b263=[_0x1fd916],_0x4a1fb7=function(_0x4faf32,_0x3b701d){return{'contains':[_0x188d02],'starts':{'relevance':0x0,'contains':_0x4faf32,'starts':_0x3b701d}};},_0x113ed1=function(_0x20a3ff,_0x114e29){const _0x3493bc=_0x56c2a8;return{'begin':'\x5c\x5c'+_0x20a3ff+_0x3493bc(0xf06),'keywords':{'$pattern':/\\[a-zA-Z]+/,'keyword':'\x5c'+_0x20a3ff},'relevance':0x0,'contains':[_0x188d02],'starts':_0x114e29};},_0x36b328=function(_0x22fc8d,_0x3b0489){const _0x1009ba=_0x56c2a8;return _0x125908[_0x1009ba(0x46a1)]({'begin':_0x1009ba(0x177c)+_0x22fc8d+_0x1009ba(0x3059),'keywords':{'$pattern':/\\[a-zA-Z]+/,'keyword':_0x1009ba(0x1b0a)},'relevance':0x0},_0x4a1fb7(_0x1b7bad,_0x3b0489));},_0x92f5f1=(_0x34a2a0=_0x56c2a8(0x2431))=>_0x125908[_0x56c2a8(0x453e)]({'className':_0x34a2a0,'begin':/(.|\r?\n)/,'end':/(.|\r?\n)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'endsParent':!0x0}),_0xe169c7=function(_0x26a7fc){const 
_0x472edf=_0x56c2a8;return{'className':_0x472edf(0x2431),'end':'(?=\x5c\x5cend\x5c{'+_0x26a7fc+_0x472edf(0x3059)};},_0x5a4595=(_0x576d97='string')=>({'relevance':0x0,'begin':/\{/,'starts':{'endsParent':!0x0,'contains':[{'className':_0x576d97,'end':/(?=\})/,'endsParent':!0x0,'contains':[{'begin':/\{/,'end':/\}/,'relevance':0x0,'contains':[_0x56c2a8(0x4454)]}]}]}});return{'name':_0x56c2a8(0xedb),'aliases':[_0x56c2a8(0x1e48)],'contains':[...[...[_0x56c2a8(0x4134),_0x56c2a8(0x5004)][_0x56c2a8(0x4833)](_0x47e057=>_0x113ed1(_0x47e057,{'contains':[_0x92f5f1()]})),_0x113ed1('mint',_0x4a1fb7(_0x1b7bad,{'contains':[_0x92f5f1()]})),_0x113ed1(_0x56c2a8(0x3ecf),_0x4a1fb7(_0x1b7bad,{'contains':[_0x5a4595(),_0x92f5f1()]})),_0x113ed1('url',{'contains':[_0x5a4595(_0x56c2a8(0x4b32)),_0x5a4595(_0x56c2a8(0x4b32))]}),_0x113ed1(_0x56c2a8(0x1459),{'contains':[_0x5a4595(_0x56c2a8(0x4b32))]}),_0x113ed1(_0x56c2a8(0xe63),_0x4a1fb7(_0x68b263,{'contains':[_0x5a4595(_0x56c2a8(0x4b32))]})),...[][_0x56c2a8(0x1d1d)](...['','\x5c*'][_0x56c2a8(0x4833)](_0x307018=>[_0x36b328(_0x56c2a8(0x4fb3)+_0x307018,_0xe169c7(_0x56c2a8(0x4fb3)+_0x307018)),_0x36b328(_0x56c2a8(0xbb6)+_0x307018,_0x4a1fb7(_0x1b7bad,_0xe169c7(_0x56c2a8(0xbb6)+_0x307018))),...['','B','L'][_0x56c2a8(0x4833)](_0x3a0a9f=>_0x36b328(_0x3a0a9f+_0x56c2a8(0x4abf)+_0x307018,_0x4a1fb7(_0x68b263,_0xe169c7(_0x3a0a9f+_0x56c2a8(0x4abf)+_0x307018))))])),_0x36b328(_0x56c2a8(0x486a),_0x4a1fb7(_0x68b263,_0x4a1fb7(_0x1b7bad,_0xe169c7(_0x56c2a8(0x486a)))))],..._0x5467f2]};};},0x1c04:_0x4ae696=>{const _0x3ad5b9=a0_0x11e7;_0x4ae696[_0x3ad5b9(0x474c)]=function(_0x568446){const _0xf56e0c=_0x3ad5b9;return{'name':_0xf56e0c(0x10db),'contains':[{'className':_0xf56e0c(0x263f),'match':_0xf56e0c(0x29ad),'relevance':0xa},{'className':_0xf56e0c(0x263f),'match':_0xf56e0c(0x2f6a)},{'className':_0xf56e0c(0x2706),'match':'^-'},_0x568446[_0xf56e0c(0x2bbe)]]};};},0x1a2f:_0x3991cd=>{const _0x2a846d=a0_0x11e7;_0x3991cd[_0x2a846d(0x474c)]=function(_0x5776fa){const 
_0x35b978=_0x2a846d,_0x2b8a70=/([A-Za-z_][A-Za-z_0-9]*)?/,_0x30adf4={'scope':_0x35b978(0xddd),'begin':/\(/,'end':/\)(?=\:?)/,'endsParent':!0x0,'relevance':0x7,'contains':[{'scope':_0x35b978(0x2431),'begin':'\x22','end':'\x22'},{'scope':_0x35b978(0x1357),'match':['true','false','in'][_0x35b978(0x3541)]('|')},{'scope':_0x35b978(0x3362),'match':/[A-Za-z_][A-Za-z_0-9]*/},{'scope':_0x35b978(0x1182),'match':/\+|\-|\*|\/|\%|\=\=|\=|\!|\>|\<|\&\&|\|\|/}]},_0x36fb9f={'match':[_0x2b8a70,/(?=\()/],'scope':{0x1:_0x35b978(0x1357)},'contains':[_0x30adf4]};return _0x30adf4['contains'][_0x35b978(0x2767)](_0x36fb9f),{'name':_0x35b978(0xfec),'contains':[{'match':[/#+/,_0x2b8a70,/(?=\()/],'scope':{0x1:_0x35b978(0xa25),0x2:_0x35b978(0x1357)},'starts':{'contains':[{'match':/\:/,'scope':_0x35b978(0xa25)}]},'contains':[_0x30adf4]},{'match':[/#+/,_0x2b8a70,/:?/],'scope':{0x1:_0x35b978(0xa25),0x2:_0x35b978(0x1357),0x3:_0x35b978(0xa25)}}]};};},0x208a:_0x45b061=>{const _0xa3b67c=a0_0x11e7,_0x3f4848=['a',_0xa3b67c(0xd80),_0xa3b67c(0x3db6),_0xa3b67c(0x51c1),_0xa3b67c(0x2a6),_0xa3b67c(0x645),'b',_0xa3b67c(0x702),_0xa3b67c(0x4f1a),'button',_0xa3b67c(0x3115),_0xa3b67c(0x2200),_0xa3b67c(0x447d),_0xa3b67c(0x4948),'dd',_0xa3b67c(0x109c),'details',_0xa3b67c(0x2b08),_0xa3b67c(0x4c88),'dl','dt','em',_0xa3b67c(0x31c0),_0xa3b67c(0x4252),_0xa3b67c(0x12f4),_0xa3b67c(0x3af9),'form','h1','h2','h3','h4','h5','h6',_0xa3b67c(0x17f8),_0xa3b67c(0x507c),_0xa3b67c(0x2acd),'i','iframe',_0xa3b67c(0x3045),_0xa3b67c(0x7b0),_0xa3b67c(0x4432),_0xa3b67c(0x728),_0xa3b67c(0x3b71),_0xa3b67c(0x201b),'li',_0xa3b67c(0x3212),_0xa3b67c(0x1af6),_0xa3b67c(0x3888),_0xa3b67c(0x1ec0),_0xa3b67c(0x20c7),'ol','p','q','quote','samp',_0xa3b67c(0x69d),_0xa3b67c(0x2bbd),_0xa3b67c(0x40c9),_0xa3b67c(0x4829),_0xa3b67c(0xb95),_0xa3b67c(0x1639),'tbody','td',_0xa3b67c(0x39cd),_0xa3b67c(0x2f6d),'th',_0xa3b67c(0x46d4),_0xa3b67c(0x51b6),'tr','ul',_0xa3b67c(0x469d),_0xa3b67c(0xcde)],_0x576ec7=[_0xa3b67c(0x3f0b),'any-pointer',_0xa3b67c(0x1bc1),'color',_
0xa3b67c(0xdf1),_0xa3b67c(0x2dc3),_0xa3b67c(0xc20),_0xa3b67c(0x2dc),_0xa3b67c(0x4790),_0xa3b67c(0x4a4d),_0xa3b67c(0x1e01),_0xa3b67c(0x2461),_0xa3b67c(0x3cd6),_0xa3b67c(0x3d6d),'inverted-colors','monochrome','orientation',_0xa3b67c(0xe97),'overflow-inline',_0xa3b67c(0x43e4),_0xa3b67c(0x1b7f),_0xa3b67c(0x4099),'prefers-reduced-motion',_0xa3b67c(0x2c6b),_0xa3b67c(0x4de3),_0xa3b67c(0x4318),_0xa3b67c(0x39d8),'update',_0xa3b67c(0x17d2),_0xa3b67c(0x2c06),_0xa3b67c(0x3e30),_0xa3b67c(0x2613),_0xa3b67c(0x2ebc)],_0x2eca05=['active','any-link',_0xa3b67c(0x2bd),_0xa3b67c(0x494d),_0xa3b67c(0xbb8),'default',_0xa3b67c(0x183b),_0xa3b67c(0x177b),_0xa3b67c(0x3f89),'drop','empty',_0xa3b67c(0x223e),_0xa3b67c(0x4d51),_0xa3b67c(0x14ca),'first-of-type',_0xa3b67c(0x363),'future',_0xa3b67c(0x4ba),_0xa3b67c(0xbcb),_0xa3b67c(0x1874),'has','host',_0xa3b67c(0x933),_0xa3b67c(0x3d6d),'indeterminate',_0xa3b67c(0x1418),_0xa3b67c(0x498a),'is',_0xa3b67c(0x1f46),_0xa3b67c(0x11ef),_0xa3b67c(0x69f),_0xa3b67c(0x48eb),'link',_0xa3b67c(0xaea),_0xa3b67c(0xc1a),'nth-child',_0xa3b67c(0x3d31),_0xa3b67c(0x3837),_0xa3b67c(0x3171),'nth-last-of-type',_0xa3b67c(0x4153),_0xa3b67c(0x12a4),_0xa3b67c(0x5133),_0xa3b67c(0x51e4),_0xa3b67c(0x48d0),'past',_0xa3b67c(0x38ef),_0xa3b67c(0x1128),_0xa3b67c(0x3229),_0xa3b67c(0x3e07),_0xa3b67c(0x4d50),_0xa3b67c(0x507b),_0xa3b67c(0x4cd),_0xa3b67c(0x1bc7),'target-within',_0xa3b67c(0x4147),_0xa3b67c(0x4297),_0xa3b67c(0xd18),_0xa3b67c(0x3b62)],_0x3c4e82=[_0xa3b67c(0x1349),_0xa3b67c(0x313),_0xa3b67c(0x5097),_0xa3b67c(0x18bf),_0xa3b67c(0x255f),_0xa3b67c(0x222a),_0xa3b67c(0x42e5),_0xa3b67c(0x50a3),'marker',_0xa3b67c(0x2aa),_0xa3b67c(0x10e2),_0xa3b67c(0x2e6e),'slotted',_0xa3b67c(0x3c13)],_0x29270e=['align-content',_0xa3b67c(0x2558),_0xa3b67c(0xe82),_0xa3b67c(0xc36),'animation',_0xa3b67c(0x1b0e),'animation-direction',_0xa3b67c(0x2c5c),_0xa3b67c(0x4fd3),_0xa3b67c(0x4e24),'animation-name',_0xa3b67c(0x1cc4),_0xa3b67c(0x40b0),_0xa3b67c(0x460d),_0xa3b67c(0x1471),_0xa3b67c(0x7b3),_0xa3b67c(0x163b)
,_0xa3b67c(0x1195),_0xa3b67c(0x11ac),_0xa3b67c(0x965),_0xa3b67c(0x93e),_0xa3b67c(0x4295),_0xa3b67c(0x3244),_0xa3b67c(0x4065),_0xa3b67c(0x2f89),_0xa3b67c(0x369a),'border-block','border-block-color',_0xa3b67c(0x2bbf),_0xa3b67c(0x2f10),_0xa3b67c(0x43ac),_0xa3b67c(0xbbb),_0xa3b67c(0x34f),_0xa3b67c(0x480a),_0xa3b67c(0x2a76),_0xa3b67c(0x3b35),'border-block-style','border-block-width',_0xa3b67c(0x129e),_0xa3b67c(0x1fda),_0xa3b67c(0x494c),_0xa3b67c(0x11d1),'border-bottom-style','border-bottom-width',_0xa3b67c(0x2d19),_0xa3b67c(0x4ede),_0xa3b67c(0x14ac),'border-image-outset',_0xa3b67c(0x19a5),_0xa3b67c(0x2b01),_0xa3b67c(0x4d0),_0xa3b67c(0x2aa0),_0xa3b67c(0x6cc),_0xa3b67c(0x21cd),_0xa3b67c(0x2a98),'border-inline-end-color',_0xa3b67c(0x3c06),'border-inline-end-width','border-inline-start',_0xa3b67c(0x3152),_0xa3b67c(0x2444),'border-inline-start-width',_0xa3b67c(0x1b8d),_0xa3b67c(0xb54),_0xa3b67c(0x953),_0xa3b67c(0x82b),_0xa3b67c(0xb7c),_0xa3b67c(0x1b93),_0xa3b67c(0x2557),_0xa3b67c(0x23d3),_0xa3b67c(0x4fdc),_0xa3b67c(0x4a2d),_0xa3b67c(0x2fd6),_0xa3b67c(0x5a6),_0xa3b67c(0x18e8),_0xa3b67c(0x49df),_0xa3b67c(0x5090),'border-top-left-radius',_0xa3b67c(0x106f),_0xa3b67c(0x4b1c),'border-top-width',_0xa3b67c(0x24a4),_0xa3b67c(0x3335),_0xa3b67c(0x1db9),_0xa3b67c(0x4419),_0xa3b67c(0x4f0e),_0xa3b67c(0x2ce9),'break-before',_0xa3b67c(0x425d),_0xa3b67c(0x1204),_0xa3b67c(0x1cb2),_0xa3b67c(0x4933),'clip',_0xa3b67c(0x19a3),_0xa3b67c(0x1c7c),_0xa3b67c(0xe81),_0xa3b67c(0x2c2e),_0xa3b67c(0x25c4),_0xa3b67c(0x4679),_0xa3b67c(0x1175),_0xa3b67c(0x2ccf),_0xa3b67c(0x36b1),'column-rule-width',_0xa3b67c(0x3085),_0xa3b67c(0x27a0),_0xa3b67c(0x4457),'contain',_0xa3b67c(0x484f),_0xa3b67c(0xc79),'counter-increment',_0xa3b67c(0x43c7),_0xa3b67c(0x18bf),'cue-after',_0xa3b67c(0x18ef),_0xa3b67c(0x824),_0xa3b67c(0x296c),'display',_0xa3b67c(0x22c7),_0xa3b67c(0x1465),_0xa3b67c(0x2a07),_0xa3b67c(0x2c01),'flex-direction',_0xa3b67c(0x494f),_0xa3b67c(0x3458),_0xa3b67c(0x2a69),'flex-wrap',_0xa3b67c(0x1ab8),_0xa3b67c(0x4b13
),_0xa3b67c(0xe53),_0xa3b67c(0x2019),'font-family',_0xa3b67c(0xeea),'font-kerning',_0xa3b67c(0x44a5),_0xa3b67c(0x464),_0xa3b67c(0x750),_0xa3b67c(0x4a87),_0xa3b67c(0x1d98),_0xa3b67c(0x3764),_0xa3b67c(0x4fd6),'font-variant',_0xa3b67c(0x2694),_0xa3b67c(0x3afe),_0xa3b67c(0x2eae),'font-variant-numeric',_0xa3b67c(0x474f),_0xa3b67c(0x4c7a),_0xa3b67c(0x417d),_0xa3b67c(0x44f8),_0xa3b67c(0x32f0),_0xa3b67c(0x2461),_0xa3b67c(0x2b00),_0xa3b67c(0xf9c),_0xa3b67c(0x3a86),_0xa3b67c(0x77e),_0xa3b67c(0x3058),_0xa3b67c(0x3cf),_0xa3b67c(0x4df),_0xa3b67c(0x28a9),_0xa3b67c(0x92e),_0xa3b67c(0x48ab),_0xa3b67c(0x3ffa),_0xa3b67c(0x4a8e),_0xa3b67c(0x3fda),_0xa3b67c(0x2bb0),_0xa3b67c(0xef6),'hanging-punctuation',_0xa3b67c(0x3cd6),_0xa3b67c(0xca5),'icon',_0xa3b67c(0x3e53),'image-rendering',_0xa3b67c(0x2833),_0xa3b67c(0x23ed),_0xa3b67c(0x41a9),'isolation',_0xa3b67c(0xedd),_0xa3b67c(0x48eb),_0xa3b67c(0x4e98),_0xa3b67c(0x2352),'line-height',_0xa3b67c(0x36ea),_0xa3b67c(0x1438),_0xa3b67c(0x1a28),_0xa3b67c(0x27cf),_0xa3b67c(0x2b9f),_0xa3b67c(0x1e53),_0xa3b67c(0x4273),_0xa3b67c(0x562),_0xa3b67c(0x90d),_0xa3b67c(0x249b),'margin-inline-end','margin-inline-start',_0xa3b67c(0x26a1),_0xa3b67c(0x5022),_0xa3b67c(0xefe),_0xa3b67c(0x4b5c),'mask',_0xa3b67c(0x1ebe),'mask-border-mode',_0xa3b67c(0x1706),_0xa3b67c(0x2c4f),'mask-border-slice',_0xa3b67c(0x4985),_0xa3b67c(0x2691),_0xa3b67c(0x897),'mask-composite','mask-image','mask-mode',_0xa3b67c(0x498f),_0xa3b67c(0x321c),'mask-repeat',_0xa3b67c(0x4c09),_0xa3b67c(0x4921),'max-block-size',_0xa3b67c(0x2ebc),_0xa3b67c(0x1e50),_0xa3b67c(0x3e30),_0xa3b67c(0x4000),_0xa3b67c(0x2613),_0xa3b67c(0x446d),_0xa3b67c(0x2c06),_0xa3b67c(0x2b16),_0xa3b67c(0x516a),_0xa3b67c(0x4b63),_0xa3b67c(0x150e),_0xa3b67c(0x467),_0xa3b67c(0x14fc),_0xa3b67c(0x28b),_0xa3b67c(0x47d),_0xa3b67c(0x3bd7),_0xa3b67c(0x1541),'opacity',_0xa3b67c(0xd8d),'orphans',_0xa3b67c(0x2a21),'outline-color','outline-offset','outline-style',_0xa3b67c(0x282),_0xa3b67c(0xa69),_0xa3b67c(0xdad),_0xa3b67c(0x49a7),_0xa3b67c(0x4
704),_0xa3b67c(0x5252),_0xa3b67c(0x49ca),'padding-block-end',_0xa3b67c(0x2511),_0xa3b67c(0x5060),_0xa3b67c(0x1aac),_0xa3b67c(0x25de),_0xa3b67c(0x12fa),_0xa3b67c(0x3bdb),_0xa3b67c(0x41de),_0xa3b67c(0xb57),_0xa3b67c(0x45a),_0xa3b67c(0x2d8f),_0xa3b67c(0x1e45),_0xa3b67c(0x468c),_0xa3b67c(0x10c9),_0xa3b67c(0x295c),_0xa3b67c(0x2b0d),'perspective-origin',_0xa3b67c(0x23a0),_0xa3b67c(0x25f1),_0xa3b67c(0x37b),_0xa3b67c(0x4df5),_0xa3b67c(0x1162),_0xa3b67c(0x58c),_0xa3b67c(0x27f9),_0xa3b67c(0x4d50),_0xa3b67c(0x133b),_0xa3b67c(0x4fcd),_0xa3b67c(0xa40),_0xa3b67c(0x2d9b),_0xa3b67c(0x2914),_0xa3b67c(0x2d49),_0xa3b67c(0x21e3),_0xa3b67c(0x252e),_0xa3b67c(0x3b8),_0xa3b67c(0x10d2),_0xa3b67c(0x277d),_0xa3b67c(0x4e95),'scroll-padding',_0xa3b67c(0x328a),_0xa3b67c(0x35f3),_0xa3b67c(0x230b),_0xa3b67c(0x2bef),_0xa3b67c(0x13cc),'scroll-padding-inline-end',_0xa3b67c(0x4761),'scroll-padding-left',_0xa3b67c(0x1a7a),'scroll-padding-top',_0xa3b67c(0x2934),_0xa3b67c(0x3c84),_0xa3b67c(0x4392),_0xa3b67c(0x337),'scrollbar-gutter',_0xa3b67c(0x4a35),_0xa3b67c(0x24d9),_0xa3b67c(0x764),'shape-outside',_0xa3b67c(0x4bba),_0xa3b67c(0x2df4),_0xa3b67c(0x3d6),_0xa3b67c(0x2ff2),_0xa3b67c(0x45c0),_0xa3b67c(0x3e47),_0xa3b67c(0x222f),_0xa3b67c(0x1470),_0xa3b67c(0x40ab),_0xa3b67c(0x21e1),_0xa3b67c(0x4c70),'text-decoration-line','text-decoration-style',_0xa3b67c(0x178f),_0xa3b67c(0x3d6a),_0xa3b67c(0x4866),_0xa3b67c(0x246d),_0xa3b67c(0x541),'text-justify',_0xa3b67c(0x22d7),'text-overflow',_0xa3b67c(0x25da),_0xa3b67c(0x2803),_0xa3b67c(0x1aba),_0xa3b67c(0x3deb),'top',_0xa3b67c(0x5161),_0xa3b67c(0x1fc1),'transform-origin',_0xa3b67c(0x478c),_0xa3b67c(0x427e),_0xa3b67c(0x2d3e),_0xa3b67c(0xce9),_0xa3b67c(0x28db),'transition-timing-function',_0xa3b67c(0x4074),_0xa3b67c(0x1677),_0xa3b67c(0x4703),_0xa3b67c(0x792),_0xa3b67c(0x175f),'voice-family',_0xa3b67c(0x2483),_0xa3b67c(0x1002),_0xa3b67c(0x22e9),'voice-stress',_0xa3b67c(0x4f11),_0xa3b67c(0x711),_0xa3b67c(0x274c),'width',_0xa3b67c(0x4258),'word-break','word-spacing','word-wr
ap',_0xa3b67c(0x2a38),_0xa3b67c(0x25fb)][_0xa3b67c(0x78b)](),_0x4abe06=_0x2eca05['concat'](_0x3c4e82);_0x45b061['exports']=function(_0x18cc8e){const _0x4a086a=_0xa3b67c,_0x2f8f69=(_0x5e93ae=>({'IMPORTANT':{'scope':_0x4a086a(0x5153),'begin':_0x4a086a(0x2293)},'BLOCK_COMMENT':_0x5e93ae[_0x4a086a(0x23fe)],'HEXCOLOR':{'scope':_0x4a086a(0x4a80),'begin':/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},'FUNCTION_DISPATCH':{'className':'built_in','begin':/[\w-]+(?=\()/},'ATTRIBUTE_SELECTOR_MODE':{'scope':_0x4a086a(0x4f99),'begin':/\[/,'end':/\]/,'illegal':'$','contains':[_0x5e93ae[_0x4a086a(0xa4c)],_0x5e93ae['QUOTE_STRING_MODE']]},'CSS_NUMBER_MODE':{'scope':_0x4a086a(0x4a80),'begin':_0x5e93ae[_0x4a086a(0x5047)]+_0x4a086a(0xf71),'relevance':0x0},'CSS_VARIABLE':{'className':_0x4a086a(0x431d),'begin':/--[A-Za-z_][A-Za-z0-9_-]*/}}))(_0x18cc8e),_0x21b414=_0x4abe06,_0x19eabf='[\x5cw-]+',_0x3292d2='('+_0x19eabf+_0x4a086a(0x4456)+_0x19eabf+_0x4a086a(0x3059),_0x39d51d=[],_0x1efd6e=[],_0x1f8a2a=function(_0x1e2ef8){const 
_0x57195f=_0x4a086a;return{'className':_0x57195f(0x2431),'begin':'~?'+_0x1e2ef8+_0x57195f(0x33be)+_0x1e2ef8};},_0x23e9a2=function(_0x50d08e,_0x169ab6,_0x2b22f3){return{'className':_0x50d08e,'begin':_0x169ab6,'relevance':_0x2b22f3};},_0xbbe7f9={'$pattern':/[a-z-]+/,'keyword':'and\x20or\x20not\x20only','attribute':_0x576ec7[_0x4a086a(0x3541)]('\x20')},_0x3f497a={'begin':'\x5c(','end':'\x5c)','contains':_0x1efd6e,'keywords':_0xbbe7f9,'relevance':0x0};_0x1efd6e[_0x4a086a(0x1715)](_0x18cc8e[_0x4a086a(0x2ae2)],_0x18cc8e['C_BLOCK_COMMENT_MODE'],_0x1f8a2a('\x27'),_0x1f8a2a('\x22'),_0x2f8f69[_0x4a086a(0x46dc)],{'begin':_0x4a086a(0x33dd),'starts':{'className':_0x4a086a(0x2431),'end':_0x4a086a(0x22e),'excludeEnd':!0x0}},_0x2f8f69['HEXCOLOR'],_0x3f497a,_0x23e9a2('variable','@@?'+_0x19eabf,0xa),_0x23e9a2(_0x4a086a(0x3362),_0x4a086a(0x16d0)+_0x19eabf+'\x5c}'),_0x23e9a2(_0x4a086a(0x43a),_0x4a086a(0x3a4f)),{'className':_0x4a086a(0x263f),'begin':_0x19eabf+_0x4a086a(0x3fb4),'end':':','returnBegin':!0x0,'excludeEnd':!0x0},_0x2f8f69[_0x4a086a(0x3df6)],{'beginKeywords':'and\x20not'},_0x2f8f69[_0x4a086a(0x47a9)]);const 
_0x5ac10e=_0x1efd6e[_0x4a086a(0x1d1d)]({'begin':/\{/,'end':/\}/,'contains':_0x39d51d}),_0x246ca1={'beginKeywords':_0x4a086a(0x191b),'endsWithParent':!0x0,'contains':[{'beginKeywords':_0x4a086a(0x1937)}]['concat'](_0x1efd6e)},_0x3f9359={'begin':_0x3292d2+'\x5cs*:','returnBegin':!0x0,'end':/[;}]/,'relevance':0x0,'contains':[{'begin':/-(webkit|moz|ms|o)-/},_0x2f8f69[_0x4a086a(0x21be)],{'className':_0x4a086a(0x263f),'begin':_0x4a086a(0x4cd5)+_0x29270e[_0x4a086a(0x3541)]('|')+_0x4a086a(0x716),'end':/(?=:)/,'starts':{'endsWithParent':!0x0,'illegal':_0x4a086a(0x1b3a),'relevance':0x0,'contains':_0x1efd6e}}]},_0x37a420={'className':_0x4a086a(0x1357),'begin':_0x4a086a(0xbd7),'starts':{'end':'[;{}]','keywords':_0xbbe7f9,'returnEnd':!0x0,'contains':_0x1efd6e,'relevance':0x0}},_0x1b2ad8={'className':_0x4a086a(0x3362),'variants':[{'begin':'@'+_0x19eabf+_0x4a086a(0x3fb4),'relevance':0xf},{'begin':'@'+_0x19eabf}],'starts':{'end':_0x4a086a(0x33bf),'returnEnd':!0x0,'contains':_0x5ac10e}},_0x3ab077={'variants':[{'begin':_0x4a086a(0x60a),'end':_0x4a086a(0x35d9)},{'begin':_0x3292d2,'end':/\{/}],'returnBegin':!0x0,'returnEnd':!0x0,'illegal':_0x4a086a(0x2990),'relevance':0x0,'contains':[_0x18cc8e['C_LINE_COMMENT_MODE'],_0x18cc8e['C_BLOCK_COMMENT_MODE'],_0x246ca1,_0x23e9a2(_0x4a086a(0x1357),_0x4a086a(0x242b)),_0x23e9a2(_0x4a086a(0x3362),_0x4a086a(0x16d0)+_0x19eabf+'\x5c}'),{'begin':_0x4a086a(0x4cd5)+_0x3f4848[_0x4a086a(0x3541)]('|')+_0x4a086a(0x716),'className':_0x4a086a(0x527d)},_0x2f8f69['CSS_NUMBER_MODE'],_0x23e9a2(_0x4a086a(0x527d),_0x3292d2,0x0),_0x23e9a2('selector-id','#'+_0x3292d2),_0x23e9a2(_0x4a086a(0x326),'\x5c.'+_0x3292d2,0x0),_0x23e9a2(_0x4a086a(0x527d),'&',0x0),_0x2f8f69['ATTRIBUTE_SELECTOR_MODE'],{'className':_0x4a086a(0x277a),'begin':':('+_0x2eca05['join']('|')+')'},{'className':_0x4a086a(0x277a),'begin':':(:)?('+_0x3c4e82['join']('|')+')'},{'begin':/\(/,'end':/\)/,'relevance':0x0,'contains':_0x5ac10e},{'begin':_0x4a086a(0x2293)},_0x2f8f69[_0x4a086a(0x47a9)]]},_0x55750a={'be
gin':_0x19eabf+_0x4a086a(0x2f43)+('('+_0x21b414[_0x4a086a(0x3541)]('|')+')'),'returnBegin':!0x0,'contains':[_0x3ab077]};return _0x39d51d[_0x4a086a(0x1715)](_0x18cc8e[_0x4a086a(0x2ae2)],_0x18cc8e['C_BLOCK_COMMENT_MODE'],_0x37a420,_0x1b2ad8,_0x55750a,_0x3f9359,_0x3ab077,_0x246ca1,_0x2f8f69[_0x4a086a(0x47a9)]),{'name':_0x4a086a(0x3a62),'case_insensitive':!0x0,'illegal':_0x4a086a(0x15fe),'contains':_0x39d51d};};},0x9bb:_0x4d5321=>{const _0x3ab7bd=a0_0x11e7;_0x4d5321[_0x3ab7bd(0x474c)]=function(_0x381b8d){const _0xd56324=_0x3ab7bd,_0x4a967a=_0xd56324(0x2b0b),_0x35b4be=_0xd56324(0x36bf),_0x3c6731='(-|\x5c+)?\x5cd+(\x5c.\x5cd+|\x5c/\x5cd+)?((d|e|f|l|s|D|E|F|L|S)(\x5c+|-)?\x5cd+)?',_0x139d25={'className':'literal','begin':_0xd56324(0x2909)},_0x5d8ed3={'className':_0xd56324(0x4a80),'variants':[{'begin':_0x3c6731,'relevance':0x0},{'begin':_0xd56324(0x8fc)},{'begin':'#(o|O)[0-7]+(/[0-7]+)?'},{'begin':_0xd56324(0x3f70)},{'begin':_0xd56324(0x467c)+_0x3c6731+'\x20+'+_0x3c6731,'end':'\x5c)'}]},_0x4c0d7e=_0x381b8d[_0xd56324(0x46a1)](_0x381b8d['QUOTE_STRING_MODE'],{'illegal':null}),_0x1028eb=_0x381b8d[_0xd56324(0x4e4f)](';','$',{'relevance':0x0}),_0x5630ba={'begin':'\x5c*','end':'\x5c*'},_0x22d467={'className':_0xd56324(0x239b),'begin':'[:&]'+_0x4a967a},_0x105461={'begin':_0x4a967a,'relevance':0x0},_0x5ad524={'begin':_0x35b4be},_0x261b53={'contains':[_0x5d8ed3,_0x4c0d7e,_0x5630ba,_0x22d467,{'begin':'\x5c(','end':'\x5c)','contains':[_0xd56324(0x4454),_0x139d25,_0x4c0d7e,_0x5d8ed3,_0x105461]},_0x105461],'variants':[{'begin':'[\x27`]\x5c(','end':'\x5c)'},{'begin':_0xd56324(0x4a97),'end':'\x5c)','keywords':{'name':_0xd56324(0x3567)}},{'begin':'\x27'+_0x35b4be}]},_0x257fc3={'variants':[{'begin':'\x27'+_0x4a967a},{'begin':'#\x27'+_0x4a967a+_0xd56324(0x4568)+_0x4a967a+')*'}]},_0x4ed52e={'begin':'\x5c(\x5cs*','end':'\x5c)'},_0x3ba94f={'endsWithParent':!0x0,'relevance':0x0};return 
_0x4ed52e[_0xd56324(0x2b31)]=[{'className':'name','variants':[{'begin':_0x4a967a,'relevance':0x0},{'begin':_0x35b4be}]},_0x3ba94f],_0x3ba94f['contains']=[_0x261b53,_0x257fc3,_0x4ed52e,_0x139d25,_0x5d8ed3,_0x4c0d7e,_0x1028eb,_0x5630ba,_0x22d467,_0x5ad524,_0x105461],{'name':_0xd56324(0x4ab3),'illegal':/\S/,'contains':[_0x5d8ed3,_0x381b8d['SHEBANG'](),_0x139d25,_0x4c0d7e,_0x1028eb,_0x261b53,_0x257fc3,_0x4ed52e,_0x105461]};};},0x1acb:_0x3fac89=>{const _0x1d8a3f=a0_0x11e7;_0x3fac89[_0x1d8a3f(0x474c)]=function(_0x49b114){const _0x416a81=_0x1d8a3f,_0x26325b={'className':_0x416a81(0x3362),'variants':[{'begin':_0x416a81(0x445)},{'begin':_0x416a81(0x3656)}],'relevance':0x0},_0x3a51ae=[_0x49b114['C_BLOCK_COMMENT_MODE'],_0x49b114['HASH_COMMENT_MODE'],_0x49b114[_0x416a81(0x4e4f)]('--','$'),_0x49b114[_0x416a81(0x4e4f)](_0x416a81(0x3ecc),'$')],_0x3efcf9=_0x49b114['inherit'](_0x49b114[_0x416a81(0x2029)],{'variants':[{'begin':_0x416a81(0x584)},{'begin':'\x5cb_[a-z0-9\x5c-]+'}]}),_0x3e7e61=_0x49b114[_0x416a81(0x46a1)](_0x49b114['TITLE_MODE'],{'begin':_0x416a81(0x3e97)});return{'name':_0x416a81(0xf19),'case_insensitive':!0x1,'keywords':{'keyword':_0x416a81(0x436a),'literal':_0x416a81(0x2964),'built_in':_0x416a81(0xc15)},'contains':[_0x26325b,{'className':_0x416a81(0x1357),'begin':'\x5cbend\x5csif\x5cb'},{'className':_0x416a81(0x14b2),'beginKeywords':_0x416a81(0x14b2),'end':'$','contains':[_0x26325b,_0x3e7e61,_0x49b114[_0x416a81(0xa4c)],_0x49b114[_0x416a81(0x291b)],_0x49b114['BINARY_NUMBER_MODE'],_0x49b114[_0x416a81(0xd12)],_0x3efcf9]},{'className':'function','begin':_0x416a81(0x3634),'end':'$','keywords':'end','contains':[_0x3e7e61,_0x3efcf9],'relevance':0x0},{'beginKeywords':_0x416a81(0x1a80),'end':'$','contains':[_0x26325b,_0x3e7e61,_0x49b114[_0x416a81(0xa4c)],_0x49b114['QUOTE_STRING_MODE'],_0x49b114[_0x416a81(0xed7)],_0x49b114[_0x416a81(0xd12)],_0x3efcf9]},{'className':'meta','variants':[{'begin':_0x416a81(0x3fc4),'relevance':0xa},{'begin':_0x416a81(0x260a)},{'begin':'\x5c?>'}]},_0
x49b114['APOS_STRING_MODE'],_0x49b114[_0x416a81(0x291b)],_0x49b114[_0x416a81(0xed7)],_0x49b114['C_NUMBER_MODE'],_0x3efcf9]['concat'](_0x3a51ae),'illegal':_0x416a81(0x5013)};};},0x171a:_0xa5c054=>{const _0x287229=a0_0x11e7,_0x273445=['as','in','of','if','for',_0x287229(0x552),_0x287229(0x37b2),_0x287229(0x469d),'new',_0x287229(0x14b2),'do',_0x287229(0xdfd),_0x287229(0x27d6),_0x287229(0x3d4),_0x287229(0x4e10),_0x287229(0x31a3),'instanceof',_0x287229(0x2aa7),_0x287229(0x383),_0x287229(0x2e7e),_0x287229(0x3d23),_0x287229(0x422b),_0x287229(0x857),'continue',_0x287229(0x3368),_0x287229(0x5be),_0x287229(0x1e61),_0x287229(0x5075),'const',_0x287229(0x1390),_0x287229(0x2085),_0x287229(0x16c2),'await',_0x287229(0x2c7c),_0x287229(0x331),_0x287229(0x27e6),_0x287229(0x2bb9),_0x287229(0x4428)],_0x45c32a=[_0x287229(0x4022),_0x287229(0x3984),'null','undefined','NaN','Infinity'],_0x5000b1=[][_0x287229(0x1d1d)]([_0x287229(0x2a87),_0x287229(0x162d),_0x287229(0x2da),_0x287229(0x4f79),'require',_0x287229(0x474c),'eval','isFinite','isNaN',_0x287229(0x42de),_0x287229(0x3cfb),_0x287229(0x2b42),_0x287229(0x25a3),'encodeURI',_0x287229(0x2c10),_0x287229(0x4e2c),_0x287229(0x254e)],[_0x287229(0x108b),_0x287229(0x2ac5),_0x287229(0x3bc9),_0x287229(0x2e8a),_0x287229(0x37c3),_0x287229(0x448e),'Number',_0x287229(0x4cd1),_0x287229(0x3327),_0x287229(0xae9),_0x287229(0x4b6a),'Float32Array',_0x287229(0x3ae8),_0x287229(0x927),_0x287229(0x15b2),_0x287229(0x5091),'Int16Array',_0x287229(0x26ec),_0x287229(0x323e),_0x287229(0x4399),'BigInt64Array',_0x287229(0x17e5),_0x287229(0x34d5),_0x287229(0x4a59),_0x287229(0x1968),_0x287229(0x32f2),_0x287229(0x3304),_0x287229(0x593),'Atomics',_0x287229(0x35d6),_0x287229(0x9c3),_0x287229(0x431b),_0x287229(0x2122),'GeneratorFunction',_0x287229(0x4265),_0x287229(0x373),_0x287229(0x190f),_0x287229(0xcef),_0x287229(0x1d53)],['Error','EvalError',_0x287229(0x1b9d),'RangeError','ReferenceError','SyntaxError',_0x287229(0x1a6c),'URIError']);_0xa5c054[_0x287229(0x474c)]=function(_0xf
5f6e){const _0xdd66bb=_0x287229,_0xb5435={'keyword':_0x273445[_0xdd66bb(0x1d1d)]([_0xdd66bb(0xaf5),_0xdd66bb(0x26b1),'until',_0xdd66bb(0x110b),'of','by',_0xdd66bb(0x191b),_0xdd66bb(0x2663),'or','is',_0xdd66bb(0x4a63),'not','it',_0xdd66bb(0x3a9c),_0xdd66bb(0x313e),_0xdd66bb(0x27e6),'to',_0xdd66bb(0x16de),_0xdd66bb(0x85d),_0xdd66bb(0x2e7e),_0xdd66bb(0x44d8),_0xdd66bb(0x50dc),_0xdd66bb(0x144e),_0xdd66bb(0x4833),_0xdd66bb(0x4d8c),_0xdd66bb(0x4f7b),_0xdd66bb(0x1012),'__bind','__indexOf']),'literal':_0x45c32a[_0xdd66bb(0x1d1d)]([_0xdd66bb(0x1df8),'no','on',_0xdd66bb(0x2422),'it','that','void']),'built_in':_0x5000b1[_0xdd66bb(0x1d1d)]([_0xdd66bb(0x5018),_0xdd66bb(0x4957)])},_0x328a73=_0xdd66bb(0x1487),_0x316fe7=_0xf5f6e['inherit'](_0xf5f6e[_0xdd66bb(0x2029)],{'begin':_0x328a73}),_0x3a8d21={'className':'subst','begin':/#\{/,'end':/\}/,'keywords':_0xb5435},_0x1696a9={'className':_0xdd66bb(0x2ad6),'begin':/#[A-Za-z$_]/,'end':/(?:-[0-9A-Za-z$_]|[0-9A-Za-z$_])*/,'keywords':_0xb5435},_0x41a101=[_0xf5f6e[_0xdd66bb(0xed7)],{'className':_0xdd66bb(0x4a80),'begin':_0xdd66bb(0xc4a),'relevance':0x0,'starts':{'end':'(\x5cs*/)?','relevance':0x0}},{'className':_0xdd66bb(0x2431),'variants':[{'begin':/'''/,'end':/'''/,'contains':[_0xf5f6e[_0xdd66bb(0x4a76)]]},{'begin':/'/,'end':/'/,'contains':[_0xf5f6e[_0xdd66bb(0x4a76)]]},{'begin':/"""/,'end':/"""/,'contains':[_0xf5f6e[_0xdd66bb(0x4a76)],_0x3a8d21,_0x1696a9]},{'begin':/"/,'end':/"/,'contains':[_0xf5f6e['BACKSLASH_ESCAPE'],_0x3a8d21,_0x1696a9]},{'begin':/\\/,'end':/(\s|$)/,'excludeEnd':!0x0}]},{'className':_0xdd66bb(0x4d1d),'variants':[{'begin':'//','end':_0xdd66bb(0x286),'contains':[_0x3a8d21,_0xf5f6e[_0xdd66bb(0x2bbe)]]},{'begin':/\/(?![ *])(\\.|[^\\\n])*?\/[gim]*(?=\W)/}]},{'begin':'@'+_0x328a73},{'begin':'``','end':'``','excludeBegin':!0x0,'excludeEnd':!0x0,'subLanguage':_0xdd66bb(0x45ac)}];_0x3a8d21[_0xdd66bb(0x2b31)]=_0x41a101;const 
_0x466aa5={'className':_0xdd66bb(0xddd),'begin':'\x5c(','returnBegin':!0x0,'contains':[{'begin':/\(/,'end':/\)/,'keywords':_0xb5435,'contains':['self'][_0xdd66bb(0x1d1d)](_0x41a101)}]},_0x5a1494={'variants':[{'match':[/class\s+/,_0x328a73,/\s+extends\s+/,_0x328a73]},{'match':[/class\s+/,_0x328a73]}],'scope':{0x2:_0xdd66bb(0x19e4),0x4:_0xdd66bb(0x3235)},'keywords':_0xb5435};return{'name':_0xdd66bb(0x15d1),'aliases':['ls'],'keywords':_0xb5435,'illegal':/\/\*/,'contains':_0x41a101[_0xdd66bb(0x1d1d)]([_0xf5f6e[_0xdd66bb(0x4e4f)](_0xdd66bb(0x1212),_0xdd66bb(0x222b)),_0xf5f6e['HASH_COMMENT_MODE'],{'begin':_0xdd66bb(0x1059)},{'className':_0xdd66bb(0x14b2),'contains':[_0x316fe7,_0x466aa5],'returnBegin':!0x0,'variants':[{'begin':'('+_0x328a73+_0xdd66bb(0x4db0),'end':_0xdd66bb(0x206c)},{'begin':'('+_0x328a73+_0xdd66bb(0x4821),'end':_0xdd66bb(0x4618)},{'begin':'('+_0x328a73+_0xdd66bb(0x1cd9),'end':'!?[-~]{1,2}>\x5c*?'}]},_0x5a1494,{'begin':_0x328a73+':','end':':','returnBegin':!0x0,'returnEnd':!0x0,'relevance':0x0}])};};},0x18fe:_0x9fd909=>{_0x9fd909['exports']=function(_0x5247d6){const 
_0x17e035=a0_0x11e7,_0x3b17ac=_0x5247d6[_0x17e035(0x41d2)],_0x5596d0=/([-a-zA-Z$._][\w$.-]*)/,_0x4c6b1b={'className':_0x17e035(0x3362),'variants':[{'begin':_0x3b17ac['concat'](/%/,_0x5596d0)},{'begin':/%\d+/},{'begin':/#\d+/}]},_0x45ff87={'className':_0x17e035(0x4685),'variants':[{'begin':_0x3b17ac[_0x17e035(0x1d1d)](/@/,_0x5596d0)},{'begin':/@\d+/},{'begin':_0x3b17ac[_0x17e035(0x1d1d)](/!/,_0x5596d0)},{'begin':_0x3b17ac[_0x17e035(0x1d1d)](/!\d+/,_0x5596d0)},{'begin':/!\d+/}]};return{'name':_0x17e035(0x567),'keywords':'begin\x20end\x20true\x20false\x20declare\x20define\x20global\x20constant\x20private\x20linker_private\x20internal\x20available_externally\x20linkonce\x20linkonce_odr\x20weak\x20weak_odr\x20appending\x20dllimport\x20dllexport\x20common\x20default\x20hidden\x20protected\x20extern_weak\x20external\x20thread_local\x20zeroinitializer\x20undef\x20null\x20to\x20tail\x20target\x20triple\x20datalayout\x20volatile\x20nuw\x20nsw\x20nnan\x20ninf\x20nsz\x20arcp\x20fast\x20exact\x20inbounds\x20align\x20addrspace\x20section\x20alias\x20module\x20asm\x20sideeffect\x20gc\x20dbg\x20linker_private_weak\x20attributes\x20blockaddress\x20initialexec\x20localdynamic\x20localexec\x20prefix\x20unnamed_addr\x20ccc\x20fastcc\x20coldcc\x20x86_stdcallcc\x20x86_fastcallcc\x20arm_apcscc\x20arm_aapcscc\x20arm_aapcs_vfpcc\x20ptx_device\x20ptx_kernel\x20intel_ocl_bicc\x20msp430_intrcc\x20spir_func\x20spir_kernel\x20x86_64_sysvcc\x20x86_64_win64cc\x20x86_thiscallcc\x20cc\x20c\x20signext\x20zeroext\x20inreg\x20sret\x20nounwind\x20noreturn\x20noalias\x20nocapture\x20byval\x20nest\x20readnone\x20readonly\x20inlinehint\x20noinline\x20alwaysinline\x20optsize\x20ssp\x20sspreq\x20noredzone\x20noimplicitfloat\x20naked\x20builtin\x20cold\x20nobuiltin\x20noduplicate\x20nonlazybind\x20optnone\x20returns_twice\x20sanitize_address\x20sanitize_memory\x20sanitize_thread\x20sspstrong\x20uwtable\x20returned\x20type\x20opaque\x20eq\x20ne\x20slt\x20sgt\x20sle\x20sge\x20ult\x20ugt\x20ule\x20uge\x20oeq\x20
one\x20olt\x20ogt\x20ole\x20oge\x20ord\x20uno\x20ueq\x20une\x20x\x20acq_rel\x20acquire\x20alignstack\x20atomic\x20catch\x20cleanup\x20filter\x20inteldialect\x20max\x20min\x20monotonic\x20nand\x20personality\x20release\x20seq_cst\x20singlethread\x20umax\x20umin\x20unordered\x20xchg\x20add\x20fadd\x20sub\x20fsub\x20mul\x20fmul\x20udiv\x20sdiv\x20fdiv\x20urem\x20srem\x20frem\x20shl\x20lshr\x20ashr\x20and\x20or\x20xor\x20icmp\x20fcmp\x20phi\x20call\x20trunc\x20zext\x20sext\x20fptrunc\x20fpext\x20uitofp\x20sitofp\x20fptoui\x20fptosi\x20inttoptr\x20ptrtoint\x20bitcast\x20addrspacecast\x20select\x20va_arg\x20ret\x20br\x20switch\x20invoke\x20unwind\x20unreachable\x20indirectbr\x20landingpad\x20resume\x20malloc\x20alloca\x20free\x20load\x20store\x20getelementptr\x20extractelement\x20insertelement\x20shufflevector\x20getresult\x20extractvalue\x20insertvalue\x20atomicrmw\x20cmpxchg\x20fence\x20argmemonly\x20double','contains':[{'className':'type','begin':/\bi\d+(?=\s|\b)/},_0x5247d6[_0x17e035(0x4e4f)](/;\s*$/,null,{'relevance':0x0}),_0x5247d6[_0x17e035(0x4e4f)](/;/,/$/),{'className':_0x17e035(0x2431),'begin':/"/,'end':/"/,'contains':[{'className':_0x17e035(0x2825),'match':/\\\d\d/}]},_0x45ff87,{'className':_0x17e035(0xa25),'relevance':0x0,'begin':/,/},{'className':_0x17e035(0x1182),'relevance':0x0,'begin':/=/},_0x4c6b1b,{'className':_0x17e035(0x239b),'variants':[{'begin':/^\s*[a-z]+:/}],'relevance':0x0},{'className':'number','variants':[{'begin':/[su]?0[xX][KMLHR]?[a-fA-F0-9]+/},{'begin':/[-+]?\d+(?:[.]\d+)?(?:[eE][-+]?\d+(?:[.]\d+)?)?/}],'relevance':0x0}]};};},0x9b0:_0x50a954=>{const _0x3b607a=a0_0x11e7;_0x50a954[_0x3b607a(0x474c)]=function(_0x448a28){const 
_0x1a9393=_0x3b607a,_0x5056e5={'className':_0x1a9393(0x2431),'begin':'\x22','end':'\x22','contains':[{'className':_0x1a9393(0x2ad6),'begin':/\\[tn"\\]/}]},_0x1fd5cc={'className':_0x1a9393(0x4a80),'relevance':0x0,'begin':_0x448a28['C_NUMBER_RE']};return{'name':_0x1a9393(0x1965),'illegal':':','contains':[_0x5056e5,{'className':'comment','variants':[_0x448a28[_0x1a9393(0x4e4f)]('//','$'),_0x448a28['COMMENT'](_0x1a9393(0x4f94),_0x1a9393(0x1820))],'relevance':0x0},_0x1fd5cc,{'className':_0x1a9393(0x69d),'variants':[{'begin':'\x5cb(state|default)\x5cb'},{'begin':_0x1a9393(0x690)}]},{'className':_0x1a9393(0x43a),'begin':'\x5cb(ll(AgentInExperience|(Create|DataSize|Delete|KeyCount|Keys|Read|Update)KeyValue|GetExperience(Details|ErrorMessage)|ReturnObjectsBy(ID|Owner)|Json(2List|[GS]etValue|ValueType)|Sin|Cos|Tan|Atan2|Sqrt|Pow|Abs|Fabs|Frand|Floor|Ceil|Round|Vec(Mag|Norm|Dist)|Rot(Between|2(Euler|Fwd|Left|Up))|(Euler|Axes)2Rot|Whisper|(Region|Owner)?Say|Shout|Listen(Control|Remove)?|Sensor(Repeat|Remove)?|Detected(Name|Key|Owner|Type|Pos|Vel|Grab|Rot|Group|LinkNumber)|Die|Ground|Wind|([GS]et)(AnimationOverride|MemoryLimit|PrimMediaParams|ParcelMusicURL|Object(Desc|Name)|PhysicsMaterial|Status|Scale|Color|Alpha|Texture|Pos|Rot|Force|Torque)|ResetAnimationOverride|(Scale|Offset|Rotate)Texture|(Rot)?Target(Remove)?|(Stop)?MoveToTarget|Apply(Rotational)?Impulse|Set(KeyframedMotion|ContentType|RegionPos|(Angular)?Velocity|Buoyancy|HoverHeight|ForceAndTorque|TimerEvent|ScriptState|Damage|TextureAnim|Sound(Queueing|Radius)|Vehicle(Type|(Float|Vector|Rotation)Param)|(Touch|Sit)?Text|Camera(Eye|At)Offset|PrimitiveParams|ClickAction|Link(Alpha|Color|PrimitiveParams(Fast)?|Texture(Anim)?|Camera|Media)|RemoteScriptAccessPin|PayPrice|LocalRot)|ScaleByFactor|Get((Max|Min)ScaleFactor|ClosestNavPoint|StaticPath|SimStats|Env|PrimitiveParams|Link(PrimitiveParams|Number(OfSides)?|Key|Name|Media)|HTTPHeader|FreeURLs|Object(Details|PermMask|PrimCount)|Parcel(MaxPrims|Details|Prim(Count|Owners))
|Attached(List)?|(SPMax|Free|Used)Memory|Region(Name|TimeDilation|FPS|Corner|AgentCount)|Root(Position|Rotation)|UnixTime|(Parcel|Region)Flags|(Wall|GMT)clock|SimulatorHostname|BoundingBox|GeometricCenter|Creator|NumberOf(Prims|NotecardLines|Sides)|Animation(List)?|(Camera|Local)(Pos|Rot)|Vel|Accel|Omega|Time(stamp|OfDay)|(Object|CenterOf)?Mass|MassMKS|Energy|Owner|(Owner)?Key|SunDirection|Texture(Offset|Scale|Rot)|Inventory(Number|Name|Key|Type|Creator|PermMask)|Permissions(Key)?|StartParameter|List(Length|EntryType)|Date|Agent(Size|Info|Language|List)|LandOwnerAt|NotecardLine|Script(Name|State))|(Get|Reset|GetAndReset)Time|PlaySound(Slave)?|LoopSound(Master|Slave)?|(Trigger|Stop|Preload)Sound|((Get|Delete)Sub|Insert)String|To(Upper|Lower)|Give(InventoryList|Money)|RezObject|(Stop)?LookAt|Sleep|CollisionFilter|(Take|Release)Controls|DetachFromAvatar|AttachToAvatar(Temp)?|InstantMessage|(GetNext)?Email|StopHover|MinEventDelay|RotLookAt|String(Length|Trim)|(Start|Stop)Animation|TargetOmega|Request(Experience)?Permissions|(Create|Break)Link|BreakAllLinks|(Give|Remove)Inventory|Water|PassTouches|Request(Agent|Inventory)Data|TeleportAgent(Home|GlobalCoords)?|ModifyLand|CollisionSound|ResetScript|MessageLinked|PushObject|PassCollisions|AxisAngle2Rot|Rot2(Axis|Angle)|A(cos|sin)|AngleBetween|AllowInventoryDrop|SubStringIndex|List2(CSV|Integer|Json|Float|String|Key|Vector|Rot|List(Strided)?)|DeleteSubList|List(Statistics|Sort|Randomize|(Insert|Find|Replace)List)|EdgeOfWorld|AdjustSoundVolume|Key2Name|TriggerSoundLimited|EjectFromLand|(CSV|ParseString)2List|OverMyLand|SameGroup|UnSit|Ground(Slope|Normal|Contour)|GroundRepel|(Set|Remove)VehicleFlags|SitOnLink|(AvatarOn)?(Link)?SitTarget|Script(Danger|Profiler)|Dialog|VolumeDetect|ResetOtherScript|RemoteLoadScriptPin|(Open|Close)RemoteDataChannel|SendRemoteData|RemoteDataReply|(Integer|String)ToBase64|XorBase64|Log(10)?|Base64To(String|Integer)|ParseStringKeepNulls|RezAtRoot|RequestSimulatorData|ForceMouselook|(Load|Release|(E
|Une)scape)URL|ParcelMedia(CommandList|Query)|ModPow|MapDestination|(RemoveFrom|AddTo|Reset)Land(Pass|Ban)List|(Set|Clear)CameraParams|HTTP(Request|Response)|TextBox|DetectedTouch(UV|Face|Pos|(N|Bin)ormal|ST)|(MD5|SHA1|DumpList2)String|Request(Secure)?URL|Clear(Prim|Link)Media|(Link)?ParticleSystem|(Get|Request)(Username|DisplayName)|RegionSayTo|CastRay|GenerateKey|TransferLindenDollars|ManageEstateAccess|(Create|Delete)Character|ExecCharacterCmd|Evade|FleeFrom|NavigateTo|PatrolPoints|Pursue|UpdateCharacter|WanderWithin))\x5cb'},{'className':_0x1a9393(0x2706),'variants':[{'begin':'\x5cb(PI|TWO_PI|PI_BY_TWO|DEG_TO_RAD|RAD_TO_DEG|SQRT2)\x5cb'},{'begin':_0x1a9393(0x1c51)},{'begin':_0x1a9393(0x429)},{'begin':_0x1a9393(0x9b4)},{'begin':_0x1a9393(0x472f)},{'begin':_0x1a9393(0x34cf)}]},{'className':_0x1a9393(0xcfc),'begin':_0x1a9393(0x4a3b)}]};};},0xf21:_0x311363=>{const _0x23d7eb=a0_0x11e7;_0x311363[_0x23d7eb(0x474c)]=function(_0x12c743){const _0x297214=_0x23d7eb,_0x3b8bef=_0x297214(0x9aa),_0x539fcb=_0x297214(0x3a7d),_0x532b73={'begin':_0x3b8bef,'end':_0x539fcb,'contains':[_0x297214(0x4454)]},_0x56795a=[_0x12c743[_0x297214(0x4e4f)](_0x297214(0x2581)+_0x3b8bef+')','$'),_0x12c743[_0x297214(0x4e4f)]('--'+_0x3b8bef,_0x539fcb,{'contains':[_0x532b73],'relevance':0xa})];return{'name':'Lua','keywords':{'$pattern':_0x12c743[_0x297214(0x206e)],'literal':_0x297214(0x3ebb),'keyword':_0x297214(0x48d6),'built_in':_0x297214(0x23f1)},'contains':_0x56795a[_0x297214(0x1d1d)]([{'className':_0x297214(0x14b2),'beginKeywords':'function','end':'\x5c)','contains':[_0x12c743[_0x297214(0x46a1)](_0x12c743[_0x297214(0x2029)],{'begin':_0x297214(0x40b8)}),{'className':_0x297214(0xddd),'begin':'\x5c(','endsWithParent':!0x0,'contains':_0x56795a}][_0x297214(0x1d1d)](_0x56795a)},_0x12c743[_0x297214(0xd12)],_0x12c743[_0x297214(0xa4c)],_0x12c743[_0x297214(0x291b)],{'className':_0x297214(0x2431),'begin':_0x3b8bef,'end':_0x539fcb,'contains':[_0x532b73],'relevance':0x5}])};};},0x1df3:_0x22ddbf=>{const 
_0x50bb7d=a0_0x11e7;_0x22ddbf[_0x50bb7d(0x474c)]=function(_0x3a6050){const _0x59195e=_0x50bb7d,_0x2a5753={'className':_0x59195e(0x3362),'variants':[{'begin':_0x59195e(0x523)+_0x3a6050[_0x59195e(0x206e)]+'\x5c)','contains':[_0x3a6050[_0x59195e(0x4a76)]]},{'begin':/\$[@%{const _0x405bd6=a0_0x11e7;_0x1baafb[_0x405bd6(0x474c)]=function(_0x173526){const _0x28072c=_0x405bd6,_0x2d0a27={'begin':/<\/?[A-Za-z_]/,'end':'>','subLanguage':_0x28072c(0x2655),'relevance':0x0},_0x22f64d={'variants':[{'begin':/\[.+?\]\[.*?\]/,'relevance':0x0},{'begin':/\[.+?\]\(((data|javascript|mailto):|(?:http|ftp)s?:\/\/).*?\)/,'relevance':0x2},{'begin':_0x173526[_0x28072c(0x41d2)][_0x28072c(0x1d1d)](/\[.+?\]\(/,/[A-Za-z][A-Za-z0-9+.-]*/,/:\/\/.*?\)/),'relevance':0x2},{'begin':/\[.+?\]\([./?&#].*?\)/,'relevance':0x1},{'begin':/\[.*?\]\(.*?\)/,'relevance':0x0}],'returnBegin':!0x0,'contains':[{'match':/\[(?=\])/},{'className':_0x28072c(0x2431),'relevance':0x0,'begin':'\x5c[','end':'\x5c]','excludeBegin':!0x0,'returnEnd':!0x0},{'className':'link','relevance':0x0,'begin':_0x28072c(0x42dd),'end':'\x5c)','excludeBegin':!0x0,'excludeEnd':!0x0},{'className':_0x28072c(0x239b),'relevance':0x0,'begin':_0x28072c(0xa65),'end':'\x5c]','excludeBegin':!0x0,'excludeEnd':!0x0}]},_0x1929f1={'className':_0x28072c(0x40c9),'contains':[],'variants':[{'begin':/_{2}(?!\s)/,'end':/_{2}/},{'begin':/\*{2}(?!\s)/,'end':/\*{2}/}]},_0x54bbe5={'className':_0x28072c(0x2297),'contains':[],'variants':[{'begin':/\*(?![*\s])/,'end':/\*/},{'begin':/_(?![_\s])/,'end':/_/,'relevance':0x0}]},_0x5e2d0f=_0x173526['inherit'](_0x1929f1,{'contains':[]}),_0x133ba9=_0x173526[_0x28072c(0x46a1)](_0x54bbe5,{'contains':[]});_0x1929f1[_0x28072c(0x2b31)]['push'](_0x133ba9),_0x54bbe5[_0x28072c(0x2b31)]['push'](_0x5e2d0f);let _0x4d8958=[_0x2d0a27,_0x22f64d];return[_0x1929f1,_0x54bbe5,_0x5e2d0f,_0x133ba9]['forEach'](_0x3a2dc9=>{const 
_0x1e7809=_0x28072c;_0x3a2dc9[_0x1e7809(0x2b31)]=_0x3a2dc9[_0x1e7809(0x2b31)][_0x1e7809(0x1d1d)](_0x4d8958);}),_0x4d8958=_0x4d8958[_0x28072c(0x1d1d)](_0x1929f1,_0x54bbe5),{'name':_0x28072c(0x2fd5),'aliases':['md','mkdown',_0x28072c(0xd2f)],'contains':[{'className':'section','variants':[{'begin':_0x28072c(0x717),'end':'$','contains':_0x4d8958},{'begin':_0x28072c(0x8c7),'contains':[{'begin':_0x28072c(0x45cc)},{'begin':'^','end':'\x5cn','contains':_0x4d8958}]}]},_0x2d0a27,{'className':_0x28072c(0x6af),'begin':_0x28072c(0x599),'end':_0x28072c(0x16cc),'excludeEnd':!0x0},_0x1929f1,_0x54bbe5,{'className':_0x28072c(0x3567),'begin':_0x28072c(0x3a6a),'contains':_0x4d8958,'end':'$'},{'className':_0x28072c(0x4948),'variants':[{'begin':_0x28072c(0xf74)},{'begin':_0x28072c(0x3fa0)},{'begin':_0x28072c(0x11f5),'end':_0x28072c(0x4084)},{'begin':_0x28072c(0x15da),'end':_0x28072c(0x405)},{'begin':_0x28072c(0x2b10)},{'begin':'(?=^(\x20{4}|\x5ct))','contains':[{'begin':'^(\x20{4}|\x5ct)','end':'(\x5cn)$'}],'relevance':0x0}]},{'begin':_0x28072c(0x51b8),'end':'$'},_0x22f64d,{'begin':/^\[[^\n]+\]:/,'returnBegin':!0x0,'contains':[{'className':_0x28072c(0x239b),'begin':/\[/,'end':/\]/,'excludeBegin':!0x0,'excludeEnd':!0x0},{'className':_0x28072c(0x4b32),'begin':/:\s*/,'end':/$/,'excludeBegin':!0x0}]}]};};},0x1937:_0x596dd6=>{const 
_0x51a0f0=a0_0x11e7,_0xa47c78=[_0x51a0f0(0x1046),_0x51a0f0(0x1449),_0x51a0f0(0x4433),'AbortKernels','AbortProtect',_0x51a0f0(0x2915),_0x51a0f0(0xb67),_0x51a0f0(0x1bc9),_0x51a0f0(0x284e),_0x51a0f0(0x4ee7),'Absolute',_0x51a0f0(0x1ecf),_0x51a0f0(0x36af),_0x51a0f0(0x107c),_0x51a0f0(0x4eac),_0x51a0f0(0x4ef7),_0x51a0f0(0x1a9d),_0x51a0f0(0x61e),_0x51a0f0(0x3994),_0x51a0f0(0x3aa6),_0x51a0f0(0x4bb4),_0x51a0f0(0x3cb9),'AccountingForm',_0x51a0f0(0x1533),_0x51a0f0(0x2c80),_0x51a0f0(0x4210),_0x51a0f0(0x1b34),_0x51a0f0(0x16e6),_0x51a0f0(0x2f42),_0x51a0f0(0x164c),_0x51a0f0(0x2b66),_0x51a0f0(0x661),'AcousticSoundHardValue',_0x51a0f0(0x356a),_0x51a0f0(0x511),_0x51a0f0(0x2e95),'ActionMenuBox',_0x51a0f0(0x3ad9),_0x51a0f0(0x95e),_0x51a0f0(0x2874),'ActiveClassification','ActiveClassificationObject','ActiveItem','ActivePrediction',_0x51a0f0(0x32e6),'ActiveStyle','AcyclicGraphQ',_0x51a0f0(0x163d),'AddSides','AddTo',_0x51a0f0(0x828),_0x51a0f0(0x11fe),'AdjacencyGraph',_0x51a0f0(0x4563),_0x51a0f0(0x29bc),_0x51a0f0(0x2f91),'Adjugate',_0x51a0f0(0x3c5a),_0x51a0f0(0x4f3a),_0x51a0f0(0x3389),_0x51a0f0(0x4a40),_0x51a0f0(0x2d53),_0x51a0f0(0x2eb0),'AffineStateSpaceModel',_0x51a0f0(0x4b73),_0x51a0f0(0x2aac),'AggregatedEntityClass','AggregationLayer','AircraftData',_0x51a0f0(0x14a0),'AirPressureData',_0x51a0f0(0x47a7),_0x51a0f0(0x3d5d),_0x51a0f0(0x23d2),_0x51a0f0(0x3712),_0x51a0f0(0xde0),_0x51a0f0(0x3107),_0x51a0f0(0x2b25),_0x51a0f0(0x28f9),'AlgebraicIntegerQ',_0x51a0f0(0x4478),_0x51a0f0(0x3e64),'AlgebraicNumberNorm',_0x51a0f0(0x207f),'AlgebraicNumberTrace','AlgebraicRules',_0x51a0f0(0x2f29),'Algebraics',_0x51a0f0(0x39f2),_0x51a0f0(0x4e5),'AlignmentMarker',_0x51a0f0(0x262e),'All',_0x51a0f0(0x3836),'AllowChatServices',_0x51a0f0(0x5188),_0x51a0f0(0x35a),'AllowedDimensions','AllowedFrequencyRange',_0x51a0f0(0x1145),_0x51a0f0(0xb9c),_0x51a0f0(0x12cf),_0x51a0f0(0x13bf),_0x51a0f0(0x2707),_0x51a0f0(0x42ba),_0x51a0f0(0x1eda),_0x51a0f0(0x28d4),'AllowVersionUpdate','AllTrue',_0x51a0f0(0x2905),_0x51a0f0(0x29f7),_
0x51a0f0(0x18a4),_0x51a0f0(0x2a16),_0x51a0f0(0x3d7a),_0x51a0f0(0x229),'AlternatingGroup','AlternativeHypothesis',_0x51a0f0(0x4de),_0x51a0f0(0x24ef),'AmbientLight',_0x51a0f0(0x4e2b),_0x51a0f0(0xf9b),_0x51a0f0(0x4232),_0x51a0f0(0x246),_0x51a0f0(0x82e),_0x51a0f0(0x1c02),_0x51a0f0(0x3717),_0x51a0f0(0x360b),_0x51a0f0(0xf82),_0x51a0f0(0xb55),_0x51a0f0(0x4501),'AngerJ','AngleBisector',_0x51a0f0(0x48cf),_0x51a0f0(0x370),_0x51a0f0(0x1d85),_0x51a0f0(0x3ba1),_0x51a0f0(0xae1),_0x51a0f0(0x4c05),_0x51a0f0(0x4390),_0x51a0f0(0x3c27),_0x51a0f0(0x1df6),_0x51a0f0(0x4523),_0x51a0f0(0x1b58),_0x51a0f0(0x50c6),_0x51a0f0(0x344c),'AnimationRunning',_0x51a0f0(0x4449),'AnimationTimeIndex',_0x51a0f0(0x2c8c),_0x51a0f0(0x2724),_0x51a0f0(0x4602),_0x51a0f0(0x511e),'AnimatorElements',_0x51a0f0(0x1c35),_0x51a0f0(0x1689),'AnnotationDelete',_0x51a0f0(0x2dc5),_0x51a0f0(0x4ad5),_0x51a0f0(0x1217),'Annuity',_0x51a0f0(0x4ca6),_0x51a0f0(0x1792),_0x51a0f0(0x29a2),_0x51a0f0(0x2919),'AnomalyDetectorFunction',_0x51a0f0(0xa28),_0x51a0f0(0x3547),_0x51a0f0(0x4340),'AntihermitianMatrixQ','Antisymmetric',_0x51a0f0(0x4ace),'Antonyms','AnyOrder',_0x51a0f0(0x2ad4),_0x51a0f0(0x27ac),_0x51a0f0(0x3d36),'ApartSquareFree',_0x51a0f0(0x45b7),'Appearance','AppearanceElements',_0x51a0f0(0xf8a),_0x51a0f0(0x862),_0x51a0f0(0x433c),'AppendCheck',_0x51a0f0(0xa9a),'AppendTo',_0x51a0f0(0x2c05),'Apply','ApplyReaction',_0x51a0f0(0x21ff),_0x51a0f0(0x2420),_0x51a0f0(0x4e55),_0x51a0f0(0x3a7b),_0x51a0f0(0x2524),_0x51a0f0(0x24b8),_0x51a0f0(0x951),_0x51a0f0(0x3e2e),'ArcCurvature','ARCHProcess',_0x51a0f0(0x3fec),_0x51a0f0(0x3275),_0x51a0f0(0x402e),'ArcSin',_0x51a0f0(0x410e),'ArcSinh',_0x51a0f0(0x72c),_0x51a0f0(0x3ac8),_0x51a0f0(0x4fac),_0x51a0f0(0x1ae4),_0x51a0f0(0x447),_0x51a0f0(0x2fde),'ArgumentCountQ',_0x51a0f0(0x2c0),_0x51a0f0(0x4d7a),_0x51a0f0(0xf4d),'ARMAProcess',_0x51a0f0(0x1bd9),_0x51a0f0(0x2aed),'ARProcess',_0x51a0f0(0x4b6a),_0x51a0f0(0x72d),_0x51a0f0(0x4a44),_0x51a0f0(0x2155),_0x51a0f0(0x143c),_0x51a0f0(0x430d),_0x51a0f0(0x19ae),_0x5
1a0f0(0x291),_0x51a0f0(0x2e9c),_0x51a0f0(0x12dd),_0x51a0f0(0x3467),'ArrayResample','ArrayReshape',_0x51a0f0(0x4e0a),_0x51a0f0(0x45a4),_0x51a0f0(0x43f1),_0x51a0f0(0x3b2c),'ArrowBox',_0x51a0f0(0x350b),_0x51a0f0(0x34ac),'Ask',_0x51a0f0(0x47fd),_0x51a0f0(0x3f65),'AskDisplay','AskedQ',_0x51a0f0(0x2475),_0x51a0f0(0x417e),_0x51a0f0(0x1d6f),_0x51a0f0(0x410a),_0x51a0f0(0x25f9),_0x51a0f0(0x4ef5),_0x51a0f0(0x1b86),'AssessmentFunction',_0x51a0f0(0x3dee),_0x51a0f0(0x3631),'Association',_0x51a0f0(0x5229),_0x51a0f0(0x131b),_0x51a0f0(0x1e9e),'AssociationThread',_0x51a0f0(0x3bab),'Assuming','Assumptions',_0x51a0f0(0x1503),_0x51a0f0(0x1374),'AstroCenter',_0x51a0f0(0x1076),'AstroGraphics',_0x51a0f0(0x370f),'AstroGridLinesStyle',_0x51a0f0(0x1e65),_0x51a0f0(0x4e08),_0x51a0f0(0x180a),_0x51a0f0(0x4759),_0x51a0f0(0x33ce),_0x51a0f0(0x4b42),_0x51a0f0(0x1c5a),_0x51a0f0(0x4c3d),_0x51a0f0(0x3a47),_0x51a0f0(0x4580),_0x51a0f0(0x1d65),_0x51a0f0(0x878),_0x51a0f0(0x4dff),_0x51a0f0(0x352e),_0x51a0f0(0x369c),_0x51a0f0(0x1fb4),_0x51a0f0(0x476e),_0x51a0f0(0x1e38),_0x51a0f0(0x3bf2),_0x51a0f0(0x3685),_0x51a0f0(0x1c42),'AsymptoticRSolveValue',_0x51a0f0(0x2d65),_0x51a0f0(0x19d3),_0x51a0f0(0x3ea7),'AsynchronousTaskObject',_0x51a0f0(0x3873),_0x51a0f0(0xbaf),_0x51a0f0(0x46a7),_0x51a0f0(0x2adb),_0x51a0f0(0x16f1),_0x51a0f0(0x4905),_0x51a0f0(0x3266),'AtomList','AtomQ','AttachCell',_0x51a0f0(0x176f),_0x51a0f0(0x48b3),_0x51a0f0(0x2fda),_0x51a0f0(0x2c21),_0x51a0f0(0x28c5),'AudioAnnotate','AudioAnnotationLookup','AudioBlockMap',_0x51a0f0(0x2669),_0x51a0f0(0x4173),_0x51a0f0(0x4a6f),_0x51a0f0(0x40ce),_0x51a0f0(0x2ffd),'AudioChannelSeparate',_0x51a0f0(0x3ca1),_0x51a0f0(0x30ce),_0x51a0f0(0x2328),_0x51a0f0(0x5131),'AudioDistance',_0x51a0f0(0x5152),_0x51a0f0(0x3bbc),'AudioFrequencyShift','AudioGenerator',_0x51a0f0(0x270e),'AudioInputDevice','AudioInsert','AudioInstanceQ',_0x51a0f0(0x3cf6),_0x51a0f0(0x1225),_0x51a0f0(0x481d),'AudioLength','AudioLocalMeasurements',_0x51a0f0(0x17e9),_0x51a0f0(0x37fd),_0x51a0f0(0x206),'AudioNo
rmalize',_0x51a0f0(0x1233),'AudioOverlay','AudioPad',_0x51a0f0(0x27ae),_0x51a0f0(0x31fc),'AudioPause','AudioPitchShift',_0x51a0f0(0x28f8),_0x51a0f0(0x3b67),_0x51a0f0(0x273e),_0x51a0f0(0x4088),_0x51a0f0(0x4c04),_0x51a0f0(0x2c4e),_0x51a0f0(0x1a08),'AudioReverse',_0x51a0f0(0x48d3),'AudioSpectralMap','AudioSpectralTransformation',_0x51a0f0(0x4925),_0x51a0f0(0x1b3b),_0x51a0f0(0x3f18),'AudioStreams',_0x51a0f0(0xb47),_0x51a0f0(0x48d4),'AudioTrackSelection',_0x51a0f0(0x1207),_0x51a0f0(0x1a10),_0x51a0f0(0x139a),_0x51a0f0(0x1714),_0x51a0f0(0x925),_0x51a0f0(0x3eaa),'AuthenticationDialog',_0x51a0f0(0x293c),'Autocomplete',_0x51a0f0(0x77d),_0x51a0f0(0x4808),_0x51a0f0(0xbe4),'AutoDelete',_0x51a0f0(0x3d13),_0x51a0f0(0xae3),'AutoIndent','AutoIndentSpacings',_0x51a0f0(0x3b0b),'AutoloadPath',_0x51a0f0(0x8f4),'Automatic',_0x51a0f0(0x4ce3),'AutoMultiplicationSymbol',_0x51a0f0(0x3135),_0x51a0f0(0x1f2d),'AutoOpenPalettes',_0x51a0f0(0x247b),_0x51a0f0(0x18a6),_0x51a0f0(0x33f3),_0x51a0f0(0x473a),'AutorunSequencing',_0x51a0f0(0x2d9a),_0x51a0f0(0x1b95),_0x51a0f0(0x24f9),_0x51a0f0(0xdcc),'AutoStyleWords',_0x51a0f0(0x4d7),_0x51a0f0(0x2100),_0x51a0f0(0x33d7),_0x51a0f0(0x4f4a),_0x51a0f0(0x3249),'AxesStyle',_0x51a0f0(0x4cbf),'Axis','Axis3DBox',_0x51a0f0(0x304),'AxisBox',_0x51a0f0(0x37ff),_0x51a0f0(0x357),_0x51a0f0(0x278d),'AxisStyle',_0x51a0f0(0x1ef6),_0x51a0f0(0x122f),_0x51a0f0(0x1568),_0x51a0f0(0x864),_0x51a0f0(0x600),_0x51a0f0(0x3e99),_0x51a0f0(0xb50),'BackFaceSurfaceAppearance',_0x51a0f0(0x122e),_0x51a0f0(0x19bc),'BackgroundAppearance',_0x51a0f0(0x2e4d),'Backslash',_0x51a0f0(0x4635),'Backward','Ball',_0x51a0f0(0x46ef),_0x51a0f0(0x2498),'BandstopFilter','BarabasiAlbertGraphDistribution',_0x51a0f0(0x1161),_0x51a0f0(0x4945),_0x51a0f0(0x14c5),_0x51a0f0(0x27e7),_0x51a0f0(0x3951),_0x51a0f0(0x200c),_0x51a0f0(0x1b8f),'BarnesG',_0x51a0f0(0x319f),'BarSpacing',_0x51a0f0(0x1ff0),_0x51a0f0(0x3287),_0x51a0f0(0x1061),'BaseEncode',_0x51a0f0(0x1fa6),'Baseline',_0x51a0f0(0x25d6),_0x51a0f0(0x1fc4),_0x51a0f0(0x4cb
5),_0x51a0f0(0x25c2),_0x51a0f0(0x2d3),'BatesDistribution',_0x51a0f0(0x17c8),_0x51a0f0(0x2c5f),_0x51a0f0(0xdb1),_0x51a0f0(0x3a27),'BayesianMinimizationObject',_0x51a0f0(0x2dcd),_0x51a0f0(0x14e3),_0x51a0f0(0x3628),'Before',_0x51a0f0(0x4519),_0x51a0f0(0xea3),_0x51a0f0(0x348a),_0x51a0f0(0x2b50),'BellY',_0x51a0f0(0x3128),_0x51a0f0(0x3a3),'BeniniDistribution',_0x51a0f0(0x1b85),_0x51a0f0(0x2345),'BernoulliB','BernoulliDistribution',_0x51a0f0(0x3694),'BernoulliProcess','BernsteinBasis','BesagL',_0x51a0f0(0x278),_0x51a0f0(0x1727),_0x51a0f0(0x4a7e),'BesselJZero',_0x51a0f0(0xabf),'BesselY',_0x51a0f0(0x4356),_0x51a0f0(0x15e5),_0x51a0f0(0x18e2),'BetaDistribution',_0x51a0f0(0x4260),_0x51a0f0(0x11c6),_0x51a0f0(0xa11),_0x51a0f0(0x216d),_0x51a0f0(0x1cf8),_0x51a0f0(0x1c01),_0x51a0f0(0x2bb6),_0x51a0f0(0x398d),_0x51a0f0(0x1e62),_0x51a0f0(0x26ab),_0x51a0f0(0x43c2),_0x51a0f0(0x1381),_0x51a0f0(0x1e6e),_0x51a0f0(0x31fd),_0x51a0f0(0x2191),_0x51a0f0(0x42c6),'Binarize',_0x51a0f0(0x1b39),'BinaryDistance',_0x51a0f0(0xbad),_0x51a0f0(0x1a33),_0x51a0f0(0x4815),_0x51a0f0(0x49f),_0x51a0f0(0x4d11),_0x51a0f0(0x210d),'BinCounts','BinLists',_0x51a0f0(0x27ea),_0x51a0f0(0x4cc1),_0x51a0f0(0x2e9),_0x51a0f0(0x2079),_0x51a0f0(0x1b74),_0x51a0f0(0x383a),'BiorthogonalSplineWavelet',_0x51a0f0(0x3d03),_0x51a0f0(0x1938),_0x51a0f0(0x1e0d),'BioSequenceInstances',_0x51a0f0(0x26d4),_0x51a0f0(0x2111),_0x51a0f0(0x4249),_0x51a0f0(0x8e3),_0x51a0f0(0x33fa),_0x51a0f0(0x342d),_0x51a0f0(0x1c2c),_0x51a0f0(0x3772),'BirnbaumImportance',_0x51a0f0(0x8c2),_0x51a0f0(0x3fc8),'BitClear',_0x51a0f0(0x4264),'BitLength',_0x51a0f0(0x4070),'BitOr',_0x51a0f0(0x19e6),_0x51a0f0(0x27fa),_0x51a0f0(0x121b),_0x51a0f0(0x1527),_0x51a0f0(0x641),_0x51a0f0(0x1e80),'BiweightMidvariance','Black',_0x51a0f0(0x20ed),_0x51a0f0(0x117f),_0x51a0f0(0x24ea),'Blank',_0x51a0f0(0xd02),'BlankNullSequence',_0x51a0f0(0x5087),_0x51a0f0(0x4f78),_0x51a0f0(0x9bf),_0x51a0f0(0x1071),_0x51a0f0(0x36eb),_0x51a0f0(0x3b1f),'BlockchainContractValue',_0x51a0f0(0x110c),_0x51a0f0(0x4a
3c),'BlockchainKeyEncode','BlockchainPut',_0x51a0f0(0x3f4e),'BlockchainTransaction',_0x51a0f0(0x1760),'BlockchainTransactionSign',_0x51a0f0(0x39e7),_0x51a0f0(0x2650),'BlockLowerTriangularMatrix','BlockMap',_0x51a0f0(0x50e2),'BlockUpperTriangularMatrix',_0x51a0f0(0x2f72),'BlomqvistBetaTest',_0x51a0f0(0xf2c),'Blur',_0x51a0f0(0x4675),_0x51a0f0(0x275c),_0x51a0f0(0x3215),_0x51a0f0(0x30a1),_0x51a0f0(0x365),_0x51a0f0(0x3324),_0x51a0f0(0x4c0f),_0x51a0f0(0x4dab),'BondList',_0x51a0f0(0x2c07),_0x51a0f0(0x1b04),_0x51a0f0(0xf55),_0x51a0f0(0x380a),'BooleanConvert','BooleanCountingFunction',_0x51a0f0(0x26f3),_0x51a0f0(0x377e),_0x51a0f0(0x1890),'BooleanMinimize','BooleanMinterms',_0x51a0f0(0x626),_0x51a0f0(0x733),_0x51a0f0(0x3ce6),'BooleanStrings','BooleanTable',_0x51a0f0(0x2e24),_0x51a0f0(0x30bb),_0x51a0f0(0x237e),_0x51a0f0(0x2db3),_0x51a0f0(0x215b),'BoundaryDiscretizeGraphics',_0x51a0f0(0x47f0),'BoundaryMesh',_0x51a0f0(0x21fa),_0x51a0f0(0x3c24),_0x51a0f0(0x4f3e),_0x51a0f0(0x4102),_0x51a0f0(0x59a),'Bounds',_0x51a0f0(0x4f5),_0x51a0f0(0x1079),_0x51a0f0(0x2e3d),_0x51a0f0(0x29e0),_0x51a0f0(0x27ce),_0x51a0f0(0x2114),_0x51a0f0(0x334c),_0x51a0f0(0x1f17),_0x51a0f0(0x623),_0x51a0f0(0x2521),_0x51a0f0(0x203d),_0x51a0f0(0x43f7),_0x51a0f0(0x13ae),_0x51a0f0(0xb2d),'BoxRotation',_0x51a0f0(0x30a4),'BoxStyle','BoxWhiskerChart','Bra',_0x51a0f0(0x3195),'BraKet','BrayCurtisDistance','BreadthFirstScan',_0x51a0f0(0x464b),_0x51a0f0(0x4ade),_0x51a0f0(0x1243),'BroadcastStationData',_0x51a0f0(0x12b8),'BrownForsytheTest','BrownianBridgeProcess',_0x51a0f0(0xb93),'BSplineBasis','BSplineCurve',_0x51a0f0(0x4ea2),'BSplineCurve3DBoxOptions',_0x51a0f0(0x33f),'BSplineCurveBoxOptions',_0x51a0f0(0x1b6c),'BSplineSurface',_0x51a0f0(0x2072),_0x51a0f0(0x4d78),'BubbleChart',_0x51a0f0(0x13cf),_0x51a0f0(0x3c0),_0x51a0f0(0x4931),_0x51a0f0(0x202d),_0x51a0f0(0x3842),_0x51a0f0(0xf0b),_0x51a0f0(0x45af),_0x51a0f0(0x1bf0),_0x51a0f0(0x50f3),_0x51a0f0(0x3e70),_0x51a0f0(0x3bc0),_0x51a0f0(0x523d),_0x51a0f0(0x4713),_0x51a0f0(0x6c7),_0x
51a0f0(0x682),_0x51a0f0(0x1e76),_0x51a0f0(0x3e3c),'ButtonEvaluator',_0x51a0f0(0x35cb),_0x51a0f0(0x5105),'ButtonFunction',_0x51a0f0(0x4641),_0x51a0f0(0x18a2),_0x51a0f0(0x1201),_0x51a0f0(0x4cfb),_0x51a0f0(0x712),_0x51a0f0(0x2567),_0x51a0f0(0x51a8),_0x51a0f0(0x1888),_0x51a0f0(0xa2b),_0x51a0f0(0x3d42),'ByteArrayFormatQ',_0x51a0f0(0x3479),'ByteArrayToString',_0x51a0f0(0x3b91),'ByteOrdering','C',_0x51a0f0(0x27b5),'CacheGraphics',_0x51a0f0(0x231b),_0x51a0f0(0x3aef),_0x51a0f0(0x168a),'CalendarType',_0x51a0f0(0x275f),_0x51a0f0(0x43fd),_0x51a0f0(0x4afa),_0x51a0f0(0xaa5),_0x51a0f0(0x202),_0x51a0f0(0x17a9),'CancelButton',_0x51a0f0(0xd94),_0x51a0f0(0x5115),_0x51a0f0(0x19b1),_0x51a0f0(0x455b),_0x51a0f0(0x4278),'CanonicalName',_0x51a0f0(0x1726),_0x51a0f0(0x2b9a),_0x51a0f0(0x24ee),_0x51a0f0(0x26d7),'Canvas','Cap',_0x51a0f0(0x4f7c),_0x51a0f0(0x147e),'Capitalize',_0x51a0f0(0x4d6),_0x51a0f0(0x4305),_0x51a0f0(0x162b),'CardinalBSplineBasis',_0x51a0f0(0x1e04),_0x51a0f0(0x5f0),'CarlsonRD',_0x51a0f0(0x4561),'CarlsonRF',_0x51a0f0(0x32e4),_0x51a0f0(0x2563),_0x51a0f0(0x1ad3),_0x51a0f0(0x4fa2),_0x51a0f0(0x4f9a),'CaseOrdering','Cases',_0x51a0f0(0x16f9),_0x51a0f0(0x1f0f),_0x51a0f0(0x25c3),'Cast',_0x51a0f0(0x1bd8),_0x51a0f0(0x4888),_0x51a0f0(0x274a),_0x51a0f0(0x16e1),'Catenate',_0x51a0f0(0x1b22),_0x51a0f0(0x2529),_0x51a0f0(0x4676),'CauchyPointProcess',_0x51a0f0(0x3df4),'CayleyGraph','CDF',_0x51a0f0(0x2d59),_0x51a0f0(0x1307),_0x51a0f0(0x126e),'Ceiling',_0x51a0f0(0x5e8),_0x51a0f0(0x549),_0x51a0f0(0x3f95),_0x51a0f0(0x4801),'CellBoundingBox',_0x51a0f0(0x4c6b),_0x51a0f0(0x4878),_0x51a0f0(0x3ab2),_0x51a0f0(0x2aa5),_0x51a0f0(0x3f7f),_0x51a0f0(0x40f6),_0x51a0f0(0x3321),_0x51a0f0(0x3cb4),_0x51a0f0(0x1b6d),_0x51a0f0(0xf09),_0x51a0f0(0x1c38),_0x51a0f0(0x99d),_0x51a0f0(0xfb5),_0x51a0f0(0xeee),_0x51a0f0(0x3518),_0x51a0f0(0x11a7),_0x51a0f0(0x38ca),'CellFrameLabelMargins','CellFrameLabels',_0x51a0f0(0x181e),'CellFrameStyle','CellGroup',_0x51a0f0(0x26e8),_0x51a0f0(0x2232),_0x51a0f0(0x47c0),_0x51a0f0(0x9cb),_0x51
a0f0(0x46ec),'CellInsertionPointCell',_0x51a0f0(0x51e6),_0x51a0f0(0x2fdd),_0x51a0f0(0x36a3),_0x51a0f0(0x2cf9),_0x51a0f0(0x28d7),_0x51a0f0(0x42f0),_0x51a0f0(0x1450),_0x51a0f0(0x50c9),_0x51a0f0(0x4b07),_0x51a0f0(0x2f95),_0x51a0f0(0x196e),'Cells',_0x51a0f0(0x3ad2),'CellStyle',_0x51a0f0(0xf0d),_0x51a0f0(0x5264),_0x51a0f0(0x323b),_0x51a0f0(0x40cf),'CensoredDistribution',_0x51a0f0(0x2c59),_0x51a0f0(0x55b),_0x51a0f0(0x1d69),'CenterDot',_0x51a0f0(0x403),_0x51a0f0(0x1350),_0x51a0f0(0x6b4),_0x51a0f0(0x1089),_0x51a0f0(0x25a8),_0x51a0f0(0x17de),_0x51a0f0(0x47d0),_0x51a0f0(0x245b),_0x51a0f0(0x4c84),_0x51a0f0(0x4a72),_0x51a0f0(0x1692),'ChannelBrokerAction',_0x51a0f0(0x32a7),'ChannelHistoryLength',_0x51a0f0(0x2cb5),_0x51a0f0(0x29e),_0x51a0f0(0xb9a),_0x51a0f0(0x36a0),_0x51a0f0(0x2bf4),_0x51a0f0(0x2aa4),_0x51a0f0(0x451f),_0x51a0f0(0x1dc3),_0x51a0f0(0x4a70),_0x51a0f0(0x93c),_0x51a0f0(0x2044),_0x51a0f0(0x389a),_0x51a0f0(0x11ff),_0x51a0f0(0x409e),'CharacteristicFunction','CharacteristicPolynomial',_0x51a0f0(0x44bf),_0x51a0f0(0x4208),_0x51a0f0(0x1b96),_0x51a0f0(0x2555),_0x51a0f0(0x211),_0x51a0f0(0xb59),_0x51a0f0(0x4e1b),_0x51a0f0(0x2401),'ChartElements',_0x51a0f0(0x31da),_0x51a0f0(0x414c),'ChartLegends',_0x51a0f0(0x3f17),'Chebyshev1FilterModel','Chebyshev2FilterModel',_0x51a0f0(0x3de0),'ChebyshevT',_0x51a0f0(0x2656),_0x51a0f0(0x1f14),_0x51a0f0(0x1f72),_0x51a0f0(0x36b5),_0x51a0f0(0x341f),_0x51a0f0(0x43ce),_0x51a0f0(0x2ebb),'CheckboxBox',_0x51a0f0(0x4a34),_0x51a0f0(0x2a74),_0x51a0f0(0x2865),'ChemicalFormula',_0x51a0f0(0xe06),_0x51a0f0(0x4e40),'ChessboardDistance','ChiDistribution','ChineseRemainder','ChiSquareDistribution',_0x51a0f0(0x51f9),_0x51a0f0(0x151e),_0x51a0f0(0x1782),_0x51a0f0(0x48cc),_0x51a0f0(0xea2),_0x51a0f0(0x18ca),_0x51a0f0(0x431a),_0x51a0f0(0x38c7),_0x51a0f0(0x1104),_0x51a0f0(0x236a),'CircleMinus',_0x51a0f0(0x3999),'CirclePoints',_0x51a0f0(0x2b37),_0x51a0f0(0x1d1f),_0x51a0f0(0x426b),_0x51a0f0(0x14df),_0x51a0f0(0x1b14),_0x51a0f0(0x1232),'CircularRealMatrixDistribution',_0x51
a0f0(0x4181),_0x51a0f0(0x2970),'Circumsphere','CityData',_0x51a0f0(0x4827),_0x51a0f0(0x30d0),_0x51a0f0(0x4958),_0x51a0f0(0x4f0a),'Classify',_0x51a0f0(0x12cc),_0x51a0f0(0x11c0),_0x51a0f0(0x2df8),_0x51a0f0(0x2e8f),_0x51a0f0(0x3ffc),_0x51a0f0(0x843),_0x51a0f0(0x46c2),'ClebschGordan',_0x51a0f0(0x3022),'ClickToCopy',_0x51a0f0(0x4734),_0x51a0f0(0x167f),'ClipboardNotebook',_0x51a0f0(0x3e84),_0x51a0f0(0x520a),_0x51a0f0(0x46a6),_0x51a0f0(0x3c1d),_0x51a0f0(0x941),'Clock',_0x51a0f0(0x1d8c),_0x51a0f0(0x42d1),_0x51a0f0(0xe7f),_0x51a0f0(0xcec),_0x51a0f0(0x17e7),_0x51a0f0(0x1ed6),_0x51a0f0(0x15e3),_0x51a0f0(0x27f5),_0x51a0f0(0x1db2),_0x51a0f0(0x24f1),_0x51a0f0(0x2d7b),'CloudConnect',_0x51a0f0(0x4159),_0x51a0f0(0x18c0),'CloudDirectory',_0x51a0f0(0xfc7),_0x51a0f0(0x2c4),_0x51a0f0(0x4325),_0x51a0f0(0x47a0),_0x51a0f0(0x4bdb),_0x51a0f0(0x273d),'CloudGet',_0x51a0f0(0x4667),'CloudLoggingData',_0x51a0f0(0x4fe6),_0x51a0f0(0x379a),_0x51a0f0(0xf3d),_0x51a0f0(0x289),_0x51a0f0(0x183f),_0x51a0f0(0x1615),_0x51a0f0(0x471f),'CloudPut',_0x51a0f0(0x2276),_0x51a0f0(0x31f0),_0x51a0f0(0x445d),'CloudSubmit',_0x51a0f0(0x489e),_0x51a0f0(0x274f),_0x51a0f0(0x229e),_0x51a0f0(0x42d),_0x51a0f0(0x443a),_0x51a0f0(0x366c),_0x51a0f0(0x1d79),_0x51a0f0(0x184d),_0x51a0f0(0x3d1f),'Coarse',_0x51a0f0(0x4e7b),'Coefficient',_0x51a0f0(0x1b9e),_0x51a0f0(0x3fad),'CoefficientList',_0x51a0f0(0x34b8),_0x51a0f0(0x2c31),_0x51a0f0(0x3169),'CollinearPoints',_0x51a0f0(0x22de),_0x51a0f0(0xdb6),_0x51a0f0(0x4df8),_0x51a0f0(0xede),_0x51a0f0(0xf22),_0x51a0f0(0x3476),_0x51a0f0(0x4e59),_0x51a0f0(0x26e2),_0x51a0f0(0x156d),_0x51a0f0(0x5166),_0x51a0f0(0x494a),'ColorFunctionBinning',_0x51a0f0(0x22aa),_0x51a0f0(0x4bad),_0x51a0f0(0xab8),_0x51a0f0(0xefb),_0x51a0f0(0x15e1),_0x51a0f0(0x36b),_0x51a0f0(0x3353),_0x51a0f0(0x137c),'ColorRules',_0x51a0f0(0x43ba),_0x51a0f0(0x3784),'ColorSetter',_0x51a0f0(0x39a8),_0x51a0f0(0x13eb),_0x51a0f0(0x475b),'ColorsNear',_0x51a0f0(0x1430),'ColorToneMapping',_0x51a0f0(0x1188),_0x51a0f0(0x16f4),_0x51a0f0(0xa16),_0x51a
0f0(0x198c),'ColumnLines','ColumnsEqual','ColumnSpacings','ColumnWidths','CombinatorB',_0x51a0f0(0x49ec),_0x51a0f0(0x444c),_0x51a0f0(0x36d8),_0x51a0f0(0x38b3),_0x51a0f0(0x34bd),'CombinatorY',_0x51a0f0(0x4ad7),_0x51a0f0(0x4320),'CometData','CommonDefaultFormatTypes',_0x51a0f0(0x1845),_0x51a0f0(0x1401),_0x51a0f0(0x4d89),_0x51a0f0(0x3798),'CommunityBoundaryStyle',_0x51a0f0(0x215e),_0x51a0f0(0x4f6c),_0x51a0f0(0x519e),'CompanyData',_0x51a0f0(0xeb0),_0x51a0f0(0x33d1),_0x51a0f0(0x1f0d),_0x51a0f0(0x2412),_0x51a0f0(0x2ba),_0x51a0f0(0x39c9),'CompiledComponent',_0x51a0f0(0x4cde),_0x51a0f0(0x1105),_0x51a0f0(0x1ad9),_0x51a0f0(0x4149),_0x51a0f0(0xe6d),'CompilerEnvironmentAppend',_0x51a0f0(0x42b0),_0x51a0f0(0x102e),'CompilerOptions',_0x51a0f0(0x21cb),_0x51a0f0(0x4d34),'CompleteGraph',_0x51a0f0(0x3d9a),_0x51a0f0(0x4d5e),'CompleteKaryTree',_0x51a0f0(0x46b3),_0x51a0f0(0x1094),_0x51a0f0(0x1ba5),_0x51a0f0(0x4b27),'Complexes','ComplexExpand',_0x51a0f0(0x5084),_0x51a0f0(0x1155),'ComplexListPlot',_0x51a0f0(0x1f26),_0x51a0f0(0x48b8),'ComplexRegionPlot','ComplexStreamPlot',_0x51a0f0(0xada),_0x51a0f0(0x3596),_0x51a0f0(0x611),_0x51a0f0(0x4deb),_0x51a0f0(0x27b9),_0x51a0f0(0x2cdd),'CompositeQ',_0x51a0f0(0x410d),_0x51a0f0(0x343),_0x51a0f0(0x44f5),_0x51a0f0(0x3a65),'CompoundPoissonProcess',_0x51a0f0(0x1b8c),'Compress',_0x51a0f0(0x45d0),_0x51a0f0(0xfac),'ComputeUncertainty',_0x51a0f0(0x1a7e),'Condition',_0x51a0f0(0x2400),_0x51a0f0(0xed9),_0x51a0f0(0x212),_0x51a0f0(0x3a9f),'ConfidenceLevel',_0x51a0f0(0x513a),_0x51a0f0(0x245c),_0x51a0f0(0x44fe),_0x51a0f0(0x1eac),_0x51a0f0(0x4ccb),'ConfirmBy','ConfirmMatch','ConfirmQuiet',_0x51a0f0(0x2058),_0x51a0f0(0x10d6),_0x51a0f0(0x32f3),_0x51a0f0(0x17e2),_0x51a0f0(0x1e22),_0x51a0f0(0x111e),'ConicHullRegion3DBox',_0x51a0f0(0x2d83),_0x51a0f0(0x36e),_0x51a0f0(0x23da),_0x51a0f0(0x14dc),_0x51a0f0(0x12d5),_0x51a0f0(0x1fc9),'Conjunction',_0x51a0f0(0x27c),'ConnectedComponents',_0x51a0f0(0x14fd),'ConnectedGraphQ','ConnectedMeshComponents',_0x51a0f0(0x930),_0x51a0f0(0x131
7),_0x51a0f0(0x18a0),_0x51a0f0(0xcb1),_0x51a0f0(0x2332),_0x51a0f0(0xd1f),_0x51a0f0(0x24e1),_0x51a0f0(0x47f3),_0x51a0f0(0x866),_0x51a0f0(0x409f),_0x51a0f0(0x36f0),_0x51a0f0(0x3b19),_0x51a0f0(0x47ef),_0x51a0f0(0x3877),_0x51a0f0(0x2638),_0x51a0f0(0x1a38),_0x51a0f0(0x4d47),_0x51a0f0(0x9f6),'ConstellationData',_0x51a0f0(0x4869),_0x51a0f0(0x3561),_0x51a0f0(0x25ad),_0x51a0f0(0x1e68),_0x51a0f0(0x69b),'ContainsAny','ContainsExactly',_0x51a0f0(0x85f),_0x51a0f0(0x372c),_0x51a0f0(0x3da3),_0x51a0f0(0x193b),'ContentLocationFunction',_0x51a0f0(0x339a),_0x51a0f0(0x861),_0x51a0f0(0x2f03),_0x51a0f0(0x204d),'ContentSize',_0x51a0f0(0x1c6e),'ContextMenu',_0x51a0f0(0xd46),_0x51a0f0(0x478),_0x51a0f0(0x51ae),_0x51a0f0(0x1440),_0x51a0f0(0x2f9),_0x51a0f0(0x32a),_0x51a0f0(0x2d9d),_0x51a0f0(0x3509),_0x51a0f0(0xcc7),_0x51a0f0(0xa7c),_0x51a0f0(0x432d),_0x51a0f0(0xa3f),_0x51a0f0(0x1fef),_0x51a0f0(0x47c2),_0x51a0f0(0x10d9),_0x51a0f0(0x1927),'ContourLines',_0x51a0f0(0x44be),'ContourPlot3D',_0x51a0f0(0x116e),'ContourShading',_0x51a0f0(0x4dc7),_0x51a0f0(0x2d20),_0x51a0f0(0x51ed),'ContrastiveLossLayer',_0x51a0f0(0x3f99),_0x51a0f0(0x1cdb),_0x51a0f0(0x2848),_0x51a0f0(0xac0),_0x51a0f0(0x4d97),_0x51a0f0(0x49a),_0x51a0f0(0x4f3d),_0x51a0f0(0x3723),_0x51a0f0(0x2482),_0x51a0f0(0x1507),_0x51a0f0(0x3f7),'ControllerLinking',_0x51a0f0(0x4904),_0x51a0f0(0x4dd9),_0x51a0f0(0x4473),'ControllerState',_0x51a0f0(0x21ad),_0x51a0f0(0x2ba4),_0x51a0f0(0x43c0),_0x51a0f0(0x40a3),_0x51a0f0(0x3227),_0x51a0f0(0x2c47),_0x51a0f0(0x4898),_0x51a0f0(0x358a),_0x51a0f0(0x652),_0x51a0f0(0xcf9),'ConvexHullRegion','ConvexOptimization',_0x51a0f0(0x522a),_0x51a0f0(0x2642),_0x51a0f0(0x1419),'ConvolutionLayer',_0x51a0f0(0x2643),_0x51a0f0(0x2af8),_0x51a0f0(0x37d5),_0x51a0f0(0x46e0),_0x51a0f0(0xd5a),_0x51a0f0(0x409d),_0x51a0f0(0x3a18),_0x51a0f0(0x3f09),_0x51a0f0(0x37db),'CoordinateBoundsArray',_0x51a0f0(0x4d40),_0x51a0f0(0x1736),_0x51a0f0(0x409c),'CoordinateTransformData','CoplanarPoints',_0x51a0f0(0x518b),_0x51a0f0(0x43b7),_0x51a0f0(0x5b4),_0x
51a0f0(0x24db),_0x51a0f0(0x27c3),_0x51a0f0(0x2071),_0x51a0f0(0x1ff),_0x51a0f0(0xb8c),_0x51a0f0(0x4d60),_0x51a0f0(0x31a7),_0x51a0f0(0x5287),'CornerFilter',_0x51a0f0(0x3385),_0x51a0f0(0xbda),_0x51a0f0(0x279a),_0x51a0f0(0x36f6),_0x51a0f0(0x1575),_0x51a0f0(0x3ff8),_0x51a0f0(0x3ee9),_0x51a0f0(0x3f96),_0x51a0f0(0xdb8),'CosineWindow',_0x51a0f0(0x14fa),'Cot',_0x51a0f0(0x1666),_0x51a0f0(0x13fb),_0x51a0f0(0x1283),_0x51a0f0(0x2d52),_0x51a0f0(0x354d),'Count',_0x51a0f0(0xdd1),_0x51a0f0(0x22a6),'CounterAssignments',_0x51a0f0(0x4b77),'CounterBoxOptions','CounterClockwiseContourIntegral','CounterEvaluator',_0x51a0f0(0x2cba),_0x51a0f0(0x3ffb),_0x51a0f0(0x53a),_0x51a0f0(0x1c04),'CountRoots','CountryData',_0x51a0f0(0x4ddd),_0x51a0f0(0x1613),_0x51a0f0(0x13cd),_0x51a0f0(0x3c9c),'CovarianceFunction',_0x51a0f0(0x284d),_0x51a0f0(0x2bfa),'CoxModel',_0x51a0f0(0x3399),_0x51a0f0(0x3822),_0x51a0f0(0x38f8),_0x51a0f0(0x50d7),_0x51a0f0(0x2478),_0x51a0f0(0x344b),_0x51a0f0(0x297c),_0x51a0f0(0x10da),_0x51a0f0(0x5204),'CreateDataSystemModel','CreateDialog',_0x51a0f0(0x482e),_0x51a0f0(0x1f7b),_0x51a0f0(0x3863),_0x51a0f0(0x4fe8),_0x51a0f0(0x3da6),_0x51a0f0(0x3760),_0x51a0f0(0x2392),_0x51a0f0(0x3fc6),_0x51a0f0(0xf38),_0x51a0f0(0x2733),_0x51a0f0(0x51de),_0x51a0f0(0x3c7c),_0x51a0f0(0x7d8),_0x51a0f0(0xfda),_0x51a0f0(0x2eca),_0x51a0f0(0x2ec),_0x51a0f0(0x4008),'CriterionFunction',_0x51a0f0(0x2991),'CriticalitySuccessImportance',_0x51a0f0(0x3495),_0x51a0f0(0x12c4),'CrossEntropyLossLayer','CrossingCount',_0x51a0f0(0x353d),_0x51a0f0(0x1bda),_0x51a0f0(0x27a9),_0x51a0f0(0x4cec),_0x51a0f0(0x2da7),_0x51a0f0(0x1dbc),_0x51a0f0(0x18a3),'CSGRegionTree',_0x51a0f0(0x3410),_0x51a0f0(0x3e44),_0x51a0f0(0x4749),'Cubics',_0x51a0f0(0x2f75),'CuboidBox','CuboidBoxOptions',_0x51a0f0(0x47e1),_0x51a0f0(0x46c9),_0x51a0f0(0x1d01),'Cup',_0x51a0f0(0x366e),'Curl',_0x51a0f0(0x2b4),_0x51a0f0(0x2a0c),_0x51a0f0(0x1d21),_0x51a0f0(0x2b0a),_0x51a0f0(0x4106),_0x51a0f0(0x156c),_0x51a0f0(0x91e),'CurrentValue',_0x51a0f0(0xc54),_0x51a0f0(0x23b7),_0x
51a0f0(0x4fee),_0x51a0f0(0xa2f),_0x51a0f0(0x5095),_0x51a0f0(0x1026),_0x51a0f0(0x4edd),_0x51a0f0(0x766),'CyclicGroup',_0x51a0f0(0x1053),_0x51a0f0(0x217f),_0x51a0f0(0x3b34),_0x51a0f0(0x31df),_0x51a0f0(0x4b70),_0x51a0f0(0x155c),'D','DagumDistribution',_0x51a0f0(0x267e),_0x51a0f0(0x2fc6),'DampingFactor','Darker',_0x51a0f0(0x4e7c),_0x51a0f0(0x30ac),'DatabaseConnect',_0x51a0f0(0xc8b),_0x51a0f0(0x25b5),'Databin',_0x51a0f0(0x26cc),_0x51a0f0(0x517f),_0x51a0f0(0x50c2),'DatabinSubmit',_0x51a0f0(0x16e9),'DataCompression',_0x51a0f0(0x15d8),_0x51a0f0(0x3310),_0x51a0f0(0x4ceb),'Dataset','DatasetDisplayPanel',_0x51a0f0(0x425b),_0x51a0f0(0x2a63),_0x51a0f0(0xd7e),_0x51a0f0(0x448e),_0x51a0f0(0x4bf1),'Dated','DateDelimiters',_0x51a0f0(0x4b36),_0x51a0f0(0x3998),_0x51a0f0(0x3420),_0x51a0f0(0x263a),'DateGranularity',_0x51a0f0(0x3f6f),_0x51a0f0(0x1913),_0x51a0f0(0x2073),_0x51a0f0(0x197a),_0x51a0f0(0x12f5),_0x51a0f0(0x329b),_0x51a0f0(0xaa3),_0x51a0f0(0x190e),_0x51a0f0(0x152e),'DatePattern',_0x51a0f0(0x4f5c),_0x51a0f0(0x4197),_0x51a0f0(0x1516),_0x51a0f0(0x2e12),_0x51a0f0(0x1082),_0x51a0f0(0x21e9),_0x51a0f0(0x956),_0x51a0f0(0x4c78),'DateWithinQ',_0x51a0f0(0x3199),_0x51a0f0(0x3b4b),_0x51a0f0(0x4ecb),'DayCount',_0x51a0f0(0x1f61),_0x51a0f0(0x4548),_0x51a0f0(0x377a),_0x51a0f0(0x4aca),_0x51a0f0(0x4c75),_0x51a0f0(0xc7f),_0x51a0f0(0x2b2b),_0x51a0f0(0xe6b),_0x51a0f0(0x2f79),_0x51a0f0(0x4ba5),_0x51a0f0(0x651),'Debug',_0x51a0f0(0x257e),_0x51a0f0(0xbcf),_0x51a0f0(0x10e4),_0x51a0f0(0x3a06),_0x51a0f0(0xc43),'DeclareKnownSymbols','DeclarePackage',_0x51a0f0(0x1667),'DeconvolutionLayer',_0x51a0f0(0x23ae),'Decrypt',_0x51a0f0(0x70d),_0x51a0f0(0x625),'DeepSpaceProbeData',_0x51a0f0(0x22dd),_0x51a0f0(0x485d),_0x51a0f0(0x2201),_0x51a0f0(0xe94),'DefaultAxesStyle',_0x51a0f0(0x2170),_0x51a0f0(0x2758),_0x51a0f0(0x43da),_0x51a0f0(0x1544),'DefaultControlPlacement',_0x51a0f0(0x4035),_0x51a0f0(0x9ca),_0x51a0f0(0x18c1),_0x51a0f0(0x570),_0x51a0f0(0x3692),'DefaultFieldHintStyle',_0x51a0f0(0x34b1),_0x51a0f0(0x3282),_0x51a0f0(
0x101d),_0x51a0f0(0x2139),_0x51a0f0(0x1fff),_0x51a0f0(0x34eb),_0x51a0f0(0x28c9),_0x51a0f0(0x139d),'DefaultLabelStyle',_0x51a0f0(0x2be),_0x51a0f0(0x2b03),'DefaultNewCellStyle','DefaultNewInlineCellStyle',_0x51a0f0(0xd1c),_0x51a0f0(0x50b6),'DefaultOutputFormatType',_0x51a0f0(0xc47),'DefaultStyle',_0x51a0f0(0x390),_0x51a0f0(0x1545),_0x51a0f0(0x24e7),_0x51a0f0(0x415b),_0x51a0f0(0x3011),_0x51a0f0(0x1898),_0x51a0f0(0x5260),_0x51a0f0(0x3644),_0x51a0f0(0x4d3e),'DefineInputStreamMethod',_0x51a0f0(0x3fa3),_0x51a0f0(0x174c),_0x51a0f0(0x3096),_0x51a0f0(0x3a81),_0x51a0f0(0x9af),_0x51a0f0(0x3e26),'DegreeLexicographic','DegreeReverseLexicographic','DEigensystem','DEigenvalues',_0x51a0f0(0x1ba7),'Del',_0x51a0f0(0x5261),_0x51a0f0(0x4d0e),_0x51a0f0(0x1609),_0x51a0f0(0x1a57),_0x51a0f0(0x3c5f),_0x51a0f0(0x1ef5),_0x51a0f0(0x3c2a),_0x51a0f0(0x4ca0),'DeleteChannel',_0x51a0f0(0x1930),_0x51a0f0(0x48e8),_0x51a0f0(0x2ede),'DeleteDuplicates',_0x51a0f0(0x2d68),'DeleteElements','DeleteFile',_0x51a0f0(0xe61),_0x51a0f0(0x7e2),_0x51a0f0(0x11e9),_0x51a0f0(0x423a),_0x51a0f0(0x3ed9),_0x51a0f0(0x2679),_0x51a0f0(0x14c7),_0x51a0f0(0x3672),_0x51a0f0(0x4c87),_0x51a0f0(0x428b),'Delimiter',_0x51a0f0(0x2321),'DelimiterFlashTime',_0x51a0f0(0xd3d),_0x51a0f0(0xddf),_0x51a0f0(0x3d1b),_0x51a0f0(0x1b73),'Denominator',_0x51a0f0(0x43d2),_0x51a0f0(0x528),_0x51a0f0(0xc92),_0x51a0f0(0x348c),'DependentVariables',_0x51a0f0(0x309f),_0x51a0f0(0x13b0),_0x51a0f0(0x420b),_0x51a0f0(0x3f58),_0x51a0f0(0x627),'DerivativeFilter',_0x51a0f0(0x2b34),_0x51a0f0(0x283f),_0x51a0f0(0x28eb),_0x51a0f0(0x303a),'DestroyAfterEvaluation','Det',_0x51a0f0(0x61d),_0x51a0f0(0x24d2),_0x51a0f0(0x4a37),_0x51a0f0(0x7b9),_0x51a0f0(0xad2),_0x51a0f0(0x2e65),'DeviceOpenQ',_0x51a0f0(0xb21),_0x51a0f0(0x78f),_0x51a0f0(0x3f2c),_0x51a0f0(0x660),_0x51a0f0(0x500e),_0x51a0f0(0x328d),_0x51a0f0(0x2646),_0x51a0f0(0x2938),_0x51a0f0(0x3ecb),_0x51a0f0(0x1771),'DiacriticalPositioning','Diagonal','DiagonalizableMatrixQ',_0x51a0f0(0x1618),_0x51a0f0(0x1519),'Dialog','DialogI
ndent',_0x51a0f0(0x2831),_0x51a0f0(0x4727),_0x51a0f0(0x287d),'DialogProlog',_0x51a0f0(0x3642),_0x51a0f0(0x32a2),_0x51a0f0(0x2a7f),_0x51a0f0(0x29e8),_0x51a0f0(0x39c8),_0x51a0f0(0x46c6),_0x51a0f0(0x1e23),'DifferenceDelta',_0x51a0f0(0x23a5),_0x51a0f0(0x26c6),'DifferenceRoot',_0x51a0f0(0x4e1a),_0x51a0f0(0x264e),_0x51a0f0(0x3455),_0x51a0f0(0x22a7),_0x51a0f0(0x1e3b),_0x51a0f0(0x2876),_0x51a0f0(0x1392),_0x51a0f0(0x2023),_0x51a0f0(0x25ec),_0x51a0f0(0x3ea6),_0x51a0f0(0x59f),_0x51a0f0(0x312e),_0x51a0f0(0x3347),_0x51a0f0(0x352f),'DigitQ',_0x51a0f0(0x41a0),_0x51a0f0(0x4744),_0x51a0f0(0x14ec),_0x51a0f0(0x119d),_0x51a0f0(0x2218),'DimensionReduce',_0x51a0f0(0x25f4),_0x51a0f0(0x248f),_0x51a0f0(0x4ca4),_0x51a0f0(0x1d82),_0x51a0f0(0x47b3),_0x51a0f0(0x2d8a),_0x51a0f0(0x440c),_0x51a0f0(0x1979),'DirectedGraphQ',_0x51a0f0(0x2074),_0x51a0f0(0x28be),'DirectionalLight','Directive',_0x51a0f0(0x2fc7),_0x51a0f0(0x2aca),_0x51a0f0(0xc56),_0x51a0f0(0x142c),_0x51a0f0(0x50f2),_0x51a0f0(0x9c8),_0x51a0f0(0x31ca),_0x51a0f0(0x9ae),_0x51a0f0(0xec6),'DirichletEta',_0x51a0f0(0x3e9b),_0x51a0f0(0x497b),'DirichletTransform',_0x51a0f0(0x28ca),_0x51a0f0(0x207d),_0x51a0f0(0x62e),_0x51a0f0(0x156f),_0x51a0f0(0x3530),_0x51a0f0(0x469),'DiscreteDelta',_0x51a0f0(0x4917),_0x51a0f0(0x39a7),_0x51a0f0(0x31cb),_0x51a0f0(0x2e90),'DiscreteLQEstimatorGains',_0x51a0f0(0x2c1),_0x51a0f0(0x308e),'DiscreteMarkovProcess',_0x51a0f0(0x12f1),_0x51a0f0(0x2a2b),_0x51a0f0(0x26cd),_0x51a0f0(0x4f6b),_0x51a0f0(0x2769),_0x51a0f0(0x1d30),'DiscreteShift',_0x51a0f0(0x1aeb),_0x51a0f0(0x13a2),'DiscreteVariables',_0x51a0f0(0x459f),_0x51a0f0(0x34e7),'DiscreteWaveletTransform',_0x51a0f0(0x1964),_0x51a0f0(0x1ae0),'Discriminant',_0x51a0f0(0x1bb5),'Disjunction',_0x51a0f0(0x1ce0),_0x51a0f0(0x4cf4),'DiskBoxOptions',_0x51a0f0(0xd36),_0x51a0f0(0x26ef),_0x51a0f0(0x379c),'DispatchQ','DispersionEstimatorFunction','Display',_0x51a0f0(0x249a),_0x51a0f0(0x44a2),_0x51a0f0(0x207c),_0x51a0f0(0x18bb),_0x51a0f0(0x38b4),_0x51a0f0(0x49e),_0x51a0f0(0x4d8f),_0x51a0f0(0x
180e),_0x51a0f0(0x290f),_0x51a0f0(0x2861),_0x51a0f0(0x4e73),'DistanceFunction',_0x51a0f0(0x2c0a),_0x51a0f0(0x245f),_0x51a0f0(0x4699),_0x51a0f0(0x38d3),'DistributedContexts',_0x51a0f0(0x32b),_0x51a0f0(0x44c5),_0x51a0f0(0x2295),'DistributionFitTest',_0x51a0f0(0x312b),_0x51a0f0(0x3dbb),_0x51a0f0(0x17f2),_0x51a0f0(0x27c7),'Divergence',_0x51a0f0(0x47a2),_0x51a0f0(0x1047),_0x51a0f0(0x41bb),'DivideSides',_0x51a0f0(0xd4b),_0x51a0f0(0x47dd),'DivisorSigma',_0x51a0f0(0x5ac),'DMSList',_0x51a0f0(0x4f74),'Do','DockedCell',_0x51a0f0(0x46f0),_0x51a0f0(0x4224),'DocumentGeneratorInformation',_0x51a0f0(0x2afe),'DocumentGenerators',_0x51a0f0(0x4231),_0x51a0f0(0x12e3),_0x51a0f0(0x1454),'DomainRegistrationInformation',_0x51a0f0(0x3031),_0x51a0f0(0x13c2),_0x51a0f0(0x3308),_0x51a0f0(0x3c36),_0x51a0f0(0x1c87),_0x51a0f0(0x4b4b),_0x51a0f0(0x158e),_0x51a0f0(0x33ea),_0x51a0f0(0x1d13),_0x51a0f0(0x34f5),_0x51a0f0(0x14d8),_0x51a0f0(0x188c),_0x51a0f0(0x17af),_0x51a0f0(0x4c4a),_0x51a0f0(0x3e2),_0x51a0f0(0x465b),_0x51a0f0(0x3653),_0x51a0f0(0x41eb),_0x51a0f0(0x45c4),_0x51a0f0(0x6d1),_0x51a0f0(0x2d56),_0x51a0f0(0x6ab),_0x51a0f0(0x647),_0x51a0f0(0x46f7),'DoublyInfinite',_0x51a0f0(0x10ab),_0x51a0f0(0x2c92),_0x51a0f0(0x4bb8),_0x51a0f0(0xe08),_0x51a0f0(0x3e21),_0x51a0f0(0x99c),'DownLeftVector',_0x51a0f0(0xc51),_0x51a0f0(0x33bd),'DownRightVector',_0x51a0f0(0x24d7),'Downsample',_0x51a0f0(0x3771),_0x51a0f0(0x15b7),_0x51a0f0(0x4631),_0x51a0f0(0x46bf),_0x51a0f0(0x3158),_0x51a0f0(0x2649),_0x51a0f0(0x123b),'DrawFrontFaces',_0x51a0f0(0x35a6),_0x51a0f0(0x485c),_0x51a0f0(0x4db),_0x51a0f0(0x3533),_0x51a0f0(0x1efa),'DSolve',_0x51a0f0(0x17ef),_0x51a0f0(0x2137),'Dt',_0x51a0f0(0xbb4),_0x51a0f0(0x2a53),_0x51a0f0(0x4c86),'DualSystemsModel',_0x51a0f0(0x5053),_0x51a0f0(0x8e5),_0x51a0f0(0x542),'Duration',_0x51a0f0(0x449),_0x51a0f0(0x4220),_0x51a0f0(0x49e2),_0x51a0f0(0x54e),_0x51a0f0(0x2989),_0x51a0f0(0x1837),_0x51a0f0(0x22b5),'DynamicModule',_0x51a0f0(0x23e0),_0x51a0f0(0x173f),_0x51a0f0(0x2871),'DynamicModuleValues',_0x51a0f0
(0x11c4),_0x51a0f0(0x35f4),'DynamicReference',_0x51a0f0(0xfa4),_0x51a0f0(0x6cb),'DynamicWrapper',_0x51a0f0(0x1517),_0x51a0f0(0x31d9),'E',_0x51a0f0(0xfa7),'EarthquakeData',_0x51a0f0(0x160e),_0x51a0f0(0x2fb4),_0x51a0f0(0x127e),_0x51a0f0(0x1920),_0x51a0f0(0x669),_0x51a0f0(0x4dd),_0x51a0f0(0x1042),_0x51a0f0(0x4857),'EdgeBetweennessCentrality',_0x51a0f0(0x2bcc),_0x51a0f0(0x302a),_0x51a0f0(0x3549),_0x51a0f0(0xdc7),'EdgeConnectivity',_0x51a0f0(0x4d8d),_0x51a0f0(0x21da),_0x51a0f0(0x4cd0),_0x51a0f0(0x389),_0x51a0f0(0x5234),_0x51a0f0(0x31a8),_0x51a0f0(0x292a),_0x51a0f0(0x2f2f),_0x51a0f0(0x1867),_0x51a0f0(0x406c),_0x51a0f0(0x3c46),_0x51a0f0(0x3f6d),_0x51a0f0(0x24c5),_0x51a0f0(0x224),_0x51a0f0(0x2f28),_0x51a0f0(0xdbb),_0x51a0f0(0xd14),_0x51a0f0(0x1475),_0x51a0f0(0x343f),'EdgeShapeFunction',_0x51a0f0(0xd5e),_0x51a0f0(0x12a3),_0x51a0f0(0x3cdc),_0x51a0f0(0xf84),_0x51a0f0(0x3ea5),_0x51a0f0(0xa05),'EdgeValueRange',_0x51a0f0(0x2c68),_0x51a0f0(0x7da),_0x51a0f0(0x3183),_0x51a0f0(0x1fe4),'EditButtonSettings','EditCellTagsSettings',_0x51a0f0(0x1276),_0x51a0f0(0xee8),_0x51a0f0(0x1eb1),_0x51a0f0(0x2f83),_0x51a0f0(0x2b33),_0x51a0f0(0x2d35),'Element',_0x51a0f0(0x56c),_0x51a0f0(0x2dd4),'ElidedForms','Eliminate',_0x51a0f0(0xc8e),_0x51a0f0(0x2e8),_0x51a0f0(0x4d45),_0x51a0f0(0x25a6),_0x51a0f0(0x1181),_0x51a0f0(0x70c),_0x51a0f0(0x2bdc),'EllipticK',_0x51a0f0(0x1087),_0x51a0f0(0x1cc8),_0x51a0f0(0x169b),_0x51a0f0(0x2e80),'EllipticTheta','EllipticThetaPrime',_0x51a0f0(0x31fb),'EmbeddedHTML',_0x51a0f0(0x26e7),_0x51a0f0(0x4a57),_0x51a0f0(0x414f),_0x51a0f0(0x363c),_0x51a0f0(0x2536),_0x51a0f0(0x46d),_0x51a0f0(0x3f08),_0x51a0f0(0x2db8),_0x51a0f0(0x214d),_0x51a0f0(0x346c),_0x51a0f0(0x421c),_0x51a0f0(0x37ac),_0x51a0f0(0x4ef8),_0x51a0f0(0x438a),_0x51a0f0(0x4725),'Encode',_0x51a0f0(0x4d76),'EncryptedObject',_0x51a0f0(0x41bf),'End','EndAdd','EndDialogPacket',_0x51a0f0(0xcdb),_0x51a0f0(0x29d0),_0x51a0f0(0x2db9),'EndOfString',_0x51a0f0(0x444a),'EngineEnvironment',_0x51a0f0(0x18f9),'Enter',_0x51a0f0(0x11b2),_0x51
a0f0(0x73b),_0x51a0f0(0x2c71),_0x51a0f0(0x4b6b),'EntityClassList',_0x51a0f0(0x4b3b),_0x51a0f0(0x51d8),_0x51a0f0(0x3543),_0x51a0f0(0x1156),'EntityList','EntityPrefetch',_0x51a0f0(0x3b4),_0x51a0f0(0xfd3),_0x51a0f0(0x14ba),'EntityRegister',_0x51a0f0(0x22fb),_0x51a0f0(0x98f),_0x51a0f0(0x21e7),_0x51a0f0(0xa3d),_0x51a0f0(0xa5a),_0x51a0f0(0x9dc),'EntropyFilter',_0x51a0f0(0x488a),_0x51a0f0(0x67e),_0x51a0f0(0x1c32),_0x51a0f0(0x1bac),'EqualColumns',_0x51a0f0(0xeb8),'EqualTilde',_0x51a0f0(0x1383),_0x51a0f0(0x329),_0x51a0f0(0x2f07),_0x51a0f0(0x2cf5),_0x51a0f0(0x2d2a),_0x51a0f0(0x1312),'Erfc',_0x51a0f0(0x41fb),_0x51a0f0(0x338d),'ErlangC',_0x51a0f0(0xfc6),'Erosion',_0x51a0f0(0x233f),_0x51a0f0(0xe34),'ErrorNorm','ErrorPacket','ErrorsDialogSettings',_0x51a0f0(0x2acf),_0x51a0f0(0x23ff),_0x51a0f0(0x5015),_0x51a0f0(0x1473),_0x51a0f0(0x50a8),_0x51a0f0(0x5059),_0x51a0f0(0x132d),_0x51a0f0(0x32c3),_0x51a0f0(0x461b),_0x51a0f0(0x1a5f),_0x51a0f0(0x4166),_0x51a0f0(0x18b4),_0x51a0f0(0x4868),_0x51a0f0(0x6ad),_0x51a0f0(0x3527),_0x51a0f0(0x1919),_0x51a0f0(0x22a1),_0x51a0f0(0x324b),_0x51a0f0(0x130e),'Evaluated',_0x51a0f0(0x3017),_0x51a0f0(0x37b7),'EvaluationBox',_0x51a0f0(0x2d07),_0x51a0f0(0x249f),'EvaluationData',_0x51a0f0(0x257d),'EvaluationEnvironment',_0x51a0f0(0x50a),_0x51a0f0(0x3bf8),_0x51a0f0(0x3d61),_0x51a0f0(0x30fa),_0x51a0f0(0x105c),_0x51a0f0(0x2490),'EvaluationRateLimit',_0x51a0f0(0x1949),_0x51a0f0(0x3624),_0x51a0f0(0x4b64),'EventData',_0x51a0f0(0x3665),_0x51a0f0(0x256e),_0x51a0f0(0x77b),_0x51a0f0(0x439d),_0x51a0f0(0x1d51),_0x51a0f0(0x3396),_0x51a0f0(0x404a),'ExactRootIsolation',_0x51a0f0(0x1933),'Except',_0x51a0f0(0x4502),_0x51a0f0(0x1e0f),'ExcludedLines',_0x51a0f0(0x4515),_0x51a0f0(0x335f),_0x51a0f0(0x5f4),_0x51a0f0(0x292b),'Exists',_0x51a0f0(0x23cf),_0x51a0f0(0x1497),_0x51a0f0(0x1887),_0x51a0f0(0x489d),_0x51a0f0(0x298a),_0x51a0f0(0x44a3),_0x51a0f0(0x30ec),_0x51a0f0(0x2603),_0x51a0f0(0x1825),_0x51a0f0(0x17b7),'ExpectationE','ExpectedValue',_0x51a0f0(0x7ba),_0x51a0f0(0x2bfd),'ExpIntegr
alEi',_0x51a0f0(0x2844),'Exponent',_0x51a0f0(0x96d),_0x51a0f0(0x3cc0),_0x51a0f0(0x24e2),_0x51a0f0(0x5285),_0x51a0f0(0x286e),'ExponentialPowerDistribution',_0x51a0f0(0x3f1f),_0x51a0f0(0x25fc),_0x51a0f0(0x246b),_0x51a0f0(0x1f0b),_0x51a0f0(0x1e37),_0x51a0f0(0x4c8a),'ExportPacket','ExportString',_0x51a0f0(0x804),_0x51a0f0(0x355),_0x51a0f0(0x29c7),'ExpressionPacket','ExpressionTree',_0x51a0f0(0x3e5),'ExpToTrig',_0x51a0f0(0x3ba6),_0x51a0f0(0x2143),_0x51a0f0(0x28a1),_0x51a0f0(0x4a33),'ExtentMarkers','ExtentSize',_0x51a0f0(0x432f),_0x51a0f0(0x268),_0x51a0f0(0x3ac6),_0x51a0f0(0x1338),_0x51a0f0(0x2d0e),_0x51a0f0(0x11cc),_0x51a0f0(0x24b6),_0x51a0f0(0x46fa),_0x51a0f0(0x2223),_0x51a0f0(0x1a67),_0x51a0f0(0x5078),_0x51a0f0(0x1f50),'ExternalStorageDownload',_0x51a0f0(0x4e16),_0x51a0f0(0x1d27),_0x51a0f0(0x4894),'ExternalStorageUpload',_0x51a0f0(0x524c),_0x51a0f0(0x24fd),_0x51a0f0(0x5052),_0x51a0f0(0x4358),'ExtractLayer',_0x51a0f0(0x2034),_0x51a0f0(0xcd4),_0x51a0f0(0x1a6b),_0x51a0f0(0x320d),'FaceGrids',_0x51a0f0(0x342a),_0x51a0f0(0x4011),_0x51a0f0(0x4af1),'Factor','FactorComplete','Factorial',_0x51a0f0(0x2ef1),'FactorialMoment',_0x51a0f0(0x820),_0x51a0f0(0x368f),_0x51a0f0(0x4565),_0x51a0f0(0x1f8a),_0x51a0f0(0x1016),_0x51a0f0(0x4696),_0x51a0f0(0x1192),_0x51a0f0(0x270d),'Fail',_0x51a0f0(0xd2b),'FailureAction',_0x51a0f0(0x5233),_0x51a0f0(0x4634),'False','FareySequence',_0x51a0f0(0x157f),_0x51a0f0(0x2093),_0x51a0f0(0x13dc),_0x51a0f0(0x195a),_0x51a0f0(0x1494),_0x51a0f0(0x2f14),_0x51a0f0(0x2bc7),_0x51a0f0(0x3269),_0x51a0f0(0xcd9),_0x51a0f0(0x1235),'FeatureSpacePlot3D',_0x51a0f0(0x3661),_0x51a0f0(0x489a),'FeatureValueImpactPlot',_0x51a0f0(0x1ae8),_0x51a0f0(0x4b8e),_0x51a0f0(0xbdf),_0x51a0f0(0x4e6d),'FeedbackType',_0x51a0f0(0x4c20),_0x51a0f0(0x4fb6),_0x51a0f0(0xd0f),_0x51a0f0(0x4e56),_0x51a0f0(0x2e18),'FieldHint',_0x51a0f0(0x43ed),_0x51a0f0(0x47d8),'FieldSize',_0x51a0f0(0x106b),_0x51a0f0(0x29b5),_0x51a0f0(0x49af),_0x51a0f0(0x2134),'FileDate',_0x51a0f0(0x527b),'FileExtension',_0x51a0f0(0x47f7
),_0x51a0f0(0xa9d),'FileFormatQ',_0x51a0f0(0x5282),'FileHash','FileInformation',_0x51a0f0(0x3747),'FileNameDepth',_0x51a0f0(0x909),'FileNameDrop',_0x51a0f0(0x220b),_0x51a0f0(0x752),_0x51a0f0(0x527e),_0x51a0f0(0x3505),'FileNameSplit',_0x51a0f0(0x1c1c),'FileNameToFormatList',_0x51a0f0(0x523a),_0x51a0f0(0x49e6),_0x51a0f0(0x522b),_0x51a0f0(0x4e3),_0x51a0f0(0x15d5),_0x51a0f0(0x34fa),'FileTemplateApply',_0x51a0f0(0xf0a),_0x51a0f0(0x1585),'FilledCurveBox',_0x51a0f0(0x1b51),'FilledTorus',_0x51a0f0(0x5063),'Filling',_0x51a0f0(0x1c81),'FillingTransform',_0x51a0f0(0x13b6),_0x51a0f0(0x14e8),_0x51a0f0(0x3a99),_0x51a0f0(0x3fed),'FinancialDerivative','FinancialIndicator','Find',_0x51a0f0(0x5000),_0x51a0f0(0x1f11),_0x51a0f0(0x3aea),_0x51a0f0(0x49d),_0x51a0f0(0x1d87),_0x51a0f0(0x4e48),_0x51a0f0(0x258c),_0x51a0f0(0x1027),_0x51a0f0(0x48fb),'FindDevices','FindDistribution',_0x51a0f0(0x1ec1),_0x51a0f0(0x1e5f),'FindEdgeColoring',_0x51a0f0(0x28fb),_0x51a0f0(0x1a0b),_0x51a0f0(0x35f),'FindEquationalProof','FindEulerianCycle','FindExternalEvaluators','FindFaces',_0x51a0f0(0x42a),_0x51a0f0(0x48f4),'FindFormula',_0x51a0f0(0x3251),_0x51a0f0(0x907),_0x51a0f0(0x142b),'FindGeometricConjectures',_0x51a0f0(0x15a5),_0x51a0f0(0x28a7),'FindGraphIsomorphism','FindGraphPartition',_0x51a0f0(0x2806),'FindHamiltonianPath',_0x51a0f0(0x1481),'FindImageText',_0x51a0f0(0x40f5),_0x51a0f0(0x1d7b),_0x51a0f0(0x30ef),'FindIntegerNullVector',_0x51a0f0(0x4bb9),_0x51a0f0(0x55d),_0x51a0f0(0x509b),'FindKClique',_0x51a0f0(0x1e90),_0x51a0f0(0x1ee3),'FindLibrary','FindLinearRecurrence',_0x51a0f0(0x153d),_0x51a0f0(0x4da3),'FindMaximum','FindMaximumCut',_0x51a0f0(0x4a84),_0x51a0f0(0xfe7),'FindMeshDefects','FindMinimum',_0x51a0f0(0x2ae1),_0x51a0f0(0xccd),_0x51a0f0(0x33b8),_0x51a0f0(0x790),_0x51a0f0(0x2402),_0x51a0f0(0x1d52),_0x51a0f0(0x505a),'FindPlanarColoring','FindPointProcessParameters',_0x51a0f0(0x3bb2),_0x51a0f0(0x390c),'FindRegionTransform','FindRepeat',_0x51a0f0(0x5bc),_0x51a0f0(0x1c89),_0x51a0f0(0x3c79),_0x51a0f0(0x3e
c9),_0x51a0f0(0x3364),_0x51a0f0(0x980),_0x51a0f0(0x3390),'FindSystemModelEquilibrium',_0x51a0f0(0x3150),'FindThreshold',_0x51a0f0(0x36e5),_0x51a0f0(0x4c64),_0x51a0f0(0xf62),_0x51a0f0(0x38f0),_0x51a0f0(0x1668),_0x51a0f0(0xdac),_0x51a0f0(0x1ddc),_0x51a0f0(0x295f),_0x51a0f0(0x5279),_0x51a0f0(0x4544),'First','FirstCase',_0x51a0f0(0x40a1),_0x51a0f0(0x4b66),'FischerGroupFi22',_0x51a0f0(0x30b4),'FischerGroupFi24Prime',_0x51a0f0(0x3e91),'FisherRatioTest',_0x51a0f0(0x57a),'Fit',_0x51a0f0(0x500d),_0x51a0f0(0xffb),_0x51a0f0(0x28b6),_0x51a0f0(0x1ce3),_0x51a0f0(0x1945),_0x51a0f0(0x2336),_0x51a0f0(0x416b),_0x51a0f0(0x3f4),_0x51a0f0(0xd93),'Flatten',_0x51a0f0(0x32c9),'FlattenLayer',_0x51a0f0(0xcb5),'FlightData',_0x51a0f0(0x2c8d),'Floor','FlowPolynomial',_0x51a0f0(0x3370),_0x51a0f0(0x4a2e),_0x51a0f0(0x5089),'FoldPairList',_0x51a0f0(0x3241),'FoldWhileList',_0x51a0f0(0x50b4),_0x51a0f0(0x1d5f),_0x51a0f0(0x24fe),_0x51a0f0(0x322e),_0x51a0f0(0x2bf),_0x51a0f0(0x388d),_0x51a0f0(0xafa),_0x51a0f0(0x2c96),_0x51a0f0(0x17f1),_0x51a0f0(0x5eb),'FontSize',_0x51a0f0(0xb7d),_0x51a0f0(0x2446),_0x51a0f0(0x14bb),_0x51a0f0(0x4920),_0x51a0f0(0x243a),_0x51a0f0(0x395e),_0x51a0f0(0x1e7a),_0x51a0f0(0x16a1),_0x51a0f0(0x2b49),_0x51a0f0(0x4f49),_0x51a0f0(0x40e6),_0x51a0f0(0x49cb),_0x51a0f0(0x38a2),_0x51a0f0(0x3622),_0x51a0f0(0x15cf),_0x51a0f0(0x213a),_0x51a0f0(0x4ec7),_0x51a0f0(0x46ed),_0x51a0f0(0x1c12),_0x51a0f0(0x50b7),_0x51a0f0(0x4dc3),_0x51a0f0(0x44c8),_0x51a0f0(0x3f43),_0x51a0f0(0x29b3),_0x51a0f0(0x2ee7),_0x51a0f0(0x1441),_0x51a0f0(0x349f),'ForwardBackward',_0x51a0f0(0x16f0),_0x51a0f0(0x4dd4),_0x51a0f0(0x2bc8),_0x51a0f0(0x210),_0x51a0f0(0x72b),_0x51a0f0(0x1011),_0x51a0f0(0x33d5),_0x51a0f0(0x441d),_0x51a0f0(0x378b),_0x51a0f0(0x4f2d),_0x51a0f0(0x28ff),'FourierMatrix',_0x51a0f0(0x3dff),_0x51a0f0(0x48df),_0x51a0f0(0x4a6e),_0x51a0f0(0x264),_0x51a0f0(0x408a),_0x51a0f0(0xd40),_0x51a0f0(0x2888),_0x51a0f0(0x32d1),_0x51a0f0(0x128c),_0x51a0f0(0x1caf),_0x51a0f0(0x1e09),'FractionalD','FractionalGaussianNoiseProcess',_0
x51a0f0(0x465),'FractionBox',_0x51a0f0(0x4040),'FractionLine',_0x51a0f0(0x37b6),'FrameBox',_0x51a0f0(0xcb2),'Framed',_0x51a0f0(0x2821),'FrameLabel',_0x51a0f0(0x503c),_0x51a0f0(0x2789),_0x51a0f0(0x3b3),_0x51a0f0(0xdbe),_0x51a0f0(0x3c53),_0x51a0f0(0x4058),_0x51a0f0(0x4e45),_0x51a0f0(0x2d29),_0x51a0f0(0x80a),_0x51a0f0(0x2ea8),_0x51a0f0(0x3de7),_0x51a0f0(0x4a28),_0x51a0f0(0x4f54),'FresnelF',_0x51a0f0(0x325e),_0x51a0f0(0x3d43),_0x51a0f0(0x60c),_0x51a0f0(0x2580),_0x51a0f0(0x39cc),_0x51a0f0(0xc55),_0x51a0f0(0x4695),_0x51a0f0(0x1eae),_0x51a0f0(0xcc9),'FromDate',_0x51a0f0(0x2ccd),_0x51a0f0(0x44ad),_0x51a0f0(0x4354),_0x51a0f0(0x2ed4),'FromJulianDate','FromLetterNumber',_0x51a0f0(0x4e53),'FromRawPointer',_0x51a0f0(0x718),_0x51a0f0(0x2026),_0x51a0f0(0x30a6),_0x51a0f0(0x10cb),_0x51a0f0(0x14fe),'FrontEndEventActions',_0x51a0f0(0x2cc4),_0x51a0f0(0x3268),_0x51a0f0(0x28e8),_0x51a0f0(0xd0d),'FrontEndStackSize',_0x51a0f0(0x3463),'FrontEndTokenExecute','FrontEndValueCache',_0x51a0f0(0xa90),_0x51a0f0(0x2584),_0x51a0f0(0x21e0),_0x51a0f0(0x4c7f),'FrontFaceSpecularColor',_0x51a0f0(0x3874),_0x51a0f0(0x1a2e),_0x51a0f0(0x1dd4),_0x51a0f0(0x1aa0),_0x51a0f0(0x4d5f),'FullDefinition',_0x51a0f0(0x4489),_0x51a0f0(0x3b6f),_0x51a0f0(0x239d),_0x51a0f0(0x321e),'FullRegion',_0x51a0f0(0x20cd),'Function','FunctionAnalytic','FunctionBijective','FunctionCompile',_0x51a0f0(0xc74),_0x51a0f0(0x73d),'FunctionCompileExportLibrary',_0x51a0f0(0xf0c),_0x51a0f0(0x13ef),_0x51a0f0(0x3c41),'FunctionDeclaration',_0x51a0f0(0x50ab),_0x51a0f0(0x3a41),_0x51a0f0(0x2cbf),'FunctionInjective','FunctionInterpolation','FunctionLayer',_0x51a0f0(0x439c),_0x51a0f0(0x26b3),_0x51a0f0(0x1415),'FunctionPoles',_0x51a0f0(0x17b5),_0x51a0f0(0x7f4),_0x51a0f0(0x23a2),_0x51a0f0(0x3d74),_0x51a0f0(0x19b8),_0x51a0f0(0x3c72),'GaborFilter','GaborMatrix',_0x51a0f0(0x5141),'GainMargins','GainPhaseMargins',_0x51a0f0(0x3a55),_0x51a0f0(0x2551),_0x51a0f0(0x2771),_0x51a0f0(0x186d),_0x51a0f0(0x655),_0x51a0f0(0x3b4e),'GARCHProcess',_0x51a0f0(0x6d6),'Gather',
'GatherBy',_0x51a0f0(0xf86),_0x51a0f0(0x2db2),_0x51a0f0(0x210e),_0x51a0f0(0x2f22),_0x51a0f0(0x2559),_0x51a0f0(0x17a7),'GaugeMarkers',_0x51a0f0(0x1e5e),_0x51a0f0(0x45e7),'GaussianIntegers',_0x51a0f0(0x2cc0),_0x51a0f0(0x2491),_0x51a0f0(0x22f8),_0x51a0f0(0x2d5a),_0x51a0f0(0xa22),'GCD',_0x51a0f0(0xd50),_0x51a0f0(0x3b6e),_0x51a0f0(0x4ff1),_0x51a0f0(0x4f76),_0x51a0f0(0x976),_0x51a0f0(0x2ca5),'GeneratedAssetLocation',_0x51a0f0(0x1191),'GeneratedCellStyles',_0x51a0f0(0x196d),_0x51a0f0(0x143b),'GenerateDigitalSignature',_0x51a0f0(0x111b),_0x51a0f0(0xb02),_0x51a0f0(0x4626),_0x51a0f0(0x32a9),_0x51a0f0(0x2485),_0x51a0f0(0x4e6f),_0x51a0f0(0x2cc5),_0x51a0f0(0x48e7),'GeneratorDescription',_0x51a0f0(0x440b),_0x51a0f0(0x30c4),'Generic','GenericCylindricalDecomposition',_0x51a0f0(0x4a14),_0x51a0f0(0x2ae3),_0x51a0f0(0x501e),_0x51a0f0(0x346a),_0x51a0f0(0x3d7f),_0x51a0f0(0x312a),'GeoBoundary',_0x51a0f0(0x48a2),'GeoBounds','GeoBoundsRegion',_0x51a0f0(0xae0),_0x51a0f0(0x1c52),_0x51a0f0(0x3ae3),_0x51a0f0(0x1549),'GeoContourPlot',_0x51a0f0(0x381d),_0x51a0f0(0x2c37),'GeodesicDilation','GeodesicErosion',_0x51a0f0(0x16b6),_0x51a0f0(0x30e1),_0x51a0f0(0x2f3e),_0x51a0f0(0x2d7a),'GeoDirection',_0x51a0f0(0x926),_0x51a0f0(0x4e07),_0x51a0f0(0x438e),_0x51a0f0(0x10be),_0x51a0f0(0x180d),_0x51a0f0(0x2dc7),_0x51a0f0(0x401a),_0x51a0f0(0x1800),_0x51a0f0(0x1595),_0x51a0f0(0x5109),'GeoGridDirectionDifference',_0x51a0f0(0xd25),_0x51a0f0(0x1aee),_0x51a0f0(0x939),_0x51a0f0(0x4afe),_0x51a0f0(0x3030),_0x51a0f0(0x3ff),_0x51a0f0(0x38cb),'GeoGridVector',_0x51a0f0(0x4241),_0x51a0f0(0x959),_0x51a0f0(0x2488),'GeoHistogram',_0x51a0f0(0x160d),_0x51a0f0(0x3844),_0x51a0f0(0x674),_0x51a0f0(0x874),'GeoListPlot',_0x51a0f0(0x1554),_0x51a0f0(0x4721),_0x51a0f0(0x16e5),_0x51a0f0(0x49ee),_0x51a0f0(0x2f92),_0x51a0f0(0xcf1),_0x51a0f0(0x22a8),_0x51a0f0(0x1ea3),_0x51a0f0(0x2814),_0x51a0f0(0x1404),'GeometricScene',_0x51a0f0(0xad6),_0x51a0f0(0x75f),_0x51a0f0(0x22bd),_0x51a0f0(0x2efd),_0x51a0f0(0xc67),_0x51a0f0(0x48e3),_0x51a0f0(0x433),'G
eometricTransformationBoxOptions',_0x51a0f0(0x42c9),_0x51a0f0(0x344f),'GeoOrientationData','GeoPath',_0x51a0f0(0x282d),'GeoPosition','GeoPositionENU',_0x51a0f0(0x4ea1),'GeoProjection',_0x51a0f0(0x3e6b),_0x51a0f0(0x109b),'GeoRangePadding',_0x51a0f0(0x29fb),_0x51a0f0(0xe91),_0x51a0f0(0xdfc),'GeoServer',_0x51a0f0(0x4019),'GeoStreamPlot',_0x51a0f0(0x372),'GeoStylingImageFunction',_0x51a0f0(0x1a7b),'GeoVector','GeoVectorENU',_0x51a0f0(0x2f96),_0x51a0f0(0x968),_0x51a0f0(0x4e8d),_0x51a0f0(0x25e5),_0x51a0f0(0x4656),_0x51a0f0(0xf49),_0x51a0f0(0x19bf),_0x51a0f0(0x4fc0),_0x51a0f0(0xc27),_0x51a0f0(0xee5),'GetEnvironment','GetFileName',_0x51a0f0(0x5255),_0x51a0f0(0x1354),'Glaisher','GlobalClusteringCoefficient','GlobalPreferences',_0x51a0f0(0x3325),_0x51a0f0(0x1f6d),'GoldenAngle','GoldenRatio','GompertzMakehamDistribution',_0x51a0f0(0x143f),_0x51a0f0(0x1ab7),'GoodmanKruskalGammaTest',_0x51a0f0(0x2761),_0x51a0f0(0x4ba9),'Grad',_0x51a0f0(0x38e),_0x51a0f0(0x33ee),'GradientFittedMesh',_0x51a0f0(0x4d53),_0x51a0f0(0x11d4),_0x51a0f0(0x19ca),_0x51a0f0(0xeb3),_0x51a0f0(0x27a),'Graph3D',_0x51a0f0(0x104e),_0x51a0f0(0x4bed),_0x51a0f0(0x37bc),'GraphComplement',_0x51a0f0(0x18e0),_0x51a0f0(0x4144),_0x51a0f0(0x3867),_0x51a0f0(0x4714),_0x51a0f0(0x4f3),_0x51a0f0(0xff1),_0x51a0f0(0x145f),_0x51a0f0(0x31f2),_0x51a0f0(0x221c),_0x51a0f0(0x3c9b),_0x51a0f0(0x3c0c),'Graphics',_0x51a0f0(0x2f4),_0x51a0f0(0x246c),'Graphics3DBoxOptions',_0x51a0f0(0x232b),'GraphicsBaseline',_0x51a0f0(0x36c5),'GraphicsBoxOptions',_0x51a0f0(0x1bee),_0x51a0f0(0x36e6),_0x51a0f0(0x35d2),'GraphicsComplex3DBox','GraphicsComplex3DBoxOptions','GraphicsComplexBox',_0x51a0f0(0xb24),'GraphicsContents','GraphicsData',_0x51a0f0(0x2a50),_0x51a0f0(0xece),_0x51a0f0(0x1592),_0x51a0f0(0x2bcf),_0x51a0f0(0x2a83),_0x51a0f0(0x810),'GraphicsGroupBoxOptions','GraphicsGrouping',_0x51a0f0(0x2b91),_0x51a0f0(0xd3c),'GraphicsSpacing',_0x51a0f0(0x48fe),_0x51a0f0(0xa8b),_0x51a0f0(0x3a6e),_0x51a0f0(0x27e1),_0x51a0f0(0x39bb),_0x51a0f0(0x49bd),_0x51a0f0(0x36ca
),'GraphLinkEfficiency',_0x51a0f0(0x3b7e),'GraphPlot','GraphPlot3D',_0x51a0f0(0x4d5d),_0x51a0f0(0x1912),_0x51a0f0(0x14e1),_0x51a0f0(0x14af),'GraphRadius',_0x51a0f0(0x37c4),_0x51a0f0(0x42e3),_0x51a0f0(0x2487),'GraphSum',_0x51a0f0(0x1a5d),_0x51a0f0(0x1e54),_0x51a0f0(0x113d),_0x51a0f0(0x2658),_0x51a0f0(0x4abc),_0x51a0f0(0x4630),_0x51a0f0(0x491e),_0x51a0f0(0x42b),_0x51a0f0(0x2df0),_0x51a0f0(0x49f8),_0x51a0f0(0x2677),_0x51a0f0(0x23d5),_0x51a0f0(0x4450),_0x51a0f0(0x23c9),_0x51a0f0(0x315e),_0x51a0f0(0x35f5),'GreenFunction',_0x51a0f0(0x3f34),'GridBaseline','GridBox','GridBoxAlignment',_0x51a0f0(0x217e),_0x51a0f0(0x4698),_0x51a0f0(0x3950),_0x51a0f0(0x2e1),'GridBoxItemStyle',_0x51a0f0(0x12a2),_0x51a0f0(0x387c),'GridCreationSettings',_0x51a0f0(0xfa8),_0x51a0f0(0x32a4),'GridFrame','GridFrameMargins',_0x51a0f0(0x3d10),_0x51a0f0(0x23c6),'GridLinesStyle','GridVideo',_0x51a0f0(0x28ab),_0x51a0f0(0x4bfd),_0x51a0f0(0x445a),_0x51a0f0(0xdb3),'GroupElementFromWord','GroupElementPosition',_0x51a0f0(0x1a2d),'GroupElements',_0x51a0f0(0x1c39),_0x51a0f0(0xa4f),_0x51a0f0(0xb03),_0x51a0f0(0x21de),_0x51a0f0(0x1fe7),'GroupOpenerInsideFrame',_0x51a0f0(0x3a63),_0x51a0f0(0x6c8),'GroupPageBreakWithin',_0x51a0f0(0x428f),_0x51a0f0(0x408),_0x51a0f0(0x4ad4),_0x51a0f0(0x2bee),_0x51a0f0(0x528c),_0x51a0f0(0x515),_0x51a0f0(0x227c),_0x51a0f0(0x1254),_0x51a0f0(0x3ccc),_0x51a0f0(0x3c20),_0x51a0f0(0x1ed5),_0x51a0f0(0xdb9),'HalfNormalDistribution',_0x51a0f0(0x1ea7),_0x51a0f0(0x2ba5),_0x51a0f0(0x928),_0x51a0f0(0x4608),'HammingDistance',_0x51a0f0(0xd45),_0x51a0f0(0x7f1),'HandlerFunctionsKeys',_0x51a0f0(0x184c),_0x51a0f0(0x21d7),_0x51a0f0(0x4e43),_0x51a0f0(0x455e),_0x51a0f0(0x16c4),_0x51a0f0(0x2a42),'HaradaNortonGroupHN',_0x51a0f0(0x4507),_0x51a0f0(0x479),_0x51a0f0(0x1c4a),_0x51a0f0(0x3392),_0x51a0f0(0x490b),_0x51a0f0(0x1030),_0x51a0f0(0x4b04),_0x51a0f0(0x453),_0x51a0f0(0x42d5),_0x51a0f0(0x3774),_0x51a0f0(0x17b8),_0x51a0f0(0x509),_0x51a0f0(0x407f),'HeaderBackground',_0x51a0f0(0x271e),_0x51a0f0(0x4f63),'Headers',_0x5
1a0f0(0x47ad),_0x51a0f0(0x481),'Heads',_0x51a0f0(0x1050),_0x51a0f0(0xd99),_0x51a0f0(0x324),_0x51a0f0(0x4d7c),_0x51a0f0(0x41db),_0x51a0f0(0x27b),'HeatTransferPDEComponent',_0x51a0f0(0x3639),_0x51a0f0(0x1e47),_0x51a0f0(0x1f84),_0x51a0f0(0x4ebb),_0x51a0f0(0x488d),_0x51a0f0(0x2489),_0x51a0f0(0xed2),_0x51a0f0(0x4ac7),_0x51a0f0(0x3da0),_0x51a0f0(0x4584),_0x51a0f0(0x4196),_0x51a0f0(0xb90),_0x51a0f0(0x46d1),_0x51a0f0(0x1762),_0x51a0f0(0xb35),_0x51a0f0(0x3dbe),'HessenbergDecomposition',_0x51a0f0(0xea7),_0x51a0f0(0x38f2),'HeunBPrime','HeunC',_0x51a0f0(0x225),_0x51a0f0(0x4d9e),'HeunDPrime',_0x51a0f0(0x185f),_0x51a0f0(0x17cc),'HeunT',_0x51a0f0(0x23d4),'HexadecimalCharacter',_0x51a0f0(0x10a5),_0x51a0f0(0x2671),_0x51a0f0(0x4c7e),_0x51a0f0(0x288e),_0x51a0f0(0x50bb),_0x51a0f0(0x15bc),_0x51a0f0(0x35fa),_0x51a0f0(0x9ec),_0x51a0f0(0x947),_0x51a0f0(0x42d7),_0x51a0f0(0x311f),_0x51a0f0(0x2b6a),_0x51a0f0(0x17b0),'HilbertCurve',_0x51a0f0(0x37a5),'HilbertMatrix',_0x51a0f0(0x4bf8),_0x51a0f0(0x2a40),_0x51a0f0(0x51dc),'HistogramList',_0x51a0f0(0x4f82),'HistogramTransform',_0x51a0f0(0x4e96),_0x51a0f0(0x11bd),_0x51a0f0(0x12aa),_0x51a0f0(0x2595),_0x51a0f0(0xe23),_0x51a0f0(0x2db7),_0x51a0f0(0x1764),_0x51a0f0(0x49ff),_0x51a0f0(0x3c42),_0x51a0f0(0x6ff),_0x51a0f0(0x4123),'HoldComplete',_0x51a0f0(0x3930),_0x51a0f0(0x2a8e),'HoldPattern',_0x51a0f0(0x43a8),_0x51a0f0(0x2b68),_0x51a0f0(0x1a22),_0x51a0f0(0x3012),_0x51a0f0(0x89a),_0x51a0f0(0xea4),_0x51a0f0(0x4a9),_0x51a0f0(0x1216),_0x51a0f0(0x14d5),'HostLookup','HotellingTSquareDistribution',_0x51a0f0(0x297),_0x51a0f0(0x271),_0x51a0f0(0x2709),_0x51a0f0(0x4d01),_0x51a0f0(0x343a),'HTTPRequestData','HTTPResponse',_0x51a0f0(0x4338),_0x51a0f0(0x4b8),_0x51a0f0(0x28de),'HumpEqual',_0x51a0f0(0x3f10),'HurwitzZeta',_0x51a0f0(0x25a0),_0x51a0f0(0x17db),_0x51a0f0(0x2065),_0x51a0f0(0x20f1),_0x51a0f0(0x4d3b),_0x51a0f0(0x276b),_0x51a0f0(0x477a),_0x51a0f0(0x3574),_0x51a0f0(0x1e59),_0x51a0f0(0x1406),'HypergeometricDistribution',_0x51a0f0(0x1b45),_0x51a0f0(0x506c),_0x51a0f0(0x
4768),_0x51a0f0(0x4e0c),_0x51a0f0(0x4a65),'HyperlinkCreationSettings',_0x51a0f0(0x405d),_0x51a0f0(0x4a38),_0x51a0f0(0x22da),'HypoexponentialDistribution',_0x51a0f0(0x4df1),'I',_0x51a0f0(0x26a5),'Iconize',_0x51a0f0(0x2326),_0x51a0f0(0x331f),'Icosahedron',_0x51a0f0(0xa0e),_0x51a0f0(0x46dd),'If',_0x51a0f0(0x4240),_0x51a0f0(0xfc4),'IgnoreDiacritics',_0x51a0f0(0x45f1),_0x51a0f0(0x3dc3),_0x51a0f0(0x4f55),_0x51a0f0(0x35c9),_0x51a0f0(0x3e9f),'Im',_0x51a0f0(0x4555),_0x51a0f0(0xa58),_0x51a0f0(0x2639),_0x51a0f0(0x1a92),_0x51a0f0(0x1aa9),_0x51a0f0(0x462f),'ImageAdjust',_0x51a0f0(0x1154),_0x51a0f0(0x3e8f),'ImageApplyIndexed','ImageAspectRatio','ImageAssemble',_0x51a0f0(0x4589),_0x51a0f0(0x1200),_0x51a0f0(0x2ecb),_0x51a0f0(0x418f),_0x51a0f0(0x2d3a),_0x51a0f0(0x4595),_0x51a0f0(0x2635),_0x51a0f0(0xdd7),_0x51a0f0(0xa2e),'ImageCollage','ImageColorSpace',_0x51a0f0(0x400),_0x51a0f0(0x3fb1),_0x51a0f0(0x1f41),_0x51a0f0(0x4706),_0x51a0f0(0x4ebd),_0x51a0f0(0x4dc4),_0x51a0f0(0x4885),_0x51a0f0(0x1a3e),_0x51a0f0(0x4f38),_0x51a0f0(0xecf),_0x51a0f0(0x3871),_0x51a0f0(0x3f3f),_0x51a0f0(0x4fd2),_0x51a0f0(0x4dc5),_0x51a0f0(0x2ac2),_0x51a0f0(0x4e0d),_0x51a0f0(0x4e02),'ImageEffect',_0x51a0f0(0x45a8),_0x51a0f0(0x2fa6),_0x51a0f0(0x3a28),'ImageFileFilter',_0x51a0f0(0x1d34),_0x51a0f0(0x2ce2),_0x51a0f0(0x797),_0x51a0f0(0x1971),_0x51a0f0(0x2250),'ImageForwardTransformation',_0x51a0f0(0x1ba9),_0x51a0f0(0x1f6a),_0x51a0f0(0x48f3),_0x51a0f0(0x348f),_0x51a0f0(0xfa9),'ImageLabels','ImageLegends',_0x51a0f0(0x40cc),_0x51a0f0(0x2c1c),_0x51a0f0(0x29a4),_0x51a0f0(0x435c),_0x51a0f0(0x3320),_0x51a0f0(0x4756),_0x51a0f0(0x44e),'ImageMultiply','ImageOffset',_0x51a0f0(0x1e5a),_0x51a0f0(0x520c),_0x51a0f0(0x1e74),_0x51a0f0(0x45fa),_0x51a0f0(0x1600),_0x51a0f0(0x31f3),_0x51a0f0(0x45cb),_0x51a0f0(0x3734),_0x51a0f0(0x33a7),_0x51a0f0(0x437c),_0x51a0f0(0x4649),_0x51a0f0(0x20ca),_0x51a0f0(0x3612),'ImageRegion',_0x51a0f0(0x4a1b),_0x51a0f0(0x2678),'ImageRestyle',_0x51a0f0(0x4b5d),_0x51a0f0(0x1925),_0x51a0f0(0x9a0),_0x51a0f0(0xec1),'I
mageScan','ImageSize','ImageSizeAction','ImageSizeCache',_0x51a0f0(0x17ca),_0x51a0f0(0x6bd),_0x51a0f0(0x223),_0x51a0f0(0x1fa9),'ImageTake',_0x51a0f0(0x4828),'ImageTrim',_0x51a0f0(0x4607),_0x51a0f0(0x3f1d),_0x51a0f0(0x466),_0x51a0f0(0x1268),_0x51a0f0(0x3c3b),_0x51a0f0(0x5202),_0x51a0f0(0x2d0b),_0x51a0f0(0x2703),_0x51a0f0(0x30f8),'Import',_0x51a0f0(0x128f),_0x51a0f0(0x89b),_0x51a0f0(0x29c2),_0x51a0f0(0x2ab7),'ImportString',_0x51a0f0(0x1e46),'In',_0x51a0f0(0x2b70),_0x51a0f0(0x4741),_0x51a0f0(0x35b2),_0x51a0f0(0x4640),_0x51a0f0(0x2b6d),_0x51a0f0(0x2e40),_0x51a0f0(0x15bb),_0x51a0f0(0x9e2),_0x51a0f0(0x753),_0x51a0f0(0x538),'IncludeDirectories',_0x51a0f0(0x2f55),_0x51a0f0(0xe84),_0x51a0f0(0xc9b),_0x51a0f0(0x21a2),_0x51a0f0(0x4b96),'IncludePods','IncludeQuantities',_0x51a0f0(0x43a1),_0x51a0f0(0x277c),_0x51a0f0(0x4fbb),_0x51a0f0(0x115e),_0x51a0f0(0x343b),'IndefiniteMatrixQ','Indent',_0x51a0f0(0x5289),_0x51a0f0(0xea5),_0x51a0f0(0x2ca6),_0x51a0f0(0x50af),_0x51a0f0(0x23fd),_0x51a0f0(0x1eff),_0x51a0f0(0x5100),_0x51a0f0(0x1405),_0x51a0f0(0x1f04),'IndeterminateThreshold',_0x51a0f0(0x1a4e),'Indexed','IndexEdgeTaggedGraph',_0x51a0f0(0x453c),'IndexTag',_0x51a0f0(0x1619),_0x51a0f0(0x3aa),_0x51a0f0(0x2d03),'InexactNumberQ',_0x51a0f0(0x4072),_0x51a0f0(0x2f17),_0x51a0f0(0x2092),'InfiniteLineThrough',_0x51a0f0(0x49eb),_0x51a0f0(0x3f00),_0x51a0f0(0x2486),_0x51a0f0(0x22d4),_0x51a0f0(0x40be),_0x51a0f0(0x4a21),_0x51a0f0(0x321),_0x51a0f0(0x8db),'InformationDataGrid',_0x51a0f0(0x5147),_0x51a0f0(0x73e),_0x51a0f0(0x244d),'InhomogeneousPoissonProcess',_0x51a0f0(0x46d7),_0x51a0f0(0x29a),_0x51a0f0(0x4832),_0x51a0f0(0x891),'InitializationCellWarning',_0x51a0f0(0x4dfd),'InitializationObjects',_0x51a0f0(0x175a),'Initialize','InitialSeeding',_0x51a0f0(0x284a),_0x51a0f0(0x39c3),_0x51a0f0(0xe7b),_0x51a0f0(0x3751),'InnerPolygon',_0x51a0f0(0x4b72),_0x51a0f0(0x4d5a),_0x51a0f0(0x4834),_0x51a0f0(0x11a4),_0x51a0f0(0x87a),_0x51a0f0(0x3b81),_0x51a0f0(0x26fc),'InputFieldBox',_0x51a0f0(0x313d),_0x51a0f0(0x3548),_0x
51a0f0(0xaa9),_0x51a0f0(0x3e4b),_0x51a0f0(0x32e8),_0x51a0f0(0xed6),_0x51a0f0(0x8ad),'InputSettings',_0x51a0f0(0x4150),'InputString','InputStringPacket',_0x51a0f0(0x1976),_0x51a0f0(0x2f30),_0x51a0f0(0x3e90),_0x51a0f0(0x7ca),'InsertLinebreaks',_0x51a0f0(0xbfd),_0x51a0f0(0x2266),_0x51a0f0(0x4c63),_0x51a0f0(0x4bec),'InsetBox','InsetBoxOptions',_0x51a0f0(0x1663),_0x51a0f0(0xf77),_0x51a0f0(0x46fc),_0x51a0f0(0x4d55),'InString','Integer',_0x51a0f0(0x385d),'IntegerExponent',_0x51a0f0(0x1d7c),_0x51a0f0(0xd86),_0x51a0f0(0x8ed),_0x51a0f0(0x4cee),'IntegerQ','IntegerReverse',_0x51a0f0(0x2ecf),'IntegerString',_0x51a0f0(0x1af9),'Integrate','IntegrateChangeVariables',_0x51a0f0(0xe48),_0x51a0f0(0x159d),_0x51a0f0(0x24c9),'Interlaced','Interleaving',_0x51a0f0(0x1ff6),_0x51a0f0(0x2c8e),'InterpolatingPolynomial',_0x51a0f0(0x4a0d),_0x51a0f0(0x3223),_0x51a0f0(0x1dad),_0x51a0f0(0x4db4),_0x51a0f0(0x3748),_0x51a0f0(0x64c),_0x51a0f0(0x225b),_0x51a0f0(0x4897),_0x51a0f0(0x5e3),_0x51a0f0(0x24cb),_0x51a0f0(0x309c),_0x51a0f0(0x3e0c),_0x51a0f0(0x2fae),_0x51a0f0(0x517),'IntersectingQ',_0x51a0f0(0x4369),_0x51a0f0(0x2fbc),_0x51a0f0(0x293e),_0x51a0f0(0x347d),_0x51a0f0(0x23f2),_0x51a0f0(0x26c9),_0x51a0f0(0x4ed5),_0x51a0f0(0x4a47),_0x51a0f0(0x1cf3),_0x51a0f0(0x22f6),_0x51a0f0(0x18f6),'InverseBilateralLaplaceTransform',_0x51a0f0(0x4e74),_0x51a0f0(0x2ad2),_0x51a0f0(0x39db),_0x51a0f0(0x220d),_0x51a0f0(0x2202),_0x51a0f0(0x20ab),'InverseErf',_0x51a0f0(0x38df),_0x51a0f0(0x16aa),'InverseFourierCosTransform','InverseFourierSequenceTransform',_0x51a0f0(0x3b98),'InverseFourierTransform',_0x51a0f0(0x2145),_0x51a0f0(0x3af),_0x51a0f0(0x229a),_0x51a0f0(0x4098),'InverseGaussianDistribution',_0x51a0f0(0x2e3b),'InverseHankelTransform',_0x51a0f0(0x3be0),_0x51a0f0(0x2fc4),_0x51a0f0(0x397c),_0x51a0f0(0x3864),_0x51a0f0(0x37f9),_0x51a0f0(0x38d),_0x51a0f0(0x1cbe),'InverseJacobiDS',_0x51a0f0(0x4e1),'InverseJacobiND',_0x51a0f0(0x2e52),_0x51a0f0(0x349a),_0x51a0f0(0xef7),_0x51a0f0(0x7aa),'InverseLaplaceTransform',_0x51a0f0(0x9d9),'
InversePermutation',_0x51a0f0(0x47d9),_0x51a0f0(0xebc),_0x51a0f0(0x1f49),_0x51a0f0(0x13e8),_0x51a0f0(0x3606),'InverseSurvivalFunction',_0x51a0f0(0x43f8),_0x51a0f0(0x3b87),_0x51a0f0(0x3016),_0x51a0f0(0xcb8),_0x51a0f0(0x4757),'Invisible','InvisibleApplication',_0x51a0f0(0x3292),'IPAddress',_0x51a0f0(0x743),_0x51a0f0(0x21ef),'IsolatingInterval',_0x51a0f0(0x2d87),'IsomorphicSubgraphQ',_0x51a0f0(0x2618),_0x51a0f0(0x419d),_0x51a0f0(0x3a8b),_0x51a0f0(0x307c),_0x51a0f0(0x45bb),_0x51a0f0(0x336b),'ItemDisplayFunction',_0x51a0f0(0x25ef),_0x51a0f0(0x16b3),'ItoProcess',_0x51a0f0(0x299c),_0x51a0f0(0x1c8e),_0x51a0f0(0x4d7e),'JacobiCD','JacobiCN',_0x51a0f0(0x2abe),_0x51a0f0(0x48f9),'JacobiDN','JacobiDS',_0x51a0f0(0x2a23),'JacobiNC',_0x51a0f0(0x2686),'JacobiNS',_0x51a0f0(0x3ae4),_0x51a0f0(0x116c),_0x51a0f0(0x48e0),_0x51a0f0(0x3f0f),_0x51a0f0(0x2371),_0x51a0f0(0x1e3c),_0x51a0f0(0x30aa),_0x51a0f0(0x10b0),_0x51a0f0(0x27a2),_0x51a0f0(0x2c2a),_0x51a0f0(0x338),_0x51a0f0(0x4137),_0x51a0f0(0xce2),'Join',_0x51a0f0(0x2eda),_0x51a0f0(0x1a96),_0x51a0f0(0x321f),_0x51a0f0(0xe68),_0x51a0f0(0x4909),'JoinForm','JordanDecomposition',_0x51a0f0(0xbd8),'JulianDate','JuliaSetBoettcher',_0x51a0f0(0x9f9),'JuliaSetPlot','JuliaSetPoints','K','KagiChart','KaiserBesselWindow',_0x51a0f0(0x3830),_0x51a0f0(0x1cbb),_0x51a0f0(0x2fcb),_0x51a0f0(0x40af),_0x51a0f0(0x2153),_0x51a0f0(0x43a4),_0x51a0f0(0x4870),_0x51a0f0(0x4822),'KEdgeConnectedComponents',_0x51a0f0(0x1c99),_0x51a0f0(0xcad),_0x51a0f0(0x3408),'KelvinBer','KelvinKei',_0x51a0f0(0x16a4),_0x51a0f0(0x14f7),_0x51a0f0(0x29b2),'KernelConfiguration',_0x51a0f0(0x2cb8),_0x51a0f0(0x48e4),'KernelMixtureDistribution',_0x51a0f0(0x42a8),_0x51a0f0(0x199d),'Ket',_0x51a0f0(0x1eb3),'KeyCollisionFunction','KeyComplement','KeyDrop',_0x51a0f0(0x3544),'KeyExistsQ',_0x51a0f0(0x4d14),'KeyIntersection',_0x51a0f0(0x97d),_0x51a0f0(0x1e42),_0x51a0f0(0x1c18),_0x51a0f0(0x43bc),'KeySelect',_0x51a0f0(0x40dd),_0x51a0f0(0xd41),'KeyTake',_0x51a0f0(0x1a94),_0x51a0f0(0x20b4),_0x51a0f0(0x4107),'K
hinchin',_0x51a0f0(0x1274),'KirchhoffGraph',_0x51a0f0(0x1657),_0x51a0f0(0x2e47),'KnapsackSolve',_0x51a0f0(0x4658),_0x51a0f0(0x180b),_0x51a0f0(0x3eb),_0x51a0f0(0x28ec),_0x51a0f0(0x40d0),_0x51a0f0(0x49ef),_0x51a0f0(0x1b72),'KroneckerProduct',_0x51a0f0(0x483c),_0x51a0f0(0x1ad7),_0x51a0f0(0x3a24),_0x51a0f0(0x3189),_0x51a0f0(0x259f),_0x51a0f0(0x3f42),_0x51a0f0(0x1361),'LABColor','Label','Labeled',_0x51a0f0(0x3604),_0x51a0f0(0x1849),_0x51a0f0(0x2981),_0x51a0f0(0x34d4),_0x51a0f0(0x12bd),_0x51a0f0(0x4591),_0x51a0f0(0x3a84),_0x51a0f0(0x1d0a),_0x51a0f0(0x1763),_0x51a0f0(0x3ee4),_0x51a0f0(0x2cae),_0x51a0f0(0x26e6),_0x51a0f0(0x2030),'LameS','LameSPrime',_0x51a0f0(0x34c),_0x51a0f0(0x1dc8),_0x51a0f0(0x46f5),'Language',_0x51a0f0(0x2087),_0x51a0f0(0x45ad),_0x51a0f0(0x5170),_0x51a0f0(0x3f6e),_0x51a0f0(0xa15),_0x51a0f0(0x424a),_0x51a0f0(0x88f),_0x51a0f0(0x90c),_0x51a0f0(0xa1d),'LaplacianPDETerm',_0x51a0f0(0x4429),_0x51a0f0(0x406),_0x51a0f0(0x3dd0),_0x51a0f0(0x2338),_0x51a0f0(0x2ffa),'LatticeData',_0x51a0f0(0x296),_0x51a0f0(0x4169),'LaunchKernels',_0x51a0f0(0x4dc8),'LayeredGraphPlot3D',_0x51a0f0(0x47d3),_0x51a0f0(0xcd0),_0x51a0f0(0x2885),_0x51a0f0(0x380d),'LeaderSize',_0x51a0f0(0x2308),'LeapVariant','LeapYearQ',_0x51a0f0(0x892),'LearnedDistribution',_0x51a0f0(0x26a9),_0x51a0f0(0x2bc3),_0x51a0f0(0x136e),_0x51a0f0(0x18de),_0x51a0f0(0x2020),_0x51a0f0(0x1b78),'LeftArrowBar',_0x51a0f0(0x4be1),'LeftDownTeeVector',_0x51a0f0(0x13c1),_0x51a0f0(0x3443),_0x51a0f0(0x2d2d),_0x51a0f0(0x1309),_0x51a0f0(0x13ba),_0x51a0f0(0x34b4),_0x51a0f0(0x2ca1),_0x51a0f0(0x2a9e),_0x51a0f0(0x1a66),'LeftTriangleEqual','LeftUpDownVector','LeftUpTeeVector',_0x51a0f0(0x3b3f),_0x51a0f0(0x346),_0x51a0f0(0x402c),'LeftVectorBar',_0x51a0f0(0x3a46),'Legended',_0x51a0f0(0x367c),_0x51a0f0(0x26da),_0x51a0f0(0x16ec),_0x51a0f0(0x4266),_0x51a0f0(0x4a0),_0x51a0f0(0x2a34),_0x51a0f0(0x50ad),_0x51a0f0(0x491b),_0x51a0f0(0x4aee),_0x51a0f0(0x20b0),'LengthWhile',_0x51a0f0(0x2395),_0x51a0f0(0x3a62),_0x51a0f0(0x749),_0x51a0f0(0x4a7d),_0x51a0
f0(0x1f6e),'LessFullEqual','LessGreater',_0x51a0f0(0x513),_0x51a0f0(0x5223),_0x51a0f0(0x5c8),'LessTilde',_0x51a0f0(0x26d2),_0x51a0f0(0x3163),_0x51a0f0(0xf8b),'LetterQ',_0x51a0f0(0x2509),_0x51a0f0(0x197f),_0x51a0f0(0x18d9),'LevyDistribution',_0x51a0f0(0x4908),_0x51a0f0(0xd3b),_0x51a0f0(0x1ca3),_0x51a0f0(0x1054),_0x51a0f0(0x2d94),_0x51a0f0(0x4ac1),_0x51a0f0(0x1a07),_0x51a0f0(0xd0b),_0x51a0f0(0x353),_0x51a0f0(0x1bcc),_0x51a0f0(0x2605),_0x51a0f0(0x3788),_0x51a0f0(0x47aa),_0x51a0f0(0x410b),'LicenseID','LicensingSettings',_0x51a0f0(0x3213),_0x51a0f0(0x2be3),_0x51a0f0(0x3e0d),_0x51a0f0(0x242),_0x51a0f0(0x124a),_0x51a0f0(0x26bf),_0x51a0f0(0x4853),'LightGreen','Lighting',_0x51a0f0(0x2892),_0x51a0f0(0x42e6),'LightOrange',_0x51a0f0(0x4c90),_0x51a0f0(0x32ee),'LightRed',_0x51a0f0(0x3f4a),_0x51a0f0(0xe66),_0x51a0f0(0x260d),_0x51a0f0(0x3d4c),_0x51a0f0(0x284),'LimitsPositioningTokens',_0x51a0f0(0x4206),_0x51a0f0(0x6cf),_0x51a0f0(0x3496),'Line3DBoxOptions','LinearFilter','LinearFractionalOptimization','LinearFractionalTransform','LinearGradientFilling','LinearGradientImage',_0x51a0f0(0x25b1),_0x51a0f0(0x485b),_0x51a0f0(0x903),_0x51a0f0(0x4e8e),_0x51a0f0(0xe4f),_0x51a0f0(0x4cb2),_0x51a0f0(0x459b),_0x51a0f0(0x278f),_0x51a0f0(0x442b),_0x51a0f0(0x2bd6),_0x51a0f0(0x2d3b),_0x51a0f0(0x2194),'LinebreakAdjustments',_0x51a0f0(0x3fa4),'LinebreakSemicolonWeighting','LineBreakWithin',_0x51a0f0(0x240b),_0x51a0f0(0x33b),_0x51a0f0(0x43c4),_0x51a0f0(0x2de),_0x51a0f0(0xfb0),'LineIntegralConvolutionScale',_0x51a0f0(0x1755),_0x51a0f0(0x1fbc),'LineSpacing','LineWrapParts',_0x51a0f0(0x4879),_0x51a0f0(0x32bf),_0x51a0f0(0x430e),_0x51a0f0(0x4fce),'LinkCreate','LinkError',_0x51a0f0(0x368),_0x51a0f0(0x1785),_0x51a0f0(0x2b13),'LinkInterrupt','LinkLaunch',_0x51a0f0(0x3fd8),_0x51a0f0(0x26c1),_0x51a0f0(0x4b41),_0x51a0f0(0x3724),'LinkPatterns',_0x51a0f0(0x2c55),_0x51a0f0(0xd75),_0x51a0f0(0x4ed4),_0x51a0f0(0x160a),'LinkReadyQ',_0x51a0f0(0x1f48),_0x51a0f0(0x3da1),_0x51a0f0(0x32eb),'LinkWriteHeld',_0x51a0f0(0x4343),'
List',_0x51a0f0(0x4be4),_0x51a0f0(0x4403),_0x51a0f0(0x179d),_0x51a0f0(0x3ade),_0x51a0f0(0x2b64),'ListCorrelate',_0x51a0f0(0x873),_0x51a0f0(0x4603),'ListDensityPlot','ListDensityPlot3D',_0x51a0f0(0xc44),_0x51a0f0(0x359d),_0x51a0f0(0x469f),_0x51a0f0(0x4fba),_0x51a0f0(0x1048),'ListLinePlot',_0x51a0f0(0x4a86),_0x51a0f0(0x1119),_0x51a0f0(0x2e33),_0x51a0f0(0x4bcd),_0x51a0f0(0x48b9),_0x51a0f0(0x11f4),'ListPickerBoxBackground','ListPickerBoxOptions','ListPlay',_0x51a0f0(0x254),_0x51a0f0(0x53b),_0x51a0f0(0x1f8),_0x51a0f0(0x34a9),_0x51a0f0(0x2374),_0x51a0f0(0x3532),_0x51a0f0(0x5037),_0x51a0f0(0x11c1),_0x51a0f0(0x4b5e),_0x51a0f0(0x259b),_0x51a0f0(0x1fb8),'ListStreamPlot3D',_0x51a0f0(0x30ca),_0x51a0f0(0x3f29),_0x51a0f0(0x4ab0),_0x51a0f0(0x3bf9),'ListVectorPlot',_0x51a0f0(0x22cb),_0x51a0f0(0x3903),_0x51a0f0(0x50ca),_0x51a0f0(0x429e),_0x51a0f0(0x24a7),_0x51a0f0(0x3be2),_0x51a0f0(0x224f),_0x51a0f0(0xc7e),_0x51a0f0(0x1dac),'LocalEvaluate',_0x51a0f0(0x2e0f),'LocalizeVariables',_0x51a0f0(0xf24),'LocalObjects',_0x51a0f0(0x4385),'LocalSubmit',_0x51a0f0(0x46fb),_0x51a0f0(0x2024),'LocalTimeZone',_0x51a0f0(0x2466),_0x51a0f0(0x4534),'Locator',_0x51a0f0(0xf0e),_0x51a0f0(0x486f),_0x51a0f0(0x103f),_0x51a0f0(0x4c41),_0x51a0f0(0x4959),_0x51a0f0(0x3ff1),_0x51a0f0(0x2136),_0x51a0f0(0x86d),_0x51a0f0(0x2993),_0x51a0f0(0x44df),_0x51a0f0(0x4036),'Log2',_0x51a0f0(0x1371),_0x51a0f0(0x261b),_0x51a0f0(0x1d8f),_0x51a0f0(0x2a65),_0x51a0f0(0x4298),'LogisticDistribution',_0x51a0f0(0x4c8),_0x51a0f0(0x4881),'LogLikelihood',_0x51a0f0(0x26de),_0x51a0f0(0x3bd),'LogLogPlot',_0x51a0f0(0x5165),_0x51a0f0(0x1d2f),_0x51a0f0(0x3866),_0x51a0f0(0x220e),_0x51a0f0(0x19fd),'LongEqual',_0x51a0f0(0x285b),_0x51a0f0(0x119c),_0x51a0f0(0x4546),_0x51a0f0(0x42c8),_0x51a0f0(0x2920),_0x51a0f0(0x2251),'LongestOrderedSequence',_0x51a0f0(0x47cf),_0x51a0f0(0x380c),_0x51a0f0(0xb39),_0x51a0f0(0x21d8),_0x51a0f0(0x777),'LongShortTermMemoryLayer',_0x51a0f0(0x1f6b),_0x51a0f0(0x68e),_0x51a0f0(0x1685),_0x51a0f0(0x46b0),_0x51a0f0(0xfd2),_0x51a0f0(
0x4410),'LowerLeftArrow',_0x51a0f0(0x300b),'LowerTriangularize','LowerTriangularMatrix','LowerTriangularMatrixQ',_0x51a0f0(0x19f5),_0x51a0f0(0x4d79),_0x51a0f0(0x123d),_0x51a0f0(0x3288),_0x51a0f0(0x2286),_0x51a0f0(0x822),'LucasL',_0x51a0f0(0x1cce),'LUDecomposition',_0x51a0f0(0x4d99),'LUVColor',_0x51a0f0(0x2899),_0x51a0f0(0x51cb),'MachineID',_0x51a0f0(0x2da1),_0x51a0f0(0x1ca7),_0x51a0f0(0x3e9d),_0x51a0f0(0x4203),_0x51a0f0(0x4fe),'Magnification','Magnify','MailAddressValidation','MailExecute',_0x51a0f0(0x19a1),_0x51a0f0(0x47d1),'MailReceiverFunction',_0x51a0f0(0x2828),'MailSearch',_0x51a0f0(0xc7c),'MailServerConnection',_0x51a0f0(0xd20),_0x51a0f0(0x4b60),_0x51a0f0(0x3a73),_0x51a0f0(0x1fed),_0x51a0f0(0x418c),_0x51a0f0(0x2a97),'MakeRules',_0x51a0f0(0x2b40),'ManagedLibraryExpressionQ','MandelbrotSetBoettcher',_0x51a0f0(0x4c76),'MandelbrotSetIterationCount',_0x51a0f0(0x45d4),_0x51a0f0(0x912),_0x51a0f0(0x3e94),_0x51a0f0(0x24c),_0x51a0f0(0x13b5),_0x51a0f0(0x40e4),_0x51a0f0(0x1951),_0x51a0f0(0x28da),'MantissaExponent','Manual',_0x51a0f0(0x4a59),'MapAll',_0x51a0f0(0x9f0),_0x51a0f0(0x4b38),_0x51a0f0(0x18c8),_0x51a0f0(0x1d29),_0x51a0f0(0x22f9),'MarchenkoPasturDistribution',_0x51a0f0(0xbc4),_0x51a0f0(0x4a83),_0x51a0f0(0x246a),_0x51a0f0(0x817),_0x51a0f0(0x1f27),'MarkovProcessProperties',_0x51a0f0(0x192d),_0x51a0f0(0x4b62),'MassFluxValue',_0x51a0f0(0x5251),_0x51a0f0(0x1015),_0x51a0f0(0x1bc5),_0x51a0f0(0x3a88),'MassTransportPDEComponent',_0x51a0f0(0x3bde),_0x51a0f0(0x870),_0x51a0f0(0x510e),_0x51a0f0(0x620),_0x51a0f0(0x39dd),_0x51a0f0(0x3cf5),_0x51a0f0(0x37c9),_0x51a0f0(0x4a08),'MathematicaNotation','MathieuC',_0x51a0f0(0x2754),'MathieuCharacteristicB','MathieuCharacteristicExponent',_0x51a0f0(0x12c3),_0x51a0f0(0x31b0),_0x51a0f0(0x517a),_0x51a0f0(0x1a58),_0x51a0f0(0x41e6),_0x51a0f0(0x261),_0x51a0f0(0x56b),_0x51a0f0(0x50c8),_0x51a0f0(0x4b88),_0x51a0f0(0x4975),_0x51a0f0(0xf48),_0x51a0f0(0x7dc),_0x51a0f0(0x3256),_0x51a0f0(0x1907),_0x51a0f0(0x42db),'MatrixNormalDistribution',_0x51a0f0(0x
4ac6),_0x51a0f0(0x316e),_0x51a0f0(0x2388),_0x51a0f0(0x29a8),_0x51a0f0(0x36c1),'MatrixTDistribution',_0x51a0f0(0x2d51),_0x51a0f0(0x1bce),_0x51a0f0(0x3041),_0x51a0f0(0x3d69),_0x51a0f0(0x2f01),'MaxDetect',_0x51a0f0(0x1996),_0x51a0f0(0xb34),_0x51a0f0(0x42f7),'MaxExtraConditions','MaxFeatureDisplacement',_0x51a0f0(0x4592),_0x51a0f0(0x1ef9),_0x51a0f0(0x1173),'Maximize',_0x51a0f0(0x4bf2),'MaxIterations','MaxLimit',_0x51a0f0(0x2500),_0x51a0f0(0x26b8),_0x51a0f0(0x2565),_0x51a0f0(0x4bac),_0x51a0f0(0x2e0e),_0x51a0f0(0x41f3),'MaxStableDistribution',_0x51a0f0(0x985),_0x51a0f0(0x467f),_0x51a0f0(0x38b7),_0x51a0f0(0x250),_0x51a0f0(0x1428),_0x51a0f0(0x437b),_0x51a0f0(0xf6b),_0x51a0f0(0x3d3),_0x51a0f0(0x1f70),_0x51a0f0(0x4f3f),_0x51a0f0(0xe9e),'MeanClusteringCoefficient',_0x51a0f0(0x2a7d),'MeanDeviation',_0x51a0f0(0xdf5),_0x51a0f0(0x3de3),'MeanNeighborDegree',_0x51a0f0(0x4eb9),_0x51a0f0(0x3ca8),_0x51a0f0(0x40eb),'MeanSquaredLossLayer',_0x51a0f0(0x2dbd),_0x51a0f0(0x16fd),'MedianFilter','MedicalTestData',_0x51a0f0(0x3fbd),_0x51a0f0(0x11a1),_0x51a0f0(0x28d),_0x51a0f0(0x3f5f),_0x51a0f0(0x4f4e),_0x51a0f0(0x1788),_0x51a0f0(0x4a1c),_0x51a0f0(0x525a),'MemoryConstrained','MemoryConstraint',_0x51a0f0(0x171b),_0x51a0f0(0x311),_0x51a0f0(0x37ae),'MenuAppearance',_0x51a0f0(0x3097),_0x51a0f0(0x310e),_0x51a0f0(0x47ea),'MenuList',_0x51a0f0(0x2e9f),_0x51a0f0(0x32d7),_0x51a0f0(0x1118),_0x51a0f0(0x234f),'Merge',_0x51a0f0(0x2cf0),_0x51a0f0(0x3aed),_0x51a0f0(0x3dd6),_0x51a0f0(0x8ce),_0x51a0f0(0x2130),_0x51a0f0(0x2e8c),'MeshCellCount',_0x51a0f0(0x16e0),'MeshCellIndex','MeshCellLabel',_0x51a0f0(0x156b),'MeshCellMeasure',_0x51a0f0(0x2c73),_0x51a0f0(0x1b99),_0x51a0f0(0x204f),_0x51a0f0(0xa96),_0x51a0f0(0x421e),_0x51a0f0(0xc99),_0x51a0f0(0x447f),'MeshPrimitives',_0x51a0f0(0x3683),_0x51a0f0(0x4d2a),_0x51a0f0(0x14cc),'MeshRegion','MeshRegionQ',_0x51a0f0(0x9f5),'MeshStyle',_0x51a0f0(0x2376),'MessageDialog',_0x51a0f0(0x4ddc),_0x51a0f0(0x3746),_0x51a0f0(0x1004),'MessageOptions',_0x51a0f0(0x695),_0x51a0f0(0x4c7d),_0x
51a0f0(0x1f99),'MetaCharacters',_0x51a0f0(0x2dc4),'MeteorShowerData',_0x51a0f0(0x11ed),_0x51a0f0(0x3831),_0x51a0f0(0xd72),_0x51a0f0(0x123f),_0x51a0f0(0x2f84),_0x51a0f0(0x18c2),_0x51a0f0(0x4c50),_0x51a0f0(0x4e9),_0x51a0f0(0x2afd),_0x51a0f0(0x39ec),'MineralData','MinFilter',_0x51a0f0(0x1a9c),_0x51a0f0(0x465a),_0x51a0f0(0x26c2),_0x51a0f0(0x71e),_0x51a0f0(0x218a),_0x51a0f0(0xaa2),'MinkowskiQuestionMark',_0x51a0f0(0x4e0f),'MinMax',_0x51a0f0(0x3ef1),_0x51a0f0(0x2f0),_0x51a0f0(0x408e),_0x51a0f0(0x35fe),_0x51a0f0(0x4076),_0x51a0f0(0x2ff5),_0x51a0f0(0x4ae8),_0x51a0f0(0x3944),_0x51a0f0(0x2543),_0x51a0f0(0x3886),_0x51a0f0(0x1db1),_0x51a0f0(0x15f3),'MissingDataRules',_0x51a0f0(0xfba),_0x51a0f0(0x49e8),_0x51a0f0(0x318),_0x51a0f0(0x39ca),'MissingValueSynthesis',_0x51a0f0(0x520d),_0x51a0f0(0x2291),_0x51a0f0(0x395c),_0x51a0f0(0x37d4),'MixedRadix',_0x51a0f0(0x16c3),'MixedUnit',_0x51a0f0(0x47ca),_0x51a0f0(0x3a4a),_0x51a0f0(0x28f1),_0x51a0f0(0x3a23),_0x51a0f0(0x38fc),_0x51a0f0(0x333),_0x51a0f0(0x304e),_0x51a0f0(0x336e),_0x51a0f0(0x2496),'Modulus','MoebiusMu',_0x51a0f0(0x2a5a),'MoleculeAlign',_0x51a0f0(0x3e5a),_0x51a0f0(0x1258),'MoleculeEquivalentQ','MoleculeFreeQ',_0x51a0f0(0x332b),_0x51a0f0(0x22b0),_0x51a0f0(0x224e),_0x51a0f0(0x10c1),_0x51a0f0(0x2103),_0x51a0f0(0x3b5b),_0x51a0f0(0x241),'MoleculePlot3D',_0x51a0f0(0x17bd),_0x51a0f0(0x3887),'MoleculeRecognize','MoleculeSubstructureCount','MoleculeValue',_0x51a0f0(0x22a5),_0x51a0f0(0x2959),_0x51a0f0(0x3def),'MomentGeneratingFunction',_0x51a0f0(0x2ced),'Monday',_0x51a0f0(0xf01),_0x51a0f0(0x3ad6),_0x51a0f0(0x9c7),_0x51a0f0(0x3b88),_0x51a0f0(0x590),_0x51a0f0(0x3555),'MorletWavelet','MorphologicalBinarize',_0x51a0f0(0x3af2),_0x51a0f0(0x62b),'MorphologicalEulerNumber',_0x51a0f0(0x3f97),'MorphologicalPerimeter','MorphologicalTransform',_0x51a0f0(0x4724),_0x51a0f0(0x16ce),_0x51a0f0(0x1d55),_0x51a0f0(0x2799),'MouseAppearance',_0x51a0f0(0x4c9e),_0x51a0f0(0x32b8),_0x51a0f0(0x39d3),_0x51a0f0(0x232d),_0x51a0f0(0x2fad),_0x51a0f0(0x22ce),_0x51a0f0(0x3
d76),_0x51a0f0(0x2ea9),_0x51a0f0(0x3c35),_0x51a0f0(0x4754),'MultiaxisArrangement',_0x51a0f0(0x3da4),_0x51a0f0(0x3e2c),'MultigraphQ',_0x51a0f0(0x46f3),'MultiLetterItalics',_0x51a0f0(0x100f),'MultilineFunction','Multinomial','MultinomialDistribution','MultinormalDistribution','MultiplicativeOrder',_0x51a0f0(0x1b18),'MultiplySides','MultiscriptBoxOptions',_0x51a0f0(0x3987),_0x51a0f0(0x65c),'MultivariatePoissonDistribution',_0x51a0f0(0x2972),'N','NakagamiDistribution',_0x51a0f0(0x2df2),_0x51a0f0(0x1ec8),_0x51a0f0(0x2166),'NamespaceBoxOptions',_0x51a0f0(0x325c),_0x51a0f0(0x32fa),_0x51a0f0(0x40f8),'NBernoulliB',_0x51a0f0(0x419a),_0x51a0f0(0x1dd3),_0x51a0f0(0x3cb7),'NCaputoD',_0x51a0f0(0x4899),_0x51a0f0(0x2a96),'NDSolve',_0x51a0f0(0x3564),_0x51a0f0(0x1dba),_0x51a0f0(0x343d),_0x51a0f0(0x2994),_0x51a0f0(0x2cb1),_0x51a0f0(0x3359),_0x51a0f0(0x2393),_0x51a0f0(0x2d8c),_0x51a0f0(0x28fe),_0x51a0f0(0x9b6),_0x51a0f0(0x90f),_0x51a0f0(0xc59),_0x51a0f0(0x3e2a),_0x51a0f0(0x439e),'NegativelyOrientedPoints',_0x51a0f0(0x168b),_0x51a0f0(0x50e4),_0x51a0f0(0x3095),'NegativeSemidefiniteMatrixQ','NeighborhoodData',_0x51a0f0(0x15dd),_0x51a0f0(0x482b),_0x51a0f0(0xec8),_0x51a0f0(0x3c11),_0x51a0f0(0x2c32),'NestGraph',_0x51a0f0(0x16ab),_0x51a0f0(0x2e4f),'NestWhile',_0x51a0f0(0x4eec),_0x51a0f0(0x2ec3),_0x51a0f0(0xf31),_0x51a0f0(0x3319),_0x51a0f0(0x4bf0),_0x51a0f0(0x4ee9),_0x51a0f0(0x2e20),_0x51a0f0(0x19e1),_0x51a0f0(0x20ec),_0x51a0f0(0x2f53),_0x51a0f0(0x5066),_0x51a0f0(0x4bea),_0x51a0f0(0x2d41),_0x51a0f0(0x1ac3),'NetFoldOperator',_0x51a0f0(0x113e),_0x51a0f0(0x336f),_0x51a0f0(0x1f63),_0x51a0f0(0x50da),'NetInsert',_0x51a0f0(0x3838),_0x51a0f0(0x4183),_0x51a0f0(0x46d2),_0x51a0f0(0x1d47),_0x51a0f0(0x47d2),_0x51a0f0(0x43d3),_0x51a0f0(0x4ec2),_0x51a0f0(0x4315),'NetPort',_0x51a0f0(0x2d99),_0x51a0f0(0x18ff),'NetRename',_0x51a0f0(0xcfa),'NetReplacePart',_0x51a0f0(0x5256),_0x51a0f0(0x47ac),_0x51a0f0(0x452f),'NetTrain',_0x51a0f0(0x289c),'NetUnfold',_0x51a0f0(0x1f3e),_0x51a0f0(0x2bb8),_0x51a0f0(0x21f9),_0x51a0f0(
0x32ce),_0x51a0f0(0x2458),_0x51a0f0(0x45ab),_0x51a0f0(0x5f7),_0x51a0f0(0xb20),_0x51a0f0(0x26cf),'NewPrimitiveStyle',_0x51a0f0(0x12d1),'Next',_0x51a0f0(0x15e8),_0x51a0f0(0x3b07),_0x51a0f0(0x319e),'NextScheduledTaskTime',_0x51a0f0(0x18b8),_0x51a0f0(0x2879),_0x51a0f0(0x279b),_0x51a0f0(0x4ba0),_0x51a0f0(0x2538),_0x51a0f0(0x4f4d),_0x51a0f0(0x484d),_0x51a0f0(0x1d6d),'NIntegrate',_0x51a0f0(0x210b),'NMaxValue',_0x51a0f0(0x35bd),_0x51a0f0(0x4d6e),_0x51a0f0(0x4083),_0x51a0f0(0x3913),_0x51a0f0(0x2358),_0x51a0f0(0x392a),'NoncentralChiSquareDistribution',_0x51a0f0(0x15fc),'NoncentralStudentTDistribution','NonCommutativeMultiply',_0x51a0f0(0x490),'NondimensionalizationTransform',_0x51a0f0(0x23dd),'NoneTrue',_0x51a0f0(0x31eb),'NonlinearStateSpaceModel',_0x51a0f0(0x2c8b),_0x51a0f0(0x139b),_0x51a0f0(0x1700),'NonNegativeRationals','NonNegativeReals',_0x51a0f0(0x26f2),'NonPositiveIntegers',_0x51a0f0(0x3910),_0x51a0f0(0x2ba6),_0x51a0f0(0x1a86),_0x51a0f0(0x1f3d),'Norm',_0x51a0f0(0x2c1f),_0x51a0f0(0x1ce2),'NormalGrouping',_0x51a0f0(0x11f2),_0x51a0f0(0x858),_0x51a0f0(0x4459),_0x51a0f0(0x1b9f),_0x51a0f0(0x2e83),_0x51a0f0(0x44c2),_0x51a0f0(0xf15),_0x51a0f0(0xd90),'NotCongruent',_0x51a0f0(0x4bd4),_0x51a0f0(0x1f65),_0x51a0f0(0x1a29),_0x51a0f0(0x48e9),'NotebookAutoSave',_0x51a0f0(0x4e19),'NotebookClose','NotebookConvertSettings',_0x51a0f0(0x52f),_0x51a0f0(0x3f38),_0x51a0f0(0x1500),'NotebookDirectory',_0x51a0f0(0x2b8e),_0x51a0f0(0x12f2),_0x51a0f0(0x2fc2),_0x51a0f0(0x1f06),_0x51a0f0(0x558),_0x51a0f0(0x2a4b),_0x51a0f0(0x386e),_0x51a0f0(0x469a),_0x51a0f0(0x3e95),_0x51a0f0(0x4e52),_0x51a0f0(0xde3),_0x51a0f0(0x1cef),_0x51a0f0(0x1901),_0x51a0f0(0x4c4),_0x51a0f0(0x4b5a),_0x51a0f0(0x3311),_0x51a0f0(0x2795),'NotebookSave',_0x51a0f0(0x4c53),_0x51a0f0(0x3c96),_0x51a0f0(0x51c4),'NotebookWrite','NotElement','NotEqualTilde','NotExists',_0x51a0f0(0x4400),'NotGreaterEqual',_0x51a0f0(0x5253),_0x51a0f0(0x41d6),_0x51a0f0(0x2764),_0x51a0f0(0x5093),_0x51a0f0(0x2846),_0x51a0f0(0xe51),_0x51a0f0(0x4e71),_0x51a0f0(0x52
8b),_0x51a0f0(0x5017),'NotLeftTriangle',_0x51a0f0(0x1a30),_0x51a0f0(0x3bdc),_0x51a0f0(0xe88),'NotLessEqual',_0x51a0f0(0x32df),_0x51a0f0(0x2186),_0x51a0f0(0x3e88),'NotLessSlantEqual','NotLessTilde','NotNestedGreaterGreater',_0x51a0f0(0x381f),_0x51a0f0(0x3471),_0x51a0f0(0x14fb),_0x51a0f0(0x19da),_0x51a0f0(0x114c),'NotReverseElement',_0x51a0f0(0x5221),_0x51a0f0(0x345d),_0x51a0f0(0x9cc),_0x51a0f0(0x17b2),_0x51a0f0(0x236),_0x51a0f0(0x3fff),_0x51a0f0(0x78d),_0x51a0f0(0x17f4),_0x51a0f0(0x26dd),_0x51a0f0(0x4599),_0x51a0f0(0x5042),_0x51a0f0(0x29b9),_0x51a0f0(0x351a),_0x51a0f0(0x2510),'NotSupersetEqual',_0x51a0f0(0x3b48),_0x51a0f0(0x3f53),_0x51a0f0(0x1cbf),'NotTildeTilde',_0x51a0f0(0x2399),_0x51a0f0(0x2797),'NoWhitespace',_0x51a0f0(0x38a6),_0x51a0f0(0x2946),_0x51a0f0(0x42ff),_0x51a0f0(0x2bc0),_0x51a0f0(0x509d),'NSolveValues',_0x51a0f0(0x49b),_0x51a0f0(0x4f0d),_0x51a0f0(0x1d7f),_0x51a0f0(0x23b8),_0x51a0f0(0x2ed5),'NullRecords',_0x51a0f0(0x13bc),'NullWords','Number',_0x51a0f0(0xfb1),_0x51a0f0(0x6de),_0x51a0f0(0x342),'NumberExpand',_0x51a0f0(0x16df),'NumberFieldDiscriminant','NumberFieldFundamentalUnits',_0x51a0f0(0x493c),_0x51a0f0(0x41af),_0x51a0f0(0x459a),_0x51a0f0(0x2777),_0x51a0f0(0x1ea1),_0x51a0f0(0xb06),'NumberFormat',_0x51a0f0(0x3d1a),'NumberMarks',_0x51a0f0(0x3064),_0x51a0f0(0x33fc),'NumberPoint','NumberQ',_0x51a0f0(0x4e1e),'NumberSigns','NumberString','Numerator','NumeratorDenominator','NumericalOrder',_0x51a0f0(0x628),'NumericArray',_0x51a0f0(0xebe),'NumericArrayType',_0x51a0f0(0x4a79),'NumericQ',_0x51a0f0(0x34d),_0x51a0f0(0x3491),'NyquistGridLines',_0x51a0f0(0x2925),'O',_0x51a0f0(0x6ee),_0x51a0f0(0x4fb2),'ObservabilityMatrix',_0x51a0f0(0x396e),_0x51a0f0(0xfcc),'OceanData',_0x51a0f0(0x4f61),_0x51a0f0(0x4582),_0x51a0f0(0x2244),_0x51a0f0(0x31c5),_0x51a0f0(0x4f3c),'On',_0x51a0f0(0x3ed0),'Once','OneIdentity',_0x51a0f0(0x41fd),'OpacityFunction',_0x51a0f0(0x2c02),_0x51a0f0(0x430c),_0x51a0f0(0x100d),_0x51a0f0(0x2f18),_0x51a0f0(0x1d3c),_0x51a0f0(0x19b5),_0x51a0f0(0x1661),'Open
FunctionInspectorPacket',_0x51a0f0(0x2bc5),_0x51a0f0(0x234a),'OpenSpecialOptions',_0x51a0f0(0x13aa),_0x51a0f0(0x4251),'Operate',_0x51a0f0(0x3131),_0x51a0f0(0x33bc),_0x51a0f0(0x50c0),_0x51a0f0(0x31e1),_0x51a0f0(0x4e81),'OptionInspectorSettings',_0x51a0f0(0x4fab),_0x51a0f0(0x39f8),_0x51a0f0(0x3403),_0x51a0f0(0x430),'OptionValue',_0x51a0f0(0x28f),_0x51a0f0(0x4255),'Or',_0x51a0f0(0x3da2),_0x51a0f0(0x4e4d),'OrderDistribution','OrderedQ',_0x51a0f0(0x2f66),_0x51a0f0(0x2d62),'OrderingLayer','Orderless',_0x51a0f0(0x66b),'OrdinalScale',_0x51a0f0(0x2809),_0x51a0f0(0x2a52),_0x51a0f0(0x1b8b),_0x51a0f0(0xfe8),'Outer','OuterPolygon',_0x51a0f0(0x4b11),_0x51a0f0(0x2476),_0x51a0f0(0xf23),_0x51a0f0(0x223a),'OutputForm',_0x51a0f0(0x4bb7),_0x51a0f0(0x1f4c),_0x51a0f0(0x50e9),_0x51a0f0(0x2dcc),'OutputPorts',_0x51a0f0(0x160f),_0x51a0f0(0x344),_0x51a0f0(0x308),_0x51a0f0(0x4e6c),_0x51a0f0(0x398e),_0x51a0f0(0x242c),_0x51a0f0(0x4fc5),'OverHat',_0x51a0f0(0x48f7),_0x51a0f0(0x4d0d),_0x51a0f0(0x1a4f),_0x51a0f0(0x1451),'OverlayVideo','Overscript',_0x51a0f0(0x3750),_0x51a0f0(0xce3),_0x51a0f0(0x20ef),_0x51a0f0(0x1b65),'OverwriteTarget','OwenT','OwnValues',_0x51a0f0(0x880),_0x51a0f0(0x3b9e),_0x51a0f0(0x1bea),_0x51a0f0(0x1ee9),_0x51a0f0(0x30e4),_0x51a0f0(0x47b8),_0x51a0f0(0x32c0),_0x51a0f0(0x2d17),_0x51a0f0(0x24e8),_0x51a0f0(0x178a),_0x51a0f0(0x45f7),'PacletFindRemote',_0x51a0f0(0xfc3),_0x51a0f0(0x4383),_0x51a0f0(0x45b2),'PacletNewerQ',_0x51a0f0(0x2d28),_0x51a0f0(0x44ba),'PacletSite',_0x51a0f0(0x3157),'PacletSiteRegister','PacletSites',_0x51a0f0(0x8f8),_0x51a0f0(0x34d7),_0x51a0f0(0x1e3f),_0x51a0f0(0x81d),_0x51a0f0(0x4488),'PaddedForm',_0x51a0f0(0x3a3a),_0x51a0f0(0x48fa),'PaddingSize','PadeApproximant','PadLeft',_0x51a0f0(0x253c),_0x51a0f0(0x8e0),_0x51a0f0(0x1c8c),_0x51a0f0(0x276),_0x51a0f0(0x39f4),_0x51a0f0(0x2ec0),_0x51a0f0(0xf52),_0x51a0f0(0xfe9),_0x51a0f0(0x1ac5),_0x51a0f0(0x16ba),_0x51a0f0(0xa6b),'PageWidth',_0x51a0f0(0x2450),_0x51a0f0(0x4616),_0x51a0f0(0x209b),_0x51a0f0(0x1b37),_0x51a0f0(0x33ad),'
PairedTTest',_0x51a0f0(0x2106),_0x51a0f0(0x1b02),_0x51a0f0(0x2b7b),_0x51a0f0(0xeec),_0x51a0f0(0xe6e),'Pane','PaneBox',_0x51a0f0(0x2256),_0x51a0f0(0x3626),_0x51a0f0(0x1f6c),_0x51a0f0(0x2a13),_0x51a0f0(0x6be),'PaneSelector',_0x51a0f0(0x1d50),_0x51a0f0(0x2495),_0x51a0f0(0x48a),_0x51a0f0(0x2a77),'ParagraphIndent','ParagraphSpacing',_0x51a0f0(0x1256),_0x51a0f0(0x355a),_0x51a0f0(0x2d7f),_0x51a0f0(0x2f3c),_0x51a0f0(0x4b65),_0x51a0f0(0x2f2b),_0x51a0f0(0x2317),_0x51a0f0(0x1107),_0x51a0f0(0x3eae),_0x51a0f0(0x3f12),'ParallelNeeds',_0x51a0f0(0x1323),_0x51a0f0(0x2535),_0x51a0f0(0x2b80),_0x51a0f0(0x4382),_0x51a0f0(0x304c),_0x51a0f0(0x327f),_0x51a0f0(0x33ec),'ParameterEstimator',_0x51a0f0(0x4be2),'ParameterVariables','ParametricConvexOptimization',_0x51a0f0(0x2d6e),'ParametricNDSolve',_0x51a0f0(0x4652),_0x51a0f0(0xe16),_0x51a0f0(0x5240),_0x51a0f0(0x11e2),_0x51a0f0(0xd3f),'ParentBox',_0x51a0f0(0x2661),'ParentConnect',_0x51a0f0(0x2f82),_0x51a0f0(0x2211),'ParentEdgeLabelFunction','ParentEdgeLabelStyle',_0x51a0f0(0xf8c),_0x51a0f0(0xa6e),_0x51a0f0(0x4f12),_0x51a0f0(0x3161),'Parenthesize',_0x51a0f0(0x2383),_0x51a0f0(0x4911),_0x51a0f0(0x1bcf),_0x51a0f0(0x2aea),_0x51a0f0(0x2c44),_0x51a0f0(0x307e),'PartBehavior',_0x51a0f0(0x3abd),_0x51a0f0(0x1cb4),'ParticleAcceleratorData',_0x51a0f0(0x38f3),_0x51a0f0(0x383e),_0x51a0f0(0x34f0),'PartitionsP','PartitionsQ',_0x51a0f0(0xcc2),_0x51a0f0(0x1bd0),_0x51a0f0(0x1626),_0x51a0f0(0x3109),_0x51a0f0(0x1b75),'PassEventsDown',_0x51a0f0(0x2234),'Paste',_0x51a0f0(0x38ec),_0x51a0f0(0x1f3b),_0x51a0f0(0x3164),_0x51a0f0(0x6c1),_0x51a0f0(0x479e),_0x51a0f0(0x2827),_0x51a0f0(0xa93),_0x51a0f0(0x2fb5),_0x51a0f0(0x4236),_0x51a0f0(0x10a4),_0x51a0f0(0x1c09),_0x51a0f0(0x4a4a),_0x51a0f0(0x1789),_0x51a0f0(0x44fc),_0x51a0f0(0xa67),_0x51a0f0(0x3fdf),'PeakDetect','PeanoCurve','PearsonChiSquareTest',_0x51a0f0(0x3e85),_0x51a0f0(0x391a),'PenttinenPointProcess','PercentForm',_0x51a0f0(0x3185),_0x51a0f0(0x3c01),_0x51a0f0(0x2ff0),_0x51a0f0(0x23ba),'PeriodicBoundaryCondition',_0x51a0f
0(0x511a),_0x51a0f0(0x4ac4),_0x51a0f0(0x11b5),_0x51a0f0(0x2f94),'Permissions',_0x51a0f0(0x3c8a),_0x51a0f0(0x1c5c),_0x51a0f0(0x399),_0x51a0f0(0xd5f),'PermissionsKeys',_0x51a0f0(0x44ee),'PermutationCyclesQ',_0x51a0f0(0x3349),_0x51a0f0(0x2ed7),_0x51a0f0(0x3fd3),'PermutationListQ','PermutationMatrix',_0x51a0f0(0x134c),'PermutationMin','PermutationOrder','PermutationPower',_0x51a0f0(0x1229),_0x51a0f0(0x21a3),'Permutations',_0x51a0f0(0x2ca8),_0x51a0f0(0x2870),_0x51a0f0(0xaa4),'Perpendicular',_0x51a0f0(0xd66),'PersistenceLocation',_0x51a0f0(0x1cc6),'PersistentObject',_0x51a0f0(0x8dd),_0x51a0f0(0x51b4),_0x51a0f0(0x3c33),_0x51a0f0(0x2287),_0x51a0f0(0x38c8),_0x51a0f0(0x394c),_0x51a0f0(0x4947),_0x51a0f0(0x390d),'PhongShading',_0x51a0f0(0x29cd),'Pi',_0x51a0f0(0x368b),_0x51a0f0(0x4d58),_0x51a0f0(0x2025),_0x51a0f0(0x1bf3),'PIDDerivativeFilter','PIDFeedforward',_0x51a0f0(0x3f0a),_0x51a0f0(0x1e2f),_0x51a0f0(0x707),_0x51a0f0(0x361b),_0x51a0f0(0x74a),_0x51a0f0(0xbfa),_0x51a0f0(0x2158),_0x51a0f0(0x10e9),_0x51a0f0(0x1da9),_0x51a0f0(0x24b3),_0x51a0f0(0x367b),_0x51a0f0(0x217d),_0x51a0f0(0x3931),_0x51a0f0(0x133c),_0x51a0f0(0x21bd),_0x51a0f0(0x3efa),'PlaceholderLayer',_0x51a0f0(0x1cfe),_0x51a0f0(0x48f2),_0x51a0f0(0xebb),_0x51a0f0(0x207a),_0x51a0f0(0x4ce2),_0x51a0f0(0x42b3),_0x51a0f0(0x3333),_0x51a0f0(0x63d),_0x51a0f0(0x441f),_0x51a0f0(0x12c7),_0x51a0f0(0x1a81),'Play',_0x51a0f0(0x478d),_0x51a0f0(0x4b1a),_0x51a0f0(0x3106),_0x51a0f0(0x4b82),_0x51a0f0(0x44c4),_0x51a0f0(0x2fbe),'PlotJoined',_0x51a0f0(0x48b4),_0x51a0f0(0x365f),_0x51a0f0(0x21b9),_0x51a0f0(0x3bea),_0x51a0f0(0x5145),'PlotPoints',_0x51a0f0(0xdc9),'PlotRangeClipping',_0x51a0f0(0x235a),_0x51a0f0(0x34bb),_0x51a0f0(0x14ee),_0x51a0f0(0x4fd0),_0x51a0f0(0x407c),'Pluralize',_0x51a0f0(0x51df),_0x51a0f0(0xf72),'Pochhammer','PodStates',_0x51a0f0(0x1d1a),_0x51a0f0(0x1218),_0x51a0f0(0x481f),_0x51a0f0(0x2248),'PointBox',_0x51a0f0(0x485e),_0x51a0f0(0x9fe),_0x51a0f0(0x249d),_0x51a0f0(0x44fb),_0x51a0f0(0x6f9),_0x51a0f0(0x239e),_0x51a0f0(0x159e),_0x5
1a0f0(0x2973),_0x51a0f0(0x10ca),_0x51a0f0(0x305d),_0x51a0f0(0x1722),'PointSize',_0x51a0f0(0x339d),_0x51a0f0(0x4889),'PoissonConsulDistribution',_0x51a0f0(0x23e3),_0x51a0f0(0x4141),_0x51a0f0(0x3862),_0x51a0f0(0x334d),'PoissonWindow',_0x51a0f0(0x21ec),_0x51a0f0(0x6b9),'PolarGridLines',_0x51a0f0(0x1cf9),'PolarTicks',_0x51a0f0(0x3192),_0x51a0f0(0x50be),_0x51a0f0(0xabc),_0x51a0f0(0x3ac1),_0x51a0f0(0x4db5),_0x51a0f0(0x197e),_0x51a0f0(0x38e0),_0x51a0f0(0xdf3),_0x51a0f0(0x362b),'PolygonBoxOptions',_0x51a0f0(0xfdc),'PolygonDecomposition',_0x51a0f0(0x166f),_0x51a0f0(0x3978),'PolygonScale',_0x51a0f0(0x840),'PolyhedronAngle',_0x51a0f0(0x1bf9),'PolyhedronBoxOptions','PolyhedronCoordinates',_0x51a0f0(0x71c),_0x51a0f0(0x2d34),_0x51a0f0(0x1f1e),'PolyLog',_0x51a0f0(0x2b21),_0x51a0f0(0x4c47),_0x51a0f0(0x318b),'PolynomialGCD','PolynomialLCM',_0x51a0f0(0x454d),'PolynomialQ','PolynomialQuotient',_0x51a0f0(0x19c7),_0x51a0f0(0x406a),'PolynomialRemainder',_0x51a0f0(0x2a6a),_0x51a0f0(0x4629),_0x51a0f0(0x3fb6),_0x51a0f0(0x481c),_0x51a0f0(0x38c0),_0x51a0f0(0x28a5),'PopupView','PopupWindow','Position',_0x51a0f0(0xe3f),_0x51a0f0(0x4689),_0x51a0f0(0x370a),'Positive',_0x51a0f0(0x51e5),_0x51a0f0(0x1b31),_0x51a0f0(0x25ce),'PositiveRationals',_0x51a0f0(0x40ae),'PositiveSemidefiniteMatrixQ','PossibleZeroQ',_0x51a0f0(0x3b69),_0x51a0f0(0x30f2),_0x51a0f0(0x2132),'PowerDistribution',_0x51a0f0(0x4e9c),_0x51a0f0(0x29e6),'PowerModList',_0x51a0f0(0x2a72),_0x51a0f0(0x100e),_0x51a0f0(0x613),'PowerSymmetricPolynomial','Precedence','PrecedenceForm','Precedes',_0x51a0f0(0x3847),_0x51a0f0(0x119e),_0x51a0f0(0x22b7),_0x51a0f0(0x3865),_0x51a0f0(0xb12),'PreDecrement',_0x51a0f0(0x20cc),_0x51a0f0(0x4bce),_0x51a0f0(0x3cd9),_0x51a0f0(0x4ad6),_0x51a0f0(0x1a26),'PredictorMeasurementsObject',_0x51a0f0(0x393a),_0x51a0f0(0x3835),_0x51a0f0(0x2587),_0x51a0f0(0x3823),_0x51a0f0(0x1569),_0x51a0f0(0x48ec),_0x51a0f0(0x3a3c),_0x51a0f0(0x18b6),_0x51a0f0(0xba1),'PreserveColor',_0x51a0f0(0x481e),_0x51a0f0(0x13e5),'PreviousCell',_0x51a0f0
(0x3c6a),_0x51a0f0(0x3d6e),_0x51a0f0(0x4e4),_0x51a0f0(0xea6),'PrimeNu','PrimeOmega',_0x51a0f0(0x8d1),_0x51a0f0(0x4a5a),_0x51a0f0(0x30b6),_0x51a0f0(0x5241),_0x51a0f0(0x4c26),_0x51a0f0(0xc3e),_0x51a0f0(0x2f51),_0x51a0f0(0x3e18),_0x51a0f0(0xf42),_0x51a0f0(0x30bc),_0x51a0f0(0x293),_0x51a0f0(0x3e62),'PrintAction',_0x51a0f0(0x3dfe),_0x51a0f0(0x2eec),_0x51a0f0(0x2eb3),_0x51a0f0(0x3ef2),_0x51a0f0(0x1068),_0x51a0f0(0x4e93),_0x51a0f0(0x30e9),_0x51a0f0(0x10cf),'PrintPrecision','PrintTemporary',_0x51a0f0(0x313a),_0x51a0f0(0x3499),_0x51a0f0(0x3569),_0x51a0f0(0x2fb0),'PrivateEvaluationOptions',_0x51a0f0(0x7c6),'PrivateFrontEndOptions','PrivateKey',_0x51a0f0(0x3162),_0x51a0f0(0xf33),_0x51a0f0(0x4171),_0x51a0f0(0x1369),_0x51a0f0(0xcc3),'ProbabilityPr',_0x51a0f0(0x311b),_0x51a0f0(0x2efa),_0x51a0f0(0x2b2c),_0x51a0f0(0x34d3),'ProcessEnvironment',_0x51a0f0(0x4c18),_0x51a0f0(0x2174),_0x51a0f0(0x1389),_0x51a0f0(0x33ef),_0x51a0f0(0x4f37),'ProcessParameterQ',_0x51a0f0(0x1855),_0x51a0f0(0x756),_0x51a0f0(0x218b),_0x51a0f0(0x388e),_0x51a0f0(0x281),_0x51a0f0(0x3fa1),_0x51a0f0(0x20bf),_0x51a0f0(0x4c1d),'ProgressIndicatorBoxOptions',_0x51a0f0(0x3020),_0x51a0f0(0x3a67),'Prolog',_0x51a0f0(0x3fc1),_0x51a0f0(0x224c),_0x51a0f0(0xa4e),_0x51a0f0(0x2245),_0x51a0f0(0x388a),_0x51a0f0(0x2bd4),'PropertyValue','Proportion',_0x51a0f0(0x3fea),_0x51a0f0(0x851),_0x51a0f0(0x11cb),_0x51a0f0(0x4ff6),'Pruning',_0x51a0f0(0xbde),_0x51a0f0(0x274b),'PublicKey',_0x51a0f0(0x3242),'PulsarData',_0x51a0f0(0x13b4),_0x51a0f0(0xf9d),_0x51a0f0(0x2baf),_0x51a0f0(0xa3a),_0x51a0f0(0x201),_0x51a0f0(0x26e9),_0x51a0f0(0x328b),_0x51a0f0(0x26a8),_0x51a0f0(0x3424),_0x51a0f0(0x2eb8),_0x51a0f0(0x311e),_0x51a0f0(0x1423),'QPochhammer',_0x51a0f0(0x4809),'QRDecomposition','QuadraticIrrationalQ',_0x51a0f0(0x3d68),_0x51a0f0(0x5099),_0x51a0f0(0x50e6),_0x51a0f0(0x1b3f),'QuantityArray',_0x51a0f0(0x27a8),'QuantityForm','QuantityMagnitude',_0x51a0f0(0x433d),_0x51a0f0(0x41d1),_0x51a0f0(0x188b),_0x51a0f0(0x7dd),_0x51a0f0(0xcfb),_0x51a0f0(0x489b),'Quant
ityVariablePhysicalQuantity','Quartics',_0x51a0f0(0x39d7),_0x51a0f0(0x1a56),_0x51a0f0(0x2314),_0x51a0f0(0x2774),_0x51a0f0(0x525b),_0x51a0f0(0x1a6d),'QuestionObject',_0x51a0f0(0x2e2f),_0x51a0f0(0x934),_0x51a0f0(0x1f9),_0x51a0f0(0x32da),'Quiet',_0x51a0f0(0x50fc),_0x51a0f0(0x1acc),'Quotient',_0x51a0f0(0x17bc),'RadialAxisPlot','RadialGradientFilling',_0x51a0f0(0x11a3),_0x51a0f0(0x1f4e),_0x51a0f0(0x1e31),'RadicalBoxOptions',_0x51a0f0(0x4f7f),_0x51a0f0(0x10a0),_0x51a0f0(0x348),_0x51a0f0(0xa12),_0x51a0f0(0x23be),_0x51a0f0(0x418a),'RamanujanTau','RamanujanTauL','RamanujanTauTheta','RamanujanTauZ',_0x51a0f0(0x1f85),_0x51a0f0(0x32e2),_0x51a0f0(0x2154),_0x51a0f0(0x2b95),_0x51a0f0(0x3963),_0x51a0f0(0x42ca),_0x51a0f0(0x1a25),'RandomEntity','RandomFunction','RandomGeneratorState',_0x51a0f0(0xc50),_0x51a0f0(0xbbf),'RandomImage',_0x51a0f0(0x287b),'RandomInteger',_0x51a0f0(0x12cd),_0x51a0f0(0x1c6d),'RandomPointConfiguration',_0x51a0f0(0x3793),_0x51a0f0(0x2e76),_0x51a0f0(0xfde),_0x51a0f0(0x471a),_0x51a0f0(0x4fc7),_0x51a0f0(0x1570),'RandomSeeding',_0x51a0f0(0x3a37),_0x51a0f0(0x44cc),'RandomVariate','RandomWalkProcess',_0x51a0f0(0xeca),'Range',_0x51a0f0(0x72f),_0x51a0f0(0x388b),_0x51a0f0(0x15c2),_0x51a0f0(0x14e2),_0x51a0f0(0x2963),_0x51a0f0(0xa04),_0x51a0f0(0x4572),'Raster3DBox','Raster3DBoxOptions',_0x51a0f0(0x3afc),_0x51a0f0(0x2d72),_0x51a0f0(0x3663),_0x51a0f0(0x102c),_0x51a0f0(0x4b1),_0x51a0f0(0x4466),_0x51a0f0(0x21ce),_0x51a0f0(0xac4),_0x51a0f0(0x4467),_0x51a0f0(0x17b9),_0x51a0f0(0x1752),'RawArray',_0x51a0f0(0x563),'RawData','RawMedium','RayleighDistribution','Re',_0x51a0f0(0x1c7d),_0x51a0f0(0x3c6f),_0x51a0f0(0x3757),_0x51a0f0(0x4a31),_0x51a0f0(0x21b7),_0x51a0f0(0x3b13),'ReadList',_0x51a0f0(0x36ad),_0x51a0f0(0x100b),_0x51a0f0(0x1f8c),_0x51a0f0(0x2c5d),_0x51a0f0(0x466f),'RealDigits',_0x51a0f0(0x7de),_0x51a0f0(0x2e35),_0x51a0f0(0x14be),_0x51a0f0(0x151c),_0x51a0f0(0xf51),_0x51a0f0(0x208),_0x51a0f0(0xbd0),_0x51a0f0(0x10b2),_0x51a0f0(0x3cba),_0x51a0f0(0x42cc),_0x51a0f0(0x4d84),_0x51a0f0
(0x8a0),_0x51a0f0(0x36e1),_0x51a0f0(0x2acc),_0x51a0f0(0x4e4c),_0x51a0f0(0x4329),_0x51a0f0(0x30f1),_0x51a0f0(0x1e92),_0x51a0f0(0x51b0),_0x51a0f0(0x415d),'RecurringDigitsForm','Red','Reduce',_0x51a0f0(0x3b58),_0x51a0f0(0x2b86),_0x51a0f0(0x194b),_0x51a0f0(0x4a45),_0x51a0f0(0x9c0),_0x51a0f0(0xed0),_0x51a0f0(0x16e8),'Refresh',_0x51a0f0(0x1acb),'Region',_0x51a0f0(0x3b40),_0x51a0f0(0x31be),'RegionBoundaryStyle',_0x51a0f0(0x4e2a),_0x51a0f0(0x2692),'RegionCongruent',_0x51a0f0(0x1492),_0x51a0f0(0x325),_0x51a0f0(0x16bd),'RegionDimension',_0x51a0f0(0x4da4),_0x51a0f0(0x1718),_0x51a0f0(0x106c),_0x51a0f0(0x18da),_0x51a0f0(0x1b7e),_0x51a0f0(0x4cb0),_0x51a0f0(0x2ad8),_0x51a0f0(0x212d),_0x51a0f0(0x44b7),'RegionImage',_0x51a0f0(0x2cf6),_0x51a0f0(0x3217),_0x51a0f0(0x2f0d),_0x51a0f0(0x4981),_0x51a0f0(0x4a89),_0x51a0f0(0x3fbe),'RegionNearestFunction',_0x51a0f0(0x1dec),_0x51a0f0(0x4b7e),_0x51a0f0(0xbc0),_0x51a0f0(0x3a8a),_0x51a0f0(0x47fe),_0x51a0f0(0x2d0f),_0x51a0f0(0x3483),_0x51a0f0(0x9c6),_0x51a0f0(0x14de),'RegionWithin',_0x51a0f0(0x3782),_0x51a0f0(0x1a8e),_0x51a0f0(0x2397),_0x51a0f0(0x4a49),_0x51a0f0(0xd68),_0x51a0f0(0x3529),_0x51a0f0(0x232f),_0x51a0f0(0x12c1),'ReImStyle','Reinstall',_0x51a0f0(0x407a),_0x51a0f0(0x4b92),_0x51a0f0(0x4332),_0x51a0f0(0xafd),_0x51a0f0(0x1d3f),_0x51a0f0(0x388c),_0x51a0f0(0x1b88),'RemoteAuthorizationCaching',_0x51a0f0(0x41b4),_0x51a0f0(0x3a40),_0x51a0f0(0x2369),_0x51a0f0(0xc03),_0x51a0f0(0x1c7f),_0x51a0f0(0x1196),_0x51a0f0(0x3745),_0x51a0f0(0x4a2b),_0x51a0f0(0x3d94),_0x51a0f0(0x49d3),'RemoteInputFiles',_0x51a0f0(0x2688),_0x51a0f0(0x1ded),_0x51a0f0(0x215),'RemoteRunProcess',_0x51a0f0(0x496e),_0x51a0f0(0x3985),_0x51a0f0(0x1d63),_0x51a0f0(0x1007),_0x51a0f0(0x5ee),_0x51a0f0(0x32d4),_0x51a0f0(0x3493),_0x51a0f0(0x15f2),_0x51a0f0(0x40e3),_0x51a0f0(0x488e),_0x51a0f0(0x32ec),_0x51a0f0(0xca0),_0x51a0f0(0xd04),_0x51a0f0(0x308d),'RemoveUsers','RemoveVideoStream',_0x51a0f0(0x10f2),_0x51a0f0(0x498d),_0x51a0f0(0x10a8),_0x51a0f0(0x39d4),_0x51a0f0(0x228e),_0x51a0f0(0x1ba1),_0
x51a0f0(0x1110),_0x51a0f0(0x2a1b),_0x51a0f0(0x4aa4),'RepeatedString','RepeatedTiming',_0x51a0f0(0x553),'Replace',_0x51a0f0(0x3594),'ReplaceAt',_0x51a0f0(0x3477),_0x51a0f0(0x3677),_0x51a0f0(0x4d8),_0x51a0f0(0x432c),'ReplacePixelValue',_0x51a0f0(0x2d5b),_0x51a0f0(0x178e),_0x51a0f0(0x2099),_0x51a0f0(0x3203),_0x51a0f0(0x848),_0x51a0f0(0x5205),_0x51a0f0(0x4ae5),_0x51a0f0(0x2416),'ResetDirectory',_0x51a0f0(0x226c),'ReshapeLayer',_0x51a0f0(0x2660),_0x51a0f0(0x4fb8),_0x51a0f0(0x139f),_0x51a0f0(0x4a90),'ResolveContextAliases',_0x51a0f0(0x2f0e),_0x51a0f0(0x42df),'ResourceFunction',_0x51a0f0(0x485a),_0x51a0f0(0x2e7f),_0x51a0f0(0x1c9a),'ResourceSearch',_0x51a0f0(0x435d),_0x51a0f0(0x4364),_0x51a0f0(0x166c),_0x51a0f0(0x4ba4),_0x51a0f0(0x3c90),_0x51a0f0(0x5288),'ResponseForm',_0x51a0f0(0x41f5),_0x51a0f0(0x36cd),_0x51a0f0(0x10ee),'Resultant','ResumePacket',_0x51a0f0(0xb4f),_0x51a0f0(0x3e7f),_0x51a0f0(0x4108),_0x51a0f0(0xa81),_0x51a0f0(0x3f03),'ReturnPacket',_0x51a0f0(0x3bdd),'ReturnTextPacket','Reverse',_0x51a0f0(0x27c1),_0x51a0f0(0x3914),_0x51a0f0(0x29cf),_0x51a0f0(0x397b),_0x51a0f0(0x477b),'ReverseSort',_0x51a0f0(0x2109),_0x51a0f0(0x1712),_0x51a0f0(0x4f45),'RevolutionPlot3D',_0x51a0f0(0x7fd),_0x51a0f0(0x14b6),_0x51a0f0(0x4dec),_0x51a0f0(0x30bf),_0x51a0f0(0x504c),_0x51a0f0(0x1526),_0x51a0f0(0x484),_0x51a0f0(0x4e01),_0x51a0f0(0x27f2),'Right',_0x51a0f0(0x4694),_0x51a0f0(0x2791),_0x51a0f0(0x3d96),_0x51a0f0(0x4eef),_0x51a0f0(0x360a),_0x51a0f0(0x4242),_0x51a0f0(0x1928),'RightDownVectorBar','RightTee','RightTeeArrow',_0x51a0f0(0x2b1f),'RightTriangle','RightTriangleBar','RightTriangleEqual',_0x51a0f0(0xbec),'RightUpTeeVector',_0x51a0f0(0x322d),_0x51a0f0(0x1266),_0x51a0f0(0x3983),'RightVectorBar',_0x51a0f0(0x30c6),_0x51a0f0(0x3dcb),_0x51a0f0(0x36a7),_0x51a0f0(0x51e1),'RobustConvexOptimization',_0x51a0f0(0x3654),_0x51a0f0(0x446c),_0x51a0f0(0x13a9),_0x51a0f0(0x3f8),_0x51a0f0(0xde2),'RootApproximant',_0x51a0f0(0x2bcd),_0x51a0f0(0x2ae8),_0x51a0f0(0x7ee),_0x51a0f0(0x2a67),_0x51a0f0(0x3716),_0x
51a0f0(0xc7d),'RootSum',_0x51a0f0(0x493d),_0x51a0f0(0x20af),_0x51a0f0(0x25ed),_0x51a0f0(0x495f),'RotateRight',_0x51a0f0(0x1072),'RotationBox',_0x51a0f0(0x24b1),_0x51a0f0(0x631),_0x51a0f0(0x4160),_0x51a0f0(0xa46),_0x51a0f0(0x457),_0x51a0f0(0xfd1),'Row',_0x51a0f0(0xaaa),_0x51a0f0(0x5082),_0x51a0f0(0x3fe9),_0x51a0f0(0x19ff),_0x51a0f0(0x3b49),'RowMinHeight','RowReduce',_0x51a0f0(0x3c9a),_0x51a0f0(0x4a3a),_0x51a0f0(0x5248),_0x51a0f0(0x4711),_0x51a0f0(0x3d24),_0x51a0f0(0x2f11),_0x51a0f0(0x15b5),_0x51a0f0(0x3a97),_0x51a0f0(0x3239),_0x51a0f0(0x5019),_0x51a0f0(0x3686),_0x51a0f0(0x3f19),_0x51a0f0(0x26b9),_0x51a0f0(0xe2e),_0x51a0f0(0x525e),_0x51a0f0(0x6e4),_0x51a0f0(0x3f31),_0x51a0f0(0x3667),_0x51a0f0(0x23d7),_0x51a0f0(0x32f5),_0x51a0f0(0xea9),_0x51a0f0(0x2424),_0x51a0f0(0x4692),_0x51a0f0(0x1ee7),'SampledEntityClass',_0x51a0f0(0x4177),_0x51a0f0(0x2a2d),_0x51a0f0(0x32c4),'SampleRate',_0x51a0f0(0x17ce),_0x51a0f0(0x1b10),'SARMAProcess',_0x51a0f0(0x4b7c),_0x51a0f0(0x32d8),'SatisfiabilityCount',_0x51a0f0(0xb79),'SatisfiableQ','Saturday',_0x51a0f0(0x46fd),_0x51a0f0(0xe9b),_0x51a0f0(0x4374),'SaveConnection','SaveDefinitions','SavitzkyGolayMatrix',_0x51a0f0(0x2f4d),_0x51a0f0(0x33d6),_0x51a0f0(0x370b),_0x51a0f0(0x761),_0x51a0f0(0x2dda),_0x51a0f0(0x4e90),'ScalePadding',_0x51a0f0(0x205b),'ScaleRangeStyle',_0x51a0f0(0x45fe),'ScalingMatrix',_0x51a0f0(0x4dd2),_0x51a0f0(0x4666),_0x51a0f0(0xbe2),_0x51a0f0(0x2fa5),_0x51a0f0(0x3e13),_0x51a0f0(0x21bf),_0x51a0f0(0x2623),'ScheduledTasks',_0x51a0f0(0x3092),_0x51a0f0(0x233),_0x51a0f0(0x4f9b),_0x51a0f0(0x4d22),_0x51a0f0(0x2237),_0x51a0f0(0x46bb),_0x51a0f0(0x2dab),_0x51a0f0(0x153a),_0x51a0f0(0x23b4),_0x51a0f0(0x3731),_0x51a0f0(0x2fcd),'ScriptLevel','ScriptMinSize',_0x51a0f0(0x4510),_0x51a0f0(0x291a),_0x51a0f0(0x3f28),_0x51a0f0(0x5f9),_0x51a0f0(0x398b),_0x51a0f0(0x6d8),_0x51a0f0(0xee6),_0x51a0f0(0x1038),_0x51a0f0(0x2391),_0x51a0f0(0x4ece),_0x51a0f0(0x3b20),_0x51a0f0(0x270f),_0x51a0f0(0x1375),'SecondOrderConeOptimization',_0x51a0f0(0x4503),_0x51a0f0(0x2
ab1),_0x51a0f0(0x776),_0x51a0f0(0x4a01),_0x51a0f0(0xf63),'SecuredAuthenticationKey',_0x51a0f0(0x464a),'SecurityCertificate',_0x51a0f0(0x2743),_0x51a0f0(0x3b22),'Selectable',_0x51a0f0(0x2c35),'SelectedCells',_0x51a0f0(0x2b9d),_0x51a0f0(0x5220),_0x51a0f0(0xb85),_0x51a0f0(0x20a6),'SelectionCell','SelectionCellCreateCell','SelectionCellDefaultStyle',_0x51a0f0(0x4d0f),_0x51a0f0(0x31c9),_0x51a0f0(0x324c),'SelectionEvaluate',_0x51a0f0(0x3fe1),_0x51a0f0(0x4ca5),_0x51a0f0(0x4e33),_0x51a0f0(0x88c),_0x51a0f0(0xf90),_0x51a0f0(0x4362),'SemanticImport',_0x51a0f0(0x3937),'SemanticInterpretation',_0x51a0f0(0x779),_0x51a0f0(0x12db),_0x51a0f0(0x2c6d),_0x51a0f0(0x48d8),_0x51a0f0(0x4e7f),_0x51a0f0(0x2cd0),_0x51a0f0(0x4cd7),_0x51a0f0(0x3792),_0x51a0f0(0x883),_0x51a0f0(0x49d9),_0x51a0f0(0x3ef),'SequenceForm',_0x51a0f0(0x1fe8),_0x51a0f0(0x193a),'SequenceLastLayer','SequenceMostLayer',_0x51a0f0(0x49d5),_0x51a0f0(0x2d7d),_0x51a0f0(0x45f2),_0x51a0f0(0x28e2),_0x51a0f0(0x294b),_0x51a0f0(0x1bae),_0x51a0f0(0xa9e),_0x51a0f0(0x2eb5),_0x51a0f0(0x1883),_0x51a0f0(0x2a3f),_0x51a0f0(0x2253),_0x51a0f0(0x1208),_0x51a0f0(0x4812),_0x51a0f0(0x159b),'ServiceObject',_0x51a0f0(0x2ce7),'ServiceResponse',_0x51a0f0(0x94c),_0x51a0f0(0x4f89),_0x51a0f0(0x47a3),_0x51a0f0(0x34d5),_0x51a0f0(0x1052),_0x51a0f0(0x831),'SetAttributes',_0x51a0f0(0x4f9e),'SetCloudDirectory','SetCookies',_0x51a0f0(0x2159),_0x51a0f0(0x2505),_0x51a0f0(0x5164),_0x51a0f0(0x39c0),_0x51a0f0(0x21f4),'SetOptions',_0x51a0f0(0x3c10),_0x51a0f0(0x4816),_0x51a0f0(0x28bc),_0x51a0f0(0x450d),_0x51a0f0(0x12d4),'SetSelectedNotebook','SetSharedFunction','SetSharedVariable',_0x51a0f0(0x38da),'SetSystemModel','SetSystemOptions','Setter','SetterBar',_0x51a0f0(0x1f3a),_0x51a0f0(0x2779),'Setting','SetUsers',_0x51a0f0(0x4d10),'Shallow',_0x51a0f0(0x1d0c),_0x51a0f0(0x151a),_0x51a0f0(0x28a6),_0x51a0f0(0x1c74),_0x51a0f0(0x48ef),_0x51a0f0(0x2ddb),'ShearingTransform',_0x51a0f0(0x1490),_0x51a0f0(0x485f),_0x51a0f0(0x4e2d),_0x51a0f0(0x248),_0x51a0f0(0x4cc2),_0x51a0f0(0x4f70),
_0x51a0f0(0x1a31),_0x51a0f0(0x4ccd),'ShortestPathFunction','ShortLeftArrow',_0x51a0f0(0x805),_0x51a0f0(0x2438),_0x51a0f0(0x3546),_0x51a0f0(0x15b0),_0x51a0f0(0x1455),_0x51a0f0(0x247a),_0x51a0f0(0x148e),_0x51a0f0(0x4f6a),_0x51a0f0(0x291e),_0x51a0f0(0xa03),_0x51a0f0(0x47b2),_0x51a0f0(0x726),_0x51a0f0(0x10d7),_0x51a0f0(0x14f1),_0x51a0f0(0x2917),_0x51a0f0(0x1975),_0x51a0f0(0x4ebe),_0x51a0f0(0x285),_0x51a0f0(0x3faa),_0x51a0f0(0x35aa),_0x51a0f0(0x34d2),_0x51a0f0(0x33c9),'ShowShortBoxForm',_0x51a0f0(0x49a0),_0x51a0f0(0x19e8),_0x51a0f0(0x4779),_0x51a0f0(0x8e9),'ShrinkWrapBoundingBox',_0x51a0f0(0x4d38),_0x51a0f0(0x3ae7),_0x51a0f0(0x12c2),_0x51a0f0(0x2e75),_0x51a0f0(0x1e15),'Sign',_0x51a0f0(0x3575),_0x51a0f0(0x21c1),_0x51a0f0(0x4cfd),_0x51a0f0(0x4336),_0x51a0f0(0x211a),_0x51a0f0(0x347f),'SimilarityRules',_0x51a0f0(0x1fe2),'SimpleGraphQ','SimplePolygonQ','SimplePolyhedronQ',_0x51a0f0(0x39b1),'Simplify',_0x51a0f0(0x1986),_0x51a0f0(0x8c0),'SinghMaddalaDistribution',_0x51a0f0(0x27f4),_0x51a0f0(0xac1),_0x51a0f0(0x4b3c),_0x51a0f0(0x24a1),'SingularValueList',_0x51a0f0(0x4212),_0x51a0f0(0x4092),'Sinh',_0x51a0f0(0x4f17),_0x51a0f0(0x3f32),_0x51a0f0(0x62f),'Skeleton',_0x51a0f0(0x4e68),_0x51a0f0(0x27d4),_0x51a0f0(0x7c5),'SkewNormalDistribution',_0x51a0f0(0x2d43),_0x51a0f0(0x3637),_0x51a0f0(0x854),_0x51a0f0(0x478a),_0x51a0f0(0x3af0),_0x51a0f0(0x3eb9),'Slider',_0x51a0f0(0xcb9),_0x51a0f0(0x4ab8),_0x51a0f0(0x3646),_0x51a0f0(0x516),_0x51a0f0(0x1083),_0x51a0f0(0x1ab1),_0x51a0f0(0x2838),_0x51a0f0(0x114e),'SlotSequence',_0x51a0f0(0x240e),_0x51a0f0(0x4a95),_0x51a0f0(0x281f),_0x51a0f0(0x41d4),_0x51a0f0(0x27bb),_0x51a0f0(0x2331),_0x51a0f0(0x425c),_0x51a0f0(0x82a),_0x51a0f0(0x1749),_0x51a0f0(0x2b6c),'SmoothPointDensity','SnDispersion',_0x51a0f0(0x4e6a),_0x51a0f0(0x475a),'SnubPolyhedron',_0x51a0f0(0x1e30),_0x51a0f0(0x40ed),_0x51a0f0(0x443),_0x51a0f0(0x51c6),'SocketListener',_0x51a0f0(0x263),_0x51a0f0(0x51ba),_0x51a0f0(0x11df),'SocketReadyQ','Sockets',_0x51a0f0(0x120b),_0x51a0f0(0x280f),_0x51a0f0(0x30a
e),_0x51a0f0(0x1683),'SolarEclipse','SolarSystemFeatureData',_0x51a0f0(0x1766),'SolidAngle',_0x51a0f0(0x4f8f),_0x51a0f0(0x3c59),_0x51a0f0(0x44fa),_0x51a0f0(0x16be),_0x51a0f0(0x273),_0x51a0f0(0x300e),_0x51a0f0(0x1709),_0x51a0f0(0x4cbe),_0x51a0f0(0x4afd),_0x51a0f0(0x1cf5),_0x51a0f0(0x2b36),_0x51a0f0(0x3a6d),_0x51a0f0(0x3f44),_0x51a0f0(0x527f),_0x51a0f0(0x20c1),_0x51a0f0(0x35a5),_0x51a0f0(0x2817),_0x51a0f0(0x27d2),_0x51a0f0(0x5050),_0x51a0f0(0x3736),_0x51a0f0(0x3a74),_0x51a0f0(0x158f),_0x51a0f0(0x526c),_0x51a0f0(0x3383),_0x51a0f0(0x3090),_0x51a0f0(0x19b9),_0x51a0f0(0x1a0a),_0x51a0f0(0x20ad),_0x51a0f0(0x1bf4),_0x51a0f0(0xa0f),_0x51a0f0(0x35e7),_0x51a0f0(0x92f),_0x51a0f0(0x2869),_0x51a0f0(0x43b),'SpanLineThickness',_0x51a0f0(0x48b5),_0x51a0f0(0x365c),_0x51a0f0(0x268c),_0x51a0f0(0x3ead),'SparseArray','SparseArrayQ',_0x51a0f0(0x1247),'SpatialBoundaryCorrection',_0x51a0f0(0x181d),_0x51a0f0(0x4ead),_0x51a0f0(0x2f98),_0x51a0f0(0x1840),'SpatialMedian','SpatialNoiseLevel','SpatialObservationRegionQ','SpatialPointData','SpatialPointSelect',_0x51a0f0(0x31ce),'SpatialTransformationLayer',_0x51a0f0(0x2404),'Speak',_0x51a0f0(0x49f7),'SpearmanRankTest',_0x51a0f0(0x2cef),_0x51a0f0(0x3c4a),'SpecificityGoal',_0x51a0f0(0x94f),'Spectrogram',_0x51a0f0(0x2851),_0x51a0f0(0x377b),_0x51a0f0(0x2434),'SpeechInterpreter',_0x51a0f0(0x1b4d),_0x51a0f0(0x2840),_0x51a0f0(0x2fdf),_0x51a0f0(0x234),_0x51a0f0(0x104d),_0x51a0f0(0xb69),'SpellingOptions',_0x51a0f0(0x4cfa),'SphereBox','SphereBoxOptions',_0x51a0f0(0x397f),_0x51a0f0(0x22d1),_0x51a0f0(0x21af),_0x51a0f0(0x2c52),_0x51a0f0(0xc4d),_0x51a0f0(0x4047),_0x51a0f0(0x1ecc),_0x51a0f0(0x222c),_0x51a0f0(0x1a04),_0x51a0f0(0x375c),_0x51a0f0(0x2be4),'SpheroidalPS','SpheroidalPSPrime',_0x51a0f0(0x680),_0x51a0f0(0x17ba),_0x51a0f0(0x2e15),_0x51a0f0(0x4efe),_0x51a0f0(0x2891),_0x51a0f0(0x40db),_0x51a0f0(0x242a),'Splice',_0x51a0f0(0x2a5d),_0x51a0f0(0x1e9c),_0x51a0f0(0x39de),_0x51a0f0(0x3ac5),_0x51a0f0(0x1abf),_0x51a0f0(0x4cd9),_0x51a0f0(0xc6e),_0x51a0f0(0x470a),_0x51a
0f0(0x3f7a),_0x51a0f0(0x3e40),_0x51a0f0(0x437f),_0x51a0f0(0x4448),'Square',_0x51a0f0(0x2590),_0x51a0f0(0x2b19),_0x51a0f0(0x165e),_0x51a0f0(0x5ab),_0x51a0f0(0x296b),_0x51a0f0(0x2731),_0x51a0f0(0x5030),_0x51a0f0(0x4f67),_0x51a0f0(0xc52),_0x51a0f0(0x4027),_0x51a0f0(0x847),'SquareWave',_0x51a0f0(0x5245),_0x51a0f0(0x1ba0),_0x51a0f0(0x3c7f),_0x51a0f0(0x81a),_0x51a0f0(0x23e9),_0x51a0f0(0x3243),_0x51a0f0(0x11b8),_0x51a0f0(0x1b35),'StackedListPlot','StackInhibit',_0x51a0f0(0x3906),_0x51a0f0(0x3ec3),_0x51a0f0(0x18f4),'StandardDeviationFilter',_0x51a0f0(0x1b7b),_0x51a0f0(0x13d9),_0x51a0f0(0x4bb),_0x51a0f0(0x2955),_0x51a0f0(0x50b),_0x51a0f0(0x2801),'StarClusterData',_0x51a0f0(0x80c),'StarGraph','StartAsynchronousTask',_0x51a0f0(0x198b),_0x51a0f0(0x34ca),_0x51a0f0(0x2d00),_0x51a0f0(0x157b),_0x51a0f0(0x39a6),_0x51a0f0(0x4671),_0x51a0f0(0x2417),_0x51a0f0(0x34e3),'StateDimensions','StateFeedbackGains',_0x51a0f0(0x4688),_0x51a0f0(0x2edf),_0x51a0f0(0x4f44),_0x51a0f0(0x11ec),'StateSpaceTransform','StateTransformationLinearize',_0x51a0f0(0xc87),'StationaryWaveletPacketTransform',_0x51a0f0(0x497d),'StatusArea',_0x51a0f0(0xe40),_0x51a0f0(0x3402),_0x51a0f0(0x2e19),'StieltjesGamma',_0x51a0f0(0x1522),_0x51a0f0(0x243),_0x51a0f0(0x2387),_0x51a0f0(0x217b),'StoppingPowerData',_0x51a0f0(0x2792),'StrataVariables',_0x51a0f0(0x3ed7),_0x51a0f0(0x3b2a),_0x51a0f0(0x4f60),_0x51a0f0(0x2335),_0x51a0f0(0x4b61),_0x51a0f0(0x6d3),'StreamMarkers',_0x51a0f0(0x422),'StreamPlot3D',_0x51a0f0(0x4e44),_0x51a0f0(0x2f2c),_0x51a0f0(0x3d01),_0x51a0f0(0x48d9),_0x51a0f0(0x20d5),'StrictInequalities',_0x51a0f0(0x3327),'StringBreak',_0x51a0f0(0x474),_0x51a0f0(0x3cc5),_0x51a0f0(0x35ad),_0x51a0f0(0x83a),_0x51a0f0(0x18ce),_0x51a0f0(0x2849),_0x51a0f0(0x112d),_0x51a0f0(0x4ac2),_0x51a0f0(0x135f),_0x51a0f0(0x1761),'StringFormat',_0x51a0f0(0x34f3),'StringFreeQ',_0x51a0f0(0x1fac),_0x51a0f0(0x23e6),'StringLength','StringMatchQ',_0x51a0f0(0x2e94),_0x51a0f0(0x4935),_0x51a0f0(0x465c),_0x51a0f0(0x459e),_0x51a0f0(0x3e22),'StringQ',_0x51a0
f0(0x10df),_0x51a0f0(0x4a8a),'StringReplaceList',_0x51a0f0(0x374a),'StringReverse','StringRiffle',_0x51a0f0(0x42aa),'StringRotateRight',_0x51a0f0(0x29f0),_0x51a0f0(0x27fe),'StringStartsQ',_0x51a0f0(0xefa),_0x51a0f0(0x1447),'StringTemplate',_0x51a0f0(0x2644),_0x51a0f0(0x2969),_0x51a0f0(0x4e66),_0x51a0f0(0x1c9b),_0x51a0f0(0x1bd6),'StripStyleOnPaste',_0x51a0f0(0x4e89),'StrokeForm','Struckthrough',_0x51a0f0(0x4f91),_0x51a0f0(0xbe1),_0x51a0f0(0x520b),_0x51a0f0(0x350e),'StruveH','StruveL','Stub',_0x51a0f0(0x3fa6),_0x51a0f0(0x6ea),_0x51a0f0(0x2df),_0x51a0f0(0xb9f),_0x51a0f0(0x35de),_0x51a0f0(0x3fb3),_0x51a0f0(0x3787),'StyleHints',_0x51a0f0(0x3b0e),'StyleMenuListing',_0x51a0f0(0x1a70),_0x51a0f0(0x1aab),_0x51a0f0(0x1e13),_0x51a0f0(0x3635),_0x51a0f0(0x4a5e),'Subfactorial',_0x51a0f0(0x1641),_0x51a0f0(0x1e32),_0x51a0f0(0x28ac),'SubresultantPolynomialRemainders',_0x51a0f0(0x338f),_0x51a0f0(0xb43),_0x51a0f0(0x155d),_0x51a0f0(0x25bc),_0x51a0f0(0x32b9),_0x51a0f0(0x632),_0x51a0f0(0x5267),_0x51a0f0(0x413a),_0x51a0f0(0x24a2),_0x51a0f0(0x8c6),_0x51a0f0(0x25ac),'SubsetMap',_0x51a0f0(0x7a4),'SubsetQ','SubsetReplace',_0x51a0f0(0x2b3a),_0x51a0f0(0x25a1),_0x51a0f0(0x2747),_0x51a0f0(0x31b5),'SubsuperscriptBox',_0x51a0f0(0xace),_0x51a0f0(0x521d),'SubtitleTrackSelection',_0x51a0f0(0x3f33),_0x51a0f0(0x58d),'SubtractSides',_0x51a0f0(0x4547),_0x51a0f0(0x29e5),_0x51a0f0(0x4998),_0x51a0f0(0x2ac3),_0x51a0f0(0x3c16),_0x51a0f0(0x21c),_0x51a0f0(0x4ad3),_0x51a0f0(0x3f9d),_0x51a0f0(0x2f7a),_0x51a0f0(0x1b7d),_0x51a0f0(0x136b),_0x51a0f0(0x3de1),_0x51a0f0(0x5227),_0x51a0f0(0xfd0),_0x51a0f0(0x3503),_0x51a0f0(0x447b),_0x51a0f0(0x4ec5),'SuperPlus',_0x51a0f0(0xc2a),_0x51a0f0(0x20f6),_0x51a0f0(0x3554),_0x51a0f0(0x24f6),_0x51a0f0(0x4f2),_0x51a0f0(0x17d3),_0x51a0f0(0x451c),'SurdForm',_0x51a0f0(0x11a5),_0x51a0f0(0x1321),'SurfaceColor','SurfaceData',_0x51a0f0(0x4aef),_0x51a0f0(0x148b),_0x51a0f0(0x1c40),_0x51a0f0(0x1b70),'SurvivalModelFit',_0x51a0f0(0x3feb),_0x51a0f0(0x2ec1),_0x51a0f0(0x4345),_0x51a0f0(0x1624),_0x51a
0f0(0x879),'Symbol',_0x51a0f0(0x502),_0x51a0f0(0x30fe),'Symmetric',_0x51a0f0(0x1452),_0x51a0f0(0x421a),_0x51a0f0(0xf69),'SymmetricMatrixQ','SymmetricPolynomial','SymmetricReduction','Symmetrize',_0x51a0f0(0x2516),_0x51a0f0(0x28e4),_0x51a0f0(0x42be),_0x51a0f0(0x36c2),_0x51a0f0(0x5199),_0x51a0f0(0xc97),_0x51a0f0(0x81c),'Synonyms',_0x51a0f0(0x46de),'SyntaxForm',_0x51a0f0(0x4eca),_0x51a0f0(0x204a),_0x51a0f0(0xa32),_0x51a0f0(0x3f0c),_0x51a0f0(0x1a01),_0x51a0f0(0x2dc8),_0x51a0f0(0x74b),_0x51a0f0(0x25a2),_0x51a0f0(0x4caa),_0x51a0f0(0x330f),'SystemDialogInput',_0x51a0f0(0x20a),_0x51a0f0(0x34ea),'SystemHelpPath',_0x51a0f0(0x1009),_0x51a0f0(0x17ec),_0x51a0f0(0x4c28),_0x51a0f0(0x476c),'SystemModeler',_0x51a0f0(0x291d),'SystemModelLinearize',_0x51a0f0(0x1af2),_0x51a0f0(0x496c),'SystemModelPlot',_0x51a0f0(0x2f4e),'SystemModelReliability',_0x51a0f0(0x19fe),_0x51a0f0(0xfc2),_0x51a0f0(0x14f0),'SystemModelSimulationData',_0x51a0f0(0x2089),'SystemOptions','SystemProcessData',_0x51a0f0(0x39fe),_0x51a0f0(0xca3),'SystemsModelControllerData',_0x51a0f0(0x3b46),_0x51a0f0(0xd60),_0x51a0f0(0x936),_0x51a0f0(0x3b6),_0x51a0f0(0x4bf6),_0x51a0f0(0x2efb),_0x51a0f0(0x4288),_0x51a0f0(0x440a),_0x51a0f0(0x1993),'SystemsModelOrder',_0x51a0f0(0x4610),_0x51a0f0(0x21bb),'SystemsModelStateFeedbackConnect',_0x51a0f0(0x4c2e),_0x51a0f0(0x18f5),'SystemTest','Tab',_0x51a0f0(0xf43),_0x51a0f0(0x4dd6),_0x51a0f0(0x1be6),_0x51a0f0(0x49d6),_0x51a0f0(0x48d7),_0x51a0f0(0xc29),_0x51a0f0(0x23c),'TableSpacing',_0x51a0f0(0x154a),_0x51a0f0(0x33cf),_0x51a0f0(0x330),_0x51a0f0(0x8b0),_0x51a0f0(0x476f),_0x51a0f0(0x3043),_0x51a0f0(0x1508),_0x51a0f0(0x4653),_0x51a0f0(0x3176),_0x51a0f0(0xc45),'TabViewBox',_0x51a0f0(0x2b99),_0x51a0f0(0x2ed),_0x51a0f0(0x6e9),_0x51a0f0(0x1d18),'TaggingRules',_0x51a0f0(0x1273),_0x51a0f0(0x138b),_0x51a0f0(0x3881),_0x51a0f0(0x13ac),_0x51a0f0(0x42c7),_0x51a0f0(0x2e99),'TakeLargest',_0x51a0f0(0xe02),_0x51a0f0(0x41f1),'TakeSmallest',_0x51a0f0(0x303),_0x51a0f0(0x2398),_0x51a0f0(0x3b16),_0x51a0f0(0x2c97),_0x
51a0f0(0x2f70),'TargetDevice',_0x51a0f0(0xec3),_0x51a0f0(0x2480),_0x51a0f0(0x1c57),'TaskAbort',_0x51a0f0(0x8d6),_0x51a0f0(0x477f),_0x51a0f0(0x3c73),_0x51a0f0(0x2b8a),_0x51a0f0(0x1b42),_0x51a0f0(0x3e65),_0x51a0f0(0xb29),_0x51a0f0(0x3153),_0x51a0f0(0x2e03),_0x51a0f0(0x22eb),_0x51a0f0(0x2e54),_0x51a0f0(0x2187),'TemplateBoxOptions',_0x51a0f0(0x689),_0x51a0f0(0x16e4),_0x51a0f0(0x4c62),'TemplateObject',_0x51a0f0(0x43d),_0x51a0f0(0xcac),_0x51a0f0(0x2fb2),_0x51a0f0(0x727),_0x51a0f0(0x317c),_0x51a0f0(0x3d65),_0x51a0f0(0x76a),'TemporalRegularity','Temporary',_0x51a0f0(0x3cb1),_0x51a0f0(0x3d71),_0x51a0f0(0x48b1),_0x51a0f0(0x30b9),'TensorProduct',_0x51a0f0(0x3581),_0x51a0f0(0x3883),_0x51a0f0(0x1153),_0x51a0f0(0x7d6),_0x51a0f0(0x3b2e),_0x51a0f0(0x4814),_0x51a0f0(0x250b),_0x51a0f0(0x3721),_0x51a0f0(0x51e9),_0x51a0f0(0x1966),_0x51a0f0(0xd34),_0x51a0f0(0x424),'TestResultObject',_0x51a0f0(0x49d7),_0x51a0f0(0xa7e),_0x51a0f0(0x1306),_0x51a0f0(0x39c2),'TeXSave',_0x51a0f0(0x2f37),_0x51a0f0(0x2ef),_0x51a0f0(0x353b),_0x51a0f0(0xe73),_0x51a0f0(0x36e2),_0x51a0f0(0x511d),_0x51a0f0(0xf65),_0x51a0f0(0x1410),_0x51a0f0(0x4ee4),_0x51a0f0(0x1adb),_0x51a0f0(0x2904),_0x51a0f0(0x50cf),_0x51a0f0(0x3a9b),'TextForm',_0x51a0f0(0x3d51),_0x51a0f0(0x37d2),_0x51a0f0(0x1bdb),_0x51a0f0(0x43e9),_0x51a0f0(0x4730),_0x51a0f0(0xc70),_0x51a0f0(0x4136),_0x51a0f0(0x1b66),_0x51a0f0(0x4b37),_0x51a0f0(0x1678),_0x51a0f0(0x4eb),_0x51a0f0(0x494),_0x51a0f0(0x6b3),'TextTranslation',_0x51a0f0(0xd9f),_0x51a0f0(0x2728),'TextureCoordinateScaling',_0x51a0f0(0x2ef2),_0x51a0f0(0x178d),_0x51a0f0(0xf10),_0x51a0f0(0x1dd7),_0x51a0f0(0x2a11),_0x51a0f0(0x2d74),_0x51a0f0(0x2440),_0x51a0f0(0x38be),_0x51a0f0(0x36b4),'ThomasPointProcess',_0x51a0f0(0x371c),_0x51a0f0(0x2a5b),_0x51a0f0(0x7fb),_0x51a0f0(0x28a2),'ThreeJSymbol','Threshold',_0x51a0f0(0x463a),_0x51a0f0(0x1af4),'ThueMorse','Thumbnail','Thursday',_0x51a0f0(0x8ee),_0x51a0f0(0xf4e),_0x51a0f0(0x14c1),'TickLabels','TickLengths',_0x51a0f0(0x1e82),_0x51a0f0(0x2157),'TicksStyle',_0x51a0f0(0x3
794),_0x51a0f0(0x3d7),'TildeEqual',_0x51a0f0(0x8d2),_0x51a0f0(0x2011),_0x51a0f0(0x18bc),'TimeConstraint',_0x51a0f0(0x3cd2),_0x51a0f0(0x1091),'TimeGoal',_0x51a0f0(0x4b0d),_0x51a0f0(0x1e14),_0x51a0f0(0x2e79),_0x51a0f0(0x2b83),_0x51a0f0(0x1fbd),'TimesBy',_0x51a0f0(0x1ec5),_0x51a0f0(0x1e41),_0x51a0f0(0x3489),'TimeSeriesInsert',_0x51a0f0(0xcb0),_0x51a0f0(0x32d9),_0x51a0f0(0x1e1d),_0x51a0f0(0x4c21),'TimeSeriesModelFit','TimeSeriesResample',_0x51a0f0(0x1ddf),_0x51a0f0(0x3833),_0x51a0f0(0x75e),_0x51a0f0(0x4f6),_0x51a0f0(0x22fd),_0x51a0f0(0x2c82),_0x51a0f0(0x359),_0x51a0f0(0xa42),_0x51a0f0(0x27b3),_0x51a0f0(0x3fcd),_0x51a0f0(0x2cad),_0x51a0f0(0x4999),'TimeZoneOffset',_0x51a0f0(0x332c),_0x51a0f0(0x34da),_0x51a0f0(0x122c),'TitsGroupT',_0x51a0f0(0x5057),_0x51a0f0(0x1409),_0x51a0f0(0x4ecf),_0x51a0f0(0x107a),_0x51a0f0(0xda1),_0x51a0f0(0x3c91),'ToDiscreteTimeModel','ToEntity',_0x51a0f0(0x2f81),_0x51a0f0(0x47c8),_0x51a0f0(0x3783),_0x51a0f0(0x1d74),_0x51a0f0(0x4da1),'ToggleFalse',_0x51a0f0(0x3326),_0x51a0f0(0x16d1),'TogglerBox',_0x51a0f0(0x4271),_0x51a0f0(0x4de5),_0x51a0f0(0x39ff),_0x51a0f0(0xf78),_0x51a0f0(0x1c16),_0x51a0f0(0xb36),_0x51a0f0(0x340b),'ToNumberField',_0x51a0f0(0x1705),_0x51a0f0(0x1e84),_0x51a0f0(0x2bbc),_0x51a0f0(0x39a5),'TooltipDelay',_0x51a0f0(0x528e),_0x51a0f0(0x3252),_0x51a0f0(0x3e1f),_0x51a0f0(0x1443),'ToPolarCoordinates',_0x51a0f0(0xdb0),_0x51a0f0(0x4359),_0x51a0f0(0x4539),_0x51a0f0(0x25d9),'Torus',_0x51a0f0(0x2b23),'ToSphericalCoordinates',_0x51a0f0(0x2e04),_0x51a0f0(0x527),'TotalHeight',_0x51a0f0(0x2435),_0x51a0f0(0xac3),_0x51a0f0(0x4697),_0x51a0f0(0xc9c),_0x51a0f0(0x4cae),_0x51a0f0(0x19d6),'ToUpperCase','TourVideo','Tr',_0x51a0f0(0x2ef4),_0x51a0f0(0x1834),'TraceAction','TraceBackward','TraceDepth',_0x51a0f0(0x4700),_0x51a0f0(0x1ca8),_0x51a0f0(0x1e6a),_0x51a0f0(0x2be1),_0x51a0f0(0x2112),_0x51a0f0(0x1f32),'TraceOriginal',_0x51a0f0(0x2d90),_0x51a0f0(0x360f),_0x51a0f0(0x4dac),_0x51a0f0(0x104f),_0x51a0f0(0xd8f),_0x51a0f0(0x370d),_0x51a0f0(0x46cc),_0x51a0f0(0xba6),
_0x51a0f0(0x398c),_0x51a0f0(0x4522),'TraditionalOrder',_0x51a0f0(0x4774),_0x51a0f0(0x26ca),_0x51a0f0(0x29d4),_0x51a0f0(0xd37),_0x51a0f0(0x8c5),_0x51a0f0(0x1afe),_0x51a0f0(0x1cc9),'TrainTextContentDetector',_0x51a0f0(0x33ed),_0x51a0f0(0x2ee),_0x51a0f0(0x507a),_0x51a0f0(0x3144),'TransferFunctionPoles',_0x51a0f0(0x2e5f),_0x51a0f0(0x5c3),_0x51a0f0(0x1f7c),_0x51a0f0(0x1a27),'TransformationFunctions',_0x51a0f0(0x1646),_0x51a0f0(0x4bb5),_0x51a0f0(0x504),_0x51a0f0(0x1013),_0x51a0f0(0x3a95),_0x51a0f0(0x2a64),_0x51a0f0(0x4ab9),'TransitionEffect',_0x51a0f0(0x4ad),'TransitiveReductionGraph','Translate',_0x51a0f0(0x218f),_0x51a0f0(0x5183),_0x51a0f0(0x46a0),_0x51a0f0(0x3322),_0x51a0f0(0x43fc),_0x51a0f0(0x1442),_0x51a0f0(0x37da),_0x51a0f0(0x36f5),'TrapSelection','TravelDirections','TravelDirectionsData',_0x51a0f0(0x267a),'TravelDistanceList',_0x51a0f0(0x1505),_0x51a0f0(0x218),_0x51a0f0(0x1dd8),_0x51a0f0(0x6e2),_0x51a0f0(0x4940),'TreeCount',_0x51a0f0(0x2cdf),_0x51a0f0(0xc0f),_0x51a0f0(0x2cd5),_0x51a0f0(0x22f2),_0x51a0f0(0x4add),_0x51a0f0(0x362f),_0x51a0f0(0x32aa),_0x51a0f0(0x3ba3),_0x51a0f0(0xbd4),_0x51a0f0(0x1fb5),_0x51a0f0(0x283b),_0x51a0f0(0x4590),_0x51a0f0(0x4e3a),_0x51a0f0(0x3bfb),_0x51a0f0(0x3db8),_0x51a0f0(0x526a),_0x51a0f0(0x18aa),_0x51a0f0(0x2e5b),_0x51a0f0(0x239a),'TreeInsert',_0x51a0f0(0x4215),_0x51a0f0(0xd42),_0x51a0f0(0x2c86),_0x51a0f0(0x2e6f),_0x51a0f0(0x2469),'TreeMap',_0x51a0f0(0x38d9),_0x51a0f0(0x5284),_0x51a0f0(0x1d0f),'TreePosition','TreeQ',_0x51a0f0(0x20b9),'TreeRules','TreeScan','TreeSelect',_0x51a0f0(0x58e),_0x51a0f0(0x17fd),_0x51a0f0(0x17c4),_0x51a0f0(0x16e7),'TriangleCenter',_0x51a0f0(0x4d1c),_0x51a0f0(0x7c4),'TriangleWave',_0x51a0f0(0x2016),_0x51a0f0(0x14a8),_0x51a0f0(0x176a),_0x51a0f0(0xcf8),_0x51a0f0(0x3298),_0x51a0f0(0x25fe),_0x51a0f0(0x3f61),_0x51a0f0(0x3a49),'TrigToExp',_0x51a0f0(0xfc5),'TrimmedVariance','TropicalStormData','True',_0x51a0f0(0xdff),'TruncatedDistribution',_0x51a0f0(0x1739),_0x51a0f0(0x2a9b),_0x51a0f0(0x363d),_0x51a0f0(0x3029),_0x51a0f0(
0x4719),'TubeBezierCurveBox','TubeBezierCurveBoxOptions','TubeBox',_0x51a0f0(0x34aa),_0x51a0f0(0x27ba),_0x51a0f0(0x3ca2),_0x51a0f0(0x4d6f),_0x51a0f0(0x3c8d),_0x51a0f0(0x21fd),_0x51a0f0(0x3446),_0x51a0f0(0x4495),'TuranGraph',_0x51a0f0(0x4f42),_0x51a0f0(0x34ad),'TwoWayRule',_0x51a0f0(0x4c16),_0x51a0f0(0x158d),_0x51a0f0(0x2dd1),_0x51a0f0(0x45ea),'TypeOf',_0x51a0f0(0x3c80),'UnateQ',_0x51a0f0(0x438f),_0x51a0f0(0x41ba),_0x51a0f0(0x17c3),'UnderBar',_0x51a0f0(0x44a7),_0x51a0f0(0x2bce),_0x51a0f0(0xad4),'UnderoverscriptBox',_0x51a0f0(0x1b8a),'Underscript',_0x51a0f0(0x1a85),'UnderscriptBoxOptions','UnderseaFeatureData',_0x51a0f0(0x2229),_0x51a0f0(0x908),'UndirectedGraphQ',_0x51a0f0(0x1afc),_0x51a0f0(0x2606),_0x51a0f0(0xf4a),_0x51a0f0(0x6d4),'Unevaluated',_0x51a0f0(0x439a),_0x51a0f0(0x4740),_0x51a0f0(0x44f3),_0x51a0f0(0xb5a),_0x51a0f0(0x573),_0x51a0f0(0x605),_0x51a0f0(0x1444),'UnionPlus',_0x51a0f0(0x30d3),'UniqueElements',_0x51a0f0(0x4596),_0x51a0f0(0x4971),_0x51a0f0(0x1511),_0x51a0f0(0x431),_0x51a0f0(0x1bd1),_0x51a0f0(0x2b6e),_0x51a0f0(0x242d),'UnitStep',_0x51a0f0(0x8ea),'UnitTriangle',_0x51a0f0(0x3a5e),_0x51a0f0(0x2664),_0x51a0f0(0x4059),_0x51a0f0(0x2895),'UniversityData',_0x51a0f0(0x54f),_0x51a0f0(0x13db),_0x51a0f0(0x4ef6),'Unprotect',_0x51a0f0(0x277),'UnsameQ',_0x51a0f0(0x33ab),'Unset','UnsetShared','Until',_0x51a0f0(0x4ab5),'Up',_0x51a0f0(0x3b64),_0x51a0f0(0x466a),_0x51a0f0(0x3e00),_0x51a0f0(0x4d5),_0x51a0f0(0x42ae),_0x51a0f0(0x1329),_0x51a0f0(0x4028),_0x51a0f0(0x7cf),'UpdateSearchIndex',_0x51a0f0(0x2f08),_0x51a0f0(0x464e),_0x51a0f0(0x2e5c),_0x51a0f0(0x4170),_0x51a0f0(0x4893),_0x51a0f0(0x449c),'UpperTriangularMatrix',_0x51a0f0(0x52b),_0x51a0f0(0x314f),_0x51a0f0(0x17a0),_0x51a0f0(0x4014),_0x51a0f0(0x157e),_0x51a0f0(0x5114),_0x51a0f0(0x4669),'UpValues',_0x51a0f0(0x3e4f),_0x51a0f0(0x6da),_0x51a0f0(0x24f2),'URLDispatcher',_0x51a0f0(0x4aa3),_0x51a0f0(0x18b0),'URLEncode',_0x51a0f0(0x49be),'URLExpand','URLFetch',_0x51a0f0(0x2dfc),_0x51a0f0(0xca8),_0x51a0f0(0x4bfa),_0x51a0f0(0x8e6
),'URLRead',_0x51a0f0(0x1b3c),_0x51a0f0(0x2425),_0x51a0f0(0x420d),'URLShorten',_0x51a0f0(0xcd2),_0x51a0f0(0x1b90),_0x51a0f0(0x129d),_0x51a0f0(0x402f),_0x51a0f0(0x4cb6),_0x51a0f0(0xa08),_0x51a0f0(0x1551),_0x51a0f0(0x15b4),_0x51a0f0(0x5182),_0x51a0f0(0x266e),_0x51a0f0(0x367a),_0x51a0f0(0x3fd0),'ValueBox',_0x51a0f0(0x1cba),_0x51a0f0(0xdde),'ValueForm','ValuePreprocessingFunction',_0x51a0f0(0x3907),_0x51a0f0(0x30e0),_0x51a0f0(0x33a9),_0x51a0f0(0x29d7),'Variables',_0x51a0f0(0x4736),_0x51a0f0(0x4cd8),_0x51a0f0(0x906),_0x51a0f0(0x1fc2),'VarianceGammaPointProcess','VarianceTest',_0x51a0f0(0x13b3),_0x51a0f0(0x3d09),_0x51a0f0(0x3b25),_0x51a0f0(0xf58),_0x51a0f0(0x4b68),_0x51a0f0(0xa4d),'VectorColorFunctionScaling',_0x51a0f0(0x4ed7),_0x51a0f0(0x1b44),_0x51a0f0(0xce8),_0x51a0f0(0x457c),_0x51a0f0(0x4f1e),_0x51a0f0(0x5172),_0x51a0f0(0x1069),'VectorLessEqual',_0x51a0f0(0x3df0),_0x51a0f0(0x3ebc),_0x51a0f0(0x6c9),_0x51a0f0(0x302c),_0x51a0f0(0x4a6b),_0x51a0f0(0x22e6),_0x51a0f0(0x5151),_0x51a0f0(0x13c5),_0x51a0f0(0x1b0d),_0x51a0f0(0x3f69),_0x51a0f0(0x3990),_0x51a0f0(0x183a),_0x51a0f0(0x4abf),_0x51a0f0(0x476d),_0x51a0f0(0x3a00),_0x51a0f0(0x34f8),'VerifyDerivedKey',_0x51a0f0(0x2dd),_0x51a0f0(0x2d02),_0x51a0f0(0x2e0),_0x51a0f0(0x3d19),_0x51a0f0(0x3e16),_0x51a0f0(0x2a95),_0x51a0f0(0x1904),'VertexAdd','VertexCapacity',_0x51a0f0(0x3f8c),_0x51a0f0(0x3a6f),_0x51a0f0(0x15ee),_0x51a0f0(0x2bad),_0x51a0f0(0x1d36),_0x51a0f0(0x34de),_0x51a0f0(0x2572),_0x51a0f0(0x41a4),_0x51a0f0(0x3da),_0x51a0f0(0x3d95),_0x51a0f0(0xa06),_0x51a0f0(0x25fa),'VertexDegree',_0x51a0f0(0x131f),_0x51a0f0(0x4431),_0x51a0f0(0x2877),'VertexInComponent','VertexInComponentGraph','VertexInDegree',_0x51a0f0(0x397e),_0x51a0f0(0x3520),_0x51a0f0(0x2dd8),_0x51a0f0(0x7a1),'VertexLabelStyle',_0x51a0f0(0x1537),_0x51a0f0(0x2a99),_0x51a0f0(0x212c),_0x51a0f0(0x258a),_0x51a0f0(0x2bc6),_0x51a0f0(0x4a52),'VertexRenderingFunction',_0x51a0f0(0x1fc7),'VertexShape',_0x51a0f0(0x19c9),'VertexSize',_0x51a0f0(0x4187),_0x51a0f0(0x3fe4),_0x51a0f0(0x1039)
,_0x51a0f0(0x4462),_0x51a0f0(0x16da),'Vertical',_0x51a0f0(0x3035),_0x51a0f0(0x56d),'VerticalGauge',_0x51a0f0(0x1e34),_0x51a0f0(0x4b56),_0x51a0f0(0x1710),_0x51a0f0(0x3eb3),'VideoCapture',_0x51a0f0(0xe8b),_0x51a0f0(0x431c),_0x51a0f0(0x13a3),_0x51a0f0(0x4e25),_0x51a0f0(0xbf5),'VideoFrameMap',_0x51a0f0(0x2599),_0x51a0f0(0x2d0),_0x51a0f0(0x1f5f),_0x51a0f0(0x142e),'VideoMap',_0x51a0f0(0x3620),'VideoMapTimeSeries',_0x51a0f0(0x1a1b),_0x51a0f0(0x221f),_0x51a0f0(0x25c6),_0x51a0f0(0x450),_0x51a0f0(0x2d26),_0x51a0f0(0x2aae),_0x51a0f0(0x2347),_0x51a0f0(0x4ab),'VideoStop','VideoStream',_0x51a0f0(0x4a9c),_0x51a0f0(0x354b),'VideoTrackSelection',_0x51a0f0(0x46d9),'VideoTransparency','VideoTrim',_0x51a0f0(0x24e0),_0x51a0f0(0x3384),_0x51a0f0(0x2cd7),_0x51a0f0(0x4eae),_0x51a0f0(0x50c1),'ViewPort',_0x51a0f0(0x411a),_0x51a0f0(0x5266),_0x51a0f0(0x524f),_0x51a0f0(0x20a3),'VirtualGroupData','Visible','VisibleCell','VoiceStyleData','VoigtDistribution',_0x51a0f0(0x30eb),_0x51a0f0(0x4ec6),_0x51a0f0(0x1f39),_0x51a0f0(0x33ba),'WaitAll',_0x51a0f0(0x3a75),_0x51a0f0(0xdbd),'WaitUntil',_0x51a0f0(0x2624),_0x51a0f0(0x405b),'WaringYuleDistribution','WarpingCorrespondence',_0x51a0f0(0x20f9),_0x51a0f0(0x24a5),_0x51a0f0(0x2ff4),_0x51a0f0(0x201d),_0x51a0f0(0x1c9d),_0x51a0f0(0x1c6b),_0x51a0f0(0x1021),_0x51a0f0(0x44dd),_0x51a0f0(0x31b9),_0x51a0f0(0x23f9),_0x51a0f0(0x18bd),_0x51a0f0(0x1479),'WaveletScale',_0x51a0f0(0x2aa2),_0x51a0f0(0x2dce),_0x51a0f0(0x4307),_0x51a0f0(0x189b),_0x51a0f0(0x4e15),_0x51a0f0(0x2b05),_0x51a0f0(0xb63),_0x51a0f0(0x3e34),'WeatherForecastData',_0x51a0f0(0x5206),'WebColumn',_0x51a0f0(0x3258),_0x51a0f0(0xd7f),_0x51a0f0(0x4477),_0x51a0f0(0x367e),_0x51a0f0(0x4455),'WebItem','WebPageMetaInformation','WebRow',_0x51a0f0(0x11ce),_0x51a0f0(0x3755),_0x51a0f0(0x2ca3),_0x51a0f0(0x438b),_0x51a0f0(0x4482),'Wednesday',_0x51a0f0(0x2f6b),'WeierstrassE1',_0x51a0f0(0x293b),_0x51a0f0(0x970),'WeierstrassEta1',_0x51a0f0(0x3b0a),_0x51a0f0(0x5148),_0x51a0f0(0x2bc),_0x51a0f0(0x412c),'WeierstrassHalfPeriodW2',_
0x51a0f0(0x17e3),_0x51a0f0(0x2ab4),_0x51a0f0(0x4b7f),'WeierstrassInvariants','WeierstrassP',_0x51a0f0(0x2fbd),'WeierstrassSigma',_0x51a0f0(0x2264),_0x51a0f0(0x1cb9),_0x51a0f0(0x7d9),_0x51a0f0(0x1377),_0x51a0f0(0x496),_0x51a0f0(0x2878),_0x51a0f0(0x404d),_0x51a0f0(0x5186),_0x51a0f0(0x1bd3),_0x51a0f0(0x3d11),_0x51a0f0(0x5192),_0x51a0f0(0x289b),_0x51a0f0(0x247c),_0x51a0f0(0x3b0f),_0x51a0f0(0x3262),_0x51a0f0(0x441b),_0x51a0f0(0x25c),_0x51a0f0(0x321d),_0x51a0f0(0x430b),_0x51a0f0(0x488c),_0x51a0f0(0x5064),_0x51a0f0(0x5ce),_0x51a0f0(0xb16),_0x51a0f0(0x3769),'WikidataSearch',_0x51a0f0(0x3129),_0x51a0f0(0xb5d),_0x51a0f0(0xdee),_0x51a0f0(0x4b50),_0x51a0f0(0x38c),'WindingCount',_0x51a0f0(0x2e6a),_0x51a0f0(0xb42),_0x51a0f0(0x33ac),_0x51a0f0(0x24e3),'WindowFrame',_0x51a0f0(0x30fd),_0x51a0f0(0x1b1d),_0x51a0f0(0x498),_0x51a0f0(0x4472),'WindowPersistentStyles',_0x51a0f0(0x41f4),_0x51a0f0(0x35ed),_0x51a0f0(0x35f6),_0x51a0f0(0x17a2),_0x51a0f0(0x367f),_0x51a0f0(0x4feb),_0x51a0f0(0x4089),_0x51a0f0(0x2632),'WinsorizedMean',_0x51a0f0(0x200e),_0x51a0f0(0x3291),_0x51a0f0(0x50d0),'WithCleanup',_0x51a0f0(0x3599),'WolframAlpha',_0x51a0f0(0xf41),'WolframAlphaQuantity',_0x51a0f0(0x99a),_0x51a0f0(0x4dfe),_0x51a0f0(0x575),_0x51a0f0(0x31d7),'WordBoundary',_0x51a0f0(0x1f13),'WordCloud',_0x51a0f0(0xb53),_0x51a0f0(0x46d8),_0x51a0f0(0x44a8),'WordDefinition',_0x51a0f0(0x1e29),_0x51a0f0(0x1571),_0x51a0f0(0x44a9),_0x51a0f0(0x468a),_0x51a0f0(0x5083),_0x51a0f0(0x416c),_0x51a0f0(0x3688),'WordSpacings',_0x51a0f0(0x189a),_0x51a0f0(0x4f36),_0x51a0f0(0x3678),_0x51a0f0(0x35a9),'Write','WriteLine',_0x51a0f0(0x1598),'Wronskian','XMLElement',_0x51a0f0(0x3d7b),'XMLTemplate',_0x51a0f0(0x3028),_0x51a0f0(0x2979),'XYZColor',_0x51a0f0(0x1d60),_0x51a0f0(0x48bf),_0x51a0f0(0x2a0),_0x51a0f0(0x4ed),_0x51a0f0(0x3328),'ZeroTest',_0x51a0f0(0x2481),'Zeta',_0x51a0f0(0x470),_0x51a0f0(0x33c0),'ZipfDistribution',_0x51a0f0(0x3cf0),'ZoomFactor','ZTest','ZTransform',_0x51a0f0(0x4394),_0x51a0f0(0x1f77),_0x51a0f0(0x1a13),_0x51a0f0(0x39f0),
_0x51a0f0(0xcd8),_0x51a0f0(0x19c3),_0x51a0f0(0x4b4e),'$AllowInternet',_0x51a0f0(0x1c71),_0x51a0f0(0x137b),'$AsynchronousTask',_0x51a0f0(0xe26),_0x51a0f0(0xeaf),_0x51a0f0(0x46bd),'$AudioOutputDevices','$BaseDirectory',_0x51a0f0(0x1dc1),_0x51a0f0(0x3ae6),_0x51a0f0(0x4644),_0x51a0f0(0x1135),'$BoxForms',_0x51a0f0(0x322),_0x51a0f0(0x2063),_0x51a0f0(0xc5c),'$ChannelBase',_0x51a0f0(0x1ff7),_0x51a0f0(0x3cf2),'$CloudAccountName',_0x51a0f0(0x3d54),'$CloudConnected',_0x51a0f0(0x1a8c),_0x51a0f0(0x4132),_0x51a0f0(0x2195),_0x51a0f0(0x3b43),_0x51a0f0(0x2eb2),_0x51a0f0(0x3dfa),_0x51a0f0(0x288f),_0x51a0f0(0x4062),_0x51a0f0(0x4c40),_0x51a0f0(0x2732),_0x51a0f0(0x4964),'$CloudVersionNumber','$CloudWolframEngineVersionNumber',_0x51a0f0(0x2a0e),_0x51a0f0(0x276f),_0x51a0f0(0xa86),_0x51a0f0(0x1728),_0x51a0f0(0x48db),'$Context',_0x51a0f0(0x37df),_0x51a0f0(0x1fb3),_0x51a0f0(0x43fa),'$Cookies','$CookieStore','$CreationDate',_0x51a0f0(0x3e1a),_0x51a0f0(0x2fc),_0x51a0f0(0x29dc),'$CurrentWebSession','$DataStructures',_0x51a0f0(0x40df),'$DefaultAudioInputDevice','$DefaultAudioOutputDevice',_0x51a0f0(0xf08),_0x51a0f0(0x1cc7),'$DefaultImagingDevice',_0x51a0f0(0xc28),_0x51a0f0(0x343e),'$DefaultLocalKernel',_0x51a0f0(0x4569),_0x51a0f0(0x1d15),_0x51a0f0(0xc0e),_0x51a0f0(0x51ac),_0x51a0f0(0x4620),_0x51a0f0(0x1b71),'$DefaultSystemCredentialStore',_0x51a0f0(0x2d0c),_0x51a0f0(0x1b1f),_0x51a0f0(0x3737),_0x51a0f0(0x43a3),_0x51a0f0(0x5218),'$EmbedCodeEnvironments',_0x51a0f0(0x20eb),'$EntityStores',_0x51a0f0(0x48b0),'$EvaluationCloudBase',_0x51a0f0(0x2117),_0x51a0f0(0x351c),_0x51a0f0(0x3433),'$ExternalIdentifierTypes','$ExternalStorageBase',_0x51a0f0(0x21a5),_0x51a0f0(0x4c12),_0x51a0f0(0x376e),_0x51a0f0(0x4b0),_0x51a0f0(0x1adc),'$FrontEndSession',_0x51a0f0(0x16ad),_0x51a0f0(0xb6c),'$GeoLocation',_0x51a0f0(0x2930),_0x51a0f0(0x265b),_0x51a0f0(0x1313),'$GeoLocationSource','$HistoryLength','$HomeDirectory',_0x51a0f0(0x356f),_0x51a0f0(0x44c7),_0x51a0f0(0x4849),_0x51a0f0(0x4686),_0x51a0f0(0x20b3),_0x51a0f0(0x24fb),
_0x51a0f0(0x1148),_0x51a0f0(0x45c2),'$ImportFormats',_0x51a0f0(0x4be6),'$InitialDirectory',_0x51a0f0(0x1c3b),_0x51a0f0(0x422c),_0x51a0f0(0x4284),_0x51a0f0(0x3e7e),_0x51a0f0(0x1a2b),_0x51a0f0(0x3387),_0x51a0f0(0xf7d),'$InstallationDirectory',_0x51a0f0(0x381),_0x51a0f0(0x1d38),_0x51a0f0(0x586),'$KernelCount',_0x51a0f0(0x4a93),'$Language',_0x51a0f0(0x2695),_0x51a0f0(0x2544),'$LicenseExpirationDate',_0x51a0f0(0xc39),_0x51a0f0(0x165f),_0x51a0f0(0x1c44),_0x51a0f0(0x42b8),_0x51a0f0(0x372a),_0x51a0f0(0x2da8),_0x51a0f0(0x1412),'$LinkSupported',_0x51a0f0(0x44af),'$LocalBase','$LocalSymbolBase',_0x51a0f0(0xd89),_0x51a0f0(0x2320),_0x51a0f0(0x11e1),_0x51a0f0(0xfd7),_0x51a0f0(0x6e0),'$MachineName','$MachinePrecision',_0x51a0f0(0x447a),_0x51a0f0(0x2ec6),'$MaxExtraPrecision',_0x51a0f0(0x455f),_0x51a0f0(0x5a7),_0x51a0f0(0x3204),_0x51a0f0(0x2c63),_0x51a0f0(0x1231),_0x51a0f0(0x3c5b),_0x51a0f0(0x720),_0x51a0f0(0x4470),_0x51a0f0(0x14f9),_0x51a0f0(0x3b84),_0x51a0f0(0xb98),_0x51a0f0(0x2c18),'$MinNumber',_0x51a0f0(0xaec),_0x51a0f0(0x304b),'$MobilePhone',_0x51a0f0(0x21ee),'$NetworkConnected',_0x51a0f0(0x4c8f),_0x51a0f0(0x3c8e),_0x51a0f0(0x1cd5),_0x51a0f0(0x15dc),_0x51a0f0(0x351f),_0x51a0f0(0x4f40),_0x51a0f0(0xc5e),_0x51a0f0(0x1c26),_0x51a0f0(0x3510),'$OperatingSystem','$Output',_0x51a0f0(0xcd3),'$OutputSizeLimit',_0x51a0f0(0x1d73),_0x51a0f0(0x2841),_0x51a0f0(0x27f),_0x51a0f0(0x528f),_0x51a0f0(0x426a),_0x51a0f0(0xd1a),'$Path',_0x51a0f0(0x492d),_0x51a0f0(0x1eaa),'$Permissions',_0x51a0f0(0x3457),_0x51a0f0(0x15b9),_0x51a0f0(0x520f),_0x51a0f0(0x26fa),_0x51a0f0(0x13de),_0x51a0f0(0x1ac0),_0x51a0f0(0x326a),_0x51a0f0(0x4bf9),_0x51a0f0(0xbca),'$PrePrint','$PreRead',_0x51a0f0(0x4093),'$PrintLiteral',_0x51a0f0(0x47ed),'$ProcessID','$ProcessorCount','$ProcessorType',_0x51a0f0(0x3d70),'$ProgramName',_0x51a0f0(0x1779),_0x51a0f0(0x1fca),'$RandomGeneratorState',_0x51a0f0(0x66e),_0x51a0f0(0x286c),'$RegisteredDeviceClasses','$RegisteredUserName',_0x51a0f0(0x3b26),_0x51a0f0(0x416f),'$RequesterCloudUserID',_0x5
1a0f0(0x21d1),_0x51a0f0(0x2454),'$RequesterWolframUUID',_0x51a0f0(0x219c),_0x51a0f0(0x512f),_0x51a0f0(0x292),'$ScheduledTask',_0x51a0f0(0x2b41),_0x51a0f0(0x3adb),_0x51a0f0(0x4277),_0x51a0f0(0x514f),'$Services',_0x51a0f0(0x1c76),'$SetParentLink',_0x51a0f0(0x2bca),_0x51a0f0(0x519f),'$SoundDisplay',_0x51a0f0(0x1aa5),_0x51a0f0(0x1e0c),_0x51a0f0(0x10e5),_0x51a0f0(0x12fe),_0x51a0f0(0x165d),_0x51a0f0(0x125d),'$SuppressInputFormHeads',_0x51a0f0(0xff4),_0x51a0f0(0x29b1),_0x51a0f0(0x2007),'$SystemCharacterEncoding',_0x51a0f0(0x126d),'$SystemID',_0x51a0f0(0x1754),_0x51a0f0(0x1f7a),_0x51a0f0(0xb73),_0x51a0f0(0x1b24),_0x51a0f0(0x3c83),_0x51a0f0(0x7f6),_0x51a0f0(0x4096),_0x51a0f0(0x334e),_0x51a0f0(0xe6a),_0x51a0f0(0xf8d),_0x51a0f0(0x2265),_0x51a0f0(0xbdd),_0x51a0f0(0x4437),_0x51a0f0(0x2212),_0x51a0f0(0x2408),_0x51a0f0(0x4758),_0x51a0f0(0x369b),_0x51a0f0(0xdc2),_0x51a0f0(0x15e6),_0x51a0f0(0x5007),_0x51a0f0(0x30b3),_0x51a0f0(0x34f6),_0x51a0f0(0x990),_0x51a0f0(0x5074),_0x51a0f0(0x32af),_0x51a0f0(0x45b1),_0x51a0f0(0x3819),_0x51a0f0(0x1599),'$UserAgentVersion',_0x51a0f0(0x32fc),'$UserBasePacletsDirectory',_0x51a0f0(0x3451),_0x51a0f0(0x12f9),_0x51a0f0(0x2784),'$UserURLBase',_0x51a0f0(0x1432),_0x51a0f0(0x4e04),_0x51a0f0(0x2e59),_0x51a0f0(0x1758),_0x51a0f0(0x39ee),_0x51a0f0(0x2021),_0x51a0f0(0x3f46),_0x51a0f0(0x39ae)];_0x596dd6[_0x51a0f0(0x474c)]=function(_0x1cc8f2){const _0xddaadb=_0x51a0f0,_0x1da9dd=_0x1cc8f2['regex'],_0x5a51f8=_0x1da9dd['either'](_0x1da9dd['concat'](/([2-9]|[1-2]\d|[3][0-5])\^\^/,/(\w*\.\w+|\w+\.\w*|\w+)/),/(\d*\.\d+|\d+\.\d*|\d+)/),_0x508862=_0x1da9dd['either'](/``[+-]?(\d*\.\d+|\d+\.\d*|\d+)/,/`([+-]?(\d*\.\d+|\d+\.\d*|\d+))?/),_0x441bc0={'className':_0xddaadb(0x4a80),'relevance':0x0,'begin':_0x1da9dd[_0xddaadb(0x1d1d)](_0x5a51f8,_0x1da9dd[_0xddaadb(0x51e4)](_0x508862),_0x1da9dd[_0xddaadb(0x51e4)](/\*\^[+-]?\d+/))},_0xc8c621=/[a-zA-Z$][a-zA-Z0-9$]*/,_0x3460a2=new 
Set(_0xa47c78),_0x225584={'variants':[{'className':_0xddaadb(0x348e),'begin':_0xc8c621,'on:begin':(_0x4a35c2,_0x5415be)=>{const _0x2f5a6c=_0xddaadb;_0x3460a2[_0x2f5a6c(0x3170)](_0x4a35c2[0x0])||_0x5415be[_0x2f5a6c(0xec5)]();}},{'className':_0xddaadb(0x239b),'relevance':0x0,'begin':_0xc8c621}]},_0x585c3e={'className':_0xddaadb(0x1d43),'relevance':0x0,'begin':_0x1da9dd[_0xddaadb(0x1d1d)]('::',_0xc8c621)};return{'name':_0xddaadb(0x49b9),'aliases':[_0xddaadb(0x1353),'wl'],'classNameAliases':{'brace':'punctuation','pattern':_0xddaadb(0xcfc),'slot':_0xddaadb(0xcfc),'symbol':_0xddaadb(0x3362),'named-character':_0xddaadb(0x3362),'builtin-symbol':_0xddaadb(0x43a),'message-name':_0xddaadb(0x2431)},'contains':[_0x1cc8f2[_0xddaadb(0x4e4f)](/\(\*/,/\*\)/,{'contains':['self']}),{'className':_0xddaadb(0x3dd7),'relevance':0x0,'begin':/([a-zA-Z$][a-zA-Z0-9$]*)?_+([a-zA-Z$][a-zA-Z0-9$]*)?/},{'className':_0xddaadb(0x2998),'relevance':0x0,'begin':/#[a-zA-Z$][a-zA-Z0-9$]*|#+[0-9]?/},_0x585c3e,_0x225584,{'className':_0xddaadb(0x41c9),'begin':/\\\[[$a-zA-Z][$a-zA-Z0-9]+\]/},_0x1cc8f2['QUOTE_STRING_MODE'],_0x441bc0,{'className':_0xddaadb(0x1182),'relevance':0x0,'begin':/[+\-*/,;.:@~=><&|_`'^?!%]+/},{'className':_0xddaadb(0x3125),'relevance':0x0,'begin':/[[\](){}]/}]};};},0x2212:_0x2e21d9=>{const _0x56e91f=a0_0x11e7;_0x2e21d9[_0x56e91f(0x474c)]=function(_0x58b191){const 
_0x4d1698=_0x56e91f,_0x59b43b='(\x27|\x5c.\x27)+',_0xd6433d={'relevance':0x0,'contains':[{'begin':_0x59b43b}]};return{'name':_0x4d1698(0x33f7),'keywords':{'keyword':_0x4d1698(0x2d21),'built_in':_0x4d1698(0x2976)},'illegal':_0x4d1698(0x6fd),'contains':[{'className':_0x4d1698(0x14b2),'beginKeywords':'function','end':'$','contains':[_0x58b191['UNDERSCORE_TITLE_MODE'],{'className':_0x4d1698(0xddd),'variants':[{'begin':'\x5c(','end':'\x5c)'},{'begin':'\x5c[','end':'\x5c]'}]}]},{'className':_0x4d1698(0x43a),'begin':/true|false/,'relevance':0x0,'starts':_0xd6433d},{'begin':_0x4d1698(0x46f9)+_0x59b43b,'relevance':0x0},{'className':_0x4d1698(0x4a80),'begin':_0x58b191[_0x4d1698(0x45be)],'relevance':0x0,'starts':_0xd6433d},{'className':'string','begin':'\x27','end':'\x27','contains':[{'begin':'\x27\x27'}]},{'begin':/\]|\}|\)/,'relevance':0x0,'starts':_0xd6433d},{'className':_0x4d1698(0x2431),'begin':'\x22','end':'\x22','contains':[{'begin':'\x22\x22'}],'starts':_0xd6433d},_0x58b191['COMMENT'](_0x4d1698(0x1a32),'^\x5cs*%\x5c}\x5cs*$'),_0x58b191[_0x4d1698(0x4e4f)]('%','$')]};};},0x2018:_0x474ee5=>{const _0x4ca85f=a0_0x11e7;_0x474ee5[_0x4ca85f(0x474c)]=function(_0x48b34b){const _0x219a44=_0x4ca85f;return{'name':_0x219a44(0x39dc),'keywords':{'$pattern':_0x219a44(0x4858),'keyword':_0x219a44(0x35e0),'literal':_0x219a44(0x521),'built_in':_0x219a44(0x2607),'symbol':'_\x20__\x20%|0\x20%%|0'},'contains':[{'className':_0x219a44(0x4645),'begin':_0x219a44(0x4f94),'end':_0x219a44(0x1820),'contains':['self']},_0x48b34b[_0x219a44(0x291b)],{'className':_0x219a44(0x4a80),'relevance':0x0,'variants':[{'begin':_0x219a44(0x1ebb)},{'begin':_0x219a44(0x3e31),'relevance':0xa},{'begin':'\x5cb(\x5c.\x5cd+|\x5cd+\x5c.\x5cd+)\x5cb'},{'begin':_0x219a44(0x10eb)}]}],'illegal':/@/};};},0x26e1:_0x12ed4c=>{_0x12ed4c['exports']=function(_0x332169){const _0x198137=a0_0x11e7;return{'name':_0x198137(0x20f),'keywords':_0x198137(0x5ff),'illegal':'{_0x1f8060['exports']=function(_0x4b0136){const 
_0x211314=a0_0x11e7,_0x4341e=_0x4b0136[_0x211314(0x4e4f)]('%','$'),_0x448fa0=_0x4b0136[_0x211314(0x46a1)](_0x4b0136[_0x211314(0xa4c)],{'relevance':0x0}),_0x12a232=_0x4b0136[_0x211314(0x46a1)](_0x4b0136[_0x211314(0x291b)],{'relevance':0x0});return _0x12a232['contains']=_0x12a232[_0x211314(0x2b31)][_0x211314(0x384c)](),_0x12a232[_0x211314(0x2b31)][_0x211314(0x1715)]({'className':_0x211314(0x2ad6),'begin':_0x211314(0x41fa),'relevance':0x0}),{'name':_0x211314(0x39ba),'aliases':['m',_0x211314(0x87f)],'keywords':{'keyword':'module\x20use_module\x20import_module\x20include_module\x20end_module\x20initialise\x20mutable\x20initialize\x20finalize\x20finalise\x20interface\x20implementation\x20pred\x20mode\x20func\x20type\x20inst\x20solver\x20any_pred\x20any_func\x20is\x20semidet\x20det\x20nondet\x20multi\x20erroneous\x20failure\x20cc_nondet\x20cc_multi\x20typeclass\x20instance\x20where\x20pragma\x20promise\x20external\x20trace\x20atomic\x20or_else\x20require_complete_switch\x20require_det\x20require_semidet\x20require_multi\x20require_nondet\x20require_cc_multi\x20require_cc_nondet\x20require_erroneous\x20require_failure','meta':_0x211314(0x4e38),'built_in':_0x211314(0x4f9f)},'contains':[{'className':_0x211314(0x43a),'variants':[{'begin':'<=>'},{'begin':'<=','relevance':0x0},{'begin':'=>','relevance':0x0},{'begin':_0x211314(0x2325)},{'begin':'\x5c\x5c/'}]},{'className':_0x211314(0x43a),'variants':[{'begin':_0x211314(0x16af)},{'begin':'=','relevance':0x0}]},_0x4341e,_0x4b0136[_0x211314(0x23fe)],{'className':_0x211314(0x4a80),'begin':'0\x27.\x5c|0[box][0-9a-fA-F]*'},_0x4b0136[_0x211314(0x30be)],_0x448fa0,_0x12a232,{'begin':/:-/},{'begin':/\.$/}]};};},0x204d:_0x2a89a9=>{const _0x49ba0e=a0_0x11e7;_0x2a89a9[_0x49ba0e(0x474c)]=function(_0x4cd282){const 
_0x3bc5e3=_0x49ba0e;return{'name':_0x3bc5e3(0x19ec),'case_insensitive':!0x0,'aliases':[_0x3bc5e3(0x98c)],'keywords':{'$pattern':_0x3bc5e3(0x488)+_0x4cd282['IDENT_RE'],'meta':_0x3bc5e3(0x49c6),'built_in':_0x3bc5e3(0x715)},'contains':[{'className':_0x3bc5e3(0x1357),'begin':'\x5cb(addi?u?|andi?|b(al)?|beql?|bgez(al)?l?|bgtzl?|blezl?|bltz(al)?l?|bnel?|cl[oz]|divu?|ext|ins|j(al)?|jalr(\x5c.hb)?|jr(\x5c.hb)?|lbu?|lhu?|ll|lui|lw[lr]?|maddu?|mfhi|mflo|movn|movz|move|msubu?|mthi|mtlo|mul|multu?|nop|nor|ori?|rotrv?|sb|sc|se[bh]|sh|sllv?|slti?u?|srav?|srlv?|subu?|sw[lr]?|xori?|wsbh|abs\x5c.[sd]|add\x5c.[sd]|alnv.ps|bc1[ft]l?|c\x5c.(s?f|un|u?eq|[ou]lt|[ou]le|ngle?|seq|l[et]|ng[et])\x5c.[sd]|(ceil|floor|round|trunc)\x5c.[lw]\x5c.[sd]|cfc1|cvt\x5c.d\x5c.[lsw]|cvt\x5c.l\x5c.[dsw]|cvt\x5c.ps\x5c.s|cvt\x5c.s\x5c.[dlw]|cvt\x5c.s\x5c.p[lu]|cvt\x5c.w\x5c.[dls]|div\x5c.[ds]|ldx?c1|luxc1|lwx?c1|madd\x5c.[sd]|mfc1|mov[fntz]?\x5c.[ds]|msub\x5c.[sd]|mth?c1|mul\x5c.[ds]|neg\x5c.[ds]|nmadd\x5c.[ds]|nmsub\x5c.[ds]|p[lu][lu]\x5c.ps|recip\x5c.fmt|r?sqrt\x5c.[ds]|sdx?c1|sub\x5c.[ds]|suxc1|swx?c1|break|cache|d?eret|[de]i|ehb|mfc0|mtc0|pause|prefx?|rdhwr|rdpgpr|sdbbp|ssnop|synci?|syscall|teqi?|tgei?u?|tlb(p|r|w[ir])|tlti?u?|tnei?|wait|wrpgpr)','end':'\x5cs'},_0x4cd282[_0x3bc5e3(0x4e4f)](_0x3bc5e3(0xf2f),'$'),_0x4cd282['C_BLOCK_COMMENT_MODE'],_0x4cd282[_0x3bc5e3(0x291b)],{'className':_0x3bc5e3(0x2431),'begin':'\x27','end':_0x3bc5e3(0x2324),'relevance':0x0},{'className':_0x3bc5e3(0x4685),'begin':'\x5c|','end':'\x5c|','illegal':'\x5cn','relevance':0x0},{'className':_0x3bc5e3(0x4a80),'variants':[{'begin':'0x[0-9a-f]+'},{'begin':_0x3bc5e3(0xe5d)}],'relevance':0x0},{'className':_0x3bc5e3(0x239b),'variants':[{'begin':_0x3bc5e3(0x6d7)},{'begin':_0x3bc5e3(0x1e49)},{'begin':_0x3bc5e3(0x656)}],'relevance':0x0}],'illegal':/\//};};},0x3b0:_0x27da01=>{const _0x2f0626=a0_0x11e7;_0x27da01[_0x2f0626(0x474c)]=function(_0x1420e1){const 
_0x48a549=_0x2f0626;return{'name':_0x48a549(0x3c4),'keywords':'environ\x20vocabularies\x20notations\x20constructors\x20definitions\x20registrations\x20theorems\x20schemes\x20requirements\x20begin\x20end\x20definition\x20registration\x20cluster\x20existence\x20pred\x20func\x20defpred\x20deffunc\x20theorem\x20proof\x20let\x20take\x20assume\x20then\x20thus\x20hence\x20ex\x20for\x20st\x20holds\x20consider\x20reconsider\x20such\x20that\x20and\x20in\x20provided\x20of\x20as\x20from\x20be\x20being\x20by\x20means\x20equals\x20implies\x20iff\x20redefine\x20define\x20now\x20not\x20or\x20attr\x20is\x20mode\x20suppose\x20per\x20cases\x20set\x20thesis\x20contradiction\x20scheme\x20reserve\x20struct\x20correctness\x20compatibility\x20coherence\x20symmetry\x20assymetry\x20reflexivity\x20irreflexivity\x20connectedness\x20uniqueness\x20commutativity\x20idempotence\x20involutiveness\x20projectivity','contains':[_0x1420e1[_0x48a549(0x4e4f)]('::','$')]};};},0x17be:_0x4e4323=>{const _0x5a2579=a0_0x11e7;_0x4e4323[_0x5a2579(0x474c)]=function(_0x63044e){const _0x1f29a1=_0x5a2579;return{'name':'Mojolicious','subLanguage':'xml','contains':[{'className':_0x1f29a1(0x5153),'begin':'^__(END|DATA)__$'},{'begin':_0x1f29a1(0x23a6),'end':'$','subLanguage':_0x1f29a1(0xf50)},{'begin':_0x1f29a1(0x6ef),'end':_0x1f29a1(0x1686),'subLanguage':_0x1f29a1(0xf50),'excludeBegin':!0x0,'excludeEnd':!0x0}]};};},0x1eaa:_0x5d2963=>{const _0x575f87=a0_0x11e7;_0x5d2963[_0x575f87(0x474c)]=function(_0x379e32){const 
_0x1670d8=_0x575f87,_0x5d0809={'className':'number','relevance':0x0,'variants':[{'begin':'[$][a-fA-F0-9]+'},_0x379e32[_0x1670d8(0x30be)]]},_0x15f690={'variants':[{'match':[/(function|method)/,/\s+/,_0x379e32['UNDERSCORE_IDENT_RE']]}],'scope':{0x1:'keyword',0x3:_0x1670d8(0x20db)}},_0x580fab={'variants':[{'match':[/(class|interface|extends|implements)/,/\s+/,_0x379e32['UNDERSCORE_IDENT_RE']]}],'scope':{0x1:_0x1670d8(0x1357),0x3:'title.class'}};return{'name':_0x1670d8(0x21b1),'case_insensitive':!0x0,'keywords':{'keyword':[_0x1670d8(0x39ce),'private',_0x1670d8(0x227a),_0x1670d8(0x16d9),_0x1670d8(0x4c7b),_0x1670d8(0x2068),_0x1670d8(0x4321),_0x1670d8(0x422b),_0x1670d8(0x31a3),_0x1670d8(0x2be9),_0x1670d8(0xc1a),_0x1670d8(0x3027),_0x1670d8(0x27e4),_0x1670d8(0x3fc9),_0x1670d8(0x2e7e),_0x1670d8(0x3d23),_0x1670d8(0xc01),_0x1670d8(0x16a7),'global',_0x1670d8(0x4574),_0x1670d8(0x2681),'if',_0x1670d8(0xaf5),'else',_0x1670d8(0x3790),_0x1670d8(0x39b),_0x1670d8(0x552),_0x1670d8(0x905),_0x1670d8(0x11d0),'until',_0x1670d8(0xe57),_0x1670d8(0x3c19),'to',_0x1670d8(0xf8e),_0x1670d8(0x3dc6),_0x1670d8(0xdfd),_0x1670d8(0x196c),_0x1670d8(0x2988),_0x1670d8(0x383),_0x1670d8(0x331),_0x1670d8(0x2663),'or',_0x1670d8(0x12ee),'shr','mod'],'built_in':[_0x1670d8(0x2765),_0x1670d8(0x3366),'Error',_0x1670d8(0x293),_0x1670d8(0x315),_0x1670d8(0x33c1),_0x1670d8(0x7a0),_0x1670d8(0xba5),'ATan',_0x1670d8(0x1fe1),'ATan2r',_0x1670d8(0x31e3),'Abs','Abs','Ceil',_0x1670d8(0x39eb),'Clamp','Cos',_0x1670d8(0x125c),'Exp',_0x1670d8(0x1ed2),_0x1670d8(0x44df),_0x1670d8(0x2d51),'Max',_0x1670d8(0x4c50),_0x1670d8(0x4c50),_0x1670d8(0x8af),_0x1670d8(0x1a2a),'Sgn',_0x1670d8(0x1986),_0x1670d8(0x4d0c),_0x1670d8(0x3e40),'Tan','Tanr','Seed','PI',_0x1670d8(0x287a),_0x1670d8(0x13f4)],'literal':['true',_0x1670d8(0x3984),'null']},'illegal':/\/\*/,'contains':[_0x379e32[_0x1670d8(0x4e4f)](_0x1670d8(0x36c6),_0x1670d8(0x121f)),_0x379e32['COMMENT']('\x27','$',{'relevance':0x0}),_0x15f690,_0x580fab,{'className':_0x1670d8(0xd71),'begin':/\b(s
elf|super)\b/},{'className':_0x1670d8(0x5153),'begin':/\s*#/,'end':'$','keywords':{'keyword':'if\x20else\x20elseif\x20endif\x20end\x20then'}},{'match':[/^\s*/,/strict\b/],'scope':{0x2:_0x1670d8(0x5153)}},{'beginKeywords':_0x1670d8(0xa94),'end':'=','contains':[_0x379e32[_0x1670d8(0xb0e)]]},_0x379e32['QUOTE_STRING_MODE'],_0x5d0809]};};},0x2419:_0x57d366=>{const _0x3dd5cb=a0_0x11e7;_0x57d366[_0x3dd5cb(0x474c)]=function(_0x96abf9){const _0x388c23=_0x3dd5cb,_0x660646={'keyword':_0x388c23(0x119b),'literal':_0x388c23(0x3ebb),'built_in':_0x388c23(0x223b)},_0x68813e=_0x388c23(0x18c3),_0x1accdd={'className':'subst','begin':/#\{/,'end':/\}/,'keywords':_0x660646},_0x14f304=[_0x96abf9['inherit'](_0x96abf9[_0x388c23(0xd12)],{'starts':{'end':_0x388c23(0x4887),'relevance':0x0}}),{'className':_0x388c23(0x2431),'variants':[{'begin':/'/,'end':/'/,'contains':[_0x96abf9['BACKSLASH_ESCAPE']]},{'begin':/"/,'end':/"/,'contains':[_0x96abf9['BACKSLASH_ESCAPE'],_0x1accdd]}]},{'className':'built_in','begin':_0x388c23(0x4b3a)+_0x96abf9[_0x388c23(0xacc)]},{'begin':'@'+_0x96abf9['IDENT_RE']},{'begin':_0x96abf9[_0x388c23(0xacc)]+'\x5c\x5c'+_0x96abf9['IDENT_RE']}];_0x1accdd[_0x388c23(0x2b31)]=_0x14f304;const 
_0x29199a=_0x96abf9['inherit'](_0x96abf9[_0x388c23(0x2029)],{'begin':_0x68813e}),_0x27e300='(\x5c(.*\x5c)\x5cs*)?\x5cB[-=]>',_0x4c8c2b={'className':_0x388c23(0xddd),'begin':_0x388c23(0x898),'returnBegin':!0x0,'contains':[{'begin':/\(/,'end':/\)/,'keywords':_0x660646,'contains':[_0x388c23(0x4454)][_0x388c23(0x1d1d)](_0x14f304)}]};return{'name':'MoonScript','aliases':['moon'],'keywords':_0x660646,'illegal':/\/\*/,'contains':_0x14f304['concat']([_0x96abf9[_0x388c23(0x4e4f)]('--','$'),{'className':_0x388c23(0x14b2),'begin':_0x388c23(0x3df7)+_0x68813e+_0x388c23(0x2210)+_0x27e300,'end':_0x388c23(0x3200),'returnBegin':!0x0,'contains':[_0x29199a,_0x4c8c2b]},{'begin':/[\(,:=]\s*/,'relevance':0x0,'contains':[{'className':_0x388c23(0x14b2),'begin':_0x27e300,'end':_0x388c23(0x3200),'returnBegin':!0x0,'contains':[_0x4c8c2b]}]},{'className':_0x388c23(0x1390),'beginKeywords':_0x388c23(0x1390),'end':'$','illegal':/[:="\[\]]/,'contains':[{'beginKeywords':_0x388c23(0x4428),'endsWithParent':!0x0,'illegal':/[:="\[\]]/,'contains':[_0x29199a]},_0x29199a]},{'className':_0x388c23(0x11d8),'begin':_0x68813e+':','end':':','returnBegin':!0x0,'returnEnd':!0x0,'relevance':0x0}])};};},0x2663:_0x9ee23c=>{_0x9ee23c['exports']=function(_0x298878){const 
_0x1c0dc5=a0_0x11e7;return{'name':_0x1c0dc5(0x35d1),'case_insensitive':!0x0,'contains':[{'beginKeywords':_0x1c0dc5(0x676),'end':/;/,'keywords':{'keyword':[_0x1c0dc5(0xc36),'alter',_0x1c0dc5(0x5e7),_0x1c0dc5(0x2663),'any',_0x1c0dc5(0x26f6),'as',_0x1c0dc5(0x637),_0x1c0dc5(0x42fa),_0x1c0dc5(0x2597),_0x1c0dc5(0x4053),_0x1c0dc5(0x1e8d),_0x1c0dc5(0x4e10),_0x1c0dc5(0x482),_0x1c0dc5(0x1034),'by',_0x1c0dc5(0x236b),_0x1c0dc5(0x2e7e),_0x1c0dc5(0x1171),_0x1c0dc5(0x3fde),'collate',_0x1c0dc5(0x3caa),_0x1c0dc5(0x157a),_0x1c0dc5(0xa3b),'continue',_0x1c0dc5(0x199e),_0x1c0dc5(0x159a),'create',_0x1c0dc5(0x2f69),'dataset','datastore',_0x1c0dc5(0x45e8),_0x1c0dc5(0x4b9a),_0x1c0dc5(0x5be),'derived','desc','describe',_0x1c0dc5(0x2f6e),'do',_0x1c0dc5(0x41fc),_0x1c0dc5(0x2f9e),'element',_0x1c0dc5(0x3d4),'end',_0x1c0dc5(0x12d8),'except',_0x1c0dc5(0x433f),'execute','exists',_0x1c0dc5(0xf68),_0x1c0dc5(0x41d0),'first','flatten','for',_0x1c0dc5(0x455c),'from',_0x1c0dc5(0x14b2),'grant',_0x1c0dc5(0x4e5b),_0x1c0dc5(0x4c5e),_0x1c0dc5(0x4cfc),'if',_0x1c0dc5(0x42b4),_0x1c0dc5(0x1757),'in','include',_0x1c0dc5(0x4a85),_0x1c0dc5(0x3bb5),_0x1c0dc5(0x2ca4),_0x1c0dc5(0x2988),'inner','insert',_0x1c0dc5(0x13b2),_0x1c0dc5(0x31cf),'is',_0x1c0dc5(0x3541),_0x1c0dc5(0x49fe),_0x1c0dc5(0x1ea9),'keyspace',_0x1c0dc5(0x2182),_0x1c0dc5(0x4d3c),_0x1c0dc5(0x48eb),_0x1c0dc5(0x1e61),_0x1c0dc5(0x235b),_0x1c0dc5(0x83d),_0x1c0dc5(0x1c08),'lsm',_0x1c0dc5(0x4833),_0x1c0dc5(0x33b2),_0x1c0dc5(0x2083),'materialized',_0x1c0dc5(0x10d1),_0x1c0dc5(0x3ce8),_0x1c0dc5(0x37f7),_0x1c0dc5(0x2e3),_0x1c0dc5(0xc1a),_0x1c0dc5(0x4a80),_0x1c0dc5(0x20c7),_0x1c0dc5(0xf16),'on',_0x1c0dc5(0x1081),'or',_0x1c0dc5(0xd8d),_0x1c0dc5(0x3c1),_0x1c0dc5(0x135c),_0x1c0dc5(0x2956),'partition','password','path',_0x1c0dc5(0x37dd),_0x1c0dc5(0x3df8),_0x1c0dc5(0x1a1f),_0x1c0dc5(0x4ef4),_0x1c0dc5(0x1cb1),_0x1c0dc5(0x1285),_0x1c0dc5(0x39ce),_0x1c0dc5(0x1efd),_0x1c0dc5(0x2c81),_0x1c0dc5(0x24d8),_0x1c0dc5(0x2022),_0x1c0dc5(0xdfd),_0x1c0dc5(0x18ec),'revoke',_0x1c0dc5(0x4d5
0),_0x1c0dc5(0x2bf0),_0x1c0dc5(0x1b33),_0x1c0dc5(0x4ad2),_0x1c0dc5(0x313b),_0x1c0dc5(0x3fc9),_0x1c0dc5(0x4454),'semi',_0x1c0dc5(0x1fa),_0x1c0dc5(0x2e0d),_0x1c0dc5(0x363a),_0x1c0dc5(0x4cc4),_0x1c0dc5(0x34cb),_0x1c0dc5(0x2431),_0x1c0dc5(0x2ec4),_0x1c0dc5(0xaf5),'to',_0x1c0dc5(0x2bdb),_0x1c0dc5(0xe6f),'truncate',_0x1c0dc5(0x4ec1),'union','unique',_0x1c0dc5(0x40b5),_0x1c0dc5(0x2ad),'unset',_0x1c0dc5(0x38d6),'upsert',_0x1c0dc5(0x84a),_0x1c0dc5(0x4b31),_0x1c0dc5(0x347a),_0x1c0dc5(0x1509),_0x1c0dc5(0x4fe9),_0x1c0dc5(0x1ae1),_0x1c0dc5(0x1fae),'via',_0x1c0dc5(0x1961),_0x1c0dc5(0x191b),'where','while',_0x1c0dc5(0x2aa7),'within','work',_0x1c0dc5(0x32a6)],'literal':[_0x1c0dc5(0x4022),_0x1c0dc5(0x3984),_0x1c0dc5(0x1582),_0x1c0dc5(0xe4d)],'built_in':[_0x1c0dc5(0x4979),_0x1c0dc5(0x2bfb),_0x1c0dc5(0x4ef0),'array_contains',_0x1c0dc5(0x11af),_0x1c0dc5(0x290b),_0x1c0dc5(0x5080),'array_length',_0x1c0dc5(0x21e2),_0x1c0dc5(0x3a09),_0x1c0dc5(0x21b4),_0x1c0dc5(0x525c),'array_put','array_range',_0x1c0dc5(0x1ef4),_0x1c0dc5(0x4a39),_0x1c0dc5(0xbbe),_0x1c0dc5(0x4fa3),'array_sort','array_sum','avg',_0x1c0dc5(0x404e),_0x1c0dc5(0x4529),_0x1c0dc5(0x37c8),'sum',_0x1c0dc5(0x486),'least',_0x1c0dc5(0x13f5),_0x1c0dc5(0x3875),_0x1c0dc5(0x2f78),_0x1c0dc5(0x2e72),'nullif',_0x1c0dc5(0x3062),_0x1c0dc5(0x175c),_0x1c0dc5(0x2766),_0x1c0dc5(0x46d3),_0x1c0dc5(0x30f3),_0x1c0dc5(0x1bdd),_0x1c0dc5(0x1939),_0x1c0dc5(0x1277),_0x1c0dc5(0x3708),_0x1c0dc5(0x2e07),_0x1c0dc5(0x44c),'date_diff_str','date_part_millis',_0x1c0dc5(0x3d3f),'date_trunc_millis','date_trunc_str',_0x1c0dc5(0x38e5),'millis','str_to_millis',_0x1c0dc5(0x2d6b),_0x1c0dc5(0x5134),'millis_to_zone_name',_0x1c0dc5(0x3fac),_0x1c0dc5(0x215a),_0x1c0dc5(0x182c),_0x1c0dc5(0x2379),_0x1c0dc5(0x2f93),'decode_json',_0x1c0dc5(0x322f),_0x1c0dc5(0x348b),'poly_length','base64','base64_encode',_0x1c0dc5(0xd19),'meta',_0x1c0dc5(0x6a9),'abs',_0x1c0dc5(0x2c6e),'asin',_0x1c0dc5(0x3aab),_0x1c0dc5(0x41c2),_0x1c0dc5(0x10aa),'cos',_0x1c0dc5(0x187c),'e',_0x1c0dc5(0x3a1b),'ln',_0x
1c0dc5(0x20ff),_0x1c0dc5(0x2e2d),'pi',_0x1c0dc5(0x17a4),'radians',_0x1c0dc5(0xe98),_0x1c0dc5(0x3d6c),_0x1c0dc5(0x7f7),_0x1c0dc5(0x2a37),_0x1c0dc5(0x5011),'tan',_0x1c0dc5(0x3cae),_0x1c0dc5(0x3ba),_0x1c0dc5(0x495b),_0x1c0dc5(0x27fc),_0x1c0dc5(0x836),_0x1c0dc5(0x3b90),'object_inner_values',_0x1c0dc5(0x3000),'object_put','object_remove',_0x1c0dc5(0x1a18),_0x1c0dc5(0x1998),_0x1c0dc5(0x3cf7),_0x1c0dc5(0x31b6),_0x1c0dc5(0x3060),_0x1c0dc5(0x2b31),_0x1c0dc5(0x1707),'length',_0x1c0dc5(0x52d),_0x1c0dc5(0x1070),'position',_0x1c0dc5(0x11d0),'replace',_0x1c0dc5(0x3ded),_0x1c0dc5(0x1117),_0x1c0dc5(0x3a72),_0x1c0dc5(0x4685),_0x1c0dc5(0x1b23),_0x1c0dc5(0x4541),_0x1c0dc5(0x43cf),_0x1c0dc5(0x384a),_0x1c0dc5(0x7b5),_0x1c0dc5(0x3e9),'isobject',_0x1c0dc5(0x1311),_0x1c0dc5(0xcfc),_0x1c0dc5(0x2ffb),'toatom',_0x1c0dc5(0x210a),'tonumber',_0x1c0dc5(0x76c),_0x1c0dc5(0x2cf3)]},'contains':[{'className':_0x1c0dc5(0x2431),'begin':'\x27','end':'\x27','contains':[_0x298878['BACKSLASH_ESCAPE']]},{'className':_0x1c0dc5(0x2431),'begin':'\x22','end':'\x22','contains':[_0x298878[_0x1c0dc5(0x4a76)]]},{'className':_0x1c0dc5(0x239b),'begin':'`','end':'`','contains':[_0x298878[_0x1c0dc5(0x4a76)]]},_0x298878[_0x1c0dc5(0xd12)],_0x298878[_0x1c0dc5(0x23fe)]]},_0x298878['C_BLOCK_COMMENT_MODE']]};};},0x124b:_0x426295=>{_0x426295['exports']=function(_0x2c543e){const _0x340fe0=a0_0x11e7;return{'name':_0x340fe0(0x211b),'aliases':['nt'],'contains':[_0x2c543e[_0x340fe0(0x46a1)](_0x2c543e['HASH_COMMENT_MODE'],{'begin':/^\s*(?=#)/,'excludeBegin':!0x0}),{'variants':[{'match':[/^\s*/,/-/,/[ ]/,/.*$/]},{'match':[/^\s*/,/-$/]}],'className':{0x2:'bullet',0x4:_0x340fe0(0x2431)}},{'match':[/^\s*/,/>/,/[ ]/,/.*$/],'className':{0x2:_0x340fe0(0xa25),0x4:_0x340fe0(0x2431)}},{'match':[/^\s*(?=\S)/,/[^:]+/,/:\s*/,/$/],'className':{0x2:_0x340fe0(0x263f),0x3:_0x340fe0(0xa25)}},{'match':[/^\s*(?=\S)/,/[^:]*[^: ]/,/[ ]*:/,/[ 
]/,/.*$/],'className':{0x2:_0x340fe0(0x263f),0x3:_0x340fe0(0xa25),0x5:_0x340fe0(0x2431)}}]};};},0x1c5b:_0xbb18e4=>{const _0x3e43ae=a0_0x11e7;_0xbb18e4[_0x3e43ae(0x474c)]=function(_0x1b7eb3){const _0x386461=_0x3e43ae,_0xf41f4f=_0x1b7eb3[_0x386461(0x41d2)],_0x2b9896={'className':'variable','variants':[{'begin':/\$\d+/},{'begin':/\$\{\w+\}/},{'begin':_0xf41f4f[_0x386461(0x1d1d)](/[$@]/,_0x1b7eb3[_0x386461(0x206e)])}]},_0x3cbade={'endsWithParent':!0x0,'keywords':{'$pattern':/[a-z_]{2,}|\/dev\/poll/,'literal':['on',_0x386461(0x2422),_0x386461(0x1df8),'no',_0x386461(0x4022),'false',_0x386461(0x28b),_0x386461(0x43a9),'debug',_0x386461(0x3a85),_0x386461(0x524b),_0x386461(0x4ab1),_0x386461(0x3d85),_0x386461(0x38c5),_0x386461(0x3fc9),_0x386461(0x4e10),_0x386461(0x4d3c),_0x386461(0x14f3),_0x386461(0x2115),'kqueue',_0x386461(0x3b01),_0x386461(0x1e63),'poll',_0x386461(0x4dc0)]},'relevance':0x0,'illegal':'=>','contains':[_0x1b7eb3['HASH_COMMENT_MODE'],{'className':'string','contains':[_0x1b7eb3[_0x386461(0x4a76)],_0x2b9896],'variants':[{'begin':/"/,'end':/"/},{'begin':/'/,'end':/'/}]},{'begin':_0x386461(0x38ab),'end':'\x5cs','endsWithParent':!0x0,'excludeEnd':!0x0,'contains':[_0x2b9896]},{'className':'regexp','contains':[_0x1b7eb3['BACKSLASH_ESCAPE'],_0x2b9896],'variants':[{'begin':'\x5cs\x5c^','end':_0x386461(0x4015),'returnEnd':!0x0},{'begin':_0x386461(0x26f9),'end':_0x386461(0x4015),'returnEnd':!0x0},{'begin':_0x386461(0x356)},{'begin':'([a-z\x5c-]+\x5c.)+\x5c*'}]},{'className':_0x386461(0x4a80),'begin':_0x386461(0xbf0)},{'className':_0x386461(0x4a80),'begin':_0x386461(0xac9),'relevance':0x0},_0x2b9896]};return{'name':_0x386461(0x1914),'aliases':['nginxconf'],'contains':[_0x1b7eb3['HASH_COMMENT_MODE'],{'beginKeywords':_0x386461(0x22df),'end':/;|\{/,'contains':_0x3cbade[_0x386461(0x2b31)],'keywords':{'section':_0x386461(0x22df)}},{'className':'section','begin':_0xf41f4f['concat'](_0x1b7eb3['UNDERSCORE_IDENT_RE']+_0xf41f4f[_0x386461(0x3296)](/\s+\{/)),'relevance':0x0},{'begin':_
0xf41f4f[_0x386461(0x3296)](_0x1b7eb3[_0x386461(0x206e)]+'\x5cs'),'end':';|\x5c{','contains':[{'className':_0x386461(0x263f),'begin':_0x1b7eb3['UNDERSCORE_IDENT_RE'],'starts':_0x3cbade}],'relevance':0x0}],'illegal':'[^\x5cs\x5c}\x5c{]'};};},0x6d3:_0x51e896=>{const _0x350639=a0_0x11e7;_0x51e896[_0x350639(0x474c)]=function(_0x2d82fa){const _0x281915=_0x350639;return{'name':'Nim','keywords':{'keyword':[_0x281915(0xca2),'and','as',_0x281915(0x1651),_0x281915(0x39e8),_0x281915(0x1f2e),_0x281915(0x4e10),_0x281915(0x2e7e),_0x281915(0x1171),_0x281915(0xc01),_0x281915(0x16d9),'converter',_0x281915(0x3fae),'distinct',_0x281915(0x4c88),'do','elif','else','end',_0x281915(0x44d8),'except',_0x281915(0x2bb9),'finally',_0x281915(0x3c19),_0x281915(0x27e6),'func',_0x281915(0xff8),_0x281915(0x110a),'if',_0x281915(0x331),'in',_0x281915(0x478e),_0x281915(0x321b),'is',_0x281915(0x3ca7),_0x281915(0x16a6),'let',_0x281915(0x172d),_0x281915(0x510c),_0x281915(0xc06),'mod',_0x281915(0x3e27),_0x281915(0xc1a),_0x281915(0x4545),_0x281915(0x20c7),'of','or',_0x281915(0x3ab5),_0x281915(0x4e27),_0x281915(0x2f48),_0x281915(0x2c1e),_0x281915(0x21c3),_0x281915(0xdfd),_0x281915(0x2b1c),'shl',_0x281915(0x389f),_0x281915(0x2c7c),_0x281915(0x15c6),'try',_0x281915(0x3cab),_0x281915(0xcfc),'using',_0x281915(0x469d),_0x281915(0x191b),_0x281915(0x552),_0x281915(0x2aa7),'without',_0x281915(0x32a6),_0x281915(0x5075)],'literal':[_0x281915(0x4022),_0x281915(0x3984)],'type':[_0x281915(0xc16),_0x281915(0xc11),_0x281915(0x22b1),_0x281915(0x3f1b),_0x281915(0x13c3),_0x281915(0x3a0b),_0x281915(0xd4d),_0x281915(0xf20),_0x281915(0xa09),_0x281915(0x491f),'float','float32',_0x281915(0x2209),_0x281915(0x3ebd),_0x281915(0x373c),'string',_0x281915(0x31ee),_0x281915(0x43e4),'expr',_0x281915(0xc6b),_0x281915(0x27d6),'auto',_0x281915(0x4684),_0x281915(0x51f),_0x281915(0x26f6),_0x281915(0x44f2),_0x281915(0x3c5),_0x281915(0xc26),_0x281915(0x1fa),'clong',_0x281915(0x2f8f),'cchar','cschar',_0x281915(0x1ac6),'cint',_0x281915(0xa02),_0x
281915(0x1495),_0x281915(0x277e),'cdouble','clongdouble',_0x281915(0x17c6),_0x281915(0x3c3a),_0x281915(0x2b3d),_0x281915(0x1dc4),_0x281915(0x1251),'semistatic'],'built_in':[_0x281915(0x1e79),_0x281915(0x49a2),_0x281915(0x40ee),_0x281915(0xa34)]},'contains':[{'className':'meta','begin':/\{\./,'end':/\.\}/,'relevance':0xa},{'className':_0x281915(0x2431),'begin':/[a-zA-Z]\w*"/,'end':/"/,'contains':[{'begin':/""/}]},{'className':_0x281915(0x2431),'begin':/([a-zA-Z]\w*)?"""/,'end':/"""/},_0x2d82fa[_0x281915(0x291b)],{'className':_0x281915(0xcfc),'begin':/\b[A-Z]\w+\b/,'relevance':0x0},{'className':_0x281915(0x4a80),'relevance':0x0,'variants':[{'begin':/\b(0[xX][0-9a-fA-F][_0-9a-fA-F]*)('?[iIuU](8|16|32|64))?/},{'begin':/\b(0o[0-7][_0-7]*)('?[iIuUfF](8|16|32|64))?/},{'begin':/\b(0(b|B)[01][_01]*)('?[iIuUfF](8|16|32|64))?/},{'begin':/\b(\d[_\d]*)('?[iIuUfF](8|16|32|64))?/}]},_0x2d82fa['HASH_COMMENT_MODE']]};};},0x140:_0x5eab75=>{const _0x159120=a0_0x11e7;_0x5eab75[_0x159120(0x474c)]=function(_0x4c8b89){const _0xf44952=_0x159120,_0x4152b5={'keyword':['rec','with','let','in',_0xf44952(0x46a1),'assert','if',_0xf44952(0x3d4),_0xf44952(0xaf5)],'literal':['true',_0xf44952(0x3984),'or',_0xf44952(0x2663),_0xf44952(0x1582)],'built_in':['import',_0xf44952(0x1ec4),_0xf44952(0x3103),_0xf44952(0x8b5),_0xf44952(0x1fc8),_0xf44952(0x3770),'map',_0xf44952(0x10d3),_0xf44952(0x383),'toString',_0xf44952(0x484e)]},_0x378e7b={'className':_0xf44952(0x2ad6),'begin':/\$\{/,'end':/\}/,'keywords':_0x4152b5},_0x32772e={'className':'string','contains':[{'className':_0xf44952(0x2825),'begin':/''\$/},_0x378e7b],'variants':[{'begin':'\x27\x27','end':'\x27\x27'},{'begin':'\x22','end':'\x22'}]},_0x1c06c0=[_0x4c8b89['NUMBER_MODE'],_0x4c8b89[_0xf44952(0x2bbe)],_0x4c8b89[_0xf44952(0x23fe)],_0x32772e,{'begin':/[a-zA-Z0-9-_]+(\s*=)/,'returnBegin':!0x0,'relevance':0x0,'contains':[{'className':_0xf44952(0x431d),'begin':/\S+/,'relevance':0.2}]}];return 
_0x378e7b[_0xf44952(0x2b31)]=_0x1c06c0,{'name':_0xf44952(0x37cf),'aliases':['nixos'],'keywords':_0x4152b5,'contains':_0x1c06c0};};},0x1a11:_0x21a47f=>{const _0x36afc1=a0_0x11e7;_0x21a47f[_0x36afc1(0x474c)]=function(_0x1da5ce){const _0x299eb1=_0x36afc1;return{'name':'Node\x20REPL','contains':[{'className':_0x299eb1(0x4cea),'starts':{'end':/ |$/,'starts':{'end':'$','subLanguage':_0x299eb1(0x45ac)}},'variants':[{'begin':/^>(?=[ ]|$)/},{'begin':/^\.\.\.(?=[ ]|$)/}]}]};};},0x8c:_0x205d39=>{const _0x131cd8=a0_0x11e7;_0x205d39[_0x131cd8(0x474c)]=function(_0x35654c){const _0x5a6dbc=_0x131cd8,_0x4eaea7=_0x35654c[_0x5a6dbc(0x41d2)],_0x568a4e={'className':_0x5a6dbc(0x41f7),'begin':_0x4eaea7[_0x5a6dbc(0x1d1d)](/\$/,_0x4eaea7['either']('ADMINTOOLS',_0x5a6dbc(0x3d5b),_0x5a6dbc(0x4e62),_0x5a6dbc(0x4f14),'COMMONFILES32',_0x5a6dbc(0x263c),_0x5a6dbc(0x1fcf),_0x5a6dbc(0x1531),'DESKTOP',_0x5a6dbc(0x3f5c),_0x5a6dbc(0x46ae),'EXEFILE',_0x5a6dbc(0x3f39),'FAVORITES',_0x5a6dbc(0x1c13),_0x5a6dbc(0x2ddc),'HWNDPARENT',_0x5a6dbc(0xf59),_0x5a6dbc(0x470b),_0x5a6dbc(0x53c),_0x5a6dbc(0x3eb4),'MUSIC',_0x5a6dbc(0xa61),_0x5a6dbc(0x2241),_0x5a6dbc(0x4e47),_0x5a6dbc(0x34ff),_0x5a6dbc(0x28f4),'PROFILE',_0x5a6dbc(0xe1e),_0x5a6dbc(0x9d1),_0x5a6dbc(0x516e),_0x5a6dbc(0x49f9),'RECENT',_0x5a6dbc(0x2a10),_0x5a6dbc(0x4f13),_0x5a6dbc(0x458b),_0x5a6dbc(0x3b9c),_0x5a6dbc(0x3b15),_0x5a6dbc(0x2a09),_0x5a6dbc(0x3040),_0x5a6dbc(0x41dd),'TEMPLATES',_0x5a6dbc(0x3629),_0x5a6dbc(0x2858)))},_0x2d4590={'className':_0x5a6dbc(0x3362),'begin':/\$+\{[\!\w.:-]+\}/},_0x2c274a={'className':_0x5a6dbc(0x3362),'begin':/\$+\w[\w\.]*/,'illegal':/\(\)\{\}/},_0xa8c7ce={'className':_0x5a6dbc(0x3362),'begin':/\$+\([\w^.:!-]+\)/},_0x52a9ec={'className':'params','begin':_0x4eaea7[_0x5a6dbc(0x583)](_0x5a6dbc(0x305a),_0x5a6dbc(0x4fd),_0x5a6dbc(0x2893),'FILE_ATTRIBUTE_OFFLINE','FILE_ATTRIBUTE_READONLY','FILE_ATTRIBUTE_SYSTEM','FILE_ATTRIBUTE_TEMPORARY',_0x5a6dbc(0x3ca3),'HKCU','HKDD',_0x5a6dbc(0x500),_0x5a6dbc(0x2050),_0x5a6dbc(0x2ac8),'HKEY_DYN_
DATA','HKEY_LOCAL_MACHINE',_0x5a6dbc(0x427b),'HKEY_USERS',_0x5a6dbc(0x5155),'HKPD',_0x5a6dbc(0x31c2),_0x5a6dbc(0x379e),'IDCANCEL','IDIGNORE',_0x5a6dbc(0x1dee),_0x5a6dbc(0xf46),_0x5a6dbc(0x29dd),_0x5a6dbc(0x420f),'MB_ABORTRETRYIGNORE',_0x5a6dbc(0x20b7),'MB_DEFBUTTON2',_0x5a6dbc(0x2ae),_0x5a6dbc(0x10bb),_0x5a6dbc(0x2e1d),_0x5a6dbc(0x41e1),_0x5a6dbc(0x26a4),_0x5a6dbc(0x3f37),'MB_OK',_0x5a6dbc(0xb0b),_0x5a6dbc(0x3159),_0x5a6dbc(0x3d06),_0x5a6dbc(0x45f),'MB_SETFOREGROUND',_0x5a6dbc(0x1f43),_0x5a6dbc(0x7b4),_0x5a6dbc(0x2a54),'NORMAL','OFFLINE',_0x5a6dbc(0x10fb),_0x5a6dbc(0x4b9d),_0x5a6dbc(0x3044),_0x5a6dbc(0x417))},_0x40af5a={'className':_0x5a6dbc(0x1357),'begin':_0x4eaea7[_0x5a6dbc(0x1d1d)](/!/,_0x4eaea7[_0x5a6dbc(0x583)]('addincludedir',_0x5a6dbc(0x128d),_0x5a6dbc(0x4838),_0x5a6dbc(0x4fd4),'cd',_0x5a6dbc(0x1bb1),_0x5a6dbc(0x30da),_0x5a6dbc(0x4978),'else',_0x5a6dbc(0x39b),'error',_0x5a6dbc(0x2162),_0x5a6dbc(0x257a),_0x5a6dbc(0x4a1e),'gettlbversion','if',_0x5a6dbc(0xe33),_0x5a6dbc(0x7c7),_0x5a6dbc(0x2a32),_0x5a6dbc(0x4536),'include',_0x5a6dbc(0x36d0),'macro','macroend',_0x5a6dbc(0x5fa),_0x5a6dbc(0xb33),_0x5a6dbc(0x37f2),_0x5a6dbc(0x46c5),_0x5a6dbc(0x2ec4),_0x5a6dbc(0x1d0b),_0x5a6dbc(0x1f88),_0x5a6dbc(0x116b),'verbose',_0x5a6dbc(0x4020)))},_0x97fc94={'className':_0x5a6dbc(0x2431),'variants':[{'begin':'\x22','end':'\x22'},{'begin':'\x27','end':'\x27'},{'begin':'`','end':'`'}],'illegal':/\n/,'contains':[{'className':_0x5a6dbc(0x2825),'begin':/\$(\\[nrt]|\$)/},_0x568a4e,_0x2d4590,_0x2c274a,_0xa8c7ce]},_0xd2644f={'match':[/Function/,/\s+/,_0x4eaea7['concat'](/(\.)?/,_0x35654c[_0x5a6dbc(0xacc)])],'scope':{0x1:_0x5a6dbc(0x1357),0x3:_0x5a6dbc(0x20db)}},_0x49a346={'match':[/Var/,/\s+/,/(?:\/GLOBAL\s+)?/,/[A-Za-z][\w.]*/],'scope':{0x1:'keyword',0x3:_0x5a6dbc(0xddd),0x4:'variable'}};return{'name':_0x5a6dbc(0x1335),'case_insensitive':!0x0,'keywords':{'keyword':['Abort','AddBrandingImage','AddSize',_0x5a6dbc(0x2e50),'AllowSkipFiles',_0x5a6dbc(0x1f01),'BGFont',_0x5a6dbc(0x3df9),_0x5a6d
bc(0x10d4),_0x5a6dbc(0x21c4),_0x5a6dbc(0x44db),_0x5a6dbc(0x4f8e),_0x5a6dbc(0x650),_0x5a6dbc(0x2d25),_0x5a6dbc(0x1a51),_0x5a6dbc(0x47e3),'CompletedText',_0x5a6dbc(0x1905),'CopyFiles',_0x5a6dbc(0x3dc2),_0x5a6dbc(0x482e),_0x5a6dbc(0x5238),_0x5a6dbc(0xad9),'Delete',_0x5a6dbc(0x3f0d),_0x5a6dbc(0x835),'DeleteRegKey',_0x5a6dbc(0x2d0a),'DetailPrint',_0x5a6dbc(0x4cc5),_0x5a6dbc(0x2d8b),_0x5a6dbc(0x147a),_0x5a6dbc(0x284c),'EnableWindow',_0x5a6dbc(0x3b66),_0x5a6dbc(0x4a17),_0x5a6dbc(0x2561),_0x5a6dbc(0x4d1e),_0x5a6dbc(0x4029),_0x5a6dbc(0x46ad),_0x5a6dbc(0x2726),_0x5a6dbc(0x4025),_0x5a6dbc(0x106b),_0x5a6dbc(0x352c),_0x5a6dbc(0x1343),_0x5a6dbc(0x203b),_0x5a6dbc(0x44f),'FileRead','FileReadByte',_0x5a6dbc(0x36ff),_0x5a6dbc(0x439),_0x5a6dbc(0xf57),'FileSeek',_0x5a6dbc(0x3177),'FileWriteByte',_0x5a6dbc(0x501a),_0x5a6dbc(0x3469),_0x5a6dbc(0x39d0),_0x5a6dbc(0x2290),_0x5a6dbc(0x499b),_0x5a6dbc(0x1623),'GetCurInstType',_0x5a6dbc(0x27b8),_0x5a6dbc(0x2467),'GetDLLVersion','GetDLLVersionLocal',_0x5a6dbc(0x26e5),_0x5a6dbc(0x4cfe),_0x5a6dbc(0x412),_0x5a6dbc(0x34fb),'GetFunctionAddress',_0x5a6dbc(0x3053),_0x5a6dbc(0x3891),_0x5a6dbc(0x4268),_0x5a6dbc(0x438c),'GetWinVer',_0x5a6dbc(0x2761),'HideWindow',_0x5a6dbc(0x518a),_0x5a6dbc(0x3c74),_0x5a6dbc(0x4b79),_0x5a6dbc(0x44f7),_0x5a6dbc(0xe05),_0x5a6dbc(0x3339),_0x5a6dbc(0x18c7),'IfSilent',_0x5a6dbc(0x2be5),_0x5a6dbc(0xe25),_0x5a6dbc(0x2d60),'InstallDir',_0x5a6dbc(0x2c1a),_0x5a6dbc(0x196a),_0x5a6dbc(0x15bf),_0x5a6dbc(0x4c13),_0x5a6dbc(0x267),_0x5a6dbc(0x1e5b),_0x5a6dbc(0x306e),_0x5a6dbc(0x3473),'IntCmp','IntCmpU',_0x5a6dbc(0x81b),_0x5a6dbc(0xec9),'IntPtrCmp',_0x5a6dbc(0x5029),_0x5a6dbc(0x38a3),_0x5a6dbc(0xab4),_0x5a6dbc(0x2fb7),_0x5a6dbc(0x48a1),_0x5a6dbc(0x2a6f),'LicenseForceSelection','LicenseLangString',_0x5a6dbc(0x25ca),_0x5a6dbc(0x3a35),_0x5a6dbc(0xbbc),_0x5a6dbc(0x3972),_0x5a6dbc(0x1073),_0x5a6dbc(0x1a65),'ManifestDPIAware',_0x5a6dbc(0xbe3),_0x5a6dbc(0x2254),_0x5a6dbc(0x37e2),_0x5a6dbc(0x4cdd),_0x5a6dbc(0x280),_0x5a6dbc(0x4269),_0x5a6dbc(0x143
a),'OutFile','Page',_0x5a6dbc(0x2839),_0x5a6dbc(0x27ee),_0x5a6dbc(0x21dc),_0x5a6dbc(0x2373),'PESubsysVer',_0x5a6dbc(0x4344),_0x5a6dbc(0x3f40),_0x5a6dbc(0x1acc),'ReadEnvStr',_0x5a6dbc(0x2d40),'ReadRegDWORD',_0x5a6dbc(0x29f),_0x5a6dbc(0x480d),'RegDLL',_0x5a6dbc(0xcfe),_0x5a6dbc(0x133a),_0x5a6dbc(0x3966),_0x5a6dbc(0xb4f),'RMDir',_0x5a6dbc(0x4078),'SectionGetFlags',_0x5a6dbc(0x4798),_0x5a6dbc(0x508f),_0x5a6dbc(0x3d0b),_0x5a6dbc(0x448a),_0x5a6dbc(0x4fbe),_0x5a6dbc(0x1799),_0x5a6dbc(0x39d),_0x5a6dbc(0x1116),_0x5a6dbc(0x48d8),_0x5a6dbc(0x42b6),_0x5a6dbc(0x4190),_0x5a6dbc(0x3d9),_0x5a6dbc(0x32d0),_0x5a6dbc(0x480f),_0x5a6dbc(0x49c4),_0x5a6dbc(0x1341),'SetDatablockOptimize',_0x5a6dbc(0x3065),_0x5a6dbc(0x229f),_0x5a6dbc(0x4ed1),_0x5a6dbc(0x51e8),_0x5a6dbc(0x4559),_0x5a6dbc(0x226b),_0x5a6dbc(0x3c8),'SetOutPath',_0x5a6dbc(0x677),_0x5a6dbc(0x3fdd),_0x5a6dbc(0x2ff6),_0x5a6dbc(0x36c8),_0x5a6dbc(0x4308),_0x5a6dbc(0x4bae),_0x5a6dbc(0x217),_0x5a6dbc(0x1c8f),'SilentInstall',_0x5a6dbc(0x2659),_0x5a6dbc(0x1cf2),_0x5a6dbc(0x39cf),_0x5a6dbc(0x470e),_0x5a6dbc(0xd61),_0x5a6dbc(0x4687),_0x5a6dbc(0xef1),_0x5a6dbc(0x2ff7),_0x5a6dbc(0x2dea),'UninstallButtonText',_0x5a6dbc(0x2ed1),_0x5a6dbc(0x1257),_0x5a6dbc(0x1b63),_0x5a6dbc(0x31e4),_0x5a6dbc(0x49c5),_0x5a6dbc(0x4c85),'Var','VIAddVersionKey',_0x5a6dbc(0x26af),_0x5a6dbc(0x28b4),'WindowIcon',_0x5a6dbc(0x30e5),_0x5a6dbc(0x6eb),_0x5a6dbc(0x3919),_0x5a6dbc(0x323c),_0x5a6dbc(0x4e79),_0x5a6dbc(0x2300),_0x5a6dbc(0x45b3),_0x5a6dbc(0xab5),_0x5a6dbc(0x2953)],'literal':[_0x5a6dbc(0x37bf),_0x5a6dbc(0xc36),_0x5a6dbc(0x4c14),'both',_0x5a6dbc(0x3335),'bzip2',_0x5a6dbc(0x2bff),_0x5a6dbc(0x2a55),_0x5a6dbc(0xbb8),_0x5a6dbc(0x162e),_0x5a6dbc(0x44b1),_0x5a6dbc(0x3984),'force',_0x5a6dbc(0x9c4),_0x5a6dbc(0x4b4a),'ifdiff','ifnewer','instfiles',_0x5a6dbc(0x526b),'leave',_0x5a6dbc(0x48eb),'license','listonly',_0x5a6dbc(0x4863),_0x5a6dbc(0x266a),_0x5a6dbc(0x28b),_0x5a6dbc(0x47d),_0x5a6dbc(0x384),_0x5a6dbc(0x2422),'on',_0x5a6dbc(0x1795),_0x5a6dbc(0x4957),_0x5a6dbc(0x4d50),
_0x5a6dbc(0x2e0d),_0x5a6dbc(0x1d5e),_0x5a6dbc(0x31c1),_0x5a6dbc(0x3f51),'textonly',_0x5a6dbc(0x279d),'true',_0x5a6dbc(0x422b),'un.components',_0x5a6dbc(0x4bc8),_0x5a6dbc(0x2dd5),_0x5a6dbc(0x38b5),_0x5a6dbc(0x10ba),_0x5a6dbc(0x3f4f),_0x5a6dbc(0x4b31),_0x5a6dbc(0x80d),_0x5a6dbc(0x3d0d),_0x5a6dbc(0x6dc),_0x5a6dbc(0xe69),_0x5a6dbc(0x1944)]},'contains':[_0x35654c[_0x5a6dbc(0x2bbe)],_0x35654c[_0x5a6dbc(0x23fe)],_0x35654c['COMMENT'](';','$',{'relevance':0x0}),_0x49a346,_0xd2644f,{'beginKeywords':_0x5a6dbc(0x4d16)},_0x97fc94,_0x40af5a,_0x2d4590,_0x2c274a,_0xa8c7ce,_0x52a9ec,{'className':_0x5a6dbc(0x20db),'begin':/\w+::\w+/},_0x35654c[_0x5a6dbc(0x30be)]]};};},0x3af:_0x5464e3=>{_0x5464e3['exports']=function(_0x1b8916){const _0x19f9f5=a0_0x11e7,_0x3908f1=/[a-zA-Z@][a-zA-Z0-9_]*/,_0x47d1e6={'$pattern':_0x3908f1,'keyword':[_0x19f9f5(0x499c),'@class',_0x19f9f5(0x2621),'@implementation']};return{'name':_0x19f9f5(0x2a8d),'aliases':['mm','objc','obj-c',_0x19f9f5(0x114f),_0x19f9f5(0x2f9a)],'keywords':{'variable.language':['this',_0x19f9f5(0x2cc)],'$pattern':_0x3908f1,'keyword':['while',_0x19f9f5(0x2bb9),'sizeof','typedef','const','struct',_0x19f9f5(0x3c19),_0x19f9f5(0x29d),_0x19f9f5(0x3512),_0x19f9f5(0x2c7c),_0x19f9f5(0x1c6c),'if','do','return',_0x19f9f5(0x139c),_0x19f9f5(0x44d8),'else',_0x19f9f5(0x4e10),_0x19f9f5(0x2068),'asm',_0x19f9f5(0x2e7e),_0x19f9f5(0x3d23),_0x19f9f5(0x49b4),'explicit',_0x19f9f5(0x2e71),_0x19f9f5(0x857),_0x19f9f5(0x16d9),'inline',_0x19f9f5(0x1aa2),'assign','readwrite',_0x19f9f5(0x4454),'@synchronized','id','typeof',_0x19f9f5(0x21ae),_0x19f9f5(0x3f5a),_0x19f9f5(0x2da2),_0x19f9f5(0x40c9),_0x19f9f5(0x7ae),_0x19f9f5(0x16c0),'in',_0x19f9f5(0x3ab5),_0x19f9f5(0x99f),'bycopy',_0x19f9f5(0x48e6),'oneway',_0x19f9f5(0xdba),_0x19f9f5(0x40d1),_0x19f9f5(0x3de2),'__autoreleasing',_0x19f9f5(0x4578),'@protected',_0x19f9f5(0x442c),_0x19f9f5(0xcda),'@property',_0x19f9f5(0x30c2),_0x19f9f5(0x2f7c),'@catch',_0x19f9f5(0x42d8),'@autoreleasepool',_0x19f9f5(0x5274),_0x19f9f5(0x19ac),_0x1
9f9f5(0x5041),_0x19f9f5(0x4d32),'@required',_0x19f9f5(0x2ee4),'@package',_0x19f9f5(0x38bf),_0x19f9f5(0x4cf),_0x19f9f5(0x224d),_0x19f9f5(0xb99),_0x19f9f5(0x3118),'__bridge_retained',_0x19f9f5(0x1230),_0x19f9f5(0x2333),_0x19f9f5(0x20c2),'__kindof',_0x19f9f5(0x32b2),_0x19f9f5(0x71f),_0x19f9f5(0x4766),'__FUNCTION__',_0x19f9f5(0x2082),_0x19f9f5(0x3879),'getter',_0x19f9f5(0x1365),_0x19f9f5(0x3c50),'unsafe_unretained',_0x19f9f5(0x216f),_0x19f9f5(0x3f6b),'null_unspecified','null_resettable',_0x19f9f5(0x1390),_0x19f9f5(0x2a9c),'NS_DESIGNATED_INITIALIZER',_0x19f9f5(0x9c1),'NS_REQUIRES_SUPER',_0x19f9f5(0x2713),_0x19f9f5(0x519c),'NS_AVAILABLE',_0x19f9f5(0x1ad4),'NS_ENUM',_0x19f9f5(0x3224),'NS_SWIFT_UNAVAILABLE',_0x19f9f5(0x127c),_0x19f9f5(0x4234),_0x19f9f5(0xbaa),_0x19f9f5(0x10c3),_0x19f9f5(0x1d58),'NS_DURING',_0x19f9f5(0x1499),'NS_ENDHANDLER',_0x19f9f5(0x11fc),_0x19f9f5(0x4fcc)],'literal':[_0x19f9f5(0x3984),_0x19f9f5(0x4022),_0x19f9f5(0x2119),_0x19f9f5(0x1088),_0x19f9f5(0x3e27),_0x19f9f5(0x1c96),'NO',_0x19f9f5(0xa45)],'built_in':[_0x19f9f5(0x290c),_0x19f9f5(0x3776),_0x19f9f5(0x32ef),_0x19f9f5(0x383b),'dispatch_once'],'type':[_0x19f9f5(0xc16),_0x19f9f5(0x1ab8),_0x19f9f5(0x373c),'unsigned',_0x19f9f5(0x3908),_0x19f9f5(0x4085),_0x19f9f5(0x324f),_0x19f9f5(0x5024),_0x19f9f5(0x480b),'unichar','void',_0x19f9f5(0x3ebd),'BOOL',_0x19f9f5(0xd7b),_0x19f9f5(0x3671)]},'illegal':'/,'end':/$/,'illegal':'\x5cn'},_0x1b8916[_0x19f9f5(0x2ae2)],_0x1b8916[_0x19f9f5(0x23fe)]]},{'className':_0x19f9f5(0x1390),'begin':'('+_0x47d1e6['keyword'][_0x19f9f5(0x3541)]('|')+_0x19f9f5(0x716),'end':/(\{|$)/,'excludeEnd':!0x0,'keywords':_0x47d1e6,'contains':[_0x1b8916[_0x19f9f5(0xb0e)]]},{'begin':'\x5c.'+_0x1b8916[_0x19f9f5(0x206e)],'relevance':0x0}]};};},0x6ff:_0x55e9e8=>{const _0x244f98=a0_0x11e7;_0x55e9e8[_0x244f98(0x474c)]=function(_0x6128ad){const 
_0x547ef1=_0x244f98;return{'name':'OCaml','aliases':['ml'],'keywords':{'$pattern':_0x547ef1(0x4525),'keyword':_0x547ef1(0x3360),'built_in':_0x547ef1(0x2939),'literal':'true\x20false'},'illegal':/\/\/|>>/,'contains':[{'className':_0x547ef1(0x2706),'begin':_0x547ef1(0xd57),'relevance':0x0},_0x6128ad[_0x547ef1(0x4e4f)](_0x547ef1(0x3b33),_0x547ef1(0x17ad),{'contains':[_0x547ef1(0x4454)]}),{'className':_0x547ef1(0x239b),'begin':_0x547ef1(0x495e)},{'className':_0x547ef1(0xcfc),'begin':_0x547ef1(0x45fb)},{'className':_0x547ef1(0xcfc),'begin':_0x547ef1(0x2013),'relevance':0x0},{'begin':'[a-z_]\x5cw*\x27[\x5cw\x27]*','relevance':0x0},_0x6128ad['inherit'](_0x6128ad[_0x547ef1(0xa4c)],{'className':_0x547ef1(0x2431),'relevance':0x0}),_0x6128ad[_0x547ef1(0x46a1)](_0x6128ad[_0x547ef1(0x291b)],{'illegal':null}),{'className':_0x547ef1(0x4a80),'begin':_0x547ef1(0x1695),'relevance':0x0},{'begin':/->/}]};};},0x1a10:_0x64537e=>{const _0x194ab7=a0_0x11e7;_0x64537e[_0x194ab7(0x474c)]=function(_0x8f097c){const 
_0x212c63=_0x194ab7,_0x262538={'className':_0x212c63(0x1357),'begin':'\x5c$(f[asn]|t|vp[rtd]|children)'},_0x1dbef4={'className':'number','begin':_0x212c63(0x185d),'relevance':0x0},_0x186b30=_0x8f097c[_0x212c63(0x46a1)](_0x8f097c['QUOTE_STRING_MODE'],{'illegal':null}),_0xbe042f={'className':_0x212c63(0x14b2),'beginKeywords':_0x212c63(0x5009),'end':/=|\{/,'contains':[{'className':_0x212c63(0xddd),'begin':'\x5c(','end':'\x5c)','contains':['self',_0x1dbef4,_0x186b30,_0x262538,{'className':_0x212c63(0x2706),'begin':_0x212c63(0x13e2)}]},_0x8f097c[_0x212c63(0xb0e)]]};return{'name':'OpenSCAD','aliases':[_0x212c63(0x5209)],'keywords':{'keyword':_0x212c63(0xe17),'literal':_0x212c63(0x2c58),'built_in':_0x212c63(0x2a28)},'contains':[_0x8f097c[_0x212c63(0x2ae2)],_0x8f097c['C_BLOCK_COMMENT_MODE'],_0x1dbef4,{'className':_0x212c63(0x5153),'keywords':{'keyword':_0x212c63(0x3fee)},'begin':_0x212c63(0x360c),'end':'>'},_0x186b30,_0x262538,{'begin':_0x212c63(0x2592),'relevance':0x0},_0xbe042f]};};},0xbd4:_0x3bf14f=>{const _0x5a4a01=a0_0x11e7;_0x3bf14f[_0x5a4a01(0x474c)]=function(_0x5afe3a){const 
_0x14f8dd=_0x5a4a01,_0x2015e0={'$pattern':/\.?\w+/,'keyword':_0x14f8dd(0x15f0)},_0x4be28b=_0x5afe3a[_0x14f8dd(0x4e4f)](/\{/,/\}/,{'relevance':0x0}),_0x13861c=_0x5afe3a[_0x14f8dd(0x4e4f)](_0x14f8dd(0x3b33),_0x14f8dd(0x17ad),{'relevance':0xa}),_0x1e4e17={'className':'string','begin':'\x27','end':'\x27','contains':[{'begin':'\x27\x27'}]},_0x55b239={'className':_0x14f8dd(0x2431),'begin':'(#\x5cd+)+'},_0x3b38b2={'beginKeywords':_0x14f8dd(0x1f3c),'end':_0x14f8dd(0x5038),'keywords':'function\x20constructor|10\x20destructor|10\x20procedure|10\x20method|10','contains':[_0x5afe3a[_0x14f8dd(0x46a1)](_0x5afe3a['TITLE_MODE'],{'scope':_0x14f8dd(0x20db)}),{'className':'params','begin':'\x5c(','end':'\x5c)','keywords':_0x2015e0,'contains':[_0x1e4e17,_0x55b239]},_0x4be28b,_0x13861c]};return{'name':'Oxygene','case_insensitive':!0x0,'keywords':_0x2015e0,'illegal':'(\x22|\x5c$[G-Zg-z]|\x5c/\x5c*||->)','contains':[_0x4be28b,_0x13861c,_0x5afe3a[_0x14f8dd(0x2ae2)],_0x1e4e17,_0x55b239,_0x5afe3a[_0x14f8dd(0x30be)],_0x3b38b2,{'scope':_0x14f8dd(0xa25),'match':/;/,'relevance':0x0}]};};},0x23cb:_0x4c0e8f=>{const _0x1ee7f0=a0_0x11e7;_0x4c0e8f[_0x1ee7f0(0x474c)]=function(_0x2a0eb9){const _0x452dd2=_0x1ee7f0,_0x1090a2=_0x2a0eb9[_0x452dd2(0x4e4f)](/\{/,/\}/,{'contains':[_0x452dd2(0x4454)]});return{'name':_0x452dd2(0x453f),'subLanguage':_0x452dd2(0x2655),'relevance':0x0,'contains':[_0x2a0eb9[_0x452dd2(0x4e4f)]('^#','$'),_0x2a0eb9['COMMENT'](/\^rem\{/,/\}/,{'relevance':0xa,'contains':[_0x1090a2]}),{'className':_0x452dd2(0x5153),'begin':_0x452dd2(0x5286),'relevance':0xa},{'className':_0x452dd2(0x4685),'begin':'@[\x5cw\x5c-]+\x5c[[\x5cw^;\x5c-]*\x5c](?:\x5c[[\x5cw^;\x5c-]*\x5c])?(?:.*)$'},{'className':_0x452dd2(0x3362),'begin':/\$\{?[\w\-.:]+\}?/},{'className':_0x452dd2(0x1357),'begin':/\^[\w\-.:]+/},{'className':_0x452dd2(0x4a80),'begin':_0x452dd2(0x249c)},_0x2a0eb9[_0x452dd2(0xd12)]]};};},0x3b2:_0x2b155b=>{_0x2b155b['exports']=function(_0x4204ac){const 
_0x50af20=a0_0x11e7,_0xa2ea8=_0x4204ac[_0x50af20(0x41d2)],_0x33ba2c=/[dualxmsipngr]{0,12}/,_0x5be7b3={'$pattern':/[\w.]+/,'keyword':[_0x50af20(0xbe0),_0x50af20(0x3165),_0x50af20(0x2f8e),'and',_0x50af20(0x41c2),'bind','binmode',_0x50af20(0x12bc),_0x50af20(0x4e10),_0x50af20(0x3722),_0x50af20(0x4772),_0x50af20(0x1cb7),_0x50af20(0x1036),_0x50af20(0x3b30),_0x50af20(0x44c3),_0x50af20(0x3baf),'chroot',_0x50af20(0x50d8),'closedir','connect',_0x50af20(0x16d9),_0x50af20(0x3935),'crypt','dbmclose',_0x50af20(0x21cc),_0x50af20(0x183b),'delete',_0x50af20(0x2032),'do',_0x50af20(0x3dd8),_0x50af20(0x2f9e),_0x50af20(0x3d4),_0x50af20(0x3e5b),_0x50af20(0x2ffc),_0x50af20(0x47ab),'endnetent',_0x50af20(0x1512),_0x50af20(0x2729),'endservent','eof',_0x50af20(0x2ff9),_0x50af20(0x198d),_0x50af20(0x449f),_0x50af20(0x4c7b),'exp',_0x50af20(0x7f8),_0x50af20(0x3c97),_0x50af20(0x2e36),'for',_0x50af20(0x4185),_0x50af20(0x12f8),'format',_0x50af20(0x3696),_0x50af20(0x261e),_0x50af20(0x341b),_0x50af20(0x4f30),_0x50af20(0xfb8),_0x50af20(0x1e55),'gethostbyname',_0x50af20(0x231d),_0x50af20(0x4dbc),_0x50af20(0x4f4c),_0x50af20(0x46b4),_0x50af20(0x2d86),_0x50af20(0x10bf),_0x50af20(0x51f0),_0x50af20(0x7cc),_0x50af20(0x127a),_0x50af20(0x1dd0),_0x50af20(0x3375),_0x50af20(0x9db),_0x50af20(0x2327),_0x50af20(0xa66),_0x50af20(0x1865),_0x50af20(0x3a90),'getservent',_0x50af20(0x21df),_0x50af20(0x3ad4),_0x50af20(0x4cc0),_0x50af20(0x3bca),_0x50af20(0x4fb5),'goto',_0x50af20(0x3e9a),'gt',_0x50af20(0x3536),'if',_0x50af20(0x3bb5),_0x50af20(0xc16),_0x50af20(0x1b41),_0x50af20(0x3541),_0x50af20(0x1ea9),_0x50af20(0x3c2d),'last','lc','lcfirst',_0x50af20(0x1b19),_0x50af20(0x4b32),_0x50af20(0x4452),'local',_0x50af20(0xe00),_0x50af20(0x20ff),_0x50af20(0x5138),'lt','ma',_0x50af20(0x4833),_0x50af20(0x2228),_0x50af20(0x35e4),_0x50af20(0x4f95),_0x50af20(0xbfc),_0x50af20(0x2e09),'my','ne',_0x50af20(0x3dc6),'no',_0x50af20(0xc1a),_0x50af20(0x3d82),_0x50af20(0x1795),_0x50af20(0x8bf),'or',_0x50af20(0x38b2),'our',_0x50af20(0x22d6),_0x50af20
(0x4bd0),_0x50af20(0x1877),_0x50af20(0x3d35),'pos',_0x50af20(0x4957),_0x50af20(0x32fe),'prototype',_0x50af20(0x1715),_0x50af20(0x156e),'qq',_0x50af20(0x3a33),'qw','qx',_0x50af20(0x4fa0),'read','readdir',_0x50af20(0xb44),_0x50af20(0x461e),'readpipe',_0x50af20(0x17d1),'redo',_0x50af20(0x21c3),_0x50af20(0x2022),_0x50af20(0x4031),_0x50af20(0x24ed),_0x50af20(0xdfd),'reverse','rewinddir','rindex',_0x50af20(0x4d1a),_0x50af20(0x20c),_0x50af20(0x45ed),'seek',_0x50af20(0x4143),_0x50af20(0x3fc9),'semctl',_0x50af20(0x192c),_0x50af20(0x1e05),_0x50af20(0x3710),_0x50af20(0x2bab),'sethostent',_0x50af20(0xde5),'setpgrp',_0x50af20(0x4e87),_0x50af20(0x37a2),'setpwent',_0x50af20(0x1187),_0x50af20(0x2fd4),_0x50af20(0x34fe),_0x50af20(0x2652),_0x50af20(0x219e),_0x50af20(0x7d7),_0x50af20(0xfb4),_0x50af20(0x3e61),_0x50af20(0x2a37),_0x50af20(0x4b2e),_0x50af20(0x6d2),_0x50af20(0x18f8),_0x50af20(0x4c33),'splice',_0x50af20(0x1117),_0x50af20(0xd87),_0x50af20(0x5011),_0x50af20(0x2a51),_0x50af20(0x8f5),_0x50af20(0x206b),_0x50af20(0x3ed2),_0x50af20(0x217c),'substr',_0x50af20(0x3a59),'syscall','sysopen','sysread',_0x50af20(0x4bc1),_0x50af20(0x2ec4),_0x50af20(0x278a),_0x50af20(0x1a90),_0x50af20(0x31de),_0x50af20(0x32d2),_0x50af20(0x3c31),'time',_0x50af20(0xc31),'tr',_0x50af20(0x4dbe),'uc','ucfirst',_0x50af20(0x4709),_0x50af20(0x1f88),'unless',_0x50af20(0x1e6f),_0x50af20(0x4f08),'unshift',_0x50af20(0x46e1),_0x50af20(0x30d6),_0x50af20(0x84a),'utime',_0x50af20(0x1fae),_0x50af20(0x4a05),'wait','waitpid',_0x50af20(0x5142),_0x50af20(0x4ab1),_0x50af20(0x191b),_0x50af20(0x552),_0x50af20(0x4c95),'x|0',_0x50af20(0x32a6),_0x50af20(0x4292)][_0x50af20(0x3541)]('\x20')},_0xcca0d6={'className':_0x50af20(0x2ad6),'begin':_0x50af20(0x3c30),'end':'\x5c}','keywords':_0x5be7b3},_0x13b2d6={'begin':/->\{/,'end':/\}/},_0x36cf37={'variants':[{'begin':/\$\d/},{'begin':_0xa2ea8['concat'](/[$%@](\^\w\b|#\w+(::\w+)*|\{\w+\}|\w+(::\w*)*)/,'(?![A-Za-z])(?![@$%])')},{'begin':/[$%@][^\s\w{]/,'relevance':0x0}]},_0x46a67d=[_0x4204ac[_
0x50af20(0x4a76)],_0xcca0d6,_0x36cf37],_0xa1f40b=[/!/,/\//,/\|/,/\?/,/'/,/"/,/#/],_0x5c1508=(_0x1b0e94,_0x19b781,_0x16eeab='\x5c1')=>{const _0x1ab234=_0x50af20,_0x257def='\x5c1'===_0x16eeab?_0x16eeab:_0xa2ea8['concat'](_0x16eeab,_0x19b781);return _0xa2ea8[_0x1ab234(0x1d1d)](_0xa2ea8[_0x1ab234(0x1d1d)](_0x1ab234(0xf54),_0x1b0e94,')'),_0x19b781,/(?:\\.|[^\\\/])*?/,_0x257def,/(?:\\.|[^\\\/])*?/,_0x16eeab,_0x33ba2c);},_0xc3c26c=(_0x4c31f2,_0x49e366,_0x3fd9b7)=>_0xa2ea8[_0x50af20(0x1d1d)](_0xa2ea8['concat']('(?:',_0x4c31f2,')'),_0x49e366,/(?:\\.|[^\\\/])*?/,_0x3fd9b7,_0x33ba2c),_0x4d5df0=[_0x36cf37,_0x4204ac[_0x50af20(0x2bbe)],_0x4204ac[_0x50af20(0x4e4f)](/^=\w/,/=cut/,{'endsWithParent':!0x0}),_0x13b2d6,{'className':_0x50af20(0x2431),'contains':_0x46a67d,'variants':[{'begin':_0x50af20(0x392c),'end':'\x5c)','relevance':0x5},{'begin':_0x50af20(0x3f45),'end':'\x5c]','relevance':0x5},{'begin':_0x50af20(0x3442),'end':'\x5c}','relevance':0x5},{'begin':_0x50af20(0x3c76),'end':'\x5c|','relevance':0x5},{'begin':_0x50af20(0x2197),'end':'>','relevance':0x5},{'begin':_0x50af20(0x2832),'end':'q','relevance':0x5},{'begin':'\x27','end':'\x27','contains':[_0x4204ac[_0x50af20(0x4a76)]]},{'begin':'\x22','end':'\x22'},{'begin':'`','end':'`','contains':[_0x4204ac[_0x50af20(0x4a76)]]},{'begin':/\{\w+\}/,'relevance':0x0},{'begin':'-?\x5cw+\x5cs*=>','relevance':0x0}]},{'className':_0x50af20(0x4a80),'begin':'(\x5cb0[0-7_]+)|(\x5cb0x[0-9a-fA-F_]+)|(\x5cb[1-9][0-9_]*(\x5c.[0-9_]+)?)|[0_]\x5cb','relevance':0x0},{'begin':_0x50af20(0x3ecd)+_0x4204ac[_0x50af20(0x49de)]+'|\x5cb(split|return|print|reverse|grep)\x5cb)\x5cs*','keywords':_0x50af20(0xa44),'relevance':0x0,'contains':[_0x4204ac[_0x50af20(0x2bbe)],{'className':_0x50af20(0x4d1d),'variants':[{'begin':_0x5c1508(_0x50af20(0x4401),_0xa2ea8[_0x50af20(0x583)](..._0xa1f40b,{'capture':!0x0}))},{'begin':_0x5c1508(_0x50af20(0x4401),'\x5c(','\x5c)')},{'begin':_0x5c1508(_0x50af20(0x4401),'\x5c[','\x5c]')},{'begin':_0x5c1508(_0x50af20(0x4401),'\x5c{','\x5c
}')}],'relevance':0x2},{'className':_0x50af20(0x4d1d),'variants':[{'begin':/(m|qr)\/\//,'relevance':0x0},{'begin':_0xc3c26c(_0x50af20(0x2cfa),/\//,/\//)},{'begin':_0xc3c26c(_0x50af20(0x1e43),_0xa2ea8[_0x50af20(0x583)](..._0xa1f40b,{'capture':!0x0}),/\1/)},{'begin':_0xc3c26c(_0x50af20(0x1e43),/\(/,/\)/)},{'begin':_0xc3c26c(_0x50af20(0x1e43),/\[/,/\]/)},{'begin':_0xc3c26c(_0x50af20(0x1e43),/\{/,/\}/)}]}]},{'className':_0x50af20(0x14b2),'beginKeywords':_0x50af20(0x217c),'end':'(\x5cs*\x5c(.*?\x5c))?[;{]','excludeEnd':!0x0,'relevance':0x5,'contains':[_0x4204ac[_0x50af20(0x2029)]]},{'begin':_0x50af20(0x44fd),'relevance':0x0},{'begin':_0x50af20(0x5021),'end':_0x50af20(0x1b59),'subLanguage':_0x50af20(0x35a3),'contains':[{'begin':_0x50af20(0x94a),'end':'$','className':_0x50af20(0x4645)}]}];return _0xcca0d6[_0x50af20(0x2b31)]=_0x4d5df0,_0x13b2d6[_0x50af20(0x2b31)]=_0x4d5df0,{'name':_0x50af20(0xd53),'aliases':['pl','pm'],'keywords':_0x5be7b3,'contains':_0x4d5df0};};},0x2285:_0x42edf7=>{const _0x3a4938=a0_0x11e7;_0x42edf7[_0x3a4938(0x474c)]=function(_0x3d1306){const _0x5e396a=_0x3a4938;return{'name':'Packet\x20Filter\x20config','aliases':[_0x5e396a(0x3d92)],'keywords':{'$pattern':/[a-z0-9_<>-]+/,'built_in':_0x5e396a(0x3e12),'keyword':_0x5e396a(0x4924),'literal':_0x5e396a(0x51bf)},'contains':[_0x3d1306[_0x5e396a(0x2bbe)],_0x3d1306[_0x5e396a(0x30be)],_0x3d1306[_0x5e396a(0x291b)],{'className':_0x5e396a(0x3362),'begin':/\$[\w\d#@][\w\d_]*/,'relevance':0x0},{'className':_0x5e396a(0x3362),'begin':/<(?!\/)/,'end':/>/}]};};},0x1adc:_0x276987=>{const _0x735ba8=a0_0x11e7;_0x276987[_0x735ba8(0x474c)]=function(_0x4301fc){const _0x964fe2=_0x735ba8,_0x5ef3ec=_0x4301fc[_0x964fe2(0x4e4f)]('--','$'),_0x4e57d4='\x5c$([a-zA-Z_]?|[a-zA-Z_][a-zA-Z_0-9]*)\x5c$',_0x387b7e=_0x964fe2(0x1b80),_0x597463=_0x387b7e[_0x964fe2(0x1b23)]()['split']('\x20')[_0x964fe2(0x4833)](function(_0x43ed63){const _0x1a8169=_0x964fe2;return 
_0x43ed63[_0x1a8169(0x1117)]('|')[0x0];})[_0x964fe2(0x3541)]('|'),_0x532a5f=_0x964fe2(0x3e0a)[_0x964fe2(0x1b23)]()[_0x964fe2(0x1117)]('\x20')[_0x964fe2(0x4833)](function(_0x5d211e){const _0x317375=_0x964fe2;return _0x5d211e[_0x317375(0x1117)]('|')[0x0];})[_0x964fe2(0x3541)]('|');return{'name':_0x964fe2(0x5085),'aliases':[_0x964fe2(0x2d06),'postgresql'],'supersetOf':_0x964fe2(0x124d),'case_insensitive':!0x0,'keywords':{'keyword':_0x964fe2(0x4013),'built_in':'CURRENT_TIME\x20CURRENT_TIMESTAMP\x20CURRENT_USER\x20CURRENT_CATALOG|10\x20CURRENT_DATE\x20LOCALTIME\x20LOCALTIMESTAMP\x20CURRENT_ROLE|10\x20CURRENT_SCHEMA|10\x20SESSION_USER\x20PUBLIC\x20FOUND\x20NEW\x20OLD\x20TG_NAME|10\x20TG_WHEN|10\x20TG_LEVEL|10\x20TG_OP|10\x20TG_RELID|10\x20TG_RELNAME|10\x20TG_TABLE_NAME|10\x20TG_TABLE_SCHEMA|10\x20TG_NARGS|10\x20TG_ARGV|10\x20TG_EVENT|10\x20TG_TAG|10\x20ROW_COUNT\x20RESULT_OID|10\x20PG_CONTEXT|10\x20RETURNED_SQLSTATE\x20COLUMN_NAME\x20CONSTRAINT_NAME\x20PG_DATATYPE_NAME|10\x20MESSAGE_TEXT\x20TABLE_NAME\x20SCHEMA_NAME\x20PG_EXCEPTION_DETAIL|10\x20PG_EXCEPTION_HINT|10\x20PG_EXCEPTION_CONTEXT|10\x20SQLSTATE\x20SQLERRM|10\x20SUCCESSFUL_COMPLETION\x20WARNING\x20DYNAMIC_RESULT_SETS_RETURNED\x20IMPLICIT_ZERO_BIT_PADDING\x20NULL_VALUE_ELIMINATED_IN_SET_FUNCTION\x20PRIVILEGE_NOT_GRANTED\x20PRIVILEGE_NOT_REVOKED\x20STRING_DATA_RIGHT_TRUNCATION\x20DEPRECATED_FEATURE\x20NO_DATA\x20NO_ADDITIONAL_DYNAMIC_RESULT_SETS_RETURNED\x20SQL_STATEMENT_NOT_YET_COMPLETE\x20CONNECTION_EXCEPTION\x20CONNECTION_DOES_NOT_EXIST\x20CONNECTION_FAILURE\x20SQLCLIENT_UNABLE_TO_ESTABLISH_SQLCONNECTION\x20SQLSERVER_REJECTED_ESTABLISHMENT_OF_SQLCONNECTION\x20TRANSACTION_RESOLUTION_UNKNOWN\x20PROTOCOL_VIOLATION\x20TRIGGERED_ACTION_EXCEPTION\x20FEATURE_NOT_SUPPORTED\x20INVALID_TRANSACTION_INITIATION\x20LOCATOR_EXCEPTION\x20INVALID_LOCATOR_SPECIFICATION\x20INVALID_GRANTOR\x20INVALID_GRANT_OPERATION\x20INVALID_ROLE_SPECIFICATION\x20DIAGNOSTICS_EXCEPTION\x20STACKED_DIAGNOSTICS_ACCESSED_WITHOUT_ACTIVE_HANDLER\x20CASE_
NOT_FOUND\x20CARDINALITY_VIOLATION\x20DATA_EXCEPTION\x20ARRAY_SUBSCRIPT_ERROR\x20CHARACTER_NOT_IN_REPERTOIRE\x20DATETIME_FIELD_OVERFLOW\x20DIVISION_BY_ZERO\x20ERROR_IN_ASSIGNMENT\x20ESCAPE_CHARACTER_CONFLICT\x20INDICATOR_OVERFLOW\x20INTERVAL_FIELD_OVERFLOW\x20INVALID_ARGUMENT_FOR_LOGARITHM\x20INVALID_ARGUMENT_FOR_NTILE_FUNCTION\x20INVALID_ARGUMENT_FOR_NTH_VALUE_FUNCTION\x20INVALID_ARGUMENT_FOR_POWER_FUNCTION\x20INVALID_ARGUMENT_FOR_WIDTH_BUCKET_FUNCTION\x20INVALID_CHARACTER_VALUE_FOR_CAST\x20INVALID_DATETIME_FORMAT\x20INVALID_ESCAPE_CHARACTER\x20INVALID_ESCAPE_OCTET\x20INVALID_ESCAPE_SEQUENCE\x20NONSTANDARD_USE_OF_ESCAPE_CHARACTER\x20INVALID_INDICATOR_PARAMETER_VALUE\x20INVALID_PARAMETER_VALUE\x20INVALID_REGULAR_EXPRESSION\x20INVALID_ROW_COUNT_IN_LIMIT_CLAUSE\x20INVALID_ROW_COUNT_IN_RESULT_OFFSET_CLAUSE\x20INVALID_TABLESAMPLE_ARGUMENT\x20INVALID_TABLESAMPLE_REPEAT\x20INVALID_TIME_ZONE_DISPLACEMENT_VALUE\x20INVALID_USE_OF_ESCAPE_CHARACTER\x20MOST_SPECIFIC_TYPE_MISMATCH\x20NULL_VALUE_NOT_ALLOWED\x20NULL_VALUE_NO_INDICATOR_PARAMETER\x20NUMERIC_VALUE_OUT_OF_RANGE\x20SEQUENCE_GENERATOR_LIMIT_EXCEEDED\x20STRING_DATA_LENGTH_MISMATCH\x20STRING_DATA_RIGHT_TRUNCATION\x20SUBSTRING_ERROR\x20TRIM_ERROR\x20UNTERMINATED_C_STRING\x20ZERO_LENGTH_CHARACTER_STRING\x20FLOATING_POINT_EXCEPTION\x20INVALID_TEXT_REPRESENTATION\x20INVALID_BINARY_REPRESENTATION\x20BAD_COPY_FILE_FORMAT\x20UNTRANSLATABLE_CHARACTER\x20NOT_AN_XML_DOCUMENT\x20INVALID_XML_DOCUMENT\x20INVALID_XML_CONTENT\x20INVALID_XML_COMMENT\x20INVALID_XML_PROCESSING_INSTRUCTION\x20INTEGRITY_CONSTRAINT_VIOLATION\x20RESTRICT_VIOLATION\x20NOT_NULL_VIOLATION\x20FOREIGN_KEY_VIOLATION\x20UNIQUE_VIOLATION\x20CHECK_VIOLATION\x20EXCLUSION_VIOLATION\x20INVALID_CURSOR_STATE\x20INVALID_TRANSACTION_STATE\x20ACTIVE_SQL_TRANSACTION\x20BRANCH_TRANSACTION_ALREADY_ACTIVE\x20HELD_CURSOR_REQUIRES_SAME_ISOLATION_LEVEL\x20INAPPROPRIATE_ACCESS_MODE_FOR_BRANCH_TRANSACTION\x20INAPPROPRIATE_ISOLATION_LEVEL_FOR_BRANCH_TRANSACTION\x20NO_ACTIVE_SQL_TRANSACT
ION_FOR_BRANCH_TRANSACTION\x20READ_ONLY_SQL_TRANSACTION\x20SCHEMA_AND_DATA_STATEMENT_MIXING_NOT_SUPPORTED\x20NO_ACTIVE_SQL_TRANSACTION\x20IN_FAILED_SQL_TRANSACTION\x20IDLE_IN_TRANSACTION_SESSION_TIMEOUT\x20INVALID_SQL_STATEMENT_NAME\x20TRIGGERED_DATA_CHANGE_VIOLATION\x20INVALID_AUTHORIZATION_SPECIFICATION\x20INVALID_PASSWORD\x20DEPENDENT_PRIVILEGE_DESCRIPTORS_STILL_EXIST\x20DEPENDENT_OBJECTS_STILL_EXIST\x20INVALID_TRANSACTION_TERMINATION\x20SQL_ROUTINE_EXCEPTION\x20FUNCTION_EXECUTED_NO_RETURN_STATEMENT\x20MODIFYING_SQL_DATA_NOT_PERMITTED\x20PROHIBITED_SQL_STATEMENT_ATTEMPTED\x20READING_SQL_DATA_NOT_PERMITTED\x20INVALID_CURSOR_NAME\x20EXTERNAL_ROUTINE_EXCEPTION\x20CONTAINING_SQL_NOT_PERMITTED\x20MODIFYING_SQL_DATA_NOT_PERMITTED\x20PROHIBITED_SQL_STATEMENT_ATTEMPTED\x20READING_SQL_DATA_NOT_PERMITTED\x20EXTERNAL_ROUTINE_INVOCATION_EXCEPTION\x20INVALID_SQLSTATE_RETURNED\x20NULL_VALUE_NOT_ALLOWED\x20TRIGGER_PROTOCOL_VIOLATED\x20SRF_PROTOCOL_VIOLATED\x20EVENT_TRIGGER_PROTOCOL_VIOLATED\x20SAVEPOINT_EXCEPTION\x20INVALID_SAVEPOINT_SPECIFICATION\x20INVALID_CATALOG_NAME\x20INVALID_SCHEMA_NAME\x20TRANSACTION_ROLLBACK\x20TRANSACTION_INTEGRITY_CONSTRAINT_VIOLATION\x20SERIALIZATION_FAILURE\x20STATEMENT_COMPLETION_UNKNOWN\x20DEADLOCK_DETECTED\x20SYNTAX_ERROR_OR_ACCESS_RULE_VIOLATION\x20SYNTAX_ERROR\x20INSUFFICIENT_PRIVILEGE\x20CANNOT_COERCE\x20GROUPING_ERROR\x20WINDOWING_ERROR\x20INVALID_RECURSION\x20INVALID_FOREIGN_KEY\x20INVALID_NAME\x20NAME_TOO_LONG\x20RESERVED_NAME\x20DATATYPE_MISMATCH\x20INDETERMINATE_DATATYPE\x20COLLATION_MISMATCH\x20INDETERMINATE_COLLATION\x20WRONG_OBJECT_TYPE\x20GENERATED_ALWAYS\x20UNDEFINED_COLUMN\x20UNDEFINED_FUNCTION\x20UNDEFINED_TABLE\x20UNDEFINED_PARAMETER\x20UNDEFINED_OBJECT\x20DUPLICATE_COLUMN\x20DUPLICATE_CURSOR\x20DUPLICATE_DATABASE\x20DUPLICATE_FUNCTION\x20DUPLICATE_PREPARED_STATEMENT\x20DUPLICATE_SCHEMA\x20DUPLICATE_TABLE\x20DUPLICATE_ALIAS\x20DUPLICATE_OBJECT\x20AMBIGUOUS_COLUMN\x20AMBIGUOUS_FUNCTION\x20AMBIGUOUS_PARAMETER\x20AMBIGUOUS_ALIAS\x20
INVALID_COLUMN_REFERENCE\x20INVALID_COLUMN_DEFINITION\x20INVALID_CURSOR_DEFINITION\x20INVALID_DATABASE_DEFINITION\x20INVALID_FUNCTION_DEFINITION\x20INVALID_PREPARED_STATEMENT_DEFINITION\x20INVALID_SCHEMA_DEFINITION\x20INVALID_TABLE_DEFINITION\x20INVALID_OBJECT_DEFINITION\x20WITH_CHECK_OPTION_VIOLATION\x20INSUFFICIENT_RESOURCES\x20DISK_FULL\x20OUT_OF_MEMORY\x20TOO_MANY_CONNECTIONS\x20CONFIGURATION_LIMIT_EXCEEDED\x20PROGRAM_LIMIT_EXCEEDED\x20STATEMENT_TOO_COMPLEX\x20TOO_MANY_COLUMNS\x20TOO_MANY_ARGUMENTS\x20OBJECT_NOT_IN_PREREQUISITE_STATE\x20OBJECT_IN_USE\x20CANT_CHANGE_RUNTIME_PARAM\x20LOCK_NOT_AVAILABLE\x20OPERATOR_INTERVENTION\x20QUERY_CANCELED\x20ADMIN_SHUTDOWN\x20CRASH_SHUTDOWN\x20CANNOT_CONNECT_NOW\x20DATABASE_DROPPED\x20SYSTEM_ERROR\x20IO_ERROR\x20UNDEFINED_FILE\x20DUPLICATE_FILE\x20SNAPSHOT_TOO_OLD\x20CONFIG_FILE_ERROR\x20LOCK_FILE_EXISTS\x20FDW_ERROR\x20FDW_COLUMN_NAME_NOT_FOUND\x20FDW_DYNAMIC_PARAMETER_VALUE_NEEDED\x20FDW_FUNCTION_SEQUENCE_ERROR\x20FDW_INCONSISTENT_DESCRIPTOR_INFORMATION\x20FDW_INVALID_ATTRIBUTE_VALUE\x20FDW_INVALID_COLUMN_NAME\x20FDW_INVALID_COLUMN_NUMBER\x20FDW_INVALID_DATA_TYPE\x20FDW_INVALID_DATA_TYPE_DESCRIPTORS\x20FDW_INVALID_DESCRIPTOR_FIELD_IDENTIFIER\x20FDW_INVALID_HANDLE\x20FDW_INVALID_OPTION_INDEX\x20FDW_INVALID_OPTION_NAME\x20FDW_INVALID_STRING_LENGTH_OR_BUFFER_LENGTH\x20FDW_INVALID_STRING_FORMAT\x20FDW_INVALID_USE_OF_NULL_POINTER\x20FDW_TOO_MANY_HANDLES\x20FDW_OUT_OF_MEMORY\x20FDW_NO_SCHEMAS\x20FDW_OPTION_NAME_NOT_FOUND\x20FDW_REPLY_HANDLE\x20FDW_SCHEMA_NOT_FOUND\x20FDW_TABLE_NOT_FOUND\x20FDW_UNABLE_TO_CREATE_EXECUTION\x20FDW_UNABLE_TO_CREATE_REPLY\x20FDW_UNABLE_TO_ESTABLISH_CONNECTION\x20PLPGSQL_ERROR\x20RAISE_EXCEPTION\x20NO_DATA_FOUND\x20TOO_MANY_ROWS\x20ASSERT_FAILURE\x20INTERNAL_ERROR\x20DATA_CORRUPTED\x20INDEX_CORRUPTED\x20'},'illegal':/:==|\W\s*\(\*|(^|\s)\$[a-z]|\{\{|[a-z]:\s*$|\.\.\.|TO:|DO:/,'contains':[{'className':'keyword','variants':[{'begin':/\bTEXT\s*SEARCH\b/},{'begin':/\b(PRIMARY|FOREIGN|FOR(\s+NO)?)\s+KEY\b/}
,{'begin':/\bPARALLEL\s+(UNSAFE|RESTRICTED|SAFE)\b/},{'begin':/\bSTORAGE\s+(PLAIN|EXTERNAL|EXTENDED|MAIN)\b/},{'begin':/\bMATCH\s+(FULL|PARTIAL|SIMPLE)\b/},{'begin':/\bNULLS\s+(FIRST|LAST)\b/},{'begin':/\bEVENT\s+TRIGGER\b/},{'begin':/\b(MAPPING|OR)\s+REPLACE\b/},{'begin':/\b(FROM|TO)\s+(PROGRAM|STDIN|STDOUT)\b/},{'begin':/\b(SHARE|EXCLUSIVE)\s+MODE\b/},{'begin':/\b(LEFT|RIGHT)\s+(OUTER\s+)?JOIN\b/},{'begin':/\b(FETCH|MOVE)\s+(NEXT|PRIOR|FIRST|LAST|ABSOLUTE|RELATIVE|FORWARD|BACKWARD)\b/},{'begin':/\bPRESERVE\s+ROWS\b/},{'begin':/\bDISCARD\s+PLANS\b/},{'begin':/\bREFERENCING\s+(OLD|NEW)\b/},{'begin':/\bSKIP\s+LOCKED\b/},{'begin':/\bGROUPING\s+SETS\b/},{'begin':/\b(BINARY|INSENSITIVE|SCROLL|NO\s+SCROLL)\s+(CURSOR|FOR)\b/},{'begin':/\b(WITH|WITHOUT)\s+HOLD\b/},{'begin':/\bWITH\s+(CASCADED|LOCAL)\s+CHECK\s+OPTION\b/},{'begin':/\bEXCLUDE\s+(TIES|NO\s+OTHERS)\b/},{'begin':/\bFORMAT\s+(TEXT|XML|JSON|YAML)\b/},{'begin':/\bSET\s+((SESSION|LOCAL)\s+)?NAMES\b/},{'begin':/\bIS\s+(NOT\s+)?UNKNOWN\b/},{'begin':/\bSECURITY\s+LABEL\b/},{'begin':/\bSTANDALONE\s+(YES|NO|NO\s+VALUE)\b/},{'begin':/\bWITH\s+(NO\s+)?DATA\b/},{'begin':/\b(FOREIGN|SET)\s+DATA\b/},{'begin':/\bSET\s+(CATALOG|CONSTRAINTS)\b/},{'begin':/\b(WITH|FOR)\s+ORDINALITY\b/},{'begin':/\bIS\s+(NOT\s+)?DOCUMENT\b/},{'begin':/\bXML\s+OPTION\s+(DOCUMENT|CONTENT)\b/},{'begin':/\b(STRIP|PRESERVE)\s+WHITESPACE\b/},{'begin':/\bNO\s+(ACTION|MAXVALUE|MINVALUE)\b/},{'begin':/\bPARTITION\s+BY\s+(RANGE|LIST|HASH)\b/},{'begin':/\bAT\s+TIME\s+ZONE\b/},{'begin':/\bGRANTED\s+BY\b/},{'begin':/\bRETURN\s+(QUERY|NEXT)\b/},{'begin':/\b(ATTACH|DETACH)\s+PARTITION\b/},{'begin':/\bFORCE\s+ROW\s+LEVEL\s+SECURITY\b/},{'begin':/\b(INCLUDING|EXCLUDING)\s+(COMMENTS|CONSTRAINTS|DEFAULTS|IDENTITY|INDEXES|STATISTICS|STORAGE|ALL)\b/},{'begin':/\bAS\s+(ASSIGNMENT|IMPLICIT|PERMISSIVE|RESTRICTIVE|ENUM|RANGE)\b/}]},{'begin':/\b(FORMAT|FAMILY|VERSION)\s*\(/},{'begin':/\bINCLUDE\s*\(/,'keywords':_0x964fe2(0x51c3)},{'begin':/\bRANGE(?!\s*(BETWEEN|UNBOUNDED|C
URRENT|[-0-9]+))/},{'begin':/\b(VERSION|OWNER|TEMPLATE|TABLESPACE|CONNECTION\s+LIMIT|PROCEDURE|RESTRICT|JOIN|PARSER|COPY|START|END|COLLATION|INPUT|ANALYZE|STORAGE|LIKE|DEFAULT|DELIMITER|ENCODING|COLUMN|CONSTRAINT|TABLE|SCHEMA)\s*=/},{'begin':/\b(PG_\w+?|HAS_[A-Z_]+_PRIVILEGE)\b/,'relevance':0xa},{'begin':/\bEXTRACT\s*\(/,'end':/\bFROM\b/,'returnEnd':!0x0,'keywords':{'type':_0x964fe2(0x8d9)}},{'begin':/\b(XMLELEMENT|XMLPI)\s*\(\s*NAME/,'keywords':{'keyword':_0x964fe2(0x64a)}},{'begin':/\b(XMLPARSE|XMLSERIALIZE)\s*\(\s*(DOCUMENT|CONTENT)/,'keywords':{'keyword':'DOCUMENT\x20CONTENT'}},{'beginKeywords':_0x964fe2(0x3d29),'end':_0x4301fc['C_NUMBER_RE'],'returnEnd':!0x0,'keywords':'BY\x20CACHE\x20INCREMENT\x20MAXVALUE\x20MINVALUE'},{'className':_0x964fe2(0xcfc),'begin':/\b(WITH|WITHOUT)\s+TIME\s+ZONE\b/},{'className':_0x964fe2(0xcfc),'begin':/\bINTERVAL\s+(YEAR|MONTH|DAY|HOUR|MINUTE|SECOND)(\s+TO\s+(MONTH|HOUR|MINUTE|SECOND))?\b/},{'begin':/\bRETURNS\s+(LANGUAGE_HANDLER|TRIGGER|EVENT_TRIGGER|FDW_HANDLER|INDEX_AM_HANDLER|TSM_HANDLER)\b/,'keywords':{'keyword':_0x964fe2(0x10fd),'type':_0x964fe2(0x1caa)}},{'begin':_0x964fe2(0x4cd5)+_0x532a5f+_0x964fe2(0x1138)},{'begin':_0x964fe2(0x1af1)+_0x597463+_0x964fe2(0x716)},{'begin':_0x964fe2(0x4cd5)+_0x597463+_0x964fe2(0x322a),'keywords':{'keyword':_0x964fe2(0x97a),'type':_0x387b7e[_0x964fe2(0x741)](_0x964fe2(0x3336),'')}},{'className':'type','begin':_0x964fe2(0x4cd5)+_0x597463+_0x964fe2(0x716)},{'className':_0x964fe2(0x2431),'begin':'\x27','end':'\x27','contains':[{'begin':'\x27\x27'}]},{'className':_0x964fe2(0x2431),'begin':_0x964fe2(0xc81),'end':'\x27','contains':[{'begin':'\x5c\x5c.'}],'relevance':0xa},_0x4301fc[_0x964fe2(0x453e)]({'begin':_0x4e57d4,'end':_0x4e57d4,'contains':[{'subLanguage':[_0x964fe2(0x288c),_0x964fe2(0xf50),_0x964fe2(0x1304),'tcl','r','lua',_0x964fe2(0x39e1),_0x964fe2(0x3521),_0x964fe2(0x4430),_0x964fe2(0x3be4),'scheme','xml',_0x964fe2(0x3289)],'endsWithParent':!0x0}]}),{'begin':'\x22','end':'\x22','contains':[{
'begin':'\x22\x22'}]},_0x4301fc[_0x964fe2(0xd12)],_0x4301fc['C_BLOCK_COMMENT_MODE'],_0x5ef3ec,{'className':_0x964fe2(0x5153),'variants':[{'begin':'%(ROW)?TYPE','relevance':0xa},{'begin':_0x964fe2(0x40a)},{'begin':_0x964fe2(0xd5d),'end':'$'}]},{'className':_0x964fe2(0x239b),'begin':_0x964fe2(0x3173),'relevance':0xa}]};};},0x6be:_0x4a9b5f=>{const _0x51b704=a0_0x11e7;_0x4a9b5f[_0x51b704(0x474c)]=function(_0x53c4ab){const _0x24b5a4=_0x51b704;return{'name':_0x24b5a4(0x13c0),'subLanguage':_0x24b5a4(0x2655),'contains':[{'begin':/<\?(php|=)?/,'end':/\?>/,'subLanguage':_0x24b5a4(0x3521),'contains':[{'begin':_0x24b5a4(0x4f94),'end':_0x24b5a4(0x1820),'skip':!0x0},{'begin':'b\x22','end':'\x22','skip':!0x0},{'begin':'b\x27','end':'\x27','skip':!0x0},_0x53c4ab['inherit'](_0x53c4ab[_0x24b5a4(0xa4c)],{'illegal':null,'className':null,'contains':null,'skip':!0x0}),_0x53c4ab[_0x24b5a4(0x46a1)](_0x53c4ab[_0x24b5a4(0x291b)],{'illegal':null,'className':null,'contains':null,'skip':!0x0})]}]};};},0xc27:_0x458c44=>{_0x458c44['exports']=function(_0x395111){const _0x213ca0=a0_0x11e7,_0x3d097b=_0x395111[_0x213ca0(0x41d2)],_0x38e956=/(?![A-Za-z0-9])(?![$])/,_0x3d5a36=_0x3d097b['concat'](/[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/,_0x38e956),_0x431f0a=_0x3d097b[_0x213ca0(0x1d1d)](/(\\?[A-Z][a-z0-9_\x7f-\xff]+|\\?[A-Z]+(?=[A-Z][a-z0-9_\x7f-\xff])){1,}/,_0x38e956),_0x4be331={'scope':_0x213ca0(0x3362),'match':_0x213ca0(0x4ac)+_0x3d5a36},_0x3ea183={'scope':_0x213ca0(0x2ad6),'variants':[{'begin':/\$\w+/},{'begin':/\{\$/,'end':/\}/}]},_0x2800f5=_0x395111['inherit'](_0x395111[_0x213ca0(0xa4c)],{'illegal':null}),_0x4ba0f2='[\x20\x09\x0a]',_0x4b1dd7={'scope':_0x213ca0(0x2431),'variants':[_0x395111[_0x213ca0(0x46a1)](_0x395111[_0x213ca0(0x291b)],{'illegal':null,'contains':_0x395111[_0x213ca0(0x291b)][_0x213ca0(0x2b31)][_0x213ca0(0x1d1d)](_0x3ea183)}),_0x2800f5,{'begin':/<<<[ \t]*(?:(\w+)|"(\w+)")\n/,'end':/[ 
\t]*(\w+)\b/,'contains':_0x395111[_0x213ca0(0x291b)][_0x213ca0(0x2b31)][_0x213ca0(0x1d1d)](_0x3ea183),'on:begin':(_0x48ea93,_0x4eda06)=>{const _0x312bea=_0x213ca0;_0x4eda06[_0x312bea(0x5139)]['_beginMatch']=_0x48ea93[0x1]||_0x48ea93[0x2];},'on:end':(_0x45d641,_0x585c0b)=>{_0x585c0b['data']['_beginMatch']!==_0x45d641[0x1]&&_0x585c0b['ignoreMatch']();}},_0x395111[_0x213ca0(0x453e)]({'begin':/<<<[ \t]*'(\w+)'\n/,'end':/[ \t]*(\w+)\b/})]},_0x3b65d0={'scope':_0x213ca0(0x4a80),'variants':[{'begin':_0x213ca0(0x3559)},{'begin':_0x213ca0(0x3b50)},{'begin':_0x213ca0(0x44de)},{'begin':'(?:\x5cb\x5cd+(?:_\x5cd+)*(\x5c.(?:\x5cd+(?:_\x5cd+)*))?|\x5cB\x5c.\x5cd+)(?:[eE][+-]?\x5cd+)?'}],'relevance':0x0},_0x2a41ab=[_0x213ca0(0x3984),'null','true'],_0x266051=[_0x213ca0(0x2940),'__DIR__',_0x213ca0(0x4583),_0x213ca0(0x2bfe),_0x213ca0(0x3a7c),_0x213ca0(0x3088),_0x213ca0(0x4558),_0x213ca0(0x4c1),_0x213ca0(0x304f),'die',_0x213ca0(0x4978),_0x213ca0(0x4c7b),_0x213ca0(0x478e),_0x213ca0(0x1853),_0x213ca0(0x4957),_0x213ca0(0x4031),'require_once',_0x213ca0(0x26f6),_0x213ca0(0x3027),'and','as',_0x213ca0(0x4053),'bool',_0x213ca0(0x1e8d),_0x213ca0(0x4e10),_0x213ca0(0xdef),_0x213ca0(0x2e7e),_0x213ca0(0x31a3),_0x213ca0(0x1390),_0x213ca0(0x150c),_0x213ca0(0xc01),_0x213ca0(0x16d9),_0x213ca0(0x45e8),_0x213ca0(0x3d23),'do',_0x213ca0(0x5024),_0x213ca0(0x3d4),_0x213ca0(0x3790),_0x213ca0(0x168c),_0x213ca0(0x517b),'endfor','endforeach',_0x213ca0(0x39b),'endswitch',_0x213ca0(0x1f12),_0x213ca0(0x44d8),_0x213ca0(0x2ff9),_0x213ca0(0x4428),'final',_0x213ca0(0x37b2),'float','for','foreach',_0x213ca0(0x27e6),'global','goto','if',_0x213ca0(0x6c3),_0x213ca0(0xf3c),_0x213ca0(0x3f4d),_0x213ca0(0xc16),_0x213ca0(0x410f),_0x213ca0(0x321b),_0x213ca0(0x3f3b),_0x213ca0(0x518c),_0x213ca0(0x144e),_0x213ca0(0x1fe3),_0x213ca0(0x2418),'new',_0x213ca0(0x1d3d),_0x213ca0(0x20c7),'or','private',_0x213ca0(0xc14),_0x213ca0(0x39ce),_0x213ca0(0x1aa2),_0x213ca0(0x47f6),_0x213ca0(0xdfd),_0x213ca0(0x2431),'switch',_0x213ca0(0x383),'trait',
_0x213ca0(0x422b),_0x213ca0(0x4479),_0x213ca0(0x84a),_0x213ca0(0x469d),_0x213ca0(0x27d6),_0x213ca0(0x552),_0x213ca0(0x32a6),_0x213ca0(0x5075)],_0xb35303=[_0x213ca0(0x2816),'AppendIterator',_0x213ca0(0x15ba),_0x213ca0(0x3d2d),_0x213ca0(0x26d9),_0x213ca0(0x10a3),_0x213ca0(0x4670),_0x213ca0(0xeff),'BadMethodCallException',_0x213ca0(0x3ee8),_0x213ca0(0x22f0),_0x213ca0(0x120d),_0x213ca0(0x2df3),'DirectoryIterator',_0x213ca0(0x1520),_0x213ca0(0x4222),_0x213ca0(0x422a),_0x213ca0(0x8b3),_0x213ca0(0x168e),_0x213ca0(0x1486),'FilterIterator',_0x213ca0(0x1650),_0x213ca0(0x35eb),_0x213ca0(0xe3e),'IteratorIterator',_0x213ca0(0x12a6),_0x213ca0(0x2862),_0x213ca0(0x1725),_0x213ca0(0x1970),_0x213ca0(0x1884),_0x213ca0(0xa7f),_0x213ca0(0x5db),'OuterIterator',_0x213ca0(0x169f),_0x213ca0(0xb1d),_0x213ca0(0x186a),_0x213ca0(0x47bf),'RecursiveArrayIterator',_0x213ca0(0x3e5f),_0x213ca0(0x3ce4),_0x213ca0(0x1fe9),'RecursiveFilterIterator',_0x213ca0(0x16ca),_0x213ca0(0x4773),_0x213ca0(0x326d),_0x213ca0(0x2d95),'RegexIterator',_0x213ca0(0x490a),_0x213ca0(0x1f24),'SplDoublyLinkedList','SplFileInfo','SplFileObject',_0x213ca0(0x32e),_0x213ca0(0x2824),'SplMaxHeap',_0x213ca0(0x175d),_0x213ca0(0x2612),'SplObserver',_0x213ca0(0x1772),_0x213ca0(0x12ac),_0x213ca0(0x4747),_0x213ca0(0x3c43),'SplTempFileObject',_0x213ca0(0x1a6c),_0x213ca0(0x1950),_0x213ca0(0x1e4c),_0x213ca0(0x425),_0x213ca0(0x24f4),_0x213ca0(0x31dd),_0x213ca0(0x1e02),'Fiber',_0x213ca0(0x2122),_0x213ca0(0x45eb),_0x213ca0(0x50f1),_0x213ca0(0xae2),'Stringable','Throwable',_0x213ca0(0x4d6d),_0x213ca0(0x2d54),_0x213ca0(0x1158),'WeakMap',_0x213ca0(0x2fc7),_0x213ca0(0x1a60),_0x213ca0(0x46f8),_0x213ca0(0x8c3),_0x213ca0(0x4454),_0x213ca0(0x2c7c),_0x213ca0(0x8cd)],_0x3adaf2={'keyword':_0x266051,'literal':(_0x5a23fe=>{const _0x2a7413=[];return _0x5a23fe['forEach'](_0x541095=>{const 
_0x19eaf4=a0_0x11e7;_0x2a7413[_0x19eaf4(0x1715)](_0x541095),_0x541095['toLowerCase']()===_0x541095?_0x2a7413[_0x19eaf4(0x1715)](_0x541095[_0x19eaf4(0x44ff)]()):_0x2a7413['push'](_0x541095[_0x19eaf4(0x6e8)]());}),_0x2a7413;})(_0x2a41ab),'built_in':_0xb35303},_0x480d97=_0x57c975=>_0x57c975['map'](_0x325571=>_0x325571['replace'](/\|\d+$/,'')),_0x29cdd1={'variants':[{'match':[/new/,_0x3d097b[_0x213ca0(0x1d1d)](_0x4ba0f2,'+'),_0x3d097b[_0x213ca0(0x1d1d)](_0x213ca0(0x2d75),_0x480d97(_0xb35303)[_0x213ca0(0x3541)](_0x213ca0(0xfab)),'\x5cb)'),_0x431f0a],'scope':{0x1:_0x213ca0(0x1357),0x4:'title.class'}}]},_0x2cd92b=_0x3d097b[_0x213ca0(0x1d1d)](_0x3d5a36,'\x5cb(?!\x5c()'),_0x457398={'variants':[{'match':[_0x3d097b[_0x213ca0(0x1d1d)](/::/,_0x3d097b['lookahead'](/(?!class\b)/)),_0x2cd92b],'scope':{0x2:_0x213ca0(0x41f7)}},{'match':[/::/,/class/],'scope':{0x2:'variable.language'}},{'match':[_0x431f0a,_0x3d097b[_0x213ca0(0x1d1d)](/::/,_0x3d097b[_0x213ca0(0x3296)](/(?!class\b)/)),_0x2cd92b],'scope':{0x1:_0x213ca0(0x19e4),0x3:_0x213ca0(0x41f7)}},{'match':[_0x431f0a,_0x3d097b[_0x213ca0(0x1d1d)]('::',_0x3d097b['lookahead'](/(?!class\b)/))],'scope':{0x1:_0x213ca0(0x19e4)}},{'match':[_0x431f0a,/::/,/class/],'scope':{0x1:_0x213ca0(0x19e4),0x3:_0x213ca0(0xd71)}}]},_0x182e17={'scope':_0x213ca0(0x431d),'match':_0x3d097b[_0x213ca0(0x1d1d)](_0x3d5a36,_0x3d097b[_0x213ca0(0x3296)](':'),_0x3d097b[_0x213ca0(0x3296)](/(?!::)/))},_0x11493c={'relevance':0x0,'begin':/\(/,'end':/\)/,'keywords':_0x3adaf2,'contains':[_0x182e17,_0x4be331,_0x457398,_0x395111[_0x213ca0(0x23fe)],_0x4b1dd7,_0x3b65d0,_0x29cdd1]},_0x2dcaf3={'relevance':0x0,'match':[/\b/,_0x3d097b['concat'](_0x213ca0(0x4017),_0x480d97(_0x266051)['join'](_0x213ca0(0xfab)),'|',_0x480d97(_0xb35303)[_0x213ca0(0x3541)](_0x213ca0(0xfab)),'\x5cb)'),_0x3d5a36,_0x3d097b[_0x213ca0(0x1d1d)](_0x4ba0f2,'*'),_0x3d097b[_0x213ca0(0x3296)](/(?=\()/)],'scope':{0x3:'title.function.invoke'},'contains':[_0x11493c]};_0x11493c[_0x213ca0(0x2b31)][_0x213ca0(0x1715)](_0
x2dcaf3);const _0x3e8f67=[_0x182e17,_0x457398,_0x395111[_0x213ca0(0x23fe)],_0x4b1dd7,_0x3b65d0,_0x29cdd1];return{'case_insensitive':!0x1,'keywords':_0x3adaf2,'contains':[{'begin':_0x3d097b[_0x213ca0(0x1d1d)](/#\[\s*/,_0x431f0a),'beginScope':_0x213ca0(0x5153),'end':/]/,'endScope':'meta','keywords':{'literal':_0x2a41ab,'keyword':[_0x213ca0(0x4321),_0x213ca0(0x26f6)]},'contains':[{'begin':/\[/,'end':/]/,'keywords':{'literal':_0x2a41ab,'keyword':[_0x213ca0(0x4321),_0x213ca0(0x26f6)]},'contains':['self',..._0x3e8f67]},..._0x3e8f67,{'scope':_0x213ca0(0x5153),'match':_0x431f0a}]},_0x395111[_0x213ca0(0x2bbe)],_0x395111[_0x213ca0(0x4e4f)]('//','$'),_0x395111[_0x213ca0(0x4e4f)](_0x213ca0(0x4f94),_0x213ca0(0x1820),{'contains':[{'scope':_0x213ca0(0x4593),'match':_0x213ca0(0x4d18)}]}),{'match':/__halt_compiler\(\);/,'keywords':_0x213ca0(0x18d4),'starts':{'scope':_0x213ca0(0x4645),'end':_0x395111[_0x213ca0(0x1640)],'contains':[{'match':/\?>/,'scope':_0x213ca0(0x5153),'endsParent':!0x0}]}},{'scope':_0x213ca0(0x5153),'variants':[{'begin':/<\?php/,'relevance':0xa},{'begin':/<\?=/},{'begin':/<\?/,'relevance':0.1},{'begin':/\?>/}]},{'scope':_0x213ca0(0xd71),'match':/\$this\b/},_0x4be331,_0x2dcaf3,_0x457398,{'match':[/const/,/\s/,_0x3d5a36],'scope':{0x1:'keyword',0x3:'variable.constant'}},_0x29cdd1,{'scope':_0x213ca0(0x14b2),'relevance':0x0,'beginKeywords':_0x213ca0(0x957),'end':/[;{]/,'excludeEnd':!0x0,'illegal':_0x213ca0(0x11c5),'contains':[{'beginKeywords':_0x213ca0(0x84a)},_0x395111[_0x213ca0(0xb0e)],{'begin':'=>','endsParent':!0x0},{'scope':_0x213ca0(0xddd),'begin':'\x5c(','end':'\x5c)','excludeBegin':!0x0,'excludeEnd':!0x0,'keywords':_0x3adaf2,'contains':[_0x213ca0(0x4454),_0x4be331,_0x457398,_0x395111[_0x213ca0(0x23fe)],_0x4b1dd7,_0x3b65d0]}]},{'scope':_0x213ca0(0x1390),'variants':[{'beginKeywords':_0x213ca0(0x44d8),'illegal':/[($"]/},{'beginKeywords':_0x213ca0(0x3117),'illegal':/[:($"]/}],'relevance':0x0,'end':/\{/,'excludeEnd':!0x0,'contains':[{'beginKeywords':_0x213ca0(0x4c1c
)},_0x395111[_0x213ca0(0xb0e)]]},{'beginKeywords':_0x213ca0(0x37f7),'relevance':0x0,'end':';','illegal':/[.']/,'contains':[_0x395111[_0x213ca0(0x46a1)](_0x395111[_0x213ca0(0xb0e)],{'scope':_0x213ca0(0x19e4)})]},{'beginKeywords':_0x213ca0(0x84a),'relevance':0x0,'end':';','contains':[{'match':/\b(as|const|function)\b/,'scope':'keyword'},_0x395111[_0x213ca0(0xb0e)]]},_0x4b1dd7,_0x3b65d0]};};},0x2350:_0x515acb=>{const _0x380bac=a0_0x11e7;_0x515acb[_0x380bac(0x474c)]=function(_0x4ef8d4){const _0x423d52=_0x380bac;return{'name':_0x423d52(0x14a1),'aliases':[_0x423d52(0x4006),'txt'],'disableAutodetect':!0x0};};},0x1af7:_0x5310a9=>{const _0x40ef2f=a0_0x11e7;_0x5310a9[_0x40ef2f(0x474c)]=function(_0x10c479){const _0x240c41=_0x40ef2f;return{'name':'Pony','keywords':{'keyword':_0x240c41(0x1915),'meta':_0x240c41(0x1f1d),'literal':'this\x20false\x20true'},'contains':[{'className':'type','begin':_0x240c41(0xb6d),'relevance':0x0},{'className':_0x240c41(0x2431),'begin':_0x240c41(0xb00),'end':'\x22\x22\x22','relevance':0xa},{'className':_0x240c41(0x2431),'begin':'\x22','end':'\x22','contains':[_0x10c479[_0x240c41(0x4a76)]]},{'className':_0x240c41(0x2431),'begin':'\x27','end':'\x27','contains':[_0x10c479[_0x240c41(0x4a76)]],'relevance':0x0},{'begin':_0x10c479[_0x240c41(0xacc)]+'\x27','relevance':0x0},{'className':'number','begin':_0x240c41(0x3ba7),'relevance':0x0},_0x10c479[_0x240c41(0x2ae2)],_0x10c479[_0x240c41(0x23fe)]]};};},0xf8e:_0xed0281=>{const _0x6640df=a0_0x11e7;_0xed0281[_0x6640df(0x474c)]=function(_0xe746e8){const 
_0x4ed504=_0x6640df,_0x36b182={'$pattern':/-?[A-z\.\-]+\b/,'keyword':_0x4ed504(0x50f7),'built_in':_0x4ed504(0x3461)},_0x4dce99={'begin':'`[\x5cs\x5cS]','relevance':0x0},_0x754615={'className':'variable','variants':[{'begin':/\$\B/},{'className':_0x4ed504(0x1357),'begin':/\$this/},{'begin':/\$[\w\d][\w\d_:]*/}]},_0x5037f7={'className':_0x4ed504(0x2431),'variants':[{'begin':/"/,'end':/"/},{'begin':/@"/,'end':/^"@/}],'contains':[_0x4dce99,_0x754615,{'className':_0x4ed504(0x3362),'begin':/\$[A-z]/,'end':/[^A-z]/}]},_0x27959f={'className':_0x4ed504(0x2431),'variants':[{'begin':/'/,'end':/'/},{'begin':/@'/,'end':/^'@/}]},_0x585584=_0xe746e8[_0x4ed504(0x46a1)](_0xe746e8[_0x4ed504(0x4e4f)](null,null),{'variants':[{'begin':/#/,'end':/$/},{'begin':/<#/,'end':/#>/}],'contains':[{'className':_0x4ed504(0x4593),'variants':[{'begin':/\.(synopsis|description|example|inputs|outputs|notes|link|component|role|functionality)/},{'begin':/\.(parameter|forwardhelptargetname|forwardhelpcategory|remotehelprunspace|externalhelp)\s+\S+/}]}]}),_0x3d6c5f={'className':_0x4ed504(0x43a),'variants':[{'begin':'('[_0x4ed504(0x1d1d)](_0x4ed504(0x4af9),_0x4ed504(0x2e28))}]},_0x4d6fa4={'className':'class','beginKeywords':_0x4ed504(0x2ed9),'end':/\s*[{]/,'excludeEnd':!0x0,'relevance':0x0,'contains':[_0xe746e8[_0x4ed504(0x2029)]]},_0x2184f0={'className':'function','begin':/function\s+/,'end':/\s*\{|$/,'excludeEnd':!0x0,'returnBegin':!0x0,'relevance':0x0,'contains':[{'begin':'function','relevance':0x0,'className':_0x4ed504(0x1357)},{'className':_0x4ed504(0x4685),'begin':/\w[\w\d]*((-)[\w\d]+)*/,'relevance':0x0},{'begin':/\(/,'end':/\)/,'className':'params','relevance':0x0,'contains':[_0x754615]}]},_0x536af0={'begin':/using\s/,'end':/$/,'returnBegin':!0x0,'contains':[_0x5037f7,_0x27959f,{'className':_0x4ed504(0x1357),'begin':/(using|assembly|command|module|namespace|type)/}]},_0xccc471={'variants':[{'className':_0x4ed504(0x1182),'begin':'('[_0x4ed504(0x1d1d)](_0x4ed504(0x30b5),_0x4ed504(0x716))},{'className
':_0x4ed504(0x2706),'begin':/(-){1,2}[\w\d-]+/,'relevance':0x0}]},_0x3225b8={'className':_0x4ed504(0x14b2),'begin':/\[.*\]\s*[\w]+[ ]??\(/,'end':/$/,'returnBegin':!0x0,'relevance':0x0,'contains':[{'className':_0x4ed504(0x1357),'begin':'('[_0x4ed504(0x1d1d)](_0x36b182[_0x4ed504(0x1357)][_0x4ed504(0x8e8)]()['replace'](/\s/g,'|'),_0x4ed504(0x716)),'endsParent':!0x0,'relevance':0x0},_0xe746e8[_0x4ed504(0x46a1)](_0xe746e8[_0x4ed504(0x2029)],{'endsParent':!0x0})]},_0x3d2ac9=[_0x3225b8,_0x585584,_0x4dce99,_0xe746e8[_0x4ed504(0x30be)],_0x5037f7,_0x27959f,_0x3d6c5f,_0x754615,{'className':_0x4ed504(0x2706),'begin':/\$(null|true|false)\b/},{'className':_0x4ed504(0x527d),'begin':/@\B/,'relevance':0x0}],_0x3de0d4={'begin':/\[/,'end':/\]/,'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0,'contains':[][_0x4ed504(0x1d1d)](_0x4ed504(0x4454),_0x3d2ac9,{'begin':'('+['string',_0x4ed504(0x373c),_0x4ed504(0x961),_0x4ed504(0xc16),_0x4ed504(0x324f),_0x4ed504(0x3ebd),_0x4ed504(0x2353),_0x4ed504(0x4d4),_0x4ed504(0x5024),_0x4ed504(0x4339),_0x4ed504(0x2655),'array',_0x4ed504(0x48c3),_0x4ed504(0x27d6)][_0x4ed504(0x3541)]('|')+')','className':_0x4ed504(0x43a),'relevance':0x0},{'className':'type','begin':/[\.\w\d]+/,'relevance':0x0})};return _0x3225b8[_0x4ed504(0x2b31)][_0x4ed504(0x2767)](_0x3de0d4),{'name':_0x4ed504(0x4324),'aliases':['pwsh','ps',_0x4ed504(0x306f)],'case_insensitive':!0x0,'keywords':_0x36b182,'contains':_0x3d2ac9[_0x4ed504(0x1d1d)](_0x4d6fa4,_0x2184f0,_0x536af0,_0xccc471,_0x3de0d4)};};},0x23f0:_0xeda993=>{const _0xa02b6f=a0_0x11e7;_0xeda993[_0xa02b6f(0x474c)]=function(_0x36abc1){const 
_0x5a23be=_0xa02b6f,_0x4d8023=_0x36abc1[_0x5a23be(0x41d2)],_0x1c032a=[_0x5a23be(0x22c3),_0x5a23be(0x17d5),_0x5a23be(0x3ec6),_0x5a23be(0x114a),_0x5a23be(0x30f),_0x5a23be(0x2cec),_0x5a23be(0x396),_0x5a23be(0x49fe),_0x5a23be(0x4562),_0x5a23be(0x434e),'focused',_0x5a23be(0x3953),'frameRate',_0x5a23be(0x3cd6),_0x5a23be(0x17d2),_0x5a23be(0x395f),_0x5a23be(0x3add),'beginDraw','createShape',_0x5a23be(0x3d8d),_0x5a23be(0x2a1e),_0x5a23be(0x4f06),'ellipse',_0x5a23be(0x3572),_0x5a23be(0x2464),_0x5a23be(0x2bec),_0x5a23be(0x2a8a),_0x5a23be(0xfd5),'bezier',_0x5a23be(0x45ae),_0x5a23be(0x4551),'bezierTangent',_0x5a23be(0xaf9),_0x5a23be(0x23ee),_0x5a23be(0x2ded),_0x5a23be(0x3799),_0x5a23be(0x6ac),_0x5a23be(0x4d0a),_0x5a23be(0x26be),'beginContour',_0x5a23be(0x1220),_0x5a23be(0x170e),_0x5a23be(0x1a3a),'endContour',_0x5a23be(0x505),_0x5a23be(0x4261),_0x5a23be(0x2793),_0x5a23be(0x1895),_0x5a23be(0x1636),'rectMode',_0x5a23be(0x3f51),_0x5a23be(0x4847),'strokeJoin',_0x5a23be(0x358e),_0x5a23be(0xabb),'mouseDragged',_0x5a23be(0x36f),'mousePressed','mouseReleased','mouseWheel',_0x5a23be(0x2b9),_0x5a23be(0x1a8b),_0x5a23be(0x1f54),_0x5a23be(0x4957),_0x5a23be(0x2d7e),_0x5a23be(0x2514),_0x5a23be(0x30df),'day',_0x5a23be(0x19aa),_0x5a23be(0x2ba8),_0x5a23be(0x149b),_0x5a23be(0x3ed3),_0x5a23be(0x274e),_0x5a23be(0x4848),_0x5a23be(0x1471),_0x5a23be(0x4933),_0x5a23be(0x4a32),_0x5a23be(0x1fe5),_0x5a23be(0x6d0),'noStroke',_0x5a23be(0x1d1b),_0x5a23be(0x174a),_0x5a23be(0x1536),_0x5a23be(0x2f2d),_0x5a23be(0xe81),'green',_0x5a23be(0x1ffb),_0x5a23be(0x1a23),_0x5a23be(0x117e),_0x5a23be(0x235c),_0x5a23be(0x4b2d),_0x5a23be(0x2285),_0x5a23be(0xb3d),_0x5a23be(0x4877),_0x5a23be(0x2be7),'screenZ',_0x5a23be(0x463),_0x5a23be(0x3240),_0x5a23be(0x3a79),_0x5a23be(0x4b5f),'add',_0x5a23be(0x49dc),_0x5a23be(0x3d1e),_0x5a23be(0x5180),_0x5a23be(0x1f78),_0x5a23be(0xd64),'ortho',_0x5a23be(0x2b0d),'printCamera',_0x5a23be(0x3264),_0x5a23be(0x824),_0x5a23be(0x2553),_0x5a23be(0x35c6),_0x5a23be(0x4c7b),_0x5a23be(0x110b),_0x5a23be(0x37
bb),'popStyle',_0x5a23be(0x121e),_0x5a23be(0x219b),_0x5a23be(0x4053),_0x5a23be(0x1e8d),_0x5a23be(0x961),'char',_0x5a23be(0x1ab8),_0x5a23be(0x3536),_0x5a23be(0xc16),_0x5a23be(0x257f),_0x5a23be(0xd4c),_0x5a23be(0x1c77),'join','match',_0x5a23be(0x950),'nf',_0x5a23be(0x1653),'nfp','nfs','split',_0x5a23be(0x658),_0x5a23be(0x1b23),_0x5a23be(0x366b),_0x5a23be(0x3600),_0x5a23be(0x1d1d),'expand',_0x5a23be(0x78b),_0x5a23be(0x4049),'sort','splice',_0x5a23be(0x11f1),'box','sphere',_0x5a23be(0x248b),_0x5a23be(0x411b),_0x5a23be(0x351),'loadBytes',_0x5a23be(0x2f1e),'loadJSONObject',_0x5a23be(0x1628),_0x5a23be(0x2cfd),_0x5a23be(0xa98),_0x5a23be(0x1795),_0x5a23be(0x3584),_0x5a23be(0x38c9),_0x5a23be(0x4a94),_0x5a23be(0x3f74),_0x5a23be(0x4939),_0x5a23be(0x5031),_0x5a23be(0x96b),'createWriter',_0x5a23be(0x3976),_0x5a23be(0xd8c),_0x5a23be(0x4a3f),_0x5a23be(0x4d28),'saveJSONObject',_0x5a23be(0x7d4),_0x5a23be(0x1857),_0x5a23be(0x4105),_0x5a23be(0x375e),_0x5a23be(0x4427),_0x5a23be(0x3b05),'pushMatrix',_0x5a23be(0x431e),'rotate',_0x5a23be(0x2462),_0x5a23be(0x1c45),_0x5a23be(0x41e4),_0x5a23be(0x2926),_0x5a23be(0x974),_0x5a23be(0x4910),_0x5a23be(0x296f),_0x5a23be(0x314e),_0x5a23be(0x27e0),_0x5a23be(0x3c4b),_0x5a23be(0x2775),_0x5a23be(0x254d),_0x5a23be(0x27d1),'normal',_0x5a23be(0x23cb),'spotLight',_0x5a23be(0x178c),_0x5a23be(0xc73),_0x5a23be(0x1126),_0x5a23be(0x391c),_0x5a23be(0x1434),'tint','texture',_0x5a23be(0xd1b),'textureWrap',_0x5a23be(0x32bc),_0x5a23be(0x16c0),'filter',_0x5a23be(0xf9e),_0x5a23be(0x1def),_0x5a23be(0x1fa),'updatePixels',_0x5a23be(0x3882),_0x5a23be(0x3bfe),_0x5a23be(0x3f2d),'shader',_0x5a23be(0x50d9),_0x5a23be(0xded),_0x5a23be(0x4006),'textFont',_0x5a23be(0x48dc),'textLeading',_0x5a23be(0x27ab),_0x5a23be(0x3052),'textWidth',_0x5a23be(0x1a46),_0x5a23be(0x3fa2),_0x5a23be(0xbe0),_0x5a23be(0x10aa),_0x5a23be(0x2fa4),_0x5a23be(0x3fef),'exp','floor','lerp',_0x5a23be(0x20ff),_0x5a23be(0x390e),_0x5a23be(0x4833),'max',_0x5a23be(0x37c8),_0x5a23be(0xf89),'pow',_0x5a23be(0x3d6c),'sq',
_0x5a23be(0x5011),_0x5a23be(0x2c6e),'asin',_0x5a23be(0x3aab),_0x5a23be(0x41c2),_0x5a23be(0x3935),_0x5a23be(0x187c),_0x5a23be(0x5f5),_0x5a23be(0x2a37),_0x5a23be(0x38de),'noise',_0x5a23be(0x3447),_0x5a23be(0x2ac4),'random',_0x5a23be(0x6bf),_0x5a23be(0x263d)],_0x43d3ad=_0x36abc1[_0x5a23be(0xacc)],_0x13d31c={'variants':[{'match':_0x4d8023['concat'](_0x4d8023[_0x5a23be(0x583)](..._0x1c032a),_0x4d8023[_0x5a23be(0x3296)](/\s*\(/)),'className':_0x5a23be(0x43a)},{'relevance':0x0,'match':_0x4d8023[_0x5a23be(0x1d1d)](/\b(?!for|if|while)/,_0x43d3ad,_0x4d8023[_0x5a23be(0x3296)](/\s*\(/)),'className':_0x5a23be(0x20db)}]},_0x4f0683={'match':[/new\s+/,_0x43d3ad],'className':{0x1:_0x5a23be(0x1357),0x2:_0x5a23be(0x2ea2)}},_0x218a7e={'relevance':0x0,'match':[/\./,_0x43d3ad],'className':{0x2:_0x5a23be(0x227a)}},_0x232748={'variants':[{'match':[/class/,/\s+/,_0x43d3ad,/\s+/,/extends/,/\s+/,_0x43d3ad]},{'match':[/class/,/\s+/,_0x43d3ad]}],'className':{0x1:'keyword',0x3:_0x5a23be(0x19e4),0x5:_0x5a23be(0x1357),0x7:'title.class.inherited'}};return{'name':_0x5a23be(0x3104),'aliases':[_0x5a23be(0x11b1)],'keywords':{'keyword':[_0x5a23be(0x3027),_0x5a23be(0x4fd4),_0x5a23be(0x4e10),_0x5a23be(0x2e7e),_0x5a23be(0x31a3),'const',_0x5a23be(0x16d9),_0x5a23be(0x3d23),'else',_0x5a23be(0x44d8),'final',_0x5a23be(0x37b2),_0x5a23be(0x3c19),'if',_0x5a23be(0x331),_0x5a23be(0xf3c),_0x5a23be(0x324f),_0x5a23be(0x50dc),_0x5a23be(0x4321),_0x5a23be(0x4bd0),_0x5a23be(0x4ef4),_0x5a23be(0x4ef4),_0x5a23be(0xc14),_0x5a23be(0xc14),_0x5a23be(0x39ce),_0x5a23be(0x39ce),_0x5a23be(0xdfd),_0x5a23be(0x2c7c),_0x5a23be(0x4005),_0x5a23be(0x857),'synchronized','throw',_0x5a23be(0x20f8),_0x5a23be(0xf95),'try',_0x5a23be(0x27d6),'volatile',_0x5a23be(0x552)],'literal':_0x5a23be(0x4861),'title':_0x5a23be(0x4ddf),'variable':_0x5a23be(0x280c),'built_in':[..._0x1c032a,_0x5a23be(0x1396),'PVector',_0x5a23be(0x59c),'PImage',_0x5a23be(0xa88),_0x5a23be(0x2cc9),'String',_0x5a23be(0x4b6a),_0x5a23be(0x239f),'ArrayList',_0x5a23be(0x123a),_0x5a23be(
0x5247),_0x5a23be(0x238c),'JSONArray','JSONObject',_0x5a23be(0x108b),_0x5a23be(0x40c),_0x5a23be(0x400a),_0x5a23be(0x4dd6),'TableRow',_0x5a23be(0x1743)],'type':['boolean',_0x5a23be(0x961),_0x5a23be(0x373c),_0x5a23be(0xe81),'double',_0x5a23be(0x1ab8),_0x5a23be(0xc16),'long',_0x5a23be(0x4085)]},'contains':[_0x232748,_0x4f0683,_0x13d31c,_0x218a7e,_0x36abc1[_0x5a23be(0x2ae2)],_0x36abc1['C_BLOCK_COMMENT_MODE'],_0x36abc1[_0x5a23be(0xa4c)],_0x36abc1[_0x5a23be(0x291b)],_0x36abc1[_0x5a23be(0xd12)]]};};},0x1456:_0xc403ad=>{_0xc403ad['exports']=function(_0x2f13e1){const _0x50f5ab=a0_0x11e7;return{'name':'Python\x20profiler','contains':[_0x2f13e1[_0x50f5ab(0xd12)],{'begin':_0x50f5ab(0x4247),'end':':','excludeEnd':!0x0},{'begin':_0x50f5ab(0x700),'end':'$','keywords':_0x50f5ab(0x2a3a),'relevance':0xa},{'begin':'function\x20calls','end':'$','contains':[_0x2f13e1[_0x50f5ab(0xd12)]],'relevance':0xa},_0x2f13e1[_0x50f5ab(0xa4c)],_0x2f13e1[_0x50f5ab(0x291b)],{'className':_0x50f5ab(0x2431),'begin':'\x5c(','end':_0x50f5ab(0x731),'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0}]};};},0x1818:_0x79b437=>{const _0x465107=a0_0x11e7;_0x79b437[_0x465107(0x474c)]=function(_0xb7311a){const _0x1e8f05=_0x465107,_0x5a3b3c={'begin':/\(/,'end':/\)/,'relevance':0x0},_0x40f61c={'begin':/\[/,'end':/\]/},_0x46dfbe={'className':_0x1e8f05(0x4645),'begin':/%/,'end':/$/,'contains':[_0xb7311a[_0x1e8f05(0x18cc)]]},_0x3419b3={'className':_0x1e8f05(0x2431),'begin':/`/,'end':/`/,'contains':[_0xb7311a[_0x1e8f05(0x4a76)]]},_0x510b10=[{'begin':/[a-z][A-Za-z0-9_]*/,'relevance':0x0},{'className':_0x1e8f05(0x239b),'variants':[{'begin':/[A-Z][a-zA-Z0-9_]*/},{'begin':/_[A-Za-z0-9_]*/}],'relevance':0x0},_0x5a3b3c,{'begin':/:-/},_0x40f61c,_0x46dfbe,_0xb7311a[_0x1e8f05(0x23fe)],_0xb7311a[_0x1e8f05(0x291b)],_0xb7311a[_0x1e8f05(0xa4c)],_0x3419b3,{'className':'string','begin':/0'(\\'|.)/},{'className':_0x1e8f05(0x2431),'begin':/0'\\s/},_0xb7311a[_0x1e8f05(0xd12)]];return 
_0x5a3b3c[_0x1e8f05(0x2b31)]=_0x510b10,_0x40f61c[_0x1e8f05(0x2b31)]=_0x510b10,{'name':_0x1e8f05(0x474d),'contains':_0x510b10[_0x1e8f05(0x1d1d)]([{'begin':/\.$/}])};};},0x1dde:_0x15df22=>{const _0x3b8cbc=a0_0x11e7;_0x15df22[_0x3b8cbc(0x474c)]=function(_0x250351){const _0x44d889=_0x3b8cbc,_0x4f3ce2=_0x44d889(0x2c53),_0x3c959c=_0x4f3ce2+_0x44d889(0x29e4)+_0x4f3ce2,_0x8999d=_0x44d889(0x4d1b),_0x5c7137='([^\x5c\x5c:=\x20\x5ct\x5cf\x5cn]|\x5c\x5c.)+',_0x2e5549={'end':'('+_0x3c959c+'|'+_0x8999d+')','relevance':0x0,'starts':{'className':'string','end':/$/,'relevance':0x0,'contains':[{'begin':_0x44d889(0x4682)},{'begin':_0x44d889(0x32a3)}]}};return{'name':'.properties','disableAutodetect':!0x0,'case_insensitive':!0x0,'illegal':/\S/,'contains':[_0x250351[_0x44d889(0x4e4f)](_0x44d889(0x76f),'$'),{'returnBegin':!0x0,'variants':[{'begin':_0x5c7137+_0x3c959c},{'begin':_0x5c7137+_0x8999d}],'contains':[{'className':'attr','begin':_0x5c7137,'endsParent':!0x0}],'starts':_0x2e5549},{'className':_0x44d889(0x431d),'begin':_0x5c7137+_0x4f3ce2+'$'}]};};},0x1c84:_0x3cc050=>{const _0x2da369=a0_0x11e7;_0x3cc050[_0x2da369(0x474c)]=function(_0x522ade){const 
_0x53a235=_0x2da369,_0x5e3dd3={'match':[/(message|enum|service)\s+/,_0x522ade[_0x53a235(0xacc)]],'scope':{0x1:_0x53a235(0x1357),0x2:_0x53a235(0x19e4)}};return{'name':'Protocol\x20Buffers','aliases':['proto'],'keywords':{'keyword':['package','import',_0x53a235(0x1081),_0x53a235(0x51e4),'required',_0x53a235(0x2912),_0x53a235(0x4e5b),_0x53a235(0x6c4)],'type':[_0x53a235(0x5024),_0x53a235(0x1ab8),_0x53a235(0x3f1b),'int64',_0x53a235(0xa09),'uint64','sint32','sint64',_0x53a235(0x3c4f),_0x53a235(0x3edc),_0x53a235(0x4683),_0x53a235(0x35cc),_0x53a235(0x3ebd),'string',_0x53a235(0x2bb2)],'literal':[_0x53a235(0x4022),_0x53a235(0x3984)]},'contains':[_0x522ade[_0x53a235(0x291b)],_0x522ade['NUMBER_MODE'],_0x522ade['C_LINE_COMMENT_MODE'],_0x522ade[_0x53a235(0x23fe)],_0x5e3dd3,{'className':_0x53a235(0x14b2),'beginKeywords':'rpc','end':/[{;]/,'excludeEnd':!0x0,'keywords':_0x53a235(0x71d)},{'begin':/^\s*[A-Z_]+(?=\s*=[^\n]+;$)/}]};};},0x2161:_0x22d3df=>{const _0x3cb586=a0_0x11e7;_0x22d3df[_0x3cb586(0x474c)]=function(_0x213f26){const 
_0xa7519a=_0x3cb586,_0xc58c6f=_0x213f26['COMMENT']('#','$'),_0x51fce0=_0xa7519a(0x2018),_0x195e3=_0x213f26[_0xa7519a(0x46a1)](_0x213f26[_0xa7519a(0x2029)],{'begin':_0x51fce0}),_0x5035ab={'className':_0xa7519a(0x3362),'begin':'\x5c$'+_0x51fce0},_0x2d7d20={'className':_0xa7519a(0x2431),'contains':[_0x213f26[_0xa7519a(0x4a76)],_0x5035ab],'variants':[{'begin':/'/,'end':/'/},{'begin':/"/,'end':/"/}]};return{'name':'Puppet','aliases':['pp'],'contains':[_0xc58c6f,_0x5035ab,_0x2d7d20,{'beginKeywords':_0xa7519a(0x1390),'end':_0xa7519a(0x4cc7),'illegal':/=/,'contains':[_0x195e3,_0xc58c6f]},{'beginKeywords':'define','end':/\{/,'contains':[{'className':_0xa7519a(0x69d),'begin':_0x213f26[_0xa7519a(0xacc)],'endsParent':!0x0}]},{'begin':_0x213f26[_0xa7519a(0xacc)]+_0xa7519a(0x3248),'returnBegin':!0x0,'end':/\S/,'contains':[{'className':'keyword','begin':_0x213f26[_0xa7519a(0xacc)],'relevance':0.2},{'begin':/\{/,'end':/\}/,'keywords':{'keyword':_0xa7519a(0x496b),'literal':_0xa7519a(0x2e39),'built_in':_0xa7519a(0x509a)},'relevance':0x0,'contains':[_0x2d7d20,_0xc58c6f,{'begin':_0xa7519a(0x2ddf),'returnBegin':!0x0,'end':'=>','contains':[{'className':_0xa7519a(0x431d),'begin':_0x213f26[_0xa7519a(0xacc)]}]},{'className':_0xa7519a(0x4a80),'begin':_0xa7519a(0x3019),'relevance':0x0},_0x5035ab]}],'relevance':0x0}]};};},0x1f3b:_0x262146=>{const _0x1404c9=a0_0x11e7;_0x262146[_0x1404c9(0x474c)]=function(_0x353990){const 
_0x2d3752=_0x1404c9;return{'name':_0x2d3752(0x3ab0),'aliases':['pb',_0x2d3752(0x375a)],'keywords':'Align\x20And\x20Array\x20As\x20Break\x20CallDebugger\x20Case\x20CompilerCase\x20CompilerDefault\x20CompilerElse\x20CompilerElseIf\x20CompilerEndIf\x20CompilerEndSelect\x20CompilerError\x20CompilerIf\x20CompilerSelect\x20CompilerWarning\x20Continue\x20Data\x20DataSection\x20Debug\x20DebugLevel\x20Declare\x20DeclareC\x20DeclareCDLL\x20DeclareDLL\x20DeclareModule\x20Default\x20Define\x20Dim\x20DisableASM\x20DisableDebugger\x20DisableExplicit\x20Else\x20ElseIf\x20EnableASM\x20EnableDebugger\x20EnableExplicit\x20End\x20EndDataSection\x20EndDeclareModule\x20EndEnumeration\x20EndIf\x20EndImport\x20EndInterface\x20EndMacro\x20EndModule\x20EndProcedure\x20EndSelect\x20EndStructure\x20EndStructureUnion\x20EndWith\x20Enumeration\x20EnumerationBinary\x20Extends\x20FakeReturn\x20For\x20ForEach\x20ForEver\x20Global\x20Gosub\x20Goto\x20If\x20Import\x20ImportC\x20IncludeBinary\x20IncludeFile\x20IncludePath\x20Interface\x20List\x20Macro\x20MacroExpandedCount\x20Map\x20Module\x20NewList\x20NewMap\x20Next\x20Not\x20Or\x20Procedure\x20ProcedureC\x20ProcedureCDLL\x20ProcedureDLL\x20ProcedureReturn\x20Protected\x20Prototype\x20PrototypeC\x20ReDim\x20Read\x20Repeat\x20Restore\x20Return\x20Runtime\x20Select\x20Shared\x20Static\x20Step\x20Structure\x20StructureUnion\x20Swap\x20Threaded\x20To\x20UndefineMacro\x20Until\x20Until\x20\x20UnuseModule\x20UseModule\x20Wend\x20While\x20With\x20XIncludeFile\x20XOr','contains':[_0x353990[_0x2d3752(0x4e4f)](';','$',{'relevance':0x0}),{'className':_0x2d3752(0x14b2),'begin':_0x2d3752(0xe07),'end':'\x5c(','excludeEnd':!0x0,'returnBegin':!0x0,'contains':[{'className':_0x2d3752(0x1357),'begin':'(Procedure|Declare)(C|CDLL|DLL)?','excludeEnd':!0x0},{'className':_0x2d3752(0xcfc),'begin':_0x2d3752(0x34dc)},_0x353990['UNDERSCORE_TITLE_MODE']]},{'className':_0x2d3752(0x2431),'begin':_0x2d3752(0x4dea),'end':'\x22','illegal':'\x5cn'},{'className':'symbol','begin':_0x2
d3752(0x3413)}]};};},0x11b:_0x352293=>{const _0x1415c7=a0_0x11e7;_0x352293[_0x1415c7(0x474c)]=function(_0x42c256){const _0xff3054=_0x1415c7;return{'aliases':['pycon'],'contains':[{'className':_0xff3054(0x4cea),'starts':{'end':/ |$/,'starts':{'end':'$','subLanguage':_0xff3054(0x1304)}},'variants':[{'begin':/^>>>(?=[ ]|$)/},{'begin':/^\.\.\.(?=[ ]|$)/}]}]};};},0x45d:_0xcc6806=>{const _0x55de1f=a0_0x11e7;_0xcc6806[_0x55de1f(0x474c)]=function(_0x11cda5){const _0x1a8ac9=_0x55de1f,_0x2ec097=_0x11cda5[_0x1a8ac9(0x41d2)],_0x575ff2=/[\p{XID_Start}_]\p{XID_Continue}*/u,_0x1802c4=[_0x1a8ac9(0x2663),'as',_0x1a8ac9(0x4fd4),'async',_0x1a8ac9(0x371f),_0x1a8ac9(0x4e10),'case',_0x1a8ac9(0x1390),_0x1a8ac9(0x16d9),_0x1a8ac9(0x452b),_0x1a8ac9(0x109c),_0x1a8ac9(0x4ef2),_0x1a8ac9(0x3d4),'except',_0x1a8ac9(0x37b2),_0x1a8ac9(0x3c19),_0x1a8ac9(0x27e6),_0x1a8ac9(0x501b),'if',_0x1a8ac9(0x331),'in','is',_0x1a8ac9(0xc80),'match',_0x1a8ac9(0x25bb),'not','or',_0x1a8ac9(0xed1),'raise',_0x1a8ac9(0xdfd),_0x1a8ac9(0x422b),'while',_0x1a8ac9(0x2aa7),_0x1a8ac9(0x5075)],_0x51ab0a={'$pattern':/[A-Za-z]\w+|__\w+__/,'keyword':_0x1802c4,'built_in':['__import__',_0x1a8ac9(0xbe0),_0x1a8ac9(0xc36),_0x1a8ac9(0x4684),_0x1a8ac9(0x30f5),_0x1a8ac9(0x4680),'bool',_0x1a8ac9(0x36f1),_0x1a8ac9(0x156a),'bytes',_0x1a8ac9(0xdef),_0x1a8ac9(0x3baf),_0x1a8ac9(0x36a4),_0x1a8ac9(0x23cd),'complex',_0x1a8ac9(0x3e81),_0x1a8ac9(0x4e11),'dir','divmod',_0x1a8ac9(0x145b),_0x1a8ac9(0x2ff9),_0x1a8ac9(0x198d),'filter',_0x1a8ac9(0x1ab8),_0x1a8ac9(0x29a7),_0x1a8ac9(0x2ec2),_0x1a8ac9(0x3bd8),'globals','hasattr',_0x1a8ac9(0x40c0),_0x1a8ac9(0x2e5),_0x1a8ac9(0x3536),'id',_0x1a8ac9(0x7b0),_0x1a8ac9(0xc16),'isinstance',_0x1a8ac9(0x51d3),'iter','len',_0x1a8ac9(0x144e),_0x1a8ac9(0x224b),_0x1a8ac9(0x4833),'max',_0x1a8ac9(0x4ccc),_0x1a8ac9(0x37c8),_0x1a8ac9(0x3dc6),_0x1a8ac9(0x20c7),_0x1a8ac9(0x3d82),'open',_0x1a8ac9(0x38b2),'pow','print','property',_0x1a8ac9(0x51f),_0x1a8ac9(0x27f6),'reversed','round',_0x1a8ac9(0x1fa),_0x1a8ac9(0x24ac),'slice',_0x1
a8ac9(0x1c73),_0x1a8ac9(0x3284),_0x1a8ac9(0x257f),_0x1a8ac9(0x13b9),_0x1a8ac9(0x2cc),_0x1a8ac9(0x3cab),_0x1a8ac9(0xcfc),_0x1a8ac9(0x3674),_0x1a8ac9(0x2f88)],'literal':['__debug__',_0x1a8ac9(0x1a0d),_0x1a8ac9(0x2af9),_0x1a8ac9(0x23dd),_0x1a8ac9(0x3b73),_0x1a8ac9(0x23a3)],'type':[_0x1a8ac9(0x954),_0x1a8ac9(0x4f51),_0x1a8ac9(0x66a),_0x1a8ac9(0x2b96),'List',_0x1a8ac9(0x50ca),'Generic',_0x1a8ac9(0x31e1),'Sequence',_0x1a8ac9(0x34d5),_0x1a8ac9(0x166e),_0x1a8ac9(0x2b7e),_0x1a8ac9(0x605)]},_0x5e3eac={'className':_0x1a8ac9(0x5153),'begin':/^(>>>|\.\.\.) /},_0x74be69={'className':_0x1a8ac9(0x2ad6),'begin':/\{/,'end':/\}/,'keywords':_0x51ab0a,'illegal':/#/},_0x5d78c6={'begin':/\{\{/,'relevance':0x0},_0x372ff1={'className':_0x1a8ac9(0x2431),'contains':[_0x11cda5[_0x1a8ac9(0x4a76)]],'variants':[{'begin':/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?'''/,'end':/'''/,'contains':[_0x11cda5[_0x1a8ac9(0x4a76)],_0x5e3eac],'relevance':0xa},{'begin':/([uU]|[bB]|[rR]|[bB][rR]|[rR][bB])?"""/,'end':/"""/,'contains':[_0x11cda5[_0x1a8ac9(0x4a76)],_0x5e3eac],'relevance':0xa},{'begin':/([fF][rR]|[rR][fF]|[fF])'''/,'end':/'''/,'contains':[_0x11cda5['BACKSLASH_ESCAPE'],_0x5e3eac,_0x5d78c6,_0x74be69]},{'begin':/([fF][rR]|[rR][fF]|[fF])"""/,'end':/"""/,'contains':[_0x11cda5[_0x1a8ac9(0x4a76)],_0x5e3eac,_0x5d78c6,_0x74be69]},{'begin':/([uU]|[rR])'/,'end':/'/,'relevance':0xa},{'begin':/([uU]|[rR])"/,'end':/"/,'relevance':0xa},{'begin':/([bB]|[bB][rR]|[rR][bB])'/,'end':/'/},{'begin':/([bB]|[bB][rR]|[rR][bB])"/,'end':/"/},{'begin':/([fF][rR]|[rR][fF]|[fF])'/,'end':/'/,'contains':[_0x11cda5[_0x1a8ac9(0x4a76)],_0x5d78c6,_0x74be69]},{'begin':/([fF][rR]|[rR][fF]|[fF])"/,'end':/"/,'contains':[_0x11cda5[_0x1a8ac9(0x4a76)],_0x5d78c6,_0x74be69]},_0x11cda5[_0x1a8ac9(0xa4c)],_0x11cda5['QUOTE_STRING_MODE']]},_0x6c1fc9=_0x1a8ac9(0x1a47),_0x3bac32='(\x5cb('+_0x6c1fc9+_0x1a8ac9(0x91d)+_0x6c1fc9+_0x1a8ac9(0x3147)+_0x6c1fc9+_0x1a8ac9(0x31f7),_0x1345f8=_0x1a8ac9(0xfab)+_0x1802c4[_0x1a8ac9(0x3541)]('|'),_0x1e68f6={'className':_0x
1a8ac9(0x4a80),'relevance':0x0,'variants':[{'begin':_0x1a8ac9(0x5ea)+_0x6c1fc9+_0x1a8ac9(0x3a2c)+_0x3bac32+_0x1a8ac9(0x2c22)+_0x6c1fc9+')[jJ]?(?='+_0x1345f8+')'},{'begin':'('+_0x3bac32+_0x1a8ac9(0x37a8)},{'begin':_0x1a8ac9(0x3d57)+_0x1345f8+')'},{'begin':_0x1a8ac9(0x2918)+_0x1345f8+')'},{'begin':_0x1a8ac9(0x73c)+_0x1345f8+')'},{'begin':'\x5cb0[xX](_?[0-9a-fA-F])+[lL]?(?='+_0x1345f8+')'},{'begin':'\x5cb('+_0x6c1fc9+_0x1a8ac9(0xe12)+_0x1345f8+')'}]},_0x36b2a6={'className':_0x1a8ac9(0x4645),'begin':_0x2ec097[_0x1a8ac9(0x3296)](/# type:/),'end':/$/,'keywords':_0x51ab0a,'contains':[{'begin':/# type:/},{'begin':/#/,'end':/\b\B/,'endsWithParent':!0x0}]},_0x3b7961={'className':_0x1a8ac9(0xddd),'variants':[{'className':'','begin':/\(\s*\)/,'skip':!0x0},{'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'keywords':_0x51ab0a,'contains':[_0x1a8ac9(0x4454),_0x5e3eac,_0x1e68f6,_0x372ff1,_0x11cda5[_0x1a8ac9(0x2bbe)]]}]};return _0x74be69['contains']=[_0x372ff1,_0x1e68f6,_0x5e3eac],{'name':_0x1a8ac9(0x42e0),'aliases':['py','gyp','ipython'],'unicodeRegex':!0x0,'keywords':_0x51ab0a,'illegal':/(<\/|\?)|=>/,'contains':[_0x5e3eac,_0x1e68f6,{'begin':/\bself\b/},{'beginKeywords':'if','relevance':0x0},_0x372ff1,_0x36b2a6,_0x11cda5[_0x1a8ac9(0x2bbe)],{'match':[/\bdef/,/\s+/,_0x575ff2],'scope':{0x1:_0x1a8ac9(0x1357),0x3:_0x1a8ac9(0x20db)},'contains':[_0x3b7961]},{'variants':[{'match':[/\bclass/,/\s+/,_0x575ff2,/\s*/,/\(\s*/,_0x575ff2,/\s*\)/]},{'match':[/\bclass/,/\s+/,_0x575ff2]}],'scope':{0x1:_0x1a8ac9(0x1357),0x3:_0x1a8ac9(0x19e4),0x6:_0x1a8ac9(0x3235)}},{'className':'meta','begin':/^[\t ]*@/,'end':/(?=#)|$/,'contains':[_0x1e68f6,_0x3b7961,_0x372ff1]}]};};},0x24a8:_0x5a57b3=>{const _0x2a4105=a0_0x11e7;_0x5a57b3[_0x2a4105(0x474c)]=function(_0x21cb51){const 
_0x245551=_0x2a4105;return{'name':'Q','aliases':['k',_0x245551(0x180c)],'keywords':{'$pattern':/(`?)[A-Za-z0-9_]+\b/,'keyword':_0x245551(0x41e0),'literal':'0b\x201b','built_in':_0x245551(0x5122),'type':_0x245551(0x2c83)},'contains':[_0x21cb51[_0x245551(0x2ae2)],_0x21cb51[_0x245551(0x291b)],_0x21cb51[_0x245551(0xd12)]]};};},0x2645:_0x1bb4f9=>{const _0x15bf69=a0_0x11e7;_0x1bb4f9[_0x15bf69(0x474c)]=function(_0x489a83){const _0x1cd36d=_0x15bf69,_0x2c9d9a=_0x1cd36d(0x180f),_0x406964={'className':_0x1cd36d(0x263f),'begin':_0x1cd36d(0x43e7),'starts':{'className':_0x1cd36d(0x2431),'end':_0x2c9d9a,'returnEnd':!0x1}},_0x2b6a3f={'begin':_0x2c9d9a+_0x1cd36d(0x3fb4),'returnBegin':!0x0,'contains':[{'className':'attribute','begin':_0x2c9d9a,'end':_0x1cd36d(0x3fb4),'excludeEnd':!0x0,'relevance':0x0}],'relevance':0x0},_0xa85c0c={'begin':_0x489a83[_0x1cd36d(0x41d2)]['concat'](_0x2c9d9a,/\s*\{/),'end':/\{/,'returnBegin':!0x0,'relevance':0x0,'contains':[_0x489a83[_0x1cd36d(0x46a1)](_0x489a83['TITLE_MODE'],{'begin':_0x2c9d9a})]};return{'name':_0x1cd36d(0x31d8),'aliases':['qt'],'case_insensitive':!0x1,'keywords':{'keyword':_0x1cd36d(0x8bb),'literal':_0x1cd36d(0x4743),'built_in':_0x1cd36d(0x3e6f)},'contains':[{'className':_0x1cd36d(0x5153),'begin':/^\s*['"]use 
(strict|asm)['"]/},_0x489a83[_0x1cd36d(0xa4c)],_0x489a83['QUOTE_STRING_MODE'],{'className':_0x1cd36d(0x2431),'begin':'`','end':'`','contains':[_0x489a83[_0x1cd36d(0x4a76)],{'className':_0x1cd36d(0x2ad6),'begin':_0x1cd36d(0x47da),'end':'\x5c}'}]},_0x489a83[_0x1cd36d(0x2ae2)],_0x489a83[_0x1cd36d(0x23fe)],{'className':_0x1cd36d(0x4a80),'variants':[{'begin':_0x1cd36d(0x46c3)},{'begin':'\x5cb(0[oO][0-7]+)'},{'begin':_0x489a83[_0x1cd36d(0x45be)]}],'relevance':0x0},{'begin':'('+_0x489a83[_0x1cd36d(0x49de)]+_0x1cd36d(0x643),'keywords':_0x1cd36d(0x2519),'contains':[_0x489a83['C_LINE_COMMENT_MODE'],_0x489a83[_0x1cd36d(0x23fe)],_0x489a83['REGEXP_MODE'],{'begin':/\s*[);\]]/,'relevance':0x0,'subLanguage':_0x1cd36d(0x2655)}],'relevance':0x0},{'className':_0x1cd36d(0x1357),'begin':_0x1cd36d(0x47b5),'starts':{'className':_0x1cd36d(0x2431),'end':_0x1cd36d(0x3314),'returnEnd':!0x0}},{'className':'keyword','begin':_0x1cd36d(0x3961),'starts':{'className':_0x1cd36d(0x2431),'end':'(:|=|;|,|//|/\x5c*|$)','returnEnd':!0x0}},{'className':_0x1cd36d(0x14b2),'beginKeywords':_0x1cd36d(0x14b2),'end':/\{/,'excludeEnd':!0x0,'contains':[_0x489a83[_0x1cd36d(0x46a1)](_0x489a83[_0x1cd36d(0x2029)],{'begin':/[A-Za-z$_][0-9A-Za-z$_]*/}),{'className':_0x1cd36d(0xddd),'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'contains':[_0x489a83[_0x1cd36d(0x2ae2)],_0x489a83[_0x1cd36d(0x23fe)]]}],'illegal':/\[|%/},{'begin':'\x5c.'+_0x489a83[_0x1cd36d(0xacc)],'relevance':0x0},_0x406964,_0x2b6a3f,_0xa85c0c],'illegal':/#/};};},0x1fc1:_0x178e1c=>{const _0x839fcb=a0_0x11e7;_0x178e1c[_0x839fcb(0x474c)]=function(_0x14e78f){const 
_0x5e7d8a=_0x839fcb,_0x297ba0=_0x14e78f[_0x5e7d8a(0x41d2)],_0x34be5e=/(?:(?:[a-zA-Z]|\.[._a-zA-Z])[._a-zA-Z0-9]*)|\.(?!\d)/,_0x434ff5=_0x297ba0[_0x5e7d8a(0x583)](/0[xX][0-9a-fA-F]+\.[0-9a-fA-F]*[pP][+-]?\d+i?/,/0[xX][0-9a-fA-F]+(?:[pP][+-]?\d+)?[Li]?/,/(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][+-]?\d+)?[Li]?/),_0x39c18f=/[=!<>:]=|\|\||&&|:::?|<-|<<-|->>|->|\|>|[-+*\/?!$&|:<=>@^~]|\*\*/,_0x5e1415=_0x297ba0['either'](/[()]/,/[{}]/,/\[\[/,/[[\]]/,/\\/,/,/);return{'name':'R','keywords':{'$pattern':_0x34be5e,'keyword':_0x5e7d8a(0x5bb),'literal':'NULL\x20NA\x20TRUE\x20FALSE\x20Inf\x20NaN\x20NA_integer_|10\x20NA_real_|10\x20NA_character_|10\x20NA_complex_|10','built_in':_0x5e7d8a(0x3440)},'contains':[_0x14e78f[_0x5e7d8a(0x4e4f)](/#'/,/$/,{'contains':[{'scope':_0x5e7d8a(0x4593),'match':/@examples/,'starts':{'end':_0x297ba0[_0x5e7d8a(0x3296)](_0x297ba0[_0x5e7d8a(0x583)](/\n^#'\s*(?=@[a-zA-Z]+)/,/\n^(?!#')/)),'endsParent':!0x0}},{'scope':_0x5e7d8a(0x4593),'begin':_0x5e7d8a(0x27cc),'end':/$/,'contains':[{'scope':'variable','variants':[{'match':_0x34be5e},{'match':/`(?:\\.|[^`\\])+`/}],'endsParent':!0x0}]},{'scope':_0x5e7d8a(0x4593),'match':/@[a-zA-Z]+/},{'scope':_0x5e7d8a(0x1357),'match':/\\[a-zA-Z]+/}]}),_0x14e78f[_0x5e7d8a(0x2bbe)],{'scope':_0x5e7d8a(0x2431),'contains':[_0x14e78f[_0x5e7d8a(0x4a76)]],'variants':[_0x14e78f[_0x5e7d8a(0x453e)]({'begin':/[rR]"(-*)\(/,'end':/\)(-*)"/}),_0x14e78f['END_SAME_AS_BEGIN']({'begin':/[rR]"(-*)\{/,'end':/\}(-*)"/}),_0x14e78f[_0x5e7d8a(0x453e)]({'begin':/[rR]"(-*)\[/,'end':/\](-*)"/}),_0x14e78f[_0x5e7d8a(0x453e)]({'begin':/[rR]'(-*)\(/,'end':/\)(-*)'/}),_0x14e78f[_0x5e7d8a(0x453e)]({'begin':/[rR]'(-*)\{/,'end':/\}(-*)'/}),_0x14e78f[_0x5e7d8a(0x453e)]({'begin':/[rR]'(-*)\[/,'end':/\](-*)'/}),{'begin':'\x22','end':'\x22','relevance':0x0},{'begin':'\x27','end':'\x27','relevance':0x0}]},{'relevance':0x0,'variants':[{'scope':{0x1:_0x5e7d8a(0x1182),0x2:_0x5e7d8a(0x4a80)},'match':[_0x39c18f,_0x434ff5]},{'scope':{0x1:_0x5e7d8a(0x1182),0x2:'number'},'match':[/
%[^%]*%/,_0x434ff5]},{'scope':{0x1:_0x5e7d8a(0xa25),0x2:_0x5e7d8a(0x4a80)},'match':[_0x5e1415,_0x434ff5]},{'scope':{0x2:_0x5e7d8a(0x4a80)},'match':[/[^a-zA-Z0-9._]|^/,_0x434ff5]}]},{'scope':{0x3:_0x5e7d8a(0x1182)},'match':[_0x34be5e,/\s+/,/<-/,/\s+/]},{'scope':'operator','relevance':0x0,'variants':[{'match':_0x39c18f},{'match':/%[^%]*%/}]},{'scope':_0x5e7d8a(0xa25),'relevance':0x0,'match':_0x5e1415},{'begin':'`','end':'`','contains':[{'begin':/\\./}]}]};};},0x192e:_0x348b16=>{_0x348b16['exports']=function(_0x298e20){const _0x2bbc2b=a0_0x11e7;return{'name':'ReasonML','aliases':['re'],'keywords':{'$pattern':/[a-z_]\w*!?/,'keyword':['and','as',_0x2bbc2b(0x19d9),_0x2bbc2b(0x4fd4),_0x2bbc2b(0x42fa),_0x2bbc2b(0x1390),_0x2bbc2b(0x29bd),'do',_0x2bbc2b(0x37e),_0x2bbc2b(0x4003),'else',_0x2bbc2b(0x2681),_0x2bbc2b(0x1489),'exception','external',_0x2bbc2b(0x3c19),_0x2bbc2b(0x451d),_0x2bbc2b(0x14b2),_0x2bbc2b(0x4a15),'if','in',_0x2bbc2b(0x478e),_0x2bbc2b(0x46a1),_0x2bbc2b(0x1831),_0x2bbc2b(0x2967),'lazy',_0x2bbc2b(0x1e61),_0x2bbc2b(0x44e0),'lsl',_0x2bbc2b(0x27bc),_0x2bbc2b(0x305c),_0x2bbc2b(0x4531),'module',_0x2bbc2b(0x1c6c),'new',_0x2bbc2b(0x50aa),_0x2bbc2b(0x20c7),'of',_0x2bbc2b(0x1795),'or',_0x2bbc2b(0xb6e),'pub',_0x2bbc2b(0x4860),_0x2bbc2b(0x31a4),_0x2bbc2b(0x4146),_0x2bbc2b(0x857),_0x2bbc2b(0xaf5),'to',_0x2bbc2b(0x422b),_0x2bbc2b(0xcfc),_0x2bbc2b(0x15ec),'virtual',_0x2bbc2b(0x191b),_0x2bbc2b(0x552),_0x2bbc2b(0x2aa7)],'built_in':[_0x2bbc2b(0x26f6),'bool',_0x2bbc2b(0x2bb2),_0x2bbc2b(0x373c),_0x2bbc2b(0xa8d),_0x2bbc2b(0x1ab8),_0x2bbc2b(0xc16),'int32','int64',_0x2bbc2b(0x144e),_0x2bbc2b(0x2ef8),'nativeint|5',_0x2bbc2b(0x21c3),_0x2bbc2b(0x2431),'unit'],'literal':[_0x2bbc2b(0x4022),_0x2bbc2b(0x3984)]},'illegal':/(:-|:=|\$\{|\+=)/,'contains':[{'scope':_0x2bbc2b(0x2706),'match':/\[(\|\|)?\]|\(\)/,'relevance':0x0},_0x298e20[_0x2bbc2b(0x2ae2)],_0x298e20[_0x2bbc2b(0x4e4f)](/\/\*/,/\*\//,{'illegal':/^(#,\/\/)/}),{'scope':'symbol','match':/\'[A-Za-z_](?!\')[\w\']*/},{'scope':'type','matc
h':/`[A-Z][\w\']*/},{'scope':_0x2bbc2b(0xcfc),'match':/\b[A-Z][\w\']*/,'relevance':0x0},{'match':/[a-z_]\w*\'[\w\']*/,'relevance':0x0},{'scope':_0x2bbc2b(0x1182),'match':/\s+(\|\||\+[\+\.]?|\*[\*\/\.]?|\/[\.]?|\.\.\.|\|>|&&|===?)\s+/,'relevance':0x0},_0x298e20[_0x2bbc2b(0x46a1)](_0x298e20[_0x2bbc2b(0xa4c)],{'scope':'string','relevance':0x0}),_0x298e20[_0x2bbc2b(0x46a1)](_0x298e20[_0x2bbc2b(0x291b)],{'illegal':null}),{'scope':_0x2bbc2b(0x4a80),'variants':[{'match':/\b0[xX][a-fA-F0-9_]+[Lln]?/},{'match':/\b0[oO][0-7_]+[Lln]?/},{'match':/\b0[bB][01_]+[Lln]?/},{'match':/\b[0-9][0-9_]*([Lln]|(\.[0-9_]*)?([eE][-+]?[0-9_]+)?)/}],'relevance':0x0}]};};},0x185e:_0x3323bb=>{const _0x346618=a0_0x11e7;_0x3323bb[_0x346618(0x474c)]=function(_0xbb1f90){const _0x1c2625=_0x346618;return{'name':_0x1c2625(0x2aa9),'keywords':_0x1c2625(0x2e2a),'illegal':'{const _0x1c0633=a0_0x11e7;_0x6a413a[_0x1c0633(0x474c)]=function(_0x44a27c){const _0x11ede9=_0x1c0633,_0x1cd965=_0x11ede9(0x355f),_0x4b81d7={'className':_0x11ede9(0x263f),'begin':/[a-zA-Z-_]+/,'end':/\s*:/,'excludeEnd':!0x0,'starts':{'end':';','relevance':0x0,'contains':[{'className':_0x11ede9(0x3362),'begin':/\.[a-zA-Z-_]+/},{'className':_0x11ede9(0x1357),'begin':/\(optional\)/}]}};return{'name':_0x11ede9(0x3eb2),'aliases':[_0x11ede9(0x125a),_0x11ede9(0x3abe)],'case_insensitive':!0x0,'keywords':_0x11ede9(0x331),'contains':[{'begin':_0x11ede9(0x2a5)+_0x1cd965,'end':/\}/,'keywords':_0x11ede9(0x22af),'contains':[_0x4b81d7,_0x44a27c[_0x11ede9(0x2bbe)]]},{'begin':_0x11ede9(0x42f)+_0x1cd965,'end':/\}/,'keywords':'name\x20count\x20channels\x20instance-data\x20instance-state\x20instance\x20of','illegal':/\S/,'contains':[_0x11ede9(0x4454),_0x4b81d7,_0x44a27c[_0x11ede9(0x2bbe)]]},{'begin':'^'+_0x1cd965,'end':/\}/,'contains':[_0x4b81d7,_0x44a27c[_0x11ede9(0x2bbe)]]},_0x44a27c[_0x11ede9(0x2bbe)]]};};},0x175c:_0x3311da=>{const _0x14b329=a0_0x11e7;_0x3311da[_0x14b329(0x474c)]=function(_0x4b1a62){const 
_0x16dcf3=_0x14b329,_0x50a43c=_0x16dcf3(0x1344),_0x4f1d0f=_0x16dcf3(0x1372),_0x5af053={'className':_0x16dcf3(0x3362),'variants':[{'begin':/\$[\w\d#@][\w\d_]*/},{'begin':/\$\{(.*?)\}/}]},_0x34a337={'className':_0x16dcf3(0x2431),'begin':/"/,'end':/"/,'contains':[_0x4b1a62[_0x16dcf3(0x4a76)],_0x5af053,{'className':_0x16dcf3(0x3362),'begin':/\$\(/,'end':/\)/,'contains':[_0x4b1a62[_0x16dcf3(0x4a76)]]}]},_0xb41b1d={'className':'string','begin':/'/,'end':/'/};return{'name':_0x16dcf3(0x4e4e),'aliases':[_0x16dcf3(0x919)],'case_insensitive':!0x0,'keywords':{'$pattern':/:?[\w-]+/,'literal':_0x4f1d0f,'keyword':_0x50a43c+'\x20:'+_0x50a43c['split']('\x20')['join']('\x20:')+'\x20:'+_0x16dcf3(0x8d4)[_0x16dcf3(0x1117)]('\x20')['join']('\x20:')},'contains':[{'variants':[{'begin':/\/\*/,'end':/\*\//},{'begin':/\/\//,'end':/$/},{'begin':/<\//,'end':/>/}],'illegal':/./},_0x4b1a62[_0x16dcf3(0x4e4f)]('^#','$'),_0x34a337,_0xb41b1d,_0x5af053,{'begin':/[\w-]+=([^\s{}[\]()>]+)/,'relevance':0x0,'returnBegin':!0x0,'contains':[{'className':_0x16dcf3(0x263f),'begin':/[^=]+/},{'begin':/=/,'endsWithParent':!0x0,'relevance':0x0,'contains':[_0x34a337,_0xb41b1d,_0x5af053,{'className':_0x16dcf3(0x2706),'begin':_0x16dcf3(0x4cd5)+_0x4f1d0f[_0x16dcf3(0x1117)]('\x20')[_0x16dcf3(0x3541)]('|')+_0x16dcf3(0x716)},{'begin':/("[^"]*"|[^\s{}[\]]+)/}]}]},{'className':_0x16dcf3(0x4a80),'begin':/\*[0-9a-fA-F]+/},{'begin':_0x16dcf3(0x4cd5)+_0x16dcf3(0x3286)['split']('\x20')[_0x16dcf3(0x3541)]('|')+')([\x5cs[(\x5c]|])','returnBegin':!0x0,'contains':[{'className':_0x16dcf3(0x43a),'begin':/\w+/}]},{'className':'built_in','variants':[{'begin':_0x16dcf3(0x4b7)+_0x16dcf3(0x3ec4)['split']('\x20')[_0x16dcf3(0x3541)]('|')+_0x16dcf3(0x93f)},{'begin':/\.\./,'relevance':0x0}]}]};};},0x19fa:_0x238d97=>{const _0x576c8d=a0_0x11e7;_0x238d97[_0x576c8d(0x474c)]=function(_0x2a5e){const 
_0x59f2f8=_0x576c8d,_0x16f532={'match':[/(surface|displacement|light|volume|imager)/,/\s+/,_0x2a5e['IDENT_RE']],'scope':{0x1:'keyword',0x3:_0x59f2f8(0x19e4)}};return{'name':_0x59f2f8(0x21a6),'keywords':{'keyword':[_0x59f2f8(0x552),_0x59f2f8(0x3c19),'if','do',_0x59f2f8(0xdfd),'else',_0x59f2f8(0x4e10),_0x59f2f8(0x2068),_0x59f2f8(0x16d9)],'built_in':[_0x59f2f8(0xbe0),_0x59f2f8(0x2c6e),_0x59f2f8(0x463),_0x59f2f8(0x20e7),_0x59f2f8(0x3c15),_0x59f2f8(0x3aab),_0x59f2f8(0x18c5),'attribute',_0x59f2f8(0x5069),_0x59f2f8(0x10aa),_0x59f2f8(0x1ab9),_0x59f2f8(0x3c8c),_0x59f2f8(0x2ca),_0x59f2f8(0x1d1d),_0x59f2f8(0x3935),'degrees',_0x59f2f8(0x368e),_0x59f2f8(0x19f7),'diffuse',_0x59f2f8(0x3845),'Du','Dv','environment',_0x59f2f8(0x3a1b),'faceforward',_0x59f2f8(0x4e20),_0x59f2f8(0x2e2d),_0x59f2f8(0x29a7),_0x59f2f8(0x4f64),'incident',_0x59f2f8(0x1b19),_0x59f2f8(0x4961),'log','match',_0x59f2f8(0x4529),_0x59f2f8(0x37c8),_0x59f2f8(0x4531),'noise',_0x59f2f8(0x2429),_0x59f2f8(0x3cd3),'opposite',_0x59f2f8(0x1081),_0x59f2f8(0x4306),_0x59f2f8(0x4b48),_0x59f2f8(0x43bd),_0x59f2f8(0x32fe),_0x59f2f8(0x48ff),'radians',_0x59f2f8(0xe98),_0x59f2f8(0x469c),_0x59f2f8(0x4387),_0x59f2f8(0x4a66),_0x59f2f8(0x3d6c),_0x59f2f8(0xfe4),'setxcomp',_0x59f2f8(0x269f),_0x59f2f8(0x23dc),_0x59f2f8(0x4796),'sign',_0x59f2f8(0x2a37),'smoothstep','specular',_0x59f2f8(0x1185),'spline',_0x59f2f8(0x5011),_0x59f2f8(0xf8e),_0x59f2f8(0x38de),_0x59f2f8(0x587),'textureinfo','trace',_0x59f2f8(0x5161),_0x59f2f8(0x146b),_0x59f2f8(0x31ea),_0x59f2f8(0x4bd1),_0x59f2f8(0x3fa)],'type':['matrix',_0x59f2f8(0x1ab8),_0x59f2f8(0xe81),_0x59f2f8(0x2464),_0x59f2f8(0x47d),_0x59f2f8(0x4836)]},'illegal':'{const _0x3be750=a0_0x11e7;_0x2312be[_0x3be750(0x474c)]=function(_0x32dcad){const 
_0x1d93bc=_0x3be750,_0x53bb48=_0x32dcad['regex'],_0x43017a='([a-zA-Z_]\x5cw*[!?=]?|[-+~]@|<<|>>|=~|===?|<=>|[<>]=?|\x5c*\x5c*|[-/+%^&*~`|]|\x5c[\x5c]=?)',_0x46a23d=_0x53bb48[_0x1d93bc(0x583)](/\b([A-Z]+[a-z0-9]+)+/,/\b([A-Z]+[a-z0-9]+)+[A-Z]+/),_0x2a835e=_0x53bb48[_0x1d93bc(0x1d1d)](_0x46a23d,/(::\w+)*/),_0x18f5af={'variable.constant':[_0x1d93bc(0x4583),_0x1d93bc(0x3088),_0x1d93bc(0x10f8)],'variable.language':[_0x1d93bc(0x4454),'super'],'keyword':['alias',_0x1d93bc(0x2663),_0x1d93bc(0x42fa),_0x1d93bc(0x1cbc),_0x1d93bc(0x4e10),_0x1d93bc(0x2e7e),_0x1d93bc(0x1390),_0x1d93bc(0x183b),'do',_0x1d93bc(0x3d4),_0x1d93bc(0x3e5b),_0x1d93bc(0x2681),'END',_0x1d93bc(0x3f14),_0x1d93bc(0x3c19),'if','in',_0x1d93bc(0x196c),_0x1d93bc(0x3dc6),_0x1d93bc(0xc1a),'or',_0x1d93bc(0x1ebd),'require','rescue',_0x1d93bc(0x1977),_0x1d93bc(0xdfd),'then','undef',_0x1d93bc(0x26b1),'until',_0x1d93bc(0x191b),_0x1d93bc(0x552),_0x1d93bc(0x5075),_0x1d93bc(0x478e),_0x1d93bc(0x38b6),'prepend',_0x1d93bc(0x39ce),_0x1d93bc(0x4ef4),'protected',_0x1d93bc(0x2c1e),_0x1d93bc(0x383)],'built_in':[_0x1d93bc(0x4e27),'lambda',_0x1d93bc(0x2315),_0x1d93bc(0x3351),_0x1d93bc(0x462c),'define_method',_0x1d93bc(0x772),_0x1d93bc(0x1c15)],'literal':[_0x1d93bc(0x4022),'false',_0x1d93bc(0x3e27)]},_0x3404fb={'className':_0x1d93bc(0x4593),'begin':_0x1d93bc(0x4d18)},_0x3c49fc={'begin':'#<','end':'>'},_0x417a51=[_0x32dcad['COMMENT']('#','$',{'contains':[_0x3404fb]}),_0x32dcad[_0x1d93bc(0x4e4f)]('^=begin',_0x1d93bc(0x2f5),{'contains':[_0x3404fb],'relevance':0xa}),_0x32dcad[_0x1d93bc(0x4e4f)]('^__END__',_0x32dcad[_0x1d93bc(0x1640)])],_0x2aed48={'className':'subst','begin':/#\{/,'end':/\}/,'keywords':_0x18f5af},_0xe291d6={'className':_0x1d93bc(0x2431),'contains':[_0x32dcad[_0x1d93bc(0x4a76)],_0x2aed48],'variants':[{'begin':/'/,'end':/'/},{'begin':/"/,'end':/"/},{'begin':/`/,'end':/`/},{'begin':/%[qQwWx]?\(/,'end':/\)/},{'begin':/%[qQwWx]?\[/,'end':/\]/},{'begin':/%[qQwWx]?\{/,'end':/\}/},{'begin':/%[qQwWx]?/},{'begin':/%[qQwWx]?\//,'end'
:/\//},{'begin':/%[qQwWx]?%/,'end':/%/},{'begin':/%[qQwWx]?-/,'end':/-/},{'begin':/%[qQwWx]?\|/,'end':/\|/},{'begin':/\B\?(\\\d{1,3})/},{'begin':/\B\?(\\x[A-Fa-f0-9]{1,2})/},{'begin':/\B\?(\\u\{?[A-Fa-f0-9]{1,6}\}?)/},{'begin':/\B\?(\\M-\\C-|\\M-\\c|\\c\\M-|\\M-|\\C-\\M-)[\x20-\x7e]/},{'begin':/\B\?\\(c|C-)[\x20-\x7e]/},{'begin':/\B\?\\?\S/},{'begin':_0x53bb48[_0x1d93bc(0x1d1d)](/<<[-~]?'?/,_0x53bb48[_0x1d93bc(0x3296)](/(\w+)(?=\W)[^\n]*\n(?:[^\n]*\n)*?\s*\1\b/)),'contains':[_0x32dcad[_0x1d93bc(0x453e)]({'begin':/(\w+)/,'end':/(\w+)/,'contains':[_0x32dcad[_0x1d93bc(0x4a76)],_0x2aed48]})]}]},_0x28389a=_0x1d93bc(0x1a47),_0x4a242d={'className':_0x1d93bc(0x4a80),'relevance':0x0,'variants':[{'begin':_0x1d93bc(0x4aaa)+_0x28389a+_0x1d93bc(0x2385)+_0x28389a+_0x1d93bc(0x18e5)},{'begin':_0x1d93bc(0x254c)},{'begin':_0x1d93bc(0x4ec)},{'begin':_0x1d93bc(0x174f)},{'begin':'\x5cb0[xX][0-9a-fA-F](_?[0-9a-fA-F])*r?i?\x5cb'},{'begin':_0x1d93bc(0x1a2c)}]},_0xb3141={'variants':[{'match':/\(\)/},{'className':'params','begin':/\(/,'end':/(?=\))/,'excludeBegin':!0x0,'endsParent':!0x0,'keywords':_0x18f5af}]},_0x3e04e0=[_0xe291d6,{'variants':[{'match':[/class\s+/,_0x2a835e,/\s+<\s+/,_0x2a835e]},{'match':[/\b(class|module)\s+/,_0x2a835e]}],'scope':{0x2:_0x1d93bc(0x19e4),0x4:_0x1d93bc(0x3235)},'keywords':_0x18f5af},{'match':[/(include|extend)\s+/,_0x2a835e],'scope':{0x2:'title.class'},'keywords':_0x18f5af},{'relevance':0x0,'match':[_0x2a835e,/\.new[. 
(]/],'scope':{0x1:_0x1d93bc(0x19e4)}},{'relevance':0x0,'match':/\b[A-Z][A-Z_0-9]+\b/,'className':_0x1d93bc(0x41f7)},{'relevance':0x0,'match':_0x46a23d,'scope':_0x1d93bc(0x19e4)},{'match':[/def/,/\s+/,_0x43017a],'scope':{0x1:_0x1d93bc(0x1357),0x3:_0x1d93bc(0x20db)},'contains':[_0xb3141]},{'begin':_0x32dcad[_0x1d93bc(0xacc)]+'::'},{'className':_0x1d93bc(0x239b),'begin':_0x32dcad['UNDERSCORE_IDENT_RE']+_0x1d93bc(0x2df5),'relevance':0x0},{'className':_0x1d93bc(0x239b),'begin':_0x1d93bc(0x1d2d),'contains':[_0xe291d6,{'begin':_0x43017a}],'relevance':0x0},_0x4a242d,{'className':'variable','begin':_0x1d93bc(0x1b5c)},{'className':_0x1d93bc(0xddd),'begin':/\|/,'end':/\|/,'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0,'keywords':_0x18f5af},{'begin':'('+_0x32dcad['RE_STARTERS_RE']+_0x1d93bc(0x4f88),'keywords':_0x1d93bc(0x26b1),'contains':[{'className':_0x1d93bc(0x4d1d),'contains':[_0x32dcad['BACKSLASH_ESCAPE'],_0x2aed48],'illegal':/\n/,'variants':[{'begin':'/','end':'/[a-z]*'},{'begin':/%r\{/,'end':/\}[a-z]*/},{'begin':_0x1d93bc(0x1f29),'end':_0x1d93bc(0x3e0f)},{'begin':_0x1d93bc(0x55f),'end':'![a-z]*'},{'begin':_0x1d93bc(0x95d),'end':_0x1d93bc(0x391f)}]}]['concat'](_0x3c49fc,_0x417a51),'relevance':0x0}][_0x1d93bc(0x1d1d)](_0x3c49fc,_0x417a51);_0x2aed48[_0x1d93bc(0x2b31)]=_0x3e04e0,_0xb3141[_0x1d93bc(0x2b31)]=_0x3e04e0;const _0x2150c9=[{'begin':/^\s*=>/,'starts':{'end':'$','contains':_0x3e04e0}},{'className':'meta.prompt','begin':_0x1d93bc(0x36ac),'starts':{'end':'$','keywords':_0x18f5af,'contains':_0x3e04e0}}];return _0x417a51[_0x1d93bc(0x2767)](_0x3c49fc),{'name':_0x1d93bc(0x2339),'aliases':['rb',_0x1d93bc(0x1057),_0x1d93bc(0x4817),_0x1d93bc(0x2b3b),_0x1d93bc(0x4844)],'keywords':_0x18f5af,'illegal':/\/\*/,'contains':[_0x32dcad[_0x1d93bc(0x307a)]({'binary':_0x1d93bc(0x4430)})][_0x1d93bc(0x1d1d)](_0x2150c9)[_0x1d93bc(0x1d1d)](_0x417a51)[_0x1d93bc(0x1d1d)](_0x3e04e0)};};},0x163a:_0x1ccd6c=>{const _0xecb15e=a0_0x11e7;_0x1ccd6c[_0xecb15e(0x474c)]=function(_0x4109a4){const 
_0x5d6af2=_0xecb15e;return{'name':_0x5d6af2(0x20aa),'keywords':{'keyword':_0x5d6af2(0x3e7d),'built_in':_0x5d6af2(0x7a8)},'contains':[_0x4109a4['C_LINE_COMMENT_MODE'],_0x4109a4[_0x5d6af2(0x23fe)],_0x4109a4['APOS_STRING_MODE'],_0x4109a4[_0x5d6af2(0x291b)],_0x4109a4['C_NUMBER_MODE'],{'className':_0x5d6af2(0x2706),'variants':[{'begin':'#\x5cs+','relevance':0x0},{'begin':'#[a-zA-Z\x20.]+'}]}]};};},0x1521:_0x13e189=>{const _0x285ec9=a0_0x11e7;_0x13e189[_0x285ec9(0x474c)]=function(_0xef2826){const _0x286118=_0x285ec9,_0x532daa=_0xef2826[_0x286118(0x41d2)],_0x4ff495={'className':_0x286118(0x2fe1),'relevance':0x0,'begin':_0x532daa[_0x286118(0x1d1d)](/\b/,/(?!let|for|while|if|else|match\b)/,_0xef2826['IDENT_RE'],_0x532daa[_0x286118(0x3296)](/\s*\(/))},_0x388b4b='([ui](8|16|32|64|128|size)|f(32|64))?',_0xf857ac=['drop\x20',_0x286118(0x44e1),_0x286118(0x473c),_0x286118(0x33a2),'Sync','Drop','Fn',_0x286118(0x3de),_0x286118(0x3e19),_0x286118(0x3693),_0x286118(0x26ae),'Debug',_0x286118(0x245e),'PartialOrd','Eq','Ord',_0x286118(0x19cd),_0x286118(0x48ed),_0x286118(0x1cf3),_0x286118(0x3220),'Default',_0x286118(0x45eb),_0x286118(0x294d),_0x286118(0x16a8),_0x286118(0x2ce5),_0x286118(0x1348),_0x286118(0x29b8),_0x286118(0x2e04),'assert!',_0x286118(0x2f7b),_0x286118(0xa1a),_0x286118(0x486c),_0x286118(0x45aa),_0x286118(0x5292),_0x286118(0x49fc),_0x286118(0x2c54),_0x286118(0x2f50),_0x286118(0x45e5),_0x286118(0x4b03),_0x286118(0x246f),'panic!',_0x286118(0x2036),_0x286118(0x4280),_0x286118(0x3032),_0x286118(0x3d46),_0x286118(0x2b51),'line!',_0x286118(0x1793),_0x286118(0x243b),_0x286118(0x4799),_0x286118(0x1b5d),_0x286118(0xe27),_0x286118(0x2c4d),_0x286118(0x1a76),_0x286118(0x3307),'unimplemented!',_0x286118(0x33df),_0x286118(0x2fb9),_0x286118(0x18c9),'writeln!',_0x286118(0x14c3),'assert_ne!','debug_assert_ne!'],_0x1de572=['i8','i16',_0x286118(0x25dd),'i64','i128','isize','u8','u16',_0x286118(0x3979),_0x286118(0x23e5),'u128',_0x286118(0x4f2c),_0x286118(0x2889),_0x286118(0x1d09),'str',_0x286118
(0x373c),'bool','Box','Option',_0x286118(0x3614),_0x286118(0x3327),_0x286118(0x11f3)];return{'name':'Rust','aliases':['rs'],'keywords':{'$pattern':_0xef2826[_0x286118(0xacc)]+'!?','type':_0x1de572,'keyword':[_0x286118(0x3027),'as',_0x286118(0x16c2),_0x286118(0x371f),'become',_0x286118(0x22ec),'break',_0x286118(0xc01),_0x286118(0x16d9),_0x286118(0x3aeb),'do',_0x286118(0x4677),_0x286118(0x3d4),_0x286118(0x44d8),'extern',_0x286118(0x3984),'final','fn',_0x286118(0x3c19),'if','impl','in',_0x286118(0x1e61),_0x286118(0x110b),_0x286118(0x172d),_0x286118(0x2d96),'mod','move',_0x286118(0x3d56),'override','priv',_0x286118(0x13cb),_0x286118(0x21c3),_0x286118(0xdfd),_0x286118(0x4454),_0x286118(0x3184),_0x286118(0x2c7c),_0x286118(0x4146),_0x286118(0x2cc),_0x286118(0x4659),_0x286118(0x4022),'try','type',_0x286118(0x3368),_0x286118(0x2b2),'unsized','use','virtual',_0x286118(0x3b62),_0x286118(0x552),_0x286118(0x5075)],'literal':['true','false','Some',_0x286118(0x23dd),'Ok',_0x286118(0x4795)],'built_in':_0xf857ac},'illegal':''},_0x4ff495]};};},0x1e4a:_0x3cede0=>{_0x3cede0['exports']=function(_0x412226){const 
_0x2a3d47=a0_0x11e7,_0x29e1ac=_0x412226[_0x2a3d47(0x41d2)];return{'name':_0x2a3d47(0xa91),'case_insensitive':!0x0,'keywords':{'literal':[_0x2a3d47(0x1582),_0x2a3d47(0x4122),'_all_',_0x2a3d47(0x320a),'_character_',_0x2a3d47(0x155a),_0x2a3d47(0x3892),_0x2a3d47(0x1671),_0x2a3d47(0x47ae),'_numeric_',_0x2a3d47(0xa6f),_0x2a3d47(0x12ba)],'keyword':['do','if',_0x2a3d47(0xaf5),'else','end',_0x2a3d47(0x30d6),'while','abort','array','attrib','by','call',_0x2a3d47(0xc33),'cards4','catname',_0x2a3d47(0x16d9),_0x2a3d47(0x45a7),_0x2a3d47(0x42c),_0x2a3d47(0x5be),_0x2a3d47(0x4b94),'delimiter',_0x2a3d47(0x12ca),'dm',_0x2a3d47(0x41fc),'endsas',_0x2a3d47(0x3d85),_0x2a3d47(0x184e),'filename',_0x2a3d47(0xc72),'format',_0x2a3d47(0x139c),'in',_0x2a3d47(0x2b93),'informat',_0x2a3d47(0x7b0),'keep',_0x2a3d47(0x3b71),'leave',_0x2a3d47(0x1b19),_0x2a3d47(0x23b6),_0x2a3d47(0x4b32),'list',_0x2a3d47(0x884),'merge',_0x2a3d47(0x4122),_0x2a3d47(0x189e),_0x2a3d47(0x20b6),_0x2a3d47(0x4d21),'out',_0x2a3d47(0x32a0),_0x2a3d47(0xbe8),_0x2a3d47(0x2115),_0x2a3d47(0x42a1),_0x2a3d47(0x2022),'replace',_0x2a3d47(0x3c50),_0x2a3d47(0xdfd),_0x2a3d47(0x3fc9),_0x2a3d47(0x1fa),'skip',_0x2a3d47(0x23ec),'stop','title',_0x2a3d47(0x38d6),_0x2a3d47(0x2ce3),'where',_0x2a3d47(0x18db),_0x2a3d47(0x3e75),_0x2a3d47(0xde1),'add',_0x2a3d47(0x2663),_0x2a3d47(0x2598),'as',_0x2a3d47(0x428d),_0x2a3d47(0x4e5c),_0x2a3d47(0x1d3a),'delete',_0x2a3d47(0x4453),'distinct',_0x2a3d47(0x41fc),_0x2a3d47(0x20bd),'from',_0x2a3d47(0x4e5b),_0x2a3d47(0x4cfc),_0x2a3d47(0x3bb5),_0x2a3d47(0x1178),_0x2a3d47(0x31cf),'in','key',_0x2a3d47(0x83d),_0x2a3d47(0x4cdf),_0x2a3d47(0x189e),'msgtype',_0x2a3d47(0xc1a),_0x2a3d47(0x1582),'on','or',_0x2a3d47(0xd8d),_0x2a3d47(0x1a1f),'references',_0x2a3d47(0x24ed),_0x2a3d47(0x5027),'select',_0x2a3d47(0x1fa),_0x2a3d47(0x1639),_0x2a3d47(0x390b),_0x2a3d47(0x38d6),_0x2a3d47(0x1509),_0x2a3d47(0x1961),_0x2a3d47(0x3b62)]},'contains':[{'className':_0x2a3d47(0x1357),'begin':/^\s*(proc 
[\w\d_]+|data|run|quit)[\s;]/},{'className':'variable','begin':/&[a-zA-Z_&][a-zA-Z0-9_]*\.?/},{'begin':[/^\s*/,/datalines;|cards;/,/(?:.*\n)+/,/^\s*;\s*$/],'className':{0x2:_0x2a3d47(0x1357),0x3:_0x2a3d47(0x2431)}},{'begin':[/%mend|%macro/,/\s+/,/[a-zA-Z_&][a-zA-Z0-9_]*/],'className':{0x1:_0x2a3d47(0x43a),0x3:_0x2a3d47(0x20db)}},{'className':_0x2a3d47(0x43a),'begin':'%'+_0x29e1ac[_0x2a3d47(0x583)](_0x2a3d47(0x15a8),_0x2a3d47(0x1df5),_0x2a3d47(0x4fc),_0x2a3d47(0x1033),_0x2a3d47(0x3c1c),'datatyp','display','do','else',_0x2a3d47(0x2681),_0x2a3d47(0x2ff9),_0x2a3d47(0x501b),_0x2a3d47(0x139c),'if','index',_0x2a3d47(0x7b0),_0x2a3d47(0x4341),_0x2a3d47(0x3b71),_0x2a3d47(0x48eb),_0x2a3d47(0x1b19),_0x2a3d47(0x1e61),_0x2a3d47(0x16a7),_0x2a3d47(0x21b3),'macro',_0x2a3d47(0xfcd),_0x2a3d47(0x1df5),_0x2a3d47(0x3c7a),_0x2a3d47(0x4b0b),_0x2a3d47(0xbe8),_0x2a3d47(0x1033),_0x2a3d47(0x3130),_0x2a3d47(0x32ad),_0x2a3d47(0x4c74),_0x2a3d47(0x25d1),_0x2a3d47(0x4f7a),'qtrim',_0x2a3d47(0x3567),_0x2a3d47(0x4d9f),'scan',_0x2a3d47(0x257f),_0x2a3d47(0x3a72),_0x2a3d47(0x337d),'syscall','sysevalf',_0x2a3d47(0xd24),_0x2a3d47(0x65b),_0x2a3d47(0x1ccc),'syslput','sysprod','sysrc','sysrput',_0x2a3d47(0xaf5),'to',_0x2a3d47(0x1b23),_0x2a3d47(0x3a1f),_0x2a3d47(0x30d6),_0x2a3d47(0x1e9a),'verify','while',_0x2a3d47(0x18db))},{'className':_0x2a3d47(0x20db),'begin':/%[a-zA-Z_][a-zA-Z_0-9]*/},{'className':_0x2a3d47(0x5153),'begin':_0x29e1ac[_0x2a3d47(0x583)](_0x2a3d47(0xbe0),_0x2a3d47(0xca2),'airy',_0x2a3d47(0x196b),_0x2a3d47(0x4259),_0x2a3d47(0x3aab),_0x2a3d47(0x459d),_0x2a3d47(0x131c),'band',_0x2a3d47(0x170a),_0x2a3d47(0x26ed),_0x2a3d47(0x1482),_0x2a3d47(0x3fc5),'brshift',_0x2a3d47(0x2a4c),_0x2a3d47(0x961),_0x2a3d47(0x2cc3),_0x2a3d47(0x10aa),_0x2a3d47(0x3c2c),_0x2a3d47(0x1978),_0x2a3d47(0x50d8),'cnonct',_0x2a3d47(0x4ff0),_0x2a3d47(0x44b3),_0x2a3d47(0x194a),_0x2a3d47(0x412f),_0x2a3d47(0x3935),'cosh',_0x2a3d47(0xdd4),'curobs','cv',_0x2a3d47(0xc5a),_0x2a3d47(0x26d0),_0x2a3d47(0x1aa1),_0x2a3d47(0x34ba),_0x2a3d47(0x3
6da),_0x2a3d47(0x35a1),_0x2a3d47(0x40d9),_0x2a3d47(0x2116),_0x2a3d47(0x4643),'datetime','day',_0x2a3d47(0x7af),'depdb','depdbsl',_0x2a3d47(0x1dcd),'depsl','depsl',_0x2a3d47(0x2884),'depsyd',_0x2a3d47(0x1d1c),_0x2a3d47(0x1d1c),_0x2a3d47(0x18cf),_0x2a3d47(0x57e),_0x2a3d47(0x4139),_0x2a3d47(0xf9f),_0x2a3d47(0x4fae),_0x2a3d47(0x2e3c),_0x2a3d47(0xf5a),_0x2a3d47(0xc60),_0x2a3d47(0x30ee),_0x2a3d47(0xaac),_0x2a3d47(0x51fd),_0x2a3d47(0x1e2a),_0x2a3d47(0x4350),_0x2a3d47(0x1a55),_0x2a3d47(0x1147),_0x2a3d47(0x226),_0x2a3d47(0x3a1b),'fappend','fclose','fcol',_0x2a3d47(0x4bcb),_0x2a3d47(0x41d0),_0x2a3d47(0x25e),_0x2a3d47(0x4ecd),_0x2a3d47(0x2014),_0x2a3d47(0x341c),_0x2a3d47(0x40ca),'fileref',_0x2a3d47(0x1b2c),'finv','fipname',_0x2a3d47(0x3bc6),_0x2a3d47(0x1b83),'floor','fnonct',_0x2a3d47(0x1f36),_0x2a3d47(0x105f),'foptname','foptnum','fpoint','fpos','fput','fread',_0x2a3d47(0x50bf),_0x2a3d47(0x275b),_0x2a3d47(0x20d8),'fuzz','fwrite','gaminv',_0x2a3d47(0x1c19),_0x2a3d47(0x4140),_0x2a3d47(0x1d75),_0x2a3d47(0x329c),_0x2a3d47(0x244a),_0x2a3d47(0x4f1d),_0x2a3d47(0x457e),'hour','ibessel',_0x2a3d47(0x3bb5),_0x2a3d47(0xa7b),'indexw','input',_0x2a3d47(0xaf4),_0x2a3d47(0x3d37),'int',_0x2a3d47(0x3542),_0x2a3d47(0x4780),_0x2a3d47(0xf03),_0x2a3d47(0x4d7b),_0x2a3d47(0x12c6),'juldate','kurtosis','lag',_0x2a3d47(0x1b09),'left','length',_0x2a3d47(0x3f91),_0x2a3d47(0x23b6),_0x2a3d47(0x3bc4),_0x2a3d47(0x20ff),_0x2a3d47(0x1463),'log2','logpdf',_0x2a3d47(0x28ea),_0x2a3d47(0x172c),_0x2a3d47(0x21b3),_0x2a3d47(0x4529),_0x2a3d47(0x4586),_0x2a3d47(0x4919),_0x2a3d47(0x37c8),_0x2a3d47(0x149b),_0x2a3d47(0x4531),_0x2a3d47(0x3ed3),_0x2a3d47(0x3839),_0x2a3d47(0x4c0c),'n','netpv',_0x2a3d47(0x1532),'normal',_0x2a3d47(0x318c),_0x2a3d47(0x1bff),_0x2a3d47(0x1795),_0x2a3d47(0x1a35),'pathname',_0x2a3d47(0x2bf1),_0x2a3d47(0x2c3c),_0x2a3d47(0xa14),_0x2a3d47(0x373f),_0x2a3d47(0x2464),'poisson',_0x2a3d47(0x4bde),_0x2a3d47(0x796),'probbnml','probchi',_0x2a3d47(0x4b39),_0x2a3d47(0x4e51),_0x2a3d47(0x3e37),_0x2a3d47(0x131e),_
0x2a3d47(0xccc),_0x2a3d47(0x2d0d),_0x2a3d47(0x2525),_0x2a3d47(0xbe8),_0x2a3d47(0x1faa),'putn','qtr',_0x2a3d47(0x3567),_0x2a3d47(0x214c),_0x2a3d47(0x13ab),_0x2a3d47(0x4444),'rangam',_0x2a3d47(0x51f),_0x2a3d47(0x846),_0x2a3d47(0x14f6),_0x2a3d47(0x214),_0x2a3d47(0x1eba),_0x2a3d47(0x29fe),'ranuni',_0x2a3d47(0x11d0),_0x2a3d47(0x1521),_0x2a3d47(0x78b),'rewind',_0x2a3d47(0x4d50),_0x2a3d47(0x3d6c),_0x2a3d47(0x1a5c),_0x2a3d47(0x4318),_0x2a3d47(0x1328),_0x2a3d47(0x274e),'sign',_0x2a3d47(0x2a37),'sinh',_0x2a3d47(0xcfd),'soundex',_0x2a3d47(0x32d),_0x2a3d47(0x5011),'std','stderr',_0x2a3d47(0x2e7a),_0x2a3d47(0x2ac),_0x2a3d47(0x4755),_0x2a3d47(0x3a72),_0x2a3d47(0x13b9),_0x2a3d47(0x1afd),_0x2a3d47(0x1ccc),_0x2a3d47(0xd2e),'sysprod',_0x2a3d47(0x20dd),_0x2a3d47(0x2ec4),_0x2a3d47(0x38de),_0x2a3d47(0x327c),_0x2a3d47(0x51b6),_0x2a3d47(0x10f0),_0x2a3d47(0x4115),_0x2a3d47(0x4fb7),'today',_0x2a3d47(0x296f),_0x2a3d47(0x437e),_0x2a3d47(0x3b5e),_0x2a3d47(0x1b23),_0x2a3d47(0x47fb),'trunc',_0x2a3d47(0x394),'upcase','uss','var','varfmt',_0x2a3d47(0x3e80),_0x2a3d47(0x4f1f),'varlen',_0x2a3d47(0x362d),_0x2a3d47(0x395d),_0x2a3d47(0x1064),'varrayx','vartype',_0x2a3d47(0x1a64),'vformat',_0x2a3d47(0x3281),_0x2a3d47(0x4b81),_0x2a3d47(0x3659),_0x2a3d47(0x4f58),_0x2a3d47(0x2f60),_0x2a3d47(0x4577),'vformatx',_0x2a3d47(0x27dc),_0x2a3d47(0x2add),_0x2a3d47(0x3d78),'vinformatd',_0x2a3d47(0x3b8a),'vinformatn',_0x2a3d47(0x4be0),_0x2a3d47(0x364a),_0x2a3d47(0x41ee),_0x2a3d47(0x4e67),_0x2a3d47(0x1676),_0x2a3d47(0x2459),'vlength',_0x2a3d47(0x4492),_0x2a3d47(0x2515),_0x2a3d47(0x3b79),'vtype',_0x2a3d47(0x1fc),_0x2a3d47(0xa1c),'year','yyq',_0x2a3d47(0x48ae),_0x2a3d47(0xba9),'zipnamel',_0x2a3d47(0x4256))+'(?=\x5c()'},{'className':_0x2a3d47(0x2431),'variants':[_0x412226[_0x2a3d47(0xa4c)],_0x412226[_0x2a3d47(0x291b)]]},_0x412226['COMMENT']('\x5c*',';'),_0x412226[_0x2a3d47(0x23fe)]]};};},0xd79:_0x337a98=>{const _0x4221b5=a0_0x11e7;_0x337a98[_0x4221b5(0x474c)]=function(_0xb04214){const 
_0x1c1888=_0x4221b5,_0x25d8c3=_0xb04214[_0x1c1888(0x41d2)],_0x19dba5={'className':'subst','variants':[{'begin':'\x5c$[A-Za-z0-9_]+'},{'begin':/\$\{/,'end':/\}/}]},_0x2303e3={'className':_0x1c1888(0x2431),'variants':[{'begin':_0x1c1888(0xb00),'end':_0x1c1888(0xb00)},{'begin':'\x22','end':'\x22','illegal':'\x5cn','contains':[_0xb04214[_0x1c1888(0x4a76)]]},{'begin':_0x1c1888(0x15a6),'end':'\x22','illegal':'\x5cn','contains':[_0xb04214['BACKSLASH_ESCAPE'],_0x19dba5]},{'className':_0x1c1888(0x2431),'begin':_0x1c1888(0x3a69),'end':'\x22\x22\x22','contains':[_0x19dba5],'relevance':0xa}]},_0x1f4b83={'className':_0x1c1888(0xcfc),'begin':_0x1c1888(0x17a5),'relevance':0x0},_0xcb9670={'className':'title','begin':/[^0-9\n\t "'(),.`{}\[\]:;][^\n\t "'(),.`{}\[\]:;]+|[^0-9\n\t "'(),.`{}\[\]:;=]/,'relevance':0x0},_0x1448dc={'className':_0x1c1888(0x1390),'beginKeywords':_0x1c1888(0x15ea),'end':/[:={\[\n;]/,'excludeEnd':!0x0,'contains':[_0xb04214[_0x1c1888(0x2ae2)],_0xb04214[_0x1c1888(0x23fe)],{'beginKeywords':_0x1c1888(0x403b),'relevance':0xa},{'begin':/\[/,'end':/\]/,'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0,'contains':[_0x1f4b83,_0xb04214['C_LINE_COMMENT_MODE'],_0xb04214[_0x1c1888(0x23fe)]]},{'className':_0x1c1888(0xddd),'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0,'contains':[_0x1f4b83,_0xb04214[_0x1c1888(0x2ae2)],_0xb04214[_0x1c1888(0x23fe)]]},_0xcb9670]},_0x21f1e2={'className':_0x1c1888(0x14b2),'beginKeywords':_0x1c1888(0x452b),'end':_0x25d8c3[_0x1c1888(0x3296)](/[:={\[(\n;]/),'contains':[_0xcb9670]};return{'name':_0x1c1888(0x2ce),'keywords':{'literal':_0x1c1888(0x12f3),'keyword':'type\x20yield\x20lazy\x20override\x20def\x20with\x20val\x20var\x20sealed\x20abstract\x20private\x20trait\x20object\x20if\x20then\x20forSome\x20for\x20while\x20do\x20throw\x20finally\x20protected\x20extends\x20import\x20final\x20return\x20else\x20break\x20new\x20catch\x20super\x20class\x20case\x20package\x20default\x20try\x20this\x20match\x20continue\x20throw
s\x20implicit\x20export\x20enum\x20given\x20transparent'},'contains':[{'begin':[_0x1c1888(0x1137),/\s+/,/using/,/\s+/,/\S+/],'beginScope':{0x1:'comment',0x3:_0x1c1888(0x1357),0x5:_0x1c1888(0xcfc)},'end':/$/,'contains':[{'className':'string','begin':/\S+/}]},_0xb04214[_0x1c1888(0x2ae2)],_0xb04214[_0x1c1888(0x23fe)],_0x2303e3,_0x1f4b83,_0x21f1e2,_0x1448dc,_0xb04214[_0x1c1888(0xd12)],{'begin':[/^\s*/,_0x1c1888(0x1dd9),/\s+(?=[[(])/],'beginScope':{0x2:'keyword'}},{'begin':[/^\s*/,/end/,/\s+/,/(extension\b)?/],'beginScope':{0x2:'keyword',0x4:_0x1c1888(0x1357)}},{'match':/\.inline\b/},{'begin':/\binline(?=\s)/,'keywords':'inline'},{'begin':[/\(\s*/,/using/,/\s+(?!\))/],'beginScope':{0x2:_0x1c1888(0x1357)}},{'className':_0x1c1888(0x5153),'begin':_0x1c1888(0x4d18)}]};};},0x1864:_0x2808de=>{const _0x463e28=a0_0x11e7;_0x2808de[_0x463e28(0x474c)]=function(_0x3d3775){const _0x3cf204=_0x463e28,_0x1edd88=_0x3cf204(0x2c46),_0x68a530=_0x3cf204(0xcf4),_0x19eb3a={'$pattern':_0x1edd88,'built_in':'case-lambda\x20call/cc\x20class\x20define-class\x20exit-handler\x20field\x20import\x20inherit\x20init-field\x20interface\x20let*-values\x20let-values\x20let/ec\x20mixin\x20opt-lambda\x20override\x20protect\x20provide\x20public\x20rename\x20require\x20require-for-syntax\x20syntax\x20syntax-case\x20syntax-error\x20unit/sig\x20unless\x20when\x20with-syntax\x20and\x20begin\x20call-with-current-continuation\x20call-with-input-file\x20call-with-output-file\x20case\x20cond\x20define\x20define-syntax\x20delay\x20do\x20dynamic-wind\x20else\x20for-each\x20if\x20lambda\x20let\x20let*\x20let-syntax\x20letrec\x20letrec-syntax\x20map\x20or\x20syntax-rules\x20\x27\x20*\x20+\x20,\x20,@\x20-\x20...\x20/\x20;\x20<\x20<=\x20=\x20=>\x20>\x20>=\x20`\x20abs\x20acos\x20angle\x20append\x20apply\x20asin\x20assoc\x20assq\x20assv\x20atan\x20boolean?\x20caar\x20cadr\x20call-with-input-file\x20call-with-output-file\x20call-with-values\x20car\x20cdddar\x20cddddr\x20cdr\x20ceiling\x20char->integer\x20char-alphabetic?\x20ch
ar-ci<=?\x20char-ci=?\x20char-ci>?\x20char-downcase\x20char-lower-case?\x20char-numeric?\x20char-ready?\x20char-upcase\x20char-upper-case?\x20char-whitespace?\x20char<=?\x20char=?\x20char>?\x20char?\x20close-input-port\x20close-output-port\x20complex?\x20cons\x20cos\x20current-input-port\x20current-output-port\x20denominator\x20display\x20eof-object?\x20eq?\x20equal?\x20eqv?\x20eval\x20even?\x20exact->inexact\x20exact?\x20exp\x20expt\x20floor\x20force\x20gcd\x20imag-part\x20inexact->exact\x20inexact?\x20input-port?\x20integer->char\x20integer?\x20interaction-environment\x20lcm\x20length\x20list\x20list->string\x20list->vector\x20list-ref\x20list-tail\x20list?\x20load\x20log\x20magnitude\x20make-polar\x20make-rectangular\x20make-string\x20make-vector\x20max\x20member\x20memq\x20memv\x20min\x20modulo\x20negative?\x20newline\x20not\x20null-environment\x20null?\x20number->string\x20number?\x20numerator\x20odd?\x20open-input-file\x20open-output-file\x20output-port?\x20pair?\x20peek-char\x20port?\x20positive?\x20procedure?\x20quasiquote\x20quote\x20quotient\x20rational?\x20rationalize\x20read\x20read-char\x20real-part\x20real?\x20remainder\x20reverse\x20round\x20scheme-report-environment\x20set!\x20set-car!\x20set-cdr!\x20sin\x20sqrt\x20string\x20string->list\x20string->number\x20string->symbol\x20string-append\x20string-ci<=?\x20string-ci=?\x20string-ci>?\x20string-copy\x20string-fill!\x20string-length\x20string-ref\x20string-set!\x20string<=?\x20string=?\x20string>?\x20string?\x20substring\x20symbol->string\x20symbol?\x20tan\x20transcript-off\x20transcript-on\x20truncate\x20values\x20vector\x20vector->list\x20vector-fill!\x20vector-length\x20vector-ref\x20vector-set!\x20with-input-from-file\x20with-output-to-file\x20write\x20write-char\x20zero?'},_0x1b9de2={'className':_0x3cf204(0x2706),'begin':'(#t|#f|#\x5c\x5c'+_0x1edd88+'|#\x5c\x5c.)'},_0x2a0d8f={'className':_0x3cf204(0x4a80),'variants':[{'begin':_0x68a530,'relevance':0x0},{'begin':_0x68a530+_0x3cf204(0x2a49)+_0x68a5
30+'i','relevance':0x0},{'begin':'#b[0-1]+(/[0-1]+)?'},{'begin':_0x3cf204(0x1a71)},{'begin':'#x[0-9a-f]+(/[0-9a-f]+)?'}]},_0x2029ff=_0x3d3775[_0x3cf204(0x291b)],_0x4a9fa8=[_0x3d3775[_0x3cf204(0x4e4f)](';','$',{'relevance':0x0}),_0x3d3775['COMMENT'](_0x3cf204(0x33d2),'\x5c|#')],_0x3f61c2={'begin':_0x1edd88,'relevance':0x0},_0x4dc39a={'className':'symbol','begin':'\x27'+_0x1edd88},_0x20d250={'endsWithParent':!0x0,'relevance':0x0},_0x4c35bb={'variants':[{'begin':/'/},{'begin':'`'}],'contains':[{'begin':'\x5c(','end':'\x5c)','contains':[_0x3cf204(0x4454),_0x1b9de2,_0x2029ff,_0x2a0d8f,_0x3f61c2,_0x4dc39a]}]},_0x35909b={'className':_0x3cf204(0x11d8),'relevance':0x0,'begin':_0x1edd88,'keywords':_0x19eb3a},_0x3a96f2={'variants':[{'begin':'\x5c(','end':'\x5c)'},{'begin':'\x5c[','end':'\x5c]'}],'contains':[{'begin':/lambda/,'endsWithParent':!0x0,'returnBegin':!0x0,'contains':[_0x35909b,{'endsParent':!0x0,'variants':[{'begin':/\(/,'end':/\)/},{'begin':/\[/,'end':/\]/}],'contains':[_0x3f61c2]}]},_0x35909b,_0x20d250]};return _0x20d250[_0x3cf204(0x2b31)]=[_0x1b9de2,_0x2a0d8f,_0x2029ff,_0x3f61c2,_0x4dc39a,_0x4c35bb,_0x3a96f2]['concat'](_0x4a9fa8),{'name':_0x3cf204(0x1e89),'aliases':[_0x3cf204(0x4221)],'illegal':/\S/,'contains':[_0x3d3775['SHEBANG'](),_0x2a0d8f,_0x2029ff,_0x4dc39a,_0x4c35bb,_0x3a96f2][_0x3cf204(0x1d1d)](_0x4a9fa8)};};},0x18b:_0x3d26f1=>{const _0x3015bc=a0_0x11e7;_0x3d26f1[_0x3015bc(0x474c)]=function(_0x5394ad){const 
_0x29e2d4=_0x3015bc,_0x4ab6dd=[_0x5394ad[_0x29e2d4(0xd12)],{'className':'string','begin':_0x29e2d4(0x3bb4),'end':_0x29e2d4(0x3bb4),'contains':[_0x5394ad[_0x29e2d4(0x4a76)],{'begin':'\x27\x27'}]}];return{'name':_0x29e2d4(0xe58),'aliases':[_0x29e2d4(0x18c6)],'keywords':{'$pattern':/%?\w+/,'keyword':_0x29e2d4(0x301c),'literal':_0x29e2d4(0x13a1),'built_in':_0x29e2d4(0x4874)},'illegal':_0x29e2d4(0x279c),'contains':[{'className':_0x29e2d4(0x14b2),'beginKeywords':_0x29e2d4(0x14b2),'end':'$','contains':[_0x5394ad[_0x29e2d4(0xb0e)],{'className':_0x29e2d4(0xddd),'begin':'\x5c(','end':'\x5c)'}]},{'begin':_0x29e2d4(0x2908),'relevance':0x0},{'begin':'\x5c[','end':_0x29e2d4(0x28fd),'relevance':0x0,'contains':_0x4ab6dd},_0x5394ad[_0x29e2d4(0x4e4f)]('//','$')]['concat'](_0x4ab6dd)};};},0x64b:_0x50766f=>{const _0x6b5bac=a0_0x11e7,_0x211d3c=['a',_0x6b5bac(0xd80),'address',_0x6b5bac(0x51c1),'aside',_0x6b5bac(0x645),'b',_0x6b5bac(0x702),'body',_0x6b5bac(0x18b7),_0x6b5bac(0x3115),'caption',_0x6b5bac(0x447d),_0x6b5bac(0x4948),'dd',_0x6b5bac(0x109c),_0x6b5bac(0x2dd7),'dfn','div','dl','dt','em',_0x6b5bac(0x31c0),_0x6b5bac(0x4252),_0x6b5bac(0x12f4),_0x6b5bac(0x3af9),_0x6b5bac(0x31e0),'h1','h2','h3','h4','h5','h6','header','hgroup',_0x6b5bac(0x2acd),'i',_0x6b5bac(0x2168),_0x6b5bac(0x3045),_0x6b5bac(0x7b0),_0x6b5bac(0x4432),'kbd',_0x6b5bac(0x3b71),_0x6b5bac(0x201b),'li',_0x6b5bac(0x3212),_0x6b5bac(0x1af6),_0x6b5bac(0x3888),'nav',_0x6b5bac(0x20c7),'ol','p','q','quote','samp','section',_0x6b5bac(0x2bbd),_0x6b5bac(0x40c9),_0x6b5bac(0x4829),_0x6b5bac(0xb95),_0x6b5bac(0x1639),_0x6b5bac(0x2e69),'td',_0x6b5bac(0x39cd),_0x6b5bac(0x2f6d),'th',_0x6b5bac(0x46d4),_0x6b5bac(0x51b6),'tr','ul',_0x6b5bac(0x469d),_0x6b5bac(0xcde)],_0x6fd2fa=[_0x6b5bac(0x3f0b),_0x6b5bac(0x4435),_0x6b5bac(0x1bc1),_0x6b5bac(0xe81),'color-gamut',_0x6b5bac(0x2dc3),'device-aspect-ratio',_0x6b5bac(0x2dc),_0x6b5bac(0x4790),_0x6b5bac(0x4a4d),'forced-colors',_0x6b5bac(0x2461),_0x6b5bac(0x3cd6),_0x6b5bac(0x3d6d),_0x6b5bac(0x5231),_0x6b5
bac(0x3dc),_0x6b5bac(0xc6a),_0x6b5bac(0xe97),'overflow-inline',_0x6b5bac(0x43e4),_0x6b5bac(0x1b7f),_0x6b5bac(0x4099),_0x6b5bac(0x228c),_0x6b5bac(0x2c6b),_0x6b5bac(0x4de3),_0x6b5bac(0x4318),_0x6b5bac(0x39d8),_0x6b5bac(0x38d6),_0x6b5bac(0x17d2),_0x6b5bac(0x2c06),_0x6b5bac(0x3e30),_0x6b5bac(0x2613),_0x6b5bac(0x2ebc)],_0x3ead5a=[_0x6b5bac(0x182f),'any-link',_0x6b5bac(0x2bd),_0x6b5bac(0x494d),'current','default','defined',_0x6b5bac(0x177b),_0x6b5bac(0x3f89),_0x6b5bac(0x41fc),_0x6b5bac(0x168c),_0x6b5bac(0x223e),'first',_0x6b5bac(0x14ca),'first-of-type',_0x6b5bac(0x363),_0x6b5bac(0x3ee),_0x6b5bac(0x4ba),'focus-visible',_0x6b5bac(0x1874),'has',_0x6b5bac(0x1a61),_0x6b5bac(0x933),_0x6b5bac(0x3d6d),_0x6b5bac(0x33a8),_0x6b5bac(0x1418),'invalid','is','lang',_0x6b5bac(0x11ef),_0x6b5bac(0x69f),_0x6b5bac(0x48eb),_0x6b5bac(0x4b32),_0x6b5bac(0xaea),_0x6b5bac(0xc1a),_0x6b5bac(0x2fee),_0x6b5bac(0x3d31),_0x6b5bac(0x3837),'nth-last-col',_0x6b5bac(0x17bb),_0x6b5bac(0x4153),_0x6b5bac(0x12a4),_0x6b5bac(0x5133),_0x6b5bac(0x51e4),_0x6b5bac(0x48d0),_0x6b5bac(0x38e6),_0x6b5bac(0x38ef),_0x6b5bac(0x1128),_0x6b5bac(0x3229),_0x6b5bac(0x3e07),_0x6b5bac(0x4d50),_0x6b5bac(0x507b),_0x6b5bac(0x4cd),'target',_0x6b5bac(0x4579),_0x6b5bac(0x4147),'valid',_0x6b5bac(0xd18),_0x6b5bac(0x3b62)],_0x4aeb9a=[_0x6b5bac(0x1349),_0x6b5bac(0x313),_0x6b5bac(0x5097),_0x6b5bac(0x18bf),_0x6b5bac(0x255f),'first-letter',_0x6b5bac(0x42e5),'grammar-error',_0x6b5bac(0x5121),'part',_0x6b5bac(0x10e2),'selection',_0x6b5bac(0x3e0e),_0x6b5bac(0x3c13)],_0x535603=[_0x6b5bac(0x31e),'align-items',_0x6b5bac(0xe82),_0x6b5bac(0xc36),_0x6b5bac(0x148a),_0x6b5bac(0x1b0e),_0x6b5bac(0x206f),_0x6b5bac(0x2c5c),_0x6b5bac(0x4fd3),'animation-iteration-count',_0x6b5bac(0x1efe),'animation-play-state','animation-timing-function',_0x6b5bac(0x460d),_0x6b5bac(0x1471),'background-attachment','background-blend-mode',_0x6b5bac(0x1195),_0x6b5bac(0x11ac),'background-image','background-origin','background-position','background-repeat',_0x6b5bac(0x4065),'block-s
ize',_0x6b5bac(0x369a),_0x6b5bac(0x49ba),_0x6b5bac(0x3374),_0x6b5bac(0x2bbf),'border-block-end-color',_0x6b5bac(0x43ac),_0x6b5bac(0xbbb),_0x6b5bac(0x34f),_0x6b5bac(0x480a),_0x6b5bac(0x2a76),_0x6b5bac(0x3b35),'border-block-style',_0x6b5bac(0x4d37),_0x6b5bac(0x129e),_0x6b5bac(0x1fda),'border-bottom-left-radius','border-bottom-right-radius',_0x6b5bac(0x4491),_0x6b5bac(0x2855),'border-collapse',_0x6b5bac(0x4ede),'border-image','border-image-outset',_0x6b5bac(0x19a5),'border-image-slice',_0x6b5bac(0x4d0),'border-image-width',_0x6b5bac(0x6cc),_0x6b5bac(0x21cd),_0x6b5bac(0x2a98),_0x6b5bac(0xb81),_0x6b5bac(0x3c06),'border-inline-end-width','border-inline-start',_0x6b5bac(0x3152),_0x6b5bac(0x2444),_0x6b5bac(0x1c4c),_0x6b5bac(0x1b8d),_0x6b5bac(0xb54),_0x6b5bac(0x953),_0x6b5bac(0x82b),'border-left-style',_0x6b5bac(0x1b93),_0x6b5bac(0x2557),'border-right',_0x6b5bac(0x4fdc),_0x6b5bac(0x4a2d),_0x6b5bac(0x2fd6),'border-spacing',_0x6b5bac(0x18e8),_0x6b5bac(0x49df),_0x6b5bac(0x5090),_0x6b5bac(0x1be7),_0x6b5bac(0x106f),_0x6b5bac(0x4b1c),_0x6b5bac(0x1b9b),_0x6b5bac(0x24a4),_0x6b5bac(0x3335),_0x6b5bac(0x1db9),_0x6b5bac(0x4419),'box-sizing',_0x6b5bac(0x2ce9),_0x6b5bac(0x11a9),_0x6b5bac(0x425d),'caption-side',_0x6b5bac(0x1cb2),_0x6b5bac(0x4933),'clip',_0x6b5bac(0x19a3),_0x6b5bac(0x1c7c),'color',_0x6b5bac(0x2c2e),'column-fill',_0x6b5bac(0x4679),_0x6b5bac(0x1175),_0x6b5bac(0x2ccf),'column-rule-style','column-rule-width',_0x6b5bac(0x3085),'column-width','columns','contain',_0x6b5bac(0x484f),_0x6b5bac(0xc79),_0x6b5bac(0x2aa6),_0x6b5bac(0x43c7),_0x6b5bac(0x18bf),_0x6b5bac(0x377d),_0x6b5bac(0x18ef),_0x6b5bac(0x824),_0x6b5bac(0x296c),_0x6b5bac(0x12ca),_0x6b5bac(0x22c7),_0x6b5bac(0x1465),_0x6b5bac(0x2a07),_0x6b5bac(0x2c01),_0x6b5bac(0x295a),_0x6b5bac(0x494f),_0x6b5bac(0x3458),_0x6b5bac(0x2a69),_0x6b5bac(0x32ae),_0x6b5bac(0x1ab8),_0x6b5bac(0x4b13),_0x6b5bac(0xe53),_0x6b5bac(0x2019),_0x6b5bac(0x40de),_0x6b5bac(0xeea),_0x6b5bac(0x3048),_0x6b5bac(0x44a5),_0x6b5bac(0x464),_0x6b5bac(0x750),_0x6b5bac(0
x4a87),_0x6b5bac(0x1d98),_0x6b5bac(0x3764),_0x6b5bac(0x4fd6),_0x6b5bac(0xb7a),_0x6b5bac(0x2694),_0x6b5bac(0x3afe),_0x6b5bac(0x2eae),_0x6b5bac(0x2813),_0x6b5bac(0x474f),_0x6b5bac(0x4c7a),_0x6b5bac(0x417d),_0x6b5bac(0x44f8),_0x6b5bac(0x32f0),'grid','grid-area',_0x6b5bac(0xf9c),_0x6b5bac(0x3a86),'grid-auto-rows',_0x6b5bac(0x3058),_0x6b5bac(0x3cf),'grid-column-start',_0x6b5bac(0x28a9),'grid-row',_0x6b5bac(0x48ab),_0x6b5bac(0x3ffa),'grid-template',_0x6b5bac(0x3fda),_0x6b5bac(0x2bb0),_0x6b5bac(0xef6),_0x6b5bac(0xc89),_0x6b5bac(0x3cd6),'hyphens',_0x6b5bac(0x169e),_0x6b5bac(0x3e53),_0x6b5bac(0x3f79),_0x6b5bac(0x2833),_0x6b5bac(0x23ed),'inline-size',_0x6b5bac(0x3ba0),_0x6b5bac(0xedd),'left',_0x6b5bac(0x4e98),_0x6b5bac(0x2352),'line-height','list-style',_0x6b5bac(0x1438),_0x6b5bac(0x1a28),'list-style-type',_0x6b5bac(0x2b9f),_0x6b5bac(0x1e53),_0x6b5bac(0x4273),_0x6b5bac(0x562),_0x6b5bac(0x90d),_0x6b5bac(0x249b),'margin-inline-end','margin-inline-start',_0x6b5bac(0x26a1),_0x6b5bac(0x5022),_0x6b5bac(0xefe),_0x6b5bac(0x4b5c),'mask',_0x6b5bac(0x1ebe),_0x6b5bac(0x140d),_0x6b5bac(0x1706),_0x6b5bac(0x2c4f),_0x6b5bac(0xad0),'mask-border-source','mask-border-width',_0x6b5bac(0x897),_0x6b5bac(0x504b),'mask-image',_0x6b5bac(0x3dfd),_0x6b5bac(0x498f),_0x6b5bac(0x321c),_0x6b5bac(0x3d6b),'mask-size','mask-type','max-block-size',_0x6b5bac(0x2ebc),'max-inline-size',_0x6b5bac(0x3e30),_0x6b5bac(0x4000),_0x6b5bac(0x2613),_0x6b5bac(0x446d),_0x6b5bac(0x2c06),_0x6b5bac(0x2b16),_0x6b5bac(0x516a),_0x6b5bac(0x4b63),'nav-left',_0x6b5bac(0x467),'nav-up',_0x6b5bac(0x28b),_0x6b5bac(0x47d),_0x6b5bac(0x3bd7),_0x6b5bac(0x1541),'opacity',_0x6b5bac(0xd8d),_0x6b5bac(0x4840),_0x6b5bac(0x2a21),_0x6b5bac(0x4ae0),_0x6b5bac(0x40fb),_0x6b5bac(0x230d),_0x6b5bac(0x282),_0x6b5bac(0xa69),_0x6b5bac(0xdad),_0x6b5bac(0x49a7),'overflow-y',_0x6b5bac(0x5252),'padding-block',_0x6b5bac(0x4012),_0x6b5bac(0x2511),_0x6b5bac(0x5060),_0x6b5bac(0x1aac),'padding-inline-end',_0x6b5bac(0x12fa),'padding-left',_0x6b5bac(0x41de),_0x6b5bac(0
xb57),_0x6b5bac(0x45a),_0x6b5bac(0x2d8f),_0x6b5bac(0x1e45),_0x6b5bac(0x468c),_0x6b5bac(0x10c9),_0x6b5bac(0x295c),_0x6b5bac(0x2b0d),_0x6b5bac(0x22c6),_0x6b5bac(0x23a0),'position','quotes',_0x6b5bac(0x4df5),'rest',_0x6b5bac(0x58c),'rest-before',_0x6b5bac(0x4d50),_0x6b5bac(0x133b),_0x6b5bac(0x4fcd),_0x6b5bac(0xa40),_0x6b5bac(0x2d9b),_0x6b5bac(0x2914),'scroll-margin-bottom',_0x6b5bac(0x21e3),_0x6b5bac(0x252e),'scroll-margin-inline-start',_0x6b5bac(0x10d2),_0x6b5bac(0x277d),_0x6b5bac(0x4e95),_0x6b5bac(0x3796),_0x6b5bac(0x328a),_0x6b5bac(0x35f3),_0x6b5bac(0x230b),_0x6b5bac(0x2bef),_0x6b5bac(0x13cc),_0x6b5bac(0x11ba),_0x6b5bac(0x4761),_0x6b5bac(0x4705),_0x6b5bac(0x1a7a),_0x6b5bac(0x25b3),_0x6b5bac(0x2934),_0x6b5bac(0x3c84),_0x6b5bac(0x4392),_0x6b5bac(0x337),_0x6b5bac(0xaaf),_0x6b5bac(0x4a35),_0x6b5bac(0x24d9),'shape-margin',_0x6b5bac(0x2e55),_0x6b5bac(0x4bba),_0x6b5bac(0x2df4),'src',_0x6b5bac(0x2ff2),'table-layout',_0x6b5bac(0x3e47),_0x6b5bac(0x222f),_0x6b5bac(0x1470),_0x6b5bac(0x40ab),'text-decoration',_0x6b5bac(0x4c70),'text-decoration-line',_0x6b5bac(0x1866),_0x6b5bac(0x178f),_0x6b5bac(0x3d6a),_0x6b5bac(0x4866),_0x6b5bac(0x246d),_0x6b5bac(0x541),'text-justify',_0x6b5bac(0x22d7),_0x6b5bac(0x3d18),_0x6b5bac(0x25da),'text-shadow','text-transform','text-underline-position',_0x6b5bac(0x279d),'transform','transform-box',_0x6b5bac(0xb11),_0x6b5bac(0x478c),_0x6b5bac(0x427e),_0x6b5bac(0x2d3e),_0x6b5bac(0xce9),'transition-property',_0x6b5bac(0x416a),_0x6b5bac(0x4074),_0x6b5bac(0x1677),_0x6b5bac(0x4703),_0x6b5bac(0x792),'voice-duration',_0x6b5bac(0xd49),_0x6b5bac(0x2483),_0x6b5bac(0x1002),_0x6b5bac(0x22e9),_0x6b5bac(0x5055),_0x6b5bac(0x4f11),'white-space','widows',_0x6b5bac(0x17d2),_0x6b5bac(0x4258),_0x6b5bac(0x27ad),_0x6b5bac(0xe5e),'word-wrap','writing-mode','z-index'][_0x6b5bac(0x78b)]();_0x50766f[_0x6b5bac(0x474c)]=function(_0x4a2d01){const 
_0x1bae27=_0x6b5bac,_0x51c127=(_0x4f1ae7=>({'IMPORTANT':{'scope':_0x1bae27(0x5153),'begin':_0x1bae27(0x2293)},'BLOCK_COMMENT':_0x4f1ae7[_0x1bae27(0x23fe)],'HEXCOLOR':{'scope':'number','begin':/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},'FUNCTION_DISPATCH':{'className':_0x1bae27(0x43a),'begin':/[\w-]+(?=\()/},'ATTRIBUTE_SELECTOR_MODE':{'scope':_0x1bae27(0x4f99),'begin':/\[/,'end':/\]/,'illegal':'$','contains':[_0x4f1ae7[_0x1bae27(0xa4c)],_0x4f1ae7[_0x1bae27(0x291b)]]},'CSS_NUMBER_MODE':{'scope':_0x1bae27(0x4a80),'begin':_0x4f1ae7[_0x1bae27(0x5047)]+_0x1bae27(0xf71),'relevance':0x0},'CSS_VARIABLE':{'className':_0x1bae27(0x431d),'begin':/--[A-Za-z_][A-Za-z0-9_-]*/}}))(_0x4a2d01),_0x3b9c71=_0x4aeb9a,_0x40223b=_0x3ead5a,_0x6e40e2=_0x1bae27(0x2768),_0x38ca21={'className':'variable','begin':'(\x5c$[a-zA-Z-][a-zA-Z0-9_-]*)\x5cb','relevance':0x0};return{'name':_0x1bae27(0x2712),'case_insensitive':!0x0,'illegal':_0x1bae27(0x3eaf),'contains':[_0x4a2d01[_0x1bae27(0x2ae2)],_0x4a2d01['C_BLOCK_COMMENT_MODE'],_0x51c127[_0x1bae27(0x46dc)],{'className':_0x1bae27(0x3713),'begin':_0x1bae27(0x37c7),'relevance':0x0},{'className':'selector-class','begin':'\x5c.[A-Za-z0-9_-]+','relevance':0x0},_0x51c127[_0x1bae27(0x3880)],{'className':_0x1bae27(0x527d),'begin':'\x5cb('+_0x211d3c[_0x1bae27(0x3541)]('|')+_0x1bae27(0x716),'relevance':0x0},{'className':_0x1bae27(0x277a),'begin':':('+_0x40223b[_0x1bae27(0x3541)]('|')+')'},{'className':_0x1bae27(0x277a),'begin':_0x1bae27(0x2b0)+_0x3b9c71[_0x1bae27(0x3541)]('|')+')'},_0x38ca21,{'begin':/\(/,'end':/\)/,'contains':[_0x51c127[_0x1bae27(0x46dc)]]},_0x51c127[_0x1bae27(0x21be)],{'className':_0x1bae27(0x263f),'begin':_0x1bae27(0x4cd5)+_0x535603[_0x1bae27(0x3541)]('|')+_0x1bae27(0x716)},{'begin':_0x1bae27(0x36cb)},{'begin':/:/,'end':/[;}{]/,'relevance':0x0,'contains':[_0x51c127[_0x1bae27(0x4213)],_0x38ca21,_0x51c127[_0x1bae27(0x1469)],_0x51c127[_0x1bae27(0x46dc)],_0x4a2d01['QUOTE_STRING_MODE'],_0x4a2d01[_0x1bae27(0xa4c)],_0x51c127[_0x1bae27(0x3df6
)],_0x51c127[_0x1bae27(0x47a9)]]},{'begin':_0x1bae27(0x38c4),'keywords':{'$pattern':_0x6e40e2,'keyword':_0x1bae27(0x127f)}},{'begin':'@','end':_0x1bae27(0x4dcc),'returnBegin':!0x0,'keywords':{'$pattern':/[a-z-]+/,'keyword':_0x1bae27(0x29d9),'attribute':_0x6fd2fa[_0x1bae27(0x3541)]('\x20')},'contains':[{'begin':_0x6e40e2,'className':_0x1bae27(0x1357)},{'begin':/[a-z-]+(?=:)/,'className':_0x1bae27(0x263f)},_0x38ca21,_0x4a2d01[_0x1bae27(0x291b)],_0x4a2d01[_0x1bae27(0xa4c)],_0x51c127[_0x1bae27(0x1469)],_0x51c127[_0x1bae27(0x46dc)]]},_0x51c127[_0x1bae27(0x47a9)]]};};},0x226d:_0x59719e=>{const _0xafce3e=a0_0x11e7;_0x59719e[_0xafce3e(0x474c)]=function(_0x4a7fa0){const _0x3ae3ee=_0xafce3e;return{'name':_0x3ae3ee(0x1476),'aliases':['console',_0x3ae3ee(0x38d8)],'contains':[{'className':_0x3ae3ee(0x4cea),'begin':/^\s{0,3}[/~\w\d[\]()@-]*[>%$#][ ]?/,'starts':{'end':/[^\\](?=\s*$)/,'subLanguage':'bash'}}]};};},0x1017:_0x42dc09=>{const _0x495d51=a0_0x11e7;_0x42dc09[_0x495d51(0x474c)]=function(_0x913386){const 
_0x2868bb=_0x495d51,_0x54cea9=[_0x2868bb(0x362c),'and',_0x2868bb(0x2637),_0x2868bb(0x4ea7),_0x2868bb(0x4ee2),_0x2868bb(0xc01),'div','double',_0x2868bb(0x1ab8),_0x2868bb(0x139c),'if',_0x2868bb(0xc16),_0x2868bb(0x324f),'move',_0x2868bb(0x4f05),'neg',_0x2868bb(0x4321),_0x2868bb(0x29d8),'not','or',_0x2868bb(0x22e5),_0x2868bb(0xdfd),'shl',_0x2868bb(0x389f),'sput',_0x2868bb(0x217c),_0x2868bb(0x383),_0x2868bb(0x3014),_0x2868bb(0x32a6)];return{'name':_0x2868bb(0x4dcb),'contains':[{'className':_0x2868bb(0x2431),'begin':'\x22','end':'\x22','relevance':0x0},_0x913386['COMMENT']('#','$',{'relevance':0x0}),{'className':_0x2868bb(0x1357),'variants':[{'begin':'\x5cs*\x5c.end\x5cs[a-zA-Z0-9]*'},{'begin':_0x2868bb(0x42c3),'relevance':0x0},{'begin':'\x5cs:[a-zA-Z_0-9]*','relevance':0x0},{'begin':_0x2868bb(0x216b)+['transient',_0x2868bb(0x4514),_0x2868bb(0x3027),_0x2868bb(0x27e4),_0x2868bb(0x415f),_0x2868bb(0x39ce),_0x2868bb(0x4ef4),_0x2868bb(0xc14),_0x2868bb(0x2c7c),_0x2868bb(0x83f),'system'][_0x2868bb(0x3541)]('|')+')'}]},{'className':_0x2868bb(0x43a),'variants':[{'begin':'\x5cs('+_0x54cea9[_0x2868bb(0x3541)]('|')+')\x5cs'},{'begin':'\x5cs('+_0x54cea9['join']('|')+')((-|/)[a-zA-Z0-9]+)+\x5cs','relevance':0xa},{'begin':_0x2868bb(0x216b)+[_0x2868bb(0x4707),_0x2868bb(0x2b55),_0x2868bb(0x26f6),_0x2868bb(0x4e5c),_0x2868bb(0x2162),_0x2868bb(0x1fe5),_0x2868bb(0xe21),_0x2868bb(0x2259),_0x2868bb(0x6f4),_0x2868bb(0x1c79),'instance',_0x2868bb(0x3a9a),_0x2868bb(0x39af),_0x2868bb(0x2d78),_0x2868bb(0x32f4),_0x2868bb(0x1b94),_0x2868bb(0x35d3)][_0x2868bb(0x3541)]('|')+_0x2868bb(0x1846),'relevance':0xa}]},{'className':'class','begin':_0x2868bb(0x16b2),'relevance':0x0},{'begin':'[vp][0-9]+'}]};};},0x4e:_0x26f234=>{const _0x2ccacc=a0_0x11e7;_0x26f234[_0x2ccacc(0x474c)]=function(_0x29c120){const 
_0x13e2da=_0x2ccacc,_0x433ac2=_0x13e2da(0x28a),_0x129567={'className':_0x13e2da(0x2431),'begin':_0x13e2da(0x3acf)},_0x14ac2c={'className':'symbol','begin':'#'+_0x29c120[_0x13e2da(0x206e)]};return{'name':_0x13e2da(0x2d27),'aliases':['st'],'keywords':[_0x13e2da(0x4454),_0x13e2da(0x2cc),_0x13e2da(0x3e27),'true','false',_0x13e2da(0xefc)],'contains':[_0x29c120['COMMENT']('\x22','\x22'),_0x29c120[_0x13e2da(0xa4c)],{'className':_0x13e2da(0xcfc),'begin':'\x5cb[A-Z][A-Za-z0-9_]*','relevance':0x0},{'begin':_0x433ac2+':','relevance':0x0},_0x29c120[_0x13e2da(0xd12)],_0x14ac2c,_0x129567,{'begin':_0x13e2da(0x35ee)+_0x433ac2+'([\x20]+'+_0x433ac2+_0x13e2da(0x692),'returnBegin':!0x0,'end':/\|/,'illegal':/\S/,'contains':[{'begin':_0x13e2da(0x34e0)+_0x433ac2}]},{'begin':_0x13e2da(0x2d91),'end':'\x5c)','contains':[_0x29c120[_0x13e2da(0xa4c)],_0x129567,_0x29c120[_0x13e2da(0xd12)],_0x14ac2c]}]};};},0x1b2f:_0x31224d=>{const _0x4a4b62=a0_0x11e7;_0x31224d[_0x4a4b62(0x474c)]=function(_0x588af0){const _0x295eae=_0x4a4b62;return{'name':'SML\x20(Standard\x20ML)','aliases':['ml'],'keywords':{'$pattern':_0x295eae(0x4525),'keyword':_0x295eae(0x101a),'built_in':'array\x20bool\x20char\x20exn\x20int\x20list\x20option\x20order\x20real\x20ref\x20string\x20substring\x20vector\x20unit\x20word','literal':_0x295eae(0x4e8c)},'illegal':/\/\/|>>/,'contains':[{'className':_0x295eae(0x2706),'begin':/\[(\|\|)?\]|\(\)/,'relevance':0x0},_0x588af0[_0x295eae(0x4e4f)]('\x5c(\x5c*',_0x295eae(0x17ad),{'contains':['self']}),{'className':'symbol','begin':_0x295eae(0x495e)},{'className':_0x295eae(0xcfc),'begin':_0x295eae(0x45fb)},{'className':'type','begin':_0x295eae(0x2013),'relevance':0x0},{'begin':_0x295eae(0x3a44)},_0x588af0[_0x295eae(0x46a1)](_0x588af0[_0x295eae(0xa4c)],{'className':_0x295eae(0x2431),'relevance':0x0}),_0x588af0[_0x295eae(0x46a1)](_0x588af0[_0x295eae(0x291b)],{'illegal':null}),{'className':_0x295eae(0x4a80),'begin':'\x5cb(0[xX][a-fA-F0-9_]+[Lln]?|0[oO][0-7_]+[Lln]?|0[bB][01_]+[Lln]?|[0-9][0-9_]*([Lln]
|(\x5c.[0-9_]*)?([eE][-+]?[0-9_]+)?)?)','relevance':0x0},{'begin':/[-=]>/}]};};},0x18b5:_0x2f357e=>{const _0xf3c1b2=a0_0x11e7;_0x2f357e[_0xf3c1b2(0x474c)]=function(_0x5136c5){const _0x374e21=_0xf3c1b2,_0xf18714={'className':_0x374e21(0x2431),'variants':[{'begin':'\x22','end':'\x22','contains':[{'begin':'\x22\x22','relevance':0x0}]},{'begin':'\x27','end':'\x27','contains':[{'begin':'\x27\x27','relevance':0x0}]}]},_0x143342={'className':_0x374e21(0x5153),'begin':/#\s*[a-z]+\b/,'end':/$/,'keywords':'define\x20undef\x20ifdef\x20ifndef\x20else\x20endif\x20include\x20if','contains':[{'begin':/\\\n/,'relevance':0x0},_0x5136c5[_0x374e21(0x46a1)](_0xf18714,{'className':_0x374e21(0x2431)}),{'begin':/<[^\n>]*>/,'end':/$/,'illegal':'\x5cn'},_0x5136c5['C_LINE_COMMENT_MODE'],_0x5136c5[_0x374e21(0x23fe)]]};return{'name':'SQF','case_insensitive':!0x0,'keywords':{'keyword':[_0x374e21(0x4e10),_0x374e21(0x20b2),_0x374e21(0x1506),_0x374e21(0x4786),_0x374e21(0x2e7e),_0x374e21(0x31a3),'continue',_0x374e21(0x3b5d),_0x374e21(0x3d23),'do',_0x374e21(0x3d4),'exit',_0x374e21(0x1133),_0x374e21(0x3c19),_0x374e21(0xa21),_0x374e21(0x27e6),'if',_0x374e21(0x16a7),_0x374e21(0x4ef4),'switch',_0x374e21(0xf8e),'then',_0x374e21(0x383),'to',_0x374e21(0x422b),'waitUntil','while','with'],'built_in':[_0x374e21(0xbe0),_0x374e21(0x3647),_0x374e21(0x2c6e),_0x374e21(0x1a54),_0x374e21(0x2ba9),_0x374e21(0xe89),_0x374e21(0x2c30),_0x374e21(0x2146),_0x374e21(0x20e),_0x374e21(0x1368),_0x374e21(0x3852),_0x374e21(0x3650),_0x374e21(0x3501),_0x374e21(0x1561),_0x374e21(0x251),_0x374e21(0x10fc),_0x374e21(0x24a9),'add3DENEventHandler','add3DENLayer',_0x374e21(0x208d),'addBackpack','addBackpackCargo',_0x374e21(0x842),'addBackpackGlobal',_0x374e21(0xd2d),_0x374e21(0x4fec),_0x374e21(0x317d),_0x374e21(0x3578),_0x374e21(0x31ef),_0x374e21(0x25d4),_0x374e21(0x2e7d),_0x374e21(0x4a0a),_0x374e21(0x4334),_0x374e21(0x4172),_0x374e21(0x34b0),'addGoggles','addGroupIcon',_0x374e21(0x1c03),'addHeadgear',_0x374e21(0x36e9),_0x374e21(0x5244),_
0x374e21(0x27a1),_0x374e21(0x158c),_0x374e21(0x44d2),'addItemToUniform','addItemToVest',_0x374e21(0x2b97),_0x374e21(0x4229),_0x374e21(0x4953),_0x374e21(0xb2a),_0x374e21(0x3e0),_0x374e21(0x306d),_0x374e21(0x4299),_0x374e21(0x3260),_0x374e21(0x5006),_0x374e21(0x2f32),'addMenuItem',_0x374e21(0x50e8),'addMPEventHandler',_0x374e21(0x1179),'addonFiles',_0x374e21(0x8a5),_0x374e21(0x12e6),_0x374e21(0x17c9),_0x374e21(0xb8a),_0x374e21(0x3dbd),_0x374e21(0x25a7),_0x374e21(0x2608),_0x374e21(0x1f86),_0x374e21(0x1ee1),_0x374e21(0x1b9a),_0x374e21(0x4f0b),_0x374e21(0x14e7),_0x374e21(0x236c),_0x374e21(0x1cfb),_0x374e21(0x50a5),_0x374e21(0x2dfa),'addVest',_0x374e21(0x3488),_0x374e21(0x9a3),_0x374e21(0x3b3d),_0x374e21(0xcaa),_0x374e21(0x330d),'addWeaponItem','addWeaponPool',_0x374e21(0x40b3),_0x374e21(0x3a2a),_0x374e21(0x30f0),_0x374e21(0x37bf),'agent',_0x374e21(0x4d20),_0x374e21(0x44b5),_0x374e21(0x1bdc),'aimPos',_0x374e21(0x24da),'airDensityRTD',_0x374e21(0x27a7),_0x374e21(0x78c),_0x374e21(0x4c3),_0x374e21(0x7d5),_0x374e21(0x1936),'allActiveTitleEffects',_0x374e21(0x2e73),_0x374e21(0x3668),_0x374e21(0x186c),_0x374e21(0x1020),_0x374e21(0x3331),_0x374e21(0x28f5),_0x374e21(0x25fd),_0x374e21(0x5128),'allDiarySubjects','allDisplays',_0x374e21(0x4edb),_0x374e21(0x757),_0x374e21(0x252b),_0x374e21(0xbd1),_0x374e21(0x34c4),_0x374e21(0x8e1),_0x374e21(0x12af),_0x374e21(0xbc2),_0x374e21(0x3c4c),'allowCuratorLogicIgnoreAreas',_0x374e21(0x4e29),_0x374e21(0x9f4),_0x374e21(0x345b),_0x374e21(0x2080),'allowFleeing',_0x374e21(0x21f1),_0x374e21(0x2fba),_0x374e21(0x37b1),_0x374e21(0x3933),'allSimpleObjects','allSites','allTurrets',_0x374e21(0x6e6),_0x374e21(0x4052),_0x374e21(0x3756),'allVariables',_0x374e21(0x4b90),_0x374e21(0x252c),_0x374e21(0x39e6),_0x374e21(0x2663),'animate',_0x374e21(0x35b9),'animateDoor',_0x374e21(0x1814),_0x374e21(0x3763),'animationNames',_0x374e21(0xa51),_0x374e21(0x3946),'animationState',_0x374e21(0x3e49),_0x374e21(0x366b),_0x374e21(0x4c31),_0x374e21(0x1c48),_0x374e21(0x27b2),_0x
374e21(0x3c15),_0x374e21(0x21ac),_0x374e21(0x1ae7),_0x374e21(0x4fd4),'assignAsCargo',_0x374e21(0x99e),_0x374e21(0x3d3d),_0x374e21(0x219),'assignAsGunner',_0x374e21(0x4ee5),_0x374e21(0x36e0),_0x374e21(0x1f68),_0x374e21(0x2684),_0x374e21(0x1988),_0x374e21(0x298c),_0x374e21(0x4521),_0x374e21(0x3f2),_0x374e21(0x24d3),'assignedTeam',_0x374e21(0x506a),'assignedVehicleRole',_0x374e21(0x41cf),'assignItem',_0x374e21(0x2ae6),_0x374e21(0x1848),_0x374e21(0x3aab),_0x374e21(0x41c2),_0x374e21(0x2fdb),_0x374e21(0x4cf8),_0x374e21(0x384d),'attachedObjects',_0x374e21(0x2d11),'attachObject',_0x374e21(0x5207),_0x374e21(0x501),_0x374e21(0x46ba),_0x374e21(0x4aec),_0x374e21(0x10c0),_0x374e21(0x45e9),'backpackItems',_0x374e21(0x2c33),_0x374e21(0x1206),_0x374e21(0x45a0),_0x374e21(0x4fb1),_0x374e21(0x187b),_0x374e21(0x421b),_0x374e21(0x4f8c),'binocularMagazine','boundingBox',_0x374e21(0x5ec),_0x374e21(0x5262),_0x374e21(0x4aff),'briefingName','buildingExit',_0x374e21(0x405f),_0x374e21(0x4c7c),'buldozer_IsEnabledRoadDiag','buldozer_LoadNewRoads',_0x374e21(0x3608),_0x374e21(0x2a3e),'buttonSetAction',_0x374e21(0x315f),'calculatePath',_0x374e21(0x21f3),'call',_0x374e21(0x41f8),_0x374e21(0x20b),_0x374e21(0x50d6),'camCommitPrepared','camCommitted',_0x374e21(0x2ad1),_0x374e21(0x10b9),_0x374e21(0x2312),_0x374e21(0x1d17),_0x374e21(0x514d),_0x374e21(0x25df),_0x374e21(0x4257),_0x374e21(0x1d4e),'campaignConfigFile',_0x374e21(0x41c1),_0x374e21(0x22b),_0x374e21(0x3d8c),'camPrepareDir','camPrepareDive','camPrepareFocus','camPrepareFov','camPrepareFovRange',_0x374e21(0x1765),_0x374e21(0x2e96),_0x374e21(0x3540),_0x374e21(0xb15),_0x374e21(0x47b0),'camSetDive','camSetFocus',_0x374e21(0x3f87),_0x374e21(0x130d),_0x374e21(0x4384),'camSetRelPos',_0x374e21(0x444d),'camTarget',_0x374e21(0x64b),_0x374e21(0x2dcb),_0x374e21(0xc5b),_0x374e21(0xba4),'canAddItemToVest',_0x374e21(0x1a1a),_0x374e21(0x2163),'canFire',_0x374e21(0x444),'canSlingLoad',_0x374e21(0x591),_0x374e21(0x31ec),_0x374e21(0x21b6),_0x374e21(0x665),_0x374e21
(0xaef),'captive',_0x374e21(0x32cb),_0x374e21(0x32f9),_0x374e21(0x28e3),_0x374e21(0x10aa),_0x374e21(0x117a),_0x374e21(0xa57),_0x374e21(0x45c),_0x374e21(0x3369),_0x374e21(0x1cea),_0x374e21(0x1f89),_0x374e21(0x1c41),_0x374e21(0x41c6),_0x374e21(0x468e),_0x374e21(0x1610),_0x374e21(0x3d7c),'clearGroupIcons',_0x374e21(0x3ae0),_0x374e21(0x4367),'clearItemPool',_0x374e21(0x24d),_0x374e21(0x45c7),_0x374e21(0x5d8),_0x374e21(0x924),_0x374e21(0x24ec),'clearWeaponCargo','clearWeaponCargoGlobal',_0x374e21(0x3070),'clientOwner',_0x374e21(0xd33),_0x374e21(0x37fa),'closeOverlay','collapseObjectTree','collect3DENHistory',_0x374e21(0x3b23),'collisionDisabledWith',_0x374e21(0x1f2a),'combatMode',_0x374e21(0x1e2c),_0x374e21(0x252d),'commander','commandFire',_0x374e21(0x1a7d),'commandFSM','commandGetOut','commandingMenu','commandMove',_0x374e21(0x635),'commandStop',_0x374e21(0x3422),_0x374e21(0x30b7),_0x374e21(0x1e51),_0x374e21(0x4645),'commitOverlay','compatibleItems','compatibleMagazines',_0x374e21(0x23cd),'compileFinal',_0x374e21(0x17d7),_0x374e21(0xcca),_0x374e21(0x44a),_0x374e21(0x2bdf),'configFile',_0x374e21(0x3b5c),_0x374e21(0x137e),_0x374e21(0x43c6),_0x374e21(0x4323),'configSourceAddonList','configSourceMod',_0x374e21(0x3bce),_0x374e21(0x5028),_0x374e21(0x267c),'connectToServer','controlsGroupCtrl',_0x374e21(0xb2f),'copyFromClipboard',_0x374e21(0x3c2e),_0x374e21(0x1c2b),_0x374e21(0x3935),_0x374e21(0x404e),_0x374e21(0x423c),_0x374e21(0x1832),_0x374e21(0x5275),'countType',_0x374e21(0x2617),_0x374e21(0x3fe5),_0x374e21(0x332a),'createAgent','createCenter',_0x374e21(0x28d6),_0x374e21(0x16e3),'createDiaryRecord',_0x374e21(0x361c),'createDisplay',_0x374e21(0x3d52),_0x374e21(0x1dd1),_0x374e21(0x1923),_0x374e21(0x3dd4),_0x374e21(0x15ad),_0x374e21(0x2ec7),_0x374e21(0xb75),'createMarkerLocal',_0x374e21(0x4041),_0x374e21(0x3f4b),_0x374e21(0x1286),'createMPCampaignDisplay',_0x374e21(0x4e8b),_0x374e21(0x2330),_0x374e21(0x3d33),'createSoundSource',_0x374e21(0xd15),_0x374e21(0x1a59),'createTrigge
r',_0x374e21(0x2be6),_0x374e21(0xebf),'createVehicleCrew',_0x374e21(0x41a8),'crew',_0x374e21(0x19c8),_0x374e21(0x2062),_0x374e21(0x1b15),_0x374e21(0x301a),_0x374e21(0x45ce),_0x374e21(0x3c23),_0x374e21(0x49b1),_0x374e21(0x40f9),'ctHeaderCount',_0x374e21(0x2629),'ctRemoveRows',_0x374e21(0x4831),_0x374e21(0x27ef),_0x374e21(0x2c12),_0x374e21(0x1cb8),_0x374e21(0x1fbe),_0x374e21(0x2868),'ctrlAutoScrollDelay','ctrlAutoScrollRewind',_0x374e21(0x3840),_0x374e21(0x222e),'ctrlChecked',_0x374e21(0x3d53),_0x374e21(0xe11),_0x374e21(0x5ae),'ctrlCreate',_0x374e21(0x3c0f),_0x374e21(0xed8),'ctrlEnabled',_0x374e21(0x76d),_0x374e21(0x11de),_0x374e21(0x4ef3),_0x374e21(0x3513),_0x374e21(0x1630),_0x374e21(0x1a3f),_0x374e21(0x1fd),_0x374e21(0x4f18),_0x374e21(0x2dbb),_0x374e21(0x2ae4),_0x374e21(0x4360),_0x374e21(0xe22),_0x374e21(0x2172),_0x374e21(0xa75),_0x374e21(0x4bb6),'ctrlMapSetPosition',_0x374e21(0x302),_0x374e21(0x272b),_0x374e21(0x3f01),_0x374e21(0x2292),_0x374e21(0x4f1c),_0x374e21(0x35b1),_0x374e21(0x4352),_0x374e21(0x2059),_0x374e21(0x47a5),_0x374e21(0x3aaf),_0x374e21(0x494e),_0x374e21(0x3bef),_0x374e21(0x1dda),_0x374e21(0x4fcf),_0x374e21(0x2faf),_0x374e21(0x3ffe),_0x374e21(0x2ada),_0x374e21(0x4639),_0x374e21(0x3989),_0x374e21(0x378f),_0x374e21(0x3ba2),_0x374e21(0xe18),_0x374e21(0xad3),_0x374e21(0x4b1e),'ctrlSetFontH1',_0x374e21(0x140a),_0x374e21(0x141a),_0x374e21(0x26a6),'ctrlSetFontH3','ctrlSetFontH3B','ctrlSetFontH4',_0x374e21(0x3470),_0x374e21(0x5132),_0x374e21(0x34ed),_0x374e21(0x1d4b),_0x374e21(0x4690),_0x374e21(0x45e4),_0x374e21(0x2ad7),_0x374e21(0x9f1),'ctrlSetFontHeightH3',_0x374e21(0x304d),'ctrlSetFontHeightH5',_0x374e21(0x802),'ctrlSetFontHeightSecondary',_0x374e21(0x402a),_0x374e21(0x4ac0),_0x374e21(0x3d60),_0x374e21(0x3fb5),'ctrlSetModel',_0x374e21(0x2f12),_0x374e21(0x4e64),'ctrlSetMousePosition','ctrlSetPixelPrecision',_0x374e21(0x7d2),_0x374e21(0x33c6),_0x374e21(0x413c),_0x374e21(0x3b6b),_0x374e21(0x1955),_0x374e21(0x47e8),'ctrlSetScrollValues',_0x374e21(0x440),_0x37
4e21(0x43e6),_0x374e21(0x3046),_0x374e21(0xd7d),_0x374e21(0x2fcf),_0x374e21(0x8d7),_0x374e21(0xb23),'ctrlSetTooltip',_0x374e21(0x4056),_0x374e21(0x25a4),'ctrlSetTooltipColorText',_0x374e21(0x2c19),_0x374e21(0x1719),'ctrlSetURLOverlayMode','ctrlShadow',_0x374e21(0x2b06),_0x374e21(0x4e1d),_0x374e21(0x8ae),_0x374e21(0x50e7),_0x374e21(0x2582),_0x374e21(0x4da0),'ctrlTextSecondary',_0x374e21(0x1ef7),_0x374e21(0x10b1),_0x374e21(0x2b82),_0x374e21(0x4748),'ctrlURL',_0x374e21(0x27d7),_0x374e21(0x2741),'ctRowControls',_0x374e21(0x3f8e),_0x374e21(0x333e),_0x374e21(0x2156),_0x374e21(0x3187),_0x374e21(0x23e2),_0x374e21(0x1ad6),_0x374e21(0x214e),'curatorAddons',_0x374e21(0x4ed8),_0x374e21(0x503f),'curatorCameraAreaCeiling','curatorCoef','curatorEditableObjects','curatorEditingArea','curatorEditingAreaType',_0x374e21(0x3ebf),_0x374e21(0x19a8),_0x374e21(0x523e),_0x374e21(0x15f1),_0x374e21(0x46d0),'current3DENOperation','currentChannel','currentCommand',_0x374e21(0x2b4c),_0x374e21(0x233d),_0x374e21(0x2cd3),_0x374e21(0x35ab),_0x374e21(0x29a5),_0x374e21(0xfa6),_0x374e21(0x46f),_0x374e21(0x37d0),'currentTasks',_0x374e21(0x20fc),_0x374e21(0x39a0),'currentWaypoint',_0x374e21(0x3c2b),_0x374e21(0x4989),_0x374e21(0x378c),_0x374e21(0x3484),_0x374e21(0x20d0),_0x374e21(0x3cbc),_0x374e21(0x1084),_0x374e21(0x4207),_0x374e21(0x422e),_0x374e21(0x2b5b),'cutObj',_0x374e21(0x48a3),_0x374e21(0x51e3),_0x374e21(0x3766),_0x374e21(0x40d9),_0x374e21(0x22ef),_0x374e21(0x10dd),_0x374e21(0x4d2e),_0x374e21(0xe64),_0x374e21(0xa60),_0x374e21(0x466c),_0x374e21(0x30c1),_0x374e21(0x1060),_0x374e21(0x11e5),_0x374e21(0x3fdc),_0x374e21(0x1122),'deleteCollection',_0x374e21(0x2c74),_0x374e21(0x281a),'deleteGroupWhenEmpty','deleteIdentity',_0x374e21(0x1263),_0x374e21(0x1213),_0x374e21(0x165a),_0x374e21(0xbb7),_0x374e21(0xb4c),'deleteSite','deleteStatus','deleteTeam',_0x374e21(0x13c9),_0x374e21(0x14ce),_0x374e21(0x30b1),'detach',_0x374e21(0x2cdb),_0x374e21(0xb46),_0x374e21(0x3b2),_0x374e21(0x37f),_0x374e21(0x49a9),_0x374e2
1(0x179e),_0x374e21(0x173c),'diag_captureFrameToFile',_0x374e21(0x1d33),_0x374e21(0x2e34),_0x374e21(0x4fdb),_0x374e21(0x4030),_0x374e21(0x39b8),_0x374e21(0xea0),'diag_dumpTerrainSynth',_0x374e21(0x160b),_0x374e21(0x250a),_0x374e21(0x4793),_0x374e21(0x4b09),_0x374e21(0x29a3),_0x374e21(0x41b2),_0x374e21(0x240f),_0x374e21(0x213),_0x374e21(0x3ef7),_0x374e21(0xe49),_0x374e21(0x3761),_0x374e21(0x1d70),_0x374e21(0x3d25),_0x374e21(0x447c),'diag_mergeConfigFile',_0x374e21(0x4737),'diag_resetFSM',_0x374e21(0x50ba),_0x374e21(0x2ecd),_0x374e21(0x3854),_0x374e21(0x4ae),_0x374e21(0x38cf),_0x374e21(0x24c8),_0x374e21(0x7f3),_0x374e21(0x1629),_0x374e21(0xe8c),_0x374e21(0x29b6),_0x374e21(0x4c9f),_0x374e21(0x1ce5),_0x374e21(0x12a5),_0x374e21(0x521c),_0x374e21(0x296c),_0x374e21(0x9b3),_0x374e21(0x1e6d),_0x374e21(0x1f57),'disableBrakes',_0x374e21(0x41b3),_0x374e21(0x1967),'disableDebriefingStats','disableMapIndicators',_0x374e21(0xb14),_0x374e21(0x21fe),_0x374e21(0xa6d),_0x374e21(0xff6),_0x374e21(0x15fb),_0x374e21(0x2528),_0x374e21(0x16cd),_0x374e21(0x271f),_0x374e21(0x32b3),'displayParent',_0x374e21(0xd9d),_0x374e21(0x10b7),_0x374e21(0x23c0),_0x374e21(0x1bc4),'displayUpdate','dissolveTeam',_0x374e21(0x3845),'distance2D',_0x374e21(0x10e0),_0x374e21(0x4069),_0x374e21(0xa70),'doArtilleryFire',_0x374e21(0x5169),_0x374e21(0x1515),'doFSM',_0x374e21(0x2968),_0x374e21(0x2c7e),_0x374e21(0x999),_0x374e21(0x16f5),_0x374e21(0x2a9d),_0x374e21(0x4c56),'doWatch',_0x374e21(0x3cb6),'drawEllipse',_0x374e21(0x7ab),_0x374e21(0x1ffa),_0x374e21(0x16f3),_0x374e21(0x4fde),'drawLine3D',_0x374e21(0x3bd9),_0x374e21(0x1a78),_0x374e21(0x3531),'drawRectangle',_0x374e21(0x3119),_0x374e21(0x45fc),'drop',_0x374e21(0x90a),_0x374e21(0x1e1b),_0x374e21(0x895),_0x374e21(0x18dc),'echo','edit3DENMissionAttributes','editObject',_0x374e21(0x356e),'effectiveCommander',_0x374e21(0x2eac),_0x374e21(0x2185),'enableAI','enableAIFeature',_0x374e21(0x1f08),_0x374e21(0x2dcf),_0x374e21(0x3b1e),_0x374e21(0x35e),'enableAutoTrimRTD','enabl
eCamShake',_0x374e21(0x364f),_0x374e21(0x4dbd),'enableCollisionWith',_0x374e21(0x4b9e),'enableDebriefingStats',_0x374e21(0x2d01),_0x374e21(0x43d1),'enableDynamicSimulation',_0x374e21(0x431f),_0x374e21(0x2b79),'enableEngineArtillery','enableEnvironment',_0x374e21(0x4d63),_0x374e21(0x26e3),'enableInfoPanelComponent',_0x374e21(0x314b),'enableMimics','enablePersonTurret',_0x374e21(0x5130),_0x374e21(0x2246),_0x374e21(0x29c4),_0x374e21(0x43b5),_0x374e21(0x3105),_0x374e21(0x3274),_0x374e21(0x4cdc),_0x374e21(0x187f),_0x374e21(0x499a),'enableStressDamage',_0x374e21(0x124f),_0x374e21(0x5269),_0x374e21(0x37a7),_0x374e21(0x1b32),_0x374e21(0x1adf),_0x374e21(0x3c81),_0x374e21(0x4f26),'endLoadingScreen',_0x374e21(0x4a2a),_0x374e21(0x14b4),_0x374e21(0x42ac),_0x374e21(0x292c),'enginesRpmRTD','enginesTorqueRTD','entities','environmentEnabled',_0x374e21(0x312f),_0x374e21(0x21ea),_0x374e21(0x4c79),'estimatedTimeLeft','evalObjectArgument',_0x374e21(0x25e6),_0x374e21(0x3e50),_0x374e21(0x198d),_0x374e21(0x3899),'execFSM',_0x374e21(0x374e),_0x374e21(0x3a1b),_0x374e21(0xf44),_0x374e21(0x2db1),'eyeDirection',_0x374e21(0xb8b),_0x374e21(0x490d),_0x374e21(0x483f),'fadeEnvironment',_0x374e21(0x29df),_0x374e21(0xdda),_0x374e21(0x61b),_0x374e21(0x2550),_0x374e21(0x4a7),_0x374e21(0xd31),'fillWeaponsFromPool',_0x374e21(0x5144),_0x374e21(0x51cd),_0x374e21(0xeb7),_0x374e21(0x1616),_0x374e21(0x4f2b),_0x374e21(0xdd0),'findEmptyPositionReady',_0x374e21(0x10b5),'findNearestEnemy',_0x374e21(0x1dbe),_0x374e21(0x1b1b),'fire','fireAtTarget','firstBackpack',_0x374e21(0x10c7),'flagAnimationPhase',_0x374e21(0x4283),_0x374e21(0x80e),_0x374e21(0x1abb),_0x374e21(0x14ff),_0x374e21(0x432a),_0x374e21(0x2e2d),_0x374e21(0x3bcc),'flyInHeightASL',_0x374e21(0x3432),_0x374e21(0x2150),_0x374e21(0x2796),_0x374e21(0x2b5c),'forceAddUniform',_0x374e21(0x2c9c),_0x374e21(0x1f94),'forcedMap',_0x374e21(0x3810),_0x374e21(0x1816),_0x374e21(0x3d3c),_0x374e21(0x3552),_0x374e21(0x143e),_0x374e21(0x2cf2),_0x374e21(0x1af5),_0x374e21(0x19ee
),'forceWalk',_0x374e21(0x381a),_0x374e21(0xa8a),_0x374e21(0xaa0),_0x374e21(0x2591),_0x374e21(0x4c4e),_0x374e21(0x3d4f),'format',_0x374e21(0x3dc8),'formationDirection','formationLeader','formationMembers',_0x374e21(0x3814),'formationTask',_0x374e21(0x3e7a),_0x374e21(0x44e6),_0x374e21(0x1386),_0x374e21(0x662),_0x374e21(0x4c5),_0x374e21(0x4db9),_0x374e21(0x4564),_0x374e21(0x26a7),_0x374e21(0x113b),_0x374e21(0x501d),'gestureState',_0x374e21(0xf9e),_0x374e21(0x1193),_0x374e21(0x7e7),_0x374e21(0x40fa),_0x374e21(0x5104),_0x374e21(0x1448),'get3DENEntityID',_0x374e21(0x3673),_0x374e21(0x379b),'get3DENLayerEntities',_0x374e21(0x35af),_0x374e21(0x34d6),'get3DENMouseOver',_0x374e21(0x2906),_0x374e21(0x1f0a),_0x374e21(0x3272),'getAllEnvSoundControllers',_0x374e21(0x1e91),_0x374e21(0x11eb),_0x374e21(0x31c7),_0x374e21(0x36d),_0x374e21(0x3eab),_0x374e21(0x2c15),_0x374e21(0x2f8b),_0x374e21(0x4b4),_0x374e21(0x10bc),_0x374e21(0x3e6),'getArtilleryComputerSettings',_0x374e21(0x4233),'getAssetDLCInfo',_0x374e21(0x8f7),_0x374e21(0x187e),_0x374e21(0x2725),'getAudioOptionVolumes',_0x374e21(0x238b),_0x374e21(0x33fb),_0x374e21(0x670),'getCalculatePlayerVisibilityByFriendly',_0x374e21(0x1deb),_0x374e21(0x42ad),_0x374e21(0xe20),_0x374e21(0xc30),_0x374e21(0x3fd),_0x374e21(0x1c97),_0x374e21(0x3cf9),_0x374e21(0x256c),_0x374e21(0x2750),_0x374e21(0x3e3f),_0x374e21(0x14b0),_0x374e21(0x284b),_0x374e21(0x3e2f),_0x374e21(0x14c6),_0x374e21(0x311c),_0x374e21(0x19c1),'getDebriefingText',_0x374e21(0x2815),_0x374e21(0x43b9),'getDirVisual',_0x374e21(0x2c60),'getDLCAssetsUsage',_0x374e21(0x3a4e),_0x374e21(0x40ba),'getDLCUsageTime','getEditorCamera',_0x374e21(0x16ac),_0x374e21(0x7db),_0x374e21(0x3b5a),'getEngineTargetRPMRTD',_0x374e21(0x2b6f),'getEnvSoundController',_0x374e21(0x23bd),'getFatigue','getFieldManualStartPage',_0x374e21(0x2f6f),_0x374e21(0x17cf),_0x374e21(0x1152),_0x374e21(0x2eab),_0x374e21(0x2e8b),'getGraphValues',_0x374e21(0x5048),_0x374e21(0x4285),_0x374e21(0x4c42),_0x374e21(0xd2c),_0x374e21(0x1
fd0),_0x374e21(0x118f),_0x374e21(0x1add),_0x374e21(0x3fb0),'getLighting',_0x374e21(0x39c1),_0x374e21(0x1576),_0x374e21(0x4638),_0x374e21(0x1b60),_0x374e21(0x4168),_0x374e21(0x1fdf),_0x374e21(0x4440),_0x374e21(0x177f),_0x374e21(0x15e9),_0x374e21(0x1fce),_0x374e21(0x4dca),_0x374e21(0x2fc1),_0x374e21(0x247d),_0x374e21(0x1603),_0x374e21(0x2199),_0x374e21(0x2d4c),_0x374e21(0x147d),_0x374e21(0x4806),_0x374e21(0x5224),_0x374e21(0x46c1),_0x374e21(0x4b6),_0x374e21(0x4e41),_0x374e21(0x4f29),'getObjectMaterials',_0x374e21(0x3357),_0x374e21(0xa31),'getObjectTextures','getObjectType',_0x374e21(0x3c87),_0x374e21(0x841),'getOrDefault',_0x374e21(0x1946),_0x374e21(0x3896),_0x374e21(0x3aee),_0x374e21(0x289a),_0x374e21(0x1897),_0x374e21(0x358c),_0x374e21(0x30b2),_0x374e21(0x2b32),'getPlateNumber',_0x374e21(0x45c8),_0x374e21(0x25e8),_0x374e21(0x3962),_0x374e21(0x3a31),_0x374e21(0x313f),_0x374e21(0x1f79),'getPosASL','getPosASLVisual',_0x374e21(0x2e38),'getPosATL',_0x374e21(0x429a),'getPosVisual',_0x374e21(0x50d2),_0x374e21(0x3e78),_0x374e21(0x4ce9),'getRelDir','getRelPos',_0x374e21(0x2504),_0x374e21(0x4718),_0x374e21(0x886),'getRoadInfo',_0x374e21(0x183d),_0x374e21(0x554),_0x374e21(0x2c98),_0x374e21(0x16c8),_0x374e21(0x1d81),'getSlingLoad',_0x374e21(0x38ce),_0x374e21(0x3856),_0x374e21(0x31d3),_0x374e21(0xadd),_0x374e21(0x4960),_0x374e21(0x18fc),_0x374e21(0x3e48),_0x374e21(0xe9f),_0x374e21(0x1fcb),_0x374e21(0x1dfa),_0x374e21(0x14d2),_0x374e21(0x4b87),'getText',_0x374e21(0x47c7),_0x374e21(0x3fb8),'getTextWidth',_0x374e21(0x81f),_0x374e21(0x2e1e),_0x374e21(0x3785),_0x374e21(0x4cd3),'getTurretOpticsMode',_0x374e21(0x4e76),_0x374e21(0x31dc),_0x374e21(0x17f0),_0x374e21(0x3cee),_0x374e21(0x3001),_0x374e21(0x2d76),'getUserMFDValue',_0x374e21(0x4c2f),_0x374e21(0x11b4),_0x374e21(0x4916),'getWeaponCargo',_0x374e21(0x205d),_0x374e21(0x17cd),_0x374e21(0x489f),'getWPPos',_0x374e21(0xf12),'globalChat','globalRadio','goggles',_0x374e21(0x139c),_0x374e21(0x4e5b),'groupChat',_0x374e21(0x174b),'groupIconS
electable',_0x374e21(0x34b2),_0x374e21(0x4af7),'groupOwner','groupRadio',_0x374e21(0x9e6),_0x374e21(0x2f9f),_0x374e21(0x25c8),_0x374e21(0x1fcc),_0x374e21(0x1b11),'halt',_0x374e21(0x2470),_0x374e21(0x1043),'handgunWeapon',_0x374e21(0x2b4b),_0x374e21(0x67b),_0x374e21(0x3d55),_0x374e21(0x3149),_0x374e21(0x3bff),'hcAllGroups',_0x374e21(0x63e),'hcLeader','hcRemoveAllGroups',_0x374e21(0x328),_0x374e21(0x471b),'hcSelectGroup','hcSetGroup',_0x374e21(0x4601),_0x374e21(0x1da8),_0x374e21(0x4d71),_0x374e21(0xde8),_0x374e21(0x253b),_0x374e21(0x4f65),_0x374e21(0x2cde),'hint',_0x374e21(0x27b0),_0x374e21(0x11c8),_0x374e21(0x4a96),'hmd',_0x374e21(0x3c52),_0x374e21(0x2af0),_0x374e21(0x2578),_0x374e21(0x7f0),_0x374e21(0x178c),'importAllGroups','importance','in',_0x374e21(0x3fb9),_0x374e21(0x3991),_0x374e21(0x3f2f),_0x374e21(0x141e),'inflamed',_0x374e21(0x4739),_0x374e21(0x816),'infoPanelComponents','infoPanels',_0x374e21(0x42cb),_0x374e21(0x2757),'initAmbientLife',_0x374e21(0xfb7),_0x374e21(0x256f),_0x374e21(0x51c0),'inputMouse',_0x374e21(0x6d9),_0x374e21(0x1178),'insertEditorObject','intersect',_0x374e21(0x3915),_0x374e21(0x23b5),'is3DENPreview',_0x374e21(0x1d25),'isActionMenuVisible',_0x374e21(0x1f51),_0x374e21(0x1d88),_0x374e21(0x20a4),'isArray','isAutoHoverOn',_0x374e21(0x4c89),_0x374e21(0x1ed4),_0x374e21(0x19cc),_0x374e21(0x4962),_0x374e21(0x1113),_0x374e21(0xf2a),_0x374e21(0x2935),_0x374e21(0x3a64),_0x374e21(0x28ed),'isCopilotEnabled',_0x374e21(0x3ae5),_0x374e21(0x47d7),_0x374e21(0x3b61),_0x374e21(0x4434),_0x374e21(0x37fe),_0x374e21(0x27aa),_0x374e21(0xf0f),'isEqualTypeAll',_0x374e21(0x1f5a),'isEqualTypeArray',_0x374e21(0x1929),_0x374e21(0x922),_0x374e21(0x8a6),_0x374e21(0x3bbf),'isFlatEmpty',_0x374e21(0x2e53),_0x374e21(0x10f5),_0x374e21(0x1cc3),_0x374e21(0x3344),_0x374e21(0x29ac),_0x374e21(0x1fb),_0x374e21(0x2b5f),'isInstructorFigureEnabled','isIRLaserOn',_0x374e21(0x34ec),'isKindOf',_0x374e21(0x3e51),'isLightOn',_0x374e21(0x3753),_0x374e21(0x509c),_0x374e21(0x6fe),_0x374e21(0x
3e63),_0x374e21(0x173a),'isMultiplayerSolo',_0x374e21(0x2423),_0x374e21(0x24af),_0x374e21(0x27a4),_0x374e21(0x1fc8),_0x374e21(0x458f),_0x374e21(0x7f5),_0x374e21(0x6b6),_0x374e21(0x15d7),_0x374e21(0x24a0),_0x374e21(0x5196),_0x374e21(0x1555),'isRemoteExecuted',_0x374e21(0x4ae9),_0x374e21(0x1b43),_0x374e21(0x5179),_0x374e21(0x181a),_0x374e21(0x4b6f),'isSimpleObject',_0x374e21(0x3acb),_0x374e21(0x2b71),'isSteamMission',_0x374e21(0x2322),'isStreamFriendlyUIEnabled','isStressDamageEnabled',_0x374e21(0xa8e),_0x374e21(0x4673),_0x374e21(0x337f),_0x374e21(0x4363),_0x374e21(0x4d31),_0x374e21(0x43ab),'isUIContext',_0x374e21(0x3d41),'isVehicleCargo',_0x374e21(0x42a5),'isVehicleSensorEnabled',_0x374e21(0x525d),_0x374e21(0x17fe),_0x374e21(0xfe2),'itemCargo','items','itemsWithMagazines',_0x374e21(0x3541),_0x374e21(0x1841),_0x374e21(0x39a1),_0x374e21(0x4ca2),_0x374e21(0xd13),_0x374e21(0x1a1e),_0x374e21(0x1d59),'kbAddTopic','kbHasTopic','kbReact','kbRemoveTopic',_0x374e21(0x216e),_0x374e21(0x324e),_0x374e21(0x49b8),_0x374e21(0x2bf3),_0x374e21(0x1ea9),_0x374e21(0x1fbb),'land',_0x374e21(0x4dde),_0x374e21(0x473b),'language',_0x374e21(0x4376),_0x374e21(0x118b),_0x374e21(0x37a0),_0x374e21(0x4504),_0x374e21(0xaf7),_0x374e21(0x3870),_0x374e21(0x12ed),'lbDelete','lbIsSelected',_0x374e21(0x3e8d),_0x374e21(0x5044),_0x374e21(0x19ed),_0x374e21(0x255b),_0x374e21(0x222d),'lbSetCurSel',_0x374e21(0x4fe3),_0x374e21(0x3795),_0x374e21(0x4ad1),'lbSetPictureColorDisabled',_0x374e21(0x1265),_0x374e21(0x1226),_0x374e21(0x34e6),_0x374e21(0x4ecc),'lbSetPictureRightColorSelected','lbSetSelectColor',_0x374e21(0x21a7),'lbSetSelected','lbSetText',_0x374e21(0x1291),_0x374e21(0x3698),_0x374e21(0x3e02),_0x374e21(0xb45),'lbSort',_0x374e21(0x477d),_0x374e21(0x237b),_0x374e21(0x493a),'lbTextRight',_0x374e21(0x3ddb),_0x374e21(0x25c9),'leader',_0x374e21(0xad5),'leaderboardGetRows',_0x374e21(0x4fb),_0x374e21(0x3e6c),_0x374e21(0x462b),_0x374e21(0x33d4),_0x374e21(0x49c2),_0x374e21(0x3237),'leaderboardState',_0x374e21(0x663
),_0x374e21(0x513d),_0x374e21(0x2685),'lifeState','lightAttachObject',_0x374e21(0x4d7d),_0x374e21(0x4c22),_0x374e21(0x1f8b),_0x374e21(0x3860),_0x374e21(0x211d),'lineIntersects',_0x374e21(0x41c4),'lineIntersectsSurfaces','lineIntersectsWith',_0x374e21(0xa89),_0x374e21(0x144e),_0x374e21(0x40d7),_0x374e21(0x28ad),_0x374e21(0x1d71),'ln','lnbAddArray','lnbAddColumn',_0x374e21(0x1b0f),_0x374e21(0xb89),_0x374e21(0x15e0),_0x374e21(0x307d),'lnbCurSelRow',_0x374e21(0x2dc6),_0x374e21(0xb37),_0x374e21(0x74e),_0x374e21(0x2fe9),_0x374e21(0x2eed),'lnbPictureRight',_0x374e21(0x1d9b),_0x374e21(0x197d),'lnbSetColumnsPos','lnbSetCurSelRow','lnbSetData',_0x374e21(0x50e1),_0x374e21(0x2b04),_0x374e21(0x297e),_0x374e21(0x33d0),_0x374e21(0x2a6d),'lnbSetPictureRight',_0x374e21(0xa0a),_0x374e21(0x1407),_0x374e21(0x48a8),_0x374e21(0x8cc),'lnbSize',_0x374e21(0xf98),_0x374e21(0x434b),_0x374e21(0x3a8f),_0x374e21(0x2dff),_0x374e21(0x4bc6),_0x374e21(0xf5e),'load',_0x374e21(0x419f),_0x374e21(0x31e8),_0x374e21(0x3ece),_0x374e21(0x3c14),_0x374e21(0x2eb4),_0x374e21(0x25ff),_0x374e21(0x5010),_0x374e21(0x1f9a),'loadStatus',_0x374e21(0x2b02),_0x374e21(0xbc9),_0x374e21(0x25dc),'localNamespace',_0x374e21(0x48ee),_0x374e21(0x192b),_0x374e21(0x3a5f),_0x374e21(0x31cc),'lockDriver',_0x374e21(0x49a3),_0x374e21(0x172f),_0x374e21(0x1682),'lockedDriver','lockedInventory','lockedTurret',_0x374e21(0x3b37),_0x374e21(0x240c),_0x374e21(0x1ec2),'lockWp',_0x374e21(0x20ff),_0x374e21(0x42af),'logNetwork',_0x374e21(0x450e),_0x374e21(0x4665),_0x374e21(0x4095),_0x374e21(0x2ef3),_0x374e21(0x2214),_0x374e21(0xb0c),'magazinesAmmo','magazinesAmmoCargo',_0x374e21(0x18d1),'magazinesDetail',_0x374e21(0x346b),_0x374e21(0x4b24),'magazinesDetailVest',_0x374e21(0x4b0a),_0x374e21(0x1ffe),_0x374e21(0x69e),_0x374e21(0x2f06),_0x374e21(0x1fb7),'mapAnimDone',_0x374e21(0x1797),'mapGridPosition',_0x374e21(0x1f2f),_0x374e21(0x450b),_0x374e21(0x519b),_0x374e21(0x5143),_0x374e21(0x4239),_0x374e21(0x3306),_0x374e21(0x991),_0x374e21(0x2e10),'markerS
hadow','markerShape',_0x374e21(0x5263),'markerText','markerType',_0x374e21(0x436b),_0x374e21(0xced),_0x374e21(0x4529),_0x374e21(0xe3b),'members',_0x374e21(0x38e3),_0x374e21(0x108c),_0x374e21(0x2f35),_0x374e21(0x3a54),_0x374e21(0x16c9),_0x374e21(0x35fb),_0x374e21(0x2e4c),'menuEnable',_0x374e21(0x14aa),_0x374e21(0x4119),_0x374e21(0x493f),_0x374e21(0x419e),'menuSetAction','menuSetCheck',_0x374e21(0x20c8),'menuSetPicture',_0x374e21(0x25a),'menuSetText',_0x374e21(0x29f2),_0x374e21(0x2167),_0x374e21(0x3602),_0x374e21(0x4691),_0x374e21(0x39c7),_0x374e21(0x4a4c),_0x374e21(0x27cb),_0x374e21(0x522),_0x374e21(0xc9e),_0x374e21(0x10d1),_0x374e21(0x37c8),_0x374e21(0x1239),_0x374e21(0x1a50),_0x374e21(0x505b),_0x374e21(0x3dec),_0x374e21(0x30a),'missionDifficulty','missionEnd',_0x374e21(0x4424),_0x374e21(0x2e6),'missionNamespace',_0x374e21(0x2a19),'missionStart',_0x374e21(0x3eb6),_0x374e21(0x4531),'modelToWorld','modelToWorldVisual',_0x374e21(0x4a41),'modelToWorldWorld',_0x374e21(0x496f),_0x374e21(0x1b6f),_0x374e21(0x4c82),_0x374e21(0x32db),_0x374e21(0x2676),_0x374e21(0x27ec),'moveInAny',_0x374e21(0x3bc8),_0x374e21(0x214a),'moveInDriver',_0x374e21(0x49f0),_0x374e21(0x3ce1),_0x374e21(0x979),_0x374e21(0x18e4),_0x374e21(0x2d3c),'moveTo',_0x374e21(0x15f6),_0x374e21(0x4b5),_0x374e21(0x1140),_0x374e21(0x11d8),'namedProperties',_0x374e21(0x474a),_0x374e21(0x1347),_0x374e21(0x238f),_0x374e21(0x4a9e),_0x374e21(0x2666),'nearestLocationWithDubbing',_0x374e21(0x51f1),'nearestObject',_0x374e21(0x3d8),'nearestTerrainObjects',_0x374e21(0x1956),_0x374e21(0x50d1),'nearRoads',_0x374e21(0x3459),'nearTargets',_0x374e21(0x464f),'needService','netId',_0x374e21(0x2626),'newOverlay',_0x374e21(0x51ad),_0x374e21(0x22b9),_0x374e21(0x1be9),_0x374e21(0xc1a),_0x374e21(0x4c4c),_0x374e21(0x5e6),_0x374e21(0x529c),_0x374e21(0x35b0),_0x374e21(0x32bd),_0x374e21(0x399c),_0x374e21(0x2611),_0x374e21(0xacf),'onBriefingPlan',_0x374e21(0x5bf),_0x374e21(0x456e),_0x374e21(0x135b),_0x374e21(0x260),_0x374e21(0x5079),_0x374e21(0
x4331),_0x374e21(0x8d8),'onHCGroupSelectionChanged',_0x374e21(0x28ef),_0x374e21(0x2800),_0x374e21(0x1559),_0x374e21(0x22b2),_0x374e21(0x25b4),_0x374e21(0x28e1),_0x374e21(0x3bc2),_0x374e21(0x199c),_0x374e21(0x2bde),'openGPS',_0x374e21(0x18d7),'openSteamApp','openYoutubeVideo','or',_0x374e21(0x402d),_0x374e21(0x616),_0x374e21(0x5210),_0x374e21(0x13fd),_0x374e21(0x2c08),'params',_0x374e21(0x1bde),_0x374e21(0x288),_0x374e21(0x35b5),'parsingNamespace','particlesQuality',_0x374e21(0xd5b),'pickWeaponPool',_0x374e21(0x9e3),_0x374e21(0x50d),'pixelGridBase',_0x374e21(0x4d9d),'pixelH',_0x374e21(0x4811),_0x374e21(0x2cdc),_0x374e21(0x51fa),_0x374e21(0x2b78),_0x374e21(0x22ab),'player',_0x374e21(0x371e),'playerSide',_0x374e21(0x4c0e),_0x374e21(0xa6c),_0x374e21(0x994),_0x374e21(0x478b),_0x374e21(0x3bcd),_0x374e21(0x40a5),'playScriptedMission',_0x374e21(0x449e),_0x374e21(0x2015),_0x374e21(0x420),_0x374e21(0x4f56),_0x374e21(0x25f1),_0x374e21(0x232e),_0x374e21(0x2dd0),_0x374e21(0x30f6),_0x374e21(0x3356),_0x374e21(0x199f),_0x374e21(0x5b8),_0x374e21(0x215c),'ppEffectDestroy',_0x374e21(0x6a8),_0x374e21(0x4af0),_0x374e21(0x1c83),_0x374e21(0x23f8),'preloadCamera',_0x374e21(0x20a5),'preloadSound',_0x374e21(0xe41),_0x374e21(0x46cd),_0x374e21(0x1a15),_0x374e21(0x460f),_0x374e21(0x2196),_0x374e21(0x24e4),_0x374e21(0x55e),_0x374e21(0x3707),'processDiaryLink',_0x374e21(0x5c9),_0x374e21(0x140b),_0x374e21(0x4b67),'profileNameSteam',_0x374e21(0x372b),_0x374e21(0x3318),_0x374e21(0x28ba),_0x374e21(0x4f92),_0x374e21(0xf91),_0x374e21(0x21e5),'pushBack',_0x374e21(0x1702),'putWeaponPool','queryItemsPool',_0x374e21(0x808),'queryWeaponPool',_0x374e21(0x1b16),_0x374e21(0x3f6c),_0x374e21(0x475d),_0x374e21(0x2d77),_0x374e21(0x3316),'radioChannelSetCallSign',_0x374e21(0x46a8),_0x374e21(0x163c),'radioVolume',_0x374e21(0x3c54),_0x374e21(0x40d),_0x374e21(0x58b),_0x374e21(0xe98),'rank',_0x374e21(0x2e9e),_0x374e21(0x4483),'rectangular',_0x374e21(0x515f),'regexMatch',_0x374e21(0x4289),_0x374e21(0x45c1),_0x374e21(0x3
c38),'reload',_0x374e21(0x2fa0),_0x374e21(0x46ff),_0x374e21(0x43b4),_0x374e21(0x35df),_0x374e21(0x16ef),'remove3DENConnection',_0x374e21(0x3781),'remove3DENLayer',_0x374e21(0x2e26),'removeAll3DENEventHandlers',_0x374e21(0x4b7a),_0x374e21(0x438),_0x374e21(0x3744),_0x374e21(0x3343),_0x374e21(0x3208),_0x374e21(0x10e1),'removeAllCuratorEditingAreas',_0x374e21(0x33a6),'removeAllHandgunItems',_0x374e21(0x4f7d),_0x374e21(0x166b),_0x374e21(0x4a5f),'removeAllMPEventHandlers',_0x374e21(0x3918),_0x374e21(0x4762),'removeAllPrimaryWeaponItems',_0x374e21(0x13d8),'removeAllUserActionEventHandlers',_0x374e21(0x195d),'removeBackpack',_0x374e21(0xa10),_0x374e21(0x3947),_0x374e21(0x2ee8),_0x374e21(0x111d),_0x374e21(0x3566),_0x374e21(0x2a82),_0x374e21(0x1293),_0x374e21(0x5c4),'removeDrawIcon',_0x374e21(0x94b),_0x374e21(0x19dc),_0x374e21(0x4199),_0x374e21(0x261a),'removeGroupIcon',_0x374e21(0x3568),'removeHeadgear','removeItem','removeItemFromBackpack',_0x374e21(0x5222),'removeItemFromVest',_0x374e21(0x36f8),_0x374e21(0x2f3),_0x374e21(0xf29),_0x374e21(0x26d3),_0x374e21(0x2096),_0x374e21(0x4612),_0x374e21(0x3091),'removeMissionEventHandler',_0x374e21(0x18c4),'removeMusicEventHandler',_0x374e21(0x2ed2),_0x374e21(0x4e2),_0x374e21(0x33bb),'removeSimpleTask',_0x374e21(0x3290),_0x374e21(0x50db),'removeUniform',_0x374e21(0x1a24),_0x374e21(0x2b57),'removeWeapon',_0x374e21(0x2d3d),'removeWeaponCargo',_0x374e21(0x362a),'removeWeaponTurret',_0x374e21(0x3c1a),_0x374e21(0x20f2),_0x374e21(0x3fa7),'resetSubgroupDirection','resize',_0x374e21(0x26e4),_0x374e21(0x379),'restartEditorCamera',_0x374e21(0x225f),'revealMine',_0x374e21(0x78b),_0x374e21(0x441e),_0x374e21(0x5249),_0x374e21(0x491c),_0x374e21(0x5211),_0x374e21(0x6b5),_0x374e21(0x51a5),'ropeAttachEnabled','ropeAttachTo',_0x374e21(0x3bf7),'ropeCut',_0x374e21(0x463c),_0x374e21(0x3e45),'ropeEndPosition',_0x374e21(0x4043),_0x374e21(0xecd),_0x374e21(0x3c4d),_0x374e21(0x38a7),_0x374e21(0x48be),_0x374e21(0x411f),_0x374e21(0x193d),_0x374e21(0x44dc),_0x374e
21(0x3d6c),_0x374e21(0x574),_0x374e21(0x1565),_0x374e21(0xa33),_0x374e21(0x4a71),'safeZoneX',_0x374e21(0x1d72),_0x374e21(0x2b98),_0x374e21(0x2717),_0x374e21(0x407),'saveIdentity',_0x374e21(0x2f86),_0x374e21(0xba8),_0x374e21(0x2f4c),_0x374e21(0x1e7b),_0x374e21(0x1591),_0x374e21(0x408c),_0x374e21(0x4777),_0x374e21(0x20c),_0x374e21(0x30d2),_0x374e21(0x1e58),_0x374e21(0x411e),'score',_0x374e21(0x4912),_0x374e21(0x316f),_0x374e21(0x506d),_0x374e21(0x1ec3),'scriptName','scudState','secondaryWeapon','secondaryWeaponItems',_0x374e21(0x39df),_0x374e21(0x3fc9),'selectBestPlaces',_0x374e21(0x36bc),_0x374e21(0x4112),_0x374e21(0x41e2),_0x374e21(0x736),_0x374e21(0x4ea3),_0x374e21(0x1660),_0x374e21(0x3df1),'selectMax',_0x374e21(0x2adf),_0x374e21(0x2a22),_0x374e21(0x12d9),_0x374e21(0x12f6),'selectRandomWeighted',_0x374e21(0x51ec),_0x374e21(0x2bc4),'sendAUMessage',_0x374e21(0x653),'sendTask',_0x374e21(0x4abb),'sendUDPMessage',_0x374e21(0x107e),_0x374e21(0x4372),_0x374e21(0x4d2f),'serverCommandExecutable','serverName',_0x374e21(0x44d0),_0x374e21(0x3f1c),_0x374e21(0x1fa),'set3DENAttribute',_0x374e21(0x328f),'set3DENGrid',_0x374e21(0x15fa),'set3DENLayer',_0x374e21(0x40a8),_0x374e21(0x2f9c),_0x374e21(0x24ad),_0x374e21(0x2a91),'set3DENModelsVisible',_0x374e21(0x48c2),_0x374e21(0x7ec),_0x374e21(0x2ad9),_0x374e21(0x1ac9),_0x374e21(0xe42),'setAirportSide',_0x374e21(0x2dc1),_0x374e21(0x1416),_0x374e21(0x453b),'setAnimSpeedCoef',_0x374e21(0x40e9),_0x374e21(0x4e97),_0x374e21(0x161b),_0x374e21(0x15c9),_0x374e21(0x2316),_0x374e21(0x457b),'setBehaviourStrong','setBleedingRemaining',_0x374e21(0x3225),'setCameraInterest',_0x374e21(0x4ec4),'setCamShakeParams',_0x374e21(0x1058),_0x374e21(0x2d4),_0x374e21(0x26fd),'setCollisionLight',_0x374e21(0x4446),'setCombatMode',_0x374e21(0x25d2),_0x374e21(0x35c5),_0x374e21(0x307b),_0x374e21(0x36a2),_0x374e21(0x1e8e),_0x374e21(0x3276),_0x374e21(0x1a69),_0x374e21(0x2a6c),_0x374e21(0x4aa7),'setCurrentWaypoint',_0x374e21(0x1fc6),_0x374e21(0x3eff),'setCustomSoundContr
oller',_0x374e21(0x3dad),'setDamage',_0x374e21(0x45cf),'setDate','setDebriefingText',_0x374e21(0x14eb),_0x374e21(0x4556),'setDetailMapBlendPars',_0x374e21(0x7e8),'setDiarySubjectPicture','setDir',_0x374e21(0x3f1e),_0x374e21(0xa18),_0x374e21(0x33c7),_0x374e21(0x21d6),_0x374e21(0x659),_0x374e21(0x1215),_0x374e21(0xdf0),_0x374e21(0x295d),'setEffectCondition',_0x374e21(0x5295),_0x374e21(0x2601),'setFace',_0x374e21(0x30b),_0x374e21(0x809),_0x374e21(0x5d1),_0x374e21(0x7b6),'setFlagOwner',_0x374e21(0x3855),_0x374e21(0x1e85),_0x374e21(0xb2c),_0x374e21(0x9bc),'setFormation',_0x374e21(0x2203),_0x374e21(0x32ab),_0x374e21(0x2f1c),'setFromEditor','setFSMVariable','setFuel',_0x374e21(0x27c2),_0x374e21(0x13ec),_0x374e21(0x472a),'setGroupIconsSelectable',_0x374e21(0xb4d),_0x374e21(0x1f67),_0x374e21(0x1194),'setGroupOwner',_0x374e21(0x29f3),_0x374e21(0x244c),'setHit','setHitIndex',_0x374e21(0x14b7),_0x374e21(0x1642),_0x374e21(0x3d97),_0x374e21(0x2d32),'setIdentity',_0x374e21(0x2456),_0x374e21(0x24c4),'setLeader',_0x374e21(0xefd),_0x374e21(0x697),_0x374e21(0x2b22),_0x374e21(0x492a),_0x374e21(0x1981),_0x374e21(0x37e7),_0x374e21(0xcf0),_0x374e21(0x17ae),_0x374e21(0xa1f),'setLightIR',_0x374e21(0x3658),_0x374e21(0x38d0),_0x374e21(0x4e2e),_0x374e21(0x4066),_0x374e21(0x1a39),'setMarkerAlpha',_0x374e21(0x4883),_0x374e21(0x4d6c),_0x374e21(0x2289),'setMarkerColor',_0x374e21(0x113c),_0x374e21(0x3468),_0x374e21(0x1edf),'setMarkerPolyline','setMarkerPolylineLocal',_0x374e21(0x97c),_0x374e21(0x45dd),_0x374e21(0x1654),_0x374e21(0x4a6),'setMarkerShape',_0x374e21(0x2269),_0x374e21(0x1942),_0x374e21(0x30a2),_0x374e21(0x2216),_0x374e21(0x3a1e),'setMarkerType',_0x374e21(0xff7),_0x374e21(0x4854),_0x374e21(0x2c56),_0x374e21(0x526d),_0x374e21(0x301),'setMissileTargetPos','setMousePosition',_0x374e21(0x4396),_0x374e21(0x46a3),'setName',_0x374e21(0x3a2d),_0x374e21(0x3a7f),'setObjectMaterial','setObjectMaterialGlobal',_0x374e21(0x2fa3),'setObjectScale',_0x374e21(0x1606),_0x374e21(0x3b5),_0x374e21(0x4223),_0x
374e21(0x3e1d),_0x374e21(0x3660),_0x374e21(0x4b3d),'setOxygenRemaining','setParticleCircle','setParticleClass',_0x374e21(0x37c2),_0x374e21(0xdab),'setParticleRandom','setPilotCameraDirection',_0x374e21(0x14d9),_0x374e21(0x50d5),_0x374e21(0x57d),'setPiPEffect','setPiPViewDistance',_0x374e21(0x43c1),_0x374e21(0x374b),_0x374e21(0x1d0e),_0x374e21(0x146d),_0x374e21(0x34bc),_0x374e21(0x517d),'setPosASL',_0x374e21(0x1ef8),_0x374e21(0x1ad8),_0x374e21(0x3452),'setPosition',_0x374e21(0x2eaf),_0x374e21(0x9d6),'setPylonsPriority',_0x374e21(0x4c80),_0x374e21(0x1cd1),'setRainbow',_0x374e21(0x79f),_0x374e21(0x3b7c),_0x374e21(0x1281),_0x374e21(0x4de0),_0x374e21(0x3a57),_0x374e21(0x36dd),_0x374e21(0x512),_0x374e21(0x34c1),'setSimpleTaskAlwaysVisible',_0x374e21(0x334a),_0x374e21(0x2193),'setSimpleTaskDestination',_0x374e21(0x2494),_0x374e21(0x29d5),_0x374e21(0x407e),'setSize',_0x374e21(0x276a),_0x374e21(0x51a4),_0x374e21(0x5051),_0x374e21(0x3f3e),_0x374e21(0x1ab5),_0x374e21(0xb78),_0x374e21(0x339f),_0x374e21(0x335),_0x374e21(0x1729),_0x374e21(0x31e5),_0x374e21(0x46c7),'setTargetAge',_0x374e21(0xb1a),'setTaskResult',_0x374e21(0x2118),_0x374e21(0x3057),_0x374e21(0x1051),_0x374e21(0x21e6),_0x374e21(0x297d),_0x374e21(0xd3a),_0x374e21(0x90e),_0x374e21(0x201e),'setTrafficDensity',_0x374e21(0x2e92),'setTrafficGap',_0x374e21(0xaf6),'setTriggerActivation',_0x374e21(0x1ce8),'setTriggerInterval','setTriggerStatements',_0x374e21(0x2183),_0x374e21(0x4032),_0x374e21(0x3bc5),'setTurretLimits',_0x374e21(0x17bf),_0x374e21(0x141d),_0x374e21(0x5096),_0x374e21(0x4e84),_0x374e21(0x3739),_0x374e21(0x3400),_0x374e21(0x9e8),_0x374e21(0x1b6a),'setUnitPosWeak','setUnitRank',_0x374e21(0x3c7b),_0x374e21(0xc0c),_0x374e21(0x37a1),_0x374e21(0x2daf),_0x374e21(0x280a),_0x374e21(0x4720),'setVariable',_0x374e21(0xf94),_0x374e21(0x3680),_0x374e21(0x827),'setVehicleAmmo',_0x374e21(0x4750),_0x374e21(0x309a),_0x374e21(0x5d4),_0x374e21(0x1219),_0x374e21(0x4807),_0x374e21(0x582),_0x374e21(0x50b0),_0x374e21(0x2873),_0x374e21
(0xcf7),'setVehicleReportRemoteTargets','setVehicleTiPars',_0x374e21(0x1918),_0x374e21(0x38fb),_0x374e21(0x1c25),_0x374e21(0x9e4),_0x374e21(0x46ab),_0x374e21(0x4f24),_0x374e21(0xab3),'setWaves','setWaypointBehaviour',_0x374e21(0xf2d),'setWaypointCompletionRadius',_0x374e21(0x2d23),_0x374e21(0xaf1),_0x374e21(0x1622),'setWaypointHousePosition','setWaypointLoiterAltitude',_0x374e21(0x3a5c),_0x374e21(0x2d36),_0x374e21(0x12a0),_0x374e21(0x4b21),_0x374e21(0x3334),_0x374e21(0x14ed),'setWaypointStatements',_0x374e21(0x1124),_0x374e21(0x935),_0x374e21(0x1ba8),_0x374e21(0xd43),_0x374e21(0x4bb0),'setWind',_0x374e21(0x3ed4),_0x374e21(0x2864),_0x374e21(0x2f8),_0x374e21(0x2e1c),_0x374e21(0x3ede),'show3DIcons',_0x374e21(0x382a),'showCinemaBorder',_0x374e21(0x1299),'showCompass',_0x374e21(0x3ced),_0x374e21(0x4475),_0x374e21(0x2039),_0x374e21(0x2275),_0x374e21(0x4b57),_0x374e21(0xc10),_0x374e21(0x3f1),_0x374e21(0x219a),_0x374e21(0x3140),'showNewEditorObject',_0x374e21(0x1fea),_0x374e21(0x2958),'shownMap','shownPad',_0x374e21(0x3494),_0x374e21(0x1bc6),_0x374e21(0x3da7),_0x374e21(0x17e0),'shownWarrant',_0x374e21(0xa38),'showPad','showRadio',_0x374e21(0x2739),_0x374e21(0x35f7),_0x374e21(0x210c),_0x374e21(0xf92),_0x374e21(0x51bd),_0x374e21(0x2d97),_0x374e21(0x241d),_0x374e21(0xd11),'sideChat','sideRadio',_0x374e21(0x520),_0x374e21(0x3bf3),_0x374e21(0x12de),_0x374e21(0x1437),_0x374e21(0x3d5c),_0x374e21(0x406f),'sin',_0x374e21(0x395f),_0x374e21(0x39e3),'skill',_0x374e21(0x39bd),_0x374e21(0xd85),_0x374e21(0x4b2e),'sliderPosition',_0x374e21(0x45d8),_0x374e21(0x2680),'sliderSetRange','sliderSetSpeed','sliderSpeed',_0x374e21(0x3d47),'soldierMagazines',_0x374e21(0x2c43),_0x374e21(0x4c33),_0x374e21(0x1562),_0x374e21(0x2c5e),_0x374e21(0x4674),_0x374e21(0x49a6),_0x374e21(0xae8),_0x374e21(0x2b3c),'splitString',_0x374e21(0x5011),'squadParams',_0x374e21(0x37d9),_0x374e21(0x14d3),_0x374e21(0xaee),'stopEngineRTD',_0x374e21(0x86f),_0x374e21(0x257f),_0x374e21(0x2903),'supportInfo',_0x374e21(0x2a3d),'sur
faceIsWater',_0x374e21(0x5ca),_0x374e21(0x3728),_0x374e21(0x4ffb),_0x374e21(0x4e8),_0x374e21(0x12d7),_0x374e21(0x50f5),_0x374e21(0x2c24),_0x374e21(0x1564),_0x374e21(0x2945),_0x374e21(0xc2d),_0x374e21(0x4ffd),_0x374e21(0xd52),'synchronizedWaypoints',_0x374e21(0x4dfb),'synchronizeObjectsRemove','synchronizeTrigger',_0x374e21(0x28e9),_0x374e21(0xd76),_0x374e21(0x37e3),'systemTime',_0x374e21(0x3d8f),_0x374e21(0x38de),_0x374e21(0x134f),_0x374e21(0x45e0),'targetsAggregate',_0x374e21(0x4afc),_0x374e21(0x3254),_0x374e21(0x2beb),_0x374e21(0x4e3f),_0x374e21(0x10ec),_0x374e21(0x4216),_0x374e21(0x142d),_0x374e21(0x4511),_0x374e21(0x113a),'taskName',_0x374e21(0x3582),_0x374e21(0x1b05),_0x374e21(0x3ab3),_0x374e21(0x231e),_0x374e21(0x4af6),'teamName',_0x374e21(0x3e67),_0x374e21(0x20c6),_0x374e21(0x3073),'teamType',_0x374e21(0x315c),_0x374e21(0x16b0),_0x374e21(0x172a),'terrainIntersectAtASL',_0x374e21(0x4006),_0x374e21(0x2b18),_0x374e21(0x1cdd),'tg',_0x374e21(0x51b6),_0x374e21(0x2721),_0x374e21(0x1941),_0x374e21(0xfb2),'titleObj',_0x374e21(0x43df),_0x374e21(0x2a7a),'toArray',_0x374e21(0x1dc6),_0x374e21(0x395),_0x374e21(0x3611),_0x374e21(0x8e8),'toUpper',_0x374e21(0x43dc),'triggerActivated',_0x374e21(0x2c09),'triggerAmmo',_0x374e21(0xd29),_0x374e21(0x35a0),_0x374e21(0x4fca),_0x374e21(0x302d),_0x374e21(0x2965),_0x374e21(0x3d05),_0x374e21(0x49b6),_0x374e21(0x1111),'triggerTimeout',_0x374e21(0x26c0),_0x374e21(0xe70),_0x374e21(0x1b23),_0x374e21(0x1713),_0x374e21(0x1ca9),_0x374e21(0x1eb6),'tvAdd',_0x374e21(0x4a9b),'tvCollapse',_0x374e21(0x4219),'tvCount',_0x374e21(0x6aa),'tvData',_0x374e21(0x2aec),_0x374e21(0x461),_0x374e21(0x3009),_0x374e21(0x2698),_0x374e21(0x2faa),_0x374e21(0x3007),_0x374e21(0x2fcc),_0x374e21(0x2441),_0x374e21(0x1095),'tvSetData',_0x374e21(0x3942),'tvSetPictureColor',_0x374e21(0x3ba9),_0x374e21(0x3ae9),_0x374e21(0x67d),'tvSetPictureRightColor','tvSetPictureRightColorDisabled',_0x374e21(0x3094),_0x374e21(0x331e),_0x374e21(0x1077),_0x374e21(0x46bc),_0x374e21(0x2a8c),'tv
SetValue',_0x374e21(0x25b6),'tvSortAll',_0x374e21(0x39fa),_0x374e21(0x955),'tvText',_0x374e21(0x4edc),_0x374e21(0x35d0),'type',_0x374e21(0x2ec9),_0x374e21(0x1474),'UAVControl',_0x374e21(0xa2d),_0x374e21(0x44ea),_0x374e21(0x3dcd),_0x374e21(0x4db2),'unassignTeam',_0x374e21(0x17c1),_0x374e21(0x1237),_0x374e21(0x394),'uniformContainer','uniformItems',_0x374e21(0x442a),_0x374e21(0x3e86),_0x374e21(0x401d),_0x374e21(0x2d37),_0x374e21(0x3a82),_0x374e21(0x3aad),_0x374e21(0x18d6),_0x374e21(0x84d),_0x374e21(0x3af1),_0x374e21(0x4272),_0x374e21(0x9d0),'units','unitsBelowHeight',_0x374e21(0x2d4f),_0x374e21(0x1563),_0x374e21(0x2554),_0x374e21(0x2f7e),_0x374e21(0x1ca4),'updateMenuItem',_0x374e21(0x2a73),'useAIOperMapObstructionTest',_0x374e21(0x5020),'useAudioTimeForMoves',_0x374e21(0x4746),_0x374e21(0x1fae),_0x374e21(0x37a4),_0x374e21(0x3be9),_0x374e21(0x9d4),_0x374e21(0x37f4),'vectorDir','vectorDirVisual','vectorDistance',_0x374e21(0x226f),_0x374e21(0x3573),_0x374e21(0x3ad3),_0x374e21(0x30e6),_0x374e21(0x3777),'vectorMagnitudeSqr',_0x374e21(0x489),_0x374e21(0x241e),'vectorMultiply',_0x374e21(0x3a2),_0x374e21(0xee1),_0x374e21(0x322c),_0x374e21(0x2135),'vectorWorldToModelVisual',_0x374e21(0x3f60),_0x374e21(0x3124),_0x374e21(0x2a58),_0x374e21(0x4319),_0x374e21(0x41da),'vehicleReceiveRemoteTargets',_0x374e21(0x46a2),_0x374e21(0x3a29),_0x374e21(0x4fc3),_0x374e21(0x4e18),'velocity','velocityModelSpace',_0x374e21(0x32fd),'vest',_0x374e21(0x1794),'vestItems',_0x374e21(0x12cb),'viewDistance',_0x374e21(0x3a10),_0x374e21(0x458a),_0x374e21(0x996),'visiblePosition',_0x374e21(0x255d),'visibleScoretable',_0x374e21(0xa9f),_0x374e21(0x482a),_0x374e21(0x1ad2),_0x374e21(0x434d),'waypointAttachObject',_0x374e21(0x4a24),_0x374e21(0x2b69),_0x374e21(0x871),_0x374e21(0x6d5),_0x374e21(0x524),'waypointForceBehaviour','waypointFormation','waypointHousePosition',_0x374e21(0x2a7e),_0x374e21(0x3aae),_0x374e21(0x2360),_0x374e21(0x247f),_0x374e21(0x24ff),_0x374e21(0x3a8c),_0x374e21(0x41ab),_0x374e21(0x1a12),_0x
374e21(0x36b0),'waypointSpeed',_0x374e21(0x5191),_0x374e21(0x252),_0x374e21(0x1358),_0x374e21(0x2da3),_0x374e21(0x80f),_0x374e21(0x32b5),'weaponAccessoriesCargo',_0x374e21(0x22ad),_0x374e21(0x161d),_0x374e21(0x2b35),_0x374e21(0x1fd5),_0x374e21(0x1e8a),_0x374e21(0x44f4),_0x374e21(0x989),_0x374e21(0x26ea),_0x374e21(0x51a1),'weaponState','weaponsTurret',_0x374e21(0x3a4c),_0x374e21(0x1e7c),_0x374e21(0x2f44),_0x374e21(0x3417),_0x374e21(0x1fa2),_0x374e21(0x272),_0x374e21(0x1560),_0x374e21(0x4bf),_0x374e21(0x4ff4),_0x374e21(0xb5e),_0x374e21(0x89f),_0x374e21(0x319b)],'literal':['blufor',_0x374e21(0x22f1),_0x374e21(0x45e),_0x374e21(0x1a0c),_0x374e21(0x5135),_0x374e21(0x282b),_0x374e21(0x48bc),_0x374e21(0x4bb2),_0x374e21(0x3984),_0x374e21(0x4dc1),_0x374e21(0x79b),_0x374e21(0x1f81),'locationNull',_0x374e21(0x3e27),_0x374e21(0x36e8),_0x374e21(0xc02),'pi',_0x374e21(0x2311),_0x374e21(0x4243),'sideAmbientLife',_0x374e21(0x2cd1),_0x374e21(0x3e4c),_0x374e21(0x43ca),_0x374e21(0x4309),_0x374e21(0x188e),_0x374e21(0x36f3),_0x374e21(0x4633),_0x374e21(0x4022),'west']},'contains':[_0x5136c5[_0x374e21(0x2ae2)],_0x5136c5['C_BLOCK_COMMENT_MODE'],_0x5136c5[_0x374e21(0x30be)],{'className':_0x374e21(0x3362),'begin':/\b_+[a-zA-Z]\w*/},{'className':_0x374e21(0x4685),'begin':/[a-zA-Z][a-zA-Z_0-9]*_fnc_[a-zA-Z_0-9]+/},_0xf18714,_0x143342],'illegal':[/\$[^a-fA-F0-9]/,/\w\$/,/\?/,/@/,/ \| /,/[a-zA-Z_]\./,/\:\=/,/\[\:/]};};},0x13b:_0x387043=>{const _0x16828f=a0_0x11e7;_0x387043[_0x16828f(0x474c)]=function(_0x1a9669){const 
_0x242be7=_0x16828f,_0x5546ae=_0x1a9669[_0x242be7(0x41d2)],_0x19a37c=_0x1a9669[_0x242be7(0x4e4f)]('--','$'),_0x5a8d3c=[_0x242be7(0x4022),'false',_0x242be7(0x40b5)],_0x465aac=['bigint',_0x242be7(0x4053),_0x242be7(0x38fd),_0x242be7(0x1e8d),_0x242be7(0x373c),'character','clob','date',_0x242be7(0x461a),_0x242be7(0x2ab2),_0x242be7(0x2353),'float',_0x242be7(0xc16),'integer',_0x242be7(0x15ab),'nchar',_0x242be7(0xee0),_0x242be7(0x1969),_0x242be7(0x2307),_0x242be7(0x47f6),_0x242be7(0x3648),_0x242be7(0x41c3),_0x242be7(0x51b6),'timestamp',_0x242be7(0x473f),_0x242be7(0x39f7),'varbinary'],_0x4dc567=[_0x242be7(0xbe0),_0x242be7(0x2c6e),_0x242be7(0x4979),_0x242be7(0x3c15),_0x242be7(0x3aab),_0x242be7(0x1280),'cast',_0x242be7(0x10aa),'ceiling',_0x242be7(0x48f5),'corr',_0x242be7(0x3935),'cosh',_0x242be7(0x404e),'covar_pop','covar_samp',_0x242be7(0x36be),_0x242be7(0x4dd1),_0x242be7(0x1a42),_0x242be7(0x285a),'exp','extract',_0x242be7(0x4b86),'floor','json_array',_0x242be7(0x376d),'json_exists','json_object',_0x242be7(0x2e9d),_0x242be7(0x154e),_0x242be7(0x4a8d),'json_table_primitive',_0x242be7(0x969),'lag','last_value',_0x242be7(0x4fc1),_0x242be7(0x2bf2),'ln',_0x242be7(0x20ff),_0x242be7(0x1463),'lower',_0x242be7(0x4529),_0x242be7(0x37c8),'mod',_0x242be7(0x2571),_0x242be7(0x2de7),_0x242be7(0x12bb),'percent_rank',_0x242be7(0x4aab),_0x242be7(0x40f),_0x242be7(0x25f1),'position_regex','power',_0x242be7(0x846),_0x242be7(0x4cf2),_0x242be7(0x4333),_0x242be7(0x49f1),_0x242be7(0x47be),'regr_r2',_0x242be7(0x155e),_0x242be7(0x68a),_0x242be7(0x4753),_0x242be7(0x327b),_0x242be7(0x4af4),_0x242be7(0x2a37),_0x242be7(0x4bb3),_0x242be7(0x5011),_0x242be7(0x13a7),'stddev_samp','substring',_0x242be7(0x3cd0),'sum',_0x242be7(0x38de),'tanh',_0x242be7(0x296f),_0x242be7(0xc9a),_0x242be7(0x4465),_0x242be7(0x1b23),'trim_array',_0x242be7(0x2ad),_0x242be7(0x4541),_0x242be7(0x3e04),_0x242be7(0x3d17),'var_samp',_0x242be7(0x3df)],_0x4daf53=[_0x242be7(0x237f),_0x242be7(0x3b10),'primary\x20key','foreign\x20key','not\x20nul
l',_0x242be7(0x86a),_0x242be7(0x288a),_0x242be7(0x23ab),'on\x20overflow','character\x20set',_0x242be7(0x1cde),'ignore\x20nulls',_0x242be7(0x1810),_0x242be7(0x47ee),_0x242be7(0x4b55),_0x242be7(0x3975)],_0x525a75=_0x4dc567,_0x38d5ac=[_0x242be7(0xbe0),_0x242be7(0x2c6e),_0x242be7(0xc36),_0x242be7(0x2f21),'alter',_0x242be7(0x2663),_0x242be7(0x4684),_0x242be7(0x9d5),_0x242be7(0x26f6),'array_agg',_0x242be7(0x43e1),'as',_0x242be7(0x23a),_0x242be7(0x3c15),_0x242be7(0x1bb4),'at',_0x242be7(0x3aab),_0x242be7(0x1a00),_0x242be7(0x3ab4),_0x242be7(0x1280),_0x242be7(0x42fa),_0x242be7(0x349d),_0x242be7(0x3d5e),_0x242be7(0x2597),_0x242be7(0x3179),'binary',_0x242be7(0x38fd),_0x242be7(0x1e8d),'both','by','call',_0x242be7(0x417f),'cardinality',_0x242be7(0x41fe),_0x242be7(0x2e7e),_0x242be7(0x1171),_0x242be7(0x10aa),'ceiling',_0x242be7(0x373c),_0x242be7(0x150d),_0x242be7(0x4b35),'character_length',_0x242be7(0x4e5c),_0x242be7(0x41a2),_0x242be7(0x3c56),_0x242be7(0x50d8),_0x242be7(0x48f5),_0x242be7(0x4ff0),_0x242be7(0x325d),_0x242be7(0x4a1a),_0x242be7(0x157a),_0x242be7(0x1c5e),_0x242be7(0xa3b),_0x242be7(0x29bd),'contains',_0x242be7(0x1eed),_0x242be7(0x16c0),'corr',_0x242be7(0x1090),_0x242be7(0x3935),_0x242be7(0x486e),_0x242be7(0x404e),'covar_pop','covar_samp','create','cross',_0x242be7(0x4104),_0x242be7(0x36be),_0x242be7(0xbb8),_0x242be7(0x51fc),_0x242be7(0x12dc),'current_default_transform_group',_0x242be7(0x3ed1),_0x242be7(0x3454),_0x242be7(0x1fd4),_0x242be7(0x1339),'current_time',_0x242be7(0x2dae),'current_path',_0x242be7(0x3454),'current_transform_group_for_type',_0x242be7(0x2882),'cursor',_0x242be7(0x1429),_0x242be7(0x40d9),_0x242be7(0x3a4),_0x242be7(0x6e1),_0x242be7(0x461a),'decimal','decfloat','declare',_0x242be7(0x3d23),_0x242be7(0x1bb1),'delete',_0x242be7(0x4dd1),'deref',_0x242be7(0x4453),_0x242be7(0x3f21),_0x242be7(0x12a9),_0x242be7(0x2f6e),'double','drop',_0x242be7(0x41aa),_0x242be7(0x2f9e),'element',_0x242be7(0x3d4),_0x242be7(0x168c),_0x242be7(0x2681),_0x242be7(0xe76),_0x242be7(0xc
32),_0x242be7(0x2a6e),_0x242be7(0x39f9),'escape','every',_0x242be7(0x233e),_0x242be7(0x198d),_0x242be7(0x2162),_0x242be7(0x449f),_0x242be7(0x3a1b),_0x242be7(0x2f0b),_0x242be7(0x5b6),_0x242be7(0x3984),'fetch','filter',_0x242be7(0x4b86),_0x242be7(0x1ab8),_0x242be7(0x2e2d),_0x242be7(0x3c19),_0x242be7(0x20bd),_0x242be7(0x3066),_0x242be7(0x2e98),_0x242be7(0x27e6),'full',_0x242be7(0x14b2),'fusion',_0x242be7(0xf9e),_0x242be7(0x501b),_0x242be7(0x2fa9),_0x242be7(0x4e5b),_0x242be7(0x1c29),_0x242be7(0x9e6),'having',_0x242be7(0x4b0f),_0x242be7(0x19aa),_0x242be7(0x36bd),'in','indicator',_0x242be7(0x9b1),_0x242be7(0x2113),_0x242be7(0x99f),_0x242be7(0x1205),_0x242be7(0x1178),'int','integer',_0x242be7(0x13b2),_0x242be7(0x687),_0x242be7(0x15ab),'into','is',_0x242be7(0x3541),'json_array',_0x242be7(0x376d),'json_exists',_0x242be7(0x1db3),_0x242be7(0x2e9d),_0x242be7(0x154e),_0x242be7(0x4a8d),_0x242be7(0x347b),_0x242be7(0x969),_0x242be7(0x1282),'language','large',_0x242be7(0x1ac4),_0x242be7(0xf64),_0x242be7(0x4fc1),_0x242be7(0x3928),_0x242be7(0x48eb),'like','like_regex',_0x242be7(0x2bf2),'ln',_0x242be7(0x16a7),_0x242be7(0xe00),_0x242be7(0x4b89),_0x242be7(0x20ff),_0x242be7(0x1463),_0x242be7(0x52d),'match',_0x242be7(0xa83),_0x242be7(0x5103),_0x242be7(0x3a0e),_0x242be7(0x4529),'member','merge',_0x242be7(0x510c),_0x242be7(0x37c8),_0x242be7(0x149b),'mod','modifies',_0x242be7(0x196c),'month',_0x242be7(0xc0a),_0x242be7(0x1969),_0x242be7(0x3c37),_0x242be7(0x309b),'nclob',_0x242be7(0x4321),'no','none','normalize',_0x242be7(0xc1a),_0x242be7(0x2571),_0x242be7(0x2de7),_0x242be7(0x1582),_0x242be7(0x12bb),_0x242be7(0x2307),'octet_length',_0x242be7(0x2060),'of',_0x242be7(0xf16),_0x242be7(0x2823),'omit','on',_0x242be7(0x1d8a),_0x242be7(0x257c),_0x242be7(0x1795),'or',_0x242be7(0xd8d),_0x242be7(0x3ab5),_0x242be7(0x3c1),_0x242be7(0x135c),'overlaps',_0x242be7(0x194d),_0x242be7(0x3740),'partition',_0x242be7(0x3dd7),_0x242be7(0x2c16),'percent',_0x242be7(0x2aab),_0x242be7(0x4aab),'percentile_disc',_0x242be7(0
x387b),_0x242be7(0x493),'position',_0x242be7(0x4db6),_0x242be7(0x17a4),_0x242be7(0x3f07),_0x242be7(0x23f8),'prepare',_0x242be7(0x1a1f),_0x242be7(0x1285),_0x242be7(0x38bb),'range','rank',_0x242be7(0x2268),_0x242be7(0x47f6),'recursive',_0x242be7(0x21c3),_0x242be7(0xeb2),'referencing','regr_avgx',_0x242be7(0x4333),_0x242be7(0x49f1),_0x242be7(0x47be),_0x242be7(0x393c),_0x242be7(0x155e),'regr_sxx','regr_sxy',_0x242be7(0x327b),_0x242be7(0x4a55),_0x242be7(0xa34),_0x242be7(0xdfd),_0x242be7(0xcc4),_0x242be7(0x12a8),'right','rollback',_0x242be7(0x1cf6),_0x242be7(0x3648),'row_number',_0x242be7(0x48a7),_0x242be7(0x3ff6),'savepoint','scope',_0x242be7(0xe13),_0x242be7(0x3190),'second','seek',_0x242be7(0x3fc9),_0x242be7(0x1679),'session_user','set',_0x242be7(0x2e0d),'similar','sin',_0x242be7(0x4bb3),'skip',_0x242be7(0x41c3),_0x242be7(0x363a),_0x242be7(0x3ccf),_0x242be7(0x3940),_0x242be7(0x124d),'sqlexception',_0x242be7(0x60b),'sqlwarning',_0x242be7(0x5011),'start',_0x242be7(0x2c7c),_0x242be7(0x13a7),'stddev_samp',_0x242be7(0x38d7),_0x242be7(0x11f1),_0x242be7(0x37b5),_0x242be7(0x3cd0),_0x242be7(0xed3),_0x242be7(0x13b9),_0x242be7(0x4966),_0x242be7(0x2ec4),_0x242be7(0x4cb8),_0x242be7(0x16f8),_0x242be7(0x1639),_0x242be7(0xf04),'tan',_0x242be7(0x327c),'then',_0x242be7(0x51b6),_0x242be7(0x443f),_0x242be7(0x27da),_0x242be7(0x4acb),'to',_0x242be7(0x3943),_0x242be7(0x296f),_0x242be7(0xc9a),_0x242be7(0x4f15),_0x242be7(0x4465),_0x242be7(0xe6f),_0x242be7(0x1b23),_0x242be7(0x2d5c),'true',_0x242be7(0x4dbe),_0x242be7(0x9ce),'union',_0x242be7(0x390b),'unknown',_0x242be7(0x2ad),'update',_0x242be7(0x4541),_0x242be7(0x4b31),_0x242be7(0x347a),_0x242be7(0x4fe9),_0x242be7(0x1fae),'value_of',_0x242be7(0x3d17),'var_samp','varbinary',_0x242be7(0x473f),_0x242be7(0x39f7),_0x242be7(0x4da6),_0x242be7(0x191b),'whenever',_0x242be7(0x3b62),_0x242be7(0x3df),_0x242be7(0x18db),_0x242be7(0x2aa7),_0x242be7(0x5c5),_0x242be7(0x297b),'year','add','asc','collation','desc',_0x242be7(0x27e4),_0x242be7(0x4d51),_0x242be7(0x4
d3c),_0x242be7(0x1961)][_0x242be7(0x1465)](_0x101ed5=>!_0x4dc567[_0x242be7(0x2628)](_0x101ed5)),_0x8de5c5={'begin':_0x5546ae[_0x242be7(0x1d1d)](/\b/,_0x5546ae[_0x242be7(0x583)](..._0x525a75),/\s*\(/),'relevance':0x0,'keywords':{'built_in':_0x525a75}};return{'name':_0x242be7(0x3e41),'case_insensitive':!0x0,'illegal':/[{}]|<\//,'keywords':{'$pattern':/\b[\w\.]+/,'keyword':function(_0x42da10,{exceptions:_0x25c5fb,when:_0x3a3689}={}){const _0x20ce2c=_0x242be7,_0x5bc46e=_0x3a3689;return _0x25c5fb=_0x25c5fb||[],_0x42da10[_0x20ce2c(0x4833)](_0x3b0e2d=>_0x3b0e2d['match'](/\|\d+$/)||_0x25c5fb[_0x20ce2c(0x2628)](_0x3b0e2d)?_0x3b0e2d:_0x5bc46e(_0x3b0e2d)?_0x3b0e2d+'|0':_0x3b0e2d);}(_0x38d5ac,{'when':_0x1641ca=>_0x1641ca[_0x242be7(0x1b19)]<0x3}),'literal':_0x5a8d3c,'type':_0x465aac,'built_in':['current_catalog',_0x242be7(0x12dc),_0x242be7(0x43f0),_0x242be7(0x3ed1),_0x242be7(0x3454),_0x242be7(0x1339),_0x242be7(0x1790),_0x242be7(0x2882),'session_user',_0x242be7(0x4cb8),_0x242be7(0x16f8),'current_time',_0x242be7(0xe00),'current_timestamp','localtimestamp']},'contains':[{'begin':_0x5546ae['either'](..._0x4daf53),'relevance':0x0,'keywords':{'$pattern':/[\w\.]+/,'keyword':_0x38d5ac[_0x242be7(0x1d1d)](_0x4daf53),'literal':_0x5a8d3c,'type':_0x465aac}},{'className':_0x242be7(0xcfc),'begin':_0x5546ae[_0x242be7(0x583)]('double\x20precision',_0x242be7(0x769),'with\x20timezone','without\x20timezone')},_0x8de5c5,{'className':'variable','begin':/@[a-z0-9][a-z0-9_]*/},{'className':_0x242be7(0x2431),'variants':[{'begin':/'/,'end':/'/,'contains':[{'begin':/''/}]}]},{'begin':/"/,'end':/"/,'contains':[{'begin':/""/}]},_0x1a9669[_0x242be7(0xd12)],_0x1a9669['C_BLOCK_COMMENT_MODE'],_0x19a37c,{'className':_0x242be7(0x1182),'begin':/[-+*/=%^~]|&&?|\|\|?|!=?|<(?:=>?|<|>)?|>[>=]?/,'relevance':0x0}]};};},0x1ed3:_0x3d75fa=>{const _0x2af515=a0_0x11e7;_0x3d75fa[_0x2af515(0x474c)]=function(_0x4be0b3){const 
_0x5f529a=_0x2af515,_0x462b73=_0x4be0b3[_0x5f529a(0x41d2)],_0x3148a8=[_0x5f529a(0xb04),_0x5f529a(0x504f),'bernoulli_logit_glm',_0x5f529a(0x9ab),'beta_binomial',_0x5f529a(0x5a2),_0x5f529a(0x182b),'binomial_logit',_0x5f529a(0x18b5),_0x5f529a(0x2da5),_0x5f529a(0x49da),'cauchy',_0x5f529a(0x3482),_0x5f529a(0x10a7),_0x5f529a(0x237a),_0x5f529a(0x2047),_0x5f529a(0xe2a),_0x5f529a(0x49a4),_0x5f529a(0x3789),_0x5f529a(0x1c19),_0x5f529a(0x44f9),_0x5f529a(0x2a1d),_0x5f529a(0x3a45),_0x5f529a(0x40ec),_0x5f529a(0x357a),_0x5f529a(0x2b76),_0x5f529a(0x2a8),_0x5f529a(0x46da),_0x5f529a(0x1783),'lkj_corr_cholesky',_0x5f529a(0x4b43),_0x5f529a(0xc6f),_0x5f529a(0x1d19),'multi_gp',_0x5f529a(0x2b6),'multinomial',_0x5f529a(0x4167),_0x5f529a(0x8ff),_0x5f529a(0x29fd),'multi_normal_prec','multi_student_cholesky_t',_0x5f529a(0x2cf1),_0x5f529a(0x275e),_0x5f529a(0x41a1),'neg_binomial_2',_0x5f529a(0x4395),'neg_binomial_2_log_glm',_0x5f529a(0x47d),'normal_id_glm',_0x5f529a(0x40e2),'ordered_logistic_glm',_0x5f529a(0x24a6),'pareto','pareto_type_2',_0x5f529a(0x2927),_0x5f529a(0x41cb),_0x5f529a(0x33e4),_0x5f529a(0xc6c),_0x5f529a(0x2ee2),_0x5f529a(0x389d),_0x5f529a(0xfe5),_0x5f529a(0x1c94),'std_normal_log','student_t',_0x5f529a(0x394),_0x5f529a(0x3a96),_0x5f529a(0x2b8c),_0x5f529a(0x4588),_0x5f529a(0x5aa),_0x5f529a(0x2d48)],_0xc7c902=_0x4be0b3[_0x5f529a(0x4e4f)](/\/\*/,/\*\//,{'relevance':0x0,'contains':[{'scope':_0x5f529a(0x4593),'match':/@(return|param)/}]}),_0x9d26ad={'scope':_0x5f529a(0x5153),'begin':/#include\b/,'end':/$/,'contains':[{'match':/[a-z][a-z-._]+/,'scope':_0x5f529a(0x2431)},_0x4be0b3[_0x5f529a(0x2ae2)]]},_0x421593=[_0x5f529a(0x52d),_0x5f529a(0x4541),_0x5f529a(0xf16),_0x5f529a(0x4162)];return{'name':_0x5f529a(0x244),'aliases':[_0x5f529a(0x4a1)],'keywords':{'$pattern':_0x4be0b3['IDENT_RE'],'title':[_0x5f529a(0x4b4c),_0x5f529a(0x1556),_0x5f529a(0x5139),_0x5f529a(0x1e98),_0x5f529a(0x8de),'transformed',_0x5f529a(0x21f)],'type':[_0x5f529a(0x26f6),_0x5f529a(0x3cab),_0x5f529a(0x3a77),_0x5f529a(0xc16
),_0x5f529a(0x47f6),_0x5f529a(0x4836),'complex_vector',_0x5f529a(0x1287),_0x5f529a(0x3773),_0x5f529a(0x1ee2),'unit_vector','row_vector',_0x5f529a(0x3f2a),_0x5f529a(0x21d5),_0x5f529a(0x22ee),'cholesky_factor_corr|10',_0x5f529a(0x1fb9),'corr_matrix|10',_0x5f529a(0x38cc),_0x5f529a(0x27d6)],'keyword':[_0x5f529a(0x3c19),'in','if',_0x5f529a(0x3d4),_0x5f529a(0x552),'break',_0x5f529a(0x16d9),_0x5f529a(0xdfd)],'built_in':[_0x5f529a(0xbe0),'acos',_0x5f529a(0x5159),_0x5f529a(0x2d31),_0x5f529a(0xc1c),_0x5f529a(0x36cc),'append_array',_0x5f529a(0x276e),'append_row',_0x5f529a(0x3c15),'asinh',_0x5f529a(0x3aab),_0x5f529a(0x41c2),_0x5f529a(0x3075),_0x5f529a(0x5185),_0x5f529a(0x3da5),_0x5f529a(0x2052),'block','cbrt','ceil',_0x5f529a(0x4e37),_0x5f529a(0x4bd9),_0x5f529a(0x3ab9),_0x5f529a(0x1721),_0x5f529a(0x767),'columns_dot_product',_0x5f529a(0x2e46),'complex_schur_decompose',_0x5f529a(0xa72),_0x5f529a(0x39f6),_0x5f529a(0x1a6a),_0x5f529a(0x3935),'cosh',_0x5f529a(0x1023),'crossprod','csr_extract',_0x5f529a(0x19fa),_0x5f529a(0x4557),_0x5f529a(0x49a1),_0x5f529a(0x3216),'csr_to_dense_matrix',_0x5f529a(0x36a9),'dae',_0x5f529a(0x3d67),_0x5f529a(0x1d9c),_0x5f529a(0x220),'diagonal',_0x5f529a(0x1bd2),_0x5f529a(0x3a07),_0x5f529a(0xf9f),_0x5f529a(0x2ace),'distance',_0x5f529a(0x428c),_0x5f529a(0x4632),'eigendecompose',_0x5f529a(0x3a05),_0x5f529a(0x1ee8),_0x5f529a(0x294c),_0x5f529a(0x2a45),_0x5f529a(0x3849),'erf','erfc',_0x5f529a(0x3a1b),_0x5f529a(0x39bc),_0x5f529a(0x330c),_0x5f529a(0x2479),_0x5f529a(0x3ff3),'fft',_0x5f529a(0x235),_0x5f529a(0x2e2d),_0x5f529a(0x9ac),_0x5f529a(0x1d2b),'fmin',_0x5f529a(0x3695),_0x5f529a(0x5032),_0x5f529a(0x1858),'generalized_inverse',_0x5f529a(0x4beb),_0x5f529a(0x28ce),_0x5f529a(0x1da1),_0x5f529a(0x1fb6),_0x5f529a(0x3a61),_0x5f529a(0x1466),_0x5f529a(0x4ae7),_0x5f529a(0x4ff2),_0x5f529a(0x498c),_0x5f529a(0x2753),_0x5f529a(0x43a5),_0x5f529a(0x4e3d),'integrate_ode_rk45',_0x5f529a(0x22c),_0x5f529a(0x2560),_0x5f529a(0xba7),_0x5f529a(0x1f4f),_0x5f529a(0x3279),_0x5f529a(0x1ae
3),'inv_fft',_0x5f529a(0x36ee),_0x5f529a(0x463f),_0x5f529a(0x3230),_0x5f529a(0x3abb),_0x5f529a(0x3a58),_0x5f529a(0x25eb),'is_inf',_0x5f529a(0x8ba),_0x5f529a(0x1a97),_0x5f529a(0x3738),_0x5f529a(0xd1e),_0x5f529a(0x1c84),_0x5f529a(0x3afb),'lgamma',_0x5f529a(0x3c70),_0x5f529a(0x31db),_0x5f529a(0x41e3),_0x5f529a(0x5123),_0x5f529a(0xebd),_0x5f529a(0x25e3),'log',_0x5f529a(0xda7),_0x5f529a(0x2410),_0x5f529a(0x2a0f),_0x5f529a(0x1bb7),_0x5f529a(0x13ea),_0x5f529a(0x50d4),_0x5f529a(0x4918),'log_falling_factorial','log_inv_logit',_0x5f529a(0x4621),_0x5f529a(0x2288),'log_mix',_0x5f529a(0x1c66),_0x5f529a(0x472),_0x5f529a(0x293f),'log_sum_exp',_0x5f529a(0x3305),_0x5f529a(0x3ed8),_0x5f529a(0x1547),_0x5f529a(0x49ea),_0x5f529a(0x1552),'max',_0x5f529a(0x225a),_0x5f529a(0x1851),'mdivide_right_spd',_0x5f529a(0x20c3),'mean',_0x5f529a(0x37c8),_0x5f529a(0x4e1f),_0x5f529a(0x3c3f),_0x5f529a(0xc71),_0x5f529a(0xc4e),'norm',_0x5f529a(0x4be3),_0x5f529a(0x334b),_0x5f529a(0x329e),_0x5f529a(0xcbd),_0x5f529a(0xc22),_0x5f529a(0x39b0),_0x5f529a(0x303f),_0x5f529a(0x445e),_0x5f529a(0x2bd2),_0x5f529a(0x1ccd),'ode_ckrk_tol',_0x5f529a(0x29d6),_0x5f529a(0x446b),_0x5f529a(0x744),_0x5f529a(0x3587),_0x5f529a(0x3fe2),'one_hot_vector',_0x5f529a(0x41e),_0x5f529a(0xbba),'ones_row_vector','ones_vector',_0x5f529a(0x47f1),_0x5f529a(0x24cd),'Phi_approx',_0x5f529a(0x3fa9),_0x5f529a(0x4c60),'pow',_0x5f529a(0x4957),'prod',_0x5f529a(0x24d0),'qr','qr_Q',_0x5f529a(0xb1f),_0x5f529a(0x51ff),_0x5f529a(0x54b),'qr_thin_R',_0x5f529a(0x667),_0x5f529a(0x3093),_0x5f529a(0x1a8f),_0x5f529a(0x234c),_0x5f529a(0x846),_0x5f529a(0x1bef),_0x5f529a(0x26fe),_0x5f529a(0xd27),'rep_matrix',_0x5f529a(0x1d28),_0x5f529a(0x2cc6),'reverse',_0x5f529a(0x491),_0x5f529a(0x3d6c),_0x5f529a(0x3648),_0x5f529a(0x48a7),_0x5f529a(0x1ea0),_0x5f529a(0x4a22),_0x5f529a(0x4117),'sd',_0x5f529a(0x2375),_0x5f529a(0x2a37),_0x5f529a(0x4304),_0x5f529a(0x4bb3),_0x5f529a(0x395f),_0x5f529a(0x3081),_0x5f529a(0x51ca),_0x5f529a(0x19f2),_0x5f529a(0x3a5d),'sort_indices_desc',_0x5f
529a(0x5011),_0x5f529a(0x23bb),_0x5f529a(0x2deb),'step',_0x5f529a(0x1627),_0x5f529a(0x36bb),_0x5f529a(0x13b9),_0x5f529a(0x46b),'svd_U','svd_V',_0x5f529a(0x3449),_0x5f529a(0x25c1),'tan',_0x5f529a(0x327c),_0x5f529a(0x1bc7),'tcrossprod',_0x5f529a(0x5d6),_0x5f529a(0x699),_0x5f529a(0x3ce9),'to_complex',_0x5f529a(0x41c8),_0x5f529a(0x1184),'to_row_vector',_0x5f529a(0x25cf),_0x5f529a(0x1f44),_0x5f529a(0x37c6),_0x5f529a(0x475c),_0x5f529a(0x3b5e),'trunc','uniform_simplex',_0x5f529a(0x3231),_0x5f529a(0x1633),_0x5f529a(0x2e86),_0x5f529a(0xe30)]},'contains':[_0x4be0b3[_0x5f529a(0x2ae2)],_0x9d26ad,_0x4be0b3[_0x5f529a(0x2bbe)],_0xc7c902,{'scope':_0x5f529a(0x43a),'match':/\s(pi|e|sqrt2|log2|log10)(?=\()/,'relevance':0x0},{'match':_0x462b73[_0x5f529a(0x1d1d)](/[<,]\s*/,_0x462b73[_0x5f529a(0x583)](..._0x421593),/\s*=/),'keywords':_0x421593},{'scope':_0x5f529a(0x1357),'match':/\btarget(?=\s*\+=)/},{'match':[/~\s*/,_0x462b73[_0x5f529a(0x583)](..._0x3148a8),/(?:\(\))/,/\s*T(?=\s*\[)/],'scope':{0x2:_0x5f529a(0x43a),0x4:_0x5f529a(0x1357)}},{'scope':_0x5f529a(0x43a),'keywords':_0x3148a8,'begin':_0x462b73[_0x5f529a(0x1d1d)](/\w*/,_0x462b73[_0x5f529a(0x583)](..._0x3148a8),/(_lpdf|_lupdf|_lpmf|_cdf|_lcdf|_lccdf|_qf)(?=\s*[\(.*\)])/)},{'begin':[/~/,/\s*/,_0x462b73[_0x5f529a(0x1d1d)](_0x462b73['either'](..._0x3148a8),/(?=\s*[\(.*\)])/)],'scope':{0x3:_0x5f529a(0x43a)}},{'begin':[/~/,/\s*\w+(?=\s*[\(.*\)])/,_0x5f529a(0x4b3)+_0x462b73[_0x5f529a(0x583)](..._0x3148a8)+')\x08)'],'scope':{0x2:_0x5f529a(0x20db)}},{'scope':_0x5f529a(0x20db),'begin':/\w*(_lpdf|_lupdf|_lpmf|_cdf|_lcdf|_lccdf|_qf)(?=\s*[\(.*\)])/},{'scope':'number','match':_0x462b73[_0x5f529a(0x1d1d)](/(?:\b\d+(?:_\d+)*(?:\.(?:\d+(?:_\d+)*)?)?|\B\.\d+(?:_\d+)*)/,/(?:[eE][+-]?\d+(?:_\d+)*)?i?(?!\w)/),'relevance':0x0},{'scope':_0x5f529a(0x2431),'begin':/"/,'end':/"/}]};};},0x9f2:_0x537b38=>{const _0x5a4421=a0_0x11e7;_0x537b38[_0x5a4421(0x474c)]=function(_0x46c2fa){const 
_0x23e148=_0x5a4421;return{'name':'Stata','aliases':['do',_0x23e148(0x48a4)],'case_insensitive':!0x0,'keywords':_0x23e148(0x1edd),'contains':[{'className':_0x23e148(0x239b),'begin':/`[a-zA-Z0-9_]+'/},{'className':_0x23e148(0x3362),'begin':/\$\{?[a-zA-Z0-9_]+\}?/,'relevance':0x0},{'className':_0x23e148(0x2431),'variants':[{'begin':_0x23e148(0x1c60)},{'begin':_0x23e148(0x383c)}]},{'className':_0x23e148(0x43a),'variants':[{'begin':_0x23e148(0x3da9)}]},_0x46c2fa['COMMENT'](_0x23e148(0xb8e),!0x1),_0x46c2fa[_0x23e148(0x2ae2)],_0x46c2fa['C_BLOCK_COMMENT_MODE']]};};},0x1fc0:_0x84b239=>{const _0x160b88=a0_0x11e7;_0x84b239[_0x160b88(0x474c)]=function(_0x1bcd24){const _0x4b1047=_0x160b88;return{'name':_0x4b1047(0x326c),'aliases':['p21',_0x4b1047(0xf8e),_0x4b1047(0x28c8)],'case_insensitive':!0x0,'keywords':{'$pattern':_0x4b1047(0x188f),'keyword':[_0x4b1047(0x44d5),'ENDSEC',_0x4b1047(0x4b2f)]},'contains':[{'className':'meta','begin':'ISO-10303-21;','relevance':0xa},{'className':_0x4b1047(0x5153),'begin':_0x4b1047(0x4fad),'relevance':0xa},_0x1bcd24[_0x4b1047(0x2ae2)],_0x1bcd24[_0x4b1047(0x23fe)],_0x1bcd24[_0x4b1047(0x4e4f)](_0x4b1047(0x1f96),'\x5c*/'),_0x1bcd24[_0x4b1047(0xd12)],_0x1bcd24[_0x4b1047(0x46a1)](_0x1bcd24[_0x4b1047(0xa4c)],{'illegal':null}),_0x1bcd24['inherit'](_0x1bcd24['QUOTE_STRING_MODE'],{'illegal':null}),{'className':'string','begin':'\x27','end':'\x27'},{'className':'symbol','variants':[{'begin':'#','end':'\x5cd+','illegal':'\x5cW'}]}]};};},0x1665:_0x13713d=>{const 
_0x1961c9=a0_0x11e7,_0x4459da=['a',_0x1961c9(0xd80),_0x1961c9(0x3db6),_0x1961c9(0x51c1),_0x1961c9(0x2a6),'audio','b',_0x1961c9(0x702),_0x1961c9(0x4f1a),_0x1961c9(0x18b7),_0x1961c9(0x3115),'caption',_0x1961c9(0x447d),_0x1961c9(0x4948),'dd',_0x1961c9(0x109c),_0x1961c9(0x2dd7),_0x1961c9(0x2b08),_0x1961c9(0x4c88),'dl','dt','em',_0x1961c9(0x31c0),_0x1961c9(0x4252),_0x1961c9(0x12f4),_0x1961c9(0x3af9),_0x1961c9(0x31e0),'h1','h2','h3','h4','h5','h6',_0x1961c9(0x17f8),_0x1961c9(0x507c),_0x1961c9(0x2acd),'i','iframe',_0x1961c9(0x3045),'input',_0x1961c9(0x4432),_0x1961c9(0x728),_0x1961c9(0x3b71),_0x1961c9(0x201b),'li','main','mark',_0x1961c9(0x3888),_0x1961c9(0x1ec0),_0x1961c9(0x20c7),'ol','p','q',_0x1961c9(0x3567),'samp',_0x1961c9(0x69d),_0x1961c9(0x2bbd),_0x1961c9(0x40c9),'summary',_0x1961c9(0xb95),_0x1961c9(0x1639),_0x1961c9(0x2e69),'td',_0x1961c9(0x39cd),_0x1961c9(0x2f6d),'th',_0x1961c9(0x46d4),_0x1961c9(0x51b6),'tr','ul',_0x1961c9(0x469d),_0x1961c9(0xcde)],_0x3e204f=['any-hover','any-pointer',_0x1961c9(0x1bc1),'color',_0x1961c9(0xdf1),_0x1961c9(0x2dc3),'device-aspect-ratio',_0x1961c9(0x2dc),_0x1961c9(0x4790),'display-mode',_0x1961c9(0x1e01),_0x1961c9(0x2461),_0x1961c9(0x3cd6),'hover',_0x1961c9(0x5231),_0x1961c9(0x3dc),_0x1961c9(0xc6a),'overflow-block',_0x1961c9(0x31fe),_0x1961c9(0x43e4),_0x1961c9(0x1b7f),'prefers-contrast','prefers-reduced-motion',_0x1961c9(0x2c6b),_0x1961c9(0x4de3),_0x1961c9(0x4318),_0x1961c9(0x39d8),_0x1961c9(0x38d6),'width',_0x1961c9(0x2c06),_0x1961c9(0x3e30),_0x1961c9(0x2613),_0x1961c9(0x2ebc)],_0x180490=[_0x1961c9(0x182f),_0x1961c9(0x4f6f),_0x1961c9(0x2bd),_0x1961c9(0x494d),_0x1961c9(0xbb8),_0x1961c9(0x3d23),_0x1961c9(0x183b),'dir',_0x1961c9(0x3f89),_0x1961c9(0x41fc),'empty',_0x1961c9(0x223e),_0x1961c9(0x4d51),_0x1961c9(0x14ca),_0x1961c9(0x2c0c),_0x1961c9(0x363),'future',_0x1961c9(0x4ba),'focus-visible',_0x1961c9(0x1874),_0x1961c9(0x3170),'host','host-context',_0x1961c9(0x3d6d),_0x1961c9(0x33a8),'in-range',_0x1961c9(0x498a),'is',_0x1961c9(0x1f46),_0x
1961c9(0x11ef),'last-of-type',_0x1961c9(0x48eb),_0x1961c9(0x4b32),'local-link',_0x1961c9(0xc1a),_0x1961c9(0x2fee),_0x1961c9(0x3d31),_0x1961c9(0x3837),'nth-last-col','nth-last-of-type',_0x1961c9(0x4153),_0x1961c9(0x12a4),'only-of-type',_0x1961c9(0x51e4),_0x1961c9(0x48d0),_0x1961c9(0x38e6),_0x1961c9(0x38ef),_0x1961c9(0x1128),_0x1961c9(0x3229),'required',_0x1961c9(0x4d50),_0x1961c9(0x507b),_0x1961c9(0x4cd),_0x1961c9(0x1bc7),_0x1961c9(0x4579),_0x1961c9(0x4147),_0x1961c9(0x4297),_0x1961c9(0xd18),_0x1961c9(0x3b62)],_0x19a5a6=[_0x1961c9(0x1349),_0x1961c9(0x313),_0x1961c9(0x5097),_0x1961c9(0x18bf),'cue-region',_0x1961c9(0x222a),'first-line',_0x1961c9(0x50a3),_0x1961c9(0x5121),_0x1961c9(0x2aa),_0x1961c9(0x10e2),_0x1961c9(0x2e6e),_0x1961c9(0x3e0e),_0x1961c9(0x3c13)],_0x77939c=[_0x1961c9(0x31e),'align-items',_0x1961c9(0xe82),_0x1961c9(0xc36),_0x1961c9(0x148a),_0x1961c9(0x1b0e),'animation-direction',_0x1961c9(0x2c5c),_0x1961c9(0x4fd3),'animation-iteration-count',_0x1961c9(0x1efe),_0x1961c9(0x1cc4),_0x1961c9(0x40b0),_0x1961c9(0x460d),_0x1961c9(0x1471),_0x1961c9(0x7b3),_0x1961c9(0x163b),_0x1961c9(0x1195),_0x1961c9(0x11ac),_0x1961c9(0x965),_0x1961c9(0x93e),_0x1961c9(0x4295),'background-repeat',_0x1961c9(0x4065),_0x1961c9(0x2f89),'border','border-block',_0x1961c9(0x3374),_0x1961c9(0x2bbf),_0x1961c9(0x2f10),_0x1961c9(0x43ac),'border-block-end-width',_0x1961c9(0x34f),_0x1961c9(0x480a),_0x1961c9(0x2a76),_0x1961c9(0x3b35),'border-block-style',_0x1961c9(0x4d37),'border-bottom',_0x1961c9(0x1fda),_0x1961c9(0x494c),_0x1961c9(0x11d1),_0x1961c9(0x4491),_0x1961c9(0x2855),_0x1961c9(0x2d19),_0x1961c9(0x4ede),'border-image','border-image-outset','border-image-repeat',_0x1961c9(0x2b01),_0x1961c9(0x4d0),_0x1961c9(0x2aa0),_0x1961c9(0x6cc),_0x1961c9(0x21cd),_0x1961c9(0x2a98),_0x1961c9(0xb81),_0x1961c9(0x3c06),_0x1961c9(0xdfa),'border-inline-start','border-inline-start-color',_0x1961c9(0x2444),_0x1961c9(0x1c4c),'border-inline-style',_0x1961c9(0xb54),_0x1961c9(0x953),_0x1961c9(0x82b),_0x1961c9(0xb7c),
_0x1961c9(0x1b93),'border-radius','border-right',_0x1961c9(0x4fdc),_0x1961c9(0x4a2d),'border-right-width',_0x1961c9(0x5a6),'border-style','border-top',_0x1961c9(0x5090),'border-top-left-radius','border-top-right-radius',_0x1961c9(0x4b1c),'border-top-width','border-width','bottom',_0x1961c9(0x1db9),_0x1961c9(0x4419),_0x1961c9(0x4f0e),_0x1961c9(0x2ce9),'break-before',_0x1961c9(0x425d),'caption-side',_0x1961c9(0x1cb2),_0x1961c9(0x4933),_0x1961c9(0x15b6),'clip-path',_0x1961c9(0x1c7c),_0x1961c9(0xe81),'column-count',_0x1961c9(0x25c4),_0x1961c9(0x4679),'column-rule',_0x1961c9(0x2ccf),'column-rule-style',_0x1961c9(0x1b1e),_0x1961c9(0x3085),_0x1961c9(0x27a0),_0x1961c9(0x4457),_0x1961c9(0x1166),'content','content-visibility','counter-increment',_0x1961c9(0x43c7),'cue',_0x1961c9(0x377d),_0x1961c9(0x18ef),_0x1961c9(0x824),_0x1961c9(0x296c),_0x1961c9(0x12ca),_0x1961c9(0x22c7),_0x1961c9(0x1465),_0x1961c9(0x2a07),_0x1961c9(0x2c01),_0x1961c9(0x295a),_0x1961c9(0x494f),_0x1961c9(0x3458),_0x1961c9(0x2a69),_0x1961c9(0x32ae),_0x1961c9(0x1ab8),_0x1961c9(0x4b13),_0x1961c9(0xe53),_0x1961c9(0x2019),_0x1961c9(0x40de),_0x1961c9(0xeea),_0x1961c9(0x3048),_0x1961c9(0x44a5),_0x1961c9(0x464),_0x1961c9(0x750),_0x1961c9(0x4a87),'font-stretch','font-style',_0x1961c9(0x4fd6),_0x1961c9(0xb7a),_0x1961c9(0x2694),_0x1961c9(0x3afe),_0x1961c9(0x2eae),_0x1961c9(0x2813),'font-variant-position',_0x1961c9(0x4c7a),_0x1961c9(0x417d),_0x1961c9(0x44f8),_0x1961c9(0x32f0),_0x1961c9(0x2461),'grid-area',_0x1961c9(0xf9c),_0x1961c9(0x3a86),'grid-auto-rows',_0x1961c9(0x3058),_0x1961c9(0x3cf),_0x1961c9(0x4df),_0x1961c9(0x28a9),_0x1961c9(0x92e),_0x1961c9(0x48ab),_0x1961c9(0x3ffa),_0x1961c9(0x4a8e),_0x1961c9(0x3fda),_0x1961c9(0x2bb0),'grid-template-rows',_0x1961c9(0xc89),_0x1961c9(0x3cd6),'hyphens',_0x1961c9(0x169e),_0x1961c9(0x3e53),'image-rendering',_0x1961c9(0x2833),'ime-mode',_0x1961c9(0x41a9),'isolation',_0x1961c9(0xedd),_0x1961c9(0x48eb),_0x1961c9(0x4e98),_0x1961c9(0x2352),'line-height','list-style',_0x1961c9(0x1438),
_0x1961c9(0x1a28),_0x1961c9(0x27cf),_0x1961c9(0x2b9f),_0x1961c9(0x1e53),'margin-block-end','margin-block-start',_0x1961c9(0x90d),'margin-inline',_0x1961c9(0x442),_0x1961c9(0x2d5),_0x1961c9(0x26a1),_0x1961c9(0x5022),_0x1961c9(0xefe),_0x1961c9(0x4b5c),_0x1961c9(0x1cee),_0x1961c9(0x1ebe),'mask-border-mode',_0x1961c9(0x1706),_0x1961c9(0x2c4f),_0x1961c9(0xad0),_0x1961c9(0x4985),_0x1961c9(0x2691),_0x1961c9(0x897),_0x1961c9(0x504b),_0x1961c9(0x4cac),_0x1961c9(0x3dfd),_0x1961c9(0x498f),_0x1961c9(0x321c),_0x1961c9(0x3d6b),_0x1961c9(0x4c09),_0x1961c9(0x4921),'max-block-size','max-height',_0x1961c9(0x1e50),_0x1961c9(0x3e30),_0x1961c9(0x4000),_0x1961c9(0x2613),_0x1961c9(0x446d),_0x1961c9(0x2c06),'mix-blend-mode',_0x1961c9(0x516a),'nav-index',_0x1961c9(0x150e),_0x1961c9(0x467),_0x1961c9(0x14fc),_0x1961c9(0x28b),_0x1961c9(0x47d),_0x1961c9(0x3bd7),_0x1961c9(0x1541),_0x1961c9(0x4bd2),_0x1961c9(0xd8d),_0x1961c9(0x4840),_0x1961c9(0x2a21),'outline-color',_0x1961c9(0x40fb),_0x1961c9(0x230d),_0x1961c9(0x282),_0x1961c9(0xa69),_0x1961c9(0xdad),_0x1961c9(0x49a7),_0x1961c9(0x4704),_0x1961c9(0x5252),_0x1961c9(0x49ca),_0x1961c9(0x4012),_0x1961c9(0x2511),_0x1961c9(0x5060),_0x1961c9(0x1aac),_0x1961c9(0x25de),_0x1961c9(0x12fa),_0x1961c9(0x3bdb),_0x1961c9(0x41de),'padding-top',_0x1961c9(0x45a),'page-break-before',_0x1961c9(0x1e45),_0x1961c9(0x468c),_0x1961c9(0x10c9),_0x1961c9(0x295c),_0x1961c9(0x2b0d),'perspective-origin',_0x1961c9(0x23a0),_0x1961c9(0x25f1),_0x1961c9(0x37b),'resize','rest',_0x1961c9(0x58c),_0x1961c9(0x27f9),'right','row-gap',_0x1961c9(0x4fcd),_0x1961c9(0xa40),_0x1961c9(0x2d9b),_0x1961c9(0x2914),'scroll-margin-bottom',_0x1961c9(0x21e3),_0x1961c9(0x252e),_0x1961c9(0x3b8),_0x1961c9(0x10d2),'scroll-margin-right',_0x1961c9(0x4e95),_0x1961c9(0x3796),_0x1961c9(0x328a),_0x1961c9(0x35f3),'scroll-padding-block-start',_0x1961c9(0x2bef),_0x1961c9(0x13cc),_0x1961c9(0x11ba),_0x1961c9(0x4761),'scroll-padding-left',_0x1961c9(0x1a7a),'scroll-padding-top',_0x1961c9(0x2934),_0x1961c9(0x3c84),_0x196
1c9(0x4392),_0x1961c9(0x337),_0x1961c9(0xaaf),_0x1961c9(0x4a35),_0x1961c9(0x24d9),_0x1961c9(0x764),_0x1961c9(0x2e55),_0x1961c9(0x4bba),'speak-as',_0x1961c9(0x3d6),'tab-size',_0x1961c9(0x45c0),_0x1961c9(0x3e47),'text-align-all',_0x1961c9(0x1470),_0x1961c9(0x40ab),_0x1961c9(0x21e1),'text-decoration-color',_0x1961c9(0x3cde),_0x1961c9(0x1866),_0x1961c9(0x178f),_0x1961c9(0x3d6a),_0x1961c9(0x4866),_0x1961c9(0x246d),'text-indent','text-justify',_0x1961c9(0x22d7),_0x1961c9(0x3d18),_0x1961c9(0x25da),_0x1961c9(0x2803),_0x1961c9(0x1aba),_0x1961c9(0x3deb),_0x1961c9(0x279d),_0x1961c9(0x5161),_0x1961c9(0x1fc1),_0x1961c9(0xb11),_0x1961c9(0x478c),'transition',_0x1961c9(0x2d3e),_0x1961c9(0xce9),_0x1961c9(0x28db),_0x1961c9(0x416a),_0x1961c9(0x4074),_0x1961c9(0x1677),_0x1961c9(0x4703),'voice-balance',_0x1961c9(0x175f),_0x1961c9(0xd49),_0x1961c9(0x2483),_0x1961c9(0x1002),'voice-rate',_0x1961c9(0x5055),_0x1961c9(0x4f11),_0x1961c9(0x711),_0x1961c9(0x274c),_0x1961c9(0x17d2),_0x1961c9(0x4258),_0x1961c9(0x27ad),_0x1961c9(0xe5e),_0x1961c9(0x4c34),_0x1961c9(0x2a38),_0x1961c9(0x25fb)][_0x1961c9(0x78b)]();_0x13713d[_0x1961c9(0x474c)]=function(_0x55c04d){const 
_0x52e7a2=_0x1961c9,_0x2d85bb=(_0x31e1f6=>({'IMPORTANT':{'scope':_0x52e7a2(0x5153),'begin':'!important'},'BLOCK_COMMENT':_0x31e1f6[_0x52e7a2(0x23fe)],'HEXCOLOR':{'scope':_0x52e7a2(0x4a80),'begin':/#(([0-9a-fA-F]{3,4})|(([0-9a-fA-F]{2}){3,4}))\b/},'FUNCTION_DISPATCH':{'className':_0x52e7a2(0x43a),'begin':/[\w-]+(?=\()/},'ATTRIBUTE_SELECTOR_MODE':{'scope':'selector-attr','begin':/\[/,'end':/\]/,'illegal':'$','contains':[_0x31e1f6[_0x52e7a2(0xa4c)],_0x31e1f6[_0x52e7a2(0x291b)]]},'CSS_NUMBER_MODE':{'scope':_0x52e7a2(0x4a80),'begin':_0x31e1f6[_0x52e7a2(0x5047)]+'(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?','relevance':0x0},'CSS_VARIABLE':{'className':_0x52e7a2(0x431d),'begin':/--[A-Za-z_][A-Za-z0-9_-]*/}}))(_0x55c04d),_0x4088b5={'className':'variable','begin':'\x5c$'+_0x55c04d[_0x52e7a2(0xacc)]},_0x533016=_0x52e7a2(0x18f3);return{'name':'Stylus','aliases':['styl'],'case_insensitive':!0x1,'keywords':'if\x20else\x20for\x20in','illegal':'('+['\x5c?',_0x52e7a2(0xa35),_0x52e7a2(0x2cc7),'(\x5cbend\x5cb)',_0x52e7a2(0x203c),';','#\x5cs',_0x52e7a2(0x142a),'===\x5cs','\x5c|','%'][_0x52e7a2(0x3541)]('|')+')','contains':[_0x55c04d[_0x52e7a2(0x291b)],_0x55c04d[_0x52e7a2(0xa4c)],_0x55c04d[_0x52e7a2(0x2ae2)],_0x55c04d['C_BLOCK_COMMENT_MODE'],_0x2d85bb['HEXCOLOR'],{'begin':_0x52e7a2(0x798)+_0x533016,'className':_0x52e7a2(0x326)},{'begin':'#[a-zA-Z][a-zA-Z0-9_-]*'+_0x533016,'className':_0x52e7a2(0x3713)},{'begin':_0x52e7a2(0x4cd5)+_0x4459da[_0x52e7a2(0x3541)]('|')+')'+_0x533016,'className':_0x52e7a2(0x527d)},{'className':_0x52e7a2(0x277a),'begin':_0x52e7a2(0x205c)+_0x180490[_0x52e7a2(0x3541)]('|')+')'+_0x533016},{'className':_0x52e7a2(0x277a),'begin':_0x52e7a2(0x4c37)+_0x19a5a6[_0x52e7a2(0x3541)]('|')+')'+_0x533016},_0x2d85bb[_0x52e7a2(0x3880)],{'className':_0x52e7a2(0x1357),'begin':/@media/,'starts':{'end':/[{;}]/,'keywords':{'$pattern':/[a-z-]+/,'keyword':_0x52e7a2(0x29d9),'attribute':_0x3e204f[_0x52e7a2(0x3541)]('\x20')},'contains':[
_0x2d85bb['CSS_NUMBER_MODE']]}},{'className':'keyword','begin':'@((-(o|moz|ms|webkit)-)?('+[_0x52e7a2(0x2d4d),_0x52e7a2(0xdd4),_0x52e7a2(0x534),_0x52e7a2(0x38b6),_0x52e7a2(0x5193),_0x52e7a2(0x3c19),_0x52e7a2(0x331),_0x52e7a2(0x478e),_0x52e7a2(0x2406),'media',_0x52e7a2(0xc06),_0x52e7a2(0x32a0),_0x52e7a2(0x4ab1),_0x52e7a2(0x552)]['join']('|')+_0x52e7a2(0x2901)},_0x4088b5,_0x2d85bb[_0x52e7a2(0x46dc)],{'className':_0x52e7a2(0x14b2),'begin':_0x52e7a2(0x729),'illegal':_0x52e7a2(0x1db0),'returnBegin':!0x0,'contains':[{'className':_0x52e7a2(0x4685),'begin':_0x52e7a2(0x15ff)},{'className':_0x52e7a2(0xddd),'begin':/\(/,'end':/\)/,'contains':[_0x2d85bb[_0x52e7a2(0x1469)],_0x4088b5,_0x55c04d['APOS_STRING_MODE'],_0x2d85bb[_0x52e7a2(0x46dc)],_0x55c04d[_0x52e7a2(0x291b)]]}]},_0x2d85bb['CSS_VARIABLE'],{'className':'attribute','begin':_0x52e7a2(0x4cd5)+_0x77939c[_0x52e7a2(0x3541)]('|')+_0x52e7a2(0x716),'starts':{'end':/;|$/,'contains':[_0x2d85bb[_0x52e7a2(0x1469)],_0x4088b5,_0x55c04d[_0x52e7a2(0xa4c)],_0x55c04d[_0x52e7a2(0x291b)],_0x2d85bb['CSS_NUMBER_MODE'],_0x55c04d[_0x52e7a2(0x23fe)],_0x2d85bb[_0x52e7a2(0x3df6)],_0x2d85bb[_0x52e7a2(0x47a9)]],'illegal':/\./,'relevance':0x0}},_0x2d85bb[_0x52e7a2(0x47a9)]]};};},0x194b:_0xa9e813=>{const _0x1d0f6f=a0_0x11e7;_0xa9e813[_0x1d0f6f(0x474c)]=function(_0x1c0b68){const _0x4ffb50=_0x1d0f6f;return{'name':_0x4ffb50(0x2647),'case_insensitive':!0x0,'contains':[{'className':_0x4ffb50(0x2431),'begin':'\x5c[\x0a(multipart)?','end':_0x4ffb50(0x9e0)},{'className':_0x4ffb50(0x2431),'begin':_0x4ffb50(0x1be0)},{'className':_0x4ffb50(0x2431),'begin':_0x4ffb50(0x21aa)},{'className':_0x4ffb50(0x1357),'relevance':0xa,'variants':[{'begin':_0x4ffb50(0x4bfc)},{'begin':'^progress(:?)(\x5cs+)?(pop|push)?'},{'begin':'^tags:'},{'begin':_0x4ffb50(0x12b3)}]}]};};},0x5d8:_0x16d500=>{const _0x175f7b=a0_0x11e7;function _0x54568c(_0x5bce34){const _0x19f039=a0_0x11e7;return _0x5bce34?_0x19f039(0x2431)==typeof _0x5bce34?_0x5bce34:_0x5bce34[_0x19f039(0x33b0)]:null;}function 
_0x58e314(_0x8b1ff7){return _0x3aecc1('(?=',_0x8b1ff7,')');}function _0x3aecc1(..._0x95131b){const _0x3eacef=a0_0x11e7;return _0x95131b[_0x3eacef(0x4833)](_0x168b5b=>_0x54568c(_0x168b5b))[_0x3eacef(0x3541)]('');}function _0x3c5e8b(..._0x561ea2){const _0x1443fc=a0_0x11e7,_0x54e16c=function(_0xfcd785){const _0x3777d4=a0_0x11e7,_0x35ca78=_0xfcd785[_0xfcd785['length']-0x1];return _0x3777d4(0x20c7)==typeof _0x35ca78&&_0x35ca78[_0x3777d4(0x4514)]===Object?(_0xfcd785[_0x3777d4(0x4986)](_0xfcd785[_0x3777d4(0x1b19)]-0x1,0x1),_0x35ca78):{};}(_0x561ea2);return'('+(_0x54e16c[_0x1443fc(0x2f26)]?'':'?:')+_0x561ea2[_0x1443fc(0x4833)](_0x41269=>_0x54568c(_0x41269))[_0x1443fc(0x3541)]('|')+')';}const _0x58757c=_0xd7c63c=>_0x3aecc1(/\b/,_0xd7c63c,/\w$/[_0x175f7b(0x1769)](_0xd7c63c)?/\b/:/\B/),_0x377580=['Protocol',_0x175f7b(0x2b7e)][_0x175f7b(0x4833)](_0x58757c),_0x191144=[_0x175f7b(0x28bf),_0x175f7b(0x4454)]['map'](_0x58757c),_0x3988d8=[_0x175f7b(0x954),'Self'],_0x69dbe3=[_0x175f7b(0x3aa2),_0x175f7b(0x4684),_0x175f7b(0x3ab6),'async',_0x175f7b(0x371f),/as\?/,/as!/,'as',_0x175f7b(0x4f27),'break','case','catch','class',_0x175f7b(0x2b2e),_0x175f7b(0x3098),_0x175f7b(0x16d9),_0x175f7b(0x193e),_0x175f7b(0x16c0),_0x175f7b(0x3d23),_0x175f7b(0x2ceb),_0x175f7b(0x3039),_0x175f7b(0x24cc),_0x175f7b(0x35e5),'do',_0x175f7b(0x41aa),'each',_0x175f7b(0x3d4),_0x175f7b(0x44d8),_0x175f7b(0x1dd9),'fallthrough',/fileprivate\(set\)/,'fileprivate','final',_0x175f7b(0x3c19),'func',_0x175f7b(0xf9e),_0x175f7b(0x241c),'if','import',_0x175f7b(0x9fa),'infix',/init\?/,/init!/,'inout',/internal\(set\)/,_0x175f7b(0x2dad),'in','is',_0x175f7b(0xda5),_0x175f7b(0x11bc),_0x175f7b(0x393d),_0x175f7b(0x1e61),'macro',_0x175f7b(0x1ed7),_0x175f7b(0x10f9),/open\(set\)/,_0x175f7b(0x1795),_0x175f7b(0x1182),_0x175f7b(0x51e4),_0x175f7b(0x35a7),'postfix','precedencegroup',_0x175f7b(0x11e8),/private\(set\)/,'private',_0x175f7b(0x33e5),/public\(set\)/,_0x175f7b(0x39ce),_0x175f7b(0x11d0),_0x175f7b(0x3e07),_0x175f7b(0x237),_0x175f7b(0xdf
d),'set',_0x175f7b(0x363a),_0x175f7b(0x2c7c),'struct',_0x175f7b(0x4067),_0x175f7b(0x2cc),_0x175f7b(0x857),_0x175f7b(0x20f8),_0x175f7b(0x383),/try\?/,/try!/,_0x175f7b(0x422b),_0x175f7b(0x46b8),/unowned\(safe\)/,/unowned\(unsafe\)/,_0x175f7b(0x42c0),_0x175f7b(0x469d),'weak',_0x175f7b(0x3b62),_0x175f7b(0x552),_0x175f7b(0x11d3)],_0x21db82=['false',_0x175f7b(0x3e27),_0x175f7b(0x4022)],_0x216721=[_0x175f7b(0xa73),_0x175f7b(0x1e57),_0x175f7b(0x33a4),_0x175f7b(0x48eb),_0x175f7b(0x4ca9),_0x175f7b(0x28b),_0x175f7b(0x4d50)],_0x75d0fd=[_0x175f7b(0x41f2),_0x175f7b(0x1a74),_0x175f7b(0x234e),_0x175f7b(0xe50),_0x175f7b(0x1909),_0x175f7b(0xc34),_0x175f7b(0x32ea),'#file',_0x175f7b(0x4c3c),_0x175f7b(0x16b1),'#filePath',_0x175f7b(0xfdb),_0x175f7b(0x1c0f),_0x175f7b(0x197b),_0x175f7b(0x1d5a),_0x175f7b(0x952),_0x175f7b(0x4f43),'#sourceLocation',_0x175f7b(0x2cb2)],_0x13cdff=['abs',_0x175f7b(0xc36),_0x175f7b(0x4684),_0x175f7b(0x4fd4),_0x175f7b(0x1cd4),_0x175f7b(0x34b6),'dump',_0x175f7b(0x29c0),_0x175f7b(0x1fab),_0x175f7b(0x1e7f),_0x175f7b(0x4529),'min',_0x175f7b(0x51d4),_0x175f7b(0x31e2),_0x175f7b(0x19b6),_0x175f7b(0x10d0),_0x175f7b(0xb27),'print',_0x175f7b(0x2e93),_0x175f7b(0x17b1),'sequence',_0x175f7b(0x37de),_0x175f7b(0x25a9),_0x175f7b(0x10ed),'transcode','type',_0x175f7b(0xc08),_0x175f7b(0x2a35),_0x175f7b(0x3cc7),_0x175f7b(0x487c),_0x175f7b(0x4ed9),_0x175f7b(0x208c),'withoutActuallyEscaping',_0x175f7b(0x2f88)],_0x32fa7a=_0x3c5e8b(/[/=\-+!*%<>&|^~?]/,/[\u00A1-\u00A7]/,/[\u00A9\u00AB]/,/[\u00AC\u00AE]/,/[\u00B0\u00B1]/,/[\u00B6\u00BB\u00BF\u00D7\u00F7]/,/[\u2016-\u2017]/,/[\u2020-\u2027]/,/[\u2030-\u203E]/,/[\u2041-\u2053]/,/[\u2055-\u205E]/,/[\u2190-\u23FF]/,/[\u2500-\u2775]/,/[\u2794-\u2BFF]/,/[\u2E00-\u2E7F]/,/[\u3001-\u3003]/,/[\u3008-\u3020]/,/[\u3030]/),_0x339abd=_0x3c5e8b(_0x32fa7a,/[\u0300-\u036F]/,/[\u1DC0-\u1DFF]/,/[\u20D0-\u20FF]/,/[\uFE00-\uFE0F]/,/[\uFE20-\uFE2F]/),_0x12a9ae=_0x3aecc1(_0x32fa7a,_0x339abd,'*'),_0x5a5642=_0x3c5e8b(/[a-zA-Z_]/,/[\u00A8\u00AA\u00AD\u00AF\u00B2-\u
00B5\u00B7-\u00BA]/,/[\u00BC-\u00BE\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]/,/[\u0100-\u02FF\u0370-\u167F\u1681-\u180D\u180F-\u1DBF]/,/[\u1E00-\u1FFF]/,/[\u200B-\u200D\u202A-\u202E\u203F-\u2040\u2054\u2060-\u206F]/,/[\u2070-\u20CF\u2100-\u218F\u2460-\u24FF\u2776-\u2793]/,/[\u2C00-\u2DFF\u2E80-\u2FFF]/,/[\u3004-\u3007\u3021-\u302F\u3031-\u303F\u3040-\uD7FF]/,/[\uF900-\uFD3D\uFD40-\uFDCF\uFDF0-\uFE1F\uFE30-\uFE44]/,/[\uFE47-\uFEFE\uFF00-\uFFFD]/),_0x508cd1=_0x3c5e8b(_0x5a5642,/\d/,/[\u0300-\u036F\u1DC0-\u1DFF\u20D0-\u20FF\uFE20-\uFE2F]/),_0x525606=_0x3aecc1(_0x5a5642,_0x508cd1,'*'),_0x44a45c=_0x3aecc1(/[A-Z]/,_0x508cd1,'*'),_0x392985=['attached',_0x175f7b(0x3f47),_0x3aecc1(/convention\(/,_0x3c5e8b('swift',_0x175f7b(0x1f2e),'c'),/\)/),_0x175f7b(0x238),_0x175f7b(0x22c4),_0x175f7b(0x4a9a),_0x175f7b(0x124c),_0x175f7b(0x466d),_0x175f7b(0xf75),_0x175f7b(0x1c78),_0x175f7b(0x2da2),'IBDesignable',_0x175f7b(0x270b),'IBOutlet',_0x175f7b(0x3767),'inlinable',_0x175f7b(0x3212),'nonobjc','NSApplicationMain',_0x175f7b(0x3084),'NSManaged',_0x3aecc1(/objc\(/,_0x525606,/\)/),_0x175f7b(0x344e),_0x175f7b(0x1735),_0x175f7b(0x4f2f),_0x175f7b(0x297f),_0x175f7b(0x4662),'Sendable',_0x175f7b(0x507d),_0x175f7b(0x4414),_0x175f7b(0xb09),_0x175f7b(0x40b5),_0x175f7b(0x1bf6),'warn_unqualified_access'],_0x261296=[_0x175f7b(0x4767),'iOSApplicationExtension',_0x175f7b(0x24d5),_0x175f7b(0x218d),'macCatalyst',_0x175f7b(0x3bf5),_0x175f7b(0x3008),_0x175f7b(0x48bb),_0x175f7b(0x5061),_0x175f7b(0xcdf),_0x175f7b(0xd63)];_0x16d500['exports']=function(_0x52e25e){const 
_0x12fe23=_0x175f7b,_0x29872d={'match':/\s+/,'relevance':0x0},_0x2439d7=_0x52e25e[_0x12fe23(0x4e4f)](_0x12fe23(0x4f94),'\x5c*/',{'contains':[_0x12fe23(0x4454)]}),_0x7623d6=[_0x52e25e[_0x12fe23(0x2ae2)],_0x2439d7],_0x562aaf={'match':[/\./,_0x3c5e8b(..._0x377580,..._0x191144)],'className':{0x2:_0x12fe23(0x1357)}},_0x431c00={'match':_0x3aecc1(/\./,_0x3c5e8b(..._0x69dbe3)),'relevance':0x0},_0x44b702=_0x69dbe3['filter'](_0xedbd6b=>_0x12fe23(0x2431)==typeof _0xedbd6b)[_0x12fe23(0x1d1d)]([_0x12fe23(0x2834)]),_0x19300d={'variants':[{'className':'keyword','match':_0x3c5e8b(..._0x69dbe3['filter'](_0x1dec3a=>_0x12fe23(0x2431)!=typeof _0x1dec3a)[_0x12fe23(0x1d1d)](_0x3988d8)[_0x12fe23(0x4833)](_0x58757c),..._0x191144)}]},_0x167546={'$pattern':_0x3c5e8b(/\b\w+/,/#\w+/),'keyword':_0x44b702[_0x12fe23(0x1d1d)](_0x75d0fd),'literal':_0x21db82},_0x1b3505=[_0x562aaf,_0x431c00,_0x19300d],_0xc890eb=[{'match':_0x3aecc1(/\./,_0x3c5e8b(..._0x13cdff)),'relevance':0x0},{'className':_0x12fe23(0x43a),'match':_0x3aecc1(/\b/,_0x3c5e8b(..._0x13cdff),/(?=\()/)}],_0x51fdda={'match':/->/,'relevance':0x0},_0x43fdd6=[_0x51fdda,{'className':_0x12fe23(0x1182),'relevance':0x0,'variants':[{'match':_0x12a9ae},{'match':_0x12fe23(0x1340)+_0x339abd+')+'}]}],_0x35488d='([0-9]_*)+',_0x373a61=_0x12fe23(0x4edf),_0x5ccb18={'className':_0x12fe23(0x4a80),'relevance':0x0,'variants':[{'match':_0x12fe23(0x4cd5)+_0x35488d+_0x12fe23(0x2742)+_0x35488d+'))?([eE][+-]?('+_0x35488d+_0x12fe23(0x1ed0)},{'match':_0x12fe23(0x3ff2)+_0x373a61+_0x12fe23(0x2742)+_0x373a61+_0x12fe23(0x1c49)+_0x35488d+_0x12fe23(0x1ed0)},{'match':/\b0o([0-7]_*)+\b/},{'match':/\b0b([01]_*)+\b/}]},_0x9d586a=(_0x4c35a2='')=>({'className':_0x12fe23(0x2ad6),'variants':[{'match':_0x3aecc1(/\\/,_0x4c35a2,/[0\\tnr"']/)},{'match':_0x3aecc1(/\\/,_0x4c35a2,/u\{[0-9a-fA-F]{1,8}\}/)}]}),_0x367d75=(_0x41ac13='')=>({'className':_0x12fe23(0x2ad6),'match':_0x3aecc1(/\\/,_0x41ac13,/[\t 
]*(?:[\r\n]|\r\n)/)}),_0x49c799=(_0x31048e='')=>({'className':_0x12fe23(0x2ad6),'label':'interpol','begin':_0x3aecc1(/\\/,_0x31048e,/\(/),'end':/\)/}),_0x438e92=(_0xf19d85='')=>({'begin':_0x3aecc1(_0xf19d85,/"""/),'end':_0x3aecc1(/"""/,_0xf19d85),'contains':[_0x9d586a(_0xf19d85),_0x367d75(_0xf19d85),_0x49c799(_0xf19d85)]}),_0x1c8db5=(_0x1a0dc2='')=>({'begin':_0x3aecc1(_0x1a0dc2,/"/),'end':_0x3aecc1(/"/,_0x1a0dc2),'contains':[_0x9d586a(_0x1a0dc2),_0x49c799(_0x1a0dc2)]}),_0xb360ef={'className':_0x12fe23(0x2431),'variants':[_0x438e92(),_0x438e92('#'),_0x438e92('##'),_0x438e92(_0x12fe23(0x1bbc)),_0x1c8db5(),_0x1c8db5('#'),_0x1c8db5('##'),_0x1c8db5('###')]},_0x156dae=[_0x52e25e[_0x12fe23(0x4a76)],{'begin':/\[/,'end':/\]/,'relevance':0x0,'contains':[_0x52e25e[_0x12fe23(0x4a76)]]}],_0x1f7ce2={'begin':/\/[^\s](?=[^/\n]*\/)/,'end':/\//,'contains':_0x156dae},_0x1f0dd6=_0x2d4481=>{const _0xdb51ee=_0x12fe23,_0x2d7755=_0x3aecc1(_0x2d4481,/\//),_0x547419=_0x3aecc1(/\//,_0x2d4481);return{'begin':_0x2d7755,'end':_0x547419,'contains':[..._0x156dae,{'scope':_0xdb51ee(0x4645),'begin':_0xdb51ee(0x1d9e)+_0x547419+')','end':/$/}]};},_0x2e7582={'scope':_0x12fe23(0x4d1d),'variants':[_0x1f0dd6(_0x12fe23(0x1bbc)),_0x1f0dd6('##'),_0x1f0dd6('#'),_0x1f7ce2]},_0x417246={'match':_0x3aecc1(/`/,_0x525606,/`/)},_0x47312b=[_0x417246,{'className':_0x12fe23(0x3362),'match':/\$\d+/},{'className':_0x12fe23(0x3362),'match':'\x5c$'+_0x508cd1+'+'}],_0x378271=[{'match':/(@|#(un)?)available/,'scope':_0x12fe23(0x1357),'starts':{'contains':[{'begin':/\(/,'end':/\)/,'keywords':_0x261296,'contains':[..._0x43fdd6,_0x5ccb18,_0xb360ef]}]}},{'scope':_0x12fe23(0x1357),'match':_0x3aecc1(/@/,_0x3c5e8b(..._0x392985))},{'scope':_0x12fe23(0x5153),'match':_0x3aecc1(/@/,_0x525606)}],_0x3f7a86={'match':_0x58e314(/\b[A-Z]/),'relevance':0x0,'contains':[{'className':_0x12fe23(0xcfc),'match':_0x3aecc1(/(AV|CA|CF|CG|CI|CL|CM|CN|CT|MK|MP|MTK|MTL|NS|SCN|SK|UI|WK|XC)/,_0x508cd1,'+')},{'className':_0x12fe23(0xcfc),'match':_0x44a45c,'r
elevance':0x0},{'match':/[?!]+/,'relevance':0x0},{'match':/\.\.\./,'relevance':0x0},{'match':_0x3aecc1(/\s+&\s+/,_0x58e314(_0x44a45c)),'relevance':0x0}]},_0x11ae86={'begin'://,'keywords':_0x167546,'contains':[..._0x7623d6,..._0x1b3505,..._0x378271,_0x51fdda,_0x3f7a86]};_0x3f7a86[_0x12fe23(0x2b31)][_0x12fe23(0x1715)](_0x11ae86);const _0x580451={'begin':/\(/,'end':/\)/,'relevance':0x0,'keywords':_0x167546,'contains':[_0x12fe23(0x4454),{'match':_0x3aecc1(_0x525606,/\s*:/),'keywords':_0x12fe23(0x2834),'relevance':0x0},..._0x7623d6,_0x2e7582,..._0x1b3505,..._0xc890eb,..._0x43fdd6,_0x5ccb18,_0xb360ef,..._0x47312b,..._0x378271,_0x3f7a86]},_0x1a2b51={'begin'://,'keywords':'repeat\x20each','contains':[..._0x7623d6,_0x3f7a86]},_0x1a385b={'begin':/\(/,'end':/\)/,'keywords':_0x167546,'contains':[{'begin':_0x3c5e8b(_0x58e314(_0x3aecc1(_0x525606,/\s*:/)),_0x58e314(_0x3aecc1(_0x525606,/\s+/,_0x525606,/\s*:/))),'end':/:/,'relevance':0x0,'contains':[{'className':_0x12fe23(0x1357),'match':/\b_\b/},{'className':_0x12fe23(0xddd),'match':_0x525606}]},..._0x7623d6,..._0x1b3505,..._0x43fdd6,_0x5ccb18,_0xb360ef,..._0x378271,_0x3f7a86,_0x580451],'endsParent':!0x0,'illegal':/["']/},_0x4e9242={'match':[/(func|macro)/,/\s+/,_0x3c5e8b(_0x417246[_0x12fe23(0x2d96)],_0x525606,_0x12a9ae)],'className':{0x1:'keyword',0x3:_0x12fe23(0x20db)},'contains':[_0x1a2b51,_0x1a385b,_0x29872d],'illegal':[/\[/,/%/]},_0x5157ac={'match':[/\b(?:subscript|init[?!]?)/,/\s*(?=[<(])/],'className':{0x1:_0x12fe23(0x1357)},'contains':[_0x1a2b51,_0x1a385b,_0x29872d],'illegal':/\[|%/},_0x127e09={'match':[/operator/,/\s+/,_0x12a9ae],'className':{0x1:'keyword',0x3:_0x12fe23(0x4685)}},_0xf1465b={'begin':[/precedencegroup/,/\s+/,_0x44a45c],'className':{0x1:_0x12fe23(0x1357),0x3:_0x12fe23(0x4685)},'contains':[_0x3f7a86],'keywords':[..._0x216721,..._0x21db82],'end':/}/};for(const _0x560d10 of _0xb360ef[_0x12fe23(0x2807)]){const 
_0x342254=_0x560d10['contains'][_0x12fe23(0x5144)](_0x1d27ee=>_0x12fe23(0x748)===_0x1d27ee[_0x12fe23(0x3b71)]);_0x342254[_0x12fe23(0xe37)]=_0x167546;const _0x484804=[..._0x1b3505,..._0xc890eb,..._0x43fdd6,_0x5ccb18,_0xb360ef,..._0x47312b];_0x342254[_0x12fe23(0x2b31)]=[..._0x484804,{'begin':/\(/,'end':/\)/,'contains':[_0x12fe23(0x4454),..._0x484804]}];}return{'name':_0x12fe23(0x515c),'keywords':_0x167546,'contains':[..._0x7623d6,_0x4e9242,_0x5157ac,{'beginKeywords':_0x12fe23(0x3c49),'end':'\x5c{','excludeEnd':!0x0,'keywords':_0x167546,'contains':[_0x52e25e[_0x12fe23(0x46a1)](_0x52e25e['TITLE_MODE'],{'className':_0x12fe23(0x19e4),'begin':/[A-Za-z$_][\u00C0-\u02B80-9A-Za-z$_]*/}),..._0x1b3505]},_0x127e09,_0xf1465b,{'beginKeywords':_0x12fe23(0x331),'end':/$/,'contains':[..._0x7623d6],'relevance':0x0},_0x2e7582,..._0x1b3505,..._0xc890eb,..._0x43fdd6,_0x5ccb18,_0xb360ef,..._0x47312b,..._0x378271,_0x3f7a86,_0x580451]};};},0x96:_0x10524b=>{_0x10524b['exports']=function(_0x2cbc5f){const _0x5ddb9b=a0_0x11e7;return{'name':_0x5ddb9b(0x3194),'contains':[{'className':_0x5ddb9b(0x4645),'begin':/\$noop\(/,'end':/\)/,'contains':[{'begin':/\\[()]/},{'begin':/\(/,'end':/\)/,'contains':[{'begin':/\\[()]/},_0x5ddb9b(0x4454)]}],'relevance':0xa},{'className':'keyword','begin':/\$[_a-zA-Z0-9]+(?=\()/},{'className':_0x5ddb9b(0x3362),'begin':/%[_a-zA-Z0-9:]+%/},{'className':_0x5ddb9b(0x239b),'begin':/\\[\\nt$%,()]/},{'className':'symbol','begin':/\\u[a-fA-F0-9]{4}/}]};};},0x22ee:_0x2d7d15=>{const _0x2aff37=a0_0x11e7;_0x2d7d15[_0x2aff37(0x474c)]=function(_0x1cab70){const 
_0x55d19b=_0x2aff37;return{'name':_0x55d19b(0x26c3),'case_insensitive':!0x0,'contains':[_0x1cab70['HASH_COMMENT_MODE'],{'className':_0x55d19b(0x5153),'variants':[{'begin':_0x55d19b(0x1316)},{'begin':_0x55d19b(0x11b7)}]},{'begin':/---$/,'end':_0x55d19b(0x27eb),'subLanguage':_0x55d19b(0x4e26),'relevance':0x0},{'className':'number','begin':'\x20(\x5cd+)\x20'},{'className':_0x55d19b(0x239b),'variants':[{'begin':_0x55d19b(0x2982)},{'begin':_0x55d19b(0x32b4)}]}]};};},0x4f8:_0x5efe5a=>{const _0x232286=a0_0x11e7;_0x5efe5a[_0x232286(0x474c)]=function(_0x39fde4){const _0x251f8a=_0x232286,_0x27653d=_0x39fde4[_0x251f8a(0x41d2)],_0x4d6244=/[a-zA-Z_][a-zA-Z0-9_]*/,_0x14cc2b={'className':_0x251f8a(0x4a80),'variants':[_0x39fde4[_0x251f8a(0xed7)],_0x39fde4['C_NUMBER_MODE']]};return{'name':'Tcl','aliases':['tk'],'keywords':[_0x251f8a(0x1349),'append',_0x251f8a(0x4c31),_0x251f8a(0x26f6),'auto_execok','auto_import',_0x251f8a(0x1588),'auto_mkindex','auto_mkindex_old',_0x251f8a(0x450f),_0x251f8a(0xb18),_0x251f8a(0x1c2a),_0x251f8a(0x4053),_0x251f8a(0x4e10),_0x251f8a(0x31a3),'cd',_0x251f8a(0x1159),_0x251f8a(0x4901),_0x251f8a(0x50d8),_0x251f8a(0x1d1d),_0x251f8a(0x16d9),'dde',_0x251f8a(0x4e11),_0x251f8a(0x42b5),'eof',_0x251f8a(0x3d85),_0x251f8a(0x2ff9),_0x251f8a(0x198d),_0x251f8a(0x4c7b),_0x251f8a(0x226e),_0x251f8a(0x4e85),'fconfigure',_0x251f8a(0x4722),_0x251f8a(0x184e),_0x251f8a(0x4ab7),'filename',_0x251f8a(0x14a2),_0x251f8a(0x3c19),'foreach',_0x251f8a(0x29a7),'gets',_0x251f8a(0x3bca),_0x251f8a(0x501b),_0x251f8a(0x3c32),_0x251f8a(0x35ba),'if',_0x251f8a(0x4ada),'info',_0x251f8a(0x27c6),'join',_0x251f8a(0x10c4),_0x251f8a(0x5201),_0x251f8a(0x556),'linsert|10','list',_0x251f8a(0x2e32),'load',_0x251f8a(0x3936),_0x251f8a(0xee9),_0x251f8a(0x1a93),'lreverse|10',_0x251f8a(0x1b7a),_0x251f8a(0x3bf0),_0x251f8a(0x259c),'mathfunc',_0x251f8a(0x317),_0x251f8a(0xb0a),_0x251f8a(0x2277),_0x251f8a(0x37f7),'open','package','parray',_0x251f8a(0x46eb),_0x251f8a(0x1836),_0x251f8a(0x29ea),_0x251f8a(0x12ec),_0x251f
8a(0x565),'proc',_0x251f8a(0xe0c),_0x251f8a(0x44cf),_0x251f8a(0x50a6),_0x251f8a(0x3337),_0x251f8a(0x4d1d),_0x251f8a(0x2a81),_0x251f8a(0x3904),_0x251f8a(0x2022),_0x251f8a(0xdfd),_0x251f8a(0x569),'scan',_0x251f8a(0x25f5),'set',_0x251f8a(0x6d2),_0x251f8a(0x33b0),'split',_0x251f8a(0x2431),_0x251f8a(0x2ad6),_0x251f8a(0x857),_0x251f8a(0x16ea),_0x251f8a(0x1a4c),_0x251f8a(0x14cd),_0x251f8a(0x1b91),_0x251f8a(0x3b04),_0x251f8a(0x4d3d),_0x251f8a(0x4c66),_0x251f8a(0x1ad5),_0x251f8a(0x1a90),_0x251f8a(0x51b6),'tm','trace',_0x251f8a(0x40b5),_0x251f8a(0x2802),_0x251f8a(0x4479),_0x251f8a(0x38d6),_0x251f8a(0x2abc),_0x251f8a(0x4a54),_0x251f8a(0x3362),_0x251f8a(0x220f),'while'],'contains':[_0x39fde4['COMMENT'](_0x251f8a(0x2192),'$'),_0x39fde4[_0x251f8a(0x4e4f)]('^[\x20\x5ct]*#','$'),{'beginKeywords':_0x251f8a(0x4e27),'end':_0x251f8a(0x1cf1),'excludeEnd':!0x0,'contains':[{'className':_0x251f8a(0x4685),'begin':_0x251f8a(0x1279),'end':_0x251f8a(0x4e00),'endsWithParent':!0x0,'excludeEnd':!0x0}]},{'className':_0x251f8a(0x3362),'variants':[{'begin':_0x27653d[_0x251f8a(0x1d1d)](/\$/,_0x27653d['optional'](/::/),_0x4d6244,_0x251f8a(0x4568),_0x4d6244,')*')},{'begin':'\x5c$\x5c{(::)?[a-zA-Z_]((::)?[a-zA-Z0-9_])*','end':'\x5c}','contains':[_0x14cc2b]}]},{'className':_0x251f8a(0x2431),'contains':[_0x39fde4[_0x251f8a(0x4a76)]],'variants':[_0x39fde4[_0x251f8a(0x46a1)](_0x39fde4[_0x251f8a(0x291b)],{'illegal':null})]},_0x14cc2b]};};},0x26a0:_0x303fe9=>{_0x303fe9['exports']=function(_0x109862){const 
_0x23429b=a0_0x11e7,_0x2fcc78=['bool',_0x23429b(0x961),'i16',_0x23429b(0x25dd),_0x23429b(0x2c91),'double',_0x23429b(0x2431),_0x23429b(0x4053)];return{'name':_0x23429b(0x4c48),'keywords':{'keyword':[_0x23429b(0x37f7),'const',_0x23429b(0xc83),'struct',_0x23429b(0x44d8),_0x23429b(0x3780),'exception',_0x23429b(0x27d6),'oneway','set','list','map','required',_0x23429b(0x51e4)],'type':_0x2fcc78,'literal':'true\x20false'},'contains':[_0x109862[_0x23429b(0x291b)],_0x109862[_0x23429b(0x30be)],_0x109862[_0x23429b(0x2ae2)],_0x109862['C_BLOCK_COMMENT_MODE'],{'className':_0x23429b(0x1390),'beginKeywords':_0x23429b(0x4e75),'end':/\{/,'illegal':/\n/,'contains':[_0x109862['inherit'](_0x109862[_0x23429b(0x2029)],{'starts':{'endsWithParent':!0x0,'excludeEnd':!0x0}})]},{'begin':_0x23429b(0x4672),'keywords':{'type':[..._0x2fcc78,'set',_0x23429b(0x144e),_0x23429b(0x4833)]},'end':'>','contains':[_0x23429b(0x4454)]}]};};},0x3db:_0x509cb0=>{const _0x5531d3=a0_0x11e7;_0x509cb0[_0x5531d3(0x474c)]=function(_0x5ec9e5){const 
_0x507995=_0x5531d3,_0x364e64={'className':_0x507995(0x4a80),'begin':_0x507995(0x4485),'relevance':0x0},_0x364a79={'className':_0x507995(0x239b),'begin':_0x507995(0x24f8)};return{'name':'TP','keywords':{'keyword':[_0x507995(0x95f),_0x507995(0x5026),_0x507995(0x139e),_0x507995(0x278b),_0x507995(0xf66),_0x507995(0x287f),_0x507995(0x42ab),_0x507995(0x3f88),_0x507995(0x1e87),'CONDITION',_0x507995(0xaa1),'DA','DB',_0x507995(0x42a0),'DETECT',_0x507995(0x2c17),_0x507995(0x2acb),'ENDFOR','ERR_NUM',_0x507995(0x15ac),_0x507995(0x9a9),_0x507995(0x36fb),'GP',_0x507995(0x1a17),_0x507995(0x4195),'IF','JMP',_0x507995(0x266f),_0x507995(0x20f0),'MOD',_0x507995(0x39c4),'OFFSET',_0x507995(0x31c5),'OR',_0x507995(0xdfe),_0x507995(0x1a3c),_0x507995(0x285c),'PTH','RT_LD',_0x507995(0x8ec),_0x507995(0x2b90),_0x507995(0x1aad),'Skip','TA','TB','TO',_0x507995(0x49b2),'Tool_Offset','UF','UT','UFRAME_NUM','UTOOL_NUM',_0x507995(0x1ce1),_0x507995(0x4824),'X','Y','Z','W','P','R','STRLEN',_0x507995(0x4c39),_0x507995(0x1c1d),_0x507995(0x1ca5),_0x507995(0x8fd),'ATTR','MN',_0x507995(0x21b)],'literal':['ON',_0x507995(0x36c3),_0x507995(0x2812),_0x507995(0x2411),_0x507995(0x3de8),_0x507995(0xa29),'DISABLE',_0x507995(0x165c),_0x507995(0x4ab6),_0x507995(0xd82)]},'contains':[{'className':_0x507995(0x43a),'begin':_0x507995(0x23d6),'end':'\x5c]','contains':[_0x507995(0x4454),_0x364e64,_0x364a79]},{'className':'built_in','begin':_0x507995(0x794),'end':'\x5c]','contains':[_0x507995(0x4454),_0x364e64,_0x5ec9e5[_0x507995(0x291b)],_0x364a79]},{'className':_0x507995(0x1357),'begin':_0x507995(0x216)},{'className':_0x507995(0x1357),'begin':_0x507995(0x416e)},{'className':_0x507995(0x1357),'begin':'\x5cb(ACC|CNT|Skip|Offset|PSPD|RT_LD|AP_LD|Tool_Offset)'},{'className':_0x507995(0x4a80),'begin':'\x5cd+(sec|msec|mm/sec|cm/min|inch/min|deg/sec|mm|in|cm)?\x5cb','relevance':0x0},_0x5ec9e5[_0x507995(0x4e4f)]('//','[;$]'),_0x5ec9e5['COMMENT']('!',_0x507995(0x2033)),_0x5ec9e5[_0x507995(0x4e4f)](_0x507995(0x3938),'$'),_0x5ec9e5
['QUOTE_STRING_MODE'],{'className':_0x507995(0x2431),'begin':'\x27','end':'\x27'},_0x5ec9e5[_0x507995(0xd12)],{'className':_0x507995(0x3362),'begin':_0x507995(0x3f5d)}]};};},0xde:_0x526e3b=>{_0x526e3b['exports']=function(_0x1251db){const _0x4dfdc8=a0_0x11e7,_0x4e43fe=_0x1251db[_0x4dfdc8(0x41d2)],_0x5c8bc7=[_0x4dfdc8(0x1143),_0x4dfdc8(0x2b17),_0x4dfdc8(0x2566),'attribute',_0x4dfdc8(0x1f2e),'constant','controller|0','country_timezones',_0x4dfdc8(0x15c7),_0x4dfdc8(0x1429),'date',_0x4dfdc8(0x3dd8),'expression','form|0',_0x4dfdc8(0x4d15),_0x4dfdc8(0x4784),_0x4dfdc8(0x4c00),_0x4dfdc8(0x4204),_0x4dfdc8(0x1078),_0x4dfdc8(0x51b),'form_start',_0x4dfdc8(0xbc8),_0x4dfdc8(0x21d4),_0x4dfdc8(0x478e),_0x4dfdc8(0x3615),'logout_path',_0x4dfdc8(0x2c39),'max',_0x4dfdc8(0x37c8),_0x4dfdc8(0x46f8),_0x4dfdc8(0x44f1),_0x4dfdc8(0xe98),'range','relative_path',_0x4dfdc8(0x17d9),_0x4dfdc8(0x44e3),'source',_0x4dfdc8(0x3eee),'url|0'];let _0x487e02=['apply',_0x4dfdc8(0xa01),_0x4dfdc8(0x1f2e),'cache',_0x4dfdc8(0x3df5),'do',_0x4dfdc8(0x6a5),_0x4dfdc8(0x4428),_0x4dfdc8(0x1465),'flush',_0x4dfdc8(0x3c19),_0x4dfdc8(0x15d0),_0x4dfdc8(0x27e6),'if','import',_0x4dfdc8(0x478e),_0x4dfdc8(0x172d),_0x4dfdc8(0x3f56),_0x4dfdc8(0x1fa),'stopwatch',_0x4dfdc8(0x2c14),_0x4dfdc8(0x3a9e),'transchoice',_0x4dfdc8(0x84a),_0x4dfdc8(0x4fb3),'with'];_0x487e02=_0x487e02[_0x4dfdc8(0x1d1d)](_0x487e02['map'](_0x1de66c=>_0x4dfdc8(0x2681)+_0x1de66c));const 
_0x2b5afe={'scope':_0x4dfdc8(0x2431),'variants':[{'begin':/'/,'end':/'/},{'begin':/"/,'end':/"/}]},_0x3f71d7={'scope':_0x4dfdc8(0x4a80),'match':/\d+/},_0x354e36={'begin':/\(/,'end':/\)/,'excludeBegin':!0x0,'excludeEnd':!0x0,'contains':[_0x2b5afe,_0x3f71d7]},_0x1432b7={'beginKeywords':_0x5c8bc7[_0x4dfdc8(0x3541)]('\x20'),'keywords':{'name':_0x5c8bc7},'relevance':0x0,'contains':[_0x354e36]},_0x559f08={'match':/\|(?=[A-Za-z_]+:?)/,'beginScope':'punctuation','relevance':0x0,'contains':[{'match':/[A-Za-z_]+:?/,'keywords':[_0x4dfdc8(0xbe0),_0x4dfdc8(0x199a),_0x4dfdc8(0x2e4a),_0x4dfdc8(0x451),_0x4dfdc8(0x191c),_0x4dfdc8(0x4a1a),_0x4dfdc8(0x4524),'country_name',_0x4dfdc8(0x1477),_0x4dfdc8(0x12e0),_0x4dfdc8(0x3f11),_0x4dfdc8(0x40d9),_0x4dfdc8(0x218c),_0x4dfdc8(0x3d23),_0x4dfdc8(0x4e2c),_0x4dfdc8(0x4eb5),'file_link',_0x4dfdc8(0x4dd0),_0x4dfdc8(0x1465),_0x4dfdc8(0x4d51),'format',_0x4dfdc8(0x140e),'format_args_as_text',_0x4dfdc8(0xa53),_0x4dfdc8(0x83b),'format_datetime',_0x4dfdc8(0x421f),_0x4dfdc8(0x4a19),_0x4dfdc8(0x3a3e),_0x4dfdc8(0x12eb),_0x4dfdc8(0x167c),_0x4dfdc8(0x813),'inky_to_html','inline_css',_0x4dfdc8(0x3541),_0x4dfdc8(0x115a),_0x4dfdc8(0x1ea9),_0x4dfdc8(0x1aea),_0x4dfdc8(0x4d3c),_0x4dfdc8(0x1b19),'locale_name',_0x4dfdc8(0x52d),_0x4dfdc8(0x4833),_0x4dfdc8(0x4116),'markdown_to_html',_0x4dfdc8(0x10d1),_0x4dfdc8(0x4152),_0x4dfdc8(0x3fce),'raw','reduce','replace',_0x4dfdc8(0x78b),_0x4dfdc8(0x3d6c),_0x4dfdc8(0x384c),'slug',_0x4dfdc8(0x4c33),'spaceless',_0x4dfdc8(0x1117),_0x4dfdc8(0x2af1),_0x4dfdc8(0x9a7),_0x4dfdc8(0x4685),_0x4dfdc8(0x2c14),_0x4dfdc8(0x271c),'trim','u|0',_0x4dfdc8(0x4541),_0x4dfdc8(0x4d2),'yaml_dump',_0x4dfdc8(0x3c65)]}]},_0x2652b3=(_0x4dc38c,{relevance:_0x24b798})=>({'beginScope':{0x1:_0x4dfdc8(0x499d),0x3:_0x4dfdc8(0x11d8)},'relevance':_0x24b798||0x2,'endScope':_0x4dfdc8(0x499d),'begin':[/\{%/,/\s*/,_0x4e43fe[_0x4dfdc8(0x583)](..._0x4dc38c)],'end':/%\}/,'keywords':'in','contains':[_0x559f08,_0x1432b7,_0x2b5afe,_0x3f71d7]}),_0x570f29=_0x2652b3(_0x487e02,{
'relevance':0x2}),_0xbd65ea=_0x2652b3([/[a-z_]+/],{'relevance':0x1});return{'name':_0x4dfdc8(0x4d68),'aliases':[_0x4dfdc8(0x3f04)],'case_insensitive':!0x0,'subLanguage':'xml','contains':[_0x1251db[_0x4dfdc8(0x4e4f)](/\{#/,/#\}/),_0x570f29,_0xbd65ea,{'className':_0x4dfdc8(0x2916),'begin':/\{\{/,'end':/\}\}/,'contains':[_0x4dfdc8(0x4454),_0x559f08,_0x1432b7,_0x2b5afe,_0x3f71d7]}]};};},0x21c0:_0x4d97ec=>{const _0x559d0d=a0_0x11e7,_0x14c12a=_0x559d0d(0x18c3),_0x26a31b=['as','in','of','if','for','while',_0x559d0d(0x37b2),'var','new',_0x559d0d(0x14b2),'do',_0x559d0d(0xdfd),_0x559d0d(0x27d6),'else',_0x559d0d(0x4e10),'catch',_0x559d0d(0xf3c),'with',_0x559d0d(0x383),_0x559d0d(0x2e7e),_0x559d0d(0x3d23),_0x559d0d(0x422b),'switch',_0x559d0d(0x16d9),_0x559d0d(0x3368),_0x559d0d(0x5be),'let',_0x559d0d(0x5075),_0x559d0d(0xc01),_0x559d0d(0x1390),_0x559d0d(0x2085),_0x559d0d(0x16c2),_0x559d0d(0x371f),'static','import',_0x559d0d(0x27e6),'export',_0x559d0d(0x4428)],_0xc304b0=[_0x559d0d(0x4022),_0x559d0d(0x3984),_0x559d0d(0x1582),_0x559d0d(0x1daa),_0x559d0d(0x494b),_0x559d0d(0x2486)],_0x157ee0=[_0x559d0d(0x108b),_0x559d0d(0x2ac5),_0x559d0d(0x3bc9),_0x559d0d(0x2e8a),_0x559d0d(0x37c3),_0x559d0d(0x448e),_0x559d0d(0x3cd),_0x559d0d(0x4cd1),_0x559d0d(0x3327),_0x559d0d(0xae9),'Array',_0x559d0d(0x4016),_0x559d0d(0x3ae8),_0x559d0d(0x927),_0x559d0d(0x15b2),_0x559d0d(0x5091),_0x559d0d(0x4f9),'Int32Array',_0x559d0d(0x323e),'Uint32Array',_0x559d0d(0x3711),'BigUint64Array',_0x559d0d(0x34d5),_0x559d0d(0x4a59),_0x559d0d(0x1968),'WeakMap',_0x559d0d(0x3304),_0x559d0d(0x593),'Atomics',_0x559d0d(0x35d6),_0x559d0d(0x9c3),'Promise',_0x559d0d(0x2122),'GeneratorFunction',_0x559d0d(0x4265),_0x559d0d(0x373),'Proxy',_0x559d0d(0xcef),'WebAssembly'],_0x1f6d69=[_0x559d0d(0x5e5),_0x559d0d(0x47b6),_0x559d0d(0x1b9d),_0x559d0d(0x7b7),_0x559d0d(0x2041),_0x559d0d(0x22ea),'TypeError',_0x559d0d(0x3f3)],_0xe37842=[_0x559d0d(0x2a87),'setTimeout',_0x559d0d(0x2da),'clearTimeout',_0x559d0d(0x4031),_0x559d0d(0x474c),_0x559d0d(0x2f
f9),'isFinite','isNaN',_0x559d0d(0x42de),_0x559d0d(0x3cfb),_0x559d0d(0x2b42),'decodeURIComponent',_0x559d0d(0x51f2),_0x559d0d(0x2c10),_0x559d0d(0x4e2c),_0x559d0d(0x254e)],_0x2da30b=[_0x559d0d(0x14b3),_0x559d0d(0x138f),_0x559d0d(0x2cc),'console',_0x559d0d(0x18db),_0x559d0d(0x295),_0x559d0d(0x301d),_0x559d0d(0x38a4),_0x559d0d(0x196c),_0x559d0d(0x501b)],_0x3f3749=[]['concat'](_0xe37842,_0x157ee0,_0x1f6d69);function _0x4b4dab(_0x2a6f7b){const _0x4eaab9=_0x559d0d,_0x18e133=_0x2a6f7b[_0x4eaab9(0x41d2)],_0x19ee7e=_0x14c12a,_0x4cb2be='<>',_0x29373c=_0x4eaab9(0x2545),_0x3442dc={'begin':/<[A-Za-z0-9\\._:-]+/,'end':/\/[A-Za-z0-9\\._:-]+>|\/>/,'isTrulyOpeningTag':(_0x30be54,_0x3c07b1)=>{const _0x3d6e43=_0x4eaab9,_0x5cf2c1=_0x30be54[0x0]['length']+_0x30be54['index'],_0x3aa853=_0x30be54[_0x3d6e43(0x7b0)][_0x5cf2c1];if('<'===_0x3aa853||','===_0x3aa853)return void _0x3c07b1[_0x3d6e43(0xec5)]();let _0x272e84;'>'===_0x3aa853&&(((_0x1da66d,{after:_0x900e0f})=>{const _0x5003f2=_0x3d6e43,_0x5aa237='/},{'begin':_0x3442dc[_0x4eaab9(0x42fa)],'on:begin':_0x3442dc[_0x4eaab9(0x15ca)],'end':_0x3442dc[_0x4eaab9(0x2681)]}],'subLanguage':_0x4eaab9(0x2655),'contains':[{'begin':_0x3442dc[_0x4eaab9(0x42fa)],'end':_0x3442dc[_0x4eaab9(0x2681)],'skip':!0x0,'contains':[_0x4eaab9(0x4454)]}]}]},_0x32e624,{'beginKeywords':_0x4eaab9(0x2c27)},{'begin':_0x4eaab9(0x2a8f)+_0x2a6f7b[_0x4eaab9(0x206e)]+_0x4eaab9(0x3c93),'returnBegin':!0x0,'label':_0x4eaab9(0x2d89),'contains':[_0x1a8952,_0x2a6f7b[_0x4eaab9(0x46a1)](_0x2a6f7b[_0x4eaab9(0x2029)],{'begin':_0x19ee7e,'className':_0x4eaab9(0x20db)})]},{'match':/\.\.\./,'relevance':0x0},_0x44455d,{'match':'\x5c$'+_0x19ee7e,'relevance':0x0},{'match':[/\bconstructor(?=\s*\()/],'className':{0x1:'title.function'},'contains':[_0x1a8952]},_0x350d6e,{'relevance':0x0,'match':/\b[A-Z][A-Z_0-9]+\b/,'className':_0x4eaab9(0x41f7)},_0x410fd9,_0xb8f7ac,{'match':/\$[(.]/}]};}_0x4d97ec[_0x559d0d(0x474c)]=function(_0x5785a){const 
_0x4a2728=_0x559d0d,_0x30342e=_0x4b4dab(_0x5785a),_0x9a154=_0x14c12a,_0x1fdd77=[_0x4a2728(0x4684),'void',_0x4a2728(0x4a80),_0x4a2728(0x1e8d),_0x4a2728(0x2431),_0x4a2728(0x20c7),'never',_0x4a2728(0x239b),_0x4a2728(0x3179),_0x4a2728(0x40b5)],_0x22f67d={'beginKeywords':_0x4a2728(0x37f7),'end':/\{/,'excludeEnd':!0x0,'contains':[_0x30342e[_0x4a2728(0x474c)][_0x4a2728(0x1d78)]]},_0x515c25={'beginKeywords':_0x4a2728(0x321b),'end':/\{/,'excludeEnd':!0x0,'keywords':{'keyword':_0x4a2728(0x43e5),'built_in':_0x1fdd77},'contains':[_0x30342e[_0x4a2728(0x474c)]['CLASS_REFERENCE']]},_0x1c020d={'$pattern':_0x14c12a,'keyword':_0x26a31b[_0x4a2728(0x1d1d)]([_0x4a2728(0xcfc),_0x4a2728(0x37f7),_0x4a2728(0x321b),'public',_0x4a2728(0x4ef4),'protected','implements','declare','abstract','readonly',_0x4a2728(0x44d8),_0x4a2728(0x35a7)]),'literal':_0xc304b0,'built_in':_0x3f3749[_0x4a2728(0x1d1d)](_0x1fdd77),'variable.language':_0x2da30b},_0x501185={'className':'meta','begin':'@'+_0x9a154},_0x103b01=(_0x3796a8,_0x20c52b,_0x435902)=>{const _0x469261=_0x4a2728,_0x40de13=_0x3796a8[_0x469261(0x2b31)][_0x469261(0x59b)](_0x3afaa1=>_0x3afaa1['label']===_0x20c52b);if(-0x1===_0x40de13)throw new Error('can\x20not\x20find\x20mode\x20to\x20replace');_0x3796a8[_0x469261(0x2b31)][_0x469261(0x4986)](_0x40de13,0x1,_0x435902);};return Object[_0x4a2728(0x4e14)](_0x30342e['keywords'],_0x1c020d),_0x30342e[_0x4a2728(0x474c)][_0x4a2728(0x4e3c)][_0x4a2728(0x1715)](_0x501185),_0x30342e[_0x4a2728(0x2b31)]=_0x30342e[_0x4a2728(0x2b31)]['concat']([_0x501185,_0x22f67d,_0x515c25]),_0x103b01(_0x30342e,_0x4a2728(0x3d44),_0x5785a[_0x4a2728(0x307a)]()),_0x103b01(_0x30342e,_0x4a2728(0x4642),{'className':_0x4a2728(0x5153),'relevance':0xa,'begin':/^\s*['"]use strict['"]/}),_0x30342e[_0x4a2728(0x2b31)]['find'](_0xdca16c=>'func.def'===_0xdca16c['label'])[_0x4a2728(0x48b6)]=0x0,Object[_0x4a2728(0x4e14)](_0x30342e,{'name':'TypeScript','aliases':['ts','tsx',_0x4a2728(0x2d6f),_0x4a2728(0x4c17)]}),_0x30342e;};},0x24f5:_0x3ffb54=>{const 
_0x5e12ce=a0_0x11e7;_0x3ffb54[_0x5e12ce(0x474c)]=function(_0x1bbc10){const _0x1e4471=_0x5e12ce;return{'name':'Vala','keywords':{'keyword':_0x1e4471(0x11ea),'built_in':_0x1e4471(0x351b),'literal':'false\x20true\x20null'},'contains':[{'className':_0x1e4471(0x1390),'beginKeywords':_0x1e4471(0x3047),'end':/\{/,'excludeEnd':!0x0,'illegal':_0x1e4471(0x25d0),'contains':[_0x1bbc10[_0x1e4471(0xb0e)]]},_0x1bbc10['C_LINE_COMMENT_MODE'],_0x1bbc10['C_BLOCK_COMMENT_MODE'],{'className':_0x1e4471(0x2431),'begin':_0x1e4471(0xb00),'end':'\x22\x22\x22','relevance':0x5},_0x1bbc10[_0x1e4471(0xa4c)],_0x1bbc10[_0x1e4471(0x291b)],_0x1bbc10[_0x1e4471(0xd12)],{'className':_0x1e4471(0x5153),'begin':'^#','end':'$'}]};};},0x22e0:_0x3b80bf=>{_0x3b80bf['exports']=function(_0x40d28f){const _0x2d3771=a0_0x11e7,_0xb9082f=_0x40d28f[_0x2d3771(0x41d2)],_0x3825b7=/\d{1,2}\/\d{1,2}\/\d{4}/,_0x156941=/\d{4}-\d{1,2}-\d{1,2}/,_0x3c891b=/(\d|1[012])(:\d+){0,2} *(AM|PM)/,_0x3ce8ce=/\d{1,2}(:\d{1,2}){1,2}/,_0x1c37d5={'className':_0x2d3771(0x2706),'variants':[{'begin':_0xb9082f[_0x2d3771(0x1d1d)](/# */,_0xb9082f[_0x2d3771(0x583)](_0x156941,_0x3825b7),/ *#/)},{'begin':_0xb9082f[_0x2d3771(0x1d1d)](/# */,_0x3ce8ce,/ *#/)},{'begin':_0xb9082f[_0x2d3771(0x1d1d)](/# */,_0x3c891b,/ *#/)},{'begin':_0xb9082f['concat'](/# */,_0xb9082f[_0x2d3771(0x583)](_0x156941,_0x3825b7),/ +/,_0xb9082f[_0x2d3771(0x583)](_0x3c891b,_0x3ce8ce),/ *#/)}]},_0x40fe28=_0x40d28f['COMMENT'](/'''/,/$/,{'contains':[{'className':'doctag','begin':/<\/?/,'end':/>/}]}),_0x150d7b=_0x40d28f['COMMENT'](null,/$/,{'variants':[{'begin':/'/},{'begin':/([\t 
]|^)REM(?=\s)/}]});return{'name':_0x2d3771(0xc6d),'aliases':['vb'],'case_insensitive':!0x0,'classNameAliases':{'label':_0x2d3771(0x239b)},'keywords':{'keyword':'addhandler\x20alias\x20aggregate\x20ansi\x20as\x20async\x20assembly\x20auto\x20binary\x20by\x20byref\x20byval\x20call\x20case\x20catch\x20class\x20compare\x20const\x20continue\x20custom\x20declare\x20default\x20delegate\x20dim\x20distinct\x20do\x20each\x20equals\x20else\x20elseif\x20end\x20enum\x20erase\x20error\x20event\x20exit\x20explicit\x20finally\x20for\x20friend\x20from\x20function\x20get\x20global\x20goto\x20group\x20handles\x20if\x20implements\x20imports\x20in\x20inherits\x20interface\x20into\x20iterator\x20join\x20key\x20let\x20lib\x20loop\x20me\x20mid\x20module\x20mustinherit\x20mustoverride\x20mybase\x20myclass\x20namespace\x20narrowing\x20new\x20next\x20notinheritable\x20notoverridable\x20of\x20off\x20on\x20operator\x20option\x20optional\x20order\x20overloads\x20overridable\x20overrides\x20paramarray\x20partial\x20preserve\x20private\x20property\x20protected\x20public\x20raiseevent\x20readonly\x20redim\x20removehandler\x20resume\x20return\x20select\x20set\x20shadows\x20shared\x20skip\x20static\x20step\x20stop\x20structure\x20strict\x20sub\x20synclock\x20take\x20text\x20then\x20throw\x20to\x20try\x20unicode\x20until\x20using\x20when\x20where\x20while\x20widening\x20with\x20withevents\x20writeonly\x20yield','built_in':_0x2d3771(0x524e),'type':_0x2d3771(0x2389),'literal':_0x2d3771(0x47c6)},'illegal':_0x2d3771(0x413d),'contains':[{'className':_0x2d3771(0x2431),'begin':/"(""|[^/n])"C\b/},{'className':_0x2d3771(0x2431),'begin':/"/,'end':/"/,'illegal':/\n/,'contains':[{'begin':/""/}]},_0x1c37d5,{'className':_0x2d3771(0x4a80),'relevance':0x0,'variants':[{'begin':/\b\d[\d_]*((\.[\d_]+(E[+-]?[\d_]+)?)|(E[+-]?[\d_]+))[RFD@!#]?/},{'begin':/\b\d[\d_]*((U?[SIL])|[%&])?/},{'begin':/&H[\dA-F_]+((U?[SIL])|[%&])?/},{'begin':/&O[0-7_]+((U?[SIL])|[%&])?/},{'begin':/&B[01_]+((U?[SIL])|[%&])?/}]},{'className':'label',
'begin':/^\w+:/},_0x40fe28,_0x150d7b,{'className':'meta','begin':/[\t ]*#(const|disable|else|elseif|enable|end|externalsource|if|region)\b/,'end':/$/,'keywords':{'keyword':_0x2d3771(0x1f69)},'contains':[_0x150d7b]}]};};},0xa4:_0x1491b5=>{const _0x396aec=a0_0x11e7;_0x1491b5[_0x396aec(0x474c)]=function(_0x147e80){const _0xf4c1c1=_0x396aec;return{'name':_0xf4c1c1(0x23db),'subLanguage':_0xf4c1c1(0x2655),'contains':[{'begin':'<%','end':'%>','subLanguage':_0xf4c1c1(0x2cd9)}]};};},0x4b0:_0x5cd572=>{const _0x392477=a0_0x11e7;_0x5cd572[_0x392477(0x474c)]=function(_0x5cec0c){const _0x382540=_0x392477,_0x9e16f5=_0x5cec0c[_0x382540(0x41d2)],_0x395579=[_0x382540(0x4ce5),_0x382540(0x3ed3),_0x382540(0x26e),_0x382540(0x1a52),_0x382540(0x3adf),_0x382540(0x2e85),_0x382540(0x171e),_0x382540(0x3068),'getref',_0x382540(0x2431),_0x382540(0x19e2),_0x382540(0x3e23),_0x382540(0x4fcb),_0x382540(0x2ea7),_0x382540(0xae4),_0x382540(0x3a4),_0x382540(0x149b),_0x382540(0x43cf),_0x382540(0x521f),'round',_0x382540(0x50f4),_0x382540(0x40c2),_0x382540(0x3348),_0x382540(0x2547),_0x382540(0x274e),_0x382540(0x4848),_0x382540(0x3a03),_0x382540(0xbe0),_0x382540(0x3a5),_0x382540(0x191a),_0x382540(0x39b2),_0x382540(0x4de4),_0x382540(0x637),_0x382540(0x2d8),'maths',_0x382540(0x1f53),_0x382540(0x359f),_0x382540(0x1525),'isobject',_0x382540(0x1465),_0x382540(0xa1c),_0x382540(0x21f6),_0x382540(0x3154),_0x382540(0x4b30),'instr','datediff',_0x382540(0x38bc),'replace','isnull',_0x382540(0x4d50),_0x382540(0x454),'array','snumeric','log',_0x382540(0x35ca),'hex','chr',_0x382540(0x1b09),'msgbox',_0x382540(0x49dd),_0x382540(0x2e01),_0x382540(0x3935),'cdate',_0x382540(0x28e0),_0x382540(0x3ded),_0x382540(0x3541),'hour','oct','typename',_0x382540(0x1b23),_0x382540(0x255a),_0x382540(0xc16),_0x382540(0x43d9),_0x382540(0x439b),_0x382540(0x38de),_0x382540(0x159c),_0x382540(0x333d),_0x382540(0x1117),'cint',_0x382540(0x2a37),_0x382540(0x4643),_0x382540(0x1070),_0x382540(0x4655),'time',_0x382540(0x39e4),_0x382540(0x2ff9),_0x38254
0(0x40d9),'formatpercent',_0x382540(0x3a1b),_0x382540(0x4c32),'left',_0x382540(0xbe5),_0x382540(0x1f5),_0x382540(0x4d1d),_0x382540(0x40a6),_0x382540(0x3803)];return{'name':_0x382540(0x18e3),'aliases':['vbs'],'case_insensitive':!0x0,'keywords':{'keyword':[_0x382540(0x236b),_0x382540(0x1390),_0x382540(0xc01),'dim','do','loop',_0x382540(0x3d84),_0x382540(0x2162),_0x382540(0x200f),_0x382540(0x4c7b),_0x382540(0x3c19),_0x382540(0x2f9e),_0x382540(0x3dc6),'function','if',_0x382540(0xaf5),_0x382540(0x3d4),'on',_0x382540(0x3d85),_0x382540(0x1081),_0x382540(0x2121),_0x382540(0x4321),_0x382540(0x4ef4),_0x382540(0x227a),_0x382540(0x1e61),_0x382540(0xf9e),_0x382540(0x39ce),'randomize',_0x382540(0x1e40),_0x382540(0x22e5),_0x382540(0x3fc9),_0x382540(0x2e7e),_0x382540(0x1fa),_0x382540(0xaee),_0x382540(0x217c),_0x382540(0x552),_0x382540(0x905),_0x382540(0x2aa7),'end','to',_0x382540(0x3790),'is','or',_0x382540(0x32a6),_0x382540(0x2663),_0x382540(0xc1a),'class_initialize',_0x382540(0x38e9),_0x382540(0x3d23),'preserve','in','me',_0x382540(0xb13),_0x382540(0x48e6),_0x382540(0xf8e),_0x382540(0x2d85),_0x382540(0x139c)],'built_in':[_0x382540(0x395b),'response','request',_0x382540(0x1bfc),_0x382540(0x3588),_0x382540(0x13c8),_0x382540(0x4b53)],'literal':[_0x382540(0x4022),'false','null',_0x382540(0x25d8),_0x382540(0x168c)]},'illegal':'//','contains':[{'begin':_0x9e16f5[_0x382540(0x1d1d)](_0x9e16f5['either'](..._0x395579),_0x382540(0x7ef)),'relevance':0x0,'keywords':{'built_in':_0x395579}},_0x5cec0c['inherit'](_0x5cec0c['QUOTE_STRING_MODE'],{'contains':[{'begin':'\x22\x22'}]}),_0x5cec0c[_0x382540(0x4e4f)](/'/,/$/,{'relevance':0x0}),_0x5cec0c[_0x382540(0xd12)]]};};},0x2039:_0x5a4c35=>{const _0x46c79a=a0_0x11e7;_0x5a4c35[_0x46c79a(0x474c)]=function(_0x22ab62){const 
_0xc82ea7=_0x46c79a,_0x478bf3=_0x22ab62[_0xc82ea7(0x41d2)],_0x30f4fc=[_0xc82ea7(0x21a8),_0xc82ea7(0x3565),_0xc82ea7(0x4365),_0xc82ea7(0x4b01),_0xc82ea7(0x4e05),_0xc82ea7(0x1bb1),_0xc82ea7(0xf37),_0xc82ea7(0x1c07),_0xc82ea7(0x2630),'delay_mode_zero',_0xc82ea7(0x3d4),_0xc82ea7(0x3e5b),_0xc82ea7(0x1de5),_0xc82ea7(0x31ae),_0xc82ea7(0x39b),_0xc82ea7(0xe33),_0xc82ea7(0x4536),'include',_0xc82ea7(0x3572),_0xc82ea7(0x4646),_0xc82ea7(0x21fc),_0xc82ea7(0x25bd),_0xc82ea7(0x3b9d),_0xc82ea7(0x1129),_0xc82ea7(0x1f88),'undefineall'];return{'name':_0xc82ea7(0xe79),'aliases':['v','sv',_0xc82ea7(0x2a31)],'case_insensitive':!0x1,'keywords':{'$pattern':/\$?[\w]+(\$[\w]+)*/,'keyword':['accept_on',_0xc82ea7(0xa94),_0xc82ea7(0x4505),_0xc82ea7(0x4a68),_0xc82ea7(0x5056),'always_latch',_0xc82ea7(0x2663),_0xc82ea7(0x4fd4),_0xc82ea7(0x4e14),_0xc82ea7(0x13a5),_0xc82ea7(0x141c),_0xc82ea7(0x5097),_0xc82ea7(0x42fa),_0xc82ea7(0x39e8),_0xc82ea7(0xb3a),'binsof',_0xc82ea7(0x2523),_0xc82ea7(0x4e10),_0xc82ea7(0x47ff),_0xc82ea7(0x3121),_0xc82ea7(0xa5b),_0xc82ea7(0x961),_0xc82ea7(0x2e7e),_0xc82ea7(0x3e96),_0xc82ea7(0x3a68),'cell','chandle','checker',_0xc82ea7(0x1390),_0xc82ea7(0x306b),_0xc82ea7(0x510),'config','const',_0xc82ea7(0x29bd),_0xc82ea7(0x458c),_0xc82ea7(0x16d9),_0xc82ea7(0x159a),'covergroup',_0xc82ea7(0x4422),_0xc82ea7(0x1852),_0xc82ea7(0x3aa1),_0xc82ea7(0x3d23),_0xc82ea7(0x41cd),'design',_0xc82ea7(0x4124),_0xc82ea7(0x3fef),'do',_0xc82ea7(0x2c51),_0xc82ea7(0x3d4),_0xc82ea7(0x2681),_0xc82ea7(0x4926),_0xc82ea7(0x48f8),_0xc82ea7(0x9b7),_0xc82ea7(0x3b21),_0xc82ea7(0x2648),_0xc82ea7(0x24c1),_0xc82ea7(0xf6a),_0xc82ea7(0x448f),_0xc82ea7(0x184a),_0xc82ea7(0x35f1),'endpackage',_0xc82ea7(0xfe3),_0xc82ea7(0x507),_0xc82ea7(0x136d),_0xc82ea7(0x4ff3),_0xc82ea7(0x2334),_0xc82ea7(0x4c49),_0xc82ea7(0x3b82),_0xc82ea7(0x44d8),'event',_0xc82ea7(0x33e7),'expect',_0xc82ea7(0x2bb9),_0xc82ea7(0x4428),_0xc82ea7(0x2068),_0xc82ea7(0x27e4),'first_match','for',_0xc82ea7(0x455c),_0xc82ea7(0x4185),'forever',_0xc82ea7(0x12f8),'
forkjoin','function',_0xc82ea7(0x23aa),_0xc82ea7(0x221d),_0xc82ea7(0x501b),_0xc82ea7(0x4d2d),_0xc82ea7(0xdf8),'if',_0xc82ea7(0x3d4d),'ifnone',_0xc82ea7(0x4379),_0xc82ea7(0x10e3),_0xc82ea7(0x6c3),'implies',_0xc82ea7(0x331),'incdir','include',_0xc82ea7(0x9b1),'inout',_0xc82ea7(0x7b0),_0xc82ea7(0x4f28),_0xc82ea7(0x4279),_0xc82ea7(0xc16),_0xc82ea7(0x410f),_0xc82ea7(0x3b38),'interface',_0xc82ea7(0x13b2),'join',_0xc82ea7(0x423d),_0xc82ea7(0x2e25),'large',_0xc82ea7(0x1e61),_0xc82ea7(0x2bb3),_0xc82ea7(0x2e60),_0xc82ea7(0x16a7),_0xc82ea7(0x2123),'logic','longint',_0xc82ea7(0x5149),'matches',_0xc82ea7(0x29be),'modport',_0xc82ea7(0x196c),_0xc82ea7(0x2b73),_0xc82ea7(0x50cc),'nettype','new',_0xc82ea7(0x1498),_0xc82ea7(0x2ce0),'nor',_0xc82ea7(0x51a0),_0xc82ea7(0xc1a),_0xc82ea7(0x2936),'notif1','or','output','package',_0xc82ea7(0x32f4),_0xc82ea7(0x3740),'pmos',_0xc82ea7(0x3b17),_0xc82ea7(0x303e),_0xc82ea7(0x3707),_0xc82ea7(0x336a),_0xc82ea7(0x227a),_0xc82ea7(0xc14),_0xc82ea7(0x277b),_0xc82ea7(0x398),'pulldown',_0xc82ea7(0x13e6),_0xc82ea7(0x1dd2),_0xc82ea7(0x1801),_0xc82ea7(0x44ca),_0xc82ea7(0x4fa0),_0xc82ea7(0x42a3),_0xc82ea7(0x3d16),_0xc82ea7(0x2d9),_0xc82ea7(0x3666),_0xc82ea7(0x47f6),_0xc82ea7(0x3bcb),_0xc82ea7(0x21c3),'reg','reject_on',_0xc82ea7(0x4a55),_0xc82ea7(0x11d0),_0xc82ea7(0x5027),_0xc82ea7(0xdfd),_0xc82ea7(0x2403),_0xc82ea7(0xa41),_0xc82ea7(0x49bc),_0xc82ea7(0x34ee),_0xc82ea7(0x19be),'s_always',_0xc82ea7(0x1e71),'s_nexttime','s_until',_0xc82ea7(0x1376),'scalared',_0xc82ea7(0x3c2),_0xc82ea7(0x1b2a),_0xc82ea7(0x46b2),_0xc82ea7(0x4e46),'signed',_0xc82ea7(0x4830),'soft',_0xc82ea7(0x33fe),_0xc82ea7(0x1abc),_0xc82ea7(0x10e7),_0xc82ea7(0x2c7c),_0xc82ea7(0x2431),'strong',_0xc82ea7(0xe4c),'strong1','struct',_0xc82ea7(0x2cc),'supply0','supply1','sync_accept_on',_0xc82ea7(0x33f5),_0xc82ea7(0x1639),_0xc82ea7(0x88a),_0xc82ea7(0x3cef),_0xc82ea7(0x138f),_0xc82ea7(0x3b14),_0xc82ea7(0x51b6),'timeprecision',_0xc82ea7(0x2098),_0xc82ea7(0xe87),_0xc82ea7(0x21a4),'tranif1',_0xc82ea7(0x358f)
,_0xc82ea7(0x401b),'tri1',_0xc82ea7(0x1d37),_0xc82ea7(0xdc4),'trireg',_0xc82ea7(0xcfc),_0xc82ea7(0xc83),_0xc82ea7(0x29d),_0xc82ea7(0x390b),_0xc82ea7(0x287c),'unsigned',_0xc82ea7(0x30d6),_0xc82ea7(0x3986),_0xc82ea7(0xbdc),_0xc82ea7(0x84a),_0xc82ea7(0x22b8),'var',_0xc82ea7(0x5a1),_0xc82ea7(0x38b8),'void',_0xc82ea7(0x1c14),_0xc82ea7(0x4497),_0xc82ea7(0x23b),_0xc82ea7(0x7ae),_0xc82ea7(0x1efb),_0xc82ea7(0xf00),'while','wildcard',_0xc82ea7(0x1c85),_0xc82ea7(0x2aa7),_0xc82ea7(0x5c5),_0xc82ea7(0x2bf6),_0xc82ea7(0x2b1b),_0xc82ea7(0x32a6)],'literal':[_0xc82ea7(0x1582)],'built_in':[_0xc82ea7(0x3132),_0xc82ea7(0x2142),'$exit','$fatal',_0xc82ea7(0x41b7),_0xc82ea7(0x19c2),_0xc82ea7(0x1bb8),_0xc82ea7(0x153f),_0xc82ea7(0x3f90),_0xc82ea7(0xb01),_0xc82ea7(0x1a43),_0xc82ea7(0x2dbe),_0xc82ea7(0x43eb),_0xc82ea7(0x354a),'$cast',_0xc82ea7(0x2414),_0xc82ea7(0x2240),'$timeformat','$realtobits',_0xc82ea7(0x38f4),'$rtoi',_0xc82ea7(0x4a26),_0xc82ea7(0x1238),_0xc82ea7(0x40c5),'$assertpasson','$assertfailon','$assertnonvacuouson','$assertoff',_0xc82ea7(0x1d48),_0xc82ea7(0x101b),_0xc82ea7(0x28b7),_0xc82ea7(0x176b),_0xc82ea7(0x210f),'$sampled',_0xc82ea7(0x2b20),'$changed',_0xc82ea7(0x1830),_0xc82ea7(0x169a),_0xc82ea7(0x416),_0xc82ea7(0x1a75),_0xc82ea7(0x43fe),_0xc82ea7(0x16c5),_0xc82ea7(0x1e0a),_0xc82ea7(0x2b1),_0xc82ea7(0xe92),_0xc82ea7(0x4aa9),_0xc82ea7(0x1f1c),_0xc82ea7(0x4984),_0xc82ea7(0x2d50),_0xc82ea7(0x2c9b),_0xc82ea7(0x314a),_0xc82ea7(0x4ba3),_0xc82ea7(0x43d5),_0xc82ea7(0x2c25),_0xc82ea7(0x3690),_0xc82ea7(0x1e1e),_0xc82ea7(0x3aa9),_0xc82ea7(0xb28),_0xc82ea7(0x20be),_0xc82ea7(0x250c),'$left','$low','$increment',_0xc82ea7(0x2419),_0xc82ea7(0x1370),'$log10','$exp',_0xc82ea7(0x40a7),_0xc82ea7(0x360e),_0xc82ea7(0x8d5),_0xc82ea7(0xc19),_0xc82ea7(0x3ce3),_0xc82ea7(0x4ce0),_0xc82ea7(0x4cc8),_0xc82ea7(0x1f19),_0xc82ea7(0x434f),_0xc82ea7(0x452a),_0xc82ea7(0x202c),_0xc82ea7(0x19c2),_0xc82ea7(0x1a91),_0xc82ea7(0x341a),_0xc82ea7(0x5d5),_0xc82ea7(0x34e8),'$asin',_0xc82ea7(0x2de6),_0xc82ea7(0x35b8),_0xc
82ea7(0x1767),'$hypot',_0xc82ea7(0x2354),_0xc82ea7(0x2b75),_0xc82ea7(0xf27),'$asinh',_0xc82ea7(0x38dd),_0xc82ea7(0x24f),'$countones',_0xc82ea7(0x481b),_0xc82ea7(0x41b7),'$info',_0xc82ea7(0x1861),_0xc82ea7(0x4af),_0xc82ea7(0x3c5c),_0xc82ea7(0x287),'$dist_normal',_0xc82ea7(0x2da6),_0xc82ea7(0x4a5d),_0xc82ea7(0x45b6),_0xc82ea7(0x46e5),_0xc82ea7(0x1c27),'$q_exam',_0xc82ea7(0x1514),_0xc82ea7(0x3294),_0xc82ea7(0x1f38),_0xc82ea7(0x3b45),_0xc82ea7(0x4d39),_0xc82ea7(0x1209),_0xc82ea7(0x1035),_0xc82ea7(0x3ea8),_0xc82ea7(0x2883),'$q_full','$psprintf',_0xc82ea7(0x1352),'$async$nand$plane',_0xc82ea7(0x544),_0xc82ea7(0x2859),_0xc82ea7(0x4a75),_0xc82ea7(0x1847),'$sync$or$plane',_0xc82ea7(0x4532),_0xc82ea7(0x2b29),'$display',_0xc82ea7(0x40f2),_0xc82ea7(0x50fd),_0xc82ea7(0xec4),_0xc82ea7(0x16c7),_0xc82ea7(0x1dbf),_0xc82ea7(0x3bd2),_0xc82ea7(0xc4c),_0xc82ea7(0x3d89),_0xc82ea7(0x2215),_0xc82ea7(0x4afb),_0xc82ea7(0x47bd),'$value$plusargs',_0xc82ea7(0xa2c),_0xc82ea7(0x12e8),_0xc82ea7(0x80b),'$dumpports',_0xc82ea7(0x4bee),'$dumpportslimit','$writeb',_0xc82ea7(0x12d0),_0xc82ea7(0x258f),'$monitor',_0xc82ea7(0x1ffc),_0xc82ea7(0x223d),_0xc82ea7(0x2f2e),_0xc82ea7(0x3e3a),_0xc82ea7(0xc3c),_0xc82ea7(0xa43),_0xc82ea7(0x120a),'$dumpflush','$dumpportsoff',_0xc82ea7(0x4daf),_0xc82ea7(0xe67),_0xc82ea7(0xd4a),_0xc82ea7(0x1da3),_0xc82ea7(0x4371),_0xc82ea7(0x15d3),'$fdisplayo',_0xc82ea7(0x3749),_0xc82ea7(0x3393),_0xc82ea7(0xb41),_0xc82ea7(0x382c),_0xc82ea7(0xab1),'$swriteb','$swriteh',_0xc82ea7(0x901),_0xc82ea7(0x25ee),_0xc82ea7(0x334f),_0xc82ea7(0x3ef5),_0xc82ea7(0x38b),'$feof',_0xc82ea7(0x4113),_0xc82ea7(0x1c88),_0xc82ea7(0x2128),'$fwriteh','$fwriteo',_0xc82ea7(0x130a),_0xc82ea7(0xac2),'$fmonitorh',_0xc82ea7(0x29f4),_0xc82ea7(0x3b99),_0xc82ea7(0x46d5),_0xc82ea7(0x435b),_0xc82ea7(0x497f),'$fgets','$sscanf','$rewind',_0xc82ea7(0x11d7),_0xc82ea7(0x2f5d)]},'contains':[_0x22ab62[_0xc82ea7(0x23fe)],_0x22ab62[_0xc82ea7(0x2ae2)],_0x22ab62[_0xc82ea7(0x291b)],{'scope':_0xc82ea7(0x4a80),'contains':[_0x22ab62[_0
xc82ea7(0x4a76)]],'variants':[{'begin':/\b((\d+'([bhodBHOD]))[0-9xzXZa-fA-F_]+)/},{'begin':/\B(('([bhodBHOD]))[0-9xzXZa-fA-F_]+)/},{'begin':/\b[0-9][0-9_]*/,'relevance':0x0}]},{'scope':_0xc82ea7(0x3362),'variants':[{'begin':_0xc82ea7(0x23e4)},{'begin':_0xc82ea7(0x24c7),'relevance':0x0}]},{'scope':_0xc82ea7(0x41f7),'match':_0x478bf3[_0xc82ea7(0x1d1d)](/`/,_0x478bf3['either']('__FILE__',_0xc82ea7(0x3088)))},{'scope':_0xc82ea7(0x5153),'begin':_0x478bf3[_0xc82ea7(0x1d1d)](/`/,_0x478bf3[_0xc82ea7(0x583)](..._0x30f4fc)),'end':/$|\/\/|\/\*/,'returnEnd':!0x0,'keywords':_0x30f4fc}]};};},0x2211:_0x41d6e7=>{const _0x171247=a0_0x11e7;_0x41d6e7[_0x171247(0x474c)]=function(_0x2d821e){const _0x444df5=_0x171247,_0x29f241='\x5cd(_|\x5cd)*',_0x55641f=_0x444df5(0x709)+_0x29f241,_0x3eb386=_0x444df5(0x4cd5)+(_0x29f241+_0x444df5(0x134d)+_0x55641f+')?')+'|'+(_0x29f241+'(\x5c.'+_0x29f241+_0x444df5(0x225e)+_0x55641f+')?')+')';return{'name':_0x444df5(0xf73),'case_insensitive':!0x0,'keywords':{'keyword':[_0x444df5(0xbe0),'access',_0x444df5(0x1349),_0x444df5(0xa94),_0x444df5(0xc36),_0x444df5(0x2663),_0x444df5(0x1067),'array','assert',_0x444df5(0x13a5),_0x444df5(0x245a),_0x444df5(0x263f),'begin',_0x444df5(0x1f2e),_0x444df5(0x4f1a),_0x444df5(0x2d10),_0x444df5(0x28f0),_0x444df5(0x2e7e),_0x444df5(0x2bf5),_0x444df5(0x4825),_0x444df5(0x30ea),_0x444df5(0x458c),'cover',_0x444df5(0x12a9),_0x444df5(0x4003),_0x444df5(0x3d23),_0x444df5(0x3d4),_0x444df5(0x3e5b),_0x444df5(0x2681),_0x444df5(0x3816),'exit',_0x444df5(0x863),_0x444df5(0x184e),_0x444df5(0x3c19),_0x444df5(0x455c),_0x444df5(0x14b2),_0x444df5(0x644),'generic',_0x444df5(0x4e5b),'guarded','if',_0x444df5(0x31a),'in',_0x444df5(0x3791),_0x444df5(0x99f),'is',_0x444df5(0x3b71),'library','linkage',_0x444df5(0x2706),_0x444df5(0x110b),_0x444df5(0x4833),'mod','nand','new',_0x444df5(0x3dc6),_0x444df5(0x2fac),_0x444df5(0xc1a),_0x444df5(0x1582),'of','on','open','or','others',_0x444df5(0x3ab5),_0x444df5(0x4bd0),_0x444df5(0x3740),'port','postponed',_0x444df5(0x128
5),'process','property',_0x444df5(0xc14),_0x444df5(0x44ca),'range',_0x444df5(0x15bd),_0x444df5(0x49b4),_0x444df5(0x26fe),_0x444df5(0x4a55),_0x444df5(0x22e5),'report',_0x444df5(0x5027),_0x444df5(0x3c9d),_0x444df5(0xdfd),_0x444df5(0x2cb7),'ror','select',_0x444df5(0x3c2),_0x444df5(0x1cb3),_0x444df5(0x2b1c),_0x444df5(0x514e),_0x444df5(0x341),_0x444df5(0x35f0),'sra',_0x444df5(0x3808),'strong','subtype',_0x444df5(0xaf5),'to',_0x444df5(0x4d75),_0x444df5(0xcfc),_0x444df5(0x1b64),'units',_0x444df5(0x30d6),'use',_0x444df5(0x3362),_0x444df5(0x1961),_0x444df5(0x1864),_0x444df5(0x3397),'vunit','wait',_0x444df5(0x191b),_0x444df5(0x552),'with',_0x444df5(0x2b1b),_0x444df5(0x32a6)],'built_in':[_0x444df5(0x1e8d),'bit',_0x444df5(0x4b35),'integer',_0x444df5(0x51b6),_0x444df5(0x982),_0x444df5(0x3c37),_0x444df5(0x25b),'string','bit_vector',_0x444df5(0x4226),'file_open_status','std_logic','std_logic_vector',_0x444df5(0x2edb),_0x444df5(0x3908),_0x444df5(0x2f2),'integer_vector',_0x444df5(0x3fca),_0x444df5(0x2600),_0x444df5(0xac5),_0x444df5(0x3a0a),_0x444df5(0x30ad),_0x444df5(0x3b42),_0x444df5(0x34af),'time_vector'],'literal':[_0x444df5(0x3984),_0x444df5(0x4022),_0x444df5(0x318c),_0x444df5(0x4020),_0x444df5(0x3d85),'failure',_0x444df5(0x3572),_0x444df5(0x4006),_0x444df5(0xd11),_0x444df5(0x17d2)]},'illegal':/\{/,'contains':[_0x2d821e[_0x444df5(0x23fe)],_0x2d821e['COMMENT']('--','$'),_0x2d821e[_0x444df5(0x291b)],{'className':_0x444df5(0x4a80),'begin':_0x3eb386,'relevance':0x0},{'className':'string','begin':_0x444df5(0x2e9b),'contains':[_0x2d821e[_0x444df5(0x4a76)]]},{'className':'symbol','begin':'\x27[A-Za-z](_?[A-Za-z0-9])*','contains':[_0x2d821e[_0x444df5(0x4a76)]]}]};};},0x22cb:_0x405a80=>{const _0x41d94c=a0_0x11e7;_0x405a80[_0x41d94c(0x474c)]=function(_0x3e04f6){const 
_0x35ca5b=_0x41d94c;return{'name':_0x35ca5b(0xe59),'keywords':{'$pattern':/[!#@\w]+/,'keyword':_0x35ca5b(0x29b0),'built_in':_0x35ca5b(0x42fd)},'illegal':/;/,'contains':[_0x3e04f6[_0x35ca5b(0x30be)],{'className':_0x35ca5b(0x2431),'begin':'\x27','end':'\x27','illegal':'\x5cn'},{'className':_0x35ca5b(0x2431),'begin':/"(\\"|\n\\|[^"\n])*"/},_0x3e04f6['COMMENT']('\x22','$'),{'className':_0x35ca5b(0x3362),'begin':/[bwtglsav]:[\w\d_]+/},{'begin':[/\b(?:function|function!)/,/\s+/,_0x3e04f6[_0x35ca5b(0xacc)]],'className':{0x1:'keyword',0x3:_0x35ca5b(0x4685)},'end':'$','relevance':0x0,'contains':[{'className':_0x35ca5b(0xddd),'begin':'\x5c(','end':'\x5c)'}]},{'className':_0x35ca5b(0x239b),'begin':/<[\w-]+>/}]};};},0x2487:_0x3301aa=>{const _0x5f644a=a0_0x11e7;_0x3301aa[_0x5f644a(0x474c)]=function(_0x2ece2e){const _0x5226cf=_0x5f644a;_0x2ece2e[_0x5226cf(0x41d2)];const _0x335318=_0x2ece2e[_0x5226cf(0x4e4f)](/\(;/,/;\)/);return _0x335318[_0x5226cf(0x2b31)][_0x5226cf(0x1715)](_0x5226cf(0x4454)),{'name':_0x5226cf(0x1d53),'keywords':{'$pattern':/[\w.]+/,'keyword':[_0x5226cf(0x819),_0x5226cf(0x1f2e),'br',_0x5226cf(0x1493),_0x5226cf(0x3270),_0x5226cf(0x236b),_0x5226cf(0x272c),'data',_0x5226cf(0x41fc),_0x5226cf(0x391e),_0x5226cf(0x3d4),'end',_0x5226cf(0x2bb9),'func','global.get',_0x5226cf(0x4d48),_0x5226cf(0x446f),_0x5226cf(0x1731),_0x5226cf(0x4b18),_0x5226cf(0x4205),_0x5226cf(0x99b),_0x5226cf(0x501b),'if',_0x5226cf(0x331),_0x5226cf(0x16a7),_0x5226cf(0x110b),_0x5226cf(0xb0a),_0x5226cf(0x4c24),_0x5226cf(0x33c8),_0x5226cf(0x196c),_0x5226cf(0x3d56),_0x5226cf(0x29d8),_0x5226cf(0xf16),_0x5226cf(0x2c08),_0x5226cf(0xa34),_0x5226cf(0xdfd),_0x5226cf(0x3fc9),_0x5226cf(0x43d0),'set_local',_0x5226cf(0x4cc4),_0x5226cf(0x1639),_0x5226cf(0x21f7),_0x5226cf(0xaf5),_0x5226cf(0xcfc),'unreachable']},'contains':[_0x2ece2e['COMMENT'](/;;/,/$/),_0x335318,{'match':[/(?:offset|align)/,/\s*/,/=/],'className':{0x1:'keyword',0x3:_0x5226cf(0x1182)}},{'className':'variable','begin':/\$[\w_]+/},{'match':/(\((?!;)|\)
)+/,'className':'punctuation','relevance':0x0},{'begin':[/(?:func|call|call_indirect)/,/\s+/,/\$[^\s)]+/],'className':{0x1:'keyword',0x3:_0x5226cf(0x20db)}},_0x2ece2e[_0x5226cf(0x291b)],{'match':/(i32|i64|f32|f64)(?!\.)/,'className':_0x5226cf(0xcfc)},{'className':'keyword','match':/\b(f32|f64|i32|i64)(?:\.(?:abs|add|and|ceil|clz|const|convert_[su]\/i(?:32|64)|copysign|ctz|demote\/f64|div(?:_[su])?|eqz?|extend_[su]\/i32|floor|ge(?:_[su])?|gt(?:_[su])?|le(?:_[su])?|load(?:(?:8|16|32)_[su])?|lt(?:_[su])?|max|min|mul|nearest|neg?|or|popcnt|promote\/f32|reinterpret\/[fi](?:32|64)|rem_[su]|rot[lr]|shl|shr_[su]|store(?:8|16|32)?|sqrt|sub|trunc(?:_[su]\/f(?:32|64))?|wrap\/i64|xor))\b/},{'className':_0x5226cf(0x4a80),'relevance':0x0,'match':/[+-]?\b(?:\d(?:_?\d)*(?:\.\d(?:_?\d)*)?(?:[eE][+-]?\d(?:_?\d)*)?|0x[\da-fA-F](?:_?[\da-fA-F])*(?:\.[\da-fA-F](?:_?[\da-fA-D])*)?(?:[pP][+-]?\d(?:_?\d)*)?)\b|\binf\b|\bnan(?::0x[\da-fA-F](?:_?[\da-fA-D])*)?\b/}]};};},0x13cd:_0x1ff14c=>{const _0xba2cfb=a0_0x11e7;_0x1ff14c[_0xba2cfb(0x474c)]=function(_0x41e5f3){const 
_0x3c5fb5=_0xba2cfb,_0x1e1779=_0x41e5f3['regex'],_0x521ede=/[a-zA-Z]\w*/,_0x5ebb21=['as',_0x3c5fb5(0x4e10),_0x3c5fb5(0x1390),'construct','continue',_0x3c5fb5(0x3d4),_0x3c5fb5(0x3c19),_0x3c5fb5(0x20bd),'if',_0x3c5fb5(0x331),'in','is',_0x3c5fb5(0xdfd),_0x3c5fb5(0x2c7c),'var',_0x3c5fb5(0x552)],_0x2aea5b=[_0x3c5fb5(0x4022),_0x3c5fb5(0x3984),_0x3c5fb5(0x1582)],_0x42b160=[_0x3c5fb5(0x138f),'super'],_0x4a6887=['-','~',/\*/,'%',/\.\.\./,/\.\./,/\+/,'<<','>>','>=','<=','<','>',/\^/,/!=/,/!/,/\bis\b/,'==','&&','&',/\|\|/,/\|/,/\?:/,'='],_0x28b41b={'relevance':0x0,'match':_0x1e1779[_0x3c5fb5(0x1d1d)](/\b(?!(if|while|for|else|super)\b)/,_0x521ede,/(?=\s*[({])/),'className':_0x3c5fb5(0x20db)},_0x4b1be4={'match':_0x1e1779[_0x3c5fb5(0x1d1d)](_0x1e1779[_0x3c5fb5(0x583)](_0x1e1779[_0x3c5fb5(0x1d1d)](/\b(?!(if|while|for|else|super)\b)/,_0x521ede),_0x1e1779['either'](..._0x4a6887)),/(?=\s*\([^)]+\)\s*\{)/),'className':_0x3c5fb5(0x20db),'starts':{'contains':[{'begin':/\(/,'end':/\)/,'contains':[{'relevance':0x0,'scope':_0x3c5fb5(0xddd),'match':_0x521ede}]}]}},_0x39be6a={'variants':[{'match':[/class\s+/,_0x521ede,/\s+is\s+/,_0x521ede]},{'match':[/class\s+/,_0x521ede]}],'scope':{0x2:_0x3c5fb5(0x19e4),0x4:'title.class.inherited'},'keywords':_0x5ebb21},_0x4bbffa={'relevance':0x0,'match':_0x1e1779[_0x3c5fb5(0x583)](..._0x4a6887),'className':_0x3c5fb5(0x1182)},_0x30963c={'className':_0x3c5fb5(0x227a),'begin':_0x1e1779[_0x3c5fb5(0x1d1d)](/\./,_0x1e1779['lookahead'](_0x521ede)),'end':_0x521ede,'excludeBegin':!0x0,'relevance':0x0},_0x1c4a9f={'relevance':0x0,'match':_0x1e1779['concat'](/\b_/,_0x521ede),'scope':_0x3c5fb5(0x3362)},_0x3fb7ff={'relevance':0x0,'match':/\b[A-Z]+[a-z]+([A-Z]+[a-z]+)*/,'scope':_0x3c5fb5(0x19e4),'keywords':{'_':[_0x3c5fb5(0x3563),_0x3c5fb5(0x3352),'Fiber','Fn',_0x3c5fb5(0x191f),_0x3c5fb5(0x4a59),_0x3c5fb5(0x2ed5),_0x3c5fb5(0x11ab),_0x3c5fb5(0x108b),'Range',_0x3c5fb5(0x4e7f),'String',_0x3c5fb5(0xa68)]}},_0x51bef1=_0x41e5f3[_0x3c5fb5(0xd12)],_0x59031f={'match':[_0x521ede,/
\s*/,/=/,/\s*/,/\(/,_0x521ede,/\)\s*\{/],'scope':{0x1:_0x3c5fb5(0x20db),0x3:'operator',0x6:_0x3c5fb5(0xddd)}},_0x3b5080=_0x41e5f3['COMMENT'](/\/\*\*/,/\*\//,{'contains':[{'match':/@[a-z]+/,'scope':_0x3c5fb5(0x4593)},_0x3c5fb5(0x4454)]}),_0x4eae43={'scope':'subst','begin':/%\(/,'end':/\)/,'contains':[_0x51bef1,_0x3fb7ff,_0x28b41b,_0x1c4a9f,_0x4bbffa]},_0x45d059={'scope':'string','begin':/"/,'end':/"/,'contains':[_0x4eae43,{'scope':_0x3c5fb5(0x2825),'variants':[{'match':/\\\\|\\["0%abefnrtv]/},{'match':/\\x[0-9A-F]{2}/},{'match':/\\u[0-9A-F]{4}/},{'match':/\\U[0-9A-F]{8}/}]}]};_0x4eae43['contains'][_0x3c5fb5(0x1715)](_0x45d059);const _0x57d94e=[..._0x5ebb21,..._0x42b160,..._0x2aea5b],_0x181d1b={'relevance':0x0,'match':_0x1e1779[_0x3c5fb5(0x1d1d)](_0x3c5fb5(0x3ad0),_0x57d94e[_0x3c5fb5(0x3541)]('|'),_0x3c5fb5(0x295e),/[a-zA-Z_]\w*(?:[?!]|\b)/),'className':_0x3c5fb5(0x3362)};return{'name':'Wren','keywords':{'keyword':_0x5ebb21,'variable.language':_0x42b160,'literal':_0x2aea5b},'contains':[{'scope':_0x3c5fb5(0x4645),'variants':[{'begin':[/#!?/,/[A-Za-z_]+(?=\()/],'beginScope':{},'keywords':{'literal':_0x2aea5b},'contains':[],'end':/\)/},{'begin':[/#!?/,/[A-Za-z_]+/],'beginScope':{},'end':/$/}]},_0x51bef1,_0x45d059,{'className':_0x3c5fb5(0x2431),'begin':/"""/,'end':/"""/},_0x3b5080,_0x41e5f3['C_LINE_COMMENT_MODE'],_0x41e5f3[_0x3c5fb5(0x23fe)],_0x3fb7ff,_0x39be6a,_0x59031f,_0x4b1be4,_0x28b41b,_0x4bbffa,_0x1c4a9f,_0x30963c,_0x181d1b]};};},0x1d5a:_0xb41a30=>{const _0x49bd9f=a0_0x11e7;_0xb41a30[_0x49bd9f(0x474c)]=function(_0x573f61){const 
_0x176e22=_0x49bd9f;return{'name':_0x176e22(0x51fe),'case_insensitive':!0x0,'keywords':{'$pattern':_0x176e22(0x2c7f)+_0x573f61[_0x176e22(0xacc)],'keyword':_0x176e22(0x2381),'built_in':_0x176e22(0xa0b),'meta':'%define\x20%xdefine\x20%+\x20%undef\x20%defstr\x20%deftok\x20%assign\x20%strcat\x20%strlen\x20%substr\x20%rotate\x20%elif\x20%else\x20%endif\x20%if\x20%ifmacro\x20%ifctx\x20%ifidn\x20%ifidni\x20%ifid\x20%ifnum\x20%ifstr\x20%iftoken\x20%ifempty\x20%ifenv\x20%error\x20%warning\x20%fatal\x20%rep\x20%endrep\x20%include\x20%push\x20%pop\x20%repl\x20%pathsearch\x20%depend\x20%use\x20%arg\x20%stacksize\x20%local\x20%line\x20%comment\x20%endcomment\x20.nolist\x20__FILE__\x20__LINE__\x20__SECT__\x20\x20__BITS__\x20__OUTPUT_FORMAT__\x20__DATE__\x20__TIME__\x20__DATE_NUM__\x20__TIME_NUM__\x20__UTC_DATE__\x20__UTC_TIME__\x20__UTC_DATE_NUM__\x20__UTC_TIME_NUM__\x20\x20__PASS__\x20struc\x20endstruc\x20istruc\x20at\x20iend\x20align\x20alignb\x20sectalign\x20daz\x20nodaz\x20up\x20down\x20zero\x20default\x20option\x20assume\x20public\x20bits\x20use16\x20use32\x20use64\x20default\x20section\x20segment\x20absolute\x20extern\x20global\x20common\x20cpu\x20float\x20__utf16__\x20__utf16le__\x20__utf16be__\x20__utf32__\x20__utf32le__\x20__utf32be__\x20__float8__\x20__float16__\x20__float32__\x20__float64__\x20__float80m__\x20__float80e__\x20__float128l__\x20__float128h__\x20__Infinity__\x20__QNaN__\x20__SNaN__\x20Inf\x20NaN\x20QNaN\x20SNaN\x20float8\x20float16\x20float32\x20float64\x20float80m\x20float80e\x20float128l\x20float128h\x20__FLOAT_DAZ__\x20__FLOAT_ROUND__\x20__FLOAT__'},'contains':[_0x573f61[_0x176e22(0x4e4f)](';','$',{'relevance':0x0}),{'className':'number','variants':[{'begin':_0x176e22(0xd48),'relevance':0x0},{'begin':_0x176e22(0x20c4),'relevance':0x0},{'begin':_0x176e22(0x885)},{'begin':'\x5cb(?:0[Xx][0-9A-Fa-f_]+|0[DdTt][0-9_]+|0[QqOo][0-7_]+|0[BbYy][0-1_]+)\x5cb'}]},_0x573f61[_0x176e22(0x291b)],{'className':'string','variants':[{'begin':'\x27','end':_0x176e22(0x2324)}
,{'begin':'`','end':_0x176e22(0xb08)}],'relevance':0x0},{'className':_0x176e22(0x239b),'variants':[{'begin':_0x176e22(0x1c30)},{'begin':_0x176e22(0x3f41)}],'relevance':0x0},{'className':_0x176e22(0x2ad6),'begin':_0x176e22(0x209),'relevance':0x0},{'className':_0x176e22(0x2ad6),'begin':_0x176e22(0x2efc),'relevance':0x0},{'className':_0x176e22(0x5153),'begin':/^\s*\.[\w_-]+/}]};};},0x21e3:_0x2bd5fa=>{const _0x4ee794=a0_0x11e7;_0x2bd5fa[_0x4ee794(0x474c)]=function(_0x424772){const _0x3bef35=_0x4ee794,_0x3bc9fb={'$pattern':/[a-zA-Z][a-zA-Z0-9_?]*/,'keyword':['if','then',_0x3bef35(0x3d4),'do',_0x3bef35(0x552),_0x3bef35(0x30d6),_0x3bef35(0x3c19),_0x3bef35(0x110b),_0x3bef35(0x331),_0x3bef35(0x2aa7),'is','as',_0x3bef35(0x3b62),'when','by',_0x3bef35(0x5139),_0x3bef35(0x30ea),_0x3bef35(0x410f),_0x3bef35(0x47f6),_0x3bef35(0x4006),_0x3bef35(0x11d8),'boolean',_0x3bef35(0x239b),_0x3bef35(0x10c2),_0x3bef35(0x11e8),_0x3bef35(0x182e),_0x3bef35(0x1f2e),_0x3bef35(0x2a2a)],'literal':[_0x3bef35(0x4022),_0x3bef35(0x3984),_0x3bef35(0x3e27)],'built_in':['in',_0x3bef35(0x4531),_0x3bef35(0x22e5),'and','or',_0x3bef35(0x32a6),'not',_0x3bef35(0xbe0),_0x3bef35(0x7f7),_0x3bef35(0x2e2d),_0x3bef35(0x10aa),_0x3bef35(0x5011),_0x3bef35(0x2a37),_0x3bef35(0x3935),_0x3bef35(0x38de),'asin',_0x3bef35(0x2c6e),'atan','exp',_0x3bef35(0x330c),_0x3bef35(0x20ff),_0x3bef35(0x40da),'log10','log1p','pi','at','text_length',_0x3bef35(0x4c52),_0x3bef35(0x270c),_0x3bef35(0xa95),_0x3bef35(0x2b31),_0x3bef35(0x32a0),_0x3bef35(0x1ab6),_0x3bef35(0x41a5),'title_slide',_0x3bef35(0x4685),_0x3bef35(0x4e9a),_0x3bef35(0x1367),'fade_out',_0x3bef35(0x103d),_0x3bef35(0x1f37),'color',_0x3bef35(0x1d06),_0x3bef35(0x34dd),'texture_wrap',_0x3bef35(0x3cbe),_0x3bef35(0x587),_0x3bef35(0x3c99),_0x3bef35(0x48c1),_0x3bef35(0x302e),_0x3bef35(0x3ad),_0x3bef35(0x40ff),_0x3bef35(0x121d),_0x3bef35(0x786),_0x3bef35(0x3638),_0x3bef35(0x33b5),'rectangle',_0x3bef35(0x20f5),_0x3bef35(0x474b),_0x3bef35(0x4118),_0x3bef35(0x83c),_0x3bef35(0x1c0c),'move_to',
_0x3bef35(0x2035),_0x3bef35(0x3b24),_0x3bef35(0x3640),_0x3bef35(0x1471),_0x3bef35(0x3baa),'locally',_0x3bef35(0x51b6),'mouse_?x',_0x3bef35(0x2672),_0x3bef35(0x34a)][_0x3bef35(0x1d1d)]([_0x3bef35(0x23c1),_0x3bef35(0x4c05),'MovieCredits',_0x3bef35(0x361),_0x3bef35(0x4a20),'Shading',_0x3bef35(0x31c4),_0x3bef35(0x1699),_0x3bef35(0x349b),_0x3bef35(0xc64),'StereoDecoder','PointCloud',_0x3bef35(0x3926),_0x3bef35(0x3b09),_0x3bef35(0xae9),'ChromaKey',_0x3bef35(0x1842),_0x3bef35(0x31bd),_0x3bef35(0x276c),'Charts'])},_0x44a5bb={'className':'string','begin':'\x22','end':'\x22','illegal':'\x5cn'},_0x52c509={'beginKeywords':'import','end':'$','keywords':_0x3bc9fb,'contains':[_0x44a5bb]},_0x5a4bb4={'className':_0x3bef35(0x14b2),'begin':/[a-z][^\n]*->/,'returnBegin':!0x0,'end':/->/,'contains':[_0x424772[_0x3bef35(0x46a1)](_0x424772[_0x3bef35(0x2029)],{'starts':{'endsWithParent':!0x0,'keywords':_0x3bc9fb}})]};return{'name':'XL','aliases':[_0x3bef35(0x4875)],'keywords':_0x3bc9fb,'contains':[_0x424772['C_LINE_COMMENT_MODE'],_0x424772[_0x3bef35(0x23fe)],_0x44a5bb,{'className':'string','begin':'\x27','end':'\x27','illegal':'\x5cn'},{'className':_0x3bef35(0x2431),'begin':'<<','end':'>>'},_0x5a4bb4,_0x52c509,{'className':_0x3bef35(0x4a80),'begin':_0x3bef35(0xc49)},_0x424772['NUMBER_MODE']]};};},0x72:_0x1f72a3=>{const _0x4979f0=a0_0x11e7;_0x1f72a3[_0x4979f0(0x474c)]=function(_0x541710){const 
_0x3dc515=_0x4979f0,_0x35bde4=_0x541710['regex'],_0x3b4b82=_0x35bde4['concat'](/[\p{L}_]/u,_0x35bde4[_0x3dc515(0x51e4)](/[\p{L}0-9_.-]*:/u),/[\p{L}0-9_.-]*/u),_0xc27931={'className':'symbol','begin':/&[a-z]+;|&#[0-9]+;|&#x[a-f0-9]+;/},_0x55182e={'begin':/\s/,'contains':[{'className':_0x3dc515(0x1357),'begin':/#?[a-z_][a-z1-9_-]+/,'illegal':/\n/}]},_0xdfd204=_0x541710[_0x3dc515(0x46a1)](_0x55182e,{'begin':/\(/,'end':/\)/}),_0x1bc437=_0x541710[_0x3dc515(0x46a1)](_0x541710[_0x3dc515(0xa4c)],{'className':_0x3dc515(0x2431)}),_0x3d01d9=_0x541710['inherit'](_0x541710['QUOTE_STRING_MODE'],{'className':_0x3dc515(0x2431)}),_0x2acd67={'endsWithParent':!0x0,'illegal':/`]+/}]}]}]};return{'name':_0x3dc515(0x15f9),'aliases':['html',_0x3dc515(0x381e),'rss',_0x3dc515(0x3c17),_0x3dc515(0x358d),_0x3dc515(0x2778),_0x3dc515(0x1cca),_0x3dc515(0x3af3),_0x3dc515(0x2bb1),'svg'],'case_insensitive':!0x0,'unicodeRegex':!0x0,'contains':[{'className':_0x3dc515(0x5153),'begin'://,'relevance':0xa,'contains':[_0x55182e,_0x3d01d9,_0x1bc437,_0xdfd204,{'begin':/\[/,'end':/\]/,'contains':[{'className':'meta','begin'://,'contains':[_0x55182e,_0xdfd204,_0x3d01d9,_0x1bc437]}]}]},_0x541710[_0x3dc515(0x4e4f)](//,{'relevance':0xa}),{'begin'://,'relevance':0xa},_0xc27931,{'className':'meta','end':/\?>/,'variants':[{'begin':/<\?xml/,'relevance':0xa,'contains':[_0x3d01d9]},{'begin':/<\?[a-z][a-z0-9]+/}]},{'className':_0x3dc515(0x15a9),'begin':/)/,'end':/>/,'keywords':{'name':_0x3dc515(0x1a84)},'contains':[_0x2acd67],'starts':{'end':/<\/style>/,'returnEnd':!0x0,'subLanguage':[_0x3dc515(0xdd4),_0x3dc515(0x2655)]}},{'className':'tag','begin':/)/,'end':/>/,'keywords':{'name':_0x3dc515(0x5e1)},'contains':[_0x2acd67],'starts':{'end':/<\/script>/,'returnEnd':!0x0,'subLanguage':[_0x3dc515(0x45ac),_0x3dc515(0x1427),'xml']}},{'className':_0x3dc515(0x15a9),'begin':/<>|<\/>/},{'className':_0x3dc515(0x15a9),'begin':_0x35bde4[_0x3dc515(0x1d1d)](//,/>/,/\s/)))),'end':/\/?>/,'contains':[{'className':'name','begin':_0x3b4b82,'r
elevance':0x0,'starts':_0x2acd67}]},{'className':_0x3dc515(0x15a9),'begin':_0x35bde4[_0x3dc515(0x1d1d)](/<\//,_0x35bde4['lookahead'](_0x35bde4[_0x3dc515(0x1d1d)](_0x3b4b82,/>/))),'contains':[{'className':_0x3dc515(0x11d8),'begin':_0x3b4b82,'relevance':0x0},{'begin':/>/,'relevance':0x0,'endsParent':!0x0}]}]};};},0xbeb:_0xb5398f=>{const _0x49db9b=a0_0x11e7;_0xb5398f[_0x49db9b(0x474c)]=function(_0x3d7e8a){const _0x1cce04=_0x49db9b;return{'name':_0x1cce04(0x1bfa),'aliases':[_0x1cce04(0x2c61),'xq',_0x1cce04(0x25d5)],'case_insensitive':!0x1,'illegal':/(proc)|(abstract)|(extends)|(until)|(#)/,'keywords':{'$pattern':/[a-zA-Z$][a-zA-Z0-9_:-]*/,'keyword':[_0x1cce04(0x196c),_0x1cce04(0x313b),_0x1cce04(0x37f7),_0x1cce04(0x3ff4),_0x1cce04(0x1a9a),'no-preserve',_0x1cce04(0x9a5),_0x1cce04(0x3d23),_0x1cce04(0x2ba1),_0x1cce04(0x31c),_0x1cce04(0x421),'context',_0x1cce04(0x2493),_0x1cce04(0xb83),_0x1cce04(0x536),_0x1cce04(0x47f8),'except',_0x1cce04(0x20f4),_0x1cce04(0x2f0b),_0x1cce04(0x19d4),_0x1cce04(0x46a1),_0x1cce04(0x42f3),_0x1cce04(0x3431),_0x1cce04(0x300f),_0x1cce04(0x2e84),_0x1cce04(0x11f8),_0x1cce04(0xb8f),_0x1cce04(0x4c83),'strict','unordered',_0x1cce04(0x2a0a),_0x1cce04(0x45e8),_0x1cce04(0x331),'option',_0x1cce04(0x14b2),'validate','variable',_0x1cce04(0x3c19),'at','in',_0x1cce04(0x1e61),'where',_0x1cce04(0xd8d),'group','by',_0x1cce04(0xdfd),'if','then','else',_0x1cce04(0x1d42),_0x1cce04(0x4a6d),_0x1cce04(0x18db),_0x1cce04(0x4cc4),_0x1cce04(0x191b),'only',_0x1cce04(0x2681),_0x1cce04(0x40bb),_0x1cce04(0x3dc6),_0x1cce04(0x516d),'ascending',_0x1cce04(0x221b),_0x1cce04(0xc23),'empty',_0x1cce04(0x486),_0x1cce04(0x1b54),_0x1cce04(0x363a),'every',_0x1cce04(0x4ad2),_0x1cce04(0x857),'case',_0x1cce04(0x34c0),'try','catch',_0x1cce04(0x2663),'or','to',_0x1cce04(0x29d),_0x1cce04(0x13b2),_0x1cce04(0x4279),'of',_0x1cce04(0x4465),'as',_0x1cce04(0x4ba6),'cast',_0x1cce04(0x4833),_0x1cce04(0x26f6),'delete','insert',_0x1cce04(0x31cf),_0x1cce04(0x741),'value',_0x1cce04(0x2022),'copy',_0x1cce04(0
x189e),_0x1cce04(0x38d6)],'type':[_0x1cce04(0x1baa),_0x1cce04(0x1f80),_0x1cce04(0x13c6),_0x1cce04(0x263f),_0x1cce04(0x295),_0x1cce04(0x285a),_0x1cce04(0x4645),_0x1cce04(0x37f7),'namespace-node',_0x1cce04(0x4aeb),_0x1cce04(0x4006),'construction',_0x1cce04(0x1f9c),'xs:untypedAtomic',_0x1cce04(0x4627),_0x1cce04(0x25e9),_0x1cce04(0x4965),'xs:float',_0x1cce04(0x32e7),'xs:gYearMonth',_0x1cce04(0x2d6),_0x1cce04(0x40e5),'xs:gMonth','xs:gDay',_0x1cce04(0x49e5),_0x1cce04(0x105e),_0x1cce04(0x2c5),_0x1cce04(0x1dc2),'xs:QName',_0x1cce04(0x564),'xs:dateTime',_0x1cce04(0x29b),_0x1cce04(0x43cd),_0x1cce04(0x2f8a),'xs:normalizedString',_0x1cce04(0x1808),_0x1cce04(0x306c),'xs:NMTOKEN',_0x1cce04(0x774),'xs:NCName',_0x1cce04(0x21c8),_0x1cce04(0x1a20),_0x1cce04(0x18b1),_0x1cce04(0x888),_0x1cce04(0x2fe3),_0x1cce04(0x1ef2),_0x1cce04(0x1e6c),_0x1cce04(0x1601),_0x1cce04(0x15ce),_0x1cce04(0x28a4),_0x1cce04(0x2078),_0x1cce04(0x2ab6),_0x1cce04(0xddc),_0x1cce04(0x3f7b),'xs:unsignedByte','xs:positiveInteger','xs:yearMonthDuration',_0x1cce04(0x420e)],'literal':['eq','ne','lt','le','gt','ge','is',_0x1cce04(0xfe6),_0x1cce04(0x18df),_0x1cce04(0x49a8),_0x1cce04(0x69a),_0x1cce04(0x4496),'following::',_0x1cce04(0x1567),_0x1cce04(0x214f),'ancestor::','ancestor-or-self::',_0x1cce04(0x36dc),'preceding-sibling::',_0x1cce04(0x494b)]},'contains':[{'className':_0x1cce04(0x3362),'begin':/[$][\w\-:]+/},{'className':_0x1cce04(0x43a),'variants':[{'begin':/\barray:/,'end':/(?:append|filter|flatten|fold-(?:left|right)|for-each(?:-pair)?|get|head|insert-before|join|put|remove|reverse|size|sort|subarray|tail)\b/},{'begin':/\bmap:/,'end':/(?:contains|entry|find|for-each|get|keys|merge|put|remove|size)\b/},{'begin':/\bmath:/,'end':/(?:a(?:cos|sin|tan[2]?)|cos|exp(?:10)?|log(?:10)?|pi|pow|sin|sqrt|tan)\b/},{'begin':/\bop:/,'end':/\(/,'excludeEnd':!0x0},{'begin':/\bfn:/,'end':/\(/,'excludeEnd':!0x0},{'begin':/[^/,'end':/(\/[\w._:-]+>)/,'subLanguage':_0x1cce04(0x2655),'contains':[{'begin':/\{/,'end':/\}/,'subLanguage':_0x1
cce04(0x685)},_0x1cce04(0x4454)]}]};};},0x15d4:_0x517670=>{const _0x534be8=a0_0x11e7;_0x517670[_0x534be8(0x474c)]=function(_0x3bc451){const _0x82fe8e=_0x534be8,_0x39b138=_0x82fe8e(0x71a),_0x129d05=_0x82fe8e(0x325f),_0x5d42d9={'className':_0x82fe8e(0x2431),'relevance':0x0,'variants':[{'begin':/'/,'end':/'/},{'begin':/"/,'end':/"/},{'begin':/\S+/}],'contains':[_0x3bc451[_0x82fe8e(0x4a76)],{'className':'template-variable','variants':[{'begin':/\{\{/,'end':/\}\}/},{'begin':/%\{/,'end':/\}/}]}]},_0x533f90=_0x3bc451[_0x82fe8e(0x46a1)](_0x5d42d9,{'variants':[{'begin':/'/,'end':/'/},{'begin':/"/,'end':/"/},{'begin':/[^\s,{}[\]]+/}]}),_0x34fa74={'className':'number','begin':_0x82fe8e(0x10d5)},_0x4b2588={'end':',','endsWithParent':!0x0,'excludeEnd':!0x0,'keywords':_0x39b138,'relevance':0x0},_0x1e1807={'begin':/\{/,'end':/\}/,'contains':[_0x4b2588],'illegal':'\x5cn','relevance':0x0},_0x1d6df9={'begin':'\x5c[','end':'\x5c]','contains':[_0x4b2588],'illegal':'\x5cn','relevance':0x0},_0x1fa858=[{'className':'attr','variants':[{'begin':_0x82fe8e(0x4155)},{'begin':_0x82fe8e(0x4498)},{'begin':_0x82fe8e(0x2f56)}]},{'className':_0x82fe8e(0x5153),'begin':_0x82fe8e(0x1723),'relevance':0xa},{'className':_0x82fe8e(0x2431),'begin':'[\x5c|>]([1-9]?[+-])?[\x20]*\x5cn(\x20+)[^\x20][^\x5cn]*\x5cn(\x5c2[^\x5cn]+\x5cn?)*'},{'begin':'<%[%=-]?','end':'[%-]?%>','subLanguage':_0x82fe8e(0x4430),'excludeBegin':!0x0,'excludeEnd':!0x0,'relevance':0x0},{'className':_0x82fe8e(0xcfc),'begin':_0x82fe8e(0x428e)+_0x129d05},{'className':_0x82fe8e(0xcfc),'begin':'!<'+_0x129d05+'>'},{'className':_0x82fe8e(0xcfc),'begin':'!'+_0x129d05},{'className':'type','begin':'!!'+_0x129d05},{'className':_0x82fe8e(0x5153),'begin':'&'+_0x3bc451[_0x82fe8e(0x206e)]+'$'},{'className':_0x82fe8e(0x5153),'begin':'\x5c*'+_0x3bc451['UNDERSCORE_IDENT_RE']+'$'},{'className':'bullet','begin':_0x82fe8e(0x424b),'relevance':0x0},_0x3bc451['HASH_COMMENT_MODE'],{'beginKeywords':_0x39b138,'keywords':{'literal':_0x39b138}},_0x34fa74,{'className'
:'number','begin':_0x3bc451[_0x82fe8e(0x45be)]+'\x5cb','relevance':0x0},_0x1e1807,_0x1d6df9,_0x5d42d9],_0x486999=[..._0x1fa858];return _0x486999['pop'](),_0x486999[_0x82fe8e(0x1715)](_0x533f90),_0x4b2588['contains']=_0x486999,{'name':_0x82fe8e(0x500b),'case_insensitive':!0x0,'aliases':[_0x82fe8e(0x2a4)],'contains':_0x1fa858};};},0x2229:_0x3e5265=>{const _0x3aa480=a0_0x11e7;_0x3e5265[_0x3aa480(0x474c)]=function(_0x4d9324){const _0x4292a0=_0x3aa480,_0x53e295={'className':'string','contains':[_0x4d9324['BACKSLASH_ESCAPE']],'variants':[_0x4d9324[_0x4292a0(0x46a1)](_0x4d9324[_0x4292a0(0xa4c)],{'illegal':null}),_0x4d9324[_0x4292a0(0x46a1)](_0x4d9324[_0x4292a0(0x291b)],{'illegal':null})]},_0x48fa41=_0x4d9324['UNDERSCORE_TITLE_MODE'],_0x246d83={'variants':[_0x4d9324[_0x4292a0(0xed7)],_0x4d9324['C_NUMBER_MODE']]},_0x9098a6='namespace\x20class\x20interface\x20use\x20extends\x20function\x20return\x20abstract\x20final\x20public\x20protected\x20private\x20static\x20deprecated\x20throw\x20try\x20catch\x20Exception\x20echo\x20empty\x20isset\x20instanceof\x20unset\x20let\x20var\x20new\x20const\x20self\x20require\x20if\x20else\x20elseif\x20switch\x20case\x20default\x20do\x20while\x20loop\x20for\x20continue\x20break\x20likely\x20unlikely\x20__LINE__\x20__FILE__\x20__DIR__\x20__FUNCTION__\x20__CLASS__\x20__TRAIT__\x20__METHOD__\x20__NAMESPACE__\x20array\x20boolean\x20float\x20double\x20integer\x20object\x20resource\x20string\x20char\x20long\x20unsigned\x20bool\x20int\x20uint\x20ulong\x20uchar\x20true\x20false\x20null\x20undefined';return{'name':_0x4292a0(0x49c3),'aliases':[_0x4292a0(0x3d4b)],'keywords':_0x9098a6,'contains':[_0x4d9324[_0x4292a0(0x2ae2)],_0x4d9324['COMMENT'](/\/\*/,/\*\//,{'contains':[{'className':_0x4292a0(0x4593),'begin':/@[A-Za-z]+/}]}),{'className':_0x4292a0(0x2431),'begin':/<<<['"]?\w+['"]?$/,'end':/^\w+;/,'contains':[_0x4d9324[_0x4292a0(0x4a76)]]},{'begin':/(::|->)+[a-zA-Z_\x7f-\xff][a-zA-Z0-9_\x7f-\xff]*/},{'className':_0x4292a0(0x14b2),'beginKeywords':_0x4292a0(
0x8e7),'end':/[;{]/,'excludeEnd':!0x0,'illegal':/\$|\[|%/,'contains':[_0x48fa41,{'className':_0x4292a0(0xddd),'begin':/\(/,'end':/\)/,'keywords':_0x9098a6,'contains':[_0x4292a0(0x4454),_0x4d9324[_0x4292a0(0x23fe)],_0x53e295,_0x246d83]}]},{'className':_0x4292a0(0x1390),'beginKeywords':_0x4292a0(0x1ba3),'end':/\{/,'excludeEnd':!0x0,'illegal':/[:($"]/,'contains':[{'beginKeywords':_0x4292a0(0x4c1c)},_0x48fa41]},{'beginKeywords':'namespace','end':/;/,'illegal':/[.']/,'contains':[_0x48fa41]},{'beginKeywords':_0x4292a0(0x84a),'end':/;/,'contains':[_0x48fa41]},{'begin':/=>/},_0x53e295,_0x246d83]};};}},_0x1ed365={};function _0x2d589b(_0x453aeb){const _0x18d4bb=a0_0x11e7;var _0x5d6513=_0x1ed365[_0x453aeb];if(void 0x0!==_0x5d6513)return _0x5d6513['exports'];var _0x49f52c=_0x1ed365[_0x453aeb]={'id':_0x453aeb,'exports':{}};return _0x14ebb1[_0x453aeb](_0x49f52c,_0x49f52c[_0x18d4bb(0x474c)],_0x2d589b),_0x49f52c['exports'];}_0x2d589b['m']=_0x14ebb1,_0x2d589b['n']=_0x1e4ddf=>{const _0x15e075=a0_0x11e7;var _0x1580d5=_0x1e4ddf&&_0x1e4ddf[_0x15e075(0x1c56)]?()=>_0x1e4ddf[_0x15e075(0x3d23)]:()=>_0x1e4ddf;return _0x2d589b['d'](_0x1580d5,{'a':_0x1580d5}),_0x1580d5;},_0x2d589b['d']=(_0x5803c8,_0x35c462)=>{const _0x303837=a0_0x11e7;for(var _0x1941c5 in _0x35c462)_0x2d589b['o'](_0x35c462,_0x1941c5)&&!_0x2d589b['o'](_0x5803c8,_0x1941c5)&&Object[_0x303837(0x6f7)](_0x5803c8,_0x1941c5,{'enumerable':!0x0,'get':_0x35c462[_0x1941c5]});},_0x2d589b['u']=_0x454bbf=>_0x454bbf+_0x4a492c(0x53d),_0x2d589b['g']=(function(){const _0x9ca210=_0x4a492c;if(_0x9ca210(0x20c7)==typeof globalThis)return globalThis;try{return this||new Function(_0x9ca210(0x639))();}catch(_0x3c3347){if(_0x9ca210(0x20c7)==typeof window)return window;}}()),_0x2d589b['o']=(_0x1217f3,_0x7dfc0a)=>Object[_0x4a492c(0x3b3c)][_0x4a492c(0x2427)][_0x4a492c(0x236b)](_0x1217f3,_0x7dfc0a),_0x2d589b['r']=_0x2b07ea=>{const _0x2c859f=_0x4a492c;_0x2c859f(0x1daa)!=typeof 
Symbol&&Symbol[_0x2c859f(0x3a1c)]&&Object['defineProperty'](_0x2b07ea,Symbol[_0x2c859f(0x3a1c)],{'value':_0x2c859f(0x2496)}),Object[_0x2c859f(0x6f7)](_0x2b07ea,_0x2c859f(0x1c56),{'value':!0x0});},((()=>{const _0x39693f=_0x4a492c;var _0x192d39;_0x2d589b['g'][_0x39693f(0x13fe)]&&(_0x192d39=_0x2d589b['g'][_0x39693f(0x167e)]+'');var _0x48d6ab=_0x2d589b['g']['document'];if(!_0x192d39&&_0x48d6ab&&(_0x48d6ab['currentScript']&&(_0x192d39=_0x48d6ab['currentScript'][_0x39693f(0x3d6)]),!_0x192d39)){var _0x489c55=_0x48d6ab[_0x39693f(0x3560)]('script');if(_0x489c55['length']){for(var _0x15c349=_0x489c55['length']-0x1;_0x15c349>-0x1&&(!_0x192d39||!/^http(s?):/[_0x39693f(0x1769)](_0x192d39));)_0x192d39=_0x489c55[_0x15c349--][_0x39693f(0x3d6)];}}if(!_0x192d39)throw new Error(_0x39693f(0x3afa));_0x192d39=_0x192d39[_0x39693f(0x741)](/#.*$/,'')[_0x39693f(0x741)](/\?.*$/,'')[_0x39693f(0x741)](/\/[^\/]+$/,'/'),_0x2d589b['p']=_0x192d39;})()),_0x2d589b['b']=document[_0x4a492c(0x1d8d)]||self[_0x4a492c(0x167e)]['href'],((()=>{'use strict';const _0x37e46c=_0x4a492c;var _0x1ef793={};_0x2d589b['r'](_0x1ef793),_0x2d589b['d'](_0x1ef793,{'decode':()=>_0x59651b,'encode':()=>_0x39157d,'format':()=>_0x3d11ed,'parse':()=>_0x44b13d});var _0x129aa7={};_0x2d589b['r'](_0x129aa7),_0x2d589b['d'](_0x129aa7,{'Any':()=>_0x392380,'Cc':()=>_0x212275,'Cf':()=>_0x279c0b,'P':()=>_0x4bf2bf,'S':()=>_0x521c9d,'Z':()=>_0x5f3583});var _0xc54b7={};_0x2d589b['r'](_0xc54b7),_0x2d589b['d'](_0xc54b7,{'arrayReplaceAt':()=>_0x38c141,'assign':()=>_0xa6cec9,'escapeHtml':()=>_0x3191fd,'escapeRE':()=>_0x746769,'fromCodePoint':()=>_0x463426,'has':()=>_0x2c462b,'isMdAsciiPunct':()=>_0x5950b9,'isPunctChar':()=>_0x1f4fb3,'isSpace':()=>_0x290708,'isString':()=>_0x297e70,'isValidEntityCode':()=>_0x11f3ea,'isWhiteSpace':()=>_0x138dc2,'lib':()=>_0x5af902,'normalizeReference':()=>_0x1fe7b1,'unescapeAll':()=>_0x5bec25,'unescapeMd':()=>_0x3833ce});var 
_0x502007={};_0x2d589b['r'](_0x502007),_0x2d589b['d'](_0x502007,{'parseLinkDestination':()=>_0x1cba23,'parseLinkLabel':()=>_0x3592ba,'parseLinkTitle':()=>_0x26920b});const _0x2a6bba=_0x37e46c(0x3d5a);var _0x33594a=_0x2d589b(0x1fbd),_0x4bd3f3=_0x2d589b['n'](_0x33594a);const _0x3410e7={};function _0x293974(_0x6048e3,_0x41e525){const _0x13f5d2=_0x37e46c;_0x13f5d2(0x2431)!=typeof _0x41e525&&(_0x41e525=_0x293974[_0x13f5d2(0x3a2b)]);const _0x242807=function(_0x5adf5d){const _0x2312ad=_0x13f5d2;let _0x33d0e8=_0x3410e7[_0x5adf5d];if(_0x33d0e8)return _0x33d0e8;_0x33d0e8=_0x3410e7[_0x5adf5d]=[];for(let _0x28897a=0x0;_0x28897a<0x80;_0x28897a++){const _0x48e1db=String[_0x2312ad(0x49bf)](_0x28897a);_0x33d0e8[_0x2312ad(0x1715)](_0x48e1db);}for(let _0x39f277=0x0;_0x39f277<_0x5adf5d[_0x2312ad(0x1b19)];_0x39f277++){const _0x51dd9f=_0x5adf5d[_0x2312ad(0x4955)](_0x39f277);_0x33d0e8[_0x51dd9f]='%'+('0'+_0x51dd9f[_0x2312ad(0x8e8)](0x10)[_0x2312ad(0x44ff)]())[_0x2312ad(0x384c)](-0x2);}return _0x33d0e8;}(_0x41e525);return _0x6048e3[_0x13f5d2(0x741)](/(%[a-f0-9]{2})+/gi,function(_0x4f92c7){const _0x32f7ee=_0x13f5d2;let _0x75048c='';for(let _0x5b6549=0x0,_0x19c30d=_0x4f92c7[_0x32f7ee(0x1b19)];_0x5b6549<_0x19c30d;_0x5b6549+=0x3){const _0x3d485d=parseInt(_0x4f92c7[_0x32f7ee(0x384c)](_0x5b6549+0x1,_0x5b6549+0x3),0x10);if(_0x3d485d<0x80)_0x75048c+=_0x242807[_0x3d485d];else{if(0xc0==(0xe0&_0x3d485d)&&_0x5b6549+0x3<_0x19c30d){const _0x4a4a85=parseInt(_0x4f92c7[_0x32f7ee(0x384c)](_0x5b6549+0x4,_0x5b6549+0x6),0x10);if(0x80==(0xc0&_0x4a4a85)){const _0x2994ef=_0x3d485d<<0x6&0x7c0|0x3f&_0x4a4a85;_0x75048c+=_0x2994ef<0x80?'��':String['fromCharCode'](_0x2994ef),_0x5b6549+=0x3;continue;}}if(0xe0==(0xf0&_0x3d485d)&&_0x5b6549+0x6<_0x19c30d){const _0x2c7ac1=parseInt(_0x4f92c7['slice'](_0x5b6549+0x4,_0x5b6549+0x6),0x10),_0x4d564a=parseInt(_0x4f92c7[_0x32f7ee(0x384c)](_0x5b6549+0x7,_0x5b6549+0x9),0x10);if(0x80==(0xc0&_0x2c7ac1)&&0x80==(0xc0&_0x4d564a)){const 
_0x43f524=_0x3d485d<<0xc&0xf000|_0x2c7ac1<<0x6&0xfc0|0x3f&_0x4d564a;_0x75048c+=_0x43f524<0x800||_0x43f524>=0xd800&&_0x43f524<=0xdfff?_0x32f7ee(0x51f6):String[_0x32f7ee(0x49bf)](_0x43f524),_0x5b6549+=0x6;continue;}}if(0xf0==(0xf8&_0x3d485d)&&_0x5b6549+0x9<_0x19c30d){const _0x8f9d57=parseInt(_0x4f92c7[_0x32f7ee(0x384c)](_0x5b6549+0x4,_0x5b6549+0x6),0x10),_0x53427c=parseInt(_0x4f92c7[_0x32f7ee(0x384c)](_0x5b6549+0x7,_0x5b6549+0x9),0x10),_0x36cbfc=parseInt(_0x4f92c7[_0x32f7ee(0x384c)](_0x5b6549+0xa,_0x5b6549+0xc),0x10);if(0x80==(0xc0&_0x8f9d57)&&0x80==(0xc0&_0x53427c)&&0x80==(0xc0&_0x36cbfc)){let _0x5d9bed=_0x3d485d<<0x12&0x1c0000|_0x8f9d57<<0xc&0x3f000|_0x53427c<<0x6&0xfc0|0x3f&_0x36cbfc;_0x5d9bed<0x10000||_0x5d9bed>0x10ffff?_0x75048c+='����':(_0x5d9bed-=0x10000,_0x75048c+=String[_0x32f7ee(0x49bf)](0xd800+(_0x5d9bed>>0xa),0xdc00+(0x3ff&_0x5d9bed))),_0x5b6549+=0x9;continue;}}_0x75048c+='�';}}return _0x75048c;});}_0x293974[_0x37e46c(0x3a2b)]=';/?:@&=+$,#',_0x293974[_0x37e46c(0x37d6)]='';const _0x59651b=_0x293974,_0x19cc73={};function _0x7a5f81(_0x5e38f5,_0x1ba819,_0x3c00c0){const _0x305d69=_0x37e46c;_0x305d69(0x2431)!=typeof _0x1ba819&&(_0x3c00c0=_0x1ba819,_0x1ba819=_0x7a5f81[_0x305d69(0x3a2b)]),void 0x0===_0x3c00c0&&(_0x3c00c0=!0x0);const _0x2c6723=function(_0x17d28d){const _0x2538a8=_0x305d69;let _0x1c6816=_0x19cc73[_0x17d28d];if(_0x1c6816)return _0x1c6816;_0x1c6816=_0x19cc73[_0x17d28d]=[];for(let _0x4f444d=0x0;_0x4f444d<0x80;_0x4f444d++){const _0x37d094=String['fromCharCode'](_0x4f444d);/^[0-9a-z]$/i[_0x2538a8(0x1769)](_0x37d094)?_0x1c6816[_0x2538a8(0x1715)](_0x37d094):_0x1c6816[_0x2538a8(0x1715)]('%'+('0'+_0x4f444d[_0x2538a8(0x8e8)](0x10)[_0x2538a8(0x44ff)]())[_0x2538a8(0x384c)](-0x2));}for(let _0x465930=0x0;_0x465930<_0x17d28d[_0x2538a8(0x1b19)];_0x465930++)_0x1c6816[_0x17d28d[_0x2538a8(0x4955)](_0x465930)]=_0x17d28d[_0x465930];return _0x1c6816;}(_0x1ba819);let _0x1e4cec='';for(let _0x70634d=0x0,_0x5d2180=_0x5e38f5['length'];_0x70634d<_0x5d2180;_0x70634d++){const 
_0x38fb71=_0x5e38f5['charCodeAt'](_0x70634d);if(_0x3c00c0&&0x25===_0x38fb71&&_0x70634d+0x2<_0x5d2180&&/^[0-9a-f]{2}$/i[_0x305d69(0x1769)](_0x5e38f5['slice'](_0x70634d+0x1,_0x70634d+0x3)))_0x1e4cec+=_0x5e38f5[_0x305d69(0x384c)](_0x70634d,_0x70634d+0x3),_0x70634d+=0x2;else{if(_0x38fb71<0x80)_0x1e4cec+=_0x2c6723[_0x38fb71];else{if(_0x38fb71>=0xd800&&_0x38fb71<=0xdfff){if(_0x38fb71>=0xd800&&_0x38fb71<=0xdbff&&_0x70634d+0x1<_0x5d2180){const _0x355f7c=_0x5e38f5[_0x305d69(0x4955)](_0x70634d+0x1);if(_0x355f7c>=0xdc00&&_0x355f7c<=0xdfff){_0x1e4cec+=encodeURIComponent(_0x5e38f5[_0x70634d]+_0x5e38f5[_0x70634d+0x1]),_0x70634d++;continue;}}_0x1e4cec+=_0x305d69(0x1803);}else _0x1e4cec+=encodeURIComponent(_0x5e38f5[_0x70634d]);}}}return _0x1e4cec;}_0x7a5f81[_0x37e46c(0x3a2b)]=_0x37e46c(0xcea),_0x7a5f81['componentChars']=_0x37e46c(0x327e);const _0x39157d=_0x7a5f81;function _0x3d11ed(_0x50d935){const _0x3cae4b=_0x37e46c;let _0x1ea910='';return _0x1ea910+=_0x50d935[_0x3cae4b(0x33e5)]||'',_0x1ea910+=_0x50d935['slashes']?'//':'',_0x1ea910+=_0x50d935[_0x3cae4b(0x2010)]?_0x50d935[_0x3cae4b(0x2010)]+'@':'',_0x50d935[_0x3cae4b(0xf76)]&&-0x1!==_0x50d935[_0x3cae4b(0xf76)][_0x3cae4b(0x8c9)](':')?_0x1ea910+='['+_0x50d935['hostname']+']':_0x1ea910+=_0x50d935['hostname']||'',_0x1ea910+=_0x50d935[_0x3cae4b(0x299e)]?':'+_0x50d935[_0x3cae4b(0x299e)]:'',_0x1ea910+=_0x50d935[_0x3cae4b(0x1211)]||'',_0x1ea910+=_0x50d935[_0x3cae4b(0x3190)]||'',_0x1ea910+=_0x50d935[_0x3cae4b(0x40c0)]||'',_0x1ea910;}function _0x2e8cdb(){const _0xb0b665=_0x37e46c;this['protocol']=null,this[_0xb0b665(0x1b30)]=null,this['auth']=null,this['port']=null,this[_0xb0b665(0xf76)]=null,this[_0xb0b665(0x40c0)]=null,this[_0xb0b665(0x3190)]=null,this[_0xb0b665(0x1211)]=null;}const 
_0x33b9c6=/^([a-z0-9.+-]+:)/i,_0x699a00=/:[0-9]*$/,_0x48286c=/^(\/\/?(?!\/)[^\?\s]*)(\?[^\s]*)?$/,_0x358ab5=['{','}','|','\x5c','^','`'][_0x37e46c(0x1d1d)](['<','>','\x22','`','\x20','\x0d','\x0a','\x09']),_0x464e25=['\x27'][_0x37e46c(0x1d1d)](_0x358ab5),_0xc1213b=['%','/','?',';','#'][_0x37e46c(0x1d1d)](_0x464e25),_0x100fb6=['/','?','#'],_0x299677=/^[+a-z0-9A-Z_-]{0,63}$/,_0x4849f2=/^([+a-z0-9A-Z_-]{0,63})(.*)$/,_0x4c7d06={'javascript':!0x0,'javascript:':!0x0},_0x1497a3={'http':!0x0,'https':!0x0,'ftp':!0x0,'gopher':!0x0,'file':!0x0,'http:':!0x0,'https:':!0x0,'ftp:':!0x0,'gopher:':!0x0,'file:':!0x0};_0x2e8cdb[_0x37e46c(0x3b3c)][_0x37e46c(0x2956)]=function(_0x4b42d8,_0x54487e){const _0x1de85d=_0x37e46c;let _0x311b23,_0x5709c4,_0x109961,_0x217830=_0x4b42d8;if(_0x217830=_0x217830[_0x1de85d(0x1b23)](),!_0x54487e&&0x1===_0x4b42d8[_0x1de85d(0x1117)]('#')[_0x1de85d(0x1b19)]){const _0x3e3888=_0x48286c[_0x1de85d(0x198d)](_0x217830);if(_0x3e3888)return this[_0x1de85d(0x1211)]=_0x3e3888[0x1],_0x3e3888[0x2]&&(this['search']=_0x3e3888[0x2]),this;}let _0x41efc9=_0x33b9c6[_0x1de85d(0x198d)](_0x217830);if(_0x41efc9&&(_0x41efc9=_0x41efc9[0x0],_0x311b23=_0x41efc9['toLowerCase'](),this[_0x1de85d(0x33e5)]=_0x41efc9,_0x217830=_0x217830[_0x1de85d(0x3a72)](_0x41efc9[_0x1de85d(0x1b19)])),(_0x54487e||_0x41efc9||_0x217830['match'](/^\/\/[^@\/]+@[^@\/]+/))&&(_0x109961='//'===_0x217830['substr'](0x0,0x2),!_0x109961||_0x41efc9&&_0x4c7d06[_0x41efc9]||(_0x217830=_0x217830[_0x1de85d(0x3a72)](0x2),this[_0x1de85d(0x1b30)]=!0x0)),!_0x4c7d06[_0x41efc9]&&(_0x109961||_0x41efc9&&!_0x1497a3[_0x41efc9])){let _0x491e55,_0xa8bdf3,_0xb2f464=-0x1;for(let 
_0x1e59f2=0x0;_0x1e59f2<_0x100fb6['length'];_0x1e59f2++)_0x5709c4=_0x217830['indexOf'](_0x100fb6[_0x1e59f2]),-0x1!==_0x5709c4&&(-0x1===_0xb2f464||_0x5709c4<_0xb2f464)&&(_0xb2f464=_0x5709c4);_0xa8bdf3=-0x1===_0xb2f464?_0x217830[_0x1de85d(0x4004)]('@'):_0x217830['lastIndexOf']('@',_0xb2f464),-0x1!==_0xa8bdf3&&(_0x491e55=_0x217830[_0x1de85d(0x384c)](0x0,_0xa8bdf3),_0x217830=_0x217830[_0x1de85d(0x384c)](_0xa8bdf3+0x1),this[_0x1de85d(0x2010)]=_0x491e55),_0xb2f464=-0x1;for(let _0x5c15fc=0x0;_0x5c15fc<_0xc1213b['length'];_0x5c15fc++)_0x5709c4=_0x217830[_0x1de85d(0x8c9)](_0xc1213b[_0x5c15fc]),-0x1!==_0x5709c4&&(-0x1===_0xb2f464||_0x5709c4<_0xb2f464)&&(_0xb2f464=_0x5709c4);-0x1===_0xb2f464&&(_0xb2f464=_0x217830['length']),':'===_0x217830[_0xb2f464-0x1]&&_0xb2f464--;const _0x4cc24a=_0x217830[_0x1de85d(0x384c)](0x0,_0xb2f464);_0x217830=_0x217830[_0x1de85d(0x384c)](_0xb2f464),this[_0x1de85d(0x3063)](_0x4cc24a),this[_0x1de85d(0xf76)]=this[_0x1de85d(0xf76)]||'';const _0x408e1b='['===this[_0x1de85d(0xf76)][0x0]&&']'===this[_0x1de85d(0xf76)][this[_0x1de85d(0xf76)][_0x1de85d(0x1b19)]-0x1];if(!_0x408e1b){const _0x351b77=this['hostname']['split'](/\./);for(let _0x4117a4=0x0,_0x39791a=_0x351b77[_0x1de85d(0x1b19)];_0x4117a4<_0x39791a;_0x4117a4++){const _0x28e863=_0x351b77[_0x4117a4];if(_0x28e863&&!_0x28e863[_0x1de85d(0x2d96)](_0x299677)){let _0x47a50e='';for(let _0x45b2d4=0x0,_0x431f28=_0x28e863[_0x1de85d(0x1b19)];_0x45b2d4<_0x431f28;_0x45b2d4++)_0x28e863[_0x1de85d(0x4955)](_0x45b2d4)>0x7f?_0x47a50e+='x':_0x47a50e+=_0x28e863[_0x45b2d4];if(!_0x47a50e[_0x1de85d(0x2d96)](_0x299677)){const 
_0x10cd3e=_0x351b77[_0x1de85d(0x384c)](0x0,_0x4117a4),_0x325d0d=_0x351b77[_0x1de85d(0x384c)](_0x4117a4+0x1),_0x4ef1a4=_0x28e863[_0x1de85d(0x2d96)](_0x4849f2);_0x4ef1a4&&(_0x10cd3e[_0x1de85d(0x1715)](_0x4ef1a4[0x1]),_0x325d0d['unshift'](_0x4ef1a4[0x2])),_0x325d0d[_0x1de85d(0x1b19)]&&(_0x217830=_0x325d0d[_0x1de85d(0x3541)]('.')+_0x217830),this[_0x1de85d(0xf76)]=_0x10cd3e[_0x1de85d(0x3541)]('.');break;}}}}this[_0x1de85d(0xf76)][_0x1de85d(0x1b19)]>0xff&&(this['hostname']=''),_0x408e1b&&(this[_0x1de85d(0xf76)]=this['hostname'][_0x1de85d(0x3a72)](0x1,this[_0x1de85d(0xf76)]['length']-0x2));}const _0x2af8be=_0x217830[_0x1de85d(0x8c9)]('#');-0x1!==_0x2af8be&&(this['hash']=_0x217830[_0x1de85d(0x3a72)](_0x2af8be),_0x217830=_0x217830['slice'](0x0,_0x2af8be));const _0x241116=_0x217830[_0x1de85d(0x8c9)]('?');return-0x1!==_0x241116&&(this['search']=_0x217830['substr'](_0x241116),_0x217830=_0x217830[_0x1de85d(0x384c)](0x0,_0x241116)),_0x217830&&(this['pathname']=_0x217830),_0x1497a3[_0x311b23]&&this[_0x1de85d(0xf76)]&&!this['pathname']&&(this[_0x1de85d(0x1211)]=''),this;},_0x2e8cdb['prototype']['parseHost']=function(_0x4ca6c0){const _0x123ce8=_0x37e46c;let _0x366a4a=_0x699a00[_0x123ce8(0x198d)](_0x4ca6c0);_0x366a4a&&(_0x366a4a=_0x366a4a[0x0],':'!==_0x366a4a&&(this['port']=_0x366a4a[_0x123ce8(0x3a72)](0x1)),_0x4ca6c0=_0x4ca6c0[_0x123ce8(0x3a72)](0x0,_0x4ca6c0[_0x123ce8(0x1b19)]-_0x366a4a[_0x123ce8(0x1b19)])),_0x4ca6c0&&(this[_0x123ce8(0xf76)]=_0x4ca6c0);};const _0x44b13d=function(_0x3711cc,_0x4e6330){const _0x28a9ac=_0x37e46c;if(_0x3711cc&&_0x3711cc instanceof _0x2e8cdb)return _0x3711cc;const _0x23e31e=new _0x2e8cdb();return 
_0x23e31e[_0x28a9ac(0x2956)](_0x3711cc,_0x4e6330),_0x23e31e;},_0x4bf2bf=/[!-#%-\*,-\/:;\?@\[-\]_\{\}\xA1\xA7\xAB\xB6\xB7\xBB\xBF\u037E\u0387\u055A-\u055F\u0589\u058A\u05BE\u05C0\u05C3\u05C6\u05F3\u05F4\u0609\u060A\u060C\u060D\u061B\u061D-\u061F\u066A-\u066D\u06D4\u0700-\u070D\u07F7-\u07F9\u0830-\u083E\u085E\u0964\u0965\u0970\u09FD\u0A76\u0AF0\u0C77\u0C84\u0DF4\u0E4F\u0E5A\u0E5B\u0F04-\u0F12\u0F14\u0F3A-\u0F3D\u0F85\u0FD0-\u0FD4\u0FD9\u0FDA\u104A-\u104F\u10FB\u1360-\u1368\u1400\u166E\u169B\u169C\u16EB-\u16ED\u1735\u1736\u17D4-\u17D6\u17D8-\u17DA\u1800-\u180A\u1944\u1945\u1A1E\u1A1F\u1AA0-\u1AA6\u1AA8-\u1AAD\u1B5A-\u1B60\u1B7D\u1B7E\u1BFC-\u1BFF\u1C3B-\u1C3F\u1C7E\u1C7F\u1CC0-\u1CC7\u1CD3\u2010-\u2027\u2030-\u2043\u2045-\u2051\u2053-\u205E\u207D\u207E\u208D\u208E\u2308-\u230B\u2329\u232A\u2768-\u2775\u27C5\u27C6\u27E6-\u27EF\u2983-\u2998\u29D8-\u29DB\u29FC\u29FD\u2CF9-\u2CFC\u2CFE\u2CFF\u2D70\u2E00-\u2E2E\u2E30-\u2E4F\u2E52-\u2E5D\u3001-\u3003\u3008-\u3011\u3014-\u301F\u3030\u303D\u30A0\u30FB\uA4FE\uA4FF\uA60D-\uA60F\uA673\uA67E\uA6F2-\uA6F7\uA874-\uA877\uA8CE\uA8CF\uA8F8-\uA8FA\uA8FC\uA92E\uA92F\uA95F\uA9C1-\uA9CD\uA9DE\uA9DF\uAA5C-\uAA5F\uAADE\uAADF\uAAF0\uAAF1\uABEB\uFD3E\uFD3F\uFE10-\uFE19\uFE30-\uFE52\uFE54-\uFE61\uFE63\uFE68\uFE6A\uFE6B\uFF01-\uFF03\uFF05-\uFF0A\uFF0C-\uFF0F\uFF1A\uFF1B\uFF1F\uFF20\uFF3B-\uFF3D\uFF3F\uFF5B\uFF5D\uFF5F-\uFF65]|\uD800[\uDD00-\uDD02\uDF9F\uDFD0]|\uD801\uDD6F|\uD802[\uDC57\uDD1F\uDD3F\uDE50-\uDE58\uDE7F\uDEF0-\uDEF6\uDF39-\uDF3F\uDF99-\uDF9C]|\uD803[\uDEAD\uDF55-\uDF59\uDF86-\uDF89]|\uD804[\uDC47-\uDC4D\uDCBB\uDCBC\uDCBE-\uDCC1\uDD40-\uDD43\uDD74\uDD75\uDDC5-\uDDC8\uDDCD\uDDDB\uDDDD-\uDDDF\uDE38-\uDE3D\uDEA9]|\uD805[\uDC4B-\uDC4F\uDC5A\uDC5B\uDC5D\uDCC6\uDDC1-\uDDD7\uDE41-\uDE43\uDE60-\uDE6C\uDEB9\uDF3C-\uDF3E]|\uD806[\uDC3B\uDD44-\uDD46\uDDE2\uDE3F-\uDE46\uDE9A-\uDE9C\uDE9E-\uDEA2\uDF00-\uDF09]|\uD807[\uDC41-\uDC45\uDC70\uDC71\uDEF7\uDEF8\uDF43-\uDF4F\uDFFF]|\uD809[\uDC70-\uDC74]|\uD80B[\uDFF1\uDFF2]|\uD81A[\uDE6E\uDE6F\uDEF5\uDF37
-\uDF3B\uDF44]|\uD81B[\uDE97-\uDE9A\uDFE2]|\uD82F\uDC9F|\uD836[\uDE87-\uDE8B]|\uD83A[\uDD5E\uDD5F]/,_0x392380=/[\0-\uD7FF\uE000-\uFFFF]|[\uD800-\uDBFF][\uDC00-\uDFFF]|[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?:[^\uD800-\uDBFF]|^)[\uDC00-\uDFFF]/,_0x212275=/[\0-\x1F\x7F-\x9F]/,_0x279c0b=/[\xAD\u0600-\u0605\u061C\u06DD\u070F\u0890\u0891\u08E2\u180E\u200B-\u200F\u202A-\u202E\u2060-\u2064\u2066-\u206F\uFEFF\uFFF9-\uFFFB]|\uD804[\uDCBD\uDCCD]|\uD80D[\uDC30-\uDC3F]|\uD82F[\uDCA0-\uDCA3]|\uD834[\uDD73-\uDD7A]|\uDB40[\uDC01\uDC20-\uDC7F]/,_0x521c9d=/[\$\+<->\^`\|~\xA2-\xA6\xA8\xA9\xAC\xAE-\xB1\xB4\xB8\xD7\xF7\u02C2-\u02C5\u02D2-\u02DF\u02E5-\u02EB\u02ED\u02EF-\u02FF\u0375\u0384\u0385\u03F6\u0482\u058D-\u058F\u0606-\u0608\u060B\u060E\u060F\u06DE\u06E9\u06FD\u06FE\u07F6\u07FE\u07FF\u0888\u09F2\u09F3\u09FA\u09FB\u0AF1\u0B70\u0BF3-\u0BFA\u0C7F\u0D4F\u0D79\u0E3F\u0F01-\u0F03\u0F13\u0F15-\u0F17\u0F1A-\u0F1F\u0F34\u0F36\u0F38\u0FBE-\u0FC5\u0FC7-\u0FCC\u0FCE\u0FCF\u0FD5-\u0FD8\u109E\u109F\u1390-\u1399\u166D\u17DB\u1940\u19DE-\u19FF\u1B61-\u1B6A\u1B74-\u1B7C\u1FBD\u1FBF-\u1FC1\u1FCD-\u1FCF\u1FDD-\u1FDF\u1FED-\u1FEF\u1FFD\u1FFE\u2044\u2052\u207A-\u207C\u208A-\u208C\u20A0-\u20C0\u2100\u2101\u2103-\u2106\u2108\u2109\u2114\u2116-\u2118\u211E-\u2123\u2125\u2127\u2129\u212E\u213A\u213B\u2140-\u2144\u214A-\u214D\u214F\u218A\u218B\u2190-\u2307\u230C-\u2328\u232B-\u2426\u2440-\u244A\u249C-\u24E9\u2500-\u2767\u2794-\u27C4\u27C7-\u27E5\u27F0-\u2982\u2999-\u29D7\u29DC-\u29FB\u29FE-\u2B73\u2B76-\u2B95\u2B97-\u2BFF\u2CE5-\u2CEA\u2E50\u2E51\u2E80-\u2E99\u2E9B-\u2EF3\u2F00-\u2FD5\u2FF0-\u2FFF\u3004\u3012\u3013\u3020\u3036\u3037\u303E\u303F\u309B\u309C\u3190\u3191\u3196-\u319F\u31C0-\u31E3\u31EF\u3200-\u321E\u322A-\u3247\u3250\u3260-\u327F\u328A-\u32B0\u32C0-\u33FF\u4DC0-\u4DFF\uA490-\uA4C6\uA700-\uA716\uA720\uA721\uA789\uA78A\uA828-\uA82B\uA836-\uA839\uAA77-\uAA79\uAB5B\uAB6A\uAB6B\uFB29\uFBB2-\uFBC2\uFD40-\uFD4F\uFDCF\uFDFC-\uFDFF\uFE62\uFE64-\uFE66\uFE69\uFF04\uFF0B\uFF1C-\uFF1E\uFF3E\uFF40\uFF5C\uFF5
E\uFFE0-\uFFE6\uFFE8-\uFFEE\uFFFC\uFFFD]|\uD800[\uDD37-\uDD3F\uDD79-\uDD89\uDD8C-\uDD8E\uDD90-\uDD9C\uDDA0\uDDD0-\uDDFC]|\uD802[\uDC77\uDC78\uDEC8]|\uD805\uDF3F|\uD807[\uDFD5-\uDFF1]|\uD81A[\uDF3C-\uDF3F\uDF45]|\uD82F\uDC9C|\uD833[\uDF50-\uDFC3]|\uD834[\uDC00-\uDCF5\uDD00-\uDD26\uDD29-\uDD64\uDD6A-\uDD6C\uDD83\uDD84\uDD8C-\uDDA9\uDDAE-\uDDEA\uDE00-\uDE41\uDE45\uDF00-\uDF56]|\uD835[\uDEC1\uDEDB\uDEFB\uDF15\uDF35\uDF4F\uDF6F\uDF89\uDFA9\uDFC3]|\uD836[\uDC00-\uDDFF\uDE37-\uDE3A\uDE6D-\uDE74\uDE76-\uDE83\uDE85\uDE86]|\uD838[\uDD4F\uDEFF]|\uD83B[\uDCAC\uDCB0\uDD2E\uDEF0\uDEF1]|\uD83C[\uDC00-\uDC2B\uDC30-\uDC93\uDCA0-\uDCAE\uDCB1-\uDCBF\uDCC1-\uDCCF\uDCD1-\uDCF5\uDD0D-\uDDAD\uDDE6-\uDE02\uDE10-\uDE3B\uDE40-\uDE48\uDE50\uDE51\uDE60-\uDE65\uDF00-\uDFFF]|\uD83D[\uDC00-\uDED7\uDEDC-\uDEEC\uDEF0-\uDEFC\uDF00-\uDF76\uDF7B-\uDFD9\uDFE0-\uDFEB\uDFF0]|\uD83E[\uDC00-\uDC0B\uDC10-\uDC47\uDC50-\uDC59\uDC60-\uDC87\uDC90-\uDCAD\uDCB0\uDCB1\uDD00-\uDE53\uDE60-\uDE6D\uDE70-\uDE7C\uDE80-\uDE88\uDE90-\uDEBD\uDEBF-\uDEC5\uDECE-\uDEDB\uDEE0-\uDEE8\uDEF0-\uDEF8\uDF00-\uDF92\uDF94-\uDFCA]/,_0x5f3583=/[ \xA0\u1680\u2000-\u200A\u2028\u2029\u202F\u205F\u3000]/,_0x21291b=new Uint16Array(_0x37e46c(0x1524)[_0x37e46c(0x1117)]('')[_0x37e46c(0x4833)](_0x481e33=>_0x481e33[_0x37e46c(0x4955)](0x0))),_0x2c9e85=new Uint16Array(_0x37e46c(0x3e15)['split']('')[_0x37e46c(0x4833)](_0x4db2bf=>_0x4db2bf['charCodeAt'](0x0)));var _0x9bfeb6;const _0x1d9373=new Map([[0x0,0xfffd],[0x80,0x20ac],[0x82,0x201a],[0x83,0x192],[0x84,0x201e],[0x85,0x2026],[0x86,0x2020],[0x87,0x2021],[0x88,0x2c6],[0x89,0x2030],[0x8a,0x160],[0x8b,0x2039],[0x8c,0x152],[0x8e,0x17d],[0x91,0x2018],[0x92,0x2019],[0x93,0x201c],[0x94,0x201d],[0x95,0x2022],[0x96,0x2013],[0x97,0x2014],[0x98,0x2dc],[0x99,0x2122],[0x9a,0x161],[0x9b,0x203a],[0x9c,0x153],[0x9e,0x17e],[0x9f,0x178]]),_0x3e0c4e=null!==(_0x9bfeb6=String[_0x37e46c(0xe5f)])&&void 0x0!==_0x9bfeb6?_0x9bfeb6:function(_0x3972ea){const _0x24bcd5=_0x37e46c;let _0x4b5a5d='';return 
_0x3972ea>0xffff&&(_0x3972ea-=0x10000,_0x4b5a5d+=String[_0x24bcd5(0x49bf)](_0x3972ea>>>0xa&0x3ff|0xd800),_0x3972ea=0xdc00|0x3ff&_0x3972ea),_0x4b5a5d+=String[_0x24bcd5(0x49bf)](_0x3972ea),_0x4b5a5d;};function _0x5ccf22(_0x39ba02){const _0x5e1776=_0x37e46c;var _0x31466d;return _0x39ba02>=0xd800&&_0x39ba02<=0xdfff||_0x39ba02>0x10ffff?0xfffd:null!==(_0x31466d=_0x1d9373[_0x5e1776(0xf9e)](_0x39ba02))&&void 0x0!==_0x31466d?_0x31466d:_0x39ba02;}var _0x1b57fb;!function(_0x5c260b){const _0x69437a=_0x37e46c;_0x5c260b[_0x5c260b[_0x69437a(0xb65)]=0x23]=_0x69437a(0xb65),_0x5c260b[_0x5c260b[_0x69437a(0x2e5d)]=0x3b]=_0x69437a(0x2e5d),_0x5c260b[_0x5c260b[_0x69437a(0x104c)]=0x3d]=_0x69437a(0x104c),_0x5c260b[_0x5c260b[_0x69437a(0x3101)]=0x30]=_0x69437a(0x3101),_0x5c260b[_0x5c260b[_0x69437a(0x524d)]=0x39]='NINE',_0x5c260b[_0x5c260b[_0x69437a(0x146c)]=0x61]=_0x69437a(0x146c),_0x5c260b[_0x5c260b[_0x69437a(0x4a9f)]=0x66]=_0x69437a(0x4a9f),_0x5c260b[_0x5c260b['LOWER_X']=0x78]='LOWER_X',_0x5c260b[_0x5c260b[_0x69437a(0x126a)]=0x7a]=_0x69437a(0x126a),_0x5c260b[_0x5c260b[_0x69437a(0x299d)]=0x41]=_0x69437a(0x299d),_0x5c260b[_0x5c260b[_0x69437a(0x9b9)]=0x46]='UPPER_F',_0x5c260b[_0x5c260b[_0x69437a(0x38e2)]=0x5a]=_0x69437a(0x38e2);}(_0x1b57fb||(_0x1b57fb={}));var _0x3457fd,_0x2cd48d,_0x18ddac;function _0xa101c0(_0x23c3f5){const _0x5e3dd4=_0x37e46c;return _0x23c3f5>=_0x1b57fb[_0x5e3dd4(0x3101)]&&_0x23c3f5<=_0x1b57fb[_0x5e3dd4(0x524d)];}function _0x371f4f(_0x2fc612){const _0x5a2e18=_0x37e46c;return _0x2fc612>=_0x1b57fb[_0x5a2e18(0x299d)]&&_0x2fc612<=_0x1b57fb[_0x5a2e18(0x9b9)]||_0x2fc612>=_0x1b57fb[_0x5a2e18(0x146c)]&&_0x2fc612<=_0x1b57fb['LOWER_F'];}function _0x5877ca(_0x2ac833){return _0x2ac833===_0x1b57fb['EQUALS']||function(_0x4caf5a){const _0x56f950=a0_0x11e7;return _0x4caf5a>=_0x1b57fb['UPPER_A']&&_0x4caf5a<=_0x1b57fb[_0x56f950(0x38e2)]||_0x4caf5a>=_0x1b57fb['LOWER_A']&&_0x4caf5a<=_0x1b57fb[_0x56f950(0x126a)]||_0xa101c0(_0x4caf5a);}(_0x2ac833);}!function(_0x36f633){const 
_0x42d0f5=_0x37e46c;_0x36f633[_0x36f633['VALUE_LENGTH']=0xc000]=_0x42d0f5(0x6b0),_0x36f633[_0x36f633[_0x42d0f5(0x3cc3)]=0x3f80]='BRANCH_LENGTH',_0x36f633[_0x36f633[_0x42d0f5(0xa97)]=0x7f]='JUMP_TABLE';}(_0x3457fd||(_0x3457fd={})),function(_0x17c7f3){const _0xd4e869=_0x37e46c;_0x17c7f3[_0x17c7f3[_0xd4e869(0xe0b)]=0x0]=_0xd4e869(0xe0b),_0x17c7f3[_0x17c7f3[_0xd4e869(0x200d)]=0x1]=_0xd4e869(0x200d),_0x17c7f3[_0x17c7f3['NumericDecimal']=0x2]=_0xd4e869(0x2c50),_0x17c7f3[_0x17c7f3[_0xd4e869(0x23a7)]=0x3]=_0xd4e869(0x23a7),_0x17c7f3[_0x17c7f3[_0xd4e869(0x4bfe)]=0x4]=_0xd4e869(0x4bfe);}(_0x2cd48d||(_0x2cd48d={})),function(_0x228b58){const _0x5b8c48=_0x37e46c;_0x228b58[_0x228b58[_0x5b8c48(0x2719)]=0x0]=_0x5b8c48(0x2719),_0x228b58[_0x228b58[_0x5b8c48(0x972)]=0x1]=_0x5b8c48(0x972),_0x228b58[_0x228b58[_0x5b8c48(0x1395)]=0x2]=_0x5b8c48(0x1395);}(_0x18ddac||(_0x18ddac={}));class _0x376184{constructor(_0x3093e6,_0x2f8f00,_0xd8e8aa){const _0x1fbfed=_0x37e46c;this['decodeTree']=_0x3093e6,this[_0x1fbfed(0x280e)]=_0x2f8f00,this[_0x1fbfed(0x36ab)]=_0xd8e8aa,this[_0x1fbfed(0x206b)]=_0x2cd48d[_0x1fbfed(0xe0b)],this[_0x1fbfed(0x2252)]=0x1,this['result']=0x0,this[_0x1fbfed(0x3e8c)]=0x0,this['excess']=0x1,this[_0x1fbfed(0x1b12)]=_0x18ddac['Strict'];}[_0x37e46c(0x2141)](_0x18d7d9){const _0x240aff=_0x37e46c;this['decodeMode']=_0x18d7d9,this[_0x240aff(0x206b)]=_0x2cd48d[_0x240aff(0xe0b)],this[_0x240aff(0xa34)]=0x0,this['treeIndex']=0x0,this[_0x240aff(0x9fd)]=0x1,this[_0x240aff(0x2252)]=0x1;}['write'](_0x44c4b6,_0x96f688){const _0x418122=_0x37e46c;switch(this[_0x418122(0x206b)]){case _0x2cd48d[_0x418122(0xe0b)]:return _0x44c4b6[_0x418122(0x4955)](_0x96f688)===_0x1b57fb[_0x418122(0xb65)]?(this[_0x418122(0x206b)]=_0x2cd48d[_0x418122(0x200d)],this[_0x418122(0x2252)]+=0x1,this['stateNumericStart'](_0x44c4b6,_0x96f688+0x1)):(this['state']=_0x2cd48d[_0x418122(0x4bfe)],this[_0x418122(0x11da)](_0x44c4b6,_0x96f688));case _0x2cd48d[_0x418122(0x200d)]:return 
this[_0x418122(0x1f16)](_0x44c4b6,_0x96f688);case _0x2cd48d[_0x418122(0x2c50)]:return this['stateNumericDecimal'](_0x44c4b6,_0x96f688);case _0x2cd48d[_0x418122(0x23a7)]:return this[_0x418122(0x2c0d)](_0x44c4b6,_0x96f688);case _0x2cd48d[_0x418122(0x4bfe)]:return this[_0x418122(0x11da)](_0x44c4b6,_0x96f688);}}[_0x37e46c(0x1f16)](_0x2a9590,_0xea8d11){const _0x4fe677=_0x37e46c;return _0xea8d11>=_0x2a9590[_0x4fe677(0x1b19)]?-0x1:(0x20|_0x2a9590['charCodeAt'](_0xea8d11))===_0x1b57fb[_0x4fe677(0x3226)]?(this[_0x4fe677(0x206b)]=_0x2cd48d[_0x4fe677(0x23a7)],this[_0x4fe677(0x2252)]+=0x1,this[_0x4fe677(0x2c0d)](_0x2a9590,_0xea8d11+0x1)):(this[_0x4fe677(0x206b)]=_0x2cd48d[_0x4fe677(0x2c50)],this[_0x4fe677(0x867)](_0x2a9590,_0xea8d11));}['addToNumericResult'](_0x538046,_0x8ccd6c,_0x231155,_0x5939b2){const _0x516ed0=_0x37e46c;if(_0x8ccd6c!==_0x231155){const _0x4da67c=_0x231155-_0x8ccd6c;this[_0x516ed0(0xa34)]=this[_0x516ed0(0xa34)]*Math[_0x516ed0(0x43bd)](_0x5939b2,_0x4da67c)+parseInt(_0x538046['substr'](_0x8ccd6c,_0x4da67c),_0x5939b2),this[_0x516ed0(0x2252)]+=_0x4da67c;}}[_0x37e46c(0x2c0d)](_0x14f48d,_0x5651be){const _0x390b0b=_0x37e46c,_0x9112d5=_0x5651be;for(;_0x5651be<_0x14f48d[_0x390b0b(0x1b19)];){const _0x86942b=_0x14f48d[_0x390b0b(0x4955)](_0x5651be);if(!_0xa101c0(_0x86942b)&&!_0x371f4f(_0x86942b))return this['addToNumericResult'](_0x14f48d,_0x9112d5,_0x5651be,0x10),this[_0x390b0b(0x3f8f)](_0x86942b,0x3);_0x5651be+=0x1;}return this['addToNumericResult'](_0x14f48d,_0x9112d5,_0x5651be,0x10),-0x1;}[_0x37e46c(0x867)](_0x2a7f72,_0x389d9a){const _0xbd6f3a=_0x37e46c,_0x4850ef=_0x389d9a;for(;_0x389d9a<_0x2a7f72['length'];){const _0x1a7347=_0x2a7f72['charCodeAt'](_0x389d9a);if(!_0xa101c0(_0x1a7347))return this[_0xbd6f3a(0x4073)](_0x2a7f72,_0x4850ef,_0x389d9a,0xa),this[_0xbd6f3a(0x3f8f)](_0x1a7347,0x2);_0x389d9a+=0x1;}return this[_0xbd6f3a(0x4073)](_0x2a7f72,_0x4850ef,_0x389d9a,0xa),-0x1;}[_0x37e46c(0x3f8f)](_0x1db9ef,_0x3ab3dd){const _0x2503c1=_0x37e46c;var 
_0x2bb0b4;if(this[_0x2503c1(0x2252)]<=_0x3ab3dd)return null===(_0x2bb0b4=this[_0x2503c1(0x36ab)])||void 0x0===_0x2bb0b4||_0x2bb0b4[_0x2503c1(0x2b0f)](this['consumed']),0x0;if(_0x1db9ef===_0x1b57fb[_0x2503c1(0x2e5d)])this[_0x2503c1(0x2252)]+=0x1;else{if(this[_0x2503c1(0x1b12)]===_0x18ddac[_0x2503c1(0x972)])return 0x0;}return this['emitCodePoint'](_0x5ccf22(this['result']),this[_0x2503c1(0x2252)]),this[_0x2503c1(0x36ab)]&&(_0x1db9ef!==_0x1b57fb[_0x2503c1(0x2e5d)]&&this['errors']['missingSemicolonAfterCharacterReference'](),this[_0x2503c1(0x36ab)][_0x2503c1(0x184f)](this[_0x2503c1(0xa34)])),this[_0x2503c1(0x2252)];}[_0x37e46c(0x11da)](_0x4b3f03,_0x51c471){const _0x24851c=_0x37e46c,{decodeTree:_0x30eaed}=this;let _0x17e104=_0x30eaed[this[_0x24851c(0x3e8c)]],_0x1d3419=(_0x17e104&_0x3457fd[_0x24851c(0x6b0)])>>0xe;for(;_0x51c471<_0x4b3f03[_0x24851c(0x1b19)];_0x51c471++,this[_0x24851c(0x9fd)]++){const _0x47e3c4=_0x4b3f03[_0x24851c(0x4955)](_0x51c471);if(this[_0x24851c(0x3e8c)]=_0x539c8e(_0x30eaed,_0x17e104,this[_0x24851c(0x3e8c)]+Math[_0x24851c(0x4529)](0x1,_0x1d3419),_0x47e3c4),this[_0x24851c(0x3e8c)]<0x0)return 0x0===this['result']||this['decodeMode']===_0x18ddac[_0x24851c(0x1395)]&&(0x0===_0x1d3419||_0x5877ca(_0x47e3c4))?0x0:this['emitNotTerminatedNamedEntity']();if(_0x17e104=_0x30eaed[this['treeIndex']],_0x1d3419=(_0x17e104&_0x3457fd[_0x24851c(0x6b0)])>>0xe,0x0!==_0x1d3419){if(_0x47e3c4===_0x1b57fb[_0x24851c(0x2e5d)])return this[_0x24851c(0xa36)](this[_0x24851c(0x3e8c)],_0x1d3419,this[_0x24851c(0x2252)]+this[_0x24851c(0x9fd)]);this['decodeMode']!==_0x18ddac[_0x24851c(0x972)]&&(this['result']=this[_0x24851c(0x3e8c)],this['consumed']+=this[_0x24851c(0x9fd)],this[_0x24851c(0x9fd)]=0x0);}}return-0x1;}[_0x37e46c(0x278c)](){const _0x36ca05=_0x37e46c;var _0x161c50;const {result:_0x3d4eb7,decodeTree:_0x1d84f1}=this,_0x124a8f=(_0x1d84f1[_0x3d4eb7]&_0x3457fd[_0x36ca05(0x6b0)])>>0xe;return 
this[_0x36ca05(0xa36)](_0x3d4eb7,_0x124a8f,this[_0x36ca05(0x2252)]),null===(_0x161c50=this[_0x36ca05(0x36ab)])||void 0x0===_0x161c50||_0x161c50['missingSemicolonAfterCharacterReference'](),this['consumed'];}[_0x37e46c(0xa36)](_0x441695,_0x12f53a,_0x22dc0a){const _0x245ec6=_0x37e46c,{decodeTree:_0x1bce6a}=this;return this[_0x245ec6(0x280e)](0x1===_0x12f53a?_0x1bce6a[_0x441695]&~_0x3457fd[_0x245ec6(0x6b0)]:_0x1bce6a[_0x441695+0x1],_0x22dc0a),0x3===_0x12f53a&&this[_0x245ec6(0x280e)](_0x1bce6a[_0x441695+0x2],_0x22dc0a),_0x22dc0a;}['end'](){const _0x136a94=_0x37e46c;var _0x92431f;switch(this['state']){case _0x2cd48d[_0x136a94(0x4bfe)]:return 0x0===this[_0x136a94(0xa34)]||this[_0x136a94(0x1b12)]===_0x18ddac[_0x136a94(0x1395)]&&this[_0x136a94(0xa34)]!==this[_0x136a94(0x3e8c)]?0x0:this[_0x136a94(0x278c)]();case _0x2cd48d['NumericDecimal']:return this['emitNumericEntity'](0x0,0x2);case _0x2cd48d[_0x136a94(0x23a7)]:return this['emitNumericEntity'](0x0,0x3);case _0x2cd48d[_0x136a94(0x200d)]:return null===(_0x92431f=this[_0x136a94(0x36ab)])||void 0x0===_0x92431f||_0x92431f[_0x136a94(0x2b0f)](this[_0x136a94(0x2252)]),0x0;case _0x2cd48d['EntityStart']:return 0x0;}}}function _0x4a3399(_0x24ac3e){let _0x16523d='';const _0x5d5c81=new _0x376184(_0x24ac3e,_0x193b5d=>_0x16523d+=_0x3e0c4e(_0x193b5d));return function(_0x944083,_0x3976){const _0xa0dc4e=a0_0x11e7;let _0x1732c1=0x0,_0x2d5dba=0x0;for(;(_0x2d5dba=_0x944083[_0xa0dc4e(0x8c9)]('&',_0x2d5dba))>=0x0;){_0x16523d+=_0x944083[_0xa0dc4e(0x384c)](_0x1732c1,_0x2d5dba),_0x5d5c81[_0xa0dc4e(0x2141)](_0x3976);const _0x20c6b6=_0x5d5c81[_0xa0dc4e(0x4c95)](_0x944083,_0x2d5dba+0x1);if(_0x20c6b6<0x0){_0x1732c1=_0x2d5dba+_0x5d5c81[_0xa0dc4e(0x2681)]();break;}_0x1732c1=_0x2d5dba+_0x20c6b6,_0x2d5dba=0x0===_0x20c6b6?_0x1732c1+0x1:_0x1732c1;}const _0x2e0689=_0x16523d+_0x944083['slice'](_0x1732c1);return _0x16523d='',_0x2e0689;};}function _0x539c8e(_0x573843,_0x164d8a,_0x593fae,_0x318822){const 
_0xcfcaa2=_0x37e46c,_0x487a45=(_0x164d8a&_0x3457fd['BRANCH_LENGTH'])>>0x7,_0x4cee08=_0x164d8a&_0x3457fd[_0xcfcaa2(0xa97)];if(0x0===_0x487a45)return 0x0!==_0x4cee08&&_0x318822===_0x4cee08?_0x593fae:-0x1;if(_0x4cee08){const _0x59d68d=_0x318822-_0x4cee08;return _0x59d68d<0x0||_0x59d68d>=_0x487a45?-0x1:_0x573843[_0x593fae+_0x59d68d]-0x1;}let _0x1a288a=_0x593fae,_0x34376e=_0x1a288a+_0x487a45-0x1;for(;_0x1a288a<=_0x34376e;){const _0x18ea77=_0x1a288a+_0x34376e>>>0x1,_0x2687d4=_0x573843[_0x18ea77];if(_0x2687d4<_0x318822)_0x1a288a=_0x18ea77+0x1;else{if(!(_0x2687d4>_0x318822))return _0x573843[_0x18ea77+_0x487a45];_0x34376e=_0x18ea77-0x1;}}return-0x1;}const _0x4e0f5f=_0x4a3399(_0x21291b);_0x4a3399(_0x2c9e85);function _0x14991c(_0x17d19d,_0x189944=_0x18ddac['Legacy']){return _0x4e0f5f(_0x17d19d,_0x189944);}function _0x35f077(_0x436a93){const _0x45a15c=_0x37e46c;for(let _0x27ee2f=0x1;_0x27ee2f<_0x436a93[_0x45a15c(0x1b19)];_0x27ee2f++)_0x436a93[_0x27ee2f][0x0]+=_0x436a93[_0x27ee2f-0x1][0x0]+0x1;return _0x436a93;}new 
Map(_0x35f077([[0x9,_0x37e46c(0x14a7)],[0x0,_0x37e46c(0x4048)],[0x16,_0x37e46c(0x16b7)],[0x0,_0x37e46c(0x2a2)],[0x0,'#'],[0x0,'$'],[0x0,_0x37e46c(0x89d)],[0x0,'&'],[0x0,_0x37e46c(0x36f2)],[0x0,_0x37e46c(0x38fe)],[0x0,')'],[0x0,'*'],[0x0,'+'],[0x0,_0x37e46c(0x3dca)],[0x1,'.'],[0x0,_0x37e46c(0x396c)],[0xa,':'],[0x0,_0x37e46c(0x9a1)],[0x0,{'v':_0x37e46c(0x3026),'n':0x20d2,'o':'<⃒'}],[0x0,{'v':_0x37e46c(0x526e),'n':0x20e5,'o':_0x37e46c(0x3c95)}],[0x0,{'v':_0x37e46c(0x713),'n':0x20d2,'o':'>⃒'}],[0x0,_0x37e46c(0x2b43)],[0x0,_0x37e46c(0x4218)],[0x1a,_0x37e46c(0x1c05)],[0x0,_0x37e46c(0x4230)],[0x0,_0x37e46c(0x2d5d)],[0x0,'^'],[0x0,_0x37e46c(0x59d)],[0x0,_0x37e46c(0x522d)],[0x5,{'n':0x6a,'o':_0x37e46c(0x1878)}],[0x14,_0x37e46c(0x366f)],[0x0,_0x37e46c(0x3c51)],[0x0,_0x37e46c(0x199b)],[0x22,_0x37e46c(0x514b)],[0x0,_0x37e46c(0x975)],[0x0,'¢'],[0x0,_0x37e46c(0x2e77)],[0x0,_0x37e46c(0x42c1)],[0x0,_0x37e46c(0x4950)],[0x0,'¦'],[0x0,_0x37e46c(0x4fe1)],[0x0,_0x37e46c(0x987)],[0x0,'©'],[0x0,'ª'],[0x0,_0x37e46c(0x4c67)],[0x0,_0x37e46c(0x3ef9)],[0x0,_0x37e46c(0x460)],[0x0,'®'],[0x0,_0x37e46c(0xcbc)],[0x0,_0x37e46c(0x693)],[0x0,_0x37e46c(0x3f78)],[0x0,_0x37e46c(0x5277)],[0x0,_0x37e46c(0x15c8)],[0x0,'´'],[0x0,_0x37e46c(0xf32)],[0x0,_0x37e46c(0x4cb9)],[0x0,_0x37e46c(0x4c73)],[0x0,'¸'],[0x0,_0x37e46c(0x222)],[0x0,_0x37e46c(0xccf)],[0x0,'»'],[0x0,_0x37e46c(0x45c5)],[0x0,'½'],[0x0,_0x37e46c(0x28c1)],[0x0,_0x37e46c(0xb25)],[0x0,_0x37e46c(0x3d32)],[0x0,_0x37e46c(0xd01)],[0x0,_0x37e46c(0x23d)],[0x0,'Ã'],[0x0,'Ä'],[0x0,_0x37e46c(0x1b28)],[0x0,_0x37e46c(0x38a5)],[0x0,_0x37e46c(0x3752)],[0x0,_0x37e46c(0x1838)],[0x0,_0x37e46c(0xadc)],[0x0,'Ê'],[0x0,_0x37e46c(0x759)],[0x0,_0x37e46c(0x4cff)],[0x0,'Í'],[0x0,_0x37e46c(0x3d3a)],[0x0,_0x37e46c(0x1948)],[0x0,'Ð'],[0x0,_0x37e46c(0x3684)],[0x0,'Ò'],[0x0,_0x37e46c(0x1f33)],[0x0,'Ô'],[0x0,'Õ'],[0x0,_0x37e46c(0x32d3)],[0x0,'×'],[0x0,_0x37e46c(0x3ceb)],[0x0,_0x37e46c(0x1821)],[0x0,'Ú'],[0x0,_0x37e46c(0x2363)],[0x0,_0x37e46c(0x39c5)],[0x0,_0x37e46c(0x4f10)],[0x0,
_0x37e46c(0x2954)],[0x0,_0x37e46c(0x441)],[0x0,_0x37e46c(0x1d56)],[0x0,_0x37e46c(0x4715)],[0x0,'â'],[0x0,_0x37e46c(0x48af)],[0x0,'ä'],[0x0,_0x37e46c(0x4a8f)],[0x0,_0x37e46c(0x4e57)],[0x0,'ç'],[0x0,_0x37e46c(0x21c0)],[0x0,'é'],[0x0,'ê'],[0x0,_0x37e46c(0x30b0)],[0x0,_0x37e46c(0x1bcd)],[0x0,_0x37e46c(0x4ddb)],[0x0,'î'],[0x0,_0x37e46c(0xef5)],[0x0,_0x37e46c(0x24bf)],[0x0,_0x37e46c(0x45b)],[0x0,_0x37e46c(0x4d35)],[0x0,_0x37e46c(0x484b)],[0x0,'ô'],[0x0,'õ'],[0x0,_0x37e46c(0x161e)],[0x0,_0x37e46c(0xdbf)],[0x0,_0x37e46c(0x2f8c)],[0x0,_0x37e46c(0x394b)],[0x0,'ú'],[0x0,_0x37e46c(0x1203)],[0x0,'ü'],[0x0,_0x37e46c(0x3c40)],[0x0,_0x37e46c(0x65d)],[0x0,_0x37e46c(0x3613)],[0x0,_0x37e46c(0x2682)],[0x0,_0x37e46c(0x1d23)],[0x0,'Ă'],[0x0,_0x37e46c(0x1773)],[0x0,_0x37e46c(0x3376)],[0x0,'ą'],[0x0,_0x37e46c(0x26c7)],[0x0,_0x37e46c(0x3bf6)],[0x0,_0x37e46c(0x4476)],[0x0,_0x37e46c(0x3b76)],[0x0,_0x37e46c(0x34ef)],[0x0,'ċ'],[0x0,_0x37e46c(0xe32)],[0x0,'č'],[0x0,'Ď'],[0x0,_0x37e46c(0x859)],[0x0,_0x37e46c(0x29af)],[0x0,_0x37e46c(0x31d2)],[0x0,_0x37e46c(0x25af)],[0x0,_0x37e46c(0x4d82)],[0x2,'Ė'],[0x0,_0x37e46c(0x1c3a)],[0x0,'Ę'],[0x0,_0x37e46c(0x34a8)],[0x0,_0x37e46c(0x41d7)],[0x0,_0x37e46c(0x23d1)],[0x0,_0x37e46c(0x426f)],[0x0,_0x37e46c(0x3b53)],[0x0,'Ğ'],[0x0,'ğ'],[0x0,_0x37e46c(0x2144)],[0x0,'ġ'],[0x0,_0x37e46c(0x4e78)],[0x1,_0x37e46c(0x23b9)],[0x0,_0x37e46c(0x3b8d)],[0x0,'Ħ'],[0x0,'ħ'],[0x0,'Ĩ'],[0x0,_0x37e46c(0x24aa)],[0x0,'Ī'],[0x0,_0x37e46c(0x3abc)],[0x2,_0x37e46c(0x47c5)],[0x0,_0x37e46c(0x1be2)],[0x0,_0x37e46c(0x4f21)],[0x0,_0x37e46c(0x1787)],[0x0,_0x37e46c(0x1fd2)],[0x0,_0x37e46c(0x4e17)],[0x0,_0x37e46c(0x3fbf)],[0x0,'ĵ'],[0x0,_0x37e46c(0x1680)],[0x0,_0x37e46c(0x29e9)],[0x0,_0x37e46c(0x4097)],[0x0,_0x37e46c(0x317b)],[0x0,_0x37e46c(0x518f)],[0x0,_0x37e46c(0x3eb0)],[0x0,_0x37e46c(0x150b)],[0x0,_0x37e46c(0x403f)],[0x0,_0x37e46c(0x3eb8)],[0x0,_0x37e46c(0x51b9)],[0x0,_0x37e46c(0x3897)],[0x0,_0x37e46c(0x3c2f)],[0x0,_0x37e46c(0x33ff)],[0x0,'Ń'],[0x0,_0x37e46c(0x3c18)],[0x0,_0x37e46c(0x45e1)],
[0x0,_0x37e46c(0x1ceb)],[0x0,'Ň'],[0x0,_0x37e46c(0x1f0c)],[0x0,_0x37e46c(0x340d)],[0x0,_0x37e46c(0x48da)],[0x0,_0x37e46c(0x579)],[0x0,_0x37e46c(0x3a66)],[0x0,'ō'],[0x2,'Ő'],[0x0,_0x37e46c(0x28c7)],[0x0,_0x37e46c(0x4182)],[0x0,'œ'],[0x0,_0x37e46c(0x3d91)],[0x0,_0x37e46c(0x17f6)],[0x0,_0x37e46c(0x5259)],[0x0,_0x37e46c(0xae5)],[0x0,'Ř'],[0x0,_0x37e46c(0x27d0)],[0x0,'Ś'],[0x0,_0x37e46c(0x43ff)],[0x0,_0x37e46c(0x1ea4)],[0x0,_0x37e46c(0x112c)],[0x0,'Ş'],[0x0,'ş'],[0x0,_0x37e46c(0x3b08)],[0x0,_0x37e46c(0x3a25)],[0x0,_0x37e46c(0x3927)],[0x0,_0x37e46c(0x1f2b)],[0x0,_0x37e46c(0x890)],[0x0,_0x37e46c(0x2263)],[0x0,_0x37e46c(0x1796)],[0x0,_0x37e46c(0xfee)],[0x0,_0x37e46c(0x357e)],[0x0,'ũ'],[0x0,_0x37e46c(0x1f62)],[0x0,_0x37e46c(0x260e)],[0x0,_0x37e46c(0xb7f)],[0x0,_0x37e46c(0x13bb)],[0x0,_0x37e46c(0x971)],[0x0,_0x37e46c(0x45d9)],[0x0,'Ű'],[0x0,_0x37e46c(0x3373)],[0x0,_0x37e46c(0x46ce)],[0x0,_0x37e46c(0x2337)],[0x0,_0x37e46c(0x103c)],[0x0,_0x37e46c(0xe90)],[0x0,'Ŷ'],[0x0,'ŷ'],[0x0,'Ÿ'],[0x0,_0x37e46c(0x2e49)],[0x0,_0x37e46c(0x28bd)],[0x0,_0x37e46c(0x111c)],[0x0,'ż'],[0x0,'Ž'],[0x0,'ž'],[0x13,'ƒ'],[0x22,_0x37e46c(0xb3c)],[0x3f,_0x37e46c(0x14bc)],[0x41,_0x37e46c(0x2782)],[0x8e,'ˆ'],[0x0,'ˇ'],[0x10,_0x37e46c(0x22d2)],[0x0,_0x37e46c(0x1833)],[0x0,'˚'],[0x0,'˛'],[0x0,_0x37e46c(0x65f)],[0x0,_0x37e46c(0x3617)],[0x33,'̑'],[0x7f,_0x37e46c(0x32fb)],[0x0,_0x37e46c(0x46f1)],[0x0,_0x37e46c(0x3de6)],[0x0,_0x37e46c(0x2b1a)],[0x0,'Ε'],[0x0,_0x37e46c(0x15ed)],[0x0,'Η'],[0x0,'Θ'],[0x0,_0x37e46c(0x27e)],[0x0,_0x37e46c(0x244e)],[0x0,_0x37e46c(0x145a)],[0x0,_0x37e46c(0x3acd)],[0x0,_0x37e46c(0x1157)],[0x0,_0x37e46c(0xcb4)],[0x0,_0x37e46c(0x3db3)],[0x0,_0x37e46c(0x22ba)],[0x0,_0x37e46c(0x1a72)],[0x1,_0x37e46c(0x1b98)],[0x0,_0x37e46c(0x4ff8)],[0x0,'Υ'],[0x0,_0x37e46c(0x6b1)],[0x0,'Χ'],[0x0,_0x37e46c(0x2c13)],[0x0,_0x37e46c(0x4ffc)],[0x7,'α'],[0x0,_0x37e46c(0x4fe7)],[0x0,_0x37e46c(0x346e)],[0x0,'δ'],[0x0,_0x37e46c(0x2ab9)],[0x0,_0x37e46c(0x1131)],[0x0,_0x37e46c(0x2fd2)],[0x0,_0x37e46c(0x1972)],[0x0,_0x37
e46c(0x41bc)],[0x0,_0x37e46c(0x429d)],[0x0,_0x37e46c(0x40bc)],[0x0,_0x37e46c(0x3b27)],[0x0,_0x37e46c(0x1acd)],[0x0,_0x37e46c(0x1ca6)],[0x0,_0x37e46c(0xeed)],[0x0,'π'],[0x0,'ρ'],[0x0,_0x37e46c(0x5213)],[0x0,_0x37e46c(0x4f7)],[0x0,'τ'],[0x0,_0x37e46c(0x4eb1)],[0x0,_0x37e46c(0x4550)],[0x0,_0x37e46c(0xa1e)],[0x0,_0x37e46c(0x400d)],[0x0,_0x37e46c(0x3462)],[0x7,_0x37e46c(0x4e1c)],[0x0,'ϒ'],[0x2,_0x37e46c(0x2cb)],[0x0,'ϖ'],[0x5,_0x37e46c(0x255)],[0x0,_0x37e46c(0x503e)],[0x12,_0x37e46c(0x4647)],[0x0,_0x37e46c(0x2149)],[0x3,_0x37e46c(0x1724)],[0x0,'϶'],[0xa,_0x37e46c(0x27db)],[0x0,_0x37e46c(0x7a6)],[0x0,'Ѓ'],[0x0,_0x37e46c(0x33cb)],[0x0,_0x37e46c(0x872)],[0x0,_0x37e46c(0x19a4)],[0x0,_0x37e46c(0x4142)],[0x0,'Ј'],[0x0,_0x37e46c(0x25e7)],[0x0,'Њ'],[0x0,_0x37e46c(0x310a)],[0x0,_0x37e46c(0x119a)],[0x1,_0x37e46c(0x1e16)],[0x0,_0x37e46c(0x12c9)],[0x0,'А'],[0x0,_0x37e46c(0xf6c)],[0x0,_0x37e46c(0x192a)],[0x0,_0x37e46c(0x3a26)],[0x0,'Д'],[0x0,'Е'],[0x0,'Ж'],[0x0,_0x37e46c(0x2573)],[0x0,'И'],[0x0,'Й'],[0x0,_0x37e46c(0x493b)],[0x0,'Л'],[0x0,'М'],[0x0,_0x37e46c(0x3cca)],[0x0,_0x37e46c(0x4fb4)],[0x0,_0x37e46c(0x4864)],[0x0,_0x37e46c(0x1bf7)],[0x0,_0x37e46c(0x330e)],[0x0,_0x37e46c(0x2465)],[0x0,'У'],[0x0,_0x37e46c(0x498b)],[0x0,_0x37e46c(0x5230)],[0x0,_0x37e46c(0x3daa)],[0x0,_0x37e46c(0x2b48)],[0x0,'Ш'],[0x0,_0x37e46c(0x1bc3)],[0x0,_0x37e46c(0x40b1)],[0x0,_0x37e46c(0x3a87)],[0x0,'Ь'],[0x0,_0x37e46c(0x3664)],[0x0,'Ю'],[0x0,_0x37e46c(0x2319)],[0x0,_0x37e46c(0x28c6)],[0x0,_0x37e46c(0x3434)],[0x0,_0x37e46c(0x2808)],[0x0,'г'],[0x0,_0x37e46c(0xfd6)],[0x0,_0x37e46c(0x4c93)],[0x0,_0x37e46c(0x845)],[0x0,_0x37e46c(0x1dea)],[0x0,_0x37e46c(0xa5c)],[0x0,_0x37e46c(0x5c0)],[0x0,_0x37e46c(0x1ca0)],[0x0,_0x37e46c(0x331c)],[0x0,_0x37e46c(0x281c)],[0x0,'н'],[0x0,_0x37e46c(0x4130)],[0x0,_0x37e46c(0x8e4)],[0x0,'р'],[0x0,_0x37e46c(0x526)],[0x0,_0x37e46c(0x3aa3)],[0x0,'у'],[0x0,'ф'],[0x0,_0x37e46c(0x1604)],[0x0,_0x37e46c(0x364e)],[0x0,_0x37e46c(0x3917)],[0x0,_0x37e46c(0x1f74)],[0x0,_0x37e46c(0x4726)],[0x0,'ъ'],[
0x0,'ы'],[0x0,_0x37e46c(0x2d2f)],[0x0,_0x37e46c(0x282f)],[0x0,_0x37e46c(0x26bd)],[0x0,_0x37e46c(0x26b0)],[0x1,'ё'],[0x0,_0x37e46c(0x3181)],[0x0,_0x37e46c(0x1234)],[0x0,_0x37e46c(0x251c)],[0x0,_0x37e46c(0x571)],[0x0,_0x37e46c(0x30ba)],[0x0,_0x37e46c(0x336c)],[0x0,_0x37e46c(0x3577)],[0x0,_0x37e46c(0x15ef)],[0x0,_0x37e46c(0x3e69)],[0x0,'ћ'],[0x0,_0x37e46c(0x77f)],[0x1,_0x37e46c(0x1006)],[0x0,'џ'],[0x1ba2,' '],[0x0,_0x37e46c(0x4839)],[0x0,_0x37e46c(0x2673)],[0x0,_0x37e46c(0x15d6)],[0x1,_0x37e46c(0xd67)],[0x0,_0x37e46c(0x19a2)],[0x0,_0x37e46c(0x9d3)],[0x0,' '],[0x0,_0x37e46c(0x1abe)],[0x0,'‌'],[0x0,_0x37e46c(0x2794)],[0x0,_0x37e46c(0x4b78)],[0x0,_0x37e46c(0x3857)],[0x0,_0x37e46c(0x2e31)],[0x2,'–'],[0x0,_0x37e46c(0x34e2)],[0x0,_0x37e46c(0x1d35)],[0x0,_0x37e46c(0x43db)],[0x1,_0x37e46c(0xbc3)],[0x0,_0x37e46c(0x37f0)],[0x0,_0x37e46c(0x2c93)],[0x1,_0x37e46c(0x3eb7)],[0x0,_0x37e46c(0x4997)],[0x0,_0x37e46c(0x3d2f)],[0x1,_0x37e46c(0x4974)],[0x0,_0x37e46c(0x3e93)],[0x0,_0x37e46c(0x4702)],[0x2,_0x37e46c(0x1daf)],[0x0,'…'],[0x9,_0x37e46c(0x4b2c)],[0x0,'‱'],[0x0,_0x37e46c(0x1eef)],[0x0,_0x37e46c(0x50e5)],[0x0,'‴'],[0x0,'‵'],[0x3,_0x37e46c(0x2b11)],[0x0,_0x37e46c(0x532)],[0x3,'‾'],[0x2,'⁁'],[0x1,_0x37e46c(0x13b1)],[0x0,'⁄'],[0xa,_0x37e46c(0x3f5e)],[0x7,_0x37e46c(0x3b77)],[0x7,{'v':' 
','n':0x200a,'o':_0x37e46c(0x2eba)}],[0x0,_0x37e46c(0x371a)],[0x0,_0x37e46c(0x814)],[0x0,_0x37e46c(0x2898)],[0x0,_0x37e46c(0x3ba8)],[0x48,_0x37e46c(0x1142)],[0x2e,_0x37e46c(0x95a)],[0x0,_0x37e46c(0x2f05)],[0x25,_0x37e46c(0x47ce)],[0x2,'℅'],[0x4,_0x37e46c(0x3fd1)],[0x0,_0x37e46c(0x2f59)],[0x0,_0x37e46c(0x13ee)],[0x0,_0x37e46c(0xfc9)],[0x0,_0x37e46c(0x250e)],[0x0,_0x37e46c(0x1c6a)],[0x0,_0x37e46c(0x2ce6)],[0x0,'ℑ'],[0x0,'ℒ'],[0x0,'ℓ'],[0x1,'ℕ'],[0x0,'№'],[0x0,_0x37e46c(0xc3b)],[0x0,_0x37e46c(0x3f8a)],[0x0,_0x37e46c(0x2b74)],[0x0,'ℚ'],[0x0,_0x37e46c(0x4f5e)],[0x0,_0x37e46c(0x19a6)],[0x0,_0x37e46c(0x119f)],[0x0,_0x37e46c(0x2fb3)],[0x3,_0x37e46c(0x3948)],[0x1,_0x37e46c(0xe4b)],[0x2,_0x37e46c(0x4826)],[0x0,'ℨ'],[0x0,'℩'],[0x2,_0x37e46c(0x416d)],[0x0,_0x37e46c(0x23d8)],[0x1,_0x37e46c(0x20a9)],[0x0,_0x37e46c(0xdcb)],[0x0,_0x37e46c(0x3bfc)],[0x1,_0x37e46c(0x1e25)],[0x0,_0x37e46c(0x2c95)],[0x0,_0x37e46c(0x162c)],[0x0,_0x37e46c(0x2c36)],[0x0,'ℷ'],[0x0,_0x37e46c(0x5cc)],[0xc,_0x37e46c(0x43a7)],[0x0,'ⅆ'],[0x0,_0x37e46c(0x332e)],[0x0,'ⅈ'],[0xa,_0x37e46c(0x3a2e)],[0x0,_0x37e46c(0x43cc)],[0x0,_0x37e46c(0x2ec8)],[0x0,_0x37e46c(0x497)],[0x0,_0x37e46c(0x42da)],[0x0,_0x37e46c(0x1c33)],[0x0,_0x37e46c(0xe86)],[0x0,'⅚'],[0x0,_0x37e46c(0xbcd)],[0x0,_0x37e46c(0x4423)],[0x0,_0x37e46c(0x4248)],[0x0,'⅞'],[0x31,_0x37e46c(0x4a7f)],[0x0,_0x37e46c(0x33e9)],[0x0,'→'],[0x0,_0x37e46c(0x1fa3)],[0x0,'↔'],[0x0,_0x37e46c(0x1535)],[0x0,_0x37e46c(0x434a)],[0x0,_0x37e46c(0x1c00)],[0x0,_0x37e46c(0x5108)],[0x0,_0x37e46c(0x3ccb)],[0x0,_0x37e46c(0x206a)],[0x0,_0x37e46c(0x4461)],[0x1,{'v':_0x37e46c(0x2386),'n':0x338,'o':_0x37e46c(0x19db)}],[0x0,_0x37e46c(0x2d55)],[0x0,_0x37e46c(0xb6b)],[0x0,_0x37e46c(0x50c7)],[0x0,_0x37e46c(0x1aca)],[0x0,_0x37e46c(0x200b)],[0x0,_0x37e46c(0x2b7c)],[0x0,'↤'],[0x0,_0x37e46c(0x449b)],[0x0,_0x37e46c(0x3b92)],[0x0,_0x37e46c(0x3004)],[0x1,_0x37e46c(0x7b2)],[0x0,'↪'],[0x0,_0x37e46c(0x352a)],[0x0,_0x37e46c(0x2534)],[0x0,_0x37e46c(0x803)],[0x0,_0x37e46c(0x33f0)],[0x1,_0x37e46c(0x22e1)],[0
x0,_0x37e46c(0x2fd8)],[0x0,'↲'],[0x0,'↳'],[0x1,'↵'],[0x0,_0x37e46c(0xba2)],[0x0,_0x37e46c(0x5177)],[0x2,'↺'],[0x0,_0x37e46c(0x1c3c)],[0x0,_0x37e46c(0x913)],[0x0,_0x37e46c(0xde4)],[0x0,_0x37e46c(0x529d)],[0x0,_0x37e46c(0x4d64)],[0x0,'⇀'],[0x0,_0x37e46c(0xa30)],[0x0,_0x37e46c(0x1954)],[0x0,_0x37e46c(0x11b6)],[0x0,_0x37e46c(0x1910)],[0x0,'⇅'],[0x0,'⇆'],[0x0,_0x37e46c(0x2b26)],[0x0,'⇈'],[0x0,_0x37e46c(0x20c5)],[0x0,_0x37e46c(0x496a)],[0x0,'⇋'],[0x0,_0x37e46c(0x24de)],[0x0,'⇍'],[0x0,'⇎'],[0x0,'⇏'],[0x0,_0x37e46c(0x4513)],[0x0,'⇑'],[0x0,'⇒'],[0x0,'⇓'],[0x0,_0x37e46c(0x1557)],[0x0,_0x37e46c(0x3d62)],[0x0,'⇖'],[0x0,_0x37e46c(0x5b1)],[0x0,'⇘'],[0x0,'⇙'],[0x0,'⇚'],[0x0,'⇛'],[0x1,_0x37e46c(0x6f5)],[0x6,_0x37e46c(0x1314)],[0x0,_0x37e46c(0x3f76)],[0xf,_0x37e46c(0x2e13)],[0x7,_0x37e46c(0x4859)],[0x0,'⇾'],[0x0,_0x37e46c(0x96c)],[0x0,_0x37e46c(0x9ee)],[0x0,'∁'],[0x0,{'v':_0x37e46c(0x4bbf),'n':0x338,'o':'∂̸'}],[0x0,_0x37e46c(0x3abf)],[0x0,'∄'],[0x0,_0x37e46c(0x4a00)],[0x1,_0x37e46c(0xadb)],[0x0,_0x37e46c(0x340a)],[0x0,_0x37e46c(0x2f38)],[0x1,'∋'],[0x0,_0x37e46c(0x29c9)],[0x2,'∏'],[0x0,_0x37e46c(0x2a4a)],[0x0,_0x37e46c(0x4c6d)],[0x0,'−'],[0x0,'∓'],[0x0,'∔'],[0x1,'∖'],[0x0,'∗'],[0x0,'∘'],[0x1,_0x37e46c(0x3122)],[0x2,_0x37e46c(0x8aa)],[0x0,_0x37e46c(0x3895)],[0x0,'∟'],[0x0,{'v':'∠','n':0x20d2,'o':_0x37e46c(0x4745)}],[0x0,'∡'],[0x0,_0x37e46c(0x195b)],[0x0,_0x37e46c(0x967)],[0x0,_0x37e46c(0x664)],[0x0,_0x37e46c(0x1862)],[0x0,_0x37e46c(0x3b06)],[0x0,_0x37e46c(0xd30)],[0x0,'∨'],[0x0,{'v':_0x37e46c(0x4e9e),'n':0xfe00,'o':_0x37e46c(0x15a7)}],[0x0,{'v':'∪','n':0xfe00,'o':_0x37e46c(0x2d84)}],[0x0,_0x37e46c(0x164d)],[0x0,_0x37e46c(0xe8e)],[0x0,_0x37e46c(0x4b1b)],[0x0,'∮'],[0x0,_0x37e46c(0x307f)],[0x0,'∰'],[0x0,_0x37e46c(0x2368)],[0x0,_0x37e46c(0x16d4)],[0x0,_0x37e46c(0x3d27)],[0x0,_0x37e46c(0x1e07)],[0x0,_0x37e46c(0x33ca)],[0x0,'∶'],[0x0,_0x37e46c(0x1b4f)],[0x0,_0x37e46c(0x51bb)],[0x1,'∺'],[0x0,'∻'],[0x0,{'v':'∼','n':0x20d2,'o':_0x37e46c(0x124b)}],[0x0,{'v':_0x37e46c(0x4370),'n':0x331,'o':_0x37
e46c(0x513b)}],[0x0,{'v':_0x37e46c(0x359b),'n':0x333,'o':_0x37e46c(0x3f55)}],[0x0,'∿'],[0x0,_0x37e46c(0x1e1f)],[0x0,_0x37e46c(0x3c0d)],[0x0,{'v':'≂','n':0x338,'o':_0x37e46c(0x42a9)}],[0x0,_0x37e46c(0x3841)],[0x0,'≄'],[0x0,_0x37e46c(0x1759)],[0x0,_0x37e46c(0x1d2a)],[0x0,_0x37e46c(0x21e4)],[0x0,_0x37e46c(0xfb9)],[0x0,_0x37e46c(0x4dd3)],[0x0,_0x37e46c(0x5178)],[0x0,{'v':'≋','n':0x338,'o':_0x37e46c(0x41f9)}],[0x0,_0x37e46c(0x29ec)],[0x0,{'v':_0x37e46c(0x33f1),'n':0x20d2,'o':_0x37e46c(0x31ab)}],[0x0,{'v':'≎','n':0x338,'o':_0x37e46c(0x3201)}],[0x0,{'v':_0x37e46c(0x490f),'n':0x338,'o':_0x37e46c(0x5cf)}],[0x0,{'v':_0x37e46c(0x3a12),'n':0x338,'o':_0x37e46c(0x209a)}],[0x0,_0x37e46c(0x5270)],[0x0,'≒'],[0x0,_0x37e46c(0xc84)],[0x0,_0x37e46c(0xd47)],[0x0,_0x37e46c(0xe1c)],[0x0,_0x37e46c(0x1e33)],[0x0,'≗'],[0x1,_0x37e46c(0x1f7)],[0x0,'≚'],[0x1,_0x37e46c(0xd9b)],[0x2,_0x37e46c(0x3d73)],[0x0,_0x37e46c(0x2a12)],[0x0,{'v':_0x37e46c(0x1e4e),'n':0x20e5,'o':_0x37e46c(0x4e28)}],[0x0,_0x37e46c(0x3dbc)],[0x1,{'v':_0x37e46c(0xfff),'n':0x20d2,'o':'≤⃒'}],[0x0,{'v':_0x37e46c(0x47a1),'n':0x20d2,'o':_0x37e46c(0x1572)}],[0x0,{'v':'≦','n':0x338,'o':_0x37e46c(0x14d6)}],[0x0,{'v':_0x37e46c(0x1776),'n':0x338,'o':_0x37e46c(0x43ad)}],[0x0,{'v':_0x37e46c(0x2178),'n':0xfe00,'o':_0x37e46c(0x4872)}],[0x0,{'v':_0x37e46c(0x2cb0),'n':0xfe00,'o':_0x37e46c(0xaf0)}],[0x0,{'v':'≪','n':new Map(_0x35f077([[0x338,'≪̸'],[0x1d99,_0x37e46c(0x50f6)]]))}],[0x0,{'v':_0x37e46c(0x3fd7),'n':new 
Map(_0x35f077([[0x338,_0x37e46c(0x21db)],[0x1d99,_0x37e46c(0x1e10)]]))}],[0x0,'≬'],[0x0,'≭'],[0x0,_0x37e46c(0x35d)],[0x0,'≯'],[0x0,_0x37e46c(0x288b)],[0x0,'≱'],[0x0,_0x37e46c(0x3d15)],[0x0,'≳'],[0x0,'≴'],[0x0,'≵'],[0x0,_0x37e46c(0x1099)],[0x0,'≷'],[0x0,_0x37e46c(0x1d08)],[0x0,_0x37e46c(0x460e)],[0x0,'≺'],[0x0,'≻'],[0x0,_0x37e46c(0x2257)],[0x0,_0x37e46c(0x4de7)],[0x0,'≾'],[0x0,{'v':_0x37e46c(0x450c),'n':0x338,'o':_0x37e46c(0x1ec9)}],[0x0,_0x37e46c(0x23e8)],[0x0,_0x37e46c(0x4d5b)],[0x0,{'v':'⊂','n':0x20d2,'o':_0x37e46c(0x259e)}],[0x0,{'v':_0x37e46c(0x38f5),'n':0x20d2,'o':_0x37e46c(0x354e)}],[0x0,_0x37e46c(0x223f)],[0x0,'⊅'],[0x0,'⊆'],[0x0,_0x37e46c(0x3358)],[0x0,_0x37e46c(0x3921)],[0x0,'⊉'],[0x0,{'v':_0x37e46c(0x17d8),'n':0xfe00,'o':_0x37e46c(0x12b5)}],[0x0,{'v':_0x37e46c(0x332d),'n':0xfe00,'o':_0x37e46c(0x4018)}],[0x1,_0x37e46c(0x9fb)],[0x0,_0x37e46c(0x3d88)],[0x0,{'v':_0x37e46c(0x3726),'n':0x338,'o':'⊏̸'}],[0x0,{'v':'⊐','n':0x338,'o':_0x37e46c(0x2e7b)}],[0x0,'⊑'],[0x0,_0x37e46c(0x3cf4)],[0x0,{'v':_0x37e46c(0x35fd),'n':0xfe00,'o':_0x37e46c(0x339c)}],[0x0,{'v':_0x37e46c(0x2e6c),'n':0xfe00,'o':'⊔︀'}],[0x0,'⊕'],[0x0,_0x37e46c(0xad7)],[0x0,_0x37e46c(0x125f)],[0x0,'⊘'],[0x0,_0x37e46c(0x2382)],[0x0,_0x37e46c(0x3388)],[0x0,_0x37e46c(0x29f8)],[0x1,_0x37e46c(0x37c0)],[0x0,_0x37e46c(0x4335)],[0x0,_0x37e46c(0x32de)],[0x0,_0x37e46c(0x3c00)],[0x0,_0x37e46c(0x9d2)],[0x0,_0x37e46c(0x1019)],[0x0,_0x37e46c(0xee2)],[0x0,_0x37e46c(0x45b9)],[0x0,_0x37e46c(0x1a48)],[0x1,_0x37e46c(0x3924)],[0x0,_0x37e46c(0xef0)],[0x0,'⊩'],[0x0,'⊪'],[0x0,_0x37e46c(0x45ec)],[0x0,_0x37e46c(0x2696)],[0x0,'⊭'],[0x0,'⊮'],[0x0,_0x37e46c(0x799)],[0x0,_0x37e46c(0x5129)],[0x1,_0x37e46c(0x3fc7)],[0x0,_0x37e46c(0x73f)],[0x0,{'v':_0x37e46c(0x2c65),'n':0x20d2,'o':'⊴⃒'}],[0x0,{'v':_0x37e46c(0xc05),'n':0x20d2,'o':_0x37e46c(0x2c26)}],[0x0,_0x37e46c(0x36df)],[0x0,'⊷'],[0x0,_0x37e46c(0x1d39)],[0x0,_0x37e46c(0x2451)],[0x0,'⊺'],[0x0,_0x37e46c(0x189d)],[0x1,'⊽'],[0x0,_0x37e46c(0x46f6)],[0x0,_0x37e46c(0x5071)],[0x0,_0x37e46c(0x
30c8)],[0x0,_0x37e46c(0x2ffe)],[0x0,_0x37e46c(0x2e4b)],[0x0,_0x37e46c(0x513f)],[0x0,'⋄'],[0x0,_0x37e46c(0x61f)],[0x0,'⋆'],[0x0,_0x37e46c(0x37ca)],[0x0,_0x37e46c(0x1922)],[0x0,_0x37e46c(0x39b4)],[0x0,'⋊'],[0x0,_0x37e46c(0x2533)],[0x0,_0x37e46c(0x4bc4)],[0x0,_0x37e46c(0x4417)],[0x0,'⋎'],[0x0,_0x37e46c(0x3ea3)],[0x0,_0x37e46c(0x4b83)],[0x0,_0x37e46c(0x1b97)],[0x0,_0x37e46c(0x387)],[0x0,_0x37e46c(0x4fd9)],[0x0,'⋔'],[0x0,_0x37e46c(0x1b8e)],[0x0,'⋖'],[0x0,'⋗'],[0x0,{'v':_0x37e46c(0x4fa6),'n':0x338,'o':_0x37e46c(0x51d2)}],[0x0,{'v':'⋙','n':0x338,'o':_0x37e46c(0x636)}],[0x0,{'v':_0x37e46c(0x29ca),'n':0xfe00,'o':_0x37e46c(0xe55)}],[0x0,{'v':'⋛','n':0xfe00,'o':_0x37e46c(0x42fb)}],[0x2,'⋞'],[0x0,_0x37e46c(0xd7a)],[0x0,'⋠'],[0x0,_0x37e46c(0x2512)],[0x0,_0x37e46c(0x1189)],[0x0,_0x37e46c(0x7e6)],[0x2,'⋦'],[0x0,_0x37e46c(0x2aee)],[0x0,_0x37e46c(0x4d07)],[0x0,_0x37e46c(0x4c2)],[0x0,_0x37e46c(0x3bb7)],[0x0,'⋫'],[0x0,_0x37e46c(0x1ccb)],[0x0,_0x37e46c(0x1af7)],[0x0,_0x37e46c(0x1963)],[0x0,_0x37e46c(0x29a9)],[0x0,_0x37e46c(0x62a)],[0x0,_0x37e46c(0x129a)],[0x0,'⋲'],[0x0,_0x37e46c(0x3b32)],[0x0,_0x37e46c(0x31c8)],[0x0,{'v':_0x37e46c(0x2c1b),'n':0x338,'o':_0x37e46c(0x422d)}],[0x0,_0x37e46c(0x302f)],[0x0,'⋷'],[0x1,{'v':_0x37e46c(0x3f1a),'n':0x338,'o':_0x37e46c(0x285e)}],[0x0,_0x37e46c(0x17e6)],[0x0,_0x37e46c(0x145c)],[0x0,'⋼'],[0x0,_0x37e46c(0x391)],[0x0,_0x37e46c(0x86e)],[0x6,_0x37e46c(0x3d34)],[0x0,_0x37e46c(0x2929)],[0x1,_0x37e46c(0x2640)],[0x0,_0x37e46c(0x4023)],[0x0,'⌊'],[0x0,_0x37e46c(0x1594)],[0x0,_0x37e46c(0x4dfa)],[0x0,'⌍'],[0x0,_0x37e46c(0x6a3)],[0x0,'⌏'],[0x0,_0x37e46c(0x2f0f)],[0x1,_0x37e46c(0x588)],[0x0,'⌓'],[0x1,_0x37e46c(0x4c92)],[0x0,_0x37e46c(0x263b)],[0x5,_0x37e46c(0x14cb)],[0x0,_0x37e46c(0x42cf)],[0x0,_0x37e46c(0x39ef)],[0x0,_0x37e46c(0x3bad)],[0x2,_0x37e46c(0x1e4b)],[0x0,_0x37e46c(0x18a8)],[0x9,_0x37e46c(0x4f02)],[0x0,'⌮'],[0x7,_0x37e46c(0x235d)],[0x6,_0x37e46c(0x1581)],[0x1,_0x37e46c(0x2d46)],[0x3c,_0x37e46c(0x4228)],[0x33,'⎰'],[0x0,_0x37e46c(0x29db)],[0x2,_0x37e46c(0x
2b4a)],[0x0,'⎵'],[0x0,'⎶'],[0x25,_0x37e46c(0x2508)],[0x0,'⏝'],[0x0,'⏞'],[0x0,_0x37e46c(0x5a4)],[0x2,'⏢'],[0x4,_0x37e46c(0x46fe)],[0x3b,_0x37e46c(0x385a)],[0xa4,_0x37e46c(0x3fe3)],[0x37,_0x37e46c(0xa8c)],[0x1,_0x37e46c(0x2110)],[0x9,_0x37e46c(0x8a7)],[0x3,_0x37e46c(0x6fb)],[0x3,_0x37e46c(0x3490)],[0x3,_0x37e46c(0x34a1)],[0x3,_0x37e46c(0x2fb)],[0x7,_0x37e46c(0x2996)],[0x7,_0x37e46c(0x4835)],[0x7,_0x37e46c(0x383d)],[0x7,_0x37e46c(0x2b28)],[0x13,_0x37e46c(0x3550)],[0x0,_0x37e46c(0x1aef)],[0x0,_0x37e46c(0x20cf)],[0x0,'╓'],[0x0,'╔'],[0x0,'╕'],[0x0,'╖'],[0x0,_0x37e46c(0x3ce7)],[0x0,'╘'],[0x0,'╙'],[0x0,_0x37e46c(0x49d1)],[0x0,_0x37e46c(0xea8)],[0x0,_0x37e46c(0x135a)],[0x0,_0x37e46c(0x1a06)],[0x0,'╞'],[0x0,'╟'],[0x0,_0x37e46c(0x36d5)],[0x0,_0x37e46c(0x3d2e)],[0x0,_0x37e46c(0x2847)],[0x0,_0x37e46c(0x14c2)],[0x0,_0x37e46c(0x42d9)],[0x0,_0x37e46c(0x2e22)],[0x0,_0x37e46c(0x2b77)],[0x0,_0x37e46c(0xe01)],[0x0,_0x37e46c(0x138e)],[0x0,_0x37e46c(0x21d2)],[0x0,'╪'],[0x0,_0x37e46c(0x37ea)],[0x0,_0x37e46c(0x1e2e)],[0x13,_0x37e46c(0x1f1a)],[0x3,_0x37e46c(0x337a)],[0x3,_0x37e46c(0x3651)],[0x8,'░'],[0x0,_0x37e46c(0xd22)],[0x0,_0x37e46c(0x1261)],[0xd,_0x37e46c(0x26b4)],[0x8,_0x37e46c(0x3edf)],[0x0,'▫'],[0x1,'▭'],[0x0,'▮'],[0x2,'▱'],[0x1,_0x37e46c(0x1b36)],[0x0,_0x37e46c(0x4d3a)],[0x0,_0x37e46c(0xd00)],[0x2,_0x37e46c(0x335d)],[0x0,'▹'],[0x3,_0x37e46c(0x3ac7)],[0x0,_0x37e46c(0x44bc)],[0x0,_0x37e46c(0x4163)],[0x2,'◂'],[0x0,_0x37e46c(0x3e38)],[0x6,_0x37e46c(0x3492)],[0x0,_0x37e46c(0x492b)],[0x20,_0x37e46c(0x24e)],[0x2,_0x37e46c(0x4087)],[0x8,_0x37e46c(0x3d86)],[0x0,_0x37e46c(0x2c3b)],[0x0,_0x37e46c(0x13b7)],[0x0,_0x37e46c(0x2880)],[0x0,_0x37e46c(0xb48)],[0x8,_0x37e46c(0x3c3)],[0x0,'☆'],[0x7,_0x37e46c(0x7ad)],[0x31,'♀'],[0x1,_0x37e46c(0x31f8)],[0x1d,_0x37e46c(0x4294)],[0x2,'♣'],[0x1,_0x37e46c(0x43b0)],[0x0,_0x37e46c(0x1379)],[0x3,_0x37e46c(0x3ef0)],[0x2,_0x37e46c(0x220c)],[0x0,_0x37e46c(0x29c8)],[0x0,'♯'],[0xa3,_0x37e46c(0x1ea6)],[0x3,_0x37e46c(0xfed)],[0x8,_0x37e46c(0x4b8a)],[0x15,'✶'],[0x21,_0
x37e46c(0x1149)],[0x19,_0x37e46c(0x20d2)],[0x0,_0x37e46c(0x335c)],[0x54,'⟈'],[0x0,_0x37e46c(0x1860)],[0x1c,_0x37e46c(0x31b2)],[0x0,_0x37e46c(0x1819)],[0x0,_0x37e46c(0x2124)],[0x0,_0x37e46c(0x3b70)],[0x0,'⟪'],[0x0,_0x37e46c(0x41f)],[0x0,'⟬'],[0x0,_0x37e46c(0x3472)],[0x7,_0x37e46c(0x47e9)],[0x0,_0x37e46c(0x3bfa)],[0x0,'⟷'],[0x0,'⟸'],[0x0,_0x37e46c(0x59e)],[0x0,_0x37e46c(0x1b50)],[0x1,'⟼'],[0x2,_0x37e46c(0x39b7)],[0x102,_0x37e46c(0x3b9b)],[0x0,_0x37e46c(0x475f)],[0x0,_0x37e46c(0x401e)],[0x0,_0x37e46c(0x3c07)],[0x6,_0x37e46c(0x2a1)],[0x0,_0x37e46c(0x8f2)],[0x0,_0x37e46c(0x33de)],[0x0,_0x37e46c(0x27e3)],[0x0,_0x37e46c(0x2f0c)],[0x0,_0x37e46c(0x2077)],[0x0,_0x37e46c(0x1240)],[0x0,_0x37e46c(0x1098)],[0x2,_0x37e46c(0x50ea)],[0x2,_0x37e46c(0x323a)],[0x0,'⤚'],[0x0,_0x37e46c(0x3697)],[0x0,_0x37e46c(0x1902)],[0x0,_0x37e46c(0x4b12)],[0x0,_0x37e46c(0x16bb)],[0x0,'⤟'],[0x0,_0x37e46c(0x457d)],[0x2,'⤣'],[0x0,_0x37e46c(0x108f)],[0x0,'⤥'],[0x0,_0x37e46c(0x3d12)],[0x0,_0x37e46c(0xb38)],[0x0,'⤨'],[0x0,_0x37e46c(0x40c8)],[0x0,'⤪'],[0x8,{'v':_0x37e46c(0x509e),'n':0x338,'o':_0x37e46c(0x3439)}],[0x1,_0x37e46c(0x2003)],[0x0,_0x37e46c(0x4f96)],[0x0,_0x37e46c(0x1014)],[0x0,_0x37e46c(0x1bfb)],[0x0,_0x37e46c(0x4bbd)],[0x2,_0x37e46c(0xecc)],[0x0,_0x37e46c(0x4ac5)],[0x7,_0x37e46c(0x4cf7)],[0x2,_0x37e46c(0x8e2)],[0x0,_0x37e46c(0x373a)],[0x0,_0x37e46c(0x3af7)],[0x0,'⥋'],[0x2,'⥎'],[0x0,'⥏'],[0x0,_0x37e46c(0x11ca)],[0x0,_0x37e46c(0x1543)],[0x0,_0x37e46c(0x10af)],[0x0,_0x37e46c(0xdca)],[0x0,_0x37e46c(0x5058)],[0x0,_0x37e46c(0x1ce9)],[0x0,_0x37e46c(0x49f3)],[0x0,'⥗'],[0x0,_0x37e46c(0x37b4)],[0x0,'⥙'],[0x0,'⥚'],[0x0,'⥛'],[0x0,_0x37e46c(0x285d)],[0x0,_0x37e46c(0xaf2)],[0x0,_0x37e46c(0x1bdf)],[0x0,'⥟'],[0x0,_0x37e46c(0x20e8)],[0x0,'⥡'],[0x0,'⥢'],[0x0,_0x37e46c(0x37e8)],[0x0,_0x37e46c(0x311a)],[0x0,_0x37e46c(0x405c)],[0x0,_0x37e46c(0x42d2)],[0x0,_0x37e46c(0x2b2f)],[0x0,_0x37e46c(0x43de)],[0x0,_0x37e46c(0x478f)],[0x0,_0x37e46c(0x1303)],[0x0,'⥫'],[0x0,_0x37e46c(0xe95)],[0x0,_0x37e46c(0x3ec8)],[0x0,_0x37e46c(0
x40c6)],[0x0,'⥯'],[0x0,_0x37e46c(0x84c)],[0x0,'⥱'],[0x0,_0x37e46c(0x3905)],[0x0,_0x37e46c(0xcf3)],[0x0,_0x37e46c(0x4716)],[0x0,_0x37e46c(0x5118)],[0x0,_0x37e46c(0x3705)],[0x1,_0x37e46c(0x352)],[0x0,'⥹'],[0x1,'⥻'],[0x0,_0x37e46c(0x37e0)],[0x0,_0x37e46c(0x27a6)],[0x0,_0x37e46c(0x4ce8)],[0x0,'⥿'],[0x5,_0x37e46c(0x39e)],[0x0,_0x37e46c(0x4bf4)],[0x4,_0x37e46c(0x1298)],[0x0,'⦌'],[0x0,_0x37e46c(0x247)],[0x0,_0x37e46c(0x4de8)],[0x0,_0x37e46c(0x1c5d)],[0x0,_0x37e46c(0x1f23)],[0x0,_0x37e46c(0x2da9)],[0x0,_0x37e46c(0x365d)],[0x0,_0x37e46c(0x2e8e)],[0x0,_0x37e46c(0x15c4)],[0x0,_0x37e46c(0x1424)],[0x0,'⦖'],[0x3,_0x37e46c(0x40cd)],[0x1,'⦜'],[0x0,_0x37e46c(0x1662)],[0x6,'⦤'],[0x0,_0x37e46c(0x426d)],[0x0,_0x37e46c(0x3222)],[0x0,'⦧'],[0x0,_0x37e46c(0x3ca9)],[0x0,_0x37e46c(0x3db1)],[0x0,'⦪'],[0x0,_0x37e46c(0xf36)],[0x0,_0x37e46c(0x79e)],[0x0,_0x37e46c(0x3b68)],[0x0,_0x37e46c(0x35d5)],[0x0,_0x37e46c(0x30cc)],[0x0,_0x37e46c(0x1a0e)],[0x0,_0x37e46c(0x36cf)],[0x0,_0x37e46c(0x94d)],[0x0,'⦳'],[0x0,_0x37e46c(0x19af)],[0x0,_0x37e46c(0x4ff5)],[0x0,_0x37e46c(0x688)],[0x0,'⦷'],[0x1,_0x37e46c(0x4191)],[0x1,'⦻'],[0x0,_0x37e46c(0x1398)],[0x1,'⦾'],[0x0,_0x37e46c(0x382d)],[0x0,_0x37e46c(0x1ade)],[0x0,'⧁'],[0x0,_0x37e46c(0x393f)],[0x0,_0x37e46c(0x2dc0)],[0x0,_0x37e46c(0x3558)],[0x0,_0x37e46c(0x493e)],[0x3,_0x37e46c(0x1be4)],[0x3,_0x37e46c(0x2ed0)],[0x0,_0x37e46c(0x1d83)],[0x0,{'v':_0x37e46c(0x5107),'n':0x338,'o':_0x37e46c(0x1c3d)}],[0x0,{'v':_0x37e46c(0xdc8),'n':0x338,'o':_0x37e46c(0x3834)}],[0xb,_0x37e46c(0x47eb)],[0x0,_0x37e46c(0x2983)],[0x0,_0x37e46c(0x1c3e)],[0x4,_0x37e46c(0x1df2)],[0x0,_0x37e46c(0x286a)],[0x0,_0x37e46c(0x66f)],[0x5,_0x37e46c(0x33fd)],[0x8,_0x37e46c(0x20c9)],[0x1,'⧶'],[0x9,_0x37e46c(0x2afa)],[0x0,_0x37e46c(0x4447)],[0x0,'⨂'],[0x1,_0x37e46c(0x4cbb)],[0x1,_0x37e46c(0x4055)],[0x5,'⨌'],[0x0,_0x37e46c(0x8d0)],[0x2,_0x37e46c(0x3efb)],[0x0,_0x37e46c(0x4b8b)],[0x0,_0x37e46c(0x4cd4)],[0x0,_0x37e46c(0x4df0)],[0x0,'⨔'],[0x0,'⨕'],[0x0,_0x37e46c(0x2eb7)],[0x0,_0x37e46c(0x3b85)],[0xa,_0x37e46c
(0x1d40)],[0x0,_0x37e46c(0x49d0)],[0x0,_0x37e46c(0x275)],[0x0,'⨥'],[0x0,_0x37e46c(0x3b31)],[0x0,_0x37e46c(0x33f9)],[0x1,_0x37e46c(0xad8)],[0x0,_0x37e46c(0x294f)],[0x2,_0x37e46c(0x38f1)],[0x0,_0x37e46c(0x283a)],[0x0,_0x37e46c(0x84f)],[0x0,_0x37e46c(0x333c)],[0x0,'⨱'],[0x1,_0x37e46c(0x23ea)],[0x0,_0x37e46c(0x1b5f)],[0x0,'⨵'],[0x0,'⨶'],[0x0,'⨷'],[0x0,_0x37e46c(0x36e4)],[0x0,_0x37e46c(0x2987)],[0x0,'⨺'],[0x0,'⨻'],[0x0,_0x37e46c(0x3b4a)],[0x2,'⨿'],[0x0,_0x37e46c(0x4eaa)],[0x1,'⩂'],[0x0,'⩃'],[0x0,_0x37e46c(0x4c03)],[0x0,_0x37e46c(0x10ad)],[0x0,_0x37e46c(0x4b3e)],[0x0,_0x37e46c(0x2bac)],[0x0,_0x37e46c(0x4079)],[0x0,_0x37e46c(0x1c75)],[0x0,'⩊'],[0x0,'⩋'],[0x0,'⩌'],[0x0,_0x37e46c(0x3878)],[0x2,_0x37e46c(0x3b6c)],[0x2,'⩓'],[0x0,_0x37e46c(0x2a4e)],[0x0,_0x37e46c(0xc48)],[0x0,_0x37e46c(0x2104)],[0x0,_0x37e46c(0x4852)],[0x0,_0x37e46c(0x3a5b)],[0x1,_0x37e46c(0x1464)],[0x0,_0x37e46c(0x1342)],[0x0,_0x37e46c(0x4109)],[0x0,_0x37e46c(0x4326)],[0x1,_0x37e46c(0x21ca)],[0x6,_0x37e46c(0x30d5)],[0x3,_0x37e46c(0x1fa1)],[0x2,{'v':_0x37e46c(0x15eb),'n':0x338,'o':'⩭̸'}],[0x0,_0x37e46c(0x1467)],[0x0,_0x37e46c(0x3f35)],[0x0,{'v':_0x37e46c(0x47a8),'n':0x338,'o':'⩰̸'}],[0x0,_0x37e46c(0x44ce)],[0x0,_0x37e46c(0x14f5)],[0x0,_0x37e46c(0x2f0a)],[0x0,_0x37e46c(0x1c65)],[0x0,_0x37e46c(0x3625)],[0x1,'⩷'],[0x0,'⩸'],[0x0,_0x37e46c(0x1cae)],[0x0,_0x37e46c(0x1246)],[0x0,'⩻'],[0x0,_0x37e46c(0x4bb1)],[0x0,{'v':_0x37e46c(0x4ca3),'n':0x338,'o':_0x37e46c(0x19ba)}],[0x0,{'v':_0x37e46c(0x4f71),'n':0x338,'o':_0x37e46c(0xe8a)}],[0x0,_0x37e46c(0x1879)],[0x0,_0x37e46c(0x264b)],[0x0,_0x37e46c(0x1bbe)],[0x0,_0x37e46c(0xdb4)],[0x0,_0x37e46c(0x4dcf)],[0x0,_0x37e46c(0x41df)],[0x0,'⪅'],[0x0,_0x37e46c(0x39f5)],[0x0,_0x37e46c(0x1959)],[0x0,'⪈'],[0x0,_0x37e46c(0x4fe4)],[0x0,_0x37e46c(0x149c)],[0x0,_0x37e46c(0x40a0)],[0x0,_0x37e46c(0x5226)],[0x0,'⪍'],[0x0,_0x37e46c(0x3cb0)],[0x0,_0x37e46c(0x1550)],[0x0,_0x37e46c(0x1dfe)],[0x0,'⪑'],[0x0,_0x37e46c(0x4dd8)],[0x0,_0x37e46c(0x4189)],[0x0,'⪔'],[0x0,_0x37e46c(0x376a)],[0x0,_0x37e46c(0x2
c77)],[0x0,'⪗'],[0x0,_0x37e46c(0x1fcd)],[0x0,_0x37e46c(0xae7)],[0x0,_0x37e46c(0x3f06)],[0x2,'⪝'],[0x0,'⪞'],[0x0,_0x37e46c(0x2fd3)],[0x0,_0x37e46c(0x2cb3)],[0x0,{'v':'⪡','n':0x338,'o':_0x37e46c(0x31a9)}],[0x0,{'v':_0x37e46c(0x1436),'n':0x338,'o':'⪢̸'}],[0x1,_0x37e46c(0x22e2)],[0x0,_0x37e46c(0x4661)],[0x0,_0x37e46c(0x2fff)],[0x0,_0x37e46c(0xbf4)],[0x0,_0x37e46c(0x415a)],[0x0,_0x37e46c(0x3346)],[0x0,'⪪'],[0x0,_0x37e46c(0x1ac1)],[0x0,{'v':'⪬','n':0xfe00,'o':_0x37e46c(0x1985)}],[0x0,{'v':'⪭','n':0xfe00,'o':_0x37e46c(0x465e)}],[0x0,_0x37e46c(0x28d1)],[0x0,{'v':_0x37e46c(0x2bb5),'n':0x338,'o':'⪯̸'}],[0x0,{'v':_0x37e46c(0xd54),'n':0x338,'o':_0x37e46c(0x4f52)}],[0x2,'⪳'],[0x0,_0x37e46c(0x4605)],[0x0,_0x37e46c(0x1c9c)],[0x0,_0x37e46c(0x435f)],[0x0,_0x37e46c(0x3a14)],[0x0,_0x37e46c(0x2836)],[0x0,_0x37e46c(0x40d2)],[0x0,'⪺'],[0x0,_0x37e46c(0x5094)],[0x0,_0x37e46c(0x2a27)],[0x0,_0x37e46c(0x1e9d)],[0x0,'⪾'],[0x0,_0x37e46c(0x4165)],[0x0,_0x37e46c(0x10f6)],[0x0,_0x37e46c(0x231)],[0x0,_0x37e46c(0x23a9)],[0x0,_0x37e46c(0x2c3)],[0x0,_0x37e46c(0x788)],[0x0,{'v':_0x37e46c(0x479d),'n':0x338,'o':'⫅̸'}],[0x0,{'v':_0x37e46c(0x8a4),'n':0x338,'o':_0x37e46c(0x2568)}],[0x0,'⫇'],[0x0,_0x37e46c(0x4723)],[0x2,{'v':_0x37e46c(0x427d),'n':0xfe00,'o':_0x37e46c(0xf80)}],[0x0,{'v':_0x37e46c(0x787),'n':0xfe00,'o':_0x37e46c(0x4be7)}],[0x2,_0x37e46c(0x868)],[0x0,'⫐'],[0x0,'⫑'],[0x0,_0x37e46c(0x4d41)],[0x0,'⫓'],[0x0,'⫔'],[0x0,_0x37e46c(0x36a6)],[0x0,'⫖'],[0x0,_0x37e46c(0x2f7d)],[0x0,_0x37e46c(0x248c)],[0x0,_0x37e46c(0x33b4)],[0x0,'⫚'],[0x0,_0x37e46c(0x2b14)],[0x8,_0x37e46c(0x1fbf)],[0x1,'⫦'],[0x0,_0x37e46c(0x3cac)],[0x0,_0x37e46c(0x5da)],[0x0,_0x37e46c(0x11d6)],[0x1,'⫫'],[0x0,_0x37e46c(0x4e39)],[0x0,_0x37e46c(0x408b)],[0x0,_0x37e46c(0x13b8)],[0x0,_0x37e46c(0x17fa)],[0x0,_0x37e46c(0x3601)],[0x0,'⫱'],[0x0,_0x37e46c(0x28c4)],[0x0,_0x37e46c(0x679)],[0x9,{'v':_0x37e46c(0xe2b),'n':0x20e5,'o':_0x37e46c(0x3172)}],[0xad37,{'n':new 
Map(_0x35f077([[0xdc9c,_0x37e46c(0xf4f)],[0x1,'𝒞'],[0x0,_0x37e46c(0x5162)],[0x2,'𝒢'],[0x2,_0x37e46c(0x1675)],[0x0,_0x37e46c(0x322b)],[0x2,_0x37e46c(0x3137)],[0x0,_0x37e46c(0x17a8)],[0x0,_0x37e46c(0x2cfc)],[0x0,_0x37e46c(0x4ebf)],[0x1,_0x37e46c(0x92b)],[0x0,_0x37e46c(0x71b)],[0x0,_0x37e46c(0xc0d)],[0x0,_0x37e46c(0x1d54)],[0x0,'𝒲'],[0x0,_0x37e46c(0x158b)],[0x0,_0x37e46c(0x172e)],[0x0,'𝒵'],[0x0,_0x37e46c(0x4eee)],[0x0,'𝒷'],[0x0,_0x37e46c(0xd0e)],[0x0,'𝒹'],[0x1,'𝒻'],[0x1,_0x37e46c(0x471d)],[0x0,_0x37e46c(0x49c7)],[0x0,_0x37e46c(0x4286)],[0x0,_0x37e46c(0x2a57)],[0x0,_0x37e46c(0x1f22)],[0x0,'𝓂'],[0x0,_0x37e46c(0x3ee1)],[0x1,_0x37e46c(0x1a5a)],[0x0,_0x37e46c(0x33a3)],[0x0,_0x37e46c(0x9b8)],[0x0,_0x37e46c(0x4b44)],[0x0,_0x37e46c(0x4880)],[0x0,'𝓊'],[0x0,_0x37e46c(0x2d2b)],[0x0,_0x37e46c(0x217a)],[0x0,_0x37e46c(0x4f72)],[0x0,'𝓎'],[0x0,_0x37e46c(0xf3f)],[0x34,_0x37e46c(0x1fdc)],[0x0,_0x37e46c(0x428)],[0x1,_0x37e46c(0x937)],[0x0,'𝔈'],[0x0,_0x37e46c(0x4ed6)],[0x0,_0x37e46c(0xa3e)],[0x2,_0x37e46c(0x1ae9)],[0x0,_0x37e46c(0xec0)],[0x0,_0x37e46c(0x1a16)],[0x0,'𝔐'],[0x0,_0x37e46c(0x34a5)],[0x0,_0x37e46c(0x33e1)],[0x0,_0x37e46c(0x4c44)],[0x0,_0x37e46c(0x28bb)],[0x1,_0x37e46c(0x2744)],[0x0,_0x37e46c(0xdb5)],[0x0,_0x37e46c(0x10a1)],[0x0,_0x37e46c(0x78a)],[0x0,_0x37e46c(0x4411)],[0x0,_0x37e46c(0xb19)],[0x0,_0x37e46c(0x1f7d)],[0x1,_0x37e46c(0x4e61)],[0x0,_0x37e46c(0x39fb)],[0x0,_0x37e46c(0x28b2)],[0x0,'𝔡'],[0x0,_0x37e46c(0x4121)],[0x0,_0x37e46c(0xaad)],[0x0,_0x37e46c(0x30d)],[0x0,_0x37e46c(0x4df7)],[0x0,_0x37e46c(0x568)],[0x0,_0x37e46c(0x442e)],[0x0,'𝔨'],[0x0,_0x37e46c(0x4e99)],[0x0,_0x37e46c(0x1267)],[0x0,_0x37e46c(0x2d67)],[0x0,_0x37e46c(0x2bf7)],[0x0,_0x37e46c(0x3720)],[0x0,'𝔮'],[0x0,_0x37e46c(0x310d)],[0x0,_0x37e46c(0x2bf9)],[0x0,_0x37e46c(0x1bfe)],[0x0,_0x37e46c(0x4010)],[0x0,_0x37e46c(0x11a6)],[0x0,_0x37e46c(0x5e0)],[0x0,_0x37e46c(0x39ed)],[0x0,'𝔶'],[0x0,_0x37e46c(0x3876)],[0x0,_0x37e46c(0x4769)],[0x0,_0x37e46c(0x1f20)],[0x1,'𝔻'],[0x0,_0x37e46c(0x15c1)],[0x0,_0x37e46c(0x1b17)],[0x0,
_0x37e46c(0x3111)],[0x1,'𝕀'],[0x0,_0x37e46c(0x3a38)],[0x0,'𝕂'],[0x0,_0x37e46c(0x2067)],[0x0,_0x37e46c(0x17f9)],[0x1,_0x37e46c(0x3b8c)],[0x3,'𝕊'],[0x0,_0x37e46c(0x2ea1)],[0x0,_0x37e46c(0x2b92)],[0x0,_0x37e46c(0x4c43)],[0x0,_0x37e46c(0x3133)],[0x0,_0x37e46c(0x24c6)],[0x0,_0x37e46c(0x21f2)],[0x1,_0x37e46c(0x15df)],[0x0,_0x37e46c(0x4eb0)],[0x0,_0x37e46c(0x2bc1)],[0x0,_0x37e46c(0x1480)],[0x0,_0x37e46c(0x1253)],[0x0,'𝕗'],[0x0,_0x37e46c(0x45ca)],[0x0,_0x37e46c(0x793)],[0x0,_0x37e46c(0x3e4d)],[0x0,_0x37e46c(0x2dfe)],[0x0,_0x37e46c(0x3196)],[0x0,_0x37e46c(0xa85)],[0x0,'𝕞'],[0x0,_0x37e46c(0x286b)],[0x0,_0x37e46c(0xfd8)],[0x0,'𝕡'],[0x0,_0x37e46c(0x3d08)],[0x0,_0x37e46c(0x2986)],[0x0,_0x37e46c(0x1c67)],[0x0,_0x37e46c(0x266b)],[0x0,_0x37e46c(0x3bd1)],[0x0,'𝕧'],[0x0,_0x37e46c(0x360d)],[0x0,_0x37e46c(0x4480)],[0x0,_0x37e46c(0x3c6c)],[0x0,_0x37e46c(0x97b)]]))}],[0x22ca,_0x37e46c(0x1086)],[0x0,_0x37e46c(0x38dc)],[0x0,_0x37e46c(0x34e1)],[0x0,_0x37e46c(0x1f34)],[0x0,_0x37e46c(0x3f22)]]));const _0x2127d0=new Map([[0x22,_0x37e46c(0x2a2)],[0x26,_0x37e46c(0x2a36)],[0x27,_0x37e46c(0x36f2)],[0x3c,_0x37e46c(0x3026)],[0x3e,_0x37e46c(0x713)]]);String[_0x37e46c(0x3b3c)][_0x37e46c(0x32ff)];function _0x2eb50d(_0x269d99,_0x1263ac){return function(_0x3e4ad1){const _0x3f17c6=a0_0x11e7;let _0x24b9ea,_0x5112b2=0x0,_0x42f9e2='';for(;_0x24b9ea=_0x269d99['exec'](_0x3e4ad1);)_0x5112b2!==_0x24b9ea[_0x3f17c6(0x3bb5)]&&(_0x42f9e2+=_0x3e4ad1[_0x3f17c6(0x37b5)](_0x5112b2,_0x24b9ea[_0x3f17c6(0x3bb5)])),_0x42f9e2+=_0x1263ac[_0x3f17c6(0xf9e)](_0x24b9ea[0x0][_0x3f17c6(0x4955)](0x0)),_0x5112b2=_0x24b9ea[_0x3f17c6(0x3bb5)]+0x1;return _0x42f9e2+_0x3e4ad1[_0x3f17c6(0x37b5)](_0x5112b2);};}_0x2eb50d(/[&<>'"]/g,_0x2127d0),_0x2eb50d(/["&\u00A0]/g,new Map([[0x22,_0x37e46c(0x2a2)],[0x26,_0x37e46c(0x2a36)],[0xa0,_0x37e46c(0x514b)]])),_0x2eb50d(/[&<>\u00A0]/g,new Map([[0x26,'&'],[0x3c,_0x37e46c(0x3026)],[0x3e,'>'],[0xa0,_0x37e46c(0x514b)]]));var _0x92ee65,_0x5abf83;function _0x297e70(_0x37548f){const 
_0x1ec548=_0x37e46c;return _0x1ec548(0x3957)===function(_0x4f438a){const _0x27bb96=_0x1ec548;return Object[_0x27bb96(0x3b3c)][_0x27bb96(0x8e8)][_0x27bb96(0x236b)](_0x4f438a);}(_0x37548f);}!function(_0x531b75){const _0x2c1794=_0x37e46c;_0x531b75[_0x531b75[_0x2c1794(0x1743)]=0x0]='XML',_0x531b75[_0x531b75['HTML']=0x1]=_0x2c1794(0x4a67);}(_0x92ee65||(_0x92ee65={})),function(_0x1d1544){const _0x1fce28=_0x37e46c;_0x1d1544[_0x1d1544['UTF8']=0x0]=_0x1fce28(0x2a20),_0x1d1544[_0x1d1544[_0x1fce28(0x22ae)]=0x1]=_0x1fce28(0x22ae),_0x1d1544[_0x1d1544[_0x1fce28(0x7eb)]=0x2]=_0x1fce28(0x7eb),_0x1d1544[_0x1d1544[_0x1fce28(0x1395)]=0x3]=_0x1fce28(0x1395),_0x1d1544[_0x1d1544[_0x1fce28(0x2f37)]=0x4]=_0x1fce28(0x2f37);}(_0x5abf83||(_0x5abf83={}));const _0x5249d2=Object[_0x37e46c(0x3b3c)][_0x37e46c(0x2427)];function _0x2c462b(_0x15225c,_0x3e1d9c){return _0x5249d2['call'](_0x15225c,_0x3e1d9c);}function _0xa6cec9(_0x34446e){const _0x4e811a=_0x37e46c;return Array['prototype']['slice'][_0x4e811a(0x236b)](arguments,0x1)['forEach'](function(_0x29b17c){const _0x48e276=_0x4e811a;if(_0x29b17c){if(_0x48e276(0x20c7)!=typeof _0x29b17c)throw new TypeError(_0x29b17c+_0x48e276(0x41d5));Object[_0x48e276(0x1ea9)](_0x29b17c)[_0x48e276(0xa21)](function(_0x11a36a){_0x34446e[_0x11a36a]=_0x29b17c[_0x11a36a];});}}),_0x34446e;}function _0x38c141(_0x3fff2d,_0x5557a5,_0x36f203){const _0x357be4=_0x37e46c;return[][_0x357be4(0x1d1d)](_0x3fff2d[_0x357be4(0x384c)](0x0,_0x5557a5),_0x36f203,_0x3fff2d[_0x357be4(0x384c)](_0x5557a5+0x1));}function _0x11f3ea(_0x307bb3){return!(_0x307bb3>=0xd800&&_0x307bb3<=0xdfff)&&(!(_0x307bb3>=0xfdd0&&_0x307bb3<=0xfdef)&&(!!(0xffff&~_0x307bb3&&0xfffe!=(0xffff&_0x307bb3))&&(!(_0x307bb3>=0x0&&_0x307bb3<=0x8)&&(0xb!==_0x307bb3&&(!(_0x307bb3>=0xe&&_0x307bb3<=0x1f)&&(!(_0x307bb3>=0x7f&&_0x307bb3<=0x9f)&&!(_0x307bb3>0x10ffff)))))));}function _0x463426(_0x3946a7){const _0x5a2f18=_0x37e46c;if(_0x3946a7>0xffff){const 
_0x3a499a=0xd800+((_0x3946a7-=0x10000)>>0xa),_0x17bafa=0xdc00+(0x3ff&_0x3946a7);return String[_0x5a2f18(0x49bf)](_0x3a499a,_0x17bafa);}return String[_0x5a2f18(0x49bf)](_0x3946a7);}const _0x5f50a6=/\\([!"#$%&'()*+,\-./:;<=>?@[\\\]^_`{|}~])/g,_0x35a7f7=new RegExp(_0x5f50a6[_0x37e46c(0x33b0)]+'|'+/&([a-z#][a-z0-9]{1,31});/gi[_0x37e46c(0x33b0)],'gi'),_0x27c9f1=/^#((?:x[a-f0-9]{1,8}|[0-9]{1,8}))$/i;function _0x3833ce(_0x1554cd){const _0x392341=_0x37e46c;return _0x1554cd[_0x392341(0x8c9)]('\x5c')<0x0?_0x1554cd:_0x1554cd[_0x392341(0x741)](_0x5f50a6,'$1');}function _0x5bec25(_0x3987b2){const _0x3242fb=_0x37e46c;return _0x3987b2['indexOf']('\x5c')<0x0&&_0x3987b2['indexOf']('&')<0x0?_0x3987b2:_0x3987b2[_0x3242fb(0x741)](_0x35a7f7,function(_0x4f7d72,_0x48b664,_0x5f5a1b){return _0x48b664||function(_0x2456a4,_0x5466ed){const _0x248d03=a0_0x11e7;if(0x23===_0x5466ed[_0x248d03(0x4955)](0x0)&&_0x27c9f1[_0x248d03(0x1769)](_0x5466ed)){const _0x34a63='x'===_0x5466ed[0x1][_0x248d03(0x6e8)]()?parseInt(_0x5466ed[_0x248d03(0x384c)](0x2),0x10):parseInt(_0x5466ed[_0x248d03(0x384c)](0x1),0xa);return _0x11f3ea(_0x34a63)?_0x463426(_0x34a63):_0x2456a4;}const _0x201a8a=_0x14991c(_0x2456a4);return _0x201a8a!==_0x2456a4?_0x201a8a:_0x2456a4;}(_0x4f7d72,_0x5f5a1b);});}const _0x595751=/[&<>"]/,_0x31b156=/[&<>"]/g,_0x288d45={'&':_0x37e46c(0x2a36),'<':'<','>':_0x37e46c(0x713),'\x22':_0x37e46c(0x2a2)};function _0x4adb61(_0x40fd58){return _0x288d45[_0x40fd58];}function _0x3191fd(_0x106d2c){const _0x1cc1ba=_0x37e46c;return _0x595751[_0x1cc1ba(0x1769)](_0x106d2c)?_0x106d2c['replace'](_0x31b156,_0x4adb61):_0x106d2c;}const _0x32e2a4=/[.?*+^$[\]\\(){}|-]/g;function _0x746769(_0x3e948d){const _0x49f835=_0x37e46c;return _0x3e948d[_0x49f835(0x741)](_0x32e2a4,'\x5c$&');}function _0x290708(_0x267ec6){switch(_0x267ec6){case 0x9:case 0x20:return!0x0;}return!0x1;}function _0x138dc2(_0x203e83){if(_0x203e83>=0x2000&&_0x203e83<=0x200a)return!0x0;switch(_0x203e83){case 0x9:case 0xa:case 0xb:case 0xc:case 0xd:case 
0x20:case 0xa0:case 0x1680:case 0x202f:case 0x205f:case 0x3000:return!0x0;}return!0x1;}function _0x1f4fb3(_0x19f82a){return _0x4bf2bf['test'](_0x19f82a);}function _0x5950b9(_0x10dd1a){switch(_0x10dd1a){case 0x21:case 0x22:case 0x23:case 0x24:case 0x25:case 0x26:case 0x27:case 0x28:case 0x29:case 0x2a:case 0x2b:case 0x2c:case 0x2d:case 0x2e:case 0x2f:case 0x3a:case 0x3b:case 0x3c:case 0x3d:case 0x3e:case 0x3f:case 0x40:case 0x5b:case 0x5c:case 0x5d:case 0x5e:case 0x5f:case 0x60:case 0x7b:case 0x7c:case 0x7d:case 0x7e:return!0x0;default:return!0x1;}}function _0x1fe7b1(_0x16eb35){const _0x1d758a=_0x37e46c;return _0x16eb35=_0x16eb35['trim']()['replace'](/\s+/g,'\x20'),'Ṿ'==='ẞ'[_0x1d758a(0x6e8)]()&&(_0x16eb35=_0x16eb35[_0x1d758a(0x741)](/ẞ/g,'ß')),_0x16eb35['toLowerCase']()[_0x1d758a(0x44ff)]();}const _0x5af902={'mdurl':_0x1ef793,'ucmicro':_0x129aa7};function _0x3592ba(_0x5cd10b,_0x4293dc,_0x54c4cd){const _0x43df61=_0x37e46c;let _0x971ef0,_0x4d1fb4,_0x741e47,_0x5f0537;const _0x1af380=_0x5cd10b[_0x43df61(0x3f9b)],_0x437995=_0x5cd10b['pos'];for(_0x5cd10b['pos']=_0x4293dc+0x1,_0x971ef0=0x1;_0x5cd10b[_0x43df61(0x333f)]<_0x1af380;){if(_0x741e47=_0x5cd10b[_0x43df61(0x3d6)][_0x43df61(0x4955)](_0x5cd10b[_0x43df61(0x333f)]),0x5d===_0x741e47&&(_0x971ef0--,0x0===_0x971ef0)){_0x4d1fb4=!0x0;break;}if(_0x5f0537=_0x5cd10b[_0x43df61(0x333f)],_0x5cd10b['md'][_0x43df61(0x2988)][_0x43df61(0x4928)](_0x5cd10b),0x5b===_0x741e47){if(_0x5f0537===_0x5cd10b[_0x43df61(0x333f)]-0x1)_0x971ef0++;else{if(_0x54c4cd)return _0x5cd10b[_0x43df61(0x333f)]=_0x437995,-0x1;}}}let _0xf797e0=-0x1;return _0x4d1fb4&&(_0xf797e0=_0x5cd10b[_0x43df61(0x333f)]),_0x5cd10b[_0x43df61(0x333f)]=_0x437995,_0xf797e0;}function _0x1cba23(_0x234f8e,_0xd5fe7b,_0x55e9c9){const _0x33efb0=_0x37e46c;let _0x59707a,_0x2cd33e=_0xd5fe7b;const 
_0x25dddd={'ok':!0x1,'pos':0x0,'lines':0x0,'str':''};if(0x3c===_0x234f8e['charCodeAt'](_0x2cd33e)){for(_0x2cd33e++;_0x2cd33e<_0x55e9c9;){if(_0x59707a=_0x234f8e[_0x33efb0(0x4955)](_0x2cd33e),0xa===_0x59707a)return _0x25dddd;if(0x3c===_0x59707a)return _0x25dddd;if(0x3e===_0x59707a)return _0x25dddd['pos']=_0x2cd33e+0x1,_0x25dddd[_0x33efb0(0x257f)]=_0x5bec25(_0x234f8e[_0x33efb0(0x384c)](_0xd5fe7b+0x1,_0x2cd33e)),_0x25dddd['ok']=!0x0,_0x25dddd;0x5c===_0x59707a&&_0x2cd33e+0x1<_0x55e9c9?_0x2cd33e+=0x2:_0x2cd33e++;}return _0x25dddd;}let _0x5224b8=0x0;for(;_0x2cd33e<_0x55e9c9&&(_0x59707a=_0x234f8e[_0x33efb0(0x4955)](_0x2cd33e),0x20!==_0x59707a)&&!(_0x59707a<0x20||0x7f===_0x59707a);)if(0x5c===_0x59707a&&_0x2cd33e+0x1<_0x55e9c9){if(0x20===_0x234f8e[_0x33efb0(0x4955)](_0x2cd33e+0x1))break;_0x2cd33e+=0x2;}else{if(0x28===_0x59707a&&(_0x5224b8++,_0x5224b8>0x20))return _0x25dddd;if(0x29===_0x59707a){if(0x0===_0x5224b8)break;_0x5224b8--;}_0x2cd33e++;}return _0xd5fe7b===_0x2cd33e||0x0!==_0x5224b8||(_0x25dddd[_0x33efb0(0x257f)]=_0x5bec25(_0x234f8e[_0x33efb0(0x384c)](_0xd5fe7b,_0x2cd33e)),_0x25dddd[_0x33efb0(0x333f)]=_0x2cd33e,_0x25dddd['ok']=!0x0),_0x25dddd;}function _0x26920b(_0x2335ab,_0x33323c,_0xd8274a){const _0x416dbd=_0x37e46c;let _0x253478,_0x2abcd0,_0x2d95fa=0x0,_0x1e29ff=_0x33323c;const _0x2c2ba3={'ok':!0x1,'pos':0x0,'lines':0x0,'str':''};if(_0x1e29ff>=_0xd8274a)return _0x2c2ba3;if(_0x2abcd0=_0x2335ab['charCodeAt'](_0x1e29ff),0x22!==_0x2abcd0&&0x27!==_0x2abcd0&&0x28!==_0x2abcd0)return _0x2c2ba3;for(_0x1e29ff++,0x28===_0x2abcd0&&(_0x2abcd0=0x29);_0x1e29ff<_0xd8274a;){if(_0x253478=_0x2335ab[_0x416dbd(0x4955)](_0x1e29ff),_0x253478===_0x2abcd0)return _0x2c2ba3[_0x416dbd(0x333f)]=_0x1e29ff+0x1,_0x2c2ba3[_0x416dbd(0x203f)]=_0x2d95fa,_0x2c2ba3[_0x416dbd(0x257f)]=_0x5bec25(_0x2335ab[_0x416dbd(0x384c)](_0x33323c+0x1,_0x1e29ff)),_0x2c2ba3['ok']=!0x0,_0x2c2ba3;if(0x28===_0x253478&&0x29===_0x2abcd0)return 
_0x2c2ba3;0xa===_0x253478?_0x2d95fa++:0x5c===_0x253478&&_0x1e29ff+0x1<_0xd8274a&&(_0x1e29ff++,0xa===_0x2335ab[_0x416dbd(0x4955)](_0x1e29ff)&&_0x2d95fa++),_0x1e29ff++;}return _0x2c2ba3;}const _0x4f1e73={};function _0x1af13d(){const _0x2d65bf=_0x37e46c;this[_0x2d65bf(0x39d9)]=_0xa6cec9({},_0x4f1e73);}_0x4f1e73['code_inline']=function(_0x5d38f8,_0x1e1023,_0x3d8069,_0x4c4f24,_0x494d83){const _0x369089=_0x37e46c,_0x26e6e6=_0x5d38f8[_0x1e1023];return _0x369089(0x8be)+_0x494d83[_0x369089(0x2e74)](_0x26e6e6)+'>'+_0x3191fd(_0x26e6e6['content'])+_0x369089(0x3a91);},_0x4f1e73[_0x37e46c(0xfaa)]=function(_0x3474b9,_0x2d85fb,_0x2367d1,_0x2026b0,_0x2e5a8f){const _0x1f6807=_0x37e46c,_0x2bcc46=_0x3474b9[_0x2d85fb];return''+_0x166648+_0x5f25a5(0x3cc9);}return _0x5f25a5(0x1a03)+_0x4892c1[_0x5f25a5(0x2e74)](_0x33d59d)+'>'+_0x166648+_0x5f25a5(0x3cc9);},_0x4f1e73[_0x37e46c(0x178c)]=function(_0x538dff,_0x2a74de,_0x569575,_0xcb657f,_0x1d19fb){const _0xa291ed=_0x37e46c,_0x26b30f=_0x538dff[_0x2a74de];return _0x26b30f[_0xa291ed(0x70e)][_0x26b30f[_0xa291ed(0x3eac)](_0xa291ed(0x43f3))][0x1]=_0x1d19fb[_0xa291ed(0x3af8)](_0x26b30f[_0xa291ed(0x4c3e)],_0x569575,_0xcb657f),_0x1d19fb[_0xa291ed(0x19dd)](_0x538dff,_0x2a74de,_0x569575);},_0x4f1e73[_0x37e46c(0x47b4)]=function(_0x44e91a,_0x113ca2,_0x47c3ed){const _0x536b1a=_0x37e46c;return _0x47c3ed[_0x536b1a(0x1cd3)]?'\x0a':_0x536b1a(0x3cb);},_0x4f1e73[_0x37e46c(0x27f7)]=function(_0x1e81b8,_0x21eb29,_0x377f79){const _0x2192ac=_0x37e46c;return _0x377f79[_0x2192ac(0x4d46)]?_0x377f79[_0x2192ac(0x1cd3)]?_0x2192ac(0x18fa):_0x2192ac(0x3cb):'\x0a';},_0x4f1e73[_0x37e46c(0x4006)]=function(_0x4f0b51,_0x38c1c2){return _0x3191fd(_0x4f0b51[_0x38c1c2]['content']);},_0x4f1e73[_0x37e46c(0x4bcf)]=function(_0x4155d5,_0x11d42d){const _0x85facd=_0x37e46c;return _0x4155d5[_0x11d42d][_0x85facd(0x484f)];},_0x4f1e73[_0x37e46c(0x34c3)]=function(_0x57f875,_0x58ec67){return _0x57f875[_0x58ec67]['content'];},_0x1af13d[_0x37e46c(0x3b3c)][_0x37e46c(0x2e74)]=function(_0x50f92f){const 
_0x6b80c4=_0x37e46c;let _0x3e5a6d,_0x1f8a46,_0x7b1500;if(!_0x50f92f['attrs'])return'';for(_0x7b1500='',_0x3e5a6d=0x0,_0x1f8a46=_0x50f92f[_0x6b80c4(0x70e)][_0x6b80c4(0x1b19)];_0x3e5a6d<_0x1f8a46;_0x3e5a6d++)_0x7b1500+='\x20'+_0x3191fd(_0x50f92f[_0x6b80c4(0x70e)][_0x3e5a6d][0x0])+'=\x22'+_0x3191fd(_0x50f92f[_0x6b80c4(0x70e)][_0x3e5a6d][0x1])+'\x22';return _0x7b1500;},_0x1af13d['prototype'][_0x37e46c(0x19dd)]=function(_0x3da31a,_0x4f4040,_0x58d968){const _0x428bcf=_0x37e46c,_0x541183=_0x3da31a[_0x4f4040];let _0x241271='';if(_0x541183[_0x428bcf(0x53f)])return'';_0x541183[_0x428bcf(0x1f2e)]&&-0x1!==_0x541183[_0x428bcf(0x3d81)]&&_0x4f4040&&_0x3da31a[_0x4f4040-0x1][_0x428bcf(0x53f)]&&(_0x241271+='\x0a'),_0x241271+=(-0x1===_0x541183[_0x428bcf(0x3d81)]?'\x0a':'>',_0x241271;},_0x1af13d['prototype'][_0x37e46c(0x20e4)]=function(_0x25b271,_0x55ca66,_0x3ab7bb){const _0xaad64c=_0x37e46c;let _0x3b077a='';const _0x4b251f=this[_0xaad64c(0x39d9)];for(let _0x120099=0x0,_0x3cec41=_0x25b271[_0xaad64c(0x1b19)];_0x120099<_0x3cec41;_0x120099++){const _0x285d40=_0x25b271[_0x120099][_0xaad64c(0xcfc)];void 0x0!==_0x4b251f[_0x285d40]?_0x3b077a+=_0x4b251f[_0x285d40](_0x25b271,_0x120099,_0x55ca66,_0x3ab7bb,this):_0x3b077a+=this[_0xaad64c(0x19dd)](_0x25b271,_0x120099,_0x55ca66);}return _0x3b077a;},_0x1af13d[_0x37e46c(0x3b3c)]['renderInlineAsText']=function(_0x4853b5,_0xad15bb,_0x5f0898){const _0x3f816e=_0x37e46c;let _0x18f905='';for(let _0x2f04de=0x0,_0x27b0b0=_0x4853b5[_0x3f816e(0x1b19)];_0x2f04de<_0x27b0b0;_0x2f04de++)switch(_0x4853b5[_0x2f04de][_0x3f816e(0xcfc)]){case _0x3f816e(0x4006):case _0x3f816e(0x34c3):case _0x3f816e(0x4bcf):_0x18f905+=_0x4853b5[_0x2f04de]['content'];break;case _0x3f816e(0x178c):_0x18f905+=this[_0x3f816e(0x3af8)](_0x4853b5[_0x2f04de][_0x3f816e(0x4c3e)],_0xad15bb,_0x5f0898);break;case _0x3f816e(0x27f7):case _0x3f816e(0x47b4):_0x18f905+='\x0a';}return _0x18f905;},_0x1af13d['prototype']['render']=function(_0x50e698,_0xcc0c45,_0x41262d){const _0x3bd126=_0x37e46c;let 
_0x205e6a='';const _0x5e9a88=this[_0x3bd126(0x39d9)];for(let _0x5d43a5=0x0,_0x142711=_0x50e698[_0x3bd126(0x1b19)];_0x5d43a5<_0x142711;_0x5d43a5++){const _0x2145f2=_0x50e698[_0x5d43a5][_0x3bd126(0xcfc)];'inline'===_0x2145f2?_0x205e6a+=this[_0x3bd126(0x20e4)](_0x50e698[_0x5d43a5][_0x3bd126(0x4c3e)],_0xcc0c45,_0x41262d):void 0x0!==_0x5e9a88[_0x2145f2]?_0x205e6a+=_0x5e9a88[_0x2145f2](_0x50e698,_0x5d43a5,_0xcc0c45,_0x41262d,this):_0x205e6a+=this[_0x3bd126(0x19dd)](_0x50e698,_0x5d43a5,_0xcc0c45,_0x41262d);}return _0x205e6a;};const _0x5bf913=_0x1af13d;function _0x4e0123(){const _0x288296=_0x37e46c;this[_0x288296(0x3f2e)]=[],this[_0x288296(0x4b69)]=null;}_0x4e0123[_0x37e46c(0x3b3c)][_0x37e46c(0x2bed)]=function(_0x433679){const _0x5ecbee=_0x37e46c;for(let _0xae3957=0x0;_0xae3957=0x0&&(_0x366251=this[_0x41bf5d(0x70e)][_0x13d492][0x1]),_0x366251;},_0x485d06[_0x37e46c(0x3b3c)]['attrJoin']=function(_0x38c7f5,_0x1df0e3){const _0x1dc38a=_0x37e46c,_0x59933a=this[_0x1dc38a(0x3eac)](_0x38c7f5);_0x59933a<0x0?this[_0x1dc38a(0x34c2)]([_0x38c7f5,_0x1df0e3]):this[_0x1dc38a(0x70e)][_0x59933a][0x1]=this[_0x1dc38a(0x70e)][_0x59933a][0x1]+'\x20'+_0x1df0e3;};const _0x16b151=_0x485d06;function _0x2085b7(_0x5d99cf,_0x41a436,_0x24813a){const _0x36df56=_0x37e46c;this[_0x36df56(0x3d6)]=_0x5d99cf,this[_0x36df56(0xe1a)]=_0x24813a,this[_0x36df56(0x1c34)]=[],this[_0x36df56(0x2937)]=!0x1,this['md']=_0x41a436;}_0x2085b7[_0x37e46c(0x3b3c)][_0x37e46c(0x412a)]=_0x16b151;const _0x564ecc=_0x2085b7,_0x38ce01=/\r\n?|\n/g,_0x4825f3=/\0/g;function _0x1f565a(_0x18906c){const _0x4e63fd=_0x37e46c;return/^<\/a\s*>/i[_0x4e63fd(0x1769)](_0x18906c);}const _0x615821=/\+-|\.\.|\?\?\?\?|!!!!|,,|--/,_0x163251=/\((c|tm|r)\)/i,_0x9e81ea=/\((c|tm|r)\)/gi,_0xcc45ff={'c':'©','r':'®','tm':'™'};function _0x5f5422(_0x4e8f7a,_0x3508ef){const _0x5273d7=_0x37e46c;return _0xcc45ff[_0x3508ef[_0x5273d7(0x6e8)]()];}function _0x114b30(_0x16d0cf){const _0x5ccb22=_0x37e46c;let _0x30c94a=0x0;for(let 
_0x28aaa3=_0x16d0cf[_0x5ccb22(0x1b19)]-0x1;_0x28aaa3>=0x0;_0x28aaa3--){const _0x4ee5d2=_0x16d0cf[_0x28aaa3];_0x5ccb22(0x4006)!==_0x4ee5d2[_0x5ccb22(0xcfc)]||_0x30c94a||(_0x4ee5d2[_0x5ccb22(0x484f)]=_0x4ee5d2[_0x5ccb22(0x484f)]['replace'](_0x9e81ea,_0x5f5422)),_0x5ccb22(0x5243)===_0x4ee5d2['type']&&'auto'===_0x4ee5d2[_0x5ccb22(0x3a85)]&&_0x30c94a--,_0x5ccb22(0x24cf)===_0x4ee5d2[_0x5ccb22(0xcfc)]&&_0x5ccb22(0x4c14)===_0x4ee5d2[_0x5ccb22(0x3a85)]&&_0x30c94a++;}}function _0x53a96a(_0x5c1ee8){const _0x302812=_0x37e46c;let _0x22c8c5=0x0;for(let _0x63b68b=_0x5c1ee8[_0x302812(0x1b19)]-0x1;_0x63b68b>=0x0;_0x63b68b--){const _0x3c0afb=_0x5c1ee8[_0x63b68b];_0x302812(0x4006)!==_0x3c0afb[_0x302812(0xcfc)]||_0x22c8c5||_0x615821[_0x302812(0x1769)](_0x3c0afb['content'])&&(_0x3c0afb[_0x302812(0x484f)]=_0x3c0afb['content'][_0x302812(0x741)](/\+-/g,'±')[_0x302812(0x741)](/\.{2,}/g,'…')[_0x302812(0x741)](/([?!])…/g,_0x302812(0x1748))[_0x302812(0x741)](/([?!]){4,}/g,'$1$1$1')['replace'](/,{2,}/g,',')[_0x302812(0x741)](/(^|[^-])---(?=[^-]|$)/gm,'$1—')[_0x302812(0x741)](/(^|\s)--(?=\s|$)/gm,_0x302812(0x2961))['replace'](/(^|[^-\s])--(?=[^-\s]|$)/gm,_0x302812(0x2961))),_0x302812(0x5243)===_0x3c0afb[_0x302812(0xcfc)]&&_0x302812(0x4c14)===_0x3c0afb[_0x302812(0x3a85)]&&_0x22c8c5--,_0x302812(0x24cf)===_0x3c0afb['type']&&_0x302812(0x4c14)===_0x3c0afb[_0x302812(0x3a85)]&&_0x22c8c5++;}}const _0x301c1a=/['"]/,_0x4de85c=/['"]/g,_0x436615='’';function _0x232f03(_0x392a4f,_0x285da8,_0x4dafe6){const _0x17225f=_0x37e46c;return _0x392a4f[_0x17225f(0x384c)](0x0,_0x285da8)+_0x4dafe6+_0x392a4f['slice'](_0x285da8+0x1);}function _0x3e8749(_0x13c401,_0x429384){const _0x2ceeb2=_0x37e46c;let _0x2d0dd4;const _0x277add=[];for(let _0x4b5e43=0x0;_0x4b5e43<_0x13c401[_0x2ceeb2(0x1b19)];_0x4b5e43++){const 
_0x48eda8=_0x13c401[_0x4b5e43],_0x31fbb0=_0x13c401[_0x4b5e43][_0x2ceeb2(0x1fe)];for(_0x2d0dd4=_0x277add[_0x2ceeb2(0x1b19)]-0x1;_0x2d0dd4>=0x0&&!(_0x277add[_0x2d0dd4]['level']<=_0x31fbb0);_0x2d0dd4--);if(_0x277add[_0x2ceeb2(0x1b19)]=_0x2d0dd4+0x1,_0x2ceeb2(0x4006)!==_0x48eda8['type'])continue;let _0x5316d0=_0x48eda8[_0x2ceeb2(0x484f)],_0x481f03=0x0,_0x5556bc=_0x5316d0[_0x2ceeb2(0x1b19)];_0x1c5f89:for(;_0x481f03<_0x5556bc;){_0x4de85c[_0x2ceeb2(0x3655)]=_0x481f03;const _0x285d48=_0x4de85c[_0x2ceeb2(0x198d)](_0x5316d0);if(!_0x285d48)break;let _0x91166d=!0x0,_0x44cc66=!0x0;_0x481f03=_0x285d48[_0x2ceeb2(0x3bb5)]+0x1;const _0x3f47db='\x27'===_0x285d48[0x0];let _0x2b8cef=0x20;if(_0x285d48[_0x2ceeb2(0x3bb5)]-0x1>=0x0)_0x2b8cef=_0x5316d0[_0x2ceeb2(0x4955)](_0x285d48[_0x2ceeb2(0x3bb5)]-0x1);else{for(_0x2d0dd4=_0x4b5e43-0x1;_0x2d0dd4>=0x0&&(_0x2ceeb2(0x27f7)!==_0x13c401[_0x2d0dd4][_0x2ceeb2(0xcfc)]&&_0x2ceeb2(0x47b4)!==_0x13c401[_0x2d0dd4][_0x2ceeb2(0xcfc)]);_0x2d0dd4--)if(_0x13c401[_0x2d0dd4][_0x2ceeb2(0x484f)]){_0x2b8cef=_0x13c401[_0x2d0dd4][_0x2ceeb2(0x484f)][_0x2ceeb2(0x4955)](_0x13c401[_0x2d0dd4]['content'][_0x2ceeb2(0x1b19)]-0x1);break;}}let _0x50e5bd=0x20;if(_0x481f03<_0x5556bc)_0x50e5bd=_0x5316d0['charCodeAt'](_0x481f03);else{for(_0x2d0dd4=_0x4b5e43+0x1;_0x2d0dd4<_0x13c401[_0x2ceeb2(0x1b19)]&&(_0x2ceeb2(0x27f7)!==_0x13c401[_0x2d0dd4][_0x2ceeb2(0xcfc)]&&'hardbreak'!==_0x13c401[_0x2d0dd4][_0x2ceeb2(0xcfc)]);_0x2d0dd4++)if(_0x13c401[_0x2d0dd4][_0x2ceeb2(0x484f)]){_0x50e5bd=_0x13c401[_0x2d0dd4][_0x2ceeb2(0x484f)][_0x2ceeb2(0x4955)](0x0);break;}}const 
_0x461c5b=_0x5950b9(_0x2b8cef)||_0x1f4fb3(String['fromCharCode'](_0x2b8cef)),_0x187ca2=_0x5950b9(_0x50e5bd)||_0x1f4fb3(String[_0x2ceeb2(0x49bf)](_0x50e5bd)),_0x3ff275=_0x138dc2(_0x2b8cef),_0x207e9d=_0x138dc2(_0x50e5bd);if(_0x207e9d?_0x91166d=!0x1:_0x187ca2&&(_0x3ff275||_0x461c5b||(_0x91166d=!0x1)),_0x3ff275?_0x44cc66=!0x1:_0x461c5b&&(_0x207e9d||_0x187ca2||(_0x44cc66=!0x1)),0x22===_0x50e5bd&&'\x22'===_0x285d48[0x0]&&_0x2b8cef>=0x30&&_0x2b8cef<=0x39&&(_0x44cc66=_0x91166d=!0x1),_0x91166d&&_0x44cc66&&(_0x91166d=_0x461c5b,_0x44cc66=_0x187ca2),_0x91166d||_0x44cc66){if(_0x44cc66)for(_0x2d0dd4=_0x277add[_0x2ceeb2(0x1b19)]-0x1;_0x2d0dd4>=0x0;_0x2d0dd4--){let _0x32a263=_0x277add[_0x2d0dd4];if(_0x277add[_0x2d0dd4][_0x2ceeb2(0x1fe)]<_0x31fbb0)break;if(_0x32a263[_0x2ceeb2(0x4d4)]===_0x3f47db&&_0x277add[_0x2d0dd4][_0x2ceeb2(0x1fe)]===_0x31fbb0){let _0x5dbf35,_0x5ea7d4;_0x32a263=_0x277add[_0x2d0dd4],_0x3f47db?(_0x5dbf35=_0x429384['md']['options'][_0x2ceeb2(0x37b)][0x2],_0x5ea7d4=_0x429384['md'][_0x2ceeb2(0x20b6)][_0x2ceeb2(0x37b)][0x3]):(_0x5dbf35=_0x429384['md'][_0x2ceeb2(0x20b6)][_0x2ceeb2(0x37b)][0x0],_0x5ea7d4=_0x429384['md'][_0x2ceeb2(0x20b6)][_0x2ceeb2(0x37b)][0x1]),_0x48eda8[_0x2ceeb2(0x484f)]=_0x232f03(_0x48eda8[_0x2ceeb2(0x484f)],_0x285d48['index'],_0x5ea7d4),_0x13c401[_0x32a263[_0x2ceeb2(0x9ff)]][_0x2ceeb2(0x484f)]=_0x232f03(_0x13c401[_0x32a263[_0x2ceeb2(0x9ff)]][_0x2ceeb2(0x484f)],_0x32a263[_0x2ceeb2(0x333f)],_0x5dbf35),_0x481f03+=_0x5ea7d4['length']-0x1,_0x32a263['token']===_0x4b5e43&&(_0x481f03+=_0x5dbf35['length']-0x1),_0x5316d0=_0x48eda8[_0x2ceeb2(0x484f)],_0x5556bc=_0x5316d0[_0x2ceeb2(0x1b19)],_0x277add[_0x2ceeb2(0x1b19)]=_0x2d0dd4;continue _0x1c5f89;}}_0x91166d?_0x277add[_0x2ceeb2(0x1715)]({'token':_0x4b5e43,'pos':_0x285d48[_0x2ceeb2(0x3bb5)],'single':_0x3f47db,'level':_0x31fbb0}):_0x44cc66&&_0x3f47db&&(_0x48eda8['content']=_0x232f03(_0x48eda8[_0x2ceeb2(0x484f)],_0x285d48[_0x2ceeb2(0x3bb5)],_0x436615));}else 
_0x3f47db&&(_0x48eda8[_0x2ceeb2(0x484f)]=_0x232f03(_0x48eda8['content'],_0x285d48[_0x2ceeb2(0x3bb5)],_0x436615));}}}const _0x1f49a6=[[_0x37e46c(0x2429),function(_0x5bb9f3){const _0xc662be=_0x37e46c;let _0x483c03;_0x483c03=_0x5bb9f3[_0xc662be(0x3d6)]['replace'](_0x38ce01,'\x0a'),_0x483c03=_0x483c03[_0xc662be(0x741)](_0x4825f3,'�'),_0x5bb9f3['src']=_0x483c03;}],['block',function(_0x103569){const _0x302b96=_0x37e46c;let _0x544d82;_0x103569[_0x302b96(0x2937)]?(_0x544d82=new _0x103569[(_0x302b96(0x412a))]('inline','',0x0),_0x544d82[_0x302b96(0x484f)]=_0x103569[_0x302b96(0x3d6)],_0x544d82[_0x302b96(0x4833)]=[0x0,0x1],_0x544d82[_0x302b96(0x4c3e)]=[],_0x103569['tokens'][_0x302b96(0x1715)](_0x544d82)):_0x103569['md'][_0x302b96(0x1f2e)][_0x302b96(0x2956)](_0x103569[_0x302b96(0x3d6)],_0x103569['md'],_0x103569['env'],_0x103569[_0x302b96(0x1c34)]);}],[_0x37e46c(0x2988),function(_0x5d144a){const _0x2fda6e=_0x37e46c,_0x4deb09=_0x5d144a[_0x2fda6e(0x1c34)];for(let _0x48708e=0x0,_0x341987=_0x4deb09['length'];_0x48708e<_0x341987;_0x48708e++){const _0x12e44c=_0x4deb09[_0x48708e];'inline'===_0x12e44c['type']&&_0x5d144a['md']['inline'][_0x2fda6e(0x2956)](_0x12e44c['content'],_0x5d144a['md'],_0x5d144a[_0x2fda6e(0xe1a)],_0x12e44c[_0x2fda6e(0x4c3e)]);}}],[_0x37e46c(0x1255),function(_0x3b234a){const _0x2dd5be=_0x37e46c,_0x3e7586=_0x3b234a[_0x2dd5be(0x1c34)];var _0x39ff92;if(_0x3b234a['md']['options']['linkify'])for(let _0x8cfb66=0x0,_0x2b2ecd=_0x3e7586['length'];_0x8cfb66<_0x2b2ecd;_0x8cfb66++){if('inline'!==_0x3e7586[_0x8cfb66][_0x2dd5be(0xcfc)]||!_0x3b234a['md'][_0x2dd5be(0x1255)]['pretest'](_0x3e7586[_0x8cfb66][_0x2dd5be(0x484f)]))continue;let _0x4b965c=_0x3e7586[_0x8cfb66][_0x2dd5be(0x4c3e)],_0x23ef8d=0x0;for(let _0x2383db=_0x4b965c[_0x2dd5be(0x1b19)]-0x1;_0x2383db>=0x0;_0x2383db--){const 
_0x2dc140=_0x4b965c[_0x2383db];if(_0x2dd5be(0x24cf)!==_0x2dc140[_0x2dd5be(0xcfc)]){if(_0x2dd5be(0x34c3)===_0x2dc140[_0x2dd5be(0xcfc)]&&(_0x39ff92=_0x2dc140[_0x2dd5be(0x484f)],/^\s]/i[_0x2dd5be(0x1769)](_0x39ff92)&&_0x23ef8d>0x0&&_0x23ef8d--,_0x1f565a(_0x2dc140['content'])&&_0x23ef8d++),!(_0x23ef8d>0x0)&&_0x2dd5be(0x4006)===_0x2dc140[_0x2dd5be(0xcfc)]&&_0x3b234a['md'][_0x2dd5be(0x1255)][_0x2dd5be(0x1769)](_0x2dc140[_0x2dd5be(0x484f)])){const _0x2b7b59=_0x2dc140['content'];let _0x3ecd20=_0x3b234a['md'][_0x2dd5be(0x1255)][_0x2dd5be(0x2d96)](_0x2b7b59);const _0xdb4d46=[];let _0x5ef5=_0x2dc140[_0x2dd5be(0x1fe)],_0x3f2787=0x0;_0x3ecd20[_0x2dd5be(0x1b19)]>0x0&&0x0===_0x3ecd20[0x0][_0x2dd5be(0x3bb5)]&&_0x2383db>0x0&&_0x2dd5be(0x946)===_0x4b965c[_0x2383db-0x1][_0x2dd5be(0xcfc)]&&(_0x3ecd20=_0x3ecd20['slice'](0x1));for(let _0x1bed6c=0x0;_0x1bed6c<_0x3ecd20[_0x2dd5be(0x1b19)];_0x1bed6c++){const _0x2ca53c=_0x3ecd20[_0x1bed6c]['url'],_0x1dbf27=_0x3b234a['md'][_0x2dd5be(0x694)](_0x2ca53c);if(!_0x3b234a['md']['validateLink'](_0x1dbf27))continue;let _0x5ab78b=_0x3ecd20[_0x1bed6c][_0x2dd5be(0x4006)];_0x5ab78b=_0x3ecd20[_0x1bed6c]['schema']?_0x2dd5be(0x594)!==_0x3ecd20[_0x1bed6c][_0x2dd5be(0x313b)]||/^mailto:/i['test'](_0x5ab78b)?_0x3b234a['md']['normalizeLinkText'](_0x5ab78b):_0x3b234a['md']['normalizeLinkText']('mailto:'+_0x5ab78b)[_0x2dd5be(0x741)](/^mailto:/,''):_0x3b234a['md'][_0x2dd5be(0x4c51)](_0x2dd5be(0x1649)+_0x5ab78b)['replace'](/^http:\/\//,'');const _0x40508a=_0x3ecd20[_0x1bed6c][_0x2dd5be(0x3bb5)];if(_0x40508a>_0x3f2787){const _0x14881d=new _0x3b234a[(_0x2dd5be(0x412a))](_0x2dd5be(0x4006),'',0x0);_0x14881d['content']=_0x2b7b59[_0x2dd5be(0x384c)](_0x3f2787,_0x40508a),_0x14881d[_0x2dd5be(0x1fe)]=_0x5ef5,_0xdb4d46[_0x2dd5be(0x1715)](_0x14881d);}const _0x10cad1=new 
_0x3b234a[(_0x2dd5be(0x412a))](_0x2dd5be(0x5243),'a',0x1);_0x10cad1['attrs']=[[_0x2dd5be(0xe63),_0x1dbf27]],_0x10cad1['level']=_0x5ef5++,_0x10cad1[_0x2dd5be(0x2bd8)]='linkify',_0x10cad1[_0x2dd5be(0x3a85)]=_0x2dd5be(0x4c14),_0xdb4d46[_0x2dd5be(0x1715)](_0x10cad1);const _0x309cda=new _0x3b234a[(_0x2dd5be(0x412a))](_0x2dd5be(0x4006),'',0x0);_0x309cda[_0x2dd5be(0x484f)]=_0x5ab78b,_0x309cda[_0x2dd5be(0x1fe)]=_0x5ef5,_0xdb4d46[_0x2dd5be(0x1715)](_0x309cda);const _0x495f33=new _0x3b234a[(_0x2dd5be(0x412a))]('link_close','a',-0x1);_0x495f33[_0x2dd5be(0x1fe)]=--_0x5ef5,_0x495f33[_0x2dd5be(0x2bd8)]='linkify',_0x495f33['info']=_0x2dd5be(0x4c14),_0xdb4d46[_0x2dd5be(0x1715)](_0x495f33),_0x3f2787=_0x3ecd20[_0x1bed6c][_0x2dd5be(0x3655)];}if(_0x3f2787<_0x2b7b59[_0x2dd5be(0x1b19)]){const _0x5b867f=new _0x3b234a[(_0x2dd5be(0x412a))](_0x2dd5be(0x4006),'',0x0);_0x5b867f['content']=_0x2b7b59['slice'](_0x3f2787),_0x5b867f[_0x2dd5be(0x1fe)]=_0x5ef5,_0xdb4d46[_0x2dd5be(0x1715)](_0x5b867f);}_0x3e7586[_0x8cfb66][_0x2dd5be(0x4c3e)]=_0x4b965c=_0x38c141(_0x4b965c,_0x2383db,_0xdb4d46);}}else{for(_0x2383db--;_0x4b965c[_0x2383db][_0x2dd5be(0x1fe)]!==_0x2dc140[_0x2dd5be(0x1fe)]&&'link_open'!==_0x4b965c[_0x2383db][_0x2dd5be(0xcfc)];)_0x2383db--;}}}}],[_0x37e46c(0x101c),function(_0x14bc35){const _0x207913=_0x37e46c;let _0x1025af;if(_0x14bc35['md']['options'][_0x207913(0xcc5)]){for(_0x1025af=_0x14bc35[_0x207913(0x1c34)]['length']-0x1;_0x1025af>=0x0;_0x1025af--)'inline'===_0x14bc35[_0x207913(0x1c34)][_0x1025af][_0x207913(0xcfc)]&&(_0x163251['test'](_0x14bc35[_0x207913(0x1c34)][_0x1025af]['content'])&&_0x114b30(_0x14bc35['tokens'][_0x1025af][_0x207913(0x4c3e)]),_0x615821[_0x207913(0x1769)](_0x14bc35[_0x207913(0x1c34)][_0x1025af][_0x207913(0x484f)])&&_0x53a96a(_0x14bc35[_0x207913(0x1c34)][_0x1025af][_0x207913(0x4c3e)]));}}],[_0x37e46c(0x47bc),function(_0x3d4f8b){const _0x398c7c=_0x37e46c;if(_0x3d4f8b['md'][_0x398c7c(0x20b6)][_0x398c7c(0xcc5)]){for(let 
_0x388535=_0x3d4f8b[_0x398c7c(0x1c34)][_0x398c7c(0x1b19)]-0x1;_0x388535>=0x0;_0x388535--)_0x398c7c(0x2988)===_0x3d4f8b['tokens'][_0x388535]['type']&&_0x301c1a[_0x398c7c(0x1769)](_0x3d4f8b[_0x398c7c(0x1c34)][_0x388535]['content'])&&_0x3e8749(_0x3d4f8b['tokens'][_0x388535][_0x398c7c(0x4c3e)],_0x3d4f8b);}}],[_0x37e46c(0x4be8),function(_0x4389d5){const _0x4ec331=_0x37e46c;let _0x3bce34,_0x53ddd4;const _0x483425=_0x4389d5['tokens'],_0x4b2abf=_0x483425[_0x4ec331(0x1b19)];for(let _0x28ae1f=0x0;_0x28ae1f<_0x4b2abf;_0x28ae1f++){if(_0x4ec331(0x2988)!==_0x483425[_0x28ae1f]['type'])continue;const _0x400a91=_0x483425[_0x28ae1f][_0x4ec331(0x4c3e)],_0x22603c=_0x400a91[_0x4ec331(0x1b19)];for(_0x3bce34=0x0;_0x3bce34<_0x22603c;_0x3bce34++)_0x4ec331(0x946)===_0x400a91[_0x3bce34][_0x4ec331(0xcfc)]&&(_0x400a91[_0x3bce34][_0x4ec331(0xcfc)]='text');for(_0x3bce34=_0x53ddd4=0x0;_0x3bce34<_0x22603c;_0x3bce34++)_0x4ec331(0x4006)===_0x400a91[_0x3bce34][_0x4ec331(0xcfc)]&&_0x3bce34+0x1<_0x22603c&&_0x4ec331(0x4006)===_0x400a91[_0x3bce34+0x1][_0x4ec331(0xcfc)]?_0x400a91[_0x3bce34+0x1]['content']=_0x400a91[_0x3bce34][_0x4ec331(0x484f)]+_0x400a91[_0x3bce34+0x1][_0x4ec331(0x484f)]:(_0x3bce34!==_0x53ddd4&&(_0x400a91[_0x53ddd4]=_0x400a91[_0x3bce34]),_0x53ddd4++);_0x3bce34!==_0x53ddd4&&(_0x400a91[_0x4ec331(0x1b19)]=_0x53ddd4);}}]];function _0x1ffce1(){const _0xf41ebe=_0x37e46c;this[_0xf41ebe(0x2eb9)]=new _0x37b2b2();for(let _0x201fed=0x0;_0x201fed<_0x1f49a6[_0xf41ebe(0x1b19)];_0x201fed++)this[_0xf41ebe(0x2eb9)][_0xf41ebe(0x1715)](_0x1f49a6[_0x201fed][0x0],_0x1f49a6[_0x201fed][0x1]);}_0x1ffce1[_0x37e46c(0x3b3c)][_0x37e46c(0xf1b)]=function(_0x548d4d){const _0x1feda6=_0x37e46c,_0x20cc05=this[_0x1feda6(0x2eb9)][_0x1feda6(0x433a)]('');for(let _0x4704d0=0x0,_0x2df489=_0x20cc05['length'];_0x4704d0<_0x2df489;_0x4704d0++)_0x20cc05[_0x4704d0](_0x548d4d);},_0x1ffce1[_0x37e46c(0x3b3c)][_0x37e46c(0x207e)]=_0x564ecc;const _0x47ce26=_0x1ffce1;function _0x29a67e(_0x552be1,_0x1ae236,_0x14ea48,_0xb68d79){const 
_0x3cd3c8=_0x37e46c;this[_0x3cd3c8(0x3d6)]=_0x552be1,this['md']=_0x1ae236,this[_0x3cd3c8(0xe1a)]=_0x14ea48,this[_0x3cd3c8(0x1c34)]=_0xb68d79,this[_0x3cd3c8(0x1b61)]=[],this[_0x3cd3c8(0x2e82)]=[],this[_0x3cd3c8(0x1869)]=[],this[_0x3cd3c8(0x4fc6)]=[],this[_0x3cd3c8(0x16d3)]=[],this[_0x3cd3c8(0x1de0)]=0x0,this['line']=0x0,this['lineMax']=0x0,this['tight']=!0x1,this['ddIndent']=-0x1,this[_0x3cd3c8(0x1a8a)]=-0x1,this[_0x3cd3c8(0x43fb)]='root',this[_0x3cd3c8(0x1fe)]=0x0;const _0x58f050=this[_0x3cd3c8(0x3d6)];for(let _0x24af93=0x0,_0x45a1d1=0x0,_0x5f348c=0x0,_0x12055b=0x0,_0x57c656=_0x58f050[_0x3cd3c8(0x1b19)],_0x17ee32=!0x1;_0x45a1d1<_0x57c656;_0x45a1d1++){const _0x6919ef=_0x58f050[_0x3cd3c8(0x4955)](_0x45a1d1);if(!_0x17ee32){if(_0x290708(_0x6919ef)){_0x5f348c++,0x9===_0x6919ef?_0x12055b+=0x4-_0x12055b%0x4:_0x12055b++;continue;}_0x17ee32=!0x0;}0xa!==_0x6919ef&&_0x45a1d1!==_0x57c656-0x1||(0xa!==_0x6919ef&&_0x45a1d1++,this['bMarks'][_0x3cd3c8(0x1715)](_0x24af93),this[_0x3cd3c8(0x2e82)][_0x3cd3c8(0x1715)](_0x45a1d1),this[_0x3cd3c8(0x1869)]['push'](_0x5f348c),this[_0x3cd3c8(0x4fc6)]['push'](_0x12055b),this['bsCount'][_0x3cd3c8(0x1715)](0x0),_0x17ee32=!0x1,_0x5f348c=0x0,_0x12055b=0x0,_0x24af93=_0x45a1d1+0x1);}this[_0x3cd3c8(0x1b61)][_0x3cd3c8(0x1715)](_0x58f050[_0x3cd3c8(0x1b19)]),this[_0x3cd3c8(0x2e82)][_0x3cd3c8(0x1715)](_0x58f050[_0x3cd3c8(0x1b19)]),this[_0x3cd3c8(0x1869)][_0x3cd3c8(0x1715)](0x0),this[_0x3cd3c8(0x4fc6)]['push'](0x0),this['bsCount']['push'](0x0),this[_0x3cd3c8(0x2a61)]=this[_0x3cd3c8(0x1b61)][_0x3cd3c8(0x1b19)]-0x1;}_0x29a67e[_0x37e46c(0x3b3c)][_0x37e46c(0x1715)]=function(_0x282a9b,_0x544f68,_0x17ac4d){const _0x384970=_0x37e46c,_0x57fdb9=new _0x16b151(_0x282a9b,_0x544f68,_0x17ac4d);return 
_0x57fdb9[_0x384970(0x1f2e)]=!0x0,_0x17ac4d<0x0&&this[_0x384970(0x1fe)]--,_0x57fdb9['level']=this['level'],_0x17ac4d>0x0&&this[_0x384970(0x1fe)]++,this[_0x384970(0x1c34)][_0x384970(0x1715)](_0x57fdb9),_0x57fdb9;},_0x29a67e[_0x37e46c(0x3b3c)][_0x37e46c(0x21cf)]=function(_0x29508c){const _0xd5d443=_0x37e46c;return this[_0xd5d443(0x1b61)][_0x29508c]+this[_0xd5d443(0x1869)][_0x29508c]>=this[_0xd5d443(0x2e82)][_0x29508c];},_0x29a67e[_0x37e46c(0x3b3c)][_0x37e46c(0x917)]=function(_0x171be5){const _0x2625d2=_0x37e46c;for(let _0x233bac=this['lineMax'];_0x171be5<_0x233bac&&!(this[_0x2625d2(0x1b61)][_0x171be5]+this[_0x2625d2(0x1869)][_0x171be5]_0x34c757;)if(!_0x290708(this[_0x34eaec(0x3d6)][_0x34eaec(0x4955)](--_0x2a0f3c)))return _0x2a0f3c+0x1;return _0x2a0f3c;},_0x29a67e[_0x37e46c(0x3b3c)]['skipChars']=function(_0x191f4c,_0x239454){const _0x56997f=_0x37e46c;for(let _0x3f6a6b=this[_0x56997f(0x3d6)]['length'];_0x191f4c<_0x3f6a6b&&this[_0x56997f(0x3d6)][_0x56997f(0x4955)](_0x191f4c)===_0x239454;_0x191f4c++);return _0x191f4c;},_0x29a67e[_0x37e46c(0x3b3c)][_0x37e46c(0x46b5)]=function(_0x30ece2,_0x3b7ce8,_0x1e7a89){const _0x29d048=_0x37e46c;if(_0x30ece2<=_0x1e7a89)return _0x30ece2;for(;_0x30ece2>_0x1e7a89;)if(_0x3b7ce8!==this[_0x29d048(0x3d6)]['charCodeAt'](--_0x30ece2))return _0x30ece2+0x1;return _0x30ece2;},_0x29a67e[_0x37e46c(0x3b3c)][_0x37e46c(0xfef)]=function(_0x39d371,_0x13bd20,_0x4b9d06,_0xba265b){const _0x6d685e=_0x37e46c;if(_0x39d371>=_0x13bd20)return'';const _0x50b827=new Array(_0x13bd20-_0x39d371);for(let _0x34e608=0x0,_0x137703=_0x39d371;_0x137703<_0x13bd20;_0x137703++,_0x34e608++){let _0x496a8e=0x0;const _0x1d35b9=this[_0x6d685e(0x1b61)][_0x137703];let _0x28e2f2,_0x5533e3=_0x1d35b9;for(_0x28e2f2=_0x137703+0x1<_0x13bd20||_0xba265b?this['eMarks'][_0x137703]+0x1:this['eMarks'][_0x137703];_0x5533e3<_0x28e2f2&&_0x496a8e<_0x4b9d06;){const 
_0x2889db=this[_0x6d685e(0x3d6)][_0x6d685e(0x4955)](_0x5533e3);if(_0x290708(_0x2889db))0x9===_0x2889db?_0x496a8e+=0x4-(_0x496a8e+this[_0x6d685e(0x16d3)][_0x137703])%0x4:_0x496a8e++;else{if(!(_0x5533e3-_0x1d35b9_0x4b9d06?new Array(_0x496a8e-_0x4b9d06+0x1)['join']('\x20')+this[_0x6d685e(0x3d6)][_0x6d685e(0x384c)](_0x5533e3,_0x28e2f2):this['src'][_0x6d685e(0x384c)](_0x5533e3,_0x28e2f2);}return _0x50b827['join']('');},_0x29a67e[_0x37e46c(0x3b3c)][_0x37e46c(0x412a)]=_0x16b151;const _0x1ca7f3=_0x29a67e;function _0x56b0c9(_0x127f8d,_0x5aed63){const _0x3ce615=_0x37e46c,_0x30de46=_0x127f8d['bMarks'][_0x5aed63]+_0x127f8d[_0x3ce615(0x1869)][_0x5aed63],_0x3dab8b=_0x127f8d[_0x3ce615(0x2e82)][_0x5aed63];return _0x127f8d[_0x3ce615(0x3d6)][_0x3ce615(0x384c)](_0x30de46,_0x3dab8b);}function _0x21a393(_0x14a3bd){const _0x1b05bb=_0x37e46c,_0xe70e13=[],_0x2753ce=_0x14a3bd[_0x1b05bb(0x1b19)];let _0x414220=0x0,_0x4b8e23=_0x14a3bd[_0x1b05bb(0x4955)](_0x414220),_0x45e0f0=!0x1,_0x229376=0x0,_0x167c63='';for(;_0x414220<_0x2753ce;)0x7c===_0x4b8e23&&(_0x45e0f0?(_0x167c63+=_0x14a3bd[_0x1b05bb(0x37b5)](_0x229376,_0x414220-0x1),_0x229376=_0x414220):(_0xe70e13['push'](_0x167c63+_0x14a3bd['substring'](_0x229376,_0x414220)),_0x167c63='',_0x229376=_0x414220+0x1)),_0x45e0f0=0x5c===_0x4b8e23,_0x414220++,_0x4b8e23=_0x14a3bd[_0x1b05bb(0x4955)](_0x414220);return _0xe70e13['push'](_0x167c63+_0x14a3bd[_0x1b05bb(0x37b5)](_0x229376)),_0xe70e13;}function _0x2e1fd0(_0x375a0d,_0x3b808c){const _0x3b267e=_0x37e46c,_0x3cc5f3=_0x375a0d['eMarks'][_0x3b808c];let _0x2847d9=_0x375a0d[_0x3b267e(0x1b61)][_0x3b808c]+_0x375a0d['tShift'][_0x3b808c];const _0x1b8758=_0x375a0d['src'][_0x3b267e(0x4955)](_0x2847d9++);if(0x2a!==_0x1b8758&&0x2d!==_0x1b8758&&0x2b!==_0x1b8758)return-0x1;if(_0x2847d9<_0x3cc5f3){if(!_0x290708(_0x375a0d[_0x3b267e(0x3d6)][_0x3b267e(0x4955)](_0x2847d9)))return-0x1;}return _0x2847d9;}function _0x4b87e7(_0x5ee9d5,_0x367d07){const 
_0x4e7e56=_0x37e46c,_0x4e1271=_0x5ee9d5[_0x4e7e56(0x1b61)][_0x367d07]+_0x5ee9d5[_0x4e7e56(0x1869)][_0x367d07],_0x5375fc=_0x5ee9d5[_0x4e7e56(0x2e82)][_0x367d07];let _0x545455=_0x4e1271;if(_0x545455+0x1>=_0x5375fc)return-0x1;let _0x4fb33b=_0x5ee9d5[_0x4e7e56(0x3d6)][_0x4e7e56(0x4955)](_0x545455++);if(_0x4fb33b<0x30||_0x4fb33b>0x39)return-0x1;for(;;){if(_0x545455>=_0x5375fc)return-0x1;if(_0x4fb33b=_0x5ee9d5[_0x4e7e56(0x3d6)][_0x4e7e56(0x4955)](_0x545455++),!(_0x4fb33b>=0x30&&_0x4fb33b<=0x39)){if(0x29===_0x4fb33b||0x2e===_0x4fb33b)break;return-0x1;}if(_0x545455-_0x4e1271>=0xa)return-0x1;}return _0x545455<_0x5375fc&&(_0x4fb33b=_0x5ee9d5[_0x4e7e56(0x3d6)][_0x4e7e56(0x4955)](_0x545455),!_0x290708(_0x4fb33b))?-0x1:_0x545455;}const _0x5899f0=_0x37e46c(0x1577),_0xe4ae5c=_0x37e46c(0x1ee4),_0xd05017=new RegExp(_0x37e46c(0x3002)+_0x5899f0+'|'+_0xe4ae5c+_0x37e46c(0x2e4)),_0x54e4cd=new RegExp(_0x37e46c(0x3002)+_0x5899f0+'|'+_0xe4ae5c+')'),_0xa843f5=[[/^<(script|pre|style|textarea)(?=(\s|>|$))/i,/<\/(script|pre|style|textarea)>/i,!0x0],[/^/,!0x0],[/^<\?/,/\?>/,!0x0],[/^/,!0x0],[/^/,!0x0],[new RegExp(_0x37e46c(0xbae)+[_0x37e46c(0x3db6),'article',_0x37e46c(0x2a6),_0x37e46c(0x7c0),_0x37e46c(0x2d4e),'blockquote',_0x37e46c(0x4f1a),_0x37e46c(0x2200),'center','col',_0x37e46c(0x3821),'dd',_0x37e46c(0x2dd7),'dialog',_0x37e46c(0x177b),_0x37e46c(0x4c88),'dl','dt','fieldset',_0x37e46c(0x4252),_0x37e46c(0x12f4),_0x37e46c(0x3af9),'form',_0x37e46c(0x34f4),_0x37e46c(0x3236),'h1','h2','h3','h4','h5','h6','head','header','hr',_0x37e46c(0x2acd),_0x37e46c(0x2168),_0x37e46c(0x201b),'li','link',_0x37e46c(0x3212),_0x37e46c(0x3888),_0x37e46c(0x1eb4),_0x37e46c(0x1ec0),'noframes','ol','optgroup','option','p',_0x37e46c(0x2c08),_0x37e46c(0x69d),_0x37e46c(0x33b0),_0x37e46c(0x4829),_0x37e46c(0x1639),_0x37e46c(0x2e69),'td','tfoot','th','thead','title','tr',_0x37e46c(0x1f8d),'ul'][_0x37e46c(0x3541)]('|')+_0x37e46c(0x4f46),'i'),/^$/,!0x0],[new 
RegExp(_0x54e4cd[_0x37e46c(0x33b0)]+_0x37e46c(0x3497)),/^$/,!0x1]],_0x276924=[[_0x37e46c(0x1639),function(_0x43b3da,_0x187310,_0x34917a,_0x57d1ff){const _0xf75615=_0x37e46c;if(_0x187310+0x2>_0x34917a)return!0x1;let _0x3f3551=_0x187310+0x1;if(_0x43b3da['sCount'][_0x3f3551]<_0x43b3da[_0xf75615(0x1de0)])return!0x1;if(_0x43b3da[_0xf75615(0x4fc6)][_0x3f3551]-_0x43b3da['blkIndent']>=0x4)return!0x1;let _0x5091f0=_0x43b3da[_0xf75615(0x1b61)][_0x3f3551]+_0x43b3da['tShift'][_0x3f3551];if(_0x5091f0>=_0x43b3da['eMarks'][_0x3f3551])return!0x1;const _0x284db4=_0x43b3da[_0xf75615(0x3d6)]['charCodeAt'](_0x5091f0++);if(0x7c!==_0x284db4&&0x2d!==_0x284db4&&0x3a!==_0x284db4)return!0x1;if(_0x5091f0>=_0x43b3da[_0xf75615(0x2e82)][_0x3f3551])return!0x1;const _0x3ba772=_0x43b3da[_0xf75615(0x3d6)][_0xf75615(0x4955)](_0x5091f0++);if(0x7c!==_0x3ba772&&0x2d!==_0x3ba772&&0x3a!==_0x3ba772&&!_0x290708(_0x3ba772))return!0x1;if(0x2d===_0x284db4&&_0x290708(_0x3ba772))return!0x1;for(;_0x5091f0<_0x43b3da[_0xf75615(0x2e82)][_0x3f3551];){const _0x5913d1=_0x43b3da[_0xf75615(0x3d6)][_0xf75615(0x4955)](_0x5091f0);if(0x7c!==_0x5913d1&&0x2d!==_0x5913d1&&0x3a!==_0x5913d1&&!_0x290708(_0x5913d1))return!0x1;_0x5091f0++;}let _0x1da267=_0x56b0c9(_0x43b3da,_0x187310+0x1),_0x3452bc=_0x1da267[_0xf75615(0x1117)]('|');const _0x1318f7=[];for(let _0x364fc1=0x0;_0x364fc1<_0x3452bc[_0xf75615(0x1b19)];_0x364fc1++){const 
_0x1f7681=_0x3452bc[_0x364fc1][_0xf75615(0x1b23)]();if(!_0x1f7681){if(0x0===_0x364fc1||_0x364fc1===_0x3452bc[_0xf75615(0x1b19)]-0x1)continue;return!0x1;}if(!/^:?-+:?$/['test'](_0x1f7681))return!0x1;0x3a===_0x1f7681[_0xf75615(0x4955)](_0x1f7681[_0xf75615(0x1b19)]-0x1)?_0x1318f7[_0xf75615(0x1715)](0x3a===_0x1f7681[_0xf75615(0x4955)](0x0)?'center':_0xf75615(0x4d50)):0x3a===_0x1f7681[_0xf75615(0x4955)](0x0)?_0x1318f7['push'](_0xf75615(0x48eb)):_0x1318f7['push']('');}if(_0x1da267=_0x56b0c9(_0x43b3da,_0x187310)[_0xf75615(0x1b23)](),-0x1===_0x1da267[_0xf75615(0x8c9)]('|'))return!0x1;if(_0x43b3da['sCount'][_0x187310]-_0x43b3da['blkIndent']>=0x4)return!0x1;_0x3452bc=_0x21a393(_0x1da267),_0x3452bc[_0xf75615(0x1b19)]&&''===_0x3452bc[0x0]&&_0x3452bc['shift'](),_0x3452bc[_0xf75615(0x1b19)]&&''===_0x3452bc[_0x3452bc[_0xf75615(0x1b19)]-0x1]&&_0x3452bc[_0xf75615(0x3d35)]();const _0x1d6e5c=_0x3452bc[_0xf75615(0x1b19)];if(0x0===_0x1d6e5c||_0x1d6e5c!==_0x1318f7[_0xf75615(0x1b19)])return!0x1;if(_0x57d1ff)return!0x0;const _0x544acf=_0x43b3da['parentType'];_0x43b3da[_0xf75615(0x43fb)]=_0xf75615(0x1639);const _0x555d12=_0x43b3da['md'][_0xf75615(0x1f2e)][_0xf75615(0x2eb9)]['getRules']('blockquote'),_0x333742=[_0x187310,0x0];_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x12fd),_0xf75615(0x1639),0x1)[_0xf75615(0x4833)]=_0x333742,_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x4b26),_0xf75615(0x46d4),0x1)[_0xf75615(0x4833)]=[_0x187310,_0x187310+0x1],_0x43b3da['push']('tr_open','tr',0x1)[_0xf75615(0x4833)]=[_0x187310,_0x187310+0x1];for(let _0x320f3e=0x0;_0x320f3e<_0x3452bc[_0xf75615(0x1b19)];_0x320f3e++){const _0x4764cb=_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x4b3f),'th',0x1);_0x1318f7[_0x320f3e]&&(_0x4764cb[_0xf75615(0x70e)]=[['style',_0xf75615(0x4518)+_0x1318f7[_0x320f3e]]]);const _0x456645=_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x2988),'',0x0);_0x456645['content']=_0x3452bc[_0x320f3e][_0xf75615(0x1b23)](),_0x456645[_0xf75615(0x4c3e)]=[],_0x43b3da['push']('th_close','th',-0x1);}let 
_0x45089b;for(_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x4d95),'tr',-0x1),_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x1085),_0xf75615(0x46d4),-0x1),_0x3f3551=_0x187310+0x2;_0x3f3551<_0x34917a&&!(_0x43b3da[_0xf75615(0x4fc6)][_0x3f3551]<_0x43b3da[_0xf75615(0x1de0)]);_0x3f3551++){let _0x721daf=!0x1;for(let _0x149498=0x0,_0x25ebf7=_0x555d12[_0xf75615(0x1b19)];_0x149498<_0x25ebf7;_0x149498++)if(_0x555d12[_0x149498](_0x43b3da,_0x3f3551,_0x34917a,!0x0)){_0x721daf=!0x0;break;}if(_0x721daf)break;if(_0x1da267=_0x56b0c9(_0x43b3da,_0x3f3551)[_0xf75615(0x1b23)](),!_0x1da267)break;if(_0x43b3da[_0xf75615(0x4fc6)][_0x3f3551]-_0x43b3da[_0xf75615(0x1de0)]>=0x4)break;(_0x3452bc=_0x21a393(_0x1da267),_0x3452bc[_0xf75615(0x1b19)]&&''===_0x3452bc[0x0]&&_0x3452bc[_0xf75615(0x34fe)](),_0x3452bc[_0xf75615(0x1b19)]&&''===_0x3452bc[_0x3452bc[_0xf75615(0x1b19)]-0x1]&&_0x3452bc[_0xf75615(0x3d35)](),_0x3f3551===_0x187310+0x2)&&(_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x1269),_0xf75615(0x2e69),0x1)['map']=_0x45089b=[_0x187310+0x2,0x0]);_0x43b3da['push'](_0xf75615(0x117d),'tr',0x1)['map']=[_0x3f3551,_0x3f3551+0x1];for(let _0x5497bb=0x0;_0x5497bb<_0x1d6e5c;_0x5497bb++){const _0x5a4c96=_0x43b3da['push']('td_open','td',0x1);_0x1318f7[_0x5497bb]&&(_0x5a4c96[_0xf75615(0x70e)]=[[_0xf75615(0x1a84),'text-align:'+_0x1318f7[_0x5497bb]]]);const _0x3d7d0a=_0x43b3da[_0xf75615(0x1715)]('inline','',0x0);_0x3d7d0a['content']=_0x3452bc[_0x5497bb]?_0x3452bc[_0x5497bb][_0xf75615(0x1b23)]():'',_0x3d7d0a['children']=[],_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x19c5),'td',-0x1);}_0x43b3da[_0xf75615(0x1715)](_0xf75615(0x4d95),'tr',-0x1);}return _0x45089b&&(_0x43b3da[_0xf75615(0x1715)](_0xf75615(0xf93),_0xf75615(0x2e69),-0x1),_0x45089b[0x1]=_0x3f3551),_0x43b3da[_0xf75615(0x1715)]('table_close',_0xf75615(0x1639),-0x1),_0x333742[0x1]=_0x3f3551,_0x43b3da['parentType']=_0x544acf,_0x43b3da[_0xf75615(0x3572)]=_0x3f3551,!0x0;},[_0x37e46c(0xc3d),'reference']],[_0x37e46c(0x4948),function(_0x1eca34,_0x254f3f,_0x2b55ce){const 
_0x27b134=_0x37e46c;if(_0x1eca34['sCount'][_0x254f3f]-_0x1eca34[_0x27b134(0x1de0)]<0x4)return!0x1;let _0x4b2ddd=_0x254f3f+0x1,_0x57f4b6=_0x4b2ddd;for(;_0x4b2ddd<_0x2b55ce;)if(_0x1eca34['isEmpty'](_0x4b2ddd))_0x4b2ddd++;else{if(!(_0x1eca34['sCount'][_0x4b2ddd]-_0x1eca34['blkIndent']>=0x4))break;_0x4b2ddd++,_0x57f4b6=_0x4b2ddd;}_0x1eca34[_0x27b134(0x3572)]=_0x57f4b6;const _0x318f78=_0x1eca34['push'](_0x27b134(0xfaa),_0x27b134(0x4948),0x0);return _0x318f78['content']=_0x1eca34[_0x27b134(0xfef)](_0x254f3f,_0x57f4b6,0x4+_0x1eca34['blkIndent'],!0x1)+'\x0a',_0x318f78[_0x27b134(0x4833)]=[_0x254f3f,_0x1eca34[_0x27b134(0x3572)]],!0x0;}],[_0x37e46c(0x141b),function(_0x2b30b0,_0x38ca53,_0x5250ee,_0x590ee0){const _0x3b2a7f=_0x37e46c;let _0x32f2ad=_0x2b30b0[_0x3b2a7f(0x1b61)][_0x38ca53]+_0x2b30b0[_0x3b2a7f(0x1869)][_0x38ca53],_0x1d8d64=_0x2b30b0[_0x3b2a7f(0x2e82)][_0x38ca53];if(_0x2b30b0[_0x3b2a7f(0x4fc6)][_0x38ca53]-_0x2b30b0['blkIndent']>=0x4)return!0x1;if(_0x32f2ad+0x3>_0x1d8d64)return!0x1;const _0x3194d3=_0x2b30b0['src'][_0x3b2a7f(0x4955)](_0x32f2ad);if(0x7e!==_0x3194d3&&0x60!==_0x3194d3)return!0x1;let _0x22a0fb=_0x32f2ad;_0x32f2ad=_0x2b30b0['skipChars'](_0x32f2ad,_0x3194d3);let _0x2a6c24=_0x32f2ad-_0x22a0fb;if(_0x2a6c24<0x3)return!0x1;const _0x86f54a=_0x2b30b0[_0x3b2a7f(0x3d6)][_0x3b2a7f(0x384c)](_0x22a0fb,_0x32f2ad),_0x4845b4=_0x2b30b0[_0x3b2a7f(0x3d6)][_0x3b2a7f(0x384c)](_0x32f2ad,_0x1d8d64);if(0x60===_0x3194d3&&_0x4845b4[_0x3b2a7f(0x8c9)](String[_0x3b2a7f(0x49bf)](_0x3194d3))>=0x0)return!0x1;if(_0x590ee0)return!0x0;let 
_0x185a7d=_0x38ca53,_0x3f7e6f=!0x1;for(;(_0x185a7d++,!(_0x185a7d>=_0x5250ee))&&(_0x32f2ad=_0x22a0fb=_0x2b30b0['bMarks'][_0x185a7d]+_0x2b30b0[_0x3b2a7f(0x1869)][_0x185a7d],_0x1d8d64=_0x2b30b0[_0x3b2a7f(0x2e82)][_0x185a7d],!(_0x32f2ad<_0x1d8d64&&_0x2b30b0['sCount'][_0x185a7d]<_0x2b30b0[_0x3b2a7f(0x1de0)]));)if(_0x2b30b0[_0x3b2a7f(0x3d6)]['charCodeAt'](_0x32f2ad)===_0x3194d3&&!(_0x2b30b0[_0x3b2a7f(0x4fc6)][_0x185a7d]-_0x2b30b0[_0x3b2a7f(0x1de0)]>=0x4||(_0x32f2ad=_0x2b30b0[_0x3b2a7f(0x4303)](_0x32f2ad,_0x3194d3),_0x32f2ad-_0x22a0fb<_0x2a6c24||(_0x32f2ad=_0x2b30b0['skipSpaces'](_0x32f2ad),_0x32f2ad<_0x1d8d64)))){_0x3f7e6f=!0x0;break;}_0x2a6c24=_0x2b30b0[_0x3b2a7f(0x4fc6)][_0x38ca53],_0x2b30b0[_0x3b2a7f(0x3572)]=_0x185a7d+(_0x3f7e6f?0x1:0x0);const _0x2e0996=_0x2b30b0['push'](_0x3b2a7f(0x141b),'code',0x0);return _0x2e0996[_0x3b2a7f(0x3a85)]=_0x4845b4,_0x2e0996[_0x3b2a7f(0x484f)]=_0x2b30b0[_0x3b2a7f(0xfef)](_0x38ca53+0x1,_0x185a7d,_0x2a6c24,!0x0),_0x2e0996[_0x3b2a7f(0x2bd8)]=_0x86f54a,_0x2e0996[_0x3b2a7f(0x4833)]=[_0x38ca53,_0x2b30b0[_0x3b2a7f(0x3572)]],!0x0;},['paragraph',_0x37e46c(0x3d0f),_0x37e46c(0x702),'list']],[_0x37e46c(0x702),function(_0xb3d428,_0x2af5a9,_0x1e3123,_0x5c6cec){const _0xf3a051=_0x37e46c;let _0x2d4fff=_0xb3d428[_0xf3a051(0x1b61)][_0x2af5a9]+_0xb3d428[_0xf3a051(0x1869)][_0x2af5a9],_0x2c5214=_0xb3d428['eMarks'][_0x2af5a9];const _0xdfb89e=_0xb3d428[_0xf3a051(0x2a61)];if(_0xb3d428[_0xf3a051(0x4fc6)][_0x2af5a9]-_0xb3d428[_0xf3a051(0x1de0)]>=0x4)return!0x1;if(0x3e!==_0xb3d428[_0xf3a051(0x3d6)][_0xf3a051(0x4955)](_0x2d4fff))return!0x1;if(_0x5c6cec)return!0x0;const _0x4cd9ba=[],_0xd22632=[],_0x5426b3=[],_0x1fa65e=[],_0x218aee=_0xb3d428['md'][_0xf3a051(0x1f2e)][_0xf3a051(0x2eb9)][_0xf3a051(0x433a)]('blockquote'),_0x378779=_0xb3d428[_0xf3a051(0x43fb)];_0xb3d428[_0xf3a051(0x43fb)]=_0xf3a051(0x702);let _0x33a314,_0x561360=!0x1;for(_0x33a314=_0x2af5a9;_0x33a314<_0x1e3123;_0x33a314++){const 
_0x2aeb5f=_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]<_0xb3d428[_0xf3a051(0x1de0)];if(_0x2d4fff=_0xb3d428[_0xf3a051(0x1b61)][_0x33a314]+_0xb3d428[_0xf3a051(0x1869)][_0x33a314],_0x2c5214=_0xb3d428[_0xf3a051(0x2e82)][_0x33a314],_0x2d4fff>=_0x2c5214)break;if(0x3e===_0xb3d428['src'][_0xf3a051(0x4955)](_0x2d4fff++)&&!_0x2aeb5f){let _0x23645d,_0x557a82,_0xed2d5b=_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]+0x1;0x20===_0xb3d428[_0xf3a051(0x3d6)][_0xf3a051(0x4955)](_0x2d4fff)?(_0x2d4fff++,_0xed2d5b++,_0x557a82=!0x1,_0x23645d=!0x0):0x9===_0xb3d428[_0xf3a051(0x3d6)][_0xf3a051(0x4955)](_0x2d4fff)?(_0x23645d=!0x0,(_0xb3d428[_0xf3a051(0x16d3)][_0x33a314]+_0xed2d5b)%0x4==0x3?(_0x2d4fff++,_0xed2d5b++,_0x557a82=!0x1):_0x557a82=!0x0):_0x23645d=!0x1;let _0x1864be=_0xed2d5b;for(_0x4cd9ba[_0xf3a051(0x1715)](_0xb3d428['bMarks'][_0x33a314]),_0xb3d428[_0xf3a051(0x1b61)][_0x33a314]=_0x2d4fff;_0x2d4fff<_0x2c5214;){const _0x10b272=_0xb3d428[_0xf3a051(0x3d6)][_0xf3a051(0x4955)](_0x2d4fff);if(!_0x290708(_0x10b272))break;0x9===_0x10b272?_0x1864be+=0x4-(_0x1864be+_0xb3d428['bsCount'][_0x33a314]+(_0x557a82?0x1:0x0))%0x4:_0x1864be++,_0x2d4fff++;}_0x561360=_0x2d4fff>=_0x2c5214,_0xd22632[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x16d3)][_0x33a314]),_0xb3d428[_0xf3a051(0x16d3)][_0x33a314]=_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]+0x1+(_0x23645d?0x1:0x0),_0x5426b3[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]),_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]=_0x1864be-_0xed2d5b,_0x1fa65e[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x1869)][_0x33a314]),_0xb3d428[_0xf3a051(0x1869)][_0x33a314]=_0x2d4fff-_0xb3d428[_0xf3a051(0x1b61)][_0x33a314];continue;}if(_0x561360)break;let _0x31ed62=!0x1;for(let 
_0xa8591b=0x0,_0x4220d9=_0x218aee[_0xf3a051(0x1b19)];_0xa8591b<_0x4220d9;_0xa8591b++)if(_0x218aee[_0xa8591b](_0xb3d428,_0x33a314,_0x1e3123,!0x0)){_0x31ed62=!0x0;break;}if(_0x31ed62){_0xb3d428['lineMax']=_0x33a314,0x0!==_0xb3d428[_0xf3a051(0x1de0)]&&(_0x4cd9ba[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x1b61)][_0x33a314]),_0xd22632[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x16d3)][_0x33a314]),_0x1fa65e['push'](_0xb3d428[_0xf3a051(0x1869)][_0x33a314]),_0x5426b3[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]),_0xb3d428['sCount'][_0x33a314]-=_0xb3d428['blkIndent']);break;}_0x4cd9ba[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x1b61)][_0x33a314]),_0xd22632[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x16d3)][_0x33a314]),_0x1fa65e[_0xf3a051(0x1715)](_0xb3d428[_0xf3a051(0x1869)][_0x33a314]),_0x5426b3['push'](_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]),_0xb3d428[_0xf3a051(0x4fc6)][_0x33a314]=-0x1;}const _0x2f1bed=_0xb3d428[_0xf3a051(0x1de0)];_0xb3d428[_0xf3a051(0x1de0)]=0x0;const _0x42362a=_0xb3d428[_0xf3a051(0x1715)](_0xf3a051(0x7a7),_0xf3a051(0x702),0x1);_0x42362a['markup']='>';const _0x5ed782=[_0x2af5a9,0x0];_0x42362a[_0xf3a051(0x4833)]=_0x5ed782,_0xb3d428['md'][_0xf3a051(0x1f2e)][_0xf3a051(0x3c0b)](_0xb3d428,_0x2af5a9,_0x33a314),_0xb3d428[_0xf3a051(0x1715)]('blockquote_close',_0xf3a051(0x702),-0x1)[_0xf3a051(0x2bd8)]='>',_0xb3d428[_0xf3a051(0x2a61)]=_0xdfb89e,_0xb3d428[_0xf3a051(0x43fb)]=_0x378779,_0x5ed782[0x1]=_0xb3d428[_0xf3a051(0x3572)];for(let _0x5d0876=0x0;_0x5d0876<_0x1fa65e['length'];_0x5d0876++)_0xb3d428[_0xf3a051(0x1b61)][_0x5d0876+_0x2af5a9]=_0x4cd9ba[_0x5d0876],_0xb3d428[_0xf3a051(0x1869)][_0x5d0876+_0x2af5a9]=_0x1fa65e[_0x5d0876],_0xb3d428[_0xf3a051(0x4fc6)][_0x5d0876+_0x2af5a9]=_0x5426b3[_0x5d0876],_0xb3d428['bsCount'][_0x5d0876+_0x2af5a9]=_0xd22632[_0x5d0876];return _0xb3d428[_0xf3a051(0x1de0)]=_0x2f1bed,!0x0;},[_0x37e46c(0xc3d),_0x37e46c(0x3d0f),_0x37e46c(0x702),_0x37e46c(0x144e)]],['hr',function(_0x2ee8dd,_0x4bfdcd,_0x556800,_0x428936){const 
_0x216f63=_0x37e46c,_0x12ab67=_0x2ee8dd['eMarks'][_0x4bfdcd];if(_0x2ee8dd[_0x216f63(0x4fc6)][_0x4bfdcd]-_0x2ee8dd[_0x216f63(0x1de0)]>=0x4)return!0x1;let _0xe8a8b5=_0x2ee8dd[_0x216f63(0x1b61)][_0x4bfdcd]+_0x2ee8dd[_0x216f63(0x1869)][_0x4bfdcd];const _0x38da7d=_0x2ee8dd[_0x216f63(0x3d6)][_0x216f63(0x4955)](_0xe8a8b5++);if(0x2a!==_0x38da7d&&0x2d!==_0x38da7d&&0x5f!==_0x38da7d)return!0x1;let _0x48ae4c=0x1;for(;_0xe8a8b5<_0x12ab67;){const _0x5889ca=_0x2ee8dd[_0x216f63(0x3d6)][_0x216f63(0x4955)](_0xe8a8b5++);if(_0x5889ca!==_0x38da7d&&!_0x290708(_0x5889ca))return!0x1;_0x5889ca===_0x38da7d&&_0x48ae4c++;}if(_0x48ae4c<0x3)return!0x1;if(_0x428936)return!0x0;_0x2ee8dd[_0x216f63(0x3572)]=_0x4bfdcd+0x1;const _0x14558e=_0x2ee8dd['push']('hr','hr',0x0);return _0x14558e[_0x216f63(0x4833)]=[_0x4bfdcd,_0x2ee8dd[_0x216f63(0x3572)]],_0x14558e[_0x216f63(0x2bd8)]=Array(_0x48ae4c+0x1)[_0x216f63(0x3541)](String[_0x216f63(0x49bf)](_0x38da7d)),!0x0;},[_0x37e46c(0xc3d),_0x37e46c(0x3d0f),_0x37e46c(0x702),_0x37e46c(0x144e)]],[_0x37e46c(0x144e),function(_0x4e3e21,_0x141b71,_0xd6f8d7,_0x5f167c){const _0x2407ed=_0x37e46c;let _0xa8125d,_0x5e818a,_0x3d6a8b,_0x4f92b5,_0x30ecae=_0x141b71,_0x9bce26=!0x0;if(_0x4e3e21[_0x2407ed(0x4fc6)][_0x30ecae]-_0x4e3e21[_0x2407ed(0x1de0)]>=0x4)return!0x1;if(_0x4e3e21[_0x2407ed(0x1a8a)]>=0x0&&_0x4e3e21['sCount'][_0x30ecae]-_0x4e3e21[_0x2407ed(0x1a8a)]>=0x4&&_0x4e3e21['sCount'][_0x30ecae]<_0x4e3e21[_0x2407ed(0x1de0)])return!0x1;let 
_0x2cd441,_0x2cca76,_0x37a3f4,_0x44abe7=!0x1;if(_0x5f167c&&_0x2407ed(0xc3d)===_0x4e3e21['parentType']&&_0x4e3e21[_0x2407ed(0x4fc6)][_0x30ecae]>=_0x4e3e21[_0x2407ed(0x1de0)]&&(_0x44abe7=!0x0),(_0x37a3f4=_0x4b87e7(_0x4e3e21,_0x30ecae))>=0x0){if(_0x2cd441=!0x0,_0x3d6a8b=_0x4e3e21[_0x2407ed(0x1b61)][_0x30ecae]+_0x4e3e21['tShift'][_0x30ecae],_0x2cca76=Number(_0x4e3e21[_0x2407ed(0x3d6)]['slice'](_0x3d6a8b,_0x37a3f4-0x1)),_0x44abe7&&0x1!==_0x2cca76)return!0x1;}else{if(!((_0x37a3f4=_0x2e1fd0(_0x4e3e21,_0x30ecae))>=0x0))return!0x1;_0x2cd441=!0x1;}if(_0x44abe7&&_0x4e3e21['skipSpaces'](_0x37a3f4)>=_0x4e3e21[_0x2407ed(0x2e82)][_0x30ecae])return!0x1;if(_0x5f167c)return!0x0;const _0x49310f=_0x4e3e21[_0x2407ed(0x3d6)][_0x2407ed(0x4955)](_0x37a3f4-0x1),_0x4c73b0=_0x4e3e21['tokens']['length'];_0x2cd441?(_0x4f92b5=_0x4e3e21['push'](_0x2407ed(0x1dfb),'ol',0x1),0x1!==_0x2cca76&&(_0x4f92b5[_0x2407ed(0x70e)]=[[_0x2407ed(0x4cc4),_0x2cca76]])):_0x4f92b5=_0x4e3e21[_0x2407ed(0x1715)](_0x2407ed(0x2b39),'ul',0x1);const _0x49a562=[_0x30ecae,0x0];_0x4f92b5[_0x2407ed(0x4833)]=_0x49a562,_0x4f92b5[_0x2407ed(0x2bd8)]=String[_0x2407ed(0x49bf)](_0x49310f);let _0x30481d=!0x1;const _0x19d929=_0x4e3e21['md']['block'][_0x2407ed(0x2eb9)][_0x2407ed(0x433a)]('list'),_0x11f1b5=_0x4e3e21[_0x2407ed(0x43fb)];for(_0x4e3e21[_0x2407ed(0x43fb)]=_0x2407ed(0x144e);_0x30ecae<_0xd6f8d7;){_0x5e818a=_0x37a3f4,_0xa8125d=_0x4e3e21[_0x2407ed(0x2e82)][_0x30ecae];const _0x267675=_0x4e3e21[_0x2407ed(0x4fc6)][_0x30ecae]+_0x37a3f4-(_0x4e3e21['bMarks'][_0x30ecae]+_0x4e3e21['tShift'][_0x30ecae]);let _0x424fa0=_0x267675;for(;_0x5e818a<_0xa8125d;){const _0x1bee6f=_0x4e3e21['src'][_0x2407ed(0x4955)](_0x5e818a);if(0x9===_0x1bee6f)_0x424fa0+=0x4-(_0x424fa0+_0x4e3e21[_0x2407ed(0x16d3)][_0x30ecae])%0x4;else{if(0x20!==_0x1bee6f)break;_0x424fa0++;}_0x5e818a++;}const _0x4831d5=_0x5e818a;let _0x4cf37a;_0x4cf37a=_0x4831d5>=_0xa8125d?0x1:_0x424fa0-_0x267675,_0x4cf37a>0x4&&(_0x4cf37a=0x1);const 
_0x5f0c7c=_0x267675+_0x4cf37a;_0x4f92b5=_0x4e3e21['push'](_0x2407ed(0x3995),'li',0x1),_0x4f92b5[_0x2407ed(0x2bd8)]=String[_0x2407ed(0x49bf)](_0x49310f);const _0x2b17ae=[_0x30ecae,0x0];_0x4f92b5[_0x2407ed(0x4833)]=_0x2b17ae,_0x2cd441&&(_0x4f92b5[_0x2407ed(0x3a85)]=_0x4e3e21[_0x2407ed(0x3d6)][_0x2407ed(0x384c)](_0x3d6a8b,_0x37a3f4-0x1));const _0x3cf238=_0x4e3e21[_0x2407ed(0x4347)],_0x3967df=_0x4e3e21[_0x2407ed(0x1869)][_0x30ecae],_0x3d9156=_0x4e3e21['sCount'][_0x30ecae],_0x4b648e=_0x4e3e21['listIndent'];if(_0x4e3e21['listIndent']=_0x4e3e21[_0x2407ed(0x1de0)],_0x4e3e21[_0x2407ed(0x1de0)]=_0x5f0c7c,_0x4e3e21[_0x2407ed(0x4347)]=!0x0,_0x4e3e21[_0x2407ed(0x1869)][_0x30ecae]=_0x4831d5-_0x4e3e21[_0x2407ed(0x1b61)][_0x30ecae],_0x4e3e21[_0x2407ed(0x4fc6)][_0x30ecae]=_0x424fa0,_0x4831d5>=_0xa8125d&&_0x4e3e21[_0x2407ed(0x21cf)](_0x30ecae+0x1)?_0x4e3e21[_0x2407ed(0x3572)]=Math[_0x2407ed(0x37c8)](_0x4e3e21[_0x2407ed(0x3572)]+0x2,_0xd6f8d7):_0x4e3e21['md'][_0x2407ed(0x1f2e)][_0x2407ed(0x3c0b)](_0x4e3e21,_0x30ecae,_0xd6f8d7,!0x0),_0x4e3e21[_0x2407ed(0x4347)]&&!_0x30481d||(_0x9bce26=!0x1),_0x30481d=_0x4e3e21['line']-_0x30ecae>0x1&&_0x4e3e21[_0x2407ed(0x21cf)](_0x4e3e21['line']-0x1),_0x4e3e21[_0x2407ed(0x1de0)]=_0x4e3e21[_0x2407ed(0x1a8a)],_0x4e3e21[_0x2407ed(0x1a8a)]=_0x4b648e,_0x4e3e21[_0x2407ed(0x1869)][_0x30ecae]=_0x3967df,_0x4e3e21[_0x2407ed(0x4fc6)][_0x30ecae]=_0x3d9156,_0x4e3e21[_0x2407ed(0x4347)]=_0x3cf238,_0x4f92b5=_0x4e3e21[_0x2407ed(0x1715)]('list_item_close','li',-0x1),_0x4f92b5['markup']=String[_0x2407ed(0x49bf)](_0x49310f),_0x30ecae=_0x4e3e21[_0x2407ed(0x3572)],_0x2b17ae[0x1]=_0x30ecae,_0x30ecae>=_0xd6f8d7)break;if(_0x4e3e21['sCount'][_0x30ecae]<_0x4e3e21['blkIndent'])break;if(_0x4e3e21[_0x2407ed(0x4fc6)][_0x30ecae]-_0x4e3e21['blkIndent']>=0x4)break;let _0x32ab76=!0x1;for(let 
_0x1d40a1=0x0,_0x345e4c=_0x19d929[_0x2407ed(0x1b19)];_0x1d40a1<_0x345e4c;_0x1d40a1++)if(_0x19d929[_0x1d40a1](_0x4e3e21,_0x30ecae,_0xd6f8d7,!0x0)){_0x32ab76=!0x0;break;}if(_0x32ab76)break;if(_0x2cd441){if(_0x37a3f4=_0x4b87e7(_0x4e3e21,_0x30ecae),_0x37a3f4<0x0)break;_0x3d6a8b=_0x4e3e21[_0x2407ed(0x1b61)][_0x30ecae]+_0x4e3e21[_0x2407ed(0x1869)][_0x30ecae];}else{if(_0x37a3f4=_0x2e1fd0(_0x4e3e21,_0x30ecae),_0x37a3f4<0x0)break;}if(_0x49310f!==_0x4e3e21[_0x2407ed(0x3d6)]['charCodeAt'](_0x37a3f4-0x1))break;}return _0x4f92b5=_0x2cd441?_0x4e3e21[_0x2407ed(0x1715)](_0x2407ed(0x5b2),'ol',-0x1):_0x4e3e21['push']('bullet_list_close','ul',-0x1),_0x4f92b5[_0x2407ed(0x2bd8)]=String['fromCharCode'](_0x49310f),_0x49a562[0x1]=_0x30ecae,_0x4e3e21[_0x2407ed(0x3572)]=_0x30ecae,_0x4e3e21[_0x2407ed(0x43fb)]=_0x11f1b5,_0x9bce26&&function(_0x5e46b9,_0x596091){const _0x22b85c=_0x2407ed,_0x3d4b7b=_0x5e46b9[_0x22b85c(0x1fe)]+0x2;for(let _0x5a8857=_0x596091+0x2,_0x482fcb=_0x5e46b9['tokens']['length']-0x2;_0x5a8857<_0x482fcb;_0x5a8857++)_0x5e46b9[_0x22b85c(0x1c34)][_0x5a8857][_0x22b85c(0x1fe)]===_0x3d4b7b&&_0x22b85c(0x4903)===_0x5e46b9[_0x22b85c(0x1c34)][_0x5a8857]['type']&&(_0x5e46b9['tokens'][_0x5a8857+0x2][_0x22b85c(0x53f)]=!0x0,_0x5e46b9['tokens'][_0x5a8857][_0x22b85c(0x53f)]=!0x0,_0x5a8857+=0x2);}(_0x4e3e21,_0x4c73b0),!0x0;},[_0x37e46c(0xc3d),'reference',_0x37e46c(0x702)]],['reference',function(_0x1a1405,_0x3232f6,_0x29c75f,_0x3ec01a){const _0x28f7e9=_0x37e46c;let 
_0x4f5a05=0x0,_0x540682=_0x1a1405['bMarks'][_0x3232f6]+_0x1a1405[_0x28f7e9(0x1869)][_0x3232f6],_0x18687a=_0x1a1405['eMarks'][_0x3232f6],_0x3d6c65=_0x3232f6+0x1;if(_0x1a1405[_0x28f7e9(0x4fc6)][_0x3232f6]-_0x1a1405[_0x28f7e9(0x1de0)]>=0x4)return!0x1;if(0x5b!==_0x1a1405['src'][_0x28f7e9(0x4955)](_0x540682))return!0x1;for(;++_0x540682<_0x18687a;)if(0x5d===_0x1a1405[_0x28f7e9(0x3d6)][_0x28f7e9(0x4955)](_0x540682)&&0x5c!==_0x1a1405[_0x28f7e9(0x3d6)][_0x28f7e9(0x4955)](_0x540682-0x1)){if(_0x540682+0x1===_0x18687a)return!0x1;if(0x3a!==_0x1a1405['src']['charCodeAt'](_0x540682+0x1))return!0x1;break;}const _0x1413bf=_0x1a1405[_0x28f7e9(0x2a61)],_0x1b3888=_0x1a1405['md'][_0x28f7e9(0x1f2e)][_0x28f7e9(0x2eb9)][_0x28f7e9(0x433a)](_0x28f7e9(0x3d0f)),_0x50215a=_0x1a1405[_0x28f7e9(0x43fb)];for(_0x1a1405[_0x28f7e9(0x43fb)]=_0x28f7e9(0x3d0f);_0x3d6c65<_0x1413bf&&!_0x1a1405[_0x28f7e9(0x21cf)](_0x3d6c65);_0x3d6c65++){if(_0x1a1405[_0x28f7e9(0x4fc6)][_0x3d6c65]-_0x1a1405[_0x28f7e9(0x1de0)]>0x3)continue;if(_0x1a1405[_0x28f7e9(0x4fc6)][_0x3d6c65]<0x0)continue;let _0x40f74c=!0x1;for(let _0x4ffdeb=0x0,_0x501bec=_0x1b3888[_0x28f7e9(0x1b19)];_0x4ffdeb<_0x501bec;_0x4ffdeb++)if(_0x1b3888[_0x4ffdeb](_0x1a1405,_0x3d6c65,_0x1413bf,!0x0)){_0x40f74c=!0x0;break;}if(_0x40f74c)break;}const _0x552df3=_0x1a1405['getLines'](_0x3232f6,_0x3d6c65,_0x1a1405[_0x28f7e9(0x1de0)],!0x1)[_0x28f7e9(0x1b23)]();_0x18687a=_0x552df3[_0x28f7e9(0x1b19)];let _0x30e67d=-0x1;for(_0x540682=0x1;_0x540682<_0x18687a;_0x540682++){const _0x4b2893=_0x552df3[_0x28f7e9(0x4955)](_0x540682);if(0x5b===_0x4b2893)return!0x1;if(0x5d===_0x4b2893){_0x30e67d=_0x540682;break;}0xa===_0x4b2893?_0x4f5a05++:0x5c===_0x4b2893&&(_0x540682++,_0x540682<_0x18687a&&0xa===_0x552df3['charCodeAt'](_0x540682)&&_0x4f5a05++);}if(_0x30e67d<0x0||0x3a!==_0x552df3[_0x28f7e9(0x4955)](_0x30e67d+0x1))return!0x1;for(_0x540682=_0x30e67d+0x2;_0x540682<_0x18687a;_0x540682++){const 
_0x400526=_0x552df3[_0x28f7e9(0x4955)](_0x540682);if(0xa===_0x400526)_0x4f5a05++;else{if(!_0x290708(_0x400526))break;}}const _0x3e4d75=_0x1a1405['md']['helpers'][_0x28f7e9(0x2762)](_0x552df3,_0x540682,_0x18687a);if(!_0x3e4d75['ok'])return!0x1;const _0xf88ed6=_0x1a1405['md'][_0x28f7e9(0x694)](_0x3e4d75['str']);if(!_0x1a1405['md']['validateLink'](_0xf88ed6))return!0x1;_0x540682=_0x3e4d75[_0x28f7e9(0x333f)],_0x4f5a05+=_0x3e4d75[_0x28f7e9(0x203f)];const _0x3de09b=_0x540682,_0x8b030d=_0x4f5a05,_0x2a7049=_0x540682;for(;_0x540682<_0x18687a;_0x540682++){const _0x2d1895=_0x552df3[_0x28f7e9(0x4955)](_0x540682);if(0xa===_0x2d1895)_0x4f5a05++;else{if(!_0x290708(_0x2d1895))break;}}const _0x20df4a=_0x1a1405['md'][_0x28f7e9(0x429b)][_0x28f7e9(0x237d)](_0x552df3,_0x540682,_0x18687a);let _0x5dc6d5;for(_0x540682<_0x18687a&&_0x2a7049!==_0x540682&&_0x20df4a['ok']?(_0x5dc6d5=_0x20df4a[_0x28f7e9(0x257f)],_0x540682=_0x20df4a['pos'],_0x4f5a05+=_0x20df4a['lines']):(_0x5dc6d5='',_0x540682=_0x3de09b,_0x4f5a05=_0x8b030d);_0x540682<_0x18687a;){if(!_0x290708(_0x552df3['charCodeAt'](_0x540682)))break;_0x540682++;}if(_0x540682<_0x18687a&&0xa!==_0x552df3['charCodeAt'](_0x540682)&&_0x5dc6d5)for(_0x5dc6d5='',_0x540682=_0x3de09b,_0x4f5a05=_0x8b030d;_0x540682<_0x18687a;){if(!_0x290708(_0x552df3[_0x28f7e9(0x4955)](_0x540682)))break;_0x540682++;}if(_0x540682<_0x18687a&&0xa!==_0x552df3[_0x28f7e9(0x4955)](_0x540682))return!0x1;const _0x1cb8b6=_0x1fe7b1(_0x552df3[_0x28f7e9(0x384c)](0x1,_0x30e67d));return!!_0x1cb8b6&&(_0x3ec01a||(void 0x0===_0x1a1405[_0x28f7e9(0xe1a)][_0x28f7e9(0xeb2)]&&(_0x1a1405[_0x28f7e9(0xe1a)]['references']={}),void 0x0===_0x1a1405[_0x28f7e9(0xe1a)]['references'][_0x1cb8b6]&&(_0x1a1405[_0x28f7e9(0xe1a)][_0x28f7e9(0xeb2)][_0x1cb8b6]={'title':_0x5dc6d5,'href':_0xf88ed6}),_0x1a1405[_0x28f7e9(0x43fb)]=_0x50215a,_0x1a1405[_0x28f7e9(0x3572)]=_0x3232f6+_0x4f5a05+0x1),!0x0);}],[_0x37e46c(0x4bcf),function(_0x24d2a0,_0x93f30b,_0x3ca427,_0x28ab12){const _0x2474e1=_0x37e46c;let 
_0xaa08b8=_0x24d2a0['bMarks'][_0x93f30b]+_0x24d2a0['tShift'][_0x93f30b],_0x2aca49=_0x24d2a0[_0x2474e1(0x2e82)][_0x93f30b];if(_0x24d2a0['sCount'][_0x93f30b]-_0x24d2a0['blkIndent']>=0x4)return!0x1;if(!_0x24d2a0['md'][_0x2474e1(0x20b6)]['html'])return!0x1;if(0x3c!==_0x24d2a0[_0x2474e1(0x3d6)][_0x2474e1(0x4955)](_0xaa08b8))return!0x1;let _0xc573d4=_0x24d2a0[_0x2474e1(0x3d6)][_0x2474e1(0x384c)](_0xaa08b8,_0x2aca49),_0x495cb3=0x0;for(;_0x495cb3<_0xa843f5[_0x2474e1(0x1b19)]&&!_0xa843f5[_0x495cb3][0x0]['test'](_0xc573d4);_0x495cb3++);if(_0x495cb3===_0xa843f5['length'])return!0x1;if(_0x28ab12)return _0xa843f5[_0x495cb3][0x2];let _0x2a1c44=_0x93f30b+0x1;if(!_0xa843f5[_0x495cb3][0x1]['test'](_0xc573d4)){for(;_0x2a1c44<_0x3ca427&&!(_0x24d2a0[_0x2474e1(0x4fc6)][_0x2a1c44]<_0x24d2a0[_0x2474e1(0x1de0)]);_0x2a1c44++)if(_0xaa08b8=_0x24d2a0[_0x2474e1(0x1b61)][_0x2a1c44]+_0x24d2a0[_0x2474e1(0x1869)][_0x2a1c44],_0x2aca49=_0x24d2a0[_0x2474e1(0x2e82)][_0x2a1c44],_0xc573d4=_0x24d2a0[_0x2474e1(0x3d6)][_0x2474e1(0x384c)](_0xaa08b8,_0x2aca49),_0xa843f5[_0x495cb3][0x1]['test'](_0xc573d4)){0x0!==_0xc573d4[_0x2474e1(0x1b19)]&&_0x2a1c44++;break;}}_0x24d2a0['line']=_0x2a1c44;const _0x2509db=_0x24d2a0[_0x2474e1(0x1715)](_0x2474e1(0x4bcf),'',0x0);return _0x2509db[_0x2474e1(0x4833)]=[_0x93f30b,_0x2a1c44],_0x2509db[_0x2474e1(0x484f)]=_0x24d2a0[_0x2474e1(0xfef)](_0x93f30b,_0x2a1c44,_0x24d2a0[_0x2474e1(0x1de0)],!0x0),!0x0;},[_0x37e46c(0xc3d),'reference',_0x37e46c(0x702)]],[_0x37e46c(0x561),function(_0x520e2d,_0x2411ea,_0x5f06e7,_0xd69203){const _0x457504=_0x37e46c;let _0x176bf2=_0x520e2d[_0x457504(0x1b61)][_0x2411ea]+_0x520e2d[_0x457504(0x1869)][_0x2411ea],_0x3edca3=_0x520e2d[_0x457504(0x2e82)][_0x2411ea];if(_0x520e2d[_0x457504(0x4fc6)][_0x2411ea]-_0x520e2d['blkIndent']>=0x4)return!0x1;let _0x58ac6c=_0x520e2d[_0x457504(0x3d6)]['charCodeAt'](_0x176bf2);if(0x23!==_0x58ac6c||_0x176bf2>=_0x3edca3)return!0x1;let 
_0x4f9159=0x1;for(_0x58ac6c=_0x520e2d[_0x457504(0x3d6)][_0x457504(0x4955)](++_0x176bf2);0x23===_0x58ac6c&&_0x176bf2<_0x3edca3&&_0x4f9159<=0x6;)_0x4f9159++,_0x58ac6c=_0x520e2d[_0x457504(0x3d6)]['charCodeAt'](++_0x176bf2);if(_0x4f9159>0x6||_0x176bf2<_0x3edca3&&!_0x290708(_0x58ac6c))return!0x1;if(_0xd69203)return!0x0;_0x3edca3=_0x520e2d[_0x457504(0x3bac)](_0x3edca3,_0x176bf2);const _0x2771fe=_0x520e2d[_0x457504(0x46b5)](_0x3edca3,0x23,_0x176bf2);_0x2771fe>_0x176bf2&&_0x290708(_0x520e2d[_0x457504(0x3d6)][_0x457504(0x4955)](_0x2771fe-0x1))&&(_0x3edca3=_0x2771fe),_0x520e2d['line']=_0x2411ea+0x1;const _0x42820d=_0x520e2d['push'](_0x457504(0x3e71),'h'+String(_0x4f9159),0x1);_0x42820d[_0x457504(0x2bd8)]='########'[_0x457504(0x384c)](0x0,_0x4f9159),_0x42820d[_0x457504(0x4833)]=[_0x2411ea,_0x520e2d[_0x457504(0x3572)]];const _0x225622=_0x520e2d[_0x457504(0x1715)](_0x457504(0x2988),'',0x0);return _0x225622[_0x457504(0x484f)]=_0x520e2d[_0x457504(0x3d6)][_0x457504(0x384c)](_0x176bf2,_0x3edca3)[_0x457504(0x1b23)](),_0x225622[_0x457504(0x4833)]=[_0x2411ea,_0x520e2d['line']],_0x225622[_0x457504(0x4c3e)]=[],_0x520e2d[_0x457504(0x1715)](_0x457504(0x3b78),'h'+String(_0x4f9159),-0x1)['markup']='########'['slice'](0x0,_0x4f9159),!0x0;},[_0x37e46c(0xc3d),_0x37e46c(0x3d0f),_0x37e46c(0x702)]],[_0x37e46c(0x960),function(_0x409541,_0x29a362,_0x439b09){const _0x483a64=_0x37e46c,_0x3a81b7=_0x409541['md'][_0x483a64(0x1f2e)]['ruler']['getRules']('paragraph');if(_0x409541['sCount'][_0x29a362]-_0x409541['blkIndent']>=0x4)return!0x1;const _0x4d2e61=_0x409541[_0x483a64(0x43fb)];_0x409541['parentType']='paragraph';let _0x39f97d,_0x317083=0x0,_0xafa0b3=_0x29a362+0x1;for(;_0xafa0b3<_0x439b09&&!_0x409541[_0x483a64(0x21cf)](_0xafa0b3);_0xafa0b3++){if(_0x409541[_0x483a64(0x4fc6)][_0xafa0b3]-_0x409541[_0x483a64(0x1de0)]>0x3)continue;if(_0x409541[_0x483a64(0x4fc6)][_0xafa0b3]>=_0x409541[_0x483a64(0x1de0)]){let _0x3593e8=_0x409541[_0x483a64(0x1b61)][_0xafa0b3]+_0x409541[_0x483a64(0x1869)][_0xafa0b3];const 
_0x303f6e=_0x409541['eMarks'][_0xafa0b3];if(_0x3593e8<_0x303f6e&&(_0x39f97d=_0x409541[_0x483a64(0x3d6)][_0x483a64(0x4955)](_0x3593e8),(0x2d===_0x39f97d||0x3d===_0x39f97d)&&(_0x3593e8=_0x409541['skipChars'](_0x3593e8,_0x39f97d),_0x3593e8=_0x409541[_0x483a64(0x162a)](_0x3593e8),_0x3593e8>=_0x303f6e))){_0x317083=0x3d===_0x39f97d?0x1:0x2;break;}}if(_0x409541[_0x483a64(0x4fc6)][_0xafa0b3]<0x0)continue;let _0x1e6438=!0x1;for(let _0x57b8bd=0x0,_0x48ce9e=_0x3a81b7[_0x483a64(0x1b19)];_0x57b8bd<_0x48ce9e;_0x57b8bd++)if(_0x3a81b7[_0x57b8bd](_0x409541,_0xafa0b3,_0x439b09,!0x0)){_0x1e6438=!0x0;break;}if(_0x1e6438)break;}if(!_0x317083)return!0x1;const _0x2ffa0a=_0x409541[_0x483a64(0xfef)](_0x29a362,_0xafa0b3,_0x409541[_0x483a64(0x1de0)],!0x1)[_0x483a64(0x1b23)]();_0x409541['line']=_0xafa0b3+0x1;const _0x42dce1=_0x409541[_0x483a64(0x1715)](_0x483a64(0x3e71),'h'+String(_0x317083),0x1);_0x42dce1[_0x483a64(0x2bd8)]=String[_0x483a64(0x49bf)](_0x39f97d),_0x42dce1[_0x483a64(0x4833)]=[_0x29a362,_0x409541[_0x483a64(0x3572)]];const _0x1ecaf3=_0x409541['push'](_0x483a64(0x2988),'',0x0);return _0x1ecaf3[_0x483a64(0x484f)]=_0x2ffa0a,_0x1ecaf3['map']=[_0x29a362,_0x409541['line']-0x1],_0x1ecaf3[_0x483a64(0x4c3e)]=[],_0x409541[_0x483a64(0x1715)](_0x483a64(0x3b78),'h'+String(_0x317083),-0x1)['markup']=String[_0x483a64(0x49bf)](_0x39f97d),_0x409541[_0x483a64(0x43fb)]=_0x4d2e61,!0x0;}],['paragraph',function(_0x5c4aeb,_0x403200,_0x29fc3a){const _0x45887e=_0x37e46c,_0x203b29=_0x5c4aeb['md']['block'][_0x45887e(0x2eb9)]['getRules'](_0x45887e(0xc3d)),_0x55affd=_0x5c4aeb[_0x45887e(0x43fb)];let _0x918f48=_0x403200+0x1;for(_0x5c4aeb['parentType']='paragraph';_0x918f48<_0x29fc3a&&!_0x5c4aeb['isEmpty'](_0x918f48);_0x918f48++){if(_0x5c4aeb[_0x45887e(0x4fc6)][_0x918f48]-_0x5c4aeb['blkIndent']>0x3)continue;if(_0x5c4aeb['sCount'][_0x918f48]<0x0)continue;let _0x5d0cb4=!0x1;for(let 
_0x3bf651=0x0,_0x429c3d=_0x203b29[_0x45887e(0x1b19)];_0x3bf651<_0x429c3d;_0x3bf651++)if(_0x203b29[_0x3bf651](_0x5c4aeb,_0x918f48,_0x29fc3a,!0x0)){_0x5d0cb4=!0x0;break;}if(_0x5d0cb4)break;}const _0x1ef9c9=_0x5c4aeb[_0x45887e(0xfef)](_0x403200,_0x918f48,_0x5c4aeb[_0x45887e(0x1de0)],!0x1)[_0x45887e(0x1b23)]();_0x5c4aeb[_0x45887e(0x3572)]=_0x918f48,_0x5c4aeb[_0x45887e(0x1715)](_0x45887e(0x4903),'p',0x1)[_0x45887e(0x4833)]=[_0x403200,_0x5c4aeb['line']];const _0x3d4647=_0x5c4aeb[_0x45887e(0x1715)](_0x45887e(0x2988),'',0x0);return _0x3d4647[_0x45887e(0x484f)]=_0x1ef9c9,_0x3d4647[_0x45887e(0x4833)]=[_0x403200,_0x5c4aeb[_0x45887e(0x3572)]],_0x3d4647[_0x45887e(0x4c3e)]=[],_0x5c4aeb[_0x45887e(0x1715)]('paragraph_close','p',-0x1),_0x5c4aeb[_0x45887e(0x43fb)]=_0x55affd,!0x0;}]];function _0x110c69(){const _0x39e8b6=_0x37e46c;this[_0x39e8b6(0x2eb9)]=new _0x37b2b2();for(let _0x5c8b35=0x0;_0x5c8b35<_0x276924[_0x39e8b6(0x1b19)];_0x5c8b35++)this[_0x39e8b6(0x2eb9)][_0x39e8b6(0x1715)](_0x276924[_0x5c8b35][0x0],_0x276924[_0x5c8b35][0x1],{'alt':(_0x276924[_0x5c8b35][0x2]||[])[_0x39e8b6(0x384c)]()});}_0x110c69['prototype'][_0x37e46c(0x3c0b)]=function(_0x5319d7,_0x1068ee,_0x214fa6){const _0x121fb9=_0x37e46c,_0x46928c=this[_0x121fb9(0x2eb9)][_0x121fb9(0x433a)](''),_0x5cd037=_0x46928c['length'],_0x45d40e=_0x5319d7['md'][_0x121fb9(0x20b6)]['maxNesting'];let _0x8dc128=_0x1068ee,_0x4bff25=!0x1;for(;_0x8dc128<_0x214fa6&&(_0x5319d7[_0x121fb9(0x3572)]=_0x8dc128=_0x5319d7['skipEmptyLines'](_0x8dc128),!(_0x8dc128>=_0x214fa6))&&!(_0x5319d7[_0x121fb9(0x4fc6)][_0x8dc128]<_0x5319d7[_0x121fb9(0x1de0)]);){if(_0x5319d7[_0x121fb9(0x1fe)]>=_0x45d40e){_0x5319d7[_0x121fb9(0x3572)]=_0x214fa6;break;}const _0x8b0fe3=_0x5319d7[_0x121fb9(0x3572)];let _0x307063=!0x1;for(let _0x44e832=0x0;_0x44e832<_0x5cd037;_0x44e832++)if(_0x307063=_0x46928c[_0x44e832](_0x5319d7,_0x8dc128,_0x214fa6,!0x1),_0x307063){if(_0x8b0fe3>=_0x5319d7[_0x121fb9(0x3572)])throw new Error(_0x121fb9(0x490e));break;}if(!_0x307063)throw new 
Error('none\x20of\x20the\x20block\x20rules\x20matched');_0x5319d7[_0x121fb9(0x4347)]=!_0x4bff25,_0x5319d7['isEmpty'](_0x5319d7[_0x121fb9(0x3572)]-0x1)&&(_0x4bff25=!0x0),_0x8dc128=_0x5319d7[_0x121fb9(0x3572)],_0x8dc128<_0x214fa6&&_0x5319d7[_0x121fb9(0x21cf)](_0x8dc128)&&(_0x4bff25=!0x0,_0x8dc128++,_0x5319d7[_0x121fb9(0x3572)]=_0x8dc128);}},_0x110c69['prototype'][_0x37e46c(0x2956)]=function(_0x3d29ee,_0x32bf12,_0x513216,_0x489595){const _0x490982=_0x37e46c;if(!_0x3d29ee)return;const _0x295de2=new this[(_0x490982(0x207e))](_0x3d29ee,_0x32bf12,_0x513216,_0x489595);this[_0x490982(0x3c0b)](_0x295de2,_0x295de2['line'],_0x295de2[_0x490982(0x2a61)]);},_0x110c69[_0x37e46c(0x3b3c)][_0x37e46c(0x207e)]=_0x1ca7f3;const _0x329cbe=_0x110c69;function _0x3b3f0a(_0x37ce70,_0x29865d,_0x416c1a,_0x73c5d4){const _0x5d6b74=_0x37e46c;this[_0x5d6b74(0x3d6)]=_0x37ce70,this[_0x5d6b74(0xe1a)]=_0x416c1a,this['md']=_0x29865d,this['tokens']=_0x73c5d4,this[_0x5d6b74(0x27b1)]=Array(_0x73c5d4[_0x5d6b74(0x1b19)]),this[_0x5d6b74(0x333f)]=0x0,this['posMax']=this['src'][_0x5d6b74(0x1b19)],this[_0x5d6b74(0x1fe)]=0x0,this[_0x5d6b74(0x3267)]='',this['pendingLevel']=0x0,this['cache']={},this[_0x5d6b74(0x15d9)]=[],this[_0x5d6b74(0x371b)]=[],this[_0x5d6b74(0x10f4)]={},this[_0x5d6b74(0x4a6a)]=!0x1,this[_0x5d6b74(0x6a1)]=0x0;}_0x3b3f0a[_0x37e46c(0x3b3c)]['pushPending']=function(){const _0x3d8884=_0x37e46c,_0x5bf18e=new _0x16b151('text','',0x0);return _0x5bf18e[_0x3d8884(0x484f)]=this[_0x3d8884(0x3267)],_0x5bf18e[_0x3d8884(0x1fe)]=this['pendingLevel'],this[_0x3d8884(0x1c34)]['push'](_0x5bf18e),this[_0x3d8884(0x3267)]='',_0x5bf18e;},_0x3b3f0a[_0x37e46c(0x3b3c)][_0x37e46c(0x1715)]=function(_0x4b5c9c,_0x38bc0d,_0x5189f8){const _0x274e0b=_0x37e46c;this['pending']&&this['pushPending']();const _0x68a4fe=new _0x16b151(_0x4b5c9c,_0x38bc0d,_0x5189f8);let _0x54c116=null;return 
_0x5189f8<0x0&&(this[_0x274e0b(0x1fe)]--,this[_0x274e0b(0x15d9)]=this[_0x274e0b(0x371b)][_0x274e0b(0x3d35)]()),_0x68a4fe[_0x274e0b(0x1fe)]=this[_0x274e0b(0x1fe)],_0x5189f8>0x0&&(this[_0x274e0b(0x1fe)]++,this[_0x274e0b(0x371b)][_0x274e0b(0x1715)](this['delimiters']),this[_0x274e0b(0x15d9)]=[],_0x54c116={'delimiters':this['delimiters']}),this['pendingLevel']=this[_0x274e0b(0x1fe)],this['tokens'][_0x274e0b(0x1715)](_0x68a4fe),this[_0x274e0b(0x27b1)][_0x274e0b(0x1715)](_0x54c116),_0x68a4fe;},_0x3b3f0a[_0x37e46c(0x3b3c)][_0x37e46c(0x4aad)]=function(_0x50624b,_0x3762ca){const _0x5580cb=_0x37e46c;let _0x43ed59,_0x1dbbdd,_0x274f82=!0x0,_0x31e3c8=!0x0;const _0x92bd07=this[_0x5580cb(0x3f9b)],_0x35f0e9=this[_0x5580cb(0x3d6)][_0x5580cb(0x4955)](_0x50624b),_0x52c876=_0x50624b>0x0?this[_0x5580cb(0x3d6)][_0x5580cb(0x4955)](_0x50624b-0x1):0x20;let _0x126bd1=_0x50624b;for(;_0x126bd1<_0x92bd07&&this[_0x5580cb(0x3d6)]['charCodeAt'](_0x126bd1)===_0x35f0e9;)_0x126bd1++;const _0x29af10=_0x126bd1-_0x50624b,_0x2a3424=_0x126bd1<_0x92bd07?this[_0x5580cb(0x3d6)]['charCodeAt'](_0x126bd1):0x20,_0x365508=_0x5950b9(_0x52c876)||_0x1f4fb3(String[_0x5580cb(0x49bf)](_0x52c876)),_0x28cd43=_0x5950b9(_0x2a3424)||_0x1f4fb3(String['fromCharCode'](_0x2a3424)),_0x2ccab1=_0x138dc2(_0x52c876),_0x4c0ebf=_0x138dc2(_0x2a3424);return _0x4c0ebf?_0x274f82=!0x1:_0x28cd43&&(_0x2ccab1||_0x365508||(_0x274f82=!0x1)),_0x2ccab1?_0x31e3c8=!0x1:_0x365508&&(_0x4c0ebf||_0x28cd43||(_0x31e3c8=!0x1)),_0x3762ca?(_0x43ed59=_0x274f82,_0x1dbbdd=_0x31e3c8):(_0x43ed59=_0x274f82&&(!_0x31e3c8||_0x365508),_0x1dbbdd=_0x31e3c8&&(!_0x274f82||_0x28cd43)),{'can_open':_0x43ed59,'can_close':_0x1dbbdd,'length':_0x29af10};},_0x3b3f0a[_0x37e46c(0x3b3c)][_0x37e46c(0x412a)]=_0x16b151;const _0x11d425=_0x3b3f0a;function _0x1503d7(_0x50f84f){switch(_0x50f84f){case 0xa:case 0x21:case 0x23:case 0x24:case 0x25:case 0x26:case 0x2a:case 0x2b:case 0x2d:case 0x3a:case 0x3c:case 0x3d:case 0x3e:case 0x40:case 0x5b:case 0x5c:case 0x5d:case 0x5e:case 0x5f:case 
0x60:case 0x7b:case 0x7d:case 0x7e:return!0x0;default:return!0x1;}}const _0x30d489=/(?:^|[^a-z0-9.+-])([a-z][a-z0-9.+-]*)$/i,_0x925d06=[];for(let _0x25e7b6=0x0;_0x25e7b6<0x100;_0x25e7b6++)_0x925d06[_0x37e46c(0x1715)](0x0);function _0x2a441d(_0x2db5c8,_0x4101c0){const _0x2cdf18=_0x37e46c;let _0x4ac25c;const _0x48ff47=[],_0x1e918e=_0x4101c0[_0x2cdf18(0x1b19)];for(let _0x43860a=0x0;_0x43860a<_0x1e918e;_0x43860a++){const _0x43a3f=_0x4101c0[_0x43860a];if(0x7e!==_0x43a3f[_0x2cdf18(0x5121)])continue;if(-0x1===_0x43a3f[_0x2cdf18(0x2681)])continue;const _0x197dce=_0x4101c0[_0x43a3f[_0x2cdf18(0x2681)]];_0x4ac25c=_0x2db5c8[_0x2cdf18(0x1c34)][_0x43a3f[_0x2cdf18(0x9ff)]],_0x4ac25c[_0x2cdf18(0xcfc)]=_0x2cdf18(0x3481),_0x4ac25c[_0x2cdf18(0x15a9)]='s',_0x4ac25c[_0x2cdf18(0x3d81)]=0x1,_0x4ac25c[_0x2cdf18(0x2bd8)]='~~',_0x4ac25c['content']='',_0x4ac25c=_0x2db5c8[_0x2cdf18(0x1c34)][_0x197dce[_0x2cdf18(0x9ff)]],_0x4ac25c[_0x2cdf18(0xcfc)]=_0x2cdf18(0x22a),_0x4ac25c[_0x2cdf18(0x15a9)]='s',_0x4ac25c[_0x2cdf18(0x3d81)]=-0x1,_0x4ac25c[_0x2cdf18(0x2bd8)]='~~',_0x4ac25c[_0x2cdf18(0x484f)]='',_0x2cdf18(0x4006)===_0x2db5c8[_0x2cdf18(0x1c34)][_0x197dce[_0x2cdf18(0x9ff)]-0x1][_0x2cdf18(0xcfc)]&&'~'===_0x2db5c8[_0x2cdf18(0x1c34)][_0x197dce[_0x2cdf18(0x9ff)]-0x1]['content']&&_0x48ff47[_0x2cdf18(0x1715)](_0x197dce[_0x2cdf18(0x9ff)]-0x1);}for(;_0x48ff47['length'];){const _0x3181b2=_0x48ff47['pop']();let _0x5d8bbc=_0x3181b2+0x1;for(;_0x5d8bbc<_0x2db5c8[_0x2cdf18(0x1c34)]['length']&&_0x2cdf18(0x22a)===_0x2db5c8[_0x2cdf18(0x1c34)][_0x5d8bbc]['type'];)_0x5d8bbc++;_0x5d8bbc--,_0x3181b2!==_0x5d8bbc&&(_0x4ac25c=_0x2db5c8[_0x2cdf18(0x1c34)][_0x5d8bbc],_0x2db5c8['tokens'][_0x5d8bbc]=_0x2db5c8['tokens'][_0x3181b2],_0x2db5c8['tokens'][_0x3181b2]=_0x4ac25c);}}_0x37e46c(0x33e)[_0x37e46c(0x1117)]('')['forEach'](function(_0x10e902){const _0x37218d=_0x37e46c;_0x925d06[_0x10e902[_0x37218d(0x4955)](0x0)]=0x1;});const _0x14c558={'tokenize':function(_0x2a31aa,_0x4e127e){const 
_0x1b3272=_0x37e46c,_0x39399f=_0x2a31aa[_0x1b3272(0x333f)],_0x2bdd89=_0x2a31aa[_0x1b3272(0x3d6)][_0x1b3272(0x4955)](_0x39399f);if(_0x4e127e)return!0x1;if(0x7e!==_0x2bdd89)return!0x1;const _0x3683a2=_0x2a31aa[_0x1b3272(0x4aad)](_0x2a31aa['pos'],!0x0);let _0x21e144=_0x3683a2['length'];const _0x335296=String[_0x1b3272(0x49bf)](_0x2bdd89);if(_0x21e144<0x2)return!0x1;let _0x39f784;_0x21e144%0x2&&(_0x39f784=_0x2a31aa[_0x1b3272(0x1715)](_0x1b3272(0x4006),'',0x0),_0x39f784['content']=_0x335296,_0x21e144--);for(let _0xa0d322=0x0;_0xa0d322<_0x21e144;_0xa0d322+=0x2)_0x39f784=_0x2a31aa[_0x1b3272(0x1715)]('text','',0x0),_0x39f784['content']=_0x335296+_0x335296,_0x2a31aa[_0x1b3272(0x15d9)][_0x1b3272(0x1715)]({'marker':_0x2bdd89,'length':0x0,'token':_0x2a31aa[_0x1b3272(0x1c34)][_0x1b3272(0x1b19)]-0x1,'end':-0x1,'open':_0x3683a2[_0x1b3272(0x2589)],'close':_0x3683a2[_0x1b3272(0x26ce)]});return _0x2a31aa[_0x1b3272(0x333f)]+=_0x3683a2[_0x1b3272(0x1b19)],!0x0;},'postProcess':function(_0x20149a){const _0x198307=_0x37e46c,_0x46446e=_0x20149a[_0x198307(0x27b1)],_0x393ce6=_0x20149a[_0x198307(0x27b1)][_0x198307(0x1b19)];_0x2a441d(_0x20149a,_0x20149a['delimiters']);for(let _0x477433=0x0;_0x477433<_0x393ce6;_0x477433++)_0x46446e[_0x477433]&&_0x46446e[_0x477433][_0x198307(0x15d9)]&&_0x2a441d(_0x20149a,_0x46446e[_0x477433][_0x198307(0x15d9)]);}};function _0xaa254(_0xfc1f3,_0x38922d){const _0x53277f=_0x37e46c;for(let _0x37d27e=_0x38922d[_0x53277f(0x1b19)]-0x1;_0x37d27e>=0x0;_0x37d27e--){const _0x5228e4=_0x38922d[_0x37d27e];if(0x5f!==_0x5228e4[_0x53277f(0x5121)]&&0x2a!==_0x5228e4[_0x53277f(0x5121)])continue;if(-0x1===_0x5228e4[_0x53277f(0x2681)])continue;const 
_0x475194=_0x38922d[_0x5228e4[_0x53277f(0x2681)]],_0x4797cc=_0x37d27e>0x0&&_0x38922d[_0x37d27e-0x1]['end']===_0x5228e4[_0x53277f(0x2681)]+0x1&&_0x38922d[_0x37d27e-0x1][_0x53277f(0x5121)]===_0x5228e4[_0x53277f(0x5121)]&&_0x38922d[_0x37d27e-0x1]['token']===_0x5228e4[_0x53277f(0x9ff)]-0x1&&_0x38922d[_0x5228e4[_0x53277f(0x2681)]+0x1]['token']===_0x475194['token']+0x1,_0x32ef00=String[_0x53277f(0x49bf)](_0x5228e4[_0x53277f(0x5121)]),_0x12838a=_0xfc1f3['tokens'][_0x5228e4[_0x53277f(0x9ff)]];_0x12838a[_0x53277f(0xcfc)]=_0x4797cc?_0x53277f(0x27be):'em_open',_0x12838a['tag']=_0x4797cc?'strong':'em',_0x12838a[_0x53277f(0x3d81)]=0x1,_0x12838a[_0x53277f(0x2bd8)]=_0x4797cc?_0x32ef00+_0x32ef00:_0x32ef00,_0x12838a[_0x53277f(0x484f)]='';const _0x1eafaa=_0xfc1f3[_0x53277f(0x1c34)][_0x475194[_0x53277f(0x9ff)]];_0x1eafaa[_0x53277f(0xcfc)]=_0x4797cc?_0x53277f(0x403c):'em_close',_0x1eafaa[_0x53277f(0x15a9)]=_0x4797cc?_0x53277f(0x40c9):'em',_0x1eafaa['nesting']=-0x1,_0x1eafaa['markup']=_0x4797cc?_0x32ef00+_0x32ef00:_0x32ef00,_0x1eafaa[_0x53277f(0x484f)]='',_0x4797cc&&(_0xfc1f3[_0x53277f(0x1c34)][_0x38922d[_0x37d27e-0x1][_0x53277f(0x9ff)]][_0x53277f(0x484f)]='',_0xfc1f3['tokens'][_0x38922d[_0x5228e4[_0x53277f(0x2681)]+0x1][_0x53277f(0x9ff)]][_0x53277f(0x484f)]='',_0x37d27e--);}}const _0x25084f={'tokenize':function(_0x3484e1,_0x1555d4){const _0x280ca7=_0x37e46c,_0x29375a=_0x3484e1[_0x280ca7(0x333f)],_0x3da8d4=_0x3484e1[_0x280ca7(0x3d6)][_0x280ca7(0x4955)](_0x29375a);if(_0x1555d4)return!0x1;if(0x5f!==_0x3da8d4&&0x2a!==_0x3da8d4)return!0x1;const _0x16344d=_0x3484e1[_0x280ca7(0x4aad)](_0x3484e1[_0x280ca7(0x333f)],0x2a===_0x3da8d4);for(let 
_0x28fc4d=0x0;_0x28fc4d<_0x16344d[_0x280ca7(0x1b19)];_0x28fc4d++){_0x3484e1[_0x280ca7(0x1715)](_0x280ca7(0x4006),'',0x0)[_0x280ca7(0x484f)]=String[_0x280ca7(0x49bf)](_0x3da8d4),_0x3484e1[_0x280ca7(0x15d9)][_0x280ca7(0x1715)]({'marker':_0x3da8d4,'length':_0x16344d[_0x280ca7(0x1b19)],'token':_0x3484e1[_0x280ca7(0x1c34)][_0x280ca7(0x1b19)]-0x1,'end':-0x1,'open':_0x16344d[_0x280ca7(0x2589)],'close':_0x16344d[_0x280ca7(0x26ce)]});}return _0x3484e1[_0x280ca7(0x333f)]+=_0x16344d[_0x280ca7(0x1b19)],!0x0;},'postProcess':function(_0x16c2df){const _0x351d4b=_0x37e46c,_0x44774a=_0x16c2df['tokens_meta'],_0x57fb65=_0x16c2df[_0x351d4b(0x27b1)][_0x351d4b(0x1b19)];_0xaa254(_0x16c2df,_0x16c2df[_0x351d4b(0x15d9)]);for(let _0x181e4d=0x0;_0x181e4d<_0x57fb65;_0x181e4d++)_0x44774a[_0x181e4d]&&_0x44774a[_0x181e4d][_0x351d4b(0x15d9)]&&_0xaa254(_0x16c2df,_0x44774a[_0x181e4d]['delimiters']);}},_0x6d93e3=/^([a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*)$/,_0x582a63=/^([a-zA-Z][a-zA-Z0-9+.-]{1,31}):([^<>\x00-\x20]*)$/,_0x329438=/^&#((?:x[a-f0-9]{1,6}|[0-9]{1,7}));/i,_0xae5fad=/^&([a-z][a-z0-9]{1,31});/i;function _0x2740f1(_0x17c11b){const _0x4bd178=_0x37e46c,_0x1b6615={},_0x42ecab=_0x17c11b[_0x4bd178(0x1b19)];if(!_0x42ecab)return;let _0x1efc35=0x0,_0x3a47c9=-0x2;const _0x50e202=[];for(let _0x45fb91=0x0;_0x45fb91<_0x42ecab;_0x45fb91++){const _0x9da6ff=_0x17c11b[_0x45fb91];if(_0x50e202[_0x4bd178(0x1715)](0x0),_0x17c11b[_0x1efc35][_0x4bd178(0x5121)]===_0x9da6ff[_0x4bd178(0x5121)]&&_0x3a47c9===_0x9da6ff[_0x4bd178(0x9ff)]-0x1||(_0x1efc35=_0x45fb91),_0x3a47c9=_0x9da6ff[_0x4bd178(0x9ff)],_0x9da6ff[_0x4bd178(0x1b19)]=_0x9da6ff[_0x4bd178(0x1b19)]||0x0,!_0x9da6ff[_0x4bd178(0x50d8)])continue;_0x1b6615[_0x4bd178(0x2427)](_0x9da6ff['marker'])||(_0x1b6615[_0x9da6ff[_0x4bd178(0x5121)]]=[-0x1,-0x1,-0x1,-0x1,-0x1,-0x1]);const 
_0x3ced61=_0x1b6615[_0x9da6ff[_0x4bd178(0x5121)]][(_0x9da6ff[_0x4bd178(0x1795)]?0x3:0x0)+_0x9da6ff[_0x4bd178(0x1b19)]%0x3];let _0x32c5fa=_0x1efc35-_0x50e202[_0x1efc35]-0x1,_0x47ed72=_0x32c5fa;for(;_0x32c5fa>_0x3ced61;_0x32c5fa-=_0x50e202[_0x32c5fa]+0x1){const _0x1600a3=_0x17c11b[_0x32c5fa];if(_0x1600a3[_0x4bd178(0x5121)]===_0x9da6ff['marker']&&(_0x1600a3['open']&&_0x1600a3['end']<0x0)){let _0x4772b8=!0x1;if((_0x1600a3[_0x4bd178(0x50d8)]||_0x9da6ff[_0x4bd178(0x1795)])&&(_0x1600a3[_0x4bd178(0x1b19)]+_0x9da6ff['length'])%0x3==0x0&&(_0x1600a3[_0x4bd178(0x1b19)]%0x3==0x0&&_0x9da6ff[_0x4bd178(0x1b19)]%0x3==0x0||(_0x4772b8=!0x0)),!_0x4772b8){const _0x383434=_0x32c5fa>0x0&&!_0x17c11b[_0x32c5fa-0x1][_0x4bd178(0x1795)]?_0x50e202[_0x32c5fa-0x1]+0x1:0x0;_0x50e202[_0x45fb91]=_0x45fb91-_0x32c5fa+_0x383434,_0x50e202[_0x32c5fa]=_0x383434,_0x9da6ff[_0x4bd178(0x1795)]=!0x1,_0x1600a3['end']=_0x45fb91,_0x1600a3[_0x4bd178(0x50d8)]=!0x1,_0x47ed72=-0x1,_0x3a47c9=-0x2;break;}}}-0x1!==_0x47ed72&&(_0x1b6615[_0x9da6ff['marker']][(_0x9da6ff[_0x4bd178(0x1795)]?0x3:0x0)+(_0x9da6ff[_0x4bd178(0x1b19)]||0x0)%0x3]=_0x47ed72);}}const _0x57ff21=[[_0x37e46c(0x4006),function(_0x362f60,_0x28bf5a){const _0x2a77f5=_0x37e46c;let _0x6acea6=_0x362f60[_0x2a77f5(0x333f)];for(;_0x6acea6<_0x362f60['posMax']&&!_0x1503d7(_0x362f60['src'][_0x2a77f5(0x4955)](_0x6acea6));)_0x6acea6++;return _0x6acea6!==_0x362f60['pos']&&(_0x28bf5a||(_0x362f60[_0x2a77f5(0x3267)]+=_0x362f60['src'][_0x2a77f5(0x384c)](_0x362f60[_0x2a77f5(0x333f)],_0x6acea6)),_0x362f60['pos']=_0x6acea6,!0x0);}],['linkify',function(_0x40667f,_0x205ecb){const _0x8d60fb=_0x37e46c;if(!_0x40667f['md'][_0x8d60fb(0x20b6)][_0x8d60fb(0x1255)])return!0x1;if(_0x40667f[_0x8d60fb(0x6a1)]>0x0)return!0x1;const 
_0x8e43e2=_0x40667f[_0x8d60fb(0x333f)];if(_0x8e43e2+0x3>_0x40667f['posMax'])return!0x1;if(0x3a!==_0x40667f[_0x8d60fb(0x3d6)][_0x8d60fb(0x4955)](_0x8e43e2))return!0x1;if(0x2f!==_0x40667f['src'][_0x8d60fb(0x4955)](_0x8e43e2+0x1))return!0x1;if(0x2f!==_0x40667f['src'][_0x8d60fb(0x4955)](_0x8e43e2+0x2))return!0x1;const _0x14d8ec=_0x40667f[_0x8d60fb(0x3267)][_0x8d60fb(0x2d96)](_0x30d489);if(!_0x14d8ec)return!0x1;const _0x47cbc6=_0x14d8ec[0x1],_0x6b43d8=_0x40667f['md'][_0x8d60fb(0x1255)][_0x8d60fb(0x23e)](_0x40667f['src'][_0x8d60fb(0x384c)](_0x8e43e2-_0x47cbc6[_0x8d60fb(0x1b19)]));if(!_0x6b43d8)return!0x1;let _0x2cf3b8=_0x6b43d8[_0x8d60fb(0xd17)];if(_0x2cf3b8[_0x8d60fb(0x1b19)]<=_0x47cbc6['length'])return!0x1;_0x2cf3b8=_0x2cf3b8[_0x8d60fb(0x741)](/\*+$/,'');const _0x3f3054=_0x40667f['md']['normalizeLink'](_0x2cf3b8);if(!_0x40667f['md'][_0x8d60fb(0x458)](_0x3f3054))return!0x1;if(!_0x205ecb){_0x40667f[_0x8d60fb(0x3267)]=_0x40667f['pending'][_0x8d60fb(0x384c)](0x0,-_0x47cbc6['length']);const _0x3e12d2=_0x40667f[_0x8d60fb(0x1715)](_0x8d60fb(0x5243),'a',0x1);_0x3e12d2[_0x8d60fb(0x70e)]=[[_0x8d60fb(0xe63),_0x3f3054]],_0x3e12d2[_0x8d60fb(0x2bd8)]=_0x8d60fb(0x1255),_0x3e12d2[_0x8d60fb(0x3a85)]=_0x8d60fb(0x4c14),_0x40667f[_0x8d60fb(0x1715)](_0x8d60fb(0x4006),'',0x0)['content']=_0x40667f['md'][_0x8d60fb(0x4c51)](_0x2cf3b8);const _0x1a8237=_0x40667f[_0x8d60fb(0x1715)](_0x8d60fb(0x24cf),'a',-0x1);_0x1a8237[_0x8d60fb(0x2bd8)]=_0x8d60fb(0x1255),_0x1a8237[_0x8d60fb(0x3a85)]='auto';}return _0x40667f[_0x8d60fb(0x333f)]+=_0x2cf3b8[_0x8d60fb(0x1b19)]-_0x47cbc6[_0x8d60fb(0x1b19)],!0x0;}],[_0x37e46c(0x516f),function(_0x24f148,_0x42feb8){const _0x342a4e=_0x37e46c;let _0x1bb701=_0x24f148[_0x342a4e(0x333f)];if(0xa!==_0x24f148[_0x342a4e(0x3d6)][_0x342a4e(0x4955)](_0x1bb701))return!0x1;const 
_0x57a98e=_0x24f148['pending'][_0x342a4e(0x1b19)]-0x1,_0x1fdd83=_0x24f148[_0x342a4e(0x3f9b)];if(!_0x42feb8){if(_0x57a98e>=0x0&&0x20===_0x24f148[_0x342a4e(0x3267)]['charCodeAt'](_0x57a98e)){if(_0x57a98e>=0x1&&0x20===_0x24f148['pending'][_0x342a4e(0x4955)](_0x57a98e-0x1)){let _0x2d3f76=_0x57a98e-0x1;for(;_0x2d3f76>=0x1&&0x20===_0x24f148[_0x342a4e(0x3267)][_0x342a4e(0x4955)](_0x2d3f76-0x1);)_0x2d3f76--;_0x24f148[_0x342a4e(0x3267)]=_0x24f148['pending'][_0x342a4e(0x384c)](0x0,_0x2d3f76),_0x24f148['push'](_0x342a4e(0x47b4),'br',0x0);}else _0x24f148[_0x342a4e(0x3267)]=_0x24f148[_0x342a4e(0x3267)][_0x342a4e(0x384c)](0x0,-0x1),_0x24f148[_0x342a4e(0x1715)](_0x342a4e(0x27f7),'br',0x0);}else _0x24f148[_0x342a4e(0x1715)](_0x342a4e(0x27f7),'br',0x0);}for(_0x1bb701++;_0x1bb701<_0x1fdd83&&_0x290708(_0x24f148[_0x342a4e(0x3d6)]['charCodeAt'](_0x1bb701));)_0x1bb701++;return _0x24f148[_0x342a4e(0x333f)]=_0x1bb701,!0x0;}],[_0x37e46c(0x4e2c),function(_0x139330,_0x2ae1d0){const _0x4457f0=_0x37e46c;let _0x456f9d=_0x139330['pos'];const _0x206d64=_0x139330[_0x4457f0(0x3f9b)];if(0x5c!==_0x139330['src'][_0x4457f0(0x4955)](_0x456f9d))return!0x1;if(_0x456f9d++,_0x456f9d>=_0x206d64)return!0x1;let _0x31c853=_0x139330['src'][_0x4457f0(0x4955)](_0x456f9d);if(0xa===_0x31c853){for(_0x2ae1d0||_0x139330['push'](_0x4457f0(0x47b4),'br',0x0),_0x456f9d++;_0x456f9d<_0x206d64&&(_0x31c853=_0x139330[_0x4457f0(0x3d6)][_0x4457f0(0x4955)](_0x456f9d),_0x290708(_0x31c853));)_0x456f9d++;return _0x139330[_0x4457f0(0x333f)]=_0x456f9d,!0x0;}let _0x2c8ee0=_0x139330[_0x4457f0(0x3d6)][_0x456f9d];if(_0x31c853>=0xd800&&_0x31c853<=0xdbff&&_0x456f9d+0x1<_0x206d64){const _0x5309a8=_0x139330[_0x4457f0(0x3d6)][_0x4457f0(0x4955)](_0x456f9d+0x1);_0x5309a8>=0xdc00&&_0x5309a8<=0xdfff&&(_0x2c8ee0+=_0x139330[_0x4457f0(0x3d6)][_0x456f9d+0x1],_0x456f9d++);}const _0x2ab441='\x5c'+_0x2c8ee0;if(!_0x2ae1d0){const 
_0x18a308=_0x139330[_0x4457f0(0x1715)](_0x4457f0(0x946),'',0x0);_0x31c853<0x100&&0x0!==_0x925d06[_0x31c853]?_0x18a308[_0x4457f0(0x484f)]=_0x2c8ee0:_0x18a308[_0x4457f0(0x484f)]=_0x2ab441,_0x18a308['markup']=_0x2ab441,_0x18a308[_0x4457f0(0x3a85)]=_0x4457f0(0x4e2c);}return _0x139330[_0x4457f0(0x333f)]=_0x456f9d+0x1,!0x0;}],[_0x37e46c(0x10f4),function(_0x4314a4,_0x3038d4){const _0x9783fc=_0x37e46c;let _0x1ab412=_0x4314a4[_0x9783fc(0x333f)];if(0x60!==_0x4314a4['src'][_0x9783fc(0x4955)](_0x1ab412))return!0x1;const _0x28e274=_0x1ab412;_0x1ab412++;const _0xd468a9=_0x4314a4[_0x9783fc(0x3f9b)];for(;_0x1ab412<_0xd468a9&&0x60===_0x4314a4[_0x9783fc(0x3d6)]['charCodeAt'](_0x1ab412);)_0x1ab412++;const _0x4202a4=_0x4314a4[_0x9783fc(0x3d6)]['slice'](_0x28e274,_0x1ab412),_0x2cfaee=_0x4202a4[_0x9783fc(0x1b19)];if(_0x4314a4[_0x9783fc(0x4a6a)]&&(_0x4314a4['backticks'][_0x2cfaee]||0x0)<=_0x28e274)return _0x3038d4||(_0x4314a4[_0x9783fc(0x3267)]+=_0x4202a4),_0x4314a4[_0x9783fc(0x333f)]+=_0x2cfaee,!0x0;let _0x19455d,_0x563fce=_0x1ab412;for(;-0x1!==(_0x19455d=_0x4314a4[_0x9783fc(0x3d6)][_0x9783fc(0x8c9)]('`',_0x563fce));){for(_0x563fce=_0x19455d+0x1;_0x563fce<_0xd468a9&&0x60===_0x4314a4['src'][_0x9783fc(0x4955)](_0x563fce);)_0x563fce++;const _0x139408=_0x563fce-_0x19455d;if(_0x139408===_0x2cfaee){if(!_0x3038d4){const _0x221511=_0x4314a4[_0x9783fc(0x1715)](_0x9783fc(0x2076),_0x9783fc(0x4948),0x0);_0x221511[_0x9783fc(0x2bd8)]=_0x4202a4,_0x221511[_0x9783fc(0x484f)]=_0x4314a4[_0x9783fc(0x3d6)][_0x9783fc(0x384c)](_0x1ab412,_0x19455d)['replace'](/\n/g,'\x20')[_0x9783fc(0x741)](/^ (.+) $/,'$1');}return _0x4314a4[_0x9783fc(0x333f)]=_0x563fce,!0x0;}_0x4314a4[_0x9783fc(0x10f4)][_0x139408]=_0x19455d;}return _0x4314a4[_0x9783fc(0x4a6a)]=!0x0,_0x3038d4||(_0x4314a4['pending']+=_0x4202a4),_0x4314a4[_0x9783fc(0x333f)]+=_0x2cfaee,!0x0;}],[_0x37e46c(0x22cd),_0x14c558[_0x37e46c(0x3c0b)]],['emphasis',_0x25084f[_0x37e46c(0x3c0b)]],[_0x37e46c(0x4b32),function(_0x250a04,_0x145311){const _0x2115cb=_0x37e46c;let 
_0x374775,_0x46fa65,_0x2f69d3,_0x3713a8,_0x678261='',_0x38b374='',_0x2df0ca=_0x250a04[_0x2115cb(0x333f)],_0x1a83cf=!0x0;if(0x5b!==_0x250a04['src'][_0x2115cb(0x4955)](_0x250a04[_0x2115cb(0x333f)]))return!0x1;const _0x16db56=_0x250a04[_0x2115cb(0x333f)],_0x3239c3=_0x250a04[_0x2115cb(0x3f9b)],_0x100192=_0x250a04[_0x2115cb(0x333f)]+0x1,_0x5a322f=_0x250a04['md']['helpers'][_0x2115cb(0x294)](_0x250a04,_0x250a04[_0x2115cb(0x333f)],!0x0);if(_0x5a322f<0x0)return!0x1;let _0x43c202=_0x5a322f+0x1;if(_0x43c202<_0x3239c3&&0x28===_0x250a04[_0x2115cb(0x3d6)]['charCodeAt'](_0x43c202)){for(_0x1a83cf=!0x1,_0x43c202++;_0x43c202<_0x3239c3&&(_0x374775=_0x250a04[_0x2115cb(0x3d6)]['charCodeAt'](_0x43c202),_0x290708(_0x374775)||0xa===_0x374775);_0x43c202++);if(_0x43c202>=_0x3239c3)return!0x1;if(_0x2df0ca=_0x43c202,_0x2f69d3=_0x250a04['md'][_0x2115cb(0x429b)][_0x2115cb(0x2762)](_0x250a04['src'],_0x43c202,_0x250a04['posMax']),_0x2f69d3['ok']){for(_0x678261=_0x250a04['md'][_0x2115cb(0x694)](_0x2f69d3[_0x2115cb(0x257f)]),_0x250a04['md'][_0x2115cb(0x458)](_0x678261)?_0x43c202=_0x2f69d3[_0x2115cb(0x333f)]:_0x678261='',_0x2df0ca=_0x43c202;_0x43c202<_0x3239c3&&(_0x374775=_0x250a04['src'][_0x2115cb(0x4955)](_0x43c202),_0x290708(_0x374775)||0xa===_0x374775);_0x43c202++);if(_0x2f69d3=_0x250a04['md']['helpers'][_0x2115cb(0x237d)](_0x250a04[_0x2115cb(0x3d6)],_0x43c202,_0x250a04['posMax']),_0x43c202<_0x3239c3&&_0x2df0ca!==_0x43c202&&_0x2f69d3['ok']){for(_0x38b374=_0x2f69d3[_0x2115cb(0x257f)],_0x43c202=_0x2f69d3['pos'];_0x43c202<_0x3239c3&&(_0x374775=_0x250a04[_0x2115cb(0x3d6)][_0x2115cb(0x4955)](_0x43c202),_0x290708(_0x374775)||0xa===_0x374775);_0x43c202++);}}(_0x43c202>=_0x3239c3||0x29!==_0x250a04[_0x2115cb(0x3d6)]['charCodeAt'](_0x43c202))&&(_0x1a83cf=!0x0),_0x43c202++;}if(_0x1a83cf){if(void 
0x0===_0x250a04[_0x2115cb(0xe1a)][_0x2115cb(0xeb2)])return!0x1;if(_0x43c202<_0x3239c3&&0x5b===_0x250a04[_0x2115cb(0x3d6)][_0x2115cb(0x4955)](_0x43c202)?(_0x2df0ca=_0x43c202+0x1,_0x43c202=_0x250a04['md']['helpers'][_0x2115cb(0x294)](_0x250a04,_0x43c202),_0x43c202>=0x0?_0x46fa65=_0x250a04['src']['slice'](_0x2df0ca,_0x43c202++):_0x43c202=_0x5a322f+0x1):_0x43c202=_0x5a322f+0x1,_0x46fa65||(_0x46fa65=_0x250a04[_0x2115cb(0x3d6)][_0x2115cb(0x384c)](_0x100192,_0x5a322f)),_0x3713a8=_0x250a04[_0x2115cb(0xe1a)][_0x2115cb(0xeb2)][_0x1fe7b1(_0x46fa65)],!_0x3713a8)return _0x250a04[_0x2115cb(0x333f)]=_0x16db56,!0x1;_0x678261=_0x3713a8[_0x2115cb(0xe63)],_0x38b374=_0x3713a8[_0x2115cb(0x4685)];}if(!_0x145311){_0x250a04[_0x2115cb(0x333f)]=_0x100192,_0x250a04['posMax']=_0x5a322f;const _0x40e100=[['href',_0x678261]];_0x250a04[_0x2115cb(0x1715)]('link_open','a',0x1)[_0x2115cb(0x70e)]=_0x40e100,_0x38b374&&_0x40e100[_0x2115cb(0x1715)](['title',_0x38b374]),_0x250a04[_0x2115cb(0x6a1)]++,_0x250a04['md'][_0x2115cb(0x2988)][_0x2115cb(0x3c0b)](_0x250a04),_0x250a04[_0x2115cb(0x6a1)]--,_0x250a04[_0x2115cb(0x1715)](_0x2115cb(0x24cf),'a',-0x1);}return _0x250a04['pos']=_0x43c202,_0x250a04[_0x2115cb(0x3f9b)]=_0x3239c3,!0x0;}],[_0x37e46c(0x178c),function(_0x11d87,_0x4ec1bb){const _0x2c3995=_0x37e46c;let _0x34c5a3,_0x330e2a,_0xae6e3e,_0xa35b3b,_0x5a9eac,_0x52ce33,_0x2aa22d,_0x4705ff,_0x13ce5e='';const _0x59fd7d=_0x11d87[_0x2c3995(0x333f)],_0x2aeee0=_0x11d87['posMax'];if(0x21!==_0x11d87[_0x2c3995(0x3d6)][_0x2c3995(0x4955)](_0x11d87[_0x2c3995(0x333f)]))return!0x1;if(0x5b!==_0x11d87[_0x2c3995(0x3d6)]['charCodeAt'](_0x11d87[_0x2c3995(0x333f)]+0x1))return!0x1;const 
_0x9b77f9=_0x11d87[_0x2c3995(0x333f)]+0x2,_0x39dbee=_0x11d87['md'][_0x2c3995(0x429b)][_0x2c3995(0x294)](_0x11d87,_0x11d87[_0x2c3995(0x333f)]+0x1,!0x1);if(_0x39dbee<0x0)return!0x1;if(_0xa35b3b=_0x39dbee+0x1,_0xa35b3b<_0x2aeee0&&0x28===_0x11d87[_0x2c3995(0x3d6)]['charCodeAt'](_0xa35b3b)){for(_0xa35b3b++;_0xa35b3b<_0x2aeee0&&(_0x34c5a3=_0x11d87['src'][_0x2c3995(0x4955)](_0xa35b3b),_0x290708(_0x34c5a3)||0xa===_0x34c5a3);_0xa35b3b++);if(_0xa35b3b>=_0x2aeee0)return!0x1;for(_0x4705ff=_0xa35b3b,_0x52ce33=_0x11d87['md'][_0x2c3995(0x429b)][_0x2c3995(0x2762)](_0x11d87[_0x2c3995(0x3d6)],_0xa35b3b,_0x11d87[_0x2c3995(0x3f9b)]),_0x52ce33['ok']&&(_0x13ce5e=_0x11d87['md'][_0x2c3995(0x694)](_0x52ce33[_0x2c3995(0x257f)]),_0x11d87['md'][_0x2c3995(0x458)](_0x13ce5e)?_0xa35b3b=_0x52ce33[_0x2c3995(0x333f)]:_0x13ce5e=''),_0x4705ff=_0xa35b3b;_0xa35b3b<_0x2aeee0&&(_0x34c5a3=_0x11d87[_0x2c3995(0x3d6)]['charCodeAt'](_0xa35b3b),_0x290708(_0x34c5a3)||0xa===_0x34c5a3);_0xa35b3b++);if(_0x52ce33=_0x11d87['md'][_0x2c3995(0x429b)][_0x2c3995(0x237d)](_0x11d87[_0x2c3995(0x3d6)],_0xa35b3b,_0x11d87[_0x2c3995(0x3f9b)]),_0xa35b3b<_0x2aeee0&&_0x4705ff!==_0xa35b3b&&_0x52ce33['ok']){for(_0x2aa22d=_0x52ce33[_0x2c3995(0x257f)],_0xa35b3b=_0x52ce33['pos'];_0xa35b3b<_0x2aeee0&&(_0x34c5a3=_0x11d87[_0x2c3995(0x3d6)][_0x2c3995(0x4955)](_0xa35b3b),_0x290708(_0x34c5a3)||0xa===_0x34c5a3);_0xa35b3b++);}else _0x2aa22d='';if(_0xa35b3b>=_0x2aeee0||0x29!==_0x11d87[_0x2c3995(0x3d6)][_0x2c3995(0x4955)](_0xa35b3b))return _0x11d87[_0x2c3995(0x333f)]=_0x59fd7d,!0x1;_0xa35b3b++;}else{if(void 
0x0===_0x11d87[_0x2c3995(0xe1a)][_0x2c3995(0xeb2)])return!0x1;if(_0xa35b3b<_0x2aeee0&&0x5b===_0x11d87[_0x2c3995(0x3d6)][_0x2c3995(0x4955)](_0xa35b3b)?(_0x4705ff=_0xa35b3b+0x1,_0xa35b3b=_0x11d87['md'][_0x2c3995(0x429b)][_0x2c3995(0x294)](_0x11d87,_0xa35b3b),_0xa35b3b>=0x0?_0xae6e3e=_0x11d87['src'][_0x2c3995(0x384c)](_0x4705ff,_0xa35b3b++):_0xa35b3b=_0x39dbee+0x1):_0xa35b3b=_0x39dbee+0x1,_0xae6e3e||(_0xae6e3e=_0x11d87['src'][_0x2c3995(0x384c)](_0x9b77f9,_0x39dbee)),_0x5a9eac=_0x11d87['env'][_0x2c3995(0xeb2)][_0x1fe7b1(_0xae6e3e)],!_0x5a9eac)return _0x11d87['pos']=_0x59fd7d,!0x1;_0x13ce5e=_0x5a9eac[_0x2c3995(0xe63)],_0x2aa22d=_0x5a9eac[_0x2c3995(0x4685)];}if(!_0x4ec1bb){_0x330e2a=_0x11d87[_0x2c3995(0x3d6)][_0x2c3995(0x384c)](_0x9b77f9,_0x39dbee);const _0x4e6073=[];_0x11d87['md'][_0x2c3995(0x2988)]['parse'](_0x330e2a,_0x11d87['md'],_0x11d87[_0x2c3995(0xe1a)],_0x4e6073);const _0x5020aa=_0x11d87[_0x2c3995(0x1715)](_0x2c3995(0x178c),_0x2c3995(0x3045),0x0),_0x17add2=[[_0x2c3995(0x3d6),_0x13ce5e],['alt','']];_0x5020aa[_0x2c3995(0x70e)]=_0x17add2,_0x5020aa[_0x2c3995(0x4c3e)]=_0x4e6073,_0x5020aa[_0x2c3995(0x484f)]=_0x330e2a,_0x2aa22d&&_0x17add2[_0x2c3995(0x1715)]([_0x2c3995(0x4685),_0x2aa22d]);}return _0x11d87[_0x2c3995(0x333f)]=_0xa35b3b,_0x11d87[_0x2c3995(0x3f9b)]=_0x2aeee0,!0x0;}],[_0x37e46c(0x1e66),function(_0x25e4bd,_0x4944ca){const _0x559188=_0x37e46c;let _0x2ef169=_0x25e4bd['pos'];if(0x3c!==_0x25e4bd[_0x559188(0x3d6)][_0x559188(0x4955)](_0x2ef169))return!0x1;const _0xcf3bd2=_0x25e4bd[_0x559188(0x333f)],_0x4e80f3=_0x25e4bd['posMax'];for(;;){if(++_0x2ef169>=_0x4e80f3)return!0x1;const _0x585cf9=_0x25e4bd[_0x559188(0x3d6)][_0x559188(0x4955)](_0x2ef169);if(0x3c===_0x585cf9)return!0x1;if(0x3e===_0x585cf9)break;}const _0x10fade=_0x25e4bd[_0x559188(0x3d6)]['slice'](_0xcf3bd2+0x1,_0x2ef169);if(_0x582a63[_0x559188(0x1769)](_0x10fade)){const _0x2ca487=_0x25e4bd['md'][_0x559188(0x694)](_0x10fade);if(!_0x25e4bd['md']['validateLink'](_0x2ca487))return!0x1;if(!_0x4944ca){const 
_0x23cfd0=_0x25e4bd[_0x559188(0x1715)]('link_open','a',0x1);_0x23cfd0[_0x559188(0x70e)]=[[_0x559188(0xe63),_0x2ca487]],_0x23cfd0[_0x559188(0x2bd8)]=_0x559188(0x1e66),_0x23cfd0[_0x559188(0x3a85)]=_0x559188(0x4c14),_0x25e4bd[_0x559188(0x1715)](_0x559188(0x4006),'',0x0)[_0x559188(0x484f)]=_0x25e4bd['md'][_0x559188(0x4c51)](_0x10fade);const _0x4bf6e5=_0x25e4bd[_0x559188(0x1715)]('link_close','a',-0x1);_0x4bf6e5['markup']=_0x559188(0x1e66),_0x4bf6e5['info']=_0x559188(0x4c14);}return _0x25e4bd['pos']+=_0x10fade[_0x559188(0x1b19)]+0x2,!0x0;}if(_0x6d93e3[_0x559188(0x1769)](_0x10fade)){const _0x199130=_0x25e4bd['md'][_0x559188(0x694)](_0x559188(0x594)+_0x10fade);if(!_0x25e4bd['md']['validateLink'](_0x199130))return!0x1;if(!_0x4944ca){const _0x26b9e9=_0x25e4bd[_0x559188(0x1715)](_0x559188(0x5243),'a',0x1);_0x26b9e9[_0x559188(0x70e)]=[[_0x559188(0xe63),_0x199130]],_0x26b9e9['markup']=_0x559188(0x1e66),_0x26b9e9[_0x559188(0x3a85)]=_0x559188(0x4c14),_0x25e4bd[_0x559188(0x1715)](_0x559188(0x4006),'',0x0)[_0x559188(0x484f)]=_0x25e4bd['md'][_0x559188(0x4c51)](_0x10fade);const _0x18e7ca=_0x25e4bd[_0x559188(0x1715)](_0x559188(0x24cf),'a',-0x1);_0x18e7ca['markup']=_0x559188(0x1e66),_0x18e7ca[_0x559188(0x3a85)]=_0x559188(0x4c14);}return _0x25e4bd['pos']+=_0x10fade[_0x559188(0x1b19)]+0x2,!0x0;}return!0x1;}],['html_inline',function(_0x26ef51,_0x199621){const _0x1dbc6e=_0x37e46c;if(!_0x26ef51['md'][_0x1dbc6e(0x20b6)]['html'])return!0x1;const _0x2d7ab1=_0x26ef51['posMax'],_0x39f854=_0x26ef51[_0x1dbc6e(0x333f)];if(0x3c!==_0x26ef51[_0x1dbc6e(0x3d6)][_0x1dbc6e(0x4955)](_0x39f854)||_0x39f854+0x2>=_0x2d7ab1)return!0x1;const _0x326826=_0x26ef51[_0x1dbc6e(0x3d6)][_0x1dbc6e(0x4955)](_0x39f854+0x1);if(0x21!==_0x326826&&0x3f!==_0x326826&&0x2f!==_0x326826&&!function(_0x518bb1){const _0x47ed84=0x20|_0x518bb1;return _0x47ed84>=0x61&&_0x47ed84<=0x7a;}(_0x326826))return!0x1;const 
_0x47669d=_0x26ef51[_0x1dbc6e(0x3d6)][_0x1dbc6e(0x384c)](_0x39f854)[_0x1dbc6e(0x2d96)](_0xd05017);if(!_0x47669d)return!0x1;if(!_0x199621){const _0x2f3327=_0x26ef51['push'](_0x1dbc6e(0x34c3),'',0x0);_0x2f3327['content']=_0x47669d[0x0],_0xea62e3=_0x2f3327[_0x1dbc6e(0x484f)],/^\s]/i[_0x1dbc6e(0x1769)](_0xea62e3)&&_0x26ef51[_0x1dbc6e(0x6a1)]++,function(_0x406f33){return/^<\/a\s*>/i['test'](_0x406f33);}(_0x2f3327[_0x1dbc6e(0x484f)])&&_0x26ef51[_0x1dbc6e(0x6a1)]--;}var _0xea62e3;return _0x26ef51['pos']+=_0x47669d[0x0]['length'],!0x0;}],[_0x37e46c(0x3816),function(_0x9445ec,_0x42fb52){const _0x2e9392=_0x37e46c,_0x21ba54=_0x9445ec[_0x2e9392(0x333f)],_0x596990=_0x9445ec['posMax'];if(0x26!==_0x9445ec['src'][_0x2e9392(0x4955)](_0x21ba54))return!0x1;if(_0x21ba54+0x1>=_0x596990)return!0x1;if(0x23===_0x9445ec[_0x2e9392(0x3d6)][_0x2e9392(0x4955)](_0x21ba54+0x1)){const _0x34bf5d=_0x9445ec[_0x2e9392(0x3d6)][_0x2e9392(0x384c)](_0x21ba54)[_0x2e9392(0x2d96)](_0x329438);if(_0x34bf5d){if(!_0x42fb52){const _0x412f9d='x'===_0x34bf5d[0x1][0x0][_0x2e9392(0x6e8)]()?parseInt(_0x34bf5d[0x1][_0x2e9392(0x384c)](0x1),0x10):parseInt(_0x34bf5d[0x1],0xa),_0x56a70a=_0x9445ec[_0x2e9392(0x1715)](_0x2e9392(0x946),'',0x0);_0x56a70a[_0x2e9392(0x484f)]=_0x11f3ea(_0x412f9d)?_0x463426(_0x412f9d):_0x463426(0xfffd),_0x56a70a[_0x2e9392(0x2bd8)]=_0x34bf5d[0x0],_0x56a70a[_0x2e9392(0x3a85)]=_0x2e9392(0x3816);}return _0x9445ec['pos']+=_0x34bf5d[0x0][_0x2e9392(0x1b19)],!0x0;}}else{const _0x3e37a6=_0x9445ec[_0x2e9392(0x3d6)]['slice'](_0x21ba54)['match'](_0xae5fad);if(_0x3e37a6){const _0x52f478=_0x14991c(_0x3e37a6[0x0]);if(_0x52f478!==_0x3e37a6[0x0]){if(!_0x42fb52){const _0x74745e=_0x9445ec['push'](_0x2e9392(0x946),'',0x0);_0x74745e[_0x2e9392(0x484f)]=_0x52f478,_0x74745e[_0x2e9392(0x2bd8)]=_0x3e37a6[0x0],_0x74745e[_0x2e9392(0x3a85)]=_0x2e9392(0x3816);}return _0x9445ec['pos']+=_0x3e37a6[0x0][_0x2e9392(0x1b19)],!0x0;}}}return!0x1;}]],_0x3890fa=[[_0x37e46c(0x1d9f),function(_0x178110){const 
_0x39537b=_0x37e46c,_0x14ac83=_0x178110[_0x39537b(0x27b1)],_0x42f4f2=_0x178110[_0x39537b(0x27b1)][_0x39537b(0x1b19)];_0x2740f1(_0x178110[_0x39537b(0x15d9)]);for(let _0x2c6bff=0x0;_0x2c6bff<_0x42f4f2;_0x2c6bff++)_0x14ac83[_0x2c6bff]&&_0x14ac83[_0x2c6bff]['delimiters']&&_0x2740f1(_0x14ac83[_0x2c6bff][_0x39537b(0x15d9)]);}],[_0x37e46c(0x22cd),_0x14c558[_0x37e46c(0x32d6)]],[_0x37e46c(0x2297),_0x25084f['postProcess']],[_0x37e46c(0x269c),function(_0x240386){const _0x4e01a0=_0x37e46c;let _0x2c6d71,_0x4c515f,_0x3b110c=0x0;const _0x2f4b81=_0x240386[_0x4e01a0(0x1c34)],_0x59ef9d=_0x240386[_0x4e01a0(0x1c34)][_0x4e01a0(0x1b19)];for(_0x2c6d71=_0x4c515f=0x0;_0x2c6d71<_0x59ef9d;_0x2c6d71++)_0x2f4b81[_0x2c6d71]['nesting']<0x0&&_0x3b110c--,_0x2f4b81[_0x2c6d71]['level']=_0x3b110c,_0x2f4b81[_0x2c6d71]['nesting']>0x0&&_0x3b110c++,_0x4e01a0(0x4006)===_0x2f4b81[_0x2c6d71][_0x4e01a0(0xcfc)]&&_0x2c6d71+0x1<_0x59ef9d&&_0x4e01a0(0x4006)===_0x2f4b81[_0x2c6d71+0x1][_0x4e01a0(0xcfc)]?_0x2f4b81[_0x2c6d71+0x1][_0x4e01a0(0x484f)]=_0x2f4b81[_0x2c6d71][_0x4e01a0(0x484f)]+_0x2f4b81[_0x2c6d71+0x1]['content']:(_0x2c6d71!==_0x4c515f&&(_0x2f4b81[_0x4c515f]=_0x2f4b81[_0x2c6d71]),_0x4c515f++);_0x2c6d71!==_0x4c515f&&(_0x2f4b81[_0x4e01a0(0x1b19)]=_0x4c515f);}]];function _0x4630e1(){const _0x501e10=_0x37e46c;this[_0x501e10(0x2eb9)]=new _0x37b2b2();for(let _0x21d0ac=0x0;_0x21d0ac<_0x57ff21[_0x501e10(0x1b19)];_0x21d0ac++)this['ruler'][_0x501e10(0x1715)](_0x57ff21[_0x21d0ac][0x0],_0x57ff21[_0x21d0ac][0x1]);this[_0x501e10(0x38ba)]=new _0x37b2b2();for(let _0x350654=0x0;_0x350654<_0x3890fa['length'];_0x350654++)this[_0x501e10(0x38ba)][_0x501e10(0x1715)](_0x3890fa[_0x350654][0x0],_0x3890fa[_0x350654][0x1]);}_0x4630e1[_0x37e46c(0x3b3c)]['skipToken']=function(_0x53e9f1){const _0x410c2a=_0x37e46c,_0x443f81=_0x53e9f1[_0x410c2a(0x333f)],_0xfafa98=this[_0x410c2a(0x2eb9)][_0x410c2a(0x433a)](''),_0x3ab028=_0xfafa98[_0x410c2a(0x1b19)],_0x14d1ae=_0x53e9f1['md']['options'][_0x410c2a(0x262)],_0x119dfb=_0x53e9f1['cache'];if(void 
0x0!==_0x119dfb[_0x443f81])return void(_0x53e9f1['pos']=_0x119dfb[_0x443f81]);let _0x575405=!0x1;if(_0x53e9f1[_0x410c2a(0x1fe)]<_0x14d1ae){for(let _0x2ba551=0x0;_0x2ba551<_0x3ab028;_0x2ba551++)if(_0x53e9f1[_0x410c2a(0x1fe)]++,_0x575405=_0xfafa98[_0x2ba551](_0x53e9f1,!0x0),_0x53e9f1[_0x410c2a(0x1fe)]--,_0x575405){if(_0x443f81>=_0x53e9f1[_0x410c2a(0x333f)])throw new Error(_0x410c2a(0x1100));break;}}else _0x53e9f1[_0x410c2a(0x333f)]=_0x53e9f1[_0x410c2a(0x3f9b)];_0x575405||_0x53e9f1[_0x410c2a(0x333f)]++,_0x119dfb[_0x443f81]=_0x53e9f1[_0x410c2a(0x333f)];},_0x4630e1[_0x37e46c(0x3b3c)][_0x37e46c(0x3c0b)]=function(_0x707999){const _0x322290=_0x37e46c,_0xca58ca=this['ruler'][_0x322290(0x433a)](''),_0x33509d=_0xca58ca[_0x322290(0x1b19)],_0x1d94fb=_0x707999[_0x322290(0x3f9b)],_0x5de6b6=_0x707999['md'][_0x322290(0x20b6)][_0x322290(0x262)];for(;_0x707999[_0x322290(0x333f)]<_0x1d94fb;){const _0x1659b6=_0x707999['pos'];let _0x3b6d2b=!0x1;if(_0x707999[_0x322290(0x1fe)]<_0x5de6b6){for(let _0x79a056=0x0;_0x79a056<_0x33509d;_0x79a056++)if(_0x3b6d2b=_0xca58ca[_0x79a056](_0x707999,!0x1),_0x3b6d2b){if(_0x1659b6>=_0x707999[_0x322290(0x333f)])throw new Error('inline\x20rule\x20didn\x27t\x20increment\x20state.pos');break;}}if(_0x3b6d2b){if(_0x707999[_0x322290(0x333f)]>=_0x1d94fb)break;}else _0x707999['pending']+=_0x707999[_0x322290(0x3d6)][_0x707999['pos']++];}_0x707999[_0x322290(0x3267)]&&_0x707999[_0x322290(0x2734)]();},_0x4630e1[_0x37e46c(0x3b3c)][_0x37e46c(0x2956)]=function(_0x1e135b,_0x27facc,_0x22d5dc,_0x2d3486){const _0x22731b=_0x37e46c,_0x1cab5a=new this[(_0x22731b(0x207e))](_0x1e135b,_0x27facc,_0x22d5dc,_0x2d3486);this[_0x22731b(0x3c0b)](_0x1cab5a);const _0x16335f=this[_0x22731b(0x38ba)]['getRules'](''),_0x8da9d8=_0x16335f['length'];for(let _0x43bf52=0x0;_0x43bf52<_0x8da9d8;_0x43bf52++)_0x16335f[_0x43bf52](_0x1cab5a);},_0x4630e1[_0x37e46c(0x3b3c)][_0x37e46c(0x207e)]=_0x11d425;const _0x2e1abd=_0x4630e1;function _0x164b56(_0x72f981){const _0x266ae4=_0x37e46c;return 
Array['prototype']['slice'][_0x266ae4(0x236b)](arguments,0x1)[_0x266ae4(0xa21)](function(_0x532e84){const _0x4b5faa=_0x266ae4;_0x532e84&&Object['keys'](_0x532e84)[_0x4b5faa(0xa21)](function(_0x43fb28){_0x72f981[_0x43fb28]=_0x532e84[_0x43fb28];});}),_0x72f981;}function _0x2ec2c9(_0x103281){const _0x308a46=_0x37e46c;return Object[_0x308a46(0x3b3c)][_0x308a46(0x8e8)][_0x308a46(0x236b)](_0x103281);}function _0x39294a(_0x4d40c2){const _0x80066a=_0x37e46c;return _0x80066a(0x229b)===_0x2ec2c9(_0x4d40c2);}function _0x2f2101(_0x39320a){const _0x5698e6=_0x37e46c;return _0x39320a[_0x5698e6(0x741)](/[.?*+^$[\]\\(){}|-]/g,_0x5698e6(0x37fb));}const _0x468419={'fuzzyLink':!0x0,'fuzzyEmail':!0x0,'fuzzyIP':!0x1},_0x3db73d={'http:':{'validate':function(_0x39ef73,_0x520518,_0x5eff11){const _0x4d1dea=_0x37e46c,_0x1d78dc=_0x39ef73['slice'](_0x520518);return _0x5eff11['re']['http']||(_0x5eff11['re'][_0x4d1dea(0x35ba)]=new RegExp('^\x5c/\x5c/'+_0x5eff11['re'][_0x4d1dea(0x41e9)]+_0x5eff11['re'][_0x4d1dea(0x3909)]+_0x5eff11['re'][_0x4d1dea(0x1632)],'i')),_0x5eff11['re'][_0x4d1dea(0x35ba)][_0x4d1dea(0x1769)](_0x1d78dc)?_0x1d78dc[_0x4d1dea(0x2d96)](_0x5eff11['re'][_0x4d1dea(0x35ba)])[0x0][_0x4d1dea(0x1b19)]:0x0;}},'https:':_0x37e46c(0x1acf),'ftp:':_0x37e46c(0x1acf),'//':{'validate':function(_0x125085,_0xaed1fa,_0x3e3393){const _0x1ee599=_0x37e46c,_0xf4223b=_0x125085[_0x1ee599(0x384c)](_0xaed1fa);return _0x3e3393['re'][_0x1ee599(0x320f)]||(_0x3e3393['re'][_0x1ee599(0x320f)]=new 
RegExp('^'+_0x3e3393['re'][_0x1ee599(0x41e9)]+'(?:localhost|(?:(?:'+_0x3e3393['re'][_0x1ee599(0x16b5)]+_0x1ee599(0x105b)+_0x3e3393['re'][_0x1ee599(0x479a)]+')'+_0x3e3393['re']['src_port']+_0x3e3393['re']['src_host_terminator']+_0x3e3393['re'][_0x1ee599(0x1632)],'i')),_0x3e3393['re'][_0x1ee599(0x320f)][_0x1ee599(0x1769)](_0xf4223b)?_0xaed1fa>=0x3&&':'===_0x125085[_0xaed1fa-0x3]||_0xaed1fa>=0x3&&'/'===_0x125085[_0xaed1fa-0x3]?0x0:_0xf4223b['match'](_0x3e3393['re']['no_http'])[0x0]['length']:0x0;}},'mailto:':{'validate':function(_0x577dbe,_0x431f16,_0x28a2c1){const _0x5774b7=_0x37e46c,_0x441b4d=_0x577dbe[_0x5774b7(0x384c)](_0x431f16);return _0x28a2c1['re']['mailto']||(_0x28a2c1['re']['mailto']=new RegExp('^'+_0x28a2c1['re'][_0x5774b7(0x4050)]+'@'+_0x28a2c1['re'][_0x5774b7(0x38e4)],'i')),_0x28a2c1['re'][_0x5774b7(0x12a1)]['test'](_0x441b4d)?_0x441b4d[_0x5774b7(0x2d96)](_0x28a2c1['re']['mailto'])[0x0][_0x5774b7(0x1b19)]:0x0;}}},_0x5b0fea='a[cdefgilmnoqrstuwxz]|b[abdefghijmnorstvwyz]|c[acdfghiklmnoruvwxyz]|d[ejkmoz]|e[cegrstu]|f[ijkmor]|g[abdefghilmnpqrstuwy]|h[kmnrtu]|i[delmnoqrst]|j[emop]|k[eghimnprwyz]|l[abcikrstuvy]|m[acdeghklmnopqrstuvwxyz]|n[acefgilopruz]|om|p[aefghklmnrstwy]|qa|r[eosuw]|s[abcdeghijklmnortuvxyz]|t[cdfghjklmnortvwz]|u[agksyz]|v[aceginu]|w[fs]|y[et]|z[amw]',_0x494720=_0x37e46c(0x1d84)['split']('|');function _0x318d70(_0x3ddce7){const _0x118559=_0x37e46c,_0x1c8002=_0x3ddce7['re']=function(_0x135253){const _0x5e8322=a0_0x11e7,_0x36a742={};_0x135253=_0x135253||{},_0x36a742[_0x5e8322(0x1747)]=_0x392380[_0x5e8322(0x33b0)],_0x36a742[_0x5e8322(0x5b9)]=_0x212275[_0x5e8322(0x33b0)],_0x36a742[_0x5e8322(0x480e)]=_0x5f3583[_0x5e8322(0x33b0)],_0x36a742[_0x5e8322(0x3828)]=_0x4bf2bf['source'],_0x36a742[_0x5e8322(0x2c78)]=[_0x36a742['src_Z'],_0x36a742[_0x5e8322(0x3828)],_0x36a742['src_Cc']][_0x5e8322(0x3541)]('|'),_0x36a742['src_ZCc']=[_0x36a742[_0x5e8322(0x480e)],_0x36a742[_0x5e8322(0x5b9)]]['join']('|');const _0x5bf762=_0x5e8322(0x4e63);return 
_0x36a742['src_pseudo_letter']=_0x5e8322(0x37b8)+_0x36a742[_0x5e8322(0x2c78)]+')'+_0x36a742[_0x5e8322(0x1747)]+')',_0x36a742['src_ip4']='(?:(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\x5c.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)',_0x36a742[_0x5e8322(0x41e9)]=_0x5e8322(0x3478)+_0x36a742['src_ZCc']+_0x5e8322(0x17c5),_0x36a742[_0x5e8322(0x3920)]='(?::(?:6(?:[0-4]\x5cd{3}|5(?:[0-4]\x5cd{2}|5(?:[0-2]\x5cd|3[0-5])))|[1-5]?\x5cd{1,4}))?',_0x36a742[_0x5e8322(0x4287)]=_0x5e8322(0x2bcb)+_0x36a742[_0x5e8322(0x2c78)]+')(?!'+(_0x135253[_0x5e8322(0x2751)]?_0x5e8322(0x2430):'-|')+_0x5e8322(0x2ea6)+_0x36a742[_0x5e8322(0x2c78)]+'))',_0x36a742[_0x5e8322(0x1632)]='(?:[/?#](?:(?!'+_0x36a742['src_ZCc']+'|'+_0x5bf762+_0x5e8322(0x1169)+_0x36a742['src_ZCc']+'|\x5c]).)*\x5c]|\x5c((?:(?!'+_0x36a742[_0x5e8322(0x1fa7)]+_0x5e8322(0x3d39)+_0x36a742[_0x5e8322(0x1fa7)]+_0x5e8322(0x1e4f)+_0x36a742[_0x5e8322(0x1fa7)]+'|[\x22]).)+\x5c\x22|\x5c\x27(?:(?!'+_0x36a742['src_ZCc']+_0x5e8322(0x2d1d)+_0x36a742[_0x5e8322(0x4c0d)]+_0x5e8322(0x1ab2)+_0x36a742[_0x5e8322(0x1fa7)]+_0x5e8322(0x39ea)+(_0x135253[_0x5e8322(0x2751)]?_0x5e8322(0x5036):_0x5e8322(0x3b2b))+_0x5e8322(0x19b0)+_0x36a742[_0x5e8322(0x1fa7)]+_0x5e8322(0x856)+_0x36a742[_0x5e8322(0x1fa7)]+_0x5e8322(0x4c30)+_0x36a742[_0x5e8322(0x1fa7)]+_0x5e8322(0x268b)+_0x36a742[_0x5e8322(0x1fa7)]+_0x5e8322(0x1130),_0x36a742[_0x5e8322(0x4050)]=_0x5e8322(0x51a),_0x36a742[_0x5e8322(0x3b47)]='xn--[a-z0-9\x5c-]{1,59}',_0x36a742[_0x5e8322(0x479a)]=_0x5e8322(0xf54)+_0x36a742[_0x5e8322(0x3b47)]+'|'+_0x36a742['src_pseudo_letter']+_0x5e8322(0x4397),_0x36a742['src_domain']=_0x5e8322(0xf54)+_0x36a742[_0x5e8322(0x3b47)]+_0x5e8322(0x1982)+_0x36a742[_0x5e8322(0x4c0d)]+_0x5e8322(0x1dfd)+_0x36a742[_0x5e8322(0x4c0d)]+_0x5e8322(0x8f6)+_0x36a742[_0x5e8322(0x4c0d)]+'){0,61}'+_0x36a742[_0x5e8322(0x4c0d)]+'))',_0x36a742[_0x5e8322(0x273b)]=_0x5e8322(0x39ab)+_0x36a742['src_domain']+_0x5e8322(0xa50)+_0x36a742[_0x5e8322(0x16b5)]+'))',_0x36a742['tpl_host_fuzzy']=_0x5e8322(0xf54)+_0x36a742[_0x5
e8322(0x455d)]+'|(?:(?:(?:'+_0x36a742[_0x5e8322(0x16b5)]+_0x5e8322(0x1de6),_0x36a742['tpl_host_no_ip_fuzzy']=_0x5e8322(0x1aa6)+_0x36a742['src_domain']+_0x5e8322(0x45f6),_0x36a742[_0x5e8322(0x38e4)]=_0x36a742['src_host']+_0x36a742[_0x5e8322(0x4287)],_0x36a742['tpl_host_fuzzy_strict']=_0x36a742['tpl_host_fuzzy']+_0x36a742[_0x5e8322(0x4287)],_0x36a742[_0x5e8322(0x3909)]=_0x36a742['src_host']+_0x36a742[_0x5e8322(0x3920)]+_0x36a742[_0x5e8322(0x4287)],_0x36a742[_0x5e8322(0x3dda)]=_0x36a742[_0x5e8322(0x1abd)]+_0x36a742[_0x5e8322(0x3920)]+_0x36a742[_0x5e8322(0x4287)],_0x36a742['tpl_host_port_no_ip_fuzzy_strict']=_0x36a742['tpl_host_no_ip_fuzzy']+_0x36a742[_0x5e8322(0x3920)]+_0x36a742[_0x5e8322(0x4287)],_0x36a742[_0x5e8322(0x3a2f)]=_0x5e8322(0x171c)+_0x36a742[_0x5e8322(0x2c78)]+_0x5e8322(0x4967),_0x36a742[_0x5e8322(0x2c1d)]=_0x5e8322(0x1777)+_0x36a742[_0x5e8322(0x1fa7)]+')('+_0x36a742[_0x5e8322(0x4050)]+'@'+_0x36a742[_0x5e8322(0x4f90)]+')',_0x36a742[_0x5e8322(0x4c8d)]=_0x5e8322(0x1d62)+_0x36a742['src_ZPCc']+_0x5e8322(0x37fc)+_0x36a742['tpl_host_port_fuzzy_strict']+_0x36a742['src_path']+')',_0x36a742[_0x5e8322(0x4c38)]=_0x5e8322(0x1d62)+_0x36a742['src_ZPCc']+_0x5e8322(0x37fc)+_0x36a742['tpl_host_port_no_ip_fuzzy_strict']+_0x36a742['src_path']+')',_0x36a742;}(_0x3ddce7[_0x118559(0x722)]),_0x270882=_0x3ddce7['__tlds__'][_0x118559(0x384c)]();function _0x1a5a34(_0x5d03d0){const _0x947b5a=_0x118559;return 
_0x5d03d0['replace'](_0x947b5a(0x17e8),_0x1c8002[_0x947b5a(0x340)]);}_0x3ddce7[_0x118559(0x546)](),_0x3ddce7[_0x118559(0x3c69)]||_0x270882[_0x118559(0x1715)](_0x5b0fea),_0x270882[_0x118559(0x1715)](_0x1c8002['src_xn']),_0x1c8002[_0x118559(0x340)]=_0x270882[_0x118559(0x3541)]('|'),_0x1c8002[_0x118559(0x49cf)]=RegExp(_0x1a5a34(_0x1c8002[_0x118559(0x2c1d)]),'i'),_0x1c8002[_0x118559(0x3b93)]=RegExp(_0x1a5a34(_0x1c8002[_0x118559(0x4c8d)]),'i'),_0x1c8002[_0x118559(0x5228)]=RegExp(_0x1a5a34(_0x1c8002[_0x118559(0x4c38)]),'i'),_0x1c8002[_0x118559(0x3f98)]=RegExp(_0x1a5a34(_0x1c8002[_0x118559(0x3a2f)]),'i');const _0x23cc4e=[];function _0x4c4076(_0xea4e8b,_0x4392cc){const _0x90a4f7=_0x118559;throw new Error('(LinkifyIt)\x20Invalid\x20schema\x20\x22'+_0xea4e8b+_0x90a4f7(0x9ed)+_0x4392cc);}_0x3ddce7['__compiled__']={},Object['keys'](_0x3ddce7[_0x118559(0x2497)])[_0x118559(0xa21)](function(_0x3533a5){const _0x129345=_0x118559,_0x2636f9=_0x3ddce7['__schemas__'][_0x3533a5];if(null===_0x2636f9)return;const _0x40a73b={'validate':null,'link':null};if(_0x3ddce7[_0x129345(0x1863)][_0x3533a5]=_0x40a73b,_0x129345(0x4d86)===_0x2ec2c9(_0x2636f9))return!function(_0x2d4f1a){const _0xae7b8f=_0x129345;return _0xae7b8f(0x1fee)===_0x2ec2c9(_0x2d4f1a);}(_0x2636f9['validate'])?_0x39294a(_0x2636f9[_0x129345(0x1509)])?_0x40a73b[_0x129345(0x1509)]=_0x2636f9[_0x129345(0x1509)]:_0x4c4076(_0x3533a5,_0x2636f9):_0x40a73b['validate']=function(_0x5c097c){return function(_0x4690b0,_0x1b88b3){const _0x3efc1b=a0_0x11e7,_0x2e6ee9=_0x4690b0[_0x3efc1b(0x384c)](_0x1b88b3);return 
_0x5c097c[_0x3efc1b(0x1769)](_0x2e6ee9)?_0x2e6ee9[_0x3efc1b(0x2d96)](_0x5c097c)[0x0][_0x3efc1b(0x1b19)]:0x0;};}(_0x2636f9[_0x129345(0x1509)]),void(_0x39294a(_0x2636f9['normalize'])?_0x40a73b[_0x129345(0x2429)]=_0x2636f9[_0x129345(0x2429)]:_0x2636f9['normalize']?_0x4c4076(_0x3533a5,_0x2636f9):_0x40a73b[_0x129345(0x2429)]=function(_0x4147bc,_0x2e7b9a){_0x2e7b9a['normalize'](_0x4147bc);});!function(_0x5cfdb3){return'[object\x20String]'===_0x2ec2c9(_0x5cfdb3);}(_0x2636f9)?_0x4c4076(_0x3533a5,_0x2636f9):_0x23cc4e[_0x129345(0x1715)](_0x3533a5);}),_0x23cc4e[_0x118559(0xa21)](function(_0x238fa1){const _0x23308d=_0x118559;_0x3ddce7[_0x23308d(0x1863)][_0x3ddce7['__schemas__'][_0x238fa1]]&&(_0x3ddce7['__compiled__'][_0x238fa1][_0x23308d(0x1509)]=_0x3ddce7[_0x23308d(0x1863)][_0x3ddce7[_0x23308d(0x2497)][_0x238fa1]]['validate'],_0x3ddce7[_0x23308d(0x1863)][_0x238fa1][_0x23308d(0x2429)]=_0x3ddce7['__compiled__'][_0x3ddce7[_0x23308d(0x2497)][_0x238fa1]][_0x23308d(0x2429)]);}),_0x3ddce7[_0x118559(0x1863)]['']={'validate':null,'normalize':function(_0x2360b0,_0x5e3c30){const _0x3647a7=_0x118559;_0x5e3c30[_0x3647a7(0x2429)](_0x2360b0);}};const _0x134101=Object[_0x118559(0x1ea9)](_0x3ddce7['__compiled__'])[_0x118559(0x1465)](function(_0x3d178e){const _0x51b8c4=_0x118559;return _0x3d178e[_0x51b8c4(0x1b19)]>0x0&&_0x3ddce7[_0x51b8c4(0x1863)][_0x3d178e];})[_0x118559(0x4833)](_0x2f2101)[_0x118559(0x3541)]('|');_0x3ddce7['re'][_0x118559(0x157c)]=RegExp(_0x118559(0x410c)+_0x1c8002[_0x118559(0x2c78)]+_0x118559(0x1174)+_0x134101+')','i'),_0x3ddce7['re'][_0x118559(0x2c94)]=RegExp(_0x118559(0x410c)+_0x1c8002[_0x118559(0x2c78)]+_0x118559(0x1174)+_0x134101+')','ig'),_0x3ddce7['re']['schema_at_start']=RegExp('^'+_0x3ddce7['re']['schema_search']['source'],'i'),_0x3ddce7['re']['pretest']=RegExp('('+_0x3ddce7['re']['schema_test'][_0x118559(0x33b0)]+_0x118559(0x3a2c)+_0x3ddce7['re']['host_fuzzy_test'][_0x118559(0x33b0)]+_0x118559(0x108d),'i'),function(_0x6a9cae){const 
_0x1f25c4=_0x118559;_0x6a9cae[_0x1f25c4(0x15a2)]=-0x1,_0x6a9cae[_0x1f25c4(0x2f5e)]='';}(_0x3ddce7);}function _0x5a6862(_0x2fea64,_0xe6644f){const _0x2e9375=_0x37e46c,_0x258618=_0x2fea64['__index__'],_0x5993f4=_0x2fea64[_0x2e9375(0x46f4)],_0x5e0c5a=_0x2fea64[_0x2e9375(0x2f5e)][_0x2e9375(0x384c)](_0x258618,_0x5993f4);this[_0x2e9375(0x313b)]=_0x2fea64['__schema__'][_0x2e9375(0x6e8)](),this[_0x2e9375(0x3bb5)]=_0x258618+_0xe6644f,this['lastIndex']=_0x5993f4+_0xe6644f,this[_0x2e9375(0x1efd)]=_0x5e0c5a,this[_0x2e9375(0x4006)]=_0x5e0c5a,this[_0x2e9375(0xd17)]=_0x5e0c5a;}function _0x11d84d(_0x1ee808,_0x582828){const _0x4a73bf=_0x37e46c,_0x4c22ec=new _0x5a6862(_0x1ee808,_0x582828);return _0x1ee808[_0x4a73bf(0x1863)][_0x4c22ec[_0x4a73bf(0x313b)]][_0x4a73bf(0x2429)](_0x4c22ec,_0x1ee808),_0x4c22ec;}function _0x2ed1a2(_0x6d3328,_0x8a725a){const _0x283b20=_0x37e46c;if(!(this instanceof _0x2ed1a2))return new _0x2ed1a2(_0x6d3328,_0x8a725a);var _0x31aae1;_0x8a725a||(_0x31aae1=_0x6d3328,Object[_0x283b20(0x1ea9)](_0x31aae1||{})[_0x283b20(0x24d8)](function(_0x4c67b2,_0x1159f8){const _0x2bc819=_0x283b20;return _0x4c67b2||_0x468419[_0x2bc819(0x2427)](_0x1159f8);},!0x1)&&(_0x8a725a=_0x6d3328,_0x6d3328={})),this[_0x283b20(0x722)]=_0x164b56({},_0x468419,_0x8a725a),this[_0x283b20(0x15a2)]=-0x1,this[_0x283b20(0x46f4)]=-0x1,this[_0x283b20(0x2620)]='',this['__text_cache__']='',this[_0x283b20(0x2497)]=_0x164b56({},_0x3db73d,_0x6d3328),this[_0x283b20(0x1863)]={},this[_0x283b20(0x506)]=_0x494720,this[_0x283b20(0x3c69)]=!0x1,this['re']={},_0x318d70(this);}_0x2ed1a2[_0x37e46c(0x3b3c)][_0x37e46c(0x362c)]=function(_0x310027,_0x36de6f){return this['__schemas__'][_0x310027]=_0x36de6f,_0x318d70(this),this;},_0x2ed1a2['prototype'][_0x37e46c(0x1fa)]=function(_0x4b71df){const _0x2f8fd3=_0x37e46c;return this[_0x2f8fd3(0x722)]=_0x164b56(this[_0x2f8fd3(0x722)],_0x4b71df),this;},_0x2ed1a2[_0x37e46c(0x3b3c)][_0x37e46c(0x1769)]=function(_0x473fed){const 
_0x7e79d9=_0x37e46c;if(this[_0x7e79d9(0x2f5e)]=_0x473fed,this['__index__']=-0x1,!_0x473fed[_0x7e79d9(0x1b19)])return!0x1;let _0x3aaaa7,_0x242e60,_0x2dbe3,_0x5a22c5,_0x133bd5,_0x52bf72,_0x31a1cd,_0x5c4da9,_0x2e6677;if(this['re']['schema_test'][_0x7e79d9(0x1769)](_0x473fed)){for(_0x31a1cd=this['re'][_0x7e79d9(0x2c94)],_0x31a1cd[_0x7e79d9(0x3655)]=0x0;null!==(_0x3aaaa7=_0x31a1cd['exec'](_0x473fed));)if(_0x5a22c5=this[_0x7e79d9(0x4710)](_0x473fed,_0x3aaaa7[0x2],_0x31a1cd[_0x7e79d9(0x3655)]),_0x5a22c5){this[_0x7e79d9(0x2620)]=_0x3aaaa7[0x2],this[_0x7e79d9(0x15a2)]=_0x3aaaa7[_0x7e79d9(0x3bb5)]+_0x3aaaa7[0x1][_0x7e79d9(0x1b19)],this[_0x7e79d9(0x46f4)]=_0x3aaaa7[_0x7e79d9(0x3bb5)]+_0x3aaaa7[0x0][_0x7e79d9(0x1b19)]+_0x5a22c5;break;}}return this[_0x7e79d9(0x722)]['fuzzyLink']&&this['__compiled__'][_0x7e79d9(0x1acf)]&&(_0x5c4da9=_0x473fed[_0x7e79d9(0x3190)](this['re'][_0x7e79d9(0x3f98)]),_0x5c4da9>=0x0&&(this[_0x7e79d9(0x15a2)]<0x0||_0x5c4da9=0x0&&null!==(_0x2dbe3=_0x473fed[_0x7e79d9(0x2d96)](this['re'][_0x7e79d9(0x49cf)]))&&(_0x133bd5=_0x2dbe3['index']+_0x2dbe3[0x1][_0x7e79d9(0x1b19)],_0x52bf72=_0x2dbe3[_0x7e79d9(0x3bb5)]+_0x2dbe3[0x0][_0x7e79d9(0x1b19)],(this[_0x7e79d9(0x15a2)]<0x0||_0x133bd5this[_0x7e79d9(0x46f4)])&&(this[_0x7e79d9(0x2620)]='mailto:',this[_0x7e79d9(0x15a2)]=_0x133bd5,this['__last_index__']=_0x52bf72))),this[_0x7e79d9(0x15a2)]>=0x0;},_0x2ed1a2['prototype'][_0x37e46c(0x296a)]=function(_0x442569){const _0x3061cf=_0x37e46c;return this['re']['pretest'][_0x3061cf(0x1769)](_0x442569);},_0x2ed1a2['prototype'][_0x37e46c(0x4710)]=function(_0x2171a8,_0x1a6157,_0x94e802){const _0x2a26bd=_0x37e46c;return this[_0x2a26bd(0x1863)][_0x1a6157[_0x2a26bd(0x6e8)]()]?this[_0x2a26bd(0x1863)][_0x1a6157[_0x2a26bd(0x6e8)]()][_0x2a26bd(0x1509)](_0x2171a8,_0x94e802,this):0x0;},_0x2ed1a2[_0x37e46c(0x3b3c)][_0x37e46c(0x2d96)]=function(_0x2f60df){const _0x13eff8=_0x37e46c,_0xd9da10=[];let 
_0x4ac1c6=0x0;this['__index__']>=0x0&&this[_0x13eff8(0x2f5e)]===_0x2f60df&&(_0xd9da10[_0x13eff8(0x1715)](_0x11d84d(this,_0x4ac1c6)),_0x4ac1c6=this['__last_index__']);let _0x38d977=_0x4ac1c6?_0x2f60df[_0x13eff8(0x384c)](_0x4ac1c6):_0x2f60df;for(;this[_0x13eff8(0x1769)](_0x38d977);)_0xd9da10[_0x13eff8(0x1715)](_0x11d84d(this,_0x4ac1c6)),_0x38d977=_0x38d977[_0x13eff8(0x384c)](this[_0x13eff8(0x46f4)]),_0x4ac1c6+=this[_0x13eff8(0x46f4)];return _0xd9da10[_0x13eff8(0x1b19)]?_0xd9da10:null;},_0x2ed1a2[_0x37e46c(0x3b3c)]['matchAtStart']=function(_0x997650){const _0x1ebc2f=_0x37e46c;if(this['__text_cache__']=_0x997650,this[_0x1ebc2f(0x15a2)]=-0x1,!_0x997650[_0x1ebc2f(0x1b19)])return null;const _0x5e3530=this['re'][_0x1ebc2f(0x24c3)][_0x1ebc2f(0x198d)](_0x997650);if(!_0x5e3530)return null;const _0x4e3ad0=this[_0x1ebc2f(0x4710)](_0x997650,_0x5e3530[0x2],_0x5e3530[0x0][_0x1ebc2f(0x1b19)]);return _0x4e3ad0?(this['__schema__']=_0x5e3530[0x2],this[_0x1ebc2f(0x15a2)]=_0x5e3530[_0x1ebc2f(0x3bb5)]+_0x5e3530[0x1][_0x1ebc2f(0x1b19)],this['__last_index__']=_0x5e3530[_0x1ebc2f(0x3bb5)]+_0x5e3530[0x0][_0x1ebc2f(0x1b19)]+_0x4e3ad0,_0x11d84d(this,0x0)):null;},_0x2ed1a2[_0x37e46c(0x3b3c)][_0x37e46c(0x3b7d)]=function(_0x443dca,_0x128993){const _0x98f879=_0x37e46c;return _0x443dca=Array['isArray'](_0x443dca)?_0x443dca:[_0x443dca],_0x128993?(this['__tlds__']=this[_0x98f879(0x506)][_0x98f879(0x1d1d)](_0x443dca)[_0x98f879(0x4c33)]()['filter'](function(_0x50ea1f,_0x2bbd12,_0x440014){return _0x50ea1f!==_0x440014[_0x2bbd12-0x1];})[_0x98f879(0x78b)](),_0x318d70(this),this):(this[_0x98f879(0x506)]=_0x443dca[_0x98f879(0x384c)](),this['__tlds_replaced__']=!0x0,_0x318d70(this),this);},_0x2ed1a2['prototype'][_0x37e46c(0x2429)]=function(_0x35e016){const 
_0x585ef5=_0x37e46c;_0x35e016['schema']||(_0x35e016[_0x585ef5(0xd17)]=_0x585ef5(0x1649)+_0x35e016['url']),_0x585ef5(0x594)!==_0x35e016[_0x585ef5(0x313b)]||/^mailto:/i[_0x585ef5(0x1769)](_0x35e016[_0x585ef5(0xd17)])||(_0x35e016[_0x585ef5(0xd17)]='mailto:'+_0x35e016[_0x585ef5(0xd17)]);},_0x2ed1a2[_0x37e46c(0x3b3c)]['onCompile']=function(){};const _0x519f70=_0x2ed1a2,_0x4ab03c=0x7fffffff,_0x55589a=0x24,_0x1b02bc=/^xn--/,_0x5e6f11=/[^\0-\x7F]/,_0x2f5c00=/[\x2E\u3002\uFF0E\uFF61]/g,_0x19fb29={'overflow':_0x37e46c(0x12b6),'not-basic':_0x37e46c(0x30c),'invalid-input':'Invalid\x20input'},_0x4526ee=Math['floor'],_0x30e7d4=String[_0x37e46c(0x49bf)];function _0x2cddf(_0x1119a1){throw new RangeError(_0x19fb29[_0x1119a1]);}function _0x25b59d(_0x398194,_0x24e412){const _0xf6ede1=_0x37e46c,_0x28250f=_0x398194[_0xf6ede1(0x1117)]('@');let _0x20ca79='';_0x28250f[_0xf6ede1(0x1b19)]>0x1&&(_0x20ca79=_0x28250f[0x0]+'@',_0x398194=_0x28250f[0x1]);const _0x38f9b9=function(_0x3abd2b,_0x4b963e){const _0x669374=_0xf6ede1,_0x5d7661=[];let _0x2dbb22=_0x3abd2b[_0x669374(0x1b19)];for(;_0x2dbb22--;)_0x5d7661[_0x2dbb22]=_0x4b963e(_0x3abd2b[_0x2dbb22]);return _0x5d7661;}((_0x398194=_0x398194[_0xf6ede1(0x741)](_0x2f5c00,'.'))[_0xf6ede1(0x1117)]('.'),_0x24e412)[_0xf6ede1(0x3541)]('.');return _0x20ca79+_0x38f9b9;}function _0x58bad4(_0x1ca040){const _0x4ebd4b=_0x37e46c,_0x5cc31d=[];let _0xe4fa18=0x0;const _0x15a41a=_0x1ca040['length'];for(;_0xe4fa18<_0x15a41a;){const _0x45c2b5=_0x1ca040[_0x4ebd4b(0x4955)](_0xe4fa18++);if(_0x45c2b5>=0xd800&&_0x45c2b5<=0xdbff&&_0xe4fa18<_0x15a41a){const _0x4b52d8=_0x1ca040['charCodeAt'](_0xe4fa18++);0xdc00==(0xfc00&_0x4b52d8)?_0x5cc31d[_0x4ebd4b(0x1715)](((0x3ff&_0x45c2b5)<<0xa)+(0x3ff&_0x4b52d8)+0x10000):(_0x5cc31d[_0x4ebd4b(0x1715)](_0x45c2b5),_0xe4fa18--);}else _0x5cc31d[_0x4ebd4b(0x1715)](_0x45c2b5);}return _0x5cc31d;}const _0xab8509=function(_0x39813b,_0x4b23c6){return 
_0x39813b+0x16+0x4b*(_0x39813b<0x1a)-((0x0!=_0x4b23c6)<<0x5);},_0x269f4c=function(_0x190b71,_0x110cce,_0x442cdf){let _0x30f5cd=0x0;for(_0x190b71=_0x442cdf?_0x4526ee(_0x190b71/0x2bc):_0x190b71>>0x1,_0x190b71+=_0x4526ee(_0x190b71/_0x110cce);_0x190b71>0x1c7;_0x30f5cd+=_0x55589a)_0x190b71=_0x4526ee(_0x190b71/0x23);return _0x4526ee(_0x30f5cd+0x24*_0x190b71/(_0x190b71+0x26));},_0x38b0de=function(_0x35e4ec){const _0x5ddad0=_0x37e46c,_0x171b26=[],_0x49db8c=_0x35e4ec[_0x5ddad0(0x1b19)];let _0x15cc4d=0x0,_0x38d987=0x80,_0x5915ac=0x48,_0x50eb89=_0x35e4ec[_0x5ddad0(0x4004)]('-');_0x50eb89<0x0&&(_0x50eb89=0x0);for(let _0x19f6d6=0x0;_0x19f6d6<_0x50eb89;++_0x19f6d6)_0x35e4ec[_0x5ddad0(0x4955)](_0x19f6d6)>=0x80&&_0x2cddf(_0x5ddad0(0x36d9)),_0x171b26['push'](_0x35e4ec[_0x5ddad0(0x4955)](_0x19f6d6));for(let _0x15dd14=_0x50eb89>0x0?_0x50eb89+0x1:0x0;_0x15dd14<_0x49db8c;){const _0x423361=_0x15cc4d;for(let _0x372188=0x1,_0x1e40af=_0x55589a;;_0x1e40af+=_0x55589a){_0x15dd14>=_0x49db8c&&_0x2cddf(_0x5ddad0(0x5c7));const _0x5321a4=(_0x483efc=_0x35e4ec[_0x5ddad0(0x4955)](_0x15dd14++))>=0x30&&_0x483efc<0x3a?_0x483efc-0x30+0x1a:_0x483efc>=0x41&&_0x483efc<0x5b?_0x483efc-0x41:_0x483efc>=0x61&&_0x483efc<0x7b?_0x483efc-0x61:_0x55589a;_0x5321a4>=_0x55589a&&_0x2cddf(_0x5ddad0(0x5c7)),_0x5321a4>_0x4526ee((_0x4ab03c-_0x15cc4d)/_0x372188)&&_0x2cddf('overflow'),_0x15cc4d+=_0x5321a4*_0x372188;const _0x5266a9=_0x1e40af<=_0x5915ac?0x1:_0x1e40af>=_0x5915ac+0x1a?0x1a:_0x1e40af-_0x5915ac;if(_0x5321a4<_0x5266a9)break;const _0x2633c7=_0x55589a-_0x5266a9;_0x372188>_0x4526ee(_0x4ab03c/_0x2633c7)&&_0x2cddf('overflow'),_0x372188*=_0x2633c7;}const _0x275060=_0x171b26[_0x5ddad0(0x1b19)]+0x1;_0x5915ac=_0x269f4c(_0x15cc4d-_0x423361,_0x275060,0x0==_0x423361),_0x4526ee(_0x15cc4d/_0x275060)>_0x4ab03c-_0x38d987&&_0x2cddf(_0x5ddad0(0xa69)),_0x38d987+=_0x4526ee(_0x15cc4d/_0x275060),_0x15cc4d%=_0x275060,_0x171b26[_0x5ddad0(0x4986)](_0x15cc4d++,0x0,_0x38d987);}var _0x483efc;return 
String['fromCodePoint'](..._0x171b26);},_0x49e6ca=function(_0x2be69d){const _0x180aba=_0x37e46c,_0x3e2161=[],_0x687baf=(_0x2be69d=_0x58bad4(_0x2be69d))[_0x180aba(0x1b19)];let _0x7037d9=0x80,_0x2b878c=0x0,_0x2c3e4b=0x48;for(const _0x3c6c87 of _0x2be69d)_0x3c6c87<0x80&&_0x3e2161[_0x180aba(0x1715)](_0x30e7d4(_0x3c6c87));const _0x3edf7a=_0x3e2161[_0x180aba(0x1b19)];let _0x25fa03=_0x3edf7a;for(_0x3edf7a&&_0x3e2161[_0x180aba(0x1715)]('-');_0x25fa03<_0x687baf;){let _0x309c8e=_0x4ab03c;for(const _0xcaf01c of _0x2be69d)_0xcaf01c>=_0x7037d9&&_0xcaf01c<_0x309c8e&&(_0x309c8e=_0xcaf01c);const _0xb12be8=_0x25fa03+0x1;_0x309c8e-_0x7037d9>_0x4526ee((_0x4ab03c-_0x2b878c)/_0xb12be8)&&_0x2cddf(_0x180aba(0xa69)),_0x2b878c+=(_0x309c8e-_0x7037d9)*_0xb12be8,_0x7037d9=_0x309c8e;for(const _0x2f31b0 of _0x2be69d)if(_0x2f31b0<_0x7037d9&&++_0x2b878c>_0x4ab03c&&_0x2cddf('overflow'),_0x2f31b0===_0x7037d9){let _0x18f14f=_0x2b878c;for(let _0x1b8e0a=_0x55589a;;_0x1b8e0a+=_0x55589a){const _0x46d70e=_0x1b8e0a<=_0x2c3e4b?0x1:_0x1b8e0a>=_0x2c3e4b+0x1a?0x1a:_0x1b8e0a-_0x2c3e4b;if(_0x18f14f<_0x46d70e)break;const _0x45fd50=_0x18f14f-_0x46d70e,_0x4938be=_0x55589a-_0x46d70e;_0x3e2161[_0x180aba(0x1715)](_0x30e7d4(_0xab8509(_0x46d70e+_0x45fd50%_0x4938be,0x0))),_0x18f14f=_0x4526ee(_0x45fd50/_0x4938be);}_0x3e2161[_0x180aba(0x1715)](_0x30e7d4(_0xab8509(_0x18f14f,0x0))),_0x2c3e4b=_0x269f4c(_0x2b878c,_0xb12be8,_0x25fa03===_0x3edf7a),_0x2b878c=0x0,++_0x25fa03;}++_0x2b878c,++_0x7037d9;}return _0x3e2161[_0x180aba(0x3541)]('');},_0x501fd4={'version':'2.3.1','ucs2':{'decode':_0x58bad4,'encode':_0x40327b=>String[_0x37e46c(0xe5f)](..._0x40327b)},'decode':_0x38b0de,'encode':_0x49e6ca,'toASCII':function(_0x2fc011){return _0x25b59d(_0x2fc011,function(_0x2ff723){const _0x4469ed=a0_0x11e7;return _0x5e6f11[_0x4469ed(0x1769)](_0x2ff723)?_0x4469ed(0x293a)+_0x49e6ca(_0x2ff723):_0x2ff723;});},'toUnicode':function(_0x219714){return _0x25b59d(_0x219714,function(_0x4e3dca){const _0x42cbd3=a0_0x11e7;return 
_0x1b02bc[_0x42cbd3(0x1769)](_0x4e3dca)?_0x38b0de(_0x4e3dca['slice'](0x4)[_0x42cbd3(0x6e8)]()):_0x4e3dca;});}},_0xda789a={'default':{'options':{'html':!0x1,'xhtmlOut':!0x1,'breaks':!0x1,'langPrefix':_0x37e46c(0xd06),'linkify':!0x1,'typographer':!0x1,'quotes':_0x37e46c(0x1202),'highlight':null,'maxNesting':0x64},'components':{'core':{},'block':{},'inline':{}}},'zero':{'options':{'html':!0x1,'xhtmlOut':!0x1,'breaks':!0x1,'langPrefix':'language-','linkify':!0x1,'typographer':!0x1,'quotes':_0x37e46c(0x1202),'highlight':null,'maxNesting':0x14},'components':{'core':{'rules':['normalize',_0x37e46c(0x1f2e),'inline','text_join']},'block':{'rules':[_0x37e46c(0xc3d)]},'inline':{'rules':[_0x37e46c(0x4006)],'rules2':[_0x37e46c(0x1d9f),_0x37e46c(0x269c)]}}},'commonmark':{'options':{'html':!0x0,'xhtmlOut':!0x0,'breaks':!0x1,'langPrefix':_0x37e46c(0xd06),'linkify':!0x1,'typographer':!0x1,'quotes':'“”‘’','highlight':null,'maxNesting':0x14},'components':{'core':{'rules':['normalize',_0x37e46c(0x1f2e),_0x37e46c(0x2988),_0x37e46c(0x4be8)]},'block':{'rules':[_0x37e46c(0x702),_0x37e46c(0x4948),'fence',_0x37e46c(0x561),'hr',_0x37e46c(0x4bcf),_0x37e46c(0x960),_0x37e46c(0x144e),_0x37e46c(0x3d0f),_0x37e46c(0xc3d)]},'inline':{'rules':[_0x37e46c(0x1e66),_0x37e46c(0x10f4),'emphasis','entity','escape',_0x37e46c(0x34c3),'image',_0x37e46c(0x4b32),_0x37e46c(0x516f),_0x37e46c(0x4006)],'rules2':[_0x37e46c(0x1d9f),_0x37e46c(0x2297),'fragments_join']}}}},_0x520153=/^(vbscript|javascript|file|data):/,_0x23b33d=/^data:image\/(gif|png|jpeg|webp);/;function _0x29d5d6(_0x10c10e){const _0x2a6e99=_0x37e46c,_0x5b845d=_0x10c10e[_0x2a6e99(0x1b23)]()[_0x2a6e99(0x6e8)]();return!_0x520153[_0x2a6e99(0x1769)](_0x5b845d)||_0x23b33d[_0x2a6e99(0x1769)](_0x5b845d);}const _0x148e33=[_0x37e46c(0x1acf),_0x37e46c(0x506b),_0x37e46c(0x594)];function _0x5573cf(_0x4cb3a2){const 
_0x247fd3=_0x37e46c,_0x598573=_0x44b13d(_0x4cb3a2,!0x0);if(_0x598573['hostname']&&(!_0x598573['protocol']||_0x148e33[_0x247fd3(0x8c9)](_0x598573[_0x247fd3(0x33e5)])>=0x0))try{_0x598573[_0x247fd3(0xf76)]=_0x501fd4[_0x247fd3(0x7ce)](_0x598573[_0x247fd3(0xf76)]);}catch(_0x9dd1ca){}return _0x39157d(_0x3d11ed(_0x598573));}function _0x1099c3(_0x2ec9c1){const _0x1e40e1=_0x37e46c,_0x480205=_0x44b13d(_0x2ec9c1,!0x0);if(_0x480205[_0x1e40e1(0xf76)]&&(!_0x480205[_0x1e40e1(0x33e5)]||_0x148e33[_0x1e40e1(0x8c9)](_0x480205['protocol'])>=0x0))try{_0x480205[_0x1e40e1(0xf76)]=_0x501fd4[_0x1e40e1(0x183c)](_0x480205[_0x1e40e1(0xf76)]);}catch(_0x38e4b5){}return _0x59651b(_0x3d11ed(_0x480205),_0x59651b[_0x1e40e1(0x3a2b)]+'%');}function _0x1c3966(_0xa7f989,_0x2d0006){const _0x3da23d=_0x37e46c;if(!(this instanceof _0x1c3966))return new _0x1c3966(_0xa7f989,_0x2d0006);_0x2d0006||_0x297e70(_0xa7f989)||(_0x2d0006=_0xa7f989||{},_0xa7f989=_0x3da23d(0x3d23)),this[_0x3da23d(0x2988)]=new _0x2e1abd(),this[_0x3da23d(0x1f2e)]=new _0x329cbe(),this[_0x3da23d(0x1a49)]=new _0x47ce26(),this[_0x3da23d(0x29f1)]=new _0x5bf913(),this[_0x3da23d(0x1255)]=new _0x519f70(),this['validateLink']=_0x29d5d6,this['normalizeLink']=_0x5573cf,this[_0x3da23d(0x4c51)]=_0x1099c3,this[_0x3da23d(0x2881)]=_0xc54b7,this[_0x3da23d(0x429b)]=_0xa6cec9({},_0x502007),this[_0x3da23d(0x20b6)]={},this['configure'](_0xa7f989),_0x2d0006&&this[_0x3da23d(0x1fa)](_0x2d0006);}_0x1c3966['prototype'][_0x37e46c(0x1fa)]=function(_0x300c8f){const _0x5a90d7=_0x37e46c;return _0xa6cec9(this[_0x5a90d7(0x20b6)],_0x300c8f),this;},_0x1c3966[_0x37e46c(0x3b3c)]['configure']=function(_0x18ee6f){const _0x22a4d5=_0x37e46c,_0x1a5fe6=this;if(_0x297e70(_0x18ee6f)){const _0x5d3a72=_0x18ee6f;if(!(_0x18ee6f=_0xda789a[_0x5d3a72]))throw new Error('Wrong\x20`markdown-it`\x20preset\x20\x22'+_0x5d3a72+_0x22a4d5(0xc78));}if(!_0x18ee6f)throw new Error('Wrong\x20`markdown-it`\x20preset,\x20can\x27t\x20be\x20empty');return 
_0x18ee6f[_0x22a4d5(0x20b6)]&&_0x1a5fe6[_0x22a4d5(0x1fa)](_0x18ee6f[_0x22a4d5(0x20b6)]),_0x18ee6f[_0x22a4d5(0x2a55)]&&Object[_0x22a4d5(0x1ea9)](_0x18ee6f[_0x22a4d5(0x2a55)])[_0x22a4d5(0xa21)](function(_0x29cd70){const _0x1931dc=_0x22a4d5;_0x18ee6f[_0x1931dc(0x2a55)][_0x29cd70]['rules']&&_0x1a5fe6[_0x29cd70][_0x1931dc(0x2eb9)][_0x1931dc(0x10e6)](_0x18ee6f[_0x1931dc(0x2a55)][_0x29cd70][_0x1931dc(0x39d9)]),_0x18ee6f['components'][_0x29cd70][_0x1931dc(0x167a)]&&_0x1a5fe6[_0x29cd70]['ruler2']['enableOnly'](_0x18ee6f[_0x1931dc(0x2a55)][_0x29cd70][_0x1931dc(0x167a)]);}),this;},_0x1c3966[_0x37e46c(0x3b3c)][_0x37e46c(0x4fa)]=function(_0x2c6123,_0x51710a){const _0x327fb9=_0x37e46c;let _0x1c5580=[];Array['isArray'](_0x2c6123)||(_0x2c6123=[_0x2c6123]),[_0x327fb9(0x1a49),'block',_0x327fb9(0x2988)]['forEach'](function(_0x265654){const _0x2bb0df=_0x327fb9;_0x1c5580=_0x1c5580['concat'](this[_0x265654][_0x2bb0df(0x2eb9)][_0x2bb0df(0x4fa)](_0x2c6123,!0x0));},this),_0x1c5580=_0x1c5580[_0x327fb9(0x1d1d)](this[_0x327fb9(0x2988)][_0x327fb9(0x38ba)][_0x327fb9(0x4fa)](_0x2c6123,!0x0));const _0x5f38fc=_0x2c6123['filter'](function(_0x47b5b1){return _0x1c5580['indexOf'](_0x47b5b1)<0x0;});if(_0x5f38fc[_0x327fb9(0x1b19)]&&!_0x51710a)throw new Error(_0x327fb9(0xecb)+_0x5f38fc);return this;},_0x1c3966[_0x37e46c(0x3b3c)]['disable']=function(_0x395afc,_0x12d24f){const _0x717806=_0x37e46c;let _0x3d655a=[];Array['isArray'](_0x395afc)||(_0x395afc=[_0x395afc]),[_0x717806(0x1a49),'block',_0x717806(0x2988)]['forEach'](function(_0x4e3c9b){const _0x26629c=_0x717806;_0x3d655a=_0x3d655a[_0x26629c(0x1d1d)](this[_0x4e3c9b][_0x26629c(0x2eb9)]['disable'](_0x395afc,!0x0));},this),_0x3d655a=_0x3d655a[_0x717806(0x1d1d)](this[_0x717806(0x2988)][_0x717806(0x38ba)][_0x717806(0x4124)](_0x395afc,!0x0));const _0x181e3c=_0x395afc['filter'](function(_0x31567c){const _0x36f3a3=_0x717806;return _0x3d655a[_0x36f3a3(0x8c9)](_0x31567c)<0x0;});if(_0x181e3c[_0x717806(0x1b19)]&&!_0x12d24f)throw new 
Error(_0x717806(0x374d)+_0x181e3c);return this;},_0x1c3966[_0x37e46c(0x3b3c)][_0x37e46c(0x84a)]=function(_0x286f9e){const _0x5cb285=_0x37e46c,_0x222015=[this]['concat'](Array[_0x5cb285(0x3b3c)][_0x5cb285(0x384c)][_0x5cb285(0x236b)](arguments,0x1));return _0x286f9e['apply'](_0x286f9e,_0x222015),this;},_0x1c3966[_0x37e46c(0x3b3c)][_0x37e46c(0x2956)]=function(_0x34c98b,_0x4d457b){const _0x47a3d1=_0x37e46c;if(_0x47a3d1(0x2431)!=typeof _0x34c98b)throw new Error(_0x47a3d1(0x455a));const _0x189a55=new this[(_0x47a3d1(0x1a49))]['State'](_0x34c98b,this,_0x4d457b);return this['core'][_0x47a3d1(0xf1b)](_0x189a55),_0x189a55[_0x47a3d1(0x1c34)];},_0x1c3966[_0x37e46c(0x3b3c)][_0x37e46c(0x17d9)]=function(_0x373d8f,_0x3c5d4a){const _0x3555d9=_0x37e46c;return _0x3c5d4a=_0x3c5d4a||{},this[_0x3555d9(0x29f1)][_0x3555d9(0x17d9)](this['parse'](_0x373d8f,_0x3c5d4a),this['options'],_0x3c5d4a);},_0x1c3966['prototype']['parseInline']=function(_0x214a31,_0x1236cd){const _0x3ec24e=_0x37e46c,_0x1789b2=new this[(_0x3ec24e(0x1a49))][(_0x3ec24e(0x207e))](_0x214a31,this,_0x1236cd);return _0x1789b2[_0x3ec24e(0x2937)]=!0x0,this[_0x3ec24e(0x1a49)][_0x3ec24e(0xf1b)](_0x1789b2),_0x1789b2[_0x3ec24e(0x1c34)];},_0x1c3966[_0x37e46c(0x3b3c)]['renderInline']=function(_0x1df3d0,_0x1ab548){const _0x22ec37=_0x37e46c;return _0x1ab548=_0x1ab548||{},this[_0x22ec37(0x29f1)]['render'](this['parseInline'](_0x1df3d0,_0x1ab548),this[_0x22ec37(0x20b6)],_0x1ab548);};const _0x28ba27=_0x1c3966;function _0x3cf108(_0x543ac2,_0x50b45c,_0x6807bb){const _0x328ffd=_0x37e46c,_0x4b3549=(_0x6807bb=_0x6807bb||{})[_0x328ffd(0x5121)]||':',_0x2ecb5d=_0x4b3549[_0x328ffd(0x4955)](0x0),_0xe827de=_0x4b3549[_0x328ffd(0x1b19)],_0x23e8b9=_0x6807bb[_0x328ffd(0x1509)]||function(_0xa45680){const _0x3f6a54=_0x328ffd;return _0xa45680[_0x3f6a54(0x1b23)]()['split']('\x20',0x2)[0x0]===_0x50b45c;},_0x4a1132=_0x6807bb['render']||function(_0x400f9f,_0x2ee615,_0x509ecf,_0x2a80d4,_0x361e5d){const _0x392f04=_0x328ffd;return 
0x1===_0x400f9f[_0x2ee615][_0x392f04(0x3d81)]&&_0x400f9f[_0x2ee615][_0x392f04(0x13e0)](_0x392f04(0x1390),_0x50b45c),_0x361e5d[_0x392f04(0x19dd)](_0x400f9f,_0x2ee615,_0x509ecf,_0x2a80d4,_0x361e5d);};_0x543ac2[_0x328ffd(0x1f2e)][_0x328ffd(0x2eb9)][_0x328ffd(0x5097)]('fence','container_'+_0x50b45c,function(_0xf29856,_0x34bf87,_0x3a8d94,_0x458a48){const _0xccc1a1=_0x328ffd;let _0x4b4a82,_0x339ddf=!0x1,_0x406568=_0xf29856[_0xccc1a1(0x1b61)][_0x34bf87]+_0xf29856['tShift'][_0x34bf87],_0xc833e7=_0xf29856[_0xccc1a1(0x2e82)][_0x34bf87];if(_0x2ecb5d!==_0xf29856[_0xccc1a1(0x3d6)][_0xccc1a1(0x4955)](_0x406568))return!0x1;for(_0x4b4a82=_0x406568+0x1;_0x4b4a82<=_0xc833e7&&_0x4b3549[(_0x4b4a82-_0x406568)%_0xe827de]===_0xf29856[_0xccc1a1(0x3d6)][_0x4b4a82];_0x4b4a82++);const _0xa8fd=Math[_0xccc1a1(0x2e2d)]((_0x4b4a82-_0x406568)/_0xe827de);if(_0xa8fd<0x3)return!0x1;_0x4b4a82-=(_0x4b4a82-_0x406568)%_0xe827de;const _0x46c34b=_0xf29856['src'][_0xccc1a1(0x384c)](_0x406568,_0x4b4a82),_0x30537d=_0xf29856[_0xccc1a1(0x3d6)][_0xccc1a1(0x384c)](_0x4b4a82,_0xc833e7);if(!_0x23e8b9(_0x30537d,_0x46c34b))return!0x1;if(_0x458a48)return!0x0;let _0x2bb7fc=_0x34bf87;for(;(_0x2bb7fc++,!(_0x2bb7fc>=_0x3a8d94))&&(_0x406568=_0xf29856[_0xccc1a1(0x1b61)][_0x2bb7fc]+_0xf29856[_0xccc1a1(0x1869)][_0x2bb7fc],_0xc833e7=_0xf29856['eMarks'][_0x2bb7fc],!(_0x406568<_0xc833e7&&_0xf29856[_0xccc1a1(0x4fc6)][_0x2bb7fc]<_0xf29856[_0xccc1a1(0x1de0)]));)if(_0x2ecb5d===_0xf29856[_0xccc1a1(0x3d6)][_0xccc1a1(0x4955)](_0x406568)&&!(_0xf29856[_0xccc1a1(0x4fc6)][_0x2bb7fc]-_0xf29856[_0xccc1a1(0x1de0)]>=0x4)){for(_0x4b4a82=_0x406568+0x1;_0x4b4a82<=_0xc833e7&&_0x4b3549[(_0x4b4a82-_0x406568)%_0xe827de]===_0xf29856[_0xccc1a1(0x3d6)][_0x4b4a82];_0x4b4a82++);if(!(Math[_0xccc1a1(0x2e2d)]((_0x4b4a82-_0x406568)/_0xe827de)<_0xa8fd||(_0x4b4a82-=(_0x4b4a82-_0x406568)%_0xe827de,_0x4b4a82=_0xf29856[_0xccc1a1(0x162a)](_0x4b4a82),_0x4b4a82<_0xc833e7))){_0x339ddf=!0x0;break;}}const 
_0x321333=_0xf29856['parentType'],_0x2ae1ac=_0xf29856[_0xccc1a1(0x2a61)];_0xf29856['parentType']=_0xccc1a1(0x2fa8),_0xf29856['lineMax']=_0x2bb7fc;const _0x162b51=_0xf29856[_0xccc1a1(0x1715)](_0xccc1a1(0x4314)+_0x50b45c+'_open',_0xccc1a1(0x4c88),0x1);_0x162b51[_0xccc1a1(0x2bd8)]=_0x46c34b,_0x162b51[_0xccc1a1(0x1f2e)]=!0x0,_0x162b51[_0xccc1a1(0x3a85)]=_0x30537d,_0x162b51[_0xccc1a1(0x4833)]=[_0x34bf87,_0x2bb7fc],_0xf29856['md'][_0xccc1a1(0x1f2e)][_0xccc1a1(0x3c0b)](_0xf29856,_0x34bf87+0x1,_0x2bb7fc);const _0x4eeca1=_0xf29856[_0xccc1a1(0x1715)](_0xccc1a1(0x4314)+_0x50b45c+'_close',_0xccc1a1(0x4c88),-0x1);return _0x4eeca1[_0xccc1a1(0x2bd8)]=_0xf29856[_0xccc1a1(0x3d6)][_0xccc1a1(0x384c)](_0x406568,_0x4b4a82),_0x4eeca1[_0xccc1a1(0x1f2e)]=!0x0,_0xf29856[_0xccc1a1(0x43fb)]=_0x321333,_0xf29856[_0xccc1a1(0x2a61)]=_0x2ae1ac,_0xf29856['line']=_0x2bb7fc+(_0x339ddf?0x1:0x0),!0x0;},{'alt':[_0x328ffd(0xc3d),_0x328ffd(0x3d0f),_0x328ffd(0x702),_0x328ffd(0x144e)]}),_0x543ac2[_0x328ffd(0x29f1)][_0x328ffd(0x39d9)][_0x328ffd(0x4314)+_0x50b45c+_0x328ffd(0xe09)]=_0x4a1132,_0x543ac2[_0x328ffd(0x29f1)][_0x328ffd(0x39d9)][_0x328ffd(0x4314)+_0x50b45c+_0x328ffd(0x1de8)]=_0x4a1132;}const _0x54bf00=_0x2d589b(0xc97);_0x2d589b(0x11c3);let _0x5bb300;const _0x9f2120=_0x28ba27({'html':!0x0,'highlight':function(_0x88320a,_0x1008a0){const _0x2cc61b=_0x37e46c;if(_0x1008a0&&_0x54bf00[_0x2cc61b(0xa48)](_0x1008a0))try{return _0x54bf00[_0x2cc61b(0x28e5)](_0x88320a,{'language':_0x1008a0})[_0x2cc61b(0x4fe9)];}catch(_0x4fb23e){}return _0x9f2120[_0x2cc61b(0x2881)]['escapeHtml'](_0x88320a);}});function _0x230e00(_0x56931f){const _0x53ff87=_0x37e46c;_0x56931f[_0x53ff87(0x492f)](_0x53ff87(0x1228))[_0x53ff87(0xa21)](_0x493e9c=>{const 
_0x3cb466=_0x53ff87;_0x493e9c['classList'][_0x3cb466(0x362c)](_0x3cb466(0x170c)),_0x493e9c[_0x3cb466(0x1a84)]['whiteSpace']=_0x3cb466(0x1496),_0x493e9c[_0x3cb466(0x1a84)][_0x3cb466(0x3a3d)]=_0x3cb466(0x3d21),_0x493e9c[_0x3cb466(0x1a84)][_0x3cb466(0x17dc)]=_0x3cb466(0x2d5e),_0x493e9c[_0x3cb466(0x1a84)][_0x3cb466(0xe81)]='royalblue',_0x493e9c[_0x3cb466(0x1a84)][_0x3cb466(0x5252)]=_0x3cb466(0x3a7),_0x493e9c[_0x3cb466(0x1a84)]['backgroundColor']='#f0f0f0',_0x493e9c[_0x3cb466(0x1a84)][_0x3cb466(0x25f1)]=_0x3cb466(0x2735);const _0x2270f3=document[_0x3cb466(0x3ac9)](_0x3cb466(0x18b7));_0x2270f3[_0x3cb466(0x3cdf)]=_0x3cb466(0x2fd1),_0x2270f3[_0x3cb466(0x1745)][_0x3cb466(0x362c)](_0x3cb466(0x28b9)),_0x2270f3[_0x3cb466(0x1a84)][_0x3cb466(0x25f1)]=_0x3cb466(0x5140),_0x2270f3['style']['top']=_0x3cb466(0x2b7d),_0x2270f3[_0x3cb466(0x1a84)][_0x3cb466(0x4d50)]=_0x3cb466(0x2b7d),_0x2270f3[_0x3cb466(0x1a84)][_0x3cb466(0x369a)]='none',_0x2270f3['style'][_0x3cb466(0x4378)]=_0x3cb466(0x2049),_0x2270f3[_0x3cb466(0x1a84)]['background']=_0x3cb466(0x6db),_0x2270f3['style'][_0x3cb466(0x824)]=_0x3cb466(0x43e4),_0x2270f3['style'][_0x3cb466(0xe81)]=_0x3cb466(0x2318),_0x493e9c[_0x3cb466(0x335a)](_0x2270f3),_0x2270f3[_0x3cb466(0xc61)](_0x3cb466(0x364d),()=>{const _0x271954=_0x3cb466,_0x5da55e=_0x2270f3[_0x271954(0x3cdf)];navigator['clipboard'][_0x271954(0xeb1)](_0x493e9c[_0x271954(0x2f80)]),_0x2270f3['innerHTML']=_0x271954(0x1cf0),setTimeout(()=>{_0x2270f3['innerHTML']=_0x5da55e;},0x7d0);});});}function _0x2eb380(_0x34aa20,_0x35e0ee,_0x5af3c7=0x0){const _0xbbbd3d=_0x37e46c,_0x5d3675=_0x35e0ee['querySelector'](_0xbbbd3d(0x24dd));let 
_0x3313e0='';_0x3313e0=0x0===_0x5af3c7?'\x0a\x20\x20\x20\x20\x0a\x20\x20\x20\x20'+_0x9f2120[_0xbbbd3d(0x17d9)](_0x34aa20)+'\x0a\x20\x20':'\x0a\x20\x20\x20\x20\x0a\x20\x20\x20\x20'+_0x9f2120[_0xbbbd3d(0x17d9)](_0x34aa20)+_0xbbbd3d(0x3dc7),_0x5d3675[_0xbbbd3d(0x3cdf)]=_0x3313e0,_0x230e00(_0x5d3675);}_0x9f2120['renderer'][_0x37e46c(0x39d9)][_0x37e46c(0x4006)]=(_0x15555e,_0xbc22d7,_0x424ca7,_0x1d2243,_0x5c49e4)=>{const _0x5bdd91=_0x37e46c,_0x405c3c=_0x15555e[_0xbc22d7][_0x5bdd91(0x484f)],_0x16421f=[..._0x405c3c['matchAll'](/\\(.*?)\\/g)];let _0x4e9252=_0x405c3c;return _0x16421f['forEach'](_0x36b05e=>{const _0x20ce71=_0x5bdd91,_0x2e3bb5=_0x36b05e[0x1],_0x2a4651=_0x20ce71(0x85e)+('https://en.wikipedia.org/wiki/'+encodeURIComponent(_0x2e3bb5))+'\x22\x20target=\x22_blank\x22\x20style=\x22color:\x20blue;\x20text-decoration:\x20underline;\x22>⭷'+_0x2e3bb5+_0x20ce71(0x4246);_0x4e9252=_0x4e9252[_0x20ce71(0x741)]('\x5c'+_0x2e3bb5+'\x5c',_0x2a4651);}),_0x4e9252;},_0x9f2120[_0x37e46c(0x29f1)]['rules']['heading_open']=(_0x425f8d,_0x249f35)=>{const _0xb190c7=_0x37e46c,_0x1eef07=_0x425f8d[_0x249f35]['tag'];return'<'+_0x1eef07+'\x20class=\x22'+{'h1':_0xb190c7(0x1a34),'h2':'text-2xl\x20font-bold\x20mt-2','h3':_0xb190c7(0xa87),'h4':_0xb190c7(0x2e78),'h5':_0xb190c7(0x45a2),'h6':'text-sm\x20font-medium\x20mt-2'}[_0x1eef07]+'\x22>';},_0x9f2120[_0x37e46c(0x29f1)][_0x37e46c(0x39d9)][_0x37e46c(0x3b78)]=(_0x34d1c4,_0x16f32c)=>'',_0x9f2120[_0x37e46c(0x29f1)][_0x37e46c(0x39d9)][_0x37e46c(0x2b39)]=()=>_0x37e46c(0x4bc7),_0x9f2120['renderer'][_0x37e46c(0x39d9)][_0x37e46c(0x1dfb)]=()=>'',_0x9f2120['renderer']['rules']['list_item_open']=()=>_0x37e46c(0x4317),_0x9f2120[_0x37e46c(0x29f1)]['rules'][_0x37e46c(0x11fb)]=()=>_0x37e46c(0x2a03),_0x9f2120[_0x37e46c(0x1f2e)][_0x37e46c(0x2eb9)]['before']('hr',_0x37e46c(0x3945),(_0x281c43,_0x5b5ac4,_0xf89239,_0xe03b82)=>{const 
_0x4c882b=_0x37e46c,_0x4a33b5=_0x281c43[_0x4c882b(0x1b61)][_0x5b5ac4]+_0x281c43[_0x4c882b(0x1869)][_0x5b5ac4],_0x196370=_0x281c43[_0x4c882b(0x2e82)][_0x5b5ac4];if(_0x4a33b5>=_0x196370)return!0x1;const _0xdc499f=_0x281c43['src'][_0x4c882b(0x384c)](_0x4a33b5,_0x196370)[_0x4c882b(0x1b23)]();if(!/^={4,}$/['test'](_0xdc499f))return!0x1;if(_0xe03b82)return!0x0;_0x281c43[_0x4c882b(0x3572)]=_0x5b5ac4+0x1;const _0x397f20=_0x281c43['push'](_0x4c882b(0x4bcf),'',0x0);return _0x397f20[_0x4c882b(0x484f)]=_0x4c882b(0x1b49),_0x397f20['block']=!0x0,_0x397f20['map']=[_0x5b5ac4,_0x281c43[_0x4c882b(0x3572)]],_0x397f20[_0x4c882b(0x2bd8)]=_0xdc499f,!0x0;}),_0x9f2120[_0x37e46c(0x84a)](_0x3cf108,_0x37e46c(0x372e),{'validate':function(_0x25b895){const _0x161ec1=_0x37e46c;return _0x25b895[_0x161ec1(0x1b23)]()[_0x161ec1(0x2d96)](/^spoiler\s+(.*)$/);},'render':function(_0x32f363,_0x748815){const _0x34f78e=_0x37e46c,_0x2b249f=_0x32f363[_0x748815][_0x34f78e(0x3a85)]['trim']()['match'](/^spoiler\s+(.*)$/);return 0x1===_0x32f363[_0x748815]['nesting']?_0x34f78e(0x300)+_0x9f2120[_0x34f78e(0x2881)]['escapeHtml'](_0x2b249f[0x1])+_0x34f78e(0x2179):_0x34f78e(0x2845);}})[_0x37e46c(0x84a)](_0x3cf108,_0x37e46c(0xcdd),{'render':function(_0x4e27e5,_0x4c8037){const _0x3a4239=_0x37e46c;return 0x1===_0x4e27e5[_0x4c8037][_0x3a4239(0x3d81)]?_0x3a4239(0x380b):'';}});const _0x2ed16=_0x37e46c(0x18fe);function _0x9ad941(){const _0x36736a=_0x37e46c,_0x121c8a=window[_0x36736a(0x350d)]();if(_0x121c8a[_0x36736a(0x1631)])return;let _0x555baa=0x1/0x0,_0x425f85=0x1/0x0,_0x7e2e0e=0x0;for(let _0x51a24c=0x0;_0x51a24c<_0x121c8a[_0x36736a(0x3e72)];_0x51a24c++){let _0x3e8aed=_0x121c8a[_0x36736a(0x87c)](_0x51a24c)[_0x36736a(0x314d)];_0x3e8aed['nodeType']===Node[_0x36736a(0x1e93)]&&(_0x3e8aed=_0x3e8aed[_0x36736a(0x26d8)]);const 
_0x42f91b=_0x3e8aed[_0x36736a(0x3b6a)]();_0x555baa=Math['min'](_0x555baa,_0x42f91b[_0x36736a(0x48eb)]),_0x425f85=Math['min'](_0x425f85,_0x42f91b[_0x36736a(0x279d)]+window[_0x36736a(0x25f3)]),_0x7e2e0e=Math[_0x36736a(0x4529)](_0x7e2e0e,_0x42f91b[_0x36736a(0x3335)]+window[_0x36736a(0x25f3)]);}let _0x130947=document[_0x36736a(0x2842)](_0x36736a(0x2748));_0x130947||(_0x130947=document[_0x36736a(0x3ac9)]('div'),_0x130947[_0x36736a(0x1745)]['add']('vertical-line-highlight'),_0x130947[_0x36736a(0x1a84)]['position']=_0x36736a(0x5140),_0x130947[_0x36736a(0x1a84)][_0x36736a(0x17d2)]=_0x36736a(0x2b87),_0x130947['style']['backgroundColor']=_0x36736a(0x117e),document['body'][_0x36736a(0x335a)](_0x130947)),_0x130947[_0x36736a(0x1a84)][_0x36736a(0x48eb)]=_0x555baa-0x5+'px',_0x130947['style'][_0x36736a(0x279d)]=_0x425f85+'px',_0x130947[_0x36736a(0x1a84)][_0x36736a(0x3cd6)]=_0x7e2e0e-_0x425f85+'px';}function _0x4a5120(_0x2aaaaa){const _0x1d256c=_0x37e46c,_0x174f2e=_0x2aaaaa[_0x1d256c(0xffd)]('text-selection-menu'),_0x48e19e=_0x2aaaaa[_0x1d256c(0xffd)](_0x1d256c(0x404c));_0x174f2e[_0x1d256c(0x1745)][_0x1d256c(0x2b31)](_0x1d256c(0x173d))?(_0x48e19e[_0x1d256c(0x494d)]=!0x0,_0x174f2e[_0x1d256c(0x1745)][_0x1d256c(0x42a1)](_0x1d256c(0x173d)),_0x174f2e[_0x1d256c(0x1745)]['add']('translate-x-0')):(_0x48e19e['checked']=!0x1,_0x174f2e[_0x1d256c(0x1745)][_0x1d256c(0x42a1)](_0x1d256c(0x3021)),_0x174f2e[_0x1d256c(0x1745)][_0x1d256c(0x362c)](_0x1d256c(0x173d)));}const _0x3c649d={'methods':{'one':{},'two':{},'three':{},'four':{}},'model':{'one':{},'two':{},'three':{}},'compute':{},'hooks':[]},_0x53c6d0={'compute':function(_0x14d765){const _0x2935fa=_0x37e46c,{world:_0x13f283}=this,_0x37fb8f=_0x13f283[_0x2935fa(0x23df)];return _0x2935fa(0x2431)==typeof _0x14d765&&_0x37fb8f[_0x2935fa(0x2427)](_0x14d765)?_0x37fb8f[_0x14d765](this):(_0x48b83b=>_0x2935fa(0xb2e)===Object[_0x2935fa(0x3b3c)]['toString'][_0x2935fa(0x236b)](_0x48b83b))(_0x14d765)?_0x14d765[_0x2935fa(0xa21)](_0x15a7ef=>{const 
_0xfe1747=_0x2935fa;_0x13f283[_0xfe1747(0x23df)]['hasOwnProperty'](_0x15a7ef)&&_0x37fb8f[_0x15a7ef](this);}):'function'==typeof _0x14d765&&_0x14d765(this),this;}},_0xcfb918=_0x53c6d0,_0x576a5f={'forEach':function(_0x18daf1){const _0x381528=_0x37e46c;return this[_0x381528(0x34ce)]['forEach']((_0x58c6cb,_0x6d9777)=>{const _0x417c1a=_0x381528;let _0x49ec17=this[_0x417c1a(0x38d6)]([_0x58c6cb]);_0x18daf1(_0x49ec17,_0x6d9777);}),this;},'map':function(_0x55171e,_0x12cdc4){const _0x5dc2d5=_0x37e46c;let _0x18ddcf=this[_0x5dc2d5(0x34ce)]['map']((_0xf512bf,_0xd8fe22)=>{const _0x847652=_0x5dc2d5;let _0x1eb6d1=this[_0x847652(0x38d6)]([_0xf512bf]),_0x17be68=_0x55171e(_0x1eb6d1,_0xd8fe22);return void 0x0===_0x17be68?this[_0x847652(0x28b)]():_0x17be68;});if(0x0===_0x18ddcf[_0x5dc2d5(0x1b19)])return _0x12cdc4||this['update']([]);if(void 0x0!==_0x18ddcf[0x0]){if(_0x5dc2d5(0x2431)==typeof _0x18ddcf[0x0])return _0x18ddcf;if(_0x5dc2d5(0x20c7)==typeof _0x18ddcf[0x0]&&(null===_0x18ddcf[0x0]||!_0x18ddcf[0x0][_0x5dc2d5(0x4d72)]))return _0x18ddcf;}let _0x3938db=[];return _0x18ddcf[_0x5dc2d5(0xa21)](_0x16084b=>{const _0x4e502c=_0x5dc2d5;_0x3938db=_0x3938db[_0x4e502c(0x1d1d)](_0x16084b[_0x4e502c(0x34ce)]);}),this[_0x5dc2d5(0x324a)](_0x3938db);},'filter':function(_0x14ba91){const _0x4778d4=_0x37e46c;let _0x308039=this[_0x4778d4(0x34ce)];return _0x308039=_0x308039[_0x4778d4(0x1465)]((_0x51236f,_0x93066c)=>{let _0x22d796=this['update']([_0x51236f]);return _0x14ba91(_0x22d796,_0x93066c);}),this['update'](_0x308039);},'find':function(_0x1a7fdb){const _0x798be3=_0x37e46c;let _0xb9a4f1=this[_0x798be3(0x34ce)][_0x798be3(0x5144)]((_0x20662e,_0x1559a9)=>{const _0x144a03=_0x798be3;let _0x21d62b=this[_0x144a03(0x38d6)]([_0x20662e]);return _0x1a7fdb(_0x21d62b,_0x1559a9);});return this[_0x798be3(0x38d6)]([_0xb9a4f1]);},'some':function(_0x2d2813){const _0x4c90bd=_0x37e46c;return this[_0x4c90bd(0x34ce)]['some']((_0x73da7e,_0x5a264b)=>{const _0x48d5c1=_0x4c90bd;let 
_0x53766f=this[_0x48d5c1(0x38d6)]([_0x73da7e]);return _0x2d2813(_0x53766f,_0x5a264b);});},'random':function(_0x1e4575=0x1){const _0x478866=_0x37e46c;let _0x58bf55=this[_0x478866(0x34ce)],_0x3218fc=Math[_0x478866(0x2e2d)](Math[_0x478866(0xe98)]()*_0x58bf55['length']);return _0x3218fc+_0x1e4575>this[_0x478866(0x1b19)]&&(_0x3218fc=this[_0x478866(0x1b19)]-_0x1e4575,_0x3218fc=_0x3218fc<0x0?0x0:_0x3218fc),_0x58bf55=_0x58bf55[_0x478866(0x384c)](_0x3218fc,_0x3218fc+_0x1e4575),this[_0x478866(0x38d6)](_0x58bf55);}},_0x2916a5={'termList':function(){const _0x22aba1=_0x37e46c;return this[_0x22aba1(0x1578)]['one']['termList'](this[_0x22aba1(0x204b)]);},'terms':function(_0x23613a){const _0x5d3551=_0x37e46c;let _0x14d28f=this[_0x5d3551(0x2d96)]('.');return _0x5d3551(0x4a80)==typeof _0x23613a?_0x14d28f['eq'](_0x23613a):_0x14d28f;},'groups':function(_0x331167){const _0x4a7b6e=_0x37e46c;if(_0x331167||0x0===_0x331167)return this['update'](this[_0x4a7b6e(0x338c)][_0x331167]||[]);let _0x287979={};return Object['keys'](this[_0x4a7b6e(0x338c)])[_0x4a7b6e(0xa21)](_0x40f326=>{_0x287979[_0x40f326]=this['update'](this['_groups'][_0x40f326]);}),_0x287979;},'eq':function(_0x238a3e){const _0x3fd071=_0x37e46c;let _0x3297d4=this['pointer'];return _0x3297d4||(_0x3297d4=this[_0x3fd071(0x204b)][_0x3fd071(0x4833)]((_0x2ce72b,_0x4a87ad)=>[_0x4a87ad])),_0x3297d4[_0x238a3e]?this[_0x3fd071(0x38d6)]([_0x3297d4[_0x238a3e]]):this[_0x3fd071(0x28b)]();},'first':function(){return this['eq'](0x0);},'last':function(){const _0x55c9e8=_0x37e46c;let _0x124f6c=this['fullPointer'][_0x55c9e8(0x1b19)]-0x1;return this['eq'](_0x124f6c);},'firstTerms':function(){return this['match']('^.');},'lastTerms':function(){const _0x446ba8=_0x37e46c;return this[_0x446ba8(0x2d96)]('.$');},'slice':function(_0x3c5436,_0x538799){const _0x50574d=_0x37e46c;let _0x412886=this[_0x50574d(0x43e4)]||this[_0x50574d(0x204b)][_0x50574d(0x4833)]((_0x5d44bb,_0x3c2d81)=>[_0x3c2d81]);return 
_0x412886=_0x412886[_0x50574d(0x384c)](_0x3c5436,_0x538799),this[_0x50574d(0x38d6)](_0x412886);},'all':function(){const _0x2f5c6f=_0x37e46c;return this[_0x2f5c6f(0x38d6)]()['toView']();},'fullSentences':function(){const _0x16fcf5=_0x37e46c;let _0x223c7b=this['fullPointer'][_0x16fcf5(0x4833)](_0x1d9868=>[_0x1d9868[0x0]]);return this[_0x16fcf5(0x38d6)](_0x223c7b)[_0x16fcf5(0x324a)]();},'none':function(){return this['update']([]);},'isDoc':function(_0x2ae215){const _0x1bf022=_0x37e46c;if(!_0x2ae215||!_0x2ae215[_0x1bf022(0x4d72)])return!0x1;let _0x5b2381=this['fullPointer'],_0x448f58=_0x2ae215[_0x1bf022(0x34ce)];return!_0x5b2381[_0x1bf022(0x1b19)]!==_0x448f58['length']&&_0x5b2381[_0x1bf022(0x12d8)]((_0x44957c,_0x2dd66b)=>!!_0x448f58[_0x2dd66b]&&(_0x44957c[0x0]===_0x448f58[_0x2dd66b][0x0]&&_0x44957c[0x1]===_0x448f58[_0x2dd66b][0x1]&&_0x44957c[0x2]===_0x448f58[_0x2dd66b][0x2]));},'wordCount':function(){const _0xd40e41=_0x37e46c;return this[_0xd40e41(0x204b)][_0xd40e41(0x24d8)]((_0x4c0396,_0x4d3a60)=>(_0x4c0396+=_0x4d3a60[_0xd40e41(0x1465)](_0x2e5e90=>''!==_0x2e5e90['text'])[_0xd40e41(0x1b19)],_0x4c0396),0x0);},'isFull':function(){const _0xae96d5=_0x37e46c;let _0x4b87be=this[_0xae96d5(0x43e4)];if(!_0x4b87be)return!0x0;if(0x0===_0x4b87be[_0xae96d5(0x1b19)]||0x0!==_0x4b87be[0x0][0x0])return!0x1;let _0x4ef805=0x0,_0x521706=0x0;return this[_0xae96d5(0x295)][_0xae96d5(0xa21)](_0xa2063d=>_0x4ef805+=_0xa2063d[_0xae96d5(0x1b19)]),this[_0xae96d5(0x204b)][_0xae96d5(0xa21)](_0x7774b2=>_0x521706+=_0x7774b2[_0xae96d5(0x1b19)]),_0x4ef805===_0x521706;},'getNth':function(_0x33a8b3){const _0x146021=_0x37e46c;return _0x146021(0x4a80)==typeof _0x33a8b3?this['eq'](_0x33a8b3):_0x146021(0x2431)==typeof 
_0x33a8b3?this['if'](_0x33a8b3):this;}};_0x2916a5[_0x37e46c(0x4e5b)]=_0x2916a5['groups'],_0x2916a5['fullSentence']=_0x2916a5[_0x37e46c(0x31d6)],_0x2916a5['sentence']=_0x2916a5['fullSentences'],_0x2916a5['lastTerm']=_0x2916a5[_0x37e46c(0x4267)],_0x2916a5['firstTerm']=_0x2916a5[_0x37e46c(0x46e6)];const _0x2340bb=_0x2916a5,_0x168e92=Object[_0x37e46c(0x4e14)]({},_0x2340bb,_0xcfb918,_0x576a5f);_0x168e92[_0x37e46c(0xf9e)]=_0x168e92['eq'];const _0x41928a=_0x168e92;class _0x43c32b{constructor(_0x566400,_0x44b9b3,_0x477d4d={}){const _0x4d1460=_0x37e46c;[[_0x4d1460(0x295),_0x566400],[_0x4d1460(0x4657),_0x3c649d],[_0x4d1460(0x338c),_0x477d4d],[_0x4d1460(0x1aa7),null],[_0x4d1460(0x106d),_0x4d1460(0x24bd)]]['forEach'](_0x5da380=>{const _0x1f9312=_0x4d1460;Object[_0x1f9312(0x6f7)](this,_0x5da380[0x0],{'value':_0x5da380[0x1],'writable':!0x0});}),this['ptrs']=_0x44b9b3;}get[_0x37e46c(0x204b)](){const _0x3363fa=_0x37e46c;let _0x353d71=this[_0x3363fa(0x295)];return this[_0x3363fa(0x232)]&&(_0x353d71=_0x3c649d[_0x3363fa(0x1578)][_0x3363fa(0x1d8a)][_0x3363fa(0xcbe)](this[_0x3363fa(0x232)],this[_0x3363fa(0x295)])),_0x353d71;}get[_0x37e46c(0x43e4)](){const _0x1dea57=_0x37e46c;return this[_0x1dea57(0x232)];}get[_0x37e46c(0x1578)](){const _0x361057=_0x37e46c;return this['world'][_0x361057(0x1578)];}get[_0x37e46c(0x1556)](){const _0xef466a=_0x37e46c;return this['world'][_0xef466a(0x1556)];}get[_0x37e46c(0x1889)](){const _0x5ee3d3=_0x37e46c;return this[_0x5ee3d3(0x4657)][_0x5ee3d3(0x1889)];}get[_0x37e46c(0x4d72)](){return!0x0;}get['found'](){const _0x483b41=_0x37e46c;return this[_0x483b41(0x204b)][_0x483b41(0x1b19)]>0x0;}get['length'](){const _0x166424=_0x37e46c;return this[_0x166424(0x204b)][_0x166424(0x1b19)];}get[_0x37e46c(0x34ce)](){let {docs:_0x48ce1d,ptrs:_0x37c5a8,document:_0xec5f46}=this,_0x22e1f4=_0x37c5a8||_0x48ce1d['map']((_0x2d5e2a,_0x5f4552)=>[_0x5f4552]);return _0x22e1f4['map'](_0x20592f=>{let [_0x1c1369,_0x558438,_0x29b2ba,_0x1aa515,_0x9b69d1]=_0x20592f;return 
_0x558438=_0x558438||0x0,_0x29b2ba=_0x29b2ba||(_0xec5f46[_0x1c1369]||[])['length'],_0xec5f46[_0x1c1369]&&_0xec5f46[_0x1c1369][_0x558438]&&(_0x1aa515=_0x1aa515||_0xec5f46[_0x1c1369][_0x558438]['id'],_0xec5f46[_0x1c1369][_0x29b2ba-0x1]&&(_0x9b69d1=_0x9b69d1||_0xec5f46[_0x1c1369][_0x29b2ba-0x1]['id'])),[_0x1c1369,_0x558438,_0x29b2ba,_0x1aa515,_0x9b69d1];});}[_0x37e46c(0x38d6)](_0x3e2a48){const _0x24cc03=_0x37e46c;let _0x267914=new _0x43c32b(this[_0x24cc03(0x295)],_0x3e2a48);if(this[_0x24cc03(0x1aa7)]&&_0x3e2a48&&_0x3e2a48[_0x24cc03(0x1b19)]>0x0){let _0x44c8ce=[];_0x3e2a48[_0x24cc03(0xa21)]((_0x213dc9,_0x1b562f)=>{const _0x4d2092=_0x24cc03;let [_0x14181f,_0x41eb2e,_0x149ca1]=_0x213dc9;(0x1===_0x213dc9[_0x4d2092(0x1b19)]||0x0===_0x41eb2e&&this[_0x4d2092(0x295)][_0x14181f]['length']===_0x149ca1)&&(_0x44c8ce[_0x1b562f]=this['_cache'][_0x14181f]);}),_0x44c8ce[_0x24cc03(0x1b19)]>0x0&&(_0x267914[_0x24cc03(0x1aa7)]=_0x44c8ce);}return _0x267914['world']=this[_0x24cc03(0x4657)],_0x267914;}[_0x37e46c(0x324a)](_0x242e1d){const _0x595a34=_0x37e46c;return new _0x43c32b(this[_0x595a34(0x295)],_0x242e1d||this[_0x595a34(0x43e4)]);}[_0x37e46c(0x4f8a)](_0x39d715){const _0x1c72b2=_0x37e46c,{methods:_0x1d9d3b}=this;let _0x41e09c=_0x1d9d3b['one']['tokenize'][_0x1c72b2(0x10ce)](_0x39d715,this[_0x1c72b2(0x4657)]),_0x3de1f2=new _0x43c32b(_0x41e09c);return _0x3de1f2['world']=this[_0x1c72b2(0x4657)],_0x3de1f2[_0x1c72b2(0x23df)]([_0x1c72b2(0x47d),_0x1c72b2(0x209c),_0x1c72b2(0x2c34)]),this[_0x1c72b2(0x4657)][_0x1c72b2(0x23df)][_0x1c72b2(0x2005)]&&_0x3de1f2[_0x1c72b2(0x23df)](_0x1c72b2(0x2005)),_0x3de1f2['compute'](_0x1c72b2(0x52a)),_0x3de1f2;}[_0x37e46c(0x150c)](){const _0x1882a1=_0x37e46c;let _0x3afc6e=this['document'][_0x1882a1(0x384c)](0x0);_0x3afc6e=_0x3afc6e[_0x1882a1(0x4833)](_0x5a6f1b=>_0x5a6f1b[_0x1882a1(0x4833)](_0x1ce99d=>((_0x1ce99d=Object['assign']({},_0x1ce99d))[_0x1882a1(0x521a)]=new Set(_0x1ce99d[_0x1882a1(0x521a)]),_0x1ce99d)));let 
_0xc3c733=this['update'](this[_0x1882a1(0x43e4)]);return _0xc3c733[_0x1882a1(0x295)]=_0x3afc6e,_0xc3c733['_cache']=this[_0x1882a1(0x1aa7)],_0xc3c733;}}Object[_0x37e46c(0x4e14)](_0x43c32b['prototype'],_0x41928a);const _0xc3d599=_0x43c32b,_0x54172b=function(_0x5b994c){const _0x4aba6a=_0x37e46c;return _0x5b994c&&'object'==typeof _0x5b994c&&!Array[_0x4aba6a(0x22b4)](_0x5b994c);};function _0x3f53cd(_0x14d0cb,_0x42d084){const _0x31da49=_0x37e46c;if(_0x54172b(_0x42d084)){for(const _0x29af76 in _0x42d084)_0x54172b(_0x42d084[_0x29af76])?(_0x14d0cb[_0x29af76]||Object[_0x31da49(0x4e14)](_0x14d0cb,{[_0x29af76]:{}}),_0x3f53cd(_0x14d0cb[_0x29af76],_0x42d084[_0x29af76])):Object[_0x31da49(0x4e14)](_0x14d0cb,{[_0x29af76]:_0x42d084[_0x29af76]});}return _0x14d0cb;}const _0x388b66=function(_0x1de54e,_0x46f77c,_0x248ab0,_0x79cba1){const _0x2022e=_0x37e46c,{methods:_0x6dde31,model:_0x2caaad,compute:_0x27691b,hooks:_0x5600b8}=_0x46f77c;_0x1de54e[_0x2022e(0x1578)]&&function(_0xcdf174,_0x511d3a){const _0x3be82e=_0x2022e;for(const _0x4063c4 in _0x511d3a)_0xcdf174[_0x4063c4]=_0xcdf174[_0x4063c4]||{},Object[_0x3be82e(0x4e14)](_0xcdf174[_0x4063c4],_0x511d3a[_0x4063c4]);}(_0x6dde31,_0x1de54e[_0x2022e(0x1578)]),_0x1de54e[_0x2022e(0x1556)]&&_0x3f53cd(_0x2caaad,_0x1de54e[_0x2022e(0x1556)]),_0x1de54e['irregulars']&&function(_0x2868f8,_0x311b9b){const _0x3a67b7=_0x2022e;let _0x4587d=_0x2868f8[_0x3a67b7(0x21c9)][_0x3a67b7(0xdec)]||{};Object['keys'](_0x311b9b)[_0x3a67b7(0xa21)](_0x1dc1a0=>{const 
_0x4cfb81=_0x3a67b7;_0x311b9b[_0x1dc1a0][_0x4cfb81(0x203e)]&&(_0x4587d[_0x4cfb81(0x318d)]&&(_0x4587d[_0x4cfb81(0x318d)]['ex'][_0x1dc1a0]=_0x311b9b[_0x1dc1a0][_0x4cfb81(0x203e)]),_0x4587d['fromPast']&&(_0x4587d[_0x4cfb81(0x1d92)]['ex'][_0x311b9b[_0x1dc1a0]['pastTense']]=_0x1dc1a0)),_0x311b9b[_0x1dc1a0][_0x4cfb81(0x4c8b)]&&(_0x4587d['toPresent']&&(_0x4587d[_0x4cfb81(0x45ee)]['ex'][_0x1dc1a0]=_0x311b9b[_0x1dc1a0][_0x4cfb81(0x4c8b)]),_0x4587d[_0x4cfb81(0x12ad)]&&(_0x4587d[_0x4cfb81(0x12ad)]['ex'][_0x311b9b[_0x1dc1a0][_0x4cfb81(0x4c8b)]]=_0x1dc1a0)),_0x311b9b[_0x1dc1a0]['gerund']&&(_0x4587d[_0x4cfb81(0x256b)]&&(_0x4587d[_0x4cfb81(0x256b)]['ex'][_0x1dc1a0]=_0x311b9b[_0x1dc1a0][_0x4cfb81(0x3fbc)]),_0x4587d[_0x4cfb81(0x260f)]&&(_0x4587d[_0x4cfb81(0x260f)]['ex'][_0x311b9b[_0x1dc1a0]['gerund']]=_0x1dc1a0)),_0x311b9b[_0x1dc1a0][_0x4cfb81(0x47d6)]&&(_0x4587d['toComparative']&&(_0x4587d[_0x4cfb81(0x2cd)]['ex'][_0x1dc1a0]=_0x311b9b[_0x1dc1a0][_0x4cfb81(0x47d6)]),_0x4587d[_0x4cfb81(0x22b6)]&&(_0x4587d[_0x4cfb81(0x22b6)]['ex'][_0x311b9b[_0x1dc1a0][_0x4cfb81(0x47d6)]]=_0x1dc1a0)),_0x311b9b[_0x1dc1a0][_0x4cfb81(0x3378)]&&(_0x4587d[_0x4cfb81(0x4c3a)]&&(_0x4587d[_0x4cfb81(0x4c3a)]['ex'][_0x1dc1a0]=_0x311b9b[_0x1dc1a0][_0x4cfb81(0x3378)]),_0x4587d[_0x4cfb81(0xbcc)]&&(_0x4587d[_0x4cfb81(0xbcc)]['ex'][_0x311b9b[_0x1dc1a0][_0x4cfb81(0x3378)]]=_0x1dc1a0));});}(_0x2caaad,_0x1de54e['irregulars']),_0x1de54e['compute']&&Object[_0x2022e(0x4e14)](_0x27691b,_0x1de54e['compute']),_0x5600b8&&(_0x46f77c['hooks']=_0x5600b8[_0x2022e(0x1d1d)](_0x1de54e[_0x2022e(0x1889)]||[])),_0x1de54e[_0x2022e(0x31bb)]&&_0x1de54e[_0x2022e(0x31bb)](_0x248ab0),_0x1de54e[_0x2022e(0x3465)]&&Object[_0x2022e(0x1ea9)](_0x1de54e[_0x2022e(0x3465)])[_0x2022e(0xa21)](_0x152649=>_0x79cba1[_0x152649]=_0x1de54e['lib'][_0x152649]),_0x1de54e[_0x2022e(0x521a)]&&_0x79cba1[_0x2022e(0x3954)](_0x1de54e[_0x2022e(0x521a)]),_0x1de54e[_0x2022e(0x19f4)]&&_0x79cba1[_0x2022e(0x4e6b)](_0x1de54e[_0x2022e(0x19f4)]),_0x1de54e[_0x2022e(0xf75)]&&_0x79c
ba1[_0x2022e(0x4e6b)](_0x1de54e[_0x2022e(0xf75)],!0x0),_0x1de54e['mutate']&&_0x1de54e[_0x2022e(0x3c67)](_0x46f77c);},_0x1a1096=function(_0x488f3d){const _0x56b342=_0x37e46c;return'[object\x20Array]'===Object['prototype'][_0x56b342(0x8e8)][_0x56b342(0x236b)](_0x488f3d);},_0xb70d67=function(_0x3e3b59,_0xc26f95,_0x1aa956){const _0x3a81ab=_0x37e46c,{methods:_0x451fbb}=_0x1aa956;let _0x4d3101=new _0xc26f95([]);if(_0x4d3101[_0x3a81ab(0x4657)]=_0x1aa956,_0x3a81ab(0x4a80)==typeof _0x3e3b59&&(_0x3e3b59=String(_0x3e3b59)),!_0x3e3b59)return _0x4d3101;if('string'==typeof _0x3e3b59)return new _0xc26f95(_0x451fbb[_0x3a81ab(0x1d8a)]['tokenize'][_0x3a81ab(0x10ce)](_0x3e3b59,_0x1aa956));if(_0x21d62f=_0x3e3b59,_0x3a81ab(0x4d86)===Object[_0x3a81ab(0x3b3c)]['toString'][_0x3a81ab(0x236b)](_0x21d62f)&&_0x3e3b59[_0x3a81ab(0x4d72)])return new _0xc26f95(_0x3e3b59[_0x3a81ab(0x295)],_0x3e3b59[_0x3a81ab(0x232)]);var _0x21d62f;if(_0x1a1096(_0x3e3b59)){if(_0x1a1096(_0x3e3b59[0x0])){let _0x455cc9=_0x3e3b59[_0x3a81ab(0x4833)](_0x4fa266=>_0x4fa266[_0x3a81ab(0x4833)](_0x25c7f4=>({'text':_0x25c7f4,'normal':_0x25c7f4,'pre':'','post':'\x20','tags':new Set()})));return new _0xc26f95(_0x455cc9);}let _0x4ec0c9=function(_0x4eeb26){const _0x30f6d9=_0x3a81ab;return _0x4eeb26[_0x30f6d9(0x4833)](_0x5dde06=>_0x5dde06[_0x30f6d9(0x4a03)][_0x30f6d9(0x4833)](_0x5e331e=>(_0x1a1096(_0x5e331e['tags'])&&(_0x5e331e[_0x30f6d9(0x521a)]=new Set(_0x5e331e['tags'])),_0x5e331e)));}(_0x3e3b59);return new _0xc26f95(_0x4ec0c9);}return _0x4d3101;};let _0x529cff=Object[_0x37e46c(0x4e14)]({},_0x3c649d);const _0x4f55eb=function(_0x184e00,_0x14ca49){const _0x7c87a6=_0x37e46c;_0x14ca49&&_0x4f55eb[_0x7c87a6(0x4e6b)](_0x14ca49);let _0x109654=_0xb70d67(_0x184e00,_0xc3d599,_0x529cff);return _0x184e00&&_0x109654[_0x7c87a6(0x23df)](_0x529cff[_0x7c87a6(0x1889)]),_0x109654;};Object[_0x37e46c(0x6f7)](_0x4f55eb,_0x37e46c(0x248e),{'value':_0x529cff,'writable':!0x0}),_0x4f55eb[_0x37e46c(0x3c0b)]=function(_0xc706b8,_0x16e0ee){const 
_0x1c2bff=_0x37e46c,{compute:_0x31a4e6}=this['_world'];_0x16e0ee&&_0x4f55eb[_0x1c2bff(0x4e6b)](_0x16e0ee);let _0x3c21ac=_0xb70d67(_0xc706b8,_0xc3d599,_0x529cff);return _0x31a4e6[_0x1c2bff(0x8cf)]&&_0x3c21ac[_0x1c2bff(0x23df)]([_0x1c2bff(0xa94),_0x1c2bff(0x47d),_0x1c2bff(0x192e),_0x1c2bff(0x8cf)]),_0x3c21ac;},_0x4f55eb[_0x37e46c(0xe15)]=function(_0x26e696){return _0x388b66(_0x26e696,this['_world'],_0xc3d599,this),this;},_0x4f55eb[_0x37e46c(0x38b6)]=_0x4f55eb['plugin'],_0x4f55eb[_0x37e46c(0x4657)]=function(){return this['_world'];},_0x4f55eb[_0x37e46c(0x1556)]=function(){const _0x54b2f2=_0x37e46c;return this[_0x54b2f2(0x248e)][_0x54b2f2(0x1556)];},_0x4f55eb[_0x37e46c(0x1578)]=function(){const _0x209b60=_0x37e46c;return this['_world'][_0x209b60(0x1578)];},_0x4f55eb['hooks']=function(){const _0x11c389=_0x37e46c;return this[_0x11c389(0x248e)]['hooks'];},_0x4f55eb[_0x37e46c(0x44a0)]=function(_0x541339){const _0x51b48d=_0x37e46c,_0x321dd6=_0x51b48d(0x1daa)!=typeof process&&process[_0x51b48d(0xe1a)]?process[_0x51b48d(0xe1a)]:self[_0x51b48d(0xe1a)]||{};return _0x321dd6[_0x51b48d(0x3a92)]='tagger'===_0x541339||!0x0===_0x541339||'',_0x321dd6['DEBUG_MATCH']=_0x51b48d(0x2d96)===_0x541339||!0x0===_0x541339||'',_0x321dd6['DEBUG_CHUNKS']=_0x51b48d(0x2d92)===_0x541339||!0x0===_0x541339||'',this;},_0x4f55eb[_0x37e46c(0x459c)]=_0x37e46c(0x4e9f);const _0x5a48a1=_0x4f55eb,_0x27572b=function(_0x3b9b84){const _0x4d3110=_0x37e46c;let _0x5e7487=_0x3b9b84[_0x4d3110(0x4833)](_0x2665a0=>{let _0x5e7180=new Set();return _0x2665a0['forEach'](_0x3a149d=>{const 
_0x3f2c00=a0_0x11e7;''!==_0x3a149d[_0x3f2c00(0x47d)]&&_0x5e7180[_0x3f2c00(0x362c)](_0x3a149d[_0x3f2c00(0x47d)]),_0x3a149d[_0x3f2c00(0x857)]&&_0x5e7180[_0x3f2c00(0x362c)]('%'+_0x3a149d[_0x3f2c00(0x857)]+'%'),_0x3a149d[_0x3f2c00(0x4570)]&&_0x5e7180[_0x3f2c00(0x362c)](_0x3a149d[_0x3f2c00(0x4570)]),_0x3a149d[_0x3f2c00(0x192e)]&&_0x5e7180['add'](_0x3a149d['machine']),_0x3a149d[_0x3f2c00(0x507b)]&&_0x5e7180[_0x3f2c00(0x362c)](_0x3a149d[_0x3f2c00(0x507b)]),_0x3a149d['alias']&&_0x3a149d['alias']['forEach'](_0x48228f=>_0x5e7180['add'](_0x48228f));let _0x53f187=Array[_0x3f2c00(0x27e6)](_0x3a149d['tags']);for(let _0xc7097e=0x0;_0xc7097e<_0x53f187['length'];_0xc7097e+=0x1)_0x5e7180[_0x3f2c00(0x362c)]('#'+_0x53f187[_0xc7097e]);}),_0x5e7180;});return _0x5e7487;},_0x199cfc={'cache':function(){const _0x46c0c6=_0x37e46c;return this[_0x46c0c6(0x1aa7)]=this[_0x46c0c6(0x1578)][_0x46c0c6(0x1d8a)][_0x46c0c6(0x4f66)](this['document']),this;},'uncache':function(){const _0x5b63d2=_0x37e46c;return this[_0x5b63d2(0x1aa7)]=null,this;}},_0x1f6628=function(_0x2011f9){const _0x2b0391=_0x37e46c;Object[_0x2b0391(0x4e14)](_0x2011f9[_0x2b0391(0x3b3c)],_0x199cfc);},_0x8edf44={'api':_0x1f6628,'compute':{'cache':function(_0x4a6c48){const _0x307f7e=_0x37e46c;_0x4a6c48[_0x307f7e(0x1aa7)]=_0x4a6c48[_0x307f7e(0x1578)][_0x307f7e(0x1d8a)][_0x307f7e(0x4f66)](_0x4a6c48[_0x307f7e(0x295)]);}},'methods':{'one':{'cacheDoc':_0x27572b}}},_0x1f6d4b=_0x3ff5a3=>/^\p{Lu}[\p{Ll}'’]/u[_0x37e46c(0x1769)](_0x3ff5a3)||/^\p{Lu}$/u['test'](_0x3ff5a3),_0xe5cde3=(_0x19e64b,_0x25c758,_0x33b116)=>{const _0xf615d0=_0x37e46c;if(_0x33b116[_0xf615d0(0xa21)](_0x5a1658=>_0x5a1658[_0xf615d0(0x1701)]=!0x0),_0x19e64b){let _0x37d562=[_0x25c758,0x0]['concat'](_0x33b116);Array['prototype'][_0xf615d0(0x4986)][_0xf615d0(0x4c31)](_0x19e64b,_0x37d562);}return _0x19e64b;},_0x5a4ecd=function(_0x5f0222){const _0x19796c=_0x37e46c;let _0x2283f0=_0x5f0222[_0x5f0222['length']-0x1];!_0x2283f0||/ 
$/[_0x19796c(0x1769)](_0x2283f0[_0x19796c(0x24ce)])||/[-–—]/[_0x19796c(0x1769)](_0x2283f0[_0x19796c(0x24ce)])||(_0x2283f0[_0x19796c(0x24ce)]+='\x20');},_0x4dcf5f=(_0x210a20,_0xd0d11b,_0x267ac9)=>{const _0xf3ed7a=_0x37e46c,_0x261204=/[-.?!,;:)–—'"]/g;let _0x49e2ce=_0x210a20[_0xd0d11b-0x1];if(!_0x49e2ce)return;let _0x32722d=_0x49e2ce[_0xf3ed7a(0x24ce)];if(_0x261204[_0xf3ed7a(0x1769)](_0x32722d)){let _0x44cf8d=_0x32722d[_0xf3ed7a(0x2d96)](_0x261204)[_0xf3ed7a(0x3541)](''),_0x39617c=_0x267ac9[_0x267ac9[_0xf3ed7a(0x1b19)]-0x1];_0x39617c['post']=_0x44cf8d+_0x39617c['post'],_0x49e2ce[_0xf3ed7a(0x24ce)]=_0x49e2ce[_0xf3ed7a(0x24ce)]['replace'](_0x261204,'');}},_0x26258d=function(_0x2ffb41,_0x485b32,_0x541036,_0x508c5a){let [_0x583e68,_0x3e30a3,_0x282895]=_0x485b32;0x0===_0x3e30a3||_0x282895===_0x508c5a[_0x583e68]['length']?_0x5a4ecd(_0x541036):(_0x5a4ecd(_0x541036),_0x5a4ecd([_0x2ffb41[_0x485b32[0x1]]])),function(_0x48d86f,_0x4ca6c1,_0x2f4602){const _0x12c6aa=a0_0x11e7;let _0x1d85d7=_0x48d86f[_0x4ca6c1];if(0x0!==_0x4ca6c1||!_0x1f6d4b(_0x1d85d7[_0x12c6aa(0x4006)]))return;_0x2f4602[0x0]['text']=_0x2f4602[0x0][_0x12c6aa(0x4006)]['replace'](/^\p{Ll}/u,_0x3ce0b8=>_0x3ce0b8['toUpperCase']());let _0x26d312=_0x48d86f[_0x4ca6c1];_0x26d312['tags'][_0x12c6aa(0x3170)](_0x12c6aa(0xb7e))||_0x26d312[_0x12c6aa(0x521a)][_0x12c6aa(0x3170)]('Acronym')||_0x1f6d4b(_0x26d312['text'])&&_0x26d312['text'][_0x12c6aa(0x1b19)]>0x1&&(_0x26d312[_0x12c6aa(0x4006)]=(_0x7da0cd=_0x26d312[_0x12c6aa(0x4006)],_0x7da0cd[_0x12c6aa(0x741)](/^\p{Lu}/u,_0x46d52f=>_0x46d52f[_0x12c6aa(0x6e8)]())));var _0x7da0cd;}(_0x2ffb41,_0x3e30a3,_0x541036),_0xe5cde3(_0x2ffb41,_0x3e30a3,_0x541036);};let _0x440663=0x0;const _0x32d420=_0x548bba=>(_0x548bba=_0x548bba[_0x37e46c(0x1b19)]<0x3?'0'+_0x548bba:_0x548bba)[_0x37e46c(0x1b19)]<0x3?'0'+_0x548bba:_0x548bba,_0x2036be=function(_0x282bec){const _0x4182d8=_0x37e46c;let 
[_0x478cb7,_0x3c4125]=_0x282bec[_0x4182d8(0x3bb5)]||[0x0,0x0];_0x440663+=0x1,_0x440663=_0x440663>0xb63f?0x0:_0x440663,_0x478cb7=_0x478cb7>0xb63f?0x0:_0x478cb7,_0x3c4125=_0x3c4125>0x50e?0x0:_0x3c4125;let _0x149b4d=_0x32d420(_0x440663[_0x4182d8(0x8e8)](0x24));_0x149b4d+=_0x32d420(_0x478cb7['toString'](0x24));let _0x56c706=_0x3c4125[_0x4182d8(0x8e8)](0x24);return _0x56c706=_0x56c706['length']<0x2?'0'+_0x56c706:_0x56c706,_0x149b4d+=_0x56c706,_0x149b4d+=parseInt(0x24*Math[_0x4182d8(0xe98)](),0xa)['toString'](0x24),_0x282bec[_0x4182d8(0x47d)]+'|'+_0x149b4d[_0x4182d8(0x44ff)]();},_0x526a5e=function(_0x558528){const _0x34ef6b=_0x37e46c;_0x558528[_0x34ef6b(0x3170)]('@hasContraction')&&_0x34ef6b(0x14b2)==typeof _0x558528['contractions']&&_0x558528[_0x34ef6b(0x4c27)]('@hasContraction')['contractions']()[_0x34ef6b(0x2d1c)]();},_0x26b499=_0x2f7d38=>_0x37e46c(0xb2e)===Object[_0x37e46c(0x3b3c)][_0x37e46c(0x8e8)]['call'](_0x2f7d38),_0xd1330e=function(_0x572828,_0x35b42a,_0x31e34b){const _0x3a9e0b=_0x37e46c,{document:_0x638a,world:_0x50c6af}=_0x35b42a;_0x35b42a['uncache']();let _0x5e1150=_0x35b42a[_0x3a9e0b(0x34ce)],_0x5bc481=_0x35b42a[_0x3a9e0b(0x34ce)];_0x35b42a['forEach']((_0x542e6e,_0x410d7e)=>{const _0x3f1a93=_0x3a9e0b;let _0x38e3c1=_0x542e6e[_0x3f1a93(0x34ce)][0x0],[_0x22b443]=_0x38e3c1,_0x31a5f8=_0x638a[_0x22b443],_0x21ac93=function(_0x855b7c,_0x58fbd4){const _0x5af7fe=_0x3f1a93,{methods:_0x499181}=_0x58fbd4;return _0x5af7fe(0x2431)==typeof _0x855b7c?_0x499181[_0x5af7fe(0x1d8a)][_0x5af7fe(0x3c0b)][_0x5af7fe(0x10ce)](_0x855b7c,_0x58fbd4)[0x0]:_0x5af7fe(0x20c7)==typeof _0x855b7c&&_0x855b7c[_0x5af7fe(0x4d72)]?_0x855b7c[_0x5af7fe(0x150c)]()[_0x5af7fe(0x204b)][0x0]||[]:_0x26b499(_0x855b7c)?_0x26b499(_0x855b7c[0x0])?_0x855b7c[0x0]:_0x855b7c:[];}(_0x572828,_0x50c6af);0x0!==_0x21ac93['length']&&(_0x21ac93=function(_0x31eb53){const _0x4a3a72=_0x3f1a93;return 
_0x31eb53[_0x4a3a72(0x4833)](_0x109186=>(_0x109186['id']=_0x2036be(_0x109186),_0x109186));}(_0x21ac93),_0x31e34b?(_0x526a5e(_0x35b42a['update']([_0x38e3c1])[_0x3f1a93(0x42fe)]()),_0x26258d(_0x31a5f8,_0x38e3c1,_0x21ac93,_0x638a)):(_0x526a5e(_0x35b42a[_0x3f1a93(0x38d6)]([_0x38e3c1])[_0x3f1a93(0x2cce)]()),function(_0x39ed9c,_0x290633,_0x33b984,_0x5b3363){const _0x1193e6=_0x3f1a93;let [_0x42be3d,,_0x34e06b]=_0x290633,_0x13f455=(_0x5b3363[_0x42be3d]||[])[_0x1193e6(0x1b19)];_0x34e06b<_0x13f455?(_0x4dcf5f(_0x39ed9c,_0x34e06b,_0x33b984),_0x5a4ecd(_0x33b984)):_0x13f455===_0x34e06b&&(_0x5a4ecd(_0x39ed9c),_0x4dcf5f(_0x39ed9c,_0x34e06b,_0x33b984),_0x5b3363[_0x42be3d+0x1]&&(_0x33b984[_0x33b984[_0x1193e6(0x1b19)]-0x1][_0x1193e6(0x24ce)]+='\x20')),_0xe5cde3(_0x39ed9c,_0x290633[0x2],_0x33b984),_0x290633[0x4]=_0x33b984[_0x33b984[_0x1193e6(0x1b19)]-0x1]['id'];}(_0x31a5f8,_0x38e3c1,_0x21ac93,_0x638a)),_0x638a[_0x22b443]&&_0x638a[_0x22b443][_0x38e3c1[0x1]]&&(_0x38e3c1[0x3]=_0x638a[_0x22b443][_0x38e3c1[0x1]]['id']),_0x5bc481[_0x410d7e]=_0x38e3c1,_0x38e3c1[0x2]+=_0x21ac93['length'],_0x5e1150[_0x410d7e]=_0x38e3c1);});let _0x20e1fc=_0x35b42a[_0x3a9e0b(0x324a)](_0x5e1150);return _0x35b42a[_0x3a9e0b(0x232)]=_0x5bc481,_0x20e1fc[_0x3a9e0b(0x23df)](['id',_0x3a9e0b(0x3bb5),_0x3a9e0b(0x209c),_0x3a9e0b(0x2c34)]),_0x20e1fc[_0x3a9e0b(0x4657)]['compute'][_0x3a9e0b(0x2005)]&&_0x20e1fc[_0x3a9e0b(0x23df)]('preTagger'),_0x20e1fc['compute'](_0x3a9e0b(0x52a)),_0x20e1fc;},_0x47af7a={'insertAfter':function(_0x41c8c6){return _0xd1330e(_0x41c8c6,this,!0x1);},'insertBefore':function(_0x51b175){return _0xd1330e(_0x51b175,this,!0x0);}};_0x47af7a['append']=_0x47af7a[_0x37e46c(0x3219)],_0x47af7a['prepend']=_0x47af7a[_0x37e46c(0x197c)],_0x47af7a['insert']=_0x47af7a[_0x37e46c(0x3219)];const _0x1276aa=_0x47af7a,_0x14cc71=/\$[0-9a-z]+/g,_0x1f1304={};_0x1f1304[_0x37e46c(0x1f1f)]=function(_0x4b5cce,_0x43e522={}){const _0x128b58=_0x37e46c;let 
_0xb00765=this[_0x128b58(0x34ce)],_0x1e1b62=this;if(this['uncache'](),'function'==typeof _0x4b5cce)return function(_0x175429,_0x46750d){return _0x175429['forEach'](_0x28abb7=>{let _0xc3f2c3=_0x46750d(_0x28abb7);_0x28abb7['replaceWith'](_0xc3f2c3);}),_0x175429;}(_0x1e1b62,_0x4b5cce);let _0x389a52=_0x1e1b62['docs'][0x0],_0x2849ef=_0x43e522[_0x128b58(0xa76)]&&_0x389a52[_0x389a52[_0x128b58(0x1b19)]-0x1][_0x128b58(0x521a)][_0x128b58(0x3170)]('Possessive');_0x4b5cce=function(_0x232ac6,_0x2a69f6){const _0x48074f=_0x128b58;if(_0x48074f(0x2431)!=typeof _0x232ac6)return _0x232ac6;let _0x416771=_0x2a69f6[_0x48074f(0x9e6)]();return _0x232ac6=_0x232ac6[_0x48074f(0x741)](_0x14cc71,_0x337bcf=>{const _0x58c85f=_0x48074f;let _0x4d6c8f=_0x337bcf[_0x58c85f(0x741)](/\$/,'');return _0x416771[_0x58c85f(0x2427)](_0x4d6c8f)?_0x416771[_0x4d6c8f][_0x58c85f(0x4006)]():_0x337bcf;}),_0x232ac6;}(_0x4b5cce,_0x1e1b62);let _0x156780=this[_0x128b58(0x38d6)](_0xb00765);_0xb00765=_0xb00765[_0x128b58(0x4833)](_0xd8ed9e=>_0xd8ed9e[_0x128b58(0x384c)](0x0,0x3));let _0x230e3a=(_0x156780[_0x128b58(0x204b)][0x0]||[])['map'](_0x1e8d76=>Array[_0x128b58(0x27e6)](_0x1e8d76[_0x128b58(0x521a)]));(_0x128b58(0x2431)==typeof _0x4b5cce&&(_0x4b5cce=this[_0x128b58(0x4f8a)](_0x4b5cce)['compute']('id')),_0x1e1b62[_0x128b58(0x3219)](_0x4b5cce),_0x156780['has'](_0x128b58(0x14c0))&&_0x1e1b62[_0x128b58(0x8cf)])&&_0x1e1b62['grow']('@hasContraction+')[_0x128b58(0x8cf)]()[_0x128b58(0x2d1c)]();if(_0x1e1b62[_0x128b58(0x5be)](_0x156780),_0x2849ef){let _0x1f4e65=_0x1e1b62[_0x128b58(0x204b)][0x0],_0x3d4a25=_0x1f4e65[_0x1f4e65[_0x128b58(0x1b19)]-0x1];_0x3d4a25[_0x128b58(0x521a)][_0x128b58(0x3170)]('Possessive')||(_0x3d4a25['text']+='\x27s',_0x3d4a25[_0x128b58(0x47d)]+='\x27s',_0x3d4a25[_0x128b58(0x521a)]['add']('Possessive'));}let _0x237bd7=_0x1e1b62[_0x128b58(0x324a)](_0xb00765)[_0x128b58(0x23df)]([_0x128b58(0x3bb5),_0x128b58(0x209c),'lexicon']);return 
_0x237bd7[_0x128b58(0x4657)]['compute'][_0x128b58(0x2005)]&&_0x237bd7[_0x128b58(0x23df)](_0x128b58(0x2005)),_0x237bd7[_0x128b58(0x23df)](_0x128b58(0x52a)),_0x43e522[_0x128b58(0x521a)]&&_0x237bd7[_0x128b58(0x4a03)]()[_0x128b58(0xa21)]((_0x451402,_0x4a9f99)=>{const _0x57ac17=_0x128b58;_0x451402[_0x57ac17(0x108a)](_0x230e3a[_0x4a9f99]);}),_0x43e522[_0x128b58(0x2e7e)]&&_0x237bd7[_0x128b58(0x204b)][0x0]&&_0x237bd7['docs'][0x0][0x0]&&0x0===_0x237bd7[_0x128b58(0x204b)][0x0][0x0][_0x128b58(0x3bb5)][0x1]&&(_0x237bd7[_0x128b58(0x204b)][0x0][0x0]['text']=_0x237bd7[_0x128b58(0x204b)][0x0][0x0][_0x128b58(0x4006)][_0x128b58(0x741)](/\w\S*/g,_0x2bbd0a=>_0x2bbd0a[_0x128b58(0x2fe2)](0x0)[_0x128b58(0x44ff)]()+_0x2bbd0a['substring'](0x1)[_0x128b58(0x6e8)]())),_0x237bd7;},_0x1f1304['replace']=function(_0x791c9f,_0x19c231,_0x5afea1){const _0x4b5a94=_0x37e46c;if(_0x791c9f&&!_0x19c231)return this[_0x4b5a94(0x1f1f)](_0x791c9f,_0x5afea1);let _0x85c56=this[_0x4b5a94(0x2d96)](_0x791c9f);return _0x85c56[_0x4b5a94(0x2108)]?(this[_0x4b5a94(0x43f6)](),_0x85c56[_0x4b5a94(0x1f1f)](_0x19c231,_0x5afea1)):this;};const _0x6affa3=_0x1f1304,_0x5a5416=function(_0x324df7,_0x54d1bc){const _0x8595f5=_0x37e46c;_0x54d1bc[_0x8595f5(0xa21)](_0x4d9ee2=>{const _0x40a68c=_0x8595f5;let [_0x5b07d1,_0x3ae847,_0xf62eb9]=_0x4d9ee2,_0xb093df=_0xf62eb9-_0x3ae847;_0x324df7[_0x5b07d1]&&(_0xf62eb9===_0x324df7[_0x5b07d1]['length']&&_0xf62eb9>0x1&&function(_0x1b8fb7,_0x5faa51){const _0x246c0d=a0_0x11e7;let _0x728289=_0x1b8fb7['length']-0x1,_0x239026=_0x1b8fb7[_0x728289],_0x2e08b9=_0x1b8fb7[_0x728289-_0x5faa51];_0x2e08b9&&_0x239026&&(_0x2e08b9[_0x246c0d(0x24ce)]+=_0x239026[_0x246c0d(0x24ce)],_0x2e08b9[_0x246c0d(0x24ce)]=_0x2e08b9[_0x246c0d(0x24ce)][_0x246c0d(0x741)](/ +([.?!,;:])/,'$1'),_0x2e08b9['post']=_0x2e08b9['post'][_0x246c0d(0x741)](/[,;:]+([.?!])/,'$1'));}(_0x324df7[_0x5b07d1],_0xb093df),_0x324df7[_0x5b07d1][_0x40a68c(0x4986)](_0x3ae847,_0xb093df));});for(let 
_0x29c76d=_0x324df7[_0x8595f5(0x1b19)]-0x1;_0x29c76d>=0x0;_0x29c76d-=0x1)if(0x0===_0x324df7[_0x29c76d][_0x8595f5(0x1b19)]&&(_0x324df7[_0x8595f5(0x4986)](_0x29c76d,0x1),_0x29c76d===_0x324df7[_0x8595f5(0x1b19)]&&_0x324df7[_0x29c76d-0x1])){let _0x3eb6aa=_0x324df7[_0x29c76d-0x1],_0x130a72=_0x3eb6aa[_0x3eb6aa[_0x8595f5(0x1b19)]-0x1];_0x130a72&&(_0x130a72[_0x8595f5(0x24ce)]=_0x130a72['post'][_0x8595f5(0x1ee6)]());}return _0x324df7;},_0xb1d894={'remove':function(_0x1308b2){const _0x3b9d7b=_0x37e46c,{indexN:_0x49b0a3}=this[_0x3b9d7b(0x1578)][_0x3b9d7b(0x1d8a)]['pointer'];this['uncache']();let _0x1dae56=this[_0x3b9d7b(0xc36)](),_0x505a38=this;_0x1308b2&&(_0x1dae56=this,_0x505a38=this[_0x3b9d7b(0x2d96)](_0x1308b2));let _0x3fc4d9=!_0x1dae56[_0x3b9d7b(0x232)];_0x505a38[_0x3b9d7b(0x3170)]('@hasContraction')&&_0x505a38[_0x3b9d7b(0x8cf)]&&_0x505a38[_0x3b9d7b(0x4c27)]('@hasContraction')[_0x3b9d7b(0x8cf)]()[_0x3b9d7b(0x2d1c)]();let _0x260aee=_0x1dae56[_0x3b9d7b(0x34ce)],_0x72435=_0x505a38['fullPointer'][_0x3b9d7b(0x78b)](),_0x23a8d2=_0x5a5416(this['document'],_0x72435);return _0x260aee=function(_0x38b4d6,_0x139c73){const _0x35c693=_0x3b9d7b;return _0x38b4d6=_0x38b4d6['map'](_0x291b88=>{const _0x4d3af1=a0_0x11e7;let [_0x43bb51]=_0x291b88;return _0x139c73[_0x43bb51]?(_0x139c73[_0x43bb51][_0x4d3af1(0xa21)](_0x3244d1=>{let _0x3f1774=_0x3244d1[0x2]-_0x3244d1[0x1];_0x291b88[0x1]<=_0x3244d1[0x1]&&_0x291b88[0x2]>=_0x3244d1[0x2]&&(_0x291b88[0x2]-=_0x3f1774);}),_0x291b88):_0x291b88;}),_0x38b4d6[_0x35c693(0xa21)]((_0x4e6bc5,_0x2405f)=>{const _0x5f37cc=_0x35c693;if(0x0===_0x4e6bc5[0x1]&&0x0==_0x4e6bc5[0x2]){for(let 
_0x46c169=_0x2405f+0x1;_0x46c169<_0x38b4d6[_0x5f37cc(0x1b19)];_0x46c169+=0x1)_0x38b4d6[_0x46c169][0x0]-=0x1,_0x38b4d6[_0x46c169][0x0]<0x0&&(_0x38b4d6[_0x46c169][0x0]=0x0);}}),_0x38b4d6=(_0x38b4d6=_0x38b4d6[_0x35c693(0x1465)](_0x1f69fd=>_0x1f69fd[0x2]-_0x1f69fd[0x1]>0x0))[_0x35c693(0x4833)](_0x1ca6db=>(_0x1ca6db[0x3]=null,_0x1ca6db[0x4]=null,_0x1ca6db));}(_0x260aee,_0x49b0a3(_0x72435)),_0x1dae56[_0x3b9d7b(0x232)]=_0x260aee,_0x1dae56[_0x3b9d7b(0x295)]=_0x23a8d2,_0x1dae56[_0x3b9d7b(0x23df)]('index'),_0x3fc4d9&&(_0x1dae56[_0x3b9d7b(0x232)]=void 0x0),_0x1308b2?_0x1dae56[_0x3b9d7b(0x324a)](_0x260aee):(this[_0x3b9d7b(0x232)]=[],_0x1dae56['none']());}};_0xb1d894['delete']=_0xb1d894['remove'];const _0x2c2bd2=_0xb1d894,_0x44b44f={'pre':function(_0x2a243d,_0x442c7e){const _0x23294a=_0x37e46c;return void 0x0===_0x2a243d&&this['found']?this['docs'][0x0][0x0][_0x23294a(0x1228)]:(this[_0x23294a(0x204b)]['forEach'](_0x4bb084=>{const _0x7e7ebb=_0x23294a;let _0x32ace0=_0x4bb084[0x0];!0x0===_0x442c7e?_0x32ace0[_0x7e7ebb(0x1228)]+=_0x2a243d:_0x32ace0['pre']=_0x2a243d;}),this);},'post':function(_0x35f237,_0x3e899f){const _0x5580eb=_0x37e46c;if(void 0x0===_0x35f237){let _0x557f2d=this[_0x5580eb(0x204b)][this[_0x5580eb(0x204b)][_0x5580eb(0x1b19)]-0x1];return _0x557f2d[_0x557f2d[_0x5580eb(0x1b19)]-0x1][_0x5580eb(0x24ce)];}return this['docs'][_0x5580eb(0xa21)](_0x2b2233=>{const _0x2aee3a=_0x5580eb;let _0x74a63b=_0x2b2233[_0x2b2233['length']-0x1];!0x0===_0x3e899f?_0x74a63b['post']+=_0x35f237:_0x74a63b[_0x2aee3a(0x24ce)]=_0x35f237;}),this;},'trim':function(){const _0x5a34fd=_0x37e46c;if(!this['found'])return this;let _0x503b9d=this[_0x5a34fd(0x204b)],_0x5b6121=_0x503b9d[0x0][0x0];_0x5b6121[_0x5a34fd(0x1228)]=_0x5b6121[_0x5a34fd(0x1228)][_0x5a34fd(0x8d3)]();let _0x4b6740=_0x503b9d[_0x503b9d[_0x5a34fd(0x1b19)]-0x1],_0x4e077a=_0x4b6740[_0x4b6740['length']-0x1];return _0x4e077a[_0x5a34fd(0x24ce)]=_0x4e077a[_0x5a34fd(0x24ce)][_0x5a34fd(0x1ee6)](),this;},'hyphenate':function(){const 
_0x4bde7e=_0x37e46c;return this[_0x4bde7e(0x204b)]['forEach'](_0x49388d=>{const _0x46127d=_0x4bde7e;_0x49388d[_0x46127d(0xa21)]((_0xa3be62,_0xc373b0)=>{const _0x588d98=_0x46127d;0x0!==_0xc373b0&&(_0xa3be62[_0x588d98(0x1228)]=''),_0x49388d[_0xc373b0+0x1]&&(_0xa3be62['post']='-');});}),this;},'dehyphenate':function(){const _0x312d24=_0x37e46c,_0x516af7=/[-–—]/;return this[_0x312d24(0x204b)][_0x312d24(0xa21)](_0x26add6=>{const _0xcc7e86=_0x312d24;_0x26add6[_0xcc7e86(0xa21)](_0x3a79f7=>{const _0x2c2b72=_0xcc7e86;_0x516af7[_0x2c2b72(0x1769)](_0x3a79f7[_0x2c2b72(0x24ce)])&&(_0x3a79f7[_0x2c2b72(0x24ce)]='\x20');});}),this;},'toQuotations':function(_0x2a39c,_0x3ed3f6){const _0x1e5c1e=_0x37e46c;return _0x2a39c=_0x2a39c||'\x22',_0x3ed3f6=_0x3ed3f6||'\x22',this[_0x1e5c1e(0x204b)][_0x1e5c1e(0xa21)](_0x46aeb7=>{const _0x546f8c=_0x1e5c1e;_0x46aeb7[0x0]['pre']=_0x2a39c+_0x46aeb7[0x0][_0x546f8c(0x1228)];let _0x285ae4=_0x46aeb7[_0x46aeb7[_0x546f8c(0x1b19)]-0x1];_0x285ae4[_0x546f8c(0x24ce)]=_0x3ed3f6+_0x285ae4['post'];}),this;},'toParentheses':function(_0x1fc5dc,_0x3beb3f){const _0x42bd73=_0x37e46c;return _0x1fc5dc=_0x1fc5dc||'(',_0x3beb3f=_0x3beb3f||')',this[_0x42bd73(0x204b)][_0x42bd73(0xa21)](_0x365c79=>{const _0x57c3aa=_0x42bd73;_0x365c79[0x0][_0x57c3aa(0x1228)]=_0x1fc5dc+_0x365c79[0x0][_0x57c3aa(0x1228)];let _0x1b6e6c=_0x365c79[_0x365c79['length']-0x1];_0x1b6e6c[_0x57c3aa(0x24ce)]=_0x3beb3f+_0x1b6e6c[_0x57c3aa(0x24ce)];}),this;}};_0x44b44f[_0x37e46c(0xe9d)]=_0x44b44f['dehyphenate'],_0x44b44f[_0x37e46c(0x2a43)]=_0x44b44f[_0x37e46c(0x3517)];const _0x761946=_0x44b44f,_0x3b43d8={'alpha':(_0x593693,_0x27dfab)=>_0x593693['normal']<_0x27dfab[_0x37e46c(0x47d)]?-0x1:_0x593693[_0x37e46c(0x47d)]>_0x27dfab[_0x37e46c(0x47d)]?0x1:0x0,'length':(_0x1cd72b,_0x3ed59e)=>{const _0x4e7799=_0x37e46c;let _0x3666bf=_0x1cd72b[_0x4e7799(0x47d)][_0x4e7799(0x1b23)]()[_0x4e7799(0x1b19)],_0x283c52=_0x3ed59e[_0x4e7799(0x47d)]['trim']()[_0x4e7799(0x1b19)];return 
_0x3666bf<_0x283c52?0x1:_0x3666bf>_0x283c52?-0x1:0x0;},'wordCount':(_0x287843,_0x13e2e8)=>_0x287843[_0x37e46c(0x19f4)]<_0x13e2e8[_0x37e46c(0x19f4)]?0x1:_0x287843[_0x37e46c(0x19f4)]>_0x13e2e8[_0x37e46c(0x19f4)]?-0x1:0x0,'sequential':(_0xe30381,_0x6db2b0)=>_0xe30381[0x0]<_0x6db2b0[0x0]?0x1:_0xe30381[0x0]>_0x6db2b0[0x0]?-0x1:_0xe30381[0x1]>_0x6db2b0[0x1]?0x1:-0x1,'byFreq':function(_0x336588){const _0x2bae03=_0x37e46c;let _0xac9f17={};return _0x336588['forEach'](_0x46ed42=>{const _0x181823=a0_0x11e7;_0xac9f17[_0x46ed42[_0x181823(0x47d)]]=_0xac9f17[_0x46ed42[_0x181823(0x47d)]]||0x0,_0xac9f17[_0x46ed42[_0x181823(0x47d)]]+=0x1;}),_0x336588[_0x2bae03(0x4c33)]((_0x32ba16,_0x56b8ec)=>{const _0x336a5=_0x2bae03;let _0x1aff0f=_0xac9f17[_0x32ba16['normal']],_0x35b88f=_0xac9f17[_0x56b8ec[_0x336a5(0x47d)]];return _0x1aff0f<_0x35b88f?0x1:_0x1aff0f>_0x35b88f?-0x1:0x0;}),_0x336588;}},_0x84150f=new Set([_0x37e46c(0x3bb5),'sequence','seq',_0x37e46c(0x429c),_0x37e46c(0x940),_0x37e46c(0x219d)]),_0x3d786d=new Set([_0x37e46c(0xe8f),_0x37e46c(0x1ead),_0x37e46c(0x3f67),'repeats']),_0x19bfe5=new Set([_0x37e46c(0x174a),_0x37e46c(0x3b8b)]),_0x4c9686={'unique':function(){const _0x2768e1=_0x37e46c;let _0x42f5cb=new Set(),_0x539171=this[_0x2768e1(0x1465)](_0x1d6c5f=>{const _0x669a48=_0x2768e1;let _0xcc7e65=_0x1d6c5f[_0x669a48(0x4006)](_0x669a48(0x192e));return!_0x42f5cb['has'](_0xcc7e65)&&(_0x42f5cb[_0x669a48(0x362c)](_0xcc7e65),!0x0);});return _0x539171;},'reverse':function(){const _0x132b11=_0x37e46c;let _0x30fa5a=this[_0x132b11(0x43e4)]||this[_0x132b11(0x204b)]['map']((_0x581f49,_0x3866cb)=>[_0x3866cb]);return _0x30fa5a=[][_0x132b11(0x1d1d)](_0x30fa5a),_0x30fa5a=_0x30fa5a[_0x132b11(0x78b)](),this['_cache']&&(this[_0x132b11(0x1aa7)]=this['_cache']['reverse']()),this[_0x132b11(0x38d6)](_0x30fa5a);},'sort':function(_0x4ee8a1){const _0x3b3ce9=_0x37e46c;let {docs:_0x37ba26,pointer:_0x275075}=this;if(this[_0x3b3ce9(0x4742)](),'function'==typeof _0x4ee8a1)return function(_0xbfb106,_0x5137ba){const 
_0x51b378=_0x3b3ce9;let _0x498396=_0xbfb106['fullPointer'];return _0x498396=_0x498396[_0x51b378(0x4c33)]((_0x11e1ca,_0x36ce73)=>(_0x11e1ca=_0xbfb106[_0x51b378(0x38d6)]([_0x11e1ca]),_0x36ce73=_0xbfb106[_0x51b378(0x38d6)]([_0x36ce73]),_0x5137ba(_0x11e1ca,_0x36ce73))),_0xbfb106['ptrs']=_0x498396,_0xbfb106;}(this,_0x4ee8a1);_0x4ee8a1=_0x4ee8a1||'alpha';let _0x1f1dbe=_0x275075||_0x37ba26[_0x3b3ce9(0x4833)]((_0x176f12,_0x40ab0f)=>[_0x40ab0f]),_0x186998=_0x37ba26[_0x3b3ce9(0x4833)]((_0xf3d2e0,_0x4c70b0)=>({'index':_0x4c70b0,'words':_0xf3d2e0[_0x3b3ce9(0x1b19)],'normal':_0xf3d2e0[_0x3b3ce9(0x4833)](_0x23924d=>_0x23924d[_0x3b3ce9(0x192e)]||_0x23924d[_0x3b3ce9(0x47d)]||'')[_0x3b3ce9(0x3541)]('\x20'),'pointer':_0x1f1dbe[_0x4c70b0]}));return _0x84150f[_0x3b3ce9(0x3170)](_0x4ee8a1)&&(_0x4ee8a1=_0x3b3ce9(0x429c)),_0x19bfe5[_0x3b3ce9(0x3170)](_0x4ee8a1)&&(_0x4ee8a1=_0x3b3ce9(0x174a)),_0x3d786d['has'](_0x4ee8a1)?(_0x186998=_0x3b43d8[_0x3b3ce9(0x2304)](_0x186998),this[_0x3b3ce9(0x38d6)](_0x186998['map'](_0x5ab229=>_0x5ab229['pointer']))):_0x3b3ce9(0x14b2)==typeof _0x3b43d8[_0x4ee8a1]?(_0x186998=_0x186998['sort'](_0x3b43d8[_0x4ee8a1]),this[_0x3b3ce9(0x38d6)](_0x186998[_0x3b3ce9(0x4833)](_0x41007b=>_0x41007b[_0x3b3ce9(0x43e4)]))):this;}},_0x38afab=function(_0x324979,_0x3a3834){const _0xdb782e=_0x37e46c;if(_0x324979[_0xdb782e(0x1b19)]>0x0){let _0x3874f5=_0x324979[_0x324979[_0xdb782e(0x1b19)]-0x1],_0xf1e12c=_0x3874f5[_0x3874f5[_0xdb782e(0x1b19)]-0x1];!0x1===/ /[_0xdb782e(0x1769)](_0xf1e12c[_0xdb782e(0x24ce)])&&(_0xf1e12c[_0xdb782e(0x24ce)]+='\x20');}return _0x324979=_0x324979['concat'](_0x3a3834);},_0x4a3fe1={'concat':function(_0x261c2f){const _0x1c1468=_0x37e46c;if(_0x1c1468(0x2431)==typeof _0x261c2f){let _0x302d5a=this[_0x1c1468(0x4f8a)](_0x261c2f);if(this['found']&&this[_0x1c1468(0x232)]){let _0x2da315=this[_0x1c1468(0x34ce)],_0xbaf4e9=_0x2da315[_0x2da315['length']-0x1][0x0];this[_0x1c1468(0x295)][_0x1c1468(0x4986)](_0xbaf4e9,0x0,..._0x302d5a[_0x1c1468(0x295)]);}else 
this['document']=this[_0x1c1468(0x295)][_0x1c1468(0x1d1d)](_0x302d5a[_0x1c1468(0x295)]);return this[_0x1c1468(0xc36)]()[_0x1c1468(0x23df)](_0x1c1468(0x3bb5));}if('object'==typeof _0x261c2f&&_0x261c2f[_0x1c1468(0x4d72)])return function(_0x28a18b,_0x35cd81){const _0x2adcd7=_0x1c1468;if(_0x28a18b[_0x2adcd7(0x295)]===_0x35cd81[_0x2adcd7(0x295)]){let _0x2ddea8=_0x28a18b[_0x2adcd7(0x34ce)][_0x2adcd7(0x1d1d)](_0x35cd81['fullPointer']);return _0x28a18b[_0x2adcd7(0x324a)](_0x2ddea8)[_0x2adcd7(0x23df)](_0x2adcd7(0x3bb5));}return _0x35cd81[_0x2adcd7(0x34ce)]['forEach'](_0x1e925a=>{const _0x18c5dc=_0x2adcd7;_0x1e925a[0x0]+=_0x28a18b[_0x18c5dc(0x295)][_0x18c5dc(0x1b19)];}),_0x28a18b[_0x2adcd7(0x295)]=_0x38afab(_0x28a18b['document'],_0x35cd81[_0x2adcd7(0x204b)]),_0x28a18b[_0x2adcd7(0xc36)]();}(this,_0x261c2f);if(_0x36c277=_0x261c2f,_0x1c1468(0xb2e)===Object[_0x1c1468(0x3b3c)]['toString']['call'](_0x36c277)){let _0x17ca86=_0x38afab(this[_0x1c1468(0x295)],_0x261c2f);return this[_0x1c1468(0x295)]=_0x17ca86,this[_0x1c1468(0xc36)]();}var _0x36c277;return this;}},_0x1a036a={'harden':function(){const _0x51a776=_0x37e46c;return this[_0x51a776(0x232)]=this['fullPointer'],this;},'soften':function(){const _0x5bd197=_0x37e46c;let _0xa94fb=this[_0x5bd197(0x232)];return!_0xa94fb||_0xa94fb[_0x5bd197(0x1b19)]<0x1||(_0xa94fb=_0xa94fb['map'](_0x218230=>_0x218230['slice'](0x0,0x3)),this[_0x5bd197(0x232)]=_0xa94fb),this;}},_0x27d6ae=Object[_0x37e46c(0x4e14)]({},{'toLowerCase':function(){const _0x15b391=_0x37e46c;return this[_0x15b391(0x50a1)]()['forEach'](_0x494313=>{const _0x10686d=_0x15b391;_0x494313['text']=_0x494313[_0x10686d(0x4006)][_0x10686d(0x6e8)]();}),this;},'toUpperCase':function(){const _0x3228f0=_0x37e46c;return this[_0x3228f0(0x50a1)]()[_0x3228f0(0xa21)](_0x1a3e9f=>{const _0x2f6949=_0x3228f0;_0x1a3e9f['text']=_0x1a3e9f[_0x2f6949(0x4006)]['toUpperCase']();}),this;},'toTitleCase':function(){const _0x3be034=_0x37e46c;return this[_0x3be034(0x50a1)]()[_0x3be034(0xa21)](_0x4ee43e=>{const 
_0x451f2f=_0x3be034;_0x4ee43e[_0x451f2f(0x4006)]=_0x4ee43e[_0x451f2f(0x4006)][_0x451f2f(0x741)](/^ *[a-z\u00C0-\u00FF]/,_0x4630d2=>_0x4630d2[_0x451f2f(0x44ff)]());}),this;},'toCamelCase':function(){const _0x1cc848=_0x37e46c;return this[_0x1cc848(0x204b)]['forEach'](_0x217be8=>{const _0x23cb01=_0x1cc848;_0x217be8[_0x23cb01(0xa21)]((_0x1cb595,_0x14b4bc)=>{const _0x354cd9=_0x23cb01;0x0!==_0x14b4bc&&(_0x1cb595[_0x354cd9(0x4006)]=_0x1cb595[_0x354cd9(0x4006)][_0x354cd9(0x741)](/^ *[a-z\u00C0-\u00FF]/,_0x1bcdb7=>_0x1bcdb7[_0x354cd9(0x44ff)]())),_0x14b4bc!==_0x217be8['length']-0x1&&(_0x1cb595[_0x354cd9(0x24ce)]='');});}),this;}},_0x1276aa,_0x6affa3,_0x2c2bd2,_0x761946,_0x4c9686,_0x4a3fe1,_0x1a036a),_0x319583=function(_0x198788){const _0x47ab83=_0x37e46c;Object['assign'](_0x198788[_0x47ab83(0x3b3c)],_0x27d6ae);},_0x50c0f7={'id':function(_0x2d7e9f){const _0xd534f8=_0x37e46c;let _0x31d26e=_0x2d7e9f[_0xd534f8(0x204b)];for(let _0xc7f8c4=0x0;_0xc7f8c4<_0x31d26e[_0xd534f8(0x1b19)];_0xc7f8c4+=0x1)for(let _0x20d4f5=0x0;_0x20d4f5<_0x31d26e[_0xc7f8c4][_0xd534f8(0x1b19)];_0x20d4f5+=0x1){let 
_0x4d5697=_0x31d26e[_0xc7f8c4][_0x20d4f5];_0x4d5697['id']=_0x4d5697['id']||_0x2036be(_0x4d5697);}}},_0x5aa28c={'api':_0x319583,'compute':_0x50c0f7},_0x7fccd4=!0x0,_0x3ead30={'one':{'contractions':[{'word':'@','out':['at']},{'word':'arent','out':['are',_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x5ef),'out':['a','lot']},{'word':_0x37e46c(0x4dcd),'out':['be',_0x37e46c(0x4d50),'back']},{'word':'cannot','out':[_0x37e46c(0x312c),_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x378e),'out':['do',_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x15f4),'out':[_0x37e46c(0x312c),_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x5146),'out':[_0x37e46c(0x40ea),_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x127b),'out':['will',_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x3263),'out':[_0x37e46c(0x3a9c),'is']},{'word':_0x37e46c(0x50cb),'out':[_0x37e46c(0x3fcc),'is']},{'word':'let\x27s','out':[_0x37e46c(0x1e61),'us']},{'word':_0x37e46c(0x3f5),'out':['do',_0x37e46c(0xc1a),_0x37e46c(0x1fa0)]},{'word':_0x37e46c(0xab7),'out':[_0x37e46c(0x2ba0),'to']},{'word':_0x37e46c(0x1cd0),'out':['have',_0x37e46c(0x176c),'to']},{'word':_0x37e46c(0x20dc),'out':[_0x37e46c(0x47ba),'me']},{'word':_0x37e46c(0x35e2),'out':[_0x37e46c(0x3ab5),'of']},{'word':_0x37e46c(0x2b1d),'out':[_0x37e46c(0x1b9c),'to']},{'word':'gtg','out':[_0x37e46c(0x176c),'to','go']},{'word':'im','out':['i','am']},{'word':'imma','out':['I',_0x37e46c(0x207)]},{'word':_0x37e46c(0x406b),'out':['in','my',_0x37e46c(0x2c49)]},{'word':_0x37e46c(0x2af5),'out':['in',_0x37e46c(0x47f6),_0x37e46c(0xcd5)]},{'word':_0x37e46c(0x4d4b),'out':['i',_0x37e46c(0x1065)]},{'word':'rn','out':[_0x37e46c(0x4d50),'now']},{'word':_0x37e46c(0x47c),'out':['to','be',_0x37e46c(0x5112)]},{'word':_0x37e46c(0x1cac),'out':[_0x37e46c(0x44e5),'to']},{'word':_0x37e46c(0x3bc1),'out':[_0x37e46c(0x3a9d),_0x37e46c(0xfa2)]},{'word':_0x37e46c(0x318e),'out':[_0x37e46c(0x3a9d),'on']},{'word':'shoulda','out':[_0x37e46c(0x40ea),'have']},{'word':_0x37e46c(0x784),'out':[_0x37e46c(0x784),_0x37e46c(0x1065)]},{'word':'woulda','out':[_0x37
e46c(0x515e),_0x37e46c(0x1065)]},{'word':_0x37e46c(0x918),'out':[_0x37e46c(0x1684),_0x37e46c(0x1065)]},{'word':_0x37e46c(0x4d17),'out':['it','is']},{'word':_0x37e46c(0x8bd),'out':['it','was']},{'word':'y\x27know','out':[_0x37e46c(0x28f2),_0x37e46c(0x1fa0)]},{'word':_0x37e46c(0x2d2),'out':[_0x37e46c(0x1d3d)]},{'word':_0x37e46c(0x190c),'out':[_0x37e46c(0x135c)]},{'after':'ll','out':['will']},{'after':'ve','out':['have']},{'after':'re','out':[_0x37e46c(0x9d5)]},{'after':'m','out':['am']},{'before':'c','out':['ce']},{'before':'m','out':['me']},{'before':'n','out':['ne']},{'before':'qu','out':[_0x37e46c(0x962)]},{'before':'s','out':['se']},{'before':'t','out':['tu']},{'word':_0x37e46c(0x18b3),'out':[_0x37e46c(0x40ea),_0x37e46c(0xc1a)]},{'word':'couldnt','out':[_0x37e46c(0x3db),_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x396b),'out':[_0x37e46c(0x361a),_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x2b38),'out':[_0x37e46c(0x3170),_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x2804),'out':[_0x37e46c(0x23ef),_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x4a63),'out':['is',_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x260b),'out':['can',_0x37e46c(0xc1a)]},{'word':'dont','out':['do',_0x37e46c(0xc1a)]},{'word':_0x37e46c(0x2f9b),'out':[_0x37e46c(0x207),_0x37e46c(0xc1a)]},{'word':'howd','out':[_0x37e46c(0x3981),_0x37e46c(0x4af8)]},{'word':_0x37e46c(0x4f81),'out':[_0x37e46c(0x3fcc),_0x37e46c(0x4af8)]},{'word':_0x37e46c(0xf61),'out':[_0x37e46c(0x191b),_0x37e46c(0x4af8)]},{'word':_0x37e46c(0x1ef0),'out':[_0x37e46c(0x3b62),_0x37e46c(0x4af8)]}],'numberSuffixes':{'st':_0x7fccd4,'nd':_0x7fccd4,'rd':_0x7fccd4,'th':_0x7fccd4,'am':_0x7fccd4,'pm':_0x7fccd4,'max':_0x7fccd4,'°':_0x7fccd4,'s':_0x7fccd4,'e':_0x7fccd4,'er':_0x7fccd4,'ère':_0x7fccd4,'ème':_0x7fccd4}}},_0x17be8b=function(_0x5693a4,_0x4980d2,_0xbdd826){const _0x3f20d4=_0x37e46c;let 
[_0x43a6db,_0xfe7315]=_0x4980d2;_0xbdd826&&0x0!==_0xbdd826[_0x3f20d4(0x1b19)]&&(_0xbdd826=_0xbdd826[_0x3f20d4(0x4833)]((_0x323e5d,_0x297c7d)=>(_0x323e5d['implicit']=_0x323e5d[_0x3f20d4(0x4006)],_0x323e5d[_0x3f20d4(0x192e)]=_0x323e5d[_0x3f20d4(0x4006)],_0x323e5d['pre']='',_0x323e5d[_0x3f20d4(0x24ce)]='',_0x323e5d[_0x3f20d4(0x4006)]='',_0x323e5d[_0x3f20d4(0x47d)]='',_0x323e5d[_0x3f20d4(0x3bb5)]=[_0x43a6db,_0xfe7315+_0x297c7d],_0x323e5d)),_0xbdd826[0x0]&&(_0xbdd826[0x0]['pre']=_0x5693a4[_0x43a6db][_0xfe7315][_0x3f20d4(0x1228)],_0xbdd826[_0xbdd826['length']-0x1][_0x3f20d4(0x24ce)]=_0x5693a4[_0x43a6db][_0xfe7315][_0x3f20d4(0x24ce)],_0xbdd826[0x0]['text']=_0x5693a4[_0x43a6db][_0xfe7315][_0x3f20d4(0x4006)],_0xbdd826[0x0][_0x3f20d4(0x47d)]=_0x5693a4[_0x43a6db][_0xfe7315]['normal']),_0x5693a4[_0x43a6db][_0x3f20d4(0x4986)](_0xfe7315,0x1,..._0xbdd826));},_0x18c825=/'/,_0xa032f5=new Set(['what',_0x37e46c(0x3981),'when',_0x37e46c(0x3b62),_0x37e46c(0x3cc1)]),_0x9d95f5=new Set(['be','go','start','think',_0x37e46c(0x1843)]),_0x58e73c=new Set([_0x37e46c(0x72a),_0x37e46c(0xeac)]),_0x11524d=function(_0x418efc,_0x2e9079){const _0x64df76=_0x37e46c;let _0x48a4bb=_0x418efc[_0x2e9079][_0x64df76(0x47d)][_0x64df76(0x1117)](_0x18c825)[0x0];if(_0xa032f5['has'](_0x48a4bb))return[_0x48a4bb,_0x64df76(0x4af8)];if(_0x418efc[_0x2e9079+0x1]){if(_0x58e73c['has'](_0x418efc[_0x2e9079+0x1][_0x64df76(0x47d)]))return[_0x48a4bb,'had'];if(_0x9d95f5[_0x64df76(0x3170)](_0x418efc[_0x2e9079+0x1]['normal']))return[_0x48a4bb,_0x64df76(0x361a)];}return null;},_0x238ec4=function(_0x161327,_0x1a9e42){const _0xcf3146=_0x37e46c;if(_0xcf3146(0x26f4)===_0x161327[_0x1a9e42]['normal']||_0xcf3146(0x43c5)===_0x161327[_0x1a9e42][_0xcf3146(0x47d)])return 
null;return[_0x161327[_0x1a9e42][_0xcf3146(0x47d)][_0xcf3146(0x741)](/n't/,''),'not'];},_0x45accd=/'/,_0x2af9e2=/(e|é|aison|sion|tion)$/,_0x4b49e0=/(age|isme|acle|ege|oire)$/,_0x32c09d=(_0x42f951,_0xae069e)=>['je',_0x42f951[_0xae069e][_0x37e46c(0x47d)][_0x37e46c(0x1117)](_0x45accd)[0x1]],_0x133576=(_0x52f42f,_0x47f3f0)=>{const _0x37db25=_0x37e46c;let _0x1eae0f=_0x52f42f[_0x47f3f0]['normal'][_0x37db25(0x1117)](_0x45accd)[0x1];return _0x1eae0f&&_0x1eae0f['endsWith']('e')?['la',_0x1eae0f]:['le',_0x1eae0f];},_0x2db341=(_0x58badf,_0x4640fb)=>{const _0x4f8c54=_0x37e46c;let _0x11d0cc=_0x58badf[_0x4640fb]['normal'][_0x4f8c54(0x1117)](_0x45accd)[0x1];return _0x11d0cc&&_0x2af9e2[_0x4f8c54(0x1769)](_0x11d0cc)&&!_0x4b49e0[_0x4f8c54(0x1769)](_0x11d0cc)?['du',_0x11d0cc]:_0x11d0cc&&_0x11d0cc[_0x4f8c54(0x2a85)]('s')?['des',_0x11d0cc]:['de',_0x11d0cc];},_0x2c1c39=/^([0-9.]{1,4}[a-z]{0,2}) ?[-–—] ?([0-9]{1,4}[a-z]{0,2})$/i,_0x25a1d1=/^([0-9]{1,2}(:[0-9][0-9])?(am|pm)?) ?[-–—] ?([0-9]{1,2}(:[0-9][0-9])?(am|pm)?)$/i,_0x11269e=/^[0-9]{3}-[0-9]{4}$/,_0x20926e=function(_0x1d54ec,_0xab6896){const _0x58dd65=_0x37e46c;let _0x271272=_0x1d54ec[_0xab6896],_0x410731=_0x271272[_0x58dd65(0x4006)][_0x58dd65(0x2d96)](_0x2c1c39);return null!==_0x410731?!0x0===_0x271272[_0x58dd65(0x521a)][_0x58dd65(0x3170)](_0x58dd65(0xe0f))||_0x11269e['test'](_0x271272[_0x58dd65(0x4006)])?null:[_0x410731[0x1],'to',_0x410731[0x2]]:(_0x410731=_0x271272[_0x58dd65(0x4006)]['match'](_0x25a1d1),null!==_0x410731?[_0x410731[0x1],'to',_0x410731[0x4]]:null);},_0x45354f=/^([+-]?[0-9][.,0-9]*)([a-z°²³µ/]+)$/,_0x213485=function(_0xd014e6,_0x7f3d57,_0x3d5f9d){const _0xa6c0cb=_0x37e46c,_0x8e260a=_0x3d5f9d[_0xa6c0cb(0x1556)][_0xa6c0cb(0x1d8a)][_0xa6c0cb(0x40bd)]||{};let _0xb2b4b2=_0xd014e6[_0x7f3d57][_0xa6c0cb(0x4006)][_0xa6c0cb(0x2d96)](_0x45354f);if(null!==_0xb2b4b2){let _0x57d52d=_0xb2b4b2[0x2][_0xa6c0cb(0x6e8)]()['trim']();return _0x8e260a[_0xa6c0cb(0x2427)](_0x57d52d)?null:[_0xb2b4b2[0x1],_0x57d52d];}return 
null;},_0x56b81e=/'/,_0x351f86=/^[0-9][^-–—]*[-–—].*?[0-9]/,_0x2b8ba8=function(_0x20f26d,_0x1a34b4,_0x1ad2e8,_0x5af483){const _0x574e7c=_0x37e46c;let _0x1954e3=_0x1a34b4[_0x574e7c(0x38d6)]();_0x1954e3['document']=[_0x20f26d];let _0x43f1d6=_0x1ad2e8+_0x5af483;_0x1ad2e8>0x0&&(_0x1ad2e8-=0x1),_0x20f26d[_0x43f1d6]&&(_0x43f1d6+=0x1),_0x1954e3['ptrs']=[[0x0,_0x1ad2e8,_0x43f1d6]];},_0x406b54={'t':(_0x4e1560,_0x593834)=>_0x238ec4(_0x4e1560,_0x593834),'d':(_0x5b00ee,_0x22d17b)=>_0x11524d(_0x5b00ee,_0x22d17b)},_0x4b7f84={'j':(_0x13ecd3,_0x1e8226)=>_0x32c09d(_0x13ecd3,_0x1e8226),'l':(_0x3cb112,_0x21cb56)=>_0x133576(_0x3cb112,_0x21cb56),'d':(_0x4f0737,_0x50b607)=>_0x2db341(_0x4f0737,_0x50b607)},_0x3573fc=function(_0x563bdd,_0x5a8121,_0x581874,_0x5df29f){const _0x52f2bd=_0x37e46c;for(let _0x460849=0x0;_0x460849<_0x563bdd[_0x52f2bd(0x1b19)];_0x460849+=0x1){let _0x233ea9=_0x563bdd[_0x460849];if(_0x233ea9[_0x52f2bd(0x2506)]===_0x5a8121[_0x52f2bd(0x47d)])return _0x233ea9[_0x52f2bd(0x3ab5)];if(null!==_0x5df29f&&_0x5df29f===_0x233ea9['after'])return[_0x581874][_0x52f2bd(0x1d1d)](_0x233ea9[_0x52f2bd(0x3ab5)]);if(null!==_0x581874&&_0x581874===_0x233ea9[_0x52f2bd(0x5097)]&&_0x5df29f&&_0x5df29f['length']>0x2)return _0x233ea9[_0x52f2bd(0x3ab5)][_0x52f2bd(0x1d1d)](_0x5df29f);}return null;},_0x272171=function(_0x11bab8,_0x1669c0){const _0x59ad46=_0x37e46c;let _0xd85dea=_0x1669c0['fromText'](_0x11bab8[_0x59ad46(0x3541)]('\x20'));return _0xd85dea[_0x59ad46(0x23df)](['id',_0x59ad46(0xa94)]),_0xd85dea[_0x59ad46(0x204b)][0x0];},_0x363588=function(_0x59cc71,_0x42967a){const _0x5db7e0=_0x37e46c;for(let _0x2b3729=_0x42967a+0x1;_0x2b3729<0x5&&_0x59cc71[_0x2b3729];_0x2b3729+=0x1)if(_0x5db7e0(0x72a)===_0x59cc71[_0x2b3729]['normal'])return[_0x5db7e0(0xdd3),'has'];return[_0x5db7e0(0xdd3),'is'];},_0x22ad77=_0x3870a4=>{const _0x446a0d=_0x37e46c;let {world:_0x3dd634,document:_0x6d8d11}=_0x3870a4;const {model:_0x1a92f6,methods:_0x493e6d}=_0x3dd634;let 
_0x1d4095=_0x1a92f6[_0x446a0d(0x1d8a)][_0x446a0d(0x8cf)]||[];_0x6d8d11['forEach']((_0x41dd45,_0x5b81ec)=>{const _0x1bc35a=_0x446a0d;for(let _0x11d67f=_0x41dd45[_0x1bc35a(0x1b19)]-0x1;_0x11d67f>=0x0;_0x11d67f-=0x1){let _0x23d735=null,_0x3107ee=null;if(!0x0===_0x56b81e[_0x1bc35a(0x1769)](_0x41dd45[_0x11d67f]['normal'])){let _0x49a3da=_0x41dd45[_0x11d67f][_0x1bc35a(0x47d)][_0x1bc35a(0x1117)](_0x56b81e);_0x23d735=_0x49a3da[0x0],_0x3107ee=_0x49a3da[0x1];}let _0x3b4d93=_0x3573fc(_0x1d4095,_0x41dd45[_0x11d67f],_0x23d735,_0x3107ee);!_0x3b4d93&&_0x406b54[_0x1bc35a(0x2427)](_0x3107ee)&&(_0x3b4d93=_0x406b54[_0x3107ee](_0x41dd45,_0x11d67f,_0x3dd634)),!_0x3b4d93&&_0x4b7f84[_0x1bc35a(0x2427)](_0x23d735)&&(_0x3b4d93=_0x4b7f84[_0x23d735](_0x41dd45,_0x11d67f)),_0x1bc35a(0xdd3)===_0x23d735&&'s'===_0x3107ee&&(_0x3b4d93=_0x363588(_0x41dd45,_0x11d67f)),_0x3b4d93?(_0x3b4d93=_0x272171(_0x3b4d93,_0x3870a4),_0x17be8b(_0x6d8d11,[_0x5b81ec,_0x11d67f],_0x3b4d93),_0x2b8ba8(_0x6d8d11[_0x5b81ec],_0x3870a4,_0x11d67f,_0x3b4d93['length'])):_0x351f86[_0x1bc35a(0x1769)](_0x41dd45[_0x11d67f][_0x1bc35a(0x47d)])?(_0x3b4d93=_0x20926e(_0x41dd45,_0x11d67f),_0x3b4d93&&(_0x3b4d93=_0x272171(_0x3b4d93,_0x3870a4),_0x17be8b(_0x6d8d11,[_0x5b81ec,_0x11d67f],_0x3b4d93),_0x493e6d['one'][_0x1bc35a(0x4820)](_0x3b4d93,_0x1bc35a(0x2e56),_0x3dd634),_0x3b4d93[0x2]&&_0x3b4d93[0x2][_0x1bc35a(0x521a)][_0x1bc35a(0x3170)](_0x1bc35a(0x1703))&&_0x493e6d[_0x1bc35a(0x1d8a)]['setTag']([_0x3b4d93[0x0]],'Time',_0x3dd634,null,_0x1bc35a(0x16a0)),_0x2b8ba8(_0x6d8d11[_0x5b81ec],_0x3870a4,_0x11d67f,_0x3b4d93['length']))):(_0x3b4d93=_0x213485(_0x41dd45,_0x11d67f,_0x3dd634),_0x3b4d93&&(_0x3b4d93=_0x272171(_0x3b4d93,_0x3870a4),_0x17be8b(_0x6d8d11,[_0x5b81ec,_0x11d67f],_0x3b4d93),_0x493e6d[_0x1bc35a(0x1d8a)][_0x1bc35a(0x4820)]([_0x3b4d93[0x1]],_0x1bc35a(0xdae),_0x3dd634,null,'contraction-unit')));}});},_0xc83a46={'model':_0x3ead30,'compute':{'contractions':_0x22ad77},'hooks':['contractions']},_0x2056fc=function(_0x327c3e){const 
_0x45ecaa=_0x37e46c,_0x171b0a=_0x327c3e['world'],{model:_0x36924b,methods:_0x1865fe}=_0x327c3e['world'],_0x240364=_0x1865fe[_0x45ecaa(0x1d8a)]['setTag'],{frozenLex:_0x199f78}=_0x36924b[_0x45ecaa(0x1d8a)],_0x48c4c6=_0x36924b['one'][_0x45ecaa(0x839)]||{};_0x327c3e['docs']['forEach'](_0x54131b=>{const _0x4ac11a=_0x45ecaa;for(let _0x19b395=0x0;_0x19b395<_0x54131b[_0x4ac11a(0x1b19)];_0x19b395+=0x1){let _0x4a4fd7=_0x54131b[_0x19b395],_0x3c3e2b=_0x4a4fd7[_0x4ac11a(0x192e)]||_0x4a4fd7[_0x4ac11a(0x47d)];if(void 0x0!==_0x48c4c6[_0x3c3e2b]&&_0x54131b[_0x19b395+0x1])for(let _0x358f3e=_0x19b395+_0x48c4c6[_0x3c3e2b]-0x1;_0x358f3e>_0x19b395;_0x358f3e-=0x1){let _0x4cec7a=_0x54131b[_0x4ac11a(0x384c)](_0x19b395,_0x358f3e+0x1),_0x56a159=_0x4cec7a[_0x4ac11a(0x4833)](_0x40bdce=>_0x40bdce[_0x4ac11a(0x192e)]||_0x40bdce[_0x4ac11a(0x47d)])['join']('\x20');!0x0!==_0x199f78[_0x4ac11a(0x2427)](_0x56a159)||(_0x240364(_0x4cec7a,_0x199f78[_0x56a159],_0x171b0a,!0x1,_0x4ac11a(0x32a1)),_0x4cec7a[_0x4ac11a(0xa21)](_0x4390e5=>_0x4390e5[_0x4ac11a(0xf75)]=!0x0));}void 0x0!==_0x199f78[_0x3c3e2b]&&_0x199f78[_0x4ac11a(0x2427)](_0x3c3e2b)&&(_0x240364([_0x4a4fd7],_0x199f78[_0x3c3e2b],_0x171b0a,!0x1,_0x4ac11a(0xa47)),_0x4a4fd7['frozen']=!0x0);}});},_0x10f864=_0x9304ff=>_0x37e46c(0x234b)+_0x9304ff+'\x1b[0m',_0x19c08d=function(_0xb5fa84){const _0x5825f6=_0x37e46c;_0xb5fa84[_0x5825f6(0x204b)][_0x5825f6(0xa21)](_0x51da63=>{const _0x4fe72a=_0x5825f6;_0x51da63[_0x4fe72a(0xa21)](_0x33b7d5=>{const _0x37243f=_0x4fe72a;let _0x5e1b1c='\x20\x20'+_0x10f864('│')+'\x20\x20',_0x43b57b=_0x33b7d5[_0x37243f(0x4570)]||_0x33b7d5[_0x37243f(0x4006)]||'-';!0x0===_0x33b7d5[_0x37243f(0xf75)]?_0x5e1b1c+=(_0x443011=>_0x37243f(0x1f66)+_0x443011+_0x37243f(0x239))(_0x43b57b)+_0x37243f(0x9ea):_0x5e1b1c+=_0x10f864(_0x43b57b);});});},_0x3de92e={'compute':{'frozen':_0x2056fc,'freeze':_0x2056fc,'unfreeze':function(_0xf8f618){const _0x2a545e=_0x37e46c;return _0xf8f618[_0x2a545e(0x204b)]['forEach'](_0xd8a203=>{const 
_0x3dacbf=_0x2a545e;_0xd8a203[_0x3dacbf(0xa21)](_0x4dd3c7=>{const _0x4fa19d=_0x3dacbf;delete _0x4dd3c7[_0x4fa19d(0xf75)];});}),_0xf8f618;}},'mutate':_0x329c76=>{const _0x2b162f=_0x37e46c,_0x467c03=_0x329c76[_0x2b162f(0x1578)][_0x2b162f(0x1d8a)];_0x467c03[_0x2b162f(0x29bf)][_0x2b162f(0x44c0)]=_0x11a407=>!0x0===_0x11a407['frozen'],_0x467c03[_0x2b162f(0x534)][_0x2b162f(0x209c)]=_0x19c08d,_0x467c03[_0x2b162f(0x534)][_0x2b162f(0xf75)]=_0x19c08d;},'api':function(_0x7e6cc4){const _0x3fc954=_0x37e46c;_0x7e6cc4[_0x3fc954(0x3b3c)][_0x3fc954(0x209c)]=function(){const _0x5ad9ae=_0x3fc954;return this[_0x5ad9ae(0x204b)][_0x5ad9ae(0xa21)](_0x59e223=>{const _0x5182c2=_0x5ad9ae;_0x59e223[_0x5182c2(0xa21)](_0x4f233d=>{const _0x54c208=_0x5182c2;_0x4f233d[_0x54c208(0xf75)]=!0x0;});}),this;},_0x7e6cc4['prototype']['unfreeze']=function(){const _0x8d846b=_0x3fc954;this[_0x8d846b(0x23df)](_0x8d846b(0x52a));},_0x7e6cc4[_0x3fc954(0x3b3c)][_0x3fc954(0x44c0)]=function(){const _0x7548cb=_0x3fc954;return this['match'](_0x7548cb(0x282e));};},'hooks':[_0x37e46c(0x209c)]},_0x3db89d=function(_0x5d4918,_0xd86a08,_0x54b6e3){const _0x2aff8e=_0x37e46c,{model:_0x5d8b5e,methods:_0x3fce44}=_0x54b6e3,_0x59e31a=_0x3fce44['one'][_0x2aff8e(0x4820)],_0x16c0ce=_0x5d8b5e[_0x2aff8e(0x1d8a)]['_multiCache']||{},{lexicon:_0x63befc}=_0x5d8b5e['one']||{};let _0x2c2f00=_0x5d4918[_0xd86a08],_0x2289f8=_0x2c2f00[_0x2aff8e(0x192e)]||_0x2c2f00[_0x2aff8e(0x47d)];if(void 0x0!==_0x16c0ce[_0x2289f8]&&_0x5d4918[_0xd86a08+0x1]){for(let _0x2fb946=_0xd86a08+_0x16c0ce[_0x2289f8]-0x1;_0x2fb946>_0xd86a08;_0x2fb946-=0x1){let _0x68e8ea=_0x5d4918[_0x2aff8e(0x384c)](_0xd86a08,_0x2fb946+0x1);if(_0x68e8ea[_0x2aff8e(0x1b19)]<=0x1)return!0x1;let _0x5aa9c1=_0x68e8ea['map'](_0x2e9d33=>_0x2e9d33[_0x2aff8e(0x192e)]||_0x2e9d33[_0x2aff8e(0x47d)])[_0x2aff8e(0x3541)]('\x20');if(!0x0===_0x63befc[_0x2aff8e(0x2427)](_0x5aa9c1)){let _0x2196dc=_0x63befc[_0x5aa9c1];return 
_0x59e31a(_0x68e8ea,_0x2196dc,_0x54b6e3,!0x1,'1-multi-lexicon'),!_0x2196dc||0x2!==_0x2196dc['length']||'PhrasalVerb'!==_0x2196dc[0x0]&&_0x2aff8e(0x39b6)!==_0x2196dc[0x1]||_0x59e31a([_0x68e8ea[0x1]],'Particle',_0x54b6e3,!0x1,_0x2aff8e(0x1b27)),!0x0;}}return!0x1;}return null;},_0x4fbcfd=/^(under|over|mis|re|un|dis|semi|pre|post)-?/,_0x109a3b=new Set([_0x37e46c(0x487b),_0x37e46c(0x2631),_0x37e46c(0xe52),_0x37e46c(0x42ce),_0x37e46c(0x2c88),'Adjective',_0x37e46c(0x1ebc)]),_0x296e5d=function(_0x358bff,_0x20efae,_0xe18ac7){const _0x22940e=_0x37e46c,{model:_0x4d771e,methods:_0x4182c2}=_0xe18ac7,_0x248e3e=_0x4182c2['one'][_0x22940e(0x4820)],{lexicon:_0x5f3b01}=_0x4d771e[_0x22940e(0x1d8a)];let _0x5b7aff=_0x358bff[_0x20efae],_0x387955=_0x5b7aff[_0x22940e(0x192e)]||_0x5b7aff[_0x22940e(0x47d)];if(void 0x0!==_0x5f3b01[_0x387955]&&_0x5f3b01['hasOwnProperty'](_0x387955))return _0x248e3e([_0x5b7aff],_0x5f3b01[_0x387955],_0xe18ac7,!0x1,_0x22940e(0x3b6d)),!0x0;if(_0x5b7aff['alias']){let _0x2b7fc2=_0x5b7aff[_0x22940e(0xa94)][_0x22940e(0x5144)](_0x47a300=>_0x5f3b01[_0x22940e(0x2427)](_0x47a300));if(_0x2b7fc2)return _0x248e3e([_0x5b7aff],_0x5f3b01[_0x2b7fc2],_0xe18ac7,!0x1,'1-lexicon-alias'),!0x0;}if(!0x0===_0x4fbcfd[_0x22940e(0x1769)](_0x387955)){let _0x1d7446=_0x387955[_0x22940e(0x741)](_0x4fbcfd,'');if(_0x5f3b01[_0x22940e(0x2427)](_0x1d7446)&&_0x1d7446[_0x22940e(0x1b19)]>0x3&&_0x109a3b[_0x22940e(0x3170)](_0x5f3b01[_0x1d7446]))return _0x248e3e([_0x5b7aff],_0x5f3b01[_0x1d7446],_0xe18ac7,!0x1,_0x22940e(0x2aa3)),!0x0;}return null;},_0x382576={'lexicon':function(_0x178605){const _0x45d115=_0x37e46c,_0x1a82a3=_0x178605[_0x45d115(0x4657)];_0x178605[_0x45d115(0x204b)][_0x45d115(0xa21)](_0x33afde=>{const _0x2a101c=_0x45d115;for(let _0x22aff7=0x0;_0x22aff7<_0x33afde[_0x2a101c(0x1b19)];_0x22aff7+=0x1)if(0x0===_0x33afde[_0x22aff7][_0x2a101c(0x521a)][_0x2a101c(0x395f)]){let 
_0x1f2437=null;_0x1f2437=_0x1f2437||_0x3db89d(_0x33afde,_0x22aff7,_0x1a82a3),_0x1f2437=_0x1f2437||_0x296e5d(_0x33afde,_0x22aff7,_0x1a82a3);}});}},_0x4dd4ef=function(_0x22340e){const _0x36efbc=_0x37e46c;let _0x1a6186={},_0x1f29e4={};return Object[_0x36efbc(0x1ea9)](_0x22340e)['forEach'](_0x25e9ae=>{const _0x23ede2=_0x36efbc;let _0x36fd74=_0x22340e[_0x25e9ae],_0x47bc56=(_0x25e9ae=(_0x25e9ae=_0x25e9ae['toLowerCase']()[_0x23ede2(0x1b23)]())[_0x23ede2(0x741)](/'s\b/,''))[_0x23ede2(0x1117)](/ /);_0x47bc56[_0x23ede2(0x1b19)]>0x1&&(void 0x0===_0x1f29e4[_0x47bc56[0x0]]||_0x47bc56[_0x23ede2(0x1b19)]>_0x1f29e4[_0x47bc56[0x0]])&&(_0x1f29e4[_0x47bc56[0x0]]=_0x47bc56[_0x23ede2(0x1b19)]),_0x1a6186[_0x25e9ae]=_0x1a6186[_0x25e9ae]||_0x36fd74;}),delete _0x1a6186[''],delete _0x1a6186[_0x36efbc(0x1582)],delete _0x1a6186['\x20'],{'lex':_0x1a6186,'_multi':_0x1f29e4};},_0x2d6702={'addWords':function(_0x3fa3e1,_0xa83631=!0x1){const _0x1631dd=_0x37e46c,_0x3fdd89=this[_0x1631dd(0x4657)](),{methods:_0x43d911,model:_0x1fe145}=_0x3fdd89;if(!_0x3fa3e1)return;if(Object[_0x1631dd(0x1ea9)](_0x3fa3e1)[_0x1631dd(0xa21)](_0x236ccf=>{const _0x2aca7a=_0x1631dd;'string'==typeof _0x3fa3e1[_0x236ccf]&&_0x3fa3e1[_0x236ccf][_0x2aca7a(0x3bcf)]('#')&&(_0x3fa3e1[_0x236ccf]=_0x3fa3e1[_0x236ccf][_0x2aca7a(0x741)](/^#/,''));}),!0x0===_0xa83631){let {lex:_0x36d2df,_multi:_0x508ff2}=_0x43d911[_0x1631dd(0x1d8a)][_0x1631dd(0xbc6)](_0x3fa3e1,_0x3fdd89);return Object[_0x1631dd(0x4e14)](_0x1fe145[_0x1631dd(0x1d8a)][_0x1631dd(0x839)],_0x508ff2),void Object['assign'](_0x1fe145[_0x1631dd(0x1d8a)][_0x1631dd(0x4663)],_0x36d2df);}if(_0x43d911['two']['expandLexicon']){let {lex:_0x277bf2,_multi:_0x10d821}=_0x43d911[_0x1631dd(0x21c9)][_0x1631dd(0xbc6)](_0x3fa3e1,_0x3fdd89);Object['assign'](_0x1fe145[_0x1631dd(0x1d8a)][_0x1631dd(0x2c34)],_0x277bf2),Object['assign'](_0x1fe145['one']['_multiCache'],_0x10d821);}let 
{lex:_0x521c27,_multi:_0x15e975}=_0x43d911[_0x1631dd(0x1d8a)][_0x1631dd(0xbc6)](_0x3fa3e1,_0x3fdd89);Object[_0x1631dd(0x4e14)](_0x1fe145[_0x1631dd(0x1d8a)][_0x1631dd(0x2c34)],_0x521c27),Object['assign'](_0x1fe145[_0x1631dd(0x1d8a)][_0x1631dd(0x839)],_0x15e975);}},_0x363936={'model':{'one':{'lexicon':{},'_multiCache':{},'frozenLex':{}}},'methods':{'one':{'expandLexicon':_0x4dd4ef}},'compute':_0x382576,'lib':_0x2d6702,'hooks':[_0x37e46c(0x2c34)]},_0x1459f0=function(_0x599ef5,_0x497d7b){const _0x1ba95f=_0x37e46c;let _0x5082cf=[{}],_0x468cb5=[null],_0x44916c=[0x0],_0x3cb29d=[],_0x5ab539=0x0;_0x599ef5[_0x1ba95f(0xa21)](function(_0x4cc31d){const _0x15cd83=_0x1ba95f;let _0x2ca317=0x0,_0x59c495=function(_0x54fbdf,_0x287928){const _0x19d085=a0_0x11e7,{methods:_0x92a22b,model:_0x4f40ee}=_0x287928;let _0xb30457=_0x92a22b[_0x19d085(0x1d8a)][_0x19d085(0x3c0b)][_0x19d085(0x1e5d)](_0x54fbdf,_0x4f40ee)[_0x19d085(0x4833)](_0x20c30b=>_0x92a22b[_0x19d085(0x1d8a)][_0x19d085(0x3c0b)][_0x19d085(0x666)](_0x20c30b,_0x4f40ee));return _0xb30457[_0x19d085(0x4833)](_0x53e68d=>_0x53e68d['text'][_0x19d085(0x6e8)]());}(_0x4cc31d,_0x497d7b);for(let _0x486f6c=0x0;_0x486f6c<_0x59c495[_0x15cd83(0x1b19)];_0x486f6c++){let _0x69ca14=_0x59c495[_0x486f6c];_0x5082cf[_0x2ca317]&&_0x5082cf[_0x2ca317][_0x15cd83(0x2427)](_0x69ca14)?_0x2ca317=_0x5082cf[_0x2ca317][_0x69ca14]:(_0x5ab539++,_0x5082cf[_0x2ca317][_0x69ca14]=_0x5ab539,_0x5082cf[_0x5ab539]={},_0x2ca317=_0x5ab539,_0x468cb5[_0x5ab539]=null);}_0x468cb5[_0x2ca317]=[_0x59c495[_0x15cd83(0x1b19)]];});for(let _0x4b2ac0 in _0x5082cf[0x0])_0x5ab539=_0x5082cf[0x0][_0x4b2ac0],_0x44916c[_0x5ab539]=0x0,_0x3cb29d[_0x1ba95f(0x1715)](_0x5ab539);for(;_0x3cb29d['length'];){let _0x1636c0=_0x3cb29d[_0x1ba95f(0x34fe)](),_0x22c4a5=Object[_0x1ba95f(0x1ea9)](_0x5082cf[_0x1636c0]);for(let _0x5d8f27=0x0;_0x5d8f27<_0x22c4a5[_0x1ba95f(0x1b19)];_0x5d8f27+=0x1){let 
_0x5ba1b9=_0x22c4a5[_0x5d8f27],_0x682f7e=_0x5082cf[_0x1636c0][_0x5ba1b9];for(_0x3cb29d[_0x1ba95f(0x1715)](_0x682f7e),_0x5ab539=_0x44916c[_0x1636c0];_0x5ab539>0x0&&!_0x5082cf[_0x5ab539]['hasOwnProperty'](_0x5ba1b9);)_0x5ab539=_0x44916c[_0x5ab539];if(_0x5082cf[_0x1ba95f(0x2427)](_0x5ab539)){let _0x33bfae=_0x5082cf[_0x5ab539][_0x5ba1b9];_0x44916c[_0x682f7e]=_0x33bfae,_0x468cb5[_0x33bfae]&&(_0x468cb5[_0x682f7e]=_0x468cb5[_0x682f7e]||[],_0x468cb5[_0x682f7e]=_0x468cb5[_0x682f7e]['concat'](_0x468cb5[_0x33bfae]));}else _0x44916c[_0x682f7e]=0x0;}}return{'goNext':_0x5082cf,'endAs':_0x468cb5,'failTo':_0x44916c};},_0x3c1fe0=function(_0x3ab6bb,_0x335b23,_0x2c3452){const _0x810ef=_0x37e46c;let _0x4c9d54=0x0,_0x2311ff=[];for(let _0x1d169d=0x0;_0x1d169d<_0x3ab6bb['length'];_0x1d169d++){let _0x39eac9=_0x3ab6bb[_0x1d169d][_0x2c3452[_0x810ef(0x31e0)]]||_0x3ab6bb[_0x1d169d][_0x810ef(0x47d)];for(;_0x4c9d54>0x0&&(void 0x0===_0x335b23[_0x810ef(0x1774)][_0x4c9d54]||!_0x335b23[_0x810ef(0x1774)][_0x4c9d54][_0x810ef(0x2427)](_0x39eac9));)_0x4c9d54=_0x335b23[_0x810ef(0x2e58)][_0x4c9d54]||0x0;if(_0x335b23['goNext'][_0x4c9d54][_0x810ef(0x2427)](_0x39eac9)&&(_0x4c9d54=_0x335b23[_0x810ef(0x1774)][_0x4c9d54][_0x39eac9],_0x335b23[_0x810ef(0x3175)][_0x4c9d54])){let _0x55661=_0x335b23[_0x810ef(0x3175)][_0x4c9d54];for(let _0xe68c13=0x0;_0xe68c13<_0x55661[_0x810ef(0x1b19)];_0xe68c13++){let _0x660e54=_0x55661[_0xe68c13],_0x421859=_0x3ab6bb[_0x1d169d-_0x660e54+0x1],[_0x19721a,_0x445912]=_0x421859['index'];_0x2311ff[_0x810ef(0x1715)]([_0x19721a,_0x445912,_0x445912+_0x660e54,_0x421859['id']]);}}}return _0x2311ff;},_0x594a80=function(_0x4d560a,_0x664121){const _0xbffaef=_0x37e46c;for(let _0x32661d=0x0;_0x32661d<_0x4d560a[_0xbffaef(0x1b19)];_0x32661d+=0x1)if(!0x0===_0x664121[_0xbffaef(0x3170)](_0x4d560a[_0x32661d]))return!0x1;return!0x0;},_0x2c1440=function(_0x501d9e,_0x4946a1,_0x44f10f){const _0x18024a=_0x37e46c;let _0x539cfd=[];_0x44f10f['form']=_0x44f10f['form']||_0x18024a(0x47d);let 
_0x1f3afe=_0x501d9e['docs'];if(!_0x4946a1['goNext']||!_0x4946a1[_0x18024a(0x1774)][0x0])return _0x501d9e[_0x18024a(0x28b)]();let _0x4ab42a=Object[_0x18024a(0x1ea9)](_0x4946a1[_0x18024a(0x1774)][0x0]);for(let _0x493f64=0x0;_0x493f64<_0x1f3afe[_0x18024a(0x1b19)];_0x493f64++){if(_0x501d9e[_0x18024a(0x1aa7)]&&_0x501d9e[_0x18024a(0x1aa7)][_0x493f64]&&!0x0===_0x594a80(_0x4ab42a,_0x501d9e[_0x18024a(0x1aa7)][_0x493f64]))continue;let _0x4d9130=_0x1f3afe[_0x493f64],_0x158c3a=_0x3c1fe0(_0x4d9130,_0x4946a1,_0x44f10f);_0x158c3a[_0x18024a(0x1b19)]>0x0&&(_0x539cfd=_0x539cfd['concat'](_0x158c3a));}return _0x501d9e[_0x18024a(0x38d6)](_0x539cfd);},_0x838975=(_0x4d315,_0x109f47)=>{for(let _0x37755a=_0x4d315['length']-0x1;_0x37755a>=0x0;_0x37755a-=0x1)if(_0x4d315[_0x37755a]!==_0x109f47)return _0x4d315=_0x4d315['slice'](0x0,_0x37755a+0x1);return _0x4d315;},_0x56525b=function(_0x14a7c4){const _0x233d84=_0x37e46c;return _0x14a7c4['goNext']=_0x14a7c4['goNext'][_0x233d84(0x4833)](_0x39954c=>{const _0x30bcda=_0x233d84;if(0x0!==Object[_0x30bcda(0x1ea9)](_0x39954c)[_0x30bcda(0x1b19)])return _0x39954c;}),_0x14a7c4[_0x233d84(0x1774)]=_0x838975(_0x14a7c4[_0x233d84(0x1774)],void 0x0),_0x14a7c4['failTo']=_0x838975(_0x14a7c4[_0x233d84(0x2e58)],0x0),_0x14a7c4[_0x233d84(0x3175)]=_0x838975(_0x14a7c4[_0x233d84(0x3175)],null),_0x14a7c4;},_0x4a0868={'buildTrie':function(_0xc8d507){const _0x580a59=_0x37e46c,_0x132ba7=_0x1459f0(_0xc8d507,this[_0x580a59(0x4657)]());return _0x56525b(_0x132ba7);}};_0x4a0868[_0x37e46c(0x23cd)]=_0x4a0868[_0x37e46c(0x2c57)];const _0x274b29={'api':function(_0x4eaae9){const _0x456705=_0x37e46c;_0x4eaae9[_0x456705(0x3b3c)]['lookup']=function(_0xe57f6c,_0x5618c2={}){const _0x50baf=_0x456705;if(!_0xe57f6c)return this[_0x50baf(0x28b)]();_0x50baf(0x2431)==typeof _0xe57f6c&&(_0xe57f6c=[_0xe57f6c]);let _0x42aba6=(_0x31d829=_0xe57f6c,'[object\x20Object]'===Object['prototype']['toString']['call'](_0x31d829)?_0xe57f6c:_0x1459f0(_0xe57f6c,this[_0x50baf(0x4657)]));var _0x31d829;let 
_0x23f643=_0x2c1440(this,_0x42aba6,_0x5618c2);return _0x23f643=_0x23f643[_0x50baf(0x2439)](),_0x23f643;};},'lib':_0x4a0868},_0x3f0d40=function(_0x314a62,_0x25381a){return _0x25381a?(_0x314a62['forEach'](_0x579389=>{let _0x530eeb=_0x579389[0x0];_0x25381a[_0x530eeb]&&(_0x579389[0x0]=_0x25381a[_0x530eeb][0x0],_0x579389[0x1]+=_0x25381a[_0x530eeb][0x1],_0x579389[0x2]+=_0x25381a[_0x530eeb][0x1]);}),_0x314a62):_0x314a62;},_0x3457f4=function(_0x285e69,_0x22f91d){const _0x11b141=_0x37e46c;let {ptrs:_0x2e0ebe,byGroup:_0x35ee65}=_0x285e69;return _0x2e0ebe=_0x3f0d40(_0x2e0ebe,_0x22f91d),Object[_0x11b141(0x1ea9)](_0x35ee65)[_0x11b141(0xa21)](_0x40220e=>{_0x35ee65[_0x40220e]=_0x3f0d40(_0x35ee65[_0x40220e],_0x22f91d);}),{'ptrs':_0x2e0ebe,'byGroup':_0x35ee65};},_0x3d5806=function(_0x28d283,_0x26b752,_0x2ac8bb){const _0x36ba6c=_0x37e46c,_0x4f5c1d=_0x2ac8bb[_0x36ba6c(0x1578)][_0x36ba6c(0x1d8a)];return _0x36ba6c(0x4a80)==typeof _0x28d283&&(_0x28d283=String(_0x28d283)),_0x36ba6c(0x2431)==typeof _0x28d283&&(_0x28d283=_0x4f5c1d[_0x36ba6c(0x1744)](_0x28d283,_0x2ac8bb),_0x28d283=_0x4f5c1d[_0x36ba6c(0x2407)](_0x28d283,_0x26b752,_0x2ac8bb)),_0x28d283;},_0x17454d=_0x2cfe99=>'[object\x20Object]'===Object[_0x37e46c(0x3b3c)][_0x37e46c(0x8e8)][_0x37e46c(0x236b)](_0x2cfe99),_0x45c29a=_0x5515d1=>_0x5515d1&&_0x17454d(_0x5515d1)&&!0x0===_0x5515d1[_0x37e46c(0x4d72)],_0x5af94f=_0x7fb545=>_0x7fb545&&_0x17454d(_0x7fb545)&&!0x0===_0x7fb545[_0x37e46c(0x409b)],_0x3345d2={'matchOne':function(_0x131ba3,_0x35fab3,_0x2db117){const _0x2b2bc0=_0x37e46c,_0x388609=this['methods'][_0x2b2bc0(0x1d8a)];if(_0x45c29a(_0x131ba3))return this['intersection'](_0x131ba3)['eq'](0x0);if(_0x5af94f(_0x131ba3))return this[_0x2b2bc0(0xd6a)](_0x131ba3,{'tagger':!0x1,'matchOne':!0x0})[_0x2b2bc0(0x1961)];let 
_0x4aaaa0={'regs':_0x131ba3=_0x3d5806(_0x131ba3,_0x2db117,this[_0x2b2bc0(0x4657)]),'group':_0x35fab3,'justOne':!0x0},_0x5f5b39=_0x388609[_0x2b2bc0(0x2d96)](this[_0x2b2bc0(0x204b)],_0x4aaaa0,this[_0x2b2bc0(0x1aa7)]),{ptrs:_0x12d687,byGroup:_0xc40372}=_0x3457f4(_0x5f5b39,this[_0x2b2bc0(0x34ce)]),_0x3082de=this[_0x2b2bc0(0x324a)](_0x12d687);return _0x3082de[_0x2b2bc0(0x338c)]=_0xc40372,_0x3082de;},'match':function(_0x7438bb,_0x1b8a11,_0x4cac69){const _0x399214=_0x37e46c,_0x49e5a1=this['methods']['one'];if(_0x45c29a(_0x7438bb))return this['intersection'](_0x7438bb);if(_0x5af94f(_0x7438bb))return this[_0x399214(0xd6a)](_0x7438bb,{'tagger':!0x1})[_0x399214(0x1961)]['settle']();let _0x376495={'regs':_0x7438bb=_0x3d5806(_0x7438bb,_0x4cac69,this[_0x399214(0x4657)]),'group':_0x1b8a11},_0x2c3c56=_0x49e5a1[_0x399214(0x2d96)](this[_0x399214(0x204b)],_0x376495,this[_0x399214(0x1aa7)]),{ptrs:_0x3f3beb,byGroup:_0x568f20}=_0x3457f4(_0x2c3c56,this[_0x399214(0x34ce)]),_0x8c6d80=this[_0x399214(0x324a)](_0x3f3beb);return _0x8c6d80[_0x399214(0x338c)]=_0x568f20,_0x8c6d80;},'has':function(_0x4fa79a,_0x28770f,_0x10b80a){const _0x45cae8=_0x37e46c,_0x27a626=this[_0x45cae8(0x1578)][_0x45cae8(0x1d8a)];if(_0x45c29a(_0x4fa79a))return this[_0x45cae8(0x687)](_0x4fa79a)[_0x45cae8(0x34ce)][_0x45cae8(0x1b19)]>0x0;if(_0x5af94f(_0x4fa79a))return this[_0x45cae8(0xd6a)](_0x4fa79a,{'tagger':!0x1})[_0x45cae8(0x1961)][_0x45cae8(0x2108)];let _0x4dae1c={'regs':_0x4fa79a=_0x3d5806(_0x4fa79a,_0x10b80a,this[_0x45cae8(0x4657)]),'group':_0x28770f,'justOne':!0x0};return _0x27a626['match'](this[_0x45cae8(0x204b)],_0x4dae1c,this[_0x45cae8(0x1aa7)])[_0x45cae8(0x232)]['length']>0x0;},'if':function(_0x3aa9b8,_0x296505,_0xb79d7b){const _0x54a1be=_0x37e46c,_0x3934b6=this[_0x54a1be(0x1578)][_0x54a1be(0x1d8a)];if(_0x45c29a(_0x3aa9b8))return this['filter'](_0x3bf796=>_0x3bf796[_0x54a1be(0x687)](_0x3aa9b8)[_0x54a1be(0x2108)]);if(_0x5af94f(_0x3aa9b8)){let 
_0x20c53b=this['sweep'](_0x3aa9b8,{'tagger':!0x1})[_0x54a1be(0x1961)][_0x54a1be(0x2439)]();return this['if'](_0x20c53b);}let _0x37da94={'regs':_0x3aa9b8=_0x3d5806(_0x3aa9b8,_0xb79d7b,this['world']),'group':_0x296505,'justOne':!0x0},_0xd03a2b=this[_0x54a1be(0x34ce)],_0x531f99=this[_0x54a1be(0x1aa7)]||[];_0xd03a2b=_0xd03a2b[_0x54a1be(0x1465)]((_0x1fb3bc,_0x509175)=>{const _0x51026c=_0x54a1be;let _0x2ddc78=this[_0x51026c(0x38d6)]([_0x1fb3bc]);return _0x3934b6[_0x51026c(0x2d96)](_0x2ddc78['docs'],_0x37da94,_0x531f99[_0x509175])[_0x51026c(0x232)][_0x51026c(0x1b19)]>0x0;});let _0xe75c49=this['update'](_0xd03a2b);return this[_0x54a1be(0x1aa7)]&&(_0xe75c49['_cache']=_0xd03a2b[_0x54a1be(0x4833)](_0x4361de=>_0x531f99[_0x4361de[0x0]])),_0xe75c49;},'ifNo':function(_0x43a75e,_0x5b5118,_0x1a3284){const _0x2a2bcd=_0x37e46c,{methods:_0x592765}=this,_0x317b80=_0x592765[_0x2a2bcd(0x1d8a)];if(_0x45c29a(_0x43a75e))return this[_0x2a2bcd(0x1465)](_0x559bbd=>!_0x559bbd['intersection'](_0x43a75e)[_0x2a2bcd(0x2108)]);if(_0x5af94f(_0x43a75e)){let _0x2864b9=this['sweep'](_0x43a75e,{'tagger':!0x1})['view']['settle']();return this[_0x2a2bcd(0x385b)](_0x2864b9);}_0x43a75e=_0x3d5806(_0x43a75e,_0x1a3284,this['world']);let _0x404521=this[_0x2a2bcd(0x1aa7)]||[],_0x127150=this[_0x2a2bcd(0x1465)]((_0x3c5e99,_0x24e307)=>{const _0xf5e257=_0x2a2bcd;let _0x158595={'regs':_0x43a75e,'group':_0x5b5118,'justOne':!0x0};return 0x0===_0x317b80['match'](_0x3c5e99[_0xf5e257(0x204b)],_0x158595,_0x404521[_0x24e307])['ptrs'][_0xf5e257(0x1b19)];});return this['_cache']&&(_0x127150[_0x2a2bcd(0x1aa7)]=_0x127150[_0x2a2bcd(0x232)][_0x2a2bcd(0x4833)](_0x3e129f=>_0x404521[_0x3e129f[0x0]])),_0x127150;}},_0x3d21bd={'before':function(_0x51c870,_0x4de7af,_0x222264){const _0x1385ce=_0x37e46c,{indexN:_0x5daff5}=this['methods'][_0x1385ce(0x1d8a)]['pointer'];let _0x25bae1=[],_0x2dc667=_0x5daff5(this[_0x1385ce(0x34ce)]);Object[_0x1385ce(0x1ea9)](_0x2dc667)['forEach'](_0x350589=>{const _0x19f081=_0x1385ce;let 
_0x3e290d=_0x2dc667[_0x350589][_0x19f081(0x4c33)]((_0x46b2f0,_0x471eaa)=>_0x46b2f0[0x1]>_0x471eaa[0x1]?0x1:-0x1)[0x0];_0x3e290d[0x1]>0x0&&_0x25bae1[_0x19f081(0x1715)]([_0x3e290d[0x0],0x0,_0x3e290d[0x1]]);});let _0xdfc100=this[_0x1385ce(0x324a)](_0x25bae1);return _0x51c870?_0xdfc100[_0x1385ce(0x2d96)](_0x51c870,_0x4de7af,_0x222264):_0xdfc100;},'after':function(_0x4f08ee,_0x429433,_0x4a936b){const _0x5a1771=_0x37e46c,{indexN:_0x69a2c8}=this[_0x5a1771(0x1578)]['one'][_0x5a1771(0x43e4)];let _0x29fa4b=[],_0x3d829d=_0x69a2c8(this[_0x5a1771(0x34ce)]),_0x1efba7=this[_0x5a1771(0x295)];Object[_0x5a1771(0x1ea9)](_0x3d829d)[_0x5a1771(0xa21)](_0x5e4b8c=>{const _0x4affd9=_0x5a1771;let _0x126bdd=_0x3d829d[_0x5e4b8c]['sort']((_0x39f078,_0x83dc26)=>_0x39f078[0x1]>_0x83dc26[0x1]?-0x1:0x1)[0x0],[_0xdef123,,_0x21515f]=_0x126bdd;_0x21515f<_0x1efba7[_0xdef123][_0x4affd9(0x1b19)]&&_0x29fa4b[_0x4affd9(0x1715)]([_0xdef123,_0x21515f,_0x1efba7[_0xdef123][_0x4affd9(0x1b19)]]);});let _0x202223=this[_0x5a1771(0x324a)](_0x29fa4b);return _0x4f08ee?_0x202223[_0x5a1771(0x2d96)](_0x4f08ee,_0x429433,_0x4a936b):_0x202223;},'growLeft':function(_0x4ea3b7,_0x1cd5e2,_0x10fd41){const _0x3cc84d=_0x37e46c;'string'==typeof _0x4ea3b7&&(_0x4ea3b7=this[_0x3cc84d(0x4657)][_0x3cc84d(0x1578)][_0x3cc84d(0x1d8a)][_0x3cc84d(0x2407)](_0x4ea3b7,_0x10fd41,this[_0x3cc84d(0x4657)])),_0x4ea3b7[_0x4ea3b7['length']-0x1][_0x3cc84d(0x2681)]=!0x0;let _0x48156c=this[_0x3cc84d(0x34ce)];return this[_0x3cc84d(0xa21)]((_0x1ec62c,_0x1b0ef4)=>{const _0x314a7f=_0x3cc84d;let _0x19e70d=_0x1ec62c[_0x314a7f(0x5097)](_0x4ea3b7,_0x1cd5e2);if(_0x19e70d[_0x314a7f(0x2108)]){let _0x246b64=_0x19e70d[_0x314a7f(0x4a03)]();_0x48156c[_0x1b0ef4][0x1]-=_0x246b64['length'],_0x48156c[_0x1b0ef4][0x3]=_0x246b64[_0x314a7f(0x204b)][0x0][0x0]['id'];}}),this[_0x3cc84d(0x38d6)](_0x48156c);},'growRight':function(_0x56b33e,_0x27d4c5,_0x22a60d){const _0x4b9ad6=_0x37e46c;_0x4b9ad6(0x2431)==typeof 
_0x56b33e&&(_0x56b33e=this[_0x4b9ad6(0x4657)]['methods'][_0x4b9ad6(0x1d8a)][_0x4b9ad6(0x2407)](_0x56b33e,_0x22a60d,this[_0x4b9ad6(0x4657)])),_0x56b33e[0x0][_0x4b9ad6(0x4cc4)]=!0x0;let _0x5c8c10=this[_0x4b9ad6(0x34ce)];return this[_0x4b9ad6(0xa21)]((_0x536136,_0x2de764)=>{const _0x5b8c89=_0x4b9ad6;let _0x3a56dc=_0x536136[_0x5b8c89(0x1349)](_0x56b33e,_0x27d4c5);if(_0x3a56dc[_0x5b8c89(0x2108)]){let _0x110fea=_0x3a56dc[_0x5b8c89(0x4a03)]();_0x5c8c10[_0x2de764][0x2]+=_0x110fea[_0x5b8c89(0x1b19)],_0x5c8c10[_0x2de764][0x4]=null;}}),this['update'](_0x5c8c10);},'grow':function(_0x55a2d3,_0x48f1cb,_0x260249){const _0x5886d9=_0x37e46c;return this[_0x5886d9(0xfbb)](_0x55a2d3,_0x48f1cb,_0x260249)[_0x5886d9(0x51f4)](_0x55a2d3,_0x48f1cb,_0x260249);}},_0x427a29=function(_0x343c68,_0x2d59a7){return[_0x343c68[0x0],_0x343c68[0x1],_0x2d59a7[0x2]];},_0x33f8cd=(_0x327fef,_0x5d2228,_0x382c13)=>{const _0x8253ef=_0x37e46c;return _0x8253ef(0x2431)==typeof _0x327fef||(_0x20fc7d=_0x327fef,'[object\x20Array]'===Object[_0x8253ef(0x3b3c)]['toString'][_0x8253ef(0x236b)](_0x20fc7d))?_0x5d2228[_0x8253ef(0x2d96)](_0x327fef,_0x382c13):_0x327fef||_0x5d2228[_0x8253ef(0x28b)]();var _0x20fc7d;},_0x35a182=function(_0x1e4d5b,_0x2df776){const _0x2373ea=_0x37e46c;let [_0x3bad13,_0x4a2e66,_0x39c04f]=_0x1e4d5b;return _0x2df776['document'][_0x3bad13]&&_0x2df776[_0x2373ea(0x295)][_0x3bad13][_0x4a2e66]&&(_0x1e4d5b[0x3]=_0x1e4d5b[0x3]||_0x2df776[_0x2373ea(0x295)][_0x3bad13][_0x4a2e66]['id'],_0x2df776[_0x2373ea(0x295)][_0x3bad13][_0x39c04f-0x1]&&(_0x1e4d5b[0x4]=_0x1e4d5b[0x4]||_0x2df776['document'][_0x3bad13][_0x39c04f-0x1]['id'])),_0x1e4d5b;},_0x2de3bc={'splitOn':function(_0x4cd6cb,_0x5b4e4a){const _0x4a11d6=_0x37e46c,{splitAll:_0x1a772a}=this[_0x4a11d6(0x1578)][_0x4a11d6(0x1d8a)][_0x4a11d6(0x43e4)];let _0x599833=_0x33f8cd(_0x4cd6cb,this,_0x5b4e4a)['fullPointer'],_0x18a3c9=_0x1a772a(this[_0x4a11d6(0x34ce)],_0x599833),_0x157bd6=[];return _0x18a3c9['forEach'](_0x787e1c=>{const 
_0x280a6c=_0x4a11d6;_0x157bd6['push'](_0x787e1c['passthrough']),_0x157bd6[_0x280a6c(0x1715)](_0x787e1c[_0x280a6c(0x5097)]),_0x157bd6['push'](_0x787e1c[_0x280a6c(0x2d96)]),_0x157bd6['push'](_0x787e1c[_0x280a6c(0x1349)]);}),_0x157bd6=_0x157bd6[_0x4a11d6(0x1465)](_0x40462c=>_0x40462c),_0x157bd6=_0x157bd6[_0x4a11d6(0x4833)](_0x28c1e2=>_0x35a182(_0x28c1e2,this)),this['update'](_0x157bd6);},'splitBefore':function(_0x47f452,_0x16fe85){const _0x487330=_0x37e46c,{splitAll:_0x1f9e77}=this[_0x487330(0x1578)][_0x487330(0x1d8a)]['pointer'];let _0x222939=_0x33f8cd(_0x47f452,this,_0x16fe85)[_0x487330(0x34ce)],_0x1f9e73=_0x1f9e77(this['fullPointer'],_0x222939);for(let _0x5e78c7=0x0;_0x5e78c7<_0x1f9e73['length'];_0x5e78c7+=0x1)!_0x1f9e73[_0x5e78c7][_0x487330(0x1349)]&&_0x1f9e73[_0x5e78c7+0x1]&&_0x1f9e73[_0x5e78c7+0x1][_0x487330(0x5097)]&&_0x1f9e73[_0x5e78c7][_0x487330(0x2d96)]&&_0x1f9e73[_0x5e78c7][_0x487330(0x2d96)][0x0]===_0x1f9e73[_0x5e78c7+0x1][_0x487330(0x5097)][0x0]&&(_0x1f9e73[_0x5e78c7][_0x487330(0x1349)]=_0x1f9e73[_0x5e78c7+0x1][_0x487330(0x5097)],delete _0x1f9e73[_0x5e78c7+0x1][_0x487330(0x5097)]);let _0x1d377d=[];return _0x1f9e73[_0x487330(0xa21)](_0x2e20bf=>{const _0x2a1869=_0x487330;_0x1d377d['push'](_0x2e20bf[_0x2a1869(0x3054)]),_0x1d377d[_0x2a1869(0x1715)](_0x2e20bf[_0x2a1869(0x5097)]),_0x2e20bf[_0x2a1869(0x2d96)]&&_0x2e20bf['after']?_0x1d377d[_0x2a1869(0x1715)](_0x427a29(_0x2e20bf[_0x2a1869(0x2d96)],_0x2e20bf[_0x2a1869(0x1349)])):_0x1d377d[_0x2a1869(0x1715)](_0x2e20bf[_0x2a1869(0x2d96)]);}),_0x1d377d=_0x1d377d['filter'](_0x5f07b0=>_0x5f07b0),_0x1d377d=_0x1d377d[_0x487330(0x4833)](_0x583ac1=>_0x35a182(_0x583ac1,this)),this['update'](_0x1d377d);},'splitAfter':function(_0x17c408,_0xf11a89){const _0x395481=_0x37e46c,{splitAll:_0xe9b71}=this[_0x395481(0x1578)][_0x395481(0x1d8a)][_0x395481(0x43e4)];let _0x32815c=_0x33f8cd(_0x17c408,this,_0xf11a89)[_0x395481(0x34ce)],_0x27ad30=_0xe9b71(this['fullPointer'],_0x32815c),_0x3a736b=[];return 
_0x27ad30[_0x395481(0xa21)](_0x253e13=>{const _0x876b9f=_0x395481;_0x3a736b['push'](_0x253e13[_0x876b9f(0x3054)]),_0x253e13[_0x876b9f(0x5097)]&&_0x253e13['match']?_0x3a736b[_0x876b9f(0x1715)](_0x427a29(_0x253e13[_0x876b9f(0x5097)],_0x253e13[_0x876b9f(0x2d96)])):(_0x3a736b[_0x876b9f(0x1715)](_0x253e13[_0x876b9f(0x5097)]),_0x3a736b[_0x876b9f(0x1715)](_0x253e13[_0x876b9f(0x2d96)])),_0x3a736b[_0x876b9f(0x1715)](_0x253e13[_0x876b9f(0x1349)]);}),_0x3a736b=_0x3a736b[_0x395481(0x1465)](_0x55a9bc=>_0x55a9bc),_0x3a736b=_0x3a736b['map'](_0x22d8aa=>_0x35a182(_0x22d8aa,this)),this[_0x395481(0x38d6)](_0x3a736b);}};_0x2de3bc['split']=_0x2de3bc['splitAfter'];const _0x42edd6=_0x2de3bc,_0x54c8eb=function(_0x272240,_0x3c128a){return!(!_0x272240||!_0x3c128a)&&(_0x272240[0x0]===_0x3c128a[0x0]&&_0x272240[0x2]===_0x3c128a[0x1]);},_0x3ea3b6=function(_0x41b8d5,_0xf38748,_0x36fcbe){const _0x375ee1=_0x37e46c,_0x5862e5=_0x41b8d5[_0x375ee1(0x4657)],_0x248eac=_0x5862e5[_0x375ee1(0x1578)][_0x375ee1(0x1d8a)]['parseMatch'];_0x36fcbe=_0x36fcbe||'^.';let _0x1661bd=_0x248eac(_0xf38748=_0xf38748||'.$',{},_0x5862e5),_0x8b6e5=_0x248eac(_0x36fcbe,{},_0x5862e5);_0x1661bd[_0x1661bd[_0x375ee1(0x1b19)]-0x1][_0x375ee1(0x2681)]=!0x0,_0x8b6e5[0x0][_0x375ee1(0x4cc4)]=!0x0;let _0xb4d989=_0x41b8d5[_0x375ee1(0x34ce)],_0x313d6c=[_0xb4d989[0x0]];for(let _0x42c9da=0x1;_0x42c9da<_0xb4d989['length'];_0x42c9da+=0x1){let _0x26de7d=_0x313d6c[_0x313d6c[_0x375ee1(0x1b19)]-0x1],_0x43c197=_0xb4d989[_0x42c9da],_0x2841ed=_0x41b8d5[_0x375ee1(0x38d6)]([_0x26de7d]),_0x30398e=_0x41b8d5['update']([_0x43c197]);_0x54c8eb(_0x26de7d,_0x43c197)&&_0x2841ed[_0x375ee1(0x3170)](_0x1661bd)&&_0x30398e[_0x375ee1(0x3170)](_0x8b6e5)?_0x313d6c[_0x313d6c[_0x375ee1(0x1b19)]-0x1]=[_0x26de7d[0x0],_0x26de7d[0x1],_0x43c197[0x2],_0x26de7d[0x3],_0x43c197[0x4]]:_0x313d6c[_0x375ee1(0x1715)](_0x43c197);}return _0x41b8d5['update'](_0x313d6c);},_0x2cfa4f={'joinIf':function(_0x24dddb,_0x158a4e){return 
_0x3ea3b6(this,_0x24dddb,_0x158a4e);},'join':function(){return _0x3ea3b6(this);}},_0x2eed84=Object[_0x37e46c(0x4e14)]({},_0x3345d2,_0x3d21bd,_0x42edd6,_0x2cfa4f);_0x2eed84[_0x37e46c(0x215d)]=_0x2eed84[_0x37e46c(0x5097)],_0x2eed84[_0x37e46c(0x261d)]=_0x2eed84[_0x37e46c(0x5097)],_0x2eed84[_0x37e46c(0x4051)]=_0x2eed84['after'],_0x2eed84[_0x37e46c(0x342c)]=_0x2eed84['after'],_0x2eed84[_0x37e46c(0x45f9)]=_0x2eed84[_0x37e46c(0x385b)];const _0xa04ced=function(_0x25e322){const _0x453bec=_0x37e46c;Object[_0x453bec(0x4e14)](_0x25e322[_0x453bec(0x3b3c)],_0x2eed84);},_0x16ab7a=/(?:^|\s)([![^]*(?:<[^<]*>)?\/.*?[^\\/]\/[?\]+*$~]*)(?:\s|$)/,_0x49ea90=/([!~[^]*(?:<[^<]*>)?\([^)]+[^\\)]\)[?\]+*$~]*)(?:\s|$)/,_0xb74045=/ /g,_0x1bf3a3=_0x24a100=>/^[![^]*(<[^<]*>)?\//['test'](_0x24a100)&&/\/[?\]+*$~]*$/[_0x37e46c(0x1769)](_0x24a100),_0x4735f6=function(_0x42796d){const _0x36f3e2=_0x37e46c;return _0x42796d=(_0x42796d=_0x42796d['map'](_0x380b36=>_0x380b36[_0x36f3e2(0x1b23)]()))[_0x36f3e2(0x1465)](_0x51a5ec=>_0x51a5ec);},_0x217ece=function(_0x43a390){const _0x838aba=_0x37e46c;let _0x47ab9d=_0x43a390[_0x838aba(0x1117)](_0x16ab7a),_0x20d5c3=[];_0x47ab9d[_0x838aba(0xa21)](_0x2a55d8=>{const _0x59be44=_0x838aba;_0x1bf3a3(_0x2a55d8)?_0x20d5c3[_0x59be44(0x1715)](_0x2a55d8):_0x20d5c3=_0x20d5c3[_0x59be44(0x1d1d)](_0x2a55d8[_0x59be44(0x1117)](_0x49ea90));}),_0x20d5c3=_0x4735f6(_0x20d5c3);let _0x55b17a=[];return _0x20d5c3['forEach'](_0x48cab8=>{const _0x174695=_0x838aba;(_0x4156a5=>/^[![^]*(<[^<]*>)?\(/[_0x174695(0x1769)](_0x4156a5)&&/\)[?\]+*$~]*$/[_0x174695(0x1769)](_0x4156a5))(_0x48cab8)||_0x1bf3a3(_0x48cab8)?_0x55b17a[_0x174695(0x1715)](_0x48cab8):_0x55b17a=_0x55b17a[_0x174695(0x1d1d)](_0x48cab8['split'](_0xb74045));}),_0x55b17a=_0x4735f6(_0x55b17a),_0x55b17a;},_0x5adccb=/\{([0-9]+)?(, *[0-9]*)?\}/,_0x15e151=/&&/,_0xd50a96=new 
RegExp(/^<\s*(\S+)\s*>/),_0x22ffc2=_0x14ed08=>_0x14ed08[_0x37e46c(0x2fe2)](0x0)[_0x37e46c(0x44ff)]()+_0x14ed08['substring'](0x1),_0x193dea=_0x5e3e3f=>_0x5e3e3f[_0x37e46c(0x2fe2)](_0x5e3e3f[_0x37e46c(0x1b19)]-0x1),_0x33ed24=_0xa04ea4=>_0xa04ea4['charAt'](0x0),_0x56ac13=_0x51e447=>_0x51e447[_0x37e46c(0x37b5)](0x1),_0x4c0793=_0x2a4206=>_0x2a4206[_0x37e46c(0x37b5)](0x0,_0x2a4206[_0x37e46c(0x1b19)]-0x1),_0x9e8e0e=function(_0x5efb5b){return _0x5efb5b=_0x56ac13(_0x5efb5b),_0x5efb5b=_0x4c0793(_0x5efb5b);},_0x486f70=function(_0x197b65,_0xbc58){const _0x18c595=_0x37e46c;let _0x356fc4={};for(let _0x160591=0x0;_0x160591<0x2;_0x160591+=0x1){if('$'===_0x193dea(_0x197b65)&&(_0x356fc4[_0x18c595(0x2681)]=!0x0,_0x197b65=_0x4c0793(_0x197b65)),'^'===_0x33ed24(_0x197b65)&&(_0x356fc4[_0x18c595(0x4cc4)]=!0x0,_0x197b65=_0x56ac13(_0x197b65)),'?'===_0x193dea(_0x197b65)&&(_0x356fc4['optional']=!0x0,_0x197b65=_0x4c0793(_0x197b65)),('['===_0x33ed24(_0x197b65)||']'===_0x193dea(_0x197b65))&&(_0x356fc4['group']=null,'['===_0x33ed24(_0x197b65)&&(_0x356fc4[_0x18c595(0xaed)]=!0x0),']'===_0x193dea(_0x197b65)&&(_0x356fc4[_0x18c595(0x392)]=!0x0),_0x197b65=(_0x197b65=_0x197b65[_0x18c595(0x741)](/^\[/,''))[_0x18c595(0x741)](/\]$/,''),'<'===_0x33ed24(_0x197b65))){const 
_0x276aae=_0xd50a96[_0x18c595(0x198d)](_0x197b65);_0x276aae[_0x18c595(0x1b19)]>=0x2&&(_0x356fc4[_0x18c595(0x4e5b)]=_0x276aae[0x1],_0x197b65=_0x197b65[_0x18c595(0x741)](_0x276aae[0x0],''));}if('+'===_0x193dea(_0x197b65)&&(_0x356fc4['greedy']=!0x0,_0x197b65=_0x4c0793(_0x197b65)),'*'!==_0x197b65&&'*'===_0x193dea(_0x197b65)&&'\x5c*'!==_0x197b65&&(_0x356fc4[_0x18c595(0x48aa)]=!0x0,_0x197b65=_0x4c0793(_0x197b65)),'!'===_0x33ed24(_0x197b65)&&(_0x356fc4[_0x18c595(0x298)]=!0x0,_0x197b65=_0x56ac13(_0x197b65)),'~'===_0x33ed24(_0x197b65)&&'~'===_0x193dea(_0x197b65)&&_0x197b65['length']>0x2&&(_0x197b65=_0x9e8e0e(_0x197b65),_0x356fc4[_0x18c595(0x4c2c)]=!0x0,_0x356fc4['min']=_0xbc58[_0x18c595(0x4c2c)]||0.85,!0x1===/\(/[_0x18c595(0x1769)](_0x197b65)))return _0x356fc4[_0x18c595(0x2506)]=_0x197b65,_0x356fc4;if('/'===_0x33ed24(_0x197b65)&&'/'===_0x193dea(_0x197b65))return _0x197b65=_0x9e8e0e(_0x197b65),_0xbc58[_0x18c595(0x1f82)]&&(_0x356fc4['use']=_0x18c595(0x4006)),_0x356fc4[_0x18c595(0x41d2)]=new RegExp(_0x197b65),_0x356fc4;if(!0x0===_0x5adccb['test'](_0x197b65)&&(_0x197b65=_0x197b65[_0x18c595(0x741)](_0x5adccb,(_0x249a94,_0x59162c,_0x178c83)=>(void 0x0===_0x178c83?(_0x356fc4[_0x18c595(0x37c8)]=Number(_0x59162c),_0x356fc4[_0x18c595(0x4529)]=Number(_0x59162c)):(_0x178c83=_0x178c83['replace'](/, */,''),void 0x0===_0x59162c?(_0x356fc4[_0x18c595(0x37c8)]=0x0,_0x356fc4[_0x18c595(0x4529)]=Number(_0x178c83)):(_0x356fc4[_0x18c595(0x37c8)]=Number(_0x59162c),_0x356fc4[_0x18c595(0x4529)]=Number(_0x178c83||0x3e7))),_0x356fc4[_0x18c595(0x48aa)]=!0x0,_0x356fc4[_0x18c595(0x37c8)]||(_0x356fc4[_0x18c595(0x51e4)]=!0x0),''))),'('===_0x33ed24(_0x197b65)&&')'===_0x193dea(_0x197b65)){_0x15e151['test'](_0x197b65)?(_0x356fc4[_0x18c595(0x7c8)]=_0x197b65['split'](_0x15e151),_0x356fc4['operator']=_0x18c595(0x2663)):(_0x356fc4[_0x18c595(0x7c8)]=_0x197b65[_0x18c595(0x1117)]('|'),_0x356fc4[_0x18c595(0x1182)]='or'),_0x356fc4[_0x18c595(0x7c8)][0x0]=_0x56ac13(_0x356fc4[_0x18c595(0x7c8)][0x0]);let 
_0x45359b=_0x356fc4[_0x18c595(0x7c8)][_0x18c595(0x1b19)]-0x1;_0x356fc4[_0x18c595(0x7c8)][_0x45359b]=_0x4c0793(_0x356fc4[_0x18c595(0x7c8)][_0x45359b]),_0x356fc4[_0x18c595(0x7c8)]=_0x356fc4[_0x18c595(0x7c8)][_0x18c595(0x4833)](_0x284f7f=>_0x284f7f[_0x18c595(0x1b23)]()),_0x356fc4[_0x18c595(0x7c8)]=_0x356fc4['choices']['filter'](_0x276b51=>_0x276b51),_0x356fc4['choices']=_0x356fc4[_0x18c595(0x7c8)][_0x18c595(0x4833)](_0x8d7bc=>_0x8d7bc['split'](/ /g)[_0x18c595(0x4833)](_0x4484e3=>_0x486f70(_0x4484e3,_0xbc58))),_0x197b65='';}if('{'===_0x33ed24(_0x197b65)&&'}'===_0x193dea(_0x197b65)){if(_0x197b65=_0x9e8e0e(_0x197b65),_0x356fc4[_0x18c595(0x507b)]=_0x197b65,/\//[_0x18c595(0x1769)](_0x197b65)){let _0x226bb3=_0x356fc4[_0x18c595(0x507b)][_0x18c595(0x1117)](/\//);_0x356fc4[_0x18c595(0x507b)]=_0x226bb3[0x0],_0x356fc4[_0x18c595(0x333f)]=_0x226bb3[0x1],_0x18c595(0x3f9e)===_0x356fc4[_0x18c595(0x333f)]&&(_0x356fc4[_0x18c595(0x333f)]=_0x18c595(0x4972)),_0x356fc4[_0x18c595(0x333f)]=_0x356fc4[_0x18c595(0x333f)][_0x18c595(0x2fe2)](0x0)[_0x18c595(0x44ff)]()+_0x356fc4[_0x18c595(0x333f)]['substr'](0x1)[_0x18c595(0x6e8)](),void 0x0!==_0x226bb3[0x2]&&(_0x356fc4[_0x18c595(0x3b97)]=_0x226bb3[0x2]);}return _0x356fc4;}if('<'===_0x33ed24(_0x197b65)&&'>'===_0x193dea(_0x197b65))return _0x197b65=_0x9e8e0e(_0x197b65),_0x356fc4['chunk']=_0x22ffc2(_0x197b65),_0x356fc4['greedy']=!0x0,_0x356fc4;if('%'===_0x33ed24(_0x197b65)&&'%'===_0x193dea(_0x197b65))return 
_0x197b65=_0x9e8e0e(_0x197b65),_0x356fc4['switch']=_0x197b65,_0x356fc4;}return'#'===_0x33ed24(_0x197b65)?(_0x356fc4['tag']=_0x56ac13(_0x197b65),_0x356fc4[_0x18c595(0x15a9)]=_0x22ffc2(_0x356fc4[_0x18c595(0x15a9)]),_0x356fc4):'@'===_0x33ed24(_0x197b65)?(_0x356fc4[_0x18c595(0x510c)]=_0x56ac13(_0x197b65),_0x356fc4):'.'===_0x197b65?(_0x356fc4[_0x18c595(0x3f9)]=!0x0,_0x356fc4):'*'===_0x197b65?(_0x356fc4['anything']=!0x0,_0x356fc4[_0x18c595(0x48aa)]=!0x0,_0x356fc4[_0x18c595(0x51e4)]=!0x0,_0x356fc4):(_0x197b65&&(_0x197b65=(_0x197b65=_0x197b65[_0x18c595(0x741)]('\x5c*','*'))[_0x18c595(0x741)]('\x5c.','.'),_0xbc58[_0x18c595(0x1f82)]?_0x356fc4[_0x18c595(0x84a)]=_0x18c595(0x4006):_0x197b65=_0x197b65[_0x18c595(0x6e8)](),_0x356fc4[_0x18c595(0x2506)]=_0x197b65),_0x356fc4);},_0x2884bf=_0x486f70,_0x57befd=/[a-z0-9][-–—][a-z]/i,_0x14cc27=function(_0xc34b34,_0x3d5c7c){const _0x14c4c3=_0x37e46c;let _0x5f19fa=_0x3d5c7c[_0x14c4c3(0x1556)][_0x14c4c3(0x1d8a)][_0x14c4c3(0x4145)];for(let _0x4422fe=_0xc34b34[_0x14c4c3(0x1b19)]-0x1;_0x4422fe>=0x0;_0x4422fe-=0x1){let _0xb0e64f=_0xc34b34[_0x4422fe];if(_0xb0e64f[_0x14c4c3(0x2506)]&&_0x57befd[_0x14c4c3(0x1769)](_0xb0e64f['word'])){let _0xc60f31=_0xb0e64f['word'][_0x14c4c3(0x1117)](/[-–—]/g);if(_0x5f19fa['hasOwnProperty'](_0xc60f31[0x0]))continue;_0xc60f31=_0xc60f31[_0x14c4c3(0x1465)](_0x5e8587=>_0x5e8587)[_0x14c4c3(0x78b)](),_0xc34b34[_0x14c4c3(0x4986)](_0x4422fe,0x1),_0xc60f31[_0x14c4c3(0xa21)](_0x131e99=>{const _0x214a4b=_0x14c4c3;let _0x2c94d5=Object[_0x214a4b(0x4e14)]({},_0xb0e64f);_0x2c94d5[_0x214a4b(0x2506)]=_0x131e99,_0xc34b34['splice'](_0x4422fe,0x0,_0x2c94d5);});}}return _0xc34b34;},_0x2f5dc7=function(_0x577e8d,_0x23b8c9){const _0x35e35e=_0x37e46c;let {all:_0x2e7eec}=_0x23b8c9[_0x35e35e(0x1578)][_0x35e35e(0x21c9)][_0x35e35e(0x5161)]['verb']||{},_0x4d2fbe=_0x577e8d[_0x35e35e(0x507b)];return _0x2e7eec?_0x2e7eec(_0x4d2fbe,_0x23b8c9['model']):[];},_0x3ea7d9=function(_0x18f400,_0x2b20f2){const _0x2de3c6=_0x37e46c;let 
{all:_0x3e3d00}=_0x2b20f2[_0x2de3c6(0x1578)][_0x2de3c6(0x21c9)][_0x2de3c6(0x5161)][_0x2de3c6(0x4d62)]||{};return _0x3e3d00?_0x3e3d00(_0x18f400[_0x2de3c6(0x507b)],_0x2b20f2[_0x2de3c6(0x1556)]):[_0x18f400[_0x2de3c6(0x507b)]];},_0x411910=function(_0x546eaf,_0x145b4d){const _0x5580b5=_0x37e46c;let {all:_0x59aabb}=_0x145b4d['methods'][_0x5580b5(0x21c9)][_0x5580b5(0x5161)]['adjective']||{};return _0x59aabb?_0x59aabb(_0x546eaf[_0x5580b5(0x507b)],_0x145b4d[_0x5580b5(0x1556)]):[_0x546eaf[_0x5580b5(0x507b)]];},_0x52804e=function(_0x24a99e,_0x779949){const _0x11bce0=_0x37e46c;return _0x24a99e=_0x24a99e[_0x11bce0(0x4833)](_0x2e4b4e=>{const _0xc92c2f=_0x11bce0;if(_0x2e4b4e['root']){if(_0x779949[_0xc92c2f(0x1578)][_0xc92c2f(0x21c9)]&&_0x779949['methods'][_0xc92c2f(0x21c9)][_0xc92c2f(0x5161)]){let _0x523a70=[];_0x2e4b4e['pos']?_0xc92c2f(0x487b)===_0x2e4b4e[_0xc92c2f(0x333f)]?_0x523a70=_0x523a70['concat'](_0x2f5dc7(_0x2e4b4e,_0x779949)):_0xc92c2f(0x1786)===_0x2e4b4e[_0xc92c2f(0x333f)]?_0x523a70=_0x523a70[_0xc92c2f(0x1d1d)](_0x3ea7d9(_0x2e4b4e,_0x779949)):_0xc92c2f(0x4972)===_0x2e4b4e[_0xc92c2f(0x333f)]&&(_0x523a70=_0x523a70[_0xc92c2f(0x1d1d)](_0x411910(_0x2e4b4e,_0x779949))):(_0x523a70=_0x523a70[_0xc92c2f(0x1d1d)](_0x2f5dc7(_0x2e4b4e,_0x779949)),_0x523a70=_0x523a70['concat'](_0x3ea7d9(_0x2e4b4e,_0x779949)),_0x523a70=_0x523a70[_0xc92c2f(0x1d1d)](_0x411910(_0x2e4b4e,_0x779949))),_0x523a70=_0x523a70[_0xc92c2f(0x1465)](_0x5c1423=>_0x5c1423),_0x523a70['length']>0x0&&(_0x2e4b4e[_0xc92c2f(0x1182)]='or',_0x2e4b4e[_0xc92c2f(0x36fe)]=new Set(_0x523a70));}else _0x2e4b4e[_0xc92c2f(0x192e)]=_0x2e4b4e[_0xc92c2f(0x507b)],delete _0x2e4b4e['id'],delete _0x2e4b4e[_0xc92c2f(0x507b)];}return _0x2e4b4e;}),_0x24a99e;},_0x6088fc=function(_0x292c53){const _0x3b45fc=_0x37e46c;return _0x292c53=function(_0x155ef9){const _0x2782e4=a0_0x11e7;let _0x1f8608=0x0,_0x2b95b5=null;for(let _0x3e8d55=0x0;_0x3e8d55<_0x155ef9[_0x2782e4(0x1b19)];_0x3e8d55++){const 
_0x3a11e9=_0x155ef9[_0x3e8d55];!0x0===_0x3a11e9[_0x2782e4(0xaed)]&&(_0x2b95b5=_0x3a11e9['group'],null===_0x2b95b5&&(_0x2b95b5=String(_0x1f8608),_0x1f8608+=0x1)),null!==_0x2b95b5&&(_0x3a11e9[_0x2782e4(0x4e5b)]=_0x2b95b5),!0x0===_0x3a11e9[_0x2782e4(0x392)]&&(_0x2b95b5=null);}return _0x155ef9;}(_0x292c53),_0x292c53=_0x292c53[_0x3b45fc(0x4833)](_0xe17aa9=>{const _0x497617=_0x3b45fc;if(void 0x0!==_0xe17aa9[_0x497617(0x7c8)]){if('or'!==_0xe17aa9[_0x497617(0x1182)])return _0xe17aa9;if(!0x0===_0xe17aa9['fuzzy'])return _0xe17aa9;let _0x4b4b96=_0xe17aa9[_0x497617(0x7c8)][_0x497617(0x12d8)](_0x3064b4=>{const _0x6a45db=_0x497617;if(0x1!==_0x3064b4[_0x6a45db(0x1b19)])return!0x1;let _0x54bac6=_0x3064b4[0x0];return!0x0!==_0x54bac6[_0x6a45db(0x4c2c)]&&!_0x54bac6['start']&&!_0x54bac6[_0x6a45db(0x2681)]&&void 0x0!==_0x54bac6['word']&&!0x0!==_0x54bac6['negative']&&!0x0!==_0x54bac6[_0x6a45db(0x51e4)]&&!0x0!==_0x54bac6[_0x6a45db(0x510c)];});!0x0===_0x4b4b96&&(_0xe17aa9[_0x497617(0x36fe)]=new Set(),_0xe17aa9[_0x497617(0x7c8)][_0x497617(0xa21)](_0x434ddf=>{const _0x679454=_0x497617;_0xe17aa9[_0x679454(0x36fe)][_0x679454(0x362c)](_0x434ddf[0x0][_0x679454(0x2506)]);}),delete _0xe17aa9[_0x497617(0x7c8)]);}return _0xe17aa9;}),_0x292c53=function(_0x58689d){const _0x425a8e=_0x3b45fc;return _0x58689d['map'](_0x448ec8=>(_0x448ec8[_0x425a8e(0x4c2c)]&&_0x448ec8[_0x425a8e(0x7c8)]&&_0x448ec8[_0x425a8e(0x7c8)]['forEach'](_0x1f9309=>{const _0x4ffa48=_0x425a8e;0x1===_0x1f9309[_0x4ffa48(0x1b19)]&&_0x1f9309[0x0][_0x4ffa48(0x2506)]&&(_0x1f9309[0x0]['fuzzy']=!0x0,_0x1f9309[0x0][_0x4ffa48(0x37c8)]=_0x448ec8['min']);}),_0x448ec8));}(_0x292c53),_0x292c53;},_0x41307d=function(_0xa274c8,_0x2da48a,_0x3ac22a){const _0x4a6c7e=_0x37e46c;if(null==_0xa274c8||''===_0xa274c8)return[];_0x2da48a=_0x2da48a||{},'number'==typeof _0xa274c8&&(_0xa274c8=String(_0xa274c8));let _0x1a0cc5=_0x217ece(_0xa274c8);return 
_0x1a0cc5=_0x1a0cc5[_0x4a6c7e(0x4833)](_0x22a92a=>_0x2884bf(_0x22a92a,_0x2da48a)),_0x1a0cc5=_0x14cc27(_0x1a0cc5,_0x3ac22a),_0x1a0cc5=_0x52804e(_0x1a0cc5,_0x3ac22a),_0x1a0cc5=_0x6088fc(_0x1a0cc5,_0x2da48a),_0x1a0cc5;},_0x2d092a=function(_0x506aae,_0x5b57da){const _0x3b61c5=_0x37e46c;for(let _0x4d0e53 of _0x5b57da)if(_0x506aae[_0x3b61c5(0x3170)](_0x4d0e53))return!0x0;return!0x1;},_0x4fe508=function(_0x32c1fa,_0x45ef3f){const _0xa05085=_0x37e46c;for(let _0x3ccf31=0x0;_0x3ccf31<_0x32c1fa[_0xa05085(0x1b19)];_0x3ccf31+=0x1){let _0x3e9297=_0x32c1fa[_0x3ccf31];if(!0x0!==_0x3e9297[_0xa05085(0x51e4)]&&!0x0!==_0x3e9297[_0xa05085(0x298)]&&!0x0!==_0x3e9297[_0xa05085(0x4c2c)]){if(void 0x0!==_0x3e9297[_0xa05085(0x2506)]&&!0x1===_0x45ef3f['has'](_0x3e9297[_0xa05085(0x2506)]))return!0x0;if(void 0x0!==_0x3e9297[_0xa05085(0x15a9)]&&!0x1===_0x45ef3f[_0xa05085(0x3170)]('#'+_0x3e9297[_0xa05085(0x15a9)]))return!0x0;if(_0x3e9297[_0xa05085(0x36fe)]&&!0x1===_0x2d092a(_0x3e9297['fastOr'],_0x45ef3f))return!0x1;}}return!0x1;},_0x3aa909=function(_0x315355,_0x4b9bd6,_0xa7713e=0x3){const _0x2c90c6=_0x37e46c;if(_0x315355===_0x4b9bd6)return 0x1;if(_0x315355[_0x2c90c6(0x1b19)]<_0xa7713e||_0x4b9bd6['length']<_0xa7713e)return 0x0;const _0x1afcf2=function(_0x180f58,_0xef4d17){const _0x18e3e4=_0x2c90c6;let _0x249180=_0x180f58['length'],_0x1e35b1=_0xef4d17['length'];if(0x0===_0x249180)return _0x1e35b1;if(0x0===_0x1e35b1)return _0x249180;let _0x5e9e2f=(_0x1e35b1>_0x249180?_0x1e35b1:_0x249180)+0x1;if(Math[_0x18e3e4(0xbe0)](_0x249180-_0x1e35b1)>(_0x5e9e2f||0x64))return _0x5e9e2f||0x64;let _0x1e4621,_0x4af160,_0x25d40c,_0x419798,_0x3fc96e,_0x49e915,_0x4d6e27=[];for(let _0x1e91d0=0x0;_0x1e91d0<_0x5e9e2f;_0x1e91d0++)_0x4d6e27[_0x1e91d0]=[_0x1e91d0],_0x4d6e27[_0x1e91d0]['length']=_0x5e9e2f;for(let _0x5d75c4=0x0;_0x5d75c4<_0x5e9e2f;_0x5d75c4++)_0x4d6e27[0x0][_0x5d75c4]=_0x5d75c4;for(let 
_0x44ce01=0x1;_0x44ce01<=_0x249180;++_0x44ce01)for(_0x4af160=_0x180f58[_0x44ce01-0x1],_0x1e4621=0x1;_0x1e4621<=_0x1e35b1;++_0x1e4621){if(_0x44ce01===_0x1e4621&&_0x4d6e27[_0x44ce01][_0x1e4621]>0x4)return _0x249180;_0x25d40c=_0xef4d17[_0x1e4621-0x1],_0x419798=_0x4af160===_0x25d40c?0x0:0x1,_0x3fc96e=_0x4d6e27[_0x44ce01-0x1][_0x1e4621]+0x1,(_0x49e915=_0x4d6e27[_0x44ce01][_0x1e4621-0x1]+0x1)<_0x3fc96e&&(_0x3fc96e=_0x49e915),(_0x49e915=_0x4d6e27[_0x44ce01-0x1][_0x1e4621-0x1]+_0x419798)<_0x3fc96e&&(_0x3fc96e=_0x49e915);let _0x3aa3ad=_0x44ce01>0x1&&_0x1e4621>0x1&&_0x4af160===_0xef4d17[_0x1e4621-0x2]&&_0x180f58[_0x44ce01-0x2]===_0x25d40c&&(_0x49e915=_0x4d6e27[_0x44ce01-0x2][_0x1e4621-0x2]+_0x419798)<_0x3fc96e;_0x4d6e27[_0x44ce01][_0x1e4621]=_0x3aa3ad?_0x49e915:_0x3fc96e;}return _0x4d6e27[_0x249180][_0x1e35b1];}(_0x315355,_0x4b9bd6);let _0x3f5c71=Math[_0x2c90c6(0x4529)](_0x315355['length'],_0x4b9bd6['length']);return 0x1-(0x0===_0x3f5c71?0x0:_0x1afcf2/_0x3f5c71);},_0x4792a5=/([\u0022\uFF02\u0027\u201C\u2018\u201F\u201B\u201E\u2E42\u201A\u00AB\u2039\u2035\u2036\u2037\u301D\u0060\u301F])/,_0x179064=/([\u0022\uFF02\u0027\u201D\u2019\u00BB\u203A\u2032\u2033\u2034\u301E\u00B4])/,_0x2e139c=/^[-–—]$/,_0x12512e=/ [-–—]{1,3} 
/,_0x2743b1=(_0x1921cf,_0x4ddb97)=>-0x1!==_0x1921cf[_0x37e46c(0x24ce)][_0x37e46c(0x8c9)](_0x4ddb97),_0x54cde6={'hasQuote':_0x199704=>_0x4792a5[_0x37e46c(0x1769)](_0x199704[_0x37e46c(0x1228)])||_0x179064[_0x37e46c(0x1769)](_0x199704[_0x37e46c(0x24ce)]),'hasComma':_0x437c9c=>_0x2743b1(_0x437c9c,','),'hasPeriod':_0x44dabc=>!0x0===_0x2743b1(_0x44dabc,'.')&&!0x1===_0x2743b1(_0x44dabc,_0x37e46c(0x278e)),'hasExclamation':_0x3d2c0a=>_0x2743b1(_0x3d2c0a,'!'),'hasQuestionMark':_0x5e6dde=>_0x2743b1(_0x5e6dde,'?')||_0x2743b1(_0x5e6dde,'¿'),'hasEllipses':_0x11e50d=>_0x2743b1(_0x11e50d,'..')||_0x2743b1(_0x11e50d,'…'),'hasSemicolon':_0x1187ef=>_0x2743b1(_0x1187ef,';'),'hasColon':_0xc44bfa=>_0x2743b1(_0xc44bfa,':'),'hasSlash':_0xb3257a=>/\//[_0x37e46c(0x1769)](_0xb3257a['text']),'hasHyphen':_0x489c=>_0x2e139c[_0x37e46c(0x1769)](_0x489c[_0x37e46c(0x24ce)])||_0x2e139c[_0x37e46c(0x1769)](_0x489c[_0x37e46c(0x1228)]),'hasDash':_0x28ea39=>_0x12512e['test'](_0x28ea39[_0x37e46c(0x24ce)])||_0x12512e['test'](_0x28ea39[_0x37e46c(0x1228)]),'hasContraction':_0x4a6aaa=>Boolean(_0x4a6aaa['implicit']),'isAcronym':_0x482624=>_0x482624[_0x37e46c(0x521a)][_0x37e46c(0x3170)]('Acronym'),'isKnown':_0x5bca09=>_0x5bca09['tags'][_0x37e46c(0x395f)]>0x0,'isTitleCase':_0x1df4f8=>/^\p{Lu}[a-z'\u00C0-\u00FF]/u[_0x37e46c(0x1769)](_0x1df4f8[_0x37e46c(0x4006)]),'isUpperCase':_0x2d7b65=>/^\p{Lu}+$/u['test'](_0x2d7b65[_0x37e46c(0x4006)])};_0x54cde6[_0x37e46c(0xf9a)]=_0x54cde6[_0x37e46c(0x36f4)];const _0x5b6efb=_0x54cde6;let _0x515070=function(){};_0x515070=function(_0x2212f8,_0x9cef34,_0x1c86ac,_0x207c0d){const _0x352365=_0x37e46c;let _0x2fbf4f=function(_0x2ecace,_0x2b49a2,_0x34d5c8,_0x50f899){const _0x36656f=a0_0x11e7;if(!0x0===_0x2b49a2['anything'])return!0x0;if(!0x0===_0x2b49a2[_0x36656f(0x4cc4)]&&0x0!==_0x34d5c8)return!0x1;if(!0x0===_0x2b49a2['end']&&_0x34d5c8!==_0x50f899-0x1)return!0x1;if(void 0x0!==_0x2b49a2['id']&&_0x2b49a2['id']===_0x2ecace['id'])return!0x0;if(void 
0x0!==_0x2b49a2[_0x36656f(0x2506)]){if(_0x2b49a2[_0x36656f(0x84a)])return _0x2b49a2[_0x36656f(0x2506)]===_0x2ecace[_0x2b49a2[_0x36656f(0x84a)]];if(null!==_0x2ecace[_0x36656f(0x192e)]&&_0x2ecace[_0x36656f(0x192e)]===_0x2b49a2[_0x36656f(0x2506)])return!0x0;if(void 0x0!==_0x2ecace[_0x36656f(0xa94)]&&_0x2ecace[_0x36656f(0xa94)][_0x36656f(0x2427)](_0x2b49a2[_0x36656f(0x2506)]))return!0x0;if(!0x0===_0x2b49a2[_0x36656f(0x4c2c)]){if(_0x2b49a2[_0x36656f(0x2506)]===_0x2ecace[_0x36656f(0x507b)])return!0x0;if(_0x3aa909(_0x2b49a2[_0x36656f(0x2506)],_0x2ecace['normal'])>=_0x2b49a2[_0x36656f(0x37c8)])return!0x0;}return!(!_0x2ecace[_0x36656f(0xa94)]||!_0x2ecace['alias']['some'](_0x273010=>_0x273010===_0x2b49a2[_0x36656f(0x2506)]))||_0x2b49a2[_0x36656f(0x2506)]===_0x2ecace['text']||_0x2b49a2['word']===_0x2ecace[_0x36656f(0x47d)];}if(void 0x0!==_0x2b49a2[_0x36656f(0x15a9)])return!0x0===_0x2ecace[_0x36656f(0x521a)][_0x36656f(0x3170)](_0x2b49a2[_0x36656f(0x15a9)]);if(void 0x0!==_0x2b49a2[_0x36656f(0x510c)])return'function'==typeof _0x5b6efb[_0x2b49a2['method']]&&!0x0===_0x5b6efb[_0x2b49a2[_0x36656f(0x510c)]](_0x2ecace);if(void 0x0!==_0x2b49a2['pre'])return _0x2ecace['pre']&&_0x2ecace['pre'][_0x36656f(0x2628)](_0x2b49a2[_0x36656f(0x1228)]);if(void 0x0!==_0x2b49a2[_0x36656f(0x24ce)])return _0x2ecace[_0x36656f(0x24ce)]&&_0x2ecace[_0x36656f(0x24ce)][_0x36656f(0x2628)](_0x2b49a2[_0x36656f(0x24ce)]);if(void 0x0!==_0x2b49a2[_0x36656f(0x41d2)]){let _0x402495=_0x2ecace[_0x36656f(0x47d)];return _0x2b49a2[_0x36656f(0x84a)]&&(_0x402495=_0x2ecace[_0x2b49a2[_0x36656f(0x84a)]]),_0x2b49a2[_0x36656f(0x41d2)][_0x36656f(0x1769)](_0x402495);}if(void 0x0!==_0x2b49a2[_0x36656f(0x1647)])return _0x2ecace[_0x36656f(0x1647)]===_0x2b49a2['chunk'];if(void 0x0!==_0x2b49a2[_0x36656f(0x857)])return _0x2ecace[_0x36656f(0x857)]===_0x2b49a2[_0x36656f(0x857)];if(void 0x0!==_0x2b49a2[_0x36656f(0x192e)])return 
_0x2ecace[_0x36656f(0x47d)]===_0x2b49a2[_0x36656f(0x192e)]||_0x2ecace[_0x36656f(0x192e)]===_0x2b49a2['machine']||_0x2ecace[_0x36656f(0x507b)]===_0x2b49a2[_0x36656f(0x192e)];if(void 0x0!==_0x2b49a2['sense'])return _0x2ecace[_0x36656f(0x3b97)]===_0x2b49a2[_0x36656f(0x3b97)];if(void 0x0!==_0x2b49a2['fastOr']){if(_0x2b49a2['pos']&&!_0x2ecace[_0x36656f(0x521a)]['has'](_0x2b49a2[_0x36656f(0x333f)]))return null;let _0x2894e9=_0x2ecace[_0x36656f(0x507b)]||_0x2ecace['implicit']||_0x2ecace['machine']||_0x2ecace[_0x36656f(0x47d)];return _0x2b49a2[_0x36656f(0x36fe)][_0x36656f(0x3170)](_0x2894e9)||_0x2b49a2[_0x36656f(0x36fe)][_0x36656f(0x3170)](_0x2ecace[_0x36656f(0x4006)]);}return void 0x0!==_0x2b49a2['choices']&&(_0x36656f(0x2663)===_0x2b49a2[_0x36656f(0x1182)]?_0x2b49a2[_0x36656f(0x7c8)]['every'](_0x12e7b1=>_0x515070(_0x2ecace,_0x12e7b1,_0x34d5c8,_0x50f899)):_0x2b49a2[_0x36656f(0x7c8)][_0x36656f(0x363a)](_0x170ba8=>_0x515070(_0x2ecace,_0x170ba8,_0x34d5c8,_0x50f899)));}(_0x2212f8,_0x9cef34,_0x1c86ac,_0x207c0d);return!0x0===_0x9cef34[_0x352365(0x298)]?!_0x2fbf4f:_0x2fbf4f;};const _0xe01b42=_0x515070,_0x49ff0f=function(_0xc3fade,_0x405766){const _0x2fdcc1=_0x37e46c;if(!0x0===_0xc3fade[_0x2fdcc1(0x2681)]&&!0x0===_0xc3fade['greedy']&&_0x405766[_0x2fdcc1(0x28b8)]+_0x405766['t']<_0x405766[_0x2fdcc1(0x21d)]-0x1){let _0x2ecabf=Object[_0x2fdcc1(0x4e14)]({},_0xc3fade,{'end':!0x1});if(!0x0===_0xe01b42(_0x405766[_0x2fdcc1(0x4a03)][_0x405766['t']],_0x2ecabf,_0x405766[_0x2fdcc1(0x28b8)]+_0x405766['t'],_0x405766['phrase_length']))return!0x0;}return!0x1;},_0x1ea5d3=function(_0x5365c3,_0x13839f){const _0x22b00b=_0x37e46c;return _0x5365c3[_0x22b00b(0x9e6)][_0x5365c3[_0x22b00b(0x256)]]||(_0x5365c3[_0x22b00b(0x9e6)][_0x5365c3[_0x22b00b(0x256)]]={'start':_0x13839f,'length':0x0}),_0x5365c3[_0x22b00b(0x9e6)][_0x5365c3[_0x22b00b(0x256)]];},_0x1de8bb=function(_0x27fa4e){const _0x53eed9=_0x37e46c;let 
{regs:_0x14e2af}=_0x27fa4e,_0x2341b3=_0x14e2af[_0x27fa4e['r']],_0x52ae4e=function(_0x7f664b,_0x420b3e){const _0x1d9243=a0_0x11e7;let _0x2e27d9=_0x7f664b['t'];if(!_0x420b3e)return _0x7f664b[_0x1d9243(0x4a03)][_0x1d9243(0x1b19)];for(;_0x2e27d9<_0x7f664b['terms'][_0x1d9243(0x1b19)];_0x2e27d9+=0x1)if(!0x0===_0xe01b42(_0x7f664b[_0x1d9243(0x4a03)][_0x2e27d9],_0x420b3e,_0x7f664b[_0x1d9243(0x28b8)]+_0x2e27d9,_0x7f664b['phrase_length']))return _0x2e27d9;return null;}(_0x27fa4e,_0x14e2af[_0x27fa4e['r']+0x1]);if(null===_0x52ae4e||0x0===_0x52ae4e)return null;if(void 0x0!==_0x2341b3[_0x53eed9(0x37c8)]&&_0x52ae4e-_0x27fa4e['t']<_0x2341b3[_0x53eed9(0x37c8)])return null;if(void 0x0!==_0x2341b3[_0x53eed9(0x4529)]&&_0x52ae4e-_0x27fa4e['t']>_0x2341b3['max'])return _0x27fa4e['t']=_0x27fa4e['t']+_0x2341b3[_0x53eed9(0x4529)],!0x0;return!0x0===_0x27fa4e[_0x53eed9(0x1ba2)]&&(_0x1ea5d3(_0x27fa4e,_0x27fa4e['t'])['length']=_0x52ae4e-_0x27fa4e['t']),(_0x27fa4e['t']=_0x52ae4e,!0x0);},_0x126166=function(_0x305394,_0x3ac990=0x0){const _0x4b110d=_0x37e46c;let _0x2c194f=_0x305394[_0x4b110d(0x2ea5)][_0x305394['r']],_0x2ca341=!0x1;for(let _0xde67a9=0x0;_0xde67a9<_0x2c194f[_0x4b110d(0x7c8)]['length'];_0xde67a9+=0x1){let _0x4378bb=_0x2c194f[_0x4b110d(0x7c8)][_0xde67a9];if(_0x5613c2=_0x4378bb,_0x4b110d(0xb2e)!==Object[_0x4b110d(0x3b3c)][_0x4b110d(0x8e8)][_0x4b110d(0x236b)](_0x5613c2))return!0x1;if(_0x2ca341=_0x4378bb[_0x4b110d(0x12d8)]((_0x2ba557,_0x15da03)=>{const _0x1fe519=_0x4b110d;let _0x4c34f3=0x0,_0x54dbcb=_0x305394['t']+_0x15da03+_0x3ac990+_0x4c34f3;if(void 0x0===_0x305394[_0x1fe519(0x4a03)][_0x54dbcb])return!0x1;let _0xbede09=_0xe01b42(_0x305394['terms'][_0x54dbcb],_0x2ba557,_0x54dbcb+_0x305394[_0x1fe519(0x28b8)],_0x305394[_0x1fe519(0x21d)]);if(!0x0===_0xbede09&&!0x0===_0x2ba557[_0x1fe519(0x48aa)])for(let _0x4e960f=0x1;_0x4e960f<_0x305394[_0x1fe519(0x4a03)][_0x1fe519(0x1b19)];_0x4e960f+=0x1){let 
_0x54354f=_0x305394[_0x1fe519(0x4a03)][_0x54dbcb+_0x4e960f];if(_0x54354f){if(!0x0!==_0xe01b42(_0x54354f,_0x2ba557,_0x305394[_0x1fe519(0x28b8)]+_0x4e960f,_0x305394['phrase_length']))break;_0x4c34f3+=0x1;}}return _0x3ac990+=_0x4c34f3,_0xbede09;}),_0x2ca341){_0x3ac990+=_0x4378bb[_0x4b110d(0x1b19)];break;}}var _0x5613c2;return _0x2ca341&&!0x0===_0x2c194f[_0x4b110d(0x48aa)]?_0x126166(_0x305394,_0x3ac990):_0x3ac990;},_0x158512=function(_0x5c6926){const _0x41b9a8=_0x37e46c,{regs:_0x491433}=_0x5c6926;let _0x233774=_0x491433[_0x5c6926['r']],_0x28e7f0=_0x126166(_0x5c6926);if(_0x28e7f0){if(!0x0===_0x233774[_0x41b9a8(0x298)])return null;!0x0===_0x5c6926[_0x41b9a8(0x1ba2)]&&(_0x1ea5d3(_0x5c6926,_0x5c6926['t'])['length']+=_0x28e7f0);if(!0x0===_0x233774['end']){let _0x3956ce=_0x5c6926['phrase_length'];if(_0x5c6926['t']+_0x5c6926[_0x41b9a8(0x28b8)]+_0x28e7f0!==_0x3956ce)return null;}return _0x5c6926['t']+=_0x28e7f0,!0x0;}return!!_0x233774[_0x41b9a8(0x51e4)]||null;},_0x7110d6=function(_0x2d6c7d){const _0x31b9d8=_0x37e46c,{regs:_0x1c6407}=_0x2d6c7d;let _0x456863=_0x1c6407[_0x2d6c7d['r']],_0x21c4b8=function(_0x4a2c56){const _0x38c21d=a0_0x11e7;let _0x25354d=0x0,_0x56e040=_0x4a2c56[_0x38c21d(0x2ea5)][_0x4a2c56['r']][_0x38c21d(0x7c8)]['every'](_0x505dc6=>{const _0xce4e3c=_0x38c21d;let _0x4bfb56=_0x505dc6[_0xce4e3c(0x12d8)]((_0xa6138b,_0x108466)=>{const _0x308b4e=_0xce4e3c;let _0x13bcc5=_0x4a2c56['t']+_0x108466;return void 0x0!==_0x4a2c56['terms'][_0x13bcc5]&&_0xe01b42(_0x4a2c56[_0x308b4e(0x4a03)][_0x13bcc5],_0xa6138b,_0x13bcc5,_0x4a2c56[_0x308b4e(0x21d)]);});return!0x0===_0x4bfb56&&_0x505dc6[_0xce4e3c(0x1b19)]>_0x25354d&&(_0x25354d=_0x505dc6[_0xce4e3c(0x1b19)]),_0x4bfb56;});return!0x0===_0x56e040&&_0x25354d;}(_0x2d6c7d);if(_0x21c4b8){if(!0x0===_0x456863[_0x31b9d8(0x298)])return null;!0x0===_0x2d6c7d[_0x31b9d8(0x1ba2)]&&(_0x1ea5d3(_0x2d6c7d,_0x2d6c7d['t'])[_0x31b9d8(0x1b19)]+=_0x21c4b8);if(!0x0===_0x456863[_0x31b9d8(0x2681)]){let 
_0x2feea4=_0x2d6c7d['phrase_length']-0x1;if(_0x2d6c7d['t']+_0x2d6c7d['start_i']!==_0x2feea4)return null;}return _0x2d6c7d['t']+=_0x21c4b8,!0x0;}return!!_0x456863['optional']||null;},_0x2776d8=function(_0x5c45e4,_0xfbb30a,_0x2e7b9d){const _0x46dfec=_0x37e46c;let _0x7aa4bb=0x0;for(let _0x1ce636=_0x5c45e4['t'];_0x1ce636<_0x5c45e4[_0x46dfec(0x4a03)][_0x46dfec(0x1b19)];_0x1ce636+=0x1){let _0x53d646=_0xe01b42(_0x5c45e4['terms'][_0x1ce636],_0xfbb30a,_0x5c45e4[_0x46dfec(0x28b8)]+_0x5c45e4['t'],_0x5c45e4[_0x46dfec(0x21d)]);if(_0x53d646)break;if(_0x2e7b9d&&(_0x53d646=_0xe01b42(_0x5c45e4[_0x46dfec(0x4a03)][_0x1ce636],_0x2e7b9d,_0x5c45e4[_0x46dfec(0x28b8)]+_0x5c45e4['t'],_0x5c45e4[_0x46dfec(0x21d)]),_0x53d646))break;if(_0x7aa4bb+=0x1,void 0x0!==_0xfbb30a[_0x46dfec(0x4529)]&&_0x7aa4bb===_0xfbb30a['max'])break;}return 0x0!==_0x7aa4bb&&(!(_0xfbb30a[_0x46dfec(0x37c8)]&&_0xfbb30a[_0x46dfec(0x37c8)]>_0x7aa4bb)&&(_0x5c45e4['t']+=_0x7aa4bb,!0x0));},_0x24ae77=function(_0x4073fb){const _0x3019b2=_0x37e46c,{regs:_0x21c5c4}=_0x4073fb;let _0x3b40d0=_0x21c5c4[_0x4073fb['r']],_0x55ea48=Object[_0x3019b2(0x4e14)]({},_0x3b40d0);if(_0x55ea48[_0x3019b2(0x298)]=!0x1,_0xe01b42(_0x4073fb['terms'][_0x4073fb['t']],_0x55ea48,_0x4073fb[_0x3019b2(0x28b8)]+_0x4073fb['t'],_0x4073fb['phrase_length']))return!0x1;if(_0x3b40d0[_0x3019b2(0x51e4)]){let _0x3f16aa=_0x21c5c4[_0x4073fb['r']+0x1];if(_0x3f16aa){if(_0xe01b42(_0x4073fb['terms'][_0x4073fb['t']],_0x3f16aa,_0x4073fb[_0x3019b2(0x28b8)]+_0x4073fb['t'],_0x4073fb[_0x3019b2(0x21d)]))_0x4073fb['r']+=0x1;else _0x3f16aa[_0x3019b2(0x51e4)]&&_0x21c5c4[_0x4073fb['r']+0x2]&&(_0xe01b42(_0x4073fb['terms'][_0x4073fb['t']],_0x21c5c4[_0x4073fb['r']+0x2],_0x4073fb[_0x3019b2(0x28b8)]+_0x4073fb['t'],_0x4073fb['phrase_length'])&&(_0x4073fb['r']+=0x2));}}return _0x3b40d0['greedy']?_0x2776d8(_0x4073fb,_0x55ea48,_0x21c5c4[_0x4073fb['r']+0x1]):(_0x4073fb['t']+=0x1,!0x0);},_0x942c9c=function(_0x1421de){const _0x204aa6=_0x37e46c,{regs:_0x19ce28}=_0x1421de;let 
_0x2a27c4=_0x19ce28[_0x1421de['r']],_0x34b56b=_0x1421de[_0x204aa6(0x4a03)][_0x1421de['t']],_0x5c2dc5=_0xe01b42(_0x34b56b,_0x19ce28[_0x1421de['r']+0x1],_0x1421de[_0x204aa6(0x28b8)]+_0x1421de['t'],_0x1421de[_0x204aa6(0x21d)]);if(_0x2a27c4['negative']||_0x5c2dc5){let _0x37a53f=_0x1421de[_0x204aa6(0x4a03)][_0x1421de['t']+0x1];_0x37a53f&&_0xe01b42(_0x37a53f,_0x19ce28[_0x1421de['r']+0x1],_0x1421de[_0x204aa6(0x28b8)]+_0x1421de['t'],_0x1421de[_0x204aa6(0x21d)])||(_0x1421de['r']+=0x1);}},_0x20ef98=function(_0x3263db){const _0x2d44c3=_0x37e46c,{regs:_0x506832,phrase_length:_0x5744ef}=_0x3263db;let _0xda86e7=_0x506832[_0x3263db['r']];return _0x3263db['t']=function(_0x26e0e1,_0x18a27d){const _0x559884=a0_0x11e7;let _0x205352=Object[_0x559884(0x4e14)]({},_0x26e0e1['regs'][_0x26e0e1['r']],{'start':!0x1,'end':!0x1}),_0x2520af=_0x26e0e1['t'];for(;_0x26e0e1['t']<_0x26e0e1[_0x559884(0x4a03)][_0x559884(0x1b19)];_0x26e0e1['t']+=0x1){if(_0x18a27d&&_0xe01b42(_0x26e0e1[_0x559884(0x4a03)][_0x26e0e1['t']],_0x18a27d,_0x26e0e1['start_i']+_0x26e0e1['t'],_0x26e0e1[_0x559884(0x21d)]))return _0x26e0e1['t'];let _0x3ad30b=_0x26e0e1['t']-_0x2520af+0x1;if(void 0x0!==_0x205352[_0x559884(0x4529)]&&_0x3ad30b===_0x205352['max'])return _0x26e0e1['t'];if(!0x1===_0xe01b42(_0x26e0e1[_0x559884(0x4a03)][_0x26e0e1['t']],_0x205352,_0x26e0e1[_0x559884(0x28b8)]+_0x26e0e1['t'],_0x26e0e1[_0x559884(0x21d)]))return void 0x0!==_0x205352[_0x559884(0x37c8)]&&_0x3ad30b<_0x205352[_0x559884(0x37c8)]?null:_0x26e0e1['t'];}return _0x26e0e1['t'];}(_0x3263db,_0x506832[_0x3263db['r']+0x1]),null===_0x3263db['t']||_0xda86e7[_0x2d44c3(0x37c8)]&&_0xda86e7[_0x2d44c3(0x37c8)]>_0x3263db['t']?null:!0x0!==_0xda86e7[_0x2d44c3(0x2681)]||_0x3263db[_0x2d44c3(0x28b8)]+_0x3263db['t']===_0x5744ef||null;},_0xe14f0d=function(_0x153da8){const _0x4f1e07=_0x37e46c;let 
_0x595f8f=_0x153da8[_0x4f1e07(0x4a03)][_0x153da8['t']],_0x4a1f1f=_0x153da8[_0x4f1e07(0x2ea5)][_0x153da8['r']];if(_0x595f8f['implicit']&&_0x153da8[_0x4f1e07(0x4a03)][_0x153da8['t']+0x1]){if(!_0x153da8[_0x4f1e07(0x4a03)][_0x153da8['t']+0x1][_0x4f1e07(0x4570)])return;_0x4a1f1f['word']===_0x595f8f[_0x4f1e07(0x47d)]&&(_0x153da8['t']+=0x1),_0x4f1e07(0x79a)===_0x4a1f1f[_0x4f1e07(0x510c)]&&(_0x153da8['t']+=0x1);}},_0x4519a7=function(_0x2f32a7){const _0x5aa8d7=_0x37e46c,{regs:_0x56dfdd}=_0x2f32a7;let _0x9c79f6=_0x56dfdd[_0x2f32a7['r']],_0x2c0577=_0x2f32a7[_0x5aa8d7(0x4a03)][_0x2f32a7['t']],_0x132960=_0x2f32a7['t'];if(_0x9c79f6[_0x5aa8d7(0x51e4)]&&_0x56dfdd[_0x2f32a7['r']+0x1]&&_0x9c79f6[_0x5aa8d7(0x298)])return!0x0;if(_0x9c79f6[_0x5aa8d7(0x51e4)]&&_0x56dfdd[_0x2f32a7['r']+0x1]&&_0x942c9c(_0x2f32a7),_0x2c0577[_0x5aa8d7(0x4570)]&&_0x2f32a7['terms'][_0x2f32a7['t']+0x1]&&_0xe14f0d(_0x2f32a7),_0x2f32a7['t']+=0x1,!0x0===_0x9c79f6['end']&&_0x2f32a7['t']!==_0x2f32a7[_0x5aa8d7(0x4a03)][_0x5aa8d7(0x1b19)]&&!0x0!==_0x9c79f6[_0x5aa8d7(0x48aa)])return null;if(!0x0===_0x9c79f6[_0x5aa8d7(0x48aa)]){if(!_0x20ef98(_0x2f32a7))return null;}return!0x0===_0x2f32a7[_0x5aa8d7(0x1ba2)]&&function(_0x3075dc,_0x49c9bd){const _0xf03df9=_0x5aa8d7;let _0x44c034=_0x3075dc[_0xf03df9(0x2ea5)][_0x3075dc['r']];const _0x4981b0=_0x1ea5d3(_0x3075dc,_0x49c9bd);_0x3075dc['t']>0x1&&_0x44c034[_0xf03df9(0x48aa)]?_0x4981b0['length']+=_0x3075dc['t']-_0x49c9bd:_0x4981b0[_0xf03df9(0x1b19)]++;}(_0x2f32a7,_0x132960),!0x0;},_0x7d4c26=function(_0x7114cf,_0x111b81,_0x568e8b,_0x2a6981){const _0x1d1141=_0x37e46c;if(0x0===_0x7114cf['length']||0x0===_0x111b81[_0x1d1141(0x1b19)])return null;let _0x10dd96={'t':0x0,'terms':_0x7114cf,'r':0x0,'regs':_0x111b81,'groups':{},'start_i':_0x568e8b,'phrase_length':_0x2a6981,'inGroup':null};for(;_0x10dd96['r']<_0x111b81[_0x1d1141(0x1b19)];_0x10dd96['r']+=0x1){let 
_0x1d891f=_0x111b81[_0x10dd96['r']];if(_0x10dd96[_0x1d1141(0x1ba2)]=Boolean(_0x1d891f[_0x1d1141(0x4e5b)]),!0x0===_0x10dd96[_0x1d1141(0x1ba2)]?_0x10dd96[_0x1d1141(0x256)]=_0x1d891f[_0x1d1141(0x4e5b)]:_0x10dd96['inGroup']=null,!_0x10dd96[_0x1d1141(0x4a03)][_0x10dd96['t']]){if(!0x1===_0x111b81[_0x1d1141(0x384c)](_0x10dd96['r'])[_0x1d1141(0x363a)](_0x4f73d1=>!_0x4f73d1[_0x1d1141(0x51e4)]))break;return null;}if(!0x0!==_0x1d891f['anything']||!0x0!==_0x1d891f[_0x1d1141(0x48aa)]){if(void 0x0===_0x1d891f[_0x1d1141(0x7c8)]||'or'!==_0x1d891f[_0x1d1141(0x1182)]){if(void 0x0===_0x1d891f[_0x1d1141(0x7c8)]||'and'!==_0x1d891f[_0x1d1141(0x1182)]){if(!0x0!==_0x1d891f[_0x1d1141(0x3f9)]){if(!0x0!==_0x49ff0f(_0x1d891f,_0x10dd96)){if(_0x1d891f[_0x1d1141(0x298)]){if(!_0x24ae77(_0x10dd96))return null;}else{if(!0x0!==_0xe01b42(_0x10dd96['terms'][_0x10dd96['t']],_0x1d891f,_0x10dd96['start_i']+_0x10dd96['t'],_0x10dd96[_0x1d1141(0x21d)])){if(!0x0!==_0x1d891f[_0x1d1141(0x51e4)])return null;}else{if(!_0x4519a7(_0x10dd96))return null;}}}else{if(!_0x4519a7(_0x10dd96))return null;}}else{if(_0x1d891f[_0x1d1141(0x298)]&&_0x1d891f[_0x1d1141(0x3f9)])return null;if(!_0x4519a7(_0x10dd96))return null;}}else{if(!_0x7110d6(_0x10dd96))return null;}}else{if(!_0x158512(_0x10dd96))return null;}}else{if(!_0x1de8bb(_0x10dd96))return null;}}let _0x1823ed=[null,_0x568e8b,_0x10dd96['t']+_0x568e8b];if(_0x1823ed[0x1]===_0x1823ed[0x2])return null;let _0x9b98ae={};return Object['keys'](_0x10dd96[_0x1d1141(0x9e6)])['forEach'](_0x2c7204=>{const _0x56c30c=_0x1d1141;let _0x4ad56d=_0x10dd96['groups'][_0x2c7204],_0x440311=_0x568e8b+_0x4ad56d[_0x56c30c(0x4cc4)];_0x9b98ae[_0x2c7204]=[null,_0x440311,_0x440311+_0x4ad56d[_0x56c30c(0x1b19)]];}),{'pointer':_0x1823ed,'groups':_0x9b98ae};},_0x43e6e9=function(_0x410ab3,_0xa26371){const _0x555598=_0x37e46c;let _0x214c7a=[],_0x12d409={};return 0x0===_0x410ab3[_0x555598(0x1b19)]||(_0x555598(0x4a80)==typeof 
_0xa26371&&(_0xa26371=String(_0xa26371)),_0xa26371?_0x410ab3[_0x555598(0xa21)](_0x306860=>{const _0x19f349=_0x555598;_0x306860[_0x19f349(0x9e6)][_0xa26371]&&_0x214c7a[_0x19f349(0x1715)](_0x306860[_0x19f349(0x9e6)][_0xa26371]);}):_0x410ab3[_0x555598(0xa21)](_0x4f3eda=>{const _0x47b888=_0x555598;_0x214c7a['push'](_0x4f3eda['pointer']),Object[_0x47b888(0x1ea9)](_0x4f3eda[_0x47b888(0x9e6)])[_0x47b888(0xa21)](_0x4abec2=>{const _0xfe9d28=_0x47b888;_0x12d409[_0x4abec2]=_0x12d409[_0x4abec2]||[],_0x12d409[_0x4abec2]['push'](_0x4f3eda[_0xfe9d28(0x9e6)][_0x4abec2]);});})),{'ptrs':_0x214c7a,'byGroup':_0x12d409};},_0x5e481e=function(_0x2c2043,_0x5c7a9e,_0x568d96){const _0x4fbb5d=_0x37e46c;return _0x2c2043=_0x2c2043[_0x4fbb5d(0x1465)](_0x5b0e88=>{const _0x4d41b7=_0x4fbb5d;let [_0x2de7f3,_0x373a23,_0x535e2a]=_0x5b0e88[_0x4d41b7(0x43e4)],_0x5ce881=_0x568d96[_0x2de7f3]['slice'](_0x373a23,_0x535e2a);for(let _0x25fa1a=0x0;_0x25fa1a<_0x5ce881[_0x4d41b7(0x1b19)];_0x25fa1a+=0x1){let _0x888729=_0x5ce881[_0x4d41b7(0x384c)](_0x25fa1a);if(null!==_0x7d4c26(_0x888729,_0x5c7a9e,_0x25fa1a,_0x5ce881[_0x4d41b7(0x1b19)]))return!0x1;}return!0x0;}),_0x2c2043;},_0x393d26=function(_0x4fd8ff,_0x578f40){const _0xf6afc1=_0x37e46c;return _0x4fd8ff['pointer'][0x0]=_0x578f40,Object[_0xf6afc1(0x1ea9)](_0x4fd8ff[_0xf6afc1(0x9e6)])[_0xf6afc1(0xa21)](_0x656ff5=>{const _0x56a49e=_0xf6afc1;_0x4fd8ff[_0x56a49e(0x9e6)][_0x656ff5][0x0]=_0x578f40;}),_0x4fd8ff;},_0x1933b4=function(_0x919a0d,_0x2b302e,_0x30becc){let _0x14fab3=_0x7d4c26(_0x919a0d,_0x2b302e,0x0,_0x919a0d['length']);return _0x14fab3?(_0x14fab3=_0x393d26(_0x14fab3,_0x30becc),_0x14fab3):null;},_0x2e8d41=function(_0x4c49aa,_0x2238ff,_0x5f4432){const _0x507ed6=_0x37e46c;_0x5f4432=_0x5f4432||[];let {regs:_0x2b9ca2,group:_0x2b0f0f,justOne:_0x23926d}=_0x2238ff,_0x4f07b6=[];if(!_0x2b9ca2||0x0===_0x2b9ca2[_0x507ed6(0x1b19)])return{'ptrs':[],'byGroup':{}};const 
_0x414d09=_0x2b9ca2[_0x507ed6(0x1465)](_0xcb0fd8=>!0x0!==_0xcb0fd8[_0x507ed6(0x51e4)]&&!0x0!==_0xcb0fd8[_0x507ed6(0x298)])[_0x507ed6(0x1b19)];_0x328c3b:for(let _0x133af=0x0;_0x133af<_0x4c49aa[_0x507ed6(0x1b19)];_0x133af+=0x1){let _0x328fc0=_0x4c49aa[_0x133af];if(!_0x5f4432[_0x133af]||!_0x4fe508(_0x2b9ca2,_0x5f4432[_0x133af])){if(!0x0!==_0x2b9ca2[0x0]['start'])for(let _0x3d3676=0x0;_0x3d3676<_0x328fc0['length'];_0x3d3676+=0x1){let _0x161630=_0x328fc0[_0x507ed6(0x384c)](_0x3d3676);if(_0x161630[_0x507ed6(0x1b19)]<_0x414d09)break;let _0x183370=_0x7d4c26(_0x161630,_0x2b9ca2,_0x3d3676,_0x328fc0[_0x507ed6(0x1b19)]);if(_0x183370){if(_0x183370=_0x393d26(_0x183370,_0x133af),_0x4f07b6[_0x507ed6(0x1715)](_0x183370),!0x0===_0x23926d)break _0x328c3b;let _0x546160=_0x183370['pointer'][0x2];Math[_0x507ed6(0xbe0)](_0x546160-0x1)>_0x3d3676&&(_0x3d3676=Math[_0x507ed6(0xbe0)](_0x546160-0x1));}}else{let _0x51d5d8=_0x1933b4(_0x328fc0,_0x2b9ca2,_0x133af);_0x51d5d8&&_0x4f07b6[_0x507ed6(0x1715)](_0x51d5d8);}}}return!0x0===_0x2b9ca2[_0x2b9ca2[_0x507ed6(0x1b19)]-0x1][_0x507ed6(0x2681)]&&(_0x4f07b6=_0x4f07b6[_0x507ed6(0x1465)](_0xc6661e=>{const _0x370108=_0x507ed6;let _0x2c490c=_0xc6661e['pointer'][0x0];return _0x4c49aa[_0x2c490c]['length']===_0xc6661e[_0x370108(0x43e4)][0x2];})),_0x2238ff[_0x507ed6(0x45f9)]&&(_0x4f07b6=_0x5e481e(_0x4f07b6,_0x2238ff[_0x507ed6(0x45f9)],_0x4c49aa)),_0x4f07b6=_0x43e6e9(_0x4f07b6,_0x2b0f0f),_0x4f07b6[_0x507ed6(0x232)]['forEach'](_0x1d172c=>{let [_0x38c6f1,_0xb44a7f,_0x28b15c]=_0x1d172c;_0x1d172c[0x3]=_0x4c49aa[_0x38c6f1][_0xb44a7f]['id'],_0x1d172c[0x4]=_0x4c49aa[_0x38c6f1][_0x28b15c-0x1]['id'];}),_0x4f07b6;},_0x226049={'api':_0xa04ced,'methods':{'one':{'termMethods':_0x5b6efb,'parseMatch':_0x41307d,'match':_0x2e8d41}},'lib':{'parseMatch':function(_0x117849,_0x14eb6a){const _0x5cf8f8=_0x37e46c,_0x3f96a2=this[_0x5cf8f8(0x4657)]();let _0x5f694a=_0x3f96a2[_0x5cf8f8(0x1578)][_0x5cf8f8(0x1d8a)][_0x5cf8f8(0x1744)];return 
_0x5f694a&&(_0x117849=_0x5f694a(_0x117849,_0x3f96a2)),_0x3f96a2[_0x5cf8f8(0x1578)][_0x5cf8f8(0x1d8a)][_0x5cf8f8(0x2407)](_0x117849,_0x14eb6a,_0x3f96a2);}}},_0xc23e44=/^\../,_0x32b9b6=/^#./,_0x5f5491=function(_0x75ad10,_0x54ad2b){const _0x389f93=_0x37e46c;let _0x4cb928={},_0x56ee45={};return Object[_0x389f93(0x1ea9)](_0x54ad2b)[_0x389f93(0xa21)](_0xc70ede=>{const _0x54cdf7=_0x389f93;let _0x3adab6=_0x54ad2b[_0xc70ede],_0x31cbaa=function(_0x471134){const _0x14aaf1=a0_0x11e7;let _0x41ff27='',_0x4d93c3=_0x14aaf1(0x1bcb);return _0x471134=_0x471134[_0x14aaf1(0x741)](/&/g,'&')[_0x14aaf1(0x741)](//g,'>')[_0x14aaf1(0x741)](/"/g,_0x14aaf1(0x2a2))[_0x14aaf1(0x741)](/'/g,_0x14aaf1(0x36f2)),_0xc23e44[_0x14aaf1(0x1769)](_0x471134)?_0x41ff27=''),_0x41ff27+='>',{'start':_0x41ff27,'end':_0x4d93c3};}(_0xc70ede);_0x54cdf7(0x2431)==typeof _0x3adab6&&(_0x3adab6=_0x75ad10[_0x54cdf7(0x2d96)](_0x3adab6)),_0x3adab6[_0x54cdf7(0x204b)][_0x54cdf7(0xa21)](_0x35fb23=>{const _0x13e745=_0x54cdf7;if(_0x35fb23[_0x13e745(0x12d8)](_0x3377c8=>_0x3377c8[_0x13e745(0x4570)]))return;let _0x1b257b=_0x35fb23[0x0]['id'];_0x4cb928[_0x1b257b]=_0x4cb928[_0x1b257b]||[],_0x4cb928[_0x1b257b][_0x13e745(0x1715)](_0x31cbaa[_0x13e745(0x4cc4)]);let _0x39255f=_0x35fb23[_0x35fb23[_0x13e745(0x1b19)]-0x1]['id'];_0x56ee45[_0x39255f]=_0x56ee45[_0x39255f]||[],_0x56ee45[_0x39255f][_0x13e745(0x1715)](_0x31cbaa[_0x13e745(0x2681)]);});}),{'starts':_0x4cb928,'ends':_0x56ee45};},_0x9193d={'html':function(_0x5f1f4d){const _0x104ae3=_0x37e46c;let {starts:_0x199275,ends:_0x1cd929}=_0x5f5491(this,_0x5f1f4d),_0x16e767='';return this[_0x104ae3(0x204b)]['forEach'](_0x4d02a3=>{const _0xf6ff20=_0x104ae3;for(let _0x15de2e=0x0;_0x15de2e<_0x4d02a3[_0xf6ff20(0x1b19)];_0x15de2e+=0x1){let 
_0x5ee620=_0x4d02a3[_0x15de2e];_0x199275[_0xf6ff20(0x2427)](_0x5ee620['id'])&&(_0x16e767+=_0x199275[_0x5ee620['id']][_0xf6ff20(0x3541)]('')),_0x16e767+=_0x5ee620[_0xf6ff20(0x1228)]||'',_0x16e767+=_0x5ee620[_0xf6ff20(0x4006)]||'',_0x1cd929[_0xf6ff20(0x2427)](_0x5ee620['id'])&&(_0x16e767+=_0x1cd929[_0x5ee620['id']][_0xf6ff20(0x3541)]('')),_0x16e767+=_0x5ee620['post']||'';}}),_0x16e767;}},_0x1e9a71=/[,:;)\]*.?~!\u0022\uFF02\u201D\u2019\u00BB\u203A\u2032\u2033\u2034\u301E\u00B4—-]+$/,_0x8be2b0=/^[(['"*~\uFF02\u201C\u2018\u201F\u201B\u201E\u2E42\u201A\u00AB\u2039\u2035\u2036\u2037\u301D\u0060\u301F]+/,_0x7d954a=/[,:;)('"\u201D\]]/,_0x46446f=/^[-–—]$/,_0x3cb026=/ /,_0x10634e=function(_0x1ba80d,_0x56b7d9,_0x1cc95e=!0x0){const _0x107be1=_0x37e46c;let _0x526f2b='';return _0x1ba80d[_0x107be1(0xa21)](_0x5e7ad7=>{const _0x4ac874=_0x107be1;let _0x5e0acc=_0x5e7ad7[_0x4ac874(0x1228)]||'',_0x37b1e8=_0x5e7ad7['post']||'';'some'===_0x56b7d9[_0x4ac874(0xa25)]&&(_0x5e0acc=_0x5e0acc[_0x4ac874(0x741)](_0x8be2b0,''),_0x46446f['test'](_0x37b1e8)&&(_0x37b1e8='\x20'),_0x37b1e8=_0x37b1e8[_0x4ac874(0x741)](_0x7d954a,''),_0x37b1e8=_0x37b1e8['replace'](/\?!+/,'?'),_0x37b1e8=_0x37b1e8[_0x4ac874(0x741)](/!+/,'!'),_0x37b1e8=_0x37b1e8['replace'](/\?+/,'?'),_0x37b1e8=_0x37b1e8[_0x4ac874(0x741)](/\.{2,}/,''),_0x5e7ad7[_0x4ac874(0x521a)][_0x4ac874(0x3170)]('Abbreviation')&&(_0x37b1e8=_0x37b1e8[_0x4ac874(0x741)](/\./,''))),_0x4ac874(0x363a)===_0x56b7d9['whitespace']&&(_0x5e0acc=_0x5e0acc[_0x4ac874(0x741)](/\s/,''),_0x37b1e8=_0x37b1e8[_0x4ac874(0x741)](/\s+/,'\x20')),_0x56b7d9[_0x4ac874(0x144b)]||(_0x5e0acc=_0x5e0acc[_0x4ac874(0x741)](_0x8be2b0,''),_0x37b1e8='-'===_0x37b1e8?'\x20':_0x37b1e8[_0x4ac874(0x741)](_0x1e9a71,''));let 
_0x381bf4=_0x5e7ad7[_0x56b7d9['form']||_0x4ac874(0x4006)]||_0x5e7ad7[_0x4ac874(0x47d)]||'';'implicit'===_0x56b7d9['form']&&(_0x381bf4=_0x5e7ad7[_0x4ac874(0x4570)]||_0x5e7ad7[_0x4ac874(0x4006)]),_0x4ac874(0x507b)===_0x56b7d9['form']&&_0x5e7ad7['implicit']&&(_0x381bf4=_0x5e7ad7[_0x4ac874(0x507b)]||_0x5e7ad7['implicit']||_0x5e7ad7[_0x4ac874(0x47d)]),'machine'!==_0x56b7d9[_0x4ac874(0x31e0)]&&_0x4ac874(0x4570)!==_0x56b7d9[_0x4ac874(0x31e0)]&&_0x4ac874(0x507b)!==_0x56b7d9[_0x4ac874(0x31e0)]||!_0x5e7ad7[_0x4ac874(0x4570)]||_0x37b1e8&&_0x3cb026[_0x4ac874(0x1769)](_0x37b1e8)||(_0x37b1e8+='\x20'),_0x526f2b+=_0x5e0acc+_0x381bf4+_0x37b1e8;}),!0x1===_0x1cc95e&&(_0x526f2b=_0x526f2b[_0x107be1(0x1b23)]()),!0x0===_0x56b7d9[_0x107be1(0x2f39)]&&(_0x526f2b=_0x526f2b[_0x107be1(0x6e8)]()),_0x526f2b;},_0x5e4118={'text':{'form':_0x37e46c(0x4006)},'normal':{'whitespace':_0x37e46c(0x363a),'punctuation':'some','case':'some','unicode':_0x37e46c(0x363a),'form':_0x37e46c(0x47d)},'machine':{'keepSpace':!0x1,'whitespace':'some','punctuation':'some','case':'none','unicode':_0x37e46c(0x363a),'form':_0x37e46c(0x192e)},'root':{'keepSpace':!0x1,'whitespace':_0x37e46c(0x363a),'punctuation':'some','case':_0x37e46c(0x363a),'unicode':_0x37e46c(0x363a),'form':_0x37e46c(0x507b)},'implicit':{'form':_0x37e46c(0x4570)}};_0x5e4118[_0x37e46c(0x43d4)]=_0x5e4118['normal'],_0x5e4118[_0x37e46c(0x25ba)]=_0x5e4118[_0x37e46c(0x507b)];const _0x413dfd=_0x5e4118;let _0x4f3912=[],_0x932669=0x0;for(;_0x932669<0x40;)_0x4f3912[_0x932669]=0x0|0x100000000*Math['sin'](++_0x932669%Math['PI']);const _0x6dc6b0=function(_0x41cefc){const _0x5a18f4=_0x37e46c;let 
_0x248cdf,_0x151e45,_0x4a05ed,_0x45df33=[_0x248cdf=0x67452301,_0x151e45=0xefcdab89,~_0x248cdf,~_0x151e45],_0x5a481f=[],_0x266a10=decodeURI(encodeURI(_0x41cefc))+'\u0080',_0x4e42b7=_0x266a10[_0x5a18f4(0x1b19)];for(_0x41cefc=--_0x4e42b7/0x4+0x2|0xf,_0x5a481f[--_0x41cefc]=0x8*_0x4e42b7;~_0x4e42b7;)_0x5a481f[_0x4e42b7>>0x2]|=_0x266a10['charCodeAt'](_0x4e42b7)<<0x8*_0x4e42b7--;for(_0x932669=_0x266a10=0x0;_0x932669<_0x41cefc;_0x932669+=0x10){for(_0x4e42b7=_0x45df33;_0x266a10<0x40;_0x4e42b7=[_0x4a05ed=_0x4e42b7[0x3],_0x248cdf+((_0x4a05ed=_0x4e42b7[0x0]+[_0x248cdf&_0x151e45|~_0x248cdf&_0x4a05ed,_0x4a05ed&_0x248cdf|~_0x4a05ed&_0x151e45,_0x248cdf^_0x151e45^_0x4a05ed,_0x151e45^(_0x248cdf|~_0x4a05ed)][_0x4e42b7=_0x266a10>>0x4]+_0x4f3912[_0x266a10]+~~_0x5a481f[_0x932669|0xf&[_0x266a10,0x5*_0x266a10+0x1,0x3*_0x266a10+0x5,0x7*_0x266a10][_0x4e42b7]])<<(_0x4e42b7=[0x7,0xc,0x11,0x16,0x5,0x9,0xe,0x14,0x4,0xb,0x10,0x17,0x6,0xa,0xf,0x15][0x4*_0x4e42b7+_0x266a10++%0x4])|_0x4a05ed>>>-_0x4e42b7),_0x248cdf,_0x151e45])_0x248cdf=0x0|_0x4e42b7[0x1],_0x151e45=_0x4e42b7[0x2];for(_0x266a10=0x4;_0x266a10;)_0x45df33[--_0x266a10]+=_0x4e42b7[_0x266a10];}for(_0x41cefc='';_0x266a10<0x20;)_0x41cefc+=(_0x45df33[_0x266a10>>0x3]>>0x4*(0x1^_0x266a10++)&0xf)[_0x5a18f4(0x8e8)](0x10);return _0x41cefc;},_0x548add={'text':!0x0,'terms':!0x0};let _0x542105={'case':_0x37e46c(0x28b),'unicode':_0x37e46c(0x363a),'form':_0x37e46c(0x192e),'punctuation':'some'};const _0x34d664=function(_0x20cf9e,_0x398f26){const _0x5a8a18=_0x37e46c;return 
Object[_0x5a8a18(0x4e14)]({},_0x20cf9e,_0x398f26);},_0x404929={'text':_0x37c7a9=>_0x10634e(_0x37c7a9,{'keepPunct':!0x0},!0x1),'normal':_0x460e4=>_0x10634e(_0x460e4,_0x34d664(_0x413dfd[_0x37e46c(0x47d)],{'keepPunct':!0x0}),!0x1),'implicit':_0x45ab42=>_0x10634e(_0x45ab42,_0x34d664(_0x413dfd[_0x37e46c(0x4570)],{'keepPunct':!0x0}),!0x1),'machine':_0x349071=>_0x10634e(_0x349071,_0x542105,!0x1),'root':_0x3f31a4=>_0x10634e(_0x3f31a4,_0x34d664(_0x542105,{'form':_0x37e46c(0x507b)}),!0x1),'hash':_0x2eefbc=>_0x6dc6b0(_0x10634e(_0x2eefbc,{'keepPunct':!0x0},!0x1)),'offset':_0x14729d=>{const _0x8d81d=_0x37e46c;let _0x33be60=_0x404929['text'](_0x14729d)[_0x8d81d(0x1b19)];return{'index':_0x14729d[0x0]['offset'][_0x8d81d(0x3bb5)],'start':_0x14729d[0x0]['offset'][_0x8d81d(0x4cc4)],'length':_0x33be60};},'terms':_0x2c7bb0=>_0x2c7bb0['map'](_0x211f4e=>{const _0x2343d9=_0x37e46c;let _0x62b4f9=Object[_0x2343d9(0x4e14)]({},_0x211f4e);return _0x62b4f9['tags']=Array[_0x2343d9(0x27e6)](_0x211f4e[_0x2343d9(0x521a)]),_0x62b4f9;}),'confidence':(_0x427680,_0x436e10,_0x2c230a)=>_0x436e10['eq'](_0x2c230a)[_0x37e46c(0x202a)](),'syllables':(_0x6384be,_0x357c94,_0x4df211)=>_0x357c94['eq'](_0x4df211)[_0x37e46c(0x258e)](),'sentence':(_0x144b2e,_0x254c40,_0x422312)=>_0x254c40['eq'](_0x422312)[_0x37e46c(0x1bb9)]()[_0x37e46c(0x4006)](),'dirty':_0x1fd083=>_0x1fd083[_0x37e46c(0x363a)](_0x590c21=>!0x0===_0x590c21['dirty'])};_0x404929['sentences']=_0x404929[_0x37e46c(0x3824)],_0x404929['clean']=_0x404929[_0x37e46c(0x47d)],_0x404929[_0x37e46c(0x25ba)]=_0x404929[_0x37e46c(0x507b)];const _0x59d3af={'json':function(_0x27f866){const _0x5195a2=_0x37e46c;let _0xd7aff2=(_0x59fea5=this,_0x5195a2(0x2431)==typeof(_0x24d0d0=(_0x24d0d0=_0x27f866)||{})&&(_0x24d0d0={}),(_0x24d0d0=Object[_0x5195a2(0x4e14)]({},_0x548add,_0x24d0d0))[_0x5195a2(0xf16)]&&_0x59fea5[_0x5195a2(0x23df)](_0x5195a2(0xf16)),_0x59fea5[_0x5195a2(0x204b)][_0x5195a2(0x4833)]((_0x4c02f3,_0x451953)=>{const _0x3080b9=_0x5195a2;let _0x5dc2ce={};return 
Object[_0x3080b9(0x1ea9)](_0x24d0d0)['forEach'](_0x81804b=>{_0x24d0d0[_0x81804b]&&_0x404929[_0x81804b]&&(_0x5dc2ce[_0x81804b]=_0x404929[_0x81804b](_0x4c02f3,_0x59fea5,_0x451953));}),_0x5dc2ce;}));var _0x59fea5,_0x24d0d0;return _0x5195a2(0x4a80)==typeof _0x27f866?_0xd7aff2[_0x27f866]:_0xd7aff2;}};_0x59d3af[_0x37e46c(0x5139)]=_0x59d3af[_0x37e46c(0x3289)];const _0x1e7978=_0x59d3af,_0x4045a5=function(_0x21eb2f){const _0x1907df=_0x37e46c;let _0x1f0bd2=this['methods'][_0x1907df(0x1d8a)][_0x1907df(0x534)]||{};return _0x21eb2f&&_0x1f0bd2[_0x1907df(0x2427)](_0x21eb2f)?(_0x1f0bd2[_0x21eb2f](this),this):'undefined'!=typeof window&&window['document']?(_0x1f0bd2[_0x1907df(0xa71)](this),this):(_0x1f0bd2['tags'](this),this);},_0x55f631=function(_0x994a2e){const _0x5eddf0=_0x37e46c;let _0xfce53f=_0x994a2e['pre']||'',_0x11ebf0=_0x994a2e['post']||'';return _0xfce53f+_0x994a2e[_0x5eddf0(0x4006)]+_0x11ebf0;},_0x4c8cc6=function(_0x13b8ca,_0x328cd9){const _0x1e3ef3=_0x37e46c;let _0x334a40=function(_0x4f975b,_0x5a6aaa){const _0x3dddc9=a0_0x11e7;let _0x4c553e={};return Object[_0x3dddc9(0x1ea9)](_0x5a6aaa)['forEach'](_0x423b7d=>{const _0x46830e=_0x3dddc9;_0x4f975b[_0x46830e(0x2d96)](_0x423b7d)[_0x46830e(0x34ce)][_0x46830e(0xa21)](_0x5dca96=>{_0x4c553e[_0x5dca96[0x3]]={'fn':_0x5a6aaa[_0x423b7d],'end':_0x5dca96[0x2]};});}),_0x4c553e;}(_0x13b8ca,_0x328cd9),_0x86d82a='';return _0x13b8ca['docs'][_0x1e3ef3(0xa21)]((_0x3921fc,_0xd5131)=>{const _0x651fad=_0x1e3ef3;for(let _0x2b69a2=0x0;_0x2b69a2<_0x3921fc[_0x651fad(0x1b19)];_0x2b69a2+=0x1){let _0x2382a0=_0x3921fc[_0x2b69a2];if(_0x334a40[_0x651fad(0x2427)](_0x2382a0['id'])){let {fn:_0x52c91b,end:_0x10c90a}=_0x334a40[_0x2382a0['id']],_0x4cb562=_0x13b8ca[_0x651fad(0x38d6)]([[_0xd5131,_0x2b69a2,_0x10c90a]]);_0x86d82a+=_0x3921fc[_0x2b69a2][_0x651fad(0x1228)]||'',_0x86d82a+=_0x52c91b(_0x4cb562),_0x2b69a2=_0x10c90a-0x1,_0x86d82a+=_0x3921fc[_0x2b69a2][_0x651fad(0x24ce)]||'';}else 
_0x86d82a+=_0x55f631(_0x2382a0);}}),_0x86d82a;},_0x5b248e={'debug':_0x4045a5,'out':function(_0xe5a243){const _0x2c0ffa=_0x37e46c;if(_0x2efe6d=_0xe5a243,'[object\x20Object]'===Object['prototype']['toString'][_0x2c0ffa(0x236b)](_0x2efe6d))return _0x4c8cc6(this,_0xe5a243);var _0x2efe6d;if(_0x2c0ffa(0x4006)===_0xe5a243)return this[_0x2c0ffa(0x4006)]();if(_0x2c0ffa(0x47d)===_0xe5a243)return this['text']('normal');if(_0x2c0ffa(0x507b)===_0xe5a243)return this[_0x2c0ffa(0x4006)](_0x2c0ffa(0x507b));if(_0x2c0ffa(0x192e)===_0xe5a243||_0x2c0ffa(0x25ba)===_0xe5a243)return this['text'](_0x2c0ffa(0x192e));if(_0x2c0ffa(0x40c0)===_0xe5a243||_0x2c0ffa(0x1d5c)===_0xe5a243)return _0x6dc6b0(this[_0x2c0ffa(0x4006)]());if(_0x2c0ffa(0x3289)===_0xe5a243)return this[_0x2c0ffa(0x3289)]();if(_0x2c0ffa(0xf16)===_0xe5a243||_0x2c0ffa(0x4090)===_0xe5a243)return this[_0x2c0ffa(0x23df)](_0x2c0ffa(0xf16)),this[_0x2c0ffa(0x3289)]({'offset':!0x0});if('array'===_0xe5a243){let _0x113a56=this[_0x2c0ffa(0x204b)][_0x2c0ffa(0x4833)](_0x99615c=>_0x99615c[_0x2c0ffa(0x24d8)]((_0x29de69,_0x5df9fe)=>_0x29de69+_0x5df9fe['pre']+_0x5df9fe[_0x2c0ffa(0x4006)]+_0x5df9fe[_0x2c0ffa(0x24ce)],'')[_0x2c0ffa(0x1b23)]());return _0x113a56[_0x2c0ffa(0x1465)](_0x2b5a80=>_0x2b5a80);}if(_0x2c0ffa(0xe8f)===_0xe5a243||_0x2c0ffa(0x1ead)===_0xe5a243||_0x2c0ffa(0x3f67)===_0xe5a243)return function(_0x546828){const _0x206790=_0x2c0ffa;let _0x1c437b={};_0x546828['forEach'](_0x15444e=>{_0x1c437b[_0x15444e]=_0x1c437b[_0x15444e]||0x0,_0x1c437b[_0x15444e]+=0x1;});let _0xf5d08b=Object[_0x206790(0x1ea9)](_0x1c437b)['map'](_0x3e03b5=>({'normal':_0x3e03b5,'count':_0x1c437b[_0x3e03b5]}));return _0xf5d08b[_0x206790(0x4c33)]((_0x4efb63,_0x268391)=>_0x4efb63['count']>_0x268391[_0x206790(0x404e)]?-0x1:0x0);}(this[_0x2c0ffa(0x3289)]({'normal':!0x0})['map'](_0x150365=>_0x150365[_0x2c0ffa(0x47d)]));if('terms'===_0xe5a243){let _0x1a7cd4=[];return this[_0x2c0ffa(0x204b)][_0x2c0ffa(0xa21)](_0x51b152=>{const _0x14190b=_0x2c0ffa;let 
_0x1b75c3=_0x51b152[_0x14190b(0x4833)](_0x2eaf86=>_0x2eaf86[_0x14190b(0x4006)]);_0x1b75c3=_0x1b75c3[_0x14190b(0x1465)](_0x12f468=>_0x12f468),_0x1a7cd4=_0x1a7cd4[_0x14190b(0x1d1d)](_0x1b75c3);}),_0x1a7cd4;}return _0x2c0ffa(0x521a)===_0xe5a243?this[_0x2c0ffa(0x204b)][_0x2c0ffa(0x4833)](_0x37d32e=>_0x37d32e[_0x2c0ffa(0x24d8)]((_0x57e1c5,_0x5a888e)=>(_0x57e1c5[_0x5a888e['implicit']||_0x5a888e['normal']]=Array[_0x2c0ffa(0x27e6)](_0x5a888e[_0x2c0ffa(0x521a)]),_0x57e1c5),{})):'debug'===_0xe5a243?this[_0x2c0ffa(0x534)]():this[_0x2c0ffa(0x4006)]();},'wrap':function(_0x53efea){return _0x4c8cc6(this,_0x53efea);}},_0x157464=_0x5b248e,_0x1013a7={'text':function(_0x16062d){const _0x15be31=_0x37e46c;let _0x341ff4={};var _0x4ec435;if(_0x16062d&&_0x15be31(0x2431)==typeof _0x16062d&&_0x413dfd[_0x15be31(0x2427)](_0x16062d)?_0x341ff4=Object[_0x15be31(0x4e14)]({},_0x413dfd[_0x16062d]):_0x16062d&&(_0x4ec435=_0x16062d,_0x15be31(0x4d86)===Object[_0x15be31(0x3b3c)][_0x15be31(0x8e8)][_0x15be31(0x236b)](_0x4ec435))&&(_0x341ff4=Object[_0x15be31(0x4e14)]({},_0x16062d)),void 0x0!==_0x341ff4[_0x15be31(0xa7d)]||this[_0x15be31(0x1b5b)]()||(_0x341ff4['keepSpace']=!0x1),void 0x0===_0x341ff4['keepEndPunct']&&this[_0x15be31(0x43e4)]){let _0x345594=this['pointer'][0x0];_0x345594&&_0x345594[0x1]?_0x341ff4[_0x15be31(0x42e2)]=!0x1:_0x341ff4['keepEndPunct']=!0x0;}return void 0x0===_0x341ff4[_0x15be31(0x144b)]&&(_0x341ff4[_0x15be31(0x144b)]=!0x0),void 0x0===_0x341ff4[_0x15be31(0xa7d)]&&(_0x341ff4['keepSpace']=!0x0),function(_0x371119,_0x2956f4){const _0x55e1fd=_0x15be31;let _0x566fe8='';if(!_0x371119||!_0x371119[0x0]||!_0x371119[0x0][0x0])return _0x566fe8;for(let 
_0x11e023=0x0;_0x11e023<_0x371119[_0x55e1fd(0x1b19)];_0x11e023+=0x1)_0x566fe8+=_0x10634e(_0x371119[_0x11e023],_0x2956f4,!0x0);if(_0x2956f4[_0x55e1fd(0xa7d)]||(_0x566fe8=_0x566fe8['trim']()),!0x1===_0x2956f4[_0x55e1fd(0x42e2)]){_0x371119[0x0][0x0]['tags'][_0x55e1fd(0x3170)](_0x55e1fd(0x1717))||(_0x566fe8=_0x566fe8[_0x55e1fd(0x741)](_0x8be2b0,''));let _0x38ea33=_0x371119[_0x371119['length']-0x1];_0x38ea33[_0x38ea33['length']-0x1]['tags'][_0x55e1fd(0x3170)]('Emoticon')||(_0x566fe8=_0x566fe8['replace'](_0x1e9a71,'')),_0x566fe8[_0x55e1fd(0x2a85)]('\x27')&&!_0x566fe8[_0x55e1fd(0x2a85)]('s\x27')&&(_0x566fe8=_0x566fe8[_0x55e1fd(0x741)](/'/,''));}return!0x0===_0x2956f4['cleanWhitespace']&&(_0x566fe8=_0x566fe8[_0x55e1fd(0x1b23)]()),_0x566fe8;}(this[_0x15be31(0x204b)],_0x341ff4);}},_0x356f48=Object['assign']({},_0x157464,_0x1013a7,_0x1e7978,_0x9193d),_0x596a3c=function(_0x2bba1a){Object['assign'](_0x2bba1a['prototype'],_0x356f48);},_0x4c4772=function(_0x10c950){_0x10c950['forEach'](_0x432b79=>{const _0x4a71dd=a0_0x11e7;_0x432b79[_0x4a71dd(0x204b)][0x0]['map'](_0x37dc95=>{const _0x4f9459=_0x4a71dd;let _0xd1c707=_0x37dc95[_0x4f9459(0x4006)]||'-';return _0x37dc95[_0x4f9459(0x4570)]&&(_0xd1c707='['+_0x37dc95[_0x4f9459(0x4570)]+']'),{'text':_0xd1c707,'tags':'['+Array[_0x4f9459(0x27e6)](_0x37dc95['tags'])[_0x4f9459(0x3541)](',\x20')+']'};});});},_0x495806='\x1b[0m',_0x21db59={'green':_0x4339a0=>_0x37e46c(0x20d7)+_0x4339a0+_0x495806,'red':_0x5b20ca=>_0x37e46c(0x3be6)+_0x5b20ca+_0x495806,'blue':_0x33675e=>'\x1b[34m'+_0x33675e+_0x495806,'magenta':_0x408f59=>_0x37e46c(0x3553)+_0x408f59+_0x495806,'cyan':_0xedd81f=>_0x37e46c(0x4f4f)+_0xedd81f+_0x495806,'yellow':_0xd22854=>_0x37e46c(0x3ba4)+_0xd22854+_0x495806,'black':_0x7455aa=>_0x37e46c(0x45bc)+_0x7455aa+_0x495806,'dim':_0x415c0d=>'\x1b[2m'+_0x415c0d+_0x495806,'i':_0x562c10=>_0x37e46c(0x47a)+_0x562c10+_0x495806},_0x454937=function(_0x26aa1f){const _0xf0652f=_0x37e46c;let 
{docs:_0x8424dd,model:_0x2f81cd}=_0x26aa1f;_0x8424dd['length'],_0x8424dd[_0xf0652f(0xa21)](_0x353a60=>{_0x353a60['forEach'](_0xc79ce2=>{const _0x1a75e1=a0_0x11e7;let _0x387c6=[..._0xc79ce2[_0x1a75e1(0x521a)]||[]],_0x46c829=_0xc79ce2[_0x1a75e1(0x4006)]||'-';_0xc79ce2[_0x1a75e1(0x3b97)]&&(_0x46c829='{'+_0xc79ce2[_0x1a75e1(0x47d)]+'/'+_0xc79ce2['sense']+'}'),_0xc79ce2[_0x1a75e1(0x4570)]&&(_0x46c829='['+_0xc79ce2[_0x1a75e1(0x4570)]+']'),_0x46c829=_0x21db59[_0x1a75e1(0x3ad5)](_0x46c829);let _0x11a126='\x27'+_0x46c829+'\x27';if(_0xc79ce2[_0x1a75e1(0x3d0f)]){let _0x2c8ad6=_0x26aa1f[_0x1a75e1(0x38d6)]([_0xc79ce2[_0x1a75e1(0x3d0f)]])[_0x1a75e1(0x4006)]('normal');_0x11a126+='\x20-\x20'+_0x21db59[_0x1a75e1(0x4fae)](_0x21db59['i']('['+_0x2c8ad6+']'));}_0x11a126=_0x11a126[_0x1a75e1(0xe80)](0x12),(_0x21db59[_0x1a75e1(0x1536)](_0x1a75e1(0x340c)),_0x21db59['i'](_0x11a126),function(_0x3e6b06,_0x3963c3){const _0x4c7c3b=_0x1a75e1;_0x3963c3[_0x4c7c3b(0x1d8a)][_0x4c7c3b(0x3f15)]&&(_0x3e6b06=_0x3e6b06['map'](_0x466018=>{const _0x46d638=_0x4c7c3b;if(!_0x3963c3[_0x46d638(0x1d8a)][_0x46d638(0x3f15)][_0x46d638(0x2427)](_0x466018))return _0x466018;const _0x31fa2e=_0x3963c3[_0x46d638(0x1d8a)][_0x46d638(0x3f15)][_0x466018][_0x46d638(0xe81)]||_0x46d638(0x1536);return _0x21db59[_0x31fa2e](_0x466018);})),_0x3e6b06['join'](',\x20');}(_0x387c6,_0x2f81cd));});});},_0x359180=function(_0x452f3d){const _0x1a039b=_0x37e46c;let {docs:_0x108f7a}=_0x452f3d;_0x108f7a[_0x1a039b(0xa21)](_0x2c9fd8=>{const _0x32f958=_0x1a039b;let _0xbbb8b=[];_0x2c9fd8[_0x32f958(0xa21)](_0x3f2d03=>{const 
_0x3d6fc2=_0x32f958;_0x3d6fc2(0x1786)===_0x3f2d03[_0x3d6fc2(0x1647)]?_0xbbb8b[_0x3d6fc2(0x1715)](_0x21db59[_0x3d6fc2(0x1536)](_0x3f2d03[_0x3d6fc2(0x4570)]||_0x3f2d03['normal'])):_0x3d6fc2(0x487b)===_0x3f2d03[_0x3d6fc2(0x1647)]?_0xbbb8b['push'](_0x21db59[_0x3d6fc2(0x6a2)](_0x3f2d03[_0x3d6fc2(0x4570)]||_0x3f2d03[_0x3d6fc2(0x47d)])):_0x3d6fc2(0x4972)===_0x3f2d03[_0x3d6fc2(0x1647)]?_0xbbb8b[_0x3d6fc2(0x1715)](_0x21db59[_0x3d6fc2(0x3ad5)](_0x3f2d03['implicit']||_0x3f2d03[_0x3d6fc2(0x47d)])):_0x3d6fc2(0x2d7c)===_0x3f2d03[_0x3d6fc2(0x1647)]?_0xbbb8b[_0x3d6fc2(0x1715)](_0x21db59[_0x3d6fc2(0x117e)](_0x3f2d03[_0x3d6fc2(0x4570)]||_0x3f2d03[_0x3d6fc2(0x47d)])):_0xbbb8b['push'](_0x3f2d03[_0x3d6fc2(0x4570)]||_0x3f2d03[_0x3d6fc2(0x47d)]);});});},_0x48c89c=function(_0x3b9b47){const _0x38e33c=_0x37e46c;if(!_0x3b9b47[_0x38e33c(0x2108)])return;let _0x18dc5e={};_0x3b9b47[_0x38e33c(0x34ce)][_0x38e33c(0xa21)](_0x4aa80f=>{_0x18dc5e[_0x4aa80f[0x0]]=_0x18dc5e[_0x4aa80f[0x0]]||[],_0x18dc5e[_0x4aa80f[0x0]]['push'](_0x4aa80f);}),Object[_0x38e33c(0x1ea9)](_0x18dc5e)[_0x38e33c(0xa21)](_0xb1e5d9=>{const _0x22ddfe=_0x38e33c;let _0x1184ba=_0x3b9b47[_0x22ddfe(0x38d6)]([[Number(_0xb1e5d9)]])[_0x22ddfe(0x4006)]();_0x3b9b47[_0x22ddfe(0x38d6)](_0x18dc5e[_0xb1e5d9])[_0x22ddfe(0x3289)]({'offset':!0x0})[_0x22ddfe(0xa21)]((_0x219a32,_0x1e782a)=>{_0x1184ba=function(_0x229e4e,_0x354b97,_0xd0d774){let _0x5aac26=((_0x2b39bb,_0x1381ac,_0x22ae9d)=>{const _0x34b016=a0_0x11e7;let 
_0x113a7=0x9*_0x22ae9d,_0x54e5a2=_0x1381ac['start']+_0x113a7,_0x42935c=_0x54e5a2+_0x1381ac['length'];return[_0x2b39bb[_0x34b016(0x37b5)](0x0,_0x54e5a2),_0x2b39bb[_0x34b016(0x37b5)](_0x54e5a2,_0x42935c),_0x2b39bb[_0x34b016(0x37b5)](_0x42935c,_0x2b39bb[_0x34b016(0x1b19)])];})(_0x229e4e,_0x354b97,_0xd0d774);return''+_0x5aac26[0x0]+_0x21db59['blue'](_0x5aac26[0x1])+_0x5aac26[0x2];}(_0x1184ba,_0x219a32['offset'],_0x1e782a);});});},_0x553be4={'api':_0x596a3c,'methods':{'one':{'hash':_0x6dc6b0,'debug':{'tags':_0x454937,'clientSide':_0x4c4772,'chunks':_0x359180,'highlight':_0x48c89c}}}},_0x191bd4=function(_0x226ab2,_0x41405f){if(_0x226ab2[0x0]!==_0x41405f[0x0])return!0x1;let [,_0x2e7f0c,_0x35604b]=_0x226ab2,[,_0x5b42d0,_0x9c75e]=_0x41405f;return _0x2e7f0c<=_0x5b42d0&&_0x35604b>_0x5b42d0||_0x5b42d0<=_0x2e7f0c&&_0x9c75e>_0x2e7f0c;},_0x396cdd=function(_0x45e4ca){let _0x379938={};return _0x45e4ca['forEach'](_0x28ead9=>{const _0x4caa1d=a0_0x11e7;_0x379938[_0x28ead9[0x0]]=_0x379938[_0x28ead9[0x0]]||[],_0x379938[_0x28ead9[0x0]][_0x4caa1d(0x1715)](_0x28ead9);}),_0x379938;},_0x173a87=function(_0x3b29b5,_0x2b24be){const _0x154ed4=_0x37e46c;let _0x5c7830=_0x396cdd(_0x2b24be),_0x247af6=[];return _0x3b29b5[_0x154ed4(0xa21)](_0x3bc700=>{const _0x4ceb7b=_0x154ed4;let [_0x290268]=_0x3bc700,_0x5091f5=_0x5c7830[_0x290268]||[];if(_0x5091f5=_0x5091f5['filter'](_0x4f0fb4=>function(_0x43da60,_0x3c1d69){return _0x43da60[0x1]<=_0x3c1d69[0x1]&&_0x3c1d69[0x2]<=_0x43da60[0x2];}(_0x3bc700,_0x4f0fb4)),0x0===_0x5091f5['length'])return void _0x247af6[_0x4ceb7b(0x1715)]({'passthrough':_0x3bc700});_0x5091f5=_0x5091f5['sort']((_0x3e0380,_0x56ac76)=>_0x3e0380[0x1]-_0x56ac76[0x1]);let _0x335e1d=_0x3bc700;_0x5091f5[_0x4ceb7b(0xa21)]((_0x4ce258,_0x4b3257)=>{const _0x542b3e=_0x4ceb7b;let _0x96b73d=function(_0x34a431,_0x13157b){const _0x4d7105=a0_0x11e7;let [_0x150de5,_0x315457]=_0x34a431,_0x1a286b=_0x13157b[0x1],_0xbb04c7=_0x13157b[0x2],_0x282c2f={};if(_0x315457<_0x1a286b){let 
_0x252c8b=_0x1a286b<_0x34a431[0x2]?_0x1a286b:_0x34a431[0x2];_0x282c2f[_0x4d7105(0x5097)]=[_0x150de5,_0x315457,_0x252c8b];}return _0x282c2f[_0x4d7105(0x2d96)]=_0x13157b,_0x34a431[0x2]>_0xbb04c7&&(_0x282c2f[_0x4d7105(0x1349)]=[_0x150de5,_0xbb04c7,_0x34a431[0x2]]),_0x282c2f;}(_0x335e1d,_0x4ce258);_0x5091f5[_0x4b3257+0x1]?(_0x247af6[_0x542b3e(0x1715)]({'before':_0x96b73d[_0x542b3e(0x5097)],'match':_0x96b73d['match']}),_0x96b73d['after']&&(_0x335e1d=_0x96b73d['after'])):_0x247af6[_0x542b3e(0x1715)](_0x96b73d);});}),_0x247af6;},_0x27c971=function(_0x5b3593,_0x1cda07){const _0x2c8efb=_0x37e46c;let _0xc1486c=[];return _0x5b3593['forEach']((_0x5e2a2f,_0x7c6c28)=>{const _0x82b120=a0_0x11e7;if(!_0x5e2a2f)return;let [_0x55d9b6,_0x1f6ef9,_0x42bd00,_0x43f2a3,_0x37a75a]=_0x5e2a2f,_0x3dfe53=_0x1cda07[_0x55d9b6]||[];if(void 0x0===_0x1f6ef9&&(_0x1f6ef9=0x0),void 0x0===_0x42bd00&&(_0x42bd00=_0x3dfe53['length']),!_0x43f2a3||_0x3dfe53[_0x1f6ef9]&&_0x3dfe53[_0x1f6ef9]['id']===_0x43f2a3)_0x3dfe53=_0x3dfe53[_0x82b120(0x384c)](_0x1f6ef9,_0x42bd00);else{let _0x4e9eb9=function(_0x583e4f,_0x5e29f3,_0x4cbde2){const _0x234c56=_0x82b120;for(let _0x383608=0x0;_0x383608<0x14;_0x383608+=0x1){if(_0x5e29f3[_0x4cbde2-_0x383608]){let _0x340fe5=_0x5e29f3[_0x4cbde2-_0x383608][_0x234c56(0x59b)](_0x42de45=>_0x42de45['id']===_0x583e4f);if(-0x1!==_0x340fe5)return[_0x4cbde2-_0x383608,_0x340fe5];}if(_0x5e29f3[_0x4cbde2+_0x383608]){let _0x1e71ae=_0x5e29f3[_0x4cbde2+_0x383608]['findIndex'](_0x2b16de=>_0x2b16de['id']===_0x583e4f);if(-0x1!==_0x1e71ae)return[_0x4cbde2+_0x383608,_0x1e71ae];}}return null;}(_0x43f2a3,_0x1cda07,_0x55d9b6);if(null!==_0x4e9eb9){let _0x5d15a6=_0x42bd00-_0x1f6ef9;_0x3dfe53=_0x1cda07[_0x4e9eb9[0x0]][_0x82b120(0x384c)](_0x4e9eb9[0x1],_0x4e9eb9[0x1]+_0x5d15a6);let 
_0x3c813=_0x3dfe53[0x0]?_0x3dfe53[0x0]['id']:null;_0x5b3593[_0x7c6c28]=[_0x4e9eb9[0x0],_0x4e9eb9[0x1],_0x4e9eb9[0x1]+_0x5d15a6,_0x3c813];}}0x0!==_0x3dfe53[_0x82b120(0x1b19)]&&_0x1f6ef9!==_0x42bd00&&(_0x37a75a&&_0x3dfe53[_0x3dfe53[_0x82b120(0x1b19)]-0x1]['id']!==_0x37a75a&&(_0x3dfe53=function(_0x568bd8,_0x3ea4de){const _0x3d48aa=_0x82b120;let [_0x580a39,_0x4174a5,,,_0x373272]=_0x568bd8,_0x5c2f50=_0x3ea4de[_0x580a39],_0x2c07b3=_0x5c2f50[_0x3d48aa(0x59b)](_0x4c8f90=>_0x4c8f90['id']===_0x373272);return-0x1===_0x2c07b3?(_0x568bd8[0x2]=_0x3ea4de[_0x580a39][_0x3d48aa(0x1b19)],_0x568bd8[0x4]=_0x5c2f50[_0x3d48aa(0x1b19)]?_0x5c2f50[_0x5c2f50[_0x3d48aa(0x1b19)]-0x1]['id']:null):_0x568bd8[0x2]=_0x2c07b3,_0x3ea4de[_0x580a39][_0x3d48aa(0x384c)](_0x4174a5,_0x568bd8[0x2]+0x1);}(_0x5e2a2f,_0x1cda07)),_0xc1486c[_0x82b120(0x1715)](_0x3dfe53));}),_0xc1486c=_0xc1486c[_0x2c8efb(0x1465)](_0x1f1370=>_0x1f1370[_0x2c8efb(0x1b19)]>0x0),_0xc1486c;},_0x48b037={'one':{'termList':function(_0x5106ef){const _0x3e61c5=_0x37e46c;let _0x18ae4b=[];for(let _0x27b6ee=0x0;_0x27b6ee<_0x5106ef[_0x3e61c5(0x1b19)];_0x27b6ee+=0x1)for(let _0x2df665=0x0;_0x2df665<_0x5106ef[_0x27b6ee][_0x3e61c5(0x1b19)];_0x2df665+=0x1)_0x18ae4b[_0x3e61c5(0x1715)](_0x5106ef[_0x27b6ee][_0x2df665]);return _0x18ae4b;},'getDoc':_0x27c971,'pointer':{'indexN':_0x396cdd,'splitAll':_0x173a87}}},_0x51fc9a=function(_0x11566f,_0x3f04e0){const _0x43b8d0=_0x37e46c;let _0xbd4f28=_0x11566f[_0x43b8d0(0x1d1d)](_0x3f04e0),_0x23eeee=_0x396cdd(_0xbd4f28),_0x5ee366=[];return _0xbd4f28['forEach'](_0x151a76=>{const _0x4d1e16=_0x43b8d0;let [_0x1f3ce0]=_0x151a76;if(0x1===_0x23eeee[_0x1f3ce0][_0x4d1e16(0x1b19)])return void _0x5ee366['push'](_0x151a76);let _0x1de09f=_0x23eeee[_0x1f3ce0][_0x4d1e16(0x1465)](_0x3aa674=>_0x191bd4(_0x151a76,_0x3aa674));_0x1de09f[_0x4d1e16(0x1715)](_0x151a76);let _0x19f8a3=function(_0x320ed7){let _0x850ae=_0x320ed7[0x0][0x1],_0x43760b=_0x320ed7[0x0][0x2];return 
_0x320ed7['forEach'](_0x5d181e=>{_0x5d181e[0x1]<_0x850ae&&(_0x850ae=_0x5d181e[0x1]),_0x5d181e[0x2]>_0x43760b&&(_0x43760b=_0x5d181e[0x2]);}),[_0x320ed7[0x0][0x0],_0x850ae,_0x43760b];}(_0x1de09f);_0x5ee366['push'](_0x19f8a3);}),_0x5ee366=function(_0x378fdb){const _0x250ddc=_0x43b8d0;let _0x1e9c4d={};for(let _0x255d7f=0x0;_0x255d7f<_0x378fdb[_0x250ddc(0x1b19)];_0x255d7f+=0x1)_0x1e9c4d[_0x378fdb[_0x255d7f][_0x250ddc(0x3541)](',')]=_0x378fdb[_0x255d7f];return Object[_0x250ddc(0x1fae)](_0x1e9c4d);}(_0x5ee366),_0x5ee366;},_0x13dfba=function(_0x2eb32a,_0x27a7ff){const _0x40d877=_0x37e46c;let _0x2b8b72=[];return _0x173a87(_0x2eb32a,_0x27a7ff)[_0x40d877(0xa21)](_0x3fdd1f=>{const _0x52cedf=_0x40d877;_0x3fdd1f[_0x52cedf(0x3054)]&&_0x2b8b72[_0x52cedf(0x1715)](_0x3fdd1f[_0x52cedf(0x3054)]),_0x3fdd1f[_0x52cedf(0x5097)]&&_0x2b8b72[_0x52cedf(0x1715)](_0x3fdd1f[_0x52cedf(0x5097)]),_0x3fdd1f['after']&&_0x2b8b72[_0x52cedf(0x1715)](_0x3fdd1f['after']);}),_0x2b8b72;},_0x3b42ca=function(_0x43fa60,_0x3df033){let _0x4b389e=_0x396cdd(_0x3df033),_0x416ece=[];return _0x43fa60['forEach'](_0x3c29a5=>{const _0x5f1849=a0_0x11e7;let _0x3081b0=_0x4b389e[_0x3c29a5[0x0]]||[];_0x3081b0=_0x3081b0[_0x5f1849(0x1465)](_0xc55a47=>_0x191bd4(_0x3c29a5,_0xc55a47)),0x0!==_0x3081b0[_0x5f1849(0x1b19)]&&_0x3081b0[_0x5f1849(0xa21)](_0x65b57=>{const _0x2db2bc=_0x5f1849;let _0x3e24f0=function(_0x5e789f,_0x121afc){let _0x811991=_0x5e789f[0x1]<_0x121afc[0x1]?_0x121afc[0x1]:_0x5e789f[0x1],_0x409ffd=_0x5e789f[0x2]>_0x121afc[0x2]?_0x121afc[0x2]:_0x5e789f[0x2];return _0x811991<_0x409ffd?[_0x5e789f[0x0],_0x811991,_0x409ffd]:null;}(_0x3c29a5,_0x65b57);_0x3e24f0&&_0x416ece[_0x2db2bc(0x1715)](_0x3e24f0);});}),_0x416ece;},_0x279fb8=(_0x464603,_0x2cb2e4)=>{const _0x34e4a5=_0x37e46c;return _0x34e4a5(0x2431)==typeof _0x464603||(_0x30b84a=_0x464603,_0x34e4a5(0xb2e)===Object[_0x34e4a5(0x3b3c)][_0x34e4a5(0x8e8)][_0x34e4a5(0x236b)](_0x30b84a))?_0x2cb2e4[_0x34e4a5(0x2d96)](_0x464603):_0x464603||_0x2cb2e4[_0x34e4a5(0x28b)]();var 
_0x30b84a;},_0x21bb0f=function(_0x38b508,_0x2db53e){return _0x38b508['map'](_0x456d58=>{let [_0x4a89,_0x98c2ce]=_0x456d58;return _0x2db53e[_0x4a89]&&_0x2db53e[_0x4a89][_0x98c2ce]&&(_0x456d58[0x3]=_0x2db53e[_0x4a89][_0x98c2ce]['id']),_0x456d58;});},_0x56f64a={'union':function(_0x19260f){const _0x351c0c=_0x37e46c;_0x19260f=_0x279fb8(_0x19260f,this);let _0x506965=_0x51fc9a(this[_0x351c0c(0x34ce)],_0x19260f[_0x351c0c(0x34ce)]);return _0x506965=_0x21bb0f(_0x506965,this[_0x351c0c(0x295)]),this[_0x351c0c(0x324a)](_0x506965);}};_0x56f64a[_0x37e46c(0x2663)]=_0x56f64a[_0x37e46c(0x29d)],_0x56f64a[_0x37e46c(0x687)]=function(_0x5b3607){const _0x3c913d=_0x37e46c;_0x5b3607=_0x279fb8(_0x5b3607,this);let _0x4ea06a=_0x3b42ca(this[_0x3c913d(0x34ce)],_0x5b3607[_0x3c913d(0x34ce)]);return _0x4ea06a=_0x21bb0f(_0x4ea06a,this['document']),this[_0x3c913d(0x324a)](_0x4ea06a);},_0x56f64a[_0x37e46c(0xc1a)]=function(_0xde739d){const _0x4f9fae=_0x37e46c;_0xde739d=_0x279fb8(_0xde739d,this);let _0xa07cdd=_0x13dfba(this['fullPointer'],_0xde739d[_0x4f9fae(0x34ce)]);return _0xa07cdd=_0x21bb0f(_0xa07cdd,this[_0x4f9fae(0x295)]),this[_0x4f9fae(0x324a)](_0xa07cdd);},_0x56f64a[_0x37e46c(0x4d09)]=_0x56f64a[_0x37e46c(0xc1a)],_0x56f64a[_0x37e46c(0x3a11)]=function(){const _0x5c53b5=_0x37e46c;let _0x5725e9=this['all'](),_0x421127=_0x13dfba(_0x5725e9[_0x5c53b5(0x34ce)],this[_0x5c53b5(0x34ce)]);return _0x421127=_0x21bb0f(_0x421127,this['document']),this[_0x5c53b5(0x324a)](_0x421127);},_0x56f64a[_0x37e46c(0x2439)]=function(){const _0x18f8ae=_0x37e46c;let _0x46a01a=this['fullPointer'];return _0x46a01a[_0x18f8ae(0xa21)](_0x4650a2=>{_0x46a01a=_0x51fc9a(_0x46a01a,[_0x4650a2]);}),_0x46a01a=_0x21bb0f(_0x46a01a,this[_0x18f8ae(0x295)]),this[_0x18f8ae(0x38d6)](_0x46a01a);};const _0x33bd61=function(_0x31db6e){const _0x30e85c=_0x37e46c;Object[_0x30e85c(0x4e14)](_0x31db6e[_0x30e85c(0x3b3c)],_0x56f64a);},_0x184f3c={'methods':_0x48b037,'api':_0x33bd61},_0x568c4d=function(_0x1f47e4){const 
_0x4a76f4=_0x37e46c;_0x1f47e4[_0x4a76f4(0x3b3c)][_0x4a76f4(0xd6a)]=function(_0x1395ae,_0x13e90a={}){const _0x174f0b=_0x4a76f4,{world:_0x128e6c,docs:_0x1b2eda}=this,{methods:_0x59893e}=_0x128e6c;let _0x20aeb1=_0x59893e[_0x174f0b(0x1d8a)][_0x174f0b(0x893)](_0x1b2eda,_0x1395ae,this['methods'],_0x13e90a);!0x1!==_0x13e90a['tagger']&&_0x59893e[_0x174f0b(0x1d8a)][_0x174f0b(0x382f)](_0x20aeb1,_0x1b2eda,this[_0x174f0b(0x4657)]),_0x20aeb1=_0x20aeb1[_0x174f0b(0x4833)](_0x3f700a=>{const _0x45db41=_0x174f0b;let _0x367bd9=_0x3f700a['pointer'],_0x4b5293=_0x1b2eda[_0x367bd9[0x0]][_0x367bd9[0x1]],_0x3ca253=_0x367bd9[0x2]-_0x367bd9[0x1];return _0x4b5293[_0x45db41(0x3bb5)]&&(_0x3f700a['pointer']=[_0x4b5293['index'][0x0],_0x4b5293[_0x45db41(0x3bb5)][0x1],_0x367bd9[0x1]+_0x3ca253]),_0x3f700a;});let _0x310987=_0x20aeb1['map'](_0xe2742f=>_0xe2742f['pointer']);return _0x20aeb1=_0x20aeb1[_0x174f0b(0x4833)](_0x233a9c=>(_0x233a9c['view']=this[_0x174f0b(0x38d6)]([_0x233a9c['pointer']]),delete _0x233a9c[_0x174f0b(0x2ea5)],delete _0x233a9c[_0x174f0b(0xacb)],delete _0x233a9c[_0x174f0b(0x43e4)],delete _0x233a9c['_expanded'],_0x233a9c)),{'view':this[_0x174f0b(0x38d6)](_0x310987),'found':_0x20aeb1};};},_0x50f209=function(_0x57bfd8){const _0x4b36dc=_0x37e46c;return!0x0===_0x57bfd8['optional']||!0x0===_0x57bfd8['negative']?null:_0x57bfd8[_0x4b36dc(0x15a9)]?'#'+_0x57bfd8['tag']:_0x57bfd8[_0x4b36dc(0x2506)]?_0x57bfd8[_0x4b36dc(0x2506)]:_0x57bfd8[_0x4b36dc(0x857)]?'%'+_0x57bfd8[_0x4b36dc(0x857)]+'%':null;},_0x3b868b=function(_0x4d1d2e,_0x6023b6){const _0x1e9988=_0x37e46c,_0x51ea49=_0x6023b6['methods']['one'][_0x1e9988(0x2407)];return _0x4d1d2e[_0x1e9988(0xa21)](_0x418dd6=>{const _0x36fe31=_0x1e9988;_0x418dd6[_0x36fe31(0x2ea5)]=_0x51ea49(_0x418dd6['match'],{},_0x6023b6),_0x36fe31(0x2431)==typeof 
_0x418dd6[_0x36fe31(0x385b)]&&(_0x418dd6[_0x36fe31(0x385b)]=[_0x418dd6[_0x36fe31(0x385b)]]),_0x418dd6[_0x36fe31(0x45f9)]&&(_0x418dd6[_0x36fe31(0x45f9)]=_0x51ea49(_0x418dd6['notIf'],{},_0x6023b6)),_0x418dd6['needs']=function(_0x507621){let _0x41f72f=[];return _0x507621['forEach'](_0x222efc=>{const _0x29cd9f=a0_0x11e7;_0x41f72f[_0x29cd9f(0x1715)](_0x50f209(_0x222efc)),'and'===_0x222efc[_0x29cd9f(0x1182)]&&_0x222efc[_0x29cd9f(0x7c8)]&&_0x222efc[_0x29cd9f(0x7c8)][_0x29cd9f(0xa21)](_0x5e9f13=>{const _0x27d804=_0x29cd9f;_0x5e9f13[_0x27d804(0xa21)](_0x5b6135=>{const _0x5cc32e=_0x27d804;_0x41f72f[_0x5cc32e(0x1715)](_0x50f209(_0x5b6135));});});}),_0x41f72f['filter'](_0x52f62d=>_0x52f62d);}(_0x418dd6[_0x36fe31(0x2ea5)]);let {wants:_0x1a527e,count:_0x59c129}=function(_0x346fa6){const _0x55b528=_0x36fe31;let _0x1aba8c=[],_0x3b97f2=0x0;return _0x346fa6[_0x55b528(0xa21)](_0x1c9fe2=>{const _0x3eeaa8=_0x55b528;'or'!==_0x1c9fe2[_0x3eeaa8(0x1182)]||_0x1c9fe2['optional']||_0x1c9fe2[_0x3eeaa8(0x298)]||(_0x1c9fe2[_0x3eeaa8(0x36fe)]&&Array['from'](_0x1c9fe2[_0x3eeaa8(0x36fe)])[_0x3eeaa8(0xa21)](_0xd8bbe5=>{const _0x14673d=_0x3eeaa8;_0x1aba8c[_0x14673d(0x1715)](_0xd8bbe5);}),_0x1c9fe2[_0x3eeaa8(0x7c8)]&&_0x1c9fe2[_0x3eeaa8(0x7c8)][_0x3eeaa8(0xa21)](_0x306864=>{_0x306864['forEach'](_0x5af37d=>{const _0x88cb18=a0_0x11e7;let _0x4ea90b=_0x50f209(_0x5af37d);_0x4ea90b&&_0x1aba8c[_0x88cb18(0x1715)](_0x4ea90b);});}),_0x3b97f2+=0x1);}),{'wants':_0x1aba8c,'count':_0x3b97f2};}(_0x418dd6[_0x36fe31(0x2ea5)]);_0x418dd6[_0x36fe31(0x22fa)]=_0x1a527e,_0x418dd6['minWant']=_0x59c129,_0x418dd6[_0x36fe31(0x4a7c)]=_0x418dd6[_0x36fe31(0x2ea5)]['filter'](_0x5df51f=>!_0x5df51f[_0x36fe31(0x51e4)])['length'];}),_0x4d1d2e;},_0x2728fe=function(_0xa36ed2,_0x2d5631){const _0xf116e0=_0x37e46c;_0xa36ed2=_0x3b868b(_0xa36ed2,_0x2d5631);let _0x29b241={};_0xa36ed2[_0xf116e0(0xa21)](_0x3c9b35=>{const _0xbd3222=_0xf116e0;_0x3c9b35['needs']['forEach'](_0x320a74=>{const 
_0xa16904=a0_0x11e7;_0x29b241[_0x320a74]=Array[_0xa16904(0x22b4)](_0x29b241[_0x320a74])?_0x29b241[_0x320a74]:[],_0x29b241[_0x320a74][_0xa16904(0x1715)](_0x3c9b35);}),_0x3c9b35['wants'][_0xbd3222(0xa21)](_0x950a31=>{const _0x5d5273=_0xbd3222;_0x29b241[_0x950a31]=Array['isArray'](_0x29b241[_0x950a31])?_0x29b241[_0x950a31]:[],_0x29b241[_0x950a31][_0x5d5273(0x1715)](_0x3c9b35);});}),Object[_0xf116e0(0x1ea9)](_0x29b241)['forEach'](_0x5e7066=>{const _0x2dd1a0=_0xf116e0;let _0x405d89={};_0x29b241[_0x5e7066]=_0x29b241[_0x5e7066][_0x2dd1a0(0x1465)](_0x2d5975=>'boolean'!=typeof _0x405d89[_0x2d5975[_0x2dd1a0(0x2d96)]]&&(_0x405d89[_0x2d5975['match']]=!0x0,!0x0));});let _0x1b45cd=_0xa36ed2['filter'](_0x34d937=>0x0===_0x34d937[_0xf116e0(0xacb)]['length']&&0x0===_0x34d937[_0xf116e0(0x22fa)][_0xf116e0(0x1b19)]);return{'hooks':_0x29b241,'always':_0x1b45cd};},_0x5a216d=function(_0x20bf6e,_0x1d7a68){const _0x3851dc=_0x37e46c;return _0x20bf6e[_0x3851dc(0x4833)]((_0x345c0f,_0x57f532)=>{const _0x2fa378=_0x3851dc;let _0x5aed46=[];Object[_0x2fa378(0x1ea9)](_0x1d7a68)[_0x2fa378(0xa21)](_0x4c452d=>{const _0x500c68=_0x2fa378;_0x20bf6e[_0x57f532][_0x500c68(0x3170)](_0x4c452d)&&(_0x5aed46=_0x5aed46[_0x500c68(0x1d1d)](_0x1d7a68[_0x4c452d]));});let _0x174255={};return _0x5aed46=_0x5aed46[_0x2fa378(0x1465)](_0x392064=>_0x2fa378(0x1e8d)!=typeof _0x174255[_0x392064['match']]&&(_0x174255[_0x392064[_0x2fa378(0x2d96)]]=!0x0,!0x0)),_0x5aed46;});},_0x491f76=function(_0x43db9e,_0x243b4f){const _0x21a596=_0x37e46c;return _0x43db9e[_0x21a596(0x4833)]((_0x5c12c1,_0x91bef0)=>{const _0x5c9be2=_0x21a596;let _0x58564f=_0x243b4f[_0x91bef0];return _0x5c12c1=(_0x5c12c1=(_0x5c12c1=_0x5c12c1[_0x5c9be2(0x1465)](_0x44403e=>_0x44403e[_0x5c9be2(0xacb)][_0x5c9be2(0x12d8)](_0x773b42=>_0x58564f['has'](_0x773b42))))[_0x5c9be2(0x1465)](_0x18c4bb=>void 0x0===_0x18c4bb[_0x5c9be2(0x385b)]||!0x0!==_0x18c4bb[_0x5c9be2(0x385b)][_0x5c9be2(0x363a)](_0x17db8b=>_0x58564f['has'](_0x17db8b))))[_0x5c9be2(0x1465)](_0xf34d1a=>{const 
_0x4e9f9f=_0x5c9be2;if(0x0===_0xf34d1a[_0x4e9f9f(0x22fa)][_0x4e9f9f(0x1b19)])return!0x0;return _0xf34d1a[_0x4e9f9f(0x22fa)]['filter'](_0x3a1ef6=>_0x58564f[_0x4e9f9f(0x3170)](_0x3a1ef6))[_0x4e9f9f(0x1b19)]>=_0xf34d1a[_0x4e9f9f(0x2344)];});});},_0x5c4007=function(_0xe2873b,_0x468a09,_0x54d180,_0x2be0c9,_0x2954fd){const _0xf5adaf=_0x37e46c;let _0x528e28=[];for(let _0x1f08ca=0x0;_0x1f08ca<_0xe2873b[_0xf5adaf(0x1b19)];_0x1f08ca+=0x1)for(let _0x5d5288=0x0;_0x5d5288<_0xe2873b[_0x1f08ca][_0xf5adaf(0x1b19)];_0x5d5288+=0x1){let _0x35ccad=_0xe2873b[_0x1f08ca][_0x5d5288],_0x448ff4=_0x2be0c9[_0xf5adaf(0x1d8a)][_0xf5adaf(0x2d96)]([_0x468a09[_0x1f08ca]],_0x35ccad);if(_0x448ff4['ptrs'][_0xf5adaf(0x1b19)]>0x0&&(_0x448ff4[_0xf5adaf(0x232)][_0xf5adaf(0xa21)](_0x4dc1b7=>{const _0x3de7db=_0xf5adaf;_0x4dc1b7[0x0]=_0x1f08ca;let _0x526635=Object[_0x3de7db(0x4e14)]({},_0x35ccad,{'pointer':_0x4dc1b7});void 0x0!==_0x35ccad['unTag']&&(_0x526635['unTag']=_0x35ccad[_0x3de7db(0x1b1a)]),_0x528e28[_0x3de7db(0x1715)](_0x526635);}),!0x0===_0x2954fd[_0xf5adaf(0x4b95)]))return[_0x528e28[0x0]];}return _0x528e28;},_0x5613f4=function(_0x319487,_0x4dc953,_0xbf7f81,_0x551876={}){const _0x307645=_0x37e46c;let _0x3f5909=_0xbf7f81['one'][_0x307645(0x4f66)](_0x319487),_0x489b8b=_0x5a216d(_0x3f5909,_0x4dc953[_0x307645(0x1889)]);return _0x489b8b=_0x491f76(_0x489b8b,_0x3f5909,_0x319487),_0x4dc953[_0x307645(0x4505)][_0x307645(0x1b19)]>0x0&&(_0x489b8b=_0x489b8b['map'](_0x32dd45=>_0x32dd45[_0x307645(0x1d1d)](_0x4dc953[_0x307645(0x4505)]))),_0x489b8b=function(_0x12e5a5,_0x4c26ef){const _0x48878c=_0x307645;return _0x12e5a5[_0x48878c(0x4833)]((_0x58e9ba,_0x5c943c)=>{const _0x41d60b=_0x48878c;let _0x32282e=_0x4c26ef[_0x5c943c][_0x41d60b(0x1b19)];return _0x58e9ba=_0x58e9ba[_0x41d60b(0x1465)](_0xd7732c=>_0x32282e>=_0xd7732c[_0x41d60b(0x4a7c)]),_0x58e9ba;});}(_0x489b8b,_0x319487),_0x5c4007(_0x489b8b,_0x319487,_0x3f5909,_0xbf7f81,_0x551876);},_0x423d5d=function(_0x4d527a,_0x2ba6b3,_0x3dcae2){const _0x38e512=_0x37e46c;let 
_0x525083=_0x3dcae2[_0x38e512(0x1d8a)][_0x38e512(0x3f15)];if(!_0x525083[_0x38e512(0x2427)](_0x2ba6b3))return!0x0;let _0x9b8e12=_0x525083[_0x2ba6b3][_0x38e512(0xc1a)]||[];for(let _0x5be59b=0x0;_0x5be59b<_0x4d527a[_0x38e512(0x1b19)];_0x5be59b+=0x1){let _0x302e45=_0x4d527a[_0x5be59b];for(let _0x1fbb90=0x0;_0x1fbb90<_0x9b8e12['length'];_0x1fbb90+=0x1)if(!0x0===_0x302e45[_0x38e512(0x521a)][_0x38e512(0x3170)](_0x9b8e12[_0x1fbb90]))return!0x1;}return!0x0;},_0x5221e9=function(_0x201e22,_0x287ffe,_0x382806){const _0x5e907b=_0x37e46c,{model:_0x2165b8,methods:_0x16bda1}=_0x382806,{getDoc:_0x2ae071,setTag:_0x539de0,unTag:_0x16d57c}=_0x16bda1[_0x5e907b(0x1d8a)],_0x36cf2e=_0x16bda1[_0x5e907b(0x21c9)][_0x5e907b(0x37cb)];if(0x0===_0x201e22[_0x5e907b(0x1b19)])return _0x201e22;return(_0x5e907b(0x1daa)!=typeof process&&process['env']?process[_0x5e907b(0xe1a)]:self['env']||{})[_0x5e907b(0x3a92)],_0x201e22[_0x5e907b(0x4833)](_0x3a85ed=>{const _0x17f9fd=_0x5e907b;if(!_0x3a85ed[_0x17f9fd(0x15a9)]&&!_0x3a85ed[_0x17f9fd(0x1647)]&&!_0x3a85ed[_0x17f9fd(0x1b1a)])return;let _0x589bb2=_0x3a85ed[_0x17f9fd(0x39fd)]||_0x3a85ed[_0x17f9fd(0x2d96)],_0x11af84=_0x2ae071([_0x3a85ed[_0x17f9fd(0x43e4)]],_0x287ffe)[0x0];if(!0x0===_0x3a85ed[_0x17f9fd(0x569)]){if(!0x1===_0x423d5d(_0x11af84,_0x3a85ed[_0x17f9fd(0x15a9)],_0x2165b8))return;if('-'===_0x11af84[_0x11af84['length']-0x1][_0x17f9fd(0x24ce)])return;}if(void 0x0!==_0x3a85ed[_0x17f9fd(0x15a9)]){if(_0x539de0(_0x11af84,_0x3a85ed[_0x17f9fd(0x15a9)],_0x382806,_0x3a85ed['safe'],_0x17f9fd(0x33cc)+_0x589bb2+'\x27'),'Noun'===_0x3a85ed[_0x17f9fd(0x15a9)]&&_0x36cf2e){let _0x14992e=_0x11af84[_0x11af84['length']-0x1];_0x36cf2e(_0x14992e[_0x17f9fd(0x4006)])?_0x539de0([_0x14992e],_0x17f9fd(0x25f7),_0x382806,_0x3a85ed['safe'],_0x17f9fd(0x1638)):_0x539de0([_0x14992e],_0x17f9fd(0x1e9f),_0x382806,_0x3a85ed['safe'],_0x17f9fd(0x31f6));}!0x0===_0x3a85ed[_0x17f9fd(0x209c)]&&_0x11af84[_0x17f9fd(0xa21)](_0x2f6478=>_0x2f6478[_0x17f9fd(0xf75)]=!0x0);}void 
0x0!==_0x3a85ed[_0x17f9fd(0x1b1a)]&&_0x16d57c(_0x11af84,_0x3a85ed[_0x17f9fd(0x1b1a)],_0x382806,_0x3a85ed[_0x17f9fd(0x569)],_0x589bb2),_0x3a85ed['chunk']&&_0x11af84['forEach'](_0x3e3538=>_0x3e3538[_0x17f9fd(0x1647)]=_0x3a85ed[_0x17f9fd(0x1647)]);});},_0x55e89d={'lib':{'buildNet':function(_0xc375a1){const _0x42dd68=_0x37e46c;let _0x2453b7=this[_0x42dd68(0x1578)]()[_0x42dd68(0x1d8a)]['buildNet'](_0xc375a1,this[_0x42dd68(0x4657)]());return _0x2453b7[_0x42dd68(0x409b)]=!0x0,_0x2453b7;}},'api':_0x568c4d,'methods':{'one':{'buildNet':_0x2728fe,'bulkMatch':_0x5613f4,'bulkTagger':_0x5221e9}}},_0x5a20ff=/ /,_0x5f1ff5=function(_0x2b51e7,_0x11d2ef){const _0xb4f47b=_0x37e46c;_0xb4f47b(0x1786)===_0x11d2ef&&(_0x2b51e7[_0xb4f47b(0x1647)]=_0x11d2ef),_0xb4f47b(0x487b)===_0x11d2ef&&(_0x2b51e7[_0xb4f47b(0x1647)]=_0x11d2ef);},_0x126410=function(_0x3c32e9,_0xdc423c,_0x58e6aa,_0x130d91){const _0x54e522=_0x37e46c;if(!0x0===_0x3c32e9['tags'][_0x54e522(0x3170)](_0xdc423c))return null;if('.'===_0xdc423c)return null;!0x0===_0x3c32e9[_0x54e522(0xf75)]&&(_0x130d91=!0x0);let _0x4f69cd=_0x58e6aa[_0xdc423c];if(_0x4f69cd){if(_0x4f69cd[_0x54e522(0xc1a)]&&_0x4f69cd[_0x54e522(0xc1a)]['length']>0x0)for(let _0x31b146=0x0;_0x31b146<_0x4f69cd[_0x54e522(0xc1a)][_0x54e522(0x1b19)];_0x31b146+=0x1){if(!0x0===_0x130d91&&_0x3c32e9['tags'][_0x54e522(0x3170)](_0x4f69cd[_0x54e522(0xc1a)][_0x31b146]))return null;_0x3c32e9[_0x54e522(0x521a)][_0x54e522(0x5be)](_0x4f69cd[_0x54e522(0xc1a)][_0x31b146]);}if(_0x4f69cd['parents']&&_0x4f69cd['parents'][_0x54e522(0x1b19)]>0x0){for(let _0x525c8f=0x0;_0x525c8f<_0x4f69cd[_0x54e522(0x1ed8)][_0x54e522(0x1b19)];_0x525c8f+=0x1)_0x3c32e9[_0x54e522(0x521a)]['add'](_0x4f69cd[_0x54e522(0x1ed8)][_0x525c8f]),_0x5f1ff5(_0x3c32e9,_0x4f69cd[_0x54e522(0x1ed8)][_0x525c8f]);}}return _0x3c32e9[_0x54e522(0x521a)][_0x54e522(0x362c)](_0xdc423c),_0x3c32e9['dirty']=!0x0,_0x5f1ff5(_0x3c32e9,_0xdc423c),!0x0;},_0x1e8c87=function(_0xe69570,_0x4fde5d,_0x34d5aa={},_0x1f546f,_0x2aca81){const 
_0x3e847a=_0x37e46c,_0x1ccd71=_0x34d5aa[_0x3e847a(0x1556)]['one'][_0x3e847a(0x3f15)]||{};if(!_0x4fde5d)return;const _0x2d8017=_0x3e847a(0x1daa)!=typeof process&&process[_0x3e847a(0xe1a)]?process[_0x3e847a(0xe1a)]:self[_0x3e847a(0xe1a)]||{};var _0x28ed23;if(_0x2d8017&&_0x2d8017['DEBUG_TAGS']&&((_0x498295,_0x5e5145,_0x5c5aa3='')=>{const _0x4ee99e=_0x3e847a;_0x498295[_0x4ee99e(0x4833)](_0x23f05a=>_0x23f05a[_0x4ee99e(0x4006)]||'['+_0x23f05a[_0x4ee99e(0x4570)]+']')['join']('\x20'),_0x4ee99e(0x2431)!=typeof _0x5e5145&&_0x5e5145[_0x4ee99e(0x1b19)]>0x2&&(_0x5e5145=_0x5e5145[_0x4ee99e(0x384c)](0x0,0x2)[_0x4ee99e(0x3541)](',\x20#')+'\x20+'),_0x5e5145=_0x4ee99e(0x2431)!=typeof _0x5e5145?_0x5e5145[_0x4ee99e(0x3541)](_0x4ee99e(0x386c)):_0x5e5145;})(_0xe69570,_0x4fde5d,_0x2aca81),!0x0!=(_0x28ed23=_0x4fde5d,_0x3e847a(0xb2e)===Object[_0x3e847a(0x3b3c)]['toString']['call'](_0x28ed23))){if(_0x3e847a(0x2431)==typeof _0x4fde5d){if(_0x4fde5d=_0x4fde5d[_0x3e847a(0x1b23)](),_0x5a20ff['test'](_0x4fde5d))!function(_0x18c807,_0x46294f,_0x50d16d,_0x56d910){const _0x3f93f3=_0x3e847a;let _0x271cfe=_0x46294f['split'](_0x5a20ff);_0x18c807[_0x3f93f3(0xa21)]((_0xc06cab,_0x4f2dcd)=>{const _0x3320a9=_0x3f93f3;let _0xb97b6=_0x271cfe[_0x4f2dcd];_0xb97b6&&(_0xb97b6=_0xb97b6[_0x3320a9(0x741)](/^#/,''),_0x126410(_0xc06cab,_0xb97b6,_0x50d16d,_0x56d910));});}(_0xe69570,_0x4fde5d,_0x1ccd71,_0x1f546f);else{_0x4fde5d=_0x4fde5d[_0x3e847a(0x741)](/^#/,'');for(let _0x1bd943=0x0;_0x1bd943<_0xe69570['length'];_0x1bd943+=0x1)_0x126410(_0xe69570[_0x1bd943],_0x4fde5d,_0x1ccd71,_0x1f546f);}}}else _0x4fde5d[_0x3e847a(0xa21)](_0x3408b2=>_0x1e8c87(_0xe69570,_0x3408b2,_0x34d5aa,_0x1f546f));},_0x2b4893=_0x1e8c87,_0x173f14=function(_0x4f203b,_0x4b8a2c,_0x5960ed){const _0x5a223b=_0x37e46c;_0x4b8a2c=_0x4b8a2c[_0x5a223b(0x1b23)]()[_0x5a223b(0x741)](/^#/,'');for(let _0x51771d=0x0;_0x51771d<_0x4f203b[_0x5a223b(0x1b19)];_0x51771d+=0x1){let 
_0xed829=_0x4f203b[_0x51771d];if(!0x0===_0xed829[_0x5a223b(0xf75)])continue;if('*'===_0x4b8a2c){_0xed829['tags'][_0x5a223b(0x4933)]();continue;}let _0x421841=_0x5960ed[_0x4b8a2c];if(_0x421841&&_0x421841[_0x5a223b(0x4c3e)][_0x5a223b(0x1b19)]>0x0){for(let _0x425d9e=0x0;_0x425d9e<_0x421841[_0x5a223b(0x4c3e)]['length'];_0x425d9e+=0x1)_0xed829[_0x5a223b(0x521a)][_0x5a223b(0x5be)](_0x421841[_0x5a223b(0x4c3e)][_0x425d9e]);}_0xed829['tags']['delete'](_0x4b8a2c);}},_0x527bd2=function(_0x310db5,_0x3ab680,_0x276b9a){const _0x440871=_0x37e46c;if(!_0x276b9a['hasOwnProperty'](_0x3ab680))return!0x0;let _0x57d37b=_0x276b9a[_0x3ab680][_0x440871(0xc1a)]||[];for(let _0x4ecf00=0x0;_0x4ecf00<_0x57d37b[_0x440871(0x1b19)];_0x4ecf00+=0x1)if(_0x310db5[_0x440871(0x521a)]['has'](_0x57d37b[_0x4ecf00]))return!0x1;return!0x0;},_0xafc2a6=function(_0x4b5f1d){const _0x5f471e=_0x37e46c;return _0x4b5f1d[_0x5f471e(0x4c3e)]=_0x4b5f1d[_0x5f471e(0x4c3e)]||[],_0x4b5f1d[_0x5f471e(0x1aa7)]=_0x4b5f1d[_0x5f471e(0x1aa7)]||{},_0x4b5f1d['props']=_0x4b5f1d[_0x5f471e(0xbf7)]||{},_0x4b5f1d[_0x5f471e(0x1aa7)][_0x5f471e(0x1ed8)]=_0x4b5f1d[_0x5f471e(0x1aa7)]['parents']||[],_0x4b5f1d['_cache'][_0x5f471e(0x4c3e)]=_0x4b5f1d[_0x5f471e(0x1aa7)][_0x5f471e(0x4c3e)]||[],_0x4b5f1d;},_0x4bb286=/^ *(#|\/\/)/,_0x4c36ba=function(_0x3c2c86){const _0x1072d3=_0x37e46c;let _0x21932e=_0x3c2c86['trim']()[_0x1072d3(0x1117)](/->/),_0x3d101c=[];_0x21932e[_0x1072d3(0xa21)](_0x1ea417=>{_0x3d101c=_0x3d101c['concat'](function(_0x5d356c){const _0x65b9a2=a0_0x11e7;if(!(_0x5d356c=_0x5d356c[_0x65b9a2(0x1b23)]()))return null;if(/^\[/[_0x65b9a2(0x1769)](_0x5d356c)&&/\]$/[_0x65b9a2(0x1769)](_0x5d356c)){let _0x1be801=(_0x5d356c=(_0x5d356c=_0x5d356c[_0x65b9a2(0x741)](/^\[/,''))[_0x65b9a2(0x741)](/\]$/,''))['split'](/,/);return 
_0x1be801=_0x1be801[_0x65b9a2(0x4833)](_0x267145=>_0x267145[_0x65b9a2(0x1b23)]())[_0x65b9a2(0x1465)](_0x2bdf54=>_0x2bdf54),_0x1be801=_0x1be801[_0x65b9a2(0x4833)](_0x10fd23=>_0xafc2a6({'id':_0x10fd23})),_0x1be801;}return[_0xafc2a6({'id':_0x5d356c})];}(_0x1ea417));}),_0x3d101c=_0x3d101c[_0x1072d3(0x1465)](_0x6d9f80=>_0x6d9f80);let _0x2e765c=_0x3d101c[0x0];for(let _0x2f728d=0x1;_0x2f728d<_0x3d101c[_0x1072d3(0x1b19)];_0x2f728d+=0x1)_0x2e765c['children'][_0x1072d3(0x1715)](_0x3d101c[_0x2f728d]),_0x2e765c=_0x3d101c[_0x2f728d];return _0x3d101c[0x0];},_0x3b3748=(_0xc2518c,_0x41063a)=>{const _0x365ee2=_0x37e46c;let _0x5e7b22=[],_0x5a2edd=[_0xc2518c];for(;_0x5a2edd[_0x365ee2(0x1b19)]>0x0;){let _0x13328d=_0x5a2edd[_0x365ee2(0x3d35)]();_0x5e7b22[_0x365ee2(0x1715)](_0x13328d),_0x13328d[_0x365ee2(0x4c3e)]&&_0x13328d[_0x365ee2(0x4c3e)][_0x365ee2(0xa21)](_0x4188b6=>{const _0x26d9cd=_0x365ee2;_0x41063a&&_0x41063a(_0x13328d,_0x4188b6),_0x5a2edd[_0x26d9cd(0x1715)](_0x4188b6);});}return _0x5e7b22;},_0x1bfc10=_0x115fe5=>_0x37e46c(0xb2e)===Object['prototype'][_0x37e46c(0x8e8)]['call'](_0x115fe5),_0x55b650=_0x134475=>(_0x134475=_0x134475||'')[_0x37e46c(0x1b23)](),_0x4c6be4=function(_0x57cde1=[]){const _0x359138=_0x37e46c;return _0x359138(0x2431)==typeof _0x57cde1?function(_0x4aa064){const _0x312f9e=_0x359138;let _0xc20c3e=_0x4aa064[_0x312f9e(0x1117)](/\r?\n/),_0x4645fd=[];_0xc20c3e[_0x312f9e(0xa21)](_0x17a1ae=>{const _0x2f5ec0=_0x312f9e;if(!_0x17a1ae[_0x2f5ec0(0x1b23)]()||_0x4bb286[_0x2f5ec0(0x1769)](_0x17a1ae))return;let _0x37264a=(_0x48982b=>{const _0x1c8924=_0x2f5ec0,_0x2c4ff9=/^( {2}|\t)/;let _0x163e51=0x0;for(;_0x2c4ff9['test'](_0x48982b);)_0x48982b=_0x48982b[_0x1c8924(0x741)](_0x2c4ff9,''),_0x163e51+=0x1;return _0x163e51;})(_0x17a1ae);_0x4645fd['push']({'indent':_0x37264a,'node':_0x4c36ba(_0x17a1ae)});});let _0xfd7385=function(_0x48dc6b){const _0x2c837e=_0x312f9e;let _0x40cb0f={'children':[]};return _0x48dc6b[_0x2c837e(0xa21)]((_0x50acb3,_0x128ed8)=>{const 
_0x3a642e=_0x2c837e;0x0===_0x50acb3[_0x3a642e(0x2a39)]?_0x40cb0f[_0x3a642e(0x4c3e)]=_0x40cb0f[_0x3a642e(0x4c3e)][_0x3a642e(0x1d1d)](_0x50acb3['node']):_0x48dc6b[_0x128ed8-0x1]&&function(_0x387878,_0x2b377e){let _0x54b12a=_0x387878[_0x2b377e]['indent'];for(;_0x2b377e>=0x0;_0x2b377e-=0x1)if(_0x387878[_0x2b377e]['indent']<_0x54b12a)return _0x387878[_0x2b377e];return _0x387878[0x0];}(_0x48dc6b,_0x128ed8)[_0x3a642e(0x13c6)]['children'][_0x3a642e(0x1715)](_0x50acb3[_0x3a642e(0x13c6)]);}),_0x40cb0f;}(_0x4645fd);return _0xfd7385=_0xafc2a6(_0xfd7385),_0xfd7385;}(_0x57cde1):_0x1bfc10(_0x57cde1)?function(_0xa6517f){const _0x8da948=_0x359138;let _0x50c133={};_0xa6517f['forEach'](_0x410e7f=>{_0x50c133[_0x410e7f['id']]=_0x410e7f;});let _0x1fa9db=_0xafc2a6({});return _0xa6517f[_0x8da948(0xa21)](_0x133023=>{const _0x42beff=_0x8da948;if((_0x133023=_0xafc2a6(_0x133023))['parent']){if(_0x50c133[_0x42beff(0x2427)](_0x133023[_0x42beff(0x46f8)])){let _0x4d81fe=_0x50c133[_0x133023[_0x42beff(0x46f8)]];delete _0x133023[_0x42beff(0x46f8)],_0x4d81fe['children'][_0x42beff(0x1715)](_0x133023);}}else _0x1fa9db[_0x42beff(0x4c3e)][_0x42beff(0x1715)](_0x133023);}),_0x1fa9db;}(_0x57cde1):(_0x3b3748(_0x330b4f=_0x57cde1)[_0x359138(0xa21)](_0xafc2a6),_0x330b4f);var _0x330b4f;},_0x31096e=function(_0x3760ac,_0x407282){const _0x4c7bd4=_0x37e46c;let _0x45304a=_0x4c7bd4(0xf45);_0x407282&&(_0x45304a=(_0x46b9fb=>_0x4c7bd4(0x3f77)+_0x46b9fb+_0x4c7bd4(0x239))('→\x20'));let _0x3dfb94='';return _0x3b3748(_0x3760ac)[_0x4c7bd4(0xa21)]((_0x1c35e7,_0x25039f)=>{const _0x399348=_0x4c7bd4;let _0x3f1201=_0x1c35e7['id']||'';if(_0x407282&&(_0x3f1201=(_0x55aecb=>_0x399348(0x3be6)+_0x55aecb+_0x399348(0x239))(_0x3f1201)),0x0===_0x25039f&&!_0x1c35e7['id'])return;let _0xc64a32=_0x1c35e7['_cache'][_0x399348(0x1ed8)][_0x399348(0x1b19)];_0x3dfb94+=_0x399348(0x1120)['repeat'](_0xc64a32)+_0x45304a+_0x3f1201+'\x0a';}),_0x3dfb94;},_0x5dcb3a=function(_0x3a2838){const _0xf2413f=_0x37e46c;let 
_0x5d72f0=_0x3b3748(_0x3a2838);_0x5d72f0[_0xf2413f(0xa21)](_0x3641ee=>{const _0x49f64a=_0xf2413f;delete(_0x3641ee=Object[_0x49f64a(0x4e14)]({},_0x3641ee))[_0x49f64a(0x4c3e)];});let _0x41210a=_0x5d72f0[0x0];return _0x41210a&&!_0x41210a['id']&&0x0===Object[_0xf2413f(0x1ea9)](_0x41210a['props'])[_0xf2413f(0x1b19)]&&_0x5d72f0[_0xf2413f(0x34fe)](),_0x5d72f0;},_0x4d2cdf={'text':_0x31096e,'txt':_0x31096e,'array':_0x5dcb3a,'flat':_0x5dcb3a},_0x3a9288=function(_0x17e143,_0x17aebd){const _0x3c3c28=_0x37e46c;return'nested'===_0x17aebd||_0x3c3c28(0x3289)===_0x17aebd?_0x17e143:_0x3c3c28(0x534)===_0x17aebd?null:_0x4d2cdf['hasOwnProperty'](_0x17aebd)?_0x4d2cdf[_0x17aebd](_0x17e143):_0x17e143;},_0x24f8a6=_0x47f741=>{_0x3b3748(_0x47f741,(_0x41c53c,_0x51a280)=>{const _0x597f47=a0_0x11e7;_0x41c53c['id']&&(_0x41c53c[_0x597f47(0x1aa7)][_0x597f47(0x1ed8)]=_0x41c53c[_0x597f47(0x1aa7)][_0x597f47(0x1ed8)]||[],_0x51a280[_0x597f47(0x1aa7)][_0x597f47(0x1ed8)]=_0x41c53c[_0x597f47(0x1aa7)][_0x597f47(0x1ed8)][_0x597f47(0x1d1d)]([_0x41c53c['id']]));});},_0x2b85cc=/\//;class _0x1bfd28{constructor(_0x421d58={}){const _0x10465f=_0x37e46c;Object[_0x10465f(0x6f7)](this,_0x10465f(0x3289),{'enumerable':!0x1,'value':_0x421d58,'writable':!0x0});}get[_0x37e46c(0x4c3e)](){const _0x3d62aa=_0x37e46c;return this[_0x3d62aa(0x3289)][_0x3d62aa(0x4c3e)];}get['id'](){const _0x4a5305=_0x37e46c;return this[_0x4a5305(0x3289)]['id'];}get[_0x37e46c(0x2108)](){const _0x14f2f0=_0x37e46c;return this[_0x14f2f0(0x3289)]['id']||this[_0x14f2f0(0x3289)]['children'][_0x14f2f0(0x1b19)]>0x0;}[_0x37e46c(0xbf7)](_0x11fea2={}){const _0x29d1af=_0x37e46c;let _0x4ef082=this[_0x29d1af(0x3289)]['props']||{};return'string'==typeof _0x11fea2&&(_0x4ef082[_0x11fea2]=!0x0),this[_0x29d1af(0x3289)][_0x29d1af(0xbf7)]=Object[_0x29d1af(0x4e14)](_0x4ef082,_0x11fea2),this;}['get'](_0x3aed88){const _0x68ab1e=_0x37e46c;if(_0x3aed88=_0x55b650(_0x3aed88),!_0x2b85cc['test'](_0x3aed88)){let 
_0x5e3f59=this[_0x68ab1e(0x3289)][_0x68ab1e(0x4c3e)][_0x68ab1e(0x5144)](_0x3165cd=>_0x3165cd['id']===_0x3aed88);return new _0x1bfd28(_0x5e3f59);}let _0x521808=((_0x422d0d,_0x2c5d6b)=>{const _0x5223d2=_0x68ab1e;let _0x37f2a7=(_0x26d66f=>_0x5223d2(0x2431)!=typeof _0x26d66f?_0x26d66f:(_0x26d66f=_0x26d66f[_0x5223d2(0x741)](/^\//,''))[_0x5223d2(0x1117)](/\//))(_0x2c5d6b=_0x2c5d6b||'');for(let _0x54c359=0x0;_0x54c359<_0x37f2a7[_0x5223d2(0x1b19)];_0x54c359+=0x1){let _0x53e122=_0x422d0d[_0x5223d2(0x4c3e)][_0x5223d2(0x5144)](_0xc9f7dd=>_0xc9f7dd['id']===_0x37f2a7[_0x54c359]);if(!_0x53e122)return null;_0x422d0d=_0x53e122;}return _0x422d0d;})(this['json'],_0x3aed88)||_0xafc2a6({});return new _0x1bfd28(_0x521808);}[_0x37e46c(0x362c)](_0x5017b6,_0x32f917={}){const _0x29ba71=_0x37e46c;if(_0x1bfc10(_0x5017b6))return _0x5017b6[_0x29ba71(0xa21)](_0x2b4f70=>this['add'](_0x55b650(_0x2b4f70),_0x32f917)),this;_0x5017b6=_0x55b650(_0x5017b6);let _0x547a77=_0xafc2a6({'id':_0x5017b6,'props':_0x32f917});return this['json'][_0x29ba71(0x4c3e)]['push'](_0x547a77),new _0x1bfd28(_0x547a77);}[_0x37e46c(0x42a1)](_0x308688){const _0x5c4422=_0x37e46c;return _0x308688=_0x55b650(_0x308688),this[_0x5c4422(0x3289)][_0x5c4422(0x4c3e)]=this[_0x5c4422(0x3289)][_0x5c4422(0x4c3e)]['filter'](_0x5d030c=>_0x5d030c['id']!==_0x308688),this;}[_0x37e46c(0xbe9)](){const _0x378401=_0x37e46c;return _0x3b3748(this[_0x378401(0x3289)])['map'](_0x28e50a=>(delete(_0x28e50a=Object[_0x378401(0x4e14)]({},_0x28e50a))['children'],_0x28e50a));}[_0x37e46c(0x26f)](){const _0x49e586=_0x37e46c;return(_0x1709e4=>{const _0x1872e9=a0_0x11e7;let _0x19f5cd=_0x3b3748(_0x1709e4,(_0x295e08,_0x3b2d24)=>{const 
_0x310e53=a0_0x11e7;_0x295e08['id']&&(_0x295e08[_0x310e53(0x1aa7)][_0x310e53(0x1ed8)]=_0x295e08[_0x310e53(0x1aa7)][_0x310e53(0x1ed8)]||[],_0x295e08['_cache'][_0x310e53(0x4c3e)]=_0x295e08[_0x310e53(0x1aa7)][_0x310e53(0x4c3e)]||[],_0x3b2d24[_0x310e53(0x1aa7)][_0x310e53(0x1ed8)]=_0x295e08[_0x310e53(0x1aa7)][_0x310e53(0x1ed8)][_0x310e53(0x1d1d)]([_0x295e08['id']]));}),_0x221e10={};_0x19f5cd[_0x1872e9(0xa21)](_0x5ec5c4=>{_0x5ec5c4['id']&&(_0x221e10[_0x5ec5c4['id']]=_0x5ec5c4);}),_0x19f5cd[_0x1872e9(0xa21)](_0x506248=>{const _0x22bc20=_0x1872e9;_0x506248[_0x22bc20(0x1aa7)][_0x22bc20(0x1ed8)][_0x22bc20(0xa21)](_0x356800=>{const _0x4d2eed=_0x22bc20;_0x221e10[_0x4d2eed(0x2427)](_0x356800)&&_0x221e10[_0x356800][_0x4d2eed(0x1aa7)][_0x4d2eed(0x4c3e)][_0x4d2eed(0x1715)](_0x506248['id']);});}),_0x1709e4['_cache']['children']=Object[_0x1872e9(0x1ea9)](_0x221e10);})(this[_0x49e586(0x3289)]),this;}[_0x37e46c(0x144e)](){const _0x55d30d=_0x37e46c;return _0x3b3748(this[_0x55d30d(0x3289)]);}[_0x37e46c(0x2bdd)](){const _0x3ad891=_0x37e46c;var _0x3388f8;return _0x3388f8=this[_0x3ad891(0x3289)],_0x3b3748(_0x3388f8,(_0x2baf89,_0x56d803)=>{const _0x32a17e=_0x3ad891;_0x56d803[_0x32a17e(0xbf7)]=((_0x575154,_0x3a6d4c)=>(Object[_0x32a17e(0x1ea9)](_0x3a6d4c)[_0x32a17e(0xa21)](_0x4d754f=>{const _0x460191=_0x32a17e;if(_0x3a6d4c[_0x4d754f]instanceof Set){let _0x51472c=_0x575154[_0x4d754f]||new Set();_0x575154[_0x4d754f]=new Set([..._0x51472c,..._0x3a6d4c[_0x4d754f]]);}else{if((_0x39a210=>_0x39a210&&_0x460191(0x20c7)==typeof _0x39a210&&!Array[_0x460191(0x22b4)](_0x39a210))(_0x3a6d4c[_0x4d754f])){let _0x32c608=_0x575154[_0x4d754f]||{};_0x575154[_0x4d754f]=Object[_0x460191(0x4e14)]({},_0x3a6d4c[_0x4d754f],_0x32c608);}else _0x1bfc10(_0x3a6d4c[_0x4d754f])?_0x575154[_0x4d754f]=_0x3a6d4c[_0x4d754f][_0x460191(0x1d1d)](_0x575154[_0x4d754f]||[]):void 
0x0===_0x575154[_0x4d754f]&&(_0x575154[_0x4d754f]=_0x3a6d4c[_0x4d754f]);}}),_0x575154))(_0x56d803['props'],_0x2baf89[_0x32a17e(0xbf7)]);}),this;}[_0x37e46c(0x368e)](){const _0x3b212a=_0x37e46c;_0x24f8a6(this[_0x3b212a(0x3289)]);let _0x28cf83=_0x3b3748(this[_0x3b212a(0x3289)]),_0x1ff994=_0x28cf83[_0x3b212a(0x1b19)]>0x1?0x1:0x0;return _0x28cf83[_0x3b212a(0xa21)](_0xea38da=>{const _0x3f1de2=_0x3b212a;if(0x0===_0xea38da[_0x3f1de2(0x1aa7)][_0x3f1de2(0x1ed8)]['length'])return;let _0x43b63c=_0xea38da[_0x3f1de2(0x1aa7)]['parents'][_0x3f1de2(0x1b19)]+0x1;_0x43b63c>_0x1ff994&&(_0x1ff994=_0x43b63c);}),_0x1ff994;}[_0x37e46c(0x3ab5)](_0x48d1ef){const _0x4320c7=_0x37e46c;return _0x24f8a6(this[_0x4320c7(0x3289)]),_0x3a9288(this[_0x4320c7(0x3289)],_0x48d1ef);}['debug'](){const _0xb8e67f=_0x37e46c;return _0x24f8a6(this[_0xb8e67f(0x3289)]),_0x3a9288(this[_0xb8e67f(0x3289)],'debug'),this;}}const _0x48331f=function(_0x3cb04e){let _0x1e7def=_0x4c6be4(_0x3cb04e);return new _0x1bfd28(_0x1e7def);};_0x48331f[_0x37e46c(0x3b3c)]['plugin']=function(_0x498f72){_0x498f72(this);};const _0x193c5a={'Noun':_0x37e46c(0x1536),'Verb':_0x37e46c(0x6a2),'Negative':_0x37e46c(0x6a2),'Date':'red','Value':_0x37e46c(0x117e),'Adjective':'magenta','Preposition':_0x37e46c(0x1ced),'Conjunction':'cyan','Determiner':'cyan','Hyphenated':_0x37e46c(0x1ced),'Adverb':_0x37e46c(0x1ced)},_0x226890=function(_0x253d69){const _0x5d6ce6=_0x37e46c;if(_0x193c5a[_0x5d6ce6(0x2427)](_0x253d69['id']))return _0x193c5a[_0x253d69['id']];if(_0x193c5a['hasOwnProperty'](_0x253d69['is']))return _0x193c5a[_0x253d69['is']];let _0x4bfd75=_0x253d69[_0x5d6ce6(0x1aa7)][_0x5d6ce6(0x1ed8)]['find'](_0x332d39=>_0x193c5a[_0x332d39]);return _0x193c5a[_0x4bfd75];},_0x1f91e1=function(_0x217462){const _0x17e397=_0x37e46c,_0x42ef92={};return _0x217462[_0x17e397(0xa21)](_0xdb11b1=>{const _0x425a42=_0x17e397;let 
{not:_0x3e72d2,also:_0xdbf7a9,is:_0x5b2284,novel:_0x945a2d}=_0xdb11b1[_0x425a42(0xbf7)],_0x556573=_0xdb11b1['_cache']['parents'];_0xdbf7a9&&(_0x556573=_0x556573[_0x425a42(0x1d1d)](_0xdbf7a9)),_0x42ef92[_0xdb11b1['id']]={'is':_0x5b2284,'not':_0x3e72d2,'novel':_0x945a2d,'also':_0xdbf7a9,'parents':_0x556573,'children':_0xdb11b1[_0x425a42(0x1aa7)][_0x425a42(0x4c3e)],'color':_0x226890(_0xdb11b1)};}),Object[_0x17e397(0x1ea9)](_0x42ef92)[_0x17e397(0xa21)](_0x46a2af=>{const _0x3503ee=_0x17e397;let _0x439992=new Set(_0x42ef92[_0x46a2af][_0x3503ee(0xc1a)]);_0x42ef92[_0x46a2af][_0x3503ee(0xc1a)][_0x3503ee(0xa21)](_0x1e5973=>{const _0x5de7f3=_0x3503ee;_0x42ef92[_0x1e5973]&&_0x42ef92[_0x1e5973][_0x5de7f3(0x4c3e)][_0x5de7f3(0xa21)](_0x427ab2=>_0x439992[_0x5de7f3(0x362c)](_0x427ab2));}),_0x42ef92[_0x46a2af]['not']=Array['from'](_0x439992);}),_0x42ef92;},_0x2e8d62=function(_0x46f428){const _0x54296c=_0x37e46c;return _0x46f428?_0x54296c(0x2431)==typeof _0x46f428?[_0x46f428]:_0x46f428:[];},_0x26aa63=function(_0x1aaf92,_0xa02783){const _0x428f5d=_0x37e46c;return _0x1aaf92=function(_0x574cb4,_0x5e81b5){const _0xd6fc99=a0_0x11e7;return Object[_0xd6fc99(0x1ea9)](_0x574cb4)[_0xd6fc99(0xa21)](_0x552959=>{const _0x212dcf=_0xd6fc99;_0x574cb4[_0x552959][_0x212dcf(0x1422)]&&(_0x574cb4[_0x552959]['is']=_0x574cb4[_0x552959][_0x212dcf(0x1422)]),_0x574cb4[_0x552959][_0x212dcf(0x14d7)]&&(_0x574cb4[_0x552959][_0x212dcf(0xc1a)]=_0x574cb4[_0x552959][_0x212dcf(0x14d7)]),_0x574cb4[_0x552959]['is']&&_0x212dcf(0x2431)==typeof _0x574cb4[_0x552959]['is']&&(_0x5e81b5[_0x212dcf(0x2427)](_0x574cb4[_0x552959]['is'])||_0x574cb4[_0x212dcf(0x2427)](_0x574cb4[_0x552959]['is'])||(_0x574cb4[_0x574cb4[_0x552959]['is']]={})),_0x574cb4[_0x552959][_0x212dcf(0xc1a)]&&_0x212dcf(0x2431)==typeof 
_0x574cb4[_0x552959][_0x212dcf(0xc1a)]&&!_0x574cb4['hasOwnProperty'](_0x574cb4[_0x552959][_0x212dcf(0xc1a)])&&(_0x5e81b5[_0x212dcf(0x2427)](_0x574cb4[_0x552959][_0x212dcf(0xc1a)])||_0x574cb4['hasOwnProperty'](_0x574cb4[_0x552959][_0x212dcf(0xc1a)])||(_0x574cb4[_0x574cb4[_0x552959][_0x212dcf(0xc1a)]]={}));}),_0x574cb4;}(_0x1aaf92,_0xa02783),Object[_0x428f5d(0x1ea9)](_0x1aaf92)['forEach'](_0xf3a304=>{const _0x576d56=_0x428f5d;_0x1aaf92[_0xf3a304][_0x576d56(0x4c3e)]=_0x2e8d62(_0x1aaf92[_0xf3a304][_0x576d56(0x4c3e)]),_0x1aaf92[_0xf3a304][_0x576d56(0xc1a)]=_0x2e8d62(_0x1aaf92[_0xf3a304][_0x576d56(0xc1a)]);}),Object[_0x428f5d(0x1ea9)](_0x1aaf92)[_0x428f5d(0xa21)](_0xdde186=>{const _0x26a6e8=_0x428f5d;(_0x1aaf92[_0xdde186][_0x26a6e8(0xc1a)]||[])[_0x26a6e8(0xa21)](_0x5866ec=>{const _0x56b254=_0x26a6e8;_0x1aaf92[_0x5866ec]&&_0x1aaf92[_0x5866ec][_0x56b254(0xc1a)]&&_0x1aaf92[_0x5866ec]['not'][_0x56b254(0x1715)](_0xdde186);});}),_0x1aaf92;},_0x279da1=function(_0x125a14,_0x7a5d7f){const _0x3a978f=_0x37e46c;Object[_0x3a978f(0x1ea9)](_0x7a5d7f)[_0x3a978f(0x1b19)]>0x0&&(_0x125a14=function(_0x30b016){const _0x2de340=_0x3a978f;return Object[_0x2de340(0x1ea9)](_0x30b016)[_0x2de340(0xa21)](_0x18b02c=>{const _0x452fd1=_0x2de340;_0x30b016[_0x18b02c]=Object[_0x452fd1(0x4e14)]({},_0x30b016[_0x18b02c]),_0x30b016[_0x18b02c][_0x452fd1(0x3112)]=!0x0;}),_0x30b016;}(_0x125a14)),_0x125a14=_0x26aa63(_0x125a14,_0x7a5d7f);const _0x13aa2d=function(_0x52dde9){const _0x33e5df=_0x3a978f,_0x551a30=Object[_0x33e5df(0x1ea9)](_0x52dde9)['map'](_0x247523=>{const _0x4308fb=_0x33e5df;let _0x130eca=_0x52dde9[_0x247523];const _0x3bea01={'not':new Set(_0x130eca[_0x4308fb(0xc1a)]),'also':_0x130eca[_0x4308fb(0x1411)],'is':_0x130eca['is'],'novel':_0x130eca[_0x4308fb(0x3112)]};return{'id':_0x247523,'parent':_0x130eca['is'],'props':_0x3bea01,'children':[]};});return _0x48331f(_0x551a30)['cache']()[_0x33e5df(0x2bdd)]()['out'](_0x33e5df(0x26f6));}(Object[_0x3a978f(0x4e14)]({},_0x7a5d7f,_0x125a14));return 
_0x1f91e1(_0x13aa2d);},_0x3db6dd={'one':{'setTag':_0x2b4893,'unTag':_0x173f14,'addTags':_0x279da1,'canBe':_0x527bd2}},_0x5bc9c1=function(_0xd12173){const _0x2d59d9=_0x37e46c;return'[object\x20Array]'===Object[_0x2d59d9(0x3b3c)][_0x2d59d9(0x8e8)]['call'](_0xd12173);},_0x20033f={'tag':function(_0x5d8f4f,_0x53b5fa='',_0x7e6713){const _0x428cd2=_0x37e46c;if(!this[_0x428cd2(0x2108)]||!_0x5d8f4f)return this;let _0x3c20cf=this[_0x428cd2(0x50a1)]();if(0x0===_0x3c20cf['length'])return this;const {methods:_0x80f655,verbose:_0x27b8c1,world:_0x7bbbbf}=this;return _0x5bc9c1(_0x5d8f4f)?_0x5d8f4f[_0x428cd2(0xa21)](_0x48b3a6=>_0x80f655[_0x428cd2(0x1d8a)][_0x428cd2(0x4820)](_0x3c20cf,_0x48b3a6,_0x7bbbbf,_0x7e6713,_0x53b5fa)):_0x80f655[_0x428cd2(0x1d8a)][_0x428cd2(0x4820)](_0x3c20cf,_0x5d8f4f,_0x7bbbbf,_0x7e6713,_0x53b5fa),this[_0x428cd2(0x4742)](),this;},'tagSafe':function(_0x5d0249,_0x3e50dc=''){const _0x2c9b1b=_0x37e46c;return this[_0x2c9b1b(0x15a9)](_0x5d0249,_0x3e50dc,!0x0);},'unTag':function(_0x81b9d4,_0x58b104){const _0x4b3d0c=_0x37e46c;if(!this[_0x4b3d0c(0x2108)]||!_0x81b9d4)return this;let _0x450fda=this[_0x4b3d0c(0x50a1)]();if(0x0===_0x450fda[_0x4b3d0c(0x1b19)])return this;const {methods:_0x39b48c,verbose:_0x4b6492,model:_0x4c2015}=this;let _0x139763=_0x4c2015[_0x4b3d0c(0x1d8a)][_0x4b3d0c(0x3f15)];return _0x5bc9c1(_0x81b9d4)?_0x81b9d4[_0x4b3d0c(0xa21)](_0x12fbfe=>_0x39b48c[_0x4b3d0c(0x1d8a)]['unTag'](_0x450fda,_0x12fbfe,_0x139763)):_0x39b48c['one'][_0x4b3d0c(0x1b1a)](_0x450fda,_0x81b9d4,_0x139763),this[_0x4b3d0c(0x4742)](),this;},'canBe':function(_0x7f47d4){const _0x206fca=_0x37e46c;_0x7f47d4=_0x7f47d4[_0x206fca(0x741)](/^#/,'');let _0x508639=this['model']['one'][_0x206fca(0x3f15)],_0x4d1a9c=this[_0x206fca(0x1578)][_0x206fca(0x1d8a)][_0x206fca(0x13bd)],_0x1104bd=[];this[_0x206fca(0x295)][_0x206fca(0xa21)]((_0x3556ca,_0x574555)=>{const _0x25f041=_0x206fca;_0x3556ca[_0x25f041(0xa21)]((_0x57478b,_0x5c2a10)=>{const 
_0x433b44=_0x25f041;_0x4d1a9c(_0x57478b,_0x7f47d4,_0x508639)||_0x1104bd[_0x433b44(0x1715)]([_0x574555,_0x5c2a10,_0x5c2a10+0x1]);});});let _0xb7280b=this[_0x206fca(0x38d6)](_0x1104bd);return this[_0x206fca(0x4d09)](_0xb7280b);}},_0x2754e7=_0x20033f,_0x47c27b=function(_0x597870){const _0x5de2fd=_0x37e46c;Object[_0x5de2fd(0x4e14)](_0x597870[_0x5de2fd(0x3b3c)],_0x2754e7);},_0x668619={'addTags':function(_0x15b807){const _0xe968ce=_0x37e46c,{model:_0x4f8474,methods:_0x19d36d}=this[_0xe968ce(0x4657)](),_0x41c35e=_0x4f8474[_0xe968ce(0x1d8a)][_0xe968ce(0x3f15)];let _0x7fa4cd=(0x0,_0x19d36d[_0xe968ce(0x1d8a)][_0xe968ce(0x3954)])(_0x15b807,_0x41c35e);return _0x4f8474['one'][_0xe968ce(0x3f15)]=_0x7fa4cd,this;}},_0x3ddeac=new Set(['Auxiliary',_0x37e46c(0x3d93)]),_0x3973b0=function(_0x5ac996){const _0x41d78f=_0x37e46c,{document:_0x2a0dd1,world:_0xc043d3}=_0x5ac996,_0x576607=_0xc043d3[_0x41d78f(0x1556)][_0x41d78f(0x1d8a)][_0x41d78f(0x3f15)];_0x2a0dd1[_0x41d78f(0xa21)](_0x40e39b=>{const _0x2f30d0=_0x41d78f;_0x40e39b[_0x2f30d0(0xa21)](_0x296daa=>{const _0x367b2c=_0x2f30d0;let _0x1b4c48=Array[_0x367b2c(0x27e6)](_0x296daa['tags']);_0x296daa[_0x367b2c(0x2cf)]=function(_0x4de25c,_0x1bc7bc){const _0x512204=_0x367b2c;return _0x4de25c=_0x4de25c[_0x512204(0x4c33)]((_0xc32046,_0x1e7f78)=>{const _0x35d576=_0x512204;if(_0x3ddeac[_0x35d576(0x3170)](_0xc32046)||!_0x1bc7bc['hasOwnProperty'](_0x1e7f78))return 0x1;if(_0x3ddeac[_0x35d576(0x3170)](_0x1e7f78)||!_0x1bc7bc[_0x35d576(0x2427)](_0xc32046))return-0x1;let _0x2bc520=_0x1bc7bc[_0xc32046][_0x35d576(0x4c3e)]||[],_0x20b130=_0x2bc520[_0x35d576(0x1b19)];return 
_0x2bc520=_0x1bc7bc[_0x1e7f78][_0x35d576(0x4c3e)]||[],_0x20b130-_0x2bc520[_0x35d576(0x1b19)];}),_0x4de25c;}(_0x1b4c48,_0x576607);});});},_0x4f5f82={'model':{'one':{'tagSet':{}}},'compute':{'tagRank':_0x3973b0},'methods':_0x3db6dd,'api':_0x47c27b,'lib':_0x668619},_0x17481e=/([.!?\u203D\u2E18\u203C\u2047-\u2049\u3002]+\s)/g,_0x240d7a=/^[.!?\u203D\u2E18\u203C\u2047-\u2049\u3002]+\s$/,_0x16b87f=/((?:\r?\n|\r)+)/,_0x257174=function(_0x175656){const _0x256238=_0x37e46c;let _0x9291f9=[],_0x511da7=_0x175656[_0x256238(0x1117)](_0x16b87f);for(let _0x337353=0x0;_0x337353<_0x511da7[_0x256238(0x1b19)];_0x337353++){let _0x7bdfa4=_0x511da7[_0x337353][_0x256238(0x1117)](_0x17481e);for(let _0x488da4=0x0;_0x488da4<_0x7bdfa4[_0x256238(0x1b19)];_0x488da4++)_0x7bdfa4[_0x488da4+0x1]&&!0x0===_0x240d7a[_0x256238(0x1769)](_0x7bdfa4[_0x488da4+0x1])&&(_0x7bdfa4[_0x488da4]+=_0x7bdfa4[_0x488da4+0x1],_0x7bdfa4[_0x488da4+0x1]=''),''!==_0x7bdfa4[_0x488da4]&&_0x9291f9[_0x256238(0x1715)](_0x7bdfa4[_0x488da4]);}return _0x9291f9;},_0x51584d=/[a-z0-9\u00C0-\u00FF\u00a9\u00ae\u2000-\u3300\ud000-\udfff]/i,_0x311a1d=/\S/,_0x184136=function(_0x544182){const _0x16aa21=_0x37e46c;let _0xc7b8ea=[];for(let _0xd80cd=0x0;_0xd80cd<_0x544182['length'];_0xd80cd++){let _0x235177=_0x544182[_0xd80cd];if(void 0x0!==_0x235177&&''!==_0x235177){if(!0x1===_0x311a1d[_0x16aa21(0x1769)](_0x235177)||!0x1===_0x51584d[_0x16aa21(0x1769)](_0x235177)){if(_0xc7b8ea[_0xc7b8ea[_0x16aa21(0x1b19)]-0x1]){_0xc7b8ea[_0xc7b8ea[_0x16aa21(0x1b19)]-0x1]+=_0x235177;continue;}if(_0x544182[_0xd80cd+0x1]){_0x544182[_0xd80cd+0x1]=_0x235177+_0x544182[_0xd80cd+0x1];continue;}}_0xc7b8ea[_0x16aa21(0x1715)](_0x235177);}}return _0xc7b8ea;},_0x1bd2c6=function(_0x46bd41,_0x1c745f){const _0x2e1dac=_0x37e46c,_0x3196dc=_0x1c745f[_0x2e1dac(0x1578)][_0x2e1dac(0x1d8a)]['tokenize'][_0x2e1dac(0x1c3f)],_0x2c980c=_0x1c745f[_0x2e1dac(0x1556)][_0x2e1dac(0x1d8a)][_0x2e1dac(0x3b89)]||new Set();let _0x56d36e=[];for(let 
_0x166592=0x0;_0x166592<_0x46bd41[_0x2e1dac(0x1b19)];_0x166592++){let _0x332798=_0x46bd41[_0x166592];_0x46bd41[_0x166592+0x1]&&!0x1===_0x3196dc(_0x332798,_0x2c980c)?_0x46bd41[_0x166592+0x1]=_0x332798+(_0x46bd41[_0x166592+0x1]||''):_0x332798&&_0x332798[_0x2e1dac(0x1b19)]>0x0&&(_0x56d36e['push'](_0x332798),_0x46bd41[_0x166592]='');}return _0x56d36e;},_0x405269={'\x22':'\x22','"':'"','“':'”','‟':'”','„':'”','⹂':'”','‚':'’','«':'»','‹':'›','‵':'′','‶':'″','‷':'‴','〝':'〞','〟':'〞'},_0x457aef=RegExp('['+Object[_0x37e46c(0x1ea9)](_0x405269)['join']('')+']','g'),_0x43883f=RegExp('['+Object[_0x37e46c(0x1fae)](_0x405269)[_0x37e46c(0x3541)]('')+']','g'),_0x3abf3e=function(_0x518a66){const _0x2233ae=_0x37e46c;if(!_0x518a66)return!0x1;let _0x2ab5eb=_0x518a66[_0x2233ae(0x2d96)](_0x43883f);return null!==_0x2ab5eb&&0x1===_0x2ab5eb[_0x2233ae(0x1b19)];},_0x3fc993=function(_0x307ce6){const _0x34cce6=_0x37e46c;let _0x2a995a=[];for(let _0x412627=0x0;_0x412627<_0x307ce6[_0x34cce6(0x1b19)];_0x412627+=0x1){let _0x393616=_0x307ce6[_0x412627][_0x34cce6(0x2d96)](_0x457aef);if(null!==_0x393616&&0x1===_0x393616['length']){if(_0x3abf3e(_0x307ce6[_0x412627+0x1])&&_0x307ce6[_0x412627+0x1][_0x34cce6(0x1b19)]<0x118){_0x307ce6[_0x412627]+=_0x307ce6[_0x412627+0x1],_0x2a995a[_0x34cce6(0x1715)](_0x307ce6[_0x412627]),_0x307ce6[_0x412627+0x1]='',_0x412627+=0x1;continue;}if(_0x3abf3e(_0x307ce6[_0x412627+0x2])){let _0x5c04fc=_0x307ce6[_0x412627+0x1]+_0x307ce6[_0x412627+0x2];if(_0x5c04fc[_0x34cce6(0x1b19)]<0x118){_0x307ce6[_0x412627]+=_0x5c04fc,_0x2a995a[_0x34cce6(0x1715)](_0x307ce6[_0x412627]),_0x307ce6[_0x412627+0x1]='',_0x307ce6[_0x412627+0x2]='',_0x412627+=0x2;continue;}}}_0x2a995a[_0x34cce6(0x1715)](_0x307ce6[_0x412627]);}return _0x2a995a;},_0x29bcc7=/\(/g,_0x271628=/\)/g,_0x376c17=function(_0x1d3778){const _0x578868=_0x37e46c;let _0xc41b09=[];for(let _0x42300f=0x0;_0x42300f<_0x1d3778[_0x578868(0x1b19)];_0x42300f+=0x1){let 
_0x1aa7dc=_0x1d3778[_0x42300f][_0x578868(0x2d96)](_0x29bcc7);if(null!==_0x1aa7dc&&0x1===_0x1aa7dc[_0x578868(0x1b19)]&&_0x1d3778[_0x42300f+0x1]&&_0x1d3778[_0x42300f+0x1][_0x578868(0x1b19)]<0xfa){if(null!==_0x1d3778[_0x42300f+0x1][_0x578868(0x2d96)](_0x271628)&&0x1===_0x1aa7dc[_0x578868(0x1b19)]&&!_0x29bcc7[_0x578868(0x1769)](_0x1d3778[_0x42300f+0x1])){_0x1d3778[_0x42300f]+=_0x1d3778[_0x42300f+0x1],_0xc41b09[_0x578868(0x1715)](_0x1d3778[_0x42300f]),_0x1d3778[_0x42300f+0x1]='',_0x42300f+=0x1;continue;}}_0xc41b09['push'](_0x1d3778[_0x42300f]);}return _0xc41b09;},_0x1726b7=/\S/,_0x4ceba9=/^\s+/,_0x546122=function(_0x10a112,_0x2c71ba){const _0x43dfbb=_0x37e46c;if(_0x10a112=_0x10a112||'',!(_0x10a112=String(_0x10a112))||_0x43dfbb(0x2431)!=typeof _0x10a112||!0x1===_0x1726b7[_0x43dfbb(0x1769)](_0x10a112))return[];_0x10a112=_0x10a112[_0x43dfbb(0x741)]('\u00a0','\x20');let _0xc8a82a=_0x257174(_0x10a112),_0x3d66a3=_0x184136(_0xc8a82a);if(_0x3d66a3=_0x1bd2c6(_0x3d66a3,_0x2c71ba),_0x3d66a3=_0x3fc993(_0x3d66a3),_0x3d66a3=_0x376c17(_0x3d66a3),0x0===_0x3d66a3[_0x43dfbb(0x1b19)])return[_0x10a112];for(let _0x43e4af=0x1;_0x43e4af<_0x3d66a3[_0x43dfbb(0x1b19)];_0x43e4af+=0x1){let _0x40c4ec=_0x3d66a3[_0x43e4af][_0x43dfbb(0x2d96)](_0x4ceba9);null!==_0x40c4ec&&(_0x3d66a3[_0x43e4af-0x1]+=_0x40c4ec[0x0],_0x3d66a3[_0x43e4af]=_0x3d66a3[_0x43e4af][_0x43dfbb(0x741)](_0x4ceba9,''));}return _0x3d66a3;},_0x3a0e1c=function(_0x2d8260,_0x3ccb15){const _0x549ef1=_0x37e46c;let _0x19d9cd=_0x2d8260['split'](/[-–—]/);if(_0x19d9cd[_0x549ef1(0x1b19)]<=0x1)return!0x1;const 
{prefixes:_0xae214,suffixes:_0x312269}=_0x3ccb15[_0x549ef1(0x1d8a)];if(0x1===_0x19d9cd[0x0][_0x549ef1(0x1b19)]&&/[a-z]/i['test'](_0x19d9cd[0x0]))return!0x1;if(_0xae214[_0x549ef1(0x2427)](_0x19d9cd[0x0]))return!0x1;if(_0x19d9cd[0x1]=_0x19d9cd[0x1][_0x549ef1(0x1b23)]()[_0x549ef1(0x741)](/[.?!]$/,''),_0x312269[_0x549ef1(0x2427)](_0x19d9cd[0x1]))return!0x1;if(!0x0===/^([a-z\u00C0-\u00FF`"'/]+)[-–—]([a-z0-9\u00C0-\u00FF].*)/i['test'](_0x2d8260))return!0x0;return!0x0===/^[('"]?([0-9]{1,4})[-–—]([a-z\u00C0-\u00FF`"'/-]+[)'"]?$)/i['test'](_0x2d8260);},_0x54f68f=function(_0x1be182){const _0x26fda2=_0x37e46c;let _0x5d96f8=[];const _0x1a7f08=_0x1be182[_0x26fda2(0x1117)](/[-–—]/);let _0x5b1440='-',_0x533d77=_0x1be182[_0x26fda2(0x2d96)](/[-–—]/);_0x533d77&&_0x533d77[0x0]&&(_0x5b1440=_0x533d77);for(let _0x213acb=0x0;_0x213acb<_0x1a7f08[_0x26fda2(0x1b19)];_0x213acb++)_0x213acb===_0x1a7f08[_0x26fda2(0x1b19)]-0x1?_0x5d96f8[_0x26fda2(0x1715)](_0x1a7f08[_0x213acb]):_0x5d96f8[_0x26fda2(0x1715)](_0x1a7f08[_0x213acb]+_0x5b1440);return _0x5d96f8;},_0x48f977=function(_0x28a175){const _0x2fd7d9=_0x37e46c,_0xc1c110=/^[0-9]{1,4}(:[0-9][0-9])?([a-z]{1,2})? ?[-–—] ?$/,_0x100524=/^[0-9]{1,4}([a-z]{1,2})? 
?$/;for(let _0x137695=0x0;_0x137695<_0x28a175[_0x2fd7d9(0x1b19)]-0x1;_0x137695+=0x1)_0x28a175[_0x137695+0x1]&&_0xc1c110[_0x2fd7d9(0x1769)](_0x28a175[_0x137695])&&_0x100524[_0x2fd7d9(0x1769)](_0x28a175[_0x137695+0x1])&&(_0x28a175[_0x137695]=_0x28a175[_0x137695]+_0x28a175[_0x137695+0x1],_0x28a175[_0x137695+0x1]=null);return _0x28a175;},_0x9fbc32=/\p{L} ?\/ ?\p{L}+$/u,_0x273aa6=function(_0x7bb8b9){const _0x3e80f1=_0x37e46c;for(let _0x4d3dda=0x1;_0x4d3dda<_0x7bb8b9[_0x3e80f1(0x1b19)]-0x1;_0x4d3dda++)_0x9fbc32['test'](_0x7bb8b9[_0x4d3dda])&&(_0x7bb8b9[_0x4d3dda-0x1]+=_0x7bb8b9[_0x4d3dda]+_0x7bb8b9[_0x4d3dda+0x1],_0x7bb8b9[_0x4d3dda]=null,_0x7bb8b9[_0x4d3dda+0x1]=null);return _0x7bb8b9;},_0xb55fe3=/\S/,_0x39af30=/^[!?.]+$/,_0x5afc4f=/(\S+)/;let _0x1e78f2=['.','?','!',':',';','-','–','—','--','...','(',')','[',']','\x22','\x27','`','«','»','*','•'];_0x1e78f2=_0x1e78f2[_0x37e46c(0x24d8)]((_0x96ed9f,_0x5f52fb)=>(_0x96ed9f[_0x5f52fb]=!0x0,_0x96ed9f),{});const _0x3daef7=function(_0x2189f1,_0x27d8ae){const _0x71f67c=_0x37e46c;let _0x13dbb7=[],_0x5ae483=[];if('number'==typeof(_0x2189f1=_0x2189f1||'')&&(_0x2189f1=String(_0x2189f1)),function(_0x164f43){const _0x181abb=a0_0x11e7;return'[object\x20Array]'===Object[_0x181abb(0x3b3c)]['toString'][_0x181abb(0x236b)](_0x164f43);}(_0x2189f1))return _0x2189f1;const _0x50b573=_0x2189f1[_0x71f67c(0x1117)](_0x5afc4f);for(let _0x489a07=0x0;_0x489a07<_0x50b573[_0x71f67c(0x1b19)];_0x489a07++)!0x0!==_0x3a0e1c(_0x50b573[_0x489a07],_0x27d8ae)?_0x5ae483[_0x71f67c(0x1715)](_0x50b573[_0x489a07]):_0x5ae483=_0x5ae483[_0x71f67c(0x1d1d)](_0x54f68f(_0x50b573[_0x489a07]));let _0x40ec93='';for(let _0x4d61cb=0x0;_0x4d61cb<_0x5ae483[_0x71f67c(0x1b19)];_0x4d61cb++){let 
_0x3b1ff9=_0x5ae483[_0x4d61cb];!0x0===_0xb55fe3[_0x71f67c(0x1769)](_0x3b1ff9)&&!0x1===_0x1e78f2[_0x71f67c(0x2427)](_0x3b1ff9)&&!0x1===_0x39af30['test'](_0x3b1ff9)?(_0x13dbb7['length']>0x0?(_0x13dbb7[_0x13dbb7[_0x71f67c(0x1b19)]-0x1]+=_0x40ec93,_0x13dbb7[_0x71f67c(0x1715)](_0x3b1ff9)):_0x13dbb7[_0x71f67c(0x1715)](_0x40ec93+_0x3b1ff9),_0x40ec93=''):_0x40ec93+=_0x3b1ff9;}return _0x40ec93&&(0x0===_0x13dbb7[_0x71f67c(0x1b19)]&&(_0x13dbb7[0x0]=''),_0x13dbb7[_0x13dbb7['length']-0x1]+=_0x40ec93),_0x13dbb7=_0x273aa6(_0x13dbb7),_0x13dbb7=_0x48f977(_0x13dbb7),_0x13dbb7=_0x13dbb7[_0x71f67c(0x1465)](_0x5e862d=>_0x5e862d),_0x13dbb7;},_0x48c527=/\p{Letter}/u,_0x281b54=/[\p{Number}\p{Currency_Symbol}]/u,_0x3cf8ea=/^[a-z]\.([a-z]\.)+/i,_0x466d68=/[sn]['’]$/,_0x5bdf6b=function(_0x32ccb1,_0xb83d29){const _0x2aea42=_0x37e46c;let {prePunctuation:_0x104cc9,postPunctuation:_0x58f7a9,emoticons:_0x62631b}=_0xb83d29['one'],_0x1620c4=_0x32ccb1,_0x224326='',_0x1f6bc2='',_0x108017=Array[_0x2aea42(0x27e6)](_0x32ccb1);if(_0x62631b[_0x2aea42(0x2427)](_0x32ccb1[_0x2aea42(0x1b23)]()))return{'str':_0x32ccb1[_0x2aea42(0x1b23)](),'pre':_0x224326,'post':'\x20'};let _0x23871b=_0x108017[_0x2aea42(0x1b19)];for(let _0x33cb7c=0x0;_0x33cb7c<_0x23871b;_0x33cb7c+=0x1){let _0x535acd=_0x108017[0x0];if(!0x0!==_0x104cc9[_0x535acd]){if(('+'===_0x535acd||'-'===_0x535acd)&&_0x281b54[_0x2aea42(0x1769)](_0x108017[0x1]))break;if('\x27'===_0x535acd&&0x3===_0x535acd[_0x2aea42(0x1b19)]&&_0x281b54[_0x2aea42(0x1769)](_0x108017[0x1]))break;if(_0x48c527[_0x2aea42(0x1769)](_0x535acd)||_0x281b54[_0x2aea42(0x1769)](_0x535acd))break;_0x224326+=_0x108017[_0x2aea42(0x34fe)]();}}_0x23871b=_0x108017['length'];for(let _0x38b874=0x0;_0x38b874<_0x23871b;_0x38b874+=0x1){let 
_0x1ef1cc=_0x108017[_0x108017[_0x2aea42(0x1b19)]-0x1];if(!0x0!==_0x58f7a9[_0x1ef1cc]){if(_0x48c527[_0x2aea42(0x1769)](_0x1ef1cc)||_0x281b54[_0x2aea42(0x1769)](_0x1ef1cc))break;'.'===_0x1ef1cc&&!0x0===_0x3cf8ea[_0x2aea42(0x1769)](_0x1620c4)||'\x27'===_0x1ef1cc&&!0x0===_0x466d68[_0x2aea42(0x1769)](_0x1620c4)||(_0x1f6bc2=_0x108017[_0x2aea42(0x3d35)]()+_0x1f6bc2);}}return''===(_0x32ccb1=_0x108017[_0x2aea42(0x3541)](''))&&(_0x1620c4=_0x1620c4[_0x2aea42(0x741)](/ *$/,_0x55dce5=>(_0x1f6bc2=_0x55dce5||'','')),_0x32ccb1=_0x1620c4,_0x224326=''),{'str':_0x32ccb1,'pre':_0x224326,'post':_0x1f6bc2};},_0x4f512c=(_0x5d5523,_0x264da6)=>{let {str:_0x5c16e4,pre:_0x1941a3,post:_0x257323}=_0x5bdf6b(_0x5d5523,_0x264da6);return{'text':_0x5c16e4,'pre':_0x1941a3,'post':_0x257323,'tags':new Set()};},_0x3d105d=function(_0x1ff65f,_0x4b4c43){const _0x2de2c9=_0x37e46c,_0x5c0065=_0x4b4c43[_0x2de2c9(0x1556)]['one'][_0x2de2c9(0x7a3)]||{};let _0x4617d8=(_0x1ff65f=_0x1ff65f||'')[_0x2de2c9(0x1117)]('');return _0x4617d8[_0x2de2c9(0xa21)]((_0xcfa0e6,_0x23d22d)=>{_0x5c0065[_0xcfa0e6]&&(_0x4617d8[_0x23d22d]=_0x5c0065[_0xcfa0e6]);}),_0x4617d8[_0x2de2c9(0x3541)]('');},_0x132cdd=function(_0x424aef){const _0x4db2fb=_0x37e46c;let _0x58fd82=_0x424aef=(_0x424aef=(_0x424aef=_0x424aef||'')['toLowerCase']())[_0x4db2fb(0x1b23)]();return _0x424aef=(_0x424aef=(_0x424aef=_0x424aef[_0x4db2fb(0x741)](/[,;.!?]+$/,''))[_0x4db2fb(0x741)](/\u2026/g,'...'))['replace'](/\u2013/g,'-'),!0x1===/^[:;]/[_0x4db2fb(0x1769)](_0x424aef)&&(_0x424aef=(_0x424aef=(_0x424aef=_0x424aef['replace'](/\.{3,}$/g,''))[_0x4db2fb(0x741)](/[",.!:;?)]+$/g,''))[_0x4db2fb(0x741)](/^['"(]+/g,'')),''===(_0x424aef=(_0x424aef=_0x424aef[_0x4db2fb(0x741)](/[\u200B-\u200D\uFEFF]/g,''))[_0x4db2fb(0x1b23)]())&&(_0x424aef=_0x58fd82),_0x424aef=_0x424aef[_0x4db2fb(0x741)](/([0-9]),([0-9])/g,'$1$2');},_0x334507=/([A-Z]\.)+[A-Z]?,?$/,_0x54806d=/^[A-Z]\.,?$/,_0x2b473e=/[A-Z]{2,}('s|,)?$/,_0x353af8=/([a-z]\.)+[a-z]\.?$/,_0x1883a6=function(_0x3938cb){const 
_0x4c1221=_0x37e46c;return function(_0x272207){const _0x282b7c=a0_0x11e7;return!0x0===_0x334507['test'](_0x272207)||!0x0===_0x353af8[_0x282b7c(0x1769)](_0x272207)||!0x0===_0x54806d['test'](_0x272207)||!0x0===_0x2b473e[_0x282b7c(0x1769)](_0x272207);}(_0x3938cb)&&(_0x3938cb=_0x3938cb[_0x4c1221(0x741)](/\./g,'')),_0x3938cb;},_0x102de8=function(_0x1869d7,_0x294a96){const _0xd984b5=_0x37e46c,_0x54c000=_0x294a96['methods']['one'][_0xd984b5(0x1744)];let _0x29dba1=_0x1869d7[_0xd984b5(0x4006)]||'';_0x29dba1=_0x132cdd(_0x29dba1),_0x29dba1=_0x54c000(_0x29dba1,_0x294a96),_0x29dba1=_0x1883a6(_0x29dba1),_0x1869d7[_0xd984b5(0x47d)]=_0x29dba1;},_0xf18df2=function(_0x5bde65,_0x2ea96d){const _0x19bdf3=_0x37e46c,{methods:_0x577bbd,model:_0x4f9de2}=_0x2ea96d,{splitSentences:_0x5270f3,splitTerms:_0x24d33a,splitWhitespace:_0x19fce7}=_0x577bbd[_0x19bdf3(0x1d8a)][_0x19bdf3(0x3c0b)];return _0x5bde65=_0x5270f3(_0x5bde65=_0x5bde65||'',_0x2ea96d)[_0x19bdf3(0x4833)](_0x13cc9a=>{const _0x44d68b=_0x19bdf3;let _0x3cfae9=_0x24d33a(_0x13cc9a,_0x4f9de2);return _0x3cfae9=_0x3cfae9[_0x44d68b(0x4833)](_0x31c737=>_0x19fce7(_0x31c737,_0x4f9de2)),_0x3cfae9[_0x44d68b(0xa21)](_0x1906d6=>{_0x102de8(_0x1906d6,_0x2ea96d);}),_0x3cfae9;}),_0x5bde65;},_0x310e08=/[ .][A-Z]\.? *$/i,_0x314c27=/(?:\u2026|\.{2,}) *$/,_0x26d542=/\p{L}/u,_0x50a740=/\. *$/,_0x3268d0=/^[A-Z]\. 
$/,_0x34eed4={'one':{'killUnicode':_0x3d105d,'tokenize':{'splitSentences':_0x546122,'isSentence':function(_0x1864ca,_0x175af8){const _0x372e1c=_0x37e46c;if(!0x1===_0x26d542[_0x372e1c(0x1769)](_0x1864ca))return!0x1;if(!0x0===_0x310e08[_0x372e1c(0x1769)](_0x1864ca))return!0x1;if(0x3===_0x1864ca['length']&&_0x3268d0[_0x372e1c(0x1769)](_0x1864ca))return!0x1;if(!0x0===_0x314c27['test'](_0x1864ca))return!0x1;let _0x35cbc6=_0x1864ca['replace'](/[.!?\u203D\u2E18\u203C\u2047-\u2049] *$/,'')[_0x372e1c(0x1117)]('\x20'),_0x24fab3=_0x35cbc6[_0x35cbc6[_0x372e1c(0x1b19)]-0x1]['toLowerCase']();return!0x0!==_0x175af8[_0x372e1c(0x2427)](_0x24fab3)||!0x0!==_0x50a740[_0x372e1c(0x1769)](_0x1864ca);},'splitTerms':_0x3daef7,'splitWhitespace':_0x4f512c,'fromString':_0xf18df2}}},_0x2213f0={'&':'and','@':'at','%':_0x37e46c(0x11f8),'plz':_0x37e46c(0xce0),'bein':_0x37e46c(0x642)};let _0x390225={},_0xfda68={};[[[_0x37e46c(0x122d),_0x37e46c(0x3d1c),'bc',_0x37e46c(0x7ea),'eg','esp',_0x37e46c(0x3952),_0x37e46c(0xce1),'ex','exp',_0x37e46c(0x2204),_0x37e46c(0x3dfb),_0x37e46c(0x4770),_0x37e46c(0x37c8),_0x37e46c(0x1ee0),_0x37e46c(0x2702),'jd',_0x37e46c(0x44d9),'lng',_0x37e46c(0x397a),'fm',_0x37e46c(0x452b),_0x37e46c(0x1e17),_0x37e46c(0x30c3),'ea','ps',_0x37e46c(0x26b2),'pt','pref','pl','pp','qt','fr','sq',_0x37e46c(0x601),'ss',_0x37e46c(0x98a),_0x37e46c(0x1854),_0x37e46c(0x3ee3),_0x37e46c(0x2d33),'fem',_0x37e46c(0x739),'eng',_0x37e46c(0x3f9e),'vb','rb',_0x37e46c(0x384b),_0x37e46c(0x2f3b),_0x37e46c(0x1180),_0x37e46c(0x476a),'wr']],[['dl','ml','gal','qt','pt',_0x37e46c(0x366d),_0x37e46c(0x2ec5),_0x37e46c(0x33af),'km','dm','cm','mm','mi','td','hr','hrs','kg','hg','dg','cg','mg','µg','lb','oz','sq\x20ft','hz',_0x37e46c(0x50bd),'mph','kmph','kb','mb','tb','lx','lm','fl\x20oz','yb'],_0x37e46c(0xdae)],[['ad','al','arc','ba','bl','ca',_0x37e46c(0xc37),_0x37e46c(0x1721),_0x37e46c(0x213f),'ft','fy','ie','lit','ma','md','pd','tce'],_0x37e46c(0x1786)],[['adj',_0x37e46c(0x2313),_0x37e46c(0xf3b),_0x37e46c(0x20a0),_
0x37e46c(0x3b5f),_0x37e46c(0x2826),_0x37e46c(0x4a69),_0x37e46c(0x34c7),_0x37e46c(0x3e10),'comdr',_0x37e46c(0x148c),_0x37e46c(0x25d3),'dr',_0x37e46c(0x15c0),_0x37e46c(0x386b),_0x37e46c(0x6dd),_0x37e46c(0x17c0),'jr',_0x37e46c(0x2b85),'lt',_0x37e46c(0x44e7),_0x37e46c(0x4fd1),_0x37e46c(0x889),'mme','mr','mrs','ms','mstr','phd',_0x37e46c(0x4bcc),_0x37e46c(0x4e0b),'rep','reps',_0x37e46c(0x497a),_0x37e46c(0x46ea),_0x37e46c(0x436d),'sens','sfc','sgt',_0x37e46c(0x5dd),'sr',_0x37e46c(0x4dce),_0x37e46c(0x1e18)],_0x37e46c(0x3ab)],[[_0x37e46c(0x47db),_0x37e46c(0x314),_0x37e46c(0x371d),_0x37e46c(0x5012),_0x37e46c(0x2548),_0x37e46c(0x4604),_0x37e46c(0x2c03),_0x37e46c(0x15aa),'sept',_0x37e46c(0x3d82),'nov',_0x37e46c(0x461a)],_0x37e46c(0x1c7a)],[[_0x37e46c(0x258b),_0x37e46c(0x2bea),_0x37e46c(0x3edd),_0x37e46c(0x43ec),_0x37e46c(0x1bf8),'ltd','co'],_0x37e46c(0x4c36)],[['rd','st',_0x37e46c(0x3fef),'mt','ave',_0x37e46c(0x392e),'cl',_0x37e46c(0x2f04),_0x37e46c(0x147f),_0x37e46c(0x4f98),'cal',_0x37e46c(0x4164),_0x37e46c(0x21ed),_0x37e46c(0x1dd5),'fla','fl','ga','ida','ia',_0x37e46c(0x1cbd),_0x37e46c(0x4fa7),_0x37e46c(0x154b),_0x37e46c(0x1318),_0x37e46c(0x2852),_0x37e46c(0x19c0),_0x37e46c(0x4f80),_0x37e46c(0x42e1),'pa',_0x37e46c(0x4ffa),_0x37e46c(0x181f),'tex','ut','vt','va',_0x37e46c(0x4313),'wisc','wy',_0x37e46c(0x2e08),_0x37e46c(0x91b),_0x37e46c(0x44ec),_0x37e46c(0x5f1),'que',_0x37e46c(0x484c)],_0x37e46c(0x1c11)]][_0x37e46c(0xa21)](_0x28f1d2=>{const _0x37695b=_0x37e46c;_0x28f1d2[0x0][_0x37695b(0xa21)](_0x20a2e9=>{const _0x3a09a7=_0x37695b;_0x390225[_0x20a2e9]=!0x0,_0xfda68[_0x20a2e9]=_0x3a09a7(0x1b3e),void 0x0!==_0x28f1d2[0x1]&&(_0xfda68[_0x20a2e9]=[_0xfda68[_0x20a2e9],_0x28f1d2[0x1]]);});});const 
_0x2e38f4=['anti','bi','co',_0x37e46c(0x2564),'de',_0x37e46c(0x1987),_0x37e46c(0x608),_0x37e46c(0x495c),_0x37e46c(0x5189),_0x37e46c(0x172d),_0x37e46c(0x387d),_0x37e46c(0x7be),_0x37e46c(0x22e8),_0x37e46c(0x2238),_0x37e46c(0x2ea),_0x37e46c(0x1228),_0x37e46c(0x3f3d),'proto',_0x37e46c(0x11a2),'re',_0x37e46c(0x217c),_0x37e46c(0x4bbe),_0x37e46c(0x2c14),_0x37e46c(0x358f),'un',_0x37e46c(0x3ab5),'ex'][_0x37e46c(0x24d8)]((_0x43b612,_0x55b707)=>(_0x43b612[_0x55b707]=!0x0,_0x43b612),{});let _0x45006f={'!':'¡','?':'¿Ɂ','\x22':'“”\x22❝❞','\x27':_0x37e46c(0x4eed),'-':'—–','a':_0x37e46c(0x4ea6),'b':_0x37e46c(0x1a09),'c':'¢©ÇçĆćĈĉĊċČčƆƇƈȻȼͻͼϲϹϽϾСсєҀҁҪҫ','d':_0x37e46c(0x35a4),'e':_0x37e46c(0x4db8),'f':'ƑƒϜϝӺӻҒғſ','g':_0x37e46c(0x3922),'h':'ĤĥĦħƕǶȞȟΉΗЂЊЋНнђћҢңҤҥҺһӉӊ','I':_0x37e46c(0x4de2),'i':'ìíîïĨĩĪīĬĭĮįİıƖƗȈȉȊȋΊΐΪίιϊІЇіїi̇','j':_0x37e46c(0x41d3),'k':_0x37e46c(0x12e7),'l':'ĹĺĻļĽľĿŀŁłƚƪǀǏǐȴȽΙӀӏ','m':_0x37e46c(0x1c0b),'n':_0x37e46c(0x367d),'o':_0x37e46c(0x4b97),'p':'ƤΡρϷϸϼРрҎҏÞ','q':'Ɋɋ','r':_0x37e46c(0x29eb),'s':'ŚśŜŝŞşŠšƧƨȘșȿЅѕ','t':_0x37e46c(0x4ce4),'u':_0x37e46c(0x4007),'v':_0x37e46c(0x68c),'w':'ŴŵƜωώϖϢϣШЩшщѡѿ','x':'×ΧχϗϰХхҲҳӼӽӾӿ','y':_0x37e46c(0x3051),'z':_0x37e46c(0x41ef)},_0x170c85={};Object[_0x37e46c(0x1ea9)](_0x45006f)[_0x37e46c(0xa21)](function(_0x4b716d){const _0x3a9f05=_0x37e46c;_0x45006f[_0x4b716d][_0x3a9f05(0x1117)]('')[_0x3a9f05(0xa21)](function(_0x187dad){_0x170c85[_0x187dad]=_0x4b716d;});});const _0x57d137=/\//,_0xeae720=/[a-z]\.[a-z]/i,_0x5606c9=/[0-9]/,_0x421a6d=function(_0x857bce,_0x1088a8){const _0x2a50e2=_0x37e46c;let _0x5bbbba=_0x857bce[_0x2a50e2(0x47d)]||_0x857bce[_0x2a50e2(0x4006)]||_0x857bce[_0x2a50e2(0x192e)];const _0xd11074=_0x1088a8['model']['one'][_0x2a50e2(0x559)];if(_0xd11074[_0x2a50e2(0x2427)](_0x5bbbba)&&(_0x857bce['alias']=_0x857bce['alias']||[],_0x857bce['alias'][_0x2a50e2(0x1715)](_0xd11074[_0x5bbbba])),_0x57d137[_0x2a50e2(0x1769)](_0x5bbbba)&&!_0xeae720[_0x2a50e2(0x1769)](_0x5bbbba)&&!_0x5606c9['test'](_0x5bbbba)){let 
_0xb12c3a=_0x5bbbba['split'](_0x57d137);_0xb12c3a[_0x2a50e2(0x1b19)]<=0x3&&_0xb12c3a[_0x2a50e2(0xa21)](_0x267d82=>{const _0x3b4cb5=_0x2a50e2;''!==(_0x267d82=_0x267d82[_0x3b4cb5(0x1b23)]())&&(_0x857bce[_0x3b4cb5(0xa94)]=_0x857bce[_0x3b4cb5(0xa94)]||[],_0x857bce[_0x3b4cb5(0xa94)][_0x3b4cb5(0x1715)](_0x267d82));});}return _0x857bce;},_0x2e63fd=/^\p{Letter}+-\p{Letter}+$/u,_0x585413=function(_0x31339b){const _0x418fde=_0x37e46c;let _0x1195ff=_0x31339b[_0x418fde(0x4570)]||_0x31339b[_0x418fde(0x47d)]||_0x31339b['text'];_0x1195ff=_0x1195ff[_0x418fde(0x741)](/['’]s$/,''),_0x1195ff=_0x1195ff[_0x418fde(0x741)](/s['’]$/,'s'),_0x1195ff=_0x1195ff[_0x418fde(0x741)](/([aeiou][ktrp])in'$/,_0x418fde(0x3dc1)),_0x2e63fd[_0x418fde(0x1769)](_0x1195ff)&&(_0x1195ff=_0x1195ff[_0x418fde(0x741)](/-/g,'')),_0x1195ff=_0x1195ff['replace'](/^[#@]/,''),_0x1195ff!==_0x31339b[_0x418fde(0x47d)]&&(_0x31339b[_0x418fde(0x192e)]=_0x1195ff);},_0x2b2e24=function(_0x22931d){const _0x3a6e5e=_0x37e46c;let _0x1f9963=_0x22931d[_0x3a6e5e(0x204b)],_0x12938c={};for(let _0x586daa=0x0;_0x586daa<_0x1f9963[_0x3a6e5e(0x1b19)];_0x586daa+=0x1)for(let _0x9c5637=0x0;_0x9c5637<_0x1f9963[_0x586daa][_0x3a6e5e(0x1b19)];_0x9c5637+=0x1){let _0x493daf=_0x1f9963[_0x586daa][_0x9c5637],_0x4012a9=_0x493daf[_0x3a6e5e(0x192e)]||_0x493daf[_0x3a6e5e(0x47d)];_0x12938c[_0x4012a9]=_0x12938c[_0x4012a9]||0x0,_0x12938c[_0x4012a9]+=0x1;}for(let _0xdf149b=0x0;_0xdf149b<_0x1f9963[_0x3a6e5e(0x1b19)];_0xdf149b+=0x1)for(let _0x54d1a2=0x0;_0x54d1a2<_0x1f9963[_0xdf149b]['length'];_0x54d1a2+=0x1){let _0x143d97=_0x1f9963[_0xdf149b][_0x54d1a2],_0x10348e=_0x143d97[_0x3a6e5e(0x192e)]||_0x143d97[_0x3a6e5e(0x47d)];_0x143d97['freq']=_0x12938c[_0x10348e];}},_0x13d374=function(_0xb508ef){const _0x3ef4ab=_0x37e46c;let _0x2898c2=0x0,_0x165c16=0x0,_0x13b970=_0xb508ef['document'];for(let _0xb53ac2=0x0;_0xb53ac2<_0x13b970[_0x3ef4ab(0x1b19)];_0xb53ac2+=0x1)for(let _0x5a77a9=0x0;_0x5a77a9<_0x13b970[_0xb53ac2]['length'];_0x5a77a9+=0x1){let 
_0x37b67b=_0x13b970[_0xb53ac2][_0x5a77a9];_0x37b67b[_0x3ef4ab(0xf16)]={'index':_0x165c16,'start':_0x2898c2+_0x37b67b[_0x3ef4ab(0x1228)]['length'],'length':_0x37b67b['text'][_0x3ef4ab(0x1b19)]},_0x2898c2+=_0x37b67b[_0x3ef4ab(0x1228)][_0x3ef4ab(0x1b19)]+_0x37b67b[_0x3ef4ab(0x4006)][_0x3ef4ab(0x1b19)]+_0x37b67b[_0x3ef4ab(0x24ce)][_0x3ef4ab(0x1b19)],_0x165c16+=0x1;}},_0x52bdb3=function(_0x27dcbb){const _0x4fc202=_0x37e46c;let _0x3adfcb=_0x27dcbb[_0x4fc202(0x295)];for(let _0x1b26e6=0x0;_0x1b26e6<_0x3adfcb['length'];_0x1b26e6+=0x1)for(let _0x25c7c3=0x0;_0x25c7c3<_0x3adfcb[_0x1b26e6][_0x4fc202(0x1b19)];_0x25c7c3+=0x1)_0x3adfcb[_0x1b26e6][_0x25c7c3]['index']=[_0x1b26e6,_0x25c7c3];},_0x2afa5e=function(_0x3cef36){const _0x513e54=_0x37e46c;let _0x3dca44=0x0,_0x21374f=_0x3cef36[_0x513e54(0x204b)];for(let _0x56fcc9=0x0;_0x56fcc9<_0x21374f[_0x513e54(0x1b19)];_0x56fcc9+=0x1)for(let _0x12a358=0x0;_0x12a358<_0x21374f[_0x56fcc9][_0x513e54(0x1b19)];_0x12a358+=0x1)''!==_0x21374f[_0x56fcc9][_0x12a358][_0x513e54(0x47d)]&&(_0x3dca44+=0x1,_0x21374f[_0x56fcc9][_0x12a358][_0x513e54(0x208a)]=_0x3dca44);},_0x195584=function(_0x5cb945,_0x37db82){const _0x53aaca=_0x37e46c;let _0x3d11b8=_0x5cb945['docs'];for(let _0x8a84bc=0x0;_0x8a84bc<_0x3d11b8[_0x53aaca(0x1b19)];_0x8a84bc+=0x1)for(let 
_0x20329f=0x0;_0x20329f<_0x3d11b8[_0x8a84bc][_0x53aaca(0x1b19)];_0x20329f+=0x1)_0x37db82(_0x3d11b8[_0x8a84bc][_0x20329f],_0x5cb945[_0x53aaca(0x4657)]);},_0x16e484={'compute':{'alias':_0x885463=>_0x195584(_0x885463,_0x421a6d),'machine':_0x462df8=>_0x195584(_0x462df8,_0x585413),'normal':_0x24ebac=>_0x195584(_0x24ebac,_0x102de8),'freq':_0x2b2e24,'offset':_0x13d374,'index':_0x52bdb3,'wordCount':_0x2afa5e},'methods':_0x34eed4,'model':{'one':{'aliases':_0x2213f0,'abbreviations':_0x390225,'prefixes':_0x2e38f4,'suffixes':{'like':!0x0,'ish':!0x0,'less':!0x0,'able':!0x0,'elect':!0x0,'type':!0x0,'designate':!0x0},'prePunctuation':{'#':!0x0,'@':!0x0,'_':!0x0,'°':!0x0,'​':!0x0,'‌':!0x0,'‍':!0x0,'\ufeff':!0x0},'postPunctuation':{'%':!0x0,'_':!0x0,'°':!0x0,'​':!0x0,'‌':!0x0,'‍':!0x0,'\ufeff':!0x0},'lexicon':_0xfda68,'unicode':_0x170c85,'emoticons':{'<3':!0x0,'{const _0x43b436=_0x15a6d1;let _0x334a4d=(_0x3e42d3=_0x3e42d3['toLowerCase']()[_0x43b436(0x1b23)]())[_0x43b436(0x1b19)];_0x120691['max']&&_0x334a4d>_0x120691[_0x43b436(0x4529)]&&(_0x334a4d=_0x120691[_0x43b436(0x4529)]);for(let _0x54dec9=_0x120691[_0x43b436(0x37c8)];_0x54dec9<_0x334a4d;_0x54dec9+=0x1){let _0x35ebfb=_0x3e42d3['substring'](0x0,_0x54dec9);_0x120691['safe']&&_0xfe4776[_0x43b436(0x1556)][_0x43b436(0x1d8a)][_0x43b436(0x2c34)][_0x43b436(0x2427)](_0x35ebfb)||(!0x0!==_0x3fed77[_0x43b436(0x2427)](_0x35ebfb)&&!0x0!==_0x20b63d[_0x43b436(0x2427)](_0x35ebfb)?_0x20b63d[_0x35ebfb]=_0x3e42d3:_0x32ac2d[_0x43b436(0x1715)](_0x35ebfb));}}),_0x20b63d=Object[_0x15a6d1(0x4e14)]({},_0x3fed77,_0x20b63d),_0x32ac2d[_0x15a6d1(0xa21)](_0x4c270e=>{delete _0x20b63d[_0x4c270e];}),_0x20b63d;},_0x491a4d={'safe':!0x0,'min':0x3},_0x148c82={'typeahead':function(_0x3aace5=[],_0x4a5080={}){const _0x13f429=_0x37e46c;let _0x2d6df1=this['model']();var 
_0x1bc9e9;_0x4a5080=Object[_0x13f429(0x4e14)]({},_0x491a4d,_0x4a5080),_0x1bc9e9=_0x3aace5,_0x13f429(0x4d86)===Object[_0x13f429(0x3b3c)][_0x13f429(0x8e8)][_0x13f429(0x236b)](_0x1bc9e9)&&(Object[_0x13f429(0x4e14)](_0x2d6df1[_0x13f429(0x1d8a)][_0x13f429(0x2c34)],_0x3aace5),_0x3aace5=Object['keys'](_0x3aace5));let _0x4a83c1=_0x4063f7(_0x3aace5,_0x4a5080,this['world']());return Object[_0x13f429(0x1ea9)](_0x4a83c1)[_0x13f429(0xa21)](_0x21456c=>{const _0x2c9c35=_0x13f429;_0x2d6df1['one']['typeahead'][_0x2c9c35(0x2427)](_0x21456c)?delete _0x2d6df1[_0x2c9c35(0x1d8a)]['typeahead'][_0x21456c]:_0x2d6df1[_0x2c9c35(0x1d8a)][_0x2c9c35(0xa63)][_0x21456c]=_0x4a83c1[_0x21456c];}),this;}},_0x627f32={'model':{'one':{'typeahead':{}}},'api':_0x3c7111,'lib':_0x148c82,'compute':_0x86199,'hooks':[_0x37e46c(0xa63)]};_0x5a48a1[_0x37e46c(0x38b6)](_0x5aa28c),_0x5a48a1[_0x37e46c(0x38b6)](_0x553be4),_0x5a48a1[_0x37e46c(0x38b6)](_0x226049),_0x5a48a1[_0x37e46c(0x38b6)](_0x184f3c),_0x5a48a1[_0x37e46c(0x38b6)](_0x4f5f82),_0x5a48a1['plugin'](_0xc83a46),_0x5a48a1['extend'](_0x16e484),_0x5a48a1[_0x37e46c(0x38b6)](_0x3de92e),_0x5a48a1[_0x37e46c(0xe15)](_0x8edf44),_0x5a48a1[_0x37e46c(0x38b6)](_0x274b29),_0x5a48a1[_0x37e46c(0x38b6)](_0x627f32),_0x5a48a1[_0x37e46c(0x38b6)](_0x363936),_0x5a48a1[_0x37e46c(0x38b6)](_0x55e89d);const 
_0x41de3d=_0x5a48a1,_0x439866={'addendum':'addenda','corpus':_0x37e46c(0x1bec),'criterion':'criteria','curriculum':_0x37e46c(0x3797),'genus':_0x37e46c(0x77c),'memorandum':_0x37e46c(0x18d8),'opus':'opera','ovum':_0x37e46c(0x23a8),'phenomenon':_0x37e46c(0x2280),'referendum':'referenda','alga':_0x37e46c(0x3f68),'alumna':_0x37e46c(0x2704),'antenna':_0x37e46c(0x33e6),'formula':'formulae','larva':_0x37e46c(0x17cb),'nebula':_0x37e46c(0x15e2),'vertebra':'vertebrae','analysis':_0x37e46c(0x18d5),'axis':_0x37e46c(0x41be),'diagnosis':'diagnoses','parenthesis':'parentheses','prognosis':'prognoses','synopsis':_0x37e46c(0x350),'thesis':_0x37e46c(0x4d91),'neurosis':_0x37e46c(0x4735),'appendix':_0x37e46c(0x418),'index':_0x37e46c(0x2474),'matrix':_0x37e46c(0x17dd),'ox':_0x37e46c(0x2a14),'sex':_0x37e46c(0x47ec),'alumnus':'alumni','bacillus':_0x37e46c(0x491d),'cactus':'cacti','fungus':_0x37e46c(0x3c34),'hippopotamus':_0x37e46c(0x12d3),'libretto':_0x37e46c(0x512c),'modulus':_0x37e46c(0x4fb9),'nucleus':'nuclei','octopus':_0x37e46c(0x4ad0),'radius':_0x37e46c(0x3e1b),'stimulus':'stimuli','syllabus':_0x37e46c(0x305f),'cookie':_0x37e46c(0x5b3),'calorie':_0x37e46c(0x2a3c),'auntie':_0x37e46c(0x46f2),'movie':'movies','pie':'pies','rookie':'rookies','tie':'ties','zombie':_0x37e46c(0x9fc),'leaf':_0x37e46c(0x107f),'loaf':_0x37e46c(0x4d6a),'thief':_0x37e46c(0x16ff),'foot':_0x37e46c(0x38cd),'goose':_0x37e46c(0x4201),'tooth':_0x37e46c(0x42ed),'beau':_0x37e46c(0x1f42),'chateau':_0x37e46c(0x16e2),'tableau':_0x37e46c(0x3214),'bus':_0x37e46c(0x20fd),'gas':_0x37e46c(0xca9),'circus':'circuses','crisis':_0x37e46c(0x1d05),'virus':_0x37e46c(0x1408),'database':'databases','excuse':_0x37e46c(0x45fd),'abuse':_0x37e46c(0x7a5),'avocado':_0x37e46c(0x45d6),'barracks':_0x37e46c(0x2aba),'child':'children','clothes':_0x37e46c(0x58a),'echo':_0x37e46c(0x145d),'embargo':'embargoes','epoch':_0x37e46c(0x316c),'deer':_0x37e46c(0x2867),'halo':_0x37e46c(0x375d),'man':_0x37e46c(0x487a),'woman':_0x37e46c(0x4aae),'mosquito':_0x37
e46c(0x2e00),'mouse':_0x37e46c(0x1249),'person':'people','quiz':'quizzes','rodeo':'rodeos','shoe':_0x37e46c(0x4039),'sombrero':'sombreros','stomach':_0x37e46c(0xbe7),'tornado':_0x37e46c(0x1886),'tuxedo':_0x37e46c(0x29ae),'volcano':_0x37e46c(0x3733)},_0x36d284={'Comparative':_0x37e46c(0x41e5),'Superlative':_0x37e46c(0x43d6),'PresentTense':_0x37e46c(0x4bca),'Condition':_0x37e46c(0x2125),'PastTense':'true¦began,came,d4had,kneel3l2m0sa4we1;ea0sg2;nt;eap0i0;ed;id','Participle':'true¦0:09;a06b01cZdXeat0fSgQhPoJprov0rHs7t6u4w1;ak0ithdra02o2r1;i02uY;k0v0;nd1pr04;ergoJoJ;ak0hHo3;e9h7lain,o6p5t4un3w1;o1um;rn;g,k;ol0reS;iQok0;ught,wn;ak0o1runk;ne,wn;en,wn;ewriNi1uJ;dd0s0;ut3ver1;do4se0t1;ak0h2;do2g1;roG;ne;ast0i7;iv0o1;ne,tt0;all0loBor1;bi3g2s1;ak0e0;iv0o9;dd0;ove,r1;a5eamt,iv0;hos0lu1;ng;e4i3lo2ui1;lt;wn;tt0;at0en,gun;r2w1;ak0ok0;is0;en','Gerund':'true¦accord0be0doin,go0result0stain0;ing','Expression':_0x37e46c(0x1a68),'Negative':_0x37e46c(0xda6),'QuestionWord':_0x37e46c(0x3fd2),'Reflexive':_0x37e46c(0x35c4),'Plural':_0x37e46c(0x45a6),'Unit|Noun':'true¦cEfDgChBinchAk9lb,m6newt5oz,p4qt,t1y0;ardEd;able1b0ea1sp;!l,sp;spo1;a,t,x;on9;!b,g,i1l,m,p0;h,s;!les;!b,elvin,g,m;!es;g,z;al,b;eet,oot,t;m,up0;!s','Value':_0x37e46c(0x3ed),'Imperative':_0x37e46c(0x2850),'Plural|Verb':'true¦leaves','Demonym':_0x37e46c(0x47f),'Organization':_0x37e46c(0xd77),'Possessive':_0x37e46c(0xbab),'Noun|Verb':'true¦0:9W;1:AA;2:96;3:A3;4:9R;5:A2;6:9K;7:8N;8:7L;9:A8;A:93;B:8D;C:8X;a9Ob8Qc7Id6Re6Gf5Sg5Hh55i4Xj4Uk4Rl4Em40n3Vo3Sp2Squ2Rr21s0Jt02u00vVwGyFzD;ip,oD;ne,om;awn,e6Fie68;aOeMhJiHoErD;ap,e9Oink2;nd0rDuC;kDry,sh5Hth;!shop;ck,nDpe,re,sh;!d,g;e86iD;p,sD;k,p0t2;aDed,lco8W;r,th0;it,lk,rEsDt4ve,x;h,te;!ehou1ra9;aGen5FiFoD;iDmAte,w;ce,d;be,ew,sA;cuum,l4B;pDr7;da5gra6Elo6A;aReQhrPiOoMrGuEwiDy5Z;n,st;nDrn;e,n7O;aGeFiEoDu6;t,ub2;bu5ck4Jgg0m,p;at,k,nd;ck,de,in,nsDp,v7J;f0i8R;ll,ne,p,r4Yss,t94uD;ch,r;ck,de,e,le,me,p,re;e5Wow,u6;ar,e,ll,mp0st,xt;g,lDng2rg7Ps5x;k,ly;a0Sc0Ne0Kh0Fi0Dk0Cl0Am08n06o05pXquaBtKuFwD;ea88iD;ng,
pe,t4;bGit,m,ppErD;fa3ge,pri1v2U;lDo6S;e6Py;!je8;aMeLiKoHrEuDy2;dy,ff,mb2;a85eEiDo5Pugg2;ke,ng;am,ss,t4;ckEop,p,rD;e,m;ing,pi2;ck,nk,t4;er,m,p;ck,ff,ge,in,ke,lEmp,nd,p2rDte,y;!e,t;k,l;aJeIiHlGoFrDur,y;ay,e56inDu3;g,k2;ns8Bt;a5Qit;ll,n,r87te;ed,ll;m,n,rk;b,uC;aDee1Tow;ke,p;a5Je4FiDo53;le,rk;eep,iDou4;ce,p,t;ateboa7Ii;de,gnDl2Vnk,p,ze;!al;aGeFiEoDuff2;ck,p,re,w;ft,p,v0;d,i3Ylt0;ck,de,pe,re,ve;aEed,nDrv1It;se,t2N;l,r4t;aGhedu2oBrD;aEeDibb2o3Z;en,w;pe,t4;le,n,r2M;cDfegua72il,mp2;k,rifi3;aZeHhy6LiGoEuD;b,in,le,n,s5X;a6ck,ll,oDpe,u5;f,t;de,ng,ot,p,s1W;aTcSdo,el,fQgPje8lOmMnLo17pJque6sFturn,vDwa6V;eDi27;al,r1;er74oFpe8tEuD;lt,me;!a55;l71rt;air,eaDly,o53;l,t;dezvo2Zt;aDedy;ke,rk;ea1i4G;a6Iist0r5N;act6Yer1Vo71uD;nd,se;a38o6F;ch,s6G;c1Dge,iEke,lly,nDp1Wt1W;ge,k,t;n,se;es6Biv0;a04e00hYiXlToNrEsy4uD;mp,n4rcha1sh;aKeIiHoDu4O;be,ceFdu3fi2grDje8mi1p,te6;amDe6W;!me;ed,ss;ce,de,nt;sDy;er6Cs;cti3i1;iHlFoEp,re,sDuCw0;e,i5Yt;l,p;iDl;ce,sh;nt,s5V;aEce,e32uD;g,mp,n7;ce,nDy;!t;ck,le,n17pe,tNvot;a1oD;ne,tograph;ak,eFnErDt;fu55mA;!c32;!l,r;ckJiInHrFsEtDu1y;ch,e9;s,te;k,tD;!y;!ic;nt,r,se;!a7;bje8ff0il,oErDutli3Qver4B;bAd0ie9;ze;a4ReFoDur1;d,tD;e,i3;ed,gle8tD;!work;aMeKiIoEuD;rd0;ck,d3Rld,nEp,uDve;nt,th;it5EkD;ey;lk,n4Brr5CsDx;s,ta2B;asuBn4UrDss;ge,it;il,nFp,rk3WsEtD;ch,t0;h,k,t0;da5n0oeuvB;aLeJiHoEuD;mp,st;aEbby,ck,g,oDve;k,t;d,n;cDe,ft,mAnIst;en1k;aDc0Pe4vK;ch,d,k,p,se;bFcEnd,p,t4uD;gh,n4;e,k;el,o2U;eEiDno4E;ck,d,ll,ss;el,y;aEo1OuD;i3mp;m,zz;mpJnEr46ssD;ue;c1Rdex,fluGha2k,se2HteDvoi3;nt,rD;e6fa3viD;ew;en3;a8le2A;aJeHiGoEuD;g,nt;l3Ano2Dok,pDr1u1;!e;ghli1Fke,nt,re,t;aDd7lp;d,t;ck,mGndFrEsh,tDu9;ch,e;bo3Xm,ne4Eve6;!le;!m0;aMear,ift,lKossJrFuD;arDe4Alp,n;antee,d;aFiEoDumb2;uCwth;ll,nd,p;de,sp;ip;aBoDue;ss,w;g,in,me,ng,s,te,ze;aZeWiRlNoJrFuD;ck,el,nDss,zz;c38d;aEoDy;st,wn;cDgme,me,nchi1;tuB;cFg,il,ld,rD;ce,e29mDwa31;!at;us;aFe0Vip,oDy;at,ck,od,wD;!er;g,ke,me,re,sh,vo1E;eGgFlEnDre,sh,t,x;an3i0Q;e,m,t0;ht,uB;ld;aEeDn3;d,l;r,tuB;ce,il,ll,rm,vo2W;cho,d7ffe8nMsKxFyeD;!baD;ll;cGerci1hFpDtra8;eriDo0W;e
n3me9;au6ibA;el,han7u1;caDtima5;pe;count0d,vy;a01eSiMoJrEuDye;b,el,mp,pli2X;aGeFiEoD;ne,p;ft,ll,nk,p,ve;am,ss;ft,g,in;cEd7ubt,wnloD;ad;k,u0E;ge6p,sFt4vD;e,iDor3;de;char7gui1h,liEpD;at4lay,u5;ke;al,bKcJfeIlGmaCposAsEtaD;il;e07iD;gn,re;ay,ega5iD;ght;at,ct;li04rea1;a5ut;b,ma7n3rDte;e,t;a0Eent0Dh06irc2l03oKrFuD;be,e,rDt;b,e,l,ve;aGeFoEuDy;sh;p,ss,wd;dAep;ck,ft,sh;at,de,in,lTmMnFordina5py,re,st,uDv0;gh,nDp2rt;s01t;ceHdu8fli8glomeIsFtDveN;a8rD;a6ol;e9tru8;ct;ntDrn;ra5;bHfoGmFpD;leDouCromi1;me9;aCe9it,u5;rt;at,iD;ne;lap1oD;r,ur;aEiDoud,ub;ck,p;im,w;aEeDip;at,ck,er;iGllen7nErD;ge,m,t;ge,nD;el;n,r;er,re;ke,ll,mp,noe,pGrXsFtEuDve;se,ti0I;alog,ch;h,t;!tuB;re;a03eZiXlToPrHuEyD;pa11;bb2ck2dgEff0mp,rDst,zz;den,n;et;anJeHiFoadEuD;i1sh;ca6;be,d7;ge;aDed;ch,k;ch,d;aFg,mb,nEoDrd0tt2x,ycott;k,st,t;d,e;rd,st;aFeCiDoYur;nk,tz;nd;me;as,d,ke,nd,opsy,tD;!ch,e;aFef,lt,nDt;d,efA;it;r,t;ck,il,lan3nIrFsEtt2;le;e,h;!gDk;aDe;in;!d,g,k;bu1c05dZge,iYlVnTppQrLsIttGucEwaD;rd;tiD;on;aDempt;ck;k,sD;i6ocia5;st;chFmD;!oD;ur;!iD;ve;eEroa4;ch;al;chDg0sw0;or;aEt0;er;rm;d,m,r;dreHvD;an3oD;ca5;te;ce;ss;cDe,he,t;eFoD;rd,u9;nt;nt,ss;se','Actor':_0x37e46c(0x3e92),'Adj|Noun':'true¦0:16;a1Db17c0Ud0Re0Mf0Dg0Ah08i06ju05l02mWnUoSpNrIsBt7u4v1watershed;a1ision0Z;gabo4nilla,ria1;b0Vnt;ndergr1pstairs;adua14ou1;nd;a3e1oken,ri0;en,r1;min0rori13;boo,n;age,e5ilv0Flack,o3quat,ta2u1well;bordina0Xper5;b0Lndard;ciali0Yl1vereign;e,ve16;cret,n1ri0;ior;a4e2ou1ubbiL;nd,tiY;ar,bBl0Wnt0p1side11;resent0Vublican;ci0Qsh;a4eriodic0last0Zotenti0r1;emi2incip0o1;!fession0;er,um;rall4st,tie0U;ff1pposi0Hv0;ens0Oi0C;agg01ov1uts;el;a5e3iniatJo1;bi01der07r1;al,t0;di1tr0N;an,um;le,riG;attOi2u1;sh;ber0ght,qC;stice,veniT;de0mpressioYn1;cumbe0Edividu0no0Dsta0Eterim;alf,o1umdrum;bby,melF;en2old,ra1;ph0Bve;er0ious;a7e5i4l3u1;git03t1;ure;uid;ne;llow,m1;aFiL;ir,t,vo1;riOuriO;l3p00x1;c1ecutUpeV;ess;d1iK;er;ar2e1;mographUrivO;k,l2;hiGlassSo2rude,unn1;ing;m5n1operK;creCstitueOte2vertab1;le;mpor1nt;ary;ic,m2p1;anion,lex;er2u1;ni8;ci0;al;e5lank,o4r1;i2u1;te;
ef;ttom,urgeois;st;cadem9d6l2ntarct9r1;ab,ct8;e3tern1;at1;ive;rt;oles1ult;ce1;nt;ic','Adj|Past':_0x37e46c(0x3a32),'Singular':_0x37e46c(0x4c4d),'Person|Noun':'true¦a0Eb07c03dWeUfQgOhLjHkiGlFmCnBolive,p7r4s3trini06v1wa0;ng,rd,tts;an,enus,iol0;a,et;ky,onPumm09;ay,e1o0uby;bin,d,se;ed,x;a2e1o0;l,tt04;aLnJ;dYge,tR;at,orm;a0eloW;t0x,ya;!s;a9eo,iH;ng,tP;a2e1o0;lGy;an,w3;de,smi4y;a0erb,iOolBuntR;ll,z0;el;ail,e0iLuy;ne;a1ern,i0lo;elds,nn;ith,n0;ny;a0dEmir,ula,ve;rl;a4e3i1j,ol0;ly;ck,x0;ie;an,ja;i0wn;sy;am,h0liff,rystal;a0in,ristian;mbers,ri0;ty;a4e3i2o,r0ud;an0ook;dy;ll;nedict,rg;k0nks;er;l0rt;fredo,ma','Actor|Verb':_0x37e46c(0x30c0),'MaleName':_0x37e46c(0x43d8),'Uncountable':'true¦0:2E;1:2L;2:33;a2Ub2Lc29d22e1Rf1Ng1Eh16i11j0Yk0Wl0Rm0Hn0Do0Cp03rZsLt9uran2Jv7w3you\x20gu0E;a5his17i4oo3;d,l;ldlife,ne;rm8t1;apor,ernacul29i3;neg28ol1Otae;eDhBiAo8r4un3yranny;a,gst1B;aff2Oea1Ko4ue\x20nor3;th;o08u3;bleshoot2Ose1Tt;night,othpas1Vwn3;foEsfoE;me\x20off,n;er3und1;e,mod2S;a,nnis;aDcCeBhAi9ki8o7p6t4u3weepstak0;g1Unshi2Hshi;ati08e3;am,el;ace2Keci0;ap,cc1meth2C;n,ttl0;lk;eep,ingl0or1C;lf,na1Gri0;ene1Kisso1C;d0Wfe2l4nd,t3;i0Iurn;m1Ut;abi0e4ic3;e,ke15;c3i01laxa11search;ogni10rea10;a9e8hys7luto,o5re3ut2;amble,mis0s3ten20;en1Zs0L;l3rk;i28l0EyH;\x2016i28;a24tr0F;nt3ti0M;i0s;bstetri24vercrowd1Qxyg09;a5e4owada3utella;ys;ptu1Ows;il\x20poliZtional\x20securi2;aAe8o5u3;m3s1H;ps;n3o1K;ey,o3;gamy;a3cha0Elancholy,rchandi1Htallurgy;sl0t;chine3g1Aj1Hrs,thema1Q;\x20learn1Cry;aught1e6i5ogi4u3;ck,g12;c,s1M;ce,ghtn18nguis1LteratWv1;ath1isVss;ara0EindergartPn3;icke0Aowled0Y;e3upit1;a3llyfiGwel0G;ns;ce,gnor6mp5n3;forma00ter3;net,sta07;atiSort3rov;an18;a7e6isto09o3ung1;ckey,mework,ne4o3rseradi8spitali2use\x20arrest;ky;s2y;adquarteXre;ir,libut,ppiHs3;hi3te;sh;ene8l6o5r3um,ymnas11;a3eZ;niUss;lf,re;ut3yce0F;en;\x203ti0W;edit0Hpo3;ol;aNicFlour,o4urnit3;ure;od,rgive3uri1wl;ness;arCcono0LducaBlectr9n7quip8thi0Pvery6x3;ist4per3;ti0B;en0J;body,o08th07;joy3tertain3;ment;ici2o3;ni0H;tiS;nings,th;emi02i6o4raugh3ynas2;ts;pe,w
nstai3;rs;abet0ce,s3;honZrepu3;te;aDelciChAivi07l8o3urrency;al,ld\x20w6mmenta5n3ral,ttIuscoB;fusiHt\x203;ed;ry;ar;assi01oth0;es;aos,e3;eMwK;us;d,rO;a8i6lood,owlHread5u3;ntGtt1;er;!th;lliarJs3;on;g3ss;ga3;ge;cKdviJeroGirFmBn6ppeal\x20court,r4spi3thleL;rin;ithmet3sen3;ic;i6y3;o4th3;ing;ne;se;en5n3;es2;ty;ds;craft;bi8d3nau7;yna3;mi6;ce;id,ous3;ti3;cs','Infinitive':_0x37e46c(0x41a3),'Person':_0x37e46c(0x39cb),'Adjective':_0x37e46c(0x161a),'Pronoun':_0x37e46c(0x1502),'Preposition':_0x37e46c(0x404),'SportsTeam':_0x37e46c(0x430f),'Unit':_0x37e46c(0x1c61),'Noun|Gerund':_0x37e46c(0x4d23),'PhrasalVerb':_0x37e46c(0xd70),'ProperNoun':_0x37e46c(0x259d),'Person|Place':_0x37e46c(0x4ae3),'LastName':_0x37e46c(0x4e50),'Ordinal':_0x37e46c(0x338b),'Cardinal':_0x37e46c(0x15b8),'Multiple':_0x37e46c(0x43c9),'City':_0x37e46c(0x45df),'Region':_0x37e46c(0x51a3),'Country':_0x37e46c(0x3ac0),'Place':_0x37e46c(0x385),'FirstName':_0x37e46c(0x2699),'WeekDay':_0x37e46c(0x167b),'Month':_0x37e46c(0x3a34),'Date':_0x37e46c(0x24ab),'Duration':_0x37e46c(0x4a99),'FemaleName':'true¦0:J7;1:JB;2:IJ;3:IK;4:J1;5:IO;6:JS;7:JO;8:HB;9:JK;A:H4;B:I2;C:IT;D:JH;E:IX;F:BA;G:I4;aGTbFLcDRdD0eBMfB4gADh9Ti9Gj8Dk7Cl5Wm48n3Lo3Hp33qu32r29s15t0Eu0Cv02wVxiTyOzH;aLeIineb,oHsof3;e3Sf3la,ra;h2iKlIna,ynH;ab,ep;da,ma;da,h2iHra;nab;aKeJi0FolB7uIvH;et8onDP;i0na;le0sen3;el,gm3Hn,rGLs8W;aoHme0nyi;m5XyAD;aMendDZhiDGiH;dele9lJnH;if48niHo0;e,f47;a,helmi0lHma;a,ow;ka0nB;aNeKiHusa5;ck84kIl8oleAviH;anFenJ4;ky,toriBK;da,lA8rHs0;a,nHoniH9;a,iFR;leHnesH9;nILrH;i1y;g9rHs6xHA;su5te;aYeUhRiNoLrIuHy2;i,la;acJ3iHu0J;c3na,sH;hFta;nHr0F;iFya;aJffaEOnHs6;a,gtiH;ng;!nFSra;aIeHomasi0;a,l9Oo8Ares1;l3ndolwethu;g9Fo88rIssH;!a,ie;eHi,ri7;sa,za;bOlMmKnIrHs6tia0wa0;a60yn;iHya;a,ka,s6;arFe2iHm77ra;!ka;a,iH;a,t6;at6it6;a0Ecarlett,e0AhWiSkye,neza0oQri,tNuIyH;bIGlvi1;ha,mayIJniAsIzH;an3Net8ie,y;anHi7;!a,e,nH;aCe;aIeH;fan4l5Dphan6E;cI5r5;b3fiAAm0LnHphi1;d2ia,ja,ya;er2lJmon1nIobh8QtH;a,i;dy;lETv3;aMeIirHo0risFDy5;a,lDM;ba,e0i5lJrH;iHr6Jyl;!d8Ifa;ia,lDZ;hd,iMki2nJrIu
0w0yH;la,ma,na;i,le9on,ron,yn;aIda,ia,nHon;a,on;!ya;k6mH;!aa;lJrItaye82vH;da,inj;e0ife;en1i0ma;anA9bLd5Oh1SiBkKlJmInd2rHs6vannaC;aCi0;ant6i2;lDOma,ome;ee0in8Tu2;in1ri0;a05eZhXiUoHuthDM;bScRghQl8LnPsJwIxH;anB3ie,y;an,e0;aIeHie,lD;ann7ll1marDGtA;!lHnn1;iHyn;e,nH;a,dF;da,i,na;ayy8G;hel67io;bDRerAyn;a,cIkHmas,nFta,ya;ki,o;h8Xki;ea,iannGMoH;da,n1P;an0bJemFgi0iInHta,y0;a8Bee;han86na;a,eH;cHkaC;a,ca;bi0chIe,i0mo0nHquETy0;di,ia;aERelHiB;!e,le;een4ia0;aPeOhMiLoJrHute6A;iHudenCV;scil3LyamvaB;lHrt3;i0ly;a,paluk;ilome0oebe,ylH;is,lis;ggy,nelope,r5t2;ige,m0VnKo5rvaDMtIulH;a,et8in1;ricHt4T;a,e,ia;do2i07;ctav3dIfD3is6ksa0lHphD3umC5yunbileg;a,ga,iv3;eHvAF;l3t8;aWeUiMoIurHy5;!ay,ul;a,eJor,rIuH;f,r;aCeEma;ll1mi;aNcLhariBQkKlaJna,sHta,vi;anHha;ur;!y;a,iDZki;hoGk9YolH;a,e4P;!mh;hir,lHna,risDEsreE;!a,lBV;asuMdLh3i6Dl5nKomi7rgEVtH;aHhal4;lHs6;i1ya;cy,et8;e9iF0ya;nngu2X;a0Ackenz4e02iMoJrignayani,uriDJyH;a,rH;a,iOlNna,tG;bi0i2llBJnH;a,iH;ca,ka,qD9;a,cUdo4ZkaTlOmi,nMrItzi,yH;ar;aJiIlH;anET;am;!l,nB;dy,eHh,n4;nhGrva;aKdJe0iCUlH;iHy;cent,e;red;!gros;!e5;ae5hH;ae5el3Z;ag5DgNi,lKrH;edi7AiIjem,on,yH;em,l;em,sCG;an4iHliCF;nHsCJ;a,da;!an,han;b09cASd07e,g05ha,i04ja,l02n00rLsoum5YtKuIv84xBKyHz4;bell,ra,soBB;d7rH;a,eE;h8Gild1t4;a,cUgQiKjor4l7Un4s6tJwa,yH;!aHbe6Xja9lAE;m,nBL;a,ha,in1;!aJbCGeIja,lDna,sHt63;!a,ol,sa;!l1D;!h,mInH;!a,e,n1;!awit,i;arJeIie,oHr48ueri8;!t;!ry;et46i3B;el4Xi7Cy;dHon,ue5;akranAy;ak,en,iHlo3S;a,ka,nB;a,re,s4te;daHg4;!l3E;alDd4elHge,isDJon0;ei9in1yn;el,le;a0Ne0CiXoQuLyH;d3la,nH;!a,dIe2OnHsCT;!a,e2N;a,sCR;aD4cJel0Pis1lIna,pHz;e,iA;a,u,wa;iHy;a0Se,ja,l2NnB;is,l1UrItt1LuHvel4;el5is1;aKeIi7na,rH;aADi7;lHn1tA;ei;!in1;aTbb9HdSepa,lNnKsJvIzH;!a,be5Ret8z4;!ia;a,et8;!a,dH;a,sHy;ay,ey,i,y;a,iJja,lH;iHy;aA8e;!aH;!nF;ia,ya;!nH;!a,ne;aPda,e0iNjYla,nMoKsJtHx93y5;iHt4;c3t3;e2PlCO;la,nHra;a,ie,o2;a,or1;a,gh,laH;!ni;!h,nH;a,d2e,n5V;cOdon9DiNkes6mi9Gna,rMtJurIvHxmi,y5;ern1in3;a,e5Aie,yn;as6iIoH;nya,ya;fa,s6;a,isA9;a,la;ey,ie,y;a04eZhXiOlASoNrJyH;lHra;a,ee,ie;istHy6I;a,en,iIyH;!na;!e,n5F;nul,ri,urtn
B8;aOerNlB7mJrHzzy;a,stH;en,in;!berlImernH;aq;eHi,y;e,y;a,stE;!na,ra;aHei2ongordzol;dij1w5;el7UiKjsi,lJnIrH;a,i,ri;d2na,za;ey,i,lBLs4y;ra,s6;biAcARdiat7MeBAiSlQmPnyakuma1DrNss6NtKviAyH;!e,lH;a,eH;e,i8T;!a6HeIhHi4TlDri0y;ar8Her8Hie,leErBAy;!lyn8Ori0;a,en,iHl5Xoli0yn;!ma,nFs95;a5il1;ei8Mi,lH;e,ie;a,tl6O;a0AeZiWoOuH;anMdLlHst88;es,iH;a8NeHs8X;!n9tH;!a,te;e5Mi3My;a,iA;!anNcelDdMelGhan7VleLni,sIva0yH;a,ce;eHie;fHlDph7Y;a,in1;en,n1;i7y;!a,e,n45;lHng;!i1DlH;!i1C;anNle0nKrJsH;i8JsH;!e,i8I;i,ri;!a,elGif2CnH;a,et8iHy;!e,f2A;a,eJiInH;a,eIiH;e,n1;!t8;cMda,mi,nIque4YsminFvie2y9zH;min7;a7eIiH;ce,e,n1s;!lHs82t0F;e,le;inIk6HlDquelH;in1yn;da,ta;da,lRmPnOo0rNsIvaHwo0zaro;!a0lu,na;aJiIlaHob89;!n9R;do2;belHdo2;!a,e,l3B;a7Ben1i0ma;di2es,gr72ji;a9elBogH;en1;a,e9iHo0se;a0na;aSeOiJoHus7Kyacin2C;da,ll4rten24snH;a,i9U;lImaH;ri;aIdHlaI;a,egard;ry;ath1BiJlInrietArmi9sH;sa,t1A;en2Uga,mi;di;bi2Fil8MlNnMrJsItHwa,yl8M;i5Tt4;n60ti;iHmo51ri53;etH;!te;aCnaC;a,ey,l4;a02eWiRlPoNrKunJwH;enHyne1R;!dolD;ay,el;acieIetHiselB;a,chE;!la;ld1CogooH;sh;adys,enHor3yn2K;a,da,na;aKgi,lIna,ov8EselHta;a,e,le;da,liH;an;!n0;mLnJorgIrH;ald5Si,m3Etrud7;et8i4X;a,eHna;s29vieve;ma;bIle,mHrnet,yG;al5Si5;iIrielH;a,l1;!ja;aTeQiPlorOoz3rH;anJeIiH;da,eB;da,ja;!cH;esIiHoi0P;n1s66;!ca;a,enc3;en,o0;lIn0rnH;anB;ec3ic3;jr,nArKtHy7;emIiHma,oumaA;ha,ma,n;eh;ah,iBrah,za0;cr4Rd0Re0Qi0Pk0Ol07mXn54rUsOtNuMvHwa;aKelIiH;!e,ta;inFyn;!a;!ngel4V;geni1ni47;h5Yien9ta;mLperanKtH;eIhHrel5;er;l31r7;za;a,eralB;iHma,ne4Lyn;cHka,n;a,ka;aPeNiKmH;aHe21ie,y;!li9nuH;elG;lHn1;e7iHy;a,e,ja;lHrald;da,y;!nue5;aWeUiNlMma,no2oKsJvH;a,iH;na,ra;a,ie;iHuiH;se;a,en,ie,y;a0c3da,e,f,nMsJzaH;!betHveA;e,h;aHe,ka;!beH;th;!a,or;anor,nH;!a,i;!in1na;ate1Rta;leEs6;vi;eIiHna,wi0;e,th;l,n;aYeMh3iLjeneKoH;lor5Vminiq4Ln3FrHtt4;a,eEis,la,othHthy;ea,y;ba;an09naCon9ya;anQbPde,eOiMlJmetr3nHsir5M;a,iH;ce,se;a,iIla,orHphi9;es,is;a,l6F;dHrdH;re;!d5Ena;!b2ForaCraC;a,d2nH;!a,e;hl3i0l0GmNnLphn1rIvi1WyH;le,na;a,by,cIia,lH;a,en1;ey,ie;a,et8iH;!ca,el1Aka,z;arHia;is;a0Re0Nh04i02lUoJristIynH;d
i,th3;al,i0;lPnMrIurH;tn1D;aJd2OiHn2Ori9;!nH;a,e,n1;!l4;cepci5Cn4sH;tanHuelo;ce,za;eHleE;en,t8;aJeoIotH;il54;!pat2;ir7rJudH;et8iH;a,ne;a,e,iH;ce,sZ;a2er2ndH;i,y;aReNloe,rH;isJyH;stH;al;sy,tH;a1Sen,iHy;an1e,n1;deJlseIrH;!i7yl;a,y;li9;nMrH;isKlImH;ai9;a,eHot8;n1t8;!sa;d2elGtH;al,elG;cIlH;es8i47;el3ilH;e,ia,y;itlYlXmilWndVrMsKtHy5;aIeIhHri0;er1IleErDy;ri0;a38sH;a37ie;a,iOlLmeJolIrH;ie,ol;!e,in1yn;lHn;!a,la;a,eIie,otHy;a,ta;ne,y;na,s1X;a0Ii0I;a,e,l1;isAl4;in,yn;a0Ke02iZlXoUrH;andi7eRiJoIyH;an0nn;nwDoke;an3HdgMgiLtH;n31tH;!aInH;ey,i,y;ny;d,t8;etH;!t7;an0e,nH;da,na;bbi7glarIlo07nH;iAn4;ka;ancHythe;a,he;an1Clja0nHsm3M;iAtH;ou;aWcVlinUniArPssOtJulaCvH;!erlH;ey,y;hJsy,tH;e,iHy7;e,na;!anH;ie,y;!ie;nItHyl;ha,ie;adIiH;ce;et8i9;ay,da;ca,ky;!triH;ce,z;rbJyaH;rmH;aa;a2o2ra;a2Ub2Od25g21i1Sj5l18m0Zn0Boi,r06sWtVuPvOwa,yIzH;ra,u0;aKes6gJlIn,seH;!l;in;un;!nH;a,na;a,i2K;drLguJrIsteH;ja;el3;stH;in1;a,ey,i,y;aahua,he0;hIi2Gja,miAs2DtrH;id;aMlIraqHt21;at;eIi7yH;!n;e,iHy;gh;!nH;ti;iJleIo6piA;ta;en,n1t8;aHelG;!n1J;a01dje5eZgViTjRnKohito,toHya;inet8nH;el5ia;te;!aKeIiHmJ;e,ka;!mHtt7;ar4;!belIliHmU;sa;!l1;a,eliH;ca;ka,sHta;a,sa;elHie;a,iH;a,ca,n1qH;ue;!tH;a,te;!bImHstasiMya;ar3;el;aLberKeliJiHy;e,l3naH;!ta;a,ja;!ly;hGiIl3nB;da;a,ra;le;aWba,ePiMlKthJyH;a,c3sH;a,on,sa;ea;iHys0N;e,s0M;a,cIn1sHza;a,e,ha,on,sa;e,ia,ja;c3is6jaKksaKna,sJxH;aHia;!nd2;ia,saH;nd2;ra;ia;i0nIyH;ah,na;a,is,naCoud;la;c6da,leEmNnLsH;haClH;inHyY;g,n;!h;a,o,slH;ey;ee;en;at6g4nIusH;ti0;es;ie;aWdiTelMrH;eJiH;anMenH;a,e,ne;an0;na;!aLeKiIyH;nn;a,n1;a,e;!ne;!iH;de;e,lDsH;on;yn;!lH;i9yn;ne;aKbIiHrL;!e,gaK;ey,i7y;!e;gaH;il;dKliyJradhIs6;ha;ya;ah;a,ya','Honorific':_0x37e46c(0x3ec5),'Adj|Gerund':_0x37e46c(0x3d04),'Comparable':_0x37e46c(0x5cd),'Adverb':_0x37e46c(0x37a),'Conjunction':_0x37e46c(0x2b6b),'Currency':_0x37e46c(0x356c),'Determiner':_0x37e46c(0x4df2),'Adj|Present':'true¦a07b04cVdQeNfJhollIidRlEmCnarrIoBp9qua8r7s3t2uttFw0;aKet,ro0;ng,u08;endChin;e2hort,l1mooth,our,pa9tray,u0;re,speU;i2ow;cu6da02leSpaN;eplica01i02;ck;aHerfePr0;ese
Uime,omV;bscu1pen,wn;atu0e3odeH;re;a2e1ive,ow0;er;an;st,y;ow;a2i1oul,r0;ee,inge;rm;iIke,ncy,st;l1mpty,x0;emHpress;abo4ic7;amp,e2i1oub0ry,ull;le;ffu9re6;fu8libe0;raE;alm,l5o0;mpleCn3ol,rr1unterfe0;it;e0u7;ct;juga8sum7;ea1o0;se;n,r;ankru1lu0;nt;pt;li2pproxi0rticula1;ma0;te;ght','Person|Adj':'true¦b3du2earnest,frank,mi2r0san1woo1;an0ich,u1;dy;sty;ella,rown','Modal':_0x37e46c(0x3cbb),'Verb':_0x37e46c(0x4aea),'Person|Verb':_0x37e46c(0x207b),'Person|Date':_0x37e46c(0x209f)},_0x5976fa=0x24,_0x1d70f1=_0x37e46c(0x3511),_0x39633a=_0x1d70f1[_0x37e46c(0x1117)]('')[_0x37e46c(0x24d8)](function(_0x347286,_0x54a1a1,_0x4b4c56){return _0x347286[_0x54a1a1]=_0x4b4c56,_0x347286;},{}),_0x4dd490=function(_0x2130d2){const _0x2f1c69=_0x37e46c;if(void 0x0!==_0x39633a[_0x2130d2])return _0x39633a[_0x2130d2];let _0x32e595=0x0,_0x1b5b9d=0x1,_0x2119df=_0x5976fa,_0x5aaa6d=0x1;for(;_0x1b5b9d<_0x2130d2[_0x2f1c69(0x1b19)];_0x32e595+=_0x2119df,_0x1b5b9d++,_0x2119df*=_0x5976fa);for(let _0x199bf1=_0x2130d2[_0x2f1c69(0x1b19)]-0x1;_0x199bf1>=0x0;_0x199bf1--,_0x5aaa6d*=_0x5976fa){let _0x51d92e=_0x2130d2[_0x2f1c69(0x4955)](_0x199bf1)-0x30;_0x51d92e>0xa&&(_0x51d92e-=0x7),_0x32e595+=_0x51d92e*_0x5aaa6d;}return _0x32e595;},_0x660af2=function(_0x34134c){const _0x1047d1=_0x37e46c,_0x57ba7b=new RegExp(_0x1047d1(0x2f2a));for(let _0x568227=0x0;_0x568227<_0x34134c[_0x1047d1(0xbe9)][_0x1047d1(0x1b19)];_0x568227++){const _0x5efffb=_0x57ba7b[_0x1047d1(0x198d)](_0x34134c[_0x1047d1(0xbe9)][_0x568227]);if(!_0x5efffb){_0x34134c[_0x1047d1(0x305b)]=_0x568227;break;}_0x34134c[_0x1047d1(0x18b2)][_0x4dd490(_0x5efffb[0x1])]=_0x4dd490(_0x5efffb[0x2]);}_0x34134c['nodes']=_0x34134c[_0x1047d1(0xbe9)][_0x1047d1(0x384c)](_0x34134c[_0x1047d1(0x305b)],_0x34134c[_0x1047d1(0xbe9)][_0x1047d1(0x1b19)]);},_0x5ac0db=function(_0x198588,_0x162b6e,_0x1b6556){const _0x2fbe18=_0x37e46c,_0x3eccca=_0x4dd490(_0x162b6e);return 
_0x3eccca<_0x198588[_0x2fbe18(0x305b)]?_0x198588[_0x2fbe18(0x18b2)][_0x3eccca]:_0x1b6556+_0x3eccca+0x1-_0x198588[_0x2fbe18(0x305b)];},_0x56721f=function(_0x30b3cd){const _0x4de46b=_0x37e46c,_0x5250f7={'nodes':_0x30b3cd[_0x4de46b(0x1117)](';'),'syms':[],'symCount':0x0};return _0x30b3cd[_0x4de46b(0x2d96)](':')&&_0x660af2(_0x5250f7),function(_0x249ed2){const _0x1128fc=[],_0xa79a8e=(_0x1b6981,_0x2edadb)=>{const _0x357b47=a0_0x11e7;let _0x3a80ab=_0x249ed2[_0x357b47(0xbe9)][_0x1b6981];'!'===_0x3a80ab[0x0]&&(_0x1128fc['push'](_0x2edadb),_0x3a80ab=_0x3a80ab[_0x357b47(0x384c)](0x1));const _0x4a40fc=_0x3a80ab['split'](/([A-Z0-9,]+)/g);for(let _0x4044b3=0x0;_0x4044b3<_0x4a40fc['length'];_0x4044b3+=0x2){const _0xd3c22a=_0x4a40fc[_0x4044b3],_0x4bd175=_0x4a40fc[_0x4044b3+0x1];if(!_0xd3c22a)continue;const _0x2efc8f=_0x2edadb+_0xd3c22a;if(','===_0x4bd175||void 0x0===_0x4bd175){_0x1128fc['push'](_0x2efc8f);continue;}const _0x335c22=_0x5ac0db(_0x249ed2,_0x4bd175,_0x1b6981);_0xa79a8e(_0x335c22,_0x2efc8f);}};return _0xa79a8e(0x0,''),_0x1128fc;}(_0x5250f7);},_0x147dbb=function(_0x569774){const _0x5bf025=_0x37e46c;if(!_0x569774)return{};const _0x3c3eb4=_0x569774[_0x5bf025(0x1117)]('|')['reduce']((_0x52aeba,_0x274bdb)=>{const _0x59698c=_0x5bf025,_0x4b7029=_0x274bdb[_0x59698c(0x1117)]('¦');return _0x52aeba[_0x4b7029[0x0]]=_0x4b7029[0x1],_0x52aeba;},{}),_0x40ad2e={};return Object[_0x5bf025(0x1ea9)](_0x3c3eb4)[_0x5bf025(0xa21)](function(_0x410541){const _0x47d86b=_0x5bf025,_0x4a941b=_0x56721f(_0x3c3eb4[_0x410541]);'true'===_0x410541&&(_0x410541=!0x0);for(let _0x1b0671=0x0;_0x1b0671<_0x4a941b[_0x47d86b(0x1b19)];_0x1b0671++){const 
_0x855f16=_0x4a941b[_0x1b0671];!0x0===_0x40ad2e[_0x47d86b(0x2427)](_0x855f16)?!0x1===Array[_0x47d86b(0x22b4)](_0x40ad2e[_0x855f16])?_0x40ad2e[_0x855f16]=[_0x40ad2e[_0x855f16],_0x410541]:_0x40ad2e[_0x855f16]['push'](_0x410541):_0x40ad2e[_0x855f16]=_0x410541;}}),_0x40ad2e;},_0x242ccc=[_0x37e46c(0x3d93),'Pronoun'],_0x185b19={'a':[[/(antenn|formul|nebul|vertebr|vit)a$/i,_0x37e46c(0x2c45)],[/ia$/i,'ia']],'e':[[/(kn|l|w)ife$/i,_0x37e46c(0x481a)],[/(hive)$/i,'$1s'],[/([m|l])ouse$/i,_0x37e46c(0x1c06)],[/([m|l])ice$/i,_0x37e46c(0x1c06)]],'f':[[/^(dwar|handkerchie|hoo|scar|whar)f$/i,_0x37e46c(0x206d)],[/^((?:ca|e|ha|(?:our|them|your)?se|she|wo)l|lea|loa|shea|thie)f$/i,_0x37e46c(0x206d)]],'i':[[/(octop|vir)i$/i,_0x37e46c(0x5290)]],'m':[[/([ti])um$/i,_0x37e46c(0x1bd7)]],'n':[[/^(oxen)$/i,'$1']],'o':[[/(al|ad|at|er|et|ed)o$/i,_0x37e46c(0x184b)]],'s':[[/(ax|test)is$/i,'$1es'],[/(alias|status)$/i,_0x37e46c(0x2ba7)],[/sis$/i,_0x37e46c(0x4636)],[/(bu)s$/i,_0x37e46c(0x288d)],[/(sis)$/i,_0x37e46c(0x4636)],[/^(?!talis|.*hu)(.*)man$/i,_0x37e46c(0x1681)],[/(octop|vir|radi|nucle|fung|cact|stimul)us$/i,_0x37e46c(0x5290)]],'x':[[/(matr|vert|ind|cort)(ix|ex)$/i,_0x37e46c(0x75c)],[/^(ox)$/i,_0x37e46c(0x4ad9)]],'y':[[/([^aeiouy]|qu)y$/i,_0x37e46c(0x39bf)]],'z':[[/(quiz)$/i,_0x37e46c(0x30a7)]]},_0x2d0f3e=/([xsz]|ch|sh)$/,_0x7b8422=function(_0xa7417c='',_0x550565){const _0x1d8db5=_0x37e46c;let {irregularPlurals:_0x4e6c77,uncountable:_0x2165b6}=_0x550565[_0x1d8db5(0x21c9)];if(_0x2165b6[_0x1d8db5(0x2427)](_0xa7417c))return _0xa7417c;if(_0x4e6c77[_0x1d8db5(0x2427)](_0xa7417c))return _0x4e6c77[_0xa7417c];let _0x133994=function(_0x416572){const _0x3dd5ac=_0x1d8db5;let _0x2e5ce5=_0x416572[_0x416572[_0x3dd5ac(0x1b19)]-0x1];if(!0x0===_0x185b19[_0x3dd5ac(0x2427)](_0x2e5ce5))for(let _0x4f199f=0x0;_0x4f199f<_0x185b19[_0x2e5ce5][_0x3dd5ac(0x1b19)];_0x4f199f+=0x1){let _0x54dca7=_0x185b19[_0x2e5ce5][_0x4f199f][0x0];if(!0x0===_0x54dca7[_0x3dd5ac(0x1769)](_0x416572))return 
_0x416572[_0x3dd5ac(0x741)](_0x54dca7,_0x185b19[_0x2e5ce5][_0x4f199f][0x1]);}return null;}(_0xa7417c);return null!==_0x133994?_0x133994:_0x2d0f3e[_0x1d8db5(0x1769)](_0xa7417c)?_0xa7417c+'es':_0xa7417c+'s';},_0x29ac3c=/\|/;let _0x415e70={'20th\x20century\x20fox':_0x37e46c(0x4c36),'7\x20eleven':_0x37e46c(0x4c36),'motel\x206':_0x37e46c(0x4c36),'g8':_0x37e46c(0x4c36),'vh1':_0x37e46c(0x4c36),'76ers':_0x37e46c(0x2aad),'49ers':_0x37e46c(0x2aad),'q1':_0x37e46c(0x448e),'q2':_0x37e46c(0x448e),'q3':_0x37e46c(0x448e),'q4':_0x37e46c(0x448e),'km2':'Unit','m2':'Unit','dm2':_0x37e46c(0xdae),'cm2':_0x37e46c(0xdae),'mm2':'Unit','mile2':'Unit','in2':_0x37e46c(0xdae),'yd2':'Unit','ft2':_0x37e46c(0xdae),'m3':_0x37e46c(0xdae),'dm3':_0x37e46c(0xdae),'cm3':_0x37e46c(0xdae),'in3':'Unit','ft3':'Unit','yd3':_0x37e46c(0xdae),'at&t':_0x37e46c(0x4c36),'black\x20&\x20decker':_0x37e46c(0x4c36),'h\x20&\x20m':_0x37e46c(0x4c36),'johnson\x20&\x20johnson':_0x37e46c(0x4c36),'procter\x20&\x20gamble':_0x37e46c(0x4c36),'ben\x20&\x20jerry\x27s':'Organization','&':'Conjunction','i':[_0x37e46c(0x2394),_0x37e46c(0x1e9f)],'he':[_0x37e46c(0x2394),'Singular'],'she':['Pronoun',_0x37e46c(0x1e9f)],'it':['Pronoun',_0x37e46c(0x1e9f)],'they':['Pronoun',_0x37e46c(0x25f7)],'we':['Pronoun',_0x37e46c(0x25f7)],'was':['Copula','PastTense'],'is':[_0x37e46c(0x49fa),_0x37e46c(0x2c88)],'are':[_0x37e46c(0x49fa),_0x37e46c(0x2c88)],'am':[_0x37e46c(0x49fa),_0x37e46c(0x2c88)],'were':[_0x37e46c(0x49fa),_0x37e46c(0xe52)],'her':_0x242ccc,'his':_0x242ccc,'hers':_0x242ccc,'their':_0x242ccc,'theirs':_0x242ccc,'themselves':_0x242ccc,'your':_0x242ccc,'our':_0x242ccc,'ours':_0x242ccc,'my':_0x242ccc,'its':_0x242ccc,'vs':[_0x37e46c(0x37f1),_0x37e46c(0x1b3e)],'if':['Condition',_0x37e46c(0x326f)],'closer':_0x37e46c(0x13dd),'closest':_0x37e46c(0x2f40),'much':_0x37e46c(0x2cbd),'may':_0x37e46c(0x28f1),'babysat':'PastTense','blew':'PastTense','drank':_0x37e46c(0xe52),'drove':_0x37e46c(0xe52),'forgave':_0x37e46c(0xe52),'skiied':_0x37e46c(0xe52),'spilt
':'PastTense','stung':'PastTense','swam':_0x37e46c(0xe52),'swung':_0x37e46c(0xe52),'guaranteed':_0x37e46c(0xe52),'shrunk':_0x37e46c(0xe52),'nears':_0x37e46c(0x2c88),'nearing':_0x37e46c(0x42ce),'neared':_0x37e46c(0xe52),'no':[_0x37e46c(0x90f),'Expression']},_0x1ae5cf={};const _0x17fa09={'two':{'irregularPlurals':_0x439866,'uncountable':{}}};Object[_0x37e46c(0x1ea9)](_0x36d284)[_0x37e46c(0xa21)](_0x590350=>{const _0x51df5e=_0x37e46c;let _0x1874b0=_0x147dbb(_0x36d284[_0x590350]);_0x29ac3c[_0x51df5e(0x1769)](_0x590350)?Object['keys'](_0x1874b0)[_0x51df5e(0xa21)](_0x19ee93=>{const _0x2ba7f5=_0x51df5e;if(_0x1ae5cf[_0x19ee93]=_0x590350,_0x2ba7f5(0x2d09)===_0x590350){let _0x31e2b4=_0x7b8422(_0x19ee93,_0x17fa09);_0x1ae5cf[_0x31e2b4]=_0x2ba7f5(0x3450);}}):Object[_0x51df5e(0x1ea9)](_0x1874b0)[_0x51df5e(0xa21)](_0x363fc2=>{_0x415e70[_0x363fc2]=_0x590350;});}),[':(',':)',':P',':p',':O',';(',';)',';P',';p',';O',':3',':|',':/',':\x5c',':$',':*',':@',':-(',_0x37e46c(0x3825),_0x37e46c(0x468f),_0x37e46c(0x232c),_0x37e46c(0x2a88),_0x37e46c(0xf18),_0x37e46c(0x997),_0x37e46c(0x3900),_0x37e46c(0x172b),_0x37e46c(0x3ee5),_0x37e46c(0x3e20),_0x37e46c(0x31c6),_0x37e46c(0x1baf),_0x37e46c(0x126f),_0x37e46c(0x2453),_0x37e46c(0x12ea),':^O',':^3',_0x37e46c(0x4111),':^/',':^\x5c',_0x37e46c(0x2853),_0x37e46c(0x4e49),_0x37e46c(0x3c8f),'):','(:','$:','*:',_0x37e46c(0x2ee1),_0x37e46c(0x1cb5),_0x37e46c(0x2ca2),_0x37e46c(0x3080),')^:',_0x37e46c(0x762),_0x37e46c(0x382b),_0x37e46c(0x12e4),'<3',_0x37e46c(0x1f5d),_0x37e46c(0x3186),'=('][_0x37e46c(0xa21)](_0xc15242=>_0x415e70[_0xc15242]=_0x37e46c(0x1717)),delete _0x415e70[''],delete _0x415e70[_0x37e46c(0x1582)],delete _0x415e70['\x20'];const 
_0x340c2f='Singular',_0x32ef43={'beforeTags':{'Determiner':_0x340c2f,'Possessive':_0x340c2f,'Acronym':_0x340c2f,'Noun':_0x340c2f,'Adjective':_0x340c2f,'PresentTense':_0x340c2f,'Gerund':_0x340c2f,'PastTense':_0x340c2f,'Infinitive':_0x340c2f,'Date':_0x340c2f,'Ordinal':_0x340c2f,'Demonym':_0x340c2f},'afterTags':{'Value':_0x340c2f,'Modal':_0x340c2f,'Copula':_0x340c2f,'PresentTense':_0x340c2f,'PastTense':_0x340c2f,'Demonym':_0x340c2f,'Actor':_0x340c2f},'beforeWords':{'the':_0x340c2f,'with':_0x340c2f,'without':_0x340c2f,'of':_0x340c2f,'for':_0x340c2f,'any':_0x340c2f,'all':_0x340c2f,'on':_0x340c2f,'cut':_0x340c2f,'cuts':_0x340c2f,'increase':_0x340c2f,'decrease':_0x340c2f,'raise':_0x340c2f,'drop':_0x340c2f,'save':_0x340c2f,'saved':_0x340c2f,'saves':_0x340c2f,'make':_0x340c2f,'makes':_0x340c2f,'made':_0x340c2f,'minus':_0x340c2f,'plus':_0x340c2f,'than':_0x340c2f,'another':_0x340c2f,'versus':_0x340c2f,'neither':_0x340c2f,'about':_0x340c2f,'favorite':_0x340c2f,'best':_0x340c2f,'daily':_0x340c2f,'weekly':_0x340c2f,'linear':_0x340c2f,'binary':_0x340c2f,'mobile':_0x340c2f,'lexical':_0x340c2f,'technical':_0x340c2f,'computer':_0x340c2f,'scientific':_0x340c2f,'security':_0x340c2f,'government':_0x340c2f,'popular':_0x340c2f,'formal':_0x340c2f,'no':_0x340c2f,'more':_0x340c2f,'one':_0x340c2f,'let':_0x340c2f,'her':_0x340c2f,'his':_0x340c2f,'their':_0x340c2f,'our':_0x340c2f,'us':_0x340c2f,'sheer':_0x340c2f,'monthly':_0x340c2f,'yearly':_0x340c2f,'current':_0x340c2f,'previous':_0x340c2f,'upcoming':_0x340c2f,'last':_0x340c2f,'next':_0x340c2f,'main':_0x340c2f,'initial':_0x340c2f,'final':_0x340c2f,'beginning':_0x340c2f,'end':_0x340c2f,'top':_0x340c2f,'bottom':_0x340c2f,'future':_0x340c2f,'past':_0x340c2f,'major':_0x340c2f,'minor':_0x340c2f,'side':_0x340c2f,'central':_0x340c2f,'peripheral':_0x340c2f,'public':_0x340c2f,'private':_0x340c2f},'afterWords':{'of':_0x340c2f,'system':_0x340c2f,'aid':_0x340c2f,'method':_0x340c2f,'utility':_0x340c2f,'tool':_0x340c2f,'reform':_0x340c2f,'therapy':_0x340c2f,
'philosophy':_0x340c2f,'room':_0x340c2f,'authority':_0x340c2f,'says':_0x340c2f,'said':_0x340c2f,'wants':_0x340c2f,'wanted':_0x340c2f,'is':_0x340c2f,'did':_0x340c2f,'do':_0x340c2f,'can':_0x340c2f,'wise':_0x340c2f}},_0x529027=_0x37e46c(0x2631),_0x16ebcc={'beforeTags':{'Modal':_0x529027,'Adverb':_0x529027,'Negative':_0x529027,'Plural':_0x529027},'afterTags':{'Determiner':_0x529027,'Adverb':_0x529027,'Possessive':_0x529027,'Reflexive':_0x529027,'Preposition':_0x529027,'Cardinal':_0x529027,'Comparative':_0x529027,'Superlative':_0x529027},'beforeWords':{'i':_0x529027,'we':_0x529027,'you':_0x529027,'they':_0x529027,'to':_0x529027,'please':_0x529027,'will':_0x529027,'have':_0x529027,'had':_0x529027,'would':_0x529027,'could':_0x529027,'should':_0x529027,'do':_0x529027,'did':_0x529027,'does':_0x529027,'can':_0x529027,'must':_0x529027,'us':_0x529027,'me':_0x529027,'let':_0x529027,'even':_0x529027,'when':_0x529027,'help':_0x529027,'he':_0x529027,'she':_0x529027,'it':_0x529027,'being':_0x529027,'bi':_0x529027,'co':_0x529027,'contra':_0x529027,'de':_0x529027,'inter':_0x529027,'intra':_0x529027,'mis':_0x529027,'pre':_0x529027,'out':_0x529027,'counter':_0x529027,'nobody':_0x529027,'somebody':_0x529027,'anybody':_0x529027,'everybody':_0x529027},'afterWords':{'the':_0x529027,'me':_0x529027,'you':_0x529027,'him':_0x529027,'us':_0x529027,'her':_0x529027,'his':_0x529027,'them':_0x529027,'they':_0x529027,'it':_0x529027,'himself':_0x529027,'herself':_0x529027,'itself':_0x529027,'myself':_0x529027,'ourselves':_0x529027,'themselves':_0x529027,'something':_0x529027,'anything':_0x529027,'a':_0x529027,'an':_0x529027,'up':_0x529027,'down':_0x529027,'by':_0x529027,'out':_0x529027,'off':_0x529027,'under':_0x529027,'what':_0x529027,'all':_0x529027,'to':_0x529027,'because':_0x529027,'although':_0x529027,'how':_0x529027,'otherwise':_0x529027,'together':_0x529027,'though':_0x529027,'into':_0x529027,'yet':_0x529027,'more':_0x529027,'here':_0x529027,'there':_0x529027,'away':_0x529027}},_0xde70c0={'befo
reTags':Object[_0x37e46c(0x4e14)]({},_0x16ebcc[_0x37e46c(0xbc7)],_0x32ef43[_0x37e46c(0xbc7)],{}),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x16ebcc[_0x37e46c(0x1880)],_0x32ef43['afterTags'],{}),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x16ebcc[_0x37e46c(0x22e3)],_0x32ef43[_0x37e46c(0x22e3)],{}),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x16ebcc[_0x37e46c(0x1839)],_0x32ef43[_0x37e46c(0x1839)],{})},_0x53a9dc='Adjective',_0x4655d1={'beforeTags':{'Determiner':_0x53a9dc,'Possessive':_0x53a9dc,'Hyphenated':_0x53a9dc},'afterTags':{'Adjective':_0x53a9dc},'beforeWords':{'seem':_0x53a9dc,'seemed':_0x53a9dc,'seems':_0x53a9dc,'feel':_0x53a9dc,'feels':_0x53a9dc,'felt':_0x53a9dc,'stay':_0x53a9dc,'appear':_0x53a9dc,'appears':_0x53a9dc,'appeared':_0x53a9dc,'also':_0x53a9dc,'over':_0x53a9dc,'under':_0x53a9dc,'too':_0x53a9dc,'it':_0x53a9dc,'but':_0x53a9dc,'still':_0x53a9dc,'really':_0x53a9dc,'quite':_0x53a9dc,'well':_0x53a9dc,'very':_0x53a9dc,'truly':_0x53a9dc,'how':_0x53a9dc,'deeply':_0x53a9dc,'hella':_0x53a9dc,'profoundly':_0x53a9dc,'extremely':_0x53a9dc,'so':_0x53a9dc,'badly':_0x53a9dc,'mostly':_0x53a9dc,'totally':_0x53a9dc,'awfully':_0x53a9dc,'rather':_0x53a9dc,'nothing':_0x53a9dc,'something':_0x53a9dc,'anything':_0x53a9dc,'not':_0x53a9dc,'me':_0x53a9dc,'is':_0x53a9dc,'face':_0x53a9dc,'faces':_0x53a9dc,'faced':_0x53a9dc,'look':_0x53a9dc,'looks':_0x53a9dc,'looked':_0x53a9dc,'reveal':_0x53a9dc,'reveals':_0x53a9dc,'revealed':_0x53a9dc,'sound':_0x53a9dc,'sounded':_0x53a9dc,'sounds':_0x53a9dc,'remains':_0x53a9dc,'remained':_0x53a9dc,'prove':_0x53a9dc,'proves':_0x53a9dc,'proved':_0x53a9dc,'becomes':_0x53a9dc,'stays':_0x53a9dc,'tastes':_0x53a9dc,'taste':_0x53a9dc,'smells':_0x53a9dc,'smell':_0x53a9dc,'gets':_0x53a9dc,'grows':_0x53a9dc,'as':_0x53a9dc,'rings':_0x53a9dc,'radiates':_0x53a9dc,'conveys':_0x53a9dc,'convey':_0x53a9dc,'conveyed':_0x53a9dc,'of':_0x53a9dc},'afterWords':{'too':_0x53a9dc,'also':_0x53a9dc,'or':_0x53a9dc,'enough':_0x53a9dc,'as':_0x53a9dc}},_0x22c9d9='Gerund',_0x4af
743={'beforeTags':{'Adverb':_0x22c9d9,'Preposition':_0x22c9d9,'Conjunction':_0x22c9d9},'afterTags':{'Adverb':_0x22c9d9,'Possessive':_0x22c9d9,'Person':_0x22c9d9,'Pronoun':_0x22c9d9,'Determiner':_0x22c9d9,'Copula':_0x22c9d9,'Preposition':_0x22c9d9,'Conjunction':_0x22c9d9,'Comparative':_0x22c9d9},'beforeWords':{'been':_0x22c9d9,'keep':_0x22c9d9,'continue':_0x22c9d9,'stop':_0x22c9d9,'am':_0x22c9d9,'be':_0x22c9d9,'me':_0x22c9d9,'began':_0x22c9d9,'start':_0x22c9d9,'starts':_0x22c9d9,'started':_0x22c9d9,'stops':_0x22c9d9,'stopped':_0x22c9d9,'help':_0x22c9d9,'helps':_0x22c9d9,'avoid':_0x22c9d9,'avoids':_0x22c9d9,'love':_0x22c9d9,'loves':_0x22c9d9,'loved':_0x22c9d9,'hate':_0x22c9d9,'hates':_0x22c9d9,'hated':_0x22c9d9},'afterWords':{'you':_0x22c9d9,'me':_0x22c9d9,'her':_0x22c9d9,'him':_0x22c9d9,'his':_0x22c9d9,'them':_0x22c9d9,'their':_0x22c9d9,'it':_0x22c9d9,'this':_0x22c9d9,'there':_0x22c9d9,'on':_0x22c9d9,'about':_0x22c9d9,'for':_0x22c9d9,'up':_0x22c9d9,'down':_0x22c9d9}},_0x5deda4=_0x37e46c(0x42ce),_0x62e014=_0x37e46c(0x4972),_0x550080={'beforeTags':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0xbc7)],_0x4af743[_0x37e46c(0xbc7)],{'Imperative':_0x5deda4,'Infinitive':_0x62e014,'Plural':_0x5deda4}),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0x1880)],_0x4af743[_0x37e46c(0x1880)],{'Noun':_0x62e014}),'beforeWords':Object['assign']({},_0x4655d1[_0x37e46c(0x22e3)],_0x4af743[_0x37e46c(0x22e3)],{'is':_0x62e014,'are':_0x5deda4,'was':_0x62e014,'of':_0x62e014,'suggest':_0x5deda4,'suggests':_0x5deda4,'suggested':_0x5deda4,'recommend':_0x5deda4,'recommends':_0x5deda4,'recommended':_0x5deda4,'imagine':_0x5deda4,'imagines':_0x5deda4,'imagined':_0x5deda4,'consider':_0x5deda4,'considered':_0x5deda4,'considering':_0x5deda4,'resist':_0x5deda4,'resists':_0x5deda4,'resisted':_0x5deda4,'avoid':_0x5deda4,'avoided':_0x5deda4,'avoiding':_0x5deda4,'except':_0x62e014,'accept':_0x62e014,'assess':_0x5deda4,'explore':_0x5deda4,'fear':_0x5deda4,'fears':_0x5deda4,'appreciate':_0x5de
da4,'question':_0x5deda4,'help':_0x5deda4,'embrace':_0x5deda4,'with':_0x62e014}),'afterWords':Object['assign']({},_0x4655d1[_0x37e46c(0x1839)],_0x4af743['afterWords'],{'to':_0x5deda4,'not':_0x5deda4,'the':_0x5deda4})},_0x31f852={'beforeTags':{'Determiner':void 0x0,'Cardinal':_0x37e46c(0x1786),'PhrasalVerb':'Adjective'},'afterTags':{}},_0x55c63b={'beforeTags':Object['assign']({},_0x4655d1['beforeTags'],_0x32ef43[_0x37e46c(0xbc7)],_0x31f852[_0x37e46c(0xbc7)]),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x4655d1['afterTags'],_0x32ef43[_0x37e46c(0x1880)],_0x31f852['afterTags']),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0x22e3)],_0x32ef43['beforeWords'],{'are':_0x37e46c(0x4972),'is':_0x37e46c(0x4972),'was':_0x37e46c(0x4972),'be':_0x37e46c(0x4972),'off':'Adjective','out':'Adjective'}),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0x1839)],_0x32ef43[_0x37e46c(0x1839)])};let _0x179b01=_0x37e46c(0xe52),_0x56de75='Adjective';const _0x32eeba={'beforeTags':{'Adverb':_0x179b01,'Pronoun':_0x179b01,'ProperNoun':_0x179b01,'Auxiliary':_0x179b01,'Noun':_0x179b01},'afterTags':{'Possessive':_0x179b01,'Pronoun':_0x179b01,'Determiner':_0x179b01,'Adverb':_0x179b01,'Comparative':_0x179b01,'Date':_0x179b01,'Gerund':_0x179b01},'beforeWords':{'be':_0x179b01,'who':_0x179b01,'get':_0x56de75,'had':_0x179b01,'has':_0x179b01,'have':_0x179b01,'been':_0x179b01,'it':_0x179b01,'as':_0x179b01,'for':_0x56de75,'more':_0x56de75,'always':_0x56de75},'afterWords':{'by':_0x179b01,'back':_0x179b01,'out':_0x179b01,'in':_0x179b01,'up':_0x179b01,'down':_0x179b01,'before':_0x179b01,'after':_0x179b01,'for':_0x179b01,'the':_0x179b01,'with':_0x179b01,'as':_0x179b01,'on':_0x179b01,'at':_0x179b01,'between':_0x179b01,'to':_0x179b01,'into':_0x179b01,'us':_0x179b01,'them':_0x179b01,'his':_0x179b01,'her':_0x179b01,'their':_0x179b01,'our':_0x179b01,'me':_0x179b01,'about':_0x56de75}},_0x29e595={'beforeTags':Object['assign']({},_0x4655d1[_0x37e46c(0xbc7)],_0x32eeba[_0x37e46c(0xbc7)]),'afterT
ags':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0x1880)],_0x32eeba[_0x37e46c(0x1880)]),'beforeWords':Object['assign']({},_0x4655d1[_0x37e46c(0x22e3)],_0x32eeba[_0x37e46c(0x22e3)]),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0x1839)],_0x32eeba['afterWords'])},_0x5d44ef={'afterTags':{'Noun':'Adjective','Conjunction':void 0x0}},_0x24bde8={'beforeTags':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0xbc7)],_0x16ebcc[_0x37e46c(0xbc7)],{'Adverb':void 0x0,'Negative':void 0x0}),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0x1880)],_0x16ebcc[_0x37e46c(0x1880)],_0x5d44ef[_0x37e46c(0x1880)]),'beforeWords':Object['assign']({},_0x4655d1[_0x37e46c(0x22e3)],_0x16ebcc[_0x37e46c(0x22e3)],{'have':void 0x0,'had':void 0x0,'not':void 0x0,'went':'Adjective','goes':_0x37e46c(0x4972),'got':'Adjective','be':_0x37e46c(0x4972)}),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x4655d1[_0x37e46c(0x1839)],_0x16ebcc[_0x37e46c(0x1839)],{'to':void 0x0,'as':_0x37e46c(0x4972)})},_0x25f035={'Copula':_0x37e46c(0x42ce),'PastTense':_0x37e46c(0x42ce),'PresentTense':_0x37e46c(0x42ce),'Infinitive':_0x37e46c(0x42ce)},_0x1ae45e={'Value':_0x37e46c(0x42ce)},_0x37b40f={'are':'Gerund','were':'Gerund','be':_0x37e46c(0x42ce),'no':_0x37e46c(0x42ce),'without':_0x37e46c(0x42ce),'you':_0x37e46c(0x42ce),'we':_0x37e46c(0x42ce),'they':_0x37e46c(0x42ce),'he':'Gerund','she':_0x37e46c(0x42ce),'us':_0x37e46c(0x42ce),'them':_0x37e46c(0x42ce)},_0x432a95={'the':_0x37e46c(0x42ce),'this':_0x37e46c(0x42ce),'that':'Gerund','me':_0x37e46c(0x42ce),'us':'Gerund','them':_0x37e46c(0x42ce)},_0x1836e0={'beforeTags':Object[_0x37e46c(0x4e14)]({},_0x4af743[_0x37e46c(0xbc7)],_0x32ef43[_0x37e46c(0xbc7)],_0x25f035),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x4af743[_0x37e46c(0x1880)],_0x32ef43[_0x37e46c(0x1880)],_0x1ae45e),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x4af743[_0x37e46c(0x22e3)],_0x32ef43['beforeWords'],_0x37b40f),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x4af743['afterWords'],_0x32ef43[_0x3
7e46c(0x1839)],_0x432a95)},_0x4dc6f8=_0x37e46c(0x1e9f),_0x50c3ce=_0x37e46c(0x2631),_0x152afa={'beforeTags':Object[_0x37e46c(0x4e14)]({},_0x16ebcc[_0x37e46c(0xbc7)],_0x32ef43[_0x37e46c(0xbc7)],{'Adjective':_0x4dc6f8,'Particle':_0x4dc6f8}),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x16ebcc[_0x37e46c(0x1880)],_0x32ef43[_0x37e46c(0x1880)],{'ProperNoun':_0x50c3ce,'Gerund':_0x50c3ce,'Adjective':_0x50c3ce,'Copula':_0x4dc6f8}),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x16ebcc[_0x37e46c(0x22e3)],_0x32ef43['beforeWords'],{'is':_0x4dc6f8,'was':_0x4dc6f8,'of':_0x4dc6f8,'have':null}),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x16ebcc['afterWords'],_0x32ef43['afterWords'],{'instead':_0x50c3ce,'about':_0x50c3ce,'his':_0x50c3ce,'her':_0x50c3ce,'to':null,'by':null,'in':null})},_0x2735b3=_0x37e46c(0x904),_0x4e8e75={'beforeTags':{'Honorific':_0x2735b3,'Person':_0x2735b3},'afterTags':{'Person':_0x2735b3,'ProperNoun':_0x2735b3,'Verb':_0x2735b3},'ownTags':{'ProperNoun':_0x2735b3},'beforeWords':{'hi':_0x2735b3,'hey':_0x2735b3,'yo':_0x2735b3,'dear':_0x2735b3,'hello':_0x2735b3},'afterWords':{'said':_0x2735b3,'says':_0x2735b3,'told':_0x2735b3,'tells':_0x2735b3,'feels':_0x2735b3,'felt':_0x2735b3,'seems':_0x2735b3,'thinks':_0x2735b3,'thought':_0x2735b3,'spends':_0x2735b3,'spendt':_0x2735b3,'plays':_0x2735b3,'played':_0x2735b3,'sing':_0x2735b3,'sang':_0x2735b3,'learn':_0x2735b3,'learned':_0x2735b3,'wants':_0x2735b3,'wanted':_0x2735b3}},_0x58fdcb=_0x37e46c(0x1c7a),_0x5bc45d={'beforeTags':{'Date':_0x58fdcb,'Value':_0x58fdcb},'afterTags':{'Date':_0x58fdcb,'Value':_0x58fdcb},'beforeWords':{'by':_0x58fdcb,'in':_0x58fdcb,'on':_0x58fdcb,'during':_0x58fdcb,'after':_0x58fdcb,'before':_0x58fdcb,'between':_0x58fdcb,'until':_0x58fdcb,'til':_0x58fdcb,'sometime':_0x58fdcb,'of':_0x58fdcb,'this':_0x58fdcb,'next':_0x58fdcb,'last':_0x58fdcb,'previous':_0x58fdcb,'following':_0x58fdcb,'with':'Person'},'afterWords':{'sometime':_0x58fdcb,'in':_0x58fdcb,'of':_0x58fdcb,'until':_0x58fdcb,'the':_0x58fdcb}},_0x39811b
={'beforeTags':Object[_0x37e46c(0x4e14)]({},_0x4e8e75[_0x37e46c(0xbc7)],_0x5bc45d['beforeTags']),'afterTags':Object['assign']({},_0x4e8e75[_0x37e46c(0x1880)],_0x5bc45d[_0x37e46c(0x1880)]),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x4e8e75['beforeWords'],_0x5bc45d[_0x37e46c(0x22e3)]),'afterWords':Object['assign']({},_0x4e8e75[_0x37e46c(0x1839)],_0x5bc45d[_0x37e46c(0x1839)])},_0x2275ee='Place',_0x36d23c={'beforeTags':{'Place':_0x2275ee},'afterTags':{'Place':_0x2275ee,'Abbreviation':_0x2275ee},'beforeWords':{'in':_0x2275ee,'by':_0x2275ee,'near':_0x2275ee,'from':_0x2275ee,'to':_0x2275ee},'afterWords':{'in':_0x2275ee,'by':_0x2275ee,'near':_0x2275ee,'from':_0x2275ee,'to':_0x2275ee,'government':_0x2275ee,'council':_0x2275ee,'region':_0x2275ee,'city':_0x2275ee}};let _0x30b0b6=_0x37e46c(0xdae);const _0x70cdc4={'Actor|Verb':_0xde70c0,'Adj|Gerund':_0x550080,'Adj|Noun':_0x55c63b,'Adj|Past':_0x29e595,'Adj|Present':_0x24bde8,'Noun|Verb':_0x152afa,'Noun|Gerund':_0x1836e0,'Person|Noun':{'beforeTags':Object[_0x37e46c(0x4e14)]({},_0x32ef43[_0x37e46c(0xbc7)],_0x4e8e75[_0x37e46c(0xbc7)]),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x32ef43[_0x37e46c(0x1880)],_0x4e8e75[_0x37e46c(0x1880)]),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x32ef43[_0x37e46c(0x22e3)],_0x4e8e75['beforeWords'],{'i':_0x37e46c(0x2631),'we':_0x37e46c(0x2631)}),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x32ef43['afterWords'],_0x4e8e75[_0x37e46c(0x1839)])},'Person|Date':_0x39811b,'Person|Verb':{'beforeTags':Object[_0x37e46c(0x4e14)]({},_0x32ef43['beforeTags'],_0x4e8e75[_0x37e46c(0xbc7)],_0x16ebcc['beforeTags']),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x32ef43['afterTags'],_0x4e8e75[_0x37e46c(0x1880)],_0x16ebcc[_0x37e46c(0x1880)]),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x32ef43[_0x37e46c(0x22e3)],_0x4e8e75[_0x37e46c(0x22e3)],_0x16ebcc['beforeWords']),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x32ef43[_0x37e46c(0x1839)],_0x4e8e75[_0x37e46c(0x1839)],_0x16ebcc[_0x37e46c(0x1839)])},'Person|Place':{'beforeTags':Obj
ect['assign']({},_0x36d23c[_0x37e46c(0xbc7)],_0x4e8e75[_0x37e46c(0xbc7)]),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x36d23c[_0x37e46c(0x1880)],_0x4e8e75[_0x37e46c(0x1880)]),'beforeWords':Object['assign']({},_0x36d23c[_0x37e46c(0x22e3)],_0x4e8e75['beforeWords']),'afterWords':Object[_0x37e46c(0x4e14)]({},_0x36d23c[_0x37e46c(0x1839)],_0x4e8e75[_0x37e46c(0x1839)])},'Person|Adj':{'beforeTags':Object['assign']({},_0x4e8e75[_0x37e46c(0xbc7)],_0x4655d1['beforeTags']),'afterTags':Object[_0x37e46c(0x4e14)]({},_0x4e8e75[_0x37e46c(0x1880)],_0x4655d1[_0x37e46c(0x1880)]),'beforeWords':Object[_0x37e46c(0x4e14)]({},_0x4e8e75[_0x37e46c(0x22e3)],_0x4655d1[_0x37e46c(0x22e3)]),'afterWords':Object['assign']({},_0x4e8e75[_0x37e46c(0x1839)],_0x4655d1[_0x37e46c(0x1839)])},'Unit|Noun':{'beforeTags':{'Value':_0x30b0b6},'afterTags':{},'beforeWords':{'per':_0x30b0b6,'every':_0x30b0b6,'each':_0x30b0b6,'square':_0x30b0b6,'cubic':_0x30b0b6,'sq':_0x30b0b6,'metric':_0x30b0b6},'afterWords':{'per':_0x30b0b6,'squared':_0x30b0b6,'cubed':_0x30b0b6,'long':_0x30b0b6}}},_0x1c2a2c=(_0x1518e7,_0xddc4e6)=>{const _0x2eade3=_0x37e46c;let _0x396046=Object['keys'](_0x1518e7)[_0x2eade3(0x24d8)]((_0x130f61,_0x15feb4)=>(_0x130f61[_0x15feb4]='Infinitive'===_0x1518e7[_0x15feb4]?_0x2eade3(0x2c88):_0x2eade3(0x25f7),_0x130f61),{});return Object['assign'](_0x396046,_0xddc4e6);};_0x70cdc4[_0x37e46c(0x3450)]={'beforeWords':_0x1c2a2c(_0x70cdc4['Noun|Verb'][_0x37e46c(0x22e3)],{'had':_0x37e46c(0x25f7),'have':'Plural'}),'afterWords':_0x1c2a2c(_0x70cdc4[_0x37e46c(0x2d09)][_0x37e46c(0x1839)],{'his':_0x37e46c(0x2c88),'her':_0x37e46c(0x2c88),'its':'PresentTense','in':null,'to':null,'is':_0x37e46c(0x2c88),'by':_0x37e46c(0x2c88)}),'beforeTags':_0x1c2a2c(_0x70cdc4[_0x37e46c(0x2d09)][_0x37e46c(0xbc7)],{'Conjunction':_0x37e46c(0x2c88),'Noun':void 0x0,'ProperNoun':'PresentTense'}),'afterTags':_0x1c2a2c(_0x70cdc4[_0x37e46c(0x2d09)][_0x37e46c(0x1880)],{'Gerund':_0x37e46c(0x25f7),'Noun':_0x37e46c(0x2c88),'Value':'PresentTense'})};const 
_0x2d4f1b=_0x70cdc4,_0x43d290=_0x37e46c(0x4972),_0x448f17=_0x37e46c(0x2631),_0x38a63f='PresentTense',_0x11b03a=_0x37e46c(0x1e9f),_0x4130a8=_0x37e46c(0xe52),_0x1ae6d2='Adverb',_0x570ef6=_0x37e46c(0x25f7),_0x47bc9a='Actor',_0x37588a='Verb',_0x3e26cf=_0x37e46c(0x1786),_0x820f1e=_0x37e46c(0x30cd),_0x1f164d=_0x37e46c(0x28f1),_0x1d201c=_0x37e46c(0x1c11),_0x3b14da=_0x37e46c(0x1ebc),_0x52e5ef=[null,null,{'ea':_0x11b03a,'ia':_0x3e26cf,'ic':_0x43d290,'ly':_0x1ae6d2,'\x27n':_0x37588a,'\x27t':_0x37588a},{'oed':_0x4130a8,'ued':_0x4130a8,'xed':_0x4130a8,'\x20so':_0x1ae6d2,'\x27ll':_0x1f164d,'\x27re':_0x37e46c(0x49fa),'azy':_0x43d290,'eer':_0x3e26cf,'end':_0x37588a,'ped':_0x4130a8,'ffy':_0x43d290,'ify':_0x448f17,'ing':_0x37e46c(0x42ce),'ize':_0x448f17,'ibe':_0x448f17,'lar':_0x43d290,'mum':_0x43d290,'nes':_0x38a63f,'nny':_0x43d290,'ous':_0x43d290,'que':_0x43d290,'ger':_0x3e26cf,'ber':_0x3e26cf,'rol':_0x11b03a,'sis':_0x11b03a,'ogy':_0x11b03a,'oid':_0x11b03a,'ian':_0x11b03a,'zes':_0x38a63f,'eld':_0x4130a8,'ken':_0x3b14da,'ven':_0x3b14da,'ten':_0x3b14da,'ect':_0x448f17,'ict':_0x448f17,'ign':_0x448f17,'oze':_0x448f17,'ful':_0x43d290,'bal':_0x43d290,'ton':_0x3e26cf},{'amed':_0x4130a8,'aped':_0x4130a8,'ched':_0x4130a8,'lked':_0x4130a8,'rked':_0x4130a8,'reed':_0x4130a8,'nded':_0x4130a8,'mned':_0x43d290,'cted':_0x4130a8,'dged':_0x4130a8,'ield':_0x11b03a,'akis':_0x820f1e,'cede':_0x448f17,'chuk':_0x820f1e,'czyk':_0x820f1e,'ects':_0x38a63f,'iend':_0x11b03a,'ends':_0x37588a,'enko':_0x820f1e,'ette':_0x11b03a,'iary':_0x11b03a,'wner':_0x11b03a,'fies':_0x38a63f,'fore':_0x1ae6d2,'gate':_0x448f17,'gone':_0x43d290,'ices':_0x570ef6,'ints':_0x570ef6,'ruct':_0x448f17,'ines':_0x570ef6,'ions':_0x570ef6,'ners':_0x570ef6,'pers':_0x570ef6,'lers':_0x570ef6,'less':_0x43d290,'llen':_0x43d290,'made':_0x43d290,'nsen':_0x820f1e,'oses':_0x38a63f,'ould':_0x1f164d,'some':_0x43d290,'sson':_0x820f1e,'ians':_0x570ef6,'tion':_0x11b03a,'tage':_0x3e26cf,'ique':_0x11b03a,'tive':_0x43d290,'tors':_0x3e26cf,'vice':_0x11b03a,'l
ier':_0x11b03a,'fier':_0x11b03a,'wned':_0x4130a8,'gent':_0x11b03a,'tist':_0x47bc9a,'pist':_0x47bc9a,'rist':_0x47bc9a,'mist':_0x47bc9a,'yist':_0x47bc9a,'vist':_0x47bc9a,'ists':_0x47bc9a,'lite':_0x11b03a,'site':_0x11b03a,'rite':_0x11b03a,'mite':_0x11b03a,'bite':_0x11b03a,'mate':_0x11b03a,'date':_0x11b03a,'ndal':_0x11b03a,'vent':_0x11b03a,'uist':_0x47bc9a,'gist':_0x47bc9a,'note':_0x11b03a,'cide':_0x11b03a,'ence':_0x11b03a,'wide':_0x43d290,'vide':_0x448f17,'ract':_0x448f17,'duce':_0x448f17,'pose':_0x448f17,'eive':_0x448f17,'lyze':_0x448f17,'lyse':_0x448f17,'iant':_0x43d290,'nary':_0x43d290,'ghty':_0x43d290,'uent':_0x43d290,'erer':_0x47bc9a,'bury':_0x1d201c,'dorf':_0x3e26cf,'esty':_0x3e26cf,'wych':_0x1d201c,'dale':_0x1d201c,'folk':_0x1d201c},{'elist':_0x47bc9a,'holic':_0x11b03a,'phite':_0x11b03a,'tized':_0x4130a8,'urned':_0x4130a8,'eased':_0x4130a8,'ances':_0x570ef6,'bound':_0x43d290,'ettes':_0x570ef6,'fully':_0x1ae6d2,'ishes':_0x38a63f,'ities':_0x570ef6,'marek':_0x820f1e,'nssen':_0x820f1e,'ology':_0x3e26cf,'osome':_0x11b03a,'tment':_0x11b03a,'ports':_0x570ef6,'rough':_0x43d290,'tches':_0x38a63f,'tieth':_0x37e46c(0x4eea),'tures':_0x570ef6,'wards':_0x1ae6d2,'where':_0x1ae6d2,'archy':_0x3e26cf,'pathy':_0x3e26cf,'opoly':_0x3e26cf,'embly':_0x3e26cf,'phate':_0x3e26cf,'ndent':_0x11b03a,'scent':_0x11b03a,'onist':_0x47bc9a,'anist':_0x47bc9a,'alist':_0x47bc9a,'olist':_0x47bc9a,'icist':_0x47bc9a,'ounce':_0x448f17,'iable':_0x43d290,'borne':_0x43d290,'gnant':_0x43d290,'inant':_0x43d290,'igent':_0x43d290,'atory':_0x43d290,'rient':_0x11b03a,'dient':_0x11b03a,'maker':_0x47bc9a,'burgh':_0x1d201c,'mouth':_0x1d201c,'ceter':_0x1d201c,'ville':_0x1d201c,'worth':_0x3e26cf},{'auskas':_0x820f1e,'parent':_0x11b03a,'cedent':_0x11b03a,'ionary':_0x11b03a,'cklist':_0x11b03a,'keeper':_0x47bc9a,'logist':_0x47bc9a,'teenth':'Value','worker':_0x47bc9a,'master':_0x47bc9a,'writer':_0x47bc9a,'brough':_0x1d201c,'cester':_0x1d201c},{'logists':_0x47bc9a,'opoulos':_0x820f1e,'borough':_0x1d201c,'sdottir':_0x820f
1e}],_0x18bb7c=_0x37e46c(0x4972),_0x44fe49=_0x37e46c(0x1786),_0x19d9f5=_0x37e46c(0x487b),_0x3b25fc=[null,null,{},{'neo':_0x44fe49,'bio':_0x44fe49,'de-':_0x19d9f5,'re-':_0x19d9f5,'un-':_0x19d9f5,'ex-':_0x44fe49},{'anti':_0x44fe49,'auto':_0x44fe49,'faux':_0x18bb7c,'hexa':_0x44fe49,'kilo':_0x44fe49,'mono':_0x44fe49,'nano':_0x44fe49,'octa':_0x44fe49,'poly':_0x44fe49,'semi':_0x18bb7c,'tele':_0x44fe49,'pro-':_0x18bb7c,'mis-':_0x19d9f5,'dis-':_0x19d9f5,'pre-':_0x18bb7c},{'anglo':_0x44fe49,'centi':_0x44fe49,'ethno':_0x44fe49,'ferro':_0x44fe49,'grand':_0x44fe49,'hepta':_0x44fe49,'hydro':_0x44fe49,'intro':_0x44fe49,'macro':_0x44fe49,'micro':_0x44fe49,'milli':_0x44fe49,'nitro':_0x44fe49,'penta':_0x44fe49,'quasi':_0x18bb7c,'radio':_0x44fe49,'tetra':_0x44fe49,'omni-':_0x18bb7c,'post-':_0x18bb7c},{'pseudo':_0x18bb7c,'extra-':_0x18bb7c,'hyper-':_0x18bb7c,'inter-':_0x18bb7c,'intra-':_0x18bb7c,'deca-':_0x18bb7c},{'electro':_0x44fe49}],_0x2d39d7=_0x37e46c(0x4972),_0x50f0c6=_0x37e46c(0x2631),_0x4898ca='PresentTense',_0x32e153=_0x37e46c(0x1e9f),_0x41f294='PastTense',_0xc75716=_0x37e46c(0x2cbd),_0x291d05=_0x37e46c(0x804),_0x6d47db=_0x37e46c(0x3b29),_0x874301=_0x37e46c(0x487b),_0x316407='Noun',_0x56e0b0=_0x37e46c(0x30cd),_0x6013d7={'a':[[/.[aeiou]na$/,_0x316407,'tuna'],[/.[oau][wvl]ska$/,_0x56e0b0],[/.[^aeiou]ica$/,_0x32e153,_0x37e46c(0x23e7)],[/^([hyj]a+)+$/,_0x291d05,'haha']],'c':[[/.[^aeiou]ic$/,_0x2d39d7]],'d':[[/[aeiou](pp|ll|ss|ff|gg|tt|rr|bb|nn|mm)ed$/,_0x41f294,_0x37e46c(0x131d)],[/.[aeo]{2}[bdgmnprvz]ed$/,_0x41f294,_0x37e46c(0x11fa)],[/.[aeiou][sg]hed$/,_0x41f294,_0x37e46c(0x5236)],[/.[aeiou]red$/,_0x41f294,'hired'],[/.[aeiou]r?ried$/,_0x41f294,'hurried'],[/[^aeiou]ard$/,_0x32e153,'steward'],[/[aeiou][^aeiou]id$/,_0x2d39d7,''],[/.[vrl]id$/,_0x2d39d7,_0x37e46c(0x4cbd)],[/..led$/,_0x41f294,_0x37e46c(0x38ac)],[/.[iao]sed$/,_0x41f294,''],[/[aeiou]n?[cs]ed$/,_0x41f294,''],[/[aeiou][rl]?[mnf]ed$/,_0x41f294,''],[/[aeiou][ns]?c?ked$/,_0x41f294,'bunked'],[/[aeiou]gned$/,_0x41f294],[/[aei
ou][nl]?ged$/,_0x41f294],[/.[tdbwxyz]ed$/,_0x41f294],[/[^aeiou][aeiou][tvx]ed$/,_0x41f294],[/.[cdflmnprstv]ied$/,_0x41f294,_0x37e46c(0x229c)]],'e':[[/.[lnr]ize$/,_0x50f0c6,_0x37e46c(0x1360)],[/.[^aeiou]ise$/,_0x50f0c6,_0x37e46c(0x2b52)],[/.[aeiou]te$/,_0x50f0c6,_0x37e46c(0xb62)],[/.[^aeiou][ai]ble$/,_0x2d39d7,_0x37e46c(0xa23)],[/.[^aeiou]eable$/,_0x2d39d7,_0x37e46c(0x2bbb)],[/.[ts]ive$/,_0x2d39d7,_0x37e46c(0x14f4)],[/[a-z]-like$/,_0x2d39d7,'woman-like']],'h':[[/.[^aeiouf]ish$/,_0x2d39d7,_0x37e46c(0x1c9e)],[/.v[iy]ch$/,_0x56e0b0,_0x37e46c(0x4391)],[/^ug?h+$/,_0x291d05,_0x37e46c(0x3a1)],[/^uh[ -]?oh$/,_0x291d05,_0x37e46c(0x166d)],[/[a-z]-ish$/,_0x2d39d7,'cartoon-ish']],'i':[[/.[oau][wvl]ski$/,_0x56e0b0,_0x37e46c(0x9cf)]],'k':[[/^(k){2}$/,_0x291d05,_0x37e46c(0x3206)]],'l':[[/.[gl]ial$/,_0x2d39d7,'familial'],[/.[^aeiou]ful$/,_0x2d39d7,_0x37e46c(0x22c8)],[/.[nrtumcd]al$/,_0x2d39d7,_0x37e46c(0x3202)],[/.[^aeiou][ei]al$/,_0x2d39d7,_0x37e46c(0x10ef)]],'m':[[/.[^aeiou]ium$/,_0x32e153,_0x37e46c(0x314c)],[/[^aeiou]ism$/,_0x32e153,_0x37e46c(0x3598)],[/^[hu]m+$/,_0x291d05,_0x37e46c(0x3c47)],[/^\d+ 
?[ap]m$/,_0x37e46c(0x448e),_0x37e46c(0x151d)]],'n':[[/.[lsrnpb]ian$/,_0x2d39d7,'republican'],[/[^aeiou]ician$/,_0x6d47db,_0x37e46c(0x1940)],[/[aeiou][ktrp]in'$/,'Gerund',_0x37e46c(0x47e7)]],'o':[[/^no+$/,_0x291d05,'noooo'],[/^(yo)+$/,_0x291d05,'yoo'],[/^wo{2,}[pt]?$/,_0x291d05,'woop']],'r':[[/.[bdfklmst]ler$/,_0x37e46c(0x1786)],[/[aeiou][pns]er$/,_0x32e153],[/[^i]fer$/,_0x50f0c6],[/.[^aeiou][ao]pher$/,_0x6d47db],[/.[lk]er$/,'Noun'],[/.ier$/,_0x37e46c(0x13dd)]],'t':[[/.[di]est$/,'Superlative'],[/.[icldtgrv]ent$/,_0x2d39d7],[/[aeiou].*ist$/,_0x2d39d7],[/^[a-z]et$/,_0x874301]],'s':[[/.[^aeiou]ises$/,_0x4898ca],[/.[rln]ates$/,_0x4898ca],[/.[^z]ens$/,_0x874301],[/.[lstrn]us$/,_0x32e153],[/.[aeiou]sks$/,_0x4898ca],[/.[aeiou]kes$/,_0x4898ca],[/[aeiou][^aeiou]is$/,_0x32e153],[/[a-z]'s$/,_0x316407],[/^yes+$/,_0x291d05]],'v':[[/.[^aeiou][ai][kln]ov$/,_0x56e0b0]],'y':[[/.[cts]hy$/,_0x2d39d7],[/.[st]ty$/,_0x2d39d7],[/.[tnl]ary$/,_0x2d39d7],[/.[oe]ry$/,_0x32e153],[/[rdntkbhs]ly$/,_0xc75716],[/.(gg|bb|zz)ly$/,_0x2d39d7],[/...lly$/,_0xc75716],[/.[gk]y$/,_0x2d39d7],[/[bszmp]{2}y$/,_0x2d39d7],[/.[ai]my$/,_0x2d39d7],[/[ea]{2}zy$/,_0x2d39d7],[/.[^aeiou]ity$/,_0x32e153]]},_0x34876e='Verb',_0x430728='Noun',_0x14988d={'leftTags':[['Adjective',_0x430728],[_0x37e46c(0x3d93),_0x430728],[_0x37e46c(0x3b3e),_0x430728],[_0x37e46c(0x2cbd),_0x34876e],[_0x37e46c(0x2394),_0x34876e],[_0x37e46c(0x3a43),_0x430728],['Ordinal',_0x430728],[_0x37e46c(0x28f1),_0x34876e],[_0x37e46c(0x2f40),_0x430728],['Demonym',_0x430728],[_0x37e46c(0x3ab),_0x37e46c(0x904)]],'leftWords':[['i',_0x34876e],[_0x37e46c(0x4d51),_0x430728],['it',_0x34876e],[_0x37e46c(0xdd3),_0x34876e],[_0x37e46c(0xc1a),_0x34876e],[_0x37e46c(0x2fb6),_0x430728],['if',_0x430728],['but',_0x430728],['who',_0x34876e],[_0x37e46c(0x138f),_0x430728],[_0x37e46c(0x414b),_0x430728],[_0x37e46c(0x191b),_0x430728],[_0x37e46c(0x28f2),_0x34876e],[_0x37e46c(0x1ff2),_0x37e46c(0x4972)],['old',_0x430728],[_0x37e46c(0x1d3d),_0x34876e],[_0x37e46c(0x5097),_0x430728],['a'
,_0x430728],['the',_0x430728],[_0x37e46c(0x72a),_0x34876e]],'rightTags':[[_0x37e46c(0x49fa),_0x430728],[_0x37e46c(0xe52),_0x430728],[_0x37e46c(0x37f1),_0x430728],[_0x37e46c(0x28f1),_0x430728]],'rightWords':[[_0x37e46c(0xdd3),_0x34876e],['me',_0x34876e],[_0x37e46c(0x45f5),'Adjective'],['him',_0x34876e],['it',_0x34876e],[_0x37e46c(0x3813),_0x430728],[_0x37e46c(0x4de6),_0x430728],['himself',_0x34876e],[_0x37e46c(0x10ea),_0x430728],['who',_0x430728],['jr','Person']]},_0x83dce7={'fwd':'3:ser,ier¦1er:h,t,f,l,n¦1r:e¦2er:ss,or,om','both':'3er:ver,ear,alm¦3ner:hin¦3ter:lat¦2mer:im¦2er:ng,rm,mb¦2ber:ib¦2ger:ig¦1er:w,p,k,d¦ier:y','rev':_0x37e46c(0x93b),'ex':_0x37e46c(0xb10)},_0x5bde4f={'fwd':'1:nning,tting,rring,pping,eing,mming,gging,dding,bbing,kking¦2:eking,oling,eling,eming¦3:velling,siting,uiting,fiting,loting,geting,ialing,celling¦4:graming','both':_0x37e46c(0x4611),'rev':_0x37e46c(0x2001),'ex':'3:adding,eating,aiming,aiding,airing,outing,gassing,setting,getting,putting,cutting,winning,sitting,betting,mapping,tapping,letting,bidding,hitting,tanning,netting,popping,fitting,capping,lapping,barring,banning,vetting,topping,rotting,tipping,potting,wetting,pitting,dipping,budding,hemming,pinning,jetting,kidding,padding,podding,sipping,wedding,bedding,donning,warring,penning,gutting,cueing,wadding,petting,ripping,napping,matting,tinning,binning,dimming,hopping,mopping,nodding,panning,rapping,ridding,sinning¦4:selling,falling,calling,waiting,editing,telling,rolling,heating,boating,hanging,beating,coating,singing,tolling,felling,polling,discing,seating,voiding,gelling,yelling,baiting,reining,ruining,seeking,spanning,stepping,knitting,emitting,slipping,quitting,dialing,omitting,clipping,shutting,skinning,abutting,flipping,trotting,cramming,fretting,suiting¦5:bringing,treating,spelling,stalling,trolling,expelling,rivaling,wringing,deterring,singeing,befitting,refitting¦6:enrolling,distilling,scrolling,strolling,caucusing,travelling¦7:installing,redefining,stencilling,recharging,ove
reating,benefiting,unraveling,programing¦9:reprogramming¦is:being¦2e:using,aging,owing¦3e:making,taking,coming,noting,hiring,filing,coding,citing,doping,baking,coping,hoping,lading,caring,naming,voting,riding,mining,curing,lining,ruling,typing,boring,dining,firing,hiding,piling,taping,waning,baling,boning,faring,honing,wiping,luring,timing,wading,piping,fading,biting,zoning,daring,waking,gaming,raking,ceding,tiring,coking,wining,joking,paring,gaping,poking,pining,coring,liming,toting,roping,wiring,aching¦4e:writing,storing,eroding,framing,smoking,tasting,wasting,phoning,shaking,abiding,braking,flaking,pasting,priming,shoring,sloping,withing,hinging¦5e:defining,refining,renaming,swathing,fringing,reciting¦1ie:dying,tying,lying,vying¦7e:sunbathing'},_0x2160fe={'fwd':'1:mt¦2:llen¦3:iven,aken¦:ne¦y:in','both':'1:wn¦2:me,aten¦3:seen,bidden,isen¦4:roven,asten¦3l:pilt¦3d:uilt¦2e:itten¦1im:wum¦1eak:poken¦1ine:hone¦1ose:osen¦1in:gun¦1ake:woken¦ear:orn¦eal:olen¦eeze:ozen¦et:otten¦ink:unk¦ing:ung','rev':'2:un¦oken:eak¦ought:eek¦oven:eave¦1ne:o¦1own:ly¦1den:de¦1in:ay¦2t:am¦2n:ee¦3en:all¦4n:rive,sake,take¦5n:rgive','ex':_0x37e46c(0x4af3)},_0x375516={'fwd':_0x37e46c(0x3538),'both':_0x37e46c(0x44ae),'rev':'1ies:ly¦2es:us,go,do¦3es:cho,eto','ex':_0x37e46c(0x2274)},_0x2abbdd={'fwd':_0x37e46c(0x1f05),'both':_0x37e46c(0x3466),'rev':'1:ttest,nnest,yest¦2:sest,stest,rmest,cest,vest,lmest,olest,ilest,ulest,ssest,imest,uest¦3:rgest,eatest,oorest,plest,allest,urest,iefest,uelest,blest,ugest,amest,yalest,ealest,illest,tlest,itest¦4:cerest,eriest,somest,rmalest,ndomest,motest,uarest,tiffest¦5:leverest,rangest¦ar:urthest¦3ey:riciest','ex':'best:good¦worst:bad¦5est:great¦4est:fast,full,fair,dull¦3test:hot,wet,fat¦4nest:thin¦1urthest:far¦3est:gay,shy,ill¦4test:neat¦4st:late,wide,fine,safe,cute,fake,pale,rare,rude,sore,ripe,dire¦6st:severe'},_0x40e56a={'fwd':'1:tistic,eable,lful,sful,ting,tty¦2:onate,rtable,geous,ced,seful,ctful¦3:ortive,ented¦arity:ear¦y:etic¦fulness:begone¦1ity:re¦1y:tiful,gic
¦2ity:ile,imous,ilous,ime¦2ion:ated¦2eness:iving¦2y:trious¦2ation:iring¦2tion:vant¦3ion:ect¦3ce:mant,mantic¦3tion:irable¦3y:est,estic¦3m:mistic,listic¦3ess:ning¦4n:utious¦4on:rative,native,vative,ective¦4ce:erant','both':_0x37e46c(0x4daa),'rev':'','ex':_0x37e46c(0x4d81)},_0x168b1=/^([0-9]+)/,_0x1cca85=function(_0x33a3e7){const _0x48145f=_0x37e46c;let _0x248b01=function(_0x12693f){const _0x12c4b1=a0_0x11e7;let _0x3072ea={};return _0x12693f[_0x12c4b1(0x1117)]('¦')[_0x12c4b1(0xa21)](_0x3dd089=>{const _0x39234f=_0x12c4b1;let [_0x1e9ce6,_0x7deb22]=_0x3dd089[_0x39234f(0x1117)](':');_0x7deb22=(_0x7deb22||'')[_0x39234f(0x1117)](','),_0x7deb22['forEach'](_0x487b23=>{_0x3072ea[_0x487b23]=_0x1e9ce6;});}),_0x3072ea;}(_0x33a3e7);return Object[_0x48145f(0x1ea9)](_0x248b01)[_0x48145f(0x24d8)]((_0x4bd5aa,_0x2d30e3)=>(_0x4bd5aa[_0x2d30e3]=function(_0x3d320d='',_0x58e86d=''){const _0x703cc=_0x48145f;let _0x538b90=(_0x58e86d=String(_0x58e86d))[_0x703cc(0x2d96)](_0x168b1);if(null===_0x538b90)return _0x58e86d;let _0xf85df5=Number(_0x538b90[0x1])||0x0;return _0x3d320d[_0x703cc(0x37b5)](0x0,_0xf85df5)+_0x58e86d['replace'](_0x168b1,'');}(_0x2d30e3,_0x248b01[_0x2d30e3]),_0x4bd5aa),{});},_0x555fba=function(_0x3c2bd4={}){const _0xc8d068=_0x37e46c;return'string'==typeof _0x3c2bd4&&(_0x3c2bd4=JSON[_0xc8d068(0x2956)](_0x3c2bd4)),_0x3c2bd4[_0xc8d068(0x6ae)]=_0x1cca85(_0x3c2bd4[_0xc8d068(0x6ae)]||''),_0x3c2bd4[_0xc8d068(0x830)]=_0x1cca85(_0x3c2bd4[_0xc8d068(0x830)]||''),_0x3c2bd4['rev']=_0x1cca85(_0x3c2bd4[_0xc8d068(0x46ea)]||''),_0x3c2bd4['ex']=_0x1cca85(_0x3c2bd4['ex']||''),_0x3c2bd4;},_0x121e97=function(_0x363b70){const _0x482a8a=_0x37e46c;return Object['entries'](_0x363b70)[_0x482a8a(0x24d8)]((_0x2bd8bb,_0x4365c0)=>(_0x2bd8bb[_0x4365c0[0x1]]=_0x4365c0[0x0],_0x2bd8bb),{});},_0x2cf802=function(_0x7775f3={}){const 
_0x5a08af=_0x37e46c;return{'reversed':!0x0,'both':_0x121e97(_0x7775f3['both']),'ex':_0x121e97(_0x7775f3['ex']),'fwd':_0x7775f3[_0x5a08af(0x46ea)]||{}};},_0x46e9a9=_0x555fba({'fwd':'1:tted,wed,gged,nned,een,rred,pped,yed,bbed,oed,dded,rd,wn,mmed¦2:eed,nded,et,hted,st,oled,ut,emed,eled,lded,ken,rt,nked,apt,ant,eped,eked¦3:eared,eat,eaded,nelled,ealt,eeded,ooted,eaked,eaned,eeted,mited,bid,uit,ead,uited,ealed,geted,velled,ialed,belled¦4:ebuted,hined,comed¦y:ied¦ome:ame¦ear:ore¦ind:ound¦ing:ung,ang¦ep:pt¦ink:ank,unk¦ig:ug¦all:ell¦ee:aw¦ive:ave¦eeze:oze¦old:eld¦ave:ft¦ake:ook¦ell:old¦ite:ote¦ide:ode¦ine:one¦in:un,on¦eal:ole¦im:am¦ie:ay¦and:ood¦1ise:rose¦1eak:roke¦1ing:rought¦1ive:rove¦1el:elt¦1id:bade¦1et:got¦1y:aid¦1it:sat¦3e:lid¦3d:pent','both':_0x37e46c(0x3715),'rev':_0x37e46c(0x566),'ex':'2:been,upped¦3:added,aged,aided,aimed,aired,bid,died,dyed,egged,erred,eyed,fit,gassed,hit,lied,owed,pent,pied,tied,used,vied,oiled,outed,banned,barred,bet,canned,cut,dipped,donned,ended,feed,inked,jarred,let,manned,mowed,netted,padded,panned,pitted,popped,potted,put,set,sewn,sowed,tanned,tipped,topped,vowed,weed,bowed,jammed,binned,dimmed,hopped,mopped,nodded,pinned,rigged,sinned,towed,vetted¦4:ached,baked,baled,boned,bored,called,caned,cared,ceded,cited,coded,cored,cubed,cured,dared,dined,edited,exited,faked,fared,filed,fined,fired,fuelled,gamed,gelled,hired,hoped,joked,lined,mined,named,noted,piled,poked,polled,pored,pulled,reaped,roamed,rolled,ruled,seated,shed,sided,timed,tolled,toned,voted,waited,walled,waned,winged,wiped,wired,zoned,yelled,tamed,lubed,roped,faded,mired,caked,honed,banged,culled,heated,raked,welled,banded,beat,cast,cooled,cost,dealt,feared,folded,footed,handed,headed,heard,hurt,knitted,landed,leaked,leapt,linked,meant,minded,molded,neared,needed,peaked,plodded,plotted,pooled,quit,read,rooted,sealed,seeded,seeped,shipped,shunned,skimmed,slammed,sparred,stemmed,stirred,suited,thinned,twinned,swayed,winked,dialed,abutted,blotted,fretted,healed,heeded,peeled,reeled
¦5:basted,cheated,equalled,eroded,exiled,focused,opined,pleated,primed,quoted,scouted,shored,sloped,smoked,sniped,spelled,spouted,routed,staked,stored,swelled,tasted,treated,wasted,smelled,dwelled,honored,prided,quelled,eloped,scared,coveted,sweated,breaded,cleared,debuted,deterred,freaked,modeled,pleaded,rebutted,speeded¦6:anchored,defined,endured,impaled,invited,refined,revered,strolled,cringed,recast,thrust,unfolded¦7:authored,combined,competed,conceded,convened,excreted,extruded,redefined,restored,secreted,rescinded,welcomed¦8:expedited,infringed¦9:interfered,intervened,persevered¦10:contravened¦eat:ate¦is:was¦go:went¦are:were¦3d:bent,lent,rent,sent¦3e:bit,fled,hid,lost¦3ed:bled,bred¦2ow:blew,grew¦1uy:bought¦2tch:caught¦1o:did¦1ive:dove,gave¦2aw:drew¦2ed:fed¦2y:flew,laid,paid,said¦1ight:fought¦1et:got¦2ve:had¦1ang:hung¦2ad:led¦2ght:lit¦2ke:made¦2et:met¦1un:ran¦1ise:rose¦1it:sat¦1eek:sought¦1each:taught¦1ake:woke,took¦1eave:wove¦2ise:arose¦1ear:bore,tore,wore¦1ind:bound,found,wound¦2eak:broke¦2ing:brought,wrung¦1ome:came¦2ive:drove¦1ig:dug¦1all:fell¦2el:felt¦4et:forgot¦1old:held¦2ave:left¦1ing:rang,sang¦1ide:rode¦1ink:sank¦1ee:saw¦2ine:shone¦4e:slid¦1ell:sold,told¦4d:spent¦2in:spun¦1in:won'}),_0x10b043=_0x555fba(_0x375516),_0x3bf0ae=_0x555fba(_0x5bde4f),_0x30288e=_0x555fba(_0x2160fe),_0x162b08=_0x2cf802(_0x46e9a9),_0x452192=_0x2cf802(_0x10b043),_0x5495fd=_0x2cf802(_0x3bf0ae),_0x7aaa01=_0x2cf802(_0x30288e),_0xce1eba=_0x555fba(_0x83dce7),_0x1c5a7e=_0x555fba(_0x2abbdd),_0x1d5b61={'fromPast':_0x46e9a9,'fromPresent':_0x10b043,'fromGerund':_0x3bf0ae,'fromParticiple':_0x30288e,'toPast':_0x162b08,'toPresent':_0x452192,'toGerund':_0x5495fd,'toParticiple':_0x7aaa01,'toComparative':_0xce1eba,'toSuperlative':_0x1c5a7e,'fromComparative':_0x2cf802(_0xce1eba),'fromSuperlative':_0x2cf802(_0x1c5a7e),'adjToNoun':_0x555fba(_0x40e56a)},_0x1d2170=[_0x37e46c(0x3255),'administration',_0x37e46c(0x350c),'agences',_0x37e46c(0x3c82),_0x37e46c(0x4a1f),_0x37e46c(0x3607),_0x37e46c(0x25b8),_0x
37e46c(0x266),_0x37e46c(0x4512),_0x37e46c(0x6c0),_0x37e46c(0x4ba2),_0x37e46c(0x2fd7),_0x37e46c(0x342b),_0x37e46c(0x4d54),'aviation',_0x37e46c(0x50c),'banque',_0x37e46c(0x2d80),_0x37e46c(0x4f6d),_0x37e46c(0x3313),'brewery',_0x37e46c(0x2e42),'brothers','bureau',_0x37e46c(0x4cf6),'co',_0x37e46c(0x607),_0x37e46c(0x2c42),_0x37e46c(0x4f19),'cathedral',_0x37e46c(0x2820),'centre',_0x37e46c(0x4192),_0x37e46c(0x49d2),_0x37e46c(0x1dbb),_0x37e46c(0x1f30),_0x37e46c(0x32c),_0x37e46c(0x18ae),_0x37e46c(0x2941),'club','co',_0x37e46c(0x2ca9),_0x37e46c(0x38ea),_0x37e46c(0x3cb5),'college',_0x37e46c(0x350f),_0x37e46c(0x4d03),_0x37e46c(0x10ff),_0x37e46c(0x18a5),_0x37e46c(0x604),_0x37e46c(0x2f3a),'computers',_0x37e46c(0x8df),_0x37e46c(0x245d),_0x37e46c(0xe77),_0x37e46c(0x4aaf),_0x37e46c(0x2e89),'corporation',_0x37e46c(0x4622),'corp',_0x37e46c(0x2c66),_0x37e46c(0x5088),_0x37e46c(0x5139),'departement',_0x37e46c(0x2258),_0x37e46c(0x1f5b),'design',_0x37e46c(0x4e22),'directorate',_0x37e46c(0x1431),_0x37e46c(0x3487),_0x37e46c(0x1c22),_0x37e46c(0xfaf),_0x37e46c(0x30ff),_0x37e46c(0x470c),_0x37e46c(0x24ba),_0x37e46c(0x12b2),_0x37e46c(0x1003),_0x37e46c(0x2cbb),_0x37e46c(0x149f),'estate',_0x37e46c(0x4d42),'faculty',_0x37e46c(0x483f),_0x37e46c(0x3bc7),_0x37e46c(0x2a3),'fm',_0x37e46c(0x32ba),'fund',_0x37e46c(0x2133),'gazette',_0x37e46c(0x273a),_0x37e46c(0x2055),_0x37e46c(0x4e5b),_0x37e46c(0x283),'herald',_0x37e46c(0x2249),'hospital',_0x37e46c(0x102a),_0x37e46c(0x2064),_0x37e46c(0x1bf8),'industries',_0x37e46c(0x152f),_0x37e46c(0x3534),_0x37e46c(0x537),_0x37e46c(0x42a4),_0x37e46c(0xb66),_0x37e46c(0x471),_0x37e46c(0x2c9e),_0x37e46c(0x51ee),'investors',_0x37e46c(0x44d),_0x37e46c(0x4484),_0x37e46c(0x1025),_0x37e46c(0x4cb4),_0x37e46c(0x4064),'limited','machines',_0x37e46c(0x5167),_0x37e46c(0x1d57),_0x37e46c(0x1241),_0x37e46c(0x4800),_0x37e46c(0x3ba5),_0x37e46c(0x3e76),'memorial',_0x37e46c(0x1e72),'ministry',_0x37e46c(0x6c5),_0x37e46c(0x2173),_0x37e46c(0x4d0b),_0x37e46c(0x1f02),_0x37e46c(0x2f77),_0x37e46c(0x
479f),_0x37e46c(0x50ec),_0x37e46c(0x2fe7),_0x37e46c(0x1811),_0x37e46c(0x11f0),_0x37e46c(0xfbc),_0x37e46c(0x4baa),'organization',_0x37e46c(0x2d08),'partnership','petrol','petroleum',_0x37e46c(0x349e),'pharmaceutical','pharmaceuticals',_0x37e46c(0x115d),_0x37e46c(0xc2b),_0x37e46c(0x435a),_0x37e46c(0xf1e),_0x37e46c(0x250d),_0x37e46c(0x24ce),_0x37e46c(0x17a4),_0x37e46c(0x376),_0x37e46c(0x240d),_0x37e46c(0x2787),_0x37e46c(0x375f),_0x37e46c(0xf5c),_0x37e46c(0x26e4),_0x37e46c(0x25d7),_0x37e46c(0xcc8),_0x37e46c(0x742),'school',_0x37e46c(0x2a25),'service',_0x37e46c(0x298e),_0x37e46c(0x4a11),'subsidiary',_0x37e46c(0x2348),_0x37e46c(0x1c58),_0x37e46c(0xbff),_0x37e46c(0x1ecd),_0x37e46c(0x1807),_0x37e46c(0x17f3),_0x37e46c(0x48c9),_0x37e46c(0x38a0),_0x37e46c(0xc31),_0x37e46c(0xa99),'tv','union','university',_0x37e46c(0x3786),_0x37e46c(0x1ccf)][_0x37e46c(0x24d8)]((_0x279a3a,_0x133d7d)=>(_0x279a3a[_0x133d7d]=!0x0,_0x279a3a),{}),_0x1aca21=[_0x37e46c(0x1896),_0x37e46c(0x116d),_0x37e46c(0x1d14),_0x37e46c(0xdf2),_0x37e46c(0x3210),_0x37e46c(0xb5c),'camp',_0x37e46c(0x622),'canyons',_0x37e46c(0x34c5),'cave',_0x37e46c(0x523b),_0x37e46c(0x21ab),_0x37e46c(0x38a8),_0x37e46c(0x346d),_0x37e46c(0x1da6),_0x37e46c(0x1a44),_0x37e46c(0x1cad),_0x37e46c(0x3641),_0x37e46c(0xbd5),_0x37e46c(0x4034),_0x37e46c(0x3e33),_0x37e46c(0x49bb),'falls',_0x37e46c(0x1453),'fjords',_0x37e46c(0x22ff),_0x37e46c(0xa62),'glacier','gorge',_0x37e46c(0x2b7f),_0x37e46c(0xbf2),_0x37e46c(0x3e17),_0x37e46c(0x3ea0),_0x37e46c(0x303d),_0x37e46c(0x326b),_0x37e46c(0x345c),_0x37e46c(0x1698),_0x37e46c(0x85c),_0x37e46c(0x37b0),'knoll',_0x37e46c(0x3ed6),_0x37e46c(0x122b),'marsh',_0x37e46c(0x2e91),'mount',_0x37e46c(0x490c),_0x37e46c(0x3621),_0x37e46c(0x2a24),_0x37e46c(0x3bd0),_0x37e46c(0x92d),_0x37e46c(0x1edb),_0x37e46c(0x24b0),_0x37e46c(0x2971),_0x37e46c(0x2d98),_0x37e46c(0x4e4b),_0x37e46c(0x27a5),'ridge','river','rivers',_0x37e46c(0x21f5),_0x37e46c(0x3911),_0x37e46c(0x9a6),'shoreline','shores','strait','straits','stream','swamp','tombol
o','trail','trails',_0x37e46c(0x1733),_0x37e46c(0xf11),_0x37e46c(0x3123),'volcano',_0x37e46c(0x1a79),_0x37e46c(0x484a),_0x37e46c(0x3b0d),_0x37e46c(0x8fa),_0x37e46c(0x3d9c),'county',_0x37e46c(0x4d59),_0x37e46c(0x36a),_0x37e46c(0x106a),_0x37e46c(0x3fd6),'region',_0x37e46c(0x3d26),_0x37e46c(0x206b),_0x37e46c(0x1275),'borough',_0x37e46c(0xa82),_0x37e46c(0x49ce),_0x37e46c(0x1d04),_0x37e46c(0x1c21),_0x37e46c(0xf81),_0x37e46c(0x470f),_0x37e46c(0x305),_0x37e46c(0x15cc),_0x37e46c(0x4783),_0x37e46c(0x2e88),_0x37e46c(0x523f),_0x37e46c(0x4cb3),_0x37e46c(0x4f0),_0x37e46c(0x2e88),_0x37e46c(0x3b4d),_0x37e46c(0x2bd1),'airport','amphitheater',_0x37e46c(0x11a8),_0x37e46c(0x4f57),'auditorium',_0x37e46c(0x2460),_0x37e46c(0x5b7),_0x37e46c(0x3207),_0x37e46c(0x56a),_0x37e46c(0x83f),_0x37e46c(0xe85),_0x37e46c(0x49ac),_0x37e46c(0x2ed8),_0x37e46c(0x19b3),_0x37e46c(0x4a42),'complex',_0x37e46c(0x26a3),_0x37e46c(0x4057),'field','fort',_0x37e46c(0x27f3),_0x37e46c(0xcbb),'gymnasium','hall',_0x37e46c(0x51c5),'levee',_0x37e46c(0x2e60),'memorial','monument',_0x37e46c(0x479f),_0x37e46c(0x3872),_0x37e46c(0x31f4),'pillar',_0x37e46c(0x4443),_0x37e46c(0x2689),'playhouse',_0x37e46c(0x4bd7),_0x37e46c(0x2284),_0x37e46c(0x2856),'stadium',_0x37e46c(0x27b7),_0x37e46c(0x4d9c),_0x37e46c(0xe71),'tower',_0x37e46c(0x1746),_0x37e46c(0x3436),'site',_0x37e46c(0x42ec),_0x37e46c(0x2856),'st',_0x37e46c(0x521e),'rd','road',_0x37e46c(0x2d70),'cr',_0x37e46c(0x269e),'tr','terrace',_0x37e46c(0x20cb),_0x37e46c(0x4bc0)]['reduce']((_0x5b6ddf,_0x4c58b8)=>(_0x5b6ddf[_0x4c58b8]=!0x0,_0x5b6ddf),{}),_0x4e5e3c=[[/([^v])ies$/i,'$1y'],[/(ise)s$/i,'$1'],[/(kn|[^o]l|w)ives$/i,_0x37e46c(0x33f2)],[/^((?:ca|e|ha|(?:our|them|your)?se|she|wo)l|lea|loa|shea|thie)ves$/i,_0x37e46c(0x2602)],[/^(dwar|handkerchie|hoo|scar|whar)ves$/i,'$1f'],[/(antenn|formul|nebul|vertebr|vit)ae$/i,'$1a'],[/(octop|vir|radi|nucle|fung|cact|stimul)(i)$/i,_0x37e46c(0x40b9)],[/(buffal|tomat|tornad)(oes)$/i,_0x37e46c(0x4e54)],[/(ause)s$/i,'$1'],[/(ease)s$/i,'$1'],[/(ious)
es$/i,'$1'],[/(ouse)s$/i,'$1'],[/(ose)s$/i,'$1'],[/(..ase)s$/i,'$1'],[/(..[aeiu]s)es$/i,'$1'],[/(vert|ind|cort)(ices)$/i,_0x37e46c(0x4803)],[/(matr|append)(ices)$/i,_0x37e46c(0x2a5f)],[/([xo]|ch|ss|sh)es$/i,'$1'],[/men$/i,'man'],[/(n)ews$/i,_0x37e46c(0x1625)],[/([ti])a$/i,'$1um'],[/([^aeiouy]|qu)ies$/i,_0x37e46c(0x4cba)],[/(s)eries$/i,_0x37e46c(0x1e9b)],[/(m)ovies$/i,_0x37e46c(0x4409)],[/(cris|ax|test)es$/i,_0x37e46c(0x1fb0)],[/(alias|status)es$/i,'$1'],[/(ss)$/i,'$1'],[/(ic)s$/i,'$1'],[/s$/i,'']],_0x3338d8=function(_0x516b40,_0x3c16ee){const _0x28fb4e=_0x37e46c,{irregularPlurals:_0x10f94c}=_0x3c16ee[_0x28fb4e(0x21c9)];let _0x33aff2=(_0x11e89c=_0x10f94c,Object[_0x28fb4e(0x1ea9)](_0x11e89c)[_0x28fb4e(0x24d8)]((_0x101d77,_0xb7634f)=>(_0x101d77[_0x11e89c[_0xb7634f]]=_0xb7634f,_0x101d77),{}));var _0x11e89c;if(_0x33aff2[_0x28fb4e(0x2427)](_0x516b40))return _0x33aff2[_0x516b40];for(let _0x5543e0=0x0;_0x5543e0<_0x4e5e3c[_0x28fb4e(0x1b19)];_0x5543e0++)if(!0x0===_0x4e5e3c[_0x5543e0][0x0]['test'](_0x516b40))return _0x516b40=_0x516b40[_0x28fb4e(0x741)](_0x4e5e3c[_0x5543e0][0x0],_0x4e5e3c[_0x5543e0][0x1]);return _0x516b40;},_0x57ead4={'toPlural':_0x7b8422,'toSingular':_0x3338d8,'all':function(_0x2e9d5b,_0x111828){const _0x3e4c39=_0x37e46c;let _0x5c47a=[_0x2e9d5b],_0x32b87f=_0x7b8422(_0x2e9d5b,_0x111828);_0x32b87f!==_0x2e9d5b&&_0x5c47a['push'](_0x32b87f);let _0xc5428c=_0x3338d8(_0x2e9d5b,_0x111828);return _0xc5428c!==_0x2e9d5b&&_0x5c47a[_0x3e4c39(0x1715)](_0xc5428c),_0x5c47a;}},_0x5cd686=function(_0x48b1c6='',_0x2abf12={}){const _0x210002=_0x37e46c;let _0x5a15c8=function(_0x2e28e3,_0x166ae9={}){const _0x1d4bf6=a0_0x11e7;return _0x166ae9[_0x1d4bf6(0x2427)](_0x2e28e3)?_0x166ae9[_0x2e28e3]:null;}(_0x48b1c6,_0x2abf12['ex']);return _0x5a15c8=_0x5a15c8||function(_0x537891,_0x2618ab=[]){const _0x3234be=a0_0x11e7;for(let _0x917177=0x0;_0x917177<_0x2618ab[_0x3234be(0x1b19)];_0x917177+=0x1)if(_0x537891[_0x3234be(0x2a85)](_0x2618ab[_0x917177]))return _0x537891;return 
null;}(_0x48b1c6,_0x2abf12[_0x210002(0x4bc2)]),_0x5a15c8=_0x5a15c8||function(_0x9dbbbe,_0x483090,_0x4258f1={}){const _0x5a9990=_0x210002;_0x483090=_0x483090||{};for(let _0x1e3056=_0x9dbbbe[_0x5a9990(0x1b19)]-0x1;_0x1e3056>=0x1;_0x1e3056-=0x1){let _0x263de2=_0x9dbbbe[_0x5a9990(0x1b19)]-_0x1e3056,_0xcb8214=_0x9dbbbe[_0x5a9990(0x37b5)](_0x263de2,_0x9dbbbe[_0x5a9990(0x1b19)]);if(!0x0===_0x483090[_0x5a9990(0x2427)](_0xcb8214))return _0x9dbbbe['slice'](0x0,_0x263de2)+_0x483090[_0xcb8214];if(!0x0===_0x4258f1[_0x5a9990(0x2427)](_0xcb8214))return _0x9dbbbe[_0x5a9990(0x384c)](0x0,_0x263de2)+_0x4258f1[_0xcb8214];}return _0x483090[_0x5a9990(0x2427)]('')?_0x9dbbbe+_0x483090['']:_0x4258f1[_0x5a9990(0x2427)]('')?_0x9dbbbe+_0x4258f1['']:null;}(_0x48b1c6,_0x2abf12[_0x210002(0x6ae)],_0x2abf12[_0x210002(0x830)]),_0x5a15c8=_0x5a15c8||_0x48b1c6,_0x5a15c8;};let _0x28ed62={'Gerund':[_0x37e46c(0x4930)],'Actor':[_0x37e46c(0x423)],'Infinitive':[_0x37e46c(0x456b),'ize',_0x37e46c(0x4c06),_0x37e46c(0x3528),_0x37e46c(0xaf5),'ress','ify',_0x37e46c(0x2910),'nce',_0x37e46c(0x89c),_0x37e46c(0x2231),_0x37e46c(0x683),'ish','ace',_0x37e46c(0x315b),_0x37e46c(0x26d),'tch',_0x37e46c(0x2681),_0x37e46c(0x577),'and',_0x37e46c(0x226d),_0x37e46c(0x3a20),_0x37e46c(0x19f1),'ite',_0x37e46c(0x2f5c),_0x37e46c(0x1c1e),_0x37e46c(0x84a),_0x37e46c(0x4d4b),'int',_0x37e46c(0x958),_0x37e46c(0x2c72),_0x37e46c(0x3952),_0x37e46c(0x4a88),_0x37e46c(0x3cc8),_0x37e46c(0x4235),_0x37e46c(0x1dcc),'er','le',_0x37e46c(0x4994),'ung',_0x37e46c(0x3401),'en'],'PastTense':[_0x37e46c(0x512a),'ed','lt','nt','ew','ld'],'PresentTense':[_0x37e46c(0x3ec0),_0x37e46c(0x4dc9),_0x37e46c(0x4a62),_0x37e46c(0x103a),_0x37e46c(0x50bd),'tes',_0x37e46c(0x2c84),_0x37e46c(0xb60),_0x37e46c(0x4e2f),_0x37e46c(0x30e2),_0x37e46c(0x30fb),_0x37e46c(0xf7e),'ocks',_0x37e46c(0x706),_0x37e46c(0x46af),_0x37e46c(0x4566),_0x37e46c(0x331b),_0x37e46c(0x4193),_0x37e46c(0x3bda),_0x37e46c(0xd6c),_0x37e46c(0x30a9),_0x37e46c(0x1856),_0x37e46c(0x4282),_0x37e46c(0xb9e),_0x37e46c(
0x11cf),_0x37e46c(0x183e),'urs','lds','ews',_0x37e46c(0x2d18),'es','ts','ns'],'Participle':[_0x37e46c(0x39e9),'wn']};_0x28ed62=Object[_0x37e46c(0x1ea9)](_0x28ed62)[_0x37e46c(0x24d8)]((_0x6807f8,_0x2b54db)=>(_0x28ed62[_0x2b54db][_0x37e46c(0xa21)](_0x3e4d1e=>_0x6807f8[_0x3e4d1e]=_0x2b54db),_0x6807f8),{});const _0x3d7c4b=_0x28ed62,_0x51fa16=function(_0x321ddb){const _0xa7c2a3=_0x37e46c;let _0x1ade1a=_0x321ddb[_0xa7c2a3(0x37b5)](_0x321ddb[_0xa7c2a3(0x1b19)]-0x3);if(!0x0===_0x3d7c4b[_0xa7c2a3(0x2427)](_0x1ade1a))return _0x3d7c4b[_0x1ade1a];let _0x2eb0bf=_0x321ddb[_0xa7c2a3(0x37b5)](_0x321ddb[_0xa7c2a3(0x1b19)]-0x2);return!0x0===_0x3d7c4b[_0xa7c2a3(0x2427)](_0x2eb0bf)?_0x3d7c4b[_0x2eb0bf]:'s'===_0x321ddb[_0xa7c2a3(0x37b5)](_0x321ddb['length']-0x1)?_0xa7c2a3(0x2c88):null;},_0x3e9f00={'are':'be','were':'be','been':'be','is':'be','am':'be','was':'be','be':'be','being':'be'},_0x86805e=function(_0x297734,_0x42c9a5,_0x2b5732){const _0x57be02=_0x37e46c,{fromPast:_0x29f474,fromPresent:_0x3292f0,fromGerund:_0x4aadc8,fromParticiple:_0x51aa41}=_0x42c9a5[_0x57be02(0x21c9)]['models'];let {prefix:_0x13c6ff,verb:_0x4f51aa,particle:_0x3176b2}=function(_0x25918d,_0x2af35a){const _0x160db1=_0x57be02;let _0x5f2ecd='',_0x298d7e={};_0x2af35a[_0x160db1(0x1d8a)]&&_0x2af35a[_0x160db1(0x1d8a)][_0x160db1(0x4145)]&&(_0x298d7e=_0x2af35a[_0x160db1(0x1d8a)][_0x160db1(0x4145)]);let [_0x3ec1de,_0x31e5ac]=_0x25918d['split'](/ /);return 
_0x31e5ac&&!0x0===_0x298d7e[_0x3ec1de]&&(_0x5f2ecd=_0x3ec1de,_0x3ec1de=_0x31e5ac,_0x31e5ac=''),{'prefix':_0x5f2ecd,'verb':_0x3ec1de,'particle':_0x31e5ac};}(_0x297734,_0x42c9a5),_0x2fddd8='';if(_0x2b5732||(_0x2b5732=_0x51fa16(_0x297734)),_0x3e9f00['hasOwnProperty'](_0x297734))_0x2fddd8=_0x3e9f00[_0x297734];else{if(_0x57be02(0x1ebc)===_0x2b5732)_0x2fddd8=_0x5cd686(_0x4f51aa,_0x51aa41);else{if(_0x57be02(0xe52)===_0x2b5732)_0x2fddd8=_0x5cd686(_0x4f51aa,_0x29f474);else{if(_0x57be02(0x2c88)===_0x2b5732)_0x2fddd8=_0x5cd686(_0x4f51aa,_0x3292f0);else{if(_0x57be02(0x42ce)!==_0x2b5732)return _0x297734;_0x2fddd8=_0x5cd686(_0x4f51aa,_0x4aadc8);}}}}return _0x3176b2&&(_0x2fddd8+='\x20'+_0x3176b2),_0x13c6ff&&(_0x2fddd8=_0x13c6ff+'\x20'+_0x2fddd8),_0x2fddd8;},_0x578649=function(_0x551f7b,_0x42f08b){const _0x3d4fd2=_0x37e46c,{toPast:_0x5a9020,toPresent:_0x5491d7,toGerund:_0x20d18d,toParticiple:_0x1c7399}=_0x42f08b[_0x3d4fd2(0x21c9)][_0x3d4fd2(0xdec)];if('be'===_0x551f7b)return{'Infinitive':_0x551f7b,'Gerund':_0x3d4fd2(0x642),'PastTense':'was','PresentTense':'is'};let [_0x7cd43c,_0x1e5b9f]=(_0x435135=>/ /[_0x3d4fd2(0x1769)](_0x435135)?_0x435135[_0x3d4fd2(0x1117)](/ /):[_0x435135,''])(_0x551f7b),_0x2694bd={'Infinitive':_0x7cd43c,'PastTense':_0x5cd686(_0x7cd43c,_0x5a9020),'PresentTense':_0x5cd686(_0x7cd43c,_0x5491d7),'Gerund':_0x5cd686(_0x7cd43c,_0x20d18d),'FutureTense':_0x3d4fd2(0x11d5)+_0x7cd43c},_0x2809e2=_0x5cd686(_0x7cd43c,_0x1c7399);if(_0x2809e2!==_0x551f7b&&_0x2809e2!==_0x2694bd[_0x3d4fd2(0xe52)]){let _0x1e1a3c=_0x42f08b[_0x3d4fd2(0x1d8a)][_0x3d4fd2(0x2c34)]||{};_0x3d4fd2(0x1ebc)!==_0x1e1a3c[_0x2809e2]&&_0x3d4fd2(0x4972)!==_0x1e1a3c[_0x2809e2]||(_0x3d4fd2(0x4b75)===_0x551f7b&&(_0x2809e2='played'),_0x2694bd[_0x3d4fd2(0x1ebc)]=_0x2809e2);}return _0x1e5b9f&&Object[_0x3d4fd2(0x1ea9)](_0x2694bd)[_0x3d4fd2(0xa21)](_0x1e84d9=>{_0x2694bd[_0x1e84d9]+='\x20'+_0x1e5b9f;}),_0x2694bd;},_0x44ae51={'toInfinitive':_0x86805e,'conjugate':_0x578649,'all':function(_0x29ecc0,_0x44b2a4){const 
_0x462630=_0x37e46c;let _0x14285b=_0x578649(_0x29ecc0,_0x44b2a4);return delete _0x14285b[_0x462630(0x2b4d)],Object[_0x462630(0x1fae)](_0x14285b)['filter'](_0x3ce74b=>_0x3ce74b);}},_0x6cd315=function(_0x129f01,_0x4b018c){const _0x3f96b2=_0x37e46c,_0x38be50=_0x4b018c[_0x3f96b2(0x21c9)]['models'][_0x3f96b2(0x4c3a)];return _0x5cd686(_0x129f01,_0x38be50);},_0x8f6e70=function(_0xc46d6c,_0x116316){const _0x14a5c3=_0x37e46c,_0x367379=_0x116316[_0x14a5c3(0x21c9)][_0x14a5c3(0xdec)]['toComparative'];return _0x5cd686(_0xc46d6c,_0x367379);},_0xf98328=function(_0x2b77db='',_0x876c76=[]){const _0x5a8d22=_0x37e46c,_0xc48d9d=_0x2b77db[_0x5a8d22(0x1b19)];for(let _0x41a384=_0xc48d9d<=0x6?_0xc48d9d-0x1:0x6;_0x41a384>=0x1;_0x41a384-=0x1){let _0x8d477=_0x2b77db[_0x5a8d22(0x37b5)](_0xc48d9d-_0x41a384,_0x2b77db[_0x5a8d22(0x1b19)]);if(!0x0===_0x876c76[_0x8d477[_0x5a8d22(0x1b19)]][_0x5a8d22(0x2427)](_0x8d477))return _0x2b77db[_0x5a8d22(0x384c)](0x0,_0xc48d9d-_0x41a384)+_0x876c76[_0x8d477[_0x5a8d22(0x1b19)]][_0x8d477];}return null;},_0x2bab88=_0x37e46c(0x23f7),_0x5388ad=new 
Set([_0x37e46c(0x428a)+_0x2bab88,'chem'+_0x2bab88,_0x37e46c(0x1390)+_0x2bab88,_0x37e46c(0x1818)+_0x2bab88,_0x37e46c(0x38c5)+_0x2bab88,'ecolog'+_0x2bab88,_0x37e46c(0x4a3e)+_0x2bab88,_0x37e46c(0xe3c)+_0x2bab88,_0x37e46c(0xafb)+_0x2bab88,_0x37e46c(0x40b6)+_0x2bab88,_0x37e46c(0x27e9)+_0x2bab88,_0x37e46c(0x11d2)+_0x2bab88,'log'+_0x2bab88,_0x37e46c(0x390e)+_0x2bab88,_0x37e46c(0x589)+_0x2bab88,_0x37e46c(0x4e32)+_0x2bab88,_0x37e46c(0x4418)+_0x2bab88,_0x37e46c(0x510c)+_0x2bab88,_0x37e46c(0x510c)+_0x2bab88,_0x37e46c(0x243f)+_0x2bab88,'phys'+_0x2bab88,_0x37e46c(0x3309)+_0x2bab88,'polit'+_0x2bab88,_0x37e46c(0x254a)+_0x2bab88,'rad'+_0x2bab88,_0x37e46c(0x7e4)+_0x2bab88,'statist'+_0x2bab88,_0x37e46c(0x130b)+_0x2bab88,_0x37e46c(0x20d4)+_0x2bab88,_0x37e46c(0x425e)+_0x2bab88,_0x37e46c(0x2863)+_0x2bab88,_0x37e46c(0x414)+_0x2bab88,_0x37e46c(0x36c9)+_0x2bab88]),_0x4c8f3b=[null,{},{'ly':''},{'ily':'y','bly':_0x37e46c(0x39d1),'ply':'ple'},{'ally':'al','rply':'rp'},{'ually':_0x37e46c(0x1fdd),'ially':'ial','cally':'cal','eally':'eal','rally':_0x37e46c(0x1553),'nally':_0x37e46c(0x1635),'mally':_0x37e46c(0x4302),'eeply':'eep','eaply':_0x37e46c(0x461f)},{'ically':'ic'}],_0x34b228=new 
Set([_0x37e46c(0x3809),_0x37e46c(0x257c),_0x37e46c(0x2622),_0x37e46c(0x9b2),_0x37e46c(0x33d3),_0x37e46c(0x40a4),_0x37e46c(0x502c),_0x37e46c(0x22b3),'duly',_0x37e46c(0x4b4d),_0x37e46c(0x2152),_0x37e46c(0x374c),_0x37e46c(0xaeb),_0x37e46c(0x266d),_0x37e46c(0x24fc),'presumably',_0x37e46c(0x35dd),_0x37e46c(0x13ca),'best',_0x37e46c(0x1d41),_0x37e46c(0xb1c),_0x37e46c(0x324f),_0x37e46c(0x1e08)]),_0x528e55={'wholly':_0x37e46c(0xcae),'fully':_0x37e46c(0x2102),'truly':'true','gently':_0x37e46c(0x4071),'singly':_0x37e46c(0x4d4),'customarily':_0x37e46c(0x3a71),'idly':'idle','publically':_0x37e46c(0x39ce),'quickly':'quick','superbly':_0x37e46c(0x3e09),'cynically':_0x37e46c(0x1995),'well':_0x37e46c(0x211c)},_0x4746cb=[null,{'y':_0x37e46c(0x4131)},{'ly':'ly','ic':'ically'},{'ial':'ially','ual':_0x37e46c(0x269d),'tle':_0x37e46c(0x29c6),'ble':'bly','ple':_0x37e46c(0x476b),'ary':_0x37e46c(0xc96)},{},{},{}],_0x442084={'cool':_0x37e46c(0x1980),'whole':_0x37e46c(0x1017),'full':_0x37e46c(0x32dd),'good':_0x37e46c(0x284f),'idle':_0x37e46c(0x9d8),'public':_0x37e46c(0x4bf7),'single':_0x37e46c(0x948),'special':_0x37e46c(0x2152)},_0xc9369c=function(_0x42009a){if(_0x442084['hasOwnProperty'](_0x42009a))return _0x442084[_0x42009a];let _0x1c17dc=_0xf98328(_0x42009a,_0x4746cb);return _0x1c17dc||(_0x1c17dc=_0x42009a+'ly'),_0x1c17dc;},_0x4f2c99={'toSuperlative':_0x6cd315,'toComparative':_0x8f6e70,'toAdverb':_0xc9369c,'toNoun':function(_0x183f5e,_0x7e9f1e){const _0x3f94be=_0x37e46c,_0x500a15=_0x7e9f1e[_0x3f94be(0x21c9)]['models'][_0x3f94be(0x2c0b)];return _0x5cd686(_0x183f5e,_0x500a15);},'fromAdverb':function(_0x1378d1){const _0x19e262=_0x37e46c;return _0x1378d1[_0x19e262(0x2a85)]('ly')?_0x5388ad['has'](_0x1378d1)?_0x1378d1[_0x19e262(0x741)](/ically/,'ical'):_0x34b228['has'](_0x1378d1)?null:_0x528e55[_0x19e262(0x2427)](_0x1378d1)?_0x528e55[_0x1378d1]:_0xf98328(_0x1378d1,_0x4c8f3b)||_0x1378d1:null;},'fromSuperlative':function(_0x274aae,_0x4097ad){const 
_0x383204=_0x37e46c,_0x573797=_0x4097ad['two'][_0x383204(0xdec)]['fromSuperlative'];return _0x5cd686(_0x274aae,_0x573797);},'fromComparative':function(_0x44969d,_0x384190){const _0x498e36=_0x37e46c,_0xaaf871=_0x384190[_0x498e36(0x21c9)]['models']['fromComparative'];return _0x5cd686(_0x44969d,_0xaaf871);},'all':function(_0x533e10,_0x534559){const _0x23c8ab=_0x37e46c;let _0x4445c9=[_0x533e10];return _0x4445c9[_0x23c8ab(0x1715)](_0x6cd315(_0x533e10,_0x534559)),_0x4445c9[_0x23c8ab(0x1715)](_0x8f6e70(_0x533e10,_0x534559)),_0x4445c9[_0x23c8ab(0x1715)](_0xc9369c(_0x533e10)),_0x4445c9=_0x4445c9[_0x23c8ab(0x1465)](_0xaa0f36=>_0xaa0f36),_0x4445c9=new Set(_0x4445c9),Array['from'](_0x4445c9);}},_0x1cdb95={'noun':_0x57ead4,'verb':_0x44ae51,'adjective':_0x4f2c99},_0x170938={'Singular':(_0x168661,_0x4bfc69,_0x2f1bbc,_0x2ef7ad)=>{const _0x8eae4f=_0x37e46c;let _0x1a168a=_0x2ef7ad[_0x8eae4f(0x1d8a)][_0x8eae4f(0x2c34)],_0x373e9e=_0x2f1bbc[_0x8eae4f(0x21c9)][_0x8eae4f(0x5161)][_0x8eae4f(0x4d62)][_0x8eae4f(0x467a)](_0x168661,_0x2ef7ad);_0x1a168a[_0x373e9e]||(_0x4bfc69[_0x373e9e]=_0x4bfc69[_0x373e9e]||_0x8eae4f(0x25f7));},'Actor':(_0xad347b,_0x57d573,_0x46a853,_0x2f9336)=>{const _0x48d1ea=_0x37e46c;let _0x1b1e9e=_0x2f9336[_0x48d1ea(0x1d8a)][_0x48d1ea(0x2c34)],_0x493965=_0x46a853[_0x48d1ea(0x21c9)][_0x48d1ea(0x5161)]['noun']['toPlural'](_0xad347b,_0x2f9336);_0x1b1e9e[_0x493965]||(_0x57d573[_0x493965]=_0x57d573[_0x493965]||[_0x48d1ea(0x25f7),_0x48d1ea(0x3b29)]);},'Comparable':(_0xcee2f4,_0x2fc2c6,_0x5cf272,_0x2edca1)=>{const _0x55c16b=_0x37e46c;let _0x442b52=_0x2edca1[_0x55c16b(0x1d8a)][_0x55c16b(0x2c34)],{toSuperlative:_0x362687,toComparative:_0x3f617d}=_0x5cf272[_0x55c16b(0x21c9)][_0x55c16b(0x5161)][_0x55c16b(0x2b3f)],_0x2b8967=_0x362687(_0xcee2f4,_0x2edca1);_0x442b52[_0x2b8967]||(_0x2fc2c6[_0x2b8967]=_0x2fc2c6[_0x2b8967]||_0x55c16b(0x2f40));let 
_0x5c2b28=_0x3f617d(_0xcee2f4,_0x2edca1);_0x442b52[_0x5c2b28]||(_0x2fc2c6[_0x5c2b28]=_0x2fc2c6[_0x5c2b28]||'Comparative'),_0x2fc2c6[_0xcee2f4]='Adjective';},'Demonym':(_0x257bfb,_0x2bbe9c,_0x29d582,_0x26f692)=>{const _0x363af5=_0x37e46c;let _0x4f7eac=_0x29d582[_0x363af5(0x21c9)][_0x363af5(0x5161)][_0x363af5(0x4d62)][_0x363af5(0x467a)](_0x257bfb,_0x26f692);_0x2bbe9c[_0x4f7eac]=_0x2bbe9c[_0x4f7eac]||['Demonym',_0x363af5(0x25f7)];},'Infinitive':(_0x518f0e,_0x1076a7,_0xf1b7ff,_0x22857d)=>{const _0x308fe6=_0x37e46c;let _0x4abd62=_0x22857d[_0x308fe6(0x1d8a)][_0x308fe6(0x2c34)],_0xa40915=_0xf1b7ff[_0x308fe6(0x21c9)][_0x308fe6(0x5161)]['verb']['conjugate'](_0x518f0e,_0x22857d);Object[_0x308fe6(0x1694)](_0xa40915)['forEach'](_0x49c7ba=>{_0x4abd62[_0x49c7ba[0x1]]||_0x1076a7[_0x49c7ba[0x1]]||'FutureTense'===_0x49c7ba[0x0]||(_0x1076a7[_0x49c7ba[0x1]]=_0x49c7ba[0x0]);});},'PhrasalVerb':(_0x2a44c7,_0x4d18ed,_0x20fcd7,_0x2629c1)=>{const _0x1b67ca=_0x37e46c;let _0x3049ed=_0x2629c1[_0x1b67ca(0x1d8a)][_0x1b67ca(0x2c34)];_0x4d18ed[_0x2a44c7]=[_0x1b67ca(0x39b6),_0x1b67ca(0x2631)];let _0x114c8f=_0x2629c1[_0x1b67ca(0x1d8a)][_0x1b67ca(0x839)],[_0xd31baa,_0x504f7e]=_0x2a44c7[_0x1b67ca(0x1117)]('\x20');_0x3049ed[_0xd31baa]||(_0x4d18ed[_0xd31baa]=_0x4d18ed[_0xd31baa]||'Infinitive');let _0x1009ad=_0x20fcd7[_0x1b67ca(0x21c9)][_0x1b67ca(0x5161)][_0x1b67ca(0x4134)][_0x1b67ca(0x2343)](_0xd31baa,_0x2629c1);delete _0x1009ad[_0x1b67ca(0x2b4d)],Object[_0x1b67ca(0x1694)](_0x1009ad)[_0x1b67ca(0xa21)](_0x541902=>{const _0x16facd=_0x1b67ca;if(_0x16facd(0x3b29)===_0x541902[0x0]||''===_0x541902[0x1])return;_0x4d18ed[_0x541902[0x1]]||_0x3049ed[_0x541902[0x1]]||(_0x4d18ed[_0x541902[0x1]]=_0x541902[0x0]),_0x114c8f[_0x541902[0x1]]=0x2;let _0x436af8=_0x541902[0x1]+'\x20'+_0x504f7e;_0x4d18ed[_0x436af8]=_0x4d18ed[_0x436af8]||[_0x541902[0x0],_0x16facd(0x39b6)];});},'Multiple':(_0x365ecf,_0x458a45)=>{const 
_0x53a56b=_0x37e46c;_0x458a45[_0x365ecf]=[_0x53a56b(0x981),_0x53a56b(0xab2)],_0x458a45[_0x365ecf+'th']=[_0x53a56b(0x981),_0x53a56b(0x4eea)],_0x458a45[_0x365ecf+_0x53a56b(0x1ef3)]=[_0x53a56b(0x981),'Fraction'];},'Cardinal':(_0x3c939a,_0x175114)=>{const _0x1a6916=_0x37e46c;_0x175114[_0x3c939a]=[_0x1a6916(0xc93),_0x1a6916(0xab2)];},'Ordinal':(_0x459def,_0x453a9c)=>{const _0x43c2de=_0x37e46c;_0x453a9c[_0x459def]=[_0x43c2de(0xc93),_0x43c2de(0x4eea)],_0x453a9c[_0x459def+'s']=[_0x43c2de(0xc93),'Fraction'];},'Place':(_0x25431f,_0x425a16)=>{const _0x1f1725=_0x37e46c;_0x425a16[_0x25431f]=[_0x1f1725(0x1c11),'ProperNoun'];},'Region':(_0x1f577c,_0x52ed6d)=>{const _0x50a9e2=_0x37e46c;_0x52ed6d[_0x1f577c]=[_0x50a9e2(0x3d8e),_0x50a9e2(0xb7e)];}},_0x23b568=function(_0x42f2c6,_0x303a89){const _0x253fbc=_0x37e46c,{methods:_0x7f3e4f,model:_0x5bba76}=_0x303a89;let _0x2ad6a2={},_0x4c245d={};return Object['keys'](_0x42f2c6)[_0x253fbc(0xa21)](_0x200267=>{const _0x28ac6f=_0x253fbc;let _0x46e5ba=_0x42f2c6[_0x200267],_0x5ec0a9=(_0x200267=(_0x200267=_0x200267[_0x28ac6f(0x6e8)]()['trim']())['replace'](/'s\b/,''))[_0x28ac6f(0x1117)](/ /);_0x5ec0a9[_0x28ac6f(0x1b19)]>0x1&&(void 0x0===_0x4c245d[_0x5ec0a9[0x0]]||_0x5ec0a9['length']>_0x4c245d[_0x5ec0a9[0x0]])&&(_0x4c245d[_0x5ec0a9[0x0]]=_0x5ec0a9[_0x28ac6f(0x1b19)]),!0x0===_0x170938[_0x28ac6f(0x2427)](_0x46e5ba)&&_0x170938[_0x46e5ba](_0x200267,_0x2ad6a2,_0x7f3e4f,_0x5bba76),_0x2ad6a2[_0x200267]=_0x2ad6a2[_0x200267]||_0x46e5ba;}),delete _0x2ad6a2[''],delete _0x2ad6a2[_0x253fbc(0x1582)],delete _0x2ad6a2['\x20'],{'lex':_0x2ad6a2,'_multi':_0x4c245d};},_0x153615=function(_0x6a1b1b){const _0x1de121=_0x37e46c,_0x4a6f10=/[,:;]/;let _0x2f5111=[];return _0x6a1b1b[_0x1de121(0xa21)](_0x2e5fff=>{const _0x23cfd1=_0x1de121;let _0x258619=0x0;_0x2e5fff[_0x23cfd1(0xa21)]((_0x3afec1,_0x11becd)=>{const _0x158e04=_0x23cfd1;_0x4a6f10[_0x158e04(0x1769)](_0x3afec1[_0x158e04(0x24ce)])&&function(_0x5ced2b,_0x47d57e){const _0x5d3eca=_0x158e04,_0x2659cd=/^[0-9]+$/;let 
_0x5c751e=_0x5ced2b[_0x47d57e];if(!_0x5c751e)return!0x1;const _0x4399a4=new Set([_0x5d3eca(0xd88),_0x5d3eca(0x2aa1),_0x5d3eca(0x2949),'jan']);if(_0x5d3eca(0x83d)===_0x5c751e[_0x5d3eca(0x47d)]||_0x4399a4[_0x5d3eca(0x3170)](_0x5c751e[_0x5d3eca(0x47d)]))return!0x1;if(_0x5c751e['tags'][_0x5d3eca(0x3170)]('Place')||_0x5c751e[_0x5d3eca(0x521a)]['has'](_0x5d3eca(0x448e)))return!0x1;if(_0x5ced2b[_0x47d57e-0x1]){let _0x2c21ad=_0x5ced2b[_0x47d57e-0x1];if(_0x2c21ad[_0x5d3eca(0x521a)][_0x5d3eca(0x3170)](_0x5d3eca(0x448e))||_0x4399a4[_0x5d3eca(0x3170)](_0x2c21ad[_0x5d3eca(0x47d)]))return!0x1;if(_0x2c21ad[_0x5d3eca(0x521a)][_0x5d3eca(0x3170)]('Adjective')||_0x5c751e[_0x5d3eca(0x521a)][_0x5d3eca(0x3170)](_0x5d3eca(0x4972)))return!0x1;}let _0x55741a=_0x5c751e[_0x5d3eca(0x47d)];return 0x1!==_0x55741a[_0x5d3eca(0x1b19)]&&0x2!==_0x55741a[_0x5d3eca(0x1b19)]&&0x4!==_0x55741a['length']||!_0x2659cd['test'](_0x55741a);}(_0x2e5fff,_0x11becd+0x1)&&(_0x2f5111[_0x158e04(0x1715)](_0x2e5fff[_0x158e04(0x384c)](_0x258619,_0x11becd+0x1)),_0x258619=_0x11becd+0x1);}),_0x258619<_0x2e5fff[_0x23cfd1(0x1b19)]&&_0x2f5111[_0x23cfd1(0x1715)](_0x2e5fff[_0x23cfd1(0x384c)](_0x258619,_0x2e5fff['length']));}),_0x2f5111;},_0x55e084={'e':[_0x37e46c(0x1249),'louse',_0x37e46c(0x33e6),'formulae',_0x37e46c(0x15e2),_0x37e46c(0x3077),'vitae'],'i':[_0x37e46c(0x39a9),_0x37e46c(0x4ad0),_0x37e46c(0x2966),_0x37e46c(0x3e1b),_0x37e46c(0xc5f),_0x37e46c(0x3c34),_0x37e46c(0x1637),_0x37e46c(0x4b76)],'n':[_0x37e46c(0x487a)],'t':['feet']},_0x8116b5=new 
Set([_0x37e46c(0x1c47),_0x37e46c(0x4da8),_0x37e46c(0x20a8)]),_0x4fcdc0=[_0x37e46c(0x28f0),_0x37e46c(0x2cea),'was',_0x37e46c(0x270a),_0x37e46c(0x3037),'vas',_0x37e46c(0x43ef),_0x37e46c(0x1f83),_0x37e46c(0x6e5),_0x37e46c(0x41bd),_0x37e46c(0x17e1),_0x37e46c(0x1607),_0x37e46c(0x4d17),_0x37e46c(0x3efc),_0x37e46c(0x3eda),_0x37e46c(0x4077),'eus',_0x37e46c(0x42ef),_0x37e46c(0x27d8),_0x37e46c(0xe2f),'lus','nus',_0x37e46c(0x14ef),_0x37e46c(0x2298),_0x37e46c(0x4ca8),_0x37e46c(0x44eb),_0x37e46c(0x1900),'tus',_0x37e46c(0x120f),_0x37e46c(0x109d),_0x37e46c(0x993),_0x37e46c(0x1cfc),_0x37e46c(0x35ac),'\x27s','ss'],_0x5aec52=function(_0x189a65){const _0x5e930a=_0x37e46c;if(!_0x189a65||_0x189a65[_0x5e930a(0x1b19)]<=0x3)return!0x1;if(_0x8116b5[_0x5e930a(0x3170)](_0x189a65))return!0x0;let _0x5a771d=_0x189a65[_0x189a65[_0x5e930a(0x1b19)]-0x1];return _0x55e084['hasOwnProperty'](_0x5a771d)?_0x55e084[_0x5a771d]['find'](_0x5b2edc=>_0x189a65[_0x5e930a(0x2a85)](_0x5b2edc)):'s'===_0x5a771d&&!_0x4fcdc0[_0x5e930a(0x5144)](_0x119a18=>_0x189a65['endsWith'](_0x119a18));},_0x2db37b={'two':{'quickSplit':_0x153615,'expandLexicon':_0x23b568,'transform':_0x1cdb95,'looksPlural':_0x5aec52}},_0x281ffa=function(_0x66f0b4){const _0x8b39aa=_0x37e46c,{irregularPlurals:_0x111c0e}=_0x66f0b4[_0x8b39aa(0x21c9)],{lexicon:_0x3673d6}=_0x66f0b4[_0x8b39aa(0x1d8a)];return Object['entries'](_0x111c0e)['forEach'](_0x3a59b3=>{const _0x50b3f2=_0x8b39aa;_0x3673d6[_0x3a59b3[0x0]]=_0x3673d6[_0x3a59b3[0x0]]||_0x50b3f2(0x1e9f),_0x3673d6[_0x3a59b3[0x1]]=_0x3673d6[_0x3a59b3[0x1]]||_0x50b3f2(0x25f7);}),_0x66f0b4;};let _0x4e62da={'one':{'lexicon':{}},'two':{'models':_0x1d5b61}};const 
_0x1f779e={'Actor|Verb':_0x37e46c(0x3b29),'Adj|Gerund':_0x37e46c(0x4972),'Adj|Noun':_0x37e46c(0x4972),'Adj|Past':'Adjective','Adj|Present':_0x37e46c(0x4972),'Noun|Verb':_0x37e46c(0x1e9f),'Noun|Gerund':'Gerund','Person|Noun':_0x37e46c(0x1786),'Person|Date':_0x37e46c(0x1c7a),'Person|Verb':'FirstName','Person|Place':'Person','Person|Adj':'Comparative','Plural|Verb':_0x37e46c(0x25f7),'Unit|Noun':_0x37e46c(0x1786)},_0x180640=function(_0xa639b0,_0x4e06bf){const _0x4e1cfb=_0x37e46c,_0x53e698={'model':_0x4e06bf,'methods':_0x2db37b};let {lex:_0x138dba,_multi:_0x1bae59}=_0x2db37b[_0x4e1cfb(0x21c9)]['expandLexicon'](_0xa639b0,_0x53e698);return Object['assign'](_0x4e06bf[_0x4e1cfb(0x1d8a)]['lexicon'],_0x138dba),Object[_0x4e1cfb(0x4e14)](_0x4e06bf[_0x4e1cfb(0x1d8a)][_0x4e1cfb(0x839)],_0x1bae59),_0x4e06bf;},_0x3020d7=function(_0x13c04b,_0x42741f,_0x291105){const _0x154da7=_0x37e46c;let _0x23a9be=_0x578649(_0x13c04b,_0x4e62da);_0x42741f[_0x23a9be[_0x154da7(0xe52)]]=_0x42741f[_0x23a9be[_0x154da7(0xe52)]]||_0x154da7(0xe52),_0x42741f[_0x23a9be[_0x154da7(0x42ce)]]=_0x42741f[_0x23a9be[_0x154da7(0x42ce)]]||_0x154da7(0x42ce),!0x0===_0x291105&&(_0x42741f[_0x23a9be[_0x154da7(0x2c88)]]=_0x42741f[_0x23a9be[_0x154da7(0x2c88)]]||_0x154da7(0x2c88));},_0x31b4df=function(_0x3915cc,_0x468084,_0x38e6e1){const _0x3147c8=_0x37e46c;let _0x12f4b1=_0x6cd315(_0x3915cc,_0x38e6e1);_0x468084[_0x12f4b1]=_0x468084[_0x12f4b1]||_0x3147c8(0x2f40);let _0x24877c=_0x8f6e70(_0x3915cc,_0x38e6e1);_0x468084[_0x24877c]=_0x468084[_0x24877c]||'Comparative';},_0x10ef82=function(_0x46a793,_0x588fd3){const _0x5ae4db=_0x37e46c;let _0x114a1b={};const _0x3a4d63=_0x588fd3[_0x5ae4db(0x1d8a)][_0x5ae4db(0x2c34)];return Object[_0x5ae4db(0x1ea9)](_0x46a793)['forEach'](_0x2508aa=>{const 
_0x230ab7=_0x5ae4db,_0x42dddd=_0x46a793[_0x2508aa];if(_0x114a1b[_0x2508aa]=_0x1f779e[_0x42dddd],_0x230ab7(0x2d09)!==_0x42dddd&&_0x230ab7(0x46a)!==_0x42dddd&&'Actor|Verb'!==_0x42dddd||_0x3020d7(_0x2508aa,_0x3a4d63,!0x1),_0x230ab7(0x458e)===_0x42dddd&&(_0x3020d7(_0x2508aa,_0x3a4d63,!0x0),_0x31b4df(_0x2508aa,_0x3a4d63,_0x588fd3)),_0x230ab7(0x50bc)===_0x42dddd&&_0x31b4df(_0x2508aa,_0x3a4d63,_0x588fd3),_0x230ab7(0x32f6)===_0x42dddd||_0x230ab7(0xeda)===_0x42dddd){let _0x20f785=_0x86805e(_0x2508aa,_0x4e62da,_0x230ab7(0x42ce));_0x3a4d63[_0x20f785]||(_0x114a1b[_0x20f785]=_0x230ab7(0x2631));}if('Noun|Gerund'!==_0x42dddd&&'Adj|Noun'!==_0x42dddd&&'Person|Noun'!==_0x42dddd||function(_0x42e4b2,_0x45d074,_0xdadcaa){const _0x491b13=_0x230ab7;let _0x322272=_0x7b8422(_0x42e4b2,_0xdadcaa);_0x45d074[_0x322272]=_0x45d074[_0x322272]||_0x491b13(0x25f7);}(_0x2508aa,_0x3a4d63,_0x588fd3),'Adj|Past'===_0x42dddd){let _0x2145f8=_0x86805e(_0x2508aa,_0x4e62da,_0x230ab7(0xe52));_0x3a4d63[_0x2145f8]||(_0x114a1b[_0x2145f8]=_0x230ab7(0x2631));}}),_0x588fd3=_0x180640(_0x114a1b,_0x588fd3);},_0x4b8d7c=function(_0x32c86c){const _0x60586d=_0x37e46c;return _0x32c86c=function(_0x286c85,_0x2dbdca){const _0x399499=a0_0x11e7;return Object['keys'](_0x286c85)[_0x399499(0xa21)](_0x27b03f=>{const _0xac6f46=_0x399499;'Uncountable'===_0x286c85[_0x27b03f]&&(_0x2dbdca[_0xac6f46(0x21c9)][_0xac6f46(0x381c)][_0x27b03f]=!0x0,_0x286c85[_0x27b03f]=_0xac6f46(0x163f));}),_0x2dbdca;}((_0x32c86c=_0x180640(_0x32c86c[_0x60586d(0x1d8a)]['lexicon'],_0x32c86c))[_0x60586d(0x1d8a)][_0x60586d(0x2c34)],_0x32c86c),_0x32c86c=_0x10ef82(_0x32c86c[_0x60586d(0x21c9)][_0x60586d(0x2b12)],_0x32c86c),_0x32c86c=_0x281ffa(_0x32c86c);};let 
_0x36c5f5={'one':{'_multiCache':{},'lexicon':_0x415e70,'frozenLex':{'20th\x20century\x20fox':_0x37e46c(0x4c36),'7\x20eleven':_0x37e46c(0x4c36),'motel\x206':'Organization','excuse\x20me':'Expression','financial\x20times':_0x37e46c(0x4c36),'guns\x20n\x20roses':'Organization','la\x20z\x20boy':_0x37e46c(0x4c36),'labour\x20party':'Organization','new\x20kids\x20on\x20the\x20block':_0x37e46c(0x4c36),'new\x20york\x20times':_0x37e46c(0x4c36),'the\x20guess\x20who':_0x37e46c(0x4c36),'thin\x20lizzy':'Organization','prime\x20minister':_0x37e46c(0x3b29),'free\x20market':_0x37e46c(0x1e9f),'lay\x20up':'Singular','living\x20room':_0x37e46c(0x1e9f),'living\x20rooms':'Plural','spin\x20off':'Singular','appeal\x20court':'Uncountable','cold\x20war':_0x37e46c(0x163f),'gene\x20pool':_0x37e46c(0x163f),'machine\x20learning':_0x37e46c(0x163f),'nail\x20polish':'Uncountable','time\x20off':_0x37e46c(0x163f),'take\x20part':'Infinitive','bill\x20gates':_0x37e46c(0x904),'doctor\x20who':_0x37e46c(0x904),'dr\x20who':_0x37e46c(0x904),'he\x20man':_0x37e46c(0x904),'iron\x20man':_0x37e46c(0x904),'kid\x20cudi':_0x37e46c(0x904),'run\x20dmc':_0x37e46c(0x904),'rush\x20limbaugh':'Person','snow\x20white':_0x37e46c(0x904),'tiger\x20woods':_0x37e46c(0x904),'brand\x20new':_0x37e46c(0x4972),'en\x20route':'Adjective','left\x20wing':_0x37e46c(0x4972),'off\x20guard':_0x37e46c(0x4972),'on\x20board':'Adjective','part\x20time':_0x37e46c(0x4972),'right\x20wing':_0x37e46c(0x4972),'so\x20called':_0x37e46c(0x4972),'spot\x20on':_0x37e46c(0x4972),'straight\x20forward':'Adjective','super\x20duper':_0x37e46c(0x4972),'tip\x20top':_0x37e46c(0x4972),'top\x20notch':_0x37e46c(0x4972),'up\x20to\x20date':_0x37e46c(0x4972),'win\x20win':'Adjective','brooklyn\x20nets':'SportsTeam','chicago\x20bears':_0x37e46c(0x2aad),'houston\x20astros':_0x37e46c(0x2aad),'houston\x20dynamo':_0x37e46c(0x2aad),'houston\x20rockets':_0x37e46c(0x2aad),'houston\x20texans':_0x37e46c(0x2aad),'minnesota\x20twins':'SportsTeam','orlando\x20magic':_0x37e46c(0x2aad),
'san\x20antonio\x20spurs':_0x37e46c(0x2aad),'san\x20diego\x20chargers':_0x37e46c(0x2aad),'san\x20diego\x20padres':_0x37e46c(0x2aad),'iron\x20maiden':_0x37e46c(0xb7e),'isle\x20of\x20man':_0x37e46c(0x4138),'united\x20states':'Country','united\x20states\x20of\x20america':_0x37e46c(0x4138),'prince\x20edward\x20island':'Region','cedar\x20breaks':_0x37e46c(0x1c11),'cedar\x20falls':_0x37e46c(0x1c11),'point\x20blank':_0x37e46c(0x2cbd),'tiny\x20bit':_0x37e46c(0x2cbd),'by\x20the\x20time':_0x37e46c(0x37f1),'no\x20matter':_0x37e46c(0x37f1),'civil\x20wars':_0x37e46c(0x25f7),'credit\x20cards':_0x37e46c(0x25f7),'default\x20rates':_0x37e46c(0x25f7),'free\x20markets':'Plural','head\x20starts':'Plural','home\x20runs':_0x37e46c(0x25f7),'lay\x20ups':_0x37e46c(0x25f7),'phone\x20calls':_0x37e46c(0x25f7),'press\x20releases':_0x37e46c(0x25f7),'record\x20labels':_0x37e46c(0x25f7),'soft\x20serves':'Plural','student\x20loans':_0x37e46c(0x25f7),'tax\x20returns':_0x37e46c(0x25f7),'tv\x20shows':_0x37e46c(0x25f7),'video\x20games':_0x37e46c(0x25f7),'took\x20part':_0x37e46c(0xe52),'takes\x20part':_0x37e46c(0x2c88),'taking\x20part':_0x37e46c(0x42ce),'taken\x20part':_0x37e46c(0x1ebc),'light\x20bulb':_0x37e46c(0x1786),'rush\x20hour':_0x37e46c(0x1786),'fluid\x20ounce':_0x37e46c(0xdae),'the\x20rolling\x20stones':_0x37e46c(0x4c36)}},'two':{'irregularPlurals':_0x439866,'models':_0x1d5b61,'suffixPatterns':_0x52e5ef,'prefixPatterns':_0x3b25fc,'endsWith':_0x6013d7,'neighbours':_0x14988d,'regexNormal':[[/^[\w.]+@[\w.]+\.[a-z]{2,3}$/,_0x37e46c(0x2278)],[/^(https?:\/\/|www\.)+\w+\.[a-z]{2,3}/,_0x37e46c(0x1262),'http..'],[/^[a-z0-9./].+\.(com|net|gov|org|ly|edu|info|biz|dev|ru|jp|de|in|uk|br|io|ai)/,_0x37e46c(0x1262),_0x37e46c(0x3c0e)],[/^[PMCE]ST$/,_0x37e46c(0x3dd9),_0x37e46c(0xc75)],[/^ma?c'[a-z]{3}/,_0x37e46c(0x30cd),_0x37e46c(0x2161)],[/^o'[a-z]{3}/,_0x37e46c(0x30cd),'o\x27connor'],[/^ma?cd[aeiou][a-z]{3}/,'LastName',_0x37e46c(0x48dd)],[/^(lol)+[sz]$/,'Expression',_0x37e46c(0x45bd)],[/^wo{2,}a*h?$/,'Expressi
on',_0x37e46c(0x48c7)],[/^(hee?){2,}h?$/,_0x37e46c(0x804),_0x37e46c(0x28a8)],[/^(un|de|re)\\-[a-z\u00C0-\u00FF]{2}/,_0x37e46c(0x487b),_0x37e46c(0x4760)],[/^(m|k|cm|km)\/(s|h|hr)$/,_0x37e46c(0xdae),_0x37e46c(0x351d)],[/^(ug|ng|mg)\/(l|m3|ft3)$/,_0x37e46c(0xdae),_0x37e46c(0x4a27)]],'regexText':[[/^#[\p{Number}_]*\p{Letter}/u,_0x37e46c(0x47e6)],[/^@\w{2,}$/,_0x37e46c(0x1221)],[/^([A-Z]\.){2}[A-Z]?/i,[_0x37e46c(0x45e2),_0x37e46c(0x1786)],_0x37e46c(0x510f)],[/.{3}[lkmnp]in['‘’‛‵′`´]$/,_0x37e46c(0x42ce),_0x37e46c(0x82d)],[/.{4}s['‘’‛‵′`´]$/,_0x37e46c(0x3d93),_0x37e46c(0xa54)],[/^[\p{Emoji_Presentation}\p{Extended_Pictographic}]/u,'Emoji',_0x37e46c(0x555)]],'regexNumbers':[[/^@1?[0-9](am|pm)$/i,_0x37e46c(0x1703),_0x37e46c(0x35ae)],[/^@1?[0-9]:[0-9]{2}(am|pm)?$/i,_0x37e46c(0x1703),_0x37e46c(0x3e2d)],[/^'[0-9]{2}$/,'Year'],[/^[012]?[0-9](:[0-5][0-9])(:[0-5][0-9])$/,_0x37e46c(0x1703),_0x37e46c(0x1346)],[/^[012]?[0-9](:[0-5][0-9])?(:[0-5][0-9])? ?(am|pm)$/i,_0x37e46c(0x1703),_0x37e46c(0x3ad7)],[/^[012]?[0-9](:[0-5][0-9])(:[0-5][0-9])? 
?(am|pm)?$/i,'Time','1:12:31pm'],[/^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}/i,_0x37e46c(0x448e),_0x37e46c(0x2810)],[/^[0-9]{1,4}-[0-9]{1,2}-[0-9]{1,4}$/,_0x37e46c(0x448e),_0x37e46c(0x1634)],[/^[0-9]{1,4}\/[0-9]{1,2}\/([0-9]{4}|[0-9]{2})$/,_0x37e46c(0x448e),_0x37e46c(0x13c7)],[/^[0-9]{1,4}\.[0-9]{1,2}\.[0-9]{1,4}$/,'Date',_0x37e46c(0xd97)],[/^[0-9]{1,4}-[a-z]{2,9}-[0-9]{1,4}$/i,_0x37e46c(0x448e),'12-dec-2019'],[/^utc ?[+-]?[0-9]+$/,'Timezone','utc-9'],[/^(gmt|utc)[+-][0-9]{1,2}$/i,_0x37e46c(0x3dd9),'gmt-3'],[/^[0-9]{3}-[0-9]{4}$/,'PhoneNumber',_0x37e46c(0x2542)],[/^(\+?[0-9][ -])?[0-9]{3}[ -]?[0-9]{3}-[0-9]{4}$/,_0x37e46c(0xe0f),_0x37e46c(0x264f)],[/^[-+]?\p{Currency_Symbol}[-+]?[0-9]+(,[0-9]{3})*(\.[0-9]+)?([kmb]|bn)?\+?$/u,['Money',_0x37e46c(0x3a43)],_0x37e46c(0x2975)],[/^[-+]?[0-9]+(,[0-9]{3})*(\.[0-9]+)?\p{Currency_Symbol}\+?$/u,[_0x37e46c(0x4b84),'Value'],_0x37e46c(0x3d2c)],[/^[-+]?[$£]?[0-9]([0-9,.])+(usd|eur|jpy|gbp|cad|aud|chf|cny|hkd|nzd|kr|rub)$/i,[_0x37e46c(0x4b84),_0x37e46c(0x3a43)],_0x37e46c(0x2031)],[/^[-+]?[0-9]+(,[0-9]{3})*(\.[0-9]+)?\+?$/,[_0x37e46c(0xab2),_0x37e46c(0x466e)],_0x37e46c(0x36d4)],[/^[-+]?[0-9]+(,[0-9]{3})*(\.[0-9]+)?(st|nd|rd|r?th)$/,[_0x37e46c(0x4eea),'NumericValue'],_0x37e46c(0x3278)],[/^\.[0-9]+\+?$/,[_0x37e46c(0xab2),_0x37e46c(0x466e)],'.73th'],[/^[-+]?[0-9]+(,[0-9]{3})*(\.[0-9]+)?%\+?$/,[_0x37e46c(0x4990),_0x37e46c(0xab2),_0x37e46c(0x466e)],'-4%'],[/^\.[0-9]+%$/,[_0x37e46c(0x4990),'Cardinal','NumericValue'],_0x37e46c(0x22cc)],[/^[0-9]{1,4}\/[0-9]{1,4}(st|nd|rd|th)?s?$/,[_0x37e46c(0x2365),_0x37e46c(0x466e)],_0x37e46c(0x1c0d)],[/^[0-9.]{1,3}[a-z]{0,2}[-–—][0-9]{1,3}[a-z]{0,2}$/,[_0x37e46c(0x3a43),'NumberRange'],_0x37e46c(0x269b)],[/^[0-9]{1,2}(:[0-9][0-9])?(am|pm)? 
?[-–—] ?[0-9]{1,2}(:[0-9][0-9])?(am|pm)$/,[_0x37e46c(0x1703),_0x37e46c(0x2e56)],_0x37e46c(0x3dd1)],[/^[0-9.]+([a-z°]{1,4})$/,_0x37e46c(0x466e),_0x37e46c(0x38aa)]],'switches':_0x1ae5cf,'clues':_0x2d4f1b,'uncountable':{},'orgWords':_0x1d2170,'placeWords':_0x1aca21}};_0x36c5f5=_0x4b8d7c(_0x36c5f5);const _0x31db35=_0x36c5f5,_0x19456a=function(_0xa2da0c,_0x441c84,_0x22552f,_0x5f6efd){const _0x2a27eb=_0x37e46c,_0x418b61=_0x5f6efd[_0x2a27eb(0x1578)]['one']['setTag'];if(0x0===_0x441c84&&_0xa2da0c[_0x2a27eb(0x1b19)]>=0x3){const _0x2c9c7d=/:/;if(_0xa2da0c[0x0][_0x2a27eb(0x24ce)][_0x2a27eb(0x2d96)](_0x2c9c7d)){let _0x1b081a=_0xa2da0c[0x1];if(_0x1b081a[_0x2a27eb(0x521a)][_0x2a27eb(0x3170)](_0x2a27eb(0x3a43))||_0x1b081a[_0x2a27eb(0x521a)]['has'](_0x2a27eb(0x2278))||_0x1b081a['tags'][_0x2a27eb(0x3170)](_0x2a27eb(0xe0f)))return;_0x418b61([_0xa2da0c[0x0]],_0x2a27eb(0x804),_0x5f6efd,null,_0x2a27eb(0x17a3));}}},_0x444e87=function(_0xcd9e93,_0x4f6a95,_0x16a23e,_0x5ae2a7){const _0x34c7a7=_0x37e46c,_0xda0432=_0x5ae2a7[_0x34c7a7(0x1578)][_0x34c7a7(0x1d8a)][_0x34c7a7(0x4820)];'-'===_0xcd9e93[_0x4f6a95]['post']&&_0xcd9e93[_0x4f6a95+0x1]&&_0xda0432([_0xcd9e93[_0x4f6a95],_0xcd9e93[_0x4f6a95+0x1]],_0x34c7a7(0x472c),_0x5ae2a7,null,_0x34c7a7(0x48c0));},_0x36e886=/^(under|over|mis|re|un|dis|semi)-?/,_0x4c1359=function(_0x350da6,_0x386a72,_0x251e56){const _0x5507c5=_0x37e46c,_0x8932cc=_0x251e56[_0x5507c5(0x21c9)][_0x5507c5(0x2b12)];let _0x34e303=_0x350da6[_0x386a72];if(_0x8932cc[_0x5507c5(0x2427)](_0x34e303['normal']))_0x34e303[_0x5507c5(0x857)]=_0x8932cc[_0x34e303[_0x5507c5(0x47d)]];else{if(_0x36e886[_0x5507c5(0x1769)](_0x34e303[_0x5507c5(0x47d)])){let _0x3b21a8=_0x34e303[_0x5507c5(0x47d)][_0x5507c5(0x741)](_0x36e886,'');_0x3b21a8[_0x5507c5(0x1b19)]>0x3&&_0x8932cc[_0x5507c5(0x2427)](_0x3b21a8)&&(_0x34e303[_0x5507c5(0x857)]=_0x8932cc[_0x3b21a8]);}}},_0x454b28=function(_0x502595,_0x88877f,_0x12c1cf){const 
_0x1133cc=_0x37e46c;if(!_0x88877f||0x0===_0x88877f[_0x1133cc(0x1b19)])return;if(!0x0===_0x502595[_0x1133cc(0xf75)])return;const _0x49f1cd='undefined'!=typeof process&&process[_0x1133cc(0xe1a)]?process['env']:self[_0x1133cc(0xe1a)]||{};_0x49f1cd&&_0x49f1cd['DEBUG_TAGS']&&((_0x824fa6,_0x25692a,_0x50f64='')=>{const _0x1b4aef=_0x1133cc;_0x824fa6[_0x1b4aef(0x4006)]||_0x824fa6[_0x1b4aef(0x4570)],_0x1b4aef(0x2431)!=typeof _0x25692a&&_0x25692a[_0x1b4aef(0x1b19)]>0x2&&(_0x25692a=_0x25692a[_0x1b4aef(0x384c)](0x0,0x2)[_0x1b4aef(0x3541)](_0x1b4aef(0x386c))+'\x20+'),_0x25692a=_0x1b4aef(0x2431)!=typeof _0x25692a?_0x25692a[_0x1b4aef(0x3541)](',\x20#'):_0x25692a;})(_0x502595,_0x88877f,_0x12c1cf),_0x502595[_0x1133cc(0x521a)]=_0x502595[_0x1133cc(0x521a)]||new Set(),_0x1133cc(0x2431)==typeof _0x88877f?_0x502595[_0x1133cc(0x521a)][_0x1133cc(0x362c)](_0x88877f):_0x88877f[_0x1133cc(0xa21)](_0x294dde=>_0x502595[_0x1133cc(0x521a)][_0x1133cc(0x362c)](_0x294dde));},_0x36dcab=['Acronym','Abbreviation',_0x37e46c(0xb7e),_0x37e46c(0x163f),_0x37e46c(0x3d93),_0x37e46c(0x2394),_0x37e46c(0x4520),'Honorific',_0x37e46c(0x1c7a)],_0xc48397=function(_0x163186,_0x52f604,_0x4eda49){const _0x26150f=_0x37e46c;let _0x494040=_0x163186[_0x52f604],_0xdf21cb=Array[_0x26150f(0x27e6)](_0x494040[_0x26150f(0x521a)]);for(let _0x54e301=0x0;_0x54e301<_0xdf21cb[_0x26150f(0x1b19)];_0x54e301+=0x1)if(_0x4eda49[_0x26150f(0x1d8a)]['tagSet'][_0xdf21cb[_0x54e301]]){let _0x19890a=_0x4eda49[_0x26150f(0x1d8a)]['tagSet'][_0xdf21cb[_0x54e301]][_0x26150f(0x1ed8)];_0x454b28(_0x494040,_0x19890a,_0x26150f(0x349c)+_0xdf21cb[_0x54e301]);}!function(_0x5477ab){const 
_0x2087be=_0x26150f;!_0x5477ab[_0x2087be(0x521a)][_0x2087be(0x3170)](_0x2087be(0x1786))||_0x5477ab[_0x2087be(0x521a)][_0x2087be(0x3170)]('Plural')||_0x5477ab[_0x2087be(0x521a)][_0x2087be(0x3170)](_0x2087be(0x1e9f))||_0x36dcab[_0x2087be(0x5144)](_0x39e75e=>_0x5477ab[_0x2087be(0x521a)][_0x2087be(0x3170)](_0x39e75e))||(_0x5aec52(_0x5477ab['normal'])?_0x454b28(_0x5477ab,_0x2087be(0x25f7),_0x2087be(0x368c)):_0x454b28(_0x5477ab,_0x2087be(0x1e9f),'3-singular-guess'));}(_0x494040),function(_0xde2603){const _0x3b145a=_0x26150f;let _0xd7800f=_0xde2603[_0x3b145a(0x521a)];if(_0xd7800f[_0x3b145a(0x3170)](_0x3b145a(0x487b))&&0x1===_0xd7800f[_0x3b145a(0x395f)]){let _0x592fc2=_0x51fa16(_0xde2603[_0x3b145a(0x47d)]);_0x592fc2&&_0x454b28(_0xde2603,_0x592fc2,_0x3b145a(0x2227));}}(_0x494040);},_0x5ade7e=/^\p{Lu}[\p{Ll}'’]/u,_0x27f897=/[0-9]/,_0x59b045=[_0x37e46c(0x448e),_0x37e46c(0x1c7a),_0x37e46c(0x14dd),'Unit','Expression'],_0x2c2161=/[IVX]/,_0x14604c=/^[IVXLCDM]{2,}$/,_0x4e5ced=/^M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$/,_0x31e803={'li':!0x0,'dc':!0x0,'md':!0x0,'dm':!0x0,'ml':!0x0},_0xeecead=function(_0x4921fa,_0x3b071b,_0x5e60c3){const _0x76883c=_0x37e46c;let _0x571733=_0x4921fa[_0x3b071b];_0x571733[_0x76883c(0x3bb5)]=_0x571733[_0x76883c(0x3bb5)]||[0x0,0x0];let _0x1217c1=_0x571733[_0x76883c(0x3bb5)][0x1],_0x2b757e=_0x571733['text']||'';return 
0x0!==_0x1217c1&&!0x0===_0x5ade7e[_0x76883c(0x1769)](_0x2b757e)&&!0x1===_0x27f897[_0x76883c(0x1769)](_0x2b757e)?_0x59b045['find'](_0x176058=>_0x571733[_0x76883c(0x521a)][_0x76883c(0x3170)](_0x176058))||_0x571733[_0x76883c(0x1228)][_0x76883c(0x2d96)](/["']$/)||_0x76883c(0x3556)===_0x571733[_0x76883c(0x47d)]?null:(_0xc48397(_0x4921fa,_0x3b071b,_0x5e60c3),_0x571733[_0x76883c(0x521a)][_0x76883c(0x3170)]('Noun')||_0x571733[_0x76883c(0xf75)]||_0x571733[_0x76883c(0x521a)][_0x76883c(0x4933)](),_0x454b28(_0x571733,_0x76883c(0xb7e),_0x76883c(0x2f36)),!0x0):_0x2b757e[_0x76883c(0x1b19)]>=0x2&&_0x14604c[_0x76883c(0x1769)](_0x2b757e)&&_0x2c2161[_0x76883c(0x1769)](_0x2b757e)&&_0x4e5ced[_0x76883c(0x1769)](_0x2b757e)&&!_0x31e803[_0x571733[_0x76883c(0x47d)]]?(_0x454b28(_0x571733,_0x76883c(0x3f8),_0x76883c(0x4900)),!0x0):null;},_0xc0e2d=function(_0x252fa9='',_0x146b54=[]){const _0x1b09d3=_0x37e46c,_0x250f38=_0x252fa9[_0x1b09d3(0x1b19)];let _0x11ca88=0x7;_0x250f38<=_0x11ca88&&(_0x11ca88=_0x250f38-0x1);for(let _0x373637=_0x11ca88;_0x373637>0x1;_0x373637-=0x1){let _0x5b13a8=_0x252fa9[_0x1b09d3(0x37b5)](_0x250f38-_0x373637,_0x250f38);if(!0x0===_0x146b54[_0x5b13a8[_0x1b09d3(0x1b19)]][_0x1b09d3(0x2427)](_0x5b13a8))return _0x146b54[_0x5b13a8[_0x1b09d3(0x1b19)]][_0x5b13a8];}return null;},_0x283bc7=function(_0x1115b1,_0x4176c0,_0x501090){const _0x44459c=_0x37e46c;let _0x17ef6e=_0x1115b1[_0x4176c0];if(0x0===_0x17ef6e[_0x44459c(0x521a)][_0x44459c(0x395f)]){let _0x31fbba=_0xc0e2d(_0x17ef6e['normal'],_0x501090[_0x44459c(0x21c9)][_0x44459c(0x5fd)]);if(null!==_0x31fbba)return _0x454b28(_0x17ef6e,_0x31fbba,_0x44459c(0x504a)),_0x17ef6e['confidence']=0.7,!0x0;if(_0x17ef6e[_0x44459c(0x4570)]&&(_0x31fbba=_0xc0e2d(_0x17ef6e['implicit'],_0x501090['two'][_0x44459c(0x5fd)]),null!==_0x31fbba))return _0x454b28(_0x17ef6e,_0x31fbba,_0x44459c(0x7b1)),_0x17ef6e[_0x44459c(0x202a)]=0.7,!0x0;}return null;},_0x23f20d=/['‘’‛‵′`´]/,_0x2899e1=function(_0x33c292,_0x49b4dc){for(let 
_0x217689=0x0;_0x217689<_0x49b4dc['length'];_0x217689+=0x1)if(!0x0===_0x49b4dc[_0x217689][0x0]['test'](_0x33c292))return _0x49b4dc[_0x217689];return null;},_0x49daff=function(_0x352471,_0x2bfa0d,_0x3a684a,_0x49f53c){const _0x173f3d=_0x37e46c,_0x5b0492=_0x49f53c['methods'][_0x173f3d(0x1d8a)][_0x173f3d(0x4820)];let {regexText:_0x3aef12,regexNormal:_0x5d4a5c,regexNumbers:_0xabb2e5,endsWith:_0x50f833}=_0x3a684a[_0x173f3d(0x21c9)],_0x1a0e6b=_0x352471[_0x2bfa0d],_0x1422e2=_0x1a0e6b[_0x173f3d(0x192e)]||_0x1a0e6b[_0x173f3d(0x47d)],_0x14843d=_0x1a0e6b['text'];_0x23f20d[_0x173f3d(0x1769)](_0x1a0e6b[_0x173f3d(0x24ce)])&&!_0x23f20d[_0x173f3d(0x1769)](_0x1a0e6b['pre'])&&(_0x14843d+=_0x1a0e6b[_0x173f3d(0x24ce)][_0x173f3d(0x1b23)]());let _0x5e183f=_0x2899e1(_0x14843d,_0x3aef12)||_0x2899e1(_0x1422e2,_0x5d4a5c);return!_0x5e183f&&/[0-9]/['test'](_0x1422e2)&&(_0x5e183f=_0x2899e1(_0x1422e2,_0xabb2e5)),_0x5e183f||0x0!==_0x1a0e6b['tags'][_0x173f3d(0x395f)]||(_0x5e183f=function(_0x385514='',_0x41fbf9){const _0x4aa2a3=_0x173f3d;let _0x17ea10=_0x385514[_0x385514[_0x4aa2a3(0x1b19)]-0x1];if(!0x0===_0x41fbf9[_0x4aa2a3(0x2427)](_0x17ea10)){let _0xe91f79=_0x41fbf9[_0x17ea10]||[];for(let _0x1fe364=0x0;_0x1fe364<_0xe91f79[_0x4aa2a3(0x1b19)];_0x1fe364+=0x1)if(!0x0===_0xe91f79[_0x1fe364][0x0][_0x4aa2a3(0x1769)](_0x385514))return _0xe91f79[_0x1fe364];}return null;}(_0x1422e2,_0x50f833)),_0x5e183f?(_0x5b0492([_0x1a0e6b],_0x5e183f[0x1],_0x49f53c,null,_0x173f3d(0x138a)+(_0x5e183f[0x2]||_0x5e183f[0x0])+'\x27'),_0x1a0e6b[_0x173f3d(0x202a)]=0.6,!0x0):null;},_0x308101=function(_0x1b3c25,_0x30a7b7,_0x4d24da){const _0x80d5d5=_0x37e46c;let _0x3d2189=_0x1b3c25[_0x30a7b7];if(0x0===_0x3d2189[_0x80d5d5(0x521a)][_0x80d5d5(0x395f)]){let _0x2d436a=function(_0x352c25='',_0x392dec=[]){const _0x4a45c3=_0x80d5d5,_0x48c2d0=_0x352c25['length'];let _0x4c15a6=0x7;_0x4c15a6>_0x48c2d0-0x3&&(_0x4c15a6=_0x48c2d0-0x3);for(let _0x2a8555=_0x4c15a6;_0x2a8555>0x2;_0x2a8555-=0x1){let 
_0x28dd40=_0x352c25[_0x4a45c3(0x37b5)](0x0,_0x2a8555);if(!0x0===_0x392dec[_0x28dd40[_0x4a45c3(0x1b19)]][_0x4a45c3(0x2427)](_0x28dd40))return _0x392dec[_0x28dd40[_0x4a45c3(0x1b19)]][_0x28dd40];}return null;}(_0x3d2189[_0x80d5d5(0x47d)],_0x4d24da[_0x80d5d5(0x21c9)][_0x80d5d5(0xfce)]);if(null!==_0x2d436a)return _0x454b28(_0x3d2189,_0x2d436a,_0x80d5d5(0x2e66)),_0x3d2189[_0x80d5d5(0x202a)]=0.5,!0x0;}return null;},_0x322af2=new Set(['in','on','by',_0x37e46c(0x30d6),_0x37e46c(0x3c19),'to',_0x37e46c(0x356b),'throughout',_0x37e46c(0x501c),_0x37e46c(0x5c5),_0x37e46c(0x5097),_0x37e46c(0x1349),'of',_0x37e46c(0x138f),'next','last',_0x37e46c(0x2633),_0x37e46c(0x4263),_0x37e46c(0x24ce),_0x37e46c(0x1228),_0x37e46c(0x1250),_0x37e46c(0x4540),_0x37e46c(0x1d2e),_0x37e46c(0xd88)]),_0x101aea=function(_0x8efde0){const _0x2e9031=_0x37e46c;if(!_0x8efde0)return!0x1;let _0x2bf001=_0x8efde0[_0x2e9031(0x47d)]||_0x8efde0[_0x2e9031(0x4570)];return!!_0x322af2[_0x2e9031(0x3170)](_0x2bf001)||(!!(_0x8efde0[_0x2e9031(0x521a)][_0x2e9031(0x3170)](_0x2e9031(0x448e))||_0x8efde0[_0x2e9031(0x521a)][_0x2e9031(0x3170)]('Month')||_0x8efde0[_0x2e9031(0x521a)][_0x2e9031(0x3170)](_0x2e9031(0x14dd))||_0x8efde0[_0x2e9031(0x521a)][_0x2e9031(0x3170)](_0x2e9031(0x37ec)))||!!_0x8efde0[_0x2e9031(0x521a)][_0x2e9031(0x3170)](_0x2e9031(0xb7e)));},_0x1638eb=function(_0x3797e8){const _0xedcb3c=_0x37e46c;return!!_0x3797e8&&(!!_0x3797e8[_0xedcb3c(0x521a)][_0xedcb3c(0x3170)](_0xedcb3c(0x4eea))||(!!(_0x3797e8[_0xedcb3c(0x521a)][_0xedcb3c(0x3170)](_0xedcb3c(0xab2))&&_0x3797e8[_0xedcb3c(0x47d)][_0xedcb3c(0x1b19)]<0x3)||('is'===_0x3797e8[_0xedcb3c(0x47d)]||_0xedcb3c(0x23ef)===_0x3797e8[_0xedcb3c(0x47d)])));},_0x3c73c6=function(_0xa30f04){const _0x33885a=_0x37e46c;return 
_0xa30f04&&(_0xa30f04[_0x33885a(0x521a)]['has'](_0x33885a(0x448e))||_0xa30f04[_0x33885a(0x521a)][_0x33885a(0x3170)](_0x33885a(0x1c7a))||_0xa30f04['tags'][_0x33885a(0x3170)](_0x33885a(0x14dd))||_0xa30f04[_0x33885a(0x521a)][_0x33885a(0x3170)](_0x33885a(0x37ec)));},_0x4dcc5d=function(_0x32e41a,_0x24b6f7){const _0x3d195e=_0x37e46c,_0x2baeb2=_0x32e41a[_0x24b6f7];if(_0x2baeb2[_0x3d195e(0x521a)][_0x3d195e(0x3170)](_0x3d195e(0x466e))&&_0x2baeb2[_0x3d195e(0x521a)][_0x3d195e(0x3170)](_0x3d195e(0xab2))&&0x4===_0x2baeb2[_0x3d195e(0x47d)][_0x3d195e(0x1b19)]){let _0x27baea=Number(_0x2baeb2[_0x3d195e(0x47d)]);if(_0x27baea&&!isNaN(_0x27baea)&&_0x27baea>0x578&&_0x27baea<0x834){let _0x3e2b57=_0x32e41a[_0x24b6f7-0x1],_0x507823=_0x32e41a[_0x24b6f7+0x1];if(_0x101aea(_0x3e2b57)||_0x101aea(_0x507823))return _0x454b28(_0x2baeb2,_0x3d195e(0x37ec),_0x3d195e(0x2184));if(_0x27baea>=0x780&&_0x27baea<0x7e9){if(_0x1638eb(_0x3e2b57)||_0x1638eb(_0x507823))return _0x454b28(_0x2baeb2,_0x3d195e(0x37ec),_0x3d195e(0x240a));if(_0x3c73c6(_0x32e41a[_0x24b6f7-0x2])||_0x3c73c6(_0x32e41a[_0x24b6f7+0x2]))return _0x454b28(_0x2baeb2,_0x3d195e(0x37ec),_0x3d195e(0x2edd));if(_0x3e2b57&&(_0x3e2b57[_0x3d195e(0x521a)][_0x3d195e(0x3170)](_0x3d195e(0x3b3e))||_0x3e2b57['tags'][_0x3d195e(0x3170)](_0x3d195e(0x3d93)))&&_0x507823&&_0x507823[_0x3d195e(0x521a)][_0x3d195e(0x3170)]('Noun')&&!_0x507823[_0x3d195e(0x521a)]['has'](_0x3d195e(0x25f7)))return _0x454b28(_0x2baeb2,'Year',_0x3d195e(0x50b9));}}}return null;},_0x4eb59f=function(_0x44167c,_0x13dc77,_0x123de0,_0x2d97a4){const 
_0x3fdf9e=_0x37e46c,_0x6bd6ef=_0x2d97a4['methods'][_0x3fdf9e(0x1d8a)]['setTag'],_0x21a846=_0x44167c[_0x13dc77],_0xf87d10=[_0x3fdf9e(0xe52),_0x3fdf9e(0x2c88),_0x3fdf9e(0x1a5e),'Modal',_0x3fdf9e(0x4ae6)];_0x21a846[_0x3fdf9e(0x521a)][_0x3fdf9e(0x3170)](_0x3fdf9e(0x487b))&&(_0xf87d10['find'](_0x333472=>_0x21a846['tags'][_0x3fdf9e(0x3170)](_0x333472))||_0x6bd6ef([_0x21a846],'Infinitive',_0x2d97a4,null,_0x3fdf9e(0x3b03)));},_0x27c0e6=/^[A-Z]('s|,)?$/,_0x54dc48=/^[A-Z-]+$/,_0x1e5a16=/^[A-Z]+s$/,_0x3342e4=/([A-Z]\.)+[A-Z]?,?$/,_0x3da0cf=/[A-Z]{2,}('s|,)?$/,_0x8145e3=/([a-z]\.)+[a-z]\.?$/,_0x2c2220={'I':!0x0,'A':!0x0},_0x47dce7={'la':!0x0,'ny':!0x0,'us':!0x0,'dc':!0x0,'gb':!0x0},_0x143d1c=function(_0x45909c,_0x2d0ad6,_0x1279c7){const _0x3bb906=_0x37e46c;let _0x5037c1=_0x45909c[_0x2d0ad6];return _0x5037c1[_0x3bb906(0x521a)][_0x3bb906(0x3170)](_0x3bb906(0x3f8))||_0x5037c1[_0x3bb906(0x521a)][_0x3bb906(0x3170)](_0x3bb906(0x45e2))||_0x5037c1[_0x3bb906(0xf75)]?null:function(_0x1114a3,_0x2ea70e){const _0x2c363f=_0x3bb906;let 
_0xac4c82=_0x1114a3['text'];if(!0x1===_0x54dc48[_0x2c363f(0x1769)](_0xac4c82)){if(!(_0xac4c82[_0x2c363f(0x1b19)]>0x3&&!0x0===_0x1e5a16['test'](_0xac4c82)))return!0x1;_0xac4c82=_0xac4c82['replace'](/s$/,'');}return!(_0xac4c82[_0x2c363f(0x1b19)]>0x5||_0x2c2220['hasOwnProperty'](_0xac4c82)||_0x2ea70e[_0x2c363f(0x1d8a)]['lexicon']['hasOwnProperty'](_0x1114a3[_0x2c363f(0x47d)])||!0x0!==_0x3342e4[_0x2c363f(0x1769)](_0xac4c82)&&!0x0!==_0x8145e3['test'](_0xac4c82)&&!0x0!==_0x27c0e6[_0x2c363f(0x1769)](_0xac4c82)&&!0x0!==_0x3da0cf['test'](_0xac4c82));}(_0x5037c1,_0x1279c7)?(_0x5037c1[_0x3bb906(0x521a)][_0x3bb906(0x4933)](),_0x454b28(_0x5037c1,[_0x3bb906(0x45e2),_0x3bb906(0x1786)],'3-no-period-acronym'),!0x0===_0x47dce7[_0x5037c1[_0x3bb906(0x47d)]]&&_0x454b28(_0x5037c1,_0x3bb906(0x1c11),_0x3bb906(0xe9c)),!0x0===_0x1e5a16[_0x3bb906(0x1769)](_0x5037c1[_0x3bb906(0x4006)])&&_0x454b28(_0x5037c1,'Plural','3-plural-acronym'),!0x0):!_0x2c2220[_0x3bb906(0x2427)](_0x5037c1[_0x3bb906(0x4006)])&&_0x27c0e6[_0x3bb906(0x1769)](_0x5037c1[_0x3bb906(0x4006)])?(_0x5037c1[_0x3bb906(0x521a)][_0x3bb906(0x4933)](),_0x454b28(_0x5037c1,[_0x3bb906(0x45e2),'Noun'],_0x3bb906(0x2181)),!0x0):_0x5037c1[_0x3bb906(0x521a)][_0x3bb906(0x3170)](_0x3bb906(0x4c36))&&_0x5037c1[_0x3bb906(0x4006)][_0x3bb906(0x1b19)]<=0x3?(_0x454b28(_0x5037c1,_0x3bb906(0x45e2),_0x3bb906(0xe43)),!0x0):_0x5037c1[_0x3bb906(0x521a)][_0x3bb906(0x3170)](_0x3bb906(0x4c36))&&_0x54dc48['test'](_0x5037c1[_0x3bb906(0x4006)])&&_0x5037c1[_0x3bb906(0x4006)]['length']<=0x6?(_0x454b28(_0x5037c1,_0x3bb906(0x45e2),_0x3bb906(0x4d04)),!0x0):null;},_0x5cca45=function(_0x4238fe,_0x499131){const _0x43dad9=_0x37e46c;if(!_0x4238fe)return null;let _0x32a633=_0x499131[_0x43dad9(0x5144)](_0x3c85b2=>_0x4238fe[_0x43dad9(0x47d)]===_0x3c85b2[0x0]);return _0x32a633?_0x32a633[0x1]:null;},_0x21ad5b=function(_0x36722a,_0x1239d7){const _0x199532=_0x37e46c;if(!_0x36722a)return null;let 
_0x1c7ad3=_0x1239d7[_0x199532(0x5144)](_0x558a86=>_0x36722a[_0x199532(0x521a)][_0x199532(0x3170)](_0x558a86[0x0]));return _0x1c7ad3?_0x1c7ad3[0x1]:null;},_0x4ac1fd=function(_0x47540e,_0x2ea4b8,_0x696d0){const _0x35890e=_0x37e46c,{leftTags:_0x4615a6,leftWords:_0x571869,rightWords:_0x25308c,rightTags:_0x3bb11e}=_0x696d0[_0x35890e(0x21c9)][_0x35890e(0x2378)];let _0x531e70=_0x47540e[_0x2ea4b8];if(0x0===_0x531e70[_0x35890e(0x521a)]['size']){let _0x5db5b0=null;if(_0x5db5b0=_0x5db5b0||_0x5cca45(_0x47540e[_0x2ea4b8-0x1],_0x571869),_0x5db5b0=_0x5db5b0||_0x5cca45(_0x47540e[_0x2ea4b8+0x1],_0x25308c),_0x5db5b0=_0x5db5b0||_0x21ad5b(_0x47540e[_0x2ea4b8-0x1],_0x4615a6),_0x5db5b0=_0x5db5b0||_0x21ad5b(_0x47540e[_0x2ea4b8+0x1],_0x3bb11e),_0x5db5b0)return _0x454b28(_0x531e70,_0x5db5b0,_0x35890e(0x1f40)),_0xc48397(_0x47540e,_0x2ea4b8,_0x696d0),_0x47540e[_0x2ea4b8][_0x35890e(0x202a)]=0.2,!0x0;}return null;},_0x1b4ae5=function(_0x21896d,_0x260f8a,_0x1f5663){const _0x4dbaae=_0x37e46c;return!!_0x21896d&&(!_0x21896d[_0x4dbaae(0x521a)][_0x4dbaae(0x3170)]('FirstName')&&!_0x21896d[_0x4dbaae(0x521a)]['has'](_0x4dbaae(0x1c11))&&(!!(_0x21896d[_0x4dbaae(0x521a)][_0x4dbaae(0x3170)](_0x4dbaae(0xb7e))||_0x21896d[_0x4dbaae(0x521a)][_0x4dbaae(0x3170)]('Organization')||_0x21896d['tags'][_0x4dbaae(0x3170)]('Acronym'))||!(_0x1f5663||(_0x655e68=_0x21896d[_0x4dbaae(0x4006)],!/^\p{Lu}[\p{Ll}'’]/u[_0x4dbaae(0x1769)](_0x655e68)))&&(0x0!==_0x260f8a||_0x21896d['tags'][_0x4dbaae(0x3170)]('Singular'))));var _0x655e68;},_0x1e443b=function(_0x3fe0b5,_0x5959d3,_0x193e3a,_0x37e952){const _0x52c2f2=_0x37e46c,_0x147dd3=_0x193e3a[_0x52c2f2(0x1556)][_0x52c2f2(0x21c9)][_0x52c2f2(0x4f77)],_0x23be76=_0x193e3a['methods'][_0x52c2f2(0x1d8a)][_0x52c2f2(0x4820)];let _0x3f1adc=_0x3fe0b5[_0x5959d3];if(!0x0===_0x147dd3[_0x3f1adc['machine']||_0x3f1adc[_0x52c2f2(0x47d)]]&&_0x1b4ae5(_0x3fe0b5[_0x5959d3-0x1],_0x5959d3-0x1,_0x37e952)){_0x23be76([_0x3fe0b5[_0x5959d3]],'Organization',_0x193e3a,null,_0x52c2f2(0x780));for(let 
_0x11d15c=_0x5959d3;_0x11d15c>=0x0&&_0x1b4ae5(_0x3fe0b5[_0x11d15c],_0x11d15c,_0x37e952);_0x11d15c-=0x1)_0x23be76([_0x3fe0b5[_0x11d15c]],_0x52c2f2(0x4c36),_0x193e3a,null,_0x52c2f2(0x780));}return null;},_0x3bd12=/'s$/,_0x371257=new Set([_0x37e46c(0x20ce),'city',_0x37e46c(0x18a5),_0x37e46c(0x3db9),_0x37e46c(0x51b7),_0x37e46c(0x2a3),_0x37e46c(0x2c3e),_0x37e46c(0x19b2),_0x37e46c(0xf60),'local',_0x37e46c(0xfe0),_0x37e46c(0x3bba),_0x37e46c(0x1969),_0x37e46c(0x3db5),_0x37e46c(0x231a),_0x37e46c(0x1b5e),_0x37e46c(0x206b),_0x37e46c(0x267d)]),_0x5360c3=new Set([_0x37e46c(0x2820),'centre',_0x37e46c(0x269e),'range',_0x37e46c(0x2460),_0x37e46c(0x83f),_0x37e46c(0x4574),_0x37e46c(0x2d44)]),_0x308e6d=function(_0x38d8a2,_0x590b6a,_0x55e0c3){const _0x4f27af=_0x37e46c;if(!_0x38d8a2)return!0x1;let _0x5de1ed=_0x38d8a2[_0x4f27af(0x521a)];return!(_0x5de1ed[_0x4f27af(0x3170)](_0x4f27af(0x4c36))||_0x5de1ed[_0x4f27af(0x3170)](_0x4f27af(0x3d93))||_0x3bd12[_0x4f27af(0x1769)](_0x38d8a2[_0x4f27af(0x47d)]))&&(!(!_0x5de1ed['has']('ProperNoun')&&!_0x5de1ed[_0x4f27af(0x3170)](_0x4f27af(0x1c11)))||!(_0x55e0c3||(_0x58cf36=_0x38d8a2[_0x4f27af(0x4006)],!/^\p{Lu}[\p{Ll}'’]/u[_0x4f27af(0x1769)](_0x58cf36)))&&(0x0!==_0x590b6a||_0x5de1ed['has'](_0x4f27af(0x1e9f))));var _0x58cf36;},_0x4f82bb=function(_0x3097c7,_0x25cb6d,_0x59e169,_0x2b7576){const _0xd202eb=_0x37e46c,_0x5e8cab=_0x59e169['model'][_0xd202eb(0x21c9)][_0xd202eb(0x45a3)],_0x1df6f9=_0x59e169[_0xd202eb(0x1578)][_0xd202eb(0x1d8a)][_0xd202eb(0x4820)];let _0x3e5e38=_0x3097c7[_0x25cb6d],_0x32e007=_0x3e5e38[_0xd202eb(0x192e)]||_0x3e5e38[_0xd202eb(0x47d)];if(!0x0===_0x5e8cab[_0x32e007]){for(let _0x890600=_0x25cb6d-0x1;_0x890600>=0x0;_0x890600-=0x1)if(!_0x371257[_0xd202eb(0x3170)](_0x3097c7[_0x890600][_0xd202eb(0x47d)])){if(!_0x308e6d(_0x3097c7[_0x890600],_0x890600,_0x2b7576))break;_0x1df6f9(_0x3097c7['slice'](_0x890600,_0x25cb6d+0x1),_0xd202eb(0x1c11),_0x59e169,null,_0xd202eb(0x125e));}if(_0x5360c3['has'](_0x32e007))return!0x1;for(let 
_0x41a176=_0x25cb6d+0x1;_0x41a176<_0x3097c7[_0xd202eb(0x1b19)];_0x41a176+=0x1){if(_0x308e6d(_0x3097c7[_0x41a176],_0x41a176,_0x2b7576))return _0x1df6f9(_0x3097c7[_0xd202eb(0x384c)](_0x25cb6d,_0x41a176+0x1),_0xd202eb(0x1c11),_0x59e169,null,_0xd202eb(0x4c91)),!0x0;if('of'!==_0x3097c7[_0x41a176]['normal']&&!_0x371257['has'](_0x3097c7[_0x41a176][_0xd202eb(0x47d)]))break;}}return null;},_0x20b124=function(_0x3d8ce9,_0x31509e,_0x195166){const _0x5cee52=_0x37e46c;let _0x5cc531=!0x1,_0x19b6b5=_0x3d8ce9[_0x31509e]['tags'];(0x0===_0x19b6b5['size']||0x1===_0x19b6b5['size']&&(_0x19b6b5[_0x5cee52(0x3170)]('Hyphenated')||_0x19b6b5[_0x5cee52(0x3170)]('HashTag')||_0x19b6b5[_0x5cee52(0x3170)]('Prefix')))&&(_0x5cc531=!0x0),_0x5cc531&&(_0x454b28(_0x3d8ce9[_0x31509e],_0x5cee52(0x1786),_0x5cee52(0x2ce1)),_0xc48397(_0x3d8ce9,_0x31509e,_0x195166),_0x3d8ce9[_0x31509e][_0x5cee52(0x202a)]=0.1);},_0x1cd82f=/^[A-Z][a-z]/,_0x2efe4d=(_0x540d39,_0x129caf)=>_0x540d39[_0x129caf][_0x37e46c(0x521a)][_0x37e46c(0x3170)](_0x37e46c(0xb7e))&&_0x1cd82f[_0x37e46c(0x1769)](_0x540d39[_0x129caf][_0x37e46c(0x4006)])?_0x37e46c(0x1786):null,_0x2bbccb=(_0x4562a9,_0x1552b1,_0x16e0ba)=>0x0!==_0x1552b1||_0x4562a9[0x1]?null:_0x16e0ba,_0x2f2e20={'Adj|Gerund':(_0x21dfe5,_0x466829)=>_0x2efe4d(_0x21dfe5,_0x466829),'Adj|Noun':(_0x246fd5,_0x1858e0)=>_0x2efe4d(_0x246fd5,_0x1858e0)||function(_0x198dfd,_0x2f8525){const 
_0x57c483=_0x37e46c;return!_0x198dfd[_0x2f8525+0x1]&&_0x198dfd[_0x2f8525-0x1]&&_0x198dfd[_0x2f8525-0x1][_0x57c483(0x521a)][_0x57c483(0x3170)](_0x57c483(0x3b3e))?'Noun':null;}(_0x246fd5,_0x1858e0),'Actor|Verb':(_0x3c011f,_0x44834d)=>_0x2efe4d(_0x3c011f,_0x44834d),'Adj|Past':(_0x55503c,_0x28478b)=>_0x2efe4d(_0x55503c,_0x28478b),'Adj|Present':(_0xa18a75,_0x53971b)=>_0x2efe4d(_0xa18a75,_0x53971b),'Noun|Gerund':(_0x383f89,_0x5c7164)=>_0x2efe4d(_0x383f89,_0x5c7164),'Noun|Verb':(_0x46fd5b,_0x2f0084)=>_0x2f0084>0x0&&_0x2efe4d(_0x46fd5b,_0x2f0084)||_0x2bbccb(_0x46fd5b,_0x2f0084,_0x37e46c(0x2631)),'Plural|Verb':(_0xe5efb5,_0x2b8768)=>_0x2efe4d(_0xe5efb5,_0x2b8768)||_0x2bbccb(_0xe5efb5,_0x2b8768,_0x37e46c(0x2c88))||function(_0x33d103,_0x2cbd78,_0x4fc746){const _0x498cbb=_0x37e46c;return 0x0===_0x2cbd78&&_0x33d103[_0x498cbb(0x1b19)]>0x3?_0x4fc746:null;}(_0xe5efb5,_0x2b8768,_0x37e46c(0x25f7)),'Person|Noun':(_0xec4d84,_0x46b84e)=>_0x2efe4d(_0xec4d84,_0x46b84e),'Person|Verb':(_0x4da1f7,_0x2d9d07)=>0x0!==_0x2d9d07?_0x2efe4d(_0x4da1f7,_0x2d9d07):null,'Person|Adj':(_0x214920,_0x3d6461)=>0x0===_0x3d6461&&_0x214920[_0x37e46c(0x1b19)]>0x1||_0x2efe4d(_0x214920,_0x3d6461)?_0x37e46c(0x904):null},_0x54be51=_0x2f2e20,_0xefd95b=_0x37e46c(0x1daa)!=typeof process&&process[_0x37e46c(0xe1a)]?process['env']:self[_0x37e46c(0xe1a)]||{},_0xab510=/^(under|over|mis|re|un|dis|semi)-?/,_0x480a4a=(_0x266df4,_0x5f5cf0)=>{const _0x6946d9=_0x37e46c;if(!_0x266df4||!_0x5f5cf0)return null;let _0x4372aa=_0x266df4[_0x6946d9(0x47d)]||_0x266df4[_0x6946d9(0x4570)],_0x54902b=null;return _0x5f5cf0[_0x6946d9(0x2427)](_0x4372aa)&&(_0x54902b=_0x5f5cf0[_0x4372aa]),_0x54902b&&_0xefd95b[_0x6946d9(0x3a92)],_0x54902b;},_0xc58389=(_0x4435be,_0x26523f={},_0x1b0fca)=>{const _0x57b995=_0x37e46c;if(!_0x4435be||!_0x26523f)return null;let 
_0x428c32=Array[_0x57b995(0x27e6)](_0x4435be[_0x57b995(0x521a)])[_0x57b995(0x4c33)]((_0x2a6fc0,_0x1f35da)=>(_0x1b0fca[_0x2a6fc0]?_0x1b0fca[_0x2a6fc0][_0x57b995(0x1ed8)][_0x57b995(0x1b19)]:0x0)>(_0x1b0fca[_0x1f35da]?_0x1b0fca[_0x1f35da][_0x57b995(0x1ed8)]['length']:0x0)?-0x1:0x1),_0x1eb817=_0x428c32[_0x57b995(0x5144)](_0x433a40=>_0x26523f[_0x433a40]);return _0x1eb817&&_0xefd95b[_0x57b995(0x3a92)],_0x1eb817=_0x26523f[_0x1eb817],_0x1eb817;},_0x2739cb=function(_0x31e6ba,_0x39d729,_0x38a52b){const _0x58b74a=_0x37e46c,_0x318403=_0x38a52b[_0x58b74a(0x1556)],_0x8f2040=_0x38a52b['methods']['one'][_0x58b74a(0x4820)],{switches:_0x4bd941,clues:_0x123fe3}=_0x318403[_0x58b74a(0x21c9)],_0xe9f510=_0x31e6ba[_0x39d729];let _0x1ee500=_0xe9f510[_0x58b74a(0x47d)]||_0xe9f510[_0x58b74a(0x4570)]||'';if(_0xab510[_0x58b74a(0x1769)](_0x1ee500)&&!_0x4bd941[_0x1ee500]&&(_0x1ee500=_0x1ee500[_0x58b74a(0x741)](_0xab510,'')),_0xe9f510[_0x58b74a(0x857)]){let _0x44ac17=_0xe9f510['switch'];if(_0xe9f510[_0x58b74a(0x521a)][_0x58b74a(0x3170)](_0x58b74a(0x45e2))||_0xe9f510[_0x58b74a(0x521a)][_0x58b74a(0x3170)](_0x58b74a(0x39b6)))return;let _0x5b2ae5=function(_0x56a71f,_0x2f2bc8,_0x30af5e,_0x545685){const _0x20aeaf=_0x58b74a;if(!_0x30af5e)return null;const _0x2e0a7c=_0x20aeaf(0x1411)!==_0x56a71f[_0x2f2bc8-0x1]?.[_0x20aeaf(0x4006)]?_0x2f2bc8-0x1:Math['max'](0x0,_0x2f2bc8-0x2),_0x508d94=_0x545685[_0x20aeaf(0x1d8a)][_0x20aeaf(0x3f15)];let _0x41d868=_0x480a4a(_0x56a71f[_0x2f2bc8+0x1],_0x30af5e['afterWords']);return 
_0x41d868=_0x41d868||_0x480a4a(_0x56a71f[_0x2e0a7c],_0x30af5e[_0x20aeaf(0x22e3)]),_0x41d868=_0x41d868||_0xc58389(_0x56a71f[_0x2e0a7c],_0x30af5e[_0x20aeaf(0xbc7)],_0x508d94),_0x41d868=_0x41d868||_0xc58389(_0x56a71f[_0x2f2bc8+0x1],_0x30af5e[_0x20aeaf(0x1880)],_0x508d94),_0x41d868;}(_0x31e6ba,_0x39d729,_0x123fe3[_0x44ac17],_0x318403);_0x54be51[_0x44ac17]&&(_0x5b2ae5=_0x54be51[_0x44ac17](_0x31e6ba,_0x39d729)||_0x5b2ae5),_0x5b2ae5?(_0x8f2040([_0xe9f510],_0x5b2ae5,_0x38a52b,null,_0x58b74a(0x49f4)+_0x44ac17+')'),_0xc48397(_0x31e6ba,_0x39d729,_0x318403)):_0xefd95b['DEBUG_TAGS'];}},_0x29e986={'there':!0x0,'this':!0x0,'it':!0x0,'him':!0x0,'her':!0x0,'us':!0x0},_0x432170=function(_0x56042b,_0xc09914){const _0xf53029=_0x37e46c,_0x22985f=_0xc09914[_0xf53029(0x1578)][_0xf53029(0x1d8a)][_0xf53029(0x4820)],_0x500d63=_0xc09914['model'][_0xf53029(0x1d8a)][_0xf53029(0x839)]||{};let _0x3e3371=_0x56042b[0x0];if((_0xf53029(0x2d09)===_0x3e3371['switch']||_0x3e3371[_0xf53029(0x521a)][_0xf53029(0x3170)](_0xf53029(0x2631)))&&_0x56042b['length']>=0x2){if(_0x56042b[_0xf53029(0x1b19)]<0x4&&!_0x29e986[_0x56042b[0x1]['normal']])return;if(!_0x3e3371['tags']['has'](_0xf53029(0x39b6))&&_0x500d63[_0xf53029(0x2427)](_0x3e3371[_0xf53029(0x47d)]))return;(_0x56042b[0x1]['tags'][_0xf53029(0x3170)](_0xf53029(0x1786))||_0x56042b[0x1]['tags'][_0xf53029(0x3170)](_0xf53029(0x3b3e)))&&(_0x56042b[_0xf53029(0x384c)](0x1,0x3)[_0xf53029(0x363a)](_0x5ae4f4=>_0x5ae4f4['tags'][_0xf53029(0x3170)](_0xf53029(0x487b)))&&!_0x3e3371['tags']['has'](_0xf53029(0xd8b))||_0x22985f([_0x3e3371],_0xf53029(0x44e2),_0xc09914,null,_0xf53029(0x3d8b)));}},_0x68690=function(_0x44308b){const _0x321422=_0x37e46c;if(_0x44308b[_0x321422(0x1465)](_0x278b89=>!_0x278b89[_0x321422(0x521a)][_0x321422(0x3170)](_0x321422(0xb7e)))[_0x321422(0x1b19)]<=0x3)return!0x1;const _0x1c3047=/^[a-z]/;return 
_0x44308b[_0x321422(0x12d8)](_0x40995a=>!_0x1c3047[_0x321422(0x1769)](_0x40995a[_0x321422(0x4006)]));},_0x3ebdf9=function(_0x5c3018,_0x315564,_0x1f5f38,_0x308aeb){const _0x1b5269=_0x37e46c;for(let _0x2f0c48=0x0;_0x2f0c48<_0x5c3018[_0x1b5269(0x1b19)];_0x2f0c48+=0x1)!0x0!==_0x5c3018[_0x2f0c48]['frozen']&&(_0x4c1359(_0x5c3018,_0x2f0c48,_0x315564),!0x1===_0x308aeb&&_0xeecead(_0x5c3018,_0x2f0c48,_0x315564),_0x283bc7(_0x5c3018,_0x2f0c48,_0x315564),_0x49daff(_0x5c3018,_0x2f0c48,_0x315564,_0x1f5f38),_0x308101(_0x5c3018,_0x2f0c48,_0x315564),_0x4dcc5d(_0x5c3018,_0x2f0c48,_0x315564));},_0x2827c6=function(_0xfb6540,_0x32061f,_0x53343b,_0x54aff7){const _0x28b3aa=_0x37e46c;for(let _0x38b813=0x0;_0x38b813<_0xfb6540[_0x28b3aa(0x1b19)];_0x38b813+=0x1){let _0x319010=_0x143d1c(_0xfb6540,_0x38b813,_0x32061f);_0xc48397(_0xfb6540,_0x38b813,_0x32061f),_0x319010=_0x319010||_0x4ac1fd(_0xfb6540,_0x38b813,_0x32061f),_0x319010=_0x319010||_0x20b124(_0xfb6540,_0x38b813,_0x32061f);}for(let _0xbfed27=0x0;_0xbfed27<_0xfb6540[_0x28b3aa(0x1b19)];_0xbfed27+=0x1)!0x0!==_0xfb6540[_0xbfed27]['frozen']&&(_0x1e443b(_0xfb6540,_0xbfed27,_0x53343b,_0x54aff7),_0x4f82bb(_0xfb6540,_0xbfed27,_0x53343b,_0x54aff7),_0x2739cb(_0xfb6540,_0xbfed27,_0x53343b),_0x4eb59f(_0xfb6540,_0xbfed27,_0x32061f,_0x53343b),_0x444e87(_0xfb6540,_0xbfed27,_0x32061f,_0x53343b));_0x432170(_0xfb6540,_0x53343b);},_0x578560=function(_0x521092){const _0x3491ab=_0x37e46c,{methods:_0x2ba2e9,model:_0x5a5f0f,world:_0x53b367}=_0x521092;let _0x3cfee5=_0x521092['docs'];!function(_0x4e4597,_0x5e3233,_0x2a35a7){const _0x8065c4=a0_0x11e7;_0x4e4597[_0x8065c4(0xa21)](_0x975e2b=>{_0x19456a(_0x975e2b,0x0,_0x5e3233,_0x2a35a7);});}(_0x3cfee5,_0x5a5f0f,_0x53b367);let _0x117921=_0x2ba2e9[_0x3491ab(0x21c9)][_0x3491ab(0x2f6)](_0x3cfee5);for(let _0x340b5b=0x0;_0x340b5b<_0x117921[_0x3491ab(0x1b19)];_0x340b5b+=0x1){let _0x2bc82d=_0x117921[_0x340b5b];const 
_0x533da3=_0x68690(_0x2bc82d);_0x3ebdf9(_0x2bc82d,_0x5a5f0f,_0x53b367,_0x533da3),_0x2827c6(_0x2bc82d,_0x5a5f0f,_0x53b367,_0x533da3);}return _0x117921;},_0x3c3eb2={'Possessive':_0x6fe3f9=>{const _0x1dee63=_0x37e46c;let _0x33b767=_0x6fe3f9[_0x1dee63(0x192e)]||_0x6fe3f9[_0x1dee63(0x47d)]||_0x6fe3f9[_0x1dee63(0x4006)];return _0x33b767=_0x33b767[_0x1dee63(0x741)](/'s$/,''),_0x33b767;},'Plural':(_0x2eaa0e,_0x41e35a)=>{const _0x23346f=_0x37e46c;let _0x25ef60=_0x2eaa0e[_0x23346f(0x192e)]||_0x2eaa0e[_0x23346f(0x47d)]||_0x2eaa0e[_0x23346f(0x4006)];return _0x41e35a['methods']['two']['transform'][_0x23346f(0x4d62)]['toSingular'](_0x25ef60,_0x41e35a['model']);},'Copula':()=>'is','PastTense':(_0x40dc20,_0x3e1bf6)=>{const _0x49a10a=_0x37e46c;let _0x722b61=_0x40dc20[_0x49a10a(0x192e)]||_0x40dc20[_0x49a10a(0x47d)]||_0x40dc20[_0x49a10a(0x4006)];return _0x3e1bf6[_0x49a10a(0x1578)][_0x49a10a(0x21c9)][_0x49a10a(0x5161)]['verb'][_0x49a10a(0x25f2)](_0x722b61,_0x3e1bf6[_0x49a10a(0x1556)],'PastTense');},'Gerund':(_0x577a46,_0x43357e)=>{const _0x501c8b=_0x37e46c;let _0x1e884c=_0x577a46['machine']||_0x577a46[_0x501c8b(0x47d)]||_0x577a46[_0x501c8b(0x4006)];return _0x43357e[_0x501c8b(0x1578)][_0x501c8b(0x21c9)]['transform'][_0x501c8b(0x4134)][_0x501c8b(0x25f2)](_0x1e884c,_0x43357e[_0x501c8b(0x1556)],_0x501c8b(0x42ce));},'PresentTense':(_0x100267,_0x2daef0)=>{const _0x42d553=_0x37e46c;let _0x116c3d=_0x100267[_0x42d553(0x192e)]||_0x100267['normal']||_0x100267['text'];return _0x100267[_0x42d553(0x521a)][_0x42d553(0x3170)]('Infinitive')?_0x116c3d:_0x2daef0[_0x42d553(0x1578)][_0x42d553(0x21c9)]['transform']['verb']['toInfinitive'](_0x116c3d,_0x2daef0[_0x42d553(0x1556)],_0x42d553(0x2c88));},'Comparative':(_0x16aa27,_0x27cec7)=>{const _0x4fc785=_0x37e46c;let _0x225067=_0x16aa27[_0x4fc785(0x192e)]||_0x16aa27[_0x4fc785(0x47d)]||_0x16aa27['text'];return 
_0x27cec7[_0x4fc785(0x1578)]['two'][_0x4fc785(0x5161)][_0x4fc785(0x2b3f)][_0x4fc785(0x22b6)](_0x225067,_0x27cec7[_0x4fc785(0x1556)]);},'Superlative':(_0x40c543,_0x276e87)=>{const _0x8658dc=_0x37e46c;let _0x191bfc=_0x40c543[_0x8658dc(0x192e)]||_0x40c543[_0x8658dc(0x47d)]||_0x40c543[_0x8658dc(0x4006)];return _0x276e87['methods'][_0x8658dc(0x21c9)]['transform']['adjective']['fromSuperlative'](_0x191bfc,_0x276e87[_0x8658dc(0x1556)]);},'Adverb':(_0x4866e5,_0x31b833)=>{const _0xf5a6bc=_0x37e46c,{fromAdverb:_0x36264e}=_0x31b833[_0xf5a6bc(0x1578)][_0xf5a6bc(0x21c9)][_0xf5a6bc(0x5161)][_0xf5a6bc(0x2b3f)];return _0x36264e(_0x4866e5[_0xf5a6bc(0x192e)]||_0x4866e5[_0xf5a6bc(0x47d)]||_0x4866e5[_0xf5a6bc(0x4006)]);}},_0x2de5d9=function(_0x3b5cb8){const _0x3c2d73=_0x37e46c,_0x4a0a2c=_0x3b5cb8[_0x3c2d73(0x4657)],_0x5025fe=Object['keys'](_0x3c3eb2);_0x3b5cb8['docs'][_0x3c2d73(0xa21)](_0x452b0b=>{const _0x5aa077=_0x3c2d73;for(let _0x432202=0x0;_0x432202<_0x452b0b[_0x5aa077(0x1b19)];_0x432202+=0x1){const _0x2af746=_0x452b0b[_0x432202];for(let _0x22782d=0x0;_0x22782d<_0x5025fe['length'];_0x22782d+=0x1)if(_0x2af746[_0x5aa077(0x521a)][_0x5aa077(0x3170)](_0x5025fe[_0x22782d])){let _0x2cb927=(0x0,_0x3c3eb2[_0x5025fe[_0x22782d]])(_0x2af746,_0x4a0a2c);_0x2af746[_0x5aa077(0x47d)]!==_0x2cb927&&(_0x2af746['root']=_0x2cb927);break;}}});},_0x1f62e9={'Adverb':'RB','Comparative':_0x37e46c(0x32e1),'Superlative':_0x37e46c(0x2c0e),'Adjective':'JJ','TO':_0x37e46c(0x37f1),'Modal':'MD','Auxiliary':'MD','Gerund':_0x37e46c(0x177d),'PastTense':'VBD','Participle':'VBN','PresentTense':'VBZ','Infinitive':'VB','Particle':'RP','Verb':'VB','Pronoun':'PRP','Cardinal':'CD','Conjunction':'CC','Determiner':'DT','Preposition':'IN','QuestionWord':'WP','Expression':'UH','Possessive':_0x37e46c(0x21b),'ProperNoun':_0x37e46c(0x2a7b),'Person':'NNP','Place':'NNP','Organization':'NNP','Singular':'NN','Plural':_0x37e46c(0x6a4),'Noun':'NN','There':'EX'},_0x2e0fd9=function(_0x180d97){const 
_0x58c417=_0x37e46c;_0x180d97['compute']('tagRank'),_0x180d97[_0x58c417(0x204b)][_0x58c417(0xa21)](_0x1e4400=>{_0x1e4400['forEach'](_0x119cb9=>{const _0xacecaa=a0_0x11e7;_0x119cb9[_0xacecaa(0x42e1)]=function(_0x24de46){const _0x1a717c=_0xacecaa;if(_0x24de46[_0x1a717c(0x521a)][_0x1a717c(0x3170)]('ProperNoun')&&_0x24de46[_0x1a717c(0x521a)][_0x1a717c(0x3170)](_0x1a717c(0x25f7)))return _0x1a717c(0x2c64);if(_0x24de46[_0x1a717c(0x521a)][_0x1a717c(0x3170)](_0x1a717c(0x3d93))&&_0x24de46[_0x1a717c(0x521a)]['has'](_0x1a717c(0x2394)))return _0x1a717c(0x9f8);if(_0x1a717c(0xdd3)===_0x24de46[_0x1a717c(0x47d)])return'EX';if('to'===_0x24de46[_0x1a717c(0x47d)])return'TO';let _0x21826e=_0x24de46[_0x1a717c(0x2cf)]||[];for(let _0x2dbeed=0x0;_0x2dbeed<_0x21826e[_0x1a717c(0x1b19)];_0x2dbeed+=0x1)if(_0x1f62e9[_0x1a717c(0x2427)](_0x21826e[_0x2dbeed]))return _0x1f62e9[_0x21826e[_0x2dbeed]];return null;}(_0x119cb9);});});},_0x5ed97e={'preTagger':_0x578560,'root':_0x2de5d9,'penn':_0x2e0fd9},_0x3765e0=[_0x37e46c(0x904),'Place',_0x37e46c(0x4c36)],_0x331e79={'Noun':{'not':[_0x37e46c(0x487b),_0x37e46c(0x4972),'Adverb','Value',_0x37e46c(0x3b3e)]},'Singular':{'is':'Noun','not':[_0x37e46c(0x25f7),_0x37e46c(0x163f)]},'ProperNoun':{'is':_0x37e46c(0x1786)},'Person':{'is':_0x37e46c(0x1e9f),'also':[_0x37e46c(0xb7e)],'not':['Place',_0x37e46c(0x4c36),_0x37e46c(0x448e)]},'FirstName':{'is':_0x37e46c(0x904)},'MaleName':{'is':'FirstName','not':[_0x37e46c(0x1587),'LastName']},'FemaleName':{'is':_0x37e46c(0xa17),'not':[_0x37e46c(0x3cc2),_0x37e46c(0x30cd)]},'LastName':{'is':'Person','not':['FirstName']},'Honorific':{'is':_0x37e46c(0x904),'not':['FirstName','LastName',_0x37e46c(0x3a43)]},'Place':{'is':'Singular','not':[_0x37e46c(0x904),_0x37e46c(0x4c36)]},'Country':{'is':_0x37e46c(0x1c11),'also':['ProperNoun'],'not':['City']},'City':{'is':_0x37e46c(0x1c11),'also':['ProperNoun'],'not':[_0x37e46c(0x4138)]},'Region':{'is':_0x37e46c(0x1c11),'also':['ProperNoun']},'Address':{},'Organization':{'is':_0x37e46c(0xb7e),'not
':[_0x37e46c(0x904),_0x37e46c(0x1c11)]},'SportsTeam':{'is':'Organization'},'School':{'is':'Organization'},'Company':{'is':_0x37e46c(0x4c36)},'Plural':{'is':_0x37e46c(0x1786),'not':[_0x37e46c(0x1e9f),_0x37e46c(0x163f)]},'Uncountable':{'is':_0x37e46c(0x1786)},'Pronoun':{'is':_0x37e46c(0x1786),'not':_0x3765e0},'Actor':{'is':_0x37e46c(0x1786),'not':['Place',_0x37e46c(0x4c36)]},'Activity':{'is':_0x37e46c(0x1786),'not':[_0x37e46c(0x904),_0x37e46c(0x1c11)]},'Unit':{'is':'Noun','not':_0x3765e0},'Demonym':{'is':_0x37e46c(0x1786),'also':[_0x37e46c(0xb7e)],'not':_0x3765e0},'Possessive':{'is':_0x37e46c(0x1786)},'Reflexive':{'is':'Pronoun'}},_0x1c20ba={'Adjective':{'not':[_0x37e46c(0x1786),_0x37e46c(0x487b),_0x37e46c(0x2cbd),_0x37e46c(0x3a43)]},'Comparable':{'is':'Adjective'},'Comparative':{'is':_0x37e46c(0x4972)},'Superlative':{'is':_0x37e46c(0x4972),'not':['Comparative']},'NumberRange':{},'Adverb':{'not':[_0x37e46c(0x1786),_0x37e46c(0x487b),_0x37e46c(0x4972),_0x37e46c(0x3a43)]},'Determiner':{'not':[_0x37e46c(0x1786),_0x37e46c(0x487b),_0x37e46c(0x4972),_0x37e46c(0x2cbd),'QuestionWord',_0x37e46c(0x37f1)]},'Conjunction':{'not':[_0x37e46c(0x1786),_0x37e46c(0x487b),_0x37e46c(0x4972),_0x37e46c(0x2cbd),'Value','QuestionWord']},'Preposition':{'not':['Noun',_0x37e46c(0x487b),_0x37e46c(0x4972),_0x37e46c(0x2cbd),_0x37e46c(0x3609),_0x37e46c(0x3b3e)]},'QuestionWord':{'not':[_0x37e46c(0x3b3e)]},'Currency':{'is':_0x37e46c(0x1786)},'Expression':{'not':[_0x37e46c(0x1786),'Adjective','Verb',_0x37e46c(0x2cbd)]},'Abbreviation':{},'Url':{'not':['HashTag',_0x37e46c(0xe0f),_0x37e46c(0x487b),_0x37e46c(0x4972),'Value',_0x37e46c(0x1221),_0x37e46c(0x2278)]},'PhoneNumber':{'not':[_0x37e46c(0x47e6),_0x37e46c(0x487b),_0x37e46c(0x4972),'Value',_0x37e46c(0x1221),_0x37e46c(0x2278)]},'HashTag':{},'AtMention':{'is':'Noun','not':[_0x37e46c(0x47e6),'Email']},'Emoji':{'not':[_0x37e46c(0x47e6),_0x37e46c(0x487b),_0x37e46c(0x4972),_0x37e46c(0x3a43),'AtMention']},'Emoticon':{'not':[_0x37e46c(0x47e6),_0x37e46c(0x487b),
'Adjective','Value',_0x37e46c(0x1221)]},'Email':{'not':['HashTag',_0x37e46c(0x487b),_0x37e46c(0x4972),_0x37e46c(0x3a43),_0x37e46c(0x1221)]},'Acronym':{'not':['Plural',_0x37e46c(0x3f8),'Pronoun',_0x37e46c(0x448e)]},'Negative':{'not':[_0x37e46c(0x1786),_0x37e46c(0x4972),_0x37e46c(0x3a43),_0x37e46c(0x804)]},'Condition':{'not':[_0x37e46c(0x487b),_0x37e46c(0x4972),'Noun',_0x37e46c(0x3a43)]},'There':{'not':[_0x37e46c(0x487b),'Adjective',_0x37e46c(0x1786),_0x37e46c(0x3a43),_0x37e46c(0x37f1),_0x37e46c(0x326f)]},'Prefix':{'not':[_0x37e46c(0x1b3e),_0x37e46c(0x45e2),_0x37e46c(0xb7e)]},'Hyphenated':{}};let _0x2cbf79=Object['assign']({},_0x331e79,{'Verb':{'not':[_0x37e46c(0x1786),_0x37e46c(0x4972),_0x37e46c(0x2cbd),_0x37e46c(0x3a43),_0x37e46c(0x804)]},'PresentTense':{'is':_0x37e46c(0x487b),'not':['PastTense',_0x37e46c(0x2b4d)]},'Infinitive':{'is':_0x37e46c(0x2c88),'not':[_0x37e46c(0x42ce)]},'Imperative':{'is':_0x37e46c(0x487b),'not':[_0x37e46c(0xe52),_0x37e46c(0x42ce),_0x37e46c(0x49fa)]},'Gerund':{'is':_0x37e46c(0x2c88),'not':[_0x37e46c(0x49fa)]},'PastTense':{'is':_0x37e46c(0x487b),'not':[_0x37e46c(0x2c88),_0x37e46c(0x42ce),_0x37e46c(0x2b4d)]},'FutureTense':{'is':'Verb','not':[_0x37e46c(0x2c88),_0x37e46c(0xe52)]},'Copula':{'is':_0x37e46c(0x487b)},'Modal':{'is':'Verb','not':[_0x37e46c(0x2631)]},'Participle':{'is':_0x37e46c(0xe52)},'Auxiliary':{'is':'Verb','not':['PastTense','PresentTense','Gerund','Conjunction']},'PhrasalVerb':{'is':'Verb'},'Particle':{'is':_0x37e46c(0x39b6),'not':['PastTense',_0x37e46c(0x2c88),_0x37e46c(0x49fa),_0x37e46c(0x42ce)]},'Passive':{'is':_0x37e46c(0x487b)}},{'Value':{'not':[_0x37e46c(0x487b),_0x37e46c(0x4972),_0x37e46c(0x2cbd)]},'Ordinal':{'is':'Value','not':[_0x37e46c(0xab2)]},'Cardinal':{'is':_0x37e46c(0x3a43),'not':[_0x37e46c(0x4eea)]},'Fraction':{'is':'Value','not':['Noun']},'Multiple':{'is':_0x37e46c(0xc93)},'RomanNumeral':{'is':'Cardinal','not':[_0x37e46c(0xc93)]},'TextValue':{'is':_0x37e46c(0x3a43),'not':[_0x37e46c(0x466e)]},'NumericValue':{'is':
_0x37e46c(0x3a43),'not':[_0x37e46c(0xc93)]},'Money':{'is':_0x37e46c(0xab2)},'Percent':{'is':_0x37e46c(0x3a43)}},{'Date':{'not':[_0x37e46c(0x487b),_0x37e46c(0x2cbd),_0x37e46c(0x4972)]},'Month':{'is':_0x37e46c(0x448e),'also':[_0x37e46c(0x1786)],'not':[_0x37e46c(0x37ec),_0x37e46c(0x14dd),_0x37e46c(0x1703)]},'WeekDay':{'is':_0x37e46c(0x448e),'also':[_0x37e46c(0x1786)]},'Year':{'is':_0x37e46c(0x448e),'not':[_0x37e46c(0x3f8)]},'FinancialQuarter':{'is':_0x37e46c(0x448e),'not':_0x37e46c(0x2365)},'Holiday':{'is':'Date','also':[_0x37e46c(0x1786)]},'Season':{'is':'Date'},'Timezone':{'is':'Date','also':[_0x37e46c(0x1786)],'not':[_0x37e46c(0xb7e)]},'Time':{'is':_0x37e46c(0x448e),'not':[_0x37e46c(0x1221)]},'Duration':{'is':_0x37e46c(0x448e),'also':[_0x37e46c(0x1786)]}},_0x1c20ba);const _0x377e12={'compute':_0x5ed97e,'methods':_0x2db37b,'model':_0x31db35,'tags':_0x2cbf79,'hooks':[_0x37e46c(0x2005)]},_0x320a7c=/[,)"';:\-–—.…]/,_0xd5f790=function(_0x2e2296,_0x2e94cd){const _0x1fd490=_0x37e46c;if(!_0x2e2296[_0x1fd490(0x2108)])return;let _0x43e5e9=_0x2e2296[_0x1fd490(0x50a1)]();for(let _0x592566=0x0;_0x592566<_0x43e5e9['length']-0x1;_0x592566++){const _0x42a6f2=_0x43e5e9[_0x592566];if(_0x320a7c[_0x1fd490(0x1769)](_0x42a6f2['post']))return;}_0x43e5e9[0x0][_0x1fd490(0x4570)]=_0x43e5e9[0x0]['normal'],_0x43e5e9[0x0][_0x1fd490(0x4006)]+=_0x2e94cd,_0x43e5e9[0x0][_0x1fd490(0x47d)]+=_0x2e94cd,_0x43e5e9[_0x1fd490(0x384c)](0x1)[_0x1fd490(0xa21)](_0x232755=>{const _0x4f4aec=_0x1fd490;_0x232755['implicit']=_0x232755[_0x4f4aec(0x47d)],_0x232755[_0x4f4aec(0x4006)]='',_0x232755[_0x4f4aec(0x47d)]='';});for(let _0x5b2b36=0x0;_0x5b2b36<_0x43e5e9[_0x1fd490(0x1b19)]-0x1;_0x5b2b36++)_0x43e5e9[_0x5b2b36][_0x1fd490(0x24ce)]=_0x43e5e9[_0x5b2b36][_0x1fd490(0x24ce)][_0x1fd490(0x741)](/ /,'');},_0x59979e=function(){const _0x98c1ed=_0x37e46c;let _0x289595=this[_0x98c1ed(0xc1a)]('@hasContraction'),_0x25206a=_0x289595[_0x98c1ed(0x2d96)](_0x98c1ed(0x778));return 
_0xd5f790(_0x25206a,_0x98c1ed(0xa5e)),_0x25206a=_0x289595[_0x98c1ed(0x2d96)]('(he|she|they|it|we|you)\x20will'),_0xd5f790(_0x25206a,'\x27ll'),_0x25206a=_0x289595[_0x98c1ed(0x2d96)](_0x98c1ed(0x1751)),_0xd5f790(_0x25206a,'\x27s'),_0x25206a=_0x289595[_0x98c1ed(0x2d96)](_0x98c1ed(0x3a6c)),_0xd5f790(_0x25206a,'\x27s'),_0x25206a=_0x289595[_0x98c1ed(0x2d96)]('#Person\x20would'),_0xd5f790(_0x25206a,'\x27d'),_0x25206a=_0x289595['match'](_0x98c1ed(0x2af3)),_0xd5f790(_0x25206a,_0x98c1ed(0x2dc2)),_0x25206a=_0x289595[_0x98c1ed(0x2d96)](_0x98c1ed(0x4c5d)),_0xd5f790(_0x25206a,_0x98c1ed(0x3b41)),_0x25206a=_0x289595[_0x98c1ed(0x2d96)](_0x98c1ed(0x30d9)),_0xd5f790(_0x25206a,_0x98c1ed(0x3b41)),_0x25206a=_0x289595[_0x98c1ed(0x2d96)](_0x98c1ed(0x1e28)),_0xd5f790(_0x25206a,'\x27m'),_0x25206a=_0x289595['match']('going\x20to'),this;},_0x5f14f5=/^\p{Lu}[\p{Ll}'’]/u,_0x135441=function(_0x4d83a0){const _0x3e0c52=_0x37e46c;class _0x4e271f extends _0x4d83a0{constructor(_0x31cb28,_0x41d88b,_0x4e9c9a){super(_0x31cb28,_0x41d88b,_0x4e9c9a),this['viewType']='Contraction';}['expand'](){const _0x2dfff6=a0_0x11e7;return this[_0x2dfff6(0x204b)][_0x2dfff6(0xa21)](_0x1b6137=>{const _0x8f1f3a=_0x2dfff6;let _0x2599fc=_0x5f14f5['test'](_0x1b6137[0x0][_0x8f1f3a(0x4006)]);_0x1b6137['forEach']((_0x21b90a,_0x447389)=>{const _0x2ce4df=_0x8f1f3a;_0x21b90a[_0x2ce4df(0x4006)]=_0x21b90a[_0x2ce4df(0x4570)]||'',delete _0x21b90a[_0x2ce4df(0x4570)],_0x447389<_0x1b6137[_0x2ce4df(0x1b19)]-0x1&&''===_0x21b90a[_0x2ce4df(0x24ce)]&&(_0x21b90a[_0x2ce4df(0x24ce)]+='\x20'),_0x21b90a[_0x2ce4df(0x1701)]=!0x0;}),_0x2599fc&&(_0x1b6137[0x0]['text']=function(_0x2679d9=''){const _0x19d239=_0x8f1f3a;return _0x2679d9['replace'](/^ *[a-z\u00C0-\u00FF]/,_0x5220bc=>_0x5220bc[_0x19d239(0x44ff)]());}(_0x1b6137[0x0]['text']));}),this[_0x2dfff6(0x23df)](_0x2dfff6(0x47d)),this;}}_0x4d83a0[_0x3e0c52(0x3b3c)][_0x3e0c52(0x8cf)]=function(){const _0x12f993=_0x3e0c52;let _0x865bd4=this[_0x12f993(0x2d96)]('@hasContraction+');return new 
_0x4e271f(this[_0x12f993(0x295)],_0x865bd4[_0x12f993(0x43e4)]);},_0x4d83a0[_0x3e0c52(0x3b3c)][_0x3e0c52(0x1924)]=_0x59979e;},_0x1810bd=function(_0x16d323,_0x445322,_0x3930fd){const _0x4ce402=_0x37e46c;let [_0x5535e0,_0x150f19]=_0x445322;_0x3930fd&&0x0!==_0x3930fd['length']&&(_0x3930fd=_0x3930fd[_0x4ce402(0x4833)]((_0x1fd6c3,_0x4b84c8)=>(_0x1fd6c3[_0x4ce402(0x4570)]=_0x1fd6c3[_0x4ce402(0x4006)],_0x1fd6c3[_0x4ce402(0x192e)]=_0x1fd6c3[_0x4ce402(0x4006)],_0x1fd6c3['pre']='',_0x1fd6c3[_0x4ce402(0x24ce)]='',_0x1fd6c3[_0x4ce402(0x4006)]='',_0x1fd6c3[_0x4ce402(0x47d)]='',_0x1fd6c3[_0x4ce402(0x3bb5)]=[_0x5535e0,_0x150f19+_0x4b84c8],_0x1fd6c3)),_0x3930fd[0x0]&&(_0x3930fd[0x0]['pre']=_0x16d323[_0x5535e0][_0x150f19][_0x4ce402(0x1228)],_0x3930fd[_0x3930fd[_0x4ce402(0x1b19)]-0x1]['post']=_0x16d323[_0x5535e0][_0x150f19][_0x4ce402(0x24ce)],_0x3930fd[0x0][_0x4ce402(0x4006)]=_0x16d323[_0x5535e0][_0x150f19][_0x4ce402(0x4006)],_0x3930fd[0x0]['normal']=_0x16d323[_0x5535e0][_0x150f19][_0x4ce402(0x47d)]),_0x16d323[_0x5535e0][_0x4ce402(0x4986)](_0x150f19,0x1,..._0x3930fd));},_0x5da12c=/'/,_0x202429=new Set([_0x37e46c(0x72a),'become']),_0x51ae2e=new Set(['what','how','when','if','too']);let _0x366b96=new Set([_0x37e46c(0x1ae6),_0x37e46c(0x1411),'enough']);const _0x269925=function(_0x206288,_0x2842d2){const _0x35bce1=_0x37e46c;let _0x27d877=_0x206288[_0x2842d2][_0x35bce1(0x47d)][_0x35bce1(0x1117)](_0x5da12c)[0x0];if(_0x35bce1(0x1e61)===_0x27d877)return[_0x27d877,'us'];if('there'===_0x27d877){let _0x18b45f=_0x206288[_0x2842d2+0x1];if(_0x18b45f&&_0x18b45f['tags']['has'](_0x35bce1(0x25f7)))return[_0x27d877,_0x35bce1(0x9d5)];}return _0x35bce1(0x3170)===((_0x2f951f,_0x24097a)=>{const _0x1bedd6=_0x35bce1;for(let _0x42420f=_0x24097a+0x1;_0x42420f<_0x2f951f[_0x1bedd6(0x1b19)];_0x42420f+=0x1){let _0x42913d=_0x2f951f[_0x42420f];if(_0x202429['has'](_0x42913d[_0x1bedd6(0x47d)]))return 
_0x1bedd6(0x3170);if(_0x51ae2e[_0x1bedd6(0x3170)](_0x42913d['normal']))return'is';if(_0x42913d['tags'][_0x1bedd6(0x3170)](_0x1bedd6(0x42ce)))return'is';if(_0x42913d['tags']['has']('Determiner'))return'is';if(_0x42913d[_0x1bedd6(0x521a)][_0x1bedd6(0x3170)](_0x1bedd6(0x4972)))return'is';if('Adj|Past'===_0x42913d[_0x1bedd6(0x857)]&&_0x2f951f[_0x42420f+0x1]){if(_0x366b96['has'](_0x2f951f[_0x42420f+0x1]['normal']))return'is';if(_0x2f951f[_0x42420f+0x1][_0x1bedd6(0x521a)][_0x1bedd6(0x3170)](_0x1bedd6(0x326f)))return'is';}if(_0x42913d[_0x1bedd6(0x521a)]['has'](_0x1bedd6(0xe52)))return _0x2f951f[_0x42420f+0x1]&&'for'===_0x2f951f[_0x42420f+0x1][_0x1bedd6(0x47d)]?'is':'has';}return'is';})(_0x206288,_0x2842d2)?[_0x27d877,_0x35bce1(0x3170)]:[_0x27d877,'is'];},_0xe7354a=/'/,_0x35ccb6=new Set([_0x37e46c(0x4fea),_0x37e46c(0x37e),_0x37e46c(0x5097),'it',_0x37e46c(0x47f4)]),_0x2884fe=new Set([_0x37e46c(0x1065),'be']),_0x2dfc90=function(_0x45e962,_0x16f523){const _0x3bfc1b=_0x37e46c;let _0xf5212=_0x45e962[_0x16f523][_0x3bfc1b(0x47d)][_0x3bfc1b(0x1117)](_0xe7354a)[0x0];return _0x3bfc1b(0x3981)===_0xf5212||'what'===_0xf5212?[_0xf5212,_0x3bfc1b(0x4af8)]:_0x3bfc1b(0x47f4)===((_0x326fac,_0xf50ac5)=>{const _0x211a8b=_0x3bfc1b;for(let _0x4f8c93=_0xf50ac5+0x1;_0x4f8c93<_0x326fac[_0x211a8b(0x1b19)];_0x4f8c93+=0x1){let _0x548f84=_0x326fac[_0x4f8c93];if(_0x35ccb6['has'](_0x548f84[_0x211a8b(0x47d)]))return _0x211a8b(0x47f4);if(_0x2884fe[_0x211a8b(0x3170)](_0x548f84[_0x211a8b(0x47d)]))return'would';if(_0x548f84['tags'][_0x211a8b(0x3170)](_0x211a8b(0xe52))||_0x211a8b(0x3f72)===_0x548f84[_0x211a8b(0x857)])return _0x211a8b(0x47f4);if(_0x548f84[_0x211a8b(0x521a)]['has'](_0x211a8b(0x2c88))||_0x548f84[_0x211a8b(0x521a)][_0x211a8b(0x3170)](_0x211a8b(0x2631)))return'would';if(_0x548f84[_0x211a8b(0x521a)][_0x211a8b(0x3170)]('#Determiner'))return _0x211a8b(0x47f4);if(_0x548f84['tags'][_0x211a8b(0x3170)](_0x211a8b(0x4972)))return 
_0x211a8b(0x361a);}return!0x1;})(_0x45e962,_0x16f523)?[_0xf5212,_0x3bfc1b(0x47f4)]:[_0xf5212,_0x3bfc1b(0x361a)];},_0x3d124a=function(_0x5126dd,_0x44496a){const _0x593d38=_0x37e46c;if(_0x593d38(0x26f4)===_0x5126dd[_0x44496a][_0x593d38(0x47d)]||_0x593d38(0x43c5)===_0x5126dd[_0x44496a]['normal']){if(_0x5126dd[_0x44496a+0x1]&&_0x593d38(0x1d3d)===_0x5126dd[_0x44496a+0x1][_0x593d38(0x47d)])return[_0x593d38(0x1065)];let _0x1601bb=function(_0x58e2ac,_0x29a717){const _0x4c231b=_0x593d38;for(let _0x1ec82c=_0x29a717-0x1;_0x1ec82c>=0x0;_0x1ec82c-=0x1)if(_0x58e2ac[_0x1ec82c][_0x4c231b(0x521a)]['has'](_0x4c231b(0x1786))||_0x58e2ac[_0x1ec82c]['tags'][_0x4c231b(0x3170)]('Pronoun')||_0x58e2ac[_0x1ec82c]['tags'][_0x4c231b(0x3170)]('Plural')||_0x58e2ac[_0x1ec82c][_0x4c231b(0x521a)][_0x4c231b(0x3170)](_0x4c231b(0x1e9f)))return _0x58e2ac[_0x1ec82c];return null;}(_0x5126dd,_0x44496a);if(_0x1601bb){if('we'===_0x1601bb['normal']||_0x593d38(0x32b0)===_0x1601bb[_0x593d38(0x47d)])return[_0x593d38(0x9d5),_0x593d38(0xc1a)];if('i'===_0x1601bb['normal'])return['am','not'];if(_0x1601bb[_0x593d38(0x521a)]&&_0x1601bb[_0x593d38(0x521a)]['has'](_0x593d38(0x25f7)))return[_0x593d38(0x9d5),_0x593d38(0xc1a)];}return['is',_0x593d38(0xc1a)];}return[_0x5126dd[_0x44496a]['normal'][_0x593d38(0x741)](/n't/,''),_0x593d38(0xc1a)];},_0x227e84={'that':!0x0,'there':!0x0,'let':!0x0,'here':!0x0,'everywhere':!0x0},_0x41f4ca={'in':!0x0,'by':!0x0,'for':!0x0};let _0x18ffe5=new Set([_0x37e46c(0x1ae6),_0x37e46c(0x1411),_0x37e46c(0x11ad),_0x37e46c(0x2a60)]),_0x558a11=new Set(['is',_0x37e46c(0x9d5),'did',_0x37e46c(0x3813),_0x37e46c(0x3db),_0x37e46c(0x40ea),_0x37e46c(0x1684),_0x37e46c(0x47f4),_0x37e46c(0x1065)]);const _0x19819c=(_0x201270,_0x2f4bc3)=>{const _0x3eed40=_0x37e46c;let 
_0x2e5630=_0x201270[_0x2f4bc3];if(_0x227e84['hasOwnProperty'](_0x2e5630[_0x3eed40(0x192e)]||_0x2e5630[_0x3eed40(0x47d)]))return!0x1;if(_0x2e5630['tags']['has'](_0x3eed40(0x3d93)))return!0x0;if(_0x2e5630[_0x3eed40(0x521a)][_0x3eed40(0x3170)]('QuestionWord'))return!0x1;if(_0x3eed40(0x394d)===_0x2e5630[_0x3eed40(0x47d)]||'she\x27s'===_0x2e5630[_0x3eed40(0x47d)])return!0x1;let _0x36b3ef=_0x201270[_0x2f4bc3+0x1];if(!_0x36b3ef)return!0x0;if(_0x3eed40(0x3c25)===_0x2e5630[_0x3eed40(0x47d)])return!!_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0xf35));if(_0x3eed40(0xeda)==_0x36b3ef[_0x3eed40(0x857)]){let _0x59cd93=_0x201270[_0x2f4bc3+0x2];return _0x59cd93?!!_0x59cd93[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0x49fa))||('on'===_0x59cd93['normal']||_0x59cd93[_0x3eed40(0x47d)],!0x1):!(!_0x2e5630[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0x3b29))&&!_0x2e5630['tags'][_0x3eed40(0x3170)]('ProperNoun'));}if(_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0x487b)))return!!_0x36b3ef[_0x3eed40(0x521a)]['has'](_0x3eed40(0x2631))||!_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)]('Gerund')&&!!_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0x2c88));if(_0x3eed40(0x3ddd)===_0x36b3ef['switch']){let _0x16048b=_0x201270[_0x2f4bc3+0x2];if(!_0x16048b)return!0x1;if(_0x558a11[_0x3eed40(0x3170)](_0x16048b[_0x3eed40(0x47d)]))return!0x0;if(_0x18ffe5['has'](_0x16048b[_0x3eed40(0x47d)]))return!0x1;}if(_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)]('Noun')){let _0x377b15=_0x36b3ef[_0x3eed40(0x192e)]||_0x36b3ef['normal'];return 
_0x3eed40(0xfa2)!==_0x377b15&&_0x3eed40(0xdd3)!==_0x377b15&&_0x3eed40(0x4a82)!==_0x377b15&&(!_0x36b3ef['tags'][_0x3eed40(0x3170)](_0x3eed40(0x3d93))&&!(_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)]('ProperNoun')&&!_0x2e5630['tags'][_0x3eed40(0x3170)](_0x3eed40(0xb7e))));}if(_0x201270[_0x2f4bc3-0x1]&&!0x0===_0x41f4ca[_0x201270[_0x2f4bc3-0x1][_0x3eed40(0x47d)]])return!0x0;if(_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0x4972))){let _0x1709ec=_0x201270[_0x2f4bc3+0x2];if(!_0x1709ec)return!0x1;if(_0x1709ec[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0x1786))&&!_0x1709ec[_0x3eed40(0x521a)]['has'](_0x3eed40(0x2394))){let _0x5f32e9=_0x36b3ef[_0x3eed40(0x47d)];return _0x3eed40(0x2048)!==_0x5f32e9&&_0x3eed40(0x47e)!==_0x5f32e9&&_0x3eed40(0x1afa)!==_0x5f32e9;}return _0x3eed40(0x2d09)===_0x1709ec[_0x3eed40(0x857)];}return!!_0x36b3ef[_0x3eed40(0x521a)][_0x3eed40(0x3170)](_0x3eed40(0x3a43));},_0x2eb7e7=/'/,_0x109e90=function(_0x1813bd,_0x2aa4ac,_0x241397,_0x88c42c){const _0x227b18=_0x37e46c;let _0x50839f=_0x2aa4ac['update']();_0x50839f[_0x227b18(0x295)]=[_0x1813bd];let _0x15fd22=_0x241397+_0x88c42c;_0x241397>0x0&&(_0x241397-=0x1),_0x1813bd[_0x15fd22]&&(_0x15fd22+=0x1),_0x50839f[_0x227b18(0x232)]=[[0x0,_0x241397,_0x15fd22]],_0x50839f[_0x227b18(0x23df)]([_0x227b18(0x209c),_0x227b18(0x2c34),_0x227b18(0x2005),_0x227b18(0x52a)]),function(_0xd33ec1){const _0x3db6f0=_0x227b18;_0xd33ec1[_0x3db6f0(0xa21)]((_0x2adcf4,_0x2c8f48)=>{const _0x5ade9b=_0x3db6f0;_0x2adcf4['index']&&(_0x2adcf4[_0x5ade9b(0x3bb5)][0x1]=_0x2c8f48);});}(_0x1813bd);},_0x573f59={'d':(_0x438f9d,_0x5ed69f)=>_0x2dfc90(_0x438f9d,_0x5ed69f),'t':(_0x5e1f7d,_0x130ba6)=>_0x3d124a(_0x5e1f7d,_0x130ba6),'s':(_0xbc29be,_0x3429e0,_0x4cddbf)=>_0x19819c(_0xbc29be,_0x3429e0)?_0x4cddbf[_0x37e46c(0x1578)][_0x37e46c(0x1d8a)][_0x37e46c(0x4820)]([_0xbc29be[_0x3429e0]],_0x37e46c(0x3d93),_0x4cddbf,null,'2-contraction'):_0x269925(_0xbc29be,_0x3429e0)},_0x178249=function(_0x115217,_0x59d064){const _0x40c65f=_0x37e46c;let 
_0x2e9fa4=_0x59d064['fromText'](_0x115217['join']('\x20'));return _0x2e9fa4[_0x40c65f(0x23df)]('id'),_0x2e9fa4[_0x40c65f(0x204b)][0x0];},_0x10bc4b={'contractionTwo':_0x4b5770=>{let {world:_0x2e7ef0,document:_0x396375}=_0x4b5770;_0x396375['forEach']((_0x4f6b5c,_0x14bb39)=>{const _0x16b41a=a0_0x11e7;for(let _0x23fd28=_0x4f6b5c[_0x16b41a(0x1b19)]-0x1;_0x23fd28>=0x0;_0x23fd28-=0x1){if(_0x4f6b5c[_0x23fd28][_0x16b41a(0x4570)])return;let _0x115cc6=null;!0x0===_0x2eb7e7[_0x16b41a(0x1769)](_0x4f6b5c[_0x23fd28]['normal'])&&(_0x115cc6=_0x4f6b5c[_0x23fd28][_0x16b41a(0x47d)]['split'](_0x2eb7e7)[0x1]);let _0x1fb864=null;_0x573f59[_0x16b41a(0x2427)](_0x115cc6)&&(_0x1fb864=_0x573f59[_0x115cc6](_0x4f6b5c,_0x23fd28,_0x2e7ef0)),_0x1fb864&&(_0x1fb864=_0x178249(_0x1fb864,_0x4b5770),_0x1810bd(_0x396375,[_0x14bb39,_0x23fd28],_0x1fb864),_0x109e90(_0x396375[_0x14bb39],_0x4b5770,_0x23fd28,_0x1fb864['length']));}});}},_0x12183a={'compute':_0x10bc4b,'api':_0x135441,'hooks':['contractionTwo']},_0x73dcaa=_0x37e46c(0x746),_0x80ca34=_0x37e46c(0x4e7);let 
_0x48cdbd=[][_0x37e46c(0x1d1d)]([{'match':_0x37e46c(0x1000),'tag':_0x37e46c(0x3139),'reason':_0x37e46c(0x4da9)},{'match':_0x37e46c(0x419c),'tag':_0x37e46c(0x3139),'reason':'was-being'},{'match':_0x37e46c(0x2fe0),'tag':'Passive','reason':_0x37e46c(0x42a2)},{'match':'will\x20be\x20being?\x20(#PastTense|#Participle)','tag':_0x37e46c(0x3139),'reason':_0x37e46c(0x4127)},{'match':_0x37e46c(0xd21),'group':0x0,'tag':'Passive','reason':_0x37e46c(0x94e)}],[{'match':_0x37e46c(0xadf),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x4148)},{'match':_0x37e46c(0x5e9),'group':0x0,'tag':'Adjective','reason':_0x37e46c(0x4225)},{'match':_0x37e46c(0x136c),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'is-filled'},{'match':_0x37e46c(0x2362),'group':0x0,'tag':'Adjective','reason':'smoked-poutine'},{'match':_0x37e46c(0x488b),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x298d)},{'match':_0x37e46c(0x2f5b),'group':0x0,'tag':'Adjective','reason':_0x37e46c(0x44b2)},{'match':'#Copula\x20[fucked\x20up?]','group':0x0,'tag':'Adjective','reason':_0x37e46c(0x26f0)},{'match':_0x37e46c(0x39b9),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x1c86)},{'match':_0x37e46c(0x2627),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x2225)},{'match':_0x37e46c(0x23fb),'group':0x0,'notIf':'(all|even)','tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x42d6)},{'match':'the\x20[said]\x20#Noun','group':0x0,'tag':_0x37e46c(0x4972),'reason':'the-said-card'},{'match':'[#Hyphenated\x20(#Hyphenated\x20&&\x20#PastTense)]\x20(#Noun|#Conjunction)','group':0x0,'tag':_0x37e46c(0x4972),'notIf':_0x37e46c(0x1a77),'reason':_0x37e46c(0x1f18)},{'match':_0x37e46c(0x336),'group':0x0,'tag':_0x37e46c(0x4972),'notIf':_0x37e46c(0x1a77),'reason':_0x37e46c(0x46a5)},{'match':_0x37e46c(0x3407),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x3e1)},{'match':'(#Hyphenated\x20&&\x20#Value)\x20fold','tag':_0x37e46c(0x4972),'reason':_0x37e46c(0xf5b)},{'match':_0x37e46c(0x4fff),'tag':_0x37e46c(0x4972),'reason':'must-win'},{
'match':'(#Hyphenated\x20&&\x20#Infinitive)\x20#Hyphenated','tag':_0x37e46c(0x4972),'notIf':_0x37e46c(0xd8b),'reason':_0x37e46c(0x3a19)},{'match':_0x37e46c(0x392b),'tag':'Adverb\x20Adjective','reason':'bit-4'},{'match':'a\x20bit\x20much','tag':_0x37e46c(0x31cd),'reason':'bit-3'},{'match':_0x37e46c(0x27e5),'group':0x0,'tag':[_0x37e46c(0x4972),_0x37e46c(0x3823)],'reason':_0x37e46c(0x3a9)}],[{'match':_0x37e46c(0x2366),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x341e)},{'match':_0x37e46c(0xc90),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x1d45)},{'match':_0x37e46c(0x50ee),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x1ecb)},{'match':_0x37e46c(0x11f7),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':'was-still-walking'},{'match':_0x37e46c(0x2066)+_0x73dcaa,'tag':'#PresentTense\x20#Adverb','reason':'studies-hard'},{'match':_0x37e46c(0x4992)+_0x73dcaa+_0x37e46c(0xf88),'group':0x0,'notIf':_0x37e46c(0x212e),'tag':_0x37e46c(0x2cbd),'reason':'shops-direct'},{'match':'[#Plural]\x20a\x20lot','tag':_0x37e46c(0x2c88),'reason':'studies-a-lot'}],[{'match':'as\x20[#Gerund]\x20as','group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x28dc)},{'match':_0x37e46c(0x29de),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'more-gerund-than'},{'match':_0x37e46c(0x4c6),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x11f9)},{'match':_0x37e46c(0xfdd),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x1617)},{'match':_0x37e46c(0x3246),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x4f93)},{'match':_0x37e46c(0x3c29),'group':0x0,'tag':_0x37e46c(0x4972),'notIf':_0x37e46c(0x443d),'reason':'looking-annoying'},{'match':_0x37e46c(0x2819),'group':0x0,'tag':_0x37e46c(0x4972),'notIf':_0x37e46c(0x443d),'reason':'looked-amazing'},{'match':'[%Adj|Gerund%]\x20#Determiner','group':0x0,'tag':_0x37e46c(0x42ce),'reason':_0x37e46c(0x1288)},{'match':_0x37e46c(0x1183),'group':0x0,'tag':'Adjective','reason':_0x37e46c(0x42a6)},{'match':_0x37e46c(0x2d8e),'tag':_0x37
e46c(0xc8f),'reason':_0x37e46c(0xdf4)},{'match':_0x37e46c(0x5c2),'tag':_0x37e46c(0x4b7b),'reason':_0x37e46c(0x1bbd)},{'match':_0x37e46c(0x4fb0),'group':0x0,'tag':'Adjective','reason':'are-enduring-symbols'}],[{'match':_0x37e46c(0xac8),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'the-adj-is'},{'match':_0x37e46c(0x2e61),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x2346)},{'match':'(his|its)\x20[%Adj|Noun%]','group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x3fa8),'reason':_0x37e46c(0x3916)},{'match':_0x37e46c(0x32b1),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x93d)},{'match':_0x37e46c(0x2d05),'group':0x0,'tag':'Noun','reason':'have-fun'},{'match':_0x37e46c(0x3a56),'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x13d6)},{'match':_0x37e46c(0x110f),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x1f07)},{'match':_0x37e46c(0x19b4),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x4f5b)},{'match':'[brand\x20#Gerund?]\x20new','group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x400b)},{'match':_0x37e46c(0x4e34),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x14da)},{'match':_0x37e46c(0x2de1),'group':0x0,'tag':'Adjective','reason':_0x37e46c(0x355d)},{'match':_0x37e46c(0x31d),'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x22c5)},{'match':_0x37e46c(0x2a89),'tag':'Noun','notIf':_0x37e46c(0x1210),'reason':'the-south'},{'match':_0x37e46c(0x126c),'tag':_0x37e46c(0x4972),'notIf':'(this|that|#Comparative|#Superlative)','reason':'company-wide'},{'match':'#Determiner\x20[#Adjective]\x20(#Copula|#Determiner)','notIf':_0x37e46c(0x1461),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'the-poor'},{'match':_0x37e46c(0xb92),'notIf':_0x37e46c(0x2c4c),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'stable-foundations'}],[{'match':_0x37e46c(0x2171),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x12c5)},{'match':_0x37e46c(0x2c8f),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x4976)},{'match':_0x37e46c(0x3bbd),'group':0x0,'tag':_0x37e46c(0x
2cbd),'reason':_0x37e46c(0x12ab)},{'match':'[way]\x20#Comparative','group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x3412)},{'match':_0x37e46c(0xbb3),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x89e)},{'match':_0x37e46c(0x50fb),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':'all-verb'},{'match':_0x37e46c(0x18d0),'group':0x0,'notIf':_0x37e46c(0x1583),'tag':_0x37e46c(0x2cbd),'reason':'verb-like'},{'match':_0x37e46c(0x3b65),'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x212a)},{'match':'[even]\x20#Verb','group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x329a)},{'match':_0x37e46c(0x306a),'group':0x0,'tag':'Adverb','reason':_0x37e46c(0x1f8f)},{'match':_0x37e46c(0x37d),'group':0x0,'tag':'#Adverb','reason':_0x37e46c(0x3ea4)},{'match':_0x37e46c(0x67f),'tag':_0x37e46c(0x17d4),'reason':_0x37e46c(0x2755)},{'match':_0x37e46c(0x5003),'group':0x0,'tag':_0x37e46c(0x1a77),'reason':_0x37e46c(0x4c2b)},{'match':_0x37e46c(0x3b2d),'notIf':_0x37e46c(0x3800),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':'lazy-ly'},{'match':_0x37e46c(0x111f),'group':0x0,'tag':'Adverb','reason':_0x37e46c(0x4398)},{'match':'#Copula\x20[#Adverb]$','group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x4b15)},{'match':_0x37e46c(0x32c7),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x1d22)},{'match':_0x37e46c(0x4293),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':'super-strong'},{'match':_0x37e46c(0x4b0e),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x5a8)},{'match':_0x37e46c(0xcf2),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x3474)},{'match':_0x37e46c(0x5d7),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'a-close'},{'match':'#Gerund\x20#Adverb?\x20[close]','group':0x0,'tag':_0x37e46c(0x2cbd),'notIf':'(getting|becoming|feeling)','reason':_0x37e46c(0x27af)},{'match':_0x37e46c(0x40fc),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x30cb)},{'match':_0x37e46c(0x2d13),'group':0x0,'tag':'Adverb','notIf':'(#PhrasalVerb|#Copula)','reason':_0x37e46c(0x243d)},{'ma
tch':_0x37e46c(0x3e66),'group':0x0,'tag':_0x37e46c(0x2cbd),'notIf':'#PhrasalVerb','reason':_0x37e46c(0x1d1e)},{'match':'[later]\x20#PresentTense','group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x2c6c)},{'match':'#Determiner\x20[well]\x20!#PastTense?','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x2720)},{'match':'#Adjective\x20[enough]','group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x51d)}],[{'match':'[sun]\x20the\x20#Ordinal','tag':_0x37e46c(0x14dd),'reason':_0x37e46c(0x3404)},{'match':_0x37e46c(0x4733),'group':0x0,'tag':_0x37e46c(0x14dd),'reason':_0x37e46c(0x36b9)},{'match':_0x37e46c(0x33b6),'group':0x0,'tag':_0x37e46c(0x14dd),'reason':_0x37e46c(0x2164)},{'match':_0x37e46c(0x399d),'group':0x0,'tag':_0x37e46c(0x14dd),'reason':_0x37e46c(0x3280)},{'match':_0x37e46c(0x44a6),'group':0x0,'tag':_0x37e46c(0x14dd),'reason':'wed'},{'match':_0x37e46c(0x4bdd),'group':0x0,'tag':'Month','reason':_0x37e46c(0x1809)},{'match':_0x37e46c(0x35e6),'group':0x0,'tag':_0x37e46c(0x14dd),'reason':_0x37e46c(0x4388)},{'match':_0x37e46c(0x36c0),'group':0x0,'tag':_0x37e46c(0x1c7a),'reason':_0x37e46c(0x75d)},{'match':_0x37e46c(0x35c3),'tag':'#Date\x20#Month','reason':_0x37e46c(0x16fb)},{'match':_0x37e46c(0x22ed),'tag':'#Month\x20#Date\x20#Date','reason':_0x37e46c(0x38ff)},{'match':_0x37e46c(0x2ed6),'tag':'#Date\x20#Date\x20#Month','reason':_0x37e46c(0x34b7)},{'match':_0x37e46c(0x1dc9),'group':0x0,'tag':_0x37e46c(0x1c7a),'reason':_0x37e46c(0x202b)},{'match':_0x37e46c(0x503a),'group':0x0,'tag':_0x37e46c(0x1c7a),'reason':_0x37e46c(0x3b56)},{'match':_0x37e46c(0x1bf1),'group':0x0,'tag':'Verb','reason':_0x37e46c(0x2d45)},{'match':'[(march|may)]\x20#Adverb','group':0x0,'tag':'Verb','reason':'march-quickly'},{'match':_0x37e46c(0x3c28),'tag':_0x37e46c(0x1703),'reason':_0x37e46c(0x2950)}],[{'match':'#Holiday\x20(day|eve)','tag':_0x37e46c(0x3adc),'reason':_0x37e46c(0x2f68)},{'match':_0x37e46c(0x203),'tag':_0x37e46c(0x448e),'reason':_0x37e46c(0x5067)},{'match':'#Cardinal\x20#Mont
h','tag':_0x37e46c(0x448e),'reason':_0x37e46c(0x8fb)},{'match':_0x37e46c(0x410),'tag':_0x37e46c(0x448e),'reason':_0x37e46c(0x5200)},{'match':_0x37e46c(0x79d),'tag':'Date','reason':'month-the-value'},{'match':_0x37e46c(0x279f),'tag':_0x37e46c(0x448e),'reason':_0x37e46c(0x46c4)},{'match':_0x37e46c(0x1ff4),'tag':_0x37e46c(0x448e),'reason':_0x37e46c(0x28c0)},{'match':_0x37e46c(0x41d),'tag':_0x37e46c(0x448e),'reason':_0x37e46c(0x1c28)},{'match':_0x37e46c(0x31d5),'tag':'Date','reason':_0x37e46c(0x48f1)},{'match':_0x37e46c(0x1b84),'tag':'Date','reason':_0x37e46c(0x43f9)},{'match':_0x37e46c(0x5af),'tag':_0x37e46c(0x448e),'reason':_0x37e46c(0x2372)},{'match':_0x37e46c(0x34f7),'tag':'Timezone','reason':_0x37e46c(0x4a92)},{'match':_0x37e46c(0x14b8),'tag':_0x37e46c(0x3dd9),'reason':_0x37e46c(0x4938)},{'match':_0x37e46c(0x3ed5),'group':0x0,'tag':_0x37e46c(0x3dd9),'reason':_0x37e46c(0x1024)},{'match':'(central|western|eastern)\x20european\x20time','tag':_0x37e46c(0x3dd9),'reason':_0x37e46c(0x1271)}],[{'match':_0x37e46c(0xa8f),'group':0x0,'tag':_0x37e46c(0x1e9f),'reason':_0x37e46c(0x2aaf)},{'match':_0x37e46c(0x3350),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'more-noun'},{'match':'(right|rights)\x20of\x20.','tag':_0x37e46c(0x1786),'reason':'right-of'},{'match':_0x37e46c(0xe45),'group':0x0,'tag':_0x37e46c(0x1e9f),'reason':_0x37e46c(0x2c28)},{'match':_0x37e46c(0x42f2),'group':0x0,'tag':_0x37e46c(0x1e9f),'reason':'must-2'},{'match':_0x37e46c(0x6f0),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x2657)},{'match':_0x37e46c(0x4094),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x500f)},{'match':_0x37e46c(0x414e),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'determiner6'},{'match':_0x37e46c(0xe8d),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'first-thought'},{'match':_0x37e46c(0xc86),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x3800),'reason':'the-adj-verb'},{'match':_0x37e46c(0x3f4c),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x30dd)},{'match':_0x37e46c(0
x1b40),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x110e)},{'match':'(a|an|the)\x20[#Verb]\x20of','group':0x0,'tag':'Noun','reason':_0x37e46c(0x432)},{'match':_0x37e46c(0x4e30),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x503),'reason':_0x37e46c(0x4b9b)},{'match':_0x37e46c(0x39fc),'group':0x0,'notIf':_0x37e46c(0x503),'tag':_0x37e46c(0x1786),'reason':'ended-in-ruins'},{'match':_0x37e46c(0x7bb),'group':0x0,'tag':_0x37e46c(0x2394),'reason':_0x37e46c(0x25b2)},{'match':_0x37e46c(0x300a),'group':0x0,'tag':'Pronoun','reason':'u-pronoun-1'},{'match':'#Determiner\x20[(western|eastern|northern|southern|central)]\x20#Noun','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x5b5)},{'match':'(#Singular\x20&&\x20@hasHyphen)\x20#PresentTense','tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x8eb)},{'match':_0x37e46c(0x2c5a),'group':0x0,'tag':'Noun','reason':'is-no-verb'},{'match':'do\x20[so]','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x4fa5)},{'match':_0x37e46c(0xc0b),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x1e97)},{'match':_0x37e46c(0x12f7),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x3b51)},{'match':'(the|these)\x20[#Singular]\x20(were|are)','group':0x0,'tag':_0x37e46c(0x25f7),'reason':_0x37e46c(0x220a)},{'match':_0x37e46c(0x40d5),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x23c8)},{'match':'(the|those|these|a|an)\x20#Adjective?\x20[#PresentTense\x20#Particle?]','group':0x0,'tag':_0x37e46c(0x1786),'notIf':'(seem|appear|include|#Gerund|#Copula)','reason':'det-inf'},{'match':'#Noun\x20#Actor','tag':_0x37e46c(0x3b29),'notIf':_0x37e46c(0x319),'reason':_0x37e46c(0x464c)},{'match':_0x37e46c(0x1881),'tag':'Actor','reason':_0x37e46c(0x2477)},{'match':_0x37e46c(0x274d),'tag':_0x37e46c(0x3b29),'reason':'co-noun'},{'match':'[#Noun+]\x20#Actor','group':0x0,'tag':_0x37e46c(0x3b29),'notIf':_0x37e46c(0x24c2),'reason':_0x37e46c(0x4530)},{'match':_0x37e46c(0x22d),'tag':'Actor','reason':_0x37e46c(0x708)},{'match':_0x37e46
c(0x4d70),'tag':_0x37e46c(0x3b29),'reason':_0x37e46c(0x4850)},{'match':_0x37e46c(0x5072),'tag':_0x37e46c(0x3b29),'reason':_0x37e46c(0x4728)},{'match':'chief\x20of\x20#Noun+','tag':_0x37e46c(0x3b29),'reason':'chief-of-police'},{'match':_0x37e46c(0x4be9),'tag':'Actor','reason':_0x37e46c(0xdf9)},{'match':_0x37e46c(0x2f25),'group':0x0,'tag':_0x37e46c(0x1e9f),'reason':_0x37e46c(0x4ea4)},{'match':'#Verb\x20(a|an)\x20[#Value]$','group':0x0,'tag':_0x37e46c(0x1e9f),'reason':'did-a-value'},{'match':_0x37e46c(0x4086),'group':0x0,'tag':_0x37e46c(0x1e9f),'reason':_0x37e46c(0x20ee)},{'match':_0x37e46c(0x39a2),'tag':_0x37e46c(0x3d93),'reason':'name-poss'},{'match':_0x37e46c(0x3228),'tag':_0x37e46c(0x3d93),'reason':'org-possessive'},{'match':_0x37e46c(0x25d),'tag':'Possessive','reason':'place-possessive'},{'match':_0x37e46c(0x513c),'notIf':'(#Gerund|her)','tag':'Noun','reason':_0x37e46c(0x198e)},{'match':_0x37e46c(0x2900),'tag':_0x37e46c(0x3d93),'reason':_0x37e46c(0xbb9)},{'match':_0x37e46c(0x4c9d),'group':0x0,'unTag':_0x37e46c(0x3a43),'tag':_0x37e46c(0x1e9f),'reason':_0x37e46c(0x201f)},{'match':_0x37e46c(0xff2),'group':0x0,'unTag':_0x37e46c(0x3a43),'tag':_0x37e46c(0x25f7),'reason':_0x37e46c(0x2fca)},{'match':_0x37e46c(0x4296),'group':0x0,'tag':_0x37e46c(0x1e9f),'reason':_0x37e46c(0x3cc)},{'match':'a\x20[#Adjective]\x20#Preposition','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x195f)},{'match':_0x37e46c(0x17fb),'group':0x0,'tag':_0x37e46c(0x3b29),'reason':_0x37e46c(0x4f1b)},{'match':_0x37e46c(0x10e8),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x26b5)},{'match':_0x37e46c(0x508),'group':0x0,'tag':'Plural','reason':_0x37e46c(0x22d0)},{'match':_0x37e46c(0x51a6),'group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x4faa)},{'match':_0x37e46c(0x440e),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x144d)},{'match':_0x37e46c(0x88d),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'the-1992-classic'},{'match':_0x37e46c(0x48c8),'group':0x0,'tag':_0x37e46c(0x4972),'reason':
_0x37e46c(0x510d)},{'match':_0x37e46c(0x35a2),'group':0x0,'tag':'Possessive','reason':_0x37e46c(0x92a)},{'match':_0x37e46c(0x4d1f),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x2405)},{'match':_0x37e46c(0x18dd),'group':0x0,'tag':_0x37e46c(0x51ea),'reason':_0x37e46c(0x280b)},{'match':_0x37e46c(0x70a),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x3e4e),'reason':'left-her-boots'},{'match':'#Value\x20[%Plural|Verb%]','group':0x0,'tag':'Plural','notIf':_0x37e46c(0x521b),'reason':_0x37e46c(0x4aa6)},{'match':_0x37e46c(0x3a48),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':'(#Gerund|come|become)','reason':_0x37e46c(0x4e6)},{'match':_0x37e46c(0x1f5c),'tag':_0x37e46c(0x4445),'notIf':'#ProperNoun\x20#Noun','reason':'instant-access'},{'match':_0x37e46c(0xfb3),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x4aac)},{'match':_0x37e46c(0x35bb),'group':0x0,'tag':_0x37e46c(0x25f7),'notIf':_0x37e46c(0x869),'reason':_0x37e46c(0x35b7)},{'match':_0x37e46c(0xdd5),'group':0x0,'tag':'Plural','reason':_0x37e46c(0x4a0f)}],[{'match':_0x37e46c(0x4c57),'group':0x0,'tag':_0x37e46c(0x1e9f),'reason':_0x37e46c(0x1c80)},{'match':_0x37e46c(0x2040),'group':0x0,'ifNo':_0x37e46c(0x3800),'tag':_0x37e46c(0x25f7),'reason':_0x37e46c(0x2e2c)},{'match':_0x37e46c(0x56f),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'the-gerund-noun'},{'match':_0x37e46c(0x2fd0),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x41ea)},{'match':_0x37e46c(0x1108),'group':0x0,'tag':'Noun','notIf':_0x37e46c(0x14ae),'reason':_0x37e46c(0x112f)},{'match':'[#Gerund]\x20#Adverb?\x20not?\x20#Copula','group':0x0,'tag':_0x37e46c(0x4520),'reason':_0x37e46c(0x3233)},{'match':'#Copula\x20[(#Gerund|#Activity)]\x20#Copula','group':0x0,'tag':_0x37e46c(0x42ce),'reason':'are-doing-is'},{'match':_0x37e46c(0x4bc3),'group':0x0,'tag':_0x37e46c(0x4520),'reason':_0x37e46c(0x292f)},{'match':'#Singular\x20for\x20[%Noun|Gerund%]','group':0x0,'tag':_0x37e46c(0x42ce),'reason':_0x37e46c(0x2984)},{'match':_0x37e46c(0x3e79),'group':0x0,'tag':_0x37e46c(
0x42ce),'reason':'better-for-gerund'},{'match':_0x37e46c(0x332),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'keep-the-touching'}],[{'match':_0x37e46c(0x12ff),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x34f2)},{'match':_0x37e46c(0x46e),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x3f9f)},{'match':_0x37e46c(0x4a04),'group':0x0,'tag':'Noun','reason':'the-only-reason'},{'match':'(the|this|a|an)\x20[#Infinitive]\x20#Adverb?\x20#Verb','group':0x0,'tag':_0x37e46c(0x1786),'reason':'determiner5'},{'match':'#Determiner\x20#Adjective\x20#Adjective?\x20[#Infinitive]','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x51dd)},{'match':_0x37e46c(0x1cab),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'mexican-train'},{'match':_0x37e46c(0x4708),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x420c)},{'match':_0x37e46c(0x44bd),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x4b19)},{'match':_0x37e46c(0x18f2),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x4943)},{'match':_0x37e46c(0x4506),'group':0x0,'notIf':_0x37e46c(0x27e6),'tag':_0x37e46c(0x1786),'reason':'a-noun-inf'},{'match':'(a|an)\x20#Noun\x20[#Infinitive]$','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x3a50)},{'match':'#Gerund\x20#Adjective?\x20for\x20[#Infinitive]','group':0x0,'tag':'Noun','reason':_0x37e46c(0x24ae)},{'match':'about\x20[#Infinitive]','group':0x0,'tag':_0x37e46c(0x1e9f),'reason':'about-love'},{'match':_0x37e46c(0x359a),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x4f20)},{'match':_0x37e46c(0x380e),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x31b7)},{'match':_0x37e46c(0x4402),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x1ac2)},{'match':'number\x20of\x20[#PresentTense]','group':0x0,'tag':'Noun','reason':'number-of-x'},{'match':_0x37e46c(0x2d6a),'group':0x0,'tag':'Noun','reason':'teaches-x'},{'match':'(try|use|attempt|build|make)\x20[#Verb\x20#Particle?]','notIf':'(#Copula|#Noun|sure|fun|up)','group':0x0,'tag':_0x37e46c(0x1
786),'reason':_0x37e46c(0x2000)},{'match':'^[#Infinitive]\x20(is|was)','group':0x0,'tag':_0x37e46c(0x1786),'reason':'checkmate-is'},{'match':_0x37e46c(0x38f7),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x2097)},{'match':_0x37e46c(0x74d),'group':0x0,'tag':_0x37e46c(0x37f1),'reason':'cause-cuz'},{'match':'the\x20#Singular\x20[#Infinitive]\x20#Noun','group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x869),'reason':_0x37e46c(0x471c)},{'match':_0x37e46c(0x369f),'group':0x0,'tag':_0x37e46c(0x2c88),'reason':_0x37e46c(0x44c6)},{'match':'this\x20[#Plural]','group':0x0,'tag':_0x37e46c(0x2c88),'notIf':'(#Preposition|#Date)','reason':_0x37e46c(0x79c)},{'match':'#Noun\x20that\x20[#Plural]','group':0x0,'tag':_0x37e46c(0x2c88),'notIf':_0x37e46c(0x2e30),'reason':_0x37e46c(0x4a6c)},{'match':'that\x20[#Plural]\x20to','group':0x0,'tag':_0x37e46c(0x2c88),'notIf':'#Preposition','reason':_0x37e46c(0x2056)},{'match':_0x37e46c(0x3a5a),'group':0x0,'tag':'Infinitive','reason':_0x37e46c(0x4ccf)},{'match':_0x37e46c(0x4d6b),'notIf':_0x37e46c(0x3504),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x2690)},{'match':_0x37e46c(0x14c4),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x4fea),'reason':_0x37e46c(0x28dd)},{'match':_0x37e46c(0x2585),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x3800),'reason':'one-big-reason'},{'match':'#PastTense\x20#Adjective+\x20[#PresentTense]','group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x2738),'reason':_0x37e46c(0x1c24)},{'match':_0x37e46c(0x46df),'group':0x0,'tag':'Noun','notIf':_0x37e46c(0x3800),'reason':'many-poses'},{'match':_0x37e46c(0x1e36),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x3800),'reason':'very-big-dream'},{'match':_0x37e46c(0x6df),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x2009)},{'match':'(for|with|of)\x20#Noun\x20(and|or|not)\x20[%Noun|Verb%]','group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x869),'reason':_0x37e46c(0x1af0)},{'match':_0x37e46c(0x2457),'group':0x0,'tag':_0x37e4
6c(0x1786),'notIf':_0x37e46c(0x3800),'reason':_0x37e46c(0x3973)},{'match':_0x37e46c(0x3464),'group':0x0,'tag':'Noun','notIf':_0x37e46c(0x3800),'reason':'higher-costs'},{'match':_0x37e46c(0x3db2),'group':0x0,'tag':'Noun','notIf':_0x37e46c(0x3800),'reason':_0x37e46c(0x2f5a)},{'match':_0x37e46c(0x338a),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'any-verbs-for'},{'match':_0x37e46c(0x3fc2),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'gas-exchange'},{'match':'#PastTense\x20(until|as|through|without)\x20[#PresentTense]','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0xf1d)},{'match':'#Gerund\x20like\x20#Adjective?\x20[#PresentTense]','group':0x0,'tag':'Plural','reason':'like-hot-cakes'},{'match':_0x37e46c(0x649),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'some-reason'},{'match':_0x37e46c(0x3e73),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x227e)},{'match':_0x37e46c(0x4a8c),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x5fb)},{'match':_0x37e46c(0x3dd),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x4386)},{'match':'#Gerund\x20#Adjective\x20#Preposition\x20[#PresentTense]','group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0xe10)},{'match':_0x37e46c(0x4061),'group':0x0,'tag':_0x37e46c(0x1786),'reason':'got-better-aim'},{'match':'whose\x20[#PresentTense]\x20#Copula','group':0x0,'tag':'Noun','reason':_0x37e46c(0x38ae)},{'match':_0x37e46c(0x151f),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x1ec6)},{'match':_0x37e46c(0x499f),'group':0x0,'tag':_0x37e46c(0x25f7),'reason':'there-are'},{'match':_0x37e46c(0x938),'group':0x0,'notIf':_0x37e46c(0x244b),'tag':_0x37e46c(0x25f7),'reason':_0x37e46c(0x1a8d)},{'match':'[#PresentTense]\x20(are|were)\x20#Adjective','group':0x0,'tag':_0x37e46c(0x25f7),'reason':_0x37e46c(0x1dc5)},{'match':'^[(hope|guess|thought|think)]\x20#Pronoun\x20#Verb','group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x33d9)},{'match':_0x37e46c(0x1d3e),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x38
00),'reason':_0x37e46c(0x3ec7)},{'match':_0x37e46c(0x1c31),'group':0x0,'tag':_0x37e46c(0x2c88),'reason':_0x37e46c(0x2294)},{'match':_0x37e46c(0xaa8),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x2ba0),'reason':'ignoring-commute'},{'match':_0x37e46c(0x1732),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x427f)},{'match':_0x37e46c(0xc62),'group':0x0,'tag':_0x37e46c(0x2631),'reason':'how-to-noun'},{'match':_0x37e46c(0x40c4),'group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x3fd5)},{'match':_0x37e46c(0x44b4),'group':0x0,'tag':_0x37e46c(0x25f7),'reason':_0x37e46c(0x1332)},{'match':_0x37e46c(0xa0d),'group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x1a88)},{'match':_0x37e46c(0x434c),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x13c4)},{'match':'#Modal\x20#Noun\x20[%Noun|Verb%]','group':0x0,'tag':_0x37e46c(0x2631),'reason':'would-you-look'},{'match':'#Copula\x20just\x20[#Infinitive]','group':0x0,'tag':'Noun','reason':_0x37e46c(0x3627)},{'match':_0x37e46c(0x502b),'tag':'Imperative\x20#Plural','reason':_0x37e46c(0x477c)},{'match':_0x37e46c(0x4619),'group':0x0,'tag':_0x37e46c(0x3b57),'reason':_0x37e46c(0x306)},{'match':'#Determiner\x20#Year\x20[#Verb]','group':0x0,'tag':'Noun','reason':_0x37e46c(0x2507)},{'match':_0x37e46c(0x3689),'group':0x0,'tag':_0x37e46c(0x1786),'reason':_0x37e46c(0x5137)},{'match':_0x37e46c(0x45d1),'group':0x0,'tag':_0x37e46c(0x4972),'notIf':_0x37e46c(0x444e),'reason':'the-individual-goals'},{'match':_0x37e46c(0x3cff),'group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0xd9a)},{'match':_0x37e46c(0x747),'group':0x0,'tag':_0x37e46c(0x1786),'notIf':_0x37e46c(0x30d1),'reason':_0x37e46c(0x253d)},{'match':_0x37e46c(0xe74),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x3fab)},{'match':_0x37e46c(0x2502),'tag':_0x37e46c(0xfd4),'reason':_0x37e46c(0x2cda)}],[{'match':'#Money\x20and\x20#Money\x20#Currency?','tag':'Money','reason':_0x37e46c(0x4188)},{'match':_0x37e46c(0x1b2e),'group':0x0,'tag':_0x37e46c(0x3cad),'reason':_0x3
7e46c(0x5237)},{'match':'#Value\x20(mark|rand|won|rub|ore)','tag':_0x37e46c(0x427c),'reason':_0x37e46c(0xb96)},{'match':_0x37e46c(0x2282),'tag':'#Money\x20#Unit','reason':_0x37e46c(0x148f)},{'match':_0x37e46c(0x3706),'tag':_0x37e46c(0x41a7),'reason':_0x37e46c(0x5fe)}],[{'match':_0x37e46c(0x202f),'group':0x0,'tag':_0x37e46c(0x2365),'reason':_0x37e46c(0x32c5)},{'match':_0x37e46c(0x2a2c),'group':0x0,'tag':_0x37e46c(0x2365),'reason':'nearly-half'},{'match':_0x37e46c(0x311d),'group':0x0,'tag':_0x37e46c(0x2365),'reason':_0x37e46c(0x4f85)},{'match':_0x37e46c(0xff9),'tag':_0x37e46c(0x2365),'reason':_0x37e46c(0x1bf5)},{'match':_0x37e46c(0x3414),'tag':'Fraction','reason':_0x37e46c(0x3583)},{'match':_0x37e46c(0x235f),'tag':_0x37e46c(0x2365),'reason':_0x37e46c(0x20f3)},{'match':'[#Cardinal+]\x20(#Fraction\x20&&\x20/s$/)','tag':_0x37e46c(0x2365),'reason':_0x37e46c(0x3802)},{'match':_0x37e46c(0x23c7),'group':0x0,'tag':'Fraction','reason':'ordinal-of'},{'match':'[(#NumericValue\x20&&\x20#Ordinal)]\x20of\x20.','group':0x0,'tag':_0x37e46c(0x2365),'reason':_0x37e46c(0x37ad)},{'match':_0x37e46c(0xfd9),'tag':_0x37e46c(0x2365),'reason':'a-ordinal'},{'match':_0x37e46c(0x9c2),'tag':_0x37e46c(0x2365),'reason':_0x37e46c(0x4cc)}],[{'match':_0x37e46c(0x45e6),'tag':_0x37e46c(0xdae),'reason':'one-second'},{'match':_0x37e46c(0x50e3),'group':0x0,'tag':'Value','reason':_0x37e46c(0x1456)},{'match':_0x37e46c(0x4533),'tag':_0x37e46c(0xe0f),'reason':_0x37e46c(0x1a2f)},{'match':_0x37e46c(0x31a6),'tag':_0x37e46c(0xe0f),'reason':_0x37e46c(0x1fa5)},{'match':_0x37e46c(0x2bd5),'tag':_0x37e46c(0x466b),'reason':_0x37e46c(0x4560)},{'match':'#Value\x20[(buck|bucks|grand)]','group':0x0,'tag':_0x37e46c(0x466b),'reason':_0x37e46c(0x3209)},{'match':'[#Value+]\x20#Currency','group':0x0,'tag':'Money','reason':_0x37e46c(0x1eb2)},{'match':'[second]\x20#Noun','group':0x0,'tag':_0x37e46c(0x4eea),'reason':'second-noun'},{'match':_0x37e46c(0x27c9),'group':0x0,'tag':_0x37e46c(0xdae),'reason':_0x37e46c(0x419b)},{'match':_0x3
7e46c(0x251e),'group':0x0,'tag':'Unit','reason':_0x37e46c(0x38ed)},{'match':'#Value\x20[#Abbreviation]','group':0x0,'tag':_0x37e46c(0xdae),'reason':_0x37e46c(0x3435)},{'match':_0x37e46c(0x4ded),'group':0x0,'tag':_0x37e46c(0xdae),'reason':_0x37e46c(0xaff)},{'match':'#Unit\x20an\x20hour','tag':'Unit','reason':'unit-an-hour'},{'match':_0x37e46c(0x5208),'tag':_0x37e46c(0x3a43),'reason':_0x37e46c(0x39aa)},{'match':_0x37e46c(0x44da),'tag':_0x37e46c(0x3a43),'reason':'value-point-value'},{'match':'#Determiner\x20[(half|quarter)]\x20#Ordinal','group':0x0,'tag':_0x37e46c(0x3a43),'reason':_0x37e46c(0x3b83)},{'match':'#Multiple+\x20and\x20#Value','tag':_0x37e46c(0x3a43),'reason':_0x37e46c(0x115b)},{'match':_0x37e46c(0x3965),'group':0x0,'tag':_0x37e46c(0xdae),'reason':_0x37e46c(0x377c)},{'match':_0x37e46c(0x4b74),'group':0x0,'tag':_0x37e46c(0xdae),'reason':_0x37e46c(0x3167)},{'match':'^[#Value]\x20(#Determiner|#Gerund)','group':0x0,'tag':_0x37e46c(0x804),'unTag':_0x37e46c(0x3a43),'reason':_0x37e46c(0x2cfe)}],[{'match':_0x37e46c(0x5070),'group':0x0,'tag':'FirstName','reason':_0x37e46c(0x16a9)},{'match':_0x37e46c(0x74f),'tag':_0x37e46c(0x904),'reason':'lady-titlecase','safe':!0x0},{'match':_0x37e46c(0x2bb4),'group':0x0,'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x4781)},{'match':_0x37e46c(0x4dae),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x2f1),'safe':!0x0},{'match':_0x37e46c(0x4e06),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x26c5),'safe':!0x0},{'match':_0x37e46c(0x37b3),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x3d9f)},{'match':'#Honorific\x20#Acronym','tag':'Person','reason':'Honorific-TitleCase'},{'match':_0x37e46c(0xc38),'tag':'Person','reason':_0x37e46c(0x2de8)},{'match':_0x37e46c(0x45de),'group':0x0,'tag':[_0x37e46c(0x45e2),_0x37e46c(0x904)],'reason':'john-e'},{'match':_0x37e46c(0x4a9d),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x4ce7),'safe':!0x0},{'match':'(king|queen|prince|saint|lady)\x20of\x20#Noun','tag':_0x37e46c(0x904),'reason':_0x37e46c(0x1c82),'safe':!0x0},{'match
':_0x37e46c(0x2539),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x6a7)},{'match':'(king|queen|prince|saint)\x20#ProperNoun','tag':'Person','notIf':_0x37e46c(0x1248),'reason':_0x37e46c(0x463b)},{'match':_0x37e46c(0x6b2),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x4c02),'safe':!0x0},{'match':'#FirstName\x20de\x20#Noun','tag':_0x37e46c(0x904),'reason':'bill-de-noun'},{'match':_0x37e46c(0x1460),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x4fd7)},{'match':_0x37e46c(0x3632),'tag':'Person','reason':_0x37e46c(0x1d97)},{'match':'#FirstName\x20#FirstName\x20#ProperNoun','tag':_0x37e46c(0x904),'reason':_0x37e46c(0x3853)},{'match':'#Honorific\x20#FirstName?\x20#ProperNoun','tag':'Person','reason':_0x37e46c(0x296e)},{'match':_0x37e46c(0x2384),'tag':_0x37e46c(0x904),'reason':'name-the-great'},{'match':'#ProperNoun\x20(van|al|bin)\x20#ProperNoun','tag':_0x37e46c(0x904),'reason':'title-van-title','safe':!0x0},{'match':'#ProperNoun\x20(de|du)\x20la?\x20#ProperNoun','tag':_0x37e46c(0x904),'notIf':_0x37e46c(0x1248),'reason':_0x37e46c(0x3d80)},{'match':'#Singular\x20#Acronym\x20#LastName','tag':_0x37e46c(0x283e),'reason':'title-acro-noun','safe':!0x0},{'match':_0x37e46c(0x19f9),'group':0x0,'tag':_0x37e46c(0x904),'reason':'proper-person','safe':!0x0},{'match':'#Person\x20[#ProperNoun\x20#ProperNoun]','group':0x0,'tag':'Person','notIf':_0x37e46c(0xbb1),'reason':_0x37e46c(0x818),'safe':!0x0},{'match':_0x37e46c(0x373e),'group':0x0,'tag':_0x37e46c(0x30cd),'notIf':_0x37e46c(0xbb1),'reason':'firstname-titlecase'},{'match':'#FirstName\x20[#FirstName]','group':0x0,'tag':_0x37e46c(0x30cd),'reason':_0x37e46c(0x76b)},{'match':_0x37e46c(0x3c61),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x1c53),'safe':!0x0},{'match':'#FirstName\x20[(de|di|du|van|von)]\x20#Person','group':0x0,'tag':_0x37e46c(0x30cd),'reason':_0x37e46c(0x52e)},{'match':_0x37e46c(0x3302),'group':0x0,'tag':_0x37e46c(0x3ab),'reason':'seargeant-john'},{'match':'[(private|general|major|rear|prime|field|count|miss)]\x20#Honorific?\x20#Person',
'group':0x0,'tag':['Honorific',_0x37e46c(0x904)],'reason':_0x37e46c(0x2c00)},{'match':_0x37e46c(0xa6a),'group':0x0,'tag':_0x37e46c(0x30cd),'notIf':'#Possessive','reason':'dr-john-foo','safe':!0x0},{'match':_0x37e46c(0x2d1),'group':0x0,'tag':_0x37e46c(0x3ab),'reason':_0x37e46c(0x13af)},{'match':_0x37e46c(0x4275),'tag':_0x37e46c(0x3ab),'reason':'Lieutenant\x20colonel'},{'match':_0x37e46c(0x41b6),'tag':_0x37e46c(0x3ab),'reason':_0x37e46c(0x2675)},{'match':_0x37e46c(0x2ac1),'tag':_0x37e46c(0x904),'reason':'louis-IV'}],[{'match':_0x37e46c(0x4244),'tag':_0x37e46c(0x146a),'notIf':_0x37e46c(0x3ce5),'reason':_0x37e46c(0x3aaa)},{'match':'%Person|Date%\x20#Acronym?\x20#ProperNoun','tag':_0x37e46c(0x904),'reason':_0x37e46c(0x545)},{'match':_0x37e46c(0x3c62),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x4980),'safe':!0x0},{'match':_0x37e46c(0x512b),'tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x41ad)},{'match':_0x37e46c(0x931),'tag':'Person','reason':_0x37e46c(0x19ad),'ifNo':'#Actor'},{'match':_0x37e46c(0x4db7),'group':0x0,'tag':_0x37e46c(0x904),'reason':'person-said'},{'match':_0x37e46c(0x3704),'group':0x0,'tag':_0x37e46c(0x1c11),'reason':'sydney-harbour'},{'match':_0x37e46c(0xd59),'group':0x0,'tag':'Place','reason':_0x37e46c(0x460c)},{'match':_0x37e46c(0x1333),'group':0x0,'tag':_0x37e46c(0x487b),'reason':'would-mark'},{'match':_0x37e46c(0x1d5d),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x3589)},{'match':_0x37e46c(0x368a),'group':0x0,'tag':_0x37e46c(0x487b),'reason':'drew-closer'},{'match':_0x37e46c(0x2781),'tag':'Person','reason':_0x37e46c(0x4186)},{'match':'%Person|Verb%\x20#Acronym\x20#ProperNoun','tag':_0x37e46c(0x904),'reason':'rob-a-smith'},{'match':_0x37e46c(0x383f),'group':0x0,'tag':_0x37e46c(0x28f1),'reason':_0x37e46c(0x283d)},{'match':_0x37e46c(0x1362),'tag':_0x37e46c(0x904),'reason':_0x37e46c(0x3964)},{'match':_0x37e46c(0x120e),'group':0x0,'safe':!0x0,'tag':_0x37e46c(0x30cd),'reason':_0x37e46c(0x4f0f)},{'match':_0x37e46c(0x20de),'group':0x0,'safe':!0x0,'tag'
:_0x37e46c(0x904),'reason':'sherwood-anderson'},{'match':_0x37e46c(0x2e64),'group':0x0,'unTag':_0x37e46c(0x904),'reason':'a-warhol'}],[{'match':_0x37e46c(0x2978),'tag':'#Copula\x20#Adverb\x20#Adjective','reason':'sometimes-adverb'},{'match':_0x37e46c(0x102b),'group':0x0,'tag':'Modal','reason':'i-better'},{'match':_0x37e46c(0x144c),'group':0x0,'tag':'PresentTense','reason':'modal-like'},{'match':'#Noun\x20#Adverb?\x20[left]','group':0x0,'tag':_0x37e46c(0xe52),'reason':_0x37e46c(0x4d80)},{'match':_0x37e46c(0x50ac),'group':0x0,'tag':_0x37e46c(0x49fa),'reason':_0x37e46c(0x405e)},{'match':_0x37e46c(0x3d99),'group':0x0,'tag':_0x37e46c(0x49fa),'reason':_0x37e46c(0x4033)},{'match':_0x37e46c(0x525f),'notIf':_0x37e46c(0x3cc4),'group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x48e2)},{'match':_0x37e46c(0x2a17),'group':0x0,'tag':_0x37e46c(0x2631),'reason':'must-march'},{'match':_0x37e46c(0x5174),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x1620)},{'match':_0x37e46c(0x3ff5),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x3f73)},{'match':'[home]\x20to','group':0x0,'tag':_0x37e46c(0x2c88),'reason':_0x37e46c(0x405a)},{'match':'[open]\x20#Determiner','group':0x0,'tag':'Infinitive','reason':_0x37e46c(0x154d)},{'match':_0x37e46c(0xcbf),'group':0x0,'tag':'PastTense','reason':_0x37e46c(0x3234)},{'match':_0x37e46c(0x31c3),'group':0x0,'tag':'Auxiliary\x20Participle','reason':_0x37e46c(0x3a0d)},{'match':_0x37e46c(0x2fa7),'group':0x0,'tag':_0x37e46c(0x3460),'reason':_0x37e46c(0x1f21)},{'match':'(had|has)\x20#Adverb?\x20[been]\x20#Adverb?\x20#PastTense','group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x4845)},{'match':'(had|has)\x20to\x20[#Noun]\x20(#Determiner|#Possessive)','group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x4af5)},{'match':'have\x20[#PresentTense]','group':0x0,'tag':_0x37e46c(0xe52),'notIf':_0x37e46c(0x4a4b),'reason':'have-read'},{'match':_0x37e46c(0x4819),'group':0x0,'tag':_0x37e46c(0xe52),'reason':_0x37e46c(0x1199)},{'match':_0x37e4
6c(0x4554),'group':0x0,'tag':'PresentTense','reason':_0x37e46c(0x388)},{'match':'[(look|looks)]\x20#Adjective','group':0x0,'tag':'PresentTense','reason':_0x37e46c(0x3c78)},{'match':_0x37e46c(0x3e9c),'group':0x0,'tag':'Verb','reason':_0x37e46c(0xa37)},{'match':'(have|had)\x20read','tag':_0x37e46c(0x4b91),'reason':_0x37e46c(0x472b)},{'match':'(is|was|were)\x20[(under|over)\x20#PastTense]','group':0x0,'tag':_0x37e46c(0x2b59),'reason':_0x37e46c(0xb3e)},{'match':_0x37e46c(0x4404),'group':0x0,'tag':'Verb','reason':_0x37e46c(0x4818)},{'match':_0x37e46c(0x7d3),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x19e7)},{'match':_0x37e46c(0x4f69),'group':0x0,'tag':_0x37e46c(0x487b),'reason':'swear3-verb'},{'match':_0x37e46c(0x1e8f),'tag':'.\x20#Preposition\x20#Infinitive','reason':_0x37e46c(0x16f2)},{'match':'[works]\x20for\x20me','group':0x0,'tag':_0x37e46c(0x2c88),'reason':_0x37e46c(0x4eab)},{'match':_0x37e46c(0x1f2c),'group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0xc21)},{'match':_0x37e46c(0x41d9),'group':0x0,'tag':[_0x37e46c(0x487b),'Prefix'],'notIf':_0x37e46c(0x5181),'reason':_0x37e46c(0x4463)},{'match':_0x37e46c(0x2f63),'group':0x0,'tag':'PastTense','reason':_0x37e46c(0x4bc5)},{'match':_0x37e46c(0xa9c),'group':0x0,'tag':_0x37e46c(0xe52),'reason':_0x37e46c(0x4bc5)},{'match':_0x37e46c(0x38c3),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x2f64)},{'match':_0x37e46c(0x2746),'group':0x0,'tag':_0x37e46c(0x2631),'reason':'to-dream-of'}],[{'match':_0x37e46c(0x3f05),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x3475)},{'match':'does\x20(#Adverb|not)?\x20[#Adjective]','group':0x0,'tag':_0x37e46c(0x2c88),'reason':'does-mean'},{'match':_0x37e46c(0x2531),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'okay-by-me'},{'match':_0x37e46c(0x2517),'group':0x0,'tag':'PresentTense','reason':_0x37e46c(0x479c)},{'match':_0x37e46c(0x5ad),'tag':_0x37e46c(0x120c),'reason':_0x37e46c(0x1798)},{'match':_0x37e46c(0x596),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x3
7e46c(0xf5d)},{'match':_0x37e46c(0x3221),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x1740)},{'match':_0x37e46c(0x1ada),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x4982)},{'match':_0x37e46c(0x1eb0),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x3d2b)},{'match':_0x37e46c(0x49d8),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x9e9)},{'match':_0x37e46c(0x215f),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x1d7d)},{'match':_0x37e46c(0x4e5f),'group':0x0,'tag':'Adjective','reason':_0x37e46c(0x1109)},{'match':_0x37e46c(0x38af),'group':0x0,'tag':'Adjective','reason':'felt-loved'},{'match':'(seem|feel|seemed|felt)\x20[#PastTense\x20#Particle?]','group':0x0,'tag':'Adjective','reason':_0x37e46c(0x1e00)},{'match':_0x37e46c(0x450a),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0xf7a)},{'match':_0x37e46c(0xc7a),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x3456)},{'match':_0x37e46c(0x2f00),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x4a06)},{'match':_0x37e46c(0x2043),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x2ac9)},{'match':_0x37e46c(0x1ba6),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x4f9c)},{'match':_0x37e46c(0x2942),'group':0x0,'tag':_0x37e46c(0x4972),'notIf':'(#Copula|#Pronoun)','reason':_0x37e46c(0x47f9)},{'match':_0x37e46c(0x258d),'group':0x0,'tag':'Adjective','reason':_0x37e46c(0x32e0)},{'match':_0x37e46c(0x4776),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x18ee)},{'match':_0x37e46c(0x4b08),'group':0x0,'tag':'Adjective','reason':'is-he-cool'},{'match':_0x37e46c(0x50ae),'group':0x0,'tag':_0x37e46c(0x4972),'notIf':_0x37e46c(0x2d30),'reason':_0x37e46c(0x4f8)},{'match':'#Copula\x20#Adverb?\x20[%Adj|Present%]$','group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x3fe6)}],[{'match':_0x37e46c(0x1db7),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x8b7)},{'match':_0x37e46c(0x2355),'group':0x0,'tag':'Auxiliary','reason':_0x37e46c(0x3c08)},{'match
':_0x37e46c(0x441c),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x2932)},{'match':_0x37e46c(0x3e68),'group':0x0,'tag':'Auxiliary','reason':_0x37e46c(0x10a9)},{'match':_0x37e46c(0x4eb8),'group':0x0,'tag':'Auxiliary','reason':_0x37e46c(0x3c9)},{'match':'[(do|does|did|will|have|had|has|got)]\x20(not|#Adverb)+?\x20#Verb','group':0x0,'tag':'Auxiliary','reason':_0x37e46c(0x2902)},{'match':_0x37e46c(0x617),'group':0x0,'tag':[_0x37e46c(0x1a5e),_0x37e46c(0x487b)],'reason':'about-to'},{'match':_0x37e46c(0x4813),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x51c8)},{'match':'[(#Modal|had|has)]\x20(#Adverb|not)+?\x20[been]\x20(#Adverb|not)+?\x20#Verb','group':0x0,'tag':_0x37e46c(0x1a5e),'reason':'had-been'},{'match':_0x37e46c(0x1e44),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x2a2e)},{'match':'[may]\x20#Adverb?\x20#Infinitive','group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x1f6f)},{'match':_0x37e46c(0x508e),'group':0x0,'tag':'Auxiliary','reason':_0x37e46c(0x16b8)},{'match':_0x37e46c(0x2220),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x2798)},{'match':'[(be|been)]\x20(#Adverb|not)+?\x20#Gerund','group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x23ad)},{'match':_0x37e46c(0x1891),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x2dec)},{'match':_0x37e46c(0x1dbd),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x1dff)},{'match':'#Imperative\x20[(me|him|her)]','group':0x0,'tag':'Reflexive','reason':'tell-him'},{'match':_0x37e46c(0x4537),'group':0x0,'tag':'Negative','reason':_0x37e46c(0xca1)},{'match':'[(been|had|became|came)]\x20#PastTense','group':0x0,'notIf':'#PhrasalVerb','tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x3820)},{'match':_0x37e46c(0x42e),'group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x259a)},{'match':'[be]\x20#Gerund','group':0x0,'tag':_0x37e46c(0x1a5e),'reason':_0x37e46c(0x187d)},{'match':'[better]\x20#PresentTense','group':0x0,'tag':_0x37e46c(0x28f1),'notIf':_0x37e46c(0x371),'reason':
'better-go'},{'match':_0x37e46c(0x439f),'tag':'Adverb\x20#Comparative','reason':_0x37e46c(0x31b)}],[{'match':_0x37e46c(0x42bd),'tag':_0x37e46c(0x39b6),'reason':_0x37e46c(0x6c6)},{'match':_0x37e46c(0x3dea),'tag':_0x37e46c(0x39b6),'reason':_0x37e46c(0x361f)},{'match':_0x37e46c(0xe72),'tag':_0x37e46c(0x39b6),'reason':_0x37e46c(0x9e5)},{'match':_0x37e46c(0x2cd8),'tag':'PhrasalVerb','reason':_0x37e46c(0x51c7)},{'match':_0x37e46c(0x32c8),'notIf':_0x37e46c(0x3800),'tag':_0x37e46c(0x362),'reason':'walk-in-on'},{'match':'(lived|went|crept|go)\x20[on]\x20for','group':0x0,'tag':_0x37e46c(0x39b6),'reason':_0x37e46c(0x50a0)},{'match':_0x37e46c(0x51b3),'tag':_0x37e46c(0x152c),'notIf':_0x37e46c(0xd8b),'reason':_0x37e46c(0x29a6)},{'match':_0x37e46c(0x15a0),'group':0x0,'tag':'Infinitive','reason':'help-stop'},{'match':_0x37e46c(0x3329),'tag':'#Verb\x20#Preposition\x20#Determiner','unTag':_0x37e46c(0x39b6),'reason':_0x37e46c(0x3e9e)},{'match':_0x37e46c(0x1a9f),'group':0x0,'tag':_0x37e46c(0x2631),'reason':_0x37e46c(0x29ef)},{'match':_0x37e46c(0x294e),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x31a5)}],[{'match':'^do\x20not?\x20[#Infinitive\x20#Particle?]','notIf':_0x80ca34,'group':0x0,'tag':'Imperative','reason':_0x37e46c(0x3d59)},{'match':_0x37e46c(0x1f00),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x3f25)},{'match':_0x37e46c(0x44b0),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x1a99)},{'match':'^[#Infinitive]\x20it\x20#Comparative','notIf':_0x80ca34,'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0xdc6)},{'match':_0x37e46c(0x1e27),'notIf':_0x80ca34,'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x96a)},{'match':_0x37e46c(0x3d8a),'group':0x0,'tag':_0x37e46c(0x44e2),'notIf':_0x37e46c(0x2e43),'reason':'go-quickly'},{'match':_0x37e46c(0x386f),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x15f5)},{'match':'^[#Infinitive]\x20(your|my|the|a|an|any|each|every|some|more|with|on)','group':0x0,'notIf':_0x37e46c(0x83d),'tag':'Imperati
ve','reason':_0x37e46c(0x16cf)},{'match':_0x37e46c(0x1260),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x40e0)},{'match':'^[#Infinitive]\x20#Adjective\x20#Noun$','group':0x0,'tag':'Imperative','reason':_0x37e46c(0x2e70)},{'match':_0x37e46c(0x2d9f),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':'call-and-reserve'},{'match':_0x37e46c(0x2df1),'tag':_0x37e46c(0x44e2),'reason':'go'},{'match':_0x37e46c(0x23c5),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x4c5c)},{'match':'^let\x20(us|me)\x20[#Infinitive]','group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x4c5f)},{'match':_0x37e46c(0xeef),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x33b3)},{'match':'^[#PhrasalVerb\x20#Particle]\x20#Determiner\x20#Noun','group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x2351)},{'match':_0x37e46c(0x5c1),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x1351)},{'match':_0x37e46c(0x506e),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0xf67)},{'match':'^never\x20[#Infinitive]','group':0x0,'tag':'Imperative','reason':'never-stop'},{'match':_0x37e46c(0x2625),'tag':_0x37e46c(0x44e2),'notIf':'on','reason':_0x37e46c(0x5117)},{'match':'^come\x20and?\x20#Infinitive','tag':_0x37e46c(0x46b7),'notIf':_0x37e46c(0xd8b),'reason':_0x37e46c(0x1b01)},{'match':_0x37e46c(0x1ace),'tag':'Imperative','reason':_0x37e46c(0x1d26)},{'match':_0x37e46c(0x3ea2),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x252a)},{'match':_0x37e46c(0x514c),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x1327)},{'match':_0x37e46c(0x3ec),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':'do-not-be'},{'match':_0x37e46c(0x41b0),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':'allow-yourself'},{'match':_0x37e46c(0x2a06),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x1775)},{'match':'^[#Infinitive]\x20#Gerund','group':0x0,'tag':_0x37e46c(0x44e2),'reason':'keep-playing'},{'match':_0x37e46c(0xb2b),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x4906)},{'matc
h':_0x37e46c(0x3e39),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x25cd)},{'match':_0x37e46c(0xc85),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':'commit-to'},{'match':_0x37e46c(0xc4b),'group':0x0,'tag':_0x37e46c(0x44e2),'reason':_0x37e46c(0x5045)},{'match':'do\x20not\x20(forget|omit|neglect)\x20to\x20[#Infinitive]','group':0x0,'tag':'Imperative','reason':'do-not-forget'},{'match':_0x37e46c(0x2267),'group':0x0,'tag':'Imperative','reason':_0x37e46c(0x36de)}],[{'match':_0x37e46c(0x4fbd),'group':0x0,'tag':_0x37e46c(0x42ce),'reason':'that-were-growing'},{'match':_0x37e46c(0x3fdb),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x4d96)}],[{'match':'u\x20r','tag':_0x37e46c(0x844),'reason':_0x37e46c(0x37e6)},{'match':_0x37e46c(0x1c10),'group':0x0,'tag':_0x37e46c(0x3b3e),'reason':_0x37e46c(0xd8e)},{'match':_0x37e46c(0x3a04),'group':0x0,'tag':_0x37e46c(0x5001),'reason':_0x37e46c(0x1b57)},{'match':_0x37e46c(0xc91),'group':0x0,'tag':_0x37e46c(0x5001),'reason':_0x37e46c(0x16eb)},{'match':'some\x20sort\x20of','tag':_0x37e46c(0x213e),'reason':_0x37e46c(0x272a)},{'match':'of\x20some\x20sort','tag':_0x37e46c(0x4581),'reason':'of-some-sort'},{'match':_0x37e46c(0x4ae1),'group':0x0,'tag':'Determiner','reason':_0x37e46c(0x3ca5)},{'match':'[right]\x20(before|after|in|into|to|toward)','group':0x0,'tag':_0x37e46c(0x1a77),'reason':_0x37e46c(0x1a4a)},{'match':_0x37e46c(0x1a05),'group':0x0,'tag':_0x37e46c(0x4972),'reason':_0x37e46c(0x2ba3)},{'match':_0x37e46c(0x4a50),'group':0x0,'tag':'Pronoun','reason':_0x37e46c(0x3d02)},{'match':_0x37e46c(0x22ca),'group':0x0,'tag':_0x37e46c(0x451a),'reason':_0x37e46c(0x5cb)},{'match':_0x37e46c(0x852),'group':0x0,'tag':_0x37e46c(0x44c1),'reason':_0x37e46c(0x2f1d)},{'match':_0x37e46c(0x5184),'group':0x0,'tag':_0x37e46c(0x4972),'reason':'always-there'},{'match':_0x37e46c(0x1e56),'group':0x0,'tag':_0x37e46c(0x4c2d),'reason':'there-is'},{'match':_0x37e46c(0x43e2),'group':0x0,'tag':_0x37e46c(0x4c2d),'reason':_0x37e46c(0x4eb3)},{'match':_0x37e46c
(0x35b6),'group':0x0,'tag':_0x37e46c(0x4c2d),'reason':_0x37e46c(0x4970)},{'match':_0x37e46c(0x3b7),'group':0x0,'tag':_0x37e46c(0x3609),'reason':_0x37e46c(0x4346)},{'match':'^[does]\x20(he|she|it|#ProperNoun)','group':0x0,'tag':_0x37e46c(0x3609),'reason':_0x37e46c(0x3cf8)},{'match':_0x37e46c(0x437a),'group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x4d4d)},{'match':'#Determiner\x20#Noun+\x20[which]\x20#Verb','group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x1f56)},{'match':_0x37e46c(0x511c),'group':0x0,'tag':'Noun','reason':_0x37e46c(0x25e4)},{'match':_0x37e46c(0x4082),'tag':_0x37e46c(0x63b),'reason':_0x37e46c(0x2fe5)},{'match':'[fucking]\x20!#Verb','group':0x0,'tag':_0x37e46c(0x503),'reason':_0x37e46c(0x35ce)}],[{'match':_0x37e46c(0x1693),'tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x4598)},{'match':_0x37e46c(0xab0),'tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x18ba)},{'match':'#Organization\x20of\x20the?\x20#ProperNoun','tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x19eb),'safe':!0x0},{'match':_0x37e46c(0x41c5),'tag':'Organization','reason':_0x37e46c(0x2e06)},{'match':_0x37e46c(0x1b77),'tag':_0x37e46c(0x4c36),'notIf':_0x37e46c(0xffe),'reason':'titlecase-org'},{'match':_0x37e46c(0x29cb),'tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x3419)},{'match':_0x37e46c(0x253),'group':0x0,'tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0xdc0),'safe':!0x0},{'match':_0x37e46c(0x3f93),'tag':'Organization','reason':_0x37e46c(0x1414)},{'match':_0x37e46c(0x41b1),'tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x3e6e)},{'match':_0x37e46c(0x5194),'tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x4eaf)},{'match':'(world|global|international|national|#Demonym)\x20#Organization','tag':_0x37e46c(0x4c36),'reason':_0x37e46c(0x37f3)},{'match':_0x37e46c(0x20d3),'tag':_0x37e46c(0x3a01),'reason':_0x37e46c(0x4b71)},{'match':_0x37e46c(0x23ce),'tag':'SportsTeam','reason':'place-sportsteam'},{'match':_0x37e46c(0x13a8),'tag':_0x37e46c(0x2aad),'reason':_0x37e46c(0x15c5)},{'match':'#Place+\x20fc','tag':_0x37e4
6c(0x2aad),'reason':_0x37e46c(0x454a)}],[{'match':'(west|north|south|east|western|northern|southern|eastern)+\x20#Place','tag':_0x37e46c(0x3d8e),'reason':_0x37e46c(0x357d)},{'match':_0x37e46c(0x4474),'group':0x0,'tag':_0x37e46c(0x3d8e),'reason':'us-state'},{'match':'portland\x20[or]','group':0x0,'tag':_0x37e46c(0x3d8e),'reason':_0x37e46c(0x3c5d)},{'match':_0x37e46c(0x4c77),'tag':'Place','reason':_0x37e46c(0x401)},{'match':_0x37e46c(0x3e87),'group':0x0,'tag':_0x37e46c(0x1c11),'reason':_0x37e46c(0x43e8)},{'match':_0x37e46c(0x3ac3),'tag':'Address','reason':_0x37e46c(0x3d6f)}],[{'match':_0x37e46c(0x19f8),'group':0x0,'tag':'Conjunction','reason':_0x37e46c(0x2d6d)},{'match':'[(who|what|where|why|how|when)]\x20#Noun\x20#Copula\x20#Adverb?\x20(#Verb|#Adjective)','group':0x0,'tag':_0x37e46c(0x37f1),'reason':_0x37e46c(0xf70)},{'match':_0x37e46c(0x48e),'group':0x0,'tag':'Conjunction','reason':'when-he'},{'match':_0x37e46c(0x24f5),'group':0x0,'tag':'Conjunction','reason':_0x37e46c(0x1e52)},{'match':_0x37e46c(0x1417),'group':0x0,'tag':_0x37e46c(0x37f1),'reason':_0x37e46c(0x1892)},{'match':_0x37e46c(0x255c),'group':0x0,'tag':_0x37e46c(0x37f1),'reason':_0x37e46c(0x308a)},{'match':_0x37e46c(0x1f9f),'group':0x0,'tag':_0x37e46c(0x2cbd),'reason':_0x37e46c(0x49e7)},{'match':_0x37e46c(0x339b),'group':0x0,'tag':_0x37e46c(0x326f),'reason':'that-prep'},{'match':'@hasComma\x20[which]\x20(#Pronoun|#Verb)','group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x4e70)},{'match':_0x37e46c(0x24bb),'group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x1ea2)},{'match':_0x37e46c(0x2c11),'group':0x0,'tag':_0x37e46c(0x326f),'reason':'like-the'},{'match':_0x37e46c(0x505d),'group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x1de4)},{'match':_0x37e46c(0x4f1),'group':0x0,'tag':_0x37e46c(0x487b),'reason':_0x37e46c(0x32e3)},{'match':_0x37e46c(0x4b14),'group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x4794)},{'match':_0x37e46c(0x17eb),'group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x33
45)},{'match':_0x37e46c(0x49e4),'group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x4ec0)},{'match':_0x37e46c(0x2a78),'group':0x0,'tag':_0x37e46c(0x326f),'reason':_0x37e46c(0x26db)},{'match':_0x37e46c(0x251d),'group':0x0,'tag':_0x37e46c(0x3609),'reason':_0x37e46c(0x19cb)},{'match':_0x37e46c(0xd55),'group':0x0,'tag':'Preposition','reason':_0x37e46c(0x33f8)}],[{'match':_0x37e46c(0x4c19),'tag':_0x37e46c(0x804),'reason':'swears-expression'},{'match':_0x37e46c(0x2653),'group':0x0,'tag':_0x37e46c(0x804),'reason':_0x37e46c(0x3113)},{'match':_0x37e46c(0x1e11),'tag':_0x37e46c(0x804),'reason':_0x37e46c(0x1c4f)},{'match':_0x37e46c(0x35e1),'group':0x0,'tag':_0x37e46c(0x804),'reason':_0x37e46c(0x4dd5)},{'match':'^(ok|alright|shoot|hell|anyways)','tag':_0x37e46c(0x804),'reason':'ok-'},{'match':'^(say\x20&&\x20@hasComma)','tag':'Expression','reason':_0x37e46c(0x46b9)},{'match':_0x37e46c(0x8fe),'tag':_0x37e46c(0x804),'reason':_0x37e46c(0x46c0)},{'match':_0x37e46c(0x25f),'group':0x0,'tag':_0x37e46c(0x804),'reason':_0x37e46c(0x4f39)}]),_0x4f6456=null;const _0x3b475b={'postTagger':function(_0x13117a){const _0x2ddd3f=_0x37e46c,{world:_0x1a1190}=_0x13117a,{model:_0x3cdde4,methods:_0x4f5acd}=_0x1a1190;_0x4f6456=_0x4f6456||_0x4f5acd['one']['buildNet'](_0x3cdde4[_0x2ddd3f(0x21c9)]['matches'],_0x1a1190);let _0x507d05=_0x4f5acd['two'][_0x2ddd3f(0x2f6)](_0x13117a[_0x2ddd3f(0x295)])[_0x2ddd3f(0x4833)](_0x20c228=>{const _0xebd583=_0x2ddd3f;let _0x2593cc=_0x20c228[0x0];return[_0x2593cc[_0xebd583(0x3bb5)][0x0],_0x2593cc['index'][0x1],_0x2593cc[_0xebd583(0x3bb5)][0x1]+_0x20c228['length']];}),_0x3ba55d=_0x13117a[_0x2ddd3f(0x38d6)](_0x507d05);return _0x3ba55d[_0x2ddd3f(0x26f)](),_0x3ba55d['sweep'](_0x4f6456),_0x13117a[_0x2ddd3f(0x4742)](),_0x13117a['unfreeze'](),_0x13117a;},'tagger':_0x7350f1=>_0x7350f1['compute']([_0x37e46c(0x209c),_0x37e46c(0x2c34),'preTagger',_0x37e46c(0x118e),_0x37e46c(0x52a)])},_0x1097ab={'api':function(_0x6c47a4){const 
_0x2bfcad=_0x37e46c;_0x6c47a4[_0x2bfcad(0x3b3c)][_0x2bfcad(0x202a)]=function(){const _0x407029=_0x2bfcad;let _0x1acacc=0x0,_0x3be285=0x0;return this['docs'][_0x407029(0xa21)](_0x9c4006=>{const _0x5111d1=_0x407029;_0x9c4006[_0x5111d1(0xa21)](_0x16f3d0=>{const _0x333a61=_0x5111d1;_0x3be285+=0x1,_0x1acacc+=_0x16f3d0[_0x333a61(0x202a)]||0x1;});}),0x0===_0x3be285?0x1:(_0x53384a=>Math[_0x407029(0x3d6c)](0x64*_0x53384a)/0x64)(_0x1acacc/_0x3be285);},_0x6c47a4[_0x2bfcad(0x3b3c)][_0x2bfcad(0x4585)]=function(){const _0x1e7bf7=_0x2bfcad;return this['compute']([_0x1e7bf7(0x4585)]);};},'compute':_0x3b475b,'model':{'two':{'matches':_0x48cdbd}},'hooks':['postTagger']},_0x46c587=_0x1097ab,_0x3a58b1=function(_0x11b999,_0x4122e4){const _0x244e10=_0x37e46c;let _0x1306b0=function(_0x3a4f3d){const _0x2091b6=a0_0x11e7;return Object[_0x2091b6(0x1ea9)](_0x3a4f3d[_0x2091b6(0x1889)])[_0x2091b6(0x1465)](_0x507fa3=>!_0x507fa3[_0x2091b6(0x3bcf)]('#')&&!_0x507fa3['startsWith']('%'));}(_0x4122e4);if(0x0===_0x1306b0['length'])return _0x11b999;_0x11b999[_0x244e10(0x1aa7)]||_0x11b999[_0x244e10(0x26f)]();let _0x5e77ce=_0x11b999[_0x244e10(0x1aa7)];return _0x11b999['filter']((_0x1a2d85,_0x1f84a2)=>_0x1306b0[_0x244e10(0x363a)](_0x52e7f0=>_0x5e77ce[_0x1f84a2][_0x244e10(0x3170)](_0x52e7f0)));},_0x1a32ee=function(_0x1a7428,_0x20c61b){const _0x32e22e=_0x37e46c;let _0x10c5c4=_0x20c61b;_0x32e22e(0x2431)==typeof _0x20c61b&&(_0x10c5c4=this[_0x32e22e(0x19e0)]([{'match':_0x20c61b}]));let _0x49bdc3=this[_0x32e22e(0x3c0b)](_0x1a7428),_0x314dab=_0x3a58b1(_0x49bdc3,_0x10c5c4);return _0x314dab[_0x32e22e(0x2108)]?(_0x314dab[_0x32e22e(0x23df)]([_0x32e22e(0x3bb5),'tagger']),_0x314dab[_0x32e22e(0x2d96)](_0x20c61b)):_0x49bdc3[_0x32e22e(0x28b)]();},_0x1d2f58={'lib':{'lazy':_0x1a32ee}},_0x2fc1bc=function(_0x274236,_0x16ba8e){let _0x28f746=_0x16ba8e;return _0x274236['forEach'](_0x5b32b6=>{const _0x2a19a0=a0_0x11e7;_0x5b32b6[_0x2a19a0(0x3170)](_0x2a19a0(0x108e))||(_0x28f746=function(_0x362638,_0x362e06){const 
_0x6a4d1e=_0x2a19a0;let _0x2d8a1e=(0x0,_0x362638[_0x6a4d1e(0x1578)][_0x6a4d1e(0x21c9)]['transform'][_0x6a4d1e(0x4134)][_0x6a4d1e(0x2343)])(_0x362e06,_0x362638['model']);return _0x362638['has'](_0x6a4d1e(0x503))?_0x2d8a1e[_0x6a4d1e(0x42ce)]:_0x362638['has']('#PastTense')?_0x2d8a1e['PastTense']:_0x362638[_0x6a4d1e(0x3170)]('#PresentTense')?_0x2d8a1e[_0x6a4d1e(0x2c88)]:_0x362638[_0x6a4d1e(0x3170)](_0x6a4d1e(0x503))?_0x2d8a1e[_0x6a4d1e(0x42ce)]:_0x362e06;}(_0x5b32b6,_0x16ba8e)),_0x5b32b6[_0x2a19a0(0x1f1f)](_0x28f746);}),_0x274236;},_0x3b29d3=function(_0x2c8608,_0x5735e7,_0x563e62){const _0x183a26=_0x37e46c;let _0x4c7d69=_0x2c8608[_0x183a26(0x1117)](/ /g)[_0x183a26(0x4833)](_0x14392c=>_0x14392c['toLowerCase']()[_0x183a26(0x1b23)]());_0x4c7d69=_0x4c7d69[_0x183a26(0x1465)](_0x30492a=>_0x30492a),_0x4c7d69=_0x4c7d69[_0x183a26(0x4833)](_0x58dc3f=>'{'+_0x58dc3f+'}')['join']('\x20');let _0x1fd984=this['match'](_0x4c7d69);return _0x563e62&&(_0x1fd984=_0x1fd984['if'](_0x563e62)),_0x1fd984[_0x183a26(0x3170)](_0x183a26(0x38b0))?_0x2fc1bc(_0x1fd984,_0x5735e7):_0x1fd984[_0x183a26(0x3170)](_0x183a26(0xf35))?function(_0x43555f,_0x472614){const _0x54a239=_0x183a26;let _0x306c31=_0x472614;_0x43555f[_0x54a239(0x3170)](_0x54a239(0x3b57))&&(_0x306c31=(0x0,_0x43555f[_0x54a239(0x1578)][_0x54a239(0x21c9)][_0x54a239(0x5161)][_0x54a239(0x4d62)][_0x54a239(0x467a)])(_0x472614,_0x43555f[_0x54a239(0x1556)])),_0x43555f['replaceWith'](_0x306c31,{'possessives':!0x0});}(_0x1fd984,_0x5735e7):_0x1fd984[_0x183a26(0x3170)](_0x183a26(0x1a77))?function(_0x5f463a,_0x551f99){const _0x41fc05=_0x183a26,{toAdverb:_0x1dee2b}=_0x5f463a[_0x41fc05(0x1578)][_0x41fc05(0x21c9)][_0x41fc05(0x5161)]['adjective'];let _0x5eb36a=_0x1dee2b(_0x551f99);_0x5eb36a&&_0x5f463a[_0x41fc05(0x1f1f)](_0x5eb36a);}(_0x1fd984,_0x5735e7):_0x1fd984[_0x183a26(0x3170)](_0x183a26(0x335e))?function(_0x190d0a,_0x237941){const 
_0x18ba99=_0x183a26,{toComparative:_0x596306,toSuperlative:_0x449654}=_0x190d0a[_0x18ba99(0x1578)][_0x18ba99(0x21c9)]['transform'][_0x18ba99(0x2b3f)];let _0x28be7b=_0x237941;_0x190d0a['has'](_0x18ba99(0x48a0))?_0x28be7b=_0x596306(_0x28be7b,_0x190d0a[_0x18ba99(0x1556)]):_0x190d0a[_0x18ba99(0x3170)]('#Superlative')&&(_0x28be7b=_0x449654(_0x28be7b,_0x190d0a[_0x18ba99(0x1556)])),_0x28be7b&&_0x190d0a[_0x18ba99(0x1f1f)](_0x28be7b);}(_0x1fd984,_0x5735e7):this;},_0x2be28e={'api':function(_0x435523){const _0x45422d=_0x37e46c;_0x435523[_0x45422d(0x3b3c)][_0x45422d(0x25a9)]=_0x3b29d3;}};_0x41de3d['plugin'](_0x377e12),_0x41de3d[_0x37e46c(0xe15)](_0x12183a),_0x41de3d[_0x37e46c(0xe15)](_0x46c587),_0x41de3d[_0x37e46c(0xe15)](_0x1d2f58),_0x41de3d['plugin'](_0x2be28e);const _0x46cf2a=_0x41de3d,_0x332ef0=function(_0x20a4c8){const _0x2da766=_0x37e46c,{fromComparative:_0x2fbf69,fromSuperlative:_0x123363}=_0x20a4c8[_0x2da766(0x1578)][_0x2da766(0x21c9)][_0x2da766(0x5161)]['adjective'];let _0x18235e=_0x20a4c8[_0x2da766(0x4006)](_0x2da766(0x47d));return _0x20a4c8['has'](_0x2da766(0x48a0))?_0x2fbf69(_0x18235e,_0x20a4c8[_0x2da766(0x1556)]):_0x20a4c8[_0x2da766(0x3170)](_0x2da766(0x2a80))?_0x123363(_0x18235e,_0x20a4c8[_0x2da766(0x1556)]):_0x18235e;},_0xc2a495={'api':function(_0xf03469){const _0x211cfb=_0x37e46c;class _0x39646c extends _0xf03469{constructor(_0x142486,_0x4e95ea,_0x1a2838){const _0x2be811=a0_0x11e7;super(_0x142486,_0x4e95ea,_0x1a2838),this[_0x2be811(0x106d)]='Adjectives';}[_0x211cfb(0x3289)](_0x45fbae={}){const _0x332fae=_0x211cfb,{toAdverb:_0x37135b,toNoun:_0x3a5ec9,toSuperlative:_0x4b0ebc,toComparative:_0x3680c2}=this[_0x332fae(0x1578)]['two'][_0x332fae(0x5161)][_0x332fae(0x2b3f)];return _0x45fbae['normal']=!0x0,this['map'](_0x248c31=>{const _0x3fc048=_0x332fae;let _0x460e2c=_0x248c31[_0x3fc048(0x324a)]()['json'](_0x45fbae)[0x0]||{},_0x1579a2=_0x332ef0(_0x248c31);return 
_0x460e2c['adjective']={'adverb':_0x37135b(_0x1579a2,this[_0x3fc048(0x1556)]),'noun':_0x3a5ec9(_0x1579a2,this[_0x3fc048(0x1556)]),'superlative':_0x4b0ebc(_0x1579a2,this[_0x3fc048(0x1556)]),'comparative':_0x3680c2(_0x1579a2,this['model'])},_0x460e2c;},[]);}[_0x211cfb(0x48d1)](){const _0x3e818d=_0x211cfb;return this[_0x3e818d(0x5097)](_0x3e818d(0x187a))[_0x3e818d(0x1d1d)](this[_0x3e818d(0x1349)](_0x3e818d(0x55c)));}[_0x211cfb(0x2343)](_0x1edac7){const _0x10e6ed=_0x211cfb,{toComparative:_0x1aa72c,toSuperlative:_0x1fd8e9,toNoun:_0x3e856f,toAdverb:_0x419b2a}=this['methods'][_0x10e6ed(0x21c9)][_0x10e6ed(0x5161)][_0x10e6ed(0x2b3f)];return this[_0x10e6ed(0x152d)](_0x1edac7)['map'](_0x1d87d1=>{const _0x176373=_0x10e6ed;let _0x561526=_0x332ef0(_0x1d87d1);return{'Adjective':_0x561526,'Comparative':_0x1aa72c(_0x561526,this['model']),'Superlative':_0x1fd8e9(_0x561526,this[_0x176373(0x1556)]),'Noun':_0x3e856f(_0x561526,this['model']),'Adverb':_0x419b2a(_0x561526,this['model'])};},[]);}[_0x211cfb(0x2cd)](_0x2bd05e){const _0x3de141=_0x211cfb,{toComparative:_0x1fad18}=this[_0x3de141(0x1578)][_0x3de141(0x21c9)][_0x3de141(0x5161)][_0x3de141(0x2b3f)];return this['getNth'](_0x2bd05e)['map'](_0x3a3339=>{const _0x445ea4=_0x3de141;let _0x4b28b3=_0x332ef0(_0x3a3339),_0x532362=_0x1fad18(_0x4b28b3,this[_0x445ea4(0x1556)]);return _0x3a3339[_0x445ea4(0x1f1f)](_0x532362);});}[_0x211cfb(0x4c3a)](_0x1812ad){const _0x601352=_0x211cfb,{toSuperlative:_0x57139e}=this[_0x601352(0x1578)]['two'][_0x601352(0x5161)][_0x601352(0x2b3f)];return this['getNth'](_0x1812ad)['map'](_0x27629d=>{const _0x440750=_0x601352;let _0x3186f1=_0x332ef0(_0x27629d),_0x38e85c=_0x57139e(_0x3186f1,this[_0x440750(0x1556)]);return _0x27629d[_0x440750(0x1f1f)](_0x38e85c);});}[_0x211cfb(0xfb6)](_0x1f6f35){const _0x26d58a=_0x211cfb,{toAdverb:_0x53aed4}=this[_0x26d58a(0x1578)][_0x26d58a(0x21c9)][_0x26d58a(0x5161)][_0x26d58a(0x2b3f)];return this[_0x26d58a(0x152d)](_0x1f6f35)[_0x26d58a(0x4833)](_0x5d3d59=>{const _0x3c328=_0x26d58a;let 
_0x302d6f=_0x332ef0(_0x5d3d59),_0x4a6e29=_0x53aed4(_0x302d6f,this[_0x3c328(0x1556)]);return _0x5d3d59[_0x3c328(0x1f1f)](_0x4a6e29);});}[_0x211cfb(0x2931)](_0x57cdfb){const _0x1993e7=_0x211cfb,{toNoun:_0x4db062}=this['methods'][_0x1993e7(0x21c9)][_0x1993e7(0x5161)][_0x1993e7(0x2b3f)];return this[_0x1993e7(0x152d)](_0x57cdfb)[_0x1993e7(0x4833)](_0x52a1ca=>{const _0x1b2de1=_0x1993e7;let _0x177613=_0x332ef0(_0x52a1ca),_0x20364e=_0x4db062(_0x177613,this['model']);return _0x52a1ca[_0x1b2de1(0x1f1f)](_0x20364e);});}}_0xf03469[_0x211cfb(0x3b3c)][_0x211cfb(0x4c9b)]=function(_0x1f18e4){const _0x529779=_0x211cfb;let _0x140afa=this['match'](_0x529779(0x335e));return _0x140afa=_0x140afa[_0x529779(0x152d)](_0x1f18e4),new _0x39646c(_0x140afa[_0x529779(0x295)],_0x140afa[_0x529779(0x43e4)]);},_0xf03469[_0x211cfb(0x3b3c)]['superlatives']=function(_0x559c8d){const _0x2aaed3=_0x211cfb;let _0x2d6730=this[_0x2aaed3(0x2d96)](_0x2aaed3(0x2a80));return _0x2d6730=_0x2d6730[_0x2aaed3(0x152d)](_0x559c8d),new _0x39646c(_0x2d6730[_0x2aaed3(0x295)],_0x2d6730[_0x2aaed3(0x43e4)]);},_0xf03469['prototype']['comparatives']=function(_0x24e3bf){const _0x40bc90=_0x211cfb;let _0x375fb5=this[_0x40bc90(0x2d96)]('#Comparative');return _0x375fb5=_0x375fb5['getNth'](_0x24e3bf),new _0x39646c(_0x375fb5[_0x40bc90(0x295)],_0x375fb5[_0x40bc90(0x43e4)]);};}},_0x4af88c={'api':function(_0x2418d1){const _0x25f16e=_0x37e46c;class _0xa1ea89 extends _0x2418d1{constructor(_0x2bb0e0,_0x154ab0,_0x4165ff){const _0x2518f2=a0_0x11e7;super(_0x2bb0e0,_0x154ab0,_0x4165ff),this[_0x2518f2(0x106d)]='Adverbs';}[_0x25f16e(0x2343)](_0x503b1){const _0x5a7960=_0x25f16e;return this[_0x5a7960(0x152d)](_0x503b1)['map'](_0xbac420=>{const _0x28cb46=_0x5a7960;let _0x29904b=function(_0x523b41){const _0x7e53a2=a0_0x11e7;return 
_0x523b41[_0x7e53a2(0x23df)](_0x7e53a2(0x507b))[_0x7e53a2(0x4006)](_0x7e53a2(0x507b));}(_0xbac420);return{'Adverb':_0xbac420[_0x28cb46(0x4006)](_0x28cb46(0x47d)),'Adjective':_0x29904b};},[]);}[_0x25f16e(0x3289)](_0x496c1b={}){const _0x478ba0=_0x25f16e,_0x1f1dda=this[_0x478ba0(0x1578)][_0x478ba0(0x21c9)][_0x478ba0(0x5161)][_0x478ba0(0x2b3f)]['fromAdverb'];return _0x496c1b[_0x478ba0(0x47d)]=!0x0,this[_0x478ba0(0x4833)](_0x4ab725=>{const _0x5c7579=_0x478ba0;let _0x5a69ec=_0x4ab725[_0x5c7579(0x324a)]()[_0x5c7579(0x3289)](_0x496c1b)[0x0]||{};return _0x5a69ec['adverb']={'adjective':_0x1f1dda(_0x5a69ec[_0x5c7579(0x47d)])},_0x5a69ec;},[]);}}_0x2418d1[_0x25f16e(0x3b3c)][_0x25f16e(0x48d1)]=function(_0x207d21){const _0x57389a=_0x25f16e;let _0x2e0007=this['match'](_0x57389a(0x1a77));return _0x2e0007=_0x2e0007[_0x57389a(0x152d)](_0x207d21),new _0xa1ea89(_0x2e0007[_0x57389a(0x295)],_0x2e0007[_0x57389a(0x43e4)]);};}},_0x16b1c4=function(_0x227a09){const _0x39406d=_0x37e46c;let _0x256b25=this;_0x256b25=function(_0x5412ab){const _0x459d51=a0_0x11e7;let _0x553bf0=_0x5412ab[_0x459d51(0x47b1)]();return _0x553bf0=_0x553bf0['filter'](_0x315e6c=>_0x315e6c[_0x459d51(0x208a)]()>=0x3&&_0x315e6c['has']('#Verb')&&_0x315e6c[_0x459d51(0x3170)](_0x459d51(0xf35))),_0x5412ab['splitOn'](_0x553bf0);}(_0x256b25),_0x256b25=function(_0x2cdc98){const _0x1212f2=a0_0x11e7;let _0x172e37=_0x2cdc98[_0x1212f2(0x3ee2)]();return _0x172e37=_0x172e37[_0x1212f2(0x1465)](_0x2cdfef=>_0x2cdfef[_0x1212f2(0x208a)]()>=0x3&&_0x2cdfef[_0x1212f2(0x3170)](_0x1212f2(0x38b0))&&_0x2cdfef['has']('#Noun')),_0x2cdc98['splitOn'](_0x172e37);}(_0x256b25),_0x256b25=function(_0xc94913){const _0x39d460=a0_0x11e7;let _0x16fd7e=_0xc94913[_0x39d460(0x2d96)](_0x39d460(0x38c6));return _0x16fd7e=_0x16fd7e[_0x39d460(0x1465)](_0x2b2027=>{const _0x75a5a1=_0x39d460;if(0x1===_0x2b2027[_0x75a5a1(0x51f4)]('.')['wordCount']())return!0x1;if(0x1===_0x2b2027[_0x75a5a1(0xfbb)](_0x75a5a1(0x40c3))[_0x75a5a1(0x208a)]())return!0x1;let 
_0x1def2e=_0x2b2027[_0x75a5a1(0x4c27)]('.');return _0x1def2e=_0x1def2e['ifNo'](_0x75a5a1(0x413f)),_0x1def2e=_0x1def2e['ifNo']('@hasComma\x20(and|or)\x20.'),_0x1def2e=_0x1def2e[_0x75a5a1(0x385b)](_0x75a5a1(0x4bc9)),_0x1def2e=_0x1def2e[_0x75a5a1(0x385b)](_0x75a5a1(0x2433)),_0x1def2e=_0x1def2e[_0x75a5a1(0x385b)](_0x75a5a1(0x5127)),_0x1def2e=_0x1def2e[_0x75a5a1(0x385b)](_0x75a5a1(0xc18)),_0x1def2e[_0x75a5a1(0x2108)];}),_0xc94913[_0x39d460(0x319a)](_0x16fd7e);}(_0x256b25),_0x256b25=_0x256b25[_0x39406d(0x319a)](_0x39406d(0x3bf)),_0x256b25=_0x256b25[_0x39406d(0x319a)]('^#Pronoun\x20(said|says)'),_0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x39406d(0x39a4)),_0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x39406d(0x1730)),_0x256b25=_0x256b25['splitBefore'](_0x39406d(0x2415)),_0x256b25=_0x256b25['splitBefore'](_0x39406d(0xb4e)),_0x256b25=_0x256b25['splitBefore'](_0x39406d(0x4413)),_0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x39406d(0x26a)),_0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x39406d(0x4ff7)),_0x256b25=_0x256b25[_0x39406d(0x4c0)]('(whereas|whose)'),_0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x39406d(0x310c)),_0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x39406d(0xa4b));let _0x21f9ad=_0x256b25[_0x39406d(0x2d96)](_0x39406d(0x3bd6),0x0);_0x21f9ad['found']&&(_0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x21f9ad));let _0x5ed85a=_0x256b25['if'](_0x39406d(0xdcf))['match']('then');return _0x256b25=_0x256b25[_0x39406d(0x4c0)](_0x5ed85a),_0x39406d(0x4a80)==typeof _0x227a09&&(_0x256b25=_0x256b25[_0x39406d(0xf9e)](_0x227a09)),_0x256b25;},_0x1d3d5b=function(_0x5db8ef){const _0x521750=_0x37e46c;let _0x770eca=[],_0x1a304a=null;return _0x5db8ef[_0x521750(0xd84)]()['docs'][_0x521750(0xa21)](_0x2e4f28=>{_0x2e4f28['forEach'](_0x3445b0=>{const 
_0x14cd66=a0_0x11e7;_0x3445b0[_0x14cd66(0x1647)]&&_0x3445b0[_0x14cd66(0x1647)]===_0x1a304a?_0x770eca[_0x770eca[_0x14cd66(0x1b19)]-0x1][0x2]=_0x3445b0[_0x14cd66(0x3bb5)][0x1]+0x1:(_0x1a304a=_0x3445b0['chunk'],_0x770eca[_0x14cd66(0x1715)]([_0x3445b0['index'][0x0],_0x3445b0['index'][0x1],_0x3445b0[_0x14cd66(0x3bb5)][0x1]+0x1]));}),_0x1a304a=null;}),_0x5db8ef['update'](_0x770eca);},_0x7f00cf=function(_0x1287fc){const _0x57a96b=_0x37e46c;class _0x5d9b17 extends _0x1287fc{constructor(_0x1c69e1,_0x52c4b5,_0x57b784){const _0xf0916c=a0_0x11e7;super(_0x1c69e1,_0x52c4b5,_0x57b784),this[_0xf0916c(0x106d)]=_0xf0916c(0x527c);}[_0x57a96b(0x13be)](){const _0x5eafdc=_0x57a96b;return this['filter'](_0x209c0b=>_0x209c0b['has'](_0x5eafdc(0x45dc)));}['isNoun'](){const _0x3137f2=_0x57a96b;return this[_0x3137f2(0x1465)](_0x172ec2=>_0x172ec2[_0x3137f2(0x3170)](_0x3137f2(0x5035)));}[_0x57a96b(0x51f5)](){const _0x4579f9=_0x57a96b;return this[_0x4579f9(0x1465)](_0x27919a=>_0x27919a[_0x4579f9(0x3170)](_0x4579f9(0x3182)));}[_0x57a96b(0x8a9)](){const _0x362a6b=_0x57a96b;return this[_0x362a6b(0x1465)](_0x3b1f7e=>_0x3b1f7e[_0x362a6b(0x3170)](''));}[_0x57a96b(0x534)](){const _0x36ce53=_0x57a96b;return this[_0x36ce53(0x324a)]()['debug'](_0x36ce53(0x268d)),this;}[_0x57a96b(0x38d6)](_0x3be00d){const _0x111c67=_0x57a96b;let _0x3d708a=new _0x5d9b17(this[_0x111c67(0x295)],_0x3be00d);return _0x3d708a[_0x111c67(0x1aa7)]=this[_0x111c67(0x1aa7)],_0x3d708a;}}_0x1287fc['prototype'][_0x57a96b(0x268d)]=function(_0x487c34){const _0x3402af=_0x57a96b;let _0x1ebd04=_0x1d3d5b(this);return _0x1ebd04=_0x1ebd04[_0x3402af(0x152d)](_0x487c34),new _0x5d9b17(this[_0x3402af(0x295)],_0x1ebd04['pointer']);},_0x1287fc[_0x57a96b(0x3b3c)][_0x57a96b(0xd84)]=_0x16b1c4;},_0x2e27af={'this':_0x37e46c(0x1786),'then':_0x37e46c(0x2d7c)},_0x49a9a9=function(_0x55aa4d){const _0xa73b2f=_0x37e46c;for(let _0x32455f=0x0;_0x32455f<_0x55aa4d[_0xa73b2f(0x1b19)];_0x32455f+=0x1)for(let 
_0x24825b=0x0;_0x24825b<_0x55aa4d[_0x32455f]['length'];_0x24825b+=0x1){let _0x46929d=_0x55aa4d[_0x32455f][_0x24825b];!0x0!==_0x2e27af[_0xa73b2f(0x2427)](_0x46929d[_0xa73b2f(0x47d)])?_0x46929d[_0xa73b2f(0x521a)]['has'](_0xa73b2f(0x487b))?_0x46929d[_0xa73b2f(0x1647)]=_0xa73b2f(0x487b):_0x46929d[_0xa73b2f(0x521a)][_0xa73b2f(0x3170)]('Noun')||_0x46929d[_0xa73b2f(0x521a)]['has'](_0xa73b2f(0x3b3e))||_0x46929d[_0xa73b2f(0x521a)][_0xa73b2f(0x3170)]('Value')?_0x46929d[_0xa73b2f(0x1647)]=_0xa73b2f(0x1786):_0x46929d[_0xa73b2f(0x521a)]['has'](_0xa73b2f(0x3609))&&(_0x46929d[_0xa73b2f(0x1647)]=_0xa73b2f(0x2d7c)):_0x46929d['chunk']=_0x2e27af[_0x46929d[_0xa73b2f(0x47d)]];}},_0x26ea79=function(_0x4484ef){const _0x3f70b8=_0x37e46c;for(let _0x38cbd0=0x0;_0x38cbd0<_0x4484ef[_0x3f70b8(0x1b19)];_0x38cbd0+=0x1)for(let _0x1f4e6a=0x0;_0x1f4e6a<_0x4484ef[_0x38cbd0]['length'];_0x1f4e6a+=0x1){let _0x5e8a2d=_0x4484ef[_0x38cbd0][_0x1f4e6a];if(_0x5e8a2d[_0x3f70b8(0x1647)])continue;let _0x129dcc=_0x4484ef[_0x38cbd0][_0x1f4e6a+0x1],_0x5add47=_0x4484ef[_0x38cbd0][_0x1f4e6a-0x1];if(_0x5e8a2d[_0x3f70b8(0x521a)][_0x3f70b8(0x3170)](_0x3f70b8(0x4972))){if(_0x5add47&&_0x5add47[_0x3f70b8(0x521a)]['has'](_0x3f70b8(0x49fa))){_0x5e8a2d[_0x3f70b8(0x1647)]=_0x3f70b8(0x4972);continue;}if(_0x5add47&&_0x5add47[_0x3f70b8(0x521a)]['has']('Determiner')){_0x5e8a2d[_0x3f70b8(0x1647)]=_0x3f70b8(0x1786);continue;}if(_0x129dcc&&_0x129dcc[_0x3f70b8(0x521a)][_0x3f70b8(0x3170)](_0x3f70b8(0x1786))){_0x5e8a2d[_0x3f70b8(0x1647)]='Noun';continue;}}else{if(_0x5e8a2d[_0x3f70b8(0x521a)][_0x3f70b8(0x3170)](_0x3f70b8(0x2cbd))||_0x5e8a2d[_0x3f70b8(0x521a)][_0x3f70b8(0x3170)](_0x3f70b8(0x90f))){if(_0x5add47&&_0x5add47[_0x3f70b8(0x521a)][_0x3f70b8(0x3170)]('Adjective')){_0x5e8a2d[_0x3f70b8(0x1647)]=_0x3f70b8(0x4972);continue;}if(_0x5add47&&_0x5add47['tags']['has']('Verb')){_0x5e8a2d[_0x3f70b8(0x1647)]=_0x3f70b8(0x487b);continue;}if(_0x129dcc&&_0x129dcc[_0x3f70b8(0x521a)][_0x3f70b8(0x3170)](_0x3f70b8(0x4972))){_0x5e8a2d[_0x3f70b8(0x1647)
]='Adjective';continue;}if(_0x129dcc&&_0x129dcc[_0x3f70b8(0x521a)][_0x3f70b8(0x3170)]('Verb')){_0x5e8a2d['chunk']=_0x3f70b8(0x487b);continue;}}}}},_0x1d2ee4=[{'match':_0x37e46c(0x2783),'group':0x0,'chunk':_0x37e46c(0x2d7c)},{'match':_0x37e46c(0x4026),'group':0x0,'chunk':_0x37e46c(0x2d7c)},{'match':_0x37e46c(0x6ce),'group':0x0,'chunk':_0x37e46c(0x2d7c)},{'match':_0x37e46c(0x1580),'group':0x0,'chunk':_0x37e46c(0x4972)},{'match':_0x37e46c(0xcc6),'chunk':_0x37e46c(0x4972)},{'match':_0x37e46c(0x4f84),'chunk':_0x37e46c(0x487b)},{'match':_0x37e46c(0x4bef),'chunk':_0x37e46c(0x487b)},{'match':_0x37e46c(0x4468),'chunk':_0x37e46c(0x487b)},{'match':_0x37e46c(0x2008),'chunk':'Verb'},{'match':_0x37e46c(0xc76),'chunk':_0x37e46c(0x487b)},{'match':_0x37e46c(0x1cda),'chunk':_0x37e46c(0x487b)},{'match':_0x37e46c(0x737),'chunk':'Verb'},{'match':'#Verb\x20[to]\x20#Adverb?\x20#Infinitive','group':0x0,'chunk':_0x37e46c(0x487b)},{'match':'[#Preposition]\x20#Gerund','group':0x0,'chunk':_0x37e46c(0x487b)},{'match':_0x37e46c(0x13f0),'group':0x0,'chunk':_0x37e46c(0x487b)},{'match':_0x37e46c(0x35d8),'chunk':_0x37e46c(0x1786)},{'match':_0x37e46c(0xf79),'chunk':_0x37e46c(0x1786)},{'match':'the\x20[#Adjective]\x20#Noun','chunk':_0x37e46c(0x1786)},{'match':_0x37e46c(0x2951),'chunk':_0x37e46c(0x1786)},{'match':_0x37e46c(0x41cc),'group':0x0,'chunk':_0x37e46c(0x2d7c)},{'match':_0x37e46c(0x49c9),'notIf':_0x37e46c(0x1cb6),'chunk':_0x37e46c(0x1786)}];let _0x4a5868=null;const _0x22d79a=function(_0x92c9b0,_0x39a08a,_0x11d12d){const _0x525428=_0x37e46c,{methods:_0x25bc4b}=_0x11d12d;_0x4a5868=_0x4a5868||_0x25bc4b[_0x525428(0x1d8a)][_0x525428(0x19e0)](_0x1d2ee4,_0x11d12d),_0x92c9b0[_0x525428(0xd6a)](_0x4a5868);},_0x91b030=function(_0x2352a0,_0x3e9770){const _0x1ecfc1=_0x37e46c;(_0x1ecfc1(0x1daa)!=typeof 
process&&process[_0x1ecfc1(0xe1a)]?process[_0x1ecfc1(0xe1a)]:self[_0x1ecfc1(0xe1a)]||{})['DEBUG_CHUNKS']&&(_0x2352a0['normal']+'\x27')[_0x1ecfc1(0xe80)](0x8),_0x2352a0['chunk']=_0x3e9770;},_0x5b559c=function(_0x572f0a){const _0x31a8d8=_0x37e46c;for(let _0x3e057f=0x0;_0x3e057f<_0x572f0a['length'];_0x3e057f+=0x1)for(let _0x331a0f=0x0;_0x331a0f<_0x572f0a[_0x3e057f]['length'];_0x331a0f+=0x1){let _0x291c1d=_0x572f0a[_0x3e057f][_0x331a0f];void 0x0===_0x291c1d['chunk']&&(_0x291c1d[_0x31a8d8(0x521a)]['has'](_0x31a8d8(0x37f1))||_0x291c1d[_0x31a8d8(0x521a)][_0x31a8d8(0x3170)](_0x31a8d8(0x326f))?_0x91b030(_0x291c1d,'Pivot'):_0x291c1d[_0x31a8d8(0x521a)][_0x31a8d8(0x3170)]('Adverb')?_0x91b030(_0x291c1d,_0x31a8d8(0x487b)):_0x291c1d['chunk']=_0x31a8d8(0x1786));}},_0x1bd2b9=function(_0x593675){const _0x1eedea=_0x37e46c;let _0x19ee52=[],_0x447471=null;_0x593675[_0x1eedea(0xa21)](_0x4d7538=>{const _0x538e6d=_0x1eedea;for(let _0x297e55=0x0;_0x297e55<_0x4d7538[_0x538e6d(0x1b19)];_0x297e55+=0x1){let _0x45b234=_0x4d7538[_0x297e55];_0x447471&&_0x45b234['chunk']===_0x447471?_0x19ee52[_0x19ee52['length']-0x1][_0x538e6d(0x4a03)][_0x538e6d(0x1715)](_0x45b234):(_0x19ee52['push']({'chunk':_0x45b234[_0x538e6d(0x1647)],'terms':[_0x45b234]}),_0x447471=_0x45b234['chunk']);}}),_0x19ee52['forEach'](_0x2c1753=>{const _0x7390d4=_0x1eedea;if(_0x7390d4(0x487b)===_0x2c1753['chunk']){const _0x36176d=_0x2c1753[_0x7390d4(0x4a03)][_0x7390d4(0x5144)](_0x1dc476=>_0x1dc476[_0x7390d4(0x521a)][_0x7390d4(0x3170)](_0x7390d4(0x487b)));_0x36176d||_0x2c1753['terms'][_0x7390d4(0xa21)](_0x3bf483=>_0x3bf483['chunk']=null);}});},_0x32d9d4={'chunks':function(_0x49e37f){const {document:_0x32224d,world:_0x50b178}=_0x49e37f;_0x49a9a9(_0x32224d),_0x26ea79(_0x32224d),_0x22d79a(_0x49e37f,_0x32224d,_0x50b178),_0x5b559c(_0x32224d,_0x50b178),_0x1bd2b9(_0x32224d,_0x50b178);}},_0x5ea65a={'compute':_0x32d9d4,'api':_0x7f00cf,'hooks':[_0x37e46c(0x268d)]},_0x60eda5=/\./g,_0x2ec616=function(_0x4ffae9){const _0x2ca74d=_0x37e46c;class 
_0x35a156 extends _0x4ffae9{constructor(_0x487e9d,_0x4114c1,_0x13d2cb){const _0x5ab013=a0_0x11e7;super(_0x487e9d,_0x4114c1,_0x13d2cb),this[_0x5ab013(0x106d)]=_0x5ab013(0x45b5);}[_0x2ca74d(0x9a5)](){const _0x267298=_0x2ca74d;return this[_0x267298(0x204b)][_0x267298(0xa21)](_0x23b712=>{const _0x265d37=_0x267298;_0x23b712[_0x265d37(0xa21)](_0x4915d8=>{const _0x2fd99e=_0x265d37;_0x4915d8[_0x2fd99e(0x4006)]=_0x4915d8[_0x2fd99e(0x4006)][_0x2fd99e(0x741)](_0x60eda5,''),_0x4915d8['normal']=_0x4915d8['normal'][_0x2fd99e(0x741)](_0x60eda5,'');});}),this;}['addPeriods'](){const _0x4139c4=_0x2ca74d;return this[_0x4139c4(0x204b)][_0x4139c4(0xa21)](_0x5b7d6c=>{const _0x2f9943=_0x4139c4;_0x5b7d6c[_0x2f9943(0xa21)](_0x3c8254=>{const _0x105d4a=_0x2f9943;_0x3c8254[_0x105d4a(0x4006)]=_0x3c8254['text'][_0x105d4a(0x741)](_0x60eda5,''),_0x3c8254[_0x105d4a(0x47d)]=_0x3c8254[_0x105d4a(0x47d)]['replace'](_0x60eda5,''),_0x3c8254[_0x105d4a(0x4006)]=_0x3c8254[_0x105d4a(0x4006)][_0x105d4a(0x1117)]('')[_0x105d4a(0x3541)]('.')+'.',_0x3c8254['normal']=_0x3c8254[_0x105d4a(0x47d)]['split']('')['join']('.')+'.';});}),this;}}_0x4ffae9['prototype'][_0x2ca74d(0x4a1d)]=function(_0x406463){const _0x53b5bc=_0x2ca74d;let _0x448db6=this[_0x53b5bc(0x2d96)](_0x53b5bc(0x505c));return _0x448db6=_0x448db6[_0x53b5bc(0x152d)](_0x406463),new _0x35a156(_0x448db6['document'],_0x448db6[_0x53b5bc(0x43e4)]);};},_0x4e159a=/\(/,_0x2077e7=/\)/,_0x392d0a=function(_0xd64f8b,_0x5f044e){const _0x1528a9=_0x37e46c;for(;_0x5f044e<_0xd64f8b[_0x1528a9(0x1b19)];_0x5f044e+=0x1)if(_0xd64f8b[_0x5f044e][_0x1528a9(0x24ce)]&&_0x2077e7[_0x1528a9(0x1769)](_0xd64f8b[_0x5f044e][_0x1528a9(0x24ce)])){let [,_0x1fe09a]=_0xd64f8b[_0x5f044e]['index'];return _0x1fe09a=_0x1fe09a||0x0,_0x1fe09a;}return null;},_0x4bcc15=function(_0x41f329){const _0x8dc610=_0x37e46c;class _0x355d1c extends _0x41f329{constructor(_0x44795d,_0x5e3935,_0xb7176c){const 
_0x5ef60c=a0_0x11e7;super(_0x44795d,_0x5e3935,_0xb7176c),this[_0x5ef60c(0x106d)]=_0x5ef60c(0x51f8);}[_0x8dc610(0x9a5)](){return function(_0x3b8fc1){const _0x1a91d5=a0_0x11e7;return _0x3b8fc1['docs'][_0x1a91d5(0xa21)](_0x2b5aac=>{const _0x3edbb9=_0x1a91d5;_0x2b5aac[0x0][_0x3edbb9(0x1228)]=_0x2b5aac[0x0][_0x3edbb9(0x1228)][_0x3edbb9(0x741)](_0x4e159a,'');let _0x572fcc=_0x2b5aac[_0x2b5aac[_0x3edbb9(0x1b19)]-0x1];_0x572fcc[_0x3edbb9(0x24ce)]=_0x572fcc[_0x3edbb9(0x24ce)][_0x3edbb9(0x741)](_0x2077e7,'');}),_0x3b8fc1;}(this);}}_0x41f329['prototype'][_0x8dc610(0x47b1)]=function(_0x5a3250){const _0x5b9a3d=_0x8dc610;let _0x3dfe09=function(_0x57feb8){const _0x94d896=a0_0x11e7;let _0x114a68=[];return _0x57feb8['docs'][_0x94d896(0xa21)](_0x37aebc=>{const _0x14ed13=_0x94d896;for(let _0x38dbb1=0x0;_0x38dbb1<_0x37aebc[_0x14ed13(0x1b19)];_0x38dbb1+=0x1){let _0x3e334b=_0x37aebc[_0x38dbb1];if(_0x3e334b[_0x14ed13(0x1228)]&&_0x4e159a['test'](_0x3e334b['pre'])){let _0x4de8bc=_0x392d0a(_0x37aebc,_0x38dbb1);if(null!==_0x4de8bc){let [_0xbda7f6,_0x1c475e]=_0x37aebc[_0x38dbb1][_0x14ed13(0x3bb5)];_0x114a68[_0x14ed13(0x1715)]([_0xbda7f6,_0x1c475e,_0x4de8bc+0x1,_0x37aebc[_0x38dbb1]['id']]),_0x38dbb1=_0x4de8bc;}}}}),_0x57feb8[_0x94d896(0x38d6)](_0x114a68);}(this);return _0x3dfe09=_0x3dfe09[_0x5b9a3d(0x152d)](_0x5a3250),new _0x355d1c(_0x3dfe09[_0x5b9a3d(0x295)],_0x3dfe09['pointer']);};},_0x27937c=/'s$/,_0x21a3bc=function(_0x16459f){const _0x1f2763=_0x37e46c;class _0x59aea0 extends _0x16459f{constructor(_0x16cfe8,_0x15f993,_0x5c01c6){const _0x502961=a0_0x11e7;super(_0x16cfe8,_0x15f993,_0x5c01c6),this[_0x502961(0x106d)]='Possessives';}[_0x1f2763(0x9a5)](){return this['docs']['forEach'](_0x17c3ba=>{_0x17c3ba['forEach'](_0x5a98f7=>{const _0x23c47e=a0_0x11e7;_0x5a98f7['text']=_0x5a98f7[_0x23c47e(0x4006)]['replace'](_0x27937c,''),_0x5a98f7['normal']=_0x5a98f7[_0x23c47e(0x47d)][_0x23c47e(0x741)](_0x27937c,'');});}),this;}}_0x16459f[_0x1f2763(0x3b3c)]['possessives']=function(_0x472dcd){const 
_0x333d5e=_0x1f2763;let _0x285391=function(_0xb070ab){const _0x11b859=a0_0x11e7;let _0x35fc98=_0xb070ab[_0x11b859(0x2d96)](_0x11b859(0x2c4b));return _0x35fc98[_0x11b859(0x3170)](_0x11b859(0x47c9))&&(_0x35fc98=_0x35fc98[_0x11b859(0x51f4)](_0x11b859(0x5214))),_0x35fc98['has']('#Place')&&(_0x35fc98=_0x35fc98['growLeft']('#Place+')),_0x35fc98['has']('#Organization')&&(_0x35fc98=_0x35fc98[_0x11b859(0x51f4)](_0x11b859(0x2b1e))),_0x35fc98;}(this);return _0x285391=_0x285391[_0x333d5e(0x152d)](_0x472dcd),new _0x59aea0(_0x285391['document'],_0x285391['pointer']);};},_0x2ac566={'\x22':'\x22','"':'"','\x27':'\x27','“':'”','‘':'’','‟':'”','‛':'’','„':'”','⹂':'”','‚':'’','«':'»','‹':'›','‵':'′','‶':'″','‷':'‴','〝':'〞','`':'´','〟':'〞'},_0x11896e=RegExp('['+Object[_0x37e46c(0x1ea9)](_0x2ac566)[_0x37e46c(0x3541)]('')+']'),_0x5e2276=RegExp('['+Object[_0x37e46c(0x1fae)](_0x2ac566)[_0x37e46c(0x3541)]('')+']'),_0x592456=function(_0x54440e,_0x30bde7){const _0x4d2526=_0x37e46c,_0x11e227=_0x54440e[_0x30bde7][_0x4d2526(0x1228)][_0x4d2526(0x2d96)](_0x11896e)[0x0]||'';if(!_0x11e227||!_0x2ac566[_0x11e227])return null;const _0x5f1377=_0x2ac566[_0x11e227];for(;_0x30bde7<_0x54440e['length'];_0x30bde7+=0x1)if(_0x54440e[_0x30bde7]['post']&&_0x54440e[_0x30bde7][_0x4d2526(0x24ce)]['match'](_0x5f1377))return _0x30bde7;return null;},_0x5932bf=function(_0x1538ec){const _0x40879c=_0x37e46c;class _0x4459c3 extends _0x1538ec{constructor(_0x1060a6,_0x5e64c3,_0x26a69f){const _0x1f851e=a0_0x11e7;super(_0x1060a6,_0x5e64c3,_0x26a69f),this[_0x1f851e(0x106d)]=_0x1f851e(0x51f8);}['strip'](){return function(_0x138477){const _0x5d5931=a0_0x11e7;_0x138477[_0x5d5931(0x204b)][_0x5d5931(0xa21)](_0x2a8cb9=>{const _0x445e96=_0x5d5931;_0x2a8cb9[0x0][_0x445e96(0x1228)]=_0x2a8cb9[0x0][_0x445e96(0x1228)][_0x445e96(0x741)](_0x11896e,'');let 
_0x5cdce0=_0x2a8cb9[_0x2a8cb9[_0x445e96(0x1b19)]-0x1];_0x5cdce0[_0x445e96(0x24ce)]=_0x5cdce0[_0x445e96(0x24ce)][_0x445e96(0x741)](_0x5e2276,'');});}(this);}}_0x1538ec[_0x40879c(0x3b3c)][_0x40879c(0x3ee2)]=function(_0x5a0bc7){const _0x4f5092=_0x40879c;let _0x2eef1a=function(_0x355b09){const _0x5ec7a9=a0_0x11e7;let _0x3dbd13=[];return _0x355b09[_0x5ec7a9(0x204b)][_0x5ec7a9(0xa21)](_0x48cd8d=>{const _0x15f6a7=_0x5ec7a9;for(let _0x177e8c=0x0;_0x177e8c<_0x48cd8d[_0x15f6a7(0x1b19)];_0x177e8c+=0x1){let _0x585319=_0x48cd8d[_0x177e8c];if(_0x585319['pre']&&_0x11896e['test'](_0x585319[_0x15f6a7(0x1228)])){let _0x163476=_0x592456(_0x48cd8d,_0x177e8c);if(null!==_0x163476){let [_0x2f007f,_0x292afb]=_0x48cd8d[_0x177e8c][_0x15f6a7(0x3bb5)];_0x3dbd13[_0x15f6a7(0x1715)]([_0x2f007f,_0x292afb,_0x163476+0x1,_0x48cd8d[_0x177e8c]['id']]),_0x177e8c=_0x163476;}}}}),_0x355b09['update'](_0x3dbd13);}(this);return _0x2eef1a=_0x2eef1a[_0x4f5092(0x152d)](_0x5a0bc7),new _0x4459c3(_0x2eef1a[_0x4f5092(0x295)],_0x2eef1a[_0x4f5092(0x43e4)]);};},_0x2b9548=function(_0x197cf5){const _0x53dfda=_0x37e46c;let _0x330a9c=this['splitAfter'](_0x53dfda(0x38c6));return _0x330a9c=_0x330a9c[_0x53dfda(0x2d96)]('#PhoneNumber+'),_0x330a9c=_0x330a9c['getNth'](_0x197cf5),_0x330a9c;},_0x1504c0=[[_0x37e46c(0x3d3b),_0x37e46c(0x1363)],[_0x37e46c(0x3330),_0x37e46c(0xb8d)],[_0x37e46c(0x2364),_0x37e46c(0xd5c)],[_0x37e46c(0x2c20),_0x37e46c(0x675)],[_0x37e46c(0x4ca1),'#Emoticon'],[_0x37e46c(0xd4e),_0x37e46c(0x1a14)],[_0x37e46c(0x2c38),'#Url'],[_0x37e46c(0x4f68),_0x37e46c(0x4d4e)],[_0x37e46c(0x4cf1),_0x37e46c(0x8b6)],[_0x37e46c(0x3b89),'#Abbreviation'],[_0x37e46c(0x5136),_0x37e46c(0xb56)]];let _0x371668=[[_0x37e46c(0x261c),_0x37e46c(0x2c20)],['atmentions',_0x37e46c(0xd4e)]];const _0xc08483=function(_0x196460){const _0x5b0d6e=_0x37e46c;_0x1504c0[_0x5b0d6e(0xa21)](_0x12bf4e=>{const _0x4a3cfe=_0x5b0d6e;_0x196460[_0x4a3cfe(0x3b3c)][_0x12bf4e[0x0]]=function(_0x1ef9ff){const _0x594fa4=_0x4a3cfe;let 
_0x3ea3be=this[_0x594fa4(0x2d96)](_0x12bf4e[0x1]);return _0x594fa4(0x4a80)==typeof _0x1ef9ff?_0x3ea3be[_0x594fa4(0xf9e)](_0x1ef9ff):_0x3ea3be;};}),_0x196460['prototype'][_0x5b0d6e(0x28d9)]=_0x2b9548,_0x371668[_0x5b0d6e(0xa21)](_0x5a6c=>{const _0x4cd0d3=_0x5b0d6e;_0x196460[_0x4cd0d3(0x3b3c)][_0x5a6c[0x0]]=_0x196460[_0x4cd0d3(0x3b3c)][_0x5a6c[0x1]];});},_0x13c872={'api':function(_0x1ac784){_0x2ec616(_0x1ac784),_0x4bcc15(_0x1ac784),_0x21a3bc(_0x1ac784),_0x5932bf(_0x1ac784),_0xc08483(_0x1ac784);}},_0x29248d=function(_0x4e044a,_0x1fbbe0){const _0x3f7178=_0x37e46c;_0x4e044a['docs'][_0x3f7178(0xa21)](_0x2647fb=>{const _0x27b98f=_0x3f7178;_0x2647fb[_0x27b98f(0xa21)](_0x1fbbe0);});},_0x14a212={'case':_0x3b8e32=>{_0x29248d(_0x3b8e32,_0xa62853=>{const _0x5da957=a0_0x11e7;_0xa62853[_0x5da957(0x4006)]=_0xa62853[_0x5da957(0x4006)][_0x5da957(0x6e8)]();});},'unicode':_0x77297b=>{const _0x43e3fa=_0x37e46c,_0x5e5932=_0x77297b[_0x43e3fa(0x4657)],_0x3bbdd7=_0x5e5932['methods']['one']['killUnicode'];_0x29248d(_0x77297b,_0x964663=>_0x964663[_0x43e3fa(0x4006)]=_0x3bbdd7(_0x964663[_0x43e3fa(0x4006)],_0x5e5932));},'whitespace':_0x19b6b7=>{_0x29248d(_0x19b6b7,_0x34fe15=>{const _0x46b35b=a0_0x11e7;_0x34fe15['post']=_0x34fe15[_0x46b35b(0x24ce)][_0x46b35b(0x741)](/\s+/g,'\x20'),_0x34fe15[_0x46b35b(0x24ce)]=_0x34fe15[_0x46b35b(0x24ce)][_0x46b35b(0x741)](/\s([.,?!:;])/g,'$1'),_0x34fe15[_0x46b35b(0x1228)]=_0x34fe15[_0x46b35b(0x1228)][_0x46b35b(0x741)](/\s+/g,'');});},'punctuation':_0xa53ea9=>{const _0xf4a486=_0x37e46c;_0x29248d(_0xa53ea9,_0x5ea2b7=>{const 
_0x2ca773=a0_0x11e7;_0x5ea2b7[_0x2ca773(0x24ce)]=_0x5ea2b7[_0x2ca773(0x24ce)]['replace'](/[–—-]/g,'\x20'),_0x5ea2b7['post']=_0x5ea2b7[_0x2ca773(0x24ce)][_0x2ca773(0x741)](/[,:;]/g,''),_0x5ea2b7[_0x2ca773(0x24ce)]=_0x5ea2b7[_0x2ca773(0x24ce)][_0x2ca773(0x741)](/\.{2,}/g,''),_0x5ea2b7[_0x2ca773(0x24ce)]=_0x5ea2b7['post'][_0x2ca773(0x741)](/\?{2,}/g,'?'),_0x5ea2b7['post']=_0x5ea2b7[_0x2ca773(0x24ce)][_0x2ca773(0x741)](/!{2,}/g,'!'),_0x5ea2b7['post']=_0x5ea2b7[_0x2ca773(0x24ce)][_0x2ca773(0x741)](/\?!+/g,'?');});let _0x47427b=_0xa53ea9[_0xf4a486(0x204b)],_0x2b64ba=_0x47427b[_0x47427b[_0xf4a486(0x1b19)]-0x1];if(_0x2b64ba&&_0x2b64ba['length']>0x0){let _0x35a0e9=_0x2b64ba[_0x2b64ba[_0xf4a486(0x1b19)]-0x1];_0x35a0e9[_0xf4a486(0x24ce)]=_0x35a0e9[_0xf4a486(0x24ce)][_0xf4a486(0x741)](/ /g,'');}},'contractions':_0x31039a=>{const _0x3d59ba=_0x37e46c;_0x31039a['contractions']()[_0x3d59ba(0x2d1c)]();},'acronyms':_0x55900e=>{const _0x334ba1=_0x37e46c;_0x55900e[_0x334ba1(0x4a1d)]()['strip']();},'parentheses':_0x3e00c4=>{_0x3e00c4['parentheses']()['strip']();},'possessives':_0xe93bb5=>{const _0xdb9827=_0x37e46c;_0xe93bb5['possessives']()[_0xdb9827(0x9a5)]();},'quotations':_0x1117ec=>{const _0x4b0465=_0x37e46c;_0x1117ec[_0x4b0465(0x3ee2)]()[_0x4b0465(0x9a5)]();},'emoji':_0x372ba5=>{const _0x4eb45f=_0x37e46c;_0x372ba5[_0x4eb45f(0x261c)]()[_0x4eb45f(0x42a1)]();},'honorifics':_0x46d56d=>{const _0x210575=_0x37e46c;_0x46d56d[_0x210575(0x2d96)](_0x210575(0x1a9b))['honorifics']()[_0x210575(0x42a1)]();},'adverbs':_0x238949=>{const _0x476875=_0x37e46c;_0x238949[_0x476875(0x48d1)]()[_0x476875(0x42a1)]();},'nouns':_0x3bbf6c=>{const _0x57e505=_0x37e46c;_0x3bbf6c[_0x57e505(0x268a)]()[_0x57e505(0x164b)]();},'verbs':_0xc681ff=>{const _0x1d43fc=_0x37e46c;_0xc681ff[_0x1d43fc(0x34d1)]()[_0x1d43fc(0x25f2)]();},'numbers':_0xb6a904=>{_0xb6a904['numbers']()['toNumber']();},'debullet':_0x8c76a3=>{const _0xe3540e=_0x37e46c,_0x419e15=/^\s*([-–—*•])\s*$/;return 
_0x8c76a3[_0xe3540e(0x204b)][_0xe3540e(0xa21)](_0x3aa757=>{const _0x14f09a=_0xe3540e;_0x419e15[_0x14f09a(0x1769)](_0x3aa757[0x0][_0x14f09a(0x1228)])&&(_0x3aa757[0x0][_0x14f09a(0x1228)]=_0x3aa757[0x0]['pre']['replace'](_0x419e15,''));}),_0x8c76a3;}},_0x221a04=_0x48aafc=>_0x48aafc[_0x37e46c(0x1117)]('|')[_0x37e46c(0x24d8)]((_0x2434ec,_0x1f6588)=>(_0x2434ec[_0x1f6588]=!0x0,_0x2434ec),{}),_0x3131fe=_0x37e46c(0x320c),_0x478a21=_0x37e46c(0xcd6),_0x31e694={'light':_0x221a04(_0x3131fe),'medium':_0x221a04(_0x3131fe+_0x478a21),'heavy':_0x221a04(_0x3131fe+_0x478a21+_0x37e46c(0x39f))},_0x1ff736={'api':function(_0x430646){const _0x43318d=_0x37e46c;_0x430646[_0x43318d(0x3b3c)][_0x43318d(0x2429)]=function(_0x1dd744=_0x43318d(0x4b25)){const _0x447b81=_0x43318d;return'string'==typeof _0x1dd744&&(_0x1dd744=_0x31e694[_0x1dd744]),Object['keys'](_0x1dd744)[_0x447b81(0xa21)](_0x289f3b=>{const _0x406a59=_0x447b81;_0x14a212[_0x406a59(0x2427)](_0x289f3b)&&_0x14a212[_0x289f3b](this,_0x1dd744[_0x289f3b]);}),this;};}},_0x5b5c2d=function(_0x3475b0){const _0x1fc3fe=_0x37e46c;let _0x34ccba=_0x3475b0[_0x1fc3fe(0xd84)]()[_0x1fc3fe(0x2d96)](_0x1fc3fe(0x5035)),_0x12e226=_0x34ccba[_0x1fc3fe(0x2d96)](_0x1fc3fe(0x38c6));return 
_0x12e226=_0x12e226[_0x1fc3fe(0xc1a)](_0x1fc3fe(0x1248)),_0x12e226[_0x1fc3fe(0x2108)]&&(_0x34ccba=_0x34ccba[_0x1fc3fe(0x319a)](_0x12e226)),_0x34ccba=_0x34ccba[_0x1fc3fe(0x3a4d)]('#Expression'),_0x34ccba=_0x34ccba['splitOn']('(he|she|we|you|they|i)'),_0x34ccba=_0x34ccba['splitOn'](_0x1fc3fe(0x492),0x0),_0x34ccba=_0x34ccba[_0x1fc3fe(0x3a4d)](_0x1fc3fe(0x1d76),0x0),_0x34ccba=_0x34ccba[_0x1fc3fe(0x4c0)]('#Noun\x20[(the|a|an)]\x20#Adjective?\x20#Noun',0x0),_0x34ccba=_0x34ccba['splitOn']('[(here|there)]\x20#Noun',0x0),_0x34ccba=_0x34ccba[_0x1fc3fe(0x3a4d)](_0x1fc3fe(0x1f28),0x0),_0x34ccba=_0x34ccba[_0x1fc3fe(0x4c0)]('(our|my|their|your)'),_0x34ccba=_0x34ccba[_0x1fc3fe(0x3a4d)](_0x1fc3fe(0x3700),0x0),_0x34ccba=_0x34ccba['if']('#Noun'),_0x34ccba;},_0x2356f9=[_0x37e46c(0x1349),'although',_0x37e46c(0x1227),'as\x20long\x20as','as',_0x37e46c(0x2fb6),'before','even\x20if','even\x20though','ever\x20since','if','in\x20order\x20that',_0x37e46c(0x30a5),_0x37e46c(0x1de2),_0x37e46c(0x2530),'than',_0x37e46c(0x3a9c),_0x37e46c(0xc2e),_0x37e46c(0x26b1),_0x37e46c(0x30d6),_0x37e46c(0x3fcc),_0x37e46c(0x3d5f),_0x37e46c(0x191b),_0x37e46c(0x3cdd),'where',_0x37e46c(0x2a30),'wherever','whether',_0x37e46c(0x34e9),_0x37e46c(0x27e8),_0x37e46c(0x3e11),_0x37e46c(0x42ee),'whom',_0x37e46c(0x20c0),'whose'],_0xc4c55c=function(_0x46100e){const _0x5a0bac=_0x37e46c;if(_0x46100e[_0x5a0bac(0x5097)](_0x5a0bac(0x4c46))[_0x5a0bac(0x2108)])return!0x0;if(!_0x46100e[_0x5a0bac(0x5097)]()['found'])return!0x1;for(let _0x244302=0x0;_0x244302<_0x2356f9[_0x5a0bac(0x1b19)];_0x244302+=0x1)if(_0x46100e[_0x5a0bac(0x3170)](_0x2356f9[_0x244302]))return!0x0;return!0x1;},_0x2a3664=function(_0x50e663,_0x1837ed){const 
_0x347691=_0x37e46c;if(_0x50e663['has'](_0x347691(0x3b57)))return!0x0;if(_0x50e663[_0x347691(0x3170)](_0x347691(0x1d95)))return!0x0;if(_0x50e663[_0x347691(0x3170)](_0x347691(0x1cd7)))return!0x0;if(!0x0===_0x1837ed[_0x347691(0x3170)](_0x347691(0x225c)))return!0x1;if(_0x50e663[_0x347691(0x3170)](_0x347691(0xd6f)))return!0x1;let _0x5b2ebb=_0x1837ed[_0x347691(0x4006)](_0x347691(0x47d));return _0x5b2ebb['length']>0x3&&_0x5b2ebb[_0x347691(0x2a85)]('s')&&!_0x5b2ebb[_0x347691(0x2a85)]('ss');},_0x5a4d0d=function(_0x5daa65){const _0x4ad1be=_0x37e46c;let _0x4d1b18=function(_0x2a11ee){const _0x4a48a1=a0_0x11e7;let _0x14448f=_0x2a11ee['clone']();return _0x14448f=_0x14448f[_0x4a48a1(0x2d96)](_0x4a48a1(0x3ab1)),_0x14448f=_0x14448f['remove'](_0x4a48a1(0xbbd)),_0x14448f=_0x14448f['not'](_0x4a48a1(0xbb1)),_0x14448f=_0x14448f['first'](),_0x14448f[_0x4a48a1(0x2108)]?_0x14448f:_0x2a11ee;}(_0x5daa65);return{'determiner':_0x5daa65[_0x4ad1be(0x2d96)](_0x4ad1be(0x48cd))['eq'](0x0),'adjectives':_0x5daa65['match'](_0x4ad1be(0x335e)),'number':_0x5daa65['values'](),'isPlural':_0x2a3664(_0x5daa65,_0x4d1b18),'isSubordinate':_0xc4c55c(_0x5daa65),'root':_0x4d1b18};},_0x253709=_0x9b49f9=>_0x9b49f9[_0x37e46c(0x4006)](),_0x468c68=_0x5513aa=>_0x5513aa[_0x37e46c(0x3289)]({'terms':!0x1,'normal':!0x0})[_0x37e46c(0x4833)](_0x3e032f=>_0x3e032f[_0x37e46c(0x47d)]),_0x43696c=function(_0x1be3ad){const _0x170cee=_0x37e46c;if(!_0x1be3ad[_0x170cee(0x2108)])return null;let _0x436595=_0x1be3ad[_0x170cee(0x1fae)](0x0);if(_0x436595['found'])return(_0x436595['parse']()[0x0]||{})[_0x170cee(0x51b1)];return null;},_0x21913f=function(_0x2fd5b5){const _0x332d75=_0x37e46c;let 
_0x428a92=_0x5a4d0d(_0x2fd5b5);return{'root':_0x253709(_0x428a92[_0x332d75(0x507b)]),'number':_0x43696c(_0x428a92[_0x332d75(0x4a80)]),'determiner':_0x253709(_0x428a92['determiner']),'adjectives':_0x468c68(_0x428a92[_0x332d75(0x4c9b)]),'isPlural':_0x428a92[_0x332d75(0x2569)],'isSubordinate':_0x428a92[_0x332d75(0xfcf)]};},_0x4b40e4=function(_0x4a53e8){const _0x47d63b=_0x37e46c;return!_0x4a53e8['has'](_0x47d63b(0xb05));},_0x3d700a={'tags':!0x0},_0x4df5bd=function(_0x22985a,_0x2e8f82){const _0x2682e7=_0x37e46c;if(!0x0===_0x2e8f82['isPlural'])return _0x22985a;if(_0x2e8f82['root'][_0x2682e7(0x3170)](_0x2682e7(0xbb1))&&(_0x2e8f82['root']=_0x2e8f82[_0x2682e7(0x507b)][_0x2682e7(0xa76)]()[_0x2682e7(0x9a5)]()),!_0x4b40e4(_0x2e8f82[_0x2682e7(0x507b)]))return _0x22985a;const {methods:_0x123394,model:_0x271fc2}=_0x22985a[_0x2682e7(0x4657)],{toPlural:_0x73fde6}=_0x123394[_0x2682e7(0x21c9)][_0x2682e7(0x5161)][_0x2682e7(0x4d62)];let _0x313c78=_0x73fde6(_0x2e8f82[_0x2682e7(0x507b)][_0x2682e7(0x4006)]({'keepPunct':!0x1}),_0x271fc2);_0x22985a['match'](_0x2e8f82[_0x2682e7(0x507b)])[_0x2682e7(0x1f1f)](_0x313c78,_0x3d700a)[_0x2682e7(0x15a9)](_0x2682e7(0x25f7),_0x2682e7(0x467a)),_0x2e8f82[_0x2682e7(0x3bbb)][_0x2682e7(0x3170)](_0x2682e7(0x1a53))&&_0x22985a[_0x2682e7(0x42a1)](_0x2e8f82['determiner']);let _0x574624=_0x2e8f82[_0x2682e7(0x507b)][_0x2682e7(0x1349)](_0x2682e7(0x4ba8),0x0);return _0x574624[_0x2682e7(0x2108)]&&(_0x574624[_0x2682e7(0x3170)]('is')?_0x22985a[_0x2682e7(0x741)](_0x574624,_0x2682e7(0x9d5)):_0x574624['has']('was')&&_0x22985a[_0x2682e7(0x741)](_0x574624,'were')),_0x22985a;},_0x79fa11={'tags':!0x0},_0x43b273=function(_0x31bdff,_0x108565){const _0x4fbb18=_0x37e46c;if(!0x1===_0x108565[_0x4fbb18(0x2569)])return _0x31bdff;const {methods:_0x5b88b5,model:_0x150dd4}=_0x31bdff[_0x4fbb18(0x4657)],{toSingular:_0x541343}=_0x5b88b5['two'][_0x4fbb18(0x5161)]['noun'];let _0x55b178=_0x541343(_0x108565['root'][_0x4fbb18(0x4006)](_0x4fbb18(0x47d)),_0x150dd4);return 
_0x31bdff[_0x4fbb18(0x741)](_0x108565[_0x4fbb18(0x507b)],_0x55b178,_0x79fa11)[_0x4fbb18(0x15a9)]('Singular',_0x4fbb18(0x467a)),_0x31bdff;},_0x2290a5=function(_0x344b1b){const _0x276ad6=_0x37e46c;class _0x17657d extends _0x344b1b{constructor(_0x653f32,_0x18c7a7,_0x16fa77){const _0xa7fa8e=a0_0x11e7;super(_0x653f32,_0x18c7a7,_0x16fa77),this[_0xa7fa8e(0x106d)]=_0xa7fa8e(0x16c6);}[_0x276ad6(0x2956)](_0x316aa9){const _0x58873f=_0x276ad6;return this[_0x58873f(0x152d)](_0x316aa9)[_0x58873f(0x4833)](_0x5a4d0d);}[_0x276ad6(0x3289)](_0x4b4b62){const _0xba64a1=_0x276ad6;let _0x34e580=_0xba64a1(0x20c7)==typeof _0x4b4b62?_0x4b4b62:{};return this['getNth'](_0x4b4b62)[_0xba64a1(0x4833)](_0x3e1fe4=>{const _0x545355=_0xba64a1;let _0x23920a=_0x3e1fe4['toView']()['json'](_0x34e580)[0x0]||{};return _0x34e580&&!0x1!==_0x34e580['noun']&&(_0x23920a[_0x545355(0x4d62)]=_0x21913f(_0x3e1fe4)),_0x23920a;},[]);}[_0x276ad6(0x2343)](_0x2082a5){const _0x5517e0=_0x276ad6,_0x48ea8d=this[_0x5517e0(0x4657)][_0x5517e0(0x1578)][_0x5517e0(0x21c9)][_0x5517e0(0x5161)][_0x5517e0(0x4d62)];return this[_0x5517e0(0x152d)](_0x2082a5)[_0x5517e0(0x4833)](_0x44e5c1=>{const _0x189ad1=_0x5517e0;let _0x8ff57c=_0x5a4d0d(_0x44e5c1),_0x107804=_0x8ff57c[_0x189ad1(0x507b)]['compute'](_0x189ad1(0x507b))['text'](_0x189ad1(0x507b)),_0x153623={'Singular':_0x107804};return _0x4b40e4(_0x8ff57c[_0x189ad1(0x507b)])&&(_0x153623[_0x189ad1(0x25f7)]=_0x48ea8d['toPlural'](_0x107804,this[_0x189ad1(0x1556)])),_0x153623[_0x189ad1(0x1e9f)]===_0x153623[_0x189ad1(0x25f7)]&&delete _0x153623['Plural'],_0x153623;},[]);}[_0x276ad6(0x2569)](_0x2d37c0){const _0x204232=_0x276ad6;let _0x23cd3c=this[_0x204232(0x1465)](_0x578414=>_0x5a4d0d(_0x578414)[_0x204232(0x2569)]);return _0x23cd3c[_0x204232(0x152d)](_0x2d37c0);}[_0x276ad6(0x16a2)](_0x5c9ce5){const _0x4fdb1f=_0x276ad6;let _0x1314a7=this[_0x4fdb1f(0x1465)](_0x468cfc=>!_0x5a4d0d(_0x468cfc)[_0x4fdb1f(0x2569)]);return _0x1314a7[_0x4fdb1f(0x152d)](_0x5c9ce5);}[_0x276ad6(0x4c9b)](_0x5b3fb0){const 
_0x7ee813=_0x276ad6;let _0x19e959=this[_0x7ee813(0x38d6)]([]);return this[_0x7ee813(0xa21)](_0x18a132=>{const _0x4323b7=_0x7ee813;let _0x1614b4=_0x5a4d0d(_0x18a132)[_0x4323b7(0x4c9b)];_0x1614b4[_0x4323b7(0x2108)]&&(_0x19e959=_0x19e959['concat'](_0x1614b4));}),_0x19e959[_0x7ee813(0x152d)](_0x5b3fb0);}[_0x276ad6(0x467a)](_0x569a52){const _0x2687be=_0x276ad6;return this[_0x2687be(0x152d)](_0x569a52)[_0x2687be(0x4833)](_0x3b2cae=>_0x4df5bd(_0x3b2cae,_0x5a4d0d(_0x3b2cae)));}[_0x276ad6(0x164b)](_0x287ec3){const _0x198722=_0x276ad6;return this[_0x198722(0x152d)](_0x287ec3)[_0x198722(0x4833)](_0x49f085=>{let _0x2dfde1=_0x5a4d0d(_0x49f085);return _0x43b273(_0x49f085,_0x2dfde1);});}[_0x276ad6(0x38d6)](_0x4397b6){const _0x2992f0=_0x276ad6;let _0xc67611=new _0x17657d(this[_0x2992f0(0x295)],_0x4397b6);return _0xc67611[_0x2992f0(0x1aa7)]=this[_0x2992f0(0x1aa7)],_0xc67611;}}_0x344b1b[_0x276ad6(0x3b3c)][_0x276ad6(0x268a)]=function(_0x56c0dd){const _0xa42f59=_0x276ad6;let _0x51bbae=_0x5b5c2d(this);return _0x51bbae=_0x51bbae['getNth'](_0x56c0dd),new _0x17657d(this[_0xa42f59(0x295)],_0x51bbae[_0xa42f59(0x43e4)]);};},_0x158818={'api':_0x2290a5},_0x3778bf=function(_0x2b6f03,_0x54ecc1){const _0x23f539=_0x37e46c;let _0x538fb4=_0x2b6f03[_0x23f539(0x2d96)](_0x23f539(0x3c6d));return _0x538fb4=_0x538fb4[_0x23f539(0x1465)](_0x2d8d27=>!_0x2d8d27[_0x23f539(0x215d)]('#Value\x20and$')[_0x23f539(0x2108)]),_0x538fb4=_0x538fb4[_0x23f539(0x45f9)](_0x23f539(0x4fe2)),_0x23f539(0x4a80)==typeof _0x54ecc1&&(_0x538fb4=_0x538fb4['eq'](_0x54ecc1)),_0x538fb4;},_0x4ebb9d=_0x125792=>{const _0xf591ad=_0x37e46c,_0x719cdb=[{'reg':/^(minus|negative)[\s-]/i,'mult':-0x1},{'reg':/^(a\s)?half[\s-](of\s)?/i,'mult':0.5}];for(let 
_0x28bda6=0x0;_0x28bda6<_0x719cdb[_0xf591ad(0x1b19)];_0x28bda6++)if(!0x0===_0x719cdb[_0x28bda6][_0xf591ad(0x170f)][_0xf591ad(0x1769)](_0x125792))return{'amount':_0x719cdb[_0x28bda6][_0xf591ad(0x247e)],'str':_0x125792['replace'](_0x719cdb[_0x28bda6]['reg'],'')};return{'amount':0x1,'str':_0x125792};},_0x476f66={'ones':{'zeroth':0x0,'first':0x1,'second':0x2,'third':0x3,'fourth':0x4,'fifth':0x5,'sixth':0x6,'seventh':0x7,'eighth':0x8,'ninth':0x9,'zero':0x0,'one':0x1,'two':0x2,'three':0x3,'four':0x4,'five':0x5,'six':0x6,'seven':0x7,'eight':0x8,'nine':0x9},'teens':{'tenth':0xa,'eleventh':0xb,'twelfth':0xc,'thirteenth':0xd,'fourteenth':0xe,'fifteenth':0xf,'sixteenth':0x10,'seventeenth':0x11,'eighteenth':0x12,'nineteenth':0x13,'ten':0xa,'eleven':0xb,'twelve':0xc,'thirteen':0xd,'fourteen':0xe,'fifteen':0xf,'sixteen':0x10,'seventeen':0x11,'eighteen':0x12,'nineteen':0x13},'tens':{'twentieth':0x14,'thirtieth':0x1e,'fortieth':0x28,'fourtieth':0x28,'fiftieth':0x32,'sixtieth':0x3c,'seventieth':0x46,'eightieth':0x50,'ninetieth':0x5a,'twenty':0x14,'thirty':0x1e,'forty':0x28,'fourty':0x28,'fifty':0x32,'sixty':0x3c,'seventy':0x46,'eighty':0x50,'ninety':0x5a},'multiples':{'hundredth':0x64,'thousandth':0x3e8,'millionth':0xf4240,'billionth':0x3b9aca00,'trillionth':0xe8d4a51000,'quadrillionth':0x38d7ea4c68000,'quintillionth':0xde0b6b3a7640000,'sextillionth':0x3635c9adc5dea00000,'septillionth':0xd3c21bcecceda0000000,'hundred':0x64,'thousand':0x3e8,'million':0xf4240,'billion':0x3b9aca00,'trillion':0xe8d4a51000,'quadrillion':0x38d7ea4c68000,'quintillion':0xde0b6b3a7640000,'sextillion':0x3635c9adc5dea00000,'septillion':0xd3c21bcecceda0000000,'grand':0x3e8}},_0x55fe05=(_0x1b5f71,_0x5d12c5)=>{const 
_0x3bf529=_0x37e46c;if(_0x476f66[_0x3bf529(0x8ef)]['hasOwnProperty'](_0x1b5f71)){if(_0x5d12c5[_0x3bf529(0x8ef)]||_0x5d12c5[_0x3bf529(0x44d6)])return!0x1;}else{if(_0x476f66[_0x3bf529(0x44d6)][_0x3bf529(0x2427)](_0x1b5f71)){if(_0x5d12c5['ones']||_0x5d12c5[_0x3bf529(0x44d6)]||_0x5d12c5[_0x3bf529(0x39ac)])return!0x1;}else{if(_0x476f66[_0x3bf529(0x39ac)][_0x3bf529(0x2427)](_0x1b5f71)&&(_0x5d12c5[_0x3bf529(0x8ef)]||_0x5d12c5['teens']||_0x5d12c5['tens']))return!0x1;}}return!0x0;},_0x32c8d1=function(_0x35d0e4){const _0xeadfe7=_0x37e46c;let _0x314b13='0.';for(let _0xd984e6=0x0;_0xd984e6<_0x35d0e4[_0xeadfe7(0x1b19)];_0xd984e6++){let _0x30f903=_0x35d0e4[_0xd984e6];if(!0x0===_0x476f66[_0xeadfe7(0x8ef)][_0xeadfe7(0x2427)](_0x30f903))_0x314b13+=_0x476f66['ones'][_0x30f903];else{if(!0x0===_0x476f66[_0xeadfe7(0x44d6)][_0xeadfe7(0x2427)](_0x30f903))_0x314b13+=_0x476f66[_0xeadfe7(0x44d6)][_0x30f903];else{if(!0x0===_0x476f66['tens'][_0xeadfe7(0x2427)](_0x30f903))_0x314b13+=_0x476f66[_0xeadfe7(0x39ac)][_0x30f903];else{if(!0x0!==/^[0-9]$/[_0xeadfe7(0x1769)](_0x30f903))return 0x0;_0x314b13+=_0x30f903;}}}}return parseFloat(_0x314b13);},_0x494a32=_0x17c363=>_0x17c363=(_0x17c363=(_0x17c363=(_0x17c363=(_0x17c363=(_0x17c363=(_0x17c363=(_0x17c363=_0x17c363['replace'](/1st$/,'1'))[_0x37e46c(0x741)](/2nd$/,'2'))['replace'](/3rd$/,'3'))['replace'](/([4567890])r?th$/,'$1'))['replace'](/^[$€¥£¢]/,''))[_0x37e46c(0x741)](/[%$€¥£¢]$/,''))['replace'](/,/g,''))[_0x37e46c(0x741)](/([0-9])([a-z\u00C0-\u00FF]{1,2})$/,'$1'),_0x54031c=/^([0-9,. ]+)\/([0-9,. 
]+)$/,_0xaf2011={'a\x20few':0x3,'a\x20couple':0x2,'a\x20dozen':0xc,'two\x20dozen':0x18,'zero':0x0},_0x5928c6=_0x44a658=>Object[_0x37e46c(0x1ea9)](_0x44a658)[_0x37e46c(0x24d8)]((_0x1bfe53,_0x43800d)=>_0x1bfe53+=_0x44a658[_0x43800d],0x0),_0x1a605f=function(_0x57d30d){const _0x4c721f=_0x37e46c;if(!0x0===_0xaf2011[_0x4c721f(0x2427)](_0x57d30d))return _0xaf2011[_0x57d30d];if('a'===_0x57d30d||'an'===_0x57d30d)return 0x1;const _0xad8dd=_0x4ebb9d(_0x57d30d);let _0x32e1ca=null,_0x5cdc7d={},_0x19770a=0x0,_0x429fbb=!0x1;const _0x489386=(_0x57d30d=_0xad8dd[_0x4c721f(0x257f)])[_0x4c721f(0x1117)](/[ -]/);for(let _0x1448a2=0x0;_0x1448a2<_0x489386[_0x4c721f(0x1b19)];_0x1448a2++){let _0x46fbd5=_0x489386[_0x1448a2];if(_0x46fbd5=_0x494a32(_0x46fbd5),!_0x46fbd5||_0x4c721f(0x2663)===_0x46fbd5)continue;if('-'===_0x46fbd5||_0x4c721f(0x298)===_0x46fbd5){_0x429fbb=!0x0;continue;}if('-'===_0x46fbd5[_0x4c721f(0x2fe2)](0x0)&&(_0x429fbb=!0x0,_0x46fbd5=_0x46fbd5['substring'](0x1)),_0x4c721f(0x2464)===_0x46fbd5)return _0x19770a+=_0x5928c6(_0x5cdc7d),_0x19770a+=_0x32c8d1(_0x489386[_0x4c721f(0x384c)](_0x1448a2+0x1,_0x489386[_0x4c721f(0x1b19)])),_0x19770a*=_0xad8dd[_0x4c721f(0x50f)],_0x19770a;const _0x2dbe76=_0x46fbd5[_0x4c721f(0x2d96)](_0x54031c);if(_0x2dbe76){const _0x36b3f1=parseFloat(_0x2dbe76[0x1][_0x4c721f(0x741)](/[, ]/g,'')),_0x471cc6=parseFloat(_0x2dbe76[0x2][_0x4c721f(0x741)](/[, ]/g,''));_0x471cc6&&(_0x19770a+=_0x36b3f1/_0x471cc6||0x0);}else{if(_0x476f66[_0x4c721f(0x39ac)][_0x4c721f(0x2427)](_0x46fbd5)&&_0x5cdc7d[_0x4c721f(0x8ef)]&&0x1===Object['keys'](_0x5cdc7d)[_0x4c721f(0x1b19)]&&(_0x19770a=0x64*_0x5cdc7d[_0x4c721f(0x8ef)],_0x5cdc7d={}),!0x1===_0x55fe05(_0x46fbd5,_0x5cdc7d))return 
null;if(/^[0-9.]+$/[_0x4c721f(0x1769)](_0x46fbd5))_0x5cdc7d[_0x4c721f(0x8ef)]=parseFloat(_0x46fbd5);else{if(!0x0===_0x476f66[_0x4c721f(0x8ef)][_0x4c721f(0x2427)](_0x46fbd5))_0x5cdc7d[_0x4c721f(0x8ef)]=_0x476f66['ones'][_0x46fbd5];else{if(!0x0===_0x476f66['teens'][_0x4c721f(0x2427)](_0x46fbd5))_0x5cdc7d[_0x4c721f(0x44d6)]=_0x476f66[_0x4c721f(0x44d6)][_0x46fbd5];else{if(!0x0===_0x476f66[_0x4c721f(0x39ac)][_0x4c721f(0x2427)](_0x46fbd5))_0x5cdc7d['tens']=_0x476f66[_0x4c721f(0x39ac)][_0x46fbd5];else{if(!0x0===_0x476f66[_0x4c721f(0x2c76)][_0x4c721f(0x2427)](_0x46fbd5)){let _0x59354b=_0x476f66[_0x4c721f(0x2c76)][_0x46fbd5];if(_0x59354b===_0x32e1ca)return null;if(0x64===_0x59354b&&void 0x0!==_0x489386[_0x1448a2+0x1]){const _0x54571b=_0x489386[_0x1448a2+0x1];_0x476f66['multiples'][_0x54571b]&&(_0x59354b*=_0x476f66[_0x4c721f(0x2c76)][_0x54571b],_0x1448a2+=0x1);}null===_0x32e1ca||_0x59354b<_0x32e1ca?(_0x19770a+=(_0x5928c6(_0x5cdc7d)||0x1)*_0x59354b,_0x32e1ca=_0x59354b,_0x5cdc7d={}):(_0x19770a+=_0x5928c6(_0x5cdc7d),_0x32e1ca=_0x59354b,_0x19770a=(_0x19770a||0x1)*_0x59354b,_0x5cdc7d={});}}}}}}}return _0x19770a+=_0x5928c6(_0x5cdc7d),_0x19770a*=_0xad8dd[_0x4c721f(0x50f)],_0x19770a*=_0x429fbb?-0x1:0x1,0x0===_0x19770a&&0x0===Object[_0x4c721f(0x1ea9)](_0x5cdc7d)[_0x4c721f(0x1b19)]?null:_0x19770a;},_0x48f776=/s$/,_0x100f40=function(_0x3201d3){let _0x558390=_0x3201d3['text']('reduced');return _0x1a605f(_0x558390);};let _0x3b22d3={'half':0x2,'halve':0x2,'quarter':0x4};const _0x2de897=function(_0x3e0116){const _0x28f09=_0x37e46c;let _0x1256af=function(_0x24fd0b){const _0x5aec3d=a0_0x11e7;let _0x3e6452=_0x24fd0b[_0x5aec3d(0x4006)](_0x5aec3d(0x25ba));return _0x3b22d3[_0x5aec3d(0x2427)](_0x3e6452)?{'numerator':0x1,'denominator':_0x3b22d3[_0x3e6452]}:null;}(_0x3e0116=_0x3e0116[_0x28f09(0x150c)]())||function(_0x140bd2){const _0x44f2d0=_0x28f09;let _0x899fbb=_0x140bd2[_0x44f2d0(0x4006)](_0x44f2d0(0x25ba))[_0x44f2d0(0x2d96)](/^([-+]?[0-9]+)\/([-+]?[0-9]+)(st|nd|rd|th)?s?$/);return 
_0x899fbb&&_0x899fbb[0x1]&&_0x899fbb[0x0]?{'numerator':Number(_0x899fbb[0x1]),'denominator':Number(_0x899fbb[0x2])}:null;}(_0x3e0116)||function(_0x4355ec){const _0x172c8d=_0x28f09;let _0x19fac9=_0x4355ec[_0x172c8d(0x2d96)]('[#Value+]\x20out\x20of\x20every?\x20[#Value+]');if(!0x0!==_0x19fac9[_0x172c8d(0x2108)])return null;let {num:_0x19bf9f,den:_0x3d5d97}=_0x19fac9['groups']();return _0x19bf9f&&_0x3d5d97?(_0x19bf9f=_0x100f40(_0x19bf9f),_0x3d5d97=_0x100f40(_0x3d5d97),_0x19bf9f&&_0x3d5d97&&'number'==typeof _0x19bf9f&&_0x172c8d(0x4a80)==typeof _0x3d5d97?{'numerator':_0x19bf9f,'denominator':_0x3d5d97}:null):null;}(_0x3e0116)||function(_0x42dcd0){const _0x1d8cba=_0x28f09;let _0x1e8a94=_0x42dcd0[_0x1d8cba(0x2d96)](_0x1d8cba(0xc66));if(!0x0!==_0x1e8a94['found'])return null;let {num:_0x2a0dc9,den:_0x12a847}=_0x1e8a94['groups']();_0x2a0dc9=_0x2a0dc9[_0x1d8cba(0x3170)]('a')?0x1:_0x100f40(_0x2a0dc9);let _0x1d886a=_0x12a847[_0x1d8cba(0x4006)](_0x1d8cba(0x25ba));return _0x48f776['test'](_0x1d886a)&&(_0x1d886a=_0x1d886a[_0x1d8cba(0x741)](_0x48f776,''),_0x12a847=_0x12a847[_0x1d8cba(0x1f1f)](_0x1d886a)),_0x12a847=_0x3b22d3[_0x1d8cba(0x2427)](_0x1d886a)?_0x3b22d3[_0x1d886a]:_0x100f40(_0x12a847),_0x1d8cba(0x4a80)==typeof _0x2a0dc9&&'number'==typeof _0x12a847?{'numerator':_0x2a0dc9,'denominator':_0x12a847}:null;}(_0x3e0116)||function(_0x4cc13a){const _0x2f9756=_0x28f09;let _0x22989c=_0x4cc13a[_0x2f9756(0x2d96)](_0x2f9756(0x297a));if(!0x0!==_0x22989c[_0x2f9756(0x2108)])return null;if(_0x4cc13a[_0x2f9756(0x4051)]('^of\x20.'))return{'numerator':0x1,'denominator':_0x100f40(_0x22989c)};return null;}(_0x3e0116)||null;return null!==_0x1256af&&_0x1256af[_0x28f09(0x39da)]&&_0x1256af['denominator']&&(_0x1256af[_0x28f09(0x2353)]=_0x1256af['numerator']/_0x1256af[_0x28f09(0x4d88)],_0x1256af[_0x28f09(0x2353)]=(_0x5a0ba2=>{const _0x555be4=_0x28f09;let _0x496759=Math[_0x555be4(0x3d6c)](0x3e8*_0x5a0ba2)/0x3e8;return 
0x0===_0x496759&&0x0!==_0x5a0ba2?_0x5a0ba2:_0x496759;})(_0x1256af[_0x28f09(0x2353)])),_0x1256af;},_0x319f4e=function(_0x729eb7){const _0x229a91=_0x37e46c;if(_0x729eb7<0xf4240)return String(_0x729eb7);let _0x41faec;return _0x41faec=_0x229a91(0x4a80)==typeof _0x729eb7?_0x729eb7['toFixed'](0x0):_0x729eb7,-0x1===_0x41faec['indexOf']('e+')?_0x41faec:_0x41faec[_0x229a91(0x741)]('.','')[_0x229a91(0x1117)]('e+')[_0x229a91(0x24d8)](function(_0x2c8989,_0x21d81b){const _0x5b1538=_0x229a91;return _0x2c8989+Array(_0x21d81b-_0x2c8989[_0x5b1538(0x1b19)]+0x2)[_0x5b1538(0x3541)](0x0);});},_0x5ea334=[['ninety',0x5a],['eighty',0x50],[_0x37e46c(0x2e5a),0x46],[_0x37e46c(0x4b59),0x3c],[_0x37e46c(0x4c69),0x32],['forty',0x28],[_0x37e46c(0x3448),0x1e],[_0x37e46c(0x2d69),0x14]],_0x1dac32=['',_0x37e46c(0x1d8a),_0x37e46c(0x21c9),_0x37e46c(0x1655),_0x37e46c(0x3eeb),_0x37e46c(0xd51),'six','seven',_0x37e46c(0x426c),_0x37e46c(0x1f64),_0x37e46c(0x196f),_0x37e46c(0x34ae),_0x37e46c(0x758),_0x37e46c(0x2c04),_0x37e46c(0x4797),_0x37e46c(0x3405),_0x37e46c(0x93a),_0x37e46c(0xaba),'eighteen','nineteen'],_0x286f11=[[0xd3c21bcecceda0000000,_0x37e46c(0x1612)],[0x56bc75e2d63100000,_0x37e46c(0x17ea)],[0x3635c9adc5dea00000,'sextillion'],[0x56bc75e2d63100000,'hundred\x20quintillion'],[0xde0b6b3a7640000,_0x37e46c(0x1331)],[0x16345785d8a0000,_0x37e46c(0x4f16)],[0x38d7ea4c68000,_0x37e46c(0x25bf)],[0x5af3107a4000,'hundred\x20trillion'],[0xe8d4a51000,'trillion'],[0x174876e800,'hundred\x20billion'],[0x3b9aca00,_0x37e46c(0x1fa8)],[0x5f5e100,_0x37e46c(0x2281)],[0xf4240,_0x37e46c(0x2fa)],[0x186a0,'hundred\x20thousand'],[0x3e8,'thousand'],[0x64,_0x37e46c(0x4f8b)],[0x1,_0x37e46c(0x1d8a)]],_0x597036=function(_0x15eca5){const _0x4a0e1=_0x37e46c;let _0x40fef8=[];if(_0x15eca5>0x64)return _0x40fef8;for(let _0x1b5cc1=0x0;_0x1b5cc1<_0x5ea334['length'];_0x1b5cc1++)_0x15eca5>=_0x5ea334[_0x1b5cc1][0x1]&&(_0x15eca5-=_0x5ea334[_0x1b5cc1][0x1],_0x40fef8[_0x4a0e1(0x1715)](_0x5ea334[_0x1b5cc1][0x0]));return 
_0x1dac32[_0x15eca5]&&_0x40fef8[_0x4a0e1(0x1715)](_0x1dac32[_0x15eca5]),_0x40fef8;},_0x478652=function(_0x3db65a){const _0x31364c=_0x37e46c;let _0x348eda=_0x3db65a[_0x31364c(0x51b1)];if(0x0===_0x348eda||'0'===_0x348eda)return _0x31364c(0x3eea);_0x348eda>0x3635c9adc5dea00000&&(_0x348eda=_0x319f4e(_0x348eda));let _0x33f19f=[];_0x348eda<0x0&&(_0x33f19f['push'](_0x31364c(0x3ce8)),_0x348eda=Math[_0x31364c(0xbe0)](_0x348eda));let _0x317a3b=function(_0x10d9fc){const _0x8b69df=_0x31364c;let _0x5e8df0=_0x10d9fc,_0x1dcada=[];return _0x286f11[_0x8b69df(0xa21)](_0x3a1155=>{const _0x462777=_0x8b69df;if(_0x10d9fc>=_0x3a1155[0x0]){let _0x51d641=Math[_0x462777(0x2e2d)](_0x5e8df0/_0x3a1155[0x0]);_0x5e8df0-=_0x51d641*_0x3a1155[0x0],_0x51d641&&_0x1dcada['push']({'unit':_0x3a1155[0x1],'count':_0x51d641});}}),_0x1dcada;}(_0x348eda);for(let _0x471d72=0x0;_0x471d72<_0x317a3b[_0x31364c(0x1b19)];_0x471d72++){let _0x5814bc=_0x317a3b[_0x471d72][_0x31364c(0x43c3)];_0x31364c(0x1d8a)===_0x5814bc&&(_0x5814bc='',_0x33f19f[_0x31364c(0x1b19)]>0x1&&_0x33f19f[_0x31364c(0x1715)]('and')),_0x33f19f=_0x33f19f[_0x31364c(0x1d1d)](_0x597036(_0x317a3b[_0x471d72][_0x31364c(0x404e)])),_0x33f19f[_0x31364c(0x1715)](_0x5814bc);}return _0x33f19f=_0x33f19f[_0x31364c(0x1d1d)]((_0x3688f3=>{const _0x44dbd3=_0x31364c,_0x14f3ee=[_0x44dbd3(0x3eea),_0x44dbd3(0x1d8a),_0x44dbd3(0x21c9),'three',_0x44dbd3(0x3eeb),_0x44dbd3(0xd51),'six',_0x44dbd3(0x287e),_0x44dbd3(0x426c),_0x44dbd3(0x1f64)];let _0x44f5c5=[],_0x40b1bf=_0x319f4e(_0x3688f3)[_0x44dbd3(0x2d96)](/\.([0-9]+)/);if(!_0x40b1bf||!_0x40b1bf[0x0])return _0x44f5c5;_0x44f5c5['push'](_0x44dbd3(0x2464));let _0x199596=_0x40b1bf[0x0]['split']('');for(let _0x1b1177=0x0;_0x1b1177<_0x199596[_0x44dbd3(0x1b19)];_0x1b1177++)_0x44f5c5['push'](_0x14f3ee[_0x199596[_0x1b1177]]);return 
_0x44f5c5;})(_0x348eda)),_0x33f19f=_0x33f19f[_0x31364c(0x1465)](_0x52ab04=>_0x52ab04),0x0===_0x33f19f[_0x31364c(0x1b19)]&&(_0x33f19f[0x0]=''),_0x33f19f[_0x31364c(0x3541)]('\x20');},_0x5c53e9=function(_0x384274){const _0x42eb9d=_0x37e46c;if(!_0x384274['numerator']||!_0x384274[_0x42eb9d(0x4d88)])return'';return _0x478652({'num':_0x384274[_0x42eb9d(0x39da)]})+_0x42eb9d(0x4886)+_0x478652({'num':_0x384274['denominator']});},_0x2d6cee={'one':_0x37e46c(0x4d51),'two':_0x37e46c(0x274e),'three':_0x37e46c(0x1cd2),'five':_0x37e46c(0x252f),'eight':_0x37e46c(0x1d10),'nine':_0x37e46c(0xad1),'twelve':'twelfth','twenty':_0x37e46c(0x46e7),'thirty':'thirtieth','forty':_0x37e46c(0x30a3),'fourty':_0x37e46c(0x1394),'fifty':_0x37e46c(0x2180),'sixty':_0x37e46c(0xce5),'seventy':'seventieth','eighty':_0x37e46c(0x3f5b),'ninety':_0x37e46c(0x475e)},_0x5b2a92=_0xe15e2f=>{const _0x52b175=_0x37e46c;let _0x30977b=_0x478652(_0xe15e2f)['split']('\x20'),_0x517e64=_0x30977b[_0x30977b[_0x52b175(0x1b19)]-0x1];return _0x2d6cee[_0x52b175(0x2427)](_0x517e64)?_0x30977b[_0x30977b[_0x52b175(0x1b19)]-0x1]=_0x2d6cee[_0x517e64]:_0x30977b[_0x30977b[_0x52b175(0x1b19)]-0x1]=_0x517e64[_0x52b175(0x741)](/y$/,'i')+'th',_0x30977b[_0x52b175(0x3541)]('\x20');},_0x49710f=function(_0x5362d8){const _0x3e3d11=_0x37e46c;if(!_0x5362d8['numerator']||!_0x5362d8[_0x3e3d11(0x4d88)])return'';let _0x25082c=_0x478652({'num':_0x5362d8[_0x3e3d11(0x39da)]}),_0x5b99ae=_0x5b2a92({'num':_0x5362d8[_0x3e3d11(0x4d88)]});return 0x2===_0x5362d8['denominator']&&(_0x5b99ae=_0x3e3d11(0x4b9)),_0x25082c&&_0x5b99ae?(0x1!==_0x5362d8[_0x3e3d11(0x39da)]&&(_0x5b99ae+='s'),_0x25082c+'\x20'+_0x5b99ae):'';},_0x5d6d9f=function(_0x1e62d4){const _0x26594b=_0x37e46c;class _0x3f1615 extends _0x1e62d4{constructor(_0x9913ed,_0x12c843,_0x2f8d28){super(_0x9913ed,_0x12c843,_0x2f8d28),this['viewType']='Fractions';}[_0x26594b(0x2956)](_0x1ec0c3){const _0x4efc63=_0x26594b;return this[_0x4efc63(0x152d)](_0x1ec0c3)['map'](_0x2de897);}['get'](_0x2d6a7c){const 
_0x586a1d=_0x26594b;return this[_0x586a1d(0x152d)](_0x2d6a7c)[_0x586a1d(0x4833)](_0x2de897);}['json'](_0x6659d5){const _0xcd68e5=_0x26594b;return this[_0xcd68e5(0x152d)](_0x6659d5)['map'](_0x450cbd=>{const _0x36cccc=_0xcd68e5;let _0x4eba36=_0x450cbd[_0x36cccc(0x324a)]()[_0x36cccc(0x3289)](_0x6659d5)[0x0],_0xf4449f=_0x2de897(_0x450cbd);return _0x4eba36[_0x36cccc(0xe0e)]=_0xf4449f,_0x4eba36;},[]);}[_0x26594b(0x40c1)](_0x58c1c0){const _0x1ea001=_0x26594b;return this['getNth'](_0x58c1c0)[_0x1ea001(0xa21)](_0x40f3ea=>{const _0xdea4c2=_0x1ea001;let {decimal:_0x359eba}=_0x2de897(_0x40f3ea);(_0x40f3ea=_0x40f3ea[_0xdea4c2(0x1f1f)](String(_0x359eba),!0x0))['tag'](_0xdea4c2(0x466e)),_0x40f3ea[_0xdea4c2(0x1b1a)](_0xdea4c2(0x2365));}),this;}[_0x26594b(0x32dc)](_0x6dacc9){const _0x37486a=_0x26594b;return this[_0x37486a(0x152d)](_0x6dacc9)[_0x37486a(0xa21)](_0x3d363c=>{const _0x48a239=_0x37486a;let _0x1b1d5c=_0x2de897(_0x3d363c);if(_0x1b1d5c&&_0x48a239(0x4a80)==typeof _0x1b1d5c[_0x48a239(0x39da)]&&'number'==typeof _0x1b1d5c[_0x48a239(0x4d88)]){let _0x30fec8=_0x1b1d5c['numerator']+'/'+_0x1b1d5c[_0x48a239(0x4d88)];this[_0x48a239(0x741)](_0x3d363c,_0x30fec8);}}),this;}['toOrdinal'](_0x3261f4){const _0x5262fc=_0x26594b;return this[_0x5262fc(0x152d)](_0x3261f4)[_0x5262fc(0xa21)](_0x3e8f6d=>{const _0x353a75=_0x5262fc;let _0x5e492c=_0x2de897(_0x3e8f6d),_0x3cdcd5=_0x49710f(_0x5e492c);_0x3e8f6d[_0x353a75(0x1349)](_0x353a75(0x375))[_0x353a75(0x2108)]&&(_0x3cdcd5+=_0x353a75(0x11c3)),_0x3e8f6d[_0x353a75(0x1f1f)](_0x3cdcd5);}),this;}[_0x26594b(0x1eb9)](_0x36e51e){const _0x3b2409=_0x26594b;return this[_0x3b2409(0x152d)](_0x36e51e)[_0x3b2409(0xa21)](_0x13de8a=>{let _0xdb2b7e=_0x2de897(_0x13de8a),_0x4e66e0=_0x5c53e9(_0xdb2b7e);_0x13de8a['replaceWith'](_0x4e66e0);}),this;}['toPercentage'](_0x169b29){const _0x58c9d5=_0x26594b;return this[_0x58c9d5(0x152d)](_0x169b29)[_0x58c9d5(0xa21)](_0x3c839e=>{const _0x30ebad=_0x58c9d5;let 
{decimal:_0x3c6aab}=_0x2de897(_0x3c839e),_0x118191=0x64*_0x3c6aab;_0x118191=Math['round'](0x64*_0x118191)/0x64,_0x3c839e[_0x30ebad(0x1f1f)](_0x118191+'%');}),this;}}_0x1e62d4[_0x26594b(0x3b3c)][_0x26594b(0x1186)]=function(_0x79db9d){const _0x186e88=_0x26594b;let _0x4f6915=_0x3778bf(this);return _0x4f6915=_0x4f6915[_0x186e88(0x152d)](_0x79db9d),new _0x3f1615(this[_0x186e88(0x295)],_0x4f6915[_0x186e88(0x43e4)]);};},_0x5075b9=_0x37e46c(0x2737),_0x5db445=function(_0x1c21f5){const _0x34fca8=_0x37e46c;let _0x3b7d5f=_0x1c21f5[_0x34fca8(0x2d96)]('#Value+');if(_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x3778))&&(_0x3b7d5f['has'](_0x34fca8(0x36a8))?_0x3b7d5f[_0x34fca8(0x319a)](_0x34fca8(0x38c6)):_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x2594))?_0x3b7d5f['splitAfter'](_0x34fca8(0x2594)):_0x3b7d5f=_0x3b7d5f[_0x34fca8(0x319a)](_0x34fca8(0x4194))),_0x3b7d5f[_0x34fca8(0x3170)]('#Value\x20#Value\x20#Value')&&!_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x23f))&&_0x3b7d5f[_0x34fca8(0x3170)]('('+_0x5075b9+_0x34fca8(0x1990))&&(_0x3b7d5f=_0x3b7d5f[_0x34fca8(0x319a)]('('+_0x5075b9+_0x34fca8(0x3c8b))),_0x3b7d5f[_0x34fca8(0x3170)]('#Value\x20#Value')){_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x3778))&&(_0x3b7d5f=_0x3b7d5f[_0x34fca8(0x3a4d)](_0x34fca8(0x456))),_0x3b7d5f[_0x34fca8(0x3170)]('('+_0x5075b9+_0x34fca8(0x2760))&&(_0x3b7d5f=_0x3b7d5f[_0x34fca8(0x319a)]('('+_0x5075b9+')'));let _0x550d96=_0x3b7d5f['match'](_0x34fca8(0x1f6));if(_0x550d96[_0x34fca8(0x2108)]&&!_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x7fa))&&!_0x550d96[_0x34fca8(0x3170)](_0x34fca8(0x41e7))){let _0x40b207=_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x3480)+_0x5075b9+')'),_0x68b8de=_0x550d96[_0x34fca8(0x3170)]('('+_0x5075b9+_0x34fca8(0x3c8b)),_0x3efa44=_0x550d96[_0x34fca8(0x3170)](_0x34fca8(0x5df));_0x40b207||_0x68b8de||_0x3efa44||_0x550d96[_0x34fca8(0x4a03)]()[_0x34fca8(0xa21)](_0x51cd94=>{const 
_0x1a7035=_0x34fca8;_0x3b7d5f=_0x3b7d5f[_0x1a7035(0x3a4d)](_0x51cd94);});}_0x3b7d5f[_0x34fca8(0x2d96)](_0x34fca8(0x4e8a))[_0x34fca8(0x2d96)](_0x34fca8(0x3aa4))['found']&&!_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x23f))&&(_0x3b7d5f[_0x34fca8(0x3170)]('('+_0x5075b9+_0x34fca8(0x4bc))||(_0x3b7d5f=_0x3b7d5f[_0x34fca8(0x319a)](_0x34fca8(0x2a5e)))),_0x3b7d5f=_0x3b7d5f[_0x34fca8(0x4c0)]('#Ordinal\x20[#Cardinal]',0x0),_0x3b7d5f[_0x34fca8(0x3170)](_0x34fca8(0x18ea))&&!_0x3b7d5f['has']('('+_0x5075b9+'|#Multiple)')&&(_0x3b7d5f=_0x3b7d5f[_0x34fca8(0x4c0)](_0x34fca8(0x18ea)));}return _0x3b7d5f=_0x3b7d5f[_0x34fca8(0x319a)](_0x34fca8(0x3b95)),_0x3b7d5f=_0x3b7d5f['splitBefore'](_0x34fca8(0x456)),_0x3b7d5f;},_0x25e8f7=function(_0x4e58a3){const _0x1d6ab4=_0x37e46c;if(_0x1d6ab4(0x2431)==typeof _0x4e58a3)return{'num':_0x1a605f(_0x4e58a3)};let _0x14003b=_0x4e58a3[_0x1d6ab4(0x4006)](_0x1d6ab4(0x25ba)),_0x4a6b15=_0x4e58a3[_0x1d6ab4(0xfbb)](_0x1d6ab4(0x4002))[_0x1d6ab4(0x2d96)]('#Unit$')['text'](_0x1d6ab4(0x192e)),_0x33b586=/[0-9],[0-9]/[_0x1d6ab4(0x1769)](_0x4e58a3[_0x1d6ab4(0x4006)](_0x1d6ab4(0x4006)));if(0x1===_0x4e58a3[_0x1d6ab4(0x4a03)]()[_0x1d6ab4(0x1b19)]&&!_0x4e58a3[_0x1d6ab4(0x3170)]('#Multiple')){let _0x219d7e=function(_0x2e6e26,_0x2a9f15){const _0x26f856=_0x1d6ab4;let _0x292393=(_0x2e6e26=_0x2e6e26[_0x26f856(0x741)](/,/g,''))[_0x26f856(0x1117)](/([0-9.,]*)/),[_0x5553ac,_0x1807b2]=_0x292393,_0x20160b=_0x292393['slice'](0x2)['join']('');return''!==_0x1807b2&&_0x2a9f15['length']<0x2?(_0x1807b2=Number(_0x1807b2||_0x2e6e26),_0x26f856(0x4a80)!=typeof _0x1807b2&&(_0x1807b2=null),_0x20160b=_0x20160b||'','st'!==_0x20160b&&'nd'!==_0x20160b&&'rd'!==_0x20160b&&'th'!==_0x20160b||(_0x20160b=''),{'prefix':_0x5553ac||'','num':_0x1807b2,'suffix':_0x20160b}):null;}(_0x14003b,_0x4e58a3);if(null!==_0x219d7e)return _0x219d7e[_0x1d6ab4(0x2693)]=_0x33b586,_0x219d7e[_0x1d6ab4(0x43c3)]=_0x4a6b15,_0x219d7e;}let 
_0x73e274=_0x4e58a3[_0x1d6ab4(0x2d96)](_0x1d6ab4(0x4377));_0x73e274=!0x1===_0x73e274[_0x1d6ab4(0x2108)]?_0x4e58a3[_0x1d6ab4(0x2d96)](_0x1d6ab4(0x3e4)):_0x73e274;let _0x2cf1f9=null;_0x73e274[_0x1d6ab4(0x2108)]&&(_0x73e274[_0x1d6ab4(0x3170)]('#Value\x20and\x20#Value\x20#Fraction')&&(_0x73e274=_0x73e274['match'](_0x1d6ab4(0x2a79))),_0x2cf1f9=_0x2de897(_0x73e274),_0x14003b=(_0x4e58a3=(_0x4e58a3=_0x4e58a3[_0x1d6ab4(0xc1a)](_0x73e274))[_0x1d6ab4(0xc1a)]('and$'))[_0x1d6ab4(0x4006)](_0x1d6ab4(0x25ba)));let _0x2b9a4d=0x0;return _0x14003b&&(_0x2b9a4d=_0x1a605f(_0x14003b)||0x0),_0x2cf1f9&&_0x2cf1f9[_0x1d6ab4(0x2353)]&&(_0x2b9a4d+=_0x2cf1f9[_0x1d6ab4(0x2353)]),{'hasComma':_0x33b586,'prefix':'','num':_0x2b9a4d,'suffix':'','isOrdinal':_0x4e58a3[_0x1d6ab4(0x3170)](_0x1d6ab4(0x2a5e)),'isText':_0x4e58a3[_0x1d6ab4(0x3170)]('#TextValue'),'isFraction':_0x4e58a3[_0x1d6ab4(0x3170)]('#Fraction'),'isMoney':_0x4e58a3[_0x1d6ab4(0x3170)]('#Money'),'unit':_0x4a6b15};},_0x145295=function(_0x21868c){const _0x3061bb=_0x37e46c;let _0x46744f=_0x21868c['num'];if(!_0x46744f&&0x0!==_0x46744f)return null;let _0x6207b5=_0x46744f%0x64;if(_0x6207b5>0xa&&_0x6207b5<0x14)return String(_0x46744f)+'th';const _0x1d00ef={0x0:'th',0x1:'st',0x2:'nd',0x3:'rd'};let _0x51e7c2=_0x319f4e(_0x46744f),_0x36477e=_0x51e7c2[_0x3061bb(0x384c)](_0x51e7c2[_0x3061bb(0x1b19)]-0x1,_0x51e7c2['length']);return _0x51e7c2+=_0x1d00ef[_0x36477e]?_0x1d00ef[_0x36477e]:'th',_0x51e7c2;},_0x210c09={'¢':_0x37e46c(0x4426),'$':'dollars','£':_0x37e46c(0x1dcf),'¥':_0x37e46c(0x3669),'€':_0x37e46c(0x4481),'₡':_0x37e46c(0x1123),'฿':_0x37e46c(0x34b9),'₭':_0x37e46c(0x2abb),'₩':_0x37e46c(0x1112),'₹':_0x37e46c(0x40b2),'₽':_0x37e46c(0x74c),'₺':_0x37e46c(0x1665)},_0x223592={'%':_0x37e46c(0x11f8),'°':_0x37e46c(0x187c)},_0x5ee40c=function(_0xd7878e){const _0x269e11=_0x37e46c;let _0x1f371f={'suffix':'','prefix':_0xd7878e[_0x269e11(0x11e8)]};return 
_0x210c09[_0x269e11(0x2427)](_0xd7878e[_0x269e11(0x11e8)])&&(_0x1f371f[_0x269e11(0x36d7)]+='\x20'+_0x210c09[_0xd7878e['prefix']],_0x1f371f['prefix']=''),_0x223592[_0x269e11(0x2427)](_0xd7878e[_0x269e11(0x36d7)])&&(_0x1f371f[_0x269e11(0x36d7)]+='\x20'+_0x223592[_0xd7878e[_0x269e11(0x36d7)]]),_0x1f371f[_0x269e11(0x36d7)]&&0x1===_0xd7878e[_0x269e11(0x51b1)]&&(_0x1f371f[_0x269e11(0x36d7)]=_0x1f371f['suffix'][_0x269e11(0x741)](/s$/,'')),!_0x1f371f[_0x269e11(0x36d7)]&&_0xd7878e[_0x269e11(0x36d7)]&&(_0x1f371f[_0x269e11(0x36d7)]+='\x20'+_0xd7878e[_0x269e11(0x36d7)]),_0x1f371f;},_0x245f06=function(_0xab3dda,_0x194f10){const _0x19e772=_0x37e46c;if('TextOrdinal'===_0x194f10){let {prefix:_0x2a7629,suffix:_0x5369dc}=_0x5ee40c(_0xab3dda);return _0x2a7629+_0x5b2a92(_0xab3dda)+_0x5369dc;}if('Ordinal'===_0x194f10)return _0xab3dda[_0x19e772(0x11e8)]+_0x145295(_0xab3dda)+_0xab3dda[_0x19e772(0x36d7)];if(_0x19e772(0x504d)===_0x194f10){let {prefix:_0x2d27dc,suffix:_0x243d61}=_0x5ee40c(_0xab3dda);return _0x2d27dc+_0x478652(_0xab3dda)+_0x243d61;}let _0x209bee=_0xab3dda[_0x19e772(0x51b1)];return _0xab3dda[_0x19e772(0x2693)]&&(_0x209bee=_0x209bee[_0x19e772(0x3af4)]()),_0xab3dda['prefix']+String(_0x209bee)+_0xab3dda[_0x19e772(0x36d7)];},_0x442a40=function(_0x1f901c){const _0xddb200=_0x37e46c;if(_0xddb200(0x2431)==typeof _0x1f901c||_0xddb200(0x4a80)==typeof _0x1f901c){let _0x570c11={};return _0x570c11[_0x1f901c]=!0x0,_0x570c11;}return _0x1d9064=_0x1f901c,'[object\x20Array]'===Object['prototype'][_0xddb200(0x8e8)][_0xddb200(0x236b)](_0x1d9064)?_0x1f901c[_0xddb200(0x24d8)]((_0x47fbd7,_0x4d1818)=>(_0x47fbd7[_0x4d1818]=!0x0,_0x47fbd7),{}):_0x1f901c||{};var _0x1d9064;},_0x3db771=function(_0x43363c,_0x2efb9f={}){const _0x56785c=_0x37e46c;return _0x2efb9f=_0x442a40(_0x2efb9f),_0x43363c[_0x56785c(0x1465)](_0x7e388a=>{let {unit:_0x5751d8}=_0x25e8f7(_0x7e388a);return!(!_0x5751d8||!0x0!==_0x2efb9f[_0x5751d8]);});},_0x5dcdf2=function(_0x1357ab){const _0x558799=_0x37e46c;class _0x4140ed extends 
_0x1357ab{constructor(_0x3ae7d6,_0x11f6b9,_0x10f2b3){const _0x5bbcfe=a0_0x11e7;super(_0x3ae7d6,_0x11f6b9,_0x10f2b3),this[_0x5bbcfe(0x106d)]='Numbers';}[_0x558799(0x2956)](_0xc2aa2f){const _0x1134ba=_0x558799;return this[_0x1134ba(0x152d)](_0xc2aa2f)[_0x1134ba(0x4833)](_0x25e8f7);}[_0x558799(0xf9e)](_0x23be2c){const _0x5555e4=_0x558799;return this[_0x5555e4(0x152d)](_0x23be2c)['map'](_0x25e8f7)[_0x5555e4(0x4833)](_0x2dc2e8=>_0x2dc2e8[_0x5555e4(0x51b1)]);}[_0x558799(0x3289)](_0x1ef549){const _0x49c3c2=_0x558799;let _0x2178b1=_0x49c3c2(0x20c7)==typeof _0x1ef549?_0x1ef549:{};return this[_0x49c3c2(0x152d)](_0x1ef549)[_0x49c3c2(0x4833)](_0x2e7627=>{const _0x5e1f54=_0x49c3c2;let _0x1ea2b9=_0x2e7627[_0x5e1f54(0x324a)]()[_0x5e1f54(0x3289)](_0x2178b1)[0x0],_0x467f25=_0x25e8f7(_0x2e7627);return _0x1ea2b9[_0x5e1f54(0x4a80)]={'prefix':_0x467f25[_0x5e1f54(0x11e8)],'num':_0x467f25[_0x5e1f54(0x51b1)],'suffix':_0x467f25[_0x5e1f54(0x36d7)],'hasComma':_0x467f25['hasComma'],'unit':_0x467f25[_0x5e1f54(0x43c3)]},_0x1ea2b9;},[]);}['units'](){const _0x2db376=_0x558799;return this[_0x2db376(0xfbb)](_0x2db376(0x4002))[_0x2db376(0x2d96)](_0x2db376(0x298f));}[_0x558799(0x380)](_0x213a5c){return _0x3db771(this,_0x213a5c);}[_0x558799(0x36ef)](){const _0x5ce04a=_0x558799;return this['if'](_0x5ce04a(0x2a5e));}[_0x558799(0x17a6)](){const _0x8cf425=_0x558799;return this['if'](_0x8cf425(0x696));}[_0x558799(0x2d04)](){const _0x3fe9e6=_0x558799;return this['if'](_0x3fe9e6(0x3aa4))[_0x3fe9e6(0xa21)](_0x2699a4=>{const _0x58682e=_0x3fe9e6;let _0x23db04=_0x25e8f7(_0x2699a4);if(null===_0x23db04[_0x58682e(0x51b1)])return;let _0xe6ca5d=_0x2699a4[_0x58682e(0x3170)](_0x58682e(0x2a5e))?'Ordinal':'Cardinal',_0x368354=_0x245f06(_0x23db04,_0xe6ca5d);_0x2699a4[_0x58682e(0x1f1f)](_0x368354,{'tags':!0x0}),_0x2699a4['tag'](_0x58682e(0x466e));}),this;}[_0x558799(0x3af4)](){const _0x5c8c4d=_0x558799;return this[_0x5c8c4d(0xa21)](_0x566d2f=>{const _0x175ddb=_0x5c8c4d;let 
_0xe34578=_0x25e8f7(_0x566d2f);if(null===_0xe34578['num'])return;let _0x344464=_0xe34578[_0x175ddb(0x51b1)][_0x175ddb(0x3af4)]();if(_0x566d2f[_0x175ddb(0x3170)]('#Ordinal')){let _0x4b7b6d=_0x245f06(_0xe34578,_0x175ddb(0x4eea))[_0x175ddb(0x2d96)](/[a-z]+$/);_0x4b7b6d&&(_0x344464+=_0x4b7b6d[0x0]||'');}_0x566d2f[_0x175ddb(0x1f1f)](_0x344464,{'tags':!0x0});}),this;}[_0x558799(0x2f71)](){const _0x34847b=_0x558799;let _0x4e040b=this[_0x34847b(0x4833)](_0x582704=>{const _0x5adef6=_0x34847b;if(_0x582704[_0x5adef6(0x3170)](_0x5adef6(0x3aa4)))return _0x582704;let _0x1c27c7=_0x25e8f7(_0x582704);if(null===_0x1c27c7[_0x5adef6(0x51b1)])return _0x582704;let _0x504537=_0x582704[_0x5adef6(0x3170)](_0x5adef6(0x2a5e))?_0x5adef6(0x5239):'TextCardinal',_0x5f22c8=_0x245f06(_0x1c27c7,_0x504537);return _0x582704[_0x5adef6(0x1f1f)](_0x5f22c8,{'tags':!0x0}),_0x582704['tag'](_0x5adef6(0xc93)),_0x582704;});return new _0x4140ed(_0x4e040b[_0x34847b(0x295)],_0x4e040b[_0x34847b(0x43e4)]);}[_0x558799(0x1eb9)](){const _0x1357dd=_0x558799;let _0x343735=this[_0x1357dd(0x4833)](_0x3e5053=>{const _0x3d9674=_0x1357dd;if(!_0x3e5053['has'](_0x3d9674(0x2a5e)))return _0x3e5053;let _0x54db3f=_0x25e8f7(_0x3e5053);if(null===_0x54db3f['num'])return _0x3e5053;let _0x513547=_0x3e5053[_0x3d9674(0x3170)](_0x3d9674(0x3aa4))?'TextCardinal':'Cardinal',_0x3bfc50=_0x245f06(_0x54db3f,_0x513547);return _0x3e5053['replaceWith'](_0x3bfc50,{'tags':!0x0}),_0x3e5053[_0x3d9674(0x15a9)]('Cardinal'),_0x3e5053;});return new _0x4140ed(_0x343735[_0x1357dd(0x295)],_0x343735['pointer']);}['toOrdinal'](){const _0x2423f5=_0x558799;let _0xfab26f=this[_0x2423f5(0x4833)](_0x28292e=>{const _0x57b86f=_0x2423f5;if(_0x28292e['has'](_0x57b86f(0x2a5e)))return _0x28292e;let _0xdfe604=_0x25e8f7(_0x28292e);if(null===_0xdfe604[_0x57b86f(0x51b1)])return _0x28292e;let _0x47d86a=_0x28292e['has'](_0x57b86f(0x3aa4))?_0x57b86f(0x5239):_0x57b86f(0x4eea),_0x501413=_0x245f06(_0xdfe604,_0x47d86a);return 
_0x28292e['replaceWith'](_0x501413,{'tags':!0x0}),_0x28292e[_0x57b86f(0x15a9)](_0x57b86f(0x4eea)),_0x28292e;});return new _0x4140ed(_0xfab26f[_0x2423f5(0x295)],_0xfab26f[_0x2423f5(0x43e4)]);}[_0x558799(0x35bf)](_0x17fea4){const _0x8b555c=_0x558799;return this[_0x8b555c(0x1465)](_0x25db24=>_0x25e8f7(_0x25db24)['num']===_0x17fea4);}[_0x558799(0x36b8)](_0xd0a6e2){const _0x24de47=_0x558799;return this[_0x24de47(0x1465)](_0x46c734=>_0x25e8f7(_0x46c734)['num']>_0xd0a6e2);}[_0x558799(0x112b)](_0x19ed9e){return this['filter'](_0xf83ebb=>_0x25e8f7(_0xf83ebb)['num']<_0x19ed9e);}[_0x558799(0x2597)](_0x40f31f,_0x3231ff){const _0x5bbe6a=_0x558799;return this[_0x5bbe6a(0x1465)](_0x53886f=>{const _0x1bf75d=_0x5bbe6a;let _0x3eef5a=_0x25e8f7(_0x53886f)[_0x1bf75d(0x51b1)];return _0x3eef5a>_0x40f31f&&_0x3eef5a<_0x3231ff;});}[_0x558799(0x1fa)](_0x157316){const _0x3aaf97=_0x558799;if(void 0x0===_0x157316)return this;_0x3aaf97(0x2431)==typeof _0x157316&&(_0x157316=_0x25e8f7(_0x157316)[_0x3aaf97(0x51b1)]);let _0x1dcdc3=this[_0x3aaf97(0x4833)](_0x35b8d2=>{const _0x1a95bf=_0x3aaf97;let _0x2f0f02=_0x25e8f7(_0x35b8d2);if(_0x2f0f02[_0x1a95bf(0x51b1)]=_0x157316,null===_0x2f0f02['num'])return _0x35b8d2;let _0x584232=_0x35b8d2[_0x1a95bf(0x3170)]('#Ordinal')?_0x1a95bf(0x4eea):_0x1a95bf(0xab2);_0x35b8d2[_0x1a95bf(0x3170)](_0x1a95bf(0x3aa4))&&(_0x584232=_0x35b8d2[_0x1a95bf(0x3170)](_0x1a95bf(0x2a5e))?_0x1a95bf(0x5239):'TextCardinal');let _0x160a2c=_0x245f06(_0x2f0f02,_0x584232);return _0x2f0f02[_0x1a95bf(0x2693)]&&_0x1a95bf(0xab2)===_0x584232&&(_0x160a2c=Number(_0x160a2c)['toLocaleString']()),(_0x35b8d2=_0x35b8d2[_0x1a95bf(0xc1a)](_0x1a95bf(0x4624)))[_0x1a95bf(0x1f1f)](_0x160a2c,{'tags':!0x0}),_0x35b8d2;});return new _0x4140ed(_0x1dcdc3[_0x3aaf97(0x295)],_0x1dcdc3['pointer']);}['add'](_0x327e07){const _0x16c60=_0x558799;if(!_0x327e07)return this;_0x16c60(0x2431)==typeof _0x327e07&&(_0x327e07=_0x25e8f7(_0x327e07)[_0x16c60(0x51b1)]);let _0x53bc1a=this[_0x16c60(0x4833)](_0x19635c=>{const 
_0x20a1cb=_0x16c60;let _0x356448=_0x25e8f7(_0x19635c);if(null===_0x356448[_0x20a1cb(0x51b1)])return _0x19635c;_0x356448['num']+=_0x327e07;let _0x4cc730=_0x19635c['has'](_0x20a1cb(0x2a5e))?_0x20a1cb(0x4eea):_0x20a1cb(0xab2);_0x356448[_0x20a1cb(0xa8e)]&&(_0x4cc730=_0x19635c[_0x20a1cb(0x3170)](_0x20a1cb(0x2a5e))?_0x20a1cb(0x5239):_0x20a1cb(0x504d));let _0x2e54c4=_0x245f06(_0x356448,_0x4cc730);return _0x19635c[_0x20a1cb(0x1f1f)](_0x2e54c4,{'tags':!0x0}),_0x19635c;});return new _0x4140ed(_0x53bc1a[_0x16c60(0x295)],_0x53bc1a[_0x16c60(0x43e4)]);}[_0x558799(0x2af4)](_0x5d6cdd,_0x22c343){const _0x3459ed=_0x558799;return this[_0x3459ed(0x362c)](-0x1*_0x5d6cdd,_0x22c343);}[_0x558799(0x4a85)](_0x630668){const _0x6dfd3=_0x558799;return this[_0x6dfd3(0x362c)](0x1,_0x630668);}[_0x558799(0x4b9a)](_0x4a3b10){return this['add'](-0x1,_0x4a3b10);}[_0x558799(0x38d6)](_0x532c0d){const _0x4e3747=_0x558799;let _0x45c451=new _0x4140ed(this[_0x4e3747(0x295)],_0x532c0d);return _0x45c451[_0x4e3747(0x1aa7)]=this[_0x4e3747(0x1aa7)],_0x45c451;}}_0x4140ed[_0x558799(0x3b3c)][_0x558799(0x1325)]=_0x4140ed[_0x558799(0x3b3c)]['toLocaleString'],_0x4140ed['prototype'][_0x558799(0x42cd)]=_0x4140ed[_0x558799(0x3b3c)][_0x558799(0x2597)],_0x4140ed[_0x558799(0x3b3c)][_0x558799(0x3ce8)]=_0x4140ed[_0x558799(0x3b3c)][_0x558799(0x2af4)],_0x4140ed[_0x558799(0x3b3c)][_0x558799(0x3a4b)]=_0x4140ed[_0x558799(0x3b3c)]['add'],_0x4140ed[_0x558799(0x3b3c)][_0x558799(0x39f9)]=_0x4140ed[_0x558799(0x3b3c)][_0x558799(0x35bf)],_0x1357ab[_0x558799(0x3b3c)][_0x558799(0x265f)]=function(_0x1fae54){const _0x429f93=_0x558799;let _0x429691=_0x5db445(this);return _0x429691=_0x429691[_0x429f93(0x152d)](_0x1fae54),new _0x4140ed(this[_0x429f93(0x295)],_0x429691[_0x429f93(0x43e4)]);},_0x1357ab[_0x558799(0x3b3c)][_0x558799(0x3bb8)]=function(_0x2d9c0a){const _0x3d0202=_0x558799;let _0x5aaef8=_0x5db445(this);return 
_0x5aaef8=_0x5aaef8[_0x3d0202(0x1465)](_0x5c8a96=>_0x5c8a96['has'](_0x3d0202(0x5291))||_0x5c8a96[_0x3d0202(0x1349)]('^percent')),_0x5aaef8=_0x5aaef8[_0x3d0202(0x152d)](_0x2d9c0a),new _0x4140ed(this[_0x3d0202(0x295)],_0x5aaef8[_0x3d0202(0x43e4)]);},_0x1357ab[_0x558799(0x3b3c)][_0x558799(0x3cad)]=function(_0x499197){const _0x56a7ce=_0x558799;let _0x394dfa=_0x5db445(this);return _0x394dfa=_0x394dfa['filter'](_0x40b6ed=>_0x40b6ed[_0x56a7ce(0x3170)](_0x56a7ce(0x4366))||_0x40b6ed[_0x56a7ce(0x1349)]('^#Currency')),_0x394dfa=_0x394dfa[_0x56a7ce(0x152d)](_0x499197),new _0x4140ed(this[_0x56a7ce(0x295)],_0x394dfa['pointer']);},_0x1357ab[_0x558799(0x3b3c)][_0x558799(0x1fae)]=_0x1357ab[_0x558799(0x3b3c)][_0x558799(0x265f)];},_0x2a2326={'api':function(_0x201bd7){_0x5d6d9f(_0x201bd7),_0x5dcdf2(_0x201bd7);}},_0x2020db={'people':!0x0,'emails':!0x0,'phoneNumbers':!0x0,'places':!0x0},_0x1614be=function(_0x22243a={}){const _0x788d02=_0x37e46c;return!0x1!==(_0x22243a=Object[_0x788d02(0x4e14)]({},_0x2020db,_0x22243a))['people']&&this[_0x788d02(0x4968)]()[_0x788d02(0x1f1f)](_0x788d02(0x1d4d)),!0x1!==_0x22243a[_0x788d02(0x2364)]&&this['emails']()[_0x788d02(0x1f1f)](_0x788d02(0x1d4d)),!0x1!==_0x22243a[_0x788d02(0x3e59)]&&this[_0x788d02(0x3e59)]()['replaceWith']('██████████'),!0x1!==_0x22243a[_0x788d02(0x28d9)]&&this[_0x788d02(0x28d9)]()[_0x788d02(0x1f1f)](_0x788d02(0x12e1)),this;},_0x5026e9={'api':function(_0x2024cc){const _0x3665e3=_0x37e46c;_0x2024cc[_0x3665e3(0x3b3c)]['redact']=_0x1614be;}},_0x45f775=_0x5026e9,_0x412ad2=function(_0x4799d1){const _0x3d5cfb=_0x37e46c,_0x3978f6=/\?/,{document:_0x1539c8}=_0x4799d1;return _0x4799d1[_0x3d5cfb(0x1465)](_0x2c2897=>{const _0x53354e=_0x3d5cfb;let _0x48cc45=_0x2c2897['docs'][0x0]||[],_0x49eec4=_0x48cc45[_0x48cc45[_0x53354e(0x1b19)]-0x1];return!(!_0x49eec4||_0x1539c8[_0x49eec4[_0x53354e(0x3bb5)][0x0]]['length']!==_0x48cc45[_0x53354e(0x1b19)])&&(!!_0x3978f6[_0x53354e(0x1769)](_0x49eec4[_0x53354e(0x24ce)])||function(_0x3b8fc7){const 
_0xf6cd4=_0x53354e;let _0x70ae47=_0x3b8fc7[_0xf6cd4(0xd84)]();return!(/\.\.$/[_0xf6cd4(0x1769)](_0x3b8fc7[_0xf6cd4(0x3ab5)](_0xf6cd4(0x4006)))||_0x3b8fc7['has'](_0xf6cd4(0x964))&&_0x3b8fc7['has'](_0xf6cd4(0x38c6))||!_0x3b8fc7[_0xf6cd4(0x3170)](_0xf6cd4(0x1579))&&!_0x3b8fc7['has'](_0xf6cd4(0x964))&&!_0x3b8fc7['has'](_0xf6cd4(0x373d))&&!_0x3b8fc7[_0xf6cd4(0x3170)](_0xf6cd4(0x1f7e))&&!_0x70ae47[_0xf6cd4(0x3170)]('(do|does|is|was)\x20#Noun+\x20#Adverb?\x20(#Adjective|#Infinitive)$'));}(_0x2c2897));});},_0x44694d=function(_0x2fe7a8){const _0x38a77f=_0x37e46c;let _0x31d220=_0x2fe7a8;return 0x1===_0x31d220[_0x38a77f(0x1b19)]?_0x31d220:(_0x31d220=_0x31d220['if']('#Verb'),0x1===_0x31d220['length']?_0x31d220:(_0x31d220=_0x31d220[_0x38a77f(0x385b)](_0x38a77f(0x4f31)),_0x31d220=_0x31d220[_0x38a77f(0x385b)](_0x38a77f(0x148d)),_0x31d220=_0x31d220[_0x38a77f(0x385b)](_0x38a77f(0x47fa)),_0x31d220=_0x31d220[_0x38a77f(0x385b)](_0x38a77f(0x4f03)),_0x31d220=_0x31d220[_0x38a77f(0x385b)]('^provided\x20that'),0x1===_0x31d220['length']?_0x31d220:(_0x31d220=_0x31d220[_0x38a77f(0x385b)]('(that|which|whichever|who|whoever|whom|whose|whomever)'),0x1===_0x31d220['length']?_0x31d220:(_0x31d220=_0x31d220['ifNo'](_0x38a77f(0x307)),0x1===_0x31d220[_0x38a77f(0x1b19)]?_0x31d220:(_0x31d220=_0x31d220['ifNo']('^#Gerund'),0x1===_0x31d220['length']?_0x31d220:(0x0===_0x31d220[_0x38a77f(0x1b19)]&&(_0x31d220=_0x2fe7a8),_0x31d220['eq'](0x0)))))));},_0x1daabe=function(_0x9bad89){const _0x4d1ea3=_0x37e46c;let _0x38b156=null;return _0x9bad89[_0x4d1ea3(0x3170)](_0x4d1ea3(0x382e))?_0x38b156=_0x4d1ea3(0xe52):_0x9bad89[_0x4d1ea3(0x3170)]('#FutureTense')?_0x38b156=_0x4d1ea3(0x2b4d):_0x9bad89['has']('#PresentTense')&&(_0x38b156=_0x4d1ea3(0x2c88)),{'tense':_0x38b156};},_0x4a641e=function(_0x22d3b2){const _0x56ea3f=_0x37e46c;let 
_0x19c9d2=_0x22d3b2[_0x56ea3f(0xd84)](),_0x4a780a=_0x44694d(_0x19c9d2)[_0x56ea3f(0x268d)](),_0xb010bb=_0x22d3b2['none'](),_0x5b4e5e=_0x22d3b2[_0x56ea3f(0x28b)](),_0x44920b=_0x22d3b2['none']();return _0x4a780a[_0x56ea3f(0xa21)]((_0x116a36,_0x2776c6)=>{const _0x207222=_0x56ea3f;0x0!==_0x2776c6||_0x116a36[_0x207222(0x3170)](_0x207222(0x45dc))?_0x5b4e5e['found']||!_0x116a36[_0x207222(0x3170)](_0x207222(0x45dc))?_0x5b4e5e['found']&&(_0x44920b=_0x44920b[_0x207222(0x1d1d)](_0x116a36)):_0x5b4e5e=_0x116a36:_0xb010bb=_0x116a36;}),_0x5b4e5e[_0x56ea3f(0x2108)]&&!_0xb010bb[_0x56ea3f(0x2108)]&&(_0xb010bb=_0x5b4e5e[_0x56ea3f(0x5097)]('+')[_0x56ea3f(0x4d51)]()),{'subj':_0xb010bb,'verb':_0x5b4e5e,'pred':_0x44920b,'grammar':_0x1daabe(_0x5b4e5e)};},_0x19de3c=function(_0x4999da){const _0x49e08c=_0x37e46c;let _0x120694=_0x4999da[_0x49e08c(0x34d1)](),_0x444c28=_0x120694['eq'](0x0);if(_0x444c28[_0x49e08c(0x3170)](_0x49e08c(0x382e)))return _0x4999da;if(_0x444c28[_0x49e08c(0x4d85)](),_0x120694[_0x49e08c(0x1b19)]>0x1){_0x120694=_0x120694['slice'](0x1),_0x120694=_0x120694['filter'](_0x3aaacf=>!_0x3aaacf[_0x49e08c(0x215d)](_0x49e08c(0x339))[_0x49e08c(0x2108)]),_0x120694=_0x120694['if'](_0x49e08c(0x4407)),_0x120694=_0x120694['notIf']('#Gerund');let _0x96bb7f=_0x4999da[_0x49e08c(0x2d96)](_0x49e08c(0x2a00))[_0x49e08c(0x4a03)]();_0x120694=_0x120694[_0x49e08c(0xc1a)](_0x96bb7f),_0x120694[_0x49e08c(0x2108)]&&_0x120694['verbs']()[_0x49e08c(0x4d85)]();}return _0x4999da;},_0x57e7c5=function(_0x5dfeaa){const _0x3622aa=_0x37e46c;let _0x4d5b66=_0x5dfeaa[_0x3622aa(0x34d1)]();return _0x4d5b66['eq'](0x0)['toPresentTense'](),_0x4d5b66['length']>0x1&&(_0x4d5b66=_0x4d5b66[_0x3622aa(0x384c)](0x1),_0x4d5b66=_0x4d5b66[_0x3622aa(0x1465)](_0x381b0a=>!_0x381b0a[_0x3622aa(0x215d)](_0x3622aa(0x339))[_0x3622aa(0x2108)]),_0x4d5b66=_0x4d5b66[_0x3622aa(0x45f9)](_0x3622aa(0x503)),_0x4d5b66[_0x3622aa(0x2108)]&&_0x4d5b66[_0x3622aa(0x34d1)]()[_0x3622aa(0x299f)]()),_0x5dfeaa;},_0x8e5f8b=function(_0x4c999d){const 
_0x43ba21=_0x37e46c;let _0x330948=_0x4c999d['verbs']();if(_0x330948['eq'](0x0)['toFutureTense'](),_0x330948=(_0x4c999d=_0x4c999d[_0x43ba21(0x1bb9)]())[_0x43ba21(0x34d1)](),_0x330948[_0x43ba21(0x1b19)]>0x1){_0x330948=_0x330948[_0x43ba21(0x384c)](0x1);let _0x507078=_0x330948[_0x43ba21(0x1465)](_0x46e0b2=>!_0x46e0b2[_0x43ba21(0x215d)](_0x43ba21(0x339))[_0x43ba21(0x2108)]&&(!!_0x46e0b2[_0x43ba21(0x3170)](_0x43ba21(0x216a))||!_0x46e0b2['has'](_0x43ba21(0x503))&&(!!_0x46e0b2[_0x43ba21(0x3170)](_0x43ba21(0x3800))||!(_0x46e0b2['has']('#PresentTense')&&!_0x46e0b2[_0x43ba21(0x3170)](_0x43ba21(0x108e))&&_0x46e0b2[_0x43ba21(0x261d)]('(he|she|it|that|which)$')[_0x43ba21(0x2108)]))));_0x507078['found']&&_0x507078[_0x43ba21(0xa21)](_0x146cfc=>{const _0x1a0413=_0x43ba21;if(_0x146cfc[_0x1a0413(0x3170)](_0x1a0413(0x3800)))return _0x146cfc['match']('was')['replaceWith']('is'),void _0x146cfc[_0x1a0413(0x2d96)]('is')['replaceWith'](_0x1a0413(0x211e));_0x146cfc[_0x1a0413(0x25f2)]();});}return _0x4c999d;},_0x2a1f0e=function(_0xf953ca){const _0x33fc91=_0x37e46c;return _0xf953ca['verbs']()[_0x33fc91(0x25f2)](),_0xf953ca;},_0x57d189=function(_0xc419fd){const _0x4b1411=_0x37e46c;class _0x50d484 extends _0xc419fd{constructor(_0x3c550e,_0x4b1c6c,_0x559062){const _0x2c979a=a0_0x11e7;super(_0x3c550e,_0x4b1c6c,_0x559062),this[_0x2c979a(0x106d)]=_0x2c979a(0x2eb6);}[_0x4b1411(0x3289)](_0x4cef66={}){const _0x15271c=_0x4b1411;return this[_0x15271c(0x4833)](_0x425e8e=>{const _0x527292=_0x15271c;let _0x2db552=_0x425e8e['toView']()['json'](_0x4cef66)[0x0]||{},{subj:_0x3a9c7c,verb:_0xfc5a27,pred:_0x44f2dd,grammar:_0x47dfd5}=_0x4a641e(_0x425e8e);return _0x2db552[_0x527292(0x3824)]={'subject':_0x3a9c7c[_0x527292(0x4006)](_0x527292(0x47d)),'verb':_0xfc5a27[_0x527292(0x4006)](_0x527292(0x47d)),'predicate':_0x44f2dd[_0x527292(0x4006)](_0x527292(0x47d)),'grammar':_0x47dfd5},_0x2db552;},[]);}[_0x4b1411(0x4d85)](_0x5c309b){const _0x2724ba=_0x4b1411;return 
this[_0x2724ba(0x152d)](_0x5c309b)[_0x2724ba(0x4833)](_0x43da20=>{let _0xac05de=_0x4a641e(_0x43da20);return _0x19de3c(_0x43da20,_0xac05de);});}['toPresentTense'](_0x48ae82){const _0x21bb53=_0x4b1411;return this['getNth'](_0x48ae82)[_0x21bb53(0x4833)](_0x45149d=>{let _0x307400=_0x4a641e(_0x45149d);return _0x57e7c5(_0x45149d,_0x307400);});}[_0x4b1411(0x3ace)](_0x59041d){const _0x12ee16=_0x4b1411;return this[_0x12ee16(0x152d)](_0x59041d)[_0x12ee16(0x4833)](_0x335e1c=>{let _0x956ea1=_0x4a641e(_0x335e1c);return _0x335e1c=_0x8e5f8b(_0x335e1c,_0x956ea1);});}['toInfinitive'](_0x40d660){const _0x283e43=_0x4b1411;return this[_0x283e43(0x152d)](_0x40d660)[_0x283e43(0x4833)](_0x19e32f=>{let _0x738301=_0x4a641e(_0x19e32f);return _0x2a1f0e(_0x19e32f,_0x738301);});}[_0x4b1411(0x413)](_0x1bd451){const _0xaa9adc=_0x4b1411;return this[_0xaa9adc(0x152d)](_0x1bd451)[_0xaa9adc(0x4833)](_0x5854dc=>{return _0x4a641e(_0x5854dc),function(_0x48a7bb){const _0x1506e2=a0_0x11e7;return _0x48a7bb[_0x1506e2(0x34d1)]()[_0x1506e2(0x4d51)]()['toNegative']()[_0x1506e2(0x23df)](_0x1506e2(0x268d)),_0x48a7bb;}(_0x5854dc);});}[_0x4b1411(0x2037)](_0x597b09){const _0x2933eb=_0x4b1411;return this[_0x2933eb(0x152d)](_0x597b09)[_0x2933eb(0x4833)](_0x19a288=>{return _0x4a641e(_0x19a288),function(_0x5bdca0){const _0x52a7e1=a0_0x11e7;return _0x5bdca0[_0x52a7e1(0x34d1)]()[_0x52a7e1(0x4d51)]()[_0x52a7e1(0x2037)]()[_0x52a7e1(0x23df)](_0x52a7e1(0x268d)),_0x5bdca0;}(_0x19a288);});}['isQuestion'](_0x3db86f){const _0x2c4607=_0x4b1411;return this[_0x2c4607(0x11b3)](_0x3db86f);}['isExclamation'](_0x5436d5){const _0x387661=_0x4b1411;let _0x5ea801=this[_0x387661(0x1465)](_0x28124d=>_0x28124d[_0x387661(0x2cce)]()[_0x387661(0x3170)]('@hasExclamation'));return _0x5ea801[_0x387661(0x152d)](_0x5436d5);}[_0x4b1411(0x32b6)](_0x4909bc){const _0x1fb432=_0x4b1411;let _0x2d671d=this['filter'](_0x5c5ac7=>!_0x5c5ac7['isExclamation']()[_0x1fb432(0x2108)]&&!_0x5c5ac7[_0x1fb432(0x5168)]()[_0x1fb432(0x2108)]);return 
_0x2d671d['getNth'](_0x4909bc);}[_0x4b1411(0x38d6)](_0x26922d){const _0x450f00=_0x4b1411;let _0x4adc96=new _0x50d484(this[_0x450f00(0x295)],_0x26922d);return _0x4adc96[_0x450f00(0x1aa7)]=this[_0x450f00(0x1aa7)],_0x4adc96;}}_0x50d484[_0x4b1411(0x3b3c)][_0x4b1411(0x45ee)]=_0x50d484['prototype'][_0x4b1411(0x299f)],_0x50d484['prototype'][_0x4b1411(0x318d)]=_0x50d484['prototype'][_0x4b1411(0x4d85)],_0x50d484[_0x4b1411(0x3b3c)][_0x4b1411(0x110d)]=_0x50d484['prototype'][_0x4b1411(0x3ace)];const _0x6b836={'sentences':function(_0x611882){const _0x439351=_0x4b1411;let _0x2945e7=this[_0x439351(0x4833)](_0x57fd99=>_0x57fd99[_0x439351(0x1bb9)]());return _0x2945e7=_0x2945e7[_0x439351(0x152d)](_0x611882),new _0x50d484(this[_0x439351(0x295)],_0x2945e7[_0x439351(0x43e4)]);},'questions':function(_0x3069e2){const _0x5d1c=_0x4b1411;return _0x412ad2(this)[_0x5d1c(0x152d)](_0x3069e2);}};Object['assign'](_0xc419fd[_0x4b1411(0x3b3c)],_0x6b836);},_0x253bdd={'api':_0x57d189},_0x43ead5=function(_0x12a13f){const _0x4e9fcb=_0x37e46c;let _0x616c50=_0x12a13f[_0x4e9fcb(0x2d96)](_0x4e9fcb(0xc57)),_0x1f43da=_0x616c50['match'](_0x4e9fcb(0xbb1))['notIf'](_0x4e9fcb(0x38eb));return _0x616c50=_0x616c50[_0x4e9fcb(0x319a)](_0x1f43da),_0x616c50;},_0x4fb3ea=function(_0x40ca03){const _0x35f53d=_0x37e46c;let _0x3f8491={};_0x3f8491['firstName']=_0x40ca03[_0x35f53d(0x2d96)](_0x35f53d(0x4ac8)),_0x3f8491[_0x35f53d(0x4f5d)]=_0x40ca03[_0x35f53d(0x2d96)](_0x35f53d(0xfa5)),_0x3f8491[_0x35f53d(0x10dc)]=_0x40ca03[_0x35f53d(0x2d96)](_0x35f53d(0x1364));let _0x323cc7=_0x3f8491[_0x35f53d(0x4f5d)],_0x34d772=_0x3f8491[_0x35f53d(0x3890)];return 
_0x34d772['found']&&_0x323cc7[_0x35f53d(0x2108)]||_0x34d772['found']||_0x323cc7[_0x35f53d(0x2108)]||!_0x40ca03[_0x35f53d(0x3170)](_0x35f53d(0x3430))||(_0x3f8491['lastName']=_0x40ca03[_0x35f53d(0x2d96)]('.$')),_0x3f8491;},_0x1ddd1d='male',_0x17f837=_0x37e46c(0x1f25),_0x119226={'mr':_0x1ddd1d,'mrs':_0x17f837,'miss':_0x17f837,'madam':_0x17f837,'king':_0x1ddd1d,'queen':_0x17f837,'duke':_0x1ddd1d,'duchess':_0x17f837,'baron':_0x1ddd1d,'baroness':_0x17f837,'count':_0x1ddd1d,'countess':_0x17f837,'prince':_0x1ddd1d,'princess':_0x17f837,'sire':_0x1ddd1d,'dame':_0x17f837,'lady':_0x17f837,'ayatullah':_0x1ddd1d,'congressman':_0x1ddd1d,'congresswoman':_0x17f837,'first\x20lady':_0x17f837,'mx':null},_0x471d20=function(_0x801085,_0x42c2df){const _0x1af5f6=_0x37e46c;let {firstName:_0x253b82,honorific:_0x4a5d32}=_0x801085;if(_0x253b82[_0x1af5f6(0x3170)]('#FemaleName'))return _0x17f837;if(_0x253b82[_0x1af5f6(0x3170)](_0x1af5f6(0xfdf)))return _0x1ddd1d;if(_0x4a5d32[_0x1af5f6(0x2108)]){let _0x62f093=_0x4a5d32[_0x1af5f6(0x4006)](_0x1af5f6(0x47d));if(_0x62f093=_0x62f093[_0x1af5f6(0x741)](/\./g,''),_0x119226['hasOwnProperty'](_0x62f093))return _0x119226[_0x62f093];if(/^her /['test'](_0x62f093))return _0x17f837;if(/^his /['test'](_0x62f093))return _0x1ddd1d;}let _0x2f8a58=_0x42c2df['after']();if(!_0x2f8a58[_0x1af5f6(0x3170)](_0x1af5f6(0x47c9))&&_0x2f8a58[_0x1af5f6(0x3170)](_0x1af5f6(0x869))){let _0xeb6c51=_0x2f8a58[_0x1af5f6(0x2d96)](_0x1af5f6(0x869));if(_0xeb6c51['has'](_0x1af5f6(0x17d0)))return null;let _0x295c8b=_0xeb6c51['has'](_0x1af5f6(0x20e3)),_0x3d1a45=_0xeb6c51[_0x1af5f6(0x3170)](_0x1af5f6(0x20e9));if(_0x295c8b&&!_0x3d1a45)return _0x1ddd1d;if(_0x3d1a45&&!_0x295c8b)return _0x17f837;}return null;},_0xd7a9bf=function(_0xe9456a){const _0x590e0f=_0x37e46c;class _0x1f59e5 extends _0xe9456a{constructor(_0x3161a8,_0x17f3cf,_0x513db3){const _0x220d0=a0_0x11e7;super(_0x3161a8,_0x17f3cf,_0x513db3),this[_0x220d0(0x106d)]=_0x220d0(0x2eeb);}[_0x590e0f(0x2956)](_0x3e17ab){return 
this['getNth'](_0x3e17ab)['map'](_0x4fb3ea);}['json'](_0x4f7c74){const _0x59c7fa=_0x590e0f;let _0xb62ae8=_0x59c7fa(0x20c7)==typeof _0x4f7c74?_0x4f7c74:{};return this[_0x59c7fa(0x152d)](_0x4f7c74)[_0x59c7fa(0x4833)](_0x4db428=>{const _0x2a00bc=_0x59c7fa;let _0x44c7eb=_0x4db428[_0x2a00bc(0x324a)]()[_0x2a00bc(0x3289)](_0xb62ae8)[0x0],_0x4b2ac7=_0x4fb3ea(_0x4db428);return _0x44c7eb['person']={'firstName':_0x4b2ac7['firstName'][_0x2a00bc(0x4006)]('normal'),'lastName':_0x4b2ac7[_0x2a00bc(0x4f5d)][_0x2a00bc(0x4006)](_0x2a00bc(0x47d)),'honorific':_0x4b2ac7[_0x2a00bc(0x10dc)]['text'](_0x2a00bc(0x47d)),'presumed_gender':_0x471d20(_0x4b2ac7,_0x4db428)},_0x44c7eb;},[]);}['presumedMale'](){const _0x2ccdbc=_0x590e0f;return this['filter'](_0x33b873=>_0x33b873[_0x2ccdbc(0x3170)](_0x2ccdbc(0x31b8)));}[_0x590e0f(0x29d1)](){const _0x427dd5=_0x590e0f;return this[_0x427dd5(0x1465)](_0x19195c=>_0x19195c[_0x427dd5(0x3170)](_0x427dd5(0x4d02)));}[_0x590e0f(0x38d6)](_0x30f60e){const _0x4d4213=_0x590e0f;let _0xfe10c5=new _0x1f59e5(this[_0x4d4213(0x295)],_0x30f60e);return _0xfe10c5[_0x4d4213(0x1aa7)]=this['_cache'],_0xfe10c5;}}_0xe9456a[_0x590e0f(0x3b3c)]['people']=function(_0x37c899){const _0x518c24=_0x590e0f;let _0x4a22d7=_0x43ead5(this);return _0x4a22d7=_0x4a22d7[_0x518c24(0x152d)](_0x37c899),new _0x1f59e5(this[_0x518c24(0x295)],_0x4a22d7[_0x518c24(0x43e4)]);};},_0x33183a=function(_0x4525d5){const _0x230696=_0x37e46c;let _0x2dff97=_0x4525d5[_0x230696(0x2d96)](_0x230696(0x1134)),_0x1e8acd=_0x2dff97[_0x230696(0x2d96)](_0x230696(0x38c6));return _0x1e8acd=_0x1e8acd[_0x230696(0x1465)](_0x41031f=>!!_0x41031f[_0x230696(0x3170)](_0x230696(0x4486))||(!_0x41031f['has'](_0x230696(0x4274))||!_0x41031f[_0x230696(0x1349)](_0x230696(0x327d))[_0x230696(0x2108)])),_0x2dff97=_0x2dff97[_0x230696(0x319a)](_0x1e8acd),_0x2dff97;},_0x21223b=function(_0x4a8a58){const _0x405132=_0x37e46c;_0x4a8a58[_0x405132(0x3b3c)]['places']=function(_0x2d3f23){const _0x5ccf5b=_0x405132;let _0x2ba87d=_0x33183a(this);return 
_0x2ba87d=_0x2ba87d['getNth'](_0x2d3f23),new _0x4a8a58(this[_0x5ccf5b(0x295)],_0x2ba87d[_0x5ccf5b(0x43e4)]);};},_0x54394b=function(_0xb52c53){const _0x4b971e=_0x37e46c;_0xb52c53[_0x4b971e(0x3b3c)][_0x4b971e(0x48c4)]=function(_0x47a35b){const _0x52a8ca=_0x4b971e;return this['match'](_0x52a8ca(0x2b1e))['getNth'](_0x47a35b);};},_0x341be1=function(_0x3fa2d8){const _0x271a0c=_0x37e46c;let _0x266275=this[_0x271a0c(0xd84)](),_0x27e8e8=_0x266275[_0x271a0c(0x4968)]();return _0x27e8e8=_0x27e8e8[_0x271a0c(0x1d1d)](_0x266275[_0x271a0c(0x3e59)]()),_0x27e8e8=_0x27e8e8[_0x271a0c(0x1d1d)](_0x266275[_0x271a0c(0x48c4)]()),_0x27e8e8=_0x27e8e8[_0x271a0c(0xc1a)]('(someone|man|woman|mother|brother|sister|father)'),_0x27e8e8=_0x27e8e8[_0x271a0c(0x4c33)](_0x271a0c(0xc26)),_0x27e8e8=_0x27e8e8[_0x271a0c(0x152d)](_0x3fa2d8),_0x27e8e8;},_0x1161db=function(_0x26e636){const _0x187f3c=_0x37e46c;_0x26e636[_0x187f3c(0x3b3c)][_0x187f3c(0x1d99)]=_0x341be1;},_0x23e681={'api':function(_0x1bb938){_0xd7a9bf(_0x1bb938),_0x21223b(_0x1bb938),_0x54394b(_0x1bb938),_0x1161db(_0x1bb938);}},_0x11005e=function(_0x4185dd){const _0x342325=_0x37e46c;let _0xd54766=_0x4185dd[_0x342325(0x2d96)](_0x342325(0x45dc));return 
_0xd54766=_0xd54766[_0x342325(0xc1a)](_0x342325(0x4d4e)),_0xd54766=_0xd54766[_0x342325(0xc1a)](_0x342325(0x8b6)),_0xd54766=_0xd54766[_0x342325(0x319a)](_0x342325(0x38c6)),_0xd54766=_0xd54766[_0x342325(0x319a)](_0x342325(0x112a),0x0),_0xd54766=_0xd54766['splitBefore'](_0x342325(0x1d68),0x0),_0xd54766=_0xd54766[_0x342325(0x4c0)](_0x342325(0x3c04),0x0),_0xd54766=_0xd54766[_0x342325(0x319a)](_0x342325(0x495a),0x0),_0xd54766=_0xd54766[_0x342325(0x4c0)](_0x342325(0x3151),0x0),_0xd54766=_0xd54766[_0x342325(0x4c0)](_0x342325(0x147b),0x0),_0xd54766=_0xd54766[_0x342325(0x4c0)]('(#PresentTense|#PastTense)\x20[(had|has)]',0x0),_0xd54766=_0xd54766[_0x342325(0xc1a)](_0x342325(0x3142)),_0xd54766=_0xd54766[_0x342325(0xc1a)](_0x342325(0x335e)),_0xd54766=_0xd54766[_0x342325(0x319a)](_0x342325(0x3edb),0x0),_0xd54766=_0xd54766['splitAfter']('[#PastTense]\x20#Auxiliary+\x20#PastTense',0x0),_0xd54766=_0xd54766[_0x342325(0x319a)]('#Copula\x20[#Gerund]\x20#PastTense',0x0),_0xd54766=_0xd54766['if'](_0x342325(0x38b0)),_0xd54766[_0x342325(0x3170)]('(#Verb\x20&&\x20!#Auxiliary)\x20#Adverb+?\x20#Copula')&&(_0xd54766=_0xd54766[_0x342325(0x4c0)](_0x342325(0x3800))),_0xd54766;},_0x3b4858=function(_0x9bc940){const _0x1e4751=_0x37e46c;let _0x303f3b=_0x9bc940;return _0x9bc940[_0x1e4751(0x208a)]()>0x1&&(_0x303f3b=_0x9bc940['not'](_0x1e4751(0xeaa))),_0x303f3b[_0x1e4751(0x1b19)]>0x1&&!_0x303f3b['has']('#Phrasal\x20#Particle')&&(_0x303f3b=_0x303f3b[_0x1e4751(0x4d3c)]()),_0x303f3b=_0x303f3b[_0x1e4751(0xc1a)]('(want|wants|wanted)\x20to'),_0x303f3b[_0x1e4751(0x2108)]||(_0x303f3b=_0x9bc940[_0x1e4751(0xc1a)](_0x1e4751(0x4f07))),_0x303f3b;},_0x3e3191=function(_0x5d6dae,_0xd4b683){const _0x121e29=_0x37e46c;let _0x3a5555={'pre':_0x5d6dae[_0x121e29(0x28b)](),'post':_0x5d6dae[_0x121e29(0x28b)]()};if(!_0x5d6dae['has']('#Adverb'))return _0x3a5555;let _0x1bbcdd=_0x5d6dae['splitOn'](_0xd4b683);return 
0x3===_0x1bbcdd[_0x121e29(0x1b19)]?{'pre':_0x1bbcdd['eq'](0x0)[_0x121e29(0x48d1)](),'post':_0x1bbcdd['eq'](0x2)['adverbs']()}:_0x1bbcdd['eq'](0x0)['isDoc'](_0xd4b683)?(_0x3a5555[_0x121e29(0x24ce)]=_0x1bbcdd['eq'](0x1)['adverbs'](),_0x3a5555):(_0x3a5555['pre']=_0x1bbcdd['eq'](0x0)['adverbs'](),_0x3a5555);},_0xf4cba9=function(_0x20cdc3,_0x2ca8c9){const _0x2b9173=_0x37e46c;let _0x50bf5b=_0x20cdc3[_0x2b9173(0x4c0)](_0x2ca8c9);if(_0x50bf5b[_0x2b9173(0x1b19)]<=0x1)return _0x20cdc3[_0x2b9173(0x28b)]();let _0x2ea99f=_0x50bf5b['eq'](0x0);return _0x2ea99f=_0x2ea99f[_0x2b9173(0xc1a)]('(#Adverb|#Negative|#Prefix)'),_0x2ea99f;},_0x25b75a=function(_0x161d31){const _0x42761a=_0x37e46c;return _0x161d31['match'](_0x42761a(0x4f07));},_0x364315=function(_0x56dbc3){const _0x596b7b=_0x37e46c;if(!_0x56dbc3['has'](_0x596b7b(0x3df2)))return{'verb':_0x56dbc3[_0x596b7b(0x28b)](),'particle':_0x56dbc3[_0x596b7b(0x28b)]()};let _0x1e9873=_0x56dbc3[_0x596b7b(0x2d96)]('#Particle$');return{'verb':_0x56dbc3['not'](_0x1e9873),'particle':_0x1e9873};},_0x4e0645=function(_0x1ac58f){const _0x188efc=_0x37e46c;let _0x388683=_0x1ac58f[_0x188efc(0x150c)]();_0x388683[_0x188efc(0x8cf)]()['expand']();const _0x27b6f2=_0x3b4858(_0x388683);return{'root':_0x27b6f2,'prefix':_0x388683[_0x188efc(0x2d96)](_0x188efc(0x3ebe)),'adverbs':_0x3e3191(_0x388683,_0x27b6f2),'auxiliary':_0xf4cba9(_0x388683,_0x27b6f2),'negative':_0x25b75a(_0x388683),'phrasal':_0x364315(_0x27b6f2)};},_0x3d05ae={'tense':'PresentTense'},_0x517acd={'conditional':!0x0},_0x395c8c={'tense':_0x37e46c(0x2b4d)},_0xb76472={'progressive':!0x0},_0x2c21a1={'tense':_0x37e46c(0xe52)},_0x2ba454={'complete':!0x0,'progressive':!0x1},_0x1c4df4={'passive':!0x0},_0x2b8fb0=function(_0x2178bd){const _0x625bb4=_0x37e46c;let _0xb97b49={};return _0x2178bd[_0x625bb4(0xa21)](_0xca3658=>{const 
_0x54b632=_0x625bb4;Object[_0x54b632(0x4e14)](_0xb97b49,_0xca3658);}),_0xb97b49;},_0x306660={'imperative':[[_0x37e46c(0x3018),[]]],'want-infinitive':[[_0x37e46c(0x3da8),[_0x3d05ae]],[_0x37e46c(0x1c62),[_0x2c21a1]],[_0x37e46c(0x33c4),[_0x395c8c]]],'gerund-phrase':[[_0x37e46c(0x2f23),[_0x2c21a1]],[_0x37e46c(0x2310),[_0x3d05ae]],[_0x37e46c(0x3939),[_0x3d05ae]],['^will\x20#Infinitive\x20#Gerund$',[_0x395c8c]],['^have\x20#PastTense\x20#Gerund$',[_0x2c21a1]],['^will\x20have\x20#PastTense\x20#Gerund$',[_0x2c21a1]]],'simple-present':[[_0x37e46c(0x438d),[_0x3d05ae]],[_0x37e46c(0x1868),[_0x3d05ae]]],'simple-past':[[_0x37e46c(0x51d6),[_0x2c21a1]]],'simple-future':[['^will\x20#Adverb?\x20#Infinitive',[_0x395c8c]]],'present-progressive':[[_0x37e46c(0x3a30),[_0x3d05ae,_0xb76472]]],'past-progressive':[[_0x37e46c(0x50a4),[_0x2c21a1,_0xb76472]]],'future-progressive':[['^will\x20be\x20#Gerund$',[_0x395c8c,_0xb76472]]],'present-perfect':[[_0x37e46c(0x2208),[_0x2c21a1,_0x2ba454]]],'past-perfect':[[_0x37e46c(0xc2f),[_0x2c21a1,_0x2ba454]],[_0x37e46c(0x3f48),[_0x2c21a1,_0x2ba454]]],'future-perfect':[['^will\x20have\x20#PastTense$',[_0x395c8c,_0x2ba454]]],'present-perfect-progressive':[[_0x37e46c(0x3bb),[_0x2c21a1,_0xb76472]]],'past-perfect-progressive':[[_0x37e46c(0x1d0d),[_0x2c21a1,_0xb76472]]],'future-perfect-progressive':[['^will\x20have\x20been\x20#Gerund$',[_0x395c8c,_0xb76472]]],'passive-past':[['(got|were|was)\x20#Passive',[_0x2c21a1,_0x1c4df4]],['^(was|were)\x20being\x20#Passive',[_0x2c21a1,_0x1c4df4]],[_0x37e46c(0x12be),[_0x2c21a1,_0x1c4df4]]],'passive-present':[[_0x37e46c(0x26ac),[_0x3d05ae,_0x1c4df4]],[_0x37e46c(0xc58),[_0x3d05ae,_0x1c4df4]],[_0x37e46c(0x4128),[_0x3d05ae,_0x1c4df4]]],'passive-future':[[_0x37e46c(0x2f67),[_0x395c8c,_0x1c4df4,_0x517acd]],[_0x37e46c(0x3551),[_0x395c8c,_0x1c4df4,_0x517acd]]],'present-conditional':[[_0x37e46c(0xa00),[_0x3d05ae,_0x517acd]]],'past-conditional':[[_0x37e46c(0x580),[_0x2c21a1,_0x517acd]]],'auxiliary-future':[[_0x37e46c(0x535),[_0x395c8c]
]],'auxiliary-past':[[_0x37e46c(0xc65),[_0x2c21a1,{'plural':!0x1}]],[_0x37e46c(0x24fa),[_0x2c21a1,_0x2ba454]]],'auxiliary-present':[[_0x37e46c(0x942),[_0x3d05ae,_0x2ba454,{'plural':!0x0}]]],'modal-past':[[_0x37e46c(0x1cc5),[_0x2c21a1]]],'modal-infinitive':[[_0x37e46c(0x4e91),[]]],'infinitive':[[_0x37e46c(0x1868),[]]]};let _0xac2836=[];Object[_0x37e46c(0x1ea9)](_0x306660)[_0x37e46c(0x4833)](_0x2229c9=>{const _0x2b5a1b=_0x37e46c;_0x306660[_0x2229c9][_0x2b5a1b(0xa21)](_0x44e8f3=>{_0xac2836['push']({'name':_0x2229c9,'match':_0x44e8f3[0x0],'data':_0x2b8fb0(_0x44e8f3[0x1])});});});const _0x5230d4=_0xac2836,_0x41126d=function(_0x4e1823,_0x11fcc7){const _0x784869=_0x37e46c;let _0x9cde68={};_0x4e1823=function(_0xb92054,_0x3218d7){const _0x21c6ab=a0_0x11e7;return _0xb92054=_0xb92054['clone'](),_0x3218d7[_0x21c6ab(0x48d1)][_0x21c6ab(0x24ce)]&&_0x3218d7[_0x21c6ab(0x48d1)][_0x21c6ab(0x24ce)][_0x21c6ab(0x2108)]&&_0xb92054['remove'](_0x3218d7[_0x21c6ab(0x48d1)][_0x21c6ab(0x24ce)]),_0x3218d7[_0x21c6ab(0x48d1)][_0x21c6ab(0x1228)]&&_0x3218d7[_0x21c6ab(0x48d1)][_0x21c6ab(0x1228)][_0x21c6ab(0x2108)]&&_0xb92054['remove'](_0x3218d7[_0x21c6ab(0x48d1)]['pre']),_0xb92054['has'](_0x21c6ab(0x4f07))&&(_0xb92054=_0xb92054['remove'](_0x21c6ab(0x4f07))),_0xb92054[_0x21c6ab(0x3170)]('#Prefix')&&(_0xb92054=_0xb92054[_0x21c6ab(0x42a1)](_0x21c6ab(0x3ebe))),_0x3218d7[_0x21c6ab(0x507b)][_0x21c6ab(0x3170)](_0x21c6ab(0x1aed))&&_0xb92054['remove'](_0x21c6ab(0x373b)),_0xb92054[_0x21c6ab(0xc1a)](_0x21c6ab(0x1a77));}(_0x4e1823,_0x11fcc7);for(let _0x26f51a=0x0;_0x26f51a<_0x5230d4['length'];_0x26f51a+=0x1){let _0x7c4306=_0x5230d4[_0x26f51a];if(!0x0===_0x4e1823['has'](_0x7c4306[_0x784869(0x2d96)])){_0x9cde68[_0x784869(0x31e0)]=_0x7c4306['name'],Object[_0x784869(0x4e14)](_0x9cde68,_0x7c4306['data']);break;}}return 
_0x9cde68[_0x784869(0x31e0)]||_0x4e1823[_0x784869(0x3170)]('^#Verb$')&&(_0x9cde68['form']=_0x784869(0x44e9)),_0x9cde68[_0x784869(0xfbf)]||(_0x9cde68[_0x784869(0xfbf)]=_0x11fcc7[_0x784869(0x507b)][_0x784869(0x3170)]('#PastTense')?_0x784869(0xe52):_0x784869(0x2c88)),_0x9cde68[_0x784869(0x214b)]=_0x11fcc7['root'][_0x784869(0x3170)](_0x784869(0x3800)),_0x9cde68[_0x784869(0x4cbc)]=function(_0xb8301b){const _0x3611ec=_0x784869;if(_0xb8301b['has'](_0x3611ec(0x108e))&&_0xb8301b[_0x3611ec(0x51f4)]('to')[_0x3611ec(0x3170)](_0x3611ec(0x1c50)))return!0x0;return!0x1;}(_0x4e1823),_0x9cde68;},_0x21f9bc=function(_0x5c69a2){const _0x4a44be=_0x37e46c;if(_0x5c69a2['length']<=0x1)return!0x1;return(_0x5c69a2[_0x4a44be(0x2956)]()[0x0]||{})[_0x4a44be(0xfcf)];},_0x58a1ca=function(_0x1a9653,_0x1cb9ea){const _0x4b5051=_0x37e46c;return!!_0x1cb9ea[_0x4b5051(0x3170)](_0x4b5051(0x2301))||(!!_0x1a9653[_0x4b5051(0x3170)](_0x4b5051(0x801))||!(!_0x1a9653[_0x4b5051(0x2108)]||!_0x1a9653[_0x4b5051(0x2569)])&&_0x1a9653[_0x4b5051(0x2569)]()[_0x4b5051(0x2108)]);},_0x12677e=function(_0x43aa06){let _0x218636=function(_0x24c95a){const _0x3483f7=a0_0x11e7;let _0x2629a4=_0x24c95a[_0x3483f7(0x5097)]();_0x2629a4=function(_0x5a80f4){const _0x404674=_0x3483f7;let _0x477fdf=_0x5a80f4[_0x404674(0xd84)]();return _0x477fdf=_0x477fdf[_0x404674(0x1465)]((_0x460c99,_0x2158ac)=>!(_0x460c99[_0x404674(0x3170)](_0x404674(0x3418))||_0x2158ac>0x0&&_0x460c99[_0x404674(0x3170)](_0x404674(0x22d3))||_0x2158ac>0x0&&_0x460c99[_0x404674(0x3170)](_0x404674(0x12bf)))),0x0===_0x477fdf[_0x404674(0x1b19)]?_0x5a80f4:_0x477fdf;}(_0x2629a4);let _0x280619=_0x2629a4[_0x3483f7(0x268a)](),_0x20bc5c=_0x280619[_0x3483f7(0x4d3c)](),_0x639232=_0x20bc5c['match'](_0x3483f7(0x113f));if(_0x639232[_0x3483f7(0x2108)])return _0x639232[_0x3483f7(0x268a)]();let _0x4a8b9b=_0x280619['if'](_0x3483f7(0x76e));return 
_0x4a8b9b[_0x3483f7(0x2108)]||!0x1===_0x280619[_0x3483f7(0x2108)]&&(_0x4a8b9b=_0x2629a4['match']('^(that|this|those)'),_0x4a8b9b[_0x3483f7(0x2108)])?_0x4a8b9b:(_0x20bc5c=_0x280619[_0x3483f7(0x4d3c)](),_0x21f9bc(_0x20bc5c)&&(_0x280619['remove'](_0x20bc5c),_0x20bc5c=_0x280619['last']()),_0x21f9bc(_0x20bc5c)&&(_0x280619[_0x3483f7(0x42a1)](_0x20bc5c),_0x20bc5c=_0x280619['last']()),_0x20bc5c);}(_0x43aa06);return{'subject':_0x218636,'plural':_0x58a1ca(_0x218636,_0x43aa06)};},_0x27bf22=_0x19160f=>_0x19160f,_0x110fd6=(_0x572354,_0x3dcfe4)=>{const _0x4774e6=_0x37e46c;let _0x41594b=_0x12677e(_0x572354,_0x3dcfe4),_0x50cd21=_0x41594b[_0x4774e6(0x1beb)];return!(!_0x50cd21['has']('i')&&!_0x50cd21[_0x4774e6(0x3170)]('we'))||_0x41594b['plural'];},_0x510b69=function(_0x6b568c,_0x599fbe){const _0x5a144e=_0x37e46c;if(_0x6b568c[_0x5a144e(0x3170)]('were'))return _0x5a144e(0x9d5);let {subject:_0x10b766,plural:_0x5649cf}=_0x12677e(_0x6b568c,_0x599fbe);return _0x10b766['has']('i')?'am':_0x10b766[_0x5a144e(0x3170)]('we')||_0x5649cf?_0x5a144e(0x9d5):'is';},_0x5488f0=function(_0x57a01b,_0x501510){const _0x301f78=_0x37e46c;let _0x124aff=_0x12677e(_0x57a01b,_0x501510),_0xa72d5f=_0x124aff['subject'];return _0xa72d5f['has']('i')||_0xa72d5f[_0x301f78(0x3170)]('we')||_0x124aff['plural']?'do':_0x301f78(0x14e5);},_0x4640f8=function(_0x36c387){const _0xd864d6=_0x37e46c;return _0x36c387['has'](_0xd864d6(0x108e))?_0xd864d6(0x2631):_0x36c387[_0xd864d6(0x3170)]('#Participle')?_0xd864d6(0x1ebc):_0x36c387[_0xd864d6(0x3170)]('#PastTense')?_0xd864d6(0xe52):_0x36c387[_0xd864d6(0x3170)]('#Gerund')?_0xd864d6(0x42ce):_0x36c387[_0xd864d6(0x3170)]('#PresentTense')?_0xd864d6(0x2c88):void 0x0;},_0x17ae06=function(_0x4d43e3,_0x4c9dee){const _0x2a9527=_0x37e46c,{toInfinitive:_0x302648}=_0x4d43e3[_0x2a9527(0x1578)][_0x2a9527(0x21c9)]['transform'][_0x2a9527(0x4134)];let _0x8ad35c=_0x4c9dee[_0x2a9527(0x507b)][_0x2a9527(0x4006)]({'keepPunct':!0x1});return 
_0x8ad35c=_0x302648(_0x8ad35c,_0x4d43e3['model'],_0x4640f8(_0x4d43e3)),_0x8ad35c&&_0x4d43e3[_0x2a9527(0x741)](_0x4c9dee[_0x2a9527(0x507b)],_0x8ad35c),_0x4d43e3;},_0xfab37f=_0x588b2d=>_0x588b2d['has'](_0x37e46c(0x3f24))?_0x588b2d[_0x37e46c(0x741)](_0x37e46c(0x3f24),_0x37e46c(0x3082)):_0x588b2d[_0x37e46c(0x42a1)](_0x37e46c(0x207)),_0x3e0012=function(_0x65f301){const _0x7e0de5=_0x37e46c;if(!_0x65f301||!_0x65f301[_0x7e0de5(0x4d72)])return[];return _0x65f301['json']({'normal':!0x0,'terms':!0x1,'text':!0x1})[_0x7e0de5(0x4833)](_0x2a2e04=>_0x2a2e04['normal']);},_0x25b400=function(_0x55ab28){const _0x459cad=_0x37e46c;return _0x55ab28&&_0x55ab28[_0x459cad(0x4d72)]?_0x55ab28['text'](_0x459cad(0x47d)):'';},_0x2cede4=function(_0x25f264){const _0x41a31d=_0x37e46c,{toInfinitive:_0x13c4c7}=_0x25f264[_0x41a31d(0x1578)][_0x41a31d(0x21c9)][_0x41a31d(0x5161)][_0x41a31d(0x4134)];return _0x13c4c7(_0x25f264[_0x41a31d(0x4006)]('normal'),_0x25f264[_0x41a31d(0x1556)],_0x4640f8(_0x25f264));},_0x3a5627=function(_0x33bd25){const _0xa87a63=_0x37e46c;let _0x13b846=_0x4e0645(_0x33bd25);_0x33bd25=_0x33bd25['clone']()['toView']();const _0x35309a=_0x41126d(_0x33bd25,_0x13b846);return{'root':_0x13b846[_0xa87a63(0x507b)]['text'](),'preAdverbs':_0x3e0012(_0x13b846[_0xa87a63(0x48d1)][_0xa87a63(0x1228)]),'postAdverbs':_0x3e0012(_0x13b846[_0xa87a63(0x48d1)][_0xa87a63(0x24ce)]),'auxiliary':_0x25b400(_0x13b846[_0xa87a63(0x1160)]),'negative':_0x13b846[_0xa87a63(0x298)][_0xa87a63(0x2108)],'prefix':_0x25b400(_0x13b846[_0xa87a63(0x11e8)]),'infinitive':_0x2cede4(_0x13b846[_0xa87a63(0x507b)]),'grammar':_0x35309a};},_0x36546a={'tags':!0x0},_0x5d9f5c=function(_0x4e565d,_0x4d89af){const _0x5efa13=_0x37e46c,{toInfinitive:_0x1bfc8b}=_0x4e565d['methods'][_0x5efa13(0x21c9)][_0x5efa13(0x5161)]['verb'],{root:_0x16939b,auxiliary:_0x118e05}=_0x4d89af;let 
_0x5dd736=_0x118e05[_0x5efa13(0x4a03)]()[_0x5efa13(0x3c3d)](),_0x477607=_0x16939b[_0x5efa13(0x4006)](_0x5efa13(0x47d));if(_0x477607=_0x1bfc8b(_0x477607,_0x4e565d[_0x5efa13(0x1556)],_0x4640f8(_0x16939b)),_0x477607&&_0x4e565d[_0x5efa13(0x741)](_0x16939b,_0x477607,_0x36546a)[_0x5efa13(0x15a9)](_0x5efa13(0x487b))[_0x5efa13(0x42fe)]()[_0x5efa13(0x15a9)]('Infinitive'),_0x5dd736[_0x5efa13(0x2108)]&&_0x4e565d['remove'](_0x5dd736),_0x4d89af[_0x5efa13(0x298)][_0x5efa13(0x2108)]){_0x4e565d[_0x5efa13(0x3170)](_0x5efa13(0xc1a))||_0x4e565d[_0x5efa13(0xe5a)](_0x5efa13(0xc1a));let _0x54b729=_0x5488f0(_0x4e565d,_0x4d89af);_0x4e565d['prepend'](_0x54b729);}return _0x4e565d[_0x5efa13(0x1bb9)]()['compute']([_0x5efa13(0x209c),'lexicon',_0x5efa13(0x2005),'postTagger',_0x5efa13(0x52a),'chunks']),_0x4e565d;},_0x1e90aa={'tags':!0x0},_0x172025={'noAux':(_0x37ed00,_0x398246)=>(_0x398246['auxiliary'][_0x37e46c(0x2108)]&&(_0x37ed00=_0x37ed00[_0x37e46c(0x42a1)](_0x398246[_0x37e46c(0x1160)])),_0x37ed00),'simple':(_0x1f14a8,_0x106809)=>{const _0x482b7d=_0x37e46c,{conjugate:_0x52bcb9,toInfinitive:_0x84e1bb}=_0x1f14a8[_0x482b7d(0x1578)][_0x482b7d(0x21c9)]['transform'][_0x482b7d(0x4134)],_0x3321bf=_0x106809[_0x482b7d(0x507b)];if(_0x3321bf['has']('#Modal'))return _0x1f14a8;let _0x4dfe9f=_0x3321bf[_0x482b7d(0x4006)]({'keepPunct':!0x1});return _0x4dfe9f=_0x84e1bb(_0x4dfe9f,_0x1f14a8[_0x482b7d(0x1556)],_0x4640f8(_0x3321bf)),_0x4dfe9f=_0x52bcb9(_0x4dfe9f,_0x1f14a8[_0x482b7d(0x1556)])[_0x482b7d(0xe52)],_0x4dfe9f=_0x482b7d(0x72a)===_0x4dfe9f?_0x482b7d(0x23ef):_0x4dfe9f,_0x482b7d(0x23ef)===_0x4dfe9f&&(_0x4dfe9f=((_0x94b9ba,_0x21c234)=>{const _0x6cf26f=_0x482b7d;let {subject:_0x378e26,plural:_0x25b0cb}=_0x12677e(_0x94b9ba,_0x21c234);return _0x25b0cb||_0x378e26[_0x6cf26f(0x3170)]('we')?_0x6cf26f(0x3813):_0x6cf26f(0x23ef);})(_0x1f14a8,_0x106809)),_0x4dfe9f&&_0x1f14a8[_0x482b7d(0x741)](_0x3321bf,_0x4dfe9f,_0x1e90aa),_0x1f14a8;},'both':function(_0x15cd3a,_0x416764){const _0x448162=_0x37e46c;return 
_0x416764['negative']['found']?(_0x15cd3a[_0x448162(0x741)]('will','did'),_0x15cd3a):(_0x15cd3a=_0x172025[_0x448162(0x4dc2)](_0x15cd3a,_0x416764),_0x15cd3a=_0x172025[_0x448162(0x4ee3)](_0x15cd3a,_0x416764));},'hasHad':_0x82a20b=>(_0x82a20b[_0x37e46c(0x741)]('has',_0x37e46c(0x47f4),_0x1e90aa),_0x82a20b),'hasParticiple':(_0x20bc1a,_0x38d1a2)=>{const _0x46eba6=_0x37e46c,{conjugate:_0x1693b7,toInfinitive:_0xc323ba}=_0x20bc1a[_0x46eba6(0x1578)][_0x46eba6(0x21c9)][_0x46eba6(0x5161)][_0x46eba6(0x4134)],_0x369207=_0x38d1a2['root'];let _0x15f61f=_0x369207[_0x46eba6(0x4006)](_0x46eba6(0x47d));return _0x15f61f=_0xc323ba(_0x15f61f,_0x20bc1a[_0x46eba6(0x1556)],_0x4640f8(_0x369207)),_0x1693b7(_0x15f61f,_0x20bc1a[_0x46eba6(0x1556)])[_0x46eba6(0x1ebc)];}},_0x345cbb={'infinitive':_0x172025['simple'],'simple-present':_0x172025[_0x37e46c(0x4dc2)],'simple-past':_0x27bf22,'simple-future':_0x172025[_0x37e46c(0x830)],'present-progressive':_0x6146ab=>(_0x6146ab['replace'](_0x37e46c(0x9d5),'were',_0x1e90aa),_0x6146ab['replace']('(is|are|am)','was',_0x1e90aa),_0x6146ab),'past-progressive':_0x27bf22,'future-progressive':(_0x35d690,_0x4024d8)=>(_0x35d690[_0x37e46c(0x2d96)](_0x4024d8[_0x37e46c(0x507b)])['insertBefore'](_0x37e46c(0x23ef)),_0x35d690[_0x37e46c(0x42a1)](_0x37e46c(0x4648)),_0x35d690),'present-perfect':_0x172025[_0x37e46c(0xcf6)],'past-perfect':_0x27bf22,'future-perfect':(_0x13276b,_0x2b2844)=>(_0x13276b['match'](_0x2b2844[_0x37e46c(0x507b)])['insertBefore'](_0x37e46c(0x47f4)),_0x13276b[_0x37e46c(0x3170)](_0x37e46c(0x207))&&(_0x13276b=_0xfab37f(_0x13276b)),_0x13276b[_0x37e46c(0x42a1)](_0x37e46c(0x1065)),_0x13276b),'present-perfect-progressive':_0x172025[_0x37e46c(0xcf6)],'past-perfect-progressive':_0x27bf22,'future-perfect-progressive':_0x91d433=>(_0x91d433['remove'](_0x37e46c(0x207)),_0x91d433[_0x37e46c(0x741)](_0x37e46c(0x1065),_0x37e46c(0x47f4),_0x1e90aa),_0x91d433),'passive-past':_0x54e833=>(_0x54e833[_0x37e46c(0x741)](_0x37e46c(0x1065),_0x37e46c(0x47f4),_0x1e90aa),_0x54e833),'pa
ssive-present':_0xb8d3f0=>(_0xb8d3f0[_0x37e46c(0x741)](_0x37e46c(0x5271),_0x37e46c(0x23ef),_0x1e90aa),_0xb8d3f0),'passive-future':(_0x2c76b5,_0x1124f8)=>(_0x1124f8[_0x37e46c(0x1160)][_0x37e46c(0x3170)](_0x37e46c(0x211e))&&(_0x2c76b5[_0x37e46c(0x2d96)](_0x1124f8[_0x37e46c(0x507b)])['insertBefore'](_0x37e46c(0x4bd8)),_0x2c76b5[_0x37e46c(0x42a1)](_0x37e46c(0x4648))),_0x1124f8[_0x37e46c(0x1160)][_0x37e46c(0x3170)](_0x37e46c(0x1903))&&(_0x2c76b5['replace'](_0x37e46c(0x1065),'had',_0x1e90aa),_0x2c76b5['remove'](_0x37e46c(0x207))),_0x2c76b5),'present-conditional':_0x5bff42=>(_0x5bff42[_0x37e46c(0x741)]('be',_0x37e46c(0x17f5)),_0x5bff42),'past-conditional':_0x27bf22,'auxiliary-future':_0x3f544d=>(_0x3f544d[_0x37e46c(0x741)](_0x37e46c(0x4aa0),_0x37e46c(0x23ef),_0x1e90aa),_0x3f544d),'auxiliary-past':_0x27bf22,'auxiliary-present':_0x53b2f5=>(_0x53b2f5[_0x37e46c(0x741)]('(do|does)',_0x37e46c(0x4af8),_0x1e90aa),_0x53b2f5),'modal-infinitive':(_0x15bc2c,_0x42256d)=>(_0x15bc2c[_0x37e46c(0x3170)]('can')?_0x15bc2c[_0x37e46c(0x741)](_0x37e46c(0x312c),_0x37e46c(0x3db),_0x1e90aa):(_0x172025['simple'](_0x15bc2c,_0x42256d),_0x15bc2c[_0x37e46c(0x2d96)](_0x37e46c(0x3504))[_0x37e46c(0x3219)](_0x37e46c(0x1065))[_0x37e46c(0x15a9)](_0x37e46c(0x1a5e))),_0x15bc2c),'modal-past':_0x27bf22,'want-infinitive':_0x7acf94=>(_0x7acf94[_0x37e46c(0x741)](_0x37e46c(0x6cd),'wanted',_0x1e90aa),_0x7acf94['remove'](_0x37e46c(0x207)),_0x7acf94),'gerund-phrase':(_0x222780,_0x16cc9c)=>(_0x16cc9c[_0x37e46c(0x507b)]=_0x16cc9c['root'][_0x37e46c(0xc1a)]('#Gerund$'),_0x172025[_0x37e46c(0x4dc2)](_0x222780,_0x16cc9c),_0xfab37f(_0x222780),_0x222780)},_0x277a80=function(_0x306318,_0x5ad656,_0x1ac61d){const _0x2876de=_0x37e46c;return _0x345cbb[_0x2876de(0x2427)](_0x1ac61d)?((_0x306318=_0x345cbb[_0x1ac61d](_0x306318,_0x5ad656))[_0x2876de(0x1bb9)]()[_0x2876de(0x23df)]([_0x2876de(0x4585),_0x2876de(0x268d)]),_0x306318):_0x306318;},_0x1a8244=function(_0xd2ff02,_0xffbf9c){const _0x313ec3=_0x37e46c;let 
_0x3d393c=_0x12677e(_0xd2ff02,_0xffbf9c),_0x61bd1c=_0x3d393c[_0x313ec3(0x1beb)];return _0x61bd1c[_0x313ec3(0x3170)]('(i|we|you)')?_0x313ec3(0x1065):!0x1===_0x3d393c[_0x313ec3(0x316d)]||_0x61bd1c[_0x313ec3(0x3170)]('he')||_0x61bd1c[_0x313ec3(0x3170)](_0x313ec3(0xbeb))||_0x61bd1c['has']('#Person')?_0x313ec3(0x3170):_0x313ec3(0x1065);},_0x418883=(_0x183213,_0x5790a5)=>{const _0x84989b=_0x37e46c,{conjugate:_0x2a33f0,toInfinitive:_0xd19fbb}=_0x183213[_0x84989b(0x1578)][_0x84989b(0x21c9)]['transform'][_0x84989b(0x4134)],{root:_0x5c2482,auxiliary:_0x15b6e9}=_0x5790a5;if(_0x5c2482[_0x84989b(0x3170)](_0x84989b(0x3504)))return _0x183213;let _0x3799eb=_0x5c2482[_0x84989b(0x4006)]({'keepPunct':!0x1});_0x3799eb=_0xd19fbb(_0x3799eb,_0x183213['model'],_0x4640f8(_0x5c2482));let _0x58d94f=_0x2a33f0(_0x3799eb,_0x183213[_0x84989b(0x1556)]);if(_0x3799eb=_0x58d94f[_0x84989b(0x1ebc)]||_0x58d94f['PastTense'],_0x3799eb){_0x183213=_0x183213[_0x84989b(0x741)](_0x5c2482,_0x3799eb);let _0x5cabb1=_0x1a8244(_0x183213,_0x5790a5);_0x183213[_0x84989b(0xe5a)](_0x5cabb1)[_0x84989b(0x2d96)](_0x5cabb1)[_0x84989b(0x15a9)](_0x84989b(0x1a5e)),_0x183213[_0x84989b(0x42a1)](_0x15b6e9);}return _0x183213;},_0x40de67={'infinitive':_0x418883,'simple-present':_0x418883,'simple-future':(_0x1c2800,_0x34044c)=>_0x1c2800[_0x37e46c(0x741)](_0x37e46c(0x207),_0x1a8244(_0x1c2800,_0x34044c)),'present-perfect':_0x27bf22,'past-perfect':_0x27bf22,'future-perfect':(_0x283005,_0x44419b)=>_0x283005[_0x37e46c(0x741)](_0x37e46c(0x211f),_0x1a8244(_0x283005,_0x44419b)),'present-perfect-progressive':_0x27bf22,'past-perfect-progressive':_0x27bf22,'future-perfect-progressive':_0x27bf22},_0x4c016c=function(_0x3b6563,_0x1bd76e,_0x221afa){const _0x56bfa9=_0x37e46c;return 
_0x40de67[_0x56bfa9(0x2427)](_0x221afa)?((_0x3b6563=_0x40de67[_0x221afa](_0x3b6563,_0x1bd76e))[_0x56bfa9(0x1bb9)]()[_0x56bfa9(0x23df)]([_0x56bfa9(0x4585),_0x56bfa9(0x268d)]),_0x3b6563):((_0x3b6563=_0x418883(_0x3b6563,_0x1bd76e))[_0x56bfa9(0x1bb9)]()[_0x56bfa9(0x23df)]([_0x56bfa9(0x4585),_0x56bfa9(0x268d)]),_0x3b6563);},_0x25c16d={'tags':!0x0},_0x1c5bdf=(_0x223f1e,_0x4fec2b)=>{const _0x31a80d=_0x37e46c,{conjugate:_0x1fea2f,toInfinitive:_0x2f3cf1}=_0x223f1e[_0x31a80d(0x1578)][_0x31a80d(0x21c9)][_0x31a80d(0x5161)]['verb'],_0x3eec8c=_0x4fec2b['root'];let _0x4e6a76=_0x3eec8c[_0x31a80d(0x4006)](_0x31a80d(0x47d));return _0x4e6a76=_0x2f3cf1(_0x4e6a76,_0x223f1e[_0x31a80d(0x1556)],_0x4640f8(_0x3eec8c)),!0x1===_0x110fd6(_0x223f1e,_0x4fec2b)&&(_0x4e6a76=_0x1fea2f(_0x4e6a76,_0x223f1e[_0x31a80d(0x1556)])['PresentTense']),_0x3eec8c[_0x31a80d(0x3170)](_0x31a80d(0x3800))&&(_0x4e6a76=_0x510b69(_0x223f1e,_0x4fec2b)),_0x4e6a76&&(_0x223f1e=_0x223f1e[_0x31a80d(0x741)](_0x3eec8c,_0x4e6a76,_0x25c16d))['not'](_0x31a80d(0x4973))[_0x31a80d(0x15a9)]('PresentTense'),_0x223f1e;},_0x3bf9cc=(_0x378d2f,_0x2687a3)=>{const _0x574b52=_0x37e46c,{conjugate:_0x335757,toInfinitive:_0x174c20}=_0x378d2f[_0x574b52(0x1578)]['two'][_0x574b52(0x5161)]['verb'],_0x2d59b9=_0x2687a3[_0x574b52(0x507b)];let _0x2cab9c=_0x2d59b9[_0x574b52(0x4006)](_0x574b52(0x47d));return _0x2cab9c=_0x174c20(_0x2cab9c,_0x378d2f[_0x574b52(0x1556)],_0x4640f8(_0x2d59b9)),!0x1===_0x110fd6(_0x378d2f,_0x2687a3)&&(_0x2cab9c=_0x335757(_0x2cab9c,_0x378d2f[_0x574b52(0x1556)])[_0x574b52(0x42ce)]),_0x2cab9c&&(_0x378d2f=_0x378d2f['replace'](_0x2d59b9,_0x2cab9c,_0x25c16d))['not'](_0x574b52(0x4973))[_0x574b52(0x15a9)]('Gerund'),_0x378d2f;},_0x4354fd={'infinitive':_0x1c5bdf,'simple-present':(_0x35c1be,_0x216e05)=>{const _0x2f18fb=_0x37e46c,{conjugate:_0x46cc4b}=_0x35c1be[_0x2f18fb(0x1578)]['two'][_0x2f18fb(0x5161)][_0x2f18fb(0x4134)];let {root:_0x4e0b8d}=_0x216e05;if(!_0x4e0b8d['has']('#Infinitive'))return _0x1c5bdf(_0x35c1be,_0x216e05);{let 
_0x299768=_0x12677e(_0x35c1be,_0x216e05)[_0x2f18fb(0x1beb)];if(_0x110fd6(_0x35c1be,_0x216e05)||_0x299768[_0x2f18fb(0x3170)]('i'))return _0x35c1be;let _0x5337bc=_0x4e0b8d[_0x2f18fb(0x4006)](_0x2f18fb(0x47d)),_0xe685a1=_0x46cc4b(_0x5337bc,_0x35c1be[_0x2f18fb(0x1556)])['PresentTense'];_0x5337bc!==_0xe685a1&&_0x35c1be['replace'](_0x4e0b8d,_0xe685a1,_0x25c16d);}return _0x35c1be;},'simple-past':_0x1c5bdf,'simple-future':(_0x3ebd4b,_0x3a0b6e)=>{const _0x5c7b85=_0x37e46c,{root:_0x5b0725,auxiliary:_0x5d05a0}=_0x3a0b6e;if(_0x5d05a0[_0x5c7b85(0x3170)](_0x5c7b85(0x207))&&_0x5b0725['has']('be')){let _0x5a2f73=_0x510b69(_0x3ebd4b,_0x3a0b6e);_0x3ebd4b[_0x5c7b85(0x741)](_0x5b0725,_0x5a2f73),(_0x3ebd4b=_0x3ebd4b['remove'](_0x5c7b85(0x207)))['replace'](_0x5c7b85(0x345)+_0x5a2f73,_0x5a2f73+_0x5c7b85(0x4bf3));}else _0x1c5bdf(_0x3ebd4b,_0x3a0b6e),_0x3ebd4b=_0x3ebd4b[_0x5c7b85(0x42a1)](_0x5c7b85(0x207));return _0x3ebd4b;},'present-progressive':_0x27bf22,'past-progressive':(_0x195812,_0x38ead0)=>{const _0x38c0be=_0x37e46c;let _0x4400f3=_0x510b69(_0x195812,_0x38ead0);return _0x195812[_0x38c0be(0x741)]('(were|was)',_0x4400f3,_0x25c16d);},'future-progressive':_0x493b84=>(_0x493b84[_0x37e46c(0x2d96)](_0x37e46c(0x207))[_0x37e46c(0x197c)]('is'),_0x493b84['remove']('be'),_0x493b84[_0x37e46c(0x42a1)]('will')),'present-perfect':(_0x58867a,_0x526583)=>(_0x1c5bdf(_0x58867a,_0x526583),_0x58867a=_0x58867a[_0x37e46c(0x42a1)](_0x37e46c(0x2f61))),'past-perfect':(_0x4d2011,_0x43e152)=>{const _0xe05694=_0x37e46c;let _0x139d56=_0x12677e(_0x4d2011,_0x43e152)[_0xe05694(0x1beb)];return 
_0x110fd6(_0x4d2011,_0x43e152)||_0x139d56[_0xe05694(0x3170)]('i')?((_0x4d2011=_0x17ae06(_0x4d2011,_0x43e152))[_0xe05694(0x42a1)](_0xe05694(0x47f4)),_0x4d2011):(_0x4d2011[_0xe05694(0x741)]('had','has',_0x25c16d),_0x4d2011);},'future-perfect':_0x3ba61d=>(_0x3ba61d['match'](_0x37e46c(0x207))[_0x37e46c(0x197c)]('has'),_0x3ba61d[_0x37e46c(0x42a1)](_0x37e46c(0x1065))[_0x37e46c(0x42a1)](_0x37e46c(0x207))),'present-perfect-progressive':_0x27bf22,'past-perfect-progressive':_0x39c817=>_0x39c817[_0x37e46c(0x741)]('had','has',_0x25c16d),'future-perfect-progressive':_0xaec985=>(_0xaec985[_0x37e46c(0x2d96)]('will')[_0x37e46c(0x197c)](_0x37e46c(0x3170)),_0xaec985[_0x37e46c(0x42a1)](_0x37e46c(0x1065))[_0x37e46c(0x42a1)](_0x37e46c(0x207))),'passive-past':(_0x4d96c8,_0x2ae17b)=>{const _0x382fc9=_0x37e46c;let _0x3c2eaa=_0x510b69(_0x4d96c8,_0x2ae17b);return _0x4d96c8[_0x382fc9(0x3170)](_0x382fc9(0x396f))&&_0x4d96c8[_0x382fc9(0x3170)](_0x382fc9(0x72a))?(_0x4d96c8[_0x382fc9(0x741)]('(had|have|has)',_0x3c2eaa,_0x25c16d),_0x4d96c8['replace'](_0x382fc9(0x72a),_0x382fc9(0x642)),_0x4d96c8):_0x4d96c8[_0x382fc9(0x741)](_0x382fc9(0x175e),_0x3c2eaa);},'passive-present':_0x27bf22,'passive-future':_0x1b8ccb=>(_0x1b8ccb[_0x37e46c(0x741)](_0x37e46c(0x207),'is'),_0x1b8ccb[_0x37e46c(0x741)]('be','being')),'present-conditional':_0x27bf22,'past-conditional':_0x2d9b42=>(_0x2d9b42[_0x37e46c(0x741)](_0x37e46c(0x72a),'be'),_0x2d9b42[_0x37e46c(0x42a1)](_0x37e46c(0x1065))),'auxiliary-future':(_0x4bd809,_0x4c56c7)=>(_0x3bf9cc(_0x4bd809,_0x4c56c7),_0x4bd809[_0x37e46c(0x42a1)]('(going|to)'),_0x4bd809),'auxiliary-past':(_0x103b8f,_0x507815)=>{const _0x11dd83=_0x37e46c;if(_0x507815[_0x11dd83(0x1160)][_0x11dd83(0x3170)](_0x11dd83(0x4af8))){let _0xc2b722=_0x5488f0(_0x103b8f,_0x507815);return _0x103b8f[_0x11dd83(0x741)](_0x507815[_0x11dd83(0x1160)],_0xc2b722),_0x103b8f;}return 
_0x3bf9cc(_0x103b8f,_0x507815),_0x103b8f[_0x11dd83(0x741)](_0x507815[_0x11dd83(0x1160)],'is'),_0x103b8f;},'auxiliary-present':_0x27bf22,'modal-infinitive':_0x27bf22,'modal-past':(_0x41b36f,_0x2956af)=>(((_0x18ea8e,_0x235561)=>{const _0x25e9b3=_0x37e46c,{toInfinitive:_0x2d717f}=_0x18ea8e[_0x25e9b3(0x1578)][_0x25e9b3(0x21c9)][_0x25e9b3(0x5161)][_0x25e9b3(0x4134)],_0x4b399c=_0x235561[_0x25e9b3(0x507b)];let _0x1473e2=_0x235561['root'][_0x25e9b3(0x4006)]('normal');_0x1473e2=_0x2d717f(_0x1473e2,_0x18ea8e[_0x25e9b3(0x1556)],_0x4640f8(_0x4b399c)),_0x1473e2&&(_0x18ea8e=_0x18ea8e['replace'](_0x235561[_0x25e9b3(0x507b)],_0x1473e2,_0x25c16d));})(_0x41b36f,_0x2956af),_0x41b36f[_0x37e46c(0x42a1)](_0x37e46c(0x1065))),'gerund-phrase':(_0x34999e,_0x2af24c)=>(_0x2af24c['root']=_0x2af24c[_0x37e46c(0x507b)][_0x37e46c(0xc1a)]('#Gerund$'),_0x1c5bdf(_0x34999e,_0x2af24c),_0x34999e['remove'](_0x37e46c(0x2b65))),'want-infinitive':(_0x484e87,_0x1a2b87)=>{const _0x5d4239=_0x37e46c;let _0x19049a=_0x5d4239(0x22fa);return _0x110fd6(_0x484e87,_0x1a2b87)&&(_0x19049a=_0x5d4239(0x44e5)),_0x484e87[_0x5d4239(0x741)](_0x5d4239(0x2d66),_0x19049a,_0x25c16d),_0x484e87[_0x5d4239(0x42a1)]('will'),_0x484e87;}},_0x417736=function(_0x381490,_0x4535c3,_0x3d7bc2){const _0x11eb27=_0x37e46c;return _0x4354fd['hasOwnProperty'](_0x3d7bc2)?((_0x381490=_0x4354fd[_0x3d7bc2](_0x381490,_0x4535c3))[_0x11eb27(0x1bb9)]()[_0x11eb27(0x23df)]([_0x11eb27(0x4585),_0x11eb27(0x268d)]),_0x381490):_0x381490;},_0x2ceeca={'tags':!0x0},_0x4b5a70=(_0x4e47d5,_0x184a18)=>{const _0x215cf8=_0x37e46c,{toInfinitive:_0x48b75a}=_0x4e47d5['methods'][_0x215cf8(0x21c9)][_0x215cf8(0x5161)][_0x215cf8(0x4134)],{root:_0x443550,auxiliary:_0x5365ac}=_0x184a18;if(_0x443550['has']('#Modal'))return _0x4e47d5;let _0xd0fd0a=_0x443550[_0x215cf8(0x4006)](_0x215cf8(0x47d));return 
_0xd0fd0a=_0x48b75a(_0xd0fd0a,_0x4e47d5[_0x215cf8(0x1556)],_0x4640f8(_0x443550)),_0xd0fd0a&&(_0x4e47d5=_0x4e47d5['replace'](_0x443550,_0xd0fd0a,_0x2ceeca))['not']('#Particle')['tag'](_0x215cf8(0x487b)),_0x4e47d5['prepend']('will')['match']('will')[_0x215cf8(0x15a9)](_0x215cf8(0x1a5e)),_0x4e47d5[_0x215cf8(0x42a1)](_0x5365ac),_0x4e47d5;},_0x3e19ee=(_0x9e9dc6,_0x2182ba)=>{const _0x55de4f=_0x37e46c,{conjugate:_0x5ebd6c,toInfinitive:_0x18c7ee}=_0x9e9dc6[_0x55de4f(0x1578)]['two'][_0x55de4f(0x5161)][_0x55de4f(0x4134)],{root:_0x77aee3,auxiliary:_0x53561c}=_0x2182ba;let _0x388ab8=_0x77aee3[_0x55de4f(0x4006)](_0x55de4f(0x47d));return _0x388ab8=_0x18c7ee(_0x388ab8,_0x9e9dc6['model'],_0x4640f8(_0x77aee3)),_0x388ab8&&(_0x388ab8=_0x5ebd6c(_0x388ab8,_0x9e9dc6['model'])[_0x55de4f(0x42ce)],_0x9e9dc6['replace'](_0x77aee3,_0x388ab8,_0x2ceeca),_0x9e9dc6[_0x55de4f(0xc1a)](_0x55de4f(0x4973))['tag']('PresentTense')),_0x9e9dc6[_0x55de4f(0x42a1)](_0x53561c),_0x9e9dc6[_0x55de4f(0xe5a)](_0x55de4f(0x211e))['match'](_0x55de4f(0x211e))[_0x55de4f(0x15a9)](_0x55de4f(0x1a5e)),_0x9e9dc6;},_0x3ff303={'infinitive':_0x4b5a70,'simple-present':_0x4b5a70,'simple-past':_0x4b5a70,'simple-future':_0x27bf22,'present-progressive':_0x3e19ee,'past-progressive':_0x3e19ee,'future-progressive':_0x27bf22,'present-perfect':_0x6c6f84=>(_0x6c6f84['match'](_0x37e46c(0x31a0))[_0x37e46c(0x1f1f)](_0x37e46c(0x211f)),_0x6c6f84),'past-perfect':_0xf32f51=>_0xf32f51[_0x37e46c(0x741)]('(had|has)',_0x37e46c(0x211f)),'future-perfect':_0x27bf22,'present-perfect-progressive':_0x3f7be7=>_0x3f7be7[_0x37e46c(0x741)](_0x37e46c(0x3170),'will\x20have'),'past-perfect-progressive':_0x7cc31c=>_0x7cc31c['replace'](_0x37e46c(0x47f4),_0x37e46c(0x211f)),'future-perfect-progressive':_0x27bf22,'passive-past':_0x181ad0=>_0x181ad0['has'](_0x37e46c(0x176c))?_0x181ad0['replace']('got',_0x37e46c(0x433b)):_0x181ad0[_0x37e46c(0x3170)](_0x37e46c(0x763))?(_0x181ad0['replace']('(was|were)',_0x37e46c(0x211e)),_0x181ad0[_0x37e46c(0x42a1)](_0x37e46c(0x642))):_
0x181ad0[_0x37e46c(0x3170)](_0x37e46c(0x4068))?_0x181ad0[_0x37e46c(0x741)](_0x37e46c(0x4068),'will\x20be'):_0x181ad0,'passive-present':_0x1e3cbd=>(_0x1e3cbd[_0x37e46c(0x741)](_0x37e46c(0x642),_0x37e46c(0x211e)),_0x1e3cbd['remove'](_0x37e46c(0x4aa0)),_0x1e3cbd),'passive-future':_0x27bf22,'present-conditional':_0x16b315=>_0x16b315[_0x37e46c(0x741)](_0x37e46c(0x361a),'will'),'past-conditional':_0x2d56f4=>_0x2d56f4['replace'](_0x37e46c(0x361a),_0x37e46c(0x207)),'auxiliary-future':_0x27bf22,'auxiliary-past':_0x2bc726=>_0x2bc726[_0x37e46c(0x3170)](_0x37e46c(0x4fd8))&&_0x2bc726[_0x37e46c(0x3170)]('to')?(_0x2bc726[_0x37e46c(0x741)](_0x37e46c(0x4fd8),_0x37e46c(0x207)),_0x2bc726['remove']('to')):(_0x2bc726[_0x37e46c(0x741)]('did',_0x37e46c(0x207)),_0x2bc726),'auxiliary-present':_0x5183fc=>_0x5183fc['replace'](_0x37e46c(0x2541),'will'),'modal-infinitive':_0x27bf22,'modal-past':_0x27bf22,'gerund-phrase':(_0x5d5e61,_0x252027)=>(_0x252027[_0x37e46c(0x507b)]=_0x252027[_0x37e46c(0x507b)][_0x37e46c(0xc1a)]('#Gerund$'),_0x4b5a70(_0x5d5e61,_0x252027),_0x5d5e61[_0x37e46c(0x42a1)](_0x37e46c(0x310f))),'want-infinitive':_0x53a323=>(_0x53a323[_0x37e46c(0x741)]('(want|wants|wanted)',_0x37e46c(0x12e9)),_0x53a323)},_0x5a41a9=function(_0x1c1a04,_0x202af4,_0x420f63){const _0x1ff17b=_0x37e46c;return _0x1c1a04[_0x1ff17b(0x3170)]('will')||_0x1c1a04[_0x1ff17b(0x3170)]('going\x20to')?_0x1c1a04:_0x3ff303[_0x1ff17b(0x2427)](_0x420f63)?((_0x1c1a04=_0x3ff303[_0x420f63](_0x1c1a04,_0x202af4))['fullSentence']()[_0x1ff17b(0x23df)](['tagger',_0x1ff17b(0x268d)]),_0x1c1a04):_0x1c1a04;},_0x5caa3a={'tags':!0x0},_0x3af729=function(_0x3050fb,_0x2ab834){const _0x3a7acb=_0x37e46c,{toInfinitive:_0x3d83fb,conjugate:_0x4533bf}=_0x3050fb[_0x3a7acb(0x1578)][_0x3a7acb(0x21c9)][_0x3a7acb(0x5161)][_0x3a7acb(0x4134)],{root:_0x189f88,auxiliary:_0x4ffc7e}=_0x2ab834;if(_0x3050fb['has'](_0x3a7acb(0x503)))return _0x3050fb;let 
_0x51b36e=_0x189f88['text'](_0x3a7acb(0x47d));_0x51b36e=_0x3d83fb(_0x51b36e,_0x3050fb[_0x3a7acb(0x1556)],_0x4640f8(_0x189f88));let _0x2c7787=_0x4533bf(_0x51b36e,_0x3050fb['model'])[_0x3a7acb(0x42ce)];if(_0x2c7787){let _0x5f0afb=_0x510b69(_0x3050fb,_0x2ab834);_0x3050fb[_0x3a7acb(0x741)](_0x189f88,_0x2c7787,_0x5caa3a),_0x3050fb[_0x3a7acb(0x42a1)](_0x4ffc7e),_0x3050fb['prepend'](_0x5f0afb);}return _0x3050fb[_0x3a7acb(0x741)](_0x3a7acb(0x35da),_0x3a7acb(0x2d3f)),_0x3050fb['replace'](_0x3a7acb(0x2b53),_0x3a7acb(0x3fa5)),_0x3050fb[_0x3a7acb(0x1bb9)]()[_0x3a7acb(0x23df)](['tagger','chunks']),_0x3050fb;},_0x2665e2={'tags':!0x0},_0x575a58=function(_0x10d9dc,_0x46c657){const _0x48625e=_0x37e46c;let _0xf15ba0=_0x5488f0(_0x10d9dc,_0x46c657);return _0x10d9dc['prepend'](_0xf15ba0+_0x48625e(0x4bf3)),_0x10d9dc;},_0x54cf6b=function(_0x1b1a24){const _0x1a46bd=_0x37e46c;let _0x504590=_0x1b1a24[_0x1a46bd(0x2d96)]('be');return _0x504590['found']?(_0x504590[_0x1a46bd(0xe5a)](_0x1a46bd(0xc1a)),_0x1b1a24):(_0x504590=_0x1b1a24[_0x1a46bd(0x2d96)]('(is|was|am|are|will|were)'),_0x504590[_0x1a46bd(0x2108)]?(_0x504590[_0x1a46bd(0x366b)]('not'),_0x1b1a24):_0x1b1a24);},_0x3fece6=_0x5c834d=>_0x5c834d['has']('(is|was|am|are|will|were|be)'),_0x70b8ac={'simple-present':(_0x2211ab,_0x3f20bf)=>!0x0===_0x3fece6(_0x2211ab)?_0x54cf6b(_0x2211ab):(_0x2211ab=_0x17ae06(_0x2211ab,_0x3f20bf),_0x2211ab=_0x575a58(_0x2211ab,_0x3f20bf)),'simple-past':(_0x34d21c,_0x357a51)=>!0x0===_0x3fece6(_0x34d21c)?_0x54cf6b(_0x34d21c):((_0x34d21c=_0x17ae06(_0x34d21c,_0x357a51))[_0x37e46c(0xe5a)]('did\x20not'),_0x34d21c),'imperative':_0x2add9c=>(_0x2add9c[_0x37e46c(0xe5a)]('do\x20not'),_0x2add9c),'infinitive':(_0x364e6a,_0x5b46f9)=>!0x0===_0x3fece6(_0x364e6a)?_0x54cf6b(_0x364e6a):_0x575a58(_0x364e6a,_0x5b46f9),'passive-past':_0x4ca321=>{const _0x2392b1=_0x37e46c;if(_0x4ca321['has'](_0x2392b1(0x176c)))return 
_0x4ca321['replace'](_0x2392b1(0x176c),_0x2392b1(0xf9e),_0x2665e2),_0x4ca321[_0x2392b1(0xe5a)](_0x2392b1(0x30e8)),_0x4ca321;let _0x382b95=_0x4ca321[_0x2392b1(0x2d96)](_0x2392b1(0xd9c));return _0x382b95[_0x2392b1(0x2108)]&&_0x382b95['append'](_0x2392b1(0xc1a)),_0x4ca321;},'auxiliary-past':_0x460c77=>{const _0x82a3f8=_0x37e46c;if(_0x460c77[_0x82a3f8(0x3170)](_0x82a3f8(0x4fd8)))return _0x460c77[_0x82a3f8(0xe5a)](_0x82a3f8(0x30e8)),_0x460c77;let _0x36ff67=_0x460c77[_0x82a3f8(0x2d96)](_0x82a3f8(0x7ed));return _0x36ff67[_0x82a3f8(0x2108)]&&_0x36ff67['append'](_0x82a3f8(0xc1a)),_0x460c77;},'want-infinitive':(_0x5ea7bd,_0x4b20c0)=>_0x5ea7bd=(_0x5ea7bd=_0x575a58(_0x5ea7bd,_0x4b20c0))['replace']('wants',_0x37e46c(0x44e5),_0x2665e2)},_0xd9347f=function(_0x10b282,_0x4104a2,_0x5a2028){const _0x2d6a82=_0x37e46c;if(_0x10b282[_0x2d6a82(0x3170)](_0x2d6a82(0x4f07)))return _0x10b282;if(_0x70b8ac['hasOwnProperty'](_0x5a2028))return _0x10b282=_0x70b8ac[_0x5a2028](_0x10b282,_0x4104a2);let _0x2d508d=_0x10b282[_0x2d6a82(0x4b95)]('be');return _0x2d508d[_0x2d6a82(0x2108)]?(_0x2d508d[_0x2d6a82(0xe5a)]('not'),_0x10b282):!0x0===_0x3fece6(_0x10b282)?_0x54cf6b(_0x10b282):(_0x2d508d=_0x10b282['matchOne']('(will|had|have|has|did|does|do|#Modal)'),_0x2d508d[_0x2d6a82(0x2108)]?(_0x2d508d[_0x2d6a82(0x366b)](_0x2d6a82(0xc1a)),_0x10b282):_0x10b282);},_0x18c7f0=function(_0x445b33){const _0x4c6ec1=_0x37e46c;class _0x149a56 extends _0x445b33{constructor(_0x555f23,_0x3a4d13,_0x21d1fe){const _0x241bef=a0_0x11e7;super(_0x555f23,_0x3a4d13,_0x21d1fe),this['viewType']=_0x241bef(0x2f65);}[_0x4c6ec1(0x2956)](_0x2cc620){const _0x131ca7=_0x4c6ec1;return this[_0x131ca7(0x152d)](_0x2cc620)[_0x131ca7(0x4833)](_0x4e0645);}[_0x4c6ec1(0x3289)](_0x38a44d,_0x1c6676){const _0x28d1a8=_0x4c6ec1;let _0x26eeb5=this[_0x28d1a8(0x152d)](_0x1c6676)[_0x28d1a8(0x4833)](_0xbf0d9a=>{const _0x2a03bf=_0x28d1a8;let _0x9df2fe=_0xbf0d9a[_0x2a03bf(0x324a)]()['json'](_0x38a44d)[0x0]||{};return 
_0x9df2fe['verb']=_0x3a5627(_0xbf0d9a),_0x9df2fe;},[]);return _0x26eeb5;}[_0x4c6ec1(0x1d49)](_0x380d01){const _0x211323=_0x4c6ec1;return this[_0x211323(0x152d)](_0x380d01)[_0x211323(0x4833)](_0x2a0682=>{const _0x230395=_0x211323;let _0x537f82=_0x4e0645(_0x2a0682);return _0x12677e(_0x2a0682,_0x537f82)[_0x230395(0x1beb)];});}[_0x4c6ec1(0x48d1)](_0x526f3d){const _0x3ecbd2=_0x4c6ec1;return this[_0x3ecbd2(0x152d)](_0x526f3d)[_0x3ecbd2(0x4833)](_0x442893=>_0x442893[_0x3ecbd2(0x2d96)](_0x3ecbd2(0x1a77)));}[_0x4c6ec1(0x16a2)](_0x2b9578){const _0x193b73=_0x4c6ec1;return this['getNth'](_0x2b9578)[_0x193b73(0x1465)](_0x5df9a1=>!0x0!==_0x12677e(_0x5df9a1)[_0x193b73(0x316d)]);}['isPlural'](_0x170765){const _0x2772f1=_0x4c6ec1;return this['getNth'](_0x170765)[_0x2772f1(0x1465)](_0x3200c8=>!0x0===_0x12677e(_0x3200c8)[_0x2772f1(0x316d)]);}[_0x4c6ec1(0x51e2)](_0x18d7e5){const _0xb02d47=_0x4c6ec1;return this['getNth'](_0x18d7e5)[_0xb02d47(0x1465)](_0x201fa6=>_0x201fa6['has'](_0xb02d47(0x3018)));}[_0x4c6ec1(0x25f2)](_0x30db30){const _0xa99040=_0x4c6ec1;return this[_0xa99040(0x152d)](_0x30db30)[_0xa99040(0x4833)](_0x133378=>{const _0x417f64=_0xa99040;let _0x2d2809=_0x4e0645(_0x133378),_0x3bb45e=_0x41126d(_0x133378,_0x2d2809);return _0x5d9f5c(_0x133378,_0x2d2809,_0x3bb45e[_0x417f64(0x31e0)]);});}['toPresentTense'](_0x2ea11d){const _0xa17afa=_0x4c6ec1;return this[_0xa17afa(0x152d)](_0x2ea11d)[_0xa17afa(0x4833)](_0xfec1c4=>{const _0x3f3e77=_0xa17afa;let _0x42850d=_0x4e0645(_0xfec1c4),_0x4f5bd9=_0x41126d(_0xfec1c4,_0x42850d);return _0x4f5bd9[_0x3f3e77(0x4cbc)]?_0xfec1c4:_0x417736(_0xfec1c4,_0x42850d,_0x4f5bd9[_0x3f3e77(0x31e0)]);});}['toPastTense'](_0x58d650){const _0x36b7da=_0x4c6ec1;return this[_0x36b7da(0x152d)](_0x58d650)['map'](_0xce46ee=>{const _0x2b3db2=_0x36b7da;let _0x3b5b2c=_0x4e0645(_0xce46ee),_0x3113a8=_0x41126d(_0xce46ee,_0x3b5b2c);return _0x3113a8[_0x2b3db2(0x4cbc)]?_0xce46ee:_0x277a80(_0xce46ee,_0x3b5b2c,_0x3113a8[_0x2b3db2(0x31e0)]);});}['toFutureTense'](_0x28d332){const 
_0x1ece1a=_0x4c6ec1;return this[_0x1ece1a(0x152d)](_0x28d332)[_0x1ece1a(0x4833)](_0x39d3a8=>{const _0x16c43b=_0x1ece1a;let _0x54728a=_0x4e0645(_0x39d3a8),_0x12b2da=_0x41126d(_0x39d3a8,_0x54728a);return _0x12b2da[_0x16c43b(0x4cbc)]?_0x39d3a8:_0x5a41a9(_0x39d3a8,_0x54728a,_0x12b2da[_0x16c43b(0x31e0)]);});}['toGerund'](_0x4d7b5b){const _0x59f45a=_0x4c6ec1;return this[_0x59f45a(0x152d)](_0x4d7b5b)[_0x59f45a(0x4833)](_0x31104f=>{const _0x33eb89=_0x59f45a;let _0x1fcca3=_0x4e0645(_0x31104f),_0x4eb989=_0x41126d(_0x31104f,_0x1fcca3);return _0x4eb989['isInfinitive']?_0x31104f:_0x3af729(_0x31104f,_0x1fcca3,_0x4eb989[_0x33eb89(0x31e0)]);});}['toPastParticiple'](_0x43efaf){const _0x3cd39c=_0x4c6ec1;return this['getNth'](_0x43efaf)[_0x3cd39c(0x4833)](_0x426e61=>{let _0x4c25ad=_0x4e0645(_0x426e61),_0x2c7ffb=_0x41126d(_0x426e61,_0x4c25ad);return _0x2c7ffb['isInfinitive']?_0x426e61:_0x4c016c(_0x426e61,_0x4c25ad,_0x2c7ffb['form']);});}[_0x4c6ec1(0x2343)](_0x592b77){const _0x54e4f2=_0x4c6ec1,{conjugate:_0x1ff442,toInfinitive:_0x4f097f}=this[_0x54e4f2(0x4657)][_0x54e4f2(0x1578)][_0x54e4f2(0x21c9)]['transform'][_0x54e4f2(0x4134)];return this[_0x54e4f2(0x152d)](_0x592b77)[_0x54e4f2(0x4833)](_0x3f81ba=>{const _0xbe2d99=_0x54e4f2;let _0x10ce0c=_0x4e0645(_0x3f81ba),_0x4f0d83=_0x41126d(_0x3f81ba,_0x10ce0c);_0xbe2d99(0x4802)===_0x4f0d83[_0xbe2d99(0x31e0)]&&(_0x4f0d83[_0xbe2d99(0x31e0)]=_0xbe2d99(0x3e5c));let _0x1bb966=_0x10ce0c[_0xbe2d99(0x507b)][_0xbe2d99(0x4006)]('normal');if(!_0x10ce0c[_0xbe2d99(0x507b)][_0xbe2d99(0x3170)](_0xbe2d99(0x108e))){let _0x433405=_0x4640f8(_0x10ce0c[_0xbe2d99(0x507b)]);_0x1bb966=_0x4f097f(_0x1bb966,_0x3f81ba[_0xbe2d99(0x1556)],_0x433405)||_0x1bb966;}return _0x1ff442(_0x1bb966,_0x3f81ba[_0xbe2d99(0x1556)]);},[]);}[_0x4c6ec1(0x1296)](){const _0x1505c4=_0x4c6ec1;return this['if'](_0x1505c4(0x4f07));}['isPositive'](){const _0x2cf35a=_0x4c6ec1;return this[_0x2cf35a(0x385b)](_0x2cf35a(0x4f07));}[_0x4c6ec1(0x2037)](){const _0x5ad78a=_0x4c6ec1;let 
_0x5c0691=this['match']('do\x20not\x20#Verb');return _0x5c0691[_0x5ad78a(0x2108)]&&_0x5c0691[_0x5ad78a(0x42a1)](_0x5ad78a(0x3718)),this['remove'](_0x5ad78a(0x4f07));}[_0x4c6ec1(0x413)](_0x583f44){const _0x230941=_0x4c6ec1;return this[_0x230941(0x152d)](_0x583f44)['map'](_0x3b822b=>{const _0x3e6994=_0x230941;let _0x413693=_0x4e0645(_0x3b822b),_0x45c7f8=_0x41126d(_0x3b822b,_0x413693);return _0xd9347f(_0x3b822b,_0x413693,_0x45c7f8[_0x3e6994(0x31e0)]);});}[_0x4c6ec1(0x38d6)](_0x4e3483){const _0xcb1b0c=_0x4c6ec1;let _0x342ca9=new _0x149a56(this[_0xcb1b0c(0x295)],_0x4e3483);return _0x342ca9[_0xcb1b0c(0x1aa7)]=this['_cache'],_0x342ca9;}}_0x149a56['prototype'][_0x4c6ec1(0x318d)]=_0x149a56['prototype'][_0x4c6ec1(0x4d85)],_0x149a56[_0x4c6ec1(0x3b3c)][_0x4c6ec1(0x45ee)]=_0x149a56[_0x4c6ec1(0x3b3c)][_0x4c6ec1(0x299f)],_0x149a56[_0x4c6ec1(0x3b3c)][_0x4c6ec1(0x110d)]=_0x149a56[_0x4c6ec1(0x3b3c)][_0x4c6ec1(0x3ace)],_0x445b33[_0x4c6ec1(0x3b3c)][_0x4c6ec1(0x34d1)]=function(_0x358f60){const _0x454244=_0x4c6ec1;let _0x221dc8=_0x11005e(this);return _0x221dc8=_0x221dc8[_0x454244(0x152d)](_0x358f60),new _0x149a56(this[_0x454244(0x295)],_0x221dc8[_0x454244(0x43e4)]);};},_0x34d61d={'api':_0x18c7f0},_0x462ebf=function(_0x27fdf5,_0xca8fc3){const _0x2281bd=_0x37e46c;let _0x1eb829=_0xca8fc3[_0x2281bd(0x2d96)](_0x27fdf5);if(_0x1eb829[_0x2281bd(0x2108)]){let _0x5eadb7=_0x1eb829[_0x2281bd(0x3846)]()[_0x2281bd(0x789)]();if(_0x5eadb7['found'])return _0x5eadb7;}return _0xca8fc3['none']();},_0x54ed64=function(_0xd9a8df){const _0x638f25=_0x37e46c;if(!_0xd9a8df[_0x638f25(0x2108)])return _0xd9a8df;let [_0x50b25b]=_0xd9a8df[_0x638f25(0x34ce)][0x0];return _0x50b25b&&_0x50b25b>0x0?_0xd9a8df['update']([[_0x50b25b-0x1]]):_0xd9a8df[_0x638f25(0x28b)]();},_0x56c838=function(_0xfd09ee,_0x1e699d){const _0x543305=_0x37e46c;let _0x3267c4=_0xfd09ee[_0x543305(0x4968)]();return _0x3267c4=function(_0x141ba0,_0x34f337){const 
_0x323b86=_0x543305;return'm'===_0x34f337?_0x141ba0[_0x323b86(0x1465)](_0x45d1c0=>!_0x45d1c0[_0x323b86(0x29d1)]()[_0x323b86(0x2108)]):'f'===_0x34f337?_0x141ba0[_0x323b86(0x1465)](_0x342059=>!_0x342059[_0x323b86(0x4ae2)]()[_0x323b86(0x2108)]):_0x141ba0;}(_0x3267c4,_0x1e699d),_0x3267c4[_0x543305(0x2108)]?_0x3267c4[_0x543305(0x4d3c)]():(_0x3267c4=_0xfd09ee[_0x543305(0x268a)]('#Actor'),_0x3267c4[_0x543305(0x2108)]?_0x3267c4[_0x543305(0x4d3c)]():'f'===_0x1e699d?_0x462ebf(_0x543305(0x20e9),_0xfd09ee):'m'===_0x1e699d?_0x462ebf(_0x543305(0x11c2),_0xfd09ee):_0xfd09ee[_0x543305(0x28b)]());},_0x44acde=function(_0x3d8c4b){const _0x27daa3=_0x37e46c;let _0x3249e4=_0x3d8c4b['nouns'](),_0xf6c70b=_0x3249e4[_0x27daa3(0x2569)]()['notIf']('#Pronoun');if(_0xf6c70b[_0x27daa3(0x2108)])return _0xf6c70b[_0x27daa3(0x4d3c)]();let _0x1871b1=_0x462ebf(_0x27daa3(0x24f3),_0x3d8c4b);return _0x1871b1['found']?_0x1871b1:(_0xf6c70b=_0x3249e4[_0x27daa3(0x2d96)]('(somebody|nobody|everybody|anybody|someone|noone|everyone|anyone)'),_0xf6c70b[_0x27daa3(0x2108)]?_0xf6c70b[_0x27daa3(0x4d3c)]():_0x3d8c4b['none']());},_0x58736e=function(_0x2a086c,_0x35edda){const _0x5c80c1=_0x37e46c;let _0x63c5d8=_0x2a086c[_0x5c80c1(0x5097)](),_0x255e3d=_0x35edda(_0x63c5d8);return _0x255e3d[_0x5c80c1(0x2108)]?_0x255e3d:(_0x63c5d8=_0x54ed64(_0x2a086c),_0x255e3d=_0x35edda(_0x63c5d8),_0x255e3d[_0x5c80c1(0x2108)]?_0x255e3d:(_0x63c5d8=_0x54ed64(_0x63c5d8),_0x255e3d=_0x35edda(_0x63c5d8),_0x255e3d['found']?_0x255e3d:_0x2a086c[_0x5c80c1(0x28b)]()));},_0x313b4c=function(_0x2bec51){const _0x583de3=_0x37e46c;_0x2bec51['pronouns']()['if'](_0x583de3(0x4f2e))['forEach'](_0x54a23e=>{const _0x27b33e=_0x583de3;let 
_0x384d1e=null;_0x54a23e[_0x27b33e(0x3170)](_0x27b33e(0x11c2))?_0x384d1e=_0x58736e(_0x54a23e,_0x3a9acf=>_0x56c838(_0x3a9acf,'m')):_0x54a23e[_0x27b33e(0x3170)](_0x27b33e(0x20e9))?_0x384d1e=_0x58736e(_0x54a23e,_0x4d170d=>_0x56c838(_0x4d170d,'f')):_0x54a23e[_0x27b33e(0x3170)](_0x27b33e(0x24f3))&&(_0x384d1e=_0x58736e(_0x54a23e,_0x44acde)),_0x384d1e&&_0x384d1e['found']&&function(_0x2f1e72,_0x3e7f81){const _0x393084=_0x27b33e;_0x3e7f81&&_0x3e7f81[_0x393084(0x2108)]&&(_0x2f1e72[_0x393084(0x204b)][0x0][0x0][_0x393084(0x3d0f)]=_0x3e7f81[_0x393084(0x232)][0x0]);}(_0x54a23e,_0x384d1e);});},_0x344504=function(_0x5b8d60){const _0x459a55=_0x37e46c;class _0x8b6b1f extends _0x5b8d60{constructor(_0x38705d,_0x21dc96,_0x3e889f){const _0x159115=a0_0x11e7;super(_0x38705d,_0x21dc96,_0x3e889f),this[_0x159115(0x106d)]=_0x159115(0xdd6);}[_0x459a55(0x4103)](){const _0x2a6f5c=_0x459a55;return this[_0x2a6f5c(0x23df)](_0x2a6f5c(0x2ea0)),this[_0x2a6f5c(0x1465)](_0x360964=>_0x360964[_0x2a6f5c(0x204b)][0x0][0x0]['reference']);}[_0x459a55(0x789)](){const _0x2a1515=_0x459a55;return this[_0x2a1515(0x23df)](_0x2a1515(0x2ea0)),this[_0x2a1515(0x4833)](_0x1098f1=>{const _0x4ced90=_0x2a1515;if(!_0x1098f1[_0x4ced90(0x2108)])return _0x1098f1[_0x4ced90(0x28b)]();let _0x4b8fa7=_0x1098f1[_0x4ced90(0x204b)][0x0][0x0];return _0x4b8fa7[_0x4ced90(0x3d0f)]?_0x1098f1[_0x4ced90(0x38d6)]([_0x4b8fa7[_0x4ced90(0x3d0f)]]):_0x1098f1[_0x4ced90(0x28b)]();});}[_0x459a55(0x38d6)](_0x2a7d7d){const _0x376c32=_0x459a55;let _0x31cc9d=new _0x8b6b1f(this[_0x376c32(0x295)],_0x2a7d7d);return _0x31cc9d[_0x376c32(0x1aa7)]=this['_cache'],_0x31cc9d;}}_0x5b8d60['prototype'][_0x459a55(0x3846)]=function(_0x4b383e){const _0x18a0fd=_0x459a55;let _0x3ac71e=this[_0x18a0fd(0x2d96)](_0x18a0fd(0x869));return _0x3ac71e=_0x3ac71e[_0x18a0fd(0x152d)](_0x4b383e),new 
_0x8b6b1f(_0x3ac71e['document'],_0x3ac71e[_0x18a0fd(0x43e4)]);};},_0x5c2f7a={'compute':{'coreference':_0x313b4c},'api':_0x344504};_0x46cf2a[_0x37e46c(0xe15)](_0xc2a495),_0x46cf2a[_0x37e46c(0xe15)](_0x4af88c),_0x46cf2a[_0x37e46c(0xe15)](_0x5ea65a),_0x46cf2a[_0x37e46c(0xe15)](_0x5c2f7a),_0x46cf2a[_0x37e46c(0xe15)](_0x13c872),_0x46cf2a[_0x37e46c(0xe15)](_0x1ff736),_0x46cf2a['plugin'](_0x158818),_0x46cf2a[_0x37e46c(0xe15)](_0x2a2326),_0x46cf2a[_0x37e46c(0xe15)](_0x45f775),_0x46cf2a[_0x37e46c(0xe15)](_0x253bdd),_0x46cf2a[_0x37e46c(0xe15)](_0x23e681),_0x46cf2a['plugin'](_0x34d61d);const _0x5b2838=_0x46cf2a;function _0x384dc8(_0x151ff9){const _0x1e6bc7=_0x37e46c;return _0x5b2838(_0x151ff9)[_0x1e6bc7(0x1d99)]()['out']('array');}function _0x385bac(_0x3bd8e3){const _0x9c0a7e=_0x37e46c;return _0x5b2838(_0x3bd8e3)[_0x9c0a7e(0x268a)]()['out'](_0x9c0a7e(0x26f6));}function _0x5eea22(_0x59cd26){const _0x27febd=_0x37e46c;return _0x5b2838(_0x59cd26)[_0x27febd(0x2429)]()[_0x27febd(0x42a1)]('#Stop')[_0x27febd(0x3ab5)](_0x27febd(0x4006));}function _0x46561d(_0x4020d1){const _0x45ffc7=_0x37e46c;let _0x3dcc1b=_0x4020d1[_0x45ffc7(0x741)](/<[^>]*>/g,'');return[_0x45ffc7(0x1e06),_0x45ffc7(0xcab),'QUESTION','QUOTE',_0x45ffc7(0x2f4a),'POWER_UP',_0x45ffc7(0xdea)][_0x45ffc7(0xa21)](_0xc5ba68=>{const _0x462b4e=_0x45ffc7,_0x57358b=new RegExp(_0xc5ba68+_0x462b4e(0x265d),'gi');_0x3dcc1b=_0x3dcc1b['replace'](_0x57358b,'');}),_0x3dcc1b=_0x3dcc1b[_0x45ffc7(0x1b23)]()['replace'](/\s+/g,'\x20'),_0x3dcc1b;}function _0x19ba43(_0x5df507){const _0x7a8325=_0x37e46c;let _0x79452f=function(_0x2c2e57){const _0x988d3b=a0_0x11e7,_0x1ef70b=document[_0x988d3b(0x3ac9)]('div');return _0x1ef70b[_0x988d3b(0x3cdf)]=_0x2c2e57,_0x1ef70b['textContent']||_0x1ef70b[_0x988d3b(0x4eb2)]||'';}(_0x5df507);_0x79452f=_0x46561d(_0x5df507),_0x79452f=_0x79452f[_0x7a8325(0x741)](/\s+/g,'\x20')[_0x7a8325(0x741)](/\n+/g,'\x0a')[_0x7a8325(0x1b23)](),_0x79452f=_0x5eea22(_0x79452f);let _0x579715=_0x384dc8(_0x79452f);return 
0x0===_0x579715[_0x7a8325(0x1b19)]&&(_0x579715=_0x385bac(_0x79452f),0x0===_0x579715['length']&&(_0x579715=_0x79452f['split']('\x20'))),_0x579715;}function _0x46a588(){const _0x3b53d1=_0x37e46c,_0x3cd63a=document[_0x3b53d1(0x4f1a)][_0x3b53d1(0x2f80)];return _0x3cd63a[_0x3b53d1(0x741)](/\s+/g,'\x20')['trim']();}const _0x2f4163=(_0xdfc428,_0x54dcc5)=>_0x54dcc5[_0x37e46c(0x363a)](_0x196fe=>_0xdfc428 instanceof _0x196fe);let _0x37d4a0,_0x39fc5d;const _0x2d167c=new WeakMap(),_0x28f708=new WeakMap(),_0x1eaf78=new WeakMap();let _0x2ddcd6={'get'(_0x25d466,_0x4e1c38,_0x1aeb0c){const _0xefa685=_0x37e46c;if(_0x25d466 instanceof IDBTransaction){if(_0xefa685(0x37e)===_0x4e1c38)return _0x2d167c[_0xefa685(0xf9e)](_0x25d466);if(_0xefa685(0x2570)===_0x4e1c38)return _0x1aeb0c[_0xefa685(0x20d)][0x1]?void 0x0:_0x1aeb0c['objectStore'](_0x1aeb0c[_0xefa685(0x20d)][0x0]);}return _0x20d85e(_0x25d466[_0x4e1c38]);},'set':(_0x6db299,_0x3b32e9,_0x32d773)=>(_0x6db299[_0x3b32e9]=_0x32d773,!0x0),'has':(_0x186207,_0x4b40f3)=>_0x186207 instanceof IDBTransaction&&(_0x37e46c(0x37e)===_0x4b40f3||_0x37e46c(0x2570)===_0x4b40f3)||_0x4b40f3 in _0x186207};function _0x2143ad(_0x3e88af){_0x2ddcd6=_0x3e88af(_0x2ddcd6);}function _0x45d47c(_0x5e489a){const _0x214233=_0x37e46c;return(_0x39fc5d||(_0x39fc5d=[IDBCursor[_0x214233(0x3b3c)][_0x214233(0x4b58)],IDBCursor[_0x214233(0x3b3c)][_0x214233(0x16d9)],IDBCursor[_0x214233(0x3b3c)][_0x214233(0x3bd4)]]))['includes'](_0x5e489a)?function(..._0x1de2bf){const _0x4ed9e9=_0x214233;return _0x5e489a['apply'](_0x42aa1d(this),_0x1de2bf),_0x20d85e(this[_0x4ed9e9(0x3519)]);}:function(..._0x55d5e9){return _0x20d85e(_0x5e489a['apply'](_0x42aa1d(this),_0x55d5e9));};}function _0x72408b(_0xc93ba2){return'function'==typeof _0xc93ba2?_0x45d47c(_0xc93ba2):(_0xc93ba2 instanceof IDBTransaction&&function(_0x27c07b){if(_0x2d167c['has'](_0x27c07b))return;const _0x323f0b=new Promise((_0x2fb964,_0x3a5a08)=>{const _0x4f1643=a0_0x11e7,_0x9d689d=()=>{const 
_0x4958e2=a0_0x11e7;_0x27c07b[_0x4958e2(0x2cf7)](_0x4958e2(0x189f),_0x5d3c01),_0x27c07b[_0x4958e2(0x2cf7)](_0x4958e2(0x3d85),_0x2532be),_0x27c07b[_0x4958e2(0x2cf7)](_0x4958e2(0x1ec4),_0x2532be);},_0x5d3c01=()=>{_0x2fb964(),_0x9d689d();},_0x2532be=()=>{const _0x4755bb=a0_0x11e7;_0x3a5a08(_0x27c07b[_0x4755bb(0x3d85)]||new DOMException('AbortError',_0x4755bb(0x38c2))),_0x9d689d();};_0x27c07b[_0x4f1643(0xc61)](_0x4f1643(0x189f),_0x5d3c01),_0x27c07b[_0x4f1643(0xc61)]('error',_0x2532be),_0x27c07b[_0x4f1643(0xc61)](_0x4f1643(0x1ec4),_0x2532be);});_0x2d167c['set'](_0x27c07b,_0x323f0b);}(_0xc93ba2),_0x2f4163(_0xc93ba2,_0x37d4a0||(_0x37d4a0=[IDBDatabase,IDBObjectStore,IDBIndex,IDBCursor,IDBTransaction]))?new Proxy(_0xc93ba2,_0x2ddcd6):_0xc93ba2);}function _0x20d85e(_0xc930ca){const _0x22c12c=_0x37e46c;if(_0xc930ca instanceof IDBRequest)return function(_0x46d529){const _0xc1aab5=new Promise((_0x40f768,_0x439488)=>{const _0x19d5f3=a0_0x11e7,_0xb19f8a=()=>{const _0x586a6c=a0_0x11e7;_0x46d529['removeEventListener'](_0x586a6c(0x3754),_0x379da9),_0x46d529[_0x586a6c(0x2cf7)](_0x586a6c(0x3d85),_0x10345b);},_0x379da9=()=>{const _0x57e38f=a0_0x11e7;_0x40f768(_0x20d85e(_0x46d529[_0x57e38f(0xa34)])),_0xb19f8a();},_0x10345b=()=>{_0x439488(_0x46d529['error']),_0xb19f8a();};_0x46d529[_0x19d5f3(0xc61)](_0x19d5f3(0x3754),_0x379da9),_0x46d529['addEventListener'](_0x19d5f3(0x3d85),_0x10345b);});return _0x1eaf78['set'](_0xc1aab5,_0x46d529),_0xc1aab5;}(_0xc930ca);if(_0x28f708['has'](_0xc930ca))return _0x28f708[_0x22c12c(0xf9e)](_0xc930ca);const _0x4df55a=_0x72408b(_0xc930ca);return _0x4df55a!==_0xc930ca&&(_0x28f708[_0x22c12c(0x1fa)](_0xc930ca,_0x4df55a),_0x1eaf78[_0x22c12c(0x1fa)](_0x4df55a,_0xc930ca)),_0x4df55a;}const _0x42aa1d=_0x570654=>_0x1eaf78[_0x37e46c(0xf9e)](_0x570654),_0x2570fa=['get',_0x37e46c(0xd7c),_0x37e46c(0x3526),_0x37e46c(0x340f),_0x37e46c(0x404e)],_0x422b2d=[_0x37e46c(0xbe8),'add','delete',_0x37e46c(0x4933)],_0x503738=new Map();function _0xa5569c(_0x18ce93,_0x2f4144){const 
_0x19cea3=_0x37e46c;if(!(_0x18ce93 instanceof IDBDatabase)||_0x2f4144 in _0x18ce93||'string'!=typeof _0x2f4144)return;if(_0x503738['get'](_0x2f4144))return _0x503738[_0x19cea3(0xf9e)](_0x2f4144);const _0x305264=_0x2f4144[_0x19cea3(0x741)](/FromIndex$/,''),_0x1f1672=_0x2f4144!==_0x305264,_0x39b360=_0x422b2d[_0x19cea3(0x2628)](_0x305264);if(!(_0x305264 in(_0x1f1672?IDBIndex:IDBObjectStore)['prototype'])||!_0x39b360&&!_0x2570fa[_0x19cea3(0x2628)](_0x305264))return;const _0x1c4b7e=async function(_0x3a4e2a,..._0x27dc6f){const _0xf9dfdc=_0x19cea3,_0x11c5d5=this['transaction'](_0x3a4e2a,_0x39b360?_0xf9dfdc(0x23d0):_0xf9dfdc(0x1aa2));let _0xa73877=_0x11c5d5[_0xf9dfdc(0x2570)];return _0x1f1672&&(_0xa73877=_0xa73877[_0xf9dfdc(0x3bb5)](_0x27dc6f[_0xf9dfdc(0x34fe)]())),(await Promise[_0xf9dfdc(0xc36)]([_0xa73877[_0x305264](..._0x27dc6f),_0x39b360&&_0x11c5d5[_0xf9dfdc(0x37e)]]))[0x0];};return _0x503738[_0x19cea3(0x1fa)](_0x2f4144,_0x1c4b7e),_0x1c4b7e;}_0x2143ad(_0x1d88f3=>({..._0x1d88f3,'get':(_0xa3dd79,_0x377e92,_0x2c4e30)=>_0xa5569c(_0xa3dd79,_0x377e92)||_0x1d88f3[_0x37e46c(0xf9e)](_0xa3dd79,_0x377e92,_0x2c4e30),'has':(_0x3800fa,_0x1b02e3)=>!!_0xa5569c(_0x3800fa,_0x1b02e3)||_0x1d88f3['has'](_0x3800fa,_0x1b02e3)}));const _0x4cbb13=['continue',_0x37e46c(0x3bd4),'advance'],_0x15227d={},_0x555c02=new WeakMap(),_0x4ad045=new WeakMap(),_0x2995e0={'get'(_0x1c6441,_0x207439){const _0x57cd0a=_0x37e46c;if(!_0x4cbb13[_0x57cd0a(0x2628)](_0x207439))return _0x1c6441[_0x207439];let _0x165a6d=_0x15227d[_0x207439];return _0x165a6d||(_0x165a6d=_0x15227d[_0x207439]=function(..._0x18d807){const _0x1de45c=_0x57cd0a;_0x555c02[_0x1de45c(0x1fa)](this,_0x4ad045[_0x1de45c(0xf9e)](this)[_0x207439](..._0x18d807));}),_0x165a6d;}};async function*_0x23f344(..._0x14a3a5){const _0x1c5328=_0x37e46c;let _0x213583=this;if(_0x213583 instanceof IDBCursor||(_0x213583=await _0x213583[_0x1c5328(0xfe1)](..._0x14a3a5)),!_0x213583)return;const _0x369b4b=new 
Proxy(_0x213583,_0x2995e0);for(_0x4ad045[_0x1c5328(0x1fa)](_0x369b4b,_0x213583),_0x1eaf78[_0x1c5328(0x1fa)](_0x369b4b,_0x42aa1d(_0x213583));_0x213583;)yield _0x369b4b,_0x213583=await(_0x555c02[_0x1c5328(0xf9e)](_0x369b4b)||_0x213583[_0x1c5328(0x16d9)]()),_0x555c02[_0x1c5328(0x5be)](_0x369b4b);}function _0x5956e8(_0x8075d3,_0x28bcf8){const _0x1cfb35=_0x37e46c;return _0x28bcf8===Symbol[_0x1cfb35(0x3050)]&&_0x2f4163(_0x8075d3,[IDBIndex,IDBObjectStore,IDBCursor])||_0x1cfb35(0x213d)===_0x28bcf8&&_0x2f4163(_0x8075d3,[IDBIndex,IDBObjectStore]);}_0x2143ad(_0x7090ca=>({..._0x7090ca,'get':(_0x4cdd36,_0x38674e,_0x53b767)=>_0x5956e8(_0x4cdd36,_0x38674e)?_0x23f344:_0x7090ca[_0x37e46c(0xf9e)](_0x4cdd36,_0x38674e,_0x53b767),'has':(_0x5bb264,_0x3efd99)=>_0x5956e8(_0x5bb264,_0x3efd99)||_0x7090ca[_0x37e46c(0x3170)](_0x5bb264,_0x3efd99)}));const _0x1b0975=_0x37e46c(0x1989);function _0x57a9ad(_0xe92c4f,_0x1b247c,_0x286b55){const _0x1c3dee=_0x37e46c;try{let _0x4d01e7;if(_0x1c3dee(0x3754)===_0x286b55)return _0x4d01e7=_0xe92c4f['querySelector'](_0x1c3dee(0x2dde)),_0x4d01e7||(_0x4d01e7=document['createElement'](_0x1c3dee(0x4c88)),_0x4d01e7['id']=_0x1c3dee(0x17c7),_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x12ca)]=_0x1c3dee(0x28b),_0x4d01e7[_0x1c3dee(0x1a84)]['position']=_0x1c3dee(0x1aaf),_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x3335)]=_0x1c3dee(0x2049),_0x4d01e7[_0x1c3dee(0x1a84)]['left']=_0x1c3dee(0x2049),_0x4d01e7[_0x1c3dee(0x1a84)]['background']='lightgreen',_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x5252)]=_0x1c3dee(0x2049),_0x4d01e7['style'][_0x1c3dee(0x4378)]=_0x1c3dee(0x2049),_0x4d01e7['style'][_0x1c3dee(0x24df)]='9999',_0xe92c4f['appendChild'](_0x4d01e7)),_0x4d01e7[_0x1c3dee(0x2f80)]=_0x1b247c,_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x12ca)]=_0x1c3dee(0x1f2e),setTimeout(()=>{const 
_0x1ded08=_0x1c3dee;_0x4d01e7['style'][_0x1ded08(0x12ca)]=_0x1ded08(0x28b);},0x7d0),'none';_0x4d01e7=_0xe92c4f[_0x1c3dee(0x2842)](_0x1c3dee(0x5158)),_0x4d01e7||(_0x4d01e7=document[_0x1c3dee(0x3ac9)]('div'),_0x4d01e7['id']=_0x1c3dee(0x411c),_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x12ca)]='none',_0x4d01e7['style'][_0x1c3dee(0x25f1)]=_0x1c3dee(0x1aaf),_0x4d01e7['style'][_0x1c3dee(0x3335)]=_0x1c3dee(0x2049),_0x4d01e7[_0x1c3dee(0x1a84)]['left']=_0x1c3dee(0x2049),_0x4d01e7['style'][_0x1c3dee(0x1471)]=_0x1c3dee(0x27ca),_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x5252)]=_0x1c3dee(0x2049),_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x4378)]='5px',_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x24df)]=_0x1c3dee(0x2a92),_0xe92c4f[_0x1c3dee(0x335a)](_0x4d01e7)),_0x4d01e7['textContent']=_0x1b247c,_0x4d01e7[_0x1c3dee(0x1a84)][_0x1c3dee(0x12ca)]=_0x1c3dee(0x1f2e),setTimeout(()=>{const _0x337795=_0x1c3dee;_0x4d01e7[_0x337795(0x1a84)][_0x337795(0x12ca)]=_0x337795(0x28b);},0x7d0);}catch(_0x41a329){}}function _0x5a6252(_0x556571){const _0x37b5f1=_0x556571['querySelector']('#message-container');_0x37b5f1&&requestAnimationFrame(()=>{const _0x54c0cc=a0_0x11e7;_0x37b5f1[_0x54c0cc(0x5005)]=_0x37b5f1[_0x54c0cc(0x768)];});}function _0x582d6d(_0xe94c0b){const _0x3c7ed1=_0x37e46c,_0x4fa11c=_0xe94c0b[_0x3c7ed1(0x2842)](_0x3c7ed1(0x745));let _0x41ef1b=_0xe94c0b[_0x3c7ed1(0x2842)](_0x3c7ed1(0xe28));_0x4fa11c&&_0x41ef1b?_0x41ef1b['scrollIntoView']({'behavior':'smooth'}):_0x4fa11c&&!_0x41ef1b&&(_0x41ef1b=document[_0x3c7ed1(0x3ac9)]('div'),_0x41ef1b['id']='scroll-target',_0x4fa11c[_0x3c7ed1(0x335a)](_0x41ef1b),_0x41ef1b[_0x3c7ed1(0x50d3)]({'behavior':_0x3c7ed1(0x3f51)}));}function _0x187608(_0x16bd46,_0x4ab51f,_0x4e64ea='',_0x9ce39e=''){const _0x134851=_0x37e46c,_0x2f8a4f=_0x16bd46['querySelector']('#'+_0x4ab51f);if(!_0x2f8a4f)return null;const _0x409547=_0x2f8a4f['cloneNode'](!0x0);if(''!==_0x4e64ea?_0x409547['id']=_0x4e64ea:_0x409547['removeAttribute']('id'),''!==_0x9ce39e){const 
_0x35827d=_0x16bd46[_0x134851(0x2842)]('#'+_0x9ce39e);if(_0x35827d){if(_0x134851(0x1906)===_0x9ce39e){const _0x15f2c7=_0x16bd46['querySelector']('#scroll-target');_0x35827d[_0x134851(0x197c)](_0x409547,_0x15f2c7);}else _0x35827d[_0x134851(0x335a)](_0x409547);}}return _0x409547;}function _0x4ac7fa(_0x27331f){const _0x318b52=_0x37e46c;_0x27331f['id']='id-'+performance[_0x318b52(0xae4)]()[_0x318b52(0x8e8)]()[_0x318b52(0x741)]('.','')+'-'+Math[_0x318b52(0xe98)]()[_0x318b52(0x8e8)](0x24)[_0x318b52(0x37b5)](0x2,0xf),_0x27331f[_0x318b52(0x492f)]('*')[_0x318b52(0xa21)](_0x4ac7fa);}function _0x370a32(_0x39d605,_0x3ac87c,_0x24ac60){const _0x344a4e=_0x37e46c,_0x4de9b4=document[_0x344a4e(0x3ac9)](_0x344a4e(0x4c88));_0x4de9b4[_0x344a4e(0x998)](_0x344a4e(0x1390),'flex\x20justify-between');const _0xe9ac6=document[_0x344a4e(0x3ac9)](_0x344a4e(0x4c88));_0xe9ac6['setAttribute']('class',_0x344a4e(0x25b9)),_0x24ac60 instanceof Element&&_0xe9ac6[_0x344a4e(0x335a)](_0x24ac60);const _0x32cb8a=document['createElement'](_0x344a4e(0x4c88));_0x32cb8a[_0x344a4e(0x3cdf)]=_0x1b0975,_0x4de9b4[_0x344a4e(0x335a)](_0xe9ac6),_0x4de9b4[_0x344a4e(0x335a)](_0x32cb8a),_0x3ac87c[_0x344a4e(0x335a)](_0x4de9b4),function(_0x45a1aa,_0x16b9f0){const _0x29ec42=_0x344a4e,_0x1cc5e2=_0x16b9f0[_0x29ec42(0x2842)]('#copy-btn'),_0x39b544=_0x16b9f0[_0x29ec42(0x2842)](_0x29ec42(0xeb4)),_0x24946c=_0x16b9f0[_0x29ec42(0x2842)](_0x29ec42(0xbd9)),_0x2c7ce2=_0x16b9f0['querySelector']('#markdown-preview');_0x1cc5e2['addEventListener'](_0x29ec42(0x364d),function(){const _0x5cadb2=_0x29ec42;navigator[_0x5cadb2(0x4764)][_0x5cadb2(0xeb1)](_0x2c7ce2['textContent'])[_0x5cadb2(0xaf5)](()=>{const _0x382eaf=_0x5cadb2;_0x57a9ad(_0x45a1aa,_0x382eaf(0x1673),_0x382eaf(0x3754));})[_0x5cadb2(0x31a3)](_0x5cfdc9=>{});}),_0x39b544[_0x29ec42(0xc61)](_0x29ec42(0x364d),function(){const _0x548b84=_0x29ec42,_0x5b6fab=_0x2c7ce2['textContent'],_0x1090cc=new 
Blob([_0x5b6fab],{'type':_0x548b84(0x129c)}),_0x88aa8=URL[_0x548b84(0x7fe)](_0x1090cc),_0x1460cf=document[_0x548b84(0x3ac9)]('a');_0x1460cf[_0x548b84(0xe63)]=_0x88aa8,_0x1460cf[_0x548b84(0x34e)]=_0x548b84(0x357c),document[_0x548b84(0x4f1a)][_0x548b84(0x335a)](_0x1460cf),_0x1460cf['click'](),document[_0x548b84(0x4f1a)][_0x548b84(0x477)](_0x1460cf),URL[_0x548b84(0x4b6e)](_0x88aa8);}),_0x24946c[_0x29ec42(0xc61)](_0x29ec42(0x364d),function(){const _0xb96534=_0x29ec42,_0x530797=_0xb96534(0x2d93)+encodeURIComponent('Here\x27s\x20something\x20I\x20wanted\x20to\x20share\x20with\x20you')+_0xb96534(0x375b)+encodeURIComponent(_0x2c7ce2['textContent']);window[_0xb96534(0x167e)]['href']=_0x530797;});}(_0x39d605,_0x3ac87c);}let _0x1a4e51,_0x3a921d,_0xa8cda5,_0x30263c,_0x46079f,_0x329443=!0x1,_0x20c8e4=!0x1,_0xa08fd1='';function _0x188dc7(_0x1d7d31){const _0x510a75=_0x37e46c;let _0x38da44=function(){const _0x44dd92=a0_0x11e7,_0x5b740c=document[_0x44dd92(0x492f)]('p'),_0x4a55f9=Array[_0x44dd92(0x27e6)](_0x5b740c)['map'](_0x188148=>_0x188148[_0x44dd92(0x2f80)][_0x44dd92(0x1b23)]());return _0x4a55f9;}()[_0x510a75(0x3541)]('\x20');void 0x0!==_0x38da44&&0x0!==_0x38da44[_0x510a75(0x1b19)]||(_0x38da44=_0x46a588()),_0x46079f=function(_0x53e806){const _0x48d39e=_0x510a75;let _0x47a817=_0x5b2838(_0x53e806),_0x3dc2a5=[];return _0x47a817[_0x48d39e(0x4a5b)]()[_0x48d39e(0xa21)](_0x12de9a=>{const _0x41f60c=_0x48d39e;let 
_0xd3e416=0x0,_0x4ecac3=_0x12de9a[_0x41f60c(0x4006)]();_0x12de9a[_0x41f60c(0x2d96)](_0x41f60c(0x3c21))[_0x41f60c(0x2108)]&&(_0xd3e416+=0x2),_0x12de9a['numbers']()[_0x41f60c(0x2108)]&&(_0xd3e416+=0x1),_0x4ecac3[_0x41f60c(0x1117)]('\x20')[_0x41f60c(0x1b19)]>0xc&&(_0xd3e416+=0x1),_0xd3e416>0x0&&_0x3dc2a5['push']({'sentence':_0x4ecac3,'score':_0xd3e416});}),_0x3dc2a5[_0x48d39e(0x4c33)]((_0x447180,_0xc906e8)=>_0xc906e8[_0x48d39e(0x22f3)]-_0x447180[_0x48d39e(0x22f3)]),_0x3dc2a5[_0x48d39e(0x384c)](0x0,0x3)['map'](_0x1f29ec=>_0x1f29ec[_0x48d39e(0x3824)])['join']('\x20');}(_0x38da44),_0xa8cda5=_0x1d7d31,_0x1a4e51=_0x1d7d31[_0x510a75(0xffd)]('text-selection-menu');const _0x30f1ca=_0x1d7d31[_0x510a75(0xffd)]('user-input');function _0x417dcf(_0x3189bc,_0x572172){const _0x4eb2be=_0x510a75,_0x21e70b=_0x572172[_0x4eb2be(0x2842)](_0x4eb2be(0x41d8));_0x3189bc?(_0x21e70b[_0x4eb2be(0x1745)][_0x4eb2be(0x42a1)](_0x4eb2be(0x372f),_0x4eb2be(0x1b2f)),_0x21e70b[_0x4eb2be(0x3f89)]=!0x1):(_0x21e70b[_0x4eb2be(0x1745)][_0x4eb2be(0x362c)]('pointer-events-none','opacity-50'),_0x21e70b[_0x4eb2be(0x3f89)]=!0x0);}document['addEventListener'](_0x510a75(0x3b7f),async _0x128a14=>{const _0x12cb31=_0x510a75,_0x90949d=await window[_0x12cb31(0x350d)](),_0x4ceee6=_0x128a14[_0x12cb31(0x1bba)]()[_0x12cb31(0x2628)](_0x1a4e51);_0x30263c=function(_0xc47e3e,_0x1959f2){const _0x3a096a=_0x12cb31;if(_0xc47e3e['toString']()[_0x3a096a(0x1b19)]>0x0){var _0x1011e6=document['elementFromPoint'](_0x1959f2[_0x3a096a(0x28a0)],_0x1959f2['clientY'])[_0x3a096a(0x4988)](_0x3a096a(0x1696));if(_0x1011e6)_0x30263c=''+window[_0x3a096a(0x167e)][_0x3a096a(0x451e)]+window[_0x3a096a(0x167e)][_0x3a096a(0x1211)]+'#'+_0x1011e6['id'];else{const _0x510c94=_0x1959f2[_0x3a096a(0x2974)],_0x2724cf=_0x1959f2[_0x3a096a(0x1b68)];_0x30263c=window['location'][_0x3a096a(0xe63)][_0x3a096a(0x1117)]('?')[0x0]+'?x='+_0x510c94+_0x3a096a(0x29cc)+_0x2724cf;}}return 
_0x30263c;}(_0x90949d,_0x128a14),_0x9ad941(),_0x3a921d=_0x90949d['toString']()[_0x12cb31(0x1b23)](),_0x3a921d['length']>0x0&&!_0x329443?(_0x20c8e4=!0x0,_0x417dcf(!0x0,_0xa8cda5),setTimeout(()=>{_0x20c8e4=!0x1;},0x14),_0x329443=!0x0):_0x3a921d[_0x12cb31(0x1b19)]>0x0&&_0x329443&&_0xa08fd1!==_0x3a921d&&!_0x4ceee6?_0x417dcf(!0x0,_0xa8cda5):0x0===_0x3a921d['length']&&_0x417dcf(!0x1,_0xa8cda5),_0xa08fd1=_0x3a921d;}),document['addEventListener'](_0x510a75(0x364d),_0x207468=>{const _0x4a3beb=_0x510a75,_0x50da9c=_0x207468['composedPath'](),_0x580037=window[_0x4a3beb(0x350d)]()[_0x4a3beb(0x8e8)]()[_0x4a3beb(0x1b23)]();if(!_0x20c8e4&&_0x329443){const _0x11aa1f=_0x50da9c[_0x4a3beb(0x2628)](_0x1a4e51);_0x11aa1f||0x0!==_0x580037[_0x4a3beb(0x1b19)]?_0x11aa1f?_0x417dcf(!0x1,_0xa8cda5):_0x580037!==_0x3a921d&&_0x580037[_0x4a3beb(0x1b19)]>0x0&&(_0x3a921d=_0x580037,_0x9ad941(window[_0x4a3beb(0x350d)]()),_0x417dcf(!0x0,_0xa8cda5)):(_0x329443=!0x1,document['querySelectorAll'](_0x4a3beb(0x2748))[_0x4a3beb(0xa21)](_0x5299d6=>_0x5299d6['remove']()),_0x417dcf(!0x1,_0xa8cda5));}});const _0x3fad31=_0x1d7d31['getElementById'](_0x510a75(0x20ba)),_0x8b6c3f=_0x1d7d31['getElementById'](_0x510a75(0x39e5)),_0x3d3a81=_0x1d7d31[_0x510a75(0xffd)](_0x510a75(0x43e0)),_0x4ff66f=_0x1d7d31[_0x510a75(0xffd)](_0x510a75(0x362e)),_0x1b44bb=_0x1d7d31[_0x510a75(0xffd)](_0x510a75(0x1742)),_0x42626b=_0x1d7d31[_0x510a75(0xffd)]('research-btn'),_0x9422e=_0x1d7d31[_0x510a75(0xffd)]('examples-btn'),_0x3c3d3f=_0x1d7d31['getElementById']('mindfulness-btn');function _0x598338(_0x487c2b,_0x3d300a){const _0x1ae6f5=_0x510a75,_0x1b6ac3=new CustomEvent('aiActionCompleted',{'detail':{'text':_0x487c2b,'type':_0x1ae6f5(0x150a),'links':_0x3d300a}});window[_0x1ae6f5(0x135d)](_0x1b6ac3);}_0x1b44bb['addEventListener'](_0x510a75(0x364d),function(){const 
_0x10678b=_0x510a75;_0x598338('Summarize\x20the\x20general\x20gist\x20of\x20this\x20text:\x20'+_0x46079f,[window[_0x10678b(0x167e)][_0x10678b(0xe63)]]);}),_0x3c3d3f['addEventListener'](_0x510a75(0x364d),function(){const _0x57b1ef=_0x510a75;_0x598338(_0x57b1ef(0x3ff9),[window[_0x57b1ef(0x167e)][_0x57b1ef(0xe63)]]);}),_0x42626b[_0x510a75(0xc61)](_0x510a75(0x364d),async function(){const _0x2c99f6=_0x510a75;!function(_0x50ba69,_0x5de6c3){const _0x4418ef=a0_0x11e7,_0x511b44=new CustomEvent(_0x4418ef(0x5216),{'detail':{'text':_0x50ba69,'type':_0x4418ef(0x14bf),'links':_0x5de6c3}});window[_0x4418ef(0x135d)](_0x511b44);}(_0x46079f,[window[_0x2c99f6(0x167e)][_0x2c99f6(0xe63)]]);}),_0x9422e[_0x510a75(0xc61)]('click',function(){const _0x10e143=_0x510a75;_0x598338(_0x10e143(0x3120)+_0x46079f,[window['location']['href']]);}),_0x3d3a81[_0x510a75(0xc61)]('click',function(){_0x4a5120(_0xa8cda5),_0x582d6d(_0xa8cda5);}),_0x30f1ca[_0x510a75(0xc61)](_0x510a75(0x301f),_0x39fe83=>{const _0x2126d9=_0x510a75,_0x26352e=_0x30f1ca['value'];_0x2126d9(0x3c55)!==_0x39fe83[_0x2126d9(0x49fe)]||_0x39fe83[_0x2126d9(0x3365)]||_0x39fe83[_0x2126d9(0x4a0e)]||(_0x39fe83[_0x2126d9(0x1e26)](),function(_0x13dc35,_0x72a4c,_0x245a1d){const _0x37f794=new CustomEvent(_0x72a4c,{'detail':{'text':_0x13dc35,'type':_0x245a1d}});window['dispatchEvent'](_0x37f794);}(_0x26352e,'aiActionCompleted','query'),_0x30f1ca[_0x2126d9(0x4fe9)]='');}),_0x4ff66f['addEventListener'](_0x510a75(0x364d),_0x6ce752=>{!function(_0xde2838){const _0x18c90d=a0_0x11e7,_0x39de76=new CustomEvent(_0x18c90d(0x5216),{'detail':{'text':_0xde2838,'type':_0x18c90d(0xf68),'links':[_0x30263c]}});window[_0x18c90d(0x135d)](_0x39de76);}(_0x3a921d);}),_0x3fad31[_0x510a75(0xc61)]('click',_0x26872c=>{const _0x497413=_0x510a75;_0x26872c[_0x497413(0x2e45)]();const _0x689196=_0x30f1ca[_0x497413(0x4fe9)];_0x30f1ca[_0x497413(0x4fe9)]='';const _0x258893=new 
CustomEvent('aiActionCompleted',{'detail':{'text':_0x689196,'type':_0x497413(0x1cff)}});window['dispatchEvent'](_0x258893);},{'capture':!0x0}),_0x8b6c3f[_0x510a75(0xc61)](_0x510a75(0x364d),_0x1a103a=>{const _0x5dd497=_0x510a75;_0x1a103a[_0x5dd497(0x2e45)](),_0x3a921d||(_0x3a921d=_0x46079f);const _0x477187=new CustomEvent('aiActionCompleted',{'detail':{'text':_0x3a921d,'type':_0x5dd497(0x3277)}});window['dispatchEvent'](_0x477187);});}function _0x23eb08(_0x95ab95){const _0x39c9bc=_0x37e46c,_0x1f06aa=_0x95ab95[_0x39c9bc(0x2842)](_0x39c9bc(0x24dd));return _0x1f06aa[_0x39c9bc(0x3cdf)]=_0x2ed16,_0x1f06aa;}const _0x3e8a84='https://tinymlbackend3.azurewebsites.net/',_0x24a2f2=[_0x37e46c(0x257b),_0x37e46c(0x2a7),'Retrieving\x20relevant\x20sentences','Calling\x20LLM\x20model',_0x37e46c(0x1d11)],_0x54c8b7=[_0x37e46c(0x4fc8),_0x37e46c(0x98b),_0x37e46c(0x1d11)],_0x36c8bc=[_0x37e46c(0x5e4),'Checking\x20conversation\x20history',_0x37e46c(0x98b),_0x37e46c(0x1d11)],_0x5e5ccb=[_0x37e46c(0x3b63),_0x37e46c(0x502e),_0x37e46c(0x98b),_0x37e46c(0x1d11)],_0x3f1935=[_0x37e46c(0x7c2),_0x37e46c(0x502e),_0x37e46c(0x98b),_0x37e46c(0x1d11)];let _0x4714de=_0x37e46c(0x8a8);const _0xfcad03={'prompt':_0x37e46c(0x29f9),'quote':'','comments':'','background_knowledge':'','power_up':0x14,'understanding':0x3},_0x163b6e={'prompt':_0x37e46c(0x37ee),'quote':'','comments':'','power_up':0x14,'understanding':0x3},_0x409fa0={'prompt':_0x37e46c(0x201c),'conversation_history':'','question':'','query':'','quote':'','background_knowledge':'','comments':'','power_up':0x14,'understanding':0x3};function _0x18d7d6(_0x2ae901=_0x37e46c(0xf68)){const _0x41a6cd=_0x37e46c;let _0x3e7168,_0x3097a5;if(_0x41a6cd(0xf68)===_0x2ae901)_0x3e7168={..._0xfcad03};else{if(_0x41a6cd(0x3277)===_0x2ae901)_0x3e7168={..._0x163b6e};else{if(_0x41a6cd(0x1cff)!==_0x2ae901)throw new Error('Unsupported\x20config\x20type:\x20'+_0x2ae901);_0x3e7168={..._0x409fa0};}}return _0x3097a5={..._0x3e7168},{'set_field':function(_0x232154,_0x431460){const 
_0x48f6ad=_0x41a6cd;if(!_0x3097a5[_0x48f6ad(0x2427)](_0x232154))throw new Error('Field\x20'+_0x232154+_0x48f6ad(0x96f));_0x3097a5[_0x232154]=_0x431460;},'get_field':function(_0x3100fe){const _0xd9070d=_0x41a6cd;if(_0x3097a5[_0xd9070d(0x2427)](_0x3100fe))return _0x3097a5[_0x3100fe];throw new Error(_0xd9070d(0x4176)+_0x3100fe+_0xd9070d(0x96f));},'make_field':function(_0x36fe10,_0x4b2948){const _0x40a952=_0x41a6cd;_0x3097a5[_0x40a952(0x2427)](_0x36fe10)?this['set_field'](_0x36fe10,_0x4b2948):_0x3097a5[_0x36fe10]=_0x4b2948;},'return_all_fields':function(){return _0x3097a5;},'reset_fields':function(){_0x3097a5={..._0x3e7168};}};}const _0x29c642=_0x3e8a84+_0x37e46c(0xaf8),_0x2c6561=_0x3e8a84+_0x37e46c(0x4c54),_0x25dca3=_0x3e8a84+_0x37e46c(0x4aa1),_0x2a0f5a=_0x3e8a84+_0x37e46c(0x3c66);async function*_0xe54f68(_0x3536f5,_0x23d534,_0x9e6312=!0x1,_0x490be1=!0x1){const _0x4c9368=_0x37e46c;let _0x1f913b=_0x29c642;try{_0x9e6312&&(_0x1f913b=_0x25dca3),_0x490be1&&(_0x1f913b=_0x2a0f5a);const _0x185ea7=await fetch(_0x1f913b,{'method':_0x4c9368(0x153c),'headers':{'Authorization':_0x4c9368(0x4942)+_0x23d534,'Content-Type':_0x4c9368(0x21d9)},'body':JSON['stringify'](_0x3536f5)});if(!_0x185ea7['ok'])throw new Error('HTTP\x20error!\x20Status:\x20'+_0x185ea7[_0x4c9368(0x45c6)]+_0x4c9368(0x2169)+_0x185ea7);const _0x337bec=_0x185ea7[_0x4c9368(0x4f1a)]['getReader'](),_0x29c957=new TextDecoder(_0x4c9368(0x2907));for(;;){const {done:_0x58fee5,value:_0x10089d}=await _0x337bec[_0x4c9368(0x50a6)]();if(_0x58fee5)break;let _0x4071f9=_0x29c957[_0x4c9368(0x2f4f)](_0x10089d,{'stream':!0x0});yield _0x4071f9;}}catch(_0x53acf9){}}const _0x3e9cc3=_0x3e8a84+_0x37e46c(0x3426);async function _0x173a68(_0x55bf3a,_0x2e2c6d){const _0x444b3f=_0x37e46c;try{const _0x4b20ec=await fetch(_0x3e9cc3,{'method':_0x444b3f(0x153c),'headers':{'Content-Type':_0x444b3f(0x21d9),'Authorization':_0x444b3f(0x4942)+_0x2e2c6d},'body':JSON[_0x444b3f(0x3cbd)]({'text':_0x55bf3a})});if(!_0x4b20ec['ok'])throw new 
Error(_0x444b3f(0x3b55)+_0x4b20ec['status']);return await _0x4b20ec['json']();}catch(_0x4e6560){}}function _0x4973e0(_0x4b82f9,_0x5edcd1){const _0x3823d9=_0x37e46c,_0x53bff6=_0x4b82f9['querySelector'](_0x3823d9(0x1a41));_0x4b82f9['querySelector'](_0x3823d9(0x387f))[_0x3823d9(0xc61)](_0x3823d9(0x364d),_0x5690e2=>{const _0x4dfab3=_0x3823d9;_0x5690e2[_0x4dfab3(0x1e26)]();const _0x22f97d=_0x53bff6[_0x4dfab3(0x492f)](_0x4dfab3(0x2ee9)),_0x18b72a=_0x53bff6[_0x4dfab3(0x492f)](_0x4dfab3(0x3c39));let _0xb32c5b=0x0;_0x22f97d[_0x4dfab3(0xa21)](_0x34267c=>{const _0x3082ea=_0x4dfab3,_0x3a8f0f=_0x3082ea(0x4022)===_0x34267c[_0x3082ea(0x37e4)]['correct'],_0x327310=_0x34267c[_0x3082ea(0x40f0)];_0x3a8f0f?(_0xb32c5b++,_0x327310[_0x3082ea(0x1745)]['add'](_0x3082ea(0x3416))):_0x327310[_0x3082ea(0x1745)][_0x3082ea(0x362c)](_0x3082ea(0x31ac));}),_0x18b72a[_0x4dfab3(0xa21)](_0x270d6a=>{const _0x9b11c4=_0x4dfab3;_0x270d6a[_0x9b11c4(0x1745)][_0x9b11c4(0x42a1)](_0x9b11c4(0x53f));});const _0x5905da='You\x20got\x20'+_0xb32c5b+'\x20out\x20of\x20'+_0x5edcd1[_0x4dfab3(0x1b19)]+_0x4dfab3(0x7a2);_0x4b82f9[_0x4dfab3(0x2842)](_0x4dfab3(0x2ac7))[_0x4dfab3(0x2f80)]=_0x5905da;}),function(_0x5b7972){const _0x5b471c=_0x3823d9;_0x53bff6[_0x5b471c(0x3cdf)]='',_0x5b7972[_0x5b471c(0xa21)]((_0x1f60ed,_0x2a0fe3)=>{const _0x330046=_0x5b471c,_0x59ae4f=document[_0x330046(0x3ac9)](_0x330046(0x4c88));_0x59ae4f[_0x330046(0x1745)][_0x330046(0x362c)]('mb-8');const _0x41ed8f=document['createElement']('h4');_0x41ed8f['classList']['add'](_0x330046(0x4d30),_0x330046(0x3d9d),_0x330046(0x609)),_0x41ed8f[_0x330046(0x2f80)]=_0x1f60ed['question'],_0x59ae4f[_0x330046(0x335a)](_0x41ed8f);const _0x3aca14=document[_0x330046(0x3ac9)]('ul');_0x3aca14['classList'][_0x330046(0x362c)]('list-none',_0x330046(0x397)),_0x1f60ed[_0x330046(0x43be)][_0x330046(0xa21)]((_0x54a58f,_0x2afa1e)=>{const 
_0x49db89=_0x330046,_0x5190d4=document[_0x49db89(0x3ac9)]('li'),_0x59c6d3=document[_0x49db89(0x3ac9)]('div');_0x59c6d3[_0x49db89(0x1745)]['add']('answer-option',_0x49db89(0x2a07),'items-center');const _0x5e1e37=document[_0x49db89(0x3ac9)](_0x49db89(0x7b0));_0x5e1e37['type']=_0x49db89(0x375f);const _0x422e3b=new Date()['getTime']()+'-'+Math['random']()['toString'](0x24)['substring'](0x2,0xf);_0x5e1e37['id']=_0x422e3b,_0x5e1e37[_0x49db89(0x11d8)]='question'+_0x2a0fe3,_0x5e1e37[_0x49db89(0x4fe9)]=_0x2afa1e,_0x5e1e37['classList'][_0x49db89(0x362c)](_0x49db89(0x2805)),_0x5e1e37[_0x49db89(0x37e4)][_0x49db89(0x3eec)]=_0x54a58f[_0x49db89(0x3eec)];const _0x3500ee=document[_0x49db89(0x3ac9)]('label');_0x3500ee[_0x49db89(0x3fe7)]=_0x422e3b,_0x3500ee[_0x49db89(0x2f80)]=_0x54a58f[_0x49db89(0x4006)],_0x3500ee[_0x49db89(0x1745)][_0x49db89(0x362c)]('flex-1');const _0x147412=document[_0x49db89(0x3ac9)](_0x49db89(0x4c88));if(_0x147412[_0x49db89(0x1745)]['add']('explains_wrapper',_0x49db89(0x53f)),_0x54a58f['correct']){_0x147412[_0x49db89(0x1745)][_0x49db89(0x362c)](_0x49db89(0x2735),_0x49db89(0x2ee0),'rounded-lg','border-2',_0x49db89(0x3993),'p-3');const _0x17d650=document[_0x49db89(0x3ac9)](_0x49db89(0x4c88));_0x17d650['classList']['add'](_0x49db89(0x5140),_0x49db89(0x22f4),_0x49db89(0x986),'rounded-lg',_0x49db89(0x228f),'border-green-500',_0x49db89(0x103b),_0x49db89(0x2fdc),_0x49db89(0x2705),'text-green-500'),_0x17d650[_0x49db89(0x2f80)]=_0x49db89(0x3c64),_0x147412['appendChild'](_0x17d650);}const 
_0x38fa2a=document[_0x49db89(0x3ac9)]('p');_0x38fa2a[_0x49db89(0x2f80)]=_0x54a58f[_0x49db89(0x2b5a)],_0x38fa2a['classList']['add'](_0x49db89(0x2b5a),'text-sm',_0x49db89(0x2ee5)),_0x147412[_0x49db89(0x335a)](_0x38fa2a),_0x59c6d3[_0x49db89(0x335a)](_0x5e1e37),_0x59c6d3[_0x49db89(0x335a)](_0x3500ee),_0x5190d4[_0x49db89(0x366b)](_0x59c6d3),_0x5190d4[_0x49db89(0x335a)](_0x147412),_0x3aca14[_0x49db89(0x335a)](_0x5190d4);}),_0x59ae4f['appendChild'](_0x3aca14),_0x53bff6['appendChild'](_0x59ae4f);});}(_0x5edcd1);}class _0x20129f{constructor(_0x515298){const _0x264aec=_0x37e46c;this[_0x264aec(0x4b2a)]=_0x515298[_0x264aec(0x2842)]('#modal1'),this[_0x264aec(0x3610)]={'sliderValue':0x1,'selectedDropdownValue':'','checkboxes':{},'modalDisplayed':!0x1};}[_0x37e46c(0x28bf)](){const _0x740fec=_0x37e46c,_0x12c368=this['shadowEle'][_0x740fec(0x2842)]('#understanding-slider');_0x12c368&&(_0x12c368[_0x740fec(0xc61)](_0x740fec(0x7b0),_0x51cad7=>{const _0x39c4eb=_0x740fec;this[_0x39c4eb(0x3aff)](_0x51cad7[_0x39c4eb(0x1bc7)][_0x39c4eb(0x4fe9)]);}),this[_0x740fec(0x3aff)](_0x12c368[_0x740fec(0x4fe9)]));const _0x5b662e=this[_0x740fec(0x4b2a)]['querySelector'](_0x740fec(0x1b29));_0x5b662e&&(_0x5b662e[_0x740fec(0xc61)](_0x740fec(0x1efc),_0x424d7a=>{const _0x398f4d=_0x740fec;this[_0x398f4d(0x1b26)](_0x424d7a[_0x398f4d(0x1bc7)]['value']);}),this[_0x740fec(0x1b26)](_0x5b662e[_0x740fec(0x4fe9)])),(this[_0x740fec(0x4b2a)][_0x740fec(0x492f)](_0x740fec(0x137f))[_0x740fec(0xa21)](_0x481f2a=>{const _0x28a28f=_0x740fec;_0x481f2a[_0x28a28f(0xc61)](_0x28a28f(0x1efc),_0x3d6c60=>{const _0x535ed7=_0x28a28f;this[_0x535ed7(0x4614)](_0x481f2a['id'],_0x3d6c60['target'][_0x535ed7(0x494d)]);}),this[_0x28a28f(0x4614)](_0x481f2a['id'],_0x481f2a[_0x28a28f(0x494d)]);}),this[_0x740fec(0x3e58)](_0x740fec(0x28bf),!0x0),this[_0x740fec(0x87e)](_0x740fec(0x28bf),!0x0));}[_0x37e46c(0x3e58)](_0x3298fb,_0x3c461f){const _0x211d6b=_0x37e46c,_0x2407a4=new 
CustomEvent('aiActionCompleted',{'detail':{'type':_0x211d6b(0x2e7),'text':this[_0x211d6b(0x2610)]()}});window['dispatchEvent'](_0x2407a4),_0x211d6b(0x28bf)!==_0x3298fb&&_0x57a9ad(this[_0x211d6b(0x4b2a)],_0x211d6b(0xe29)+_0x3298fb+_0x211d6b(0x2d4b)+_0x3c461f+_0x211d6b(0x43b6),_0x211d6b(0x3754));}['dispatchAndUpdate_settings'](_0x1fd4c0,_0x4b7952){const _0x3d2c09=_0x37e46c,_0x3c43d6=new CustomEvent('aiActionCompleted',{'detail':{'settings':this[_0x3d2c09(0x2eb)](),'type':'settings'}});window['dispatchEvent'](_0x3c43d6),_0x3d2c09(0x28bf)!==_0x1fd4c0&&_0x57a9ad(this[_0x3d2c09(0x4b2a)],_0x3d2c09(0x3977)+_0x1fd4c0+'\x20with\x20value\x20'+_0x4b7952+_0x3d2c09(0x43b6),'success');}['updateSlider'](_0x481e9c){const _0x209b20=_0x37e46c;this[_0x209b20(0x3610)][_0x209b20(0x56e)]=parseInt(_0x481e9c,0xa),this['dispatchAndUpdate']('sliderValue',this['settings'][_0x209b20(0x56e)]);}[_0x37e46c(0x1b26)](_0x25a318){const _0x1edda1=_0x37e46c;this[_0x1edda1(0x3610)][_0x1edda1(0x20e1)]=_0x25a318,this[_0x1edda1(0x87e)]('selectedDropdownValue',this[_0x1edda1(0x3610)][_0x1edda1(0x20e1)]);}[_0x37e46c(0x4614)](_0x3ea85d,_0x595238){const _0x2ee11c=_0x37e46c;this['settings'][_0x2ee11c(0x2b8)][_0x3ea85d]=_0x595238,_0x2ee11c(0x15a3)===_0x3ea85d?this[_0x2ee11c(0x3e58)](_0x3ea85d,_0x595238):this[_0x2ee11c(0x87e)](_0x3ea85d,_0x595238);}[_0x37e46c(0x2610)](){const _0x2c8f32=_0x37e46c;return _0x2c8f32(0x3ff7)+(['complete\x20beginner',_0x2c8f32(0x4b10),_0x2c8f32(0x2c6),_0x2c8f32(0x5c6)][this[_0x2c8f32(0x3610)][_0x2c8f32(0x56e)]]||_0x2c8f32(0x379f))+'.\x20'+(this[_0x2c8f32(0x3610)][_0x2c8f32(0x2b8)][_0x2c8f32(0x15a3)]?'':_0x2c8f32(0x28f3));}[_0x37e46c(0x2eb)](){const _0x4e8315=_0x37e46c;return{'llm_model':this[_0x4e8315(0x3610)][_0x4e8315(0x20e1)],'show_progress':this['settings'][_0x4e8315(0x2b8)]['Show\x20chain\x20of\x20thought']};}}const _0x2f6a27=_0x37e46c(0x615);function _0x516fdd(_0x32fab3=_0x37e46c(0x226a),_0x610d96=_0x37e46c(0x3b1a)){return new Promise((_0x286e33,_0x106c43)=>{const 
_0x3abb08=a0_0x11e7,_0x1ffe94=indexedDB['open'](_0x32fab3,0x1);_0x1ffe94[_0x3abb08(0x18cd)]=function(_0x250a0c){const _0x234860=_0x3abb08;_0x106c43(_0x234860(0x1e19)+_0x250a0c[_0x234860(0x1bc7)][_0x234860(0x176e)]);},_0x1ffe94[_0x3abb08(0x2e1b)]=function(_0x3c3b72){const _0x18933d=_0x3abb08;_0x286e33(_0x3c3b72[_0x18933d(0x1bc7)][_0x18933d(0xa34)]);},_0x1ffe94[_0x3abb08(0x4d94)]=function(_0x482199){const _0x351255=_0x3abb08;_0x482199[_0x351255(0x1bc7)][_0x351255(0xa34)][_0x351255(0x115f)](_0x610d96,{'keyPath':'id'});};});}function _0x101895(_0x1361c4){const _0x535d62=_0x37e46c;return _0x1361c4[_0x535d62(0x741)](/[^a-zA-Z\s]/g,'');}async function _0x131574(_0x5345ff){const _0xd5a439=_0x37e46c,_0x5b3629=_0x5345ff[_0xd5a439(0xffd)]('message-container'),_0x5c381f=_0x5b3629[_0xd5a439(0x3cdf)];let _0x5c63c9=function(_0x5cbd74){const _0x1f0936=_0xd5a439;let _0x3be01b='';const _0x1b1d8f=document[_0x1f0936(0x13d4)](_0x5cbd74,NodeFilter[_0x1f0936(0x3cd1)],{'acceptNode':function(_0x2b25ff){const _0x14da9f=_0x1f0936;return _0x2b25ff[_0x14da9f(0x26d8)][_0x14da9f(0x38f)][_0x14da9f(0x2d96)](/SCRIPT|STYLE/i)||''===_0x2b25ff[_0x14da9f(0x429f)][_0x14da9f(0x1b23)]()?NodeFilter['FILTER_REJECT']:NodeFilter[_0x14da9f(0x1529)];}},!0x1);let _0x13dc9e;for(;_0x13dc9e=_0x1b1d8f[_0x1f0936(0x25a5)]();)_0x3be01b+=_0x13dc9e['textContent']+'\x20';return _0x3be01b['trim']();}(_0x5b3629),_0x1f07e5=_0x19ba43(_0x5c63c9);if(_0x1f07e5=_0x1f07e5[_0xd5a439(0x1b19)]>0x3?[..._0x1f07e5][_0xd5a439(0x4c33)](()=>0.5-Math['random']())[_0xd5a439(0x384c)](0x0,0x3)[_0xd5a439(0x3541)]('-'):_0x1f07e5['join']('-'),0x0===_0x1f07e5[_0xd5a439(0x1b19)])return new Date()['toString']();try{const _0x319d86=await async function(_0x2287dd,_0x3c8dcd,_0x4fb4f6='tinyMLDB_chats'){const _0x2d89a8=_0xd5a439,_0x2fee4a=(await _0x516fdd())[_0x2d89a8(0x2bdb)]([_0x4fb4f6],'readwrite')[_0x2d89a8(0x91a)](_0x4fb4f6),_0x36bb4e=_0x2fee4a['count'](),_0x3f263c=await new Promise((_0x17594a,_0x197af9)=>{const 
_0x248902=_0x2d89a8;_0x36bb4e[_0x248902(0x2e1b)]=()=>{const _0x46a0e1=_0x248902;_0x17594a(_0x36bb4e[_0x46a0e1(0xa34)]+0x1);},_0x36bb4e[_0x248902(0x18cd)]=_0x55c713=>{const _0x36ee4d=_0x248902;_0x197af9(_0x36ee4d(0x127d)+_0x55c713[_0x36ee4d(0x1bc7)][_0x36ee4d(0x176e)]);};});return new Promise((_0x5a3536,_0x2c6c97)=>{const _0x5b4c66=_0x2d89a8,_0xb6984a=_0x2fee4a[_0x5b4c66(0xbe8)]({'id':_0x3f263c,'title':_0x2287dd,'html_content':_0x3c8dcd});_0xb6984a[_0x5b4c66(0x2e1b)]=()=>_0x5a3536({'success':!0x0,'id':_0x3f263c}),_0xb6984a[_0x5b4c66(0x18cd)]=_0x2d92f2=>_0x2c6c97(_0x5b4c66(0x4cf0)+_0x2d92f2[_0x5b4c66(0x1bc7)]['errorCode']);});}(_0x1f07e5,_0x5c381f);return _0x319d86['id'];}catch(_0x46100c){return null;}}async function _0x366533(_0x1df9a2,_0xdc6fbe,_0x500557=_0x37e46c(0x226a),_0x377f00='tinyMLDB_chats'){const _0x3e0c0c=_0x37e46c;(async function(_0x41b64a=_0x3e0c0c(0x226a),_0x366f67=_0x3e0c0c(0x3b1a),_0x14567d){const _0x90eb98=_0x3e0c0c,_0x43655e=(await _0x516fdd(_0x41b64a))[_0x90eb98(0x2bdb)]([_0x366f67],'readonly')[_0x90eb98(0x91a)](_0x366f67)[_0x90eb98(0xf9e)](_0x14567d);return new Promise((_0x547532,_0x5e1fad)=>{const _0x403af3=_0x90eb98;_0x43655e[_0x403af3(0x2e1b)]=_0x533ded=>{const _0xe4bed8=_0x403af3;_0x547532(_0x533ded[_0xe4bed8(0x1bc7)][_0xe4bed8(0xa34)]);},_0x43655e['onerror']=_0x3db2ea=>{const _0x3e4aa2=_0x403af3;_0x5e1fad(_0x3e4aa2(0x17ff)+_0x3db2ea[_0x3e4aa2(0x1bc7)][_0x3e4aa2(0x176e)]);};});}(_0x500557,_0x377f00,_0xdc6fbe)['then'](_0x2e781e=>{const _0xd6f4f3=_0x3e0c0c,_0x102b28=_0x1df9a2[_0xd6f4f3(0xffd)](_0xd6f4f3(0x1906));_0x2e781e&&void 0x0!==_0x2e781e['html_content']&&(_0x102b28[_0xd6f4f3(0x3cdf)]=_0x2e781e['html_content']);})[_0x3e0c0c(0x31a3)](_0x397543=>{}));}async function _0x384008(_0x3891a5,_0x989603='tinyMLDB',_0x2088f4=_0x37e46c(0x3b1a)){const _0x54715e=_0x37e46c;(function(_0x44a904,_0x464338,_0x2ae42e){return new Promise((_0x10b9ca,_0x2ee452)=>{const 
_0x321fc5=a0_0x11e7,_0x897b2c=indexedDB[_0x321fc5(0x1795)](_0x44a904);_0x897b2c[_0x321fc5(0x18cd)]=_0x232a1e=>{const _0x3ff5a8=_0x321fc5;_0x2ee452(_0x3ff5a8(0x3eed),_0x232a1e[_0x3ff5a8(0x1bc7)][_0x3ff5a8(0x3d85)]);},_0x897b2c['onsuccess']=_0x5d0d8b=>{const _0x1ec49a=_0x321fc5,_0x22d90d=_0x5d0d8b[_0x1ec49a(0x1bc7)][_0x1ec49a(0xa34)][_0x1ec49a(0x2bdb)]([_0x464338],'readwrite')[_0x1ec49a(0x91a)](_0x464338)[_0x1ec49a(0x5be)](_0x2ae42e);_0x22d90d[_0x1ec49a(0x2e1b)]=()=>{_0x10b9ca();},_0x22d90d[_0x1ec49a(0x18cd)]=_0x48eb6c=>{const _0x4d84b1=_0x1ec49a;_0x2ee452('Delete\x20request\x20error:',_0x48eb6c[_0x4d84b1(0x1bc7)][_0x4d84b1(0x3d85)]);};};});}(_0x989603,_0x2088f4,_0x3891a5)[_0x54715e(0xaf5)](()=>!0x0)[_0x54715e(0x31a3)](_0x187073=>!0x1));}async function _0x2ff3c7(_0x1031fd,_0x309889){const _0xa268fa=_0x37e46c;if(void 0x0===_0x309889)_0x309889=await _0x131574(_0x1031fd);else{const _0x141a6d=_0x1031fd[_0xa268fa(0x2842)](_0xa268fa(0x745));if(!_0x141a6d)return;(async function(_0x5ae4ba,_0x64eebb,_0x370518=_0xa268fa(0x3b1a)){const _0x31c2e5=_0xa268fa;if(null==_0x5ae4ba)throw new Error(_0x31c2e5(0x505f));const _0xbfd6ee=await _0x516fdd(),_0x160721=_0xbfd6ee[_0x31c2e5(0x2bdb)]([_0x370518],_0x31c2e5(0x23d0))[_0x31c2e5(0x91a)](_0x370518);return new Promise((_0x1e32bc,_0x4cfd7e)=>{const _0x172ab0=_0x31c2e5,_0x500afd=_0x160721['get'](_0x5ae4ba);_0x500afd[_0x172ab0(0x2e1b)]=()=>{const _0x1eba5c=_0x172ab0,_0x479c1c=_0x500afd[_0x1eba5c(0xa34)];if(_0x479c1c){_0x479c1c['html_content']=_0x64eebb;const _0x4972a0=_0x160721['put'](_0x479c1c);_0x4972a0[_0x1eba5c(0x2e1b)]=()=>{_0x1e32bc({'success':!0x0,'id':_0x5ae4ba});},_0x4972a0[_0x1eba5c(0x18cd)]=_0x31423e=>_0x4cfd7e(_0x1eba5c(0x520e)+_0x31423e[_0x1eba5c(0x1bc7)][_0x1eba5c(0x176e)]);}else 
_0x4cfd7e('Entry\x20not\x20found\x20with\x20ID:\x20'+_0x5ae4ba);},_0x500afd[_0x172ab0(0x18cd)]=_0x2e48a9=>_0x4cfd7e(_0x172ab0(0x45d2)+_0x2e48a9[_0x172ab0(0x1bc7)]['errorCode']);});}(_0x309889,_0x141a6d['innerHTML'])['then'](_0x2c884f=>{})[_0xa268fa(0x31a3)](_0x1afd5e=>{}));}return _0x309889;}async function _0xa9700d(_0x2aea20=_0x37e46c(0x226a),_0x1f0488=_0x37e46c(0x3b1a),_0x579161){const _0x5ccae4=_0x37e46c;try{const _0x2ca5ce=await async function(_0x21b2af,_0x12aa18){const _0x2855ee=a0_0x11e7,_0x517fb5=(await _0x516fdd(_0x21b2af))[_0x2855ee(0x2bdb)]([_0x12aa18],_0x2855ee(0x1aa2))[_0x2855ee(0x91a)](_0x12aa18)['openCursor'](),_0x25b5cd=[];return new Promise((_0x3fa311,_0x4cd651)=>{const _0x12e948=_0x2855ee;_0x517fb5[_0x12e948(0x2e1b)]=_0x37ec53=>{const _0x567706=_0x12e948,_0x183a92=_0x37ec53[_0x567706(0x1bc7)][_0x567706(0xa34)];_0x183a92?(_0x25b5cd[_0x567706(0x1715)](_0x183a92['value']),_0x183a92['continue']()):_0x3fa311(_0x25b5cd);},_0x517fb5[_0x12e948(0x18cd)]=_0x327129=>{const _0x37279b=_0x12e948;_0x4cd651(_0x37279b(0x4cf9)+_0x327129[_0x37279b(0x1bc7)][_0x37279b(0x176e)]);};});}(_0x2aea20,_0x1f0488),_0x2fa4e4=_0x101895(_0x579161)[_0x5ccae4(0x6e8)]();return _0x2ca5ce['filter'](_0x2884b7=>_0x101895(_0x2884b7[_0x5ccae4(0x529)])[_0x5ccae4(0x6e8)]()[_0x5ccae4(0x2628)](_0x2fa4e4));}catch(_0x3776c0){return[];}}function _0xad222e(_0x3c7600,_0x59f6d6){const _0x3be67a=_0x37e46c,_0x71a0e5=_0x3c7600['querySelector']('#list-container');if(_0x71a0e5){const 
_0x30e128=document[_0x3be67a(0x3ac9)]('li');_0x30e128['id']=_0x3be67a(0x9cd),_0x30e128[_0x3be67a(0x1cea)]=_0x3be67a(0x4fbf),_0x30e128[_0x3be67a(0x3cdf)]=_0x3be67a(0x50ed)+_0x59f6d6['id']+'\x22\x0a\x20\x20\x20\x20class=\x22flex\x20items-center\x20justify-between\x20w-full\x20px-4\x20py-2\x20text-left\x20text-zinc-800\x20bg-zinc-100\x20rounded-lg\x20hover:bg-zinc-300\x20focus:outline-none\x22\x0a\x20\x20\x20\x20style=\x22background-color:\x20#f4f4f5;\x22\x0a>\x0a\x20\x20\x20\x20'+_0x59f6d6[_0x3be67a(0x4685)]+_0x3be67a(0x1062)+_0x59f6d6['id']+'\x20class=\x27rounded-lg\x20hover:bg-blue-100\x27>\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20{-0x1!==_0x59f6d6['id']&&async function(_0x343286,_0x110bfa){const _0x114a57=a0_0x11e7;await _0x366533(_0x343286,_0x110bfa,_0x114a57(0x226a),_0x114a57(0x3b1a)),_0x57a9ad(_0x343286,'Loading\x20chat\x20'+_0x110bfa,_0x114a57(0x3754)),_0xd56255(_0x343286),_0x582d6d(_0x343286);}(_0x3c7600,_0x59f6d6['id']);}),_0x14d439[_0x3be67a(0xc61)]('click',async()=>{await _0x384008(_0x59f6d6['id']),_0xec9c19(_0x3c7600);}),_0x71a0e5['appendChild'](_0x30e128);}}function _0x554348(){const _0x301a3f=_0x37e46c,_0x232070=new CustomEvent('resetAIChat',{'detail':{'containerId':_0x301a3f(0x24ed)}});window[_0x301a3f(0x135d)](_0x232070);}let _0x5aca92;async function _0xec9c19(_0xb326d1){const _0x41a443=_0x37e46c,_0x31cb3c=await async function(_0x20f612,_0x4d9c45){const _0x3f6448=a0_0x11e7,_0x179ecf=(await _0x516fdd(_0x20f612))[_0x3f6448(0x2bdb)]([_0x4d9c45],_0x3f6448(0x1aa2))[_0x3f6448(0x91a)](_0x4d9c45)[_0x3f6448(0xfe1)](),_0x50865a=[];return new Promise((_0x389cec,_0x42bb1b)=>{const _0x42b44e=_0x3f6448;_0x179ecf[_0x42b44e(0x2e1b)]=_0x4ba2aa=>{const 
_0x3ccb5f=_0x42b44e,_0x716196=_0x4ba2aa[_0x3ccb5f(0x1bc7)]['result'];_0x716196?(_0x50865a['push'](_0x716196[_0x3ccb5f(0x4fe9)]),_0x716196[_0x3ccb5f(0x16d9)]()):_0x389cec(_0x50865a);},_0x179ecf[_0x42b44e(0x18cd)]=_0x4e4f75=>{const _0x24b1b1=_0x42b44e;_0x42bb1b(_0x24b1b1(0x4cf9)+_0x4e4f75[_0x24b1b1(0x1bc7)]['errorCode']);};});}('tinyMLDB',_0x41a443(0x3b1a)),_0x59c865=_0xb326d1[_0x41a443(0x2842)]('#list-container');if(_0x59c865){if(_0x5aca92&&_0x31cb3c[_0x41a443(0x1b19)]===_0x5aca92['length'])return;_0x59c865[_0x41a443(0x3cdf)]='',_0x31cb3c[_0x41a443(0xa21)]((_0x322051,_0x12ebdc)=>{_0xad222e(_0xb326d1,_0x322051);});}_0x5aca92=_0x31cb3c;}const _0xd56255=async _0x4ee54c=>{const _0xfb6349=_0x37e46c,_0x7821da=_0x4ee54c[_0xfb6349(0x2842)](_0xfb6349(0x25ae));let _0x4c4828=_0x4ee54c[_0xfb6349(0x2842)](_0xfb6349(0x1cf4));_0xfb6349(0x1f2e)===_0x7821da[_0xfb6349(0x1a84)][_0xfb6349(0x12ca)]?(_0x7821da[_0xfb6349(0x1a84)][_0xfb6349(0x12ca)]=_0xfb6349(0x28b),_0x4c4828[_0xfb6349(0x1a84)][_0xfb6349(0x12ca)]=_0xfb6349(0x28b)):(await _0xec9c19(_0x4ee54c),_0x7821da[_0xfb6349(0x1a84)][_0xfb6349(0x12ca)]=_0xfb6349(0x1f2e),_0x4c4828[_0xfb6349(0x1a84)]['display']=_0xfb6349(0x1f2e));};function _0x227b35(_0x241676){const _0xf4ad60=_0x37e46c,_0x32a5e9=_0x241676[_0xf4ad60(0x2842)]('#popover-container'),_0x294f31=_0x241676[_0xf4ad60(0x2842)](_0xf4ad60(0xe24)),_0x2e99de=_0x241676['querySelector'](_0xf4ad60(0x3e89));let _0x43bd29=_0x241676[_0xf4ad60(0x2842)](_0xf4ad60(0x1cf4));if(_0x241676[_0xf4ad60(0x2842)](_0xf4ad60(0xf05))[_0xf4ad60(0xc61)]('click',async()=>{const _0x46c3e3=_0xf4ad60;confirm(_0x46c3e3(0x3025))&&(await function(_0x4f588d=_0x46c3e3(0x226a),_0xf25704=_0x46c3e3(0x3b1a)){return new Promise((_0x3f874a,_0x5cb447)=>{const _0xa045e=a0_0x11e7,_0x40ada4=indexedDB['open'](_0x4f588d);_0x40ada4[_0xa045e(0x18cd)]=_0x4f5a04=>{const _0x2c8d37=_0xa045e;_0x5cb447(_0x2c8d37(0x3eed),_0x4f5a04[_0x2c8d37(0x1bc7)][_0x2c8d37(0x3d85)]);},_0x40ada4['onsuccess']=_0x485c73=>{const 
_0x142957=_0xa045e,_0x1590cd=_0x485c73['target'][_0x142957(0xa34)]['transaction']([_0xf25704],_0x142957(0x23d0))['objectStore'](_0xf25704)[_0x142957(0x4933)]();_0x1590cd[_0x142957(0x2e1b)]=()=>{_0x3f874a();},_0x1590cd[_0x142957(0x18cd)]=_0x3ea21a=>{const _0x49ca1d=_0x142957;_0x5cb447(_0x49ca1d(0x3c86),_0x3ea21a[_0x49ca1d(0x1bc7)][_0x49ca1d(0x3d85)]);};};});}(_0x46c3e3(0x226a),_0x46c3e3(0x3b1a)),_0xec9c19(_0x241676),_0x554348());}),!_0x43bd29){const _0x359a85=_0x241676[_0xf4ad60(0x2842)](_0xf4ad60(0x27c4));_0x43bd29=document[_0xf4ad60(0x3ac9)](_0xf4ad60(0x4c88)),_0x43bd29[_0xf4ad60(0x1a84)][_0xf4ad60(0x25f1)]='fixed',_0x43bd29[_0xf4ad60(0x1a84)][_0xf4ad60(0x279d)]='0',_0x43bd29[_0xf4ad60(0x1a84)]['left']='0',_0x43bd29[_0xf4ad60(0x1a84)][_0xf4ad60(0x17d2)]='100vw',_0x43bd29['style'][_0xf4ad60(0x3cd6)]=_0xf4ad60(0x109a),_0x43bd29['style'][_0xf4ad60(0x1ff8)]='rgba(0,\x200,\x200,\x200.5)',_0x43bd29[_0xf4ad60(0x1a84)][_0xf4ad60(0x24df)]=_0xf4ad60(0xe19),_0x43bd29['style'][_0xf4ad60(0x12ca)]='none',_0x43bd29[_0xf4ad60(0x1745)][_0xf4ad60(0x362c)](_0xf4ad60(0x4d61)),_0x359a85[_0xf4ad60(0x335a)](_0x43bd29);}const _0x2936a2=_0x241676[_0xf4ad60(0x2842)](_0xf4ad60(0x244f)),_0x25ffea=_0x241676[_0xf4ad60(0x2842)](_0xf4ad60(0x1a7c));_0x2936a2[_0xf4ad60(0xc61)](_0xf4ad60(0x364d),()=>{_0x554348(),_0xd56255(_0x241676);}),_0x25ffea[_0xf4ad60(0xc61)](_0xf4ad60(0x364d),async()=>{_0xd56255(_0x241676);}),_0x294f31&&_0x294f31['addEventListener'](_0xf4ad60(0x364d),()=>{const _0x1a082a=_0xf4ad60;_0x32a5e9[_0x1a082a(0x1a84)][_0x1a082a(0x12ca)]='none',_0x43bd29['style'][_0x1a082a(0x12ca)]=_0x1a082a(0x28b);}),_0x2e99de&&_0x2e99de[_0xf4ad60(0xc61)](_0xf4ad60(0x384f),async _0x5ac755=>{const _0x1c159d=_0xf4ad60;await async function(_0x8a95be,_0xc583e1){const _0x26f595=a0_0x11e7,_0x3f6e22=await 
_0xa9700d(_0x26f595(0x226a),_0x26f595(0x3b1a),_0x8a95be);!function(_0x83cf92,_0x41cccf){if(_0x83cf92===_0x41cccf)return!0x0;if(null==_0x83cf92||null==_0x41cccf)return!0x1;if(_0x83cf92['length']!==_0x41cccf['length'])return!0x1;for(let _0x3b736d=0x0;_0x3b736d<_0x83cf92['length'];++_0x3b736d)if(_0x83cf92[_0x3b736d]!==_0x41cccf[_0x3b736d])return!0x1;return!0x0;}(_0x3f6e22,_0x22c1c1)&&(_0xc583e1['querySelector']('#list-container')[_0x26f595(0x3cdf)]='',_0x3f6e22[_0x26f595(0xa21)]((_0x2bbde0,_0x3b16b7)=>{_0xad222e(_0xc583e1,_0x2bbde0),_0x22c1c1=_0x3f6e22;}));}(_0x5ac755[_0x1c159d(0x1bc7)][_0x1c159d(0x4fe9)],_0x241676);}),_0x43bd29[_0xf4ad60(0xc61)](_0xf4ad60(0x364d),()=>{const _0x3f008d=_0xf4ad60;_0x32a5e9[_0x3f008d(0x1a84)][_0x3f008d(0x12ca)]=_0x3f008d(0x28b),_0x43bd29[_0x3f008d(0x1a84)][_0x3f008d(0x12ca)]=_0x3f008d(0x28b);}),window[_0xf4ad60(0xc61)](_0xf4ad60(0x364d),_0x132958=>{const _0x11c0d0=_0xf4ad60;_0x132958[_0x11c0d0(0x1bc7)]!==_0x32a5e9&&_0x132958[_0x11c0d0(0x1bc7)]!==_0x43bd29||(_0x32a5e9['style'][_0x11c0d0(0x12ca)]=_0x11c0d0(0x28b),_0x43bd29[_0x11c0d0(0x1a84)][_0x11c0d0(0x12ca)]=_0x11c0d0(0x28b));}),window['addEventListener']('keydown',_0x28b696=>{const _0x377034=_0xf4ad60;_0x377034(0x153e)===_0x28b696['key']&&_0x377034(0x1f2e)===_0x32a5e9[_0x377034(0x1a84)][_0x377034(0x12ca)]&&(_0x32a5e9[_0x377034(0x1a84)][_0x377034(0x12ca)]=_0x377034(0x28b),_0x43bd29['style'][_0x377034(0x12ca)]='none');});}let _0x22c1c1=[];const _0x97864f='\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20Assistant\x20\x20Ask\x20me\x20questions\x20about\x20this\x20highlighted\x20text\x20\x20\x20Explain\x20highlighted\x20text\x20Get\x20Current\x20research\x20Generate\x20Quiz\x20\x20\x20\x20Create\x20citation\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20Chicago\x20APA\x20Harvard\x20\x20\x20\x20\x20\x20⏎\x20\x20\x20\x20\x20\x20';function _0xc1bcaf(_0x5c0506){const _0x544de0=_0x37e46c;return!!_0x5c0506[_0x544de0(0xffd)](_0x544de0(0x300c))['classList']['contains'](_0x544de0(0x3021));}function 
_0x2142c3(_0x374408){const _0xb3d23b=_0x37e46c,_0x264309=_0x374408[_0xb3d23b(0x2842)](_0xb3d23b(0x3636));!function(_0x3fc31a){const _0x4c6759=_0xb3d23b,_0x8f8c6d=_0x3fc31a[_0x4c6759(0x2842)]('#up-chevron'),_0x349644=_0x3fc31a['querySelector'](_0x4c6759(0x832));_0x3fc31a['querySelector']('#citationMenu')[_0x4c6759(0x1745)][_0x4c6759(0x362c)](_0x4c6759(0x53f)),_0x8f8c6d[_0x4c6759(0x1745)][_0x4c6759(0x362c)](_0x4c6759(0x53f)),_0x349644[_0x4c6759(0x1745)]['remove'](_0x4c6759(0x53f));}(_0x264309),_0x264309[_0xb3d23b(0x1745)][_0xb3d23b(0x362c)]('hidden');}let _0x5f0ac1,_0x4265a7,_0x2cf0b4,_0x56a770,_0x997cd0,_0x1089a1,_0x45dde0,_0x33b1d9,_0x5c0b23,_0x1ae815,_0x2cd8b6,_0x174a47,_0x45e9a3,_0x5b910d,_0x442d2e=!0x1,_0x4bf239=!0x1,_0x32c170='',_0x1a6420=_0x37e46c(0x3010),_0x3d67a5;function _0x168c23(_0x46becc,_0x42f556){const _0x2cf40b=_0x37e46c;_0x46becc[_0x2cf40b(0x1a84)]['position']=_0x2cf40b(0x1aaf),_0x46becc['style'][_0x2cf40b(0x279d)]=_0x42f556[_0x2cf40b(0x32c6)]-0xc8+'px',_0x46becc[_0x2cf40b(0x1a84)]['left']=_0x42f556[_0x2cf40b(0x28a0)]+'px',_0x46becc[_0x2cf40b(0x1745)][_0x2cf40b(0x42a1)](_0x2cf40b(0x53f)),setTimeout(()=>{const _0x4ebafb=_0x2cf40b,_0x276c6a=_0x46becc[_0x4ebafb(0x3b6a)]();_0x276c6a[_0x4ebafb(0x4d50)]>window[_0x4ebafb(0x4154)]&&(_0x46becc[_0x4ebafb(0x1a84)][_0x4ebafb(0x48eb)]=window[_0x4ebafb(0x4154)]-_0x276c6a[_0x4ebafb(0x17d2)]-0x19+'px'),_0x276c6a['bottom']>window[_0x4ebafb(0x4732)]&&(_0x46becc['style']['top']=window[_0x4ebafb(0x4732)]-_0x276c6a[_0x4ebafb(0x3cd6)]+'px');},0x0);}function _0x49c7cf(_0x4e0105,_0x5b9546=''){const _0x552757=_0x37e46c,_0x3d8011=new CustomEvent(_0x552757(0x5216),{'detail':{'text':_0x4e0105,'type':_0x552757(0x150a),'links':[window[_0x552757(0x167e)][_0x552757(0xe63)]]}});window[_0x552757(0x135d)](_0x3d8011);}function _0x54725a(_0x4d9481){const 
_0x38fbd0=_0x37e46c;_0x997cd0=_0x4d9481[_0x38fbd0(0x2842)](_0x38fbd0(0x329f)),_0x1089a1=_0x4d9481[_0x38fbd0(0x2842)]('#explainMiniBtn'),_0x45dde0=_0x4d9481[_0x38fbd0(0x2842)]('#quizMiniBtn'),_0x33b1d9=_0x4d9481['querySelector']('#researchMiniBtn'),_0x1ae815=_0x4d9481['querySelector'](_0x38fbd0(0x528d)),_0x5c0b23=_0x4d9481[_0x38fbd0(0x2842)](_0x38fbd0(0x1f8e)),_0x2cd8b6=_0x4d9481[_0x38fbd0(0x2842)](_0x38fbd0(0x25f0)),_0x45e9a3=_0x4d9481['querySelector'](_0x38fbd0(0x3687));const _0x1f3494=function(_0x8f2e34=''){const _0x9dd59=_0x38fbd0,_0x15a95f=document[_0x9dd59(0x4685)],_0x1e93a5=window['location'][_0x9dd59(0xe63)],_0x58e1b9=new Date()['toLocaleDateString'](_0x9dd59(0x2ee6),{'year':_0x9dd59(0x2307),'month':_0x9dd59(0x324f),'day':_0x9dd59(0x2307)});let _0x52f223='';return _0x52f223=(document[_0x9dd59(0x2842)](_0x9dd59(0x46ac))?document[_0x9dd59(0x2842)]('meta[name=\x22author\x22]')[_0x9dd59(0x4a5c)](_0x9dd59(0x484f)):'No\x20author\x20specified')+'\x20('+(document['querySelector'](_0x9dd59(0x3759))?document[_0x9dd59(0x2842)](_0x9dd59(0x3759))[_0x9dd59(0x4a5c)](_0x9dd59(0x484f)):'No\x20publication\x20date\x20specified')+').\x20'+_0x15a95f+_0x9dd59(0x44e8)+_0x58e1b9+',\x20from\x20'+_0x1e93a5,_0x52f223;}();function _0x2f47c2(_0xee2698){const _0x4d87ad=_0x38fbd0;_0xee2698[_0x4d87ad(0x1e26)](),_0xc1bcaf(_0x4d9481)||_0x4b56d9();const _0x2c2ad2=_0x45e9a3['value'];_0x45e9a3[_0x4d87ad(0x4fe9)]='',!function(_0x214316,_0x35a341='aiActionCompleted',_0x25ec5d=_0x4d87ad(0x1cff)){const _0x2172e5=_0x4d87ad,_0xa7269b=new CustomEvent(_0x35a341,{'detail':{'text':_0x214316,'type':_0x25ec5d,'fromRightClickMenu':!0x0}});window[_0x2172e5(0x135d)](_0xa7269b);}(_0x4d87ad(0x38db)+_0x2cf0b4+_0x4d87ad(0x182d)+_0x2c2ad2+_0x4d87ad(0x2302));}function _0x4b56d9(){_0x4a5120(_0x4d9481),_0x2142c3(_0x4d9481);}_0x1a6420=_0x38fbd0(0xffa)+_0x1f3494+'\x20'+_0x1a6420,_0x45e9a3[_0x38fbd0(0xc61)](_0x38fbd0(0x301f),function(_0x31abe4){const 
_0x3d31da=_0x38fbd0;'Enter'===_0x31abe4[_0x3d31da(0x49fe)]&&_0x2f47c2(_0x31abe4);}),_0x997cd0[_0x38fbd0(0xc61)](_0x38fbd0(0x364d),_0x301f74=>{_0x2f47c2(_0x301f74);}),_0x1089a1[_0x38fbd0(0xc61)](_0x38fbd0(0x364d),_0x178f8d=>{!function(_0xa439d6){const _0x16eb8e=a0_0x11e7,_0x5ac84e=new CustomEvent(_0x16eb8e(0x5216),{'detail':{'text':_0xa439d6,'type':'explain','links':[window[_0x16eb8e(0x167e)]['href']]}});window[_0x16eb8e(0x135d)](_0x5ac84e);}(_0x2cf0b4),_0xc1bcaf(_0x4d9481)||_0x4b56d9();}),_0x45dde0['addEventListener'](_0x38fbd0(0x364d),_0x35cd1c=>{!function(_0x4e47a1){const _0x42a462=a0_0x11e7,_0x19113b=new CustomEvent(_0x42a462(0x5216),{'detail':{'text':_0x4e47a1,'type':'quiz','links':[window[_0x42a462(0x167e)][_0x42a462(0xe63)]]}});window[_0x42a462(0x135d)](_0x19113b);}(_0x2cf0b4),_0xc1bcaf(_0x4d9481)||_0x4b56d9();}),_0x33b1d9[_0x38fbd0(0xc61)](_0x38fbd0(0x364d),_0x53af17=>{!function(_0x369c69,_0x2bc656=''){const _0x83fec=a0_0x11e7,_0x1811a2=new CustomEvent(_0x83fec(0x5216),{'detail':{'text':_0x369c69,'type':_0x83fec(0x14bf),'links':[window[_0x83fec(0x167e)][_0x83fec(0xe63)]],'getNew':!0x0}});window[_0x83fec(0x135d)](_0x1811a2);}(_0x2cf0b4),_0xc1bcaf(_0x4d9481)||_0x4b56d9();}),_0x1ae815[_0x38fbd0(0xc61)]('click',_0x2ae929=>{const _0x38cee5=_0x38fbd0;_0x49c7cf(_0x1a6420+_0x38cee5(0x5225)),_0xc1bcaf(_0x4d9481)||_0x4b56d9();}),_0x5c0b23[_0x38fbd0(0xc61)](_0x38fbd0(0x364d),_0x15582d=>{const _0x2c4845=_0x38fbd0;_0x49c7cf(_0x1a6420+_0x2c4845(0x1ce6)),_0xc1bcaf(_0x4d9481)||_0x4b56d9();}),_0x2cd8b6[_0x38fbd0(0xc61)]('click',_0xbc6d24=>{const _0x4f3e4f=_0x38fbd0;_0x49c7cf(_0x1a6420+_0x4f3e4f(0x68d)),_0xc1bcaf(_0x4d9481)||_0x4b56d9();}),_0x174a47=localStorage['getItem'](_0x38fbd0(0x3c4e));}let _0x98e9f6,_0x5cf452='',_0x130252='',_0xd16bd7=[_0x37e46c(0x1308),_0x37e46c(0x519)];function _0x582cc7(_0x1f53ba,_0x840bc2){const _0x5125e3=_0x37e46c;return _0x1f53ba[_0x5125e3(0x1b19)]<=_0x840bc2?_0x1f53ba:_0x1f53ba[_0x5125e3(0x384c)](0x0,_0x840bc2)+_0x5125e3(0x278e);}const 
_0x428d74=function(_0x120368){const _0x80c5a0=_0x37e46c,_0x144688=document[_0x80c5a0(0x3ac9)]('div');return _0x144688['innerHTML']=_0x120368[_0x80c5a0(0x1b23)](),_0x144688[_0x80c5a0(0x35f8)];}(_0x37e46c(0x2641));function _0x302ba2(_0x59dbd1){const _0x1aef9f=_0x37e46c;return _0x59dbd1[_0x1aef9f(0x741)](/[^a-zA-Z0-9 \-+?.,:;"'*()&^%$#@!{}\[\]]/g,'');}function _0x34f054(){_0x5cf452='',_0x3d67a5['innerHTML']='';}function _0x20aace(_0x169de0){const _0x150bf2=_0x37e46c;_0x3d67a5=_0x169de0[_0x150bf2(0x2842)](_0x150bf2(0x262b));const _0x264011=_0x169de0[_0x150bf2(0x2842)](_0x150bf2(0x3687));function _0x441b35(_0x3e8fa3,_0xc11a47){const _0x419a3b=_0x150bf2;_0x109ac3(_0x3e8fa3);let _0x52bdce=_0x3e8fa3[_0x419a3b(0x6e8)]();if(_0x52bdce=_0x302ba2(_0x52bdce),_0x98e9f6=!0x1,_0x3e8fa3)_0x98e9f6=_0xd16bd7[_0x419a3b(0x363a)](_0x23a977=>!!_0x23a977[_0x419a3b(0x6e8)]()[_0x419a3b(0x3bcf)](_0x52bdce)&&(_0x5cf452=_0x23a977[_0x419a3b(0x37b5)](_0x3e8fa3['length']),_0x3d67a5[_0x419a3b(0x2f80)]=_0x582cc7(_0x3e8fa3+_0x5cf452,0x3c),_0x3d67a5[_0x419a3b(0x335a)](_0x428d74),!0x0));else{if(0x0===_0x3e8fa3['length']){const _0x178783=Math[_0x419a3b(0x2e2d)](Math['random']()*_0xd16bd7[_0x419a3b(0x1b19)]);_0x5cf452=_0xd16bd7[_0x178783],_0x3d67a5[_0x419a3b(0x2f80)]=_0x582cc7(_0x5cf452,0x3c),_0x3d67a5[_0x419a3b(0x335a)](_0x428d74);}}!_0x98e9f6&&_0x3e8fa3[_0x419a3b(0x1b19)]>0x0&&_0x34f054();}_0x264011[_0x150bf2(0xc61)](_0x150bf2(0x7b0),function(_0x43f333){const _0x444791=_0x150bf2;_0x441b35(_0x43f333[_0x444791(0x1bc7)][_0x444791(0x4fe9)],_0x169de0);}),_0x264011[_0x150bf2(0xc61)](_0x150bf2(0x4ba),function(){const _0x53f346=_0x150bf2;''===this[_0x53f346(0x4fe9)]&&_0x441b35('',_0x169de0),this[_0x53f346(0x25db)](_0x53f346(0x10e2));}),_0x264011[_0x150bf2(0xc61)]('blur',function(){const _0x1b680e=_0x150bf2;0x0===this[_0x1b680e(0x4fe9)][_0x1b680e(0x1b19)]&&(_0x34f054(),this[_0x1b680e(0x998)]('placeholder',_0x1b680e(0x51b5)));}),_0x264011[_0x150bf2(0xc61)](_0x150bf2(0x301f),function(_0x1f166b){const 
_0x4a783a=_0x150bf2;'Tab'===_0x1f166b['key']&&_0x5cf452?(_0x1f166b[_0x4a783a(0x1e26)](),function(_0xae0611){const _0x4fc24c=_0x4a783a,_0x1cae24=_0xae0611[_0x4fc24c(0x2842)](_0x4fc24c(0x3687));_0x5cf452&&(_0x1cae24['value']+=_0x5cf452,_0x441b35(_0x1cae24['value'],_0xae0611),_0x34f054());}(_0x169de0)):_0x4a783a(0x4837)===_0x1f166b[_0x4a783a(0x49fe)]&&_0x5cf452&&_0x34f054();}),_0x264011['addEventListener'](_0x150bf2(0x4ba),function(){const _0x1deb97=_0x150bf2;_0x441b35(''),this[_0x1deb97(0x25db)]('placeholder');});}async function _0x469a32(_0x1fa77f){const _0x263e77=_0x37e46c;if(_0x1fa77f!==_0x130252&&_0x1fa77f){_0x130252=_0x1fa77f;const _0x196005=_0x263e77(0xceb)+_0x130252+_0x263e77(0x448),_0x27b372=await async function(_0x1a1a4f,_0x4b6ef2=''){const _0x1efc47=_0x263e77,_0x30d55f={'prompt':_0x1a1a4f=_0x1efc47(0x4ee)+_0x2cf0b4+'\x20\x0a\x20'+_0x1a1a4f};let _0x1705be='';try{for await(let _0x25c5e6 of _0xe54f68(_0x30d55f,_0x174a47))_0x1705be+=_0x25c5e6;}catch(_0x24db58){}return _0x1705be;}(_0x196005);_0x27b372&&(_0xd16bd7[_0x263e77(0x2767)](_0x302ba2(_0x27b372)),_0xd16bd7=_0xd16bd7[_0x263e77(0x384c)](0x0,0x5));}}const _0x109ac3=function(_0x4d966d,_0x70cdd){let _0x5f2f8c;return function(..._0x59332c){const _0x339a8d=this;clearTimeout(_0x5f2f8c),_0x5f2f8c=setTimeout(()=>_0x4d966d['apply'](_0x339a8d,_0x59332c),_0x70cdd);};}(_0x526238=>_0x469a32(_0x526238),0x1f4);function _0x49164e(_0x458cb9){const _0x371d98=_0x37e46c,_0xb4fcb7=document[_0x371d98(0x3ac9)](_0x371d98(0x15c6));_0xb4fcb7['innerHTML']=_0x97864f,_0x458cb9['appendChild'](_0xb4fcb7[_0x371d98(0x484f)]['cloneNode'](!0x0)),function(_0x142027){const _0x311a3f=_0x371d98;async function _0x4b44f5(_0x51f090){const _0x4fe494=a0_0x11e7,_0x54e947=await 
window[_0x4fe494(0x350d)](),_0x45f412=_0x51f090[_0x4fe494(0x1bba)](),_0x18d035=_0x45f412[_0x4fe494(0x2628)](_0x5f0ac1),_0x13c281=_0x45f412[_0x4fe494(0x2628)](_0x4265a7);_0x18d035||_0x13c281||(_0x2cf0b4=_0x54e947[_0x4fe494(0x8e8)]()[_0x4fe494(0x1b23)]()),_0x2cf0b4[_0x4fe494(0x1b19)]>0x0&&!_0x442d2e&&!_0x13c281?(_0x4bf239=!0x0,setTimeout(()=>{_0x4bf239=!0x1;},0x14),_0x168c23(_0x5f0ac1,_0x51f090),_0x5f0ac1[_0x4fe494(0x1745)][_0x4fe494(0x42a1)](_0x4fe494(0x53f)),_0x442d2e=!0x0):_0x2cf0b4['length'],_0x32c170=_0x2cf0b4;}_0x56a770=_0x142027,_0x5f0ac1=_0x142027[_0x311a3f(0xffd)](_0x311a3f(0x29bb)),_0x4265a7=_0x142027[_0x311a3f(0xffd)](_0x311a3f(0x300c)),document['addEventListener'](_0x311a3f(0x50c5),async function(_0x16c649){_0x5b910d=setTimeout(async()=>{await _0x4b44f5(_0x16c649['touches'][0x0]);},0x320);}),document[_0x311a3f(0xc61)](_0x311a3f(0xc1b),function(_0x410977){const _0x43a521=_0x311a3f;clearTimeout(_0x5b910d),_0x5f0ac1['contains'](_0x410977[_0x43a521(0x1bc7)])||'hidden'!==!_0x5f0ac1[_0x43a521(0x1745)][_0x43a521(0x2b31)]||_0x2142c3(_0x56a770);}),document['addEventListener']('mouseup',async _0x37a802=>{await _0x4b44f5(_0x37a802);}),document[_0x311a3f(0xc61)](_0x311a3f(0x364d),_0x262a2a=>{const _0x2f837f=_0x311a3f,_0x5af104=_0x262a2a[_0x2f837f(0x1bba)](),_0x6020a9=window[_0x2f837f(0x350d)]()[_0x2f837f(0x8e8)]()['trim']();if(!_0x4bf239&&_0x442d2e){const _0x5bb8a6=_0x5af104['includes'](_0x5f0ac1);_0x5bb8a6||0x0!==_0x6020a9['length']?_0x5bb8a6||_0x6020a9!==_0x2cf0b4&&_0x6020a9[_0x2f837f(0x1b19)]>0x0&&!_0x5bb8a6&&(_0x2cf0b4=_0x6020a9,_0x168c23(_0x5f0ac1,_0x262a2a)):(_0x2142c3(_0x56a770),_0x442d2e=!0x1);}});}(_0x458cb9),function(_0x5b9602){const _0x3e6e94=_0x371d98;_0x5b9602['querySelector'](_0x3e6e94(0x2dd2))[_0x3e6e94(0xc61)]('click',function(){const 
_0x1022c7=_0x3e6e94,_0x31c529=_0x5b9602[_0x1022c7(0x2842)](_0x1022c7(0x2448)),_0x56840f=this[_0x1022c7(0x2842)](_0x1022c7(0x1873)),_0xf3068b=this[_0x1022c7(0x2842)](_0x1022c7(0x832));_0x31c529[_0x1022c7(0x1745)][_0x1022c7(0x1908)](_0x1022c7(0x53f)),_0xf3068b[_0x1022c7(0x1745)]['toggle'](_0x1022c7(0x53f)),_0x56840f[_0x1022c7(0x1745)][_0x1022c7(0x1908)](_0x1022c7(0x53f));});}(_0x458cb9),_0x54725a(_0x458cb9),_0x20aace(_0x458cb9);}function _0x5bc834(_0x486980,_0x2cda3e){const _0x519086=_0x37e46c;let _0x55fc7e;return _0x55fc7e=_0x187608(_0x486980,'ai'===_0x2cda3e?_0x519086(0x3ce):_0x519086(0x30d7),'','message-container'),_0x55fc7e;}function _0x40bfe6(_0x17b08,_0x5b3aef,_0x4e0dd){const _0x14f721=_0x37e46c,_0x49b23a=_0x187608(_0x17b08,_0x14f721(0x2fec),'');return _0x4e0dd[_0x14f721(0xa21)]((_0xf693af,_0x2698d3)=>{const _0x403e1e=_0x14f721,_0x2c9ac1=_0x187608(_0x17b08,_0x403e1e(0x3968),'');_0x2c9ac1[_0x403e1e(0x998)](_0x403e1e(0xe63),_0xf693af),_0x2c9ac1[_0x403e1e(0x2f80)]=_0x2698d3+0x1,_0x49b23a[_0x403e1e(0x335a)](_0x2c9ac1);}),_0x49b23a;}const _0x1e37ef=_0x37e46c(0x4e77);async function _0x2fe1b5(_0x2b9b66,_0x31c116){const _0x36421b=_0x37e46c,_0x1ad371=_0x2b9b66[_0x36421b(0x4833)](async _0x1b2cea=>{const _0x744319=_0x36421b,_0x3567de=await fetch(''+_0x1e37ef+_0x1b2cea+_0x744319(0x4e0e)+_0x31c116),_0x5c6db0=await _0x3567de[_0x744319(0x4006)](),_0x171508=new DOMParser()[_0x744319(0x4420)](_0x5c6db0,_0x744319(0x691));return Array[_0x744319(0x27e6)](_0x171508['querySelectorAll'](_0x744319(0x1a87)))['map'](_0x2012a5=>{const 
_0x32214a=_0x744319,_0x323ddb=_0x2012a5[_0x32214a(0x2842)]('title')[_0x32214a(0x2f80)],_0x448f48=_0x2012a5[_0x32214a(0x2842)]('id')[_0x32214a(0x2f80)],_0x2c2bf4=_0x2012a5['querySelector'](_0x32214a(0x3155))['textContent'],_0x4f23e4=_0x423ec3(_0x2012a5[_0x32214a(0x2842)](_0x32214a(0x4829))[_0x32214a(0x2f80)]),_0x47cd11=_0x2012a5[_0x32214a(0x2842)](_0x32214a(0x35e3))[_0x32214a(0x2f80)];return{'title':_0x323ddb,'link':_0x448f48,'author':_0x2c2bf4,'summary':_0x4f23e4,'year':new Date(_0x47cd11)['getFullYear']()};});});return(await Promise['all'](_0x1ad371))['flat']();}function _0xe45224(_0x3d3312,_0x4edb15){const _0x4b743a=_0x37e46c;let _0x5374ef='';var _0x40eddb;return _0x5374ef+='|\x20\x20Title\x20\x20|Link\x20|\x0a',_0x5374ef+=_0x4b743a(0x4c99),_0x5374ef+='|\x20'+(_0x40eddb=_0x3d3312[_0x4b743a(0x4685)],_0x40eddb[_0x4b743a(0x741)](/[\r\n]+/g,''))+_0x4b743a(0x47d5)+_0x3d3312[_0x4b743a(0x4b32)]+_0x4b743a(0x834),_0x3d3312[_0x4b743a(0x4848)]&&(_0x5374ef+=_0x4b743a(0x283c)+_0x3d3312[_0x4b743a(0x4848)]+'\x0a\x0a'),_0x3d3312['citations']&&'Not\x20available'!==_0x3d3312[_0x4b743a(0x2a5c)]&&(_0x5374ef+=_0x4b743a(0x1a73)+_0x3d3312['citations']+'\x0a\x0a'),_0x3d3312[_0x4b743a(0x4469)]&&(_0x5374ef+='≈\x20Similarity:\x20'+(0x64*_0x3d3312[_0x4b743a(0x4469)])['toFixed'](0x2)+'%\x0a\x0a'),(_0x5374ef+=':::\x20spoiler\x20Summary\x0a*'+_0x3d3312[_0x4b743a(0x4829)]+_0x4b743a(0x4ef1),_0x5374ef+=_0x4b743a(0x1ea8),_0x5374ef);}async function _0x24d8ea(_0x4fe3d0,_0x19708b,_0x550a8a,_0x489534,_0x5d0cec=0xa){const _0x431291=await async function(_0x52afcf,_0x5a5a48=0x5){const _0x5d824b=a0_0x11e7,_0x5cccca=[];try{const _0xe83b02=await async function(_0x2e9ad0){const _0x580d75=a0_0x11e7,_0x324c85=_0x5e9617+_0x580d75(0x4b5b)+encodeURIComponent(_0x2e9ad0)+_0x580d75(0x2654),_0x525e5b=await fetch(_0x324c85,{'headers':{'x-api-key':_0x1f89fc,'Accept':_0x580d75(0x21d9)}});if(!_0x525e5b['ok'])throw new Error(_0x580d75(0x3775)+_0x525e5b[_0x580d75(0x45c6)]);const _0x4b0cd3=await 
_0x525e5b[_0x580d75(0x3289)]();if(!_0x4b0cd3[_0x580d75(0x5139)])throw new Error(_0x580d75(0x595),_0x4b0cd3);const _0x19af76=_0x4b0cd3['data']['map'](_0xf2e411=>({'title':_0xf2e411['title'],'author':_0xf2e411[_0x580d75(0x1c43)][_0x580d75(0x4833)](_0x4e9e7d=>_0x4e9e7d[_0x580d75(0x11d8)])[_0x580d75(0x3541)](',\x20'),'year':_0xf2e411[_0x580d75(0x4848)],'link':_0xf2e411[_0x580d75(0xd17)],'summary':_0x423ec3(_0xf2e411['abstract'])||'No\x20summary\x20available','citations':_0xf2e411[_0x580d75(0x4995)]||_0x580d75(0x13ce)}))[_0x580d75(0x4c33)]((_0x3f1b59,_0x25a9ae)=>_0x25a9ae[_0x580d75(0x2a5c)]-_0x3f1b59[_0x580d75(0x2a5c)]);return _0x19af76;}(_0x52afcf);_0x5cccca[_0x5d824b(0x1715)](..._0xe83b02);}catch(_0x2558d6){try{const [_0x302e78]=await Promise[_0x5d824b(0xc36)]([_0x2fe1b5([_0x52afcf],_0x5a5a48)]);_0x5cccca[_0x5d824b(0x1715)](..._0x302e78);}catch(_0x3f2fa7){}}return _0x5cccca;}(_0x489534,_0x5d0cec),_0x5dea96=function(_0x3f0c09,_0x340c50){const _0x55254b=a0_0x11e7;let _0x4ce3bf='';if(_0x340c50[_0x55254b(0x384c)](0x0,0x5)['forEach']((_0x5d9316,_0x4e83c0)=>{_0x4ce3bf+=_0xe45224(_0x5d9316);}),_0x340c50[_0x55254b(0x1b19)]>0x5){const _0x2d683d=_0x55254b(0x1244)+_0x3f0c09;_0x4ce3bf+='\x0a\x0a{_0x4ce3bf+=_0xe45224(_0x4f434d);}),_0x4ce3bf+=_0x55254b(0x21c6)+_0x3f0c09+_0x55254b(0x1d6c)+_0x3f0c09+_0x55254b(0x4efc);}return _0x4ce3bf;}(_0x550a8a,_0x431291);return _0x5dea96;}const _0x5e9617=_0x37e46c(0x1bab),_0x1f89fc=_0x37e46c(0x45d5);function _0x423ec3(_0x32d4c8){const _0x9efbf0=_0x37e46c;if(!_0x32d4c8)return'';const _0x4f33f1=_0x32d4c8[_0x9efbf0(0x2d96)](/[^\.!\?]+[\.!\?]+/g);return _0x4f33f1&&0x0!==_0x4f33f1[_0x9efbf0(0x1b19)]?_0x4f33f1['slice'](0x0,0x2)['join']('\x20'):'';}const _0x192c47=_0x37e46c(0x29fc);let _0x59abf0,_0x4ea4e8=!0x1;function _0x1e5a00(_0x15e946){const _0x2834fb=_0x37e46c;if(_0x4ea4e8)return;const _0x4c94f7=document[_0x2834fb(0x3ac9)](_0x2834fb(0x15c6));_0x4c94f7[_0x2834fb(0x3cdf)]=_0x192c47;const _0x1ae0ee=_0x15e946[_0x2834fb(0x2842)](_0x2834fb(0x1d07));return 
_0x1ae0ee&&(_0x59abf0=_0x1ae0ee[_0x2834fb(0x51c2)](!0x0),_0x1ae0ee[_0x2834fb(0x3cdf)]='',_0x1ae0ee['appendChild'](_0x4c94f7[_0x2834fb(0x484f)]['cloneNode'](!0x0))),_0x1ae0ee;}async function _0x4a9b57(_0x277baa,_0x27ef20){const _0x56c9f2=_0x37e46c;if(_0x4ea4e8)return;const _0x4bf35f=_0x277baa[_0x56c9f2(0x2842)]('#'+_0x27ef20);_0x4bf35f&&(!function(_0x183ff8){const _0x571268=_0x56c9f2;if(_0x4ea4e8)return;const _0x1f2eff=_0x183ff8[_0x571268(0x492f)](_0x571268(0x4ca));_0x1f2eff[_0x571268(0xa21)](_0x5cb0f4=>{const _0x2b3b7e=_0x571268;_0x5cb0f4['classList']['remove'](_0x2b3b7e(0x53f));});}(_0x4bf35f),setTimeout(()=>{const _0x50c2c2=_0x56c9f2;_0x4bf35f[_0x50c2c2(0x26d8)]&&_0x4bf35f[_0x50c2c2(0x26d8)][_0x50c2c2(0x477)](_0x4bf35f);},0x1f4));}function _0x29a6aa(_0x215289,_0x138f2e){const _0x182883=_0x37e46c;if(_0x4ea4e8)return;!function(_0x53f91,_0x11455d,_0x262c8e,_0x590336,_0x8787de){const _0x1f2f7c=a0_0x11e7;if(_0x4ea4e8)return;const _0x4febdd=Date[_0x1f2f7c(0xae4)](),_0x49bfac=setInterval(function(){const _0x327a4d=_0x1f2f7c;if(Date[_0x327a4d(0xae4)]()-_0x4febdd>_0x8787de)return void clearInterval(_0x49bfac);const _0x477658=_0x11455d[_0x327a4d(0x2842)](_0x53f91);_0x477658&&(clearInterval(_0x49bfac),_0x262c8e(_0x477658));},_0x590336);}('#'+(_0x182883(0x357b)+_0x138f2e),_0x215289,function(_0x2e7256){const _0x282263=_0x182883;_0x2e7256['classList'][_0x282263(0x42a1)]('hidden');},0x64,0x1388);}function _0x802b77(_0xc8657d,_0x4dde82){const _0x417967=_0x37e46c;_0x4ea4e8||_0xc8657d[_0x417967(0xa21)]((_0x2ed8f4,_0xf10d5e)=>{setTimeout(()=>function(_0x1a1c2d,_0x249467,_0x5cd120){const _0x1bef2e=a0_0x11e7;if(_0x4ea4e8)return;const _0x30c188=_0x1a1c2d[_0x1bef2e(0x2842)](_0x1bef2e(0x361d));if(_0x30c188){const 
_0x417237=document[_0x1bef2e(0x3ac9)]('div');_0x417237[_0x1bef2e(0x1cea)]='flex\x20items-center\x20text-blue-400\x20dark:text-blue-400\x20bg-transparent\x20p-1\x20rounded-lg\x20border-b\x20border-zinc-300\x20dark:border-zinc-700\x20animate-fadeInUp',_0x417237[_0x1bef2e(0x3cdf)]=_0x1bef2e(0x43bf)+_0x249467+'\x0a\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x20\x20',_0x30c188[_0x1bef2e(0x335a)](_0x417237);}}(_0x4dde82,_0x2ed8f4,_0xf10d5e),0x64*_0xf10d5e);});}function _0xc9afe9(_0x7871cc){const _0xcbe1be=_0x37e46c,_0xbfdfce=function(_0x5425a4){const _0x38089e=a0_0x11e7;let _0x4ba26c=_0x5b2838(_0x5425a4)[_0x38089e(0x4a5b)]()[_0x38089e(0x3ab5)](_0x38089e(0x26f6))[_0x38089e(0x4833)](_0x101a09=>{const _0x56f3b7=_0x38089e;let _0x2a38d5=_0x5b2838(_0x101a09),_0x40e386=_0x2a38d5[_0x56f3b7(0x1d99)]()[_0x56f3b7(0x3ab5)](_0x56f3b7(0x26f6)),_0x44ed60=_0x2a38d5[_0x56f3b7(0x268a)]()[_0x56f3b7(0x3ab5)](_0x56f3b7(0x26f6))[_0x56f3b7(0x1465)](_0x39d822=>!_0x40e386['includes'](_0x39d822));return{'sentence':_0x101a09,'entities':_0x40e386,'nouns':_0x44ed60};});return _0x4ba26c;}(_0x7871cc),_0x31ea58=function(_0xd0fc0a){const _0x56870f=a0_0x11e7;let _0x1316ec=[];return _0xd0fc0a[_0x56870f(0xa21)](({sentence:_0x88764f,entities:_0x8bb44a,nouns:_0x27d12d})=>{const _0x473b48=_0x56870f;let _0x25fe7f=_0x5b2838(_0x88764f)[_0x473b48(0x4a03)]()[_0x473b48(0x3ab5)](_0x473b48(0x26f6))[_0x473b48(0x1465)](_0x473fcb=>!_0x8bb44a[_0x473b48(0x2628)](_0x473fcb)&&!_0x27d12d[_0x473b48(0x2628)](_0x473fcb)),_0x4524f6=_0x8bb44a[_0x473b48(0x1b19)]>0x0?_0x8bb44a:_0x27d12d;_0x4524f6['forEach'](_0x565424=>{const _0x3601f1=_0x473b48;let _0xd9c604=_0x88764f[_0x3601f1(0x741)](_0x565424,_0x3601f1(0x4a56)),_0x2bad48=_0x565424,_0x21a67c=_0x4524f6[_0x3601f1(0x1465)](_0x4a76c9=>_0x4a76c9!==_0x565424)[_0x3601f1(0x1d1d)](_0x27d12d[_0x3601f1(0x1465)](_0x5666df=>_0x5666df!==_0x565424),_0x25fe7f);_0xa21fff(_0x21a67c);let 
_0x15388b=_0x21a67c[_0x3601f1(0x384c)](0x0,0x3);for(;_0x15388b[_0x3601f1(0x1b19)]<0x3&&_0x25fe7f[_0x3601f1(0x1b19)]>0x0;){let _0x3ea054=_0x25fe7f['pop']();_0x15388b[_0x3601f1(0x2628)](_0x3ea054)||_0x15388b['push'](_0x3ea054);}let _0x11d5c4={'question':_0xd9c604,'answers':[]};_0x15388b[_0x3601f1(0xa21)](_0x337ed2=>{const _0x498444=_0x3601f1;_0x11d5c4[_0x498444(0x43be)][_0x498444(0x1715)]({'text':_0x337ed2,'correct':!0x1,'explanation':_0x498444(0x4ce1)});}),_0x11d5c4['answers'][_0x3601f1(0x1715)]({'text':_0x2bad48,'correct':!0x0,'explanation':_0x3601f1(0x86b)}),_0xa21fff(_0x11d5c4[_0x3601f1(0x43be)]),_0x1316ec[_0x3601f1(0x1715)](_0x11d5c4);});}),_0x1316ec;}(_0xbfdfce);return function(_0x100451){const _0x5b4475=a0_0x11e7;let _0x329fb8=_0x100451['filter'](_0x21c845);return _0xa21fff(_0x329fb8),{'questions':_0x329fb8[_0x5b4475(0x384c)](0x0,0x3)[_0x5b4475(0x4833)](_0x1a846e=>({..._0x1a846e,'question':_0x54cfde(_0x1a846e[_0x5b4475(0x328c)])}))};}(_0x31ea58)[_0xcbe1be(0x11b3)];}function _0x54cfde(_0x3ee0e0){const _0x5d6645=_0x37e46c;return _0x3ee0e0=(_0x3ee0e0=_0x3ee0e0[_0x5d6645(0x741)](/<\/?[^>]+(>|$)/g,''))[_0x5d6645(0x741)](/\b[A-Z]{1,2}\s*:\s*/g,'');}function _0xa21fff(_0x376a2f){const _0x335f99=_0x37e46c;for(let _0xda9e04=_0x376a2f['length']-0x1;_0xda9e04>0x0;_0xda9e04--){const _0x40ac13=Math[_0x335f99(0x2e2d)](Math[_0x335f99(0xe98)]()*(_0xda9e04+0x1));[_0x376a2f[_0xda9e04],_0x376a2f[_0x40ac13]]=[_0x376a2f[_0x40ac13],_0x376a2f[_0xda9e04]];}}function _0x21c845(_0x43d4d2){const _0x157462=_0x37e46c;return _0x54cfde(_0x43d4d2['question'])[_0x157462(0x741)](/______/g,'')[_0x157462(0x1b23)]()[_0x157462(0x1117)](/\s+/)[_0x157462(0x1b19)]>=0x3;}function _0x54d9cc(_0x513a5f){const _0x57b5ff=_0x37e46c;let _0x87d1ea=!0x0;const _0x594fa5=_0x513a5f[_0x57b5ff(0xffd)](_0x57b5ff(0x4856)),_0x5b4d8e=_0x513a5f[_0x57b5ff(0xffd)](_0x57b5ff(0x1652));function _0x58d6cc(){const 
_0x4820e7=_0x57b5ff;_0x87d1ea=!_0x87d1ea,_0x87d1ea?(_0x594fa5[_0x4820e7(0x1a84)][_0x4820e7(0x3cd6)]=_0x594fa5[_0x4820e7(0x768)]+'px',_0x5b4d8e[_0x4820e7(0x1745)][_0x4820e7(0x42a1)](_0x4820e7(0x75b))):(_0x594fa5[_0x4820e7(0x1a84)][_0x4820e7(0x3cd6)]='0',_0x5b4d8e[_0x4820e7(0x1745)][_0x4820e7(0x362c)]('rotate-180'));}_0x594fa5[_0x57b5ff(0x1a84)][_0x57b5ff(0xa69)]='hidden',_0x5b4d8e[_0x57b5ff(0xc61)](_0x57b5ff(0x364d),_0x58d6cc),_0x513a5f[_0x57b5ff(0x33e0)]=_0x58d6cc;}const _0x3d63f1=_0x37e46c(0x41ff);function _0x1ab784(_0x3be8a8){const _0xd6452a=_0x37e46c,_0x3259cf=_0x3be8a8['querySelector']('#helpModal'),_0x4d3404=_0x3259cf['classList'][_0xd6452a(0x2b31)](_0xd6452a(0x53f)),_0x13c576=_0x3259cf[_0xd6452a(0x2842)]('#youtubeVideo-container');_0x4d3404?(_0x3259cf[_0xd6452a(0x1745)][_0xd6452a(0x42a1)](_0xd6452a(0x53f)),document[_0xd6452a(0xc61)](_0xd6452a(0x364d),_0x145595),_0x13c576['innerHTML']=_0xd6452a(0x3832)):(_0x3259cf[_0xd6452a(0x1745)][_0xd6452a(0x362c)]('hidden'),document[_0xd6452a(0x2cf7)]('click',_0x145595),function(_0x7c083c){const _0x499258=_0xd6452a;var _0x38aac6=_0x7c083c['querySelector']('iframe');_0x38aac6&&_0x38aac6[_0x499258(0x3d6)]['includes'](_0x499258(0x4f83))&&_0x38aac6[_0x499258(0x973)][_0x499258(0x3570)](JSON[_0x499258(0x3cbd)]({'event':'command','func':'pauseVideo'}),'*');}(_0x3259cf),_0x13c576[_0xd6452a(0x3cdf)]='');}function _0x2eaeba(_0x4bdf96){const _0x385c82=_0x37e46c,_0x1f25a3=(function(){const _0x4d887d=a0_0x11e7,_0x10c085=document[_0x4d887d(0x3ac9)](_0x4d887d(0x4c88));_0x10c085['className']=_0x4d887d(0x4e69),_0x10c085['id']=_0x4d887d(0x1ce7);const _0x1b613d=document['createElement'](_0x4d887d(0x4c88));_0x1b613d['className']=_0x4d887d(0x6b7),_0x1b613d[_0x4d887d(0x1a84)][_0x4d887d(0x3974)]=_0x4d887d(0x49b0),_0x1b613d['id']='helpModalContainer',_0x1b613d['style'][_0x4d887d(0x17d2)]='100%',_0x1b613d[_0x4d887d(0x4acd)]=_0x5e367b=>_0x5e367b[_0x4d887d(0x2e45)]();const 
_0x1bd7bd=document[_0x4d887d(0x3ac9)](_0x4d887d(0x4c88));_0x1bd7bd[_0x4d887d(0x1cea)]=_0x4d887d(0x49ae),_0x1bd7bd[_0x4d887d(0x1a84)][_0x4d887d(0x3d28)]=_0x4d887d(0xef8);const _0x399cc1=document['createElement']('div');_0x399cc1['className']=_0x4d887d(0x2eb1),_0x399cc1[_0x4d887d(0x3cdf)]=_0x4d887d(0x3e24),_0x1bd7bd[_0x4d887d(0x335a)](_0x399cc1),_0x1b613d['appendChild'](_0x1bd7bd);const _0x41cc59=document[_0x4d887d(0x3ac9)]('div');_0x41cc59[_0x4d887d(0x1cea)]='flex\x20justify-between\x20items-center\x20p-4';const _0x57c4bb=document[_0x4d887d(0x3ac9)](_0x4d887d(0x18b7));_0x57c4bb['innerText']=_0x4d887d(0x9b0),_0x57c4bb[_0x4d887d(0x1cea)]=_0x4d887d(0x2b72),_0x57c4bb['id']=_0x4d887d(0xe39),_0x57c4bb[_0x4d887d(0x4acd)]=()=>{const _0xc591f5=_0x4d887d;window[_0xc591f5(0x167e)][_0xc591f5(0xe63)]=_0xc591f5(0x63a);};const _0x45849f=document[_0x4d887d(0x3ac9)](_0x4d887d(0x18b7));return _0x45849f[_0x4d887d(0x4eb2)]=_0x4d887d(0xe7f),_0x45849f[_0x4d887d(0x1cea)]=_0x4d887d(0x446),_0x45849f['id']='closeButtonHelpModal',_0x41cc59['appendChild'](_0x57c4bb),_0x41cc59[_0x4d887d(0x335a)](_0x45849f),_0x1b613d[_0x4d887d(0x335a)](_0x41cc59),_0x10c085[_0x4d887d(0x335a)](_0x1b613d),_0x10c085;}()),_0xe529a1=_0x4bdf96[_0x385c82(0x2842)](_0x385c82(0x2b15));_0xe529a1[_0x385c82(0x2b31)](_0x1f25a3)||_0xe529a1[_0x385c82(0x335a)](_0x1f25a3),function(_0x13858c){const _0x580214=_0x385c82,_0xdd25bd=_0x13858c[_0x580214(0x2842)](_0x580214(0x4f6e));_0xdd25bd[_0x580214(0x4acd)]=()=>_0x1ab784(_0x13858c);}(_0x4bdf96),document[_0x385c82(0xc61)]('keydown',function(_0x49f992){const _0x22bdfa=_0x385c82;_0x22bdfa(0x153e)!==_0x49f992[_0x22bdfa(0x49fe)]||_0x1f25a3[_0x22bdfa(0x1745)]['contains'](_0x22bdfa(0x53f))||_0x1f25a3[_0x22bdfa(0x1745)][_0x22bdfa(0x362c)](_0x22bdfa(0x53f));}),_0x1f25a3['querySelector'](_0x385c82(0x24b4))['onclick']=()=>_0x1ab784(_0x4bdf96);}function _0x145595(_0x1a46c5){const 
_0x5a2e03=_0x37e46c,_0x10321b=document[_0x5a2e03(0x2842)](_0x5a2e03(0x2b15)),_0x25ff6f=document[_0x5a2e03(0x2842)](_0x5a2e03(0x1292));!_0x10321b||_0x10321b[_0x5a2e03(0x2b31)](_0x1a46c5['target'])||_0x25ff6f[_0x5a2e03(0x2b31)](_0x1a46c5[_0x5a2e03(0x1bc7)])||_0x1ab784(document);}const _0x2c9456=new Worker(new URL(_0x2d589b['p']+_0x2d589b['u'](0x8e),_0x2d589b['b'])),_0x39c3c4=new Map();let _0x10f42a=0x0;_0x2c9456[_0x37e46c(0xe36)]=function(_0x568c38){const _0x357df5=_0x37e46c,_0x3af8d7=_0x39c3c4[_0x357df5(0xf9e)](_0x568c38['data']['id']);'success'===_0x568c38['data']['status']&&_0x3af8d7?_0x3af8d7['resolve'](_0x568c38[_0x357df5(0x5139)][_0x357df5(0x5139)]):_0x3af8d7&&_0x3af8d7[_0x357df5(0x26fe)](_0x568c38[_0x357df5(0x5139)][_0x357df5(0x4cdf)]),_0x39c3c4[_0x357df5(0x5be)](_0x568c38[_0x357df5(0x5139)]['id']);},new class{constructor(_0x2cd44e,_0x2ad355,_0x5c2d5b='\x0a\x0a'){const _0x261a71=_0x37e46c;this['interval']=_0x2cd44e,this['overlap']=_0x2ad355,this[_0x261a71(0x6e7)]=_0x5c2d5b;}['split'](_0x445620){const _0x55320c=_0x37e46c,_0x2d21dd=[],_0x1bb073=_0x445620[_0x55320c(0x1117)](this[_0x55320c(0x6e7)]);let _0x34cb04=[],_0x1f9a7c=0x0;return _0x1bb073[_0x55320c(0xa21)](_0x30bb13=>{const _0x1d248a=_0x55320c;(_0x30bb13[_0x1d248a(0x2d96)](/\w+|[^\w\s]+/g)||[])[_0x1d248a(0xa21)](_0x196dff=>{const _0x3d411d=_0x1d248a;_0x1f9a7c+_0x196dff['length']+0x1>this[_0x3d411d(0x15ab)]&&(_0x2d21dd[_0x3d411d(0x1715)](_0x34cb04[_0x3d411d(0x3541)]('\x20')),_0x34cb04=_0x34cb04[_0x3d411d(0x384c)](-this[_0x3d411d(0x13ed)]),_0x1f9a7c=_0x34cb04['join']('\x20')[_0x3d411d(0x1b19)]+0x1),_0x34cb04[_0x3d411d(0x1715)](_0x196dff),_0x1f9a7c+=_0x196dff[_0x3d411d(0x1b19)]+0x1;});}),_0x34cb04[_0x55320c(0x1b19)]>0x0&&_0x2d21dd['push'](_0x34cb04[_0x55320c(0x3541)]('\x20')),_0x2d21dd;}}(0xc8,0x14,'\x0a\x0a');const _0x25ac05=navigator['hardwareConcurrency']||0x2,_0x2b788f=[];for(let _0x2dcf8f=0x0;_0x2dcf8f<_0x25ac05;_0x2dcf8f++){const _0x8c1116=new Worker(new 
URL(_0x2d589b['p']+_0x2d589b['u'](0x2f9),_0x2d589b['b']));_0x8c1116[_0x37e46c(0xe36)]=_0x50e291,_0x2b788f[_0x37e46c(0x1715)](_0x8c1116);}function _0x50e291(_0x52a492){const _0x5d6922=_0x37e46c,_0xc31f30=_0x39c3c4[_0x5d6922(0xf9e)](_0x52a492[_0x5d6922(0x5139)]['id']);_0xc31f30&&(_0x5d6922(0x3754)===_0x52a492['data']['status']?_0xc31f30[_0x5d6922(0x1521)](_0x52a492[_0x5d6922(0x5139)]['data']):_0xc31f30[_0x5d6922(0x26fe)](_0x52a492['data'][_0x5d6922(0x4cdf)]),_0x39c3c4['delete'](_0x52a492[_0x5d6922(0x5139)]['id']));}async function _0x4f17f5(_0x566735,_0x4126b8,_0xee977b,_0x2cdfbb=!0x0){return new Promise((_0x33fec3,_0x2ec75d)=>{const _0x2d2117=a0_0x11e7,_0x307656=_0x10f42a++;_0x39c3c4[_0x2d2117(0x1fa)](_0x307656,{'resolve':_0x33fec3,'reject':_0x2ec75d});const _0x3efbe1={'id':_0x307656,'command':_0x566735,'useLocal':_0x2cdfbb,'token':_0xee977b,..._0x4126b8};switch(_0x566735){case _0x2d2117(0x1c36):case _0x2d2117(0x1d3a):case _0x2d2117(0x3190):case _0x2d2117(0x5be):case _0x2d2117(0x38d6):_0x2c9456[_0x2d2117(0x3570)](_0x3efbe1);break;default:_0x2ec75d('Unknown\x20action'),_0x39c3c4[_0x2d2117(0x5be)](_0x307656);}});}let _0xedabf3;function _0xadc701(_0x553a08){const _0x26dc29=_0x37e46c;_0xedabf3=parseInt(_0x553a08[_0x26dc29(0x384c)](-0x1)),_0x4f17f5(_0x26dc29(0x1c36),{'dbName':_0x553a08},'',!0x1);}function _0xf8dc17(_0x1411da,_0x502406,_0x5c5203,_0x4c8293){const _0x13c925=_0x37e46c;_0xedabf3!==_0x1411da&&_0xadc701('vectorDB_'+_0x1411da),_0x4f17f5(_0x13c925(0x1d3a),{'chatId':_0x1411da,'role':_0x502406,'text':''+_0x5c5203},_0x4c8293,!0x1);}function _0x487f9d(_0x5a59d2,_0x582d20){const _0x504472=_0x37e46c,_0x610c8=_0x582d20[_0x504472(0x492f)]('.'+_0x5a59d2);let _0x4323bc=!0x1;_0x610c8[_0x504472(0xa21)](_0x588ce4=>{const 
_0x206356=_0x504472,_0x5a32fc=document[_0x206356(0x3ac9)](_0x206356(0x4c88));_0x5a32fc[_0x206356(0x1cea)]=_0x206356(0x1827),_0x5a32fc['role']=_0x206356(0x14a9),_0x5a32fc[_0x206356(0x3cdf)]=_0x206356(0x45b4),_0x588ce4[_0x206356(0x1f1f)](_0x5a32fc),_0x4323bc=!0x0;}),_0x4323bc&&function(_0x5e3066,_0x4819b6){const _0x426afd=_0x504472,_0x50f1b4=_0x4819b6[_0x426afd(0x492f)]('#'+_0x5e3066);_0x50f1b4[_0x426afd(0xa21)](_0x5dc6c0=>{const _0x3b0034=_0x426afd;_0x5dc6c0[_0x3b0034(0x42a1)]();});}(_0x5a59d2,_0x582d20);}const _0x566371=_0x37e46c(0x404b);let _0x631a4f,_0x53193e,_0x4e0808,_0x17f678,_0x50f50c,_0x5c71e9=_0x18d7d6(_0x37e46c(0xf68)),_0x5b280a=_0x18d7d6(_0x37e46c(0x3277)),_0x2864ce=_0x18d7d6(_0x37e46c(0x1cff)),_0x4bd324='',_0x41252b='',_0x18bf1f=_0x4714de,_0x47e824=0x0;async function _0x51bf6d(){const _0x593041=_0x37e46c,_0x28bec3=document[_0x593041(0x3ac9)](_0x593041(0x4c88));_0x28bec3[_0x593041(0x1a84)]['position']=_0x593041(0x1aaf),_0x28bec3[_0x593041(0x1a84)][_0x593041(0x24df)]='9999',document[_0x593041(0x4f1a)][_0x593041(0x335a)](_0x28bec3),_0x53193e=_0x28bec3[_0x593041(0x4c68)]({'mode':'open'});const _0xe08589=document[_0x593041(0x3ac9)](_0x593041(0x4b32));_0xe08589[_0x593041(0x3f3a)]=_0x593041(0x2db),_0xe08589['href']=_0x593041(0x4c61),_0x53193e[_0x593041(0x335a)](_0xe08589);const _0x5872fc=document['createElement'](_0x593041(0x1a84));_0x5872fc[_0x593041(0x2f80)]=_0x593041(0x1e4a)+_0x4bd3f3()+_0x593041(0x3dc7),_0x53193e['appendChild'](_0x5872fc),_0x53193e[_0x593041(0x3cdf)]+=_0x2a6bba,window[_0x593041(0xc61)](_0x593041(0x259),_0x4e571e=>{const _0x2c1ec9=_0x593041;_0x4e571e[_0x2c1ec9(0x212b)]&&_0x4e571e[_0x2c1ec9(0x212b)]['containerId']&&(_0x4e0808=function(_0x427c63,_0x325b0c){const _0x16baed=_0x2c1ec9,_0x477f45=_0x427c63['querySelector'](_0x16baed(0x745));_0x477f45&&(_0x477f45[_0x16baed(0x3cdf)]='');}(_0x53193e,_0x4e571e[_0x2c1ec9(0x212b)]['containerId']));}),window[_0x593041(0xc61)](_0x593041(0x5216),async _0x2aa4ed=>{const 
_0x22b954=_0x593041;_0x5a6252(_0x53193e);const _0x14fd5=_0x2aa4ed['detail'][_0x22b954(0xcfc)],_0x365f0c=_0x2aa4ed[_0x22b954(0x212b)][_0x22b954(0x4006)],_0x58e0d1=_0x2aa4ed['detail'][_0x22b954(0x4fe0)],_0x5a18e5=_0x2aa4ed[_0x22b954(0x212b)][_0x22b954(0x3610)];let _0x72ce8b=_0x2aa4ed[_0x22b954(0x212b)][_0x22b954(0x40f7)]||!0x1;const _0x2d2ea2=_0x2aa4ed[_0x22b954(0x212b)][_0x22b954(0x2f1b)]||!0x1;switch(_0x14fd5){case _0x22b954(0xf68):await async function(_0x2b00cd,_0x89623b=[_0x22b954(0x3eb5)],_0x12df0c=0x3,_0x4cc550=0x3){const _0xb1cfd2=_0x22b954;_0x5c71e9[_0xb1cfd2(0x2ddd)](),_0x5c71e9[_0xb1cfd2(0x16f6)](_0xb1cfd2(0x3567),_0x2b00cd),_0x5c71e9[_0xb1cfd2(0x16f6)]('understanding',_0x12df0c),_0x5c71e9[_0xb1cfd2(0x16f6)](_0xb1cfd2(0x2be8),_0x4cc550);let _0x1d6127=_0x5c71e9[_0xb1cfd2(0x170d)](_0xb1cfd2(0x4d19));const _0x45ef72=_0x18bf1f+_0xb1cfd2(0xc98)+_0x2b00cd+_0xb1cfd2(0x4c0b)+_0x12df0c+_0xb1cfd2(0x3a42)+_0x1d6127;_0x5c71e9[_0xb1cfd2(0x16f6)]('prompt',_0x45ef72);const _0x1af3b8=_0x5c71e9[_0xb1cfd2(0x3377)]();_0x1af3b8[_0xb1cfd2(0x1556)]=_0x17f678,_0x4bd324='';const _0x3c976e=_0x4bd324+_0x566371,_0x2ff866=_0x5bc834(_0x53193e,'ai'),_0x150a4f=_0x1e5a00(_0x2ff866);_0x802b77(_0x54c8b7,_0x150a4f);let _0x4dd9a0=0x0;_0x29a6aa(_0x150a4f,_0x4dd9a0),_0x2eb380(_0x3c976e,_0x2ff866);let _0x4848f7=0x0;try{for await(let _0x10acc0 of _0xe54f68(_0x1af3b8,_0x631a4f))0x0===_0x4848f7&&(_0x4dd9a0+=0x1,_0x29a6aa(_0x150a4f,_0x4dd9a0),_0x4bd324+=_0xb1cfd2(0x4aa5)+_0x2b00cd+_0xb1cfd2(0xe0a)),_0x4848f7+=0x1,_0x4dd9a0+=0x1,_0x29a6aa(_0x150a4f,_0x4dd9a0),_0x4bd324+=_0x10acc0,_0x2eb380(_0x4bd324,_0x2ff866),_0x5a6252(_0x53193e);_0x4bd324=_0x4bd324[_0xb1cfd2(0x741)](/\t/g,'\x20\x20\x20\x20'),_0x2eb380(_0x4bd324,_0x2ff866);}catch(_0x1f52a6){_0x57a9ad(_0x53193e,_0x1f52a6+_0xb1cfd2(0x386a),_0xb1cfd2(0x3d85));}_0xf8dc17(_0x4e0808||0x1,'assistant',_0x4bd324,_0x631a4f);const 
_0x406fca=_0x40bfe6(_0x53193e,0x0,[]);_0x370a32(_0x53193e,_0x2ff866,_0x406fca),_0x4a9b57(_0x2ff866,'progress'),_0x4ac7fa(_0x2ff866),_0x487f9d(_0xb1cfd2(0x2105),_0x53193e),_0x5a6252(_0x53193e);}(_0x365f0c,_0x58e0d1),_0xf8dc17(_0x4e0808||0x1,_0x22b954(0x4b31),_0x365f0c,_0x631a4f),_0x47e824+=0x1;break;case _0x22b954(0x1cff):await async function(_0x1cf742,_0x51c772=!0x1,_0x2fb0a2=0x3,_0x344bb3=0x3){const _0x5027c9=_0x22b954;let _0x1439d8=_0x1cf742;const _0x50b4be=_0x5bc834(_0x53193e,'human');_0x51c772&&(_0x1439d8=function(_0x566629){const _0x21a151=a0_0x11e7,_0x2a1d73=/~\+~([^~]+?)~\+~/g;let _0x2fe630,_0xe58ff=[];for(;null!==(_0x2fe630=_0x2a1d73[_0x21a151(0x198d)](_0x566629));)_0xe58ff[_0x21a151(0x1715)](_0x2fe630[0x1]['trim']());return _0xe58ff;}(_0x1439d8),_0x1439d8='\x0a:::\x20spoiler\x20quote\x0a*'+_0x1439d8[0x0]+_0x5027c9(0x374f)+_0x1439d8[0x1]);const _0x1d4347=function(_0x30eee8){const _0x5608ef=_0x5027c9,_0x2383a7=_0x9f2120[_0x5608ef(0x17d9)](_0x30eee8),_0x103e38=document[_0x5608ef(0x3ac9)](_0x5608ef(0x4c88));return _0x103e38[_0x5608ef(0x3cdf)]=_0x2383a7,_0x230e00(_0x103e38),_0x103e38[_0x5608ef(0x3cdf)];}(_0x1439d8);_0x50b4be[_0x5027c9(0x2842)]('p')[_0x5027c9(0x3cdf)]=_0x1d4347,_0x4bd324='';const _0x592424=_0x5bc834(_0x53193e,'ai'),_0x3c4dca=_0x1e5a00(_0x592424);_0x802b77(_0x24a2f2,_0x3c4dca);let _0x310f6c=0x0;_0x29a6aa(_0x3c4dca,_0x310f6c),_0x2eb380(_0x566371,_0x592424),_0x5a6252(_0x53193e);let _0xf40aee=[],_0x15d30c=[];const _0x525907=0x7d0,_0x3558af=new Promise((_0x5d3545,_0x57aec4)=>setTimeout(()=>_0x57aec4(new Error(_0x5027c9(0x4b6c))),_0x525907));try{_0x47e824>0x0&&(_0xf40aee=await async function(_0x488cea,_0x271d14,_0xf21d69=0x5,_0x540061){const _0xd1a494=_0x5027c9;let _0x2bf172,_0x5512ea='vectorDB_'+_0x488cea;return _0xedabf3!==_0x488cea&&(_0x5512ea=_0xd1a494(0x23c3)+_0x488cea,_0xadc701(_0x5512ea)),_0x2bf172=await _0x4f17f5(_0xd1a494(0x3190),{'chatId':_0x488cea,'text':_0x271d14,'k':_0xf21d69},_0x540061,!0x1),_0x2bf172=function(_0x733a2f){const 
_0xbb1f78=_0xd1a494,_0x1c3209=[];let _0x2bd6aa=null,_0x2e4852='';return _0x733a2f[_0xbb1f78(0xa21)](_0x3e20d4=>{const _0xea6c66=_0xbb1f78,{role:_0x5cd6e6,text:_0x14bc4d}=_0x3e20d4[_0xea6c66(0x20c7)];_0x5cd6e6!==_0x2bd6aa?(null!==_0x2bd6aa&&_0x1c3209[_0xea6c66(0x1715)]({'role':_0x2bd6aa,'content':_0x2e4852}),_0x2bd6aa=_0x5cd6e6,_0x2e4852=_0x14bc4d):_0x2e4852+='\x20'+_0x14bc4d;}),null!==_0x2bd6aa&&_0x1c3209[_0xbb1f78(0x1715)]({'role':_0x2bd6aa,'content':_0x2e4852}),_0x1c3209;}(_0x2bf172),_0x2bf172;}(_0x4e0808||0x1,_0x1cf742,0x5,_0x631a4f)),_0x15d30c=await Promise[_0x5027c9(0x3445)]([_0x173a68(_0x1cf742,_0x631a4f),_0x3558af]),_0x310f6c+=0x1,_0x29a6aa(_0x3c4dca,_0x310f6c);}catch(_0x221ff5){0x0===_0x15d30c['length']&&(_0x15d30c=[{'text':_0x5027c9(0x3b4f),'url':_0x5027c9(0x3eb5)}],_0x57a9ad(_0x53193e,_0x5027c9(0xc8d),'error'));}let _0x54ede7=[];const _0x341316=_0x15d30c[_0x5027c9(0x4833)](_0x4ce632=>(_0x54ede7['push'](_0x4ce632['url']),_0x4ce632[_0x5027c9(0x4006)]))[_0x5027c9(0x3541)]('\x0a');let _0x1cb711='',_0x197eda=_0x2864ce['get_field']('prompt');const _0x445163=_0x18bf1f+_0x5027c9(0x8a3)+_0x341316+_0x5027c9(0x4763)+_0x1cf742+_0x5027c9(0x3a42)+_0x197eda,_0x5c4e28={'prompt':_0x445163,'messages':_0xf40aee};_0x5c4e28[_0x5027c9(0x1556)]=_0x17f678,_0x310f6c+=0x1,_0x29a6aa(_0x3c4dca,_0x310f6c);let _0x3941c2=0x0;_0x310f6c+=0x1,_0x29a6aa(_0x3c4dca,_0x310f6c);try{for await(let _0x5cf82d of _0xe54f68(_0x5c4e28,_0x631a4f,!0x0,!0x0))0x0===_0x3941c2&&(_0x310f6c+=0x1,_0x29a6aa(_0x3c4dca,_0x310f6c)),_0x3941c2+=0x1,_0x1cb711+=_0x5cf82d,_0x2eb380(_0x1cb711,_0x592424);}catch(_0x539e6b){_0x57a9ad(_0x53193e,_0x539e6b+'.\x20Please\x20try\x20again.','error');}_0xf8dc17(_0x4e0808||0x1,_0x5027c9(0x27fd),_0x1cb711,_0x631a4f),_0x4bd324=_0x1cb711+_0x5027c9(0x51d0);const _0x5b4997=_0x40bfe6(_0x53193e,0x0,_0x54ede7);_0x370a32(_0x53193e,_0x592424,_0x5b4997),await _0x4a9b57(_0x592424,'progress'),_0x4ac7fa(_0x592424),_0x5a6252(_0x53193e),_0x487f9d('skeleton-loader',_0x53193e),await 
_0x3debb3();}(_0x365f0c,_0x2d2ea2),_0xf8dc17(_0x4e0808||0x1,_0x22b954(0x4b31),_0x365f0c,_0x631a4f),_0x47e824+=0x1;break;case _0x22b954(0x3277):await async function(_0x16b3ec,_0x622afc=0x3,_0x3c1190=0x14){const _0x25d994=_0x22b954;_0x5b280a['set_field']('quote',_0x16b3ec),_0x5b280a[_0x25d994(0x16f6)](_0x25d994(0xbf1),_0x622afc),_0x5b280a[_0x25d994(0x16f6)](_0x25d994(0x2be8),_0x3c1190);const _0x2ea42f=_0x5b280a[_0x25d994(0x3377)]();_0x2ea42f[_0x25d994(0x1556)]=_0x25d994(0x55a);const _0x4ab51c=_0x5bc834(_0x53193e,'ai');let _0x506b0b='';const _0x18524c=_0x1e5a00(_0x4ab51c);_0x802b77(_0x5e5ccb,_0x18524c);let _0x37362e=0x0;_0x29a6aa(_0x18524c,_0x37362e),_0x2eb380(_0x566371,_0x4ab51c),_0x37362e+=0x1,_0x29a6aa(_0x18524c,_0x37362e);try{if(_0x506b0b=await async function(_0x2751dc,_0x452a8a){const _0x495e17=_0x25d994;try{const _0x31dc69=await fetch(_0x2c6561,{'method':'POST','headers':{'Content-Type':_0x495e17(0x21d9),'Authorization':_0x495e17(0x4942)+_0x452a8a},'body':JSON['stringify'](_0x2751dc)});if(!_0x31dc69['ok'])throw new Error(_0x495e17(0x3b55)+_0x31dc69[_0x495e17(0x45c6)]);return await _0x31dc69[_0x495e17(0x3289)]();}catch(_0x155736){}}(_0x2ea42f,_0x631a4f),_0x23eb08(_0x4ab51c),_0x37362e+=0x1,_0x29a6aa(_0x18524c,_0x37362e),Array[_0x25d994(0x22b4)](_0x506b0b))_0x4973e0(_0x4ab51c,_0x506b0b);else{if(null!==_0x506b0b&&_0x25d994(0x20c7)==typeof _0x506b0b){const _0x12463f=Object[_0x25d994(0x1ea9)](_0x506b0b);if(_0x12463f['length']>0x0&&_0x506b0b[_0x12463f[0x0]][_0x25d994(0x1b19)]>0x0){const _0x4973aa=_0x12463f[0x0];_0x4973e0(_0x4ab51c,_0x506b0b[_0x4973aa]);}else _0x506b0b='0',_0x4973e0(_0x4ab51c,_0x506b0b),_0x57a9ad(_0x53193e,'AI\x20returned\x20an\x20unexpected\x20response.\x20Building\x20quiz\x20locally.',_0x25d994(0x5e5));}else _0x506b0b=_0xc9afe9(_0x16b3ec),_0x4973e0(_0x4ab51c,_0x506b0b),_0x57a9ad(_0x4ab51c,_0x25d994(0x2667),_0x25d994(0x5e5));}}catch(_0x58f059){const 
_0x3451af=_0xc9afe9(_0x16b3ec);_0x23eb08(_0x4ab51c),_0x4973e0(_0x4ab51c,_0x3451af),_0x57a9ad(_0x4ab51c,_0x25d994(0x2667),_0x25d994(0x5e5));}_0xf8dc17(_0x4e0808||0x1,_0x25d994(0x27fd),_0x506b0b,_0x631a4f),_0x37362e+=0x1,_0x29a6aa(_0x18524c,_0x37362e),_0x37362e+=0x1,_0x29a6aa(_0x18524c,_0x37362e),_0x4a9b57(_0x4ab51c,_0x25d994(0x357b)),_0x5a6252(_0x53193e),_0x487f9d(_0x25d994(0x2105),_0x53193e),await _0x3debb3();}(_0x365f0c),_0xf8dc17(_0x4e0808||0x1,_0x22b954(0x4b31),_0x365f0c,_0x631a4f),_0x47e824+=0x1;break;case'general':await async function(_0x4c5533,_0x43c3b6=['https://harvard-edge.github.io/cs249r_book/'],_0x2bd295=0x3,_0x4a1009=0x3){const _0x12a8ae=_0x22b954;_0x5c71e9['set_field']('quote',''),_0x5c71e9[_0x12a8ae(0x16f6)](_0x12a8ae(0xbf1),_0x2bd295),_0x5c71e9[_0x12a8ae(0x16f6)](_0x12a8ae(0x2be8),_0x4a1009);let _0x22566b=_0x5c71e9[_0x12a8ae(0x170d)](_0x12a8ae(0x4d19));_0x22566b=_0x4c5533,_0x5c71e9[_0x12a8ae(0x16f6)]('prompt',_0x18bf1f+'\x20'+_0x22566b);const _0x23fd40=_0x5c71e9['return_all_fields']();_0x23fd40[_0x12a8ae(0x1556)]=_0x17f678,_0x4bd324='';const _0x2aa975=_0x4bd324+_0x566371,_0x403374=_0x5bc834(_0x53193e,'ai'),_0x264f77=_0x1e5a00(_0x403374);_0x802b77(_0x36c8bc,_0x264f77);let _0x3d3429=0x0;_0x29a6aa(_0x264f77,_0x3d3429),_0x2eb380(_0x2aa975,_0x403374),_0x3d3429+=0x1,_0x29a6aa(_0x264f77,_0x3d3429);let _0x32773e=0x0;_0x3d3429+=0x1,_0x29a6aa(_0x264f77,_0x3d3429);try{for await(let _0x23d8da of _0xe54f68(_0x23fd40,_0x631a4f))_0x32773e+=0x1,_0x4bd324+=_0x23d8da,_0x2eb380(_0x4bd324,_0x403374);}catch(_0x3b1d67){_0x57a9ad(_0x53193e,_0x3b1d67+_0x12a8ae(0x386a),_0x12a8ae(0x3d85));}_0xf8dc17(_0x4e0808||0x1,_0x12a8ae(0x27fd),_0x4bd324,_0x631a4f),_0x4bd324+=_0x12a8ae(0x1272),_0x3d3429+=0x1,_0x29a6aa(_0x264f77,_0x3d3429);const _0x14f74c=_0x40bfe6(_0x53193e,0x0,[]);_0x370a32(_0x53193e,_0x403374,_0x14f74c),_0x4a9b57(_0x403374,_0x12a8ae(0x357b)),_0x4ac7fa(_0x403374),_0x5a6252(_0x53193e),_0x487f9d(_0x12a8ae(0x2105),_0x53193e),await 
_0x3debb3();}(_0x365f0c,_0x58e0d1),_0xf8dc17(_0x4e0808||0x1,_0x22b954(0x4b31),_0x365f0c,_0x631a4f),_0x47e824+=0x1;break;case _0x22b954(0x14bf):await _0x1e6ad4(_0x365f0c,[_0x22b954(0x3eb5)],0x3,0x3,_0x72ce8b),_0xf8dc17(_0x4e0808||0x1,_0x22b954(0x4b31),_0x365f0c,_0x631a4f),_0x47e824+=0x1;break;case _0x22b954(0x2e7):_0x18bf1f=_0x4714de+_0x365f0c;break;case _0x22b954(0x3610):_0x30b9db=_0x5a18e5[_0x22b954(0x204)],_0x4ea4e8=!_0x30b9db,_0x17f678=_0x5a18e5[_0x22b954(0x3e60)];break;default:_0x57a9ad(_0x53193e,_0x22b954(0x233c)+_0x14fd5,_0x22b954(0x5e5));}var _0x30b9db;}),_0x188dc7(_0x53193e),_0x5bb300=_0x53193e,function(_0x452e20){const _0x1862c3=_0x593041,_0x506f74=_0x452e20[_0x1862c3(0x2842)](_0x1862c3(0x3d20)),_0x41edba=_0x452e20[_0x1862c3(0x2842)]('#settings-btn'),_0x152130=_0x506f74[_0x1862c3(0x2842)]('#close-btn');let _0x2ffef2=_0x452e20['querySelector'](_0x1862c3(0x2e11));if(!_0x2ffef2){const _0x4cae48=_0x452e20[_0x1862c3(0x2842)]('#text-selection-menu');_0x2ffef2=document[_0x1862c3(0x3ac9)]('div'),_0x2ffef2['classList']['add'](_0x1862c3(0x2afb)),_0x2ffef2[_0x1862c3(0x1a84)][_0x1862c3(0x32f)]=_0x1862c3(0x1eeb),_0x4cae48[_0x1862c3(0x335a)](_0x2ffef2);}const _0x4b38e0=()=>{const _0x562cf1=_0x1862c3;_0x506f74[_0x562cf1(0x1a84)][_0x562cf1(0x12ca)]='none',_0x2ffef2[_0x562cf1(0x1a84)][_0x562cf1(0x12ca)]='none';};_0x41edba[_0x1862c3(0xc61)](_0x1862c3(0x364d),()=>{const _0x2f048a=_0x1862c3,_0x164f96=_0x2f048a(0x1f2e)===_0x506f74['style'][_0x2f048a(0x12ca)];_0x506f74['style'][_0x2f048a(0x12ca)]=_0x164f96?_0x2f048a(0x28b):'block',_0x2ffef2[_0x2f048a(0x1a84)][_0x2f048a(0x12ca)]=_0x164f96?_0x2f048a(0x28b):_0x2f048a(0x1f2e);}),_0x152130[_0x1862c3(0xc61)]('click',_0x4b38e0),_0x2ffef2[_0x1862c3(0xc61)](_0x1862c3(0x364d),_0x4b38e0),_0x506f74[_0x1862c3(0xc61)](_0x1862c3(0x364d),_0x41f134=>_0x41f134[_0x1862c3(0x2e45)]()),window[_0x1862c3(0xc61)](_0x1862c3(0x364d),_0x3e6d34=>{const 
_0x233582=_0x1862c3;_0x3e6d34[_0x233582(0x1bc7)]===_0x506f74&&_0x4b38e0();}),window[_0x1862c3(0xc61)](_0x1862c3(0x301f),_0x3a11eb=>{const _0x3ea8b2=_0x1862c3;_0x3ea8b2(0x153e)===_0x3a11eb[_0x3ea8b2(0x49fe)]&&_0x4b38e0();});}(_0x53193e),function(_0x561f8d){const _0x536ab6=_0x593041,_0x1c35c1=document[_0x536ab6(0x3ac9)](_0x536ab6(0x15c6));_0x1c35c1[_0x536ab6(0x3cdf)]=_0x2f6a27;const _0x5e7f08=_0x561f8d[_0x536ab6(0x2842)](_0x536ab6(0x508d));_0x5e7f08&&(_0x5e7f08['innerHTML']='',_0x5e7f08[_0x536ab6(0x335a)](_0x1c35c1['content'][_0x536ab6(0x51c2)](!0x0)));}(_0x53193e),_0x227b35(_0x53193e),_0x49164e(_0x53193e),function(_0x3414bc){const _0xb08c43=_0x593041,_0x470d63=_0x3414bc[_0xb08c43(0xffd)]('menu-toggle');_0x470d63&&_0x470d63[_0xb08c43(0xc61)](_0xb08c43(0x1efc),function(){const _0x3ce028=_0xb08c43;this[_0x3ce028(0x494d)]&&function(_0x31ca58,_0x352972='text-selection-menu-highlight'){const _0x355e07=_0x3ce028,_0x49d9ab=_0x31ca58[_0x355e07(0x2842)]('#'+_0x352972);_0x49d9ab&&(_0x49d9ab[_0x355e07(0x1745)][_0x355e07(0x2b31)](_0x355e07(0x53f))||_0x49d9ab[_0x355e07(0x1745)][_0x355e07(0x362c)](_0x355e07(0x53f)));}(_0x3414bc);});}(_0x53193e),_0x54d9cc(_0x53193e),function(_0x5da5d7){const _0x56b52a=_0x593041,_0xc61400=document[_0x56b52a(0x3ac9)](_0x56b52a(0x15c6));_0xc61400[_0x56b52a(0x3cdf)]=_0x3d63f1;const _0x48aafa=_0x5da5d7[_0x56b52a(0x2842)](_0x56b52a(0x2e27));_0x48aafa&&_0x48aafa['appendChild'](_0xc61400[_0x56b52a(0x484f)]['cloneNode'](!0x0));}(_0x53193e),function(_0xd1203d){const _0x3e665a=_0x593041,_0x21e216=_0xd1203d[_0x3e665a(0x2842)](_0x3e665a(0xcd1)),_0xd7d90c=_0xd1203d['querySelector']('#closeModal'),_0x41a64e=_0xd1203d[_0x3e665a(0x2842)]('#myModal');_0x21e216[_0x3e665a(0xc61)](_0x3e665a(0x364d),()=>{const _0x4be8d6=_0x3e665a;_0x41a64e[_0x4be8d6(0x1745)][_0x4be8d6(0x42a1)](_0x4be8d6(0x53f));}),_0xd7d90c[_0x3e665a(0xc61)](_0x3e665a(0x364d),()=>{const _0x4e19d4=_0x3e665a;_0x41a64e[_0x4e19d4(0x1745)][_0x4e19d4(0x362c)](_0x4e19d4(0x53f));});}(_0x53193e);const 
_0x571482=new _0x20129f(_0x53193e);_0x571482['init'](),_0x18bf1f=_0x571482['generateSystemPrompt'](),_0x4e0808=await async function(_0x1c9322){const _0x237442=_0x593041;let _0x2e0697;try{const _0x2f5eb1=(function(){const _0x2847b9=localStorage['getItem']('mostRecentID');return _0x2847b9?parseInt(_0x2847b9,0xa):null;}());null!==_0x2f5eb1&&_0x366533(_0x1c9322,_0x2f5eb1)['then'](()=>{})['catch'](_0x2cf2b3=>{});}catch(_0x1f996b){}return _0xadc701(_0x237442(0x23c3)+(_0x2e0697||0x1)),_0x2e0697;}(_0x53193e),_0x2eaeba(_0x53193e),function(_0x5c26ee,_0x12b959){const _0x21a602=_0x593041;if(_0x12b959){const _0x35dd9f=_0x5c26ee[_0x21a602(0xffd)](_0x21a602(0x17c7));return _0x35dd9f[_0x21a602(0x1a84)][_0x21a602(0x12ca)]=_0x21a602(0x1f2e),setTimeout(()=>{const _0x481b56=_0x21a602;_0x35dd9f[_0x481b56(0x1a84)][_0x481b56(0x12ca)]=_0x481b56(0x28b);},0xbb8),_0x12b959;}{const _0x498c04=_0x5c26ee[_0x21a602(0xffd)](_0x21a602(0x411c));_0x498c04[_0x21a602(0x1a84)][_0x21a602(0x12ca)]=_0x21a602(0x1f2e),setTimeout(()=>{const _0x37faef=_0x21a602;_0x498c04[_0x37faef(0x1a84)][_0x37faef(0x12ca)]=_0x37faef(0x28b);},0xbb8);}}(_0x53193e,_0x631a4f);}function _0x49747c(_0x46c613){const _0x2b9c06=_0x37e46c,_0x42686c=_0x46c613[_0x2b9c06(0x4fe9)];_0x46c613[_0x2b9c06(0x4fe9)]='',_0x1e6ad4(_0x42686c,[_0x2b9c06(0x3eb5)],0x3,0x3,!0x0),_0x5a6252(_0x53193e);}async function _0x1e6ad4(_0x4681cd,_0x5be1ae=[_0x37e46c(0x3eb5)],_0x4191cc=0x3,_0x815bad=0x3,_0x3555e5=!0x1){const _0x4f9462=_0x37e46c,_0x5906d6=window[_0x4f9462(0x167e)]['href'];_0x50f50c!==_0x5906d6&&(_0x50f50c=_0x5906d6,_0x3555e5=!0x0);const _0x51c219=_0x5bc834(_0x53193e,'ai'),_0x579229=_0x1e5a00(_0x51c219);_0x802b77(_0x3f1935,_0x579229);let _0xa43351=0x0;_0x29a6aa(_0x579229,_0xa43351),_0x2eb380(_0x566371,_0x51c219),_0xa43351+=0x1,_0x29a6aa(_0x579229,_0xa43351),_0x5c71e9[_0x4f9462(0x16f6)](_0x4f9462(0x3567),''),_0x5c71e9[_0x4f9462(0x16f6)](_0x4f9462(0xbf1),_0x4191cc),_0x5c71e9[_0x4f9462(0x16f6)](_0x4f9462(0x2be8),_0x815bad);let 
_0x500a32=_0x5c71e9[_0x4f9462(0x170d)](_0x4f9462(0x4d19));_0x500a32=_0x4f9462(0x4738)+_0x4681cd,_0x5c71e9[_0x4f9462(0x16f6)](_0x4f9462(0x4d19),_0x500a32);const _0x437cee=_0x5c71e9[_0x4f9462(0x3377)]();_0x437cee[_0x4f9462(0x1556)]=_0x17f678;try{let _0xa47b38;if(!_0x41252b||_0x3555e5){let _0x46ac3d='';for await(let _0x366dc9 of _0xe54f68(_0x437cee,_0x631a4f))_0x46ac3d+=_0x366dc9;_0xa47b38=_0x46ac3d,_0x41252b=_0xa47b38;}_0xa43351+=0x1,_0x29a6aa(_0x579229,_0xa43351);const _0x2bfcf5=Math[_0x4f9462(0xe98)]()['toString'](0x24)[_0x4f9462(0x37b5)](0x2,0xf);_0x2eb380(await _0x24d8ea(0x0,0x0,_0x2bfcf5,_0x41252b,0xa),_0x51c219,0x1);const _0x543426=_0x53193e[_0x4f9462(0x2842)](_0x4f9462(0x512d)+_0x2bfcf5),_0x3fac19=_0x53193e['querySelector'](_0x4f9462(0x21b0)+_0x2bfcf5),_0x5a2397=_0x53193e[_0x4f9462(0x2842)]('#more-papers-search-'+_0x2bfcf5),_0xf7a078=_0x53193e[_0x4f9462(0x2842)](_0x4f9462(0x4f01)+_0x2bfcf5);_0x543426[_0x4f9462(0xc61)](_0x4f9462(0x364d),function(){const _0x4d0f26=_0x4f9462;_0x3fac19&&(_0x3fac19['style'][_0x4d0f26(0x12ca)]=_0x4d0f26(0x1f2e),this[_0x4d0f26(0x1a84)][_0x4d0f26(0x12ca)]=_0x4d0f26(0x28b));}),_0x5a2397[_0x4f9462(0xc61)]('keydown',function(_0xb5b0d8){const _0x2ff399=_0x4f9462;_0x2ff399(0x3c55)===_0xb5b0d8[_0x2ff399(0x49fe)]&&_0x3fac19&&_0x49747c(this);}),_0xf7a078[_0x4f9462(0xc61)](_0x4f9462(0x364d),function(){_0x3fac19&&_0x49747c(_0x5a2397);}),_0xa43351+=0x1,_0x29a6aa(_0x579229,_0xa43351);}catch(_0x33c1c0){_0x57a9ad(_0x53193e,_0x33c1c0+'.\x20Please\x20try\x20again.',_0x4f9462(0x3d85));}_0xa43351+=0x1,_0x29a6aa(_0x579229,_0xa43351);const _0xd2ee68=_0x40bfe6(_0x53193e,0x0,[]);_0x370a32(_0x53193e,_0x51c219,_0xd2ee68),_0x4a9b57(_0x51c219,'progress'),_0x4ac7fa(_0x51c219),function(_0x44ab9c){const _0x41dfb3=_0x4f9462;_0x44ab9c[_0x41dfb3(0x492f)]('a')[_0x41dfb3(0xa21)](_0x1c839c=>{const 
_0x7d36f0=_0x41dfb3;_0x1c839c[_0x7d36f0(0x998)](_0x7d36f0(0x1bc7),'_blank'),_0x1c839c[_0x7d36f0(0x998)](_0x7d36f0(0x3f3a),_0x7d36f0(0x3e6a));});}(_0x51c219),_0x5a6252(_0x53193e),_0x487f9d(_0x4f9462(0x2105),_0x53193e),await _0x3debb3();}!(async function(){const _0x5a5dd8=_0x37e46c;_0x50f50c=window['location'][_0x5a5dd8(0xe63)];const _0x4021bc=new URLSearchParams(window[_0x5a5dd8(0x167e)][_0x5a5dd8(0x3190)]),_0x393ece=_0x4021bc['get']('x'),_0x1c4c37=_0x4021bc['get']('y');_0x393ece&&_0x1c4c37&&window[_0x5a5dd8(0x2c23)](parseInt(_0x393ece,0xa),parseInt(_0x1c4c37,0xa)),_0x631a4f=await(async function(){const _0x1a7aa2=_0x5a5dd8;return fetch(_0x3e8a84+'token',{'method':'POST','headers':{'Content-Type':_0x1a7aa2(0x21d9)},'body':JSON[_0x1a7aa2(0x3cbd)]({'username':_0x1a7aa2(0x1c17),'password':'password'})})[_0x1a7aa2(0xaf5)](_0x5c3c40=>{const _0x513a7e=_0x1a7aa2;if(!_0x5c3c40['ok'])throw new Error(_0x513a7e(0x194c));return _0x5c3c40[_0x513a7e(0x3289)]();})['then'](async _0x19b60c=>{const _0x1951ea=_0x1a7aa2,_0x5899f8=_0x19b60c['token'];return localStorage['setItem'](_0x1951ea(0x3c4e),_0x5899f8),_0x5899f8;})[_0x1a7aa2(0x31a3)](_0x4ad862=>{throw _0x4ad862;});}()),_0x5a5dd8(0x3d2a)===document[_0x5a5dd8(0x3a3f)]?document[_0x5a5dd8(0xc61)]('DOMContentLoaded',_0x51bf6d):_0x51bf6d();}());let _0x4a0cb=_0x4e0808;async function _0x3debb3(){const _0x52577a=_0x37e46c;var _0x41474d;await new Promise(_0x54a74a=>setTimeout(_0x54a74a,0x7d0)),_0x4e0808=await _0x2ff3c7(_0x53193e,_0x4e0808),_0x41474d=_0x4e0808,localStorage[_0x52577a(0x4d4f)](_0x52577a(0x205f),_0x41474d[_0x52577a(0x8e8)]()),_0x4e0808!=_0x4a0cb&&(_0x47e824=0x0);}})());})()));function a0_0x11e7(_0x2076b3,_0x576e72){const _0x1bd587=a0_0x1bd5();return a0_0x11e7=function(_0x11e78a,_0x243937){_0x11e78a=_0x11e78a-0x1f4;let _0x1681a5=_0x1bd587[_0x11e78a];return _0x1681a5;},a0_0x11e7(_0x2076b3,_0x576e72);}function a0_0x1bd5(){const 
_0x2921cc=['(always|nearly|barely|practically)\x20[there]','bessel_first_kind','WheelGraph','MEDIAN','AllowedCloudExtraParameters','intra','Icon','CoprimeQ','iterable','has_evar','layer_tile_region','ĺ','ds_grid_set','waypointStatements','While','font-face','(nominating|special|conference|executive|steering|central|congressional)\x20committee','Types','isPlayer','vertex_usage_blendindices','instance_activate_all','SymmetrizedReplacePart','UInt32','markerBrush','NS_INLINE','cssSelector','CommunityRegionStyle','$SharedVariables','noshowcancelled','weaponsItemsCargo','iap_storeload_ok','true¦0:2K;1:2Q;2:2H;3:2B;a2Ob2Bc1Xd1Ses1Rf1Pg1Kh1Gi1Bj17k12l0Zm0On06o04pYqVrSsJtEuBverAw6y4zacatec2S;akut0o0Cu4;cat2k06;a5est\x204isconsin,yomi1K;bengal,virgin0;rwick3shington4;!\x20dc;acruz,mont;dmurt0t4;ah,tar4;\x202La0Y;a6e5laxca1Rripu1Xu4;scaEva;langa1nnessee,x2F;bas0Wm4smOtar25;aulip2Dil\x20nadu;a9i7o5taf12u4ylh1F;ffZrr05s1A;me1Cno1Quth\x204;cWdV;ber0c4kkim,naloa;hu2ily;n5skatchew2xo4;ny;\x20luis\x20potosi,ta\x20catari1;a4hodeA;j4ngp08;asth2shahi;ingh25u4;e4intana\x20roo;bec,en6retaro;ara8e6rince\x20edward4unjab;\x20i4;sl0C;i,nnsylv4rnambu0C;an0;!na;axa0Ydisha,h4klaho20ntar4reg7ss0Cx0H;io;aKeEo6u4;evo\x20le4nav0W;on;r4tt17va\x20scot0;f9mandy,th4;\x204ampton3;c6d5yo4;rk3;ako1N;aroli1;olk;bras1Mva0Cw4;\x205foundland4;!\x20and\x20labrador;brunswick,hamp3jers5mexiTyork4;!\x20state;ey;galPyarit;aAeghala0Mi6o4;nta1r4;dov0elos;ch6dlanDn5ss4zor11;issippi,ouri;as\x20geraPneso18;ig2oac2;dhy12harasht0Gine,ni5r4ssachusetts;anhao,i\x20el,ylG;p4toba;ur;anca3e4incoln3ouisI;e4iR;ds;a6e5h4omi;aka06ul1;ntucky,ra01;bardino,lmyk0ns0Qr4;achay,el0nata0X;alis6har4iangxi;kh4;and;co;daho,llino7n4owa;d5gush4;et0;ia1;is;a6ert5i4un2;dalFm0D;ford3;mp3rya1waii;ansu,eorg0lou7oa,u4;an4izhou,jarat;ajuato,gdo4;ng;cester3;lori4uji2;da;sex;ageUe7o5uran4;go;rs4;et;lawaMrby3;aFeaEh9o4rim08umbr0;ahui7l6nnectic5rsi4ventry;ca;ut;i03orado;la;e5hattisgarh,i4uvash0;apRhuahua;chn5rke4;ss0;ya;ra;lGm4;bridge3peche;a9ihar,r8u4;c
k4ryat0;ingham3;shi4;re;emen,itish\x20columb0;h0ja\x20cal8lk7s4v7;hkorto4que;st2;an;ar0;iforn0;ia;dygHguascalientes,lBndhr9r5ss4;am;izo1kans5un4;achal\x207;as;na;a\x204;pradesh;a6ber5t4;ai;ta;ba5s4;ka;ma;ea','setSlingLoad','ropeAttachedTo','#Infinitive\x20and\x20[%Noun|Verb%]','colour_get_red','ButtonStyleMenuListing','vk_f4','move_bounce_solid','getCurrentCarrier','$DefaultProxyRules','nextMenuItemIndex','Continuation','update_recordset','RecurrenceFilter','num','HEBREW_CHARSET','#Verb\x20(up|down|in|on|for)$','PersistentSymbol','Write\x20something...','time','federal','^[-\x5c*]{3,}','Ŀ','SocketOpen','∸','DAY','showWatch','TLSA','all\x20any\x20no-route\x20self\x20urpf-failed\x20egress|5\x20unknown','inputController','article','cloneNode','INCLUDE','NotebookTemplate','house','SocketListen','foo-out','would-be','draw_rectangle_colour','sort_asc','LyonsGroupLy','motion_set','findAny','cksum','phy_joint_anchor_2_x','\x0a\x0a---\x0a---\x0a---\x0a','draw_text_ext_colour','⋘̸','issubclass','numericCast','Hint','^#PastTense$','audio_get_name','EntityFunction','stdcall','place_empty','[:]{1,2}','HistogramDistribution','a-nice-inf','CreateScheduledTask','Plus','))[pP][+-]?(','RiskReductionImportance','isImperative','cutText','optional','PositiveDefiniteMatrixQ','CellLabel','class\x20family\x20instance\x20where','SetErrorLevel','TernaryPlotCorners','Presposition','Extended\x20Backus-Naur\x20Form','selectWeapon','ContraharmonicMean','investments','is_int64','getpgrp','nearestMines','encodeURI','binary_semaphore','growLeft','isAdjective','���','mb_left','Possessives','ChoiceButtons','playableUnits','win8_livetile_queue_enable','current_catalog','dread','Intel\x20x86\x20Assembly','qr_thin','value-to-value','lassign|10','ImagingDevice','ev_joystick2_up','CreateDataStructure','ResamplingMethod','WebAudioSearch','attachTo','(minus|negative)\x20#Value','scad','ClippingStyle','StructuredArrayHeadQ','ImagePadding','MittagLefflerE','Error\x20updating\x20content:\x20','$PersistencePath
','overcastForecast','roleDescription','INDEX','ς','#Person+','audio_create_buffer_sound','aiActionCompleted','object_set_sprite','$Echo','tac','tags','(one|1|a|an)','difficultyOption','SubtitleEncoding','street','cbool','SelectFirst','NotRightTriangle','removeItemFromUniform','LessSlantEqual','getObjectArgument','chicago','⪌','Sunrise','link_no_ip_fuzzy','AssociationFormat','ConvexPolygonQ','FileSystemMap','ftype','`','skip,\x20excludeEnd,\x20returnEnd\x20not\x20compatible\x20with\x20endScope:\x20{}','ev_gesture_rotate_end','Х','inverted-colors','chcp','FailureDistribution','EdgeCycleMatrix','Heap','gushed','and-5-cents','CreateFont','TextOrdinal','FilePrint','caves','draw_text_transformed_color','ButtonBar','curatorRegisteredObjects','enclave','ParametricPlot3D','Primes','[a-zA-Z_]\x5cw*','link_open','addItemCargo','SSSTriangle','layer_background_create','IntDict','RSolve','roadAt','draw_background_tiled','notice','ExternalTypeSignature','NINE','addressof\x20and\x20andalso\x20await\x20directcast\x20gettype\x20getxmlnamespace\x20is\x20isfalse\x20isnot\x20istrue\x20like\x20mod\x20nameof\x20new\x20not\x20or\x20orelse\x20trycast\x20typeof\x20xor\x20cbool\x20cbyte\x20cchar\x20cdate\x20cdbl\x20cdec\x20cint\x20clng\x20cobj\x20csbyte\x20cshort\x20csng\x20cstr\x20cuint\x20culng\x20cushort','ViewVector','einjection','MassImpermeableBoundaryValue','padding','NotGreaterFullEqual','VIEW','GetLinebreakInformationPacket','NetSharedArray','os_version','(\x5cb(with|overriding)\x5cs+)?\x5cb(function|procedure)\x5cs+','Ŗ','MemoryAvailable','QuestionGenerator','array_prepend','isWalking','RunProcess','[march]\x20(up|down|back|toward)','DefaultValues','DelaunayMesh','boundingCenter','markerSize','CellTrayPosition','instance_activate_layer','ViewRange','Subsequences','constr_eq','enableTraffic','TreeFold','lastused','Sow','setMimic','=','achievement_login','≑','(is|are)','true\x20false\x20null\x20_','\x5cb0[xX]_*(','@synthesize','countSide','multimap','²','date_and_time','FiniteGroupCo
unt','caption_health','FileExistsQ','Chunks','selector-tag','FileNames','SortBy','ARABIC','taskkill','FileHandler','tile_rotate','TreeOutline','ExponentialGeneratingFunction','^@(?:BASE|USE|CLASS|OPTIONS)$','CoreNilpotentDecomposition','ResourceVersion','IndentingNewlineSpacings','waitContinue','NotHumpEqual','GroupTogetherNestedGrouping','#chicago','TooltipStyle','$ParentProcessID','$1i','#Percent','col!','changecompany','current_hour','setEffectiveCommander','[;@]','position_empty','import\x20include','derf','docker','UNICODE','objectCurators','↾','101258GJkfis','chrw','#Cardinal\x20#Cardinal','≙','ListPointPlot3D','QueueingProcess','set','isHidden','vtypex','ctrlMapAnimAdd','level','CopyFile','\x5c.[a-zA-Z-][a-zA-Z0-9_-]*','Pyramid','CanberraDistance','#Value\x20of\x20#Month','show_progress','gesture_pinch_angle_away','AudioMeasurements','will','RecalibrationFunction','%[0-9]+','SystemException','camCommand','say','objectStoreNames','actionKeysNames','MEL','FourierCosCoefficient','ChartBaseStyle','Cone','diag_frameno','ranpoi','RemoteRun','/(PROG|ATTR|MN|POS|END)\x5cb','ShowUninstDetails','TravelTime','assignAsDriver','application_surface_enable','POS','Success','phrase_length','ds_list_write','generated','diag_matrix','subroutine|10','¹','ImageStitch','EdgeLabelStyle','HeunCPrime','exist','vk_end','Contains','AlternatingFactorial','s_close','camPreloaded','int_step','(urban|cardiac|cardiovascular|respiratory|medical|clinical|visual|graphic|creative|dental|exotic|fine|certified|registered|technical|virtual|professional|amateur|junior|senior|special|pharmaceutical|theoretical)+\x20#Noun?\x20#Actor','[\x5c)\x5cn]','not\x20null\x20constant\x20access\x20function\x20procedure\x20in\x20out\x20aliased\x20exception','EndRegion','⫁','ptrs','ScientificForm','SpellingCorrectionList','fft2','NotSquareSubsetEqual','rethrows','discardableResult','\x1b[0m','asensitive','wand','TableHeadings','Â','matchAtStart','#Multiple','shopt','MoleculePlot','LightBrown','StirlingS1','Stan',
'VALUE','AnatomyData','⦍','ShiftRegisterSequence','25587dDcvce','DELETE','restore','ManhattanDistance','clearMagazineCargo','◬','$atanh','MaxTrainingRounds','activateKey','waypointTimeout','the\x20[#Acronym]','ListPlot','Ϝ','inGroup','layer_y','collision_ellipse','resetAIChat','menuSetShortcut','positive','WhittakerM','#Place+\x20#Possessive','fetchobs','^[(dude|man|girl)]\x20#Pronoun','onEachFrame','MathieuGroupM24','maxNesting','SocketObject','FourierSinCoefficient','new_line','army','InstTypeSetText','ExternalCall','layer_background_get_alpha','(supposing|although)','layer_sprite_exists','layer_exists','ure','vartype','cache','camera_get_proj_mat','HTMLSave','windStr','SolidMechanicsPDEComponent','physics_world_create','⨤','PageBreakWithin','UnregisterExternalEvaluator','BesselFilterModel','audio_get_recorder_info','Graph','HeatTemperatureCondition','Connect','sha224sum','Ι','$ParentLink','MiscButtonText','ProductDistribution','outline-width','guild','LimitsPositioning','ShowGroupOpener','//[gim]*','$dist_exponential','parseSimpleArray','CloudObjectNameFormat','[a-z][a-zA-Z0-9_]*','none','os_ios','MeijerGReduce',')\x5cs+','OptionValueBox','gatewayIP','ArrayPlot','$RootDirectory','Print','parseLinkLabel','document','LatticeReduce','HoytDistribution','negative','REPLACE','Initialization','xs:dateTimeStamp','ads_engagement_launch','union','ChannelListener','ReadRegStr','YuleDissimilarity','⤌','"','financial','yml','^facet\x20','aside','Cosine\x20searching\x20vector\x20database','inv_wishart','icl','part','iap_activate','stname','unnest','MB_DEFBUTTON3','factor',':(:)?(','$coverage_save','unsafe','vk_space','CurlyDoubleQuote','\x5cb(\x5cd+(_\x5cd+)*#[a-fA-F0-9]+(_[a-fA-F0-9]+)*|\x5cd+(_\x5cd+)*(\x5c.\x5cd+(_\x5cd+)*)?([eE][-+]?\x5cd+)?)','multi_gp_cholesky','gradle','checkboxes','keyPressed','Compiled','PartialQuickSort','WeierstrassHalfPeriods','blank','DefaultMenuStyle','FontForm','ArgumentsOptions','DiscreteLQRegulatorGains','room_instance_clear','⫃','CloudEvaluat
e','xs:hexBinary','expert\x20understanding','ugc_match_Items_ReadyToUse','DenseMatrix','FACTDOUBLE','comp','ϕ','super','toComparative','Scala','tagRank','VideoInsert','[(his|her)\x20(majesty|honour|worship|excellency|honorable)]\x20#Person','ne\x27er','BatchSize','setCaptive','margin-inline-start','xs:gYear','surface_get_depth_disable','isempty','randsequence','clearInterval','stylesheet','device-height','VerifyDigitalSignature','LineIndentMaxFraction','StyleBox','VerifyInterpretation','GridBoxItemSize','this_image','nest','|||<[?][\x5cs\x5cS]*?[?]>|]*>|)','help','missionNameSource','system_prompt','Ellipsoid','BinomialDistribution','peri','generateSettings','CreateUUID','TagBox','TransferFunctionExpand','Text3DBox','Minors','titlecase-acronym-titlecase','boolean_vector','removeMagazine','Graphics3D','^=end','quickSplit','win8_settingscharm_set_xaml_property','setWindStr','ContinuedFraction','million','├','$CurrentLink','dshiftl','CHISQ.DIST.RT','ReadOnlyMemoryError','
','setMissileTarget','ctrlMapWorldToScreen','TakeSmallestBy','Axis3DBoxOptions','township','pickles-and-drinks','(^despite|^during|^before|^through|^throughout)','OutputStream','show_lives','missionConfigFile','setFaceanimation','Illegal\x20input\x20>=\x200x80\x20(not\x20a\x20basic\x20code\x20point)','𝔤','subst\x20patsubst\x20strip\x20findstring\x20filter\x20filter-out\x20sort\x20word\x20wordlist\x20firstword\x20lastword\x20dir\x20notdir\x20suffix\x20basename\x20addsuffix\x20addprefix\x20join\x20wildcard\x20realpath\x20abspath\x20error\x20warning\x20shell\x20origin\x20flavor\x20foreach\x20if\x20or\x20and\x20call\x20eval\x20file\x20value','mousePressed','gl_MaxAtomicCounterBindings\x20gl_MaxAtomicCounterBufferSize\x20gl_MaxClipDistances\x20gl_MaxClipPlanes\x20gl_MaxCombinedAtomicCounterBuffers\x20gl_MaxCombinedAtomicCounters\x20gl_MaxCombinedImageUniforms\x20gl_MaxCombinedImageUnitsAndFragmentOutputs\x20gl_MaxCombinedTextureImageUnits\x20gl_MaxComputeAtomicCounterBuffers\x20gl_MaxComputeAtomicCounters\x20gl_MaxComputeImageUniforms\x20gl_MaxComputeTextureImageUnits\x20gl_MaxComputeUniformComponents\x20gl_MaxComputeWorkGroupCount\x20gl_MaxComputeWorkGroupSize\x20gl_MaxDrawBuffers\x20gl_MaxFragmentAtomicCounterBuffers\x20gl_MaxFragmentAtomicCounters\x20gl_MaxFragmentImageUniforms\x20gl_MaxFragmentInputComponents\x20gl_MaxFragmentInputVectors\x20gl_MaxFragmentUniformComponents\x20gl_MaxFragmentUniformVectors\x20gl_MaxGeometryAtomicCounterBuffers\x20gl_MaxGeometryAtomicCounters\x20gl_MaxGeometryImageUniforms\x20gl_MaxGeometryInputComponents\x20gl_MaxGeometryOutputComponents\x20gl_MaxGeometryOutputVertices\x20gl_MaxGeometryTextureImageUnits\x20gl_MaxGeometryTotalOutputComponents\x20gl_MaxGeometryUniformComponents\x20gl_MaxGeometryVaryingComponents\x20gl_MaxImageSamples\x20gl_MaxImageUnits\x20gl_MaxLights\x20gl_MaxPatchVertices\x20gl_MaxProgramTexelOffset\x20gl_MaxTessControlAtomicCounterBuffers\x20gl_MaxTessControlAtomicCounters\x20gl_MaxTessControlImageUniforms\x20gl_MaxT
essControlInputComponents\x20gl_MaxTessControlOutputComponents\x20gl_MaxTessControlTextureImageUnits\x20gl_MaxTessControlTotalOutputComponents\x20gl_MaxTessControlUniformComponents\x20gl_MaxTessEvaluationAtomicCounterBuffers\x20gl_MaxTessEvaluationAtomicCounters\x20gl_MaxTessEvaluationImageUniforms\x20gl_MaxTessEvaluationInputComponents\x20gl_MaxTessEvaluationOutputComponents\x20gl_MaxTessEvaluationTextureImageUnits\x20gl_MaxTessEvaluationUniformComponents\x20gl_MaxTessGenLevel\x20gl_MaxTessPatchComponents\x20gl_MaxTextureCoords\x20gl_MaxTextureImageUnits\x20gl_MaxTextureUnits\x20gl_MaxVaryingComponents\x20gl_MaxVaryingFloats\x20gl_MaxVaryingVectors\x20gl_MaxVertexAtomicCounterBuffers\x20gl_MaxVertexAtomicCounters\x20gl_MaxVertexAttribs\x20gl_MaxVertexImageUniforms\x20gl_MaxVertexOutputComponents\x20gl_MaxVertexOutputVectors\x20gl_MaxVertexTextureImageUnits\x20gl_MaxVertexUniformComponents\x20gl_MaxVertexUniformVectors\x20gl_MaxViewports\x20gl_MinProgramTexelOffset\x20gl_BackColor\x20gl_BackLightModelProduct\x20gl_BackLightProduct\x20gl_BackMaterial\x20gl_BackSecondaryColor\x20gl_ClipDistance\x20gl_ClipPlane\x20gl_ClipVertex\x20gl_Color\x20gl_DepthRange\x20gl_EyePlaneQ\x20gl_EyePlaneR\x20gl_EyePlaneS\x20gl_EyePlaneT\x20gl_Fog\x20gl_FogCoord\x20gl_FogFragCoord\x20gl_FragColor\x20gl_FragCoord\x20gl_FragData\x20gl_FragDepth\x20gl_FrontColor\x20gl_FrontFacing\x20gl_FrontLightModelProduct\x20gl_FrontLightProduct\x20gl_FrontMaterial\x20gl_FrontSecondaryColor\x20gl_GlobalInvocationID\x20gl_InstanceID\x20gl_InvocationID\x20gl_Layer\x20gl_LightModel\x20gl_LightSource\x20gl_LocalInvocationID\x20gl_LocalInvocationIndex\x20gl_ModelViewMatrix\x20gl_ModelViewMatrixInverse\x20gl_ModelViewMatrixInverseTranspose\x20gl_ModelViewMatrixTranspose\x20gl_ModelViewProjectionMatrix\x20gl_ModelViewProjectionMatrixInverse\x20gl_ModelViewProjectionMatrixInverseTranspose\x20gl_ModelViewProjectionMatrixTranspose\x20gl_MultiTexCoord0\x20gl_MultiTexCoord1\x20gl_MultiTexCoord2\x20gl_MultiTexCoord3\
x20gl_MultiTexCoord4\x20gl_MultiTexCoord5\x20gl_MultiTexCoord6\x20gl_MultiTexCoord7\x20gl_Normal\x20gl_NormalMatrix\x20gl_NormalScale\x20gl_NumSamples\x20gl_NumWorkGroups\x20gl_ObjectPlaneQ\x20gl_ObjectPlaneR\x20gl_ObjectPlaneS\x20gl_ObjectPlaneT\x20gl_PatchVerticesIn\x20gl_Point\x20gl_PointCoord\x20gl_PointSize\x20gl_Position\x20gl_PrimitiveID\x20gl_PrimitiveIDIn\x20gl_ProjectionMatrix\x20gl_ProjectionMatrixInverse\x20gl_ProjectionMatrixInverseTranspose\x20gl_ProjectionMatrixTranspose\x20gl_SampleID\x20gl_SampleMask\x20gl_SampleMaskIn\x20gl_SamplePosition\x20gl_SecondaryColor\x20gl_TessCoord\x20gl_TessLevelInner\x20gl_TessLevelOuter\x20gl_TexCoord\x20gl_TextureEnvColor\x20gl_TextureMatrix\x20gl_TextureMatrixInverse\x20gl_TextureMatrixInverseTranspose\x20gl_TextureMatrixTranspose\x20gl_Vertex\x20gl_VertexID\x20gl_ViewportIndex\x20gl_WorkGroupID\x20gl_WorkGroupSize\x20gl_in\x20gl_out\x20EmitStreamVertex\x20EmitVertex\x20EndPrimitive\x20EndStreamPrimitive\x20abs\x20acos\x20acosh\x20all\x20any\x20asin\x20asinh\x20atan\x20atanh\x20atomicAdd\x20atomicAnd\x20atomicCompSwap\x20atomicCounter\x20atomicCounterDecrement\x20atomicCounterIncrement\x20atomicExchange\x20atomicMax\x20atomicMin\x20atomicOr\x20atomicXor\x20barrier\x20bitCount\x20bitfieldExtract\x20bitfieldInsert\x20bitfieldReverse\x20ceil\x20clamp\x20cos\x20cosh\x20cross\x20dFdx\x20dFdy\x20degrees\x20determinant\x20distance\x20dot\x20equal\x20exp\x20exp2\x20faceforward\x20findLSB\x20findMSB\x20floatBitsToInt\x20floatBitsToUint\x20floor\x20fma\x20fract\x20frexp\x20ftransform\x20fwidth\x20greaterThan\x20greaterThanEqual\x20groupMemoryBarrier\x20imageAtomicAdd\x20imageAtomicAnd\x20imageAtomicCompSwap\x20imageAtomicExchange\x20imageAtomicMax\x20imageAtomicMin\x20imageAtomicOr\x20imageAtomicXor\x20imageLoad\x20imageSize\x20imageStore\x20imulExtended\x20intBitsToFloat\x20interpolateAtCentroid\x20interpolateAtOffset\x20interpolateAtSample\x20inverse\x20inversesqrt\x20isinf\x20isnan\x20ldexp\x20length\x20lessThan\x20lessThan
Equal\x20log\x20log2\x20matrixCompMult\x20max\x20memoryBarrier\x20memoryBarrierAtomicCounter\x20memoryBarrierBuffer\x20memoryBarrierImage\x20memoryBarrierShared\x20min\x20mix\x20mod\x20modf\x20noise1\x20noise2\x20noise3\x20noise4\x20normalize\x20not\x20notEqual\x20outerProduct\x20packDouble2x32\x20packHalf2x16\x20packSnorm2x16\x20packSnorm4x8\x20packUnorm2x16\x20packUnorm4x8\x20pow\x20radians\x20reflect\x20refract\x20round\x20roundEven\x20shadow1D\x20shadow1DLod\x20shadow1DProj\x20shadow1DProjLod\x20shadow2D\x20shadow2DLod\x20shadow2DProj\x20shadow2DProjLod\x20sign\x20sin\x20sinh\x20smoothstep\x20sqrt\x20step\x20tan\x20tanh\x20texelFetch\x20texelFetchOffset\x20texture\x20texture1D\x20texture1DLod\x20texture1DProj\x20texture1DProjLod\x20texture2D\x20texture2DLod\x20texture2DProj\x20texture2DProjLod\x20texture3D\x20texture3DLod\x20texture3DProj\x20texture3DProjLod\x20textureCube\x20textureCubeLod\x20textureGather\x20textureGatherOffset\x20textureGatherOffsets\x20textureGrad\x20textureGradOffset\x20textureLod\x20textureLodOffset\x20textureOffset\x20textureProj\x20textureProjGrad\x20textureProjGradOffset\x20textureProjLod\x20textureProjLodOffset\x20textureProjOffset\x20textureQueryLevels\x20textureQueryLod\x20textureSize\x20transpose\x20trunc\x20uaddCarry\x20uintBitsToFloat\x20umulExtended\x20unpackDouble2x32\x20unpackHalf2x16\x20unpackSnorm2x16\x20unpackSnorm4x8\x20unpackUnorm2x16\x20unpackUnorm4x8\x20usubBorrow','MengerMesh','buffer_resize','backdrop','feb','ACos','Apache\x20config','mathop','MissingStyle','(#Person|#Pronoun)','impure','even-better','base-uri','must\x20&&\x20#Hyphenated\x20.','align-content','ev_game_start','infinity','Information','$ByteOrdering','vk_control','HeatOutflowValue','RegionDifference','selector-class','Makefile','hcRemoveGroup','EquatedTo','ContinuedFractionK','DistributeDefinitions','circus','spedis','SplFixedArray','cssText','TableViewBoxAlignment','import','#PresentTense\x20the\x20[#Gerund]','Modular','bessel_yn','setStaminaScheme','[#
Hyphenated\x20(#Hyphenated\x20&&\x20#Gerund)]\x20(#Noun|#Conjunction)','scrollbar-color','JankoGroupJ4','to$','INSTR','LineGraph','avrasm','AltState\x20Application\x20CallType\x20ComponentTokens\x20CreatedJobs\x20CreatedNotices\x20ControlState\x20DialogResult\x20Dialogs\x20EDocuments\x20EDocumentVersionSource\x20Folders\x20GlobalIDs\x20Job\x20Jobs\x20InputValue\x20LookUpReference\x20LookUpRequisiteNames\x20LookUpSearch\x20Object\x20ParentComponent\x20Processes\x20References\x20Requisite\x20ReportName\x20Reports\x20Result\x20Scripts\x20Searches\x20SelectedAttachments\x20SelectedItems\x20SelectMode\x20Sender\x20ServerEvents\x20ServiceFactory\x20ShiftState\x20SubTask\x20SystemDialogs\x20Tasks\x20Wizard\x20Wizards\x20Work\x20ВызовСпособ\x20ИмяОтчета\x20РеквЗнач\x20','\x5c!\x22#$%&\x27()*+,./:;<=>?@[]^_`{|}~-','BSplineCurveBox','src_tlds','sla','NumberDigit','CompoundElement','OutputSizeLimit','not\x20','LeftUpVectorBar','part_type_colour2','RadioButtonBox','Monomorphic','mouse_buttons','path_exists','LaminaData','NuttallWindow','download','border-block-start','synopses','createReader','⥸','LibraryFunctionLoad','form-action','ExpressionCell','\x5c*(\x5c.[a-z\x5c-]+)+','AxisLabel','DNSKEY','TimeUsed','AllowedCloudParameterExtensions','destruct','IMSIN','≮','enableAutoStartUpRTD','FindEdgeIndependentPaths','skeleton_animation_get','Slides','PhrasalVerb\x20Particle','fullscreen','sprite_save','Bond','[qs]__?[a-zA-Z](?:_?[a-zA-Z])+','compquote','LinkFlush','setDataMode','municipality','ColorQ','Cut','getAllSoundControllers','ConicHullRegionBox','mouseMoved','AnglePath','(#Copula|#Gerund)','GeoStyling','Reflect','Day','^#Noun','press','MODE.MULT','bm_inv_src_alpha','respawnVehicle','true¦a08b05d00eYfSheQinPjustOkinda,likewiZmMnJoEpCquite,r9s5t2u0very,well;ltima01p0;\x20to,wards5;h1iny\x20bit,o0wiO;o,t6;en,us;eldom,o0uch;!me1rt0;\x20of;how,times,w0C;a1e0;alS;ndomRth05;ar\x20excellenEer0oint\x20blank;\x20Lhaps;f3n0utright;ce0ly;!\x200;ag05moX;\x20courGten;ewJo0;\x20longWt\x200;
onHwithstand9;aybe,eanwhiNore0;!ovT;!\x20aboX;deed,steY;lla,n0;ce;or3u0;ck1l9rther0;!moK;ing;\x200evK;exampCgood,suH;n\x20mas0vI;se;e0irect2;\x202fini0;te0;ly;juAtrop;ackward,y\x200;far,no0;\x20means,w;\x20GbroFd\x20nauseam,gEl7ny5part,s4t\x202w0;ay,hi0;le;be7l0mo7wor7;arge,ea6;\x20soon,i4;mo0way;re;l\x203mo2ongsi1ready,so,togeth0ways;er;de;st;b1t0;hat;ut;ain;ad;lot,posteriori','quotes','ev_gesture_rotating','[even]\x20(#Determiner|#Possessive)','done','diag_activeSQFScripts','isUnit','$InterfaceEnvironment','os_tvos','throw','notset','true¦a0Gb0Bc03d02e01f00gWhUiSkQlNmLnIorHpDrCsAt5u4v3w2y0;a0yz;kutPngtze;ake\x20isHupatki;irgin\x20islands,ostok;laanbaatar,p02;a3eotihuac0Hh1onto,sarskoe\x20selo,u0;lXzigoot;am09e\x200;bronx,hamptons;hiti,j\x20mahE;a0cotts\x20bluff,eine,fo,oho,under9;int\x20lawrence\x20river,khalY;ed\x20s3io\x20grande;a1ek,h0itcairn,ompeii;l,x;cif05pahanaumokuak0rthenX;ea;ange\x20county,d,inoco;e0ile;uschwansteQw\x20eng0;land;a0co,ekong,idLuc;chu\x20picchu,gad00libu,nhatt00;a1gw,hr,incoln\x20memori0;al;s,x;azan\x20kremlJosrae,rasnoyar0ul;sk;ax,cn,nd0st;ianSochina;arlem,kg,nd,ov0;d,enweep;a2odavari,re0;at\x200enwich;britaBlakI;ngHy\x20village;co,ra;urope,vergladF;anube,en,fw,own4xb;arrizo\x20pla6dg,edar\x205gk,h1lt,olosse0;um;a2i0uuk;chen\x20itza,mney\x20rock,na0ricahua;town;morro,tham;breaks,fa5;in;cn,e2kk,ro0;oklyn,wns\x20cany0;on;l\x20air,verly\x20hi0;lls;driadic,frica,lhambra,m7n3rc2sia,tl1zor0;es;!ant2;\x20de\x20triomphe,t1;adyr,tarct0;ic0;\x20oce0;an;ericas,s','application_surface_is_enabled','⋒','sounds-fun','EdgeCoverQ','BitVector','$fflush','WindDirectionData','InverseJacobiDC','Gradient','tagName','DefaultStyleDefinitions','⋽','groupEnd','ecase','uniform','toLower','pmouseY','space-y-2','pull1','PermissionsGroups','beginRe','endif','layer_script_begin','SectionSetSize','⦅','|possessives|adverbs|nouns|verbs','font_delete','ughh','vectorNormalized','BenfordDistribution','day','clng','dsign','10px','ABS','un-skilled','InertEvaluate','Honorific',
'variable_global_get','translate_?x','\x5cb0[0-7]+n?\x5cb','InverseFunctions','mkfifo','ds_priority_write','diag_activeScripts','FrameMargins','EntityProperties','setObjectTextureGlobal','SystemsModelDimensions','^[do]\x20(you|we|they)','scroll-margin-inline-start','physics_particle_set_flags','object_length','^(has|have)\x20been\x20#Gerund$','draw_point','LogLogisticDistribution','audio_play_in_sync_group','(@hasEllipses|@hasSemicolon|@hasDash|@hasColon)','BubbleScale','outer','sequence','★','Mizar','varargs','allow','ds_queue_copy','SetFont','had-walked','LINEST','
\x0a','in-age','Number','ai-message','grid-column-end','mouse_check_button','buffer_s32','\x5c(|=>','McLaughlinGroupMcL','else','view_surface_id','src','Tilde','nearestObjects','SetCompress','VertexCosineSimilarity','could','monochrome','(same|some|the|that|a)\x20type\x20of\x20[#PresentTense]','FnMut','width_bucket','addMagazineCargoGlobal','dammed-up','DoubleLeftRightArrow','Cssize_t','^#Fraction$','ExpressionUUID','getArtilleryAmmo','tilemap_get_width','ds_grid_value_exists','isnumber','date_inc_day','KnownUnitQ','^do\x20not\x20[#Infinitive]','true¦a\x20few','future','SequenceFoldList','MKDIR','shownChat','assignedItems','URIError','Flat','dunno','file_text_read_real','ControllerInformationData','RomanNumeral','anything','zcomp','ulong','VARPTR$','getClientStateNumber','css`','GeoGridUnitArea','ImageCompose','foo-point','color_get_hue','CenteredInterval','true¦aPbMcLdKexcept,fIinGmid,notwithstandiWoDpXqua,sCt7u4v2w0;/o,hereSith0;!\x20whHin,oW;ersus,i0;a,s\x20a\x20vis;n1p0;!on;like,til;h1ill,oward0;!s;an,ereby,r0;ough0u;!oM;ans,ince,o\x20that,uch\x20G;f1n0ut;!to;!f;!\x200to;effect,part;or,r0;om;espite,own,u3;hez,irca;ar1e0oBy;sides,tween;ri7;bo8cross,ft7lo6m4propos,round,s1t0;!op;!\x200;a\x20whole,long\x200;as;id0ong0;!st;ng;er;ut','~~~+[\x20]*$','Larger','saveGame','GroupStabilizer','qlog','\x5c$\x5cd+','achievement_show_purchase_prompt','StringDict','rainbow','\x5cW\x5c.|;','percentile_disc','#Month\x20#Value\x20to\x20#Value','KEY','GetFileTimeLocal','toNegative','vert','AGGREGATE','$changed_gclk','SYSTEM|TEMPORARY','appendices','make_color_hsv','unique_ptr','ds_stack_copy','LWSP','(#TextValue\x20&&\x20#Date)\x20#TextValue','ones_array','⟫','playSoundUI','ordering','StreamPlot','erer','TestReportObject','UnhandledMatchError','vspeed','(L|u|U|Lu|LU|uL|UL)?','𝔅','\x5cb(FALSE|TRUE)\x5cb','FindFile','GreaterEqualThan','datalines4','ClusterClassify','[(being|having|getting)]\x20#Verb','^\x5cs*instance\x20of\x20','OptionsPattern','UnitDimensions','the-verb-of','Geometr
icTransformationBox','x++','Length3D','c_olive','adjustl','removeAllAssignedItems','FileReadWord','built_in','SpanFromLeft','irpf90','TemplateSequence','Separate','(\x5cs*\x5c.\x5cs*','ctrlSetShadow','ß','margin-inline-end','SocketConnect','canMove','\x5cb([gtps][A-Z]{1}[a-zA-Z0-9]*)(\x5c[.+\x5c])?(?:\x5cs*?)','bg-gray-600\x20rounded-lg\x20p-2\x20text-white\x20hover:bg-gray-300','ArgMax','>\x20predict\x20the\x20completion\x20of\x20the\x20query\x20inside\x20the\x20<>.\x20Output\x20only\x20the\x20completed\x20query.','Dynamic','composeText','layer_sprite_get_y','date_diff_millis','journal','ImageMesh','FileOpen','VideoQ','batch','layer_tilemap_exists','HatchShading','sgn','wren','#Year','RoundImplies','validateLink','Irrational','page-break-after','ñ','checkAIFeature','text-justify','configNull','MB_RTLREADING','­','tvExpand','Projections','ambient','font-size','FractionalPart','ImageValuePositions','nav-right','getopts','DiscreteConvolve','Person|Verb','svd','html.handlebars','EmitSound','#Gerund\x20#Determiner\x20[#Infinitive]','currentPilot','ZetaZero','interstate','log_rising_factorial','a-zA-Z_\x5c-!.?+*=<>&\x27','StringByteCount','ds_list_clear','attachInterrupt','removeChild','ContextToFileName','HardcorePointProcess','\x1b[3m',')[fFdD]\x5cb','tbh','normal','below','true¦0:15;1:12;a0Vb0Oc0Dd0Ce08f07g04h02iYjVkTlPmLnIomHpEqatari,rCs7t5u4v3welAz2;am0Gimbabwe0;enezuel0ietnam0I;gAkrai1;aiwTex0hai,rinida0Ju2;ni0Prkmen;a5cotti4e3ingapoOlovak,oma0Spaniard,udRw2y0W;ede,iss;negal0Cr09;sh;mo0uT;o5us0Jw2;and0;a2eru0Fhilippi0Nortugu07uerto\x20r0S;kist3lesti1na2raguay0;ma1;ani;ami00i2orweP;caragu0geri2;an,en;a3ex0Lo2;ngo0Drocc0;cedo1la2;gasy,y07;a4eb9i2;b2thua1;e0Cy0;o,t01;azakh,eny0o2uwaiI;re0;a2orda1;ma0Ap2;anO;celandic,nd4r2sraeli,ta01vo05;a2iB;ni0qi;i0oneU;aiAin2ondur0unO;di;amEe2hanai0reek,uatemal0;or2rm0;gi0;ilipino,ren8;cuadoVgyp4mira3ngli2sto1thiopi0urope0;shm0;ti;ti0;aPominUut3;a9h6o4roat3ub0ze2;ch;!i0;lom2ngol5;bi0;a6i2;le0n2;ese;lifor1m2na3;bo2eroo1;di0;angladesh
i,el6o4r3ul2;gaE;azi9it;li2s1;vi0;aru2gi0;si0;fAl7merBngol0r5si0us2;sie,tr2;a2i0;li0;genti2me1;ne;ba1ge2;ri0;ni0;gh0r2;ic0;an','audio_get_listener_info','HeaderStyle','bucket','isupper','RiemannSiegelZ','gpu_get_tex_max_mip','greatest','value|0','\x5c.?','vectorModelToWorld','PaperWidth','GML','health','physics_particle_group_count','#Copula\x20[(who|what|where|why|how|when)]\x20#Noun','VIETNAMESE_CHARSET','NonConstants','rising_factorial','(#Noun|#Adjective)\x20[(he|him|she|it)]','portion','TextStructure','\x5c(This','WeightedGraphQ','⅖','WindowMovable','delta','ControllabilityMatrix','NSum','audio_sound_pitch','FindChannels','DisplayRules','BinaryReadList','LegendMarkers','stanfuncs','Coercion','ds_map_secure_save_buffer','arg','ini_read_string','setMarkerShadowLocal','failMission','border-inline-start','HorizontalGauge','get_save_filename','VideoSplit','\x5c$+','TransitiveClosureGraph','diag_stacktrace','$dist_chi_square','$FormatType','RasterSize','c_double_complex','(?!.*/\x08(','getAnimSpeedCoef','moveToFailed','getObjectDLC','(\x5c.\x5c./|/|\x5cs)((','HumanGrowthData','half','focus','Standardized',')\x20#Ordinal','SERIESSUM','draw_light_enable','worldName','splitBefore','__NAMESPACE__','⋩','AISFinishHeal','NotebookPrint','fromEditor','(so|very|extremely)\x20[#Gerund]','SearchHead','LogisticSigmoid','layer_get_element_type','svg.hidden','GSMServer','out-of','scope','audio_sync_group_debug','@defs','border-image-source','layer_background_get_index','url_encode','DATEDIF','single','Update','CapsuleShape','AutoSubmitting','ReplaceList','ruleslanguage','cvar','Drop','ds_priority_destroy','EchoTiming','Alternatives','grid-column-start','latch','InverseJacobiNC','removePrimaryWeaponItem','FileSystemScan','PrimaryPlaceholder','Alignment','had-time','(i|we|they)','swimInDepth','MinColorDistance','variant','TextString','\x5cb0[bB][0-1](_?[0-1])*r?i?\x5cb','ZernikeR','TEXT:\x20','TKEY','neighborhood','#Adverb\x20[like]','SupersetEqual','GraphDisjointUnion','layer_tile_g
et_region','Box','TimeSeriesWindow','σ','is-crowded-with','Int16Array','enable','leaderboardInit','cmpres','FILE_ATTRIBUTE_ARCHIVE','Magenta','aspect','HKEY_CLASSES_ROOT','attackEnabled','SymbolName','#Gerund','TransformedField','endShape','__tlds__','endprogram','[#PresentTense]\x20(of|by|for)\x20(a|an|the)\x20#Noun\x20#Copula','HeadCompose','EvaluationMode','StandbyDistribution','bank','pixelGrid','get_string_async','amount','cmos','ActionDelay','setShotParents','LessLess','ev_room_end','GrowCutComponents','SliderBox','IntersectedEntityClass','nonempty','give\x20real\x20life\x20examples\x20of\x20this','[\x5c-;:&=\x5c+\x5c$,\x5c.a-zA-Z0-9_][\x5c-;:&=\x5c+\x5c$,\x5c\x22\x5c.a-zA-Z0-9_]*','form_row','cityNameWrite','high-enough','steam_upload_score_buffer','range','simpleTasks','true\x20false\x20unknown\x20inf\x20minf\x20ind\x20und\x20%e\x20%i\x20%pi\x20%phi\x20%gamma','menuURL','\x5c$\x5c(','waypointDescription','Structure','с','Total','DensityHistogram','html_content','unfreeze','UpperTriangularMatrixQ','Parameters','lower','de-firstname','NotebookCreate','collision_point','layer_depth','›','GlobalRef','debug','(is|are|am|was)\x20going\x20to\x20(#Infinitive|#PresentTense)','copy-namespaces','institutes','IncludeDefinitions','csp','CounterStyle','ListPlot3D','LANGUAGE','.bundle.js','maxof','hidden','physics_particle_delete','text-indent','DuplicateFreeQ','SYSTEM','$async$or$plane','jan-thierson','onCompile','stylus','ev_joystick2_button8','Cell','eachFile','qr_thin_Q','SET_PIN_MODE','autoscroll','DynamicEvaluationTimeout','UnixTime','move_contact_all','apacheconf','while','RepeatingElement','getSensorTargets','emoji-class','lindex|10',')[fFdD]?\x5cb','NotebookFind','aliases','llama3-70b-8192','Center','^#Adverb+','FindIsomorphicSubgraph','primaryWeaponMagazine','%r!','c_green','heading','margin-block-start','RawBoxes','xs:NOTATION','platform::shell','3:rst,hed,hut,cut,set¦4:tbid¦5:dcast,eread,pread,erbid¦ought:uy,eek¦1ied:ny,ly,dy,ry,fy,py,vy,by,ty,cy¦1ung:ling,ting
,wing¦1pt:eep¦1ank:rink¦1ore:bear,wear¦1ave:give¦1oze:reeze¦1ound:rind,wind¦1ook:take,hake¦1aw:see¦1old:sell¦1ote:rite¦1ole:teal¦1unk:tink¦1am:wim¦1ay:lie¦1ood:tand¦1eld:hold¦2d:he,ge,re,le,leed,ne,reed,be,ye,lee,pe,we¦2ed:dd,oy,or,ey,gg,rr,us,ew,to¦2ame:ecome,rcome¦2ped:ap¦2ged:ag,og,ug,eg¦2bed:ub,ab,ib,ob¦2lt:neel¦2id:pay¦2ang:pring¦2ove:trive¦2med:um¦2ode:rride¦2at:ysit¦3ted:mit,hat,mat,lat,pot,rot,bat¦3ed:low,end,tow,und,ond,eem,lay,cho,dow,xit,eld,ald,uld,law,lel,eat,oll,ray,ank,fin,oam,out,how,iek,tay,haw,ait,vet,say,cay,bow¦3d:ste,ede,ode,ete,ree,ude,ame,oke,ote,ime,ute,ade¦3red:lur,cur,pur,car¦3ped:hop,rop,uip,rip,lip,tep,top¦3ded:bed,rod,kid¦3ade:orbid¦3led:uel¦3ned:lan,can,kin,pan,tun¦3med:rim,lim¦4ted:quit,llot¦4ed:pear,rrow,rand,lean,mand,anel,pand,reet,link,abel,evel,imit,ceed,ruit,mind,peal,veal,hool,head,pell,well,mell,uell,band,hear,weak¦4led:nnel,qual,ebel,ivel¦4red:nfer,efer,sfer¦4n:sake,trew¦4d:ntee¦4ded:hred¦4ned:rpin¦5ed:light,nceal,right,ndear,arget,hread,eight,rtial,eboot¦5d:edite,nvite¦5ted:egret¦5led:ravel','LLVM\x20IR','𝔦','safe','battlefield','MathieuS','ElementData','VerticalForm','sliderValue','#Determiner\x20[#Gerund]\x20#Noun','DefaultElement','ѕ','ads_event','Uninstall','runInitScript','WolframLanguageData','draw_roundrect_ext','ack','buffer_vbuffer','ŋ','FisherZDistribution','audio_sound_get_track_position','Scheduler','setPilotLight','dhms','game_project_name','would\x20have\x20been\x20#PastTense','ToCodePoint','setVehiclePosition','either','\x5cb_*rig[A-Z][A-Za-z0-9_\x5c-]*','Defined','$IterationLimit','texture','⌒','mathemat','clothes','rainParams','rest-after','SubtractFrom','TreeSize','IApplication\x20IAccessRights\x20IAccountRepository\x20IAccountSelectionRestrictions\x20IAction\x20IActionList\x20IAdministrationHistoryDescription\x20IAnchors\x20IApplication\x20IArchiveInfo\x20IAttachment\x20IAttachmentList\x20ICheckListBox\x20ICheckPointedList\x20IColumn\x20IComponent\x20IComponentDescription\x20IComponentToken\x20IComponentTok
enFactory\x20IComponentTokenInfo\x20ICompRecordInfo\x20IConnection\x20IContents\x20IControl\x20IControlJob\x20IControlJobInfo\x20IControlList\x20ICrypto\x20ICrypto2\x20ICustomJob\x20ICustomJobInfo\x20ICustomListBox\x20ICustomObjectWizardStep\x20ICustomWork\x20ICustomWorkInfo\x20IDataSet\x20IDataSetAccessInfo\x20IDataSigner\x20IDateCriterion\x20IDateRequisite\x20IDateRequisiteDescription\x20IDateValue\x20IDeaAccessRights\x20IDeaObjectInfo\x20IDevelopmentComponentLock\x20IDialog\x20IDialogFactory\x20IDialogPickRequisiteItems\x20IDialogsFactory\x20IDICSFactory\x20IDocRequisite\x20IDocumentInfo\x20IDualListDialog\x20IECertificate\x20IECertificateInfo\x20IECertificates\x20IEditControl\x20IEditorForm\x20IEdmsExplorer\x20IEdmsObject\x20IEdmsObjectDescription\x20IEdmsObjectFactory\x20IEdmsObjectInfo\x20IEDocument\x20IEDocumentAccessRights\x20IEDocumentDescription\x20IEDocumentEditor\x20IEDocumentFactory\x20IEDocumentInfo\x20IEDocumentStorage\x20IEDocumentVersion\x20IEDocumentVersionListDialog\x20IEDocumentVersionSource\x20IEDocumentWizardStep\x20IEDocVerSignature\x20IEDocVersionState\x20IEnabledMode\x20IEncodeProvider\x20IEncrypter\x20IEvent\x20IEventList\x20IException\x20IExternalEvents\x20IExternalHandler\x20IFactory\x20IField\x20IFileDialog\x20IFolder\x20IFolderDescription\x20IFolderDialog\x20IFolderFactory\x20IFolderInfo\x20IForEach\x20IForm\x20IFormTitle\x20IFormWizardStep\x20IGlobalIDFactory\x20IGlobalIDInfo\x20IGrid\x20IHasher\x20IHistoryDescription\x20IHyperLinkControl\x20IImageButton\x20IImageControl\x20IInnerPanel\x20IInplaceHint\x20IIntegerCriterion\x20IIntegerList\x20IIntegerRequisite\x20IIntegerValue\x20IISBLEditorForm\x20IJob\x20IJobDescription\x20IJobFactory\x20IJobForm\x20IJobInfo\x20ILabelControl\x20ILargeIntegerCriterion\x20ILargeIntegerRequisite\x20ILargeIntegerValue\x20ILicenseInfo\x20ILifeCycleStage\x20IList\x20IListBox\x20ILocalIDInfo\x20ILocalization\x20ILock\x20IMemoryDataSet\x20IMessagingFactory\x20IMetadataRepository\x20INotice\x20INoticeInfo\x20IN
umericCriterion\x20INumericRequisite\x20INumericValue\x20IObject\x20IObjectDescription\x20IObjectImporter\x20IObjectInfo\x20IObserver\x20IPanelGroup\x20IPickCriterion\x20IPickProperty\x20IPickRequisite\x20IPickRequisiteDescription\x20IPickRequisiteItem\x20IPickRequisiteItems\x20IPickValue\x20IPrivilege\x20IPrivilegeList\x20IProcess\x20IProcessFactory\x20IProcessMessage\x20IProgress\x20IProperty\x20IPropertyChangeEvent\x20IQuery\x20IReference\x20IReferenceCriterion\x20IReferenceEnabledMode\x20IReferenceFactory\x20IReferenceHistoryDescription\x20IReferenceInfo\x20IReferenceRecordCardWizardStep\x20IReferenceRequisiteDescription\x20IReferencesFactory\x20IReferenceValue\x20IRefRequisite\x20IReport\x20IReportFactory\x20IRequisite\x20IRequisiteDescription\x20IRequisiteDescriptionList\x20IRequisiteFactory\x20IRichEdit\x20IRouteStep\x20IRule\x20IRuleList\x20ISchemeBlock\x20IScript\x20IScriptFactory\x20ISearchCriteria\x20ISearchCriterion\x20ISearchDescription\x20ISearchFactory\x20ISearchFolderInfo\x20ISearchForObjectDescription\x20ISearchResultRestrictions\x20ISecuredContext\x20ISelectDialog\x20IServerEvent\x20IServerEventFactory\x20IServiceDialog\x20IServiceFactory\x20ISignature\x20ISignProvider\x20ISignProvider2\x20ISignProvider3\x20ISimpleCriterion\x20IStringCriterion\x20IStringList\x20IStringRequisite\x20IStringRequisiteDescription\x20IStringValue\x20ISystemDialogsFactory\x20ISystemInfo\x20ITabSheet\x20ITask\x20ITaskAbortReasonInfo\x20ITaskCardWizardStep\x20ITaskDescription\x20ITaskFactory\x20ITaskInfo\x20ITaskRoute\x20ITextCriterion\x20ITextRequisite\x20ITextValue\x20ITreeListSelectDialog\x20IUser\x20IUserList\x20IValue\x20IView\x20IWebBrowserControl\x20IWizard\x20IWizardAction\x20IWizardFactory\x20IWizardFormElement\x20IWizardParam\x20IWizardPickParam\x20IWizardReferenceParam\x20IWizardStep\x20IWorkAccessRights\x20IWorkDescription\x20IWorkflowAskableParam\x20IWorkflowAskableParams\x20IWorkflowBlock\x20IWorkflowBlockResult\x20IWorkflowEnabledMode\x20IWorkflowParam\x20IWo
rkflowPickParam\x20IWorkflowReferenceParam\x20IWorkState\x20IWorkTreeCustomNode\x20IWorkTreeJobNode\x20IWorkTreeTaskNode\x20IXMLEditorForm\x20SBCrypto\x20','MoonPhase','canStand','secondBest','SharedArrayBuffer','mailto:','Unexpected\x20response\x20structure\x20from\x20Semantic\x20Scholar','#Pronoun\x20[#Adjective]\x20#Determiner\x20#Adjective?\x20#Noun','achievement_post_score','layer_tile_get_xscale','^[\x20\x09]*([*+-]|(\x5cd+\x5c.))(?=\x5cs+)','BoundingRegion','findIndex','PFont','_','⟹','DigitBlock','DATE$','vectored','beta_proportion','client','⏟','transpose','border-spacing','$MaxLicenseSubprocesses','overly-weakened',')\x5c.?|(','wishart','SquareMatrixQ','DivisorSum','will\x20#Adjective','ctrlCommitted','#Month\x20#Ordinal\x20#Cardinal','steam_clear_achievement','⇗','ordered_list_close','cookies','CopulaDistribution','western-line','extract','barn','ppEffectCommitted','src_Cc','DEFAULT_CHARSET','function\x20if\x20in\x20break\x20next\x20repeat\x20else\x20for\x20while','FindRoot','trailz','delete','onBriefingTeamSwitch','й','^[go]\x20to\x20.','(face|embrace|reveal|stop|start|resume)\x20%Adj|Gerund%','TransferFunctionZeros','removeDiarySubject','within','mastery\x20understanding','invalid-input','LessThan','productVersion','surfaceNormal','long-live','ℸ','true¦0:3C;1:3Q;2:3F;a3Tb3Cc33d2Te2Mf2Ag1Wh1Li1Fj1Ek1Bl13m0Xn0So0Rp0Iqu0Gr07sHtCug0vAw4y3za0Q;el10ouN;ary,e6hi5i3ry;ck0Cde,l3n1ry,se;d,y;ny,te;a3i3R;k,ry;a3erda2ulgar;gue,in,st;a6en2Xhi5i4ouZr3;anqu2Cen1ue;dy,g36me0ny;ck,rs28;ll,me,rt,wd3I;aRcaPeOhMiLkin0BlImGoEpDt6u4w3;eet,ift;b3dd0Wperfi21rre28;sta26t21;a8e7iff,r4u3;pUr1;a4ict,o3;ng;ig2Vn0N;a1ep,rn;le,rk,te0;e1Si2Vright0;ci1Yft,l3on,re;emn,id;a3el0;ll,rt;e4i3y;g2Mm0Z;ek,nd2T;ck24l0mp1L;a3iRrill,y;dy,l01rp;ve0Jxy;n1Jr3;ce,y;d,fe,int0l1Hv0V;a8e6i5o3ude;mantic,o19sy,u3;gh;pe,t1P;a3d,mo0A;dy,l;gg4iFndom,p3re,w;id;ed;ai2i3;ck,et;hoAi1Fl9o8r5u3;ny,r3;e,p11;egna2ic4o3;fouSud;ey,k0;liXor;ain,easa2;ny;dd,i0ld,ranL;aive,e5i4o3u14;b0Sisy,rm0Ysy;bb0ce,mb0R;a3r1w;r,t;ad,e
5ild,o4u3;nda12te;ist,o1;a4ek,l3;low;s0ty;a8e7i6o3ucky;f0Jn4o15u3ve0w10y0N;d,sy;e0g;ke0l,mp,tt0Eve0;e1Qwd;me,r3te;ge;e4i3;nd;en;ol0ui19;cy,ll,n3;secu6t3;e3ima4;llege2rmedia3;te;re;aAe7i6o5u3;ge,m3ng1C;bYid;me0t;gh,l0;a3fXsita2;dy,rWv3;en0y;nd13ppy,r3;d3sh;!y;aFenEhCiBlAoofy,r3;a8e6i5o3ue0Z;o3ss;vy;m,s0;at,e3y;dy,n;nd,y;ad,ib,ooD;a2d1;a3o3;st0;tDuiS;u1y;aCeebBi9l8o6r5u3;ll,n3r0N;!ny;aCesh,iend0;a3nd,rmD;my;at,ir7;erce,nan3;ci9;le;r,ul3;ty;a6erie,sse4v3xtre0B;il;nti3;al;r4s3;tern,y;ly,th0;appZe9i5ru4u3;mb;nk;r5vi4z3;zy;ne;e,ty;a3ep,n9;d3f,r;!ly;agey,h8l7o5r4u3;dd0r0te;isp,uel;ar3ld,mmon,st0ward0zy;se;evKou1;e3il0;ap,e3;sy;aHiFlCoAr5u3;ff,r0sy;ly;a6i3oad;g4llia2;nt;ht;sh,ve;ld,un3;cy;a4o3ue;nd,o1;ck,nd;g,tt3;er;d,ld,w1;dy;bsu6ng5we3;so3;me;ry;rd','WignerD','≏̸','ev_no_button','setFeatureType','path_rotate','c++','setVehicleCargo','$high','tgamma','#Determiner\x20#Adverb?\x20[close]','clearMagazinePool','Distinct','⫨','OutOfRangeException','ev_right_release','sir','renames','#Multiple\x20#Value','𝔴','script','case_insensitive','Interpreter','Retrieving\x20relevant\x20sentences','Error','numberToDate','analyze','CelestialSystem','#Copula\x20[(just|alone)]$','(\x5cb(','FontReencoding','boundingBoxReal','Locate','RemoveAudioStream','alot','CarlsonRC','ont','phy_joint_max_torque','GeneratorFunction','Exclusions','radians','forupdate','NevilleThetaD','buffer_outofbounds','ScrollingOptions','makensis','some-kind-of','audio_free_play_queue','suffixPatterns','4-pounds','int\x20float\x20string\x20vector\x20matrix\x20if\x20else\x20switch\x20case\x20default\x20while\x20do\x20for\x20in\x20break\x20continue\x20global\x20proc\x20return\x20about\x20abs\x20addAttr\x20addAttributeEditorNodeHelp\x20addDynamic\x20addNewShelfTab\x20addPP\x20addPanelCategory\x20addPrefixToName\x20advanceToNextDrivenKey\x20affectedNet\x20affects\x20aimConstraint\x20air\x20alias\x20aliasAttr\x20align\x20alignCtx\x20alignCurve\x20alignSurface\x20allViewFit\x20ambientLight\x20angle\x20angleBetween\x20animCone\x
20animCurveEditor\x20animDisplay\x20animView\x20annotate\x20appendStringArray\x20applicationName\x20applyAttrPreset\x20applyTake\x20arcLenDimContext\x20arcLengthDimension\x20arclen\x20arrayMapper\x20art3dPaintCtx\x20artAttrCtx\x20artAttrPaintVertexCtx\x20artAttrSkinPaintCtx\x20artAttrTool\x20artBuildPaintMenu\x20artFluidAttrCtx\x20artPuttyCtx\x20artSelectCtx\x20artSetPaintCtx\x20artUserPaintCtx\x20assignCommand\x20assignInputDevice\x20assignViewportFactories\x20attachCurve\x20attachDeviceAttr\x20attachSurface\x20attrColorSliderGrp\x20attrCompatibility\x20attrControlGrp\x20attrEnumOptionMenu\x20attrEnumOptionMenuGrp\x20attrFieldGrp\x20attrFieldSliderGrp\x20attrNavigationControlGrp\x20attrPresetEditWin\x20attributeExists\x20attributeInfo\x20attributeMenu\x20attributeQuery\x20autoKeyframe\x20autoPlace\x20bakeClip\x20bakeFluidShading\x20bakePartialHistory\x20bakeResults\x20bakeSimulation\x20basename\x20basenameEx\x20batchRender\x20bessel\x20bevel\x20bevelPlus\x20binMembership\x20bindSkin\x20blend2\x20blendShape\x20blendShapeEditor\x20blendShapePanel\x20blendTwoAttr\x20blindDataType\x20boneLattice\x20boundary\x20boxDollyCtx\x20boxZoomCtx\x20bufferCurve\x20buildBookmarkMenu\x20buildKeyframeMenu\x20button\x20buttonManip\x20CBG\x20cacheFile\x20cacheFileCombine\x20cacheFileMerge\x20cacheFileTrack\x20camera\x20cameraView\x20canCreateManip\x20canvas\x20capitalizeString\x20catch\x20catchQuiet\x20ceil\x20changeSubdivComponentDisplayLevel\x20changeSubdivRegion\x20channelBox\x20character\x20characterMap\x20characterOutlineEditor\x20characterize\x20chdir\x20checkBox\x20checkBoxGrp\x20checkDefaultRenderGlobals\x20choice\x20circle\x20circularFillet\x20clamp\x20clear\x20clearCache\x20clip\x20clipEditor\x20clipEditorCurrentTimeCtx\x20clipSchedule\x20clipSchedulerOutliner\x20clipTrimBefore\x20closeCurve\x20closeSurface\x20cluster\x20cmdFileOutput\x20cmdScrollFieldExecuter\x20cmdScrollFieldReporter\x20cmdShell\x20coarsenSubdivSelectionList\x20collision\x20color\x20colorAtPoint\x20colorEd
itor\x20colorIndex\x20colorIndexSliderGrp\x20colorSliderButtonGrp\x20colorSliderGrp\x20columnLayout\x20commandEcho\x20commandLine\x20commandPort\x20compactHairSystem\x20componentEditor\x20compositingInterop\x20computePolysetVolume\x20condition\x20cone\x20confirmDialog\x20connectAttr\x20connectControl\x20connectDynamic\x20connectJoint\x20connectionInfo\x20constrain\x20constrainValue\x20constructionHistory\x20container\x20containsMultibyte\x20contextInfo\x20control\x20convertFromOldLayers\x20convertIffToPsd\x20convertLightmap\x20convertSolidTx\x20convertTessellation\x20convertUnit\x20copyArray\x20copyFlexor\x20copyKey\x20copySkinWeights\x20cos\x20cpButton\x20cpCache\x20cpClothSet\x20cpCollision\x20cpConstraint\x20cpConvClothToMesh\x20cpForces\x20cpGetSolverAttr\x20cpPanel\x20cpProperty\x20cpRigidCollisionFilter\x20cpSeam\x20cpSetEdit\x20cpSetSolverAttr\x20cpSolver\x20cpSolverTypes\x20cpTool\x20cpUpdateClothUVs\x20createDisplayLayer\x20createDrawCtx\x20createEditor\x20createLayeredPsdFile\x20createMotionField\x20createNewShelf\x20createNode\x20createRenderLayer\x20createSubdivRegion\x20cross\x20crossProduct\x20ctxAbort\x20ctxCompletion\x20ctxEditMode\x20ctxTraverse\x20currentCtx\x20currentTime\x20currentTimeCtx\x20currentUnit\x20curve\x20curveAddPtCtx\x20curveCVCtx\x20curveEPCtx\x20curveEditorCtx\x20curveIntersect\x20curveMoveEPCtx\x20curveOnSurface\x20curveSketchCtx\x20cutKey\x20cycleCheck\x20cylinder\x20dagPose\x20date\x20defaultLightListCheckBox\x20defaultNavigation\x20defineDataServer\x20defineVirtualDevice\x20deformer\x20deg_to_rad\x20delete\x20deleteAttr\x20deleteShadingGroupsAndMaterials\x20deleteShelfTab\x20deleteUI\x20deleteUnusedBrushes\x20delrandstr\x20detachCurve\x20detachDeviceAttr\x20detachSurface\x20deviceEditor\x20devicePanel\x20dgInfo\x20dgdirty\x20dgeval\x20dgtimer\x20dimWhen\x20directKeyCtx\x20directionalLight\x20dirmap\x20dirname\x20disable\x20disconnectAttr\x20disconnectJoint\x20diskCache\x20displacementToPoly\x20displayAffected\x20displayColor\x20
displayCull\x20displayLevelOfDetail\x20displayPref\x20displayRGBColor\x20displaySmoothness\x20displayStats\x20displayString\x20displaySurface\x20distanceDimContext\x20distanceDimension\x20doBlur\x20dolly\x20dollyCtx\x20dopeSheetEditor\x20dot\x20dotProduct\x20doubleProfileBirailSurface\x20drag\x20dragAttrContext\x20draggerContext\x20dropoffLocator\x20duplicate\x20duplicateCurve\x20duplicateSurface\x20dynCache\x20dynControl\x20dynExport\x20dynExpression\x20dynGlobals\x20dynPaintEditor\x20dynParticleCtx\x20dynPref\x20dynRelEdPanel\x20dynRelEditor\x20dynamicLoad\x20editAttrLimits\x20editDisplayLayerGlobals\x20editDisplayLayerMembers\x20editRenderLayerAdjustment\x20editRenderLayerGlobals\x20editRenderLayerMembers\x20editor\x20editorTemplate\x20effector\x20emit\x20emitter\x20enableDevice\x20encodeString\x20endString\x20endsWith\x20env\x20equivalent\x20equivalentTol\x20erf\x20error\x20eval\x20evalDeferred\x20evalEcho\x20event\x20exactWorldBoundingBox\x20exclusiveLightCheckBox\x20exec\x20executeForEachObject\x20exists\x20exp\x20expression\x20expressionEditorListen\x20extendCurve\x20extendSurface\x20extrude\x20fcheck\x20fclose\x20feof\x20fflush\x20fgetline\x20fgetword\x20file\x20fileBrowserDialog\x20fileDialog\x20fileExtension\x20fileInfo\x20filetest\x20filletCurve\x20filter\x20filterCurve\x20filterExpand\x20filterStudioImport\x20findAllIntersections\x20findAnimCurves\x20findKeyframe\x20findMenuItem\x20findRelatedSkinCluster\x20finder\x20firstParentOf\x20fitBspline\x20flexor\x20floatEq\x20floatField\x20floatFieldGrp\x20floatScrollBar\x20floatSlider\x20floatSlider2\x20floatSliderButtonGrp\x20floatSliderGrp\x20floor\x20flow\x20fluidCacheInfo\x20fluidEmitter\x20fluidVoxelInfo\x20flushUndo\x20fmod\x20fontDialog\x20fopen\x20formLayout\x20format\x20fprint\x20frameLayout\x20fread\x20freeFormFillet\x20frewind\x20fromNativePath\x20fwrite\x20gamma\x20gauss\x20geometryConstraint\x20getApplicationVersionAsFloat\x20getAttr\x20getClassification\x20getDefaultBrush\x20getFileList\x20getFlui
dAttr\x20getInputDeviceRange\x20getMayaPanelTypes\x20getModifiers\x20getPanel\x20getParticleAttr\x20getPluginResource\x20getenv\x20getpid\x20glRender\x20glRenderEditor\x20globalStitch\x20gmatch\x20goal\x20gotoBindPose\x20grabColor\x20gradientControl\x20gradientControlNoAttr\x20graphDollyCtx\x20graphSelectContext\x20graphTrackCtx\x20gravity\x20grid\x20gridLayout\x20group\x20groupObjectsByName\x20HfAddAttractorToAS\x20HfAssignAS\x20HfBuildEqualMap\x20HfBuildFurFiles\x20HfBuildFurImages\x20HfCancelAFR\x20HfConnectASToHF\x20HfCreateAttractor\x20HfDeleteAS\x20HfEditAS\x20HfPerformCreateAS\x20HfRemoveAttractorFromAS\x20HfSelectAttached\x20HfSelectAttractors\x20HfUnAssignAS\x20hardenPointCurve\x20hardware\x20hardwareRenderPanel\x20headsUpDisplay\x20headsUpMessage\x20help\x20helpLine\x20hermite\x20hide\x20hilite\x20hitTest\x20hotBox\x20hotkey\x20hotkeyCheck\x20hsv_to_rgb\x20hudButton\x20hudSlider\x20hudSliderButton\x20hwReflectionMap\x20hwRender\x20hwRenderLoad\x20hyperGraph\x20hyperPanel\x20hyperShade\x20hypot\x20iconTextButton\x20iconTextCheckBox\x20iconTextRadioButton\x20iconTextRadioCollection\x20iconTextScrollList\x20iconTextStaticLabel\x20ikHandle\x20ikHandleCtx\x20ikHandleDisplayScale\x20ikSolver\x20ikSplineHandleCtx\x20ikSystem\x20ikSystemInfo\x20ikfkDisplayMethod\x20illustratorCurves\x20image\x20imfPlugins\x20inheritTransform\x20insertJoint\x20insertJointCtx\x20insertKeyCtx\x20insertKnotCurve\x20insertKnotSurface\x20instance\x20instanceable\x20instancer\x20intField\x20intFieldGrp\x20intScrollBar\x20intSlider\x20intSliderGrp\x20interToUI\x20internalVar\x20intersect\x20iprEngine\x20isAnimCurve\x20isConnected\x20isDirty\x20isParentOf\x20isSameObject\x20isTrue\x20isValidObjectName\x20isValidString\x20isValidUiName\x20isolateSelect\x20itemFilter\x20itemFilterAttr\x20itemFilterRender\x20itemFilterType\x20joint\x20jointCluster\x20jointCtx\x20jointDisplayScale\x20jointLattice\x20keyTangent\x20keyframe\x20keyframeOutliner\x20keyframeRegionCurrentTimeCtx\x20keyframeRegionDir
ectKeyCtx\x20keyframeRegionDollyCtx\x20keyframeRegionInsertKeyCtx\x20keyframeRegionMoveKeyCtx\x20keyframeRegionScaleKeyCtx\x20keyframeRegionSelectKeyCtx\x20keyframeRegionSetKeyCtx\x20keyframeRegionTrackCtx\x20keyframeStats\x20lassoContext\x20lattice\x20latticeDeformKeyCtx\x20launch\x20launchImageEditor\x20layerButton\x20layeredShaderPort\x20layeredTexturePort\x20layout\x20layoutDialog\x20lightList\x20lightListEditor\x20lightListPanel\x20lightlink\x20lineIntersection\x20linearPrecision\x20linstep\x20listAnimatable\x20listAttr\x20listCameras\x20listConnections\x20listDeviceAttachments\x20listHistory\x20listInputDeviceAxes\x20listInputDeviceButtons\x20listInputDevices\x20listMenuAnnotation\x20listNodeTypes\x20listPanelCategories\x20listRelatives\x20listSets\x20listTransforms\x20listUnselected\x20listerEditor\x20loadFluid\x20loadNewShelf\x20loadPlugin\x20loadPluginLanguageResources\x20loadPrefObjects\x20localizedPanelLabel\x20lockNode\x20loft\x20log\x20longNameOf\x20lookThru\x20ls\x20lsThroughFilter\x20lsType\x20lsUI\x20Mayatomr\x20mag\x20makeIdentity\x20makeLive\x20makePaintable\x20makeRoll\x20makeSingleSurface\x20makeTubeOn\x20makebot\x20manipMoveContext\x20manipMoveLimitsCtx\x20manipOptions\x20manipRotateContext\x20manipRotateLimitsCtx\x20manipScaleContext\x20manipScaleLimitsCtx\x20marker\x20match\x20max\x20memory\x20menu\x20menuBarLayout\x20menuEditor\x20menuItem\x20menuItemToShelf\x20menuSet\x20menuSetPref\x20messageLine\x20min\x20minimizeApp\x20mirrorJoint\x20modelCurrentTimeCtx\x20modelEditor\x20modelPanel\x20mouse\x20movIn\x20movOut\x20move\x20moveIKtoFK\x20moveKeyCtx\x20moveVertexAlongDirection\x20multiProfileBirailSurface\x20mute\x20nParticle\x20nameCommand\x20nameField\x20namespace\x20namespaceInfo\x20newPanelItems\x20newton\x20nodeCast\x20nodeIconButton\x20nodeOutliner\x20nodePreset\x20nodeType\x20noise\x20nonLinear\x20normalConstraint\x20normalize\x20nurbsBoolean\x20nurbsCopyUVSet\x20nurbsCube\x20nurbsEditUV\x20nurbsPlane\x20nurbsSelect\x20nurbsSquare\x20nu
rbsToPoly\x20nurbsToPolygonsPref\x20nurbsToSubdiv\x20nurbsToSubdivPref\x20nurbsUVSet\x20nurbsViewDirectionVector\x20objExists\x20objectCenter\x20objectLayer\x20objectType\x20objectTypeUI\x20obsoleteProc\x20oceanNurbsPreviewPlane\x20offsetCurve\x20offsetCurveOnSurface\x20offsetSurface\x20openGLExtension\x20openMayaPref\x20optionMenu\x20optionMenuGrp\x20optionVar\x20orbit\x20orbitCtx\x20orientConstraint\x20outlinerEditor\x20outlinerPanel\x20overrideModifier\x20paintEffectsDisplay\x20pairBlend\x20palettePort\x20paneLayout\x20panel\x20panelConfiguration\x20panelHistory\x20paramDimContext\x20paramDimension\x20paramLocator\x20parent\x20parentConstraint\x20particle\x20particleExists\x20particleInstancer\x20particleRenderInfo\x20partition\x20pasteKey\x20pathAnimation\x20pause\x20pclose\x20percent\x20performanceOptions\x20pfxstrokes\x20pickWalk\x20picture\x20pixelMove\x20planarSrf\x20plane\x20play\x20playbackOptions\x20playblast\x20plugAttr\x20plugNode\x20pluginInfo\x20pluginResourceUtil\x20pointConstraint\x20pointCurveConstraint\x20pointLight\x20pointMatrixMult\x20pointOnCurve\x20pointOnSurface\x20pointPosition\x20poleVectorConstraint\x20polyAppend\x20polyAppendFacetCtx\x20polyAppendVertex\x20polyAutoProjection\x20polyAverageNormal\x20polyAverageVertex\x20polyBevel\x20polyBlendColor\x20polyBlindData\x20polyBoolOp\x20polyBridgeEdge\x20polyCacheMonitor\x20polyCheck\x20polyChipOff\x20polyClipboard\x20polyCloseBorder\x20polyCollapseEdge\x20polyCollapseFacet\x20polyColorBlindData\x20polyColorDel\x20polyColorPerVertex\x20polyColorSet\x20polyCompare\x20polyCone\x20polyCopyUV\x20polyCrease\x20polyCreaseCtx\x20polyCreateFacet\x20polyCreateFacetCtx\x20polyCube\x20polyCut\x20polyCutCtx\x20polyCylinder\x20polyCylindricalProjection\x20polyDelEdge\x20polyDelFacet\x20polyDelVertex\x20polyDuplicateAndConnect\x20polyDuplicateEdge\x20polyEditUV\x20polyEditUVShell\x20polyEvaluate\x20polyExtrudeEdge\x20polyExtrudeFacet\x20polyExtrudeVertex\x20polyFlipEdge\x20polyFlipUV\x20polyForceUV\x20polyGe
oSampler\x20polyHelix\x20polyInfo\x20polyInstallAction\x20polyLayoutUV\x20polyListComponentConversion\x20polyMapCut\x20polyMapDel\x20polyMapSew\x20polyMapSewMove\x20polyMergeEdge\x20polyMergeEdgeCtx\x20polyMergeFacet\x20polyMergeFacetCtx\x20polyMergeUV\x20polyMergeVertex\x20polyMirrorFace\x20polyMoveEdge\x20polyMoveFacet\x20polyMoveFacetUV\x20polyMoveUV\x20polyMoveVertex\x20polyNormal\x20polyNormalPerVertex\x20polyNormalizeUV\x20polyOptUvs\x20polyOptions\x20polyOutput\x20polyPipe\x20polyPlanarProjection\x20polyPlane\x20polyPlatonicSolid\x20polyPoke\x20polyPrimitive\x20polyPrism\x20polyProjection\x20polyPyramid\x20polyQuad\x20polyQueryBlindData\x20polyReduce\x20polySelect\x20polySelectConstraint\x20polySelectConstraintMonitor\x20polySelectCtx\x20polySelectEditCtx\x20polySeparate\x20polySetToFaceNormal\x20polySewEdge\x20polyShortestPathCtx\x20polySmooth\x20polySoftEdge\x20polySphere\x20polySphericalProjection\x20polySplit\x20polySplitCtx\x20polySplitEdge\x20polySplitRing\x20polySplitVertex\x20polyStraightenUVBorder\x20polySubdivideEdge\x20polySubdivideFacet\x20polyToSubdiv\x20polyTorus\x20polyTransfer\x20polyTriangulate\x20polyUVSet\x20polyUnite\x20polyWedgeFace\x20popen\x20popupMenu\x20pose\x20pow\x20preloadRefEd\x20print\x20progressBar\x20progressWindow\x20projFileViewer\x20projectCurve\x20projectTangent\x20projectionContext\x20projectionManip\x20promptDialog\x20propModCtx\x20propMove\x20psdChannelOutliner\x20psdEditTextureFile\x20psdExport\x20psdTextureFile\x20putenv\x20pwd\x20python\x20querySubdiv\x20quit\x20rad_to_deg\x20radial\x20radioButton\x20radioButtonGrp\x20radioCollection\x20radioMenuItemCollection\x20rampColorPort\x20rand\x20randomizeFollicles\x20randstate\x20rangeControl\x20readTake\x20rebuildCurve\x20rebuildSurface\x20recordAttr\x20recordDevice\x20redo\x20reference\x20referenceEdit\x20referenceQuery\x20refineSubdivSelectionList\x20refresh\x20refreshAE\x20registerPluginResource\x20rehash\x20reloadImage\x20removeJoint\x20removeMultiInstance\x20removePanel
Category\x20rename\x20renameAttr\x20renameSelectionList\x20renameUI\x20render\x20renderGlobalsNode\x20renderInfo\x20renderLayerButton\x20renderLayerParent\x20renderLayerPostProcess\x20renderLayerUnparent\x20renderManip\x20renderPartition\x20renderQualityNode\x20renderSettings\x20renderThumbnailUpdate\x20renderWindowEditor\x20renderWindowSelectContext\x20renderer\x20reorder\x20reorderDeformers\x20requires\x20reroot\x20resampleFluid\x20resetAE\x20resetPfxToPolyCamera\x20resetTool\x20resolutionNode\x20retarget\x20reverseCurve\x20reverseSurface\x20revolve\x20rgb_to_hsv\x20rigidBody\x20rigidSolver\x20roll\x20rollCtx\x20rootOf\x20rot\x20rotate\x20rotationInterpolation\x20roundConstantRadius\x20rowColumnLayout\x20rowLayout\x20runTimeCommand\x20runup\x20sampleImage\x20saveAllShelves\x20saveAttrPreset\x20saveFluid\x20saveImage\x20saveInitialState\x20saveMenu\x20savePrefObjects\x20savePrefs\x20saveShelf\x20saveToolSettings\x20scale\x20scaleBrushBrightness\x20scaleComponents\x20scaleConstraint\x20scaleKey\x20scaleKeyCtx\x20sceneEditor\x20sceneUIReplacement\x20scmh\x20scriptCtx\x20scriptEditorInfo\x20scriptJob\x20scriptNode\x20scriptTable\x20scriptToShelf\x20scriptedPanel\x20scriptedPanelType\x20scrollField\x20scrollLayout\x20sculpt\x20searchPathArray\x20seed\x20selLoadSettings\x20select\x20selectContext\x20selectCurveCV\x20selectKey\x20selectKeyCtx\x20selectKeyframeRegionCtx\x20selectMode\x20selectPref\x20selectPriority\x20selectType\x20selectedNodes\x20selectionConnection\x20separator\x20setAttr\x20setAttrEnumResource\x20setAttrMapping\x20setAttrNiceNameResource\x20setConstraintRestPosition\x20setDefaultShadingGroup\x20setDrivenKeyframe\x20setDynamic\x20setEditCtx\x20setEditor\x20setFluidAttr\x20setFocus\x20setInfinity\x20setInputDeviceMapping\x20setKeyCtx\x20setKeyPath\x20setKeyframe\x20setKeyframeBlendshapeTargetWts\x20setMenuMode\x20setNodeNiceNameResource\x20setNodeTypeFlag\x20setParent\x20setParticleAttr\x20setPfxToPolyCamera\x20setPluginResource\x20setProject\x20setStam
pDensity\x20setStartupMessage\x20setState\x20setToolTo\x20setUITemplate\x20setXformManip\x20sets\x20shadingConnection\x20shadingGeometryRelCtx\x20shadingLightRelCtx\x20shadingNetworkCompare\x20shadingNode\x20shapeCompare\x20shelfButton\x20shelfLayout\x20shelfTabLayout\x20shellField\x20shortNameOf\x20showHelp\x20showHidden\x20showManipCtx\x20showSelectionInTitle\x20showShadingGroupAttrEditor\x20showWindow\x20sign\x20simplify\x20sin\x20singleProfileBirailSurface\x20size\x20sizeBytes\x20skinCluster\x20skinPercent\x20smoothCurve\x20smoothTangentSurface\x20smoothstep\x20snap2to2\x20snapKey\x20snapMode\x20snapTogetherCtx\x20snapshot\x20soft\x20softMod\x20softModCtx\x20sort\x20sound\x20soundControl\x20source\x20spaceLocator\x20sphere\x20sphrand\x20spotLight\x20spotLightPreviewPort\x20spreadSheetEditor\x20spring\x20sqrt\x20squareSurface\x20srtContext\x20stackTrace\x20startString\x20startsWith\x20stitchAndExplodeShell\x20stitchSurface\x20stitchSurfacePoints\x20strcmp\x20stringArrayCatenate\x20stringArrayContains\x20stringArrayCount\x20stringArrayInsertAtIndex\x20stringArrayIntersector\x20stringArrayRemove\x20stringArrayRemoveAtIndex\x20stringArrayRemoveDuplicates\x20stringArrayRemoveExact\x20stringArrayToString\x20stringToStringArray\x20strip\x20stripPrefixFromName\x20stroke\x20subdAutoProjection\x20subdCleanTopology\x20subdCollapse\x20subdDuplicateAndConnect\x20subdEditUV\x20subdListComponentConversion\x20subdMapCut\x20subdMapSewMove\x20subdMatchTopology\x20subdMirror\x20subdToBlind\x20subdToPoly\x20subdTransferUVsToCache\x20subdiv\x20subdivCrease\x20subdivDisplaySmoothness\x20substitute\x20substituteAllString\x20substituteGeometry\x20substring\x20surface\x20surfaceSampler\x20surfaceShaderList\x20swatchDisplayPort\x20switchTable\x20symbolButton\x20symbolCheckBox\x20sysFile\x20system\x20tabLayout\x20tan\x20tangentConstraint\x20texLatticeDeformContext\x20texManipContext\x20texMoveContext\x20texMoveUVShellContext\x20texRotateContext\x20texScaleContext\x20texSelectContext\x20te
xSelectShortestPathCtx\x20texSmudgeUVContext\x20texWinToolCtx\x20text\x20textCurves\x20textField\x20textFieldButtonGrp\x20textFieldGrp\x20textManip\x20textScrollList\x20textToShelf\x20textureDisplacePlane\x20textureHairColor\x20texturePlacementContext\x20textureWindow\x20threadCount\x20threePointArcCtx\x20timeControl\x20timePort\x20timerX\x20toNativePath\x20toggle\x20toggleAxis\x20toggleWindowVisibility\x20tokenize\x20tokenizeList\x20tolerance\x20tolower\x20toolButton\x20toolCollection\x20toolDropped\x20toolHasOptions\x20toolPropertyWindow\x20torus\x20toupper\x20trace\x20track\x20trackCtx\x20transferAttributes\x20transformCompare\x20transformLimits\x20translator\x20trim\x20trunc\x20truncateFluidCache\x20truncateHairCache\x20tumble\x20tumbleCtx\x20turbulence\x20twoPointArcCtx\x20uiRes\x20uiTemplate\x20unassignInputDevice\x20undo\x20undoInfo\x20ungroup\x20uniform\x20unit\x20unloadPlugin\x20untangleUV\x20untitledFileName\x20untrim\x20upAxis\x20updateAE\x20userCtx\x20uvLink\x20uvSnapshot\x20validateShelfName\x20vectorize\x20view2dToolCtx\x20viewCamera\x20viewClipPlane\x20viewFit\x20viewHeadOn\x20viewLookAt\x20viewManip\x20viewPlace\x20viewSet\x20visor\x20volumeAxis\x20vortex\x20waitCursor\x20warning\x20webBrowser\x20webBrowserPrefs\x20whatIs\x20window\x20windowPref\x20wire\x20wireContext\x20workspace\x20wrinkle\x20wrinkleContext\x20writeTake\x20xbmLangPathList\x20xform','BackFaceOpacity','nee','sprite_flush','popd','company','Union','current_time','caisse','infra','mb-2','[\x5c.#:&\x5c[>]','sqlstate','Friday','MKS$','pessimisticlock','game_save_id','_Alignof','ComponentwiseContextMenu','date_time_string','PowersRepresentations','guid','\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20Previous\x20Conversations\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20New\x20Chat\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20Delete\x20all\x20Chats\x20\x20\x20\x20\x20','overcast','[about\x20to]\x20#Adverb?\x20#Verb','bm_inv_dest_color','layer_has_instance','ARABIC_CHARSET','fadeSound','ds_gr
id_add_grid_region','DeviceClose','AbsolutePointSize','⋅','MatchQ','dcmplx','canyon','BoxFrame','dimag','DedekindEta','BooleanQ','Derivative','NumericalSort','CHR$|10','⋰','MorphologicalComponents','unordered_map','ERL','DisableFormatting','SixJSymbol','c_navy','RotationMatrix','Subscripted','steam_download_friends_scores','rtauto','commandRadio','⋙̸','asc','ev_global_right_press','return\x20this','mailto:tinyml.aichat@gmail.com?subject=Help\x20with\x20TinyML','#Infinitive\x20#QuestionWord','endSMS','PlaneCurveData','hcGroupParams','layer_background_xscale','vk_lcontrol','BitXor','being','|\x5cb(case|return|throw)\x5cb)\x5cs*','generate','audio','eelim','DoubleUpDownArrow','DVAR','some\x20#Adjective\x20[#PresentTense]','NAME','camUseNVG','InterpretationBox','cabs','show_debug_message','buffer_grow','Caption','DeBruijnSequence','ConvertToPostScriptPacket','sendSimpleCommand','cxx','GammaRegularized','[0-9]+[bf]','.le.','splitTokens','setDynamicSimulationDistance','capnproto','sysfunc','MultivariateHypergeometricDistribution','þ','vbscript-html','˜','DeviceReadList','AcousticRadiationValue','freeLook','leaveVehicle','∤','canUnloadInCombat','splitWhitespace','quad_form','buffer_get_type','EchoLabel','Coroutine','OrderlessPatternSequence','EthernetClient','\x5cb(AV|CA|CF|CG|CI|CL|CM|CN|CT|MK|MP|MTK|MTL|NS|SCN|SK|UI|WK|XC)\x5cw+','$RandomState','⧥','getBurningValue','setting','SMALL','physics_joint_enable_motor','GeoLabels','#Emoji','build\x20create\x20index\x20delete\x20drop\x20explain\x20infer|10\x20insert\x20merge\x20prepare\x20select\x20update\x20upsert|10','SetOverwrite','layer_sprite_yscale','⫳','bm_src_color','hashValue','ev_create','tvSetPictureRight','Epilog','even\x20left','SpheroidalQS','readOnlyDict','ButtonCell','ine','iap_storeload_failed','xquery','is_vec4','intersection','⦶','TemplateEvaluate','regr_sxx','^Content','νѴѵѶѷ','Harvard','Loopback','achievement_load_friends','\x5cb(state_(entry|exit)|touch(_(start|end))?|(land_)?collision(_(start|end))?|timer|
listen|(no_)?sensor|control|(not_)?at_(rot_)?target|money|email|experience_permissions(_denied)?|run_time_permissions|changed|attach|dataserver|moving_(start|end)|link_message|(on|object)_rez|remote_data|http_re(sponse|quest)|path_update|transaction_result)\x5cb','text/xml',')*[\x20]*\x5c|','°','normalizeLink','MessagePacket','#Cardinal','setLightAttenuation','instantiate','to_array_1d','descendant-or-self::','ContainsAll','window_views_mouse_get_x','section','mapAnimAdd','last-of-type','compilerExtensions','linkLevel','green','⌎','NNS','embed','routeros','lady-place','ppEffectEnable','uuid','tvCurSel','DoubleUpArrow','curveTightness','EulerGamma','fwd','bullet','VALUE_LENGTH','Φ','al\x20(#Person|#ProperNoun)','TextStyle','CentralMoment','ropeAttachedObjects','isObjectRTD','bg-white\x20shadow-lg\x20rounded-lg\x20relative\x20max-w-lg\x20mx-auto','DQUOTE','PolarAxesOrigin','roboconf','math_set_epsilon','win8_license_initialize_sandbox','ImageSizeRaw','Paneled','randomGaussian','associates','Path','T.DIST.2T','implements','oneof','military','foo-up','ButtonBoxOptions','GroupOrder','VectorPlot3D','GAMMA.DIST','DynamicUpdating','border-inline','(want|wants)','[so]\x20#Determiner','Line','noFill','DoubleRightArrow','socket','StreamDensityPlot','UnequalTo','waypointCompletionRadius','GatedRecurrentLayer','^\x5cs*[a-z_\x5c.\x5c$][a-z0-9_\x5c.\x5c$]+:','SearchAdjustment','inRangeOfArtillery','URLBuild','transparent','Win8','gov','NumberDecompose','from\x20#Noun\x20to\x20[%Noun|Verb%]','$MachineID','deallocate','TreeCases','asset_shader','RunScheduledTask','nis','allUnits','charToSplit','toLowerCase','TagBoxNote','Style','WriteRegBin','imag','draw_line_colour','ObjectExistsQ','<%{1,2}={0,2}','(we|us)\x20[all]','leaf','OutOfMemoryError','string_width','goto/32','⇝','ugc_visibility_public','defineProperty','cr_handpoint','PointFigureChart','steam_set_achievement','┐','ds_stack_destroy','(//|\x22|#|/\x5c*|\x5cs+/\x5cw+)','isMarkedForCollection','HoldAll','(ncalls|tottime|cumtime
)','window_view_mouse_get_y','blockquote','camera_get_end_script','buffer_sha1','IMPRODUCT','lays','PiecewiseExpand','fine-artist','[eE][-+]?','#PastTense\x20#Possessive\x20[#Verb]','INTRATE','EllipticF','DecryptFile','attrs','discriminate','vertex_texcoord','white-space','ButtonSource','>','camera_set_default','$0\x20$1\x20$2\x20$3\x20$4\x20$5\x20$6\x20$7\x20$8\x20$9\x20$10\x20$11\x20$12\x20$13\x20$14\x20$15\x20$16\x20$17\x20$18\x20$19\x20$20\x20$21\x20$22\x20$23\x20$24\x20$25\x20$26\x20$27\x20$28\x20$29\x20$30\x20$31\x20zero\x20at\x20v0\x20v1\x20a0\x20a1\x20a2\x20a3\x20a4\x20a5\x20a6\x20a7\x20t0\x20t1\x20t2\x20t3\x20t4\x20t5\x20t6\x20t7\x20t8\x20t9\x20s0\x20s1\x20s2\x20s3\x20s4\x20s5\x20s6\x20s7\x20s8\x20k0\x20k1\x20gp\x20sp\x20fp\x20ra\x20$f0\x20$f1\x20$f2\x20$f2\x20$f4\x20$f5\x20$f6\x20$f7\x20$f8\x20$f9\x20$f10\x20$f11\x20$f12\x20$f13\x20$f14\x20$f15\x20$f16\x20$f17\x20$f18\x20$f19\x20$f20\x20$f21\x20$f22\x20$f23\x20$f24\x20$f25\x20$f26\x20$f27\x20$f28\x20$f29\x20$f30\x20$f31\x20Context\x20Random\x20EntryLo0\x20EntryLo1\x20Context\x20PageMask\x20Wired\x20EntryHi\x20HWREna\x20BadVAddr\x20Count\x20Compare\x20SR\x20IntCtl\x20SRSCtl\x20SRSMap\x20Cause\x20EPC\x20PRId\x20EBase\x20Config\x20Config1\x20Config2\x20Config3\x20LLAddr\x20Debug\x20DEPC\x20DESAVE\x20CacheErr\x20ECC\x20ErrorEPC\x20TagLo\x20DataLo\x20TagHi\x20DataHi\x20WatchLo\x20WatchHi\x20PerfCtl\x20PerfCnt\x20',')\x5cb','^#{1,6}','FromRomanNumeral','os_type','true\x20false\x20yes\x20no\x20null','𝒯','PolyhedronData','rpc\x20returns','Minimize','_Nullable','$MaxRootDegree','pt_shape_sphere','__opts__','ISOWeek','rust','action_kill_object','ShowClosedCellArea','TemplateUnevaluated','kbd','^[a-zA-Z][a-zA-Z0-9_-]*\x5c(.*\x5c)','been','FourierCosSeries','ArcTan','ArrayComponents','Centroid','RangeFilter','buffer_load_async','\x5c)$','IMCSCH','BooleanRegion','ibits','Dataport','selectionNames','#Verb\x20#Reflexive','window_mouse_set','masc','phy_particle_flag_water','EnterTextPacket','\x5cb0[oO](_?[0-7])+[lL]?(?=',
'FunctionCompileExportByteArray','InheritScope','⊳','setcap','replace','savings','IrreduciblePolynomialQ','one_hot_array','#message-container','(hard|fast|late|early|high|right|deep|close|direct)','to\x20#Infinitive\x20[#PresentTense]','interpol','LessEqual','PieChart3D','SystemCredentialData','ruble','[cause]\x20#Pronoun\x20#Verb','lnbDeleteRow','(sister|pope|brother|father|aunt|uncle|grandpa|grandfather|grandma)\x20#ProperNoun','font-size-adjust','[i]','FileNameJoin','IncludedContexts','MERGE','date_datetime_string','ProcessStatus','allGroups','twelve','Ë','sprite_set_speed','rotate-180','$1ices','in-month','TimeSeriesThread','GeometricStylingRules','DSUM','ScaleDivisions','(^:','(was|were)','shape-margin','\x5cn\x5c+{4,}$','Cycles','cols','scrollHeight','large\x20object','TemporalData','firstname-firstname','toobject','ctrlFade','^(that|this|those)','^\x5cs*[!#]','iap_purchased','Axiom','private_constant','mp_grid_clear_cell','xs:Name','DDB','SectorChart3D','LongRightArrow','(we|they|you)\x20are','SemialgebraicComponentInstances','Stopwatch','EventHandlerTag','genera','AutocompletionFunction','grid-auto-rows','ќ','3-[org-word]','winphone_tile_back_image','CUMIPMT','%[Qwi]?\x5c{','coulda','mp_potential_step_object','rotate_?x','⫌','⫄','refersTo','𝔙','reverse','airportSide','NotSquareSupersetEqual','ISOYear','DeviceReadBuffer','FindMoleculeSubstructure','darccos','voice-balance','𝕙','(AI|AO|DI|DO|F|RI|RO|UI|UO|GI|GO|SI|SO)\x5c[','Lower','probbeta','ImageFocusCombine','\x5c.[a-zA-Z][a-zA-Z0-9_-]*','⊯','hasContraction','independent','this-verbs','#Month\x20the\x20#Value','⦬','setRandomLip','ASin','VertexLabels','\x20questions\x20correct.','unicode','SubsetPosition','abuses','Ђ','blockquote_open','IDENTIFIER\x20OPTIONS\x20XML_ELEMENT\x20XML_OP\x20XML_ELEMENT_OF\x20DOMDOCCREATE\x20DOMDOCLOADFILE\x20DOMDOCLOADXML\x20DOMDOCSAVEFILE\x20DOMDOCGETROOT\x20DOMDOCADDPI\x20DOMNODEGETNAME\x20DOMNODEGETTYPE\x20DOMNODEGETVALUE\x20DOMNODEGETCHILDCT\x20DOMNODEGETFIRSTCHILD\x20DOMNOD
EGETSIBLING\x20DOMNODECREATECHILDELEMENT\x20DOMNODESETATTRIBUTE\x20DOMNODEGETCHILDELEMENTCT\x20DOMNODEGETFIRSTCHILDELEMENT\x20DOMNODEGETSIBLINGELEMENT\x20DOMNODEGETATTRIBUTECT\x20DOMNODEGETATTRIBUTEI\x20DOMNODEGETATTRIBUTEBYNAME\x20DOMNODEGETBYNAME','FILTERXML','InverseJacobiSN','drawIcon','APL','☎','weak','dclose','input','2-implicit-suffix','↩','background-attachment','MB_USERICON','isboolean','setFlagAnimationPhase','RangeError','readLightSensor','DeviceExecuteAsynchronous','ExpGammaDistribution','#Conjunction\x20[u]','Reverse','vertex_position','mis','shiftr','base','Term','Retrieving\x20relevant\x20papers\x20from\x20Arxiv','achievement_pic_loaded','TriangleMeasurement','Skewness','PrivateFontOptions','ifmacrodef','choices','READ','InsertionPointObject','DynamicVariable','getpriority','isCompiled','toASCII','UpdatePacletSites','gpu_get_tex_max_aniso','mp_linear_path','ctrlSetPosition','[damn]\x20(#Determiner|#Possessive|them)','saveStream','alive','TensorSymmetry','shmread','CreateSystemModel','WeightedAdjacencyMatrix','EdgeWeight','getEditorObjectScope','MatrixExp','QuantityVariableCanonicalUnit','RealExponent','cdsin','steam_ugc_get_item_update_progress','unmanaged','DeleteObject','XNPV','satir','putchar','⋣','get3DENAttribute','setDiaryRecordText','PEEK','cyn','Extensive','set3DENSelected','(did|does|do)','RootMeanSquare','\x5cs*\x5c(','humidity','HandlerFunctions','ds_set_precision','dialog','FunctionSign','isObjectHidden','$TemplatePath','sign','fcntl','ev_collision','(point|decimal|#Fraction)','Threaded','unordered_multimap','RGBColor','createObjectURL','Inline','HTAB','(those|they|we)','ctrlSetFontHeightH6','↭','Expression','ShortRightArrow','device_mouse_raw_x','draw_getpixel_ext','queryMagazinePool','setFatigue','FrechetDistribution','$dumplimit','StarData','Win10','flagSide','waypointVisible','GraphicsGroupBox','TYPE','filename_path','humanize','⁡','steam_file_share','infoPanelComponentEnabled','MardiaSkewnessTest','three-name-person','anyfunc','Stable
Distribution','IntFmt','SynchronousUpdating','PacletUninstall','ignorefunc','getTiParameters','FactorialMomentGeneratingFunction','shader_set_uniform_matrix','LUBackSubstitution','pr_trianglelist','cursor','zip_unzip','ds_grid_value_disk_exists','setVectorUp','AddToSearchIndex','typescript','SmoothHistogram','border-left-color','ds_grid_destroy','chillin\x27','AnatomyForm','date_get_second_of_year','both','SetAlphaChannel','#down-chevron','getIn','\x22\x20style=\x22color:blue;\x20hover:purple;\x22\x20target=\x22_blank\x22>↗Link\x20|\x0a\x0a','DeleteINIStr','object_inner_pairs','abnf','^\x5cs*[a-zA-Z_][a-zA-Z\x5cd_]*:','_multiCache','StringCount','format_date','path','like','G-code\x20(ISO\x206983)','bridge','Polyhedron','getOpticsMode','addBackpackCargoGlobal','ClearPermissions','#Pronoun\x20#Copula','ж','rank','SquareUnion','ResamplingAlgorithmData','query-params','use','buffer_text','⥰','unitIsUAV','\x20bg-white\x20text-black\x20border\x20border-blue-500\x20px-4\x20py-2\x20rounded\x22>Show\x20More\x20Papers\x0a\x20.spoiler-content,\x0a\x20\x20details\x20>\x20p,\x0a\x20\x20details\x20>\x20em\x20{\x0a\x20\x20\x20\x20color:\x20black;\x0a\x20\x20\x20\x20font-style:\x20normal\x20!important;\x20/*\x20This\x20will\x20override\x20other\x20styles\x20unless\x20they\x20also\x20use\x20!important\x20*/\x0a\x20\x20}\x0a\x20\x20\x0a\x20\x20.correct-answer::after\x20{\x0a\x20\x20\x20\x20content:\x20\x27✅\x27;\x0a\x20\x20\x20\x20color:\x20green;\x0a\x20\x20\x20\x20margin-right:\x208px;\x0a}\x0a.wrong-answer::after\x20{\x0a\x20\x20\x20\x20content:\x20\x27❌\x27;\x0a\x20\x20\x20\x20color:\x20#f87171;\x0a\x20\x20\x20\x20margin-left:\x208px;\x0a}\x0a\x0a\x0a/*\x20Style\x20adjustments\x20*/\x0a\x20.icon-button\x20{\x0a\x20\x20\x20\x20cursor:\x20pointer;\x0a\x20\x20\x20\x20padding:\x2010px\x2020px;\x0a\x20\x20\x20\x20background-color:\x20#000;\x20/*\x20Make\x20buttons\x20black\x20*/\x0a\x20\x20\x20\x20color:\x20#fff;\x20/*\x20Text\x20color\x20white\x20for\x20contrast\x20*/\x0a\x20\x20\x2
0\x20border:\x20none;\x0a\x20\x20\x20\x20border-radius:\x205px;\x0a\x20\x20\x20\x20margin:\x2020px;\x0a\x20\x20}\x0a/*\x20/////////////////////////////////////////////////////////////////////\x20*/\x0a\x20\x20\x20\x20/*\x20General\x20styles\x20for\x20the\x20slider\x20*/\x0a\x20\x20input[type=range]\x20{\x0a\x20\x20\x20\x20-webkit-appearance:\x20none;\x20/*\x20Override\x20default\x20CSS\x20styles\x20*/\x0a\x20\x20\x20\x20appearance:\x20none;\x0a\x20\x20\x20\x20width:\x20100%;\x20/*\x20Full-width\x20*/\x0a\x20\x20\x20\x20height:\x205px;\x20/*\x20Specified\x20height\x20*/\x0a\x20\x20\x20\x20background:\x20black;\x20/*\x20Black\x20track\x20*/\x0a\x20\x20\x20\x20outline:\x20none;\x20/*\x20Remove\x20outline\x20*/\x0a\x20\x20\x20\x20opacity:\x200.7;\x20/*\x20Partial\x20transparency\x20*/\x0a\x20\x20\x20\x20transition:\x20opacity\x20.2s;\x20/*\x20Transition\x20for\x20the\x20slider\x20*/\x0a\x20\x20}\x0a\x20\x20\x0a/*\x20Initially\x20hide\x20the\x20popup\x20div\x20*/\x0a\x0a/*\x20Show\x20the\x20popup\x20div\x20when\x20hovering\x20over\x20the\x20button\x20or\x20the\x20popup\x20div\x20itself\x20*/\x0a.hover-btn:hover\x20+\x20.popup-content,\x20.popup-content:hover\x20{\x0a\x20\x20visibility:\x20visible;\x0a\x20\x20opacity:\x201;\x0a\x20\x20transition-delay:\x200s;\x20/*\x20Make\x20popup\x20appear\x20immediately\x20*/\x0a}\x0a\x20\x20\x0a\x20\x20/*\x20Style\x20for\x20Webkit\x20browsers\x20like\x20Chrome,\x20Safari\x20*/\x0a\x20\x20input[type=range]::-webkit-slider-thumb\x20{\x0a\x20\x20\x20\x20-webkit-appearance:\x20none;\x20/*\x20Override\x20default\x20CSS\x20styles\x20*/\x0a\x20\x20\x20\x20appearance:\x20none;\x0a\x20\x20\x20\x20width:\x2025px;\x20/*\x20Width\x20of\x20the\x20thumb\x20*/\x0a\x20\x20\x20\x20height:\x2025px;\x20/*\x20Height\x20of\x20the\x20thumb\x20*/\x0a\x20\x20\x20\x20background:\x20black;\x20/*\x20Black\x20thumb\x20*/\x0a\x20\x20\x20\x20cursor:\x20pointer;\x20/*\x20Cursor\x20on\x20hover\x20*/\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Style\x20for\x20Mozilla\x20Firefo
x\x20*/\x0a\x20\x20input[type=range]::-moz-range-thumb\x20{\x0a\x20\x20\x20\x20width:\x2025px;\x20/*\x20Width\x20of\x20the\x20thumb\x20*/\x0a\x20\x20\x20\x20height:\x2025px;\x20/*\x20Height\x20of\x20the\x20thumb\x20*/\x0a\x20\x20\x20\x20background:\x20black;\x20/*\x20Black\x20thumb\x20*/\x0a\x20\x20\x20\x20cursor:\x20pointer;\x20/*\x20Cursor\x20on\x20hover\x20*/\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Style\x20for\x20the\x20focus\x20state\x20*/\x0a\x20\x20input[type=range]:focus\x20{\x0a\x20\x20\x20\x20outline:\x20none;\x20/*\x20Remove\x20the\x20outline\x20*/\x0a\x20\x20}\x0a\x20\x20\x0a\x20\x20/*\x20Additional\x20style\x20for\x20the\x20focus\x20state\x20on\x20Webkit\x20browsers\x20*/\x0a\x20\x20input[type=range]:focus::-webkit-slider-thumb\x20{\x0a\x20\x20\x20\x20background:\x20#333;\x20/*\x20Darker\x20shade\x20when\x20focused\x20*/\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Additional\x20style\x20for\x20the\x20focus\x20state\x20on\x20Mozilla\x20Firefox\x20*/\x0a\x20\x20input[type=range]:focus::-moz-range-thumb\x20{\x0a\x20\x20\x20\x20background:\x20#333;\x20/*\x20Darker\x20shade\x20when\x20focused\x20*/\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Show\x20the\x20popup\x20div\x20on\x20hover\x20*/\x0a\x20\x20.hover-btn:hover\x20+\x20.popup-content,\x20.popup-content:hover\x20{\x0a\x20\x20\x20\x20display:\x20block;\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Style\x20for\x20the\x20slider\x20*/\x0a\x20\x20.slider\x20{\x0a\x20\x20\x20\x20width:\x20100%;\x0a\x20\x20\x20\x20margin:\x2020px\x200;\x0a\x20\x20\x20\x20\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Slider\x20labels\x20*/\x0a\x20\x20.slider-labels\x20{\x0a\x20\x20\x20\x20display:\x20flex;\x0a\x20\x20\x20\x20justify-content:\x20space-between;\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Style\x20adjustments\x20for\x20checkboxes\x20*/\x0a\x20\x20.checkbox-container\x20{\x0a\x20\x20\x20\x20margin:\x2010px\x200;\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20Making\x20the\x20icons\x20black,\x20assuming\x20SVG\x20icons\x20*/\x0a\x20\x20.icon-button\x20img\x20{\x0a\x20\x20\x20\x20filter
:\x20brightness(0)\x20invert(0);\x20/*\x20A\x20general\x20way\x20to\x20make\x20images\x20black\x20*/\x0a\x20\x20}\x0a\x0a\x20\x20.icon-button\x20{\x0a\x20\x20\x20\x20color:\x20white;\x0a\x20\x20\x20\x20margin:\x20auto;\x0a\x20\x20}\x0a\x0a\x20\x20.button-and-popup-container\x20{\x0a\x20\x20\x20\x20position:\x20relative;\x0a\x20\x20\x20\x20display:\x20inline-block;\x20/*\x20Or\x20any\x20other\x20display\x20type\x20that\x20suits\x20your\x20layout\x20*/\x0a\x20\x20}\x0a\x20\x20\x0a\x20\x20.settings-button:hover\x20+\x20.popup-content,\x0a.popup-content:hover\x20{\x0a\x20\x20visibility:\x20visible;\x20/*\x20Show\x20the\x20popup\x20*/\x0a\x20\x20opacity:\x201;\x0a\x20\x20transition:\x20opacity\x200.5s\x20ease;\x0a\x20\x20transition-delay:\x200s;\x20/*\x20React\x20immediately\x20when\x20hovered\x20*/\x0a}\x0a\x0a\x0a/*\x20Initially\x20hide\x20the\x20modal\x20*/\x0a.modal\x20{\x0a\x20\x20display:\x20none;\x0a\x20\x20position:\x20fixed;\x0a\x20\x20z-index:\x20100;\x0a\x20\x20left:\x2050%;\x0a\x20\x20top:\x2050%;\x0a\x20\x20padding:\x2010px;\x0a\x20\x20width:\x20fit-content;\x0a\x20\x20height:\x20fit-content;\x0a\x20\x20transform:\x20translate(-50%,\x20-50%);\x20/*\x20Center\x20the\x20modal\x20*/\x0a\x20\x20/*\x20overflow:\x20auto;\x20*/\x0a\x20\x20/*\x20background-color:\x20rgba(0,\x200,\x200,\x200.4);\x20*/\x0a}\x0a\x0a/*\x20#popover-container\x20{\x0a\x20\x20position:\x20fixed;\x20\x20\x0a\x20\x20z-index:\x201000;\x20\x20\x0a\x20\x20top:\x2050%;\x20\x20\x20\x20\x20\x20\x20\x0a\x20\x20left:\x2050%;\x0a\x20\x20transform:\x20translate(-50%,\x20-50%);\x20\x20\x0a}\x20*/\x0a\x0a\x0a\x0a#popover-container\x20.overflow-auto\x20{\x0a\x20\x20max-height:\x20250px;\x0a\x20\x20overflow:\x20auto;\x0a}\x0a\x0a/*\x20Modal\x20Content\x20*/\x0a.modal-content\x20{\x0a\x20\x20background-color:\x20#fefefe;\x0a\x20\x20margin:\x2015%\x20auto;\x0a\x20\x20padding:\x2020px;\x0a\x20\x20border:\x201px\x20solid\x20#888;\x0a\x20\x20width:\x2080%;\x0a}\x0a\x0a/*\x20The\x20Close\x20Button\x20*/\x0a.clo
se\x20{\x0a\x20\x20color:\x20#aaaaaa;\x0a\x20\x20float:\x20right;\x0a\x20\x20font-size:\x2028px;\x0a\x20\x20font-weight:\x20bold;\x0a}\x0a\x0a.close:hover,\x0a.close:focus\x20{\x0a\x20\x20color:\x20black;\x0a\x20\x20text-decoration:\x20none;\x0a\x20\x20cursor:\x20pointer;\x0a}\x0a\x0a\x0a\x0a@keyframes\x20fadeInUp\x20{\x0a\x20\x20from\x20{\x20opacity:\x200;\x20transform:\x20translateY(20px);\x20}\x0a\x20\x20to\x20{\x20opacity:\x201;\x20transform:\x20translateY(0);\x20}\x0a}\x0a@keyframes\x20fadeOutDown\x20{\x0a\x20\x20from\x20{\x20opacity:\x201;\x20transform:\x20translateY(0);\x20}\x0a\x20\x20to\x20{\x20opacity:\x200;\x20transform:\x20translateY(20px);\x20}\x0a}\x0a.animate-fadeInUp\x20{\x0a\x20\x20animation:\x20fadeInUp\x200.5s\x20ease-out\x20forwards;\x0a}\x0a.animate-fadeOutDown\x20{\x0a\x20\x20animation:\x20fadeOutDown\x200.5s\x20ease-out\x20forwards;\x0a}\x0a\x0a.rotate-180\x20{\x0a\x20\x20transform:\x20rotate(180deg);\x0a}\x0a\x0a\x0a#enter-btn\x20{\x0a\x20\x20top:\x20-0.3rem;\x20/*\x20Moves\x20the\x20button\x20up\x20by\x200.5\x20rem\x20*/\x0a}\x0a\x0a\x0a/*\x20HEADINGS\x20LISTS\x20*/\x0a\x0a/*\x20Overriding\x20Tailwind\x20styles\x20with\x20higher\x20specificity\x20*/\x0a\x20\x20/*\x20For\x20headers\x20*/\x0a\x20\x20h1\x20{\x0a\x20\x20\x20\x20font-size:\x202rem;\x0a\x20\x20\x20\x20font-weight:\x20700;\x0a\x20\x20}\x0a\x0a\x20\x20h2\x20{\x0a\x20\x20\x20\x20font-size:\x201.5rem;\x0a\x20\x20\x20\x20font-weight:\x20600;\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20...\x20include\x20your\x20other\x20styles\x20...\x20*/\x0a\x0a\x20\x20/*\x20For\x20unordered\x20lists\x20*/\x0a\x20\x20ul\x20{\x0a\x20\x20\x20\x20list-style-type:\x20disc;\x0a\x20\x20\x20\x20padding-left:\x201.5rem;\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20For\x20ordered\x20lists\x20*/\x0a\x20\x20ol\x20{\x0a\x20\x20\x20\x20list-style-type:\x20decimal;\x0a\x20\x20\x20\x20padding-left:\x201.5rem;\x0a\x20\x20}\x0a\x0a\x20\x20/*\x20For\x20list\x20items\x20*/\x0a\x20\x20li\x20{\x0a\x20\x20\x20\x20margin-bottom:\x200.5rem;\x0a\
x20\x20}\x0a\x0a\x20\x20ul,\x20ol\x20{\x0a\x20\x20\x20\x20list-style-type:\x20none;\x20/*\x20Removes\x20the\x20default\x20list-style\x20*/\x0a\x20\x20\x20\x20padding:\x200;\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20/*\x20Removes\x20the\x20default\x20padding\x20*/\x0a\x20\x20\x20\x20margin:\x200;\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20/*\x20Removes\x20the\x20default\x20margin\x20*/\x0a\x20\x20}\x0a\x20\x20\x0a\x20\x20li\x20{\x0a\x20\x20\x20\x20/*\x20Add\x20your\x20desired\x20styles\x20for\x20list\x20items\x20here\x20*/\x0a\x20\x20}','MIME','allowDammage','MeshShading','ConstantTimesLayer','port\x20effect\x20module\x20where\x20command\x20subscription\x20exposing','PRP$','JuliaSetIterationCount','indirect','⊍','zombies','excess','PointCountDistribution','token','would\x20be\x20#PastTense','autoescape','csize','ShowCellLabel','Raster','EdgeTransitiveGraphQ','VertexCoverQ','ugc_list_VotedDown','UsingFrontEnd','uint32','lnbSetText','ip\x20eip\x20rip\x20al\x20ah\x20bl\x20bh\x20cl\x20ch\x20dl\x20dh\x20sil\x20dil\x20bpl\x20spl\x20r8b\x20r9b\x20r10b\x20r11b\x20r12b\x20r13b\x20r14b\x20r15b\x20ax\x20bx\x20cx\x20dx\x20si\x20di\x20bp\x20sp\x20r8w\x20r9w\x20r10w\x20r11w\x20r12w\x20r13w\x20r14w\x20r15w\x20eax\x20ebx\x20ecx\x20edx\x20esi\x20edi\x20ebp\x20esp\x20eip\x20r8d\x20r9d\x20r10d\x20r11d\x20r12d\x20r13d\x20r14d\x20r15d\x20rax\x20rbx\x20rcx\x20rdx\x20rsi\x20rdi\x20rbp\x20rsp\x20r8\x20r9\x20r10\x20r11\x20r12\x20r13\x20r14\x20r15\x20cs\x20ds\x20es\x20fs\x20gs\x20ss\x20st\x20st0\x20st1\x20st2\x20st3\x20st4\x20st5\x20st6\x20st7\x20mm0\x20mm1\x20mm2\x20mm3\x20mm4\x20mm5\x20mm6\x20mm7\x20xmm0\x20\x20xmm1\x20\x20xmm2\x20\x20xmm3\x20\x20xmm4\x20\x20xmm5\x20\x20xmm6\x20\x20xmm7\x20\x20xmm8\x20\x20xmm9\x20xmm10\x20\x20xmm11\x20xmm12\x20xmm13\x20xmm14\x20xmm15\x20xmm16\x20xmm17\x20xmm18\x20xmm19\x20xmm20\x20xmm21\x20xmm22\x20xmm23\x20xmm24\x20xmm25\x20xmm26\x20xmm27\x20xmm28\x20xmm29\x20xmm30\x20xmm31\x20ymm0\x20\x20ymm1\x20\x20ymm2\x20\x20ymm3\x20\x20ymm4\x20\x20ymm5\x20\x20ymm
6\x20\x20ymm7\x20\x20ymm8\x20\x20ymm9\x20ymm10\x20\x20ymm11\x20ymm12\x20ymm13\x20ymm14\x20ymm15\x20ymm16\x20ymm17\x20ymm18\x20ymm19\x20ymm20\x20ymm21\x20ymm22\x20ymm23\x20ymm24\x20ymm25\x20ymm26\x20ymm27\x20ymm28\x20ymm29\x20ymm30\x20ymm31\x20zmm0\x20\x20zmm1\x20\x20zmm2\x20\x20zmm3\x20\x20zmm4\x20\x20zmm5\x20\x20zmm6\x20\x20zmm7\x20\x20zmm8\x20\x20zmm9\x20zmm10\x20\x20zmm11\x20zmm12\x20zmm13\x20zmm14\x20zmm15\x20zmm16\x20zmm17\x20zmm18\x20zmm19\x20zmm20\x20zmm21\x20zmm22\x20zmm23\x20zmm24\x20zmm25\x20zmm26\x20zmm27\x20zmm28\x20zmm29\x20zmm30\x20zmm31\x20k0\x20k1\x20k2\x20k3\x20k4\x20k5\x20k6\x20k7\x20bnd0\x20bnd1\x20bnd2\x20bnd3\x20cr0\x20cr1\x20cr2\x20cr3\x20cr4\x20cr8\x20dr0\x20dr1\x20dr2\x20dr3\x20dr8\x20tr3\x20tr4\x20tr5\x20tr6\x20tr7\x20r0\x20r1\x20r2\x20r3\x20r4\x20r5\x20r6\x20r7\x20r0b\x20r1b\x20r2b\x20r3b\x20r4b\x20r5b\x20r6b\x20r7b\x20r0w\x20r1w\x20r2w\x20r3w\x20r4w\x20r5w\x20r6w\x20r7w\x20r0d\x20r1d\x20r2d\x20r3d\x20r4d\x20r5d\x20r6d\x20r7d\x20r0h\x20r1h\x20r2h\x20r3h\x20r0l\x20r1l\x20r2l\x20r3l\x20r4l\x20r5l\x20r6l\x20r7l\x20r8l\x20r9l\x20r10l\x20r11l\x20r12l\x20r13l\x20r14l\x20r15l\x20db\x20dw\x20dd\x20dq\x20dt\x20ddq\x20do\x20dy\x20dz\x20resb\x20resw\x20resd\x20resq\x20rest\x20resdq\x20reso\x20resy\x20resz\x20incbin\x20equ\x20times\x20byte\x20word\x20dword\x20qword\x20nosplit\x20rel\x20abs\x20seg\x20wrt\x20strict\x20near\x20far\x20a32\x20ptr','style-src','(ready|available|difficult|hard|easy|made|attempt|try)\x20to\x20[%Noun|Verb%]','Identity','SpanAdjustments','removeBackpackGlobal','BetaRegularized','RadioButtonBoxOptions','ISOWeekday','peekc','LaplaceDistribution','ColumnBackgrounds','FirstName','setDrawIcon','array\x20date\x20decimal\x20duration\x20integer\x20map\x20pair\x20string\x20tag\x20xml\x20null\x20boolean\x20bytes\x20keyword\x20list\x20locale\x20queue\x20set\x20stack\x20staticarray\x20local\x20var\x20variable\x20global\x20data\x20self\x20inherited\x20currentcapture\x20givenblock','bitflags!','achievement_show_achievements','weekday','Laplac
ianGaussianFilter','χ','setLightIntensity','sprite_merge','forEach','GaussianWindow','fixable','elimtype','punctuation','printFirmwareVersion','cdcos','Anonymous','ENABLE','iap_purchase_details','ByteArray','$dumpvars','uiNamespace','ImageClip','CurveClosed','⇁','getObjectScale','SyntaxPacket','safeZoneW','result','(\x5cbReturn\x5cb)','emitNamedEntityData','starts-thinking','shownWatch','audio_is_paused','PutAppend','connect','MethodError','EntityUnregister','𝔊','ContinuousWaveletTransform','scroll-margin-block','rpmos','TimeValue','$dumpoff','split\x20return\x20print\x20reverse\x20grep','NULL','Round','1-freeze-lexicon','getLanguage','Millisecond','gesture_get_double_tap_time','(til|until)','APOS_STRING_MODE','VectorColorFunction','PropagateAborts','GroupGenerators',')\x5c.)*','animationPhase','physics_particle_group_get_data','format_currency','flanders\x27','char16_t','AAAA','cheatsEnabled','Image3D','skip','EntityValue','bufif1','и','Uri','\x27re','array_sort','debugFSM','NETHOOD','forests','typeahead','Brainfuck','\x5c]\x5c[','getpwuid','PausedTime','System','overflow','#Honorific\x20#FirstName\x20[#Singular]','PageTheme','playGesture','disableSerialization','ParentEdgeStyle','_user_','do3DENAction','clientSide','complex_schur_decompose_t','assignment','win8_secondarytile_pin','ctrlMapScale','possessives','arctan','immutable','latex','buffer_u8','indexc','ContinuousTimeModelQ','keepSpace','TetrahedronBox','OutOfBoundsException','instance_id','ReturnExpressionPacket','metropolis','match_number','injection','𝕝','$CompilerEnvironment','text-xl\x20font-semibold\x20mt-2','PGraphics','linkItem','forceWeatherChange','GraphIntersection','─','exn|5','isText','(the|any)\x20[more]','FrontEndVersion','SAS','CONCAT','Pattern','alias','text_replace','MeshCellStyle','JUMP_TABLE','loadXML','tribunal','AppendLayer','argument13','[%Adj|Past%]\x20and\x20#PastTense','FileFormatProperties','SequenceSplit','visibleWatch','forEachMember','CONFIG','MinIntervalSize','DateObject','Peron
aMalikFilter','CallPacket','LED_BUILTIN','istringstream','#Copula\x20#Gerund\x20[#PresentTense]\x20!by?','InputGrouping','RowAlignments','buffer_async_group_option','doptnum','𝔣','[0-9a-fA-F](_*[0-9a-fA-F])*','scrollbar-gutter','#Noun\x20(&|n)\x20#Noun','$swrite','Cardinal','setWantedRPMRTD','IsWindow','WriteUninstaller','ISNUMBER','gonna','ColorNegate','suspend','seventeen','mouseClicked','PolyGamma','_beginMatch','game_load_buffer','BesselK','ControlGroupContentsBox','SingleLetterItalics','$fmonitorb','TotalVariationFilter','RationalFunctions','unresolved_unsigned','WORKDAY.INTL','Step','#Determiner\x20[#Adjective]\x20#Copula','\x5cb\x5cd+[kKmMgGdshdwy]?\x5cb','Proposition','needs','IDENT_RE','layer_background_destroy','SubsuperscriptBoxOptions','onBriefingNotes','mask-border-slice','ninth','DeviceObject','ctrlSetFocus','Underoverscript','leaderboardDeInit','GeometricStep','⊖','⨩','CreateShortCut','ComplexVectorPlot','∇','É','getStamina','vertex_end','[(all|both)]\x20#Determiner\x20#Noun','GeoBoundsRegionBoundary','AngularGauge','Serializable','AutoGeneratedPackage','now','ŗ','argument5','⪙','speed','RegExp','local-link','conversely','$MinorReleaseNumber','groupStart','stop','canVehicleCargo','≩︀','setWaypointForceBehaviour','⥝','layer_get_x','inputc','then','setTrafficSpeed','lbColorRight','query_agent_stream_tiny','curve','FontOpacity','frant','ODDLYIELD','ReleaseHold','fa_right','value-k','\x22\x22\x22','$printtimescale','GeneratedParameters','Groupings','bernoulli','^(#Uncountable|#ProperNoun|#Place|#Pronoun|#Acronym)+$','NumberForm','unformatted','[^\x5c\x5c]`','unchecked','memory','MB_OKCANCEL','magazinesAllTurrets','c_long','UNDERSCORE_TITLE_MODE','matrix_stack_push','worse:bad¦better:good¦4er:fair,gray,poor¦1urther:far¦3ter:fat,hot,wet¦3der:mad,sad¦3er:shy,fun¦4der:glad¦:¦4r:cute,dire,fake,fine,free,lame,late,pale,rare,ripe,rude,safe,sore,tame,wide¦5r:eerie,stale','transform-origin','PrecisionGoal','byval','disableNVGEquipment','camSetBank','WignerSemicirc
leDistribution','gpu_get_blendmode_dest','auto_reset','𝔛','setTaskMarkerOffset','ACOTH','little','ParentIterator','random_get_seed','qr_R','NevilleThetaN','DeviceRead','MAC_CHARSET','ctrlSetTextSelection','GraphicsComplexBoxOptions','¿','([eE][+-]?','preconditionFailure','$load_coverage_db','TaskWait','addMagazineCargo','^[#Infinitive]\x20(to|for|into|toward|here|there)','setFog','BoxRatios','[object\x20Array]','conversationDisabled','achievement_show_achievement','(i|[fF]i|Li))','path_get_time','packhdr','MaxDuration','Hermitian','ToLowerCase','lnbDeleteColumn','⤧','LongLeftArrow','bins','Compute','Ƶ','modelZ','was-under-cooked','date_get_year','_emit','$fstrobeh','WindowClickSelect','Subresultants','readline','lbSize','diag_activeMissionFSMs','AudioTimeStretch','◼','subLanguage','gamepad_get_device_count','directory_destroy','deleteResources','setGroupIconsVisible','now\x20that','Return','BackFaceSpecularExponent','ps_distr_linear','run\x20cmd\x20entrypoint\x20volume\x20add\x20copy\x20workdir\x20label\x20healthcheck\x20shell','WordCount','border-inline-width','And','#Honorific','padding-top','physics_apply_local_force','ChartElementData','UniformSumDistribution','steam_file_read','bog','WikipediaSearch','worldToModel','Esplora','ers','TrackStartTime','bite','WeakStationarity','((#Cardinal|a)+]\x20[#Fraction+]','GeometricTransformation3DBox','variable_instance_get','ds_grid_read','orientation','stmt','rayleigh','Visual\x20Basic\x20.NET','SplitBy','loglogistic','TextPosition','multiply_lower_tri_self_transpose','footnote','imageMode','FunctionCompileExport','EST','#Adverb\x20#Negative','findloc','\x22,\x20check\x20name','content-visibility','not\x20be\x20[%Adj|Past%\x20#Particle?]','std\x20string\x20wstring\x20cin\x20cout\x20cerr\x20clog\x20stdin\x20stdout\x20stderr\x20stringstream\x20istringstream\x20ostringstream\x20auto_ptr\x20deque\x20list\x20queue\x20stack\x20vector\x20map\x20set\x20pair\x20bitset\x20multiset\x20multimap\x20unordered_set\x20unordered_map\x20uno
rdered_multiset\x20unordered_multimap\x20priority_queue\x20make_pair\x20array\x20shared_ptr\x20abort\x20terminate\x20abs\x20acos\x20asin\x20atan2\x20atan\x20calloc\x20ceil\x20cosh\x20cos\x20exit\x20exp\x20fabs\x20floor\x20fmod\x20fprintf\x20fputs\x20free\x20frexp\x20fscanf\x20future\x20isalnum\x20isalpha\x20iscntrl\x20isdigit\x20isgraph\x20islower\x20isprint\x20ispunct\x20isspace\x20isupper\x20isxdigit\x20tolower\x20toupper\x20labs\x20ldexp\x20log10\x20log\x20malloc\x20realloc\x20memchr\x20memcmp\x20memcpy\x20memset\x20modf\x20pow\x20printf\x20putchar\x20puts\x20scanf\x20sinh\x20sin\x20snprintf\x20sprintf\x20sqrt\x20sscanf\x20strcat\x20strchr\x20strcmp\x20strcpy\x20strcspn\x20strlen\x20strncat\x20strncmp\x20strncpy\x20strpbrk\x20strrchr\x20strspn\x20strstr\x20tanh\x20tan\x20vfprintf\x20vprintf\x20vsprintf\x20endl\x20initializer_list\x20unique_ptr','MailServerConnect','Roots','LocalCache','DayNightTerminator','lambda','(e|E|u&|U&)\x27','specialize','typedef','≓','^[%Noun|Verb%]\x20to','(the|this|those|these)\x20#Adjective\x20[%Verb|Noun%]','StationaryDistribution','buffer_base64_decode_ext','hanging-punctuation','iap_product_details','DatabaseDisconnect','part_type_death','Error\x20retrieving\x20vectorized\x20data\x20or\x20operation\x20timed\x20out.\x20Switching\x20to\x20current\x20page.','EliminationOrder','Gerund\x20#Adjective','[(dark|bright|flat|light|soft|pale|dead|dim|faux|little|wee|sheer|most|near|good|extra|all)]\x20#Adjective','[were]\x20#Noun+\x20to\x20#Infinitive','DensityPlot','TextValue','vertex_format_end','c_yellow','arily','SynchronousInitialization','\x20QUOTE:\x20','MeshCoordinates','translate_regex','IncludeHydrogens','TouchPosition','|\x5cb(return)\x5cb)\x5cs*','menuValue','IIf','RemoveOutputStreamMethod','is-no','addr','SystemsConnectionsModel','_Pragma','hyphens','cr_drag','false\x20nil\x20true','URLParse','gases','addWeaponCargoGlobal','BACKGROUND\x20KNOWLEDGE','TemplateSlot','KeepExistingVersion','whole','RESUME','TimeSeriesInvertibility','Co
nnectLibraryCallbackFunction','FrameBoxOptions','bat','Ξ','FlatTopWindow','show_message_async','with|0','InverseWishartMatrixDistribution','Slider2D','argument_count','gardens','¯','num_elements','getDoc','(were|was)\x20being\x20[#PresentTense]','shader_set_uniform_matrix_array','recl','PartLayer','ProbabilityPlot','returns','typographer','#Adjective\x20and\x20#Adjective','ContinuousTask','restaurants','FromContinuedFraction','completedFSM','Section','probnegb','FindMinimumCut','ugc_filetype_community','º','LayoutInformation','#openModal-feedback','URLSubmit','$OutputForms','ExtremeValueDistribution','life','|case|contractions|parentheses|quotations|emoji|honorifics|debullet','[a-zA-Z](\x5c.?\x5cw)*','$AddOnsDirectory','FeatureNearest','@try','EndOfBuffer','classical_left','loader','video','tvOSApplicationExtension','please','etc','JohnsonDistribution','OverscriptBoxOptions','configurations','sixtieth','room_set_width','set|0','VectorDisplacementPlot3D','transition-duration',';/?:@&=+$,-_.!~*\x27()#','\x0aGiven\x20a\x20piece\x20of\x20text\x20and\x20a\x20partial\x20query\x20as\x20input:\x20<','Closed','matrixTranspose','c_orange','Intl','setLightFlareMaxDistance','GeometricBrownianMotionProcess','(a|an)\x20#Adverb\x20[#Participle]\x20#Noun','⥳','(-|\x5c+)?\x5cd+([./]\x5cd+)?','fa_bottom','hasHad','setVehicleReportOwnPosition','TrigExpand','ConvexHullMesh','NetReplace','QuantityVariableDimensions','type','skewness','Rename','r\x27\x27\x27','▵','Á','BlankForm','vertex_usage_textcoord','RemoveProperty','mathematica','language-','display_get_timing_method','openNextFile','winphone_tile_front_image','\x5c.\x5cs*','LibraryFunctionInformation','readGreen','FrontEndResourceString','𝒸','Fibonacci','(\x5cn{2}|\x27)','side','C_NUMBER_MODE','joinString','EdgeQ','createTask','rewriterule','url','visited','base64_decode','$PatchLevelID','textureMode','DefaultNotebook','IFS','lbeta','ConnectSystemModelController','MailSettings','#Noun\x20[(#PastTense|#Participle)]\x20by\x20(the|a)\
x20#Noun','▒','rehash','sysexec','GeoGridLines','vertex_format_add_color','rep_array','readBytes','triggerArea','steam_ugc_query_set_cloud_filename_filter','Failure','getHideFrom','addBinocularItem','sysmsg','mkd','∧','fileExists','Equals','closeDialog','TestReport','ino','DiskMatrix','TrainingProgressMeasurements','layer_get_name','[a-zA-Z_]\x5cw*::','setTiParameter','LexicographicOrder','GraphicsRow','DelimiterMatching','ACOS','ParametricRegion','FourierSinTransform','KeySortBy','TreeLeafCount','setWeaponReloadingTime','далее\x20возврат\x20вызватьисключение\x20выполнить\x20для\x20если\x20и\x20из\x20или\x20иначе\x20иначеесли\x20исключение\x20каждого\x20конецесли\x20конецпопытки\x20конеццикла\x20не\x20новый\x20перейти\x20перем\x20по\x20пока\x20попытка\x20прервать\x20продолжить\x20тогда\x20цикл\x20экспорт\x20','HammingWindow','Contexts','≔','\x5cb(?:([0-9][0-9_]*)?\x5c.[0-9_]*(?:[eE][+-]?[0-9_]+)?|(0[Xx])?[0-9][0-9_]*(\x5c.[0-9_]*)?(?:[pP](?:[+-]?[0-9_]+)?)?)\x5cb','voice-family','$fclose','Divisible','unbinary','uint8','atMentions','pushd','GegenbauerC','five','synchronizedTriggers','Perl','⪰','#Plural\x20[(who|which|when)]\x20.','endsWithParent','\x5c[(\x5c|\x5c|)?\x5c]|\x5c(\x5c)','WHILE','(west|east|north|south)\x20[%Person|Place%]','CookieFunction','periscopeElevation','#Email','^#\x5cw','EdgeStyle','PermissionsKey','SystemsModelDelayApproximate','StrCmpS','gpu_get_texrepeat','swift','frustum','(u8?|U|L)?\x22','PerpendicularBisector',' 
','RegularPolygon','vk_numpad9','sweep','IRPF90','nds','physics_apply_torque','false_type','#Singular','true¦0:92;1:96;2:8H;3:8V;4:8A;5:83;6:85;7:98;8:90;9:8G;A:8X;B:8R;C:8U;D:8S;E:70;F:97;G:8Y;H:81;I:7H;J:79;a9Fb7Uc6Rd6Le6Jf5Ig50h4Biron0j47k40l3Em31n2Yo2Wp2Cquiet\x20Hr1Xs0KtZuXvacuu6QwNyammerBzK;ero\x20Dip\x20LonK;e0k0;by,ov9up;aQeMhLiKor0Mrit19;mp0n3Fpe0r5s5;ackAeel\x20Di0S;aLiKn33;gh\x203Wrd0;n\x20Dr\x20K;do1in,oJ;it\x2079k5lk\x20Lrm\x2069sh\x20Kt83v60;aw3do1o7up;aw3in,oC;rgeBsK;e\x202herE;a00eYhViRoQrMuKypP;ckErn\x20K;do1in,oJup;aLiKot0y\x2030;ckl7Zp\x20F;ck\x20HdK;e\x205Y;n7Wp\x203Es5K;ck\x20MdLe\x20Kghten\x206me0p\x20o0Rre0;aw3ba4do1in,up;e\x20Iy\x202;by,oG;ink\x20Lrow\x20K;aw3ba4in,up;ba4ov9up;aKe\x2077ll62;m\x202r\x205M;ckBke\x20Llk\x20K;ov9shit,u47;aKba4do1in,leave,o4Dup;ba4ft9pa69w3;a0Vc0Te0Mh0Ii0Fl09m08n07o06p01quar5GtQuOwK;earMiK;ngLtch\x20K;aw3ba4o8K;\x20by;cKi6Bm\x202ss0;k\x2064;aReQiPoNrKud35;aigh2Det75iK;ke\x207Sng\x20K;al6Yup;p\x20Krm2F;by,in,oG;c3Ln3Lr\x202tc4O;p\x20F;c3Jmp0nd\x20LrKveAy\x202O;e\x20Ht\x202L;ba4do1up;ar3GeNiMlLrKurB;ead0ingBuc5;a49it\x206H;c5ll\x20o3Cn\x202;ak\x20Fe1Xll0;a3Bber\x202rt0und\x20like;ap\x205Vow\x20Duggl5;ash\x206Noke0;eep\x20NiKow\x206;cLp\x20K;o6Dup;e\x2068;in,oK;ff,v9;de19gn\x204NnKt\x206Gz5;gKkE;\x20al6Ale0;aMoKu5W;ot\x20Kut0w\x207M;aw3ba4f48oC;c2WdeEk6EveA;e\x20Pll1Nnd\x20Orv5tK;\x20Ktl5J;do1foLin,o7upK;!on;ot,r5Z;aw3ba4do1in,o33up;oCto;al66out0rK;ap65ew\x206J;ilAv5;aXeUiSoOuK;b\x205Yle0n\x20Kstl5;aLba4do1inKo2Ith4Nu5P;!to;c2Xr8w3;ll\x20Mot\x20LpeAuK;g3Ind17;a2Wf3Po7;ar8in,o7up;ng\x2068p\x20oKs5;ff,p18;aKelAinEnt0;c6Hd\x20K;o4Dup;c27t0;aZeYiWlToQrOsyc35uK;ll\x20Mn5Kt\x20K;aKba4do1in,oJto47up;pa4Dw3;a3Jdo1in,o21to45up;attleBess\x20KiNop\x202;ah2Fon;iLp\x20Kr4Zu1Gwer\x206N;do1in,o6Nup;nt0;aLuK;gEmp\x206;ce\x20u20y\x206D;ck\x20Kg0le\x204An\x206p5B;oJup;el\x205NncilE;c53ir\x2039n0ss\x20MtLy\x20K;ba4oG;\x20Hc2R;aw3ba4in,oJ;pKw4Y;e4Xt\x20D;aLerd0oK;dAt53;il\x20Hrrow\x20H;aTeQiPoLuK;ddl5ll\x20I;c1FnkeyMp\x206uthAve\x20K;aK
do1in,o4Lup;l4Nw3;\x20wi4K;ss0x\x202;asur5e3SlLss\x20K;a21up;t\x206;ke\x20Ln\x206rKs2Ax0;k\x206ryA;do,fun,oCsure,up;a02eViQoLuK;ck0st\x20I;aNc4Fg\x20MoKse0;k\x20Kse4D;aft9ba4do1forw37in56o0Zu46;in,oJ;d\x206;e\x20NghtMnLsKve\x2000;ten\x20F;e\x202k\x202;\x202e46;ar8do1in;aMt\x20LvelK;\x20oC;do1go,in,o7up;nEve\x20K;in,oK;pKut;en;c5p\x202sh\x20LtchBughAy\x20K;do1o59;in4Po7;eMick\x20Lnock\x20K;do1oCup;oCup;eLy\x20K;in,up;l\x20Ip\x20K;aw3ba4do1f04in,oJto,up;aMoLuK;ic5mpE;ke3St\x20H;c43zz\x202;a01eWiToPuK;nLrrKsh\x206;y\x202;keLt\x20K;ar8do1;r\x20H;lKneErse3K;d\x20Ke\x202;ba4dKfast,o0Cup;ear,o1;de\x20Lt\x20K;ba4on,up;aw3o7;aKlp0;d\x20Ml\x20Ir\x20Kt\x202;fKof;rom;f11in,o03uW;cPm\x202nLsh0ve\x20Kz2P;at,it,to;d\x20Lg\x20KkerP;do1in,o2Tup;do1in,oK;ut,v9;k\x202;aZeTive\x20Rloss\x20IoMrLunK;\x20f0S;ab\x20hold,in43ow\x202U;\x20Kof\x202I;aMb1Mit,oLr8th1IuK;nd9;ff,n,v9;bo7ft9hQw3;aw3bKdo1in,oJrise,up,w3;a4ir2H;ar\x206ek0t\x20K;aLb1Fdo1in,oKr8up;ff,n,ut,v9;cLhKl2Fr8t,w3;ead;ross;d\x20aKng\x202;bo7;a0Ee07iYlUoQrMuK;ck\x20Ke2N;ar8up;eLighten\x20KownBy\x202;aw3oG;eKshe27;\x202z5;g\x202lMol\x20Krk\x20I;aKwi20;bo7r8;d\x206low\x202;aLeKip0;sh0;g\x206ke0mKrKtten\x20H;e\x20F;gRlPnNrLsKzzle0;h\x20F;e\x20Km\x202;aw3ba4up;d0isK;h\x202;e\x20Kl\x201T;aw3fPin,o7;ht\x20ba4ure0;ePnLsK;s\x202;cMd\x20K;fKoG;or;e\x20D;d04l\x202;cNll\x20Krm0t1G;aLbKdo1in,o09sho0Eth08victim;a4ehi2O;pa0C;e\x20K;do1oGup;at\x20Kdge0nd\x2012y5;in,o7up;aOi1HoNrK;aLess\x206op\x20KuN;aw3b03in,oC;gBwB;\x20Ile0ubl1B;m\x202;a0Ah05l02oOrLut\x20K;aw3ba4do1oCup;ackBeep\x20LoKy0;ss\x20Dwd0;by,do1in,o0Uup;me\x20NoLuntK;\x20o2A;k\x206l\x20K;do1oG;aRbQforOin,oNtKu0O;hLoKrue;geth9;rough;ff,ut,v9;th,wK;ard;a4y;paKr8w3;rt;eaLose\x20K;in,oCup;n\x206r\x20F;aNeLiK;ll0pE;ck\x20Der\x20Kw\x20F;on,up;t\x202;lRncel0rOsMtch\x20LveE;\x20in;o1Nup;h\x20Dt\x20K;doubt,oG;ry\x20LvK;e\x2008;aw3oJ;l\x20Km\x20H;aLba4do1oJup;ff,n,ut;r8w3;a0Ve0MiteAl0Fo04rQuK;bblNckl05il0Dlk\x206ndl05rLsKtMy\x20FzzA;t\x2000;n\x200HsK;t\x20D;e\x20I;ov9;anWeaUiLush\x20K;oGup;gh
Qng\x20K;aNba4do1forMin,oLuK;nd9p;n,ut;th;bo7lKr8w3;ong;teK;n\x202;k\x20K;do1in,o7up;ch0;arTg\x206iRn5oPrNssMttlLunce\x20Kx\x20D;aw3ba4;e\x206;\x20ar8;e\x20H;do1;k\x20Dt\x202;e\x202;l\x206;do1up;d\x202;aPeed0oKurt0;cMw\x20K;aw3ba4do1o7up;ck;k\x20K;in,oC;ck0nk0stA;\x20oQaNef\x202lt0nd\x20K;do1ov9up;er;up;r\x20Lt\x20K;do1in,oCup;do1o7;ff,nK;to;ck\x20Pil0nMrgLsK;h\x20D;ainBe\x20D;g\x20DkB;\x20on;in,o7;aw3do1in,oCup;ff,ut;ay;ct\x20FdQir0sk\x20MuctionA;\x20oG;ff;ar8o7;ouK;nd;\x20o7;d\x20K;do1oKup;ff,n;wn;o7up;ut','variable.language','MexicanHatWavelet','AVR\x20Assembly','collision_ellipse_list','LinkRankCentrality','systemChat','true¦0:4Q;a3Tb3Bc2Od2He2Df27g1Zh1Ti1Pj1Nk1Ll1Gm12n0Po0Mp0Cqu0Br02sTtHuCv9w3xiaomi,y1;amaha,m1Bou1w1B;gov,tu3C;a4e2iki1orld\x20trade\x20organizati33;leaRped0O;lls\x20fargo,st1;fie2Hinghou2R;l1rner\x20br3U;gree3Jl\x20street\x20journ2Im1E;an\x20halOeriz2Xisa,o1;dafo2Yl1;kswagMvo;b4kip,n2ps,s1;a\x20tod3Aps;es3Mi1;lev3Fted\x20natio3C;er,s;\x20mobi32aco\x20beRd\x20bOe9gi\x20frida3Lh3im\x20horto3Amz,o1witt3D;shi49y1;ota,s\x20r\x2005;e\x201in\x20lizzy;b3carpen3Jdaily\x20ma3Dguess\x20w2holli0s1w2;mashing\x20pumpki35uprem0;ho;ea1lack\x20eyed\x20pe3Xyr0Q;ch\x20bo3Dtl0;l2n3Qs1xas\x20instrumen1U;co,la\x20m1F;efoni0Kus;a8cientology,e5ieme2Ymirnoff,np,o3pice\x20gir6quare0Ata1ubaru;rbuc1to34;ks;ny,undgard1;en;a2x\x20pisto1;ls;g1Wrs;few2Minsbur31lesfor03msu2E;adiohead,b8e4o1yana3C;man\x20empi1Xyal\x201;b1dutch\x20she4;ank;a3d\x201max,vl20;bu1c2Ahot\x20chili\x20peppe2Ylobst2N;ll;ders\x20dige1Ll\x20madrid;c,s;ant3Aizn2Q;a8bs,e5fiz2Ihilip4i3r1;emier\x201udenti1D;leagTo2K;nk\x20floyd,zza\x20hut;\x20morrBs;psi2tro1uge0E;br33chi0Tn33;!co;lant2Un1yp16;\x202ason27da2P;ld\x20navy,pec,range\x20juli2xf1;am;us;aAb9e6fl,h5i4o1sa,vid3wa;k2tre\x20dame,vart1;is;ia;ke,ntendo,ss0QvZ;l,s;c,st1Otflix,w1;\x201sweek;kids\x20on\x20the\x20block,york0D;a,c;nd22s2t1;ional\x20aca2Po,we0U;a,c02d0S;aDcdonalCe9i6lb,o3tv,y1;spa1;ce;b1Tnsanto,ody\x20blu0t1;ley\x20cr1or0T;ue;c2t1;as,subisO;helin
,rosoft;dica2rcedes\x20benz,talli1;ca;id,re;ds;cs\x20milk,tt19z24;a3e1g,ittle\x20caesa1P;\x20ore09novo,x1;is,mark,us;\x201bour\x20party;pres0Dz\x20boy;atv,fc,kk,lm,m1od1O;art;iffy\x20lu0Roy\x20divisi0Jpmorgan1sa;!\x20cha09;bm,hop,k3n1tv;g,te1;l,rpol;ea;a5ewlett\x20pack1Vi3o1sbc,yundai;me\x20dep1n1P;ot;tac1zbollah;hi;lliburt08sbro;eneral\x206hq,ithub,l5mb,o2reen\x20d0Ou1;cci,ns\x20n\x20ros0;ldman\x20sachs,o1;dye1g0H;ar;axo\x20smith\x20kli04encoW;electr0Nm1;oto0Z;a5bi,c\x20barcelo4da,edex,i2leetwood\x20m03o1rito\x20l0G;rd,xcY;at,fa,nancial1restoZ;\x20tim0;na;cebook,nnie\x20mae;b0Asa,u3xxon1;\x20m1m1;ob0J;!rosceptics;aiml0De5isney,o4u1;nkin\x20donu2po0Zran\x20dur1;an;ts;j,w\x20jon0;a,f\x20lepp12ll,peche\x20mode,r\x20spieg02stiny\x27s\x20chi1;ld;aJbc,hFiDloudflaCnn,o3r1;aigsli5eedence\x20clearwater\x20reviv1ossra09;al;c7inba6l4m1o0Est09;ca2p1;aq;st;dplSg1;ate;se;a\x20c1o\x20chanQ;ola;re;a,sco1tigroup;!\x20systems;ev2i1;ck\x20fil\x20a,na\x20daily;r1y;on;d2pital\x20o1rls\x20jr;ne;bury,ill1;ac;aEbc,eBf9l5mw,ni,o1p,rexiteeU;ei3mbardiIston\x201;glo1pizza;be;ng;o2ue\x20c1;roV;ckbuster\x20video,omingda1;le;\x20g1g1;oodriL;cht2e\x20ge0rkshire\x20hathaw1;ay;el;cardi,idu,nana\x20republ3s1xt5y5;f,kin\x20robbi1;ns;ic;bYcTdidSerosmith,iRlKmEnheuser\x20busDol,ppleAr6s4u3v2y1;er;is,on;di,todesk;hland\x20o1sociated\x20E;il;b3g2m1;co;os;ys;\x20compu1be0;te1;rs;ch;c,d,erican3t1;!r1;ak;\x20ex1;pre1;ss;\x205catel2ta1;ir;!\x20lu1;ce1;nt;jazeera,qae1;da;g,rbnb;as;/dc,a3er,tivision1;!\x20blizz1;ard;demy\x20of\x20scienc0;es;ba','draw_get_swf_aa_level','vertex_get_buffer_size','⋟','id|0','getKey','ctrlSetTextColor','DataStructureQ','WeberE','abbr','gamespeed_microseconds','RESET','zephir','clauses','skipTime','IntegerName','sprintf','may','$MachineAddresses','gpu_get_tex_mip_enable_ext','#PhrasalVerb','endRecord','order','captain-who','TrackingFunction','Not','not_eq','CRITBINOM','FlatShading','CandlestickChart','MROUND','Comparable','iso-dot','string_byte_at','HeatInsulationValue','work-or-pre
pare','≜','(was|were|had|have)','displayRemoveAllEventHandlers','Backtrack','Texture','gcode','ToDate','CLS','window_set_cursor','steam_ugc_create_query_user','isolated','true¦n0;ever,o0;n,t','log1m','TEXT','sprite_get_bbox_bottom','whence','setParticleParams','Fine','overflow-wrap','Unit','position_change','TopologicalSort','BayesianMaximizationObject','vdir','GroupCentralizer','⪂','𝔗','ColonForm','skeleton_animation_clear','CosineDistance','HalfLine','__strong','EdgeOpacity','pr_trianglefan','WaitNext','FrameRate','÷','the-acronym','Right|0','$TracePattern','skeleton_attachment_get','trior','object_get_name','do-it-better','EdgeColor','⧐','PlotRange','⥓','ℰ','AutoStyleOptions','matrix_transform_vertex','\x5cw+','if\x20.{2,9}\x20then\x20.','findEmptyPosition','CountDistinct','ps_distr_invgaussian','there','css','#Possessive\x20#Noun\x20[%Plural|Verb%]$','Pronouns','ImageChannels','layerelementtype_sprite','break\x20cmake_host_system_information\x20cmake_minimum_required\x20cmake_parse_arguments\x20cmake_policy\x20configure_file\x20continue\x20elseif\x20else\x20endforeach\x20endfunction\x20endif\x20endmacro\x20endwhile\x20execute_process\x20file\x20find_file\x20find_library\x20find_package\x20find_path\x20find_program\x20foreach\x20function\x20get_cmake_property\x20get_directory_property\x20get_filename_component\x20get_property\x20if\x20include\x20include_guard\x20list\x20macro\x20mark_as_advanced\x20math\x20message\x20option\x20return\x20separate_arguments\x20set_directory_properties\x20set_property\x20set\x20site_name\x20string\x20unset\x20variable_watch\x20while\x20add_compile_definitions\x20add_compile_options\x20add_custom_command\x20add_custom_target\x20add_definitions\x20add_dependencies\x20add_executable\x20add_library\x20add_link_options\x20add_subdirectory\x20add_test\x20aux_source_directory\x20build_command\x20create_test_sourcelist\x20define_property\x20enable_language\x20enable_testing\x20export\x20fltk_wrap_ui\x20get_source_file_property\x20get_target
_property\x20get_test_property\x20include_directories\x20include_external_msproject\x20include_regular_expression\x20install\x20link_directories\x20link_libraries\x20load_cache\x20project\x20qt_wrap_cpp\x20qt_wrap_ui\x20remove_definitions\x20set_source_files_properties\x20set_target_properties\x20set_tests_properties\x20source_group\x20target_compile_definitions\x20target_compile_features\x20target_compile_options\x20target_include_directories\x20target_link_directories\x20target_link_libraries\x20target_link_options\x20target_sources\x20try_compile\x20try_run\x20ctest_build\x20ctest_configure\x20ctest_coverage\x20ctest_empty_binary_directory\x20ctest_memcheck\x20ctest_read_custom_files\x20ctest_run_script\x20ctest_sleep\x20ctest_start\x20ctest_submit\x20ctest_test\x20ctest_update\x20ctest_upload\x20build_name\x20exec_program\x20export_library_dependencies\x20install_files\x20install_programs\x20install_targets\x20load_command\x20make_directory\x20output_required_files\x20remove\x20subdir_depends\x20subdirs\x20use_mangled_mesa\x20utility_source\x20variable_requires\x20write_file\x20qt5_use_modules\x20qt5_use_package\x20qt5_wrap_cpp\x20on\x20off\x20true\x20false\x20and\x20or\x20not\x20command\x20policy\x20target\x20test\x20exists\x20is_newer_than\x20is_directory\x20is_symlink\x20is_absolute\x20matches\x20less\x20greater\x20equal\x20less_equal\x20greater_equal\x20strless\x20strgreater\x20strequal\x20strless_equal\x20strgreater_equal\x20version_less\x20version_greater\x20version_equal\x20version_less_equal\x20version_greater_equal\x20in_list\x20defined','fadeRadio','audio_listener_orientation','xs:unsignedInt','params','ValueDimensions','Delimiters','AiryAiZero','systask','Root','NotebookObject','↽','setnetent','skeleton_get_bounds','TrackGeometryWindow','hideBody','Timer','UNDERSTANDING','strncat','models','loadFont','WilksW','callable','setEditorMode','color-gamut','beach','PolygonAngle','meaning-alluring','MeanFilter','buffer_wrap','vk_shift','highz1','president-of'
,'border-inline-end-width','xor_eq','GeoScaleBar','return','OVERRIDE','TrueQ','localtime','╧','TakeLargestBy','getSocket','vk_pause','IfRebootFlag','ChemicalInstance','\x5cb(Procedure|Declare)(C|CDLL|DLL)?\x5cb','DownArrowUpArrow','_open','*\x0a:::\x0a\x0a\x0a\x0a','EntityStart','puts','NOISE','fraction','PhoneNumber','doing-better-for-x','ctrlCommit',')[jJ](?=','scroll','_Decimal64','plugin','ParametricPlot','function\x20module\x20include\x20use\x20for\x20intersection_for\x20if\x20else\x20\x5c%','ctrlSetFade','1000','env','digits','≕','fprintf','PROGRAMFILES32','SubtypeName','getCenterOfMass','filled','ctrlMapMouseOver','HjorthDistribution','#close-btn2','InstallButtonText','$AudioDecoders','println!','#scroll-target','Option\x20','exp_mod_normal','⫽','nan','iap_ev_storeload','Run','ius','zeros_row_vector','\x5cb0[xX](','Č','ifdef','ErrorBoxOptions','camera_get_default','onmessage','keywords','physics_particle_get_radius','emailButtonHelpModal','Prop','maxLoad','empir','Extern','InvalidArgumentException','PositionIndex','StatusCentrality','preloadTitleObj','setAirplaneThrottle','3-org-acronym','gamepad_button_check_released','a\x20[bit]','GEOMEAN','ev_boundary','Interactive','diag_lightNewLoad','Serial','ℤ','strong0','missing|5','[\x20\x09]*:','LinearOptimization','#else','Nothing','PastTense','font','audio_free_buffer_sound','⋚︀','array_pop','forever','Scilab','Vim\x20Script','prepend','phy_particle_flag_colormixing','HintDb','\x5cb-?\x5cd+','word-spacing','fromCodePoint','ISEVEN','DeleteMissing','TrackDistanceWindow','href','debriefingText','initHighlightingOnLoad()\x20deprecated.\x20\x20Use\x20highlightAll()\x20now.','LightYellow','$dumpportsflush','JoinedCurveBox','WinVista','$TestFileName','DayRange','pre\x20code','CompilerEnvironment','PalindromeQ','trigger','triggerType','theater','(#Verb\x20&&\x20@hasHyphen)\x20over','TextAlignment','[#Noun]\x20me','iostat_eor','end_frame','conseil','array_length','Verilog','ev_outside','InlineRules','INTERNAL1V1','win8_liv
etile_tile_clear','IdDict','Close','padEnd','color','align-self','[_a-z][\x5cw\x27]*','IncludeGeneratorTasks','building','⅙','tran','NotLess','actionKeys','⩾̸','VideoCombine','didJIP','#Possessive\x20#Ordinal\x20[#PastTense]','∬','freq','ŵ','GeoResolution','$set_coverage_db_name','subMap','DefaultAttachedCellStyle','⥬','2451576KWrajr','overflow-block','random','ev_gesture','path_action_restart','Saveable','3-place-acronym','deHyphenate','MeanAround','getSuppression','diag_dumpScriptAssembly','YunServer','ChromaticityPlot','BeginDialogPacket','HorizontalForm','IndentMaxFraction','Prime','Hessian','╛','SameAs','(#Negative|#Auxiliary|#Modal|#Adverb|#Prefix)','keyboard_virtual_height','gone','T.INV.2T','gpu_get_tex_filter_ext','$AudioEncoders','CompatibleUnitQ','writeText','references','GrammarToken','#download-btn','winphone_tile_back_title','instance_deactivate_all','findCover','EqualRows','phy_joint_reaction_force_x','[A-Za-z0-9_$]+','PlanarAngle','InverseRadonTransform','lmgamma','NumericArrayQ','createVehicle','𝔎','ImageScaled','room_set_viewport','TargetFunctions','$displayo','ignoreMatch','DirichletDistribution','cmpfunc_equal','NestedGreaterGreater','IntOp','RandomWord','MarkdownIt.\x20Failed\x20to\x20enable\x20unknown\x20rule(s):\x20','⤼','ropes','GraphicsGridBox','ImageData','ReflectionMatrix','pass','HelmholtzPDEComponent','succeeds','strlen','CUBEMEMBERPROPERTY','InputPacket','BINARY_NUMBER_MODE','ctrlEnable','Conditioned','Noun|Gerund','LaTeX','LOAD','justify-content','ColorCombine','stringstream','nclob','vectorUp','⊣','DEC2OCT','getOut','GetContext','SearchIndexObject','variable_instance_set','EffectiveInterest','lrepeat|10','font-feature-settings','CMake','PalettesMenuSettings','ο','CellEvaluationLanguage','^[(shut|close|open|start|stop|end|keep)]\x20#Determiner\x20#Noun','⊨','StrLen','unbox','cerr','generalizing','ï','grid-template-rows','InverseJacobiSD','500px','break\x20continue\x20discard\x20do\x20else\x20for\x20if\x20return\x20while\x20switch\x20ca
se\x20default\x20attribute\x20binding\x20buffer\x20ccw\x20centroid\x20centroid\x20varying\x20coherent\x20column_major\x20const\x20cw\x20depth_any\x20depth_greater\x20depth_less\x20depth_unchanged\x20early_fragment_tests\x20equal_spacing\x20flat\x20fractional_even_spacing\x20fractional_odd_spacing\x20highp\x20in\x20index\x20inout\x20invariant\x20invocations\x20isolines\x20layout\x20line_strip\x20lines\x20lines_adjacency\x20local_size_x\x20local_size_y\x20local_size_z\x20location\x20lowp\x20max_vertices\x20mediump\x20noperspective\x20offset\x20origin_upper_left\x20out\x20packed\x20patch\x20pixel_center_integer\x20point_mode\x20points\x20precise\x20precision\x20quads\x20r11f_g11f_b10f\x20r16\x20r16_snorm\x20r16f\x20r16i\x20r16ui\x20r32f\x20r32i\x20r32ui\x20r8\x20r8_snorm\x20r8i\x20r8ui\x20readonly\x20restrict\x20rg16\x20rg16_snorm\x20rg16f\x20rg16i\x20rg16ui\x20rg32f\x20rg32i\x20rg32ui\x20rg8\x20rg8_snorm\x20rg8i\x20rg8ui\x20rgb10_a2\x20rgb10_a2ui\x20rgba16\x20rgba16_snorm\x20rgba16f\x20rgba16i\x20rgba16ui\x20rgba32f\x20rgba32i\x20rgba32ui\x20rgba8\x20rgba8_snorm\x20rgba8i\x20rgba8ui\x20row_major\x20sample\x20shared\x20smooth\x20std140\x20std430\x20stream\x20triangle_strip\x20triangles\x20triangles_adjacency\x20uniform\x20varying\x20vertices\x20volatile\x20writeonly','StringTake','ColorOutput','thisContext','setLightAmbient','margin-top','BadFunctionCallException','weak1','Monitor','skeleton_attachment_create','intrr','tablesample','#clear-btn','(?![a-zA-Z@:_])','(?!struct)(','$DefaultFont','CellElementSpacings','FileType','BuildingData','FunctionCompileExportString','CellTags','LocatorAutoCreate','isEqualType','ThermodynamicData','valley','glanceAt','DEPOT_PATH','subroutine\x20function\x20program','NormFunction','offset','amax0',':-3','LiveCode','Dims','process','macAddress','waited-until-release','politburo','keyboard_virtual_show','uint16','sendDigitalPortPair','ColorConvert','OutputControllabilityMatrix','LocalObject','dos','friend','$tanh','Tidy_Off','removeMagazi
neGlobal','isBleeding','facebook_init','Blue','setWaypointCombatMode','iap_ev_restore','[;#](?!\x5cs*$)','^#!/usr/bin/env','NetArray','µ','PrivatePaths','fuzzyIP','#Noun','⦫','delay_mode_distributed','CreatePalette','readString','intro','adv','instanceof','CloudObjectInformationData','this\x20super','𝓏','PERCENTRANK','WolframAlphaDate','PrincipalComponents','TabFilling','expectedDestination','->\x20','IDOK','http_request','Matrices','GeoZoomLevel','Unequal','dexp','Illegal\x20lexeme\x20\x22','ArithmeticGeometricMean','TickLabelOrientation','𝒜','perl','RebuildPacletData','PageHeaderLines','phy_joint_length_1','(?:','Boole','Dart','FileWriteUTF16LE','VectorAround','INSTDIR','dnum','two-fold','reserve','he-adj-the','lnbValue','iap_unavailable','historical','whend','FindVertexCover','SectorSpacing','lateral','TextBox','AP_LD','would-you-','explain','SymmetricKey','endgenerate','MaxWordGap','Б','compctl','getOwnPropertyNames','attached','how-he-is-x','(%|em|ex|ch|rem|vw|vh|vmin|vmax|cm|mm|in|pt|pc|px|deg|grad|rad|turn|s|ms|Hz|kHz|dpi|dpcm|dppx)?','PlusMinus','VHDL','(`{3,})[^`](.|\x5cn)*?\x5c1`*[\x20]*','frozen','hostname','Install','TokenWords','#Value+\x20#Adverb?\x20#Adjective','a-bit-confused','os_get_info','cpp','$InstallationDate','ands','room_get_camera','⫋︀','city','AnchoredSearch','ev_joystick2_left','EdgeTags','ugc_query_NotYetRated','GaugeFaceElementFunction','win8_share_file',']\x20!#Noun?','norm','AppearanceRules','LetterNumber','ParentEdgeShapeFunction','$TextStyle','step','Relate','SelfLoops','publicVariableClient','showWarrant','tbody_close','setVectorDir','transient','argument12','конецпроцедуры\x20конецфункции','lnbSort','nopass','hasQuotation','AmbiguityList','grid-auto-columns','Purple','get','digamma','Task','pinMode','here','mp_grid_path','DynamicSetting','#LastName+','currentNamespace','EarthImpactData','GridDefaultElement','ImageKeypoints','code_block','\x5cb|','CompressionLevel','ds_grid_value_disk_x','BackSlash','eglise','LineIntegralConvolution
Plot','NumberCompose','titleFadeOut','#Determiner\x20[%Adj|Noun%]\x20#Conjunction','shmwrite','CellEvaluationFunction','toAdverb','inPolygon','getgrnam','≈','MissingQ','growRight','optical','Corollary','RMDIR','tense','variable_instance_exists','allocated','SystemModelSimulate','PacletInformation','IgnoreCase','TrimmedMean','ErlangDistribution','CloudDisconnect','network_type_disconnect','ℍ','\x5cb0[bB][01](_*[01])*[lL]?\x5cb','shared_mutex','ObservableModelQ','mend','prefixPatterns','isSubordinate','Sunset','RoundingRadius','LossFunction','EntityProperty','#PresentTense\x20#Plural','triangle','д','$MachineEpsilon','𝕠','(a|one)\x20#Cardinal?+\x20#Ordinal','CreateTemporary','#function','PolygonCoordinates','(found|found)\x20it\x20#Adverb?\x20[#Gerund]','RandomPrime','#MaleName','memorial','openCursor','isWeaponRested','endprimitive','setcomp','skew_normal','self::','FindMaxValue','Out','PageHeaders','path_duplicate','texture_get_texel_width','Leaf','✗','ŧ','getLines','ef_ring','GraphDistance','#Value\x20[seconds]','ReentrantLock','$SynchronousEvaluation','c_funptr','disableTIEquipment','setMarkerTypeLocal','generic','#Cardinal\x20and\x20a\x20half','citation:\x20','FitRegularization','terminatorEnd','getElementById','#FirstName','≤','(got|were|was|is|are|am)\x20(#PastTense|#Participle)','destructor','voice-range','enterprise','MessageObject','audio_debug','ў','RemoveAsynchronousTask','ugc_query_CreatedByFollowedUsersRankedByPublicationDate','SystemInformation','forceplaceholders','ReadString','room_set_persistent','OpenAppend','PowerSpectralDensity','MultiLetterStyle','transaction_safe','FourierCosTransform','__slice','TransformedProcess','⤷','MassOutflowValue','FactorSquareFree','wholly','gp_padr','⊢','abstype\x20and\x20andalso\x20as\x20case\x20datatype\x20do\x20else\x20end\x20eqtype\x20exception\x20fn\x20fun\x20functor\x20handle\x20if\x20in\x20include\x20infix\x20infixr\x20let\x20local\x20nonfix\x20of\x20op\x20open\x20orelse\x20raise\x20rec\x20sharing\x20sig\x20sign
ature\x20struct\x20structure\x20then\x20type\x20val\x20with\x20withtype\x20where\x20while','$assertpassoff','replacements','DefaultFormatType','less','view_angle','allCurators','WaveletImagePlot','\x5cn\x5cn','cov_exp_quad','5pm-central','labs','CycleGraph','FindCurvePath','physics_set_friction','ds_priority_read','hotel','(#Pronoun|#Person)\x20(had|#Adverb)?\x20[better]\x20#PresentTense','Rasterize','package\x20body','CompilerEnvironmentObject','AFSDB','Hash','part_type_color_mix','messageAvailable','qcmpres','build','$sync$or$array','chomp','CoInductive','SearchIndices','VertexTransitiveGraphQ','ngs','bg-white','Ŵ','fade_at','path_add','LocatorBoxOptions','make_shared','DIM','EclipseType','handgunMagazine','updateIR','parseCommand','AASTriangle','DivideBy','ListLineIntegralConvolutionPlot','StridedVector','PRICE','C/AL','EQUALS','SpellingDictionaries','GraphAssortativity','TrackedSymbols','HeatFluxValue','setTerrainHeight','SetAccuracy','Cyclotomic','LibraryDataType','url_open','imaginary','gemspec','setCamUseTi','(#=>|=>|\x5c|>>|-?->|!->)','get_command_argument',')\x5c.)+','EvaluationOrder','bbox_left','xs:base64Binary','fopen','deg','BaseDecode','\x0a\x20\x20\x20\x20\x0a\x20\x20\x20\x20',')\x5cs*\x5c(','vertex_usage_texcoord','taskMarkerOffset','gearSlotAmmoCount','setMarkerColorLocal','Gray','NetGANOperator','(i|he|she|we|you|they)','musicVolume','string_insert','€','absolute_url','font_get_texture','AllowedHeads','part_type_color2','erfc','$ImagingDevice','❘','mouseX','aimag','NotPrecedesTilde','readJoystickY','Slot','obj-c++','public\x20protected\x20internal\x20private\x20constructor','layer_tile_get_sprite','getFriend','TensorReduce','ImageAlign','ComplexityFunction','EntityInstance','Ν','WeakReference','chan','json_encode','magnitude-and-value','thrift','pizza','IncludeWindowTimes','createObjectStore','auxiliary','BarChart','rest','logname','HEX2DEC','CUBERANKEDMEMBER','contain','ev_global_middle_button','physics_set_density','|[()[\x5c]{}.,\x22\x27?!\x5c-;
]).|\x5c[(?:(?!','ds_queue_head','uninstfinalize','JacobiSC','basin','Contours','proof','(?:else|fi|or):','cast','[A-Za-zА-Яа-яёЁ_][A-Za-zА-Яа-яёЁ_0-9]+','MaximalBy','))(','column-rule','path_get_kind','mutex','insert','addMusicEventHandler','channelEnabled','IMSUB','Inspect','tr_open','red','BlackmanNuttallWindow','vivo','EllipticExpPrime','operator','#Possessive\x20[%Adj|Gerund%]\x20#Noun','to_matrix','specularbrdf','fractions','setservent','Column','⋢','layer_tile_get_alpha','lbAdd','layer_sprite_get_speed','audio_falloff_none','postTagger','getHitIndex','','GeneratedCell','FactorTerms','get3DENActionState','setGroupIdGlobal','background-clip','RemoteBatchSubmit','optimisticlock','cr_size_all','does-that-work','Ќ','if\x20then\x20not\x20for\x20in\x20while\x20do\x20return\x20else\x20elseif\x20break\x20continue\x20switch\x20and\x20or\x20unless\x20when\x20class\x20extends\x20super\x20local\x20import\x20export\x20from\x20using','LongestCommonSequence','DimensionalCombinations','PrecedesSlantEqual','ℝ','win8_search_enable','MeijerG','pseudo','RadialGradientImage','InputAliases','SurfaceAppearance','𝔳','CellFrame','arch','break-before','cflow','Num','background-color','enough','Resize','array_count','physics_joint_set_value','pde','EnterExpressionPacket','questions','getVehicleCargo','PeriodogramArray','⇃','^1\x5c.\x5c.(\x5cd+)$','StackComplete','vk_f9','scroll-padding-inline-end','draw_roundrect','nonisolated','HistoricalPeriodData','PEN','node-repl','Clear','ListSliceVectorPlot3D','(he|him|his)','\x20of','DynamicName','[$%\x5c[]','BetaPrimeDistribution','setoid_transitivity','hintCadet','Blacklist','⥐','Protected','ExternalFunctionName','ArgumentError','WebSearch','ffs','repeat','border-bottom-right-radius','ideolog','willSet','GrammarApply','will\x20','⫩','$ftell','name','[\x20]*(?=(TODO|FIXME|NOTE|BUG|OPTIMIZE|HACK|XXX):)','stateNamedEntity','fmt','zpty','audio_listener_position','ctrlFontHeight','SocketReadMessage','vertex_normal','$MachineDomains','ParametricRampL
ayer','font_exists','file_bin_close','delete3DENEntities','layer_tile_get_y','timeline_speed','prefix','DeletePermissionsKey','char\x20uchar\x20unichar\x20int\x20uint\x20long\x20ulong\x20short\x20ushort\x20int8\x20int16\x20int32\x20int64\x20uint8\x20uint16\x20uint32\x20uint64\x20float\x20double\x20bool\x20struct\x20enum\x20string\x20void\x20weak\x20unowned\x20owned\x20async\x20signal\x20static\x20abstract\x20interface\x20override\x20virtual\x20delegate\x20if\x20while\x20do\x20for\x20foreach\x20else\x20switch\x20case\x20break\x20default\x20return\x20try\x20catch\x20public\x20private\x20protected\x20internal\x20using\x20new\x20this\x20get\x20set\x20const\x20stdout\x20stdin\x20stderr\x20var','getAllOwnedMines','StateSpaceRealization','Method','Generalize','last-child','oil','subset','NormalizationLayer','Vec','ListPickerBox','```','BITRSHIFT','#Copula\x20[still]\x20(in|#Gerund|#Adjective)','percent','so-gerund','rammed','list_item_close','NS_VALUERETURN','dlog','AddUsers','CharacterEncoding','ImageBoundingBoxes','ButtonNote','“”‘’','û','caption-side','insensitive','backpackSpaceFor','AudioTrim','ServiceConnect','$sync$nand$array','$dumpall','SocketWaitAll','Auxiliary\x20Infinitive','CompileError','(#FirstName\x20&&\x20!#Possessive)\x20[#Singular]\x20#Verb','xus','(this|that|#Comparative|#Superlative)','pathname','\x5c/\x5c*','deleteMarker','project','setDynamicSimulationDistanceCoef','HorizontalScrollPosition','AnnotationValue','Point','setVehicleId','of_challenge_win','BitShiftLeft',''','translate_?z?','pushStyle','#end','beginShape','AtMention','gesture_flick_speed','\x5cw+\x5cs*=','gp_padl','AudioJoin','lbSetPictureRight','as\x20if','pre','PermutationProduct','object_get_mask','lake','TitleGrouping','approx','BackFaceTexture','Back','__bridge_retain','$MaxPiecewiseCases','CircularQuaternionMatrixDistribution','AudioOutputDevice','ѓ','FeatureSpacePlot','findAll','underwater','$asserton','mineActive','FloatList','DrawEdges','COVARIANCE.S','LQGRegulator','mp_grid_draw'
,'MeyerWavelet','⤒','marine','vertex_create_buffer_from_buffer','BrightnessEqualize','more-papers-','cityNameRead','⩺','SpatialBinnedPointData','#Place','mice','LightCyan','∼⃒','escaping','sql','documentroot','enableTeamSwitch','budget','cstringarray','IsSimple','𝕖','GuidedFilter','linkify','ParallelArray','UninstallIcon','MoleculeDraw','kbv_type_phone','graph','steam_is_cloud_enabled_for_account','Cosr','$SummaryBoxDataSizeLimit','3-[place-of-foo]','⊗','^[#Infinitive]\x20(him|her|it|us|me|there)','▓','Url','deleteLocation','prolog','lbSetPictureColorSelected','RightUpVectorBar','𝔪','ImageVectorscopePlot','tbody_open','LOWER_Z','vk_subtract','(#Noun\x20&&\x20#Hyphenated)\x20(#Adjective\x20&&\x20#Hyphenated)','$SystemCredentialStore','CDFWavelet',':^)','gpu_get_tex_max_aniso_ext','cet','\x0a\x0a---\x0a---\x0a---\x0a\x0a','TagSet','KillProcess','territory','EditDistance','clock_str','DISC','[\x20\x5ct\x5cn\x5cr]+(::)?[a-zA-Z_]((::)?[a-zA-Z0-9_])*','getprotobyname','won\x27t','NS_ASSUME_NONNULL_BEGIN','Error\x20counting\x20rows:\x20','EchoEvaluation','@page\x20@font-face','avg','setRectangular','lag','CoulombG','ugc_sortorder_CreationOrderDesc','procedure','createMissionDisplay','ordered','developing-a','BETAINV','LOGEST','lock_guard','FoxH','addplugindir','vertex_type_colour','ImportAutoReplacements','camera_set_view_speed','lbSetTextRight','#helpModalContainer','removeDiaryRecord','data\x20family\x20type\x20newtype\x20deriving','external_free','isNegative','date_get_month','⦋','showCommandingMenu','⋱','Lemma','text/plain','UseGraphicsRange','border-bottom','newOutputStream','setWaypointName','mailto','GridBoxOptions','EdgeTaggedGraph','only-child','difficultyEnabledRTD','LengthException','!@#$%^&*()','revoke','disconnect','HitMissTransform','so-adv','SplQueue','fromPresent','ef_flare','allObjects','newReader','Enum','ensemble','^time:','audio_sound_get_pitch','⊊︀','Overflow:\x20input\x20needs\x20wider\x20integers\x20to\x20process','room_add','Brown','(%)?','_webout_'
,'nullif','bless','LabelVisibility','^(had|have)\x20been\x20#Passive','^#Adverb','implicit|10','ReImPlot','SiegelTukeyTest','MathieuCPrime','Cross','still-advb','jbessel','PlanetData',')\x5cb|\x5c.)?|(','Џ','display','vestMagazines','ClassPriors','RandomPermutation','Attachments','AllowIncomplete','$writeh','NExpectation','([G])([0-9]+\x5c.?[0-9]?)','hippopotami','SetSecuredAuthenticationKey','Conjugate','YEAR','switchableUnits','every','selectPlayer','image_index','SemidefiniteOptimization','current_date','ArrayQ','simulCloudDensity','kbv_returnkey_default','currency_symbol','███████','hxx','DocumentWeightingRules','*^:','Initial','addPlayerScores','ĶķĸƘƙǨǩΚκЌЖКжкќҚқҜҝҞҟҠҡ','$dumpon','will\x20want',':^p','format_time','platform','lbData','shl','null\x20истина\x20ложь\x20неопределено','part_system_depth','DiscreteMaxLimit','NotebookEvaluate','true\x20false\x20null','figure','DateListPlot','selectRandom','to\x20[(shit|hell)]','fork','$Username','padding-inline-start','vk_f7','layerelementtype_undefined','table_open','$SubtitleDecoders','#Infinitive\x20(this|that|the)\x20[#Infinitive]','Buffer','A_[a-zA-Z0-9]+','(?!%\x5c})(','⥪','python','Global','TetrahedronBoxOptions','CDFInformation','what\x20does\x20this\x20mean?','LeftRightVector','$fmonitor','techn','zsh','camSetFovRange','Evaluate','physics_fixture_delete','draw_roundrect_color','isstring','Erf','$GeoLocationPrecision','⇤','ahk','^TAP\x20version\x20(\x5cd+)$','ConnectedMoleculeQ','neb','COVAR','LOG','AssociationMap','attrn','popped','probit','VertexDelete','image_yscale','SurfaceArea','IFERROR','Parallelogram','reshape','toNice','\x5cs+:\x5cs+','keep-it-cool','sdf','UpdateDynamicObjectsSynchronous','\x22\x20style=\x22display:\x20none;\x22>\x0a\x0a','rest_arg','isalpha','EstimatedVariogramModel','ev_left_press','diskcopy','@media\x20','quintillion','asking-questions','#Modal\x20[%Person|Verb%]','LEFT$','NSIS','steam_ugc_query_add_excluded_tag','mb_middle','ExternalEvaluate','current_schema','RequestExecutionLeve
l','row-gap','PixelValuePositions','paste','1450NbbOUJ','pre\x20code.hljs\x20{\x0a\x20\x20display:\x20block;\x0a\x20\x20overflow-x:\x20auto;\x0a\x20\x20padding:\x201em\x0a}\x0acode.hljs\x20{\x0a\x20\x20padding:\x203px\x205px\x0a}\x0a/*!\x0a\x20\x20Theme:\x20GitHub\x0a\x20\x20Description:\x20Light\x20theme\x20as\x20seen\x20on\x20github.com\x0a\x20\x20Author:\x20github.com\x0a\x20\x20Maintainer:\x20@Hirse\x0a\x20\x20Updated:\x202021-05-15\x0a\x0a\x20\x20Outdated\x20base\x20version:\x20https://github.com/primer/github-syntax-light\x0a\x20\x20Current\x20colors\x20taken\x20from\x20GitHub\x27s\x20CSS\x0a*/\x0a.hljs\x20{\x0a\x20\x20color:\x20#24292e;\x0a\x20\x20background:\x20#ffffff\x0a}\x0a.hljs-doctag,\x0a.hljs-keyword,\x0a.hljs-meta\x20.hljs-keyword,\x0a.hljs-template-tag,\x0a.hljs-template-variable,\x0a.hljs-type,\x0a.hljs-variable.language_\x20{\x0a\x20\x20/*\x20prettylights-syntax-keyword\x20*/\x0a\x20\x20color:\x20#d73a49\x0a}\x0a.hljs-title,\x0a.hljs-title.class_,\x0a.hljs-title.class_.inherited__,\x0a.hljs-title.function_\x20{\x0a\x20\x20/*\x20prettylights-syntax-entity\x20*/\x0a\x20\x20color:\x20#6f42c1\x0a}\x0a.hljs-attr,\x0a.hljs-attribute,\x0a.hljs-literal,\x0a.hljs-meta,\x0a.hljs-number,\x0a.hljs-operator,\x0a.hljs-variable,\x0a.hljs-selector-attr,\x0a.hljs-selector-class,\x0a.hljs-selector-id\x20{\x0a\x20\x20/*\x20prettylights-syntax-constant\x20*/\x0a\x20\x20color:\x20#005cc5\x0a}\x0a.hljs-regexp,\x0a.hljs-string,\x0a.hljs-meta\x20.hljs-string\x20{\x0a\x20\x20/*\x20prettylights-syntax-string\x20*/\x0a\x20\x20color:\x20#032f62\x0a}\x0a.hljs-built_in,\x0a.hljs-symbol\x20{\x0a\x20\x20/*\x20prettylights-syntax-variable\x20*/\x0a\x20\x20color:\x20#e36209\x0a}\x0a.hljs-comment,\x0a.hljs-code,\x0a.hljs-formula\x20{\x0a\x20\x20/*\x20prettylights-syntax-comment\x20*/\x0a\x20\x20color:\x20#6a737d\x0a}\x0a.hljs-name,\x0a.hljs-quote,\x0a.hljs-selector-tag,\x0a.hljs-selector-pseudo\x20{\x0a\x20\x20/*\x20prettylights-syntax-entity-tag\x20*/\x0a\x20\x20color:\x20#22863a\
x0a}\x0a.hljs-subst\x20{\x0a\x20\x20/*\x20prettylights-syntax-storage-modifier-import\x20*/\x0a\x20\x20color:\x20#24292e\x0a}\x0a.hljs-section\x20{\x0a\x20\x20/*\x20prettylights-syntax-markup-heading\x20*/\x0a\x20\x20color:\x20#005cc5;\x0a\x20\x20font-weight:\x20bold\x0a}\x0a.hljs-bullet\x20{\x0a\x20\x20/*\x20prettylights-syntax-markup-list\x20*/\x0a\x20\x20color:\x20#735c0f\x0a}\x0a.hljs-emphasis\x20{\x0a\x20\x20/*\x20prettylights-syntax-markup-italic\x20*/\x0a\x20\x20color:\x20#24292e;\x0a\x20\x20font-style:\x20italic\x0a}\x0a.hljs-strong\x20{\x0a\x20\x20/*\x20prettylights-syntax-markup-bold\x20*/\x0a\x20\x20color:\x20#24292e;\x0a\x20\x20font-weight:\x20bold\x0a}\x0a.hljs-addition\x20{\x0a\x20\x20/*\x20prettylights-syntax-markup-inserted\x20*/\x0a\x20\x20color:\x20#22863a;\x0a\x20\x20background-color:\x20#f0fff4\x0a}\x0a.hljs-deletion\x20{\x0a\x20\x20/*\x20prettylights-syntax-markup-deleted\x20*/\x0a\x20\x20color:\x20#b31d28;\x0a\x20\x20background-color:\x20#ffeef0\x0a}\x0a.hljs-char.escape_,\x0a.hljs-link,\x0a.hljs-params,\x0a.hljs-property,\x0a.hljs-punctuation,\x0a.hljs-tag\x20{\x0a\x20\x20/*\x20purposely\x20ignored\x20*/\x0a\x20\x20\x0a}','\x5c.(\x5c.|','SetCurInstType','⩛','FileClose','foreach\x20do\x20while\x20for\x20if\x20from\x20to\x20step\x20else\x20on-error\x20and\x20or\x20not\x20in','GOTO','3:12:31','nearEntities','ExactSizeIterator','after','dust','puppet','PermutationMax','#\x5cw+(\x5c.\x5cw+)?#(','NORMSDIST','targetKnowledge','CentralFeature','go-to-toronto','$async$and$plane','mma','GibbsPointProcess','audio_pause_sync_group','beginWrite','keyword','waypointTimeoutCurrent','rule','╜','onDoubleClick','over','dispatchEvent','ev_gui_end','StringExtract','antagonize','KVertexConnectedGraphQ','(will\x20&&\x20@isTitleCase)\x20#ProperNoun','@hasHyphen\x20.','#Honorific+','setter','move_random','fade_in','actionKeysNamesArray','ProbabilityDistribution','kbv_type_ascii','Sunday','#Singular\x20is\x20#Adverb?\x20[#PastTense$]','endproperty','LeastSquares','net
work_config_enable_reliable_udp','$ln','LogBarnesG','true\x20false\x20yes\x20no\x20nothing\x20nil\x20null','phy_joint_frequency','AstroBackground','SechDistribution','s_until_with','WeightedData','10.6.0','♦','gpu_set_tex_max_mip_ext','$Assumptions','ColorReplace','alignas','configName','input[type=\x22checkbox\x22]','char8_t','BezierCurveBoxOptions','\x5cb\x5cd+(\x5c.\x5cd+)?(ms|s|h|m)?','EqualTo','com4','THAI_CHARSET','freeExtension','display_set_gui_maximize','ev_gesture_pinch_in','ProcessInformation','2-regex-\x27','TagSetDelayed','co_max','Example','╨','this','class','ds_list_mark_as_list','DiffusionPDETerm','camera_get_view_border_y','fourtieth','Attribute','BufferedReader','[A-Za-z_]\x5cw*(::\x5cw+)*(\x5c?|!)?','⦼','c_ltgray','AugmentedPolyhedron','NonNegative','goto','DefaultInputFormatType','ADJUST','ResizeLayer','ev_global_gesture_tap','%f\x20%F\x20%t\x20%T\x20%pi\x20%eps\x20%inf\x20%nan\x20%e\x20%i\x20%z\x20%s','DiscreteUniformDistribution','VideoEncoding','patch','assume','[A-Za-zА-Яа-яёЁ_!][A-Za-zА-Яа-яёЁ_0-9]*','stddev_pop','(dc|atlanta|minnesota|manchester|newcastle|sheffield)\x20united','RollPitchYawMatrix','OpenTemporary','rancau','TagUnset','Crystal','BoxObject','his-excellency','Deployed','⁃','intersect','VariogramFunction','PunctuationCharacter','Manipulate','FilteredEntityClass','◺','⫮','sum','LeftTee','ŭ','NullSpace','canBe','isVerb','AllowInlineCells','PHP\x20template','LeftDownVector','DominatorTreeGraph','int64','bring-to-noun','VectorScale','node','iso-slash','scriptengineminorversion','deleteVehicle','overly','pub','scroll-padding-inline','Covariance','Not\x20available','BubbleChart3D','part_system_clear','scheme','intro_pattern','ROW','createTreeWalker','Collection','brewing-giant','setFirmwareVersion','removeAllSecondaryWeaponItems','Standardize','makefile','UnlabeledTree','FeatureExtract','Comparative','$PlotTheme','FILES','attrJoin','shelve','false|true|PI|undef','RANK.EQ','tilemap_get_height','Previous','pullup','uses','InverseShortTi
meFourier','getcap','log1p_exp','ColorSetterBoxOptions','setGroupIcon','overlap','ℌ','FunctionContinuous','#Infinitive\x20[that]\x20','abstract\x20alias\x20annotation\x20as\x20as?\x20asm\x20begin\x20break\x20case\x20class\x20def\x20do\x20else\x20elsif\x20end\x20ensure\x20enum\x20extend\x20for\x20fun\x20if\x20include\x20instance_sizeof\x20is_a?\x20lib\x20macro\x20module\x20next\x20nil?\x20of\x20out\x20pointerof\x20private\x20protected\x20rescue\x20responds_to?\x20return\x20require\x20select\x20self\x20sizeof\x20struct\x20super\x20then\x20type\x20typeof\x20union\x20uninitialized\x20unless\x20until\x20verbatim\x20when\x20while\x20with\x20yield\x20__DIR__\x20__END_LINE__\x20__FILE__\x20__LINE__','radix','display_portrait_flipped','TWOPI','ifmissing','pt_shape_disk','leq','intrinsic','FeatureSet','RoundFromZero','CoulombF','infix\x20infixl\x20infixr','owner','importScripts','datan2','shader_reset','CommonestFilter','softfloat','nativeint','GeometricOptimization','IndependentVertexSetQ','Hypergeometric2F1Regularized','lnbSetTextRight','viruses','ToCharacterCode','ctrlSetFontH1B','profileName','LoadError','mask-border-mode','format_args','file_text_write_string','TextCases','also','$Linked','IMSUM','government-of-x','FunctionPeriod','setAmmoCargo','#Noun\x20[that]\x20#Copula','in-range','ConvexRegionQ','ctrlSetFontH2','fence','automatic','setType','inflame','PTR','audio_system','Rewrite','isA','QnDispersion','⦕','gp_shoulderl','vertex_format_add_custom','handlebars','MaxValue','cycle','\x5c*\x5cs','FindGeoLocation','DirectoryStack','taskDestination','VideoJoin','debugMode','ColorSpace','division','$Version','Gradle','requestImage','COUPDAYSNC','⪢','simulCloudOcclusion','list-style-image','webцвета\x20windowsцвета\x20windowsшрифты\x20библиотекакартинок\x20рамкистиля\x20символы\x20цветастиля\x20шрифтыстиля\x20автоматическоесохранениеданныхформывнастройках\x20автонумерациявформе\x20автораздвижениесерий\x20анимациядиаграммы\x20вариантвыравниванияэлементовизаголовков\x20вариант
управлениявысотойтаблицы\x20вертикальнаяпрокруткаформы\x20вертикальноеположение\x20вертикальноеположениеэлемента\x20видгруппыформы\x20виддекорацииформы\x20виддополненияэлементаформы\x20видизмененияданных\x20видкнопкиформы\x20видпереключателя\x20видподписейкдиаграмме\x20видполяформы\x20видфлажка\x20влияниеразмеранапузырекдиаграммы\x20горизонтальноеположение\x20горизонтальноеположениеэлемента\x20группировкаколонок\x20группировкаподчиненныхэлементовформы\x20группыиэлементы\x20действиеперетаскивания\x20дополнительныйрежимотображения\x20допустимыедействияперетаскивания\x20интервалмеждуэлементамиформы\x20использованиевывода\x20использованиеполосыпрокрутки\x20используемоезначениеточкибиржевойдиаграммы\x20историявыборапривводе\x20источникзначенийоситочекдиаграммы\x20источникзначенияразмерапузырькадиаграммы\x20категориягруппыкоманд\x20максимумсерий\x20начальноеотображениедерева\x20начальноеотображениесписка\x20обновлениетекстаредактирования\x20ориентациядендрограммы\x20ориентациядиаграммы\x20ориентацияметокдиаграммы\x20ориентацияметоксводнойдиаграммы\x20ориентацияэлементаформы\x20отображениевдиаграмме\x20отображениевлегендедиаграммы\x20отображениегруппыкнопок\x20отображениезаголовкашкалыдиаграммы\x20отображениезначенийсводнойдиаграммы\x20отображениезначенияизмерительнойдиаграммы\x20отображениеинтерваладиаграммыганта\x20отображениекнопки\x20отображениекнопкивыбора\x20отображениеобсужденийформы\x20отображениеобычнойгруппы\x20отображениеотрицательныхзначенийпузырьковойдиаграммы\x20отображениепанелипоиска\x20отображениеподсказки\x20отображениепредупрежденияприредактировании\x20отображениеразметкиполосырегулирования\x20отображениестраницформы\x20отображениетаблицы\x20отображениетекстазначениядиаграммыганта\x20отображениеуправленияобычнойгруппы\x20отображениефигурыкнопки\x20палитрацветовдиаграммы\x20поведениеобычнойгруппы\x20поддержкамасштабадендрограммы\x20поддержкамасштабадиаграммыганта\x20поддержкамасштабасводнойдиаграммы\x20поисквтаблицепривводе\x20положениезаголовкаэлементафо
рмы\x20положениекартинкикнопкиформы\x20положениекартинкиэлементаграфическойсхемы\x20положениекоманднойпанелиформы\x20положениекоманднойпанелиэлементаформы\x20положениеопорнойточкиотрисовки\x20положениеподписейкдиаграмме\x20положениеподписейшкалызначенийизмерительнойдиаграммы\x20положениесостоянияпросмотра\x20положениестрокипоиска\x20положениетекстасоединительнойлинии\x20положениеуправленияпоиском\x20положениешкалывремени\x20порядокотображенияточекгоризонтальнойгистограммы\x20порядоксерийвлегендедиаграммы\x20размеркартинки\x20расположениезаголовкашкалыдиаграммы\x20растягиваниеповертикалидиаграммыганта\x20режимавтоотображениясостояния\x20режимвводастроктаблицы\x20режимвыборанезаполненного\x20режимвыделениядаты\x20режимвыделениястрокитаблицы\x20режимвыделениятаблицы\x20режимизмененияразмера\x20режимизменениясвязанногозначения\x20режимиспользованиядиалогапечати\x20режимиспользованияпараметракоманды\x20режиммасштабированияпросмотра\x20режимосновногоокнаклиентскогоприложения\x20режимоткрытияокнаформы\x20режимотображениявыделения\x20режимотображениягеографическойсхемы\x20режимотображениязначенийсерии\x20режимотрисовкисеткиграфическойсхемы\x20режимполупрозрачностидиаграммы\x20режимпробеловдиаграммы\x20режимразмещениянастранице\x20режимредактированияколонки\x20режимсглаживаниядиаграммы\x20режимсглаживанияиндикатора\x20режимсписказадач\x20сквозноевыравнивание\x20сохранениеданныхформывнастройках\x20способзаполнениятекстазаголовкашкалыдиаграммы\x20способопределенияограничивающегозначениядиаграммы\x20стандартнаягруппакоманд\x20стандартноеоформление\x20статусоповещенияпользователя\x20стильстрелки\x20типаппроксимациилиниитрендадиаграммы\x20типдиаграммы\x20типединицышкалывремени\x20типимпортасерийслоягеографическойсхемы\x20типлиниигеографическойсхемы\x20типлиниидиаграммы\x20типмаркерагеографическойсхемы\x20типмаркерадиаграммы\x20типобластиоформления\x20типорганизацииисточникаданныхгеографическойсхемы\x20типотображениясериислоягеографическойсхемы\x20типотображенияточечногообъектагео
графическойсхемы\x20типотображенияшкалыэлементалегендыгеографическойсхемы\x20типпоискаобъектовгеографическойсхемы\x20типпроекциигеографическойсхемы\x20типразмещенияизмерений\x20типразмещенияреквизитовизмерений\x20типрамкиэлементауправления\x20типсводнойдиаграммы\x20типсвязидиаграммыганта\x20типсоединениязначенийпосериямдиаграммы\x20типсоединенияточекдиаграммы\x20типсоединительнойлинии\x20типстороныэлементаграфическойсхемы\x20типформыотчета\x20типшкалырадарнойдиаграммы\x20факторлиниитрендадиаграммы\x20фигуракнопки\x20фигурыграфическойсхемы\x20фиксациявтаблице\x20форматдняшкалывремени\x20форматкартинки\x20ширинаподчиненныхэлементовформы\x20виддвижениябухгалтерии\x20виддвижениянакопления\x20видпериодарегистрарасчета\x20видсчета\x20видточкимаршрутабизнеспроцесса\x20использованиеагрегатарегистранакопления\x20использованиегруппиэлементов\x20использованиережимапроведения\x20использованиесреза\x20периодичностьагрегатарегистранакопления\x20режимавтовремя\x20режимзаписидокумента\x20режимпроведениядокумента\x20авторегистрацияизменений\x20допустимыйномерсообщения\x20отправкаэлементаданных\x20получениеэлементаданных\x20использованиерасшифровкитабличногодокумента\x20ориентациястраницы\x20положениеитоговколоноксводнойтаблицы\x20положениеитоговстроксводнойтаблицы\x20положениетекстаотносительнокартинки\x20расположениезаголовкагруппировкитабличногодокумента\x20способчтениязначенийтабличногодокумента\x20типдвустороннейпечати\x20типзаполненияобластитабличногодокумента\x20типкурсоровтабличногодокумента\x20типлиниирисункатабличногодокумента\x20типлинииячейкитабличногодокумента\x20типнаправленияпереходатабличногодокумента\x20типотображениявыделениятабличногодокумента\x20типотображениялинийсводнойтаблицы\x20типразмещениятекстатабличногодокумента\x20типрисункатабличногодокумента\x20типсмещениятабличногодокумента\x20типузоратабличногодокумента\x20типфайлатабличногодокумента\x20точностьпечати\x20чередованиерасположениястраниц\x20отображениевремениэлементовпланировщика\x20типфайлаформатированн
огодокумента\x20обходрезультатазапроса\x20типзаписизапроса\x20видзаполнениярасшифровкипостроителяотчета\x20типдобавленияпредставлений\x20типизмеренияпостроителяотчета\x20типразмещенияитогов\x20доступкфайлу\x20режимдиалогавыборафайла\x20режимоткрытияфайла\x20типизмеренияпостроителязапроса\x20видданныханализа\x20методкластеризации\x20типединицыинтервалавременианализаданных\x20типзаполнениятаблицырезультатаанализаданных\x20типиспользованиячисловыхзначенийанализаданных\x20типисточникаданныхпоискаассоциаций\x20типколонкианализаданныхдереворешений\x20типколонкианализаданныхкластеризация\x20типколонкианализаданныхобщаястатистика\x20типколонкианализаданныхпоискассоциаций\x20типколонкианализаданныхпоискпоследовательностей\x20типколонкимоделипрогноза\x20типмерырасстоянияанализаданных\x20типотсеченияправилассоциации\x20типполяанализаданных\x20типстандартизациианализаданных\x20типупорядочиванияправилассоциациианализаданных\x20типупорядочиванияшаблоновпоследовательностейанализаданных\x20типупрощениядереварешений\x20wsнаправлениепараметра\x20вариантxpathxs\x20вариантзаписидатыjson\x20вариантпростоготипаxs\x20видгруппымоделиxs\x20видфасетаxdto\x20действиепостроителяdom\x20завершенностьпростоготипаxs\x20завершенностьсоставноготипаxs\x20завершенностьсхемыxs\x20запрещенныеподстановкиxs\x20исключениягруппподстановкиxs\x20категорияиспользованияатрибутаxs\x20категорияограниченияидентичностиxs\x20категорияограниченияпространствименxs\x20методнаследованияxs\x20модельсодержимогоxs\x20назначениетипаxml\x20недопустимыеподстановкиxs\x20обработкапробельныхсимволовxs\x20обработкасодержимогоxs\x20ограничениезначенияxs\x20параметрыотбораузловdom\x20переносстрокjson\x20позициявдокументеdom\x20пробельныесимволыxml\x20типатрибутаxml\x20типзначенияjson\x20типканоническогоxml\x20типкомпонентыxs\x20типпроверкиxml\x20типрезультатаdomxpath\x20типузлаdom\x20типузлаxml\x20формаxml\x20формапредставленияxs\x20форматдатыjson\x20экранированиесимволовjson\x20видсравнениякомпоновкиданных\x20действиеобработкирасш
ифровкикомпоновкиданных\x20направлениесортировкикомпоновкиданных\x20расположениевложенныхэлементоврезультатакомпоновкиданных\x20расположениеитоговкомпоновкиданных\x20расположениегруппировкикомпоновкиданных\x20расположениеполейгруппировкикомпоновкиданных\x20расположениеполякомпоновкиданных\x20расположениереквизитовкомпоновкиданных\x20расположениересурсовкомпоновкиданных\x20типбухгалтерскогоостаткакомпоновкиданных\x20типвыводатекстакомпоновкиданных\x20типгруппировкикомпоновкиданных\x20типгруппыэлементовотборакомпоновкиданных\x20типдополненияпериодакомпоновкиданных\x20типзаголовкаполейкомпоновкиданных\x20типмакетагруппировкикомпоновкиданных\x20типмакетаобластикомпоновкиданных\x20типостаткакомпоновкиданных\x20типпериодакомпоновкиданных\x20типразмещениятекстакомпоновкиданных\x20типсвязинаборовданныхкомпоновкиданных\x20типэлементарезультатакомпоновкиданных\x20расположениелегендыдиаграммыкомпоновкиданных\x20типпримененияотборакомпоновкиданных\x20режимотображенияэлементанастройкикомпоновкиданных\x20режимотображениянастроеккомпоновкиданных\x20состояниеэлементанастройкикомпоновкиданных\x20способвосстановлениянастроеккомпоновкиданных\x20режимкомпоновкирезультата\x20использованиепараметракомпоновкиданных\x20автопозицияресурсовкомпоновкиданных\x20вариантиспользованиягруппировкикомпоновкиданных\x20расположениересурсоввдиаграммекомпоновкиданных\x20фиксациякомпоновкиданных\x20использованиеусловногооформлениякомпоновкиданных\x20важностьинтернетпочтовогосообщения\x20обработкатекстаинтернетпочтовогосообщения\x20способкодированияинтернетпочтовоговложения\x20способкодированиянеasciiсимволовинтернетпочтовогосообщения\x20типтекстапочтовогосообщения\x20протоколинтернетпочты\x20статусразборапочтовогосообщения\x20режимтранзакциизаписижурналарегистрации\x20статустранзакциизаписижурналарегистрации\x20уровеньжурналарегистрации\x20расположениехранилищасертификатовкриптографии\x20режимвключениясертификатовкриптографии\x20режимпроверкисертификатакриптографии\x20типхранилищасертификатовкриптографии
\x20кодировкаименфайловвzipфайле\x20методсжатияzip\x20методшифрованияzip\x20режимвосстановленияпутейфайловzip\x20режимобработкиподкаталоговzip\x20режимсохраненияпутейzip\x20уровеньсжатияzip\x20звуковоеоповещение\x20направлениепереходакстроке\x20позициявпотоке\x20порядокбайтов\x20режимблокировкиданных\x20режимуправленияблокировкойданных\x20сервисвстроенныхпокупок\x20состояниефоновогозадания\x20типподписчикадоставляемыхуведомлений\x20уровеньиспользованиязащищенногосоединенияftp\x20направлениепорядкасхемызапроса\x20типдополненияпериодамисхемызапроса\x20типконтрольнойточкисхемызапроса\x20типобъединениясхемызапроса\x20типпараметрадоступнойтаблицысхемызапроса\x20типсоединениясхемызапроса\x20httpметод\x20автоиспользованиеобщегореквизита\x20автопрефиксномеразадачи\x20вариантвстроенногоязыка\x20видиерархии\x20видрегистранакопления\x20видтаблицывнешнегоисточникаданных\x20записьдвиженийприпроведении\x20заполнениепоследовательностей\x20индексирование\x20использованиебазыпланавидоврасчета\x20использованиебыстроговыбора\x20использованиеобщегореквизита\x20использованиеподчинения\x20использованиеполнотекстовогопоиска\x20использованиеразделяемыхданныхобщегореквизита\x20использованиереквизита\x20назначениеиспользованияприложения\x20назначениерасширенияконфигурации\x20направлениепередачи\x20обновлениепредопределенныхданных\x20оперативноепроведение\x20основноепредставлениевидарасчета\x20основноепредставлениевидахарактеристики\x20основноепредставлениезадачи\x20основноепредставлениепланаобмена\x20основноепредставлениесправочника\x20основноепредставлениесчета\x20перемещениеграницыприпроведении\x20периодичностьномерабизнеспроцесса\x20периодичностьномерадокумента\x20периодичностьрегистрарасчета\x20периодичностьрегистрасведений\x20повторноеиспользованиевозвращаемыхзначений\x20полнотекстовыйпоискпривводепостроке\x20принадлежностьобъекта\x20проведение\x20разделениеаутентификацииобщегореквизита\x20разделениеданныхобщегореквизита\x20разделениерасширенийконфигурацииобщегореквизита\x20режимавтонум
ерацииобъектов\x20режимзаписирегистра\x20режимиспользованиямодальности\x20режимиспользованиясинхронныхвызововрасширенийплатформыивнешнихкомпонент\x20режимповторногоиспользованиясеансов\x20режимполученияданныхвыборапривводепостроке\x20режимсовместимости\x20режимсовместимостиинтерфейса\x20режимуправленияблокировкойданныхпоумолчанию\x20сериикодовпланавидовхарактеристик\x20сериикодовпланасчетов\x20сериикодовсправочника\x20созданиепривводе\x20способвыбора\x20способпоискастрокипривводепостроке\x20способредактирования\x20типданныхтаблицывнешнегоисточникаданных\x20типкодапланавидоврасчета\x20типкодасправочника\x20типмакета\x20типномерабизнеспроцесса\x20типномерадокумента\x20типномеразадачи\x20типформы\x20удалениедвижений\x20важностьпроблемыприменениярасширенияконфигурации\x20вариантинтерфейсаклиентскогоприложения\x20вариантмасштабаформклиентскогоприложения\x20вариантосновногошрифтаклиентскогоприложения\x20вариантстандартногопериода\x20вариантстандартнойдатыначала\x20видграницы\x20видкартинки\x20видотображенияполнотекстовогопоиска\x20видрамки\x20видсравнения\x20видцвета\x20видчисловогозначения\x20видшрифта\x20допустимаядлина\x20допустимыйзнак\x20использованиеbyteordermark\x20использованиеметаданныхполнотекстовогопоиска\x20источникрасширенийконфигурации\x20клавиша\x20кодвозвратадиалога\x20кодировкаxbase\x20кодировкатекста\x20направлениепоиска\x20направлениесортировки\x20обновлениепредопределенныхданных\x20обновлениеприизмененииданных\x20отображениепанелиразделов\x20проверказаполнения\x20режимдиалогавопрос\x20режимзапускаклиентскогоприложения\x20режимокругления\x20режимоткрытияформприложения\x20режимполнотекстовогопоиска\x20скоростьклиентскогосоединения\x20состояниевнешнегоисточникаданных\x20состояниеобновленияконфигурациибазыданных\x20способвыборасертификатаwindows\x20способкодированиястроки\x20статуссообщения\x20типвнешнейкомпоненты\x20типплатформы\x20типповеденияклавишиenter\x20типэлементаинформацииовыполненииобновленияконфигурациибазыданных\x20уровеньизоляциитранзакций\x20
хешфункция\x20частидаты','Nop','GenerateDerivedKey','ArrayFlatten','url_open_ext','forceMap','GoochShading','Continue','FortranForm','Transpose','TopHatTransform','UnionedEntityClass','AbstractChannel','Stdev','StringTakeDrop','get3DENEntity','AbelianGroup','POISSON.DIST','keepPunct','(#Modal|i|they|we|do)\x20not?\x20[like]','peace-and-flowers','list','NORMDIST','CellMargins','OverlayBoxOptions','SymmetricDifference','fjord','Dodecahedron','Show','a-is-one','TEXTJOIN','hylang','hyperref','Λ','enumerate','⋻','echoes','erb','GraphDistanceMatrix','#FirstName\x20(bin|al)\x20#Noun','(#Comparative|#Superlative)','gss','log10','⩚','filter','hypot','⩮','on:begin','HEXCOLOR','.\x20#LastName','vtransform','LOWER_A','setPlayerRespawnTime','RequireAdmin','c_int_least64_t','text-align-last','background','numfmt','EstimatedPointNormals','typeOf','EdgeRenderingFunction','Shell\x20Session','currency_name','upto','WaveletPsi','DirVar','(#PresentTense|#PastTense)\x20[will\x20be]$','eprintfn','getMusicPlayedTime','CapitalDifferentialD','hwy','𝕕','FindHiddenMarkovStates','bnot','string_height','common','toHTML','FilesystemIterator','[A-Za-z$_](?:-[0-9A-Za-z$_]|[0-9A-Za-z$_])*','Ltac','esfun','animation','SurvivalDistribution','cpl','^even\x20(if|though)','ShowAutoSpellCheck','a-pound','ShellRegion','vk_numpad7','RegionConvert','br_if','FeatureExtractor','clonglong','pre-wrap','ExitDialog','nexttime','NS_HANDLER','IF\x20DO\x20WHILE\x20ENDWHILE\x20CALL\x20ENDIF\x20SUB\x20ENDSUB\x20GOTO\x20REPEAT\x20ENDREPEAT\x20EQ\x20LT\x20GT\x20NE\x20GE\x20LE\x20OR\x20XOR','minute','⪊','UNICHAR','draw_light_define_ambient','entertainment','AirportData','Plain\x20text','flush','draw_rectangle','c_gray','-undef','iso_c_binding',' 
','TriangulateMesh','alert','menuEnabled','array_resize','border-image','simplify_eq','(still|even|just)','GraphQ','getCruiseControl','program_directory','function','arguments','engineOn','HEXDIG','RiccatiSolve','setHitPointDamage','(eastern|mountain|pacific|central|atlantic)\x20(standard|daylight|summer)?\x20time','sha1_file','EntityPropertyClass','FontTracking','ǵ','(?:new|renew|provide)?command','RealSign','research','@hasContraction','TickLabelPositioning','╣','macro_rules!','(had|have|#PastTense)\x20#Adjective\x20[#PresentTense]','BarcodeImage','getCustomSoundController','DeleteWithContents','Arduino','fa_directory','first-child','⌜','MeshRefinementFunction','tcl_startOfNextWord','deleteVehicleCrew','matrix_build_projection_ortho','view_set_xport','LPRINT','getTerrainHeightASL','startLoadingScreen','audio_set_master_gain','HornerForm','≦̸','notA','DoubleBracketingBar','setPilotCameraRotation','some-kind','STDEVPA','ConicOptimization','WeekDay','RegionUnion','CircularArcThrough','font_get_bold','GraphPropertyDistribution','RankedMin','BeckmannDistribution','\x5cb(void|bool|int8|int16|int32|int64|int|uint8|uint16|uint32|uint64|uint|string|ref|array|double|float|auto|dictionary)','does','date_valid_datetime','addToRemainsCollector','FilterRules','Densify','mouse_lastbutton','setDefaultCamera','Dilation','setWaypointSpeed','PlotRegion','das','SystemModelSimulateSensitivity','ShowContents','part_system_position','permanent','festive','⩲','rannor','KendallTau','vk_numpad2','$MessageList','CosIntegral','NotPrecedesEqual','nav-up','ConnectedGraphComponents','FrontEndDynamicExpression','flatten','NotebookDelete','draw_sprite_part','true¦elle,h3i2me,she,th0us,we,you;e0ou;e,m,y;!l,t;e,im','AstroAngularSeparation','steam_ugc_set_item_preview','TravelMethod','breakOut','ControllerInformation','TableViewBoxItemStyle','validate','general','ļ','clone','char_length','nav-left','digitalRead','steam_file_write','UnitConvert','endprotoent','\x5c+\x5c/','$async$and$array','doFollow
','DateReduction','DynamicWrapperBox','keyboard_virtual_hide','DiagonalMatrixQ','ShapiroWilkTest','Second','Reap','3am','ChoiceDialog','#PhrasalVerb\x20#Particle\x20#Preposition\x20[#PresentTense]','DivisionByZeroError','resolve','StippleShading','late','ᵁ<Õıʊҝջאٵ۞ޢߖࠏ੊ઑඡ๭༉༦჊ረዡᐕᒝᓃᓟᔥ\x00\x00\x00\x00\x00\x00ᕫᛍᦍᰒᷝ὾⁠↰⊍⏀⏻⑂⠤⤒ⴈ⹈⿎〖㊺㘹㞬㣾㨨㩱㫠㬮ࠀEMabcfglmnoprstu\x5cbfms\x7f\u0084\u008b\u0090\u0095\u0098¦³¹ÈÏlig耻Æ䃆P耻&䀦cute耻Á䃁reve;䄂Āiyx}rc耻Â䃂;䐐r;쀀𝔄rave耻À䃀pha;䎑acr;䄀d;橓Āgp\u009d¡on;䄄f;쀀𝔸plyFunction;恡ing耻Å䃅Ācs¾Ãr;쀀𝒜ign;扔ilde耻Ã䃃ml耻Ä䃄ЀaceforsuåûþėĜĢħĪĀcrêòkslash;或Ŷöø;櫧ed;挆y;䐑ƀcrtąċĔause;戵noullis;愬a;䎒r;쀀𝔅pf;쀀𝔹eve;䋘còēmpeq;扎܀HOacdefhilorsuōőŖƀƞƢƵƷƺǜȕɳɸɾcy;䐧PY耻©䂩ƀcpyŝŢźute;䄆Ā;iŧŨ拒talDifferentialD;慅leys;愭ȀaeioƉƎƔƘron;䄌dil耻Ç䃇rc;䄈nint;戰ot;䄊ĀdnƧƭilla;䂸terDot;䂷òſi;䎧rcleȀDMPTLJNjǑǖot;抙inus;抖lus;投imes;抗oĀcsǢǸkwiseContourIntegral;戲eCurlyĀDQȃȏoubleQuote;思uote;怙ȀlnpuȞȨɇɕonĀ;eȥȦ户;橴ƀgitȯȶȺruent;扡nt;戯ourIntegral;戮ĀfrɌɎ;愂oduct;成nterClockwiseContourIntegral;戳oss;樯cr;쀀𝒞pĀ;Cʄʅ拓ap;才րDJSZacefiosʠʬʰʴʸˋ˗ˡ˦̳ҍĀ;oŹʥtrahd;椑cy;䐂cy;䐅cy;䐏ƀgrsʿ˄ˇger;怡r;憡hv;櫤Āayː˕ron;䄎;䐔lĀ;t˝˞戇a;䎔r;쀀𝔇Āaf˫̧Ācm˰̢riticalȀADGT̖̜̀̆cute;䂴oŴ̋̍;䋙bleAcute;䋝rave;䁠ilde;䋜ond;拄ferentialD;慆Ѱ̽\x00\x00\x00͔͂\x00Ѕf;쀀𝔻ƀ;DE͈͉͍䂨ot;惜qual;扐blèCDLRUVͣͲ΂ϏϢϸontourIntegraìȹoɴ͹\x00\x00ͻ»͉nArrow;懓Āeo·ΤftƀARTΐΖΡrrow;懐ightArrow;懔eåˊngĀLRΫτeftĀARγιrrow;柸ightArrow;柺ightArrow;柹ightĀATϘϞrrow;懒ee;抨pɁϩ\x00\x00ϯrrow;懑ownArrow;懕erticalBar;戥ǹABLRTaВЪаўѿͼrrowƀ;BUНОТ憓ar;椓pArrow;懵reve;䌑eft˒к\x00ц\x00ѐightVector;楐eeVector;楞ectorĀ;Bљњ憽ar;楖ightǔѧ\x00ѱeeVector;楟ectorĀ;BѺѻ懁ar;楗eeĀ;A҆҇护rrow;憧ĀctҒҗr;쀀𝒟rok;䄐ࠀNTacdfglmopqstuxҽӀӄӋӞӢӧӮӵԡԯԶՒ՝ՠեG;䅊H耻Ð䃐cute耻É䃉ƀaiyӒӗӜron;䄚rc耻Ê䃊;䐭ot;䄖r;쀀𝔈rave耻È䃈ement;戈ĀapӺӾcr;䄒tyɓԆ\x00\x00ԒmallSquare;旻erySmallSquare;斫ĀgpԦԪon;䄘f;쀀𝔼silon;䎕uĀaiԼՉlĀ;TՂՃ橵ilde;扂librium;懌Āci՗՚r;愰m;橳a;䎗ml耻Ë䃋Āipժկsts;戃onentialE;慇ʀcfiosօֈ֍ֲ׌y;䐤r;쀀𝔉lledɓ֗\x00\x00֣mallSquare;旼erySmallSquare;斪Ͱֺ\x00ֿ\x00\x00ׄf;쀀𝔽All;戀riertrf;愱cò׋؀JTabcdfgorstר׬ׯ׺؀ؒؖ؛؝أ٬ٲcy;䐃耻>䀾mmaĀ;d׷׸䎓;䏜reve;䄞ƀeiy؇،ؐdil;䄢rc;䄜;䐓ot;䄠r;쀀𝔊;拙pf;쀀𝔾eater̀EFGLSTصلَٖٛ٦qualĀ;Lؾؿ扥ess;招ullEqual;执reater;檢ess;扷lantEqual;橾ilde;扳cr;쀀𝒢;扫Ѐ
AacfiosuڅڋږڛڞڪھۊRDcy;䐪Āctڐڔek;䋇;䁞irc;䄤r;愌lbertSpace;愋ǰگ\x00ڲf;愍izontalLine;攀Āctۃۅòکrok;䄦mpńېۘownHumðįqual;扏܀EJOacdfgmnostuۺ۾܃܇܎ܚܞܡܨ݄ݸދޏޕcy;䐕lig;䄲cy;䐁cute耻Í䃍Āiyܓܘrc耻Î䃎;䐘ot;䄰r;愑rave耻Ì䃌ƀ;apܠܯܿĀcgܴܷr;䄪inaryI;慈lieóϝǴ݉\x00ݢĀ;eݍݎ戬Āgrݓݘral;戫section;拂isibleĀCTݬݲomma;恣imes;恢ƀgptݿރވon;䄮f;쀀𝕀a;䎙cr;愐ilde;䄨ǫޚ\x00ޞcy;䐆l耻Ï䃏ʀcfosuެ޷޼߂ߐĀiyޱ޵rc;䄴;䐙r;쀀𝔍pf;쀀𝕁ǣ߇\x00ߌr;쀀𝒥rcy;䐈kcy;䐄΀HJacfosߤߨ߽߬߱ࠂࠈcy;䐥cy;䐌ppa;䎚Āey߶߻dil;䄶;䐚r;쀀𝔎pf;쀀𝕂cr;쀀𝒦րJTaceflmostࠥࠩࠬࡐࡣ঳সে্਷ੇcy;䐉耻<䀼ʀcmnpr࠷࠼ࡁࡄࡍute;䄹bda;䎛g;柪lacetrf;愒r;憞ƀaeyࡗ࡜ࡡron;䄽dil;䄻;䐛Āfsࡨ॰tԀACDFRTUVarࡾࢩࢱࣦ࣠ࣼयज़ΐ४Ānrࢃ࢏gleBracket;柨rowƀ;BR࢙࢚࢞憐ar;懤ightArrow;懆eiling;挈oǵࢷ\x00ࣃbleBracket;柦nǔࣈ\x00࣒eeVector;楡ectorĀ;Bࣛࣜ懃ar;楙loor;挊ightĀAV࣯ࣵrrow;憔ector;楎Āerँगeƀ;AVउऊऐ抣rrow;憤ector;楚iangleƀ;BEतथऩ抲ar;槏qual;抴pƀDTVषूौownVector;楑eeVector;楠ectorĀ;Bॖॗ憿ar;楘ectorĀ;B॥०憼ar;楒ightáΜs̀EFGLSTॾঋকঝঢভqualGreater;拚ullEqual;扦reater;扶ess;檡lantEqual;橽ilde;扲r;쀀𝔏Ā;eঽা拘ftarrow;懚idot;䄿ƀnpw৔ਖਛgȀLRlr৞৷ਂਐeftĀAR০৬rrow;柵ightArrow;柷ightArrow;柶eftĀarγਊightáοightáϊf;쀀𝕃erĀLRਢਬeftArrow;憙ightArrow;憘ƀchtਾੀੂòࡌ;憰rok;䅁;扪Ѐacefiosuਗ਼੝੠੷੼અઋ઎p;椅y;䐜Ādl੥੯iumSpace;恟lintrf;愳r;쀀𝔐nusPlus;戓pf;쀀𝕄cò੶;䎜ҀJacefostuણધભીଔଙඑ඗ඞcy;䐊cute;䅃ƀaey઴હાron;䅇dil;䅅;䐝ƀgswે૰଎ativeƀMTV૓૟૨ediumSpace;怋hiĀcn૦૘ë૙eryThiî૙tedĀGL૸ଆreaterGreateòٳessLesóੈLine;䀊r;쀀𝔑ȀBnptଢନଷ଺reak;恠BreakingSpace;䂠f;愕ڀ;CDEGHLNPRSTV୕ୖ୪୼஡௫ఄ౞಄ದ೘ൡඅ櫬Āou୛୤ngruent;扢pCap;扭oubleVerticalBar;戦ƀlqxஃஊ஛ement;戉ualĀ;Tஒஓ扠ilde;쀀≂̸ists;戄reater΀;EFGLSTஶஷ஽௉௓௘௥扯qual;扱ullEqual;쀀≧̸reater;쀀≫̸ess;批lantEqual;쀀⩾̸ilde;扵umpń௲௽ownHump;쀀≎̸qual;쀀≏̸eĀfsఊధtTriangleƀ;BEచఛడ拪ar;쀀⧏̸qual;括s̀;EGLSTవశ఼ౄోౘ扮qual;扰reater;扸ess;쀀≪̸lantEqual;쀀⩽̸ilde;扴estedĀGL౨౹reaterGreater;쀀⪢̸essLess;쀀⪡̸recedesƀ;ESಒಓಛ技qual;쀀⪯̸lantEqual;拠ĀeiಫಹverseElement;戌ghtTriangleƀ;BEೋೌ೒拫ar;쀀⧐̸qual;拭ĀquೝഌuareSuĀbp೨೹setĀ;E೰ೳ쀀⊏̸qual;拢ersetĀ;Eഃആ쀀⊐̸qual;拣ƀbcpഓതൎsetĀ;Eഛഞ쀀⊂⃒qual;抈ceedsȀ;ESTലള഻െ抁qual;쀀⪰̸lantEqual;拡ilde;쀀≿̸ersetĀ;E൘൛쀀⊃⃒qual;抉ildeȀ;EFT൮൯൵ൿ扁qual;扄ullEqual;扇ilde;扉erticalBar;戤cr;쀀𝒩ilde耻Ñ䃑;䎝܀Eacdfgmoprstuvලෂ෉෕ෛ෠෧෼ขภยา฿ไlig;䅒cute耻Ó䃓Āiy෎ීrc耻Ô䃔;䐞blac;䅐r;쀀𝔒rave耻Ò䃒ƀaei෮ෲ෶cr;䅌ga;䎩cron;䎟pf;쀀𝕆enCurlyĀDQฎบoubleQuote;怜uote;怘;橔Āclวฬr;쀀𝒪ash耻Ø䃘iŬื฼de耻Õ䃕es;樷ml
耻Ö䃖erĀBP๋๠Āar๐๓r;怾acĀek๚๜;揞et;掴arenthesis;揜Ҁacfhilors๿ງຊຏຒດຝະ໼rtialD;戂y;䐟r;쀀𝔓i;䎦;䎠usMinus;䂱Āipຢອncareplanåڝf;愙Ȁ;eio຺ູ໠໤檻cedesȀ;EST່້໏໚扺qual;檯lantEqual;扼ilde;找me;怳Ādp໩໮uct;戏ortionĀ;aȥ໹l;戝Āci༁༆r;쀀𝒫;䎨ȀUfos༑༖༛༟OT耻\x22䀢r;쀀𝔔pf;愚cr;쀀𝒬؀BEacefhiorsu༾གྷཇའཱིྦྷྪྭ႖ႩႴႾarr;椐G耻®䂮ƀcnrཎནབute;䅔g;柫rĀ;tཛྷཝ憠l;椖ƀaeyཧཬཱron;䅘dil;䅖;䐠Ā;vླྀཹ愜erseĀEUྂྙĀlq྇ྎement;戋uilibrium;懋pEquilibrium;楯r»ཹo;䎡ghtЀACDFTUVa࿁࿫࿳ဢဨၛႇϘĀnr࿆࿒gleBracket;柩rowƀ;BL࿜࿝࿡憒ar;懥eftArrow;懄eiling;按oǵ࿹\x00စbleBracket;柧nǔည\x00နeeVector;楝ectorĀ;Bဝသ懂ar;楕loor;挋Āerိ၃eƀ;AVဵံြ抢rrow;憦ector;楛iangleƀ;BEၐၑၕ抳ar;槐qual;抵pƀDTVၣၮၸownVector;楏eeVector;楜ectorĀ;Bႂႃ憾ar;楔ectorĀ;B႑႒懀ar;楓Āpuႛ႞f;愝ndImplies;楰ightarrow;懛ĀchႹႼr;愛;憱leDelayed;槴ڀHOacfhimoqstuფჱჷჽᄙᄞᅑᅖᅡᅧᆵᆻᆿĀCcჩხHcy;䐩y;䐨FTcy;䐬cute;䅚ʀ;aeiyᄈᄉᄎᄓᄗ檼ron;䅠dil;䅞rc;䅜;䐡r;쀀𝔖ortȀDLRUᄪᄴᄾᅉownArrow»ОeftArrow»࢚ightArrow»࿝pArrow;憑gma;䎣allCircle;战pf;쀀𝕊ɲᅭ\x00\x00ᅰt;戚areȀ;ISUᅻᅼᆉᆯ斡ntersection;抓uĀbpᆏᆞsetĀ;Eᆗᆘ抏qual;抑ersetĀ;Eᆨᆩ抐qual;抒nion;抔cr;쀀𝒮ar;拆ȀbcmpᇈᇛሉላĀ;sᇍᇎ拐etĀ;Eᇍᇕqual;抆ĀchᇠህeedsȀ;ESTᇭᇮᇴᇿ扻qual;檰lantEqual;扽ilde;承Tháྌ;我ƀ;esሒሓሣ拑rsetĀ;Eሜም抃qual;抇et»ሓրHRSacfhiorsሾቄ቉ቕ቞ቱቶኟዂወዑORN耻Þ䃞ADE;愢ĀHc቎ቒcy;䐋y;䐦Ābuቚቜ;䀉;䎤ƀaeyብቪቯron;䅤dil;䅢;䐢r;쀀𝔗Āeiቻ኉Dzኀ\x00ኇefore;戴a;䎘Ācn኎ኘkSpace;쀀\u205f\u200aSpace;怉ldeȀ;EFTካኬኲኼ戼qual;扃ullEqual;扅ilde;扈pf;쀀𝕋ipleDot;惛Āctዖዛr;쀀𝒯rok;䅦ૡዷጎጚጦ\x00ጬጱ\x00\x00\x00\x00\x00ጸጽ፷ᎅ\x00᏿ᐄᐊᐐĀcrዻጁute耻Ú䃚rĀ;oጇገ憟cir;楉rǣጓ\x00጖y;䐎ve;䅬Āiyጞጣrc耻Û䃛;䐣blac;䅰r;쀀𝔘rave耻Ù䃙acr;䅪Ādiፁ፩erĀBPፈ፝Āarፍፐr;䁟acĀekፗፙ;揟et;掵arenthesis;揝onĀ;P፰፱拃lus;抎Āgp፻፿on;䅲f;쀀𝕌ЀADETadps᎕ᎮᎸᏄϨᏒᏗᏳrrowƀ;BDᅐᎠᎤar;椒ownArrow;懅ownArrow;憕quilibrium;楮eeĀ;AᏋᏌ报rrow;憥ownáϳerĀLRᏞᏨeftArrow;憖ightArrow;憗iĀ;lᏹᏺ䏒on;䎥ing;䅮cr;쀀𝒰ilde;䅨ml耻Ü䃜ҀDbcdefosvᐧᐬᐰᐳᐾᒅᒊᒐᒖash;披ar;櫫y;䐒ashĀ;lᐻᐼ抩;櫦Āerᑃᑅ;拁ƀbtyᑌᑐᑺar;怖Ā;iᑏᑕcalȀBLSTᑡᑥᑪᑴar;戣ine;䁼eparator;杘ilde;所ThinSpace;怊r;쀀𝔙pf;쀀𝕍cr;쀀𝒱dash;抪ʀcefosᒧᒬᒱᒶᒼirc;䅴dge;拀r;쀀𝔚pf;쀀𝕎cr;쀀𝒲Ȁfiosᓋᓐᓒᓘr;쀀𝔛;䎞pf;쀀𝕏cr;쀀𝒳ҀAIUacfosuᓱᓵᓹᓽᔄᔏᔔᔚᔠcy;䐯cy;䐇cy;䐮cute耻Ý䃝Āiyᔉᔍrc;䅶;䐫r;쀀𝔜pf;쀀𝕐cr;쀀𝒴ml;䅸ЀHacdefosᔵᔹᔿᕋᕏᕝᕠᕤcy;䐖cute;䅹Āayᕄᕉron;䅽;䐗ot;䅻Dzᕔ\x00ᕛoWidtè૙a;䎖r;愨pf;愤cr;쀀𝒵௡ᖃᖊᖐ\x00ᖰᖶᖿ\x00\x00\x00\x00ᗆᗛᗫᙟ᙭\x00ᚕ᚛ᚲᚹ\x00ᚾcute耻á䃡reve;䄃̀;Ediuyᖜᖝᖡᖣᖨᖭ戾;쀀∾̳;房rc耻â䃢te肻´̆;䐰lig耻æ䃦Ā;r²ᖺ;쀀𝔞rave耻à䃠ĀepᗊᗖĀfpᗏᗔsym;愵èᗓha;
䎱ĀapᗟcĀclᗤᗧr;䄁g;樿ɤᗰ\x00\x00ᘊʀ;adsvᗺᗻᗿᘁᘇ戧nd;橕;橜lope;橘;橚΀;elmrszᘘᘙᘛᘞᘿᙏᙙ戠;榤e»ᘙsdĀ;aᘥᘦ戡ѡᘰᘲᘴᘶᘸᘺᘼᘾ;榨;榩;榪;榫;榬;榭;榮;榯tĀ;vᙅᙆ戟bĀ;dᙌᙍ抾;榝Āptᙔᙗh;戢»¹arr;捼Āgpᙣᙧon;䄅f;쀀𝕒΀;Eaeiop዁ᙻᙽᚂᚄᚇᚊ;橰cir;橯;扊d;手s;䀧roxĀ;e዁ᚒñᚃing耻å䃥ƀctyᚡᚦᚨr;쀀𝒶;䀪mpĀ;e዁ᚯñʈilde耻ã䃣ml耻ä䃤Āciᛂᛈoninôɲnt;樑ࠀNabcdefiklnoprsu᛭ᛱᜰ᜼ᝃᝈ᝸᝽០៦ᠹᡐᜍ᤽᥈ᥰot;櫭Ācrᛶ᜞kȀcepsᜀᜅᜍᜓong;扌psilon;䏶rime;怵imĀ;e᜚᜛戽q;拍Ŷᜢᜦee;抽edĀ;gᜬᜭ挅e»ᜭrkĀ;t፜᜷brk;掶Āoyᜁᝁ;䐱quo;怞ʀcmprtᝓ᝛ᝡᝤᝨausĀ;eĊĉptyv;榰séᜌnoõēƀahwᝯ᝱ᝳ;䎲;愶een;扬r;쀀𝔟g΀costuvwឍឝឳេ៕៛៞ƀaiuបពរðݠrc;旯p»፱ƀdptឤឨឭot;樀lus;樁imes;樂ɱឹ\x00\x00ើcup;樆ar;昅riangleĀdu៍្own;施p;斳plus;樄eåᑄåᒭarow;植ƀako៭ᠦᠵĀcn៲ᠣkƀlst៺֫᠂ozenge;槫riangleȀ;dlr᠒᠓᠘᠝斴own;斾eft;旂ight;斸k;搣Ʊᠫ\x00ᠳƲᠯ\x00ᠱ;斒;斑4;斓ck;斈ĀeoᠾᡍĀ;qᡃᡆ쀀=⃥uiv;쀀≡⃥t;挐Ȁptwxᡙᡞᡧᡬf;쀀𝕓Ā;tᏋᡣom»Ꮜtie;拈؀DHUVbdhmptuvᢅᢖᢪᢻᣗᣛᣬ᣿ᤅᤊᤐᤡȀLRlrᢎᢐᢒᢔ;敗;敔;敖;敓ʀ;DUduᢡᢢᢤᢦᢨ敐;敦;敩;敤;敧ȀLRlrᢳᢵᢷᢹ;敝;敚;敜;教΀;HLRhlrᣊᣋᣍᣏᣑᣓᣕ救;敬;散;敠;敫;敢;敟ox;槉ȀLRlrᣤᣦᣨᣪ;敕;敒;攐;攌ʀ;DUduڽ᣷᣹᣻᣽;敥;敨;攬;攴inus;抟lus;択imes;抠ȀLRlrᤙᤛᤝ᤟;敛;敘;攘;攔΀;HLRhlrᤰᤱᤳᤵᤷ᤻᤹攂;敪;敡;敞;攼;攤;攜Āevģ᥂bar耻¦䂦Ȁceioᥑᥖᥚᥠr;쀀𝒷mi;恏mĀ;e᜚᜜lƀ;bhᥨᥩᥫ䁜;槅sub;柈Ŭᥴ᥾lĀ;e᥹᥺怢t»᥺pƀ;Eeįᦅᦇ;檮Ā;qۜۛೡᦧ\x00᧨ᨑᨕᨲ\x00ᨷᩐ\x00\x00᪴\x00\x00᫁\x00\x00ᬡᬮ᭍᭒\x00᯽\x00ᰌƀcpr᦭ᦲ᧝ute;䄇̀;abcdsᦿᧀᧄ᧊᧕᧙戩nd;橄rcup;橉Āau᧏᧒p;橋p;橇ot;橀;쀀∩︀Āeo᧢᧥t;恁îړȀaeiu᧰᧻ᨁᨅǰ᧵\x00᧸s;橍on;䄍dil耻ç䃧rc;䄉psĀ;sᨌᨍ橌m;橐ot;䄋ƀdmnᨛᨠᨦil肻¸ƭptyv;榲t脀¢;eᨭᨮ䂢räƲr;쀀𝔠ƀceiᨽᩀᩍy;䑇ckĀ;mᩇᩈ朓ark»ᩈ;䏇r΀;Ecefms᩟᩠ᩢᩫ᪤᪪᪮旋;槃ƀ;elᩩᩪᩭ䋆q;扗eɡᩴ\x00\x00᪈rrowĀlr᩼᪁eft;憺ight;憻ʀRSacd᪒᪔᪖᪚᪟»ཇ;擈st;抛irc;抚ash;抝nint;樐id;櫯cir;槂ubsĀ;u᪻᪼晣it»᪼ˬ᫇᫔᫺\x00ᬊonĀ;eᫍᫎ䀺Ā;qÇÆɭ᫙\x00\x00᫢aĀ;t᫞᫟䀬;䁀ƀ;fl᫨᫩᫫戁îᅠeĀmx᫱᫶ent»᫩eóɍǧ᫾\x00ᬇĀ;dኻᬂot;橭nôɆƀfryᬐᬔᬗ;쀀𝕔oäɔ脀©;sŕᬝr;愗Āaoᬥᬩrr;憵ss;朗Ācuᬲᬷr;쀀𝒸Ābpᬼ᭄Ā;eᭁᭂ櫏;櫑Ā;eᭉᭊ櫐;櫒dot;拯΀delprvw᭠᭬᭷ᮂᮬᯔ᯹arrĀlr᭨᭪;椸;椵ɰ᭲\x00\x00᭵r;拞c;拟arrĀ;p᭿ᮀ憶;椽̀;bcdosᮏᮐᮖᮡᮥᮨ截rcap;橈Āauᮛᮞp;橆p;橊ot;抍r;橅;쀀∪︀Ȁalrv᮵ᮿᯞᯣrrĀ;mᮼᮽ憷;椼yƀevwᯇᯔᯘqɰᯎ\x00\x00ᯒreã᭳uã᭵ee;拎edge;拏en耻¤䂤earrowĀlrᯮ᯳eft»ᮀight»ᮽeäᯝĀciᰁᰇoninôǷnt;戱lcty;挭ঀAHabcdefhijlorstuwz᰸᰻᰿ᱝᱩᱵᲊᲞᲬᲷ᳻᳿ᴍᵻᶑᶫᶻ᷆᷍rò΁ar;楥Ȁglrs᱈ᱍ᱒᱔ger;怠eth;愸òᄳhĀ;vᱚᱛ怐»ऊūᱡᱧarow;椏aã̕Āayᱮᱳron;䄏;䐴ƀ;ao̲ᱼᲄĀgrʿᲁr;懊tseq;橷ƀglmᲑᲔᲘ耻°䂰ta;䎴ptyv;榱ĀirᲣᲨsht;楿;쀀𝔡arĀlrᲳᲵ»ࣜ»သʀaegsv᳂͸᳖᳜᳠mƀ;oș᳊᳔ndĀ;ș᳑uit;晦amma;䏝in;拲ƀ;io᳧᳨᳸䃷de脀÷;o᳧ᳰntimes;拇nø᳷cy;䑒cɯᴆ\x00\x00ᴊrn;挞op;挍ʀlptuwᴘᴝᴢᵉᵕlar;䀤f;쀀𝕕ʀ;emps̋ᴭᴷᴽᵂqĀ;d͒ᴳot;扑inus;戸lus;戔quare;抡blebarwedgåúnƀadhᄮᵝᵧo
wnarrowóᲃarpoonĀlrᵲᵶefôᲴighôᲶŢᵿᶅkaro÷གɯᶊ\x00\x00ᶎrn;挟op;挌ƀcotᶘᶣᶦĀryᶝᶡ;쀀𝒹;䑕l;槶rok;䄑Ādrᶰᶴot;拱iĀ;fᶺ᠖斿Āah᷀᷃ròЩaòྦangle;榦Āci᷒ᷕy;䑟grarr;柿ऀDacdefglmnopqrstuxḁḉḙḸոḼṉṡṾấắẽỡἪἷὄ὎὚ĀDoḆᴴoôᲉĀcsḎḔute耻é䃩ter;橮ȀaioyḢḧḱḶron;䄛rĀ;cḭḮ扖耻ê䃪lon;払;䑍ot;䄗ĀDrṁṅot;扒;쀀𝔢ƀ;rsṐṑṗ檚ave耻è䃨Ā;dṜṝ檖ot;檘Ȁ;ilsṪṫṲṴ檙nters;揧;愓Ā;dṹṺ檕ot;檗ƀapsẅẉẗcr;䄓tyƀ;svẒẓẕ戅et»ẓpĀ1;ẝẤijạả;怄;怅怃ĀgsẪẬ;䅋p;怂ĀgpẴẸon;䄙f;쀀𝕖ƀalsỄỎỒrĀ;sỊị拕l;槣us;橱iƀ;lvỚớở䎵on»ớ;䏵ȀcsuvỪỳἋἣĀioữḱrc»Ḯɩỹ\x00\x00ỻíՈantĀglἂἆtr»ṝess»Ṻƀaeiἒ἖Ἒls;䀽st;扟vĀ;DȵἠD;橸parsl;槥ĀDaἯἳot;打rr;楱ƀcdiἾὁỸr;愯oô͒ĀahὉὋ;䎷耻ð䃰Āmrὓὗl耻ë䃫o;悬ƀcipὡὤὧl;䀡sôծĀeoὬὴctatioîՙnentialåչৡᾒ\x00ᾞ\x00ᾡᾧ\x00\x00ῆῌ\x00ΐ\x00ῦῪ\u2000\x00\u2008⁚llingdotseñṄy;䑄male;晀ƀilrᾭᾳ῁lig;耀ffiɩᾹ\x00\x00᾽g;耀ffig;耀ffl;쀀𝔣lig;耀filig;쀀fjƀaltῙ῜ῡt;晭ig;耀flns;斱of;䆒ǰ΅\x00ῳf;쀀𝕗ĀakֿῷĀ;vῼ´拔;櫙artint;樍Āao‌⁕Ācs‑⁒ႉ‸⁅⁈\x00⁐β•‥‧‪‬\x00‮耻½䂽;慓耻¼䂼;慕;慙;慛Ƴ‴\x00‶;慔;慖ʴ‾⁁\x00\x00⁃耻¾䂾;慗;慜5;慘ƶ⁌\x00⁎;慚;慝8;慞l;恄wn;挢cr;쀀𝒻ࢀEabcdefgijlnorstv₂₉₟₥₰₴⃰⃵⃺⃿℃ℒℸ̗ℾ⅒↞Ā;lٍ₇;檌ƀcmpₐₕ₝ute;䇵maĀ;dₜ᳚䎳;檆reve;䄟Āiy₪₮rc;䄝;䐳ot;䄡Ȁ;lqsؾق₽⃉ƀ;qsؾٌ⃄lanô٥Ȁ;cdl٥⃒⃥⃕c;檩otĀ;o⃜⃝檀Ā;l⃢⃣檂;檄Ā;e⃪⃭쀀⋛︀s;檔r;쀀𝔤Ā;gٳ؛mel;愷cy;䑓Ȁ;Eajٚℌℎℐ;檒;檥;檤ȀEaesℛℝ℩ℴ;扩pĀ;p℣ℤ檊rox»ℤĀ;q℮ℯ檈Ā;q℮ℛim;拧pf;쀀𝕘Āci⅃ⅆr;愊mƀ;el٫ⅎ⅐;檎;檐茀>;cdlqr׮ⅠⅪⅮⅳⅹĀciⅥⅧ;檧r;橺ot;拗Par;榕uest;橼ʀadelsↄⅪ←ٖ↛ǰ↉\x00↎proø₞r;楸qĀlqؿ↖lesó₈ií٫Āen↣↭rtneqq;쀀≩︀Å↪ԀAabcefkosy⇄⇇⇱⇵⇺∘∝∯≨≽ròΠȀilmr⇐⇔⇗⇛rsðᒄf»․ilôکĀdr⇠⇤cy;䑊ƀ;cwࣴ⇫⇯ir;楈;憭ar;意irc;䄥ƀalr∁∎∓rtsĀ;u∉∊晥it»∊lip;怦con;抹r;쀀𝔥sĀew∣∩arow;椥arow;椦ʀamopr∺∾≃≞≣rr;懿tht;戻kĀlr≉≓eftarrow;憩ightarrow;憪f;쀀𝕙bar;怕ƀclt≯≴≸r;쀀𝒽asè⇴rok;䄧Ābp⊂⊇ull;恃hen»ᱛૡ⊣\x00⊪\x00⊸⋅⋎\x00⋕⋳\x00\x00⋸⌢⍧⍢⍿\x00⎆⎪⎴cute耻í䃭ƀ;iyݱ⊰⊵rc耻î䃮;䐸Ācx⊼⊿y;䐵cl耻¡䂡ĀfrΟ⋉;쀀𝔦rave耻ì䃬Ȁ;inoܾ⋝⋩⋮Āin⋢⋦nt;樌t;戭fin;槜ta;愩lig;䄳ƀaop⋾⌚⌝ƀcgt⌅⌈⌗r;䄫ƀelpܟ⌏⌓inåގarôܠh;䄱f;抷ed;䆵ʀ;cfotӴ⌬⌱⌽⍁are;愅inĀ;t⌸⌹戞ie;槝doô⌙ʀ;celpݗ⍌⍐⍛⍡al;抺Āgr⍕⍙eróᕣã⍍arhk;樗rod;樼Ȁcgpt⍯⍲⍶⍻y;䑑on;䄯f;쀀𝕚a;䎹uest耻¿䂿Āci⎊⎏r;쀀𝒾nʀ;EdsvӴ⎛⎝⎡ӳ;拹ot;拵Ā;v⎦⎧拴;拳Ā;iݷ⎮lde;䄩ǫ⎸\x00⎼cy;䑖l耻ï䃯̀cfmosu⏌⏗⏜⏡⏧⏵Āiy⏑⏕rc;䄵;䐹r;쀀𝔧ath;䈷pf;쀀𝕛ǣ⏬\x00⏱r;쀀𝒿rcy;䑘kcy;䑔Ѐacfghjos␋␖␢␧␭␱␵␻ppaĀ;v␓␔䎺;䏰Āey␛␠dil;䄷;䐺r;쀀𝔨reen;䄸cy;䑅cy;䑜pf;쀀𝕜cr;쀀𝓀஀ABEHabcdefghjlmnoprstuv⑰⒁⒆⒍⒑┎┽╚▀♎♞♥♹♽⚚⚲⛘❝❨➋⟀⠁⠒ƀart⑷⑺⑼rò৆òΕail;椛arr;椎Ā;gঔ⒋;檋ar;楢ॣ⒥\x00⒪\x00⒱\x00\x00\x00\x00\x00⒵Ⓔ\x00ⓆⓈⓍ\x00⓹ute;䄺mptyv;榴raîࡌbda;䎻gƀ;dlࢎⓁ
Ⓝ;榑åࢎ;檅uo耻«䂫rЀ;bfhlpst࢙ⓞⓦⓩ⓫⓮⓱⓵Ā;f࢝ⓣs;椟s;椝ë≒p;憫l;椹im;楳l;憢ƀ;ae⓿─┄檫il;椙Ā;s┉┊檭;쀀⪭︀ƀabr┕┙┝rr;椌rk;杲Āak┢┬cĀek┨┪;䁻;䁛Āes┱┳;榋lĀdu┹┻;榏;榍Ȁaeuy╆╋╖╘ron;䄾Ādi═╔il;䄼ìࢰâ┩;䐻Ȁcqrs╣╦╭╽a;椶uoĀ;rนᝆĀdu╲╷har;楧shar;楋h;憲ʀ;fgqs▋▌উ◳◿扤tʀahlrt▘▤▷◂◨rrowĀ;t࢙□aé⓶arpoonĀdu▯▴own»њp»०eftarrows;懇ightƀahs◍◖◞rrowĀ;sࣴࢧarpoonó྘quigarro÷⇰hreetimes;拋ƀ;qs▋ও◺lanôবʀ;cdgsব☊☍☝☨c;檨otĀ;o☔☕橿Ā;r☚☛檁;檃Ā;e☢☥쀀⋚︀s;檓ʀadegs☳☹☽♉♋pproøⓆot;拖qĀgq♃♅ôউgtò⒌ôছiíলƀilr♕࣡♚sht;楼;쀀𝔩Ā;Eজ♣;檑š♩♶rĀdu▲♮Ā;l॥♳;楪lk;斄cy;䑙ʀ;achtੈ⚈⚋⚑⚖rò◁orneòᴈard;楫ri;旺Āio⚟⚤dot;䅀ustĀ;a⚬⚭掰che»⚭ȀEaes⚻⚽⛉⛔;扨pĀ;p⛃⛄檉rox»⛄Ā;q⛎⛏檇Ā;q⛎⚻im;拦Ѐabnoptwz⛩⛴⛷✚✯❁❇❐Ānr⛮⛱g;柬r;懽rëࣁgƀlmr⛿✍✔eftĀar০✇ightá৲apsto;柼ightá৽parrowĀlr✥✩efô⓭ight;憬ƀafl✶✹✽r;榅;쀀𝕝us;樭imes;樴š❋❏st;戗áፎƀ;ef❗❘᠀旊nge»❘arĀ;l❤❥䀨t;榓ʀachmt❳❶❼➅➇ròࢨorneòᶌarĀ;d྘➃;業;怎ri;抿̀achiqt➘➝ੀ➢➮➻quo;怹r;쀀𝓁mƀ;egল➪➬;檍;檏Ābu┪➳oĀ;rฟ➹;怚rok;䅂萀<;cdhilqrࠫ⟒☹⟜⟠⟥⟪⟰Āci⟗⟙;檦r;橹reå◲mes;拉arr;楶uest;橻ĀPi⟵⟹ar;榖ƀ;ef⠀भ᠛旃rĀdu⠇⠍shar;楊har;楦Āen⠗⠡rtneqq;쀀≨︀Å⠞܀Dacdefhilnopsu⡀⡅⢂⢎⢓⢠⢥⢨⣚⣢⣤ઃ⣳⤂Dot;戺Ȁclpr⡎⡒⡣⡽r耻¯䂯Āet⡗⡙;時Ā;e⡞⡟朠se»⡟Ā;sျ⡨toȀ;dluျ⡳⡷⡻owîҌefôएðᏑker;斮Āoy⢇⢌mma;権;䐼ash;怔asuredangle»ᘦr;쀀𝔪o;愧ƀcdn⢯⢴⣉ro耻µ䂵Ȁ;acdᑤ⢽⣀⣄sôᚧir;櫰ot肻·Ƶusƀ;bd⣒ᤃ⣓戒Ā;uᴼ⣘;横ţ⣞⣡p;櫛ò−ðઁĀdp⣩⣮els;抧f;쀀𝕞Āct⣸⣽r;쀀𝓂pos»ᖝƀ;lm⤉⤊⤍䎼timap;抸ఀGLRVabcdefghijlmoprstuvw⥂⥓⥾⦉⦘⧚⧩⨕⨚⩘⩝⪃⪕⪤⪨⬄⬇⭄⭿⮮ⰴⱧⱼ⳩Āgt⥇⥋;쀀⋙̸Ā;v⥐௏쀀≫⃒ƀelt⥚⥲⥶ftĀar⥡⥧rrow;懍ightarrow;懎;쀀⋘̸Ā;v⥻ే쀀≪⃒ightarrow;懏ĀDd⦎⦓ash;抯ash;抮ʀbcnpt⦣⦧⦬⦱⧌la»˞ute;䅄g;쀀∠⃒ʀ;Eiop඄⦼⧀⧅⧈;쀀⩰̸d;쀀≋̸s;䅉roø඄urĀ;a⧓⧔普lĀ;s⧓ସdz⧟\x00⧣p肻\u00a0ଷmpĀ;e௹ఀʀaeouy⧴⧾⨃⨐⨓ǰ⧹\x00⧻;橃on;䅈dil;䅆ngĀ;dൾ⨊ot;쀀⩭̸p;橂;䐽ash;怓΀;Aadqsxஒ⨩⨭⨻⩁⩅⩐rr;懗rĀhr⨳⨶k;椤Ā;oᏲᏰot;쀀≐̸uiöୣĀei⩊⩎ar;椨í஘istĀ;s஠டr;쀀𝔫ȀEest௅⩦⩹⩼ƀ;qs஼⩭௡ƀ;qs஼௅⩴lanô௢ií௪Ā;rஶ⪁»ஷƀAap⪊⪍⪑rò⥱rr;憮ar;櫲ƀ;svྍ⪜ྌĀ;d⪡⪢拼;拺cy;䑚΀AEadest⪷⪺⪾⫂⫅⫶⫹rò⥦;쀀≦̸rr;憚r;急Ȁ;fqs఻⫎⫣⫯tĀar⫔⫙rro÷⫁ightarro÷⪐ƀ;qs఻⪺⫪lanôౕĀ;sౕ⫴»శiíౝĀ;rవ⫾iĀ;eచథiäඐĀpt⬌⬑f;쀀𝕟膀¬;in⬙⬚⬶䂬nȀ;Edvஉ⬤⬨⬮;쀀⋹̸ot;쀀⋵̸ǡஉ⬳⬵;拷;拶iĀ;vಸ⬼ǡಸ⭁⭃;拾;拽ƀaor⭋⭣⭩rȀ;ast୻⭕⭚⭟lleì୻l;쀀⫽⃥;쀀∂̸lint;樔ƀ;ceಒ⭰⭳uåಥĀ;cಘ⭸Ā;eಒ⭽ñಘȀAait⮈⮋⮝⮧rò⦈rrƀ;cw⮔⮕⮙憛;쀀⤳̸;쀀↝̸ghtarrow»⮕riĀ;eೋೖ΀chimpqu⮽⯍⯙⬄୸⯤⯯Ȁ;cerല⯆ഷ⯉uå൅;쀀𝓃ortɭ⬅\x00\x00⯖ará⭖mĀ;e൮⯟Ā;q൴൳suĀbp⯫⯭å೸åഋƀbcp⯶ⰑⰙȀ;Ees⯿ⰀഢⰄ抄;쀀⫅̸etĀ;eഛⰋqĀ;qണⰀcĀ;eലⰗñസȀ;EesⰢⰣൟⰧ抅;쀀⫆̸etĀ;e൘ⰮqĀ;qൠⰣȀgilrⰽⰿⱅⱇìௗlde耻ñ䃱çృiangleĀlrⱒⱜeftĀ;eచⱚñదightĀ;eೋⱥñ೗Ā;mⱬⱭ䎽ƀ;esⱴⱵⱹ䀣ro;愖p;怇ҀDHadgilrsⲏ
ⲔⲙⲞⲣⲰⲶⳓⳣash;抭arr;椄p;쀀≍⃒ash;抬ĀetⲨⲬ;쀀≥⃒;쀀>⃒nfin;槞ƀAetⲽⳁⳅrr;椂;쀀≤⃒Ā;rⳊⳍ쀀<⃒ie;쀀⊴⃒ĀAtⳘⳜrr;椃rie;쀀⊵⃒im;쀀∼⃒ƀAan⳰⳴ⴂrr;懖rĀhr⳺⳽k;椣Ā;oᏧᏥear;椧ቓ᪕\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00ⴭ\x00ⴸⵈⵠⵥ⵲ⶄᬇ\x00\x00ⶍⶫ\x00ⷈⷎ\x00ⷜ⸙⸫⸾⹃Ācsⴱ᪗ute耻ó䃳ĀiyⴼⵅrĀ;c᪞ⵂ耻ô䃴;䐾ʀabios᪠ⵒⵗLjⵚlac;䅑v;樸old;榼lig;䅓Ācr⵩⵭ir;榿;쀀𝔬ͯ⵹\x00\x00⵼\x00ⶂn;䋛ave耻ò䃲;槁Ābmⶈ෴ar;榵Ȁacitⶕ⶘ⶥⶨrò᪀Āir⶝ⶠr;榾oss;榻nå๒;槀ƀaeiⶱⶵⶹcr;䅍ga;䏉ƀcdnⷀⷅǍron;䎿;榶pf;쀀𝕠ƀaelⷔ⷗ǒr;榷rp;榹΀;adiosvⷪⷫⷮ⸈⸍⸐⸖戨rò᪆Ȁ;efmⷷⷸ⸂⸅橝rĀ;oⷾⷿ愴f»ⷿ耻ª䂪耻º䂺gof;抶r;橖lope;橗;橛ƀclo⸟⸡⸧ò⸁ash耻ø䃸l;折iŬⸯ⸴de耻õ䃵esĀ;aǛ⸺s;樶ml耻ö䃶bar;挽ૡ⹞\x00⹽\x00⺀⺝\x00⺢⺹\x00\x00⻋ຜ\x00⼓\x00\x00⼫⾼\x00⿈rȀ;astЃ⹧⹲຅脀¶;l⹭⹮䂶leìЃɩ⹸\x00\x00⹻m;櫳;櫽y;䐿rʀcimpt⺋⺏⺓ᡥ⺗nt;䀥od;䀮il;怰enk;怱r;쀀𝔭ƀimo⺨⺰⺴Ā;v⺭⺮䏆;䏕maô੶ne;明ƀ;tv⺿⻀⻈䏀chfork»´;䏖Āau⻏⻟nĀck⻕⻝kĀ;h⇴⻛;愎ö⇴sҀ;abcdemst⻳⻴ᤈ⻹⻽⼄⼆⼊⼎䀫cir;樣ir;樢Āouᵀ⼂;樥;橲n肻±ຝim;樦wo;樧ƀipu⼙⼠⼥ntint;樕f;쀀𝕡nd耻£䂣Ԁ;Eaceinosu່⼿⽁⽄⽇⾁⾉⾒⽾⾶;檳p;檷uå໙Ā;c໎⽌̀;acens່⽙⽟⽦⽨⽾pproø⽃urlyeñ໙ñ໎ƀaes⽯⽶⽺pprox;檹qq;檵im;拨iíໟmeĀ;s⾈ຮ怲ƀEas⽸⾐⽺ð⽵ƀdfp໬⾙⾯ƀals⾠⾥⾪lar;挮ine;挒urf;挓Ā;t໻⾴ï໻rel;抰Āci⿀⿅r;쀀𝓅;䏈ncsp;怈̀fiopsu⿚⋢⿟⿥⿫⿱r;쀀𝔮pf;쀀𝕢rime;恗cr;쀀𝓆ƀaeo⿸〉〓tĀei⿾々rnionóڰnt;樖stĀ;e【】䀿ñἙô༔઀ABHabcdefhilmnoprstux぀けさすムㄎㄫㅇㅢㅲㆎ㈆㈕㈤㈩㉘㉮㉲㊐㊰㊷ƀartぇおがròႳòϝail;検aròᱥar;楤΀cdenqrtとふへみわゔヌĀeuねぱ;쀀∽̱te;䅕iãᅮmptyv;榳gȀ;del࿑らるろ;榒;榥å࿑uo耻»䂻rր;abcfhlpstw࿜ガクシスゼゾダッデナp;極Ā;f࿠ゴs;椠;椳s;椞ë≝ð✮l;楅im;楴l;憣;憝Āaiパフil;椚oĀ;nホボ戶aló༞ƀabrョリヮrò៥rk;杳ĀakンヽcĀekヹ・;䁽;䁝Āes㄂㄄;榌lĀduㄊㄌ;榎;榐Ȁaeuyㄗㄜㄧㄩron;䅙Ādiㄡㄥil;䅗ì࿲âヺ;䑀Ȁclqsㄴㄷㄽㅄa;椷dhar;楩uoĀ;rȎȍh;憳ƀacgㅎㅟངlȀ;ipsླྀㅘㅛႜnåႻarôྩt;断ƀilrㅩဣㅮsht;楽;쀀𝔯ĀaoㅷㆆrĀduㅽㅿ»ѻĀ;l႑ㆄ;楬Ā;vㆋㆌ䏁;䏱ƀgns㆕ㇹㇼht̀ahlrstㆤㆰ㇂㇘㇤㇮rrowĀ;t࿜ㆭaéトarpoonĀduㆻㆿowîㅾp»႒eftĀah㇊㇐rrowó࿪arpoonóՑightarrows;應quigarro÷ニhreetimes;拌g;䋚ingdotseñἲƀahm㈍㈐㈓rò࿪aòՑ;怏oustĀ;a㈞㈟掱che»㈟mid;櫮Ȁabpt㈲㈽㉀㉒Ānr㈷㈺g;柭r;懾rëဃƀafl㉇㉊㉎r;榆;쀀𝕣us;樮imes;樵Āap㉝㉧rĀ;g㉣㉤䀩t;榔olint;樒arò㇣Ȁachq㉻㊀Ⴜ㊅quo;怺r;쀀𝓇Ābu・㊊oĀ;rȔȓƀhir㊗㊛㊠reåㇸmes;拊iȀ;efl㊪ၙᠡ㊫方tri;槎luhar;楨;愞ൡ㋕㋛㋟㌬㌸㍱\x00㍺㎤\x00\x00㏬㏰\x00㐨㑈㑚㒭㒱㓊㓱\x00㘖\x00\x00㘳cute;䅛quï➺Ԁ;Eaceinpsyᇭ㋳㋵㋿㌂㌋㌏㌟㌦㌩;檴ǰ㋺\x00㋼;檸on;䅡uåᇾĀ;dᇳ㌇il;䅟rc;䅝ƀEas㌖㌘㌛;檶p;檺im;择olint;樓iíሄ;䑁otƀ;be㌴ᵇ㌵担;橦΀Aacmstx㍆㍊㍗㍛㍞㍣㍭rr;懘rĀhr㍐㍒ë∨Ā;oਸ਼਴t耻§䂧i;䀻war;椩mĀin㍩ðnuóñt;朶rĀ;o㍶⁕쀀𝔰Ȁacoy㎂㎆㎑㎠rp;景Āhy㎋㎏cy;䑉;䑈rtɭ㎙\x00\x00㎜iäᑤaraì⹯耻­䂭Āgm㎨㎴maƀ;fv㎱㎲㎲䏃;䏂Ѐ;deglnprካ㏅㏉㏎㏖㏞㏡㏦ot;橪Ā;q኱ኰĀ;E㏓㏔檞;檠Ā;E㏛㏜檝;檟e;扆l
us;樤arr;楲aròᄽȀaeit㏸㐈㐏㐗Āls㏽㐄lsetmé㍪hp;樳parsl;槤Ādlᑣ㐔e;挣Ā;e㐜㐝檪Ā;s㐢㐣檬;쀀⪬︀ƀflp㐮㐳㑂tcy;䑌Ā;b㐸㐹䀯Ā;a㐾㐿槄r;挿f;쀀𝕤aĀdr㑍ЂesĀ;u㑔㑕晠it»㑕ƀcsu㑠㑹㒟Āau㑥㑯pĀ;sᆈ㑫;쀀⊓︀pĀ;sᆴ㑵;쀀⊔︀uĀbp㑿㒏ƀ;esᆗᆜ㒆etĀ;eᆗ㒍ñᆝƀ;esᆨᆭ㒖etĀ;eᆨ㒝ñᆮƀ;afᅻ㒦ְrť㒫ֱ»ᅼaròᅈȀcemt㒹㒾㓂㓅r;쀀𝓈tmîñiì㐕aræᆾĀar㓎㓕rĀ;f㓔ឿ昆Āan㓚㓭ightĀep㓣㓪psiloîỠhé⺯s»⡒ʀbcmnp㓻㕞ሉ㖋㖎Ҁ;Edemnprs㔎㔏㔑㔕㔞㔣㔬㔱㔶抂;櫅ot;檽Ā;dᇚ㔚ot;櫃ult;櫁ĀEe㔨㔪;櫋;把lus;檿arr;楹ƀeiu㔽㕒㕕tƀ;en㔎㕅㕋qĀ;qᇚ㔏eqĀ;q㔫㔨m;櫇Ābp㕚㕜;櫕;櫓c̀;acensᇭ㕬㕲㕹㕻㌦pproø㋺urlyeñᇾñᇳƀaes㖂㖈㌛pproø㌚qñ㌗g;晪ڀ123;Edehlmnps㖩㖬㖯ሜ㖲㖴㗀㗉㗕㗚㗟㗨㗭耻¹䂹耻²䂲耻³䂳;櫆Āos㖹㖼t;檾ub;櫘Ā;dሢ㗅ot;櫄sĀou㗏㗒l;柉b;櫗arr;楻ult;櫂ĀEe㗤㗦;櫌;抋lus;櫀ƀeiu㗴㘉㘌tƀ;enሜ㗼㘂qĀ;qሢ㖲eqĀ;q㗧㗤m;櫈Ābp㘑㘓;櫔;櫖ƀAan㘜㘠㘭rr;懙rĀhr㘦㘨ë∮Ā;oਫ਩war;椪lig耻ß䃟௡㙑㙝㙠ዎ㙳㙹\x00㙾㛂\x00\x00\x00\x00\x00㛛㜃\x00㜉㝬\x00\x00\x00㞇ɲ㙖\x00\x00㙛get;挖;䏄rë๟ƀaey㙦㙫㙰ron;䅥dil;䅣;䑂lrec;挕r;쀀𝔱Ȁeiko㚆㚝㚵㚼Dz㚋\x00㚑eĀ4fኄኁaƀ;sv㚘㚙㚛䎸ym;䏑Ācn㚢㚲kĀas㚨㚮pproø዁im»ኬsðኞĀas㚺㚮ð዁rn耻þ䃾Ǭ̟㛆⋧es膀×;bd㛏㛐㛘䃗Ā;aᤏ㛕r;樱;樰ƀeps㛡㛣㜀á⩍Ȁ;bcf҆㛬㛰㛴ot;挶ir;櫱Ā;o㛹㛼쀀𝕥rk;櫚á㍢rime;怴ƀaip㜏㜒㝤dåቈ΀adempst㜡㝍㝀㝑㝗㝜㝟ngleʀ;dlqr㜰㜱㜶㝀㝂斵own»ᶻeftĀ;e⠀㜾ñम;扜ightĀ;e㊪㝋ñၚot;旬inus;樺lus;樹b;槍ime;樻ezium;揢ƀcht㝲㝽㞁Āry㝷㝻;쀀𝓉;䑆cy;䑛rok;䅧Āio㞋㞎xô᝷headĀlr㞗㞠eftarro÷ࡏightarrow»ཝऀAHabcdfghlmoprstuw㟐㟓㟗㟤㟰㟼㠎㠜㠣㠴㡑㡝㡫㢩㣌㣒㣪㣶ròϭar;楣Ācr㟜㟢ute耻ú䃺òᅐrǣ㟪\x00㟭y;䑞ve;䅭Āiy㟵㟺rc耻û䃻;䑃ƀabh㠃㠆㠋ròᎭlac;䅱aòᏃĀir㠓㠘sht;楾;쀀𝔲rave耻ù䃹š㠧㠱rĀlr㠬㠮»ॗ»ႃlk;斀Āct㠹㡍ɯ㠿\x00\x00㡊rnĀ;e㡅㡆挜r»㡆op;挏ri;旸Āal㡖㡚cr;䅫肻¨͉Āgp㡢㡦on;䅳f;쀀𝕦̀adhlsuᅋ㡸㡽፲㢑㢠ownáᎳarpoonĀlr㢈㢌efô㠭ighô㠯iƀ;hl㢙㢚㢜䏅»ᏺon»㢚parrows;懈ƀcit㢰㣄㣈ɯ㢶\x00\x00㣁rnĀ;e㢼㢽挝r»㢽op;挎ng;䅯ri;旹cr;쀀𝓊ƀdir㣙㣝㣢ot;拰lde;䅩iĀ;f㜰㣨»᠓Āam㣯㣲rò㢨l耻ü䃼angle;榧ހABDacdeflnoprsz㤜㤟㤩㤭㦵㦸㦽㧟㧤㧨㧳㧹㧽㨁㨠ròϷarĀ;v㤦㤧櫨;櫩asèϡĀnr㤲㤷grt;榜΀eknprst㓣㥆㥋㥒㥝㥤㦖appá␕othinçẖƀhir㓫⻈㥙opô⾵Ā;hᎷ㥢ïㆍĀiu㥩㥭gmá㎳Ābp㥲㦄setneqĀ;q㥽㦀쀀⊊︀;쀀⫋︀setneqĀ;q㦏㦒쀀⊋︀;쀀⫌︀Āhr㦛㦟etá㚜iangleĀlr㦪㦯eft»थight»ၑy;䐲ash»ံƀelr㧄㧒㧗ƀ;beⷪ㧋㧏ar;抻q;扚lip;拮Ābt㧜ᑨaòᑩr;쀀𝔳tré㦮suĀbp㧯㧱»ജ»൙pf;쀀𝕧roð໻tré㦴Ācu㨆㨋r;쀀𝓋Ābp㨐㨘nĀEe㦀㨖»㥾nĀEe㦒㨞»㦐igzag;榚΀cefoprs㨶㨻㩖㩛㩔㩡㩪irc;䅵Ādi㩀㩑Ābg㩅㩉ar;機eĀ;qᗺ㩏;扙erp;愘r;쀀𝔴pf;쀀𝕨Ā;eᑹ㩦atèᑹcr;쀀𝓌ૣណ㪇\x00㪋\x00㪐㪛\x00\x00㪝㪨㪫㪯\x00\x00㫃㫎\x00㫘ៜ៟tré៑r;쀀𝔵ĀAa㪔㪗ròσrò৶;䎾ĀAa㪡㪤ròθrò৫að✓is;拻ƀdptឤ㪵㪾Āfl㪺ឩ;쀀𝕩imåឲĀAa㫇㫊ròώròਁĀcq㫒ីr;쀀𝓍Āpt៖㫜ré។Ѐacefiosu㫰㫽㬈㬌㬑㬕㬛㬡cĀuy㫶㫻te耻ý䃽;䑏Āiy㬂㬆rc;䅷;䑋n耻¥䂥r;쀀𝔶cy;䑗pf;쀀𝕪cr;쀀𝓎Ācm㬦㬩y;䑎l耻ÿ䃿Ԁacdefhiosw㭂㭈㭔㭘㭤㭩㭭㭴㭺㮀cute;䅺Āay㭍㭒ron;䅾;䐷ot;䅼Āet㭝㭡træᕟa;䎶r;쀀𝔷cy;䐶grarr;懝pf;쀀𝕫cr
;쀀𝓏Ājn㮅㮇;怍j;怌','timer','RiemannSiegelTheta','BitShiftRight','f90','FILTER_ACCEPT','ef_cloud','steam_get_persona_name','PhrasalVerb\x20#Particle','getNth','DateOverlapsQ','institut','file_text_eof','COOKIES','nmiss','Accumulate','sngl','↕','blue','VertexList','unlimit','econstructor','ScreenRectangle','Codeunit','POST','FindList','Escape','$realtime','layer_sprite_get_xscale','object-position','BSAVE|10','⥑','DefaultColor','DefaultTextFormatType','phy_joint_speed','matrix_exp','UnitRange','GeoCircle','TableView','minn','mip_off','open-the','json_query','safeMode','⪏','UtilityFunction','matrix_power','ral','GeoLocation','isRealTime','model','⇔','ev_user15','onPlayerDisconnected','_infile_','c_long_long','CylindricalDecompositionFunction','Subscript','regr_slope','insert_recordset','wingsForcesRTD','activatedAddons','soundVolume','unlinkItem','switchGesture','safeZoneH','iap_status_uninitialised','following-sibling::','BackFaceColor','PreIncrement','bytearray','MeshCellMarker','CurrentNotebookImage','ColorDetect','q|0','DiscreteAsymptotic','RandomSeed','WordFrequencyData','≥⃒','audio_stop_all','[A-Za-z_][0-9A-Za-z_]*','CorrelationTest','getLoadedModsInfo','<[A-Za-z][A-Za-z0-9\x5c-]*(?:\x5cs+[a-zA-Z_:][a-zA-Z0-9:._-]*(?:\x5cs*=\x5cs*(?:[^\x22\x27=<>`\x5cx00-\x5cx20]+|\x27[^\x27]*\x27|\x22[^\x22]*\x22))?)*\x5cs*\x5c/?>','methods','or\x20not$','commit','StartOfString','schema_test','application_surface','UpTee','FARIMAProcess','#Copula\x20#Adverb+?\x20[#Adjective]','⌽','null','(#Modal|#PhrasalVerb)','phy_inertia','FilledCurve','draw_circle_color','FemaleName','auto_load','Int\x20Real\x20Char\x20Bool','steam_ugc_query_set_match_any_tag','𝒳','addItemPool','TypeDeclaration','DotEqual','SourcePDETerm','TrackSpeedAt','saveStatus','GraphicsGroup','Handlebars','⌋','GeoGraphValuePlot','SQL.REQUEST','steam_ugc_query_set_search_text','WriteString','$UserAgentString','cover','ServiceExecute','formatnumber','InteractiveTradingChart','PointLight','(?:[Ss]igma|varsigma|tau|[Uu]psilon|[
Pp]hi|varphi|chi|[Pp]si|[Oo]mega)','help\x20[(stop|end|make|start)]','NEGBINOMDIST','__index__','Show\x20answers','iostat','FindGeometricTransform','[a-z]+\x22','∩︀','bquote','tag','sep','interval','ERROR_PROG','createHashMapFromArray','servername','refine','ShortUpArrow','[a-zA-Z_][a-zA-Z0-9_.]*(!|\x5c?)?','Uint8Array','IMP','V2Get','Rule','clip','DownTeeArrow','true¦bEeBf5mEnine7one,s4t0zero;en,h2rDw0;e0o;lve,n5;irt6ousands,ree;even2ix2;i3o0;r1ur0;!t2;ty;ft0ve;e2y;ight0lev1;!e0y;en;illions','$PersistenceBase','ArgumentCountError','IncludeAromaticBonds','HiddenSurface','record','soft','InstType','esq','𝔼','RankedMax','ev_step_begin','⦔','united-sportsteam','template','csrf_token','³','setAttributes','isTrulyOpeningTag','part_emitter_region','hamlet','caption_lives','xs:short','FormBox','form_theme','LiveScript','split_Rabs','$fdisplayh','HIP','FileSystemTree',' ','isOnRoad','DataDistribution','delimiters','~~~','steam_create_leaderboard','$NewSymbol','NeighborhoodGraph','audio_get_listener_mask','𝕒','lnbColor','ColorProfileData','nebulae','Closing','\x5cb\x5cd+','Beta','$TracePostAction','Please\x20use\x20highlight(code,\x20options)\x20instead.\x0ahttps://github.com/highlightjs/highlight.js/issues/2277','NextCell','getMissionConfig','class\x20object\x20trait\x20type','⩭','val','Ζ','VertexComponent','љ','abstract\x20add\x20and\x20array\x20as\x20asc\x20aspect\x20assembly\x20async\x20begin\x20break\x20block\x20by\x20case\x20class\x20concat\x20const\x20copy\x20constructor\x20continue\x20create\x20default\x20delegate\x20desc\x20distinct\x20div\x20do\x20downto\x20dynamic\x20each\x20else\x20empty\x20end\x20ensure\x20enum\x20equals\x20event\x20except\x20exit\x20extension\x20external\x20false\x20final\x20finalize\x20finalizer\x20finally\x20flags\x20for\x20forward\x20from\x20function\x20future\x20global\x20group\x20has\x20if\x20implementation\x20implements\x20implies\x20in\x20index\x20inherited\x20inline\x20interface\x20into\x20invariants\x20is\x20iterator\x20join\x20locked\
x20locking\x20loop\x20matching\x20method\x20mod\x20module\x20namespace\x20nested\x20new\x20nil\x20not\x20notify\x20nullable\x20of\x20old\x20on\x20operator\x20or\x20order\x20out\x20override\x20parallel\x20params\x20partial\x20pinned\x20private\x20procedure\x20property\x20protected\x20public\x20queryable\x20raise\x20read\x20readonly\x20record\x20reintroduce\x20remove\x20repeat\x20require\x20result\x20reverse\x20sealed\x20select\x20self\x20sequence\x20set\x20shl\x20shr\x20skip\x20static\x20step\x20soft\x20take\x20then\x20to\x20true\x20try\x20tuple\x20type\x20union\x20unit\x20unsafe\x20until\x20uses\x20using\x20var\x20virtual\x20raises\x20volatile\x20where\x20while\x20with\x20write\x20xor\x20yield\x20await\x20mapped\x20deprecated\x20stdcall\x20cdecl\x20pascal\x20register\x20safecall\x20overload\x20library\x20platform\x20reference\x20packed\x20strict\x20published\x20autoreleasepool\x20selector\x20strong\x20weak\x20unretained','curatorSelected','RemoveChannelSubscribers','MissingDataMethod','can\x27t','turn-down','moveToCompleted','layer_background_get_vtiled','layer_background_blend','HTML,\x20XML','set3DENIconsVisible','disableUAVConnectability','NoncentralFRatioDistribution','ds_list_create','[=>\x27/<($\x22]','\x5cb[a-zA-Z][a-zA-Z0-9_-]*','ImagePerspectiveTransformation','xs:int','maxexponent','getMissionPath','х','part_system_create_layer','setObjectTexture','sis','object_get_visible','Deletable','LinkReadHeld','diag_dynamicSimulationEnd','NoTrayIcon','GeoIdentify','EccentricityCentrality','OutputResponse','clearBackpackCargoGlobal','uname','septillion','CountsBy','([O])([0-9]+)','CloudObjectURLType','findDisplay','found-it-gerund','DiagonalMatrix','Inequality','true¦0:AI;1:BS;2:BI;3:BA;4:A8;5:84;6:AV;7:AN;8:AF;9:7H;A:BQ;B:AY;C:BC;D:BH;E:9Y;aA2b9Ec8Fd7We79f6Ng6Eh61i4Xj4Wk4Tl4Im41n3Po36p2Oquart7Pr2Ds1Dt14uSvOwFye29;aMeKhIiHoF;man5oFrth7G;dADzy;despreB1n\x20w97s86;acked1UoleF;!sa6;ather1PeFll\x20o70ste1D;!k5;nt1Ist6Ate4;aHeGiFola5T;bBUce\x20versa,gi3Lle;ng67rsa5R;ca1gB
SluAV;lt0PnLpHrGsFttermoBL;ef9Ku3;b96ge1;\x20Hb32pGsFtiAH;ca6ide\x20d4R;er,i85;f52to\x20da2;a0Fbeco0Hc0Bd04e02f01gu1XheaBGiXkn4OmUnTopp06pRrNsJtHus0wF;aFiel3K;nt0rra0P;app0eXoF;ld,uS;eHi37o5ApGuF;perv06spec39;e1ok9O;en,ttl0;eFu5;cogn06gul2RlGqu84sF;erv0olv0;at0en33;aFrecede0E;id,rallel0;am0otic0;aFet;rri0tF;ch0;nFq26vers3;sur0terFv7U;eFrupt0;st0;air,inish0orese98;mploy0n7Ov97xpF;ect0lain0;eHisFocume01ue;clFput0;os0;cid0rF;!a8Scov9ha8Jlyi8nea8Gprivileg0sMwF;aFei9I;t9y;hGircumcFonvin2U;is0;aFeck0;lleng0rt0;b20ppea85ssuGttend0uthorF;iz0;mi8;i4Ara;aLeIhoHip\x2025oGrF;anspare1encha1i2;geth9leADp\x20notch,rpB;rny,ugh6H;ena8DmpGrFs6U;r49tia4;eCo8P;leFst4M;nt0;a0Dc09e07h06i04ki03l01mug,nobbi4XoVpRqueami4XtKuFymb94;bHccinAi\x20generis,pFr5;erFre7N;!\x20dup9b,vi70;du0li7Lp6IsFurb7J;eq9Atanda9X;aKeJi16o2QrGubboFy4Q;rn;aightFin5GungS;\x20fFfF;or7V;adfa9Pri6;lwa6Ftu82;arHeGir6NlendBot\x20Fry;on;c3Qe1S;k5se;\x20call0lImb9phistic16rHuFviV;ndFth1B;proof;dBry;dFub6;\x20o2A;e60ipF;pe4shod;ll0n\x20d7R;g2HnF;ceEg6ist9;am3Se9;co1Zem5lfFn6Are7;\x20suf4Xi43;aGholFient3A;ar5;rlFt4A;et;cr0me,tisfac7F;aOeIheumatoBiGoF;bu8Ztt7Gy3;ghtFv3;\x201Sf6X;cJdu8PlInown0pro69sGtF;ard0;is47oF;lu2na1;e1Suc45;alcit8Xe1ondi2;bBci3mpa1;aSePicayu7laOoNrGuF;bl7Tnjabi;eKiIoF;b7VfGmi49pFxi2M;er,ort81;a7uD;maFor,sti7va2;!ry;ciDexis0Ima2CpaB;in55puli8G;cBid;ac2Ynt\x203IrFti2;ma40tFv7W;!i3Z;i2YrFss7R;anoBtF;\x205XiF;al,s5V;bSffQkPld\x20OnMrLth9utKverF;!aIbMdHhGni75seas,t,wF;ei74rou74;a63e7A;ue;ll;do1Ger,si6A;d3Qg2Aotu5Z;\x20bFbFe\x20on\x20o7g3Uli7;oa80;fashion0school;!ay;\x20gua7XbFha5Uli7;eat;eHligGsF;ce7er0So1C;at0;diFse;a1e1;aOeNiMoGuF;anc0de;\x20moEnHrthFt6V;!eFwe7L;a7Krn;chaGdescri7Iprof30sF;top;la1;ght5;arby,cessa4ighbor5wlyw0xt;k0usiaFv3;ti8;aQeNiLoHuF;dIltiF;facet0p6;deHlGnFot,rbBst;ochro4Xth5;dy;rn,st;ddle\x20ag0nF;dbloZi,or;ag9diocEga,naGrFtropolit4Q;e,ry;ci8;cIgenta,inHj0Fkeshift,mmGnFri4Oscu61ver18;da5Dy;ali4Lo4U;!stream;abEho;aOeLiIoFumberi8;ngFuti1R;stan3RtF;erm,i4H;ghtGteraF;l,ry,te;heart0wei5O;ft\x2
0JgFss9th3;al,eFi0M;nda4;nguBps0te5;apGind5noF;wi8;ut;ad0itte4uniW;ce\x20co0Hgno6Mll0Cm04nHpso\x202UrF;a2releF;va1;\x20ZaYcoWdReQfOgrNhibi4Ri05nMoLsHtFvalu5M;aAeF;nDrdepe2K;a7iGolFuboI;ub6ve1;de,gF;nifica1;rdi5N;a2er;own;eriIiLluenVrF;ar0eq5H;pt,rt;eHiGoFul1O;or;e,reA;fiFpe26termi5E;ni2;mpFnsideCrreA;le2;ccuCdeq5Ene,ppr4J;fFsitu,vitro;ro1;mJpF;arHeGl15oFrop9;li2r11;n2LrfeA;ti3;aGeFi18;d4BnD;tuE;egGiF;c0YteC;al,iF;tiF;ma2;ld;aOelNiLoFuma7;a4meInHrrGsFur5;ti6;if4E;e58o3U;\x20ma3GsF;ick;ghfalut2HspF;an49;li00pf33;i4llow0ndGrdFtM;\x2005coEworki8;sy,y;aLener44iga3Blob3oKrGuF;il1Nng\x20ho;aFea1Fizzl0;cGtF;ef2Vis;ef2U;ld3Aod;iFuc2D;nf2R;aVeSiQlOoJrF;aGeFil5ug3;q43tf2O;gFnt3S;i6ra1;lk13oHrF;\x20keeps,eFge0Vm9tu41;g0Ei2Ds3R;liF;sh;ag4Mowe4uF;e1or45;e4nF;al,i2;d\x20Gmini7rF;ti6ve1;up;bl0lDmIr\x20Fst\x20pac0ux;oGreacF;hi8;ff;ed,ili0R;aXfVlTmQnOqu3rMthere3veryday,xF;aApIquisi2traHuF;be48lF;ta1;!va2L;edRlF;icF;it;eAstF;whi6;\x20Famor0ough,tiE;rou2sui2;erGiF;ne1;ge1;dFe2Aoq34;er5;ficF;ie1;g9sF;t,ygF;oi8;er;aWeMiHoGrFue;ea4owY;ci6mina1ne,r31ti8ubQ;dact2Jfficult,m,sGverF;ge1se;creGePjoi1paCtF;a1inA;et,te;\x20Nadp0WceMfiLgeneCliJmuEpeIreliAsGvoF;id,ut;pFtitu2ul1L;eCoF;nde1;ca2ghF;tf13;a1ni2;as0;facto;i5ngero0I;ar0Ce09h07i06l05oOrIuF;rmudgeon5stoma4teF;sy;ly;aIeHu1EystalF;\x20cleFli7;ar;epy;fFv17z0;ty;erUgTloSmPnGrpoCunterclVveFy;rt;cLdJgr21jIsHtrF;aFi2;dic0Yry;eq1Yta1;oi1ug3;escenFuN;di8;a1QeFiD;it0;atoDmensuCpF;ass1SulF;so4;ni3ss3;e1niza1;ci1J;ockwiD;rcumspeAvil;eFintzy;e4wy;leGrtaF;in;ba2;diac,ef00;a00ePiLliJoGrFuck\x20nak0;and\x20new,isk,on22;gGldface,naF;\x20fi05fi05;us;nd,tF;he;gGpartisFzarE;an;tiF;me;autifOhiNlLnHsFyoN;iWtselF;li8;eGiFt;gn;aFfi03;th;at0oF;v0w;nd;ul;ckwards,rF;e,rT;\x20priori,b13c0Zd0Tf0Ng0Ihe0Hl09mp6nt06pZrTsQttracti0MuLvIwF;aGkF;wa1B;ke,re;ant\x20garGeraF;ge;de;diIsteEtF;heFoimmu7;nt07;re;to4;hGlFtu2;eep;en;bitIchiv3roHtF;ifiFsy;ci3;ga1;ra4;ry;pFt;aHetizi8rF;oprF;ia2;llFre1;ed,i8;ng;iquFsy;at0e;ed;cohKiJkaHl,oGriFterX;ght;ne,of;li7;ne;ke,ve;olF;ic;ad;ain07gre
ssiIi6rF;eeF;ab6;le;ve;fGraB;id;ectGlF;ue1;ioF;na2;\x20JaIeGvF;erD;pt,qF;ua2;ma1;hoc,infinitum;cuCquiGtu3u2;al;esce1;ra2;erSjeAlPoNrKsGuF;nda1;e1olu2trF;aAuD;se;te;eaGuF;pt;st;aFve;rd;aFe;ze;ct;ra1;nt','setArmoryPoints','buffer_sizeof','weaponDirection','ö','color_get_value','may-be','vertex_format_add_normal','setWaypointFormation','FlushINI','SwatchLegend','$1ews','PartProtection','sub_col','loadStrings','diarySubjectExists','skipSpaces','CaputoD','ℵ','setTimeout','custom','_top','ctrlIDC','isCollapsed','src_path','zeros_array','iso-dash','nal','noSmooth','cacti','quick-plural','table','\x5cb(deque|list|queue|priority_queue|pair|stack|vector|map|set|bitset|multiset|multimap|unordered_map|unordered_set|unordered_multiset|unordered_multimap|array|tuple|optional|variant|function)\x5cs*<(?!<)','background-blend-mode','radioEnabled','AddOnHelpPath','revgoals','Uncountable','MATCH_NOTHING_RE','Subgraph','setHorizonParallaxCoef','sml','autoFill','ugc_query_FavoritedByFriendsRankedByPublicationDate','TransformationMatrix','chunk','setoid_replace','http://','^\x5cs*[=~]\x5cs*','toSingular','AcousticPDEComponent','∫','place_free','kbv_returnkey_send','GlobIterator','asm','carrot','nfc','setMarkerShadow','three','chgrp','KirchhoffMatrix','TFT','beginSpeaker','deleteMarkerLocal','vk_numpad4','START','$SubtitleEncoders','SquareIntersection','$LicenseProcesses','selectionVectorDirAndUp','OpenerView','⦝','Insphere','new\x20return\x20throw\x20await\x20else','liras','Coth','Decompose','FindVertexIndependentPaths','USING','BITAND','removeAllItemsWithMagazines','ResourceSystemBase','uhoh','Tuple','PolygonHoleScale','tilemap_get_tileset','_name_','layer_background_get_sprite','Text\x20copied\x20to\x20clipboard!','delegate','𝒥','vlabel','vertical-align','TextSentences','sensitive','rules2','true¦fri2mon2s1t0wednesd3;hurs1ues1;aturd1und1;!d0;ay0;!s','html_to_markdown','ev_joystick1_right','location','Clip','Ķ','$1men','lockedCargo','SokalSneathDissimilarity','must','LoopFreeGraphQ','={
0,1}%>','phy_joint_motor_force','fa_top','Annotation','CalendarData','NegativeMultinomialDistribution','empty','DensifyGeodetic','Exception','RobotControl','date_compare_time','Portal','ChannelBase','university\x20of\x20#Place','entries','\x5cb(0[xX][a-fA-F0-9_]+[Lln]?|0[oO][0-7_]+[Lln]?|0[bB][01_]+[Lln]?|[0-9][0-9_]*([Lln]|(\x5c.[0-9_]*)?([eE][-+]?[0-9_]+)?)?)','[id]','CSS','island','LensFlare','$fell_gclk','EllipticPi','array_push','buffer_load','icon','OverflowException','time-range','ForAllType','isSingular','(&[hH][0-9a-fA-F]{1,4})','KelvinKer','\x5cbforeign\x5cb','iterator','local','IntoIterator','copula-noun-lastname','InverseFourier','NestList','getEditorMode','$GeneratedAssetLocation','final\x20class\x20struct',':-\x5c|-->','terrainIntersect','#fileLiteral','L[^(;:\x0a]*;','ItemStyle','phy_debug_render_aabb','src_domain','GeodesicOpening','!','being-walked','(\x5c.','PageRankCentrality','⤞','neq','RegionDilation','SolidFixedCondition','path_get_point_x','copy','PRICEMAT','async','MixedRadixQuantity','HannPoissonWindow','$coverage_control','Nouns','$strobe','getShadowDistance','menuCollapse','RecursiveIterator','@[^@\x5cs]+','\x5cs+','displayAddEventHandler','Most','eat-my-shorts','@\x5c{','TogglerBar','browser_firefox','bsCount','∲','variable_global_set','keyb','IMCOT','angelscript','continue','VertexWeightedGraphQ','get_string','\x5c]|\x5c?>','variable_struct_get_names','til','NumberFieldClassNumber','MeshCellHighlight','CategoricalDistribution','chateaux','createDiaryLink','TemplateExpression','GeomagneticModelData','AcousticImpedanceValue','Triangle','ReflectionTransform','DatabinUpload','tcl_endOfWord','were-he','LegendLayout','physics_particle_set_radius','part_system_draw_order','remoteExecutedOwner','ForwardCloudCredentials','AtomDiagramCoordinates','jobs-that-work','drawLaser','ColumnAlignments','doStop','set_field','\x5cs*\x5c()','system_user','CaseSensitive','([fF]|L|i|[fF]i|Li)?|','this-month','Tidy_On','MedianDeviation','dts','thieves','NonNegat
iveIntegers','dirty','pushBackUnique','Time','pointcut\x20after\x20before\x20around\x20throwing\x20returning','TooBig','mask-border-outset','initcap','_Complex','SolidMechanicsStress','betainv','REPORT_DIGITAL','hljs','get_field','bezierVertex','reg','VerticalTilde','gpu_set_tex_mip_filter_ext','ReverseUpEquilibrium','turretLocal','AugmentedSymmetricPolynomial','push','MIDBs','Emoticon','RegionDistance','ctrlSetURL','cdsqrt','MemoryInUse','localhost|www\x5c.|\x5c.\x5cd{1,3}\x5c.|(?:\x5c.(?:%TLDS%)(?:','ds_grid_value_x','getobject','audio_mono','ISERR','col','PointProcessParameterQ','^---\x5cs*$','ϵ','LogicException','CanonicalWarpingCorrespondence','BesselI','$ConditionHold','setStatValue','terrainIntersectASL',':-\x5c','logsdf','macro','𝒴','lockedCameraTo','.\x20.\x20if\x20.{4}','local.set','#Determiner\x20#Adjective?\x20[(shed|thought|rose|bid|saw|spelt)]','trench','[a-zA-Z_]\x5cw*[!?=]?|[-+~]@|<<|>>|[=!]~|===?|<=>|[<>]=?|\x5c*\x5c*|[-/+%^&*~|]|//|//=|&[-+*]=?|&\x5c*\x5c*|\x5c[\x5c][=?]?','objcMembers','CoordinatesToolOptions','True\x20False','character_storage_size','TruncatedPolyhedron','isMultiplayer','preinitialization','diag_captureFrame','translate-x-full','Combined','DynamicModuleBoxOptions','adj-to','Notation','summarize-btn','XML','killUnicode','classList','park','src_Any','$1..','SmoothHistogram3D','alpha','groupFromNetId','DefineResourceFunction','gesture_get_flick_speed','buffer_peek','\x5cb0[oO][0-7](_?[0-7])*r?i?\x5cb','vk_return','(he|she|they|it|we)\x20is','Ratios','ERR:\x20contains\x20`self`\x20is\x20not\x20supported\x20at\x20the\x20top-level\x20of\x20a\x20language.\x20\x20See\x20documentation.','$SystemMemory','LineLegend','scoped','ilike','$VideoEncoders','≅','InitializationValue','(ocf|systemd|service|lsb):[\x5cw_:-]+','ifnan','SplMinHeap','(got|was|were)','voice-duration','BlockchainTransactionData','StringForm','HermiteH','LambertW','HoeffdingD','camPreparePos','SolarTime','$atan2','leadz','test','Trig','$assertvacuousoff','got','forceliteral
s','errorCode','AttachedCell','Minute','DGaussianWavelet','SplPriorityQueue','ă','goNext','look-what','≧','(^|[><|]|\x22|\x5c(|','\x5c(\x5cs*','$ProgressReporting','ISNONTEXT','dir','\x5c\x5cbegin(?=[\x20\x09]*(\x5cr?\x5cn[\x20\x09]*)?\x5c{','VBG','false\x20true','getMass','gpu_set_lightingenable','move_wrap','CholeskyDecomposition','lkj_corr','qasin','LinkFunction','Noun','ı','MellinTransform','PaulWavelet','PacletEnable','ev_middle_button','image','Therefore','ReplicateLayer','text-emphasis','current_transform_group_for_type','buffer_base64_decode','Annulus','local_data_key!','vestContainer','open','Ŧ','mapCenterOnCamera','will-adj','SectionSetInstTypes','forward','TODAY','iscntrl','ListContourPlot','diag_allMissionEventHandlers','lisp','UpSet','Existing','WindowTitle','2-punct-colon\x27\x27','power','\x5cb[A-Z][A-Za-z0-9_]*','isCardinal','GaugeLabels','𝒪','Cancel','gravity_direction','CDBL','#macro','\x5c*\x5c)','setLightFlareSize','DoubleDownArrow','HigmanSimsGroupHS','repeatElement','NotSquareSubset','sprite_set_cache_size_ext','ConvertDirection','FunctionRange','sha1sum','Expectation','Head','Rationals','SpheroidalQSPrime','nth-last-of-type','QuotientRemainder','MoleculeProperty','\x22|$','setTurretOpticsMode','hon','unassignVehicle','instance_position','Undefined','TrendStyle','|[@/\x5c[\x5c]()]).)+@)?','cuchar','success-notice','BattleLemarieWavelet','addPrimaryWeaponItem','ImageSizeMultipliers','larvae','HeunGPrime','getWingsOrientationRTD','SamplingPeriod','getForcedSpeed','(they|their)','recv','width','SuperStar','#Adverb\x20#Verb','displayWidth','mp_grid_clear_all','compileScript','⊊','render','AbstractDict','HypercubeGraph','wordWrap','matrices','CepstrogramArray','draw_flush','shownUAVFeed','ris','Congruent','WeierstrassHalfPeriodW3','new\x20throw\x20return\x20else','BigUint64Array','⋺','CloseKernels','%TLDS%','AudioLooping','hundred\x20sextillion','#Infinitive\x20#Pronoun\x20[like]','SystemInformationData','physics_particle_group_polygon','vertex_colo
r','DSolveChangeVariables','getUnitTrait','FontProperties','Dithering','telecommunications','NotSubset','have\x20been','ŕ','noglob','header','𝕄','⫯','#Determiner\x20[#Singular]\x20said','robotNameRead','TreeTraversalOrder','isWeaponDeployed','Error\x20in\x20getting\x20row:\x20','GeoGraphPlot','pulsestyle_onevent','GET','%EF%BF%BD','RANK.AVG','(\x5c.\x5c.\x5c.)','OverflowError','systems','xs:token','march','AstroProjection','KnotData','kdb','GeoElevationData','DisplayTemporary','[a-zA-Z_][a-zA-Z0-9\x5c._]*','nulls\x20first','office','getText','buffer_s8','animatePylon','phy_joint_max_length','forceFlagTexture','Dependent','clin','⟧','isServer','registerLanguage','bitRead','SpatialEstimate','CellFrameMargins','tenn','\x5c*/','Ù','(?:if|cs|exp):w','FLOOR.PRECISE','(\x5c([^()]*(\x5c([^()]*(\x5c([^()]*\x5c)[^()]*)*\x5c)[^()]*)*\x5c)|','ExpandNumerator','layer_tile_get_yscale','bg-red-100\x20border\x20border-red-400\x20text-red-700\x20px-4\x20py-3\x20rounded\x20relative','xlsx','(u8?|U)?R\x22','c_backspace','binomial','str_to_duration','\x20~+~\x20and\x20this\x20query:\x20~+~','postfix','active','$past_gclk','initializer','countFriendly','˙','TraceAbove','rectangle_in_rectangle','pkg::create','DynamicImage','È','afterWords','Vee','defined','toUnicode','getRotorBrakeRTD','als','CloudObjects','SpatialJ','joinAs','Snowfall','need','parameter_string','Commonest',')((-|/)[a-zA-Z0-9]+)*\x5cs','$sync$nand$plane','assignToAirport','LabelingFunction','endinterface','$1oes','HankelH1','ClusteringTree','file','validateNumericCharacterReference','ev_joystick2_button6','mdivide_left_tri_low','cross','include_once','temp','ProcessStateDomain','ams','saveStrings','gamma_q','INPUT_PULLUP','RIGHTB','skeleton_bone_state_get','physics_particle_group_get_ang_vel','\x5cb\x5cd+(\x5c.\x5cd+)?(e-?\x5cd+)?','::\x5cs*','HeunG','⟉','$random','∥','__compiled__','vmode','getservbyname','text-decoration-style','EdgeForm','^#Infinitive$','tShift','ParseError','fail','allControls','GammaDistribution','d
tan','endScope','endScope\x20must\x20be\x20object','import\x20as\x20exposing','defimpl\x20defmodule\x20defprotocol\x20defrecord','#up-chevron','focus-within','fb_login_use_system_account','abstract\x20from\x20to','pipe','fj','⩿','#Adverb+$','bezierInterpolation','degrees','be-walking','getAssignedCuratorUnit','enableSimulationGlobal','afterTags','#Gerund\x20#Actor','array_equals','SeriesCoefficient','NoRewindIterator','noTone','tornados','ExoplanetData','Byte','hooks','object_get_parent','QuantityVariable','DoubleContourIntegral','LengthGeodetic','sideUnknown','[A-Z_][A-Z0-9_.]*','BooleanMaxterms','[used\x20to]\x20#PresentTense','that-are','kbv_returnkey_continue','CUMPRINC','ellipseMode','atoll','getPilotCameraPosition','DefaultValue','steam_get_quota_total','WordStem','WeaklyConnectedComponents','comptry','⊻','modify','complete','ConnectionSettings','^[0-9]+>\x20','ButtonMinHeight','CSGRegionQ','AlphabeticSort','community','AutoQuoteCharacters','pauseMode','⌣','bessel_j1','TreeForm','LIST','RADIANS','date_day_span','clinic','beginTFT','URLDownloadSubmit','xs:ENTITY','syms','shouldnt','EulerCharacteristic','categorical','PrependTo','button','NeymanScottPointProcess','camera_get_view_width','Noun-&-Noun','DisplayFunction','TimeConstrained','WaveletPhi','sprite_prefetch_multi','cue','CloudDeploy','DefaultDuration','MIMETypeToFormatList','[A-Za-z$_][0-9A-Za-z$_]*','removeMPEventHandler','atmosphere','sci','IfShellVarContextAll','MapIndexed','write!','ChromaticityPlot3D','forceselectorder','PHRASAL_WORDS_MODE','onerror','StringDelete','dequote','#Verb\x20\x20[like]','magazinesAmmoFull','ev_joystick1_button1','DenseVecOrMat','__halt_compiler','analyses','unitCombatMode','openMap','memoranda','LeviCivitaTensor','RegionEmbeddingDimension','window','dynamicSimulationSystemEnabled','[wit]\x20(me|it)','LeastSquaresFilterKernel','child::','GraphData','scoped_lock','BetaBinomialDistribution','VBScript','moveOut',')|r)?i?\x5cb','Universes','Test','border-style','Optimize','#Tex
tValue\x20#NumericValue','comобъект\x20ftpсоединение\x20httpзапрос\x20httpсервисответ\x20httpсоединение\x20wsопределения\x20wsпрокси\x20xbase\x20анализданных\x20аннотацияxs\x20блокировкаданных\x20буфердвоичныхданных\x20включениеxs\x20выражениекомпоновкиданных\x20генераторслучайныхчисел\x20географическаясхема\x20географическиекоординаты\x20графическаясхема\x20группамоделиxs\x20данныерасшифровкикомпоновкиданных\x20двоичныеданные\x20дендрограмма\x20диаграмма\x20диаграммаганта\x20диалогвыборафайла\x20диалогвыборацвета\x20диалогвыборашрифта\x20диалограсписаниярегламентногозадания\x20диалогредактированиястандартногопериода\x20диапазон\x20документdom\x20документhtml\x20документацияxs\x20доставляемоеуведомление\x20записьdom\x20записьfastinfoset\x20записьhtml\x20записьjson\x20записьxml\x20записьzipфайла\x20записьданных\x20записьтекста\x20записьузловdom\x20запрос\x20защищенноесоединениеopenssl\x20значенияполейрасшифровкикомпоновкиданных\x20извлечениетекста\x20импортxs\x20интернетпочта\x20интернетпочтовоесообщение\x20интернетпочтовыйпрофиль\x20интернетпрокси\x20интернетсоединение\x20информациядляприложенияxs\x20использованиеатрибутаxs\x20использованиесобытияжурналарегистрации\x20источникдоступныхнастроеккомпоновкиданных\x20итераторузловdom\x20картинка\x20квалификаторыдаты\x20квалификаторыдвоичныхданных\x20квалификаторыстроки\x20квалификаторычисла\x20компоновщикмакетакомпоновкиданных\x20компоновщикнастроеккомпоновкиданных\x20конструктормакетаоформлениякомпоновкиданных\x20конструкторнастроеккомпоновкиданных\x20конструкторформатнойстроки\x20линия\x20макеткомпоновкиданных\x20макетобластикомпоновкиданных\x20макетоформлениякомпоновкиданных\x20маскаxs\x20менеджеркриптографии\x20наборсхемxml\x20настройкикомпоновкиданных\x20настройкисериализацииjson\x20обработкакартинок\x20обработкарасшифровкикомпоновкиданных\x20обходдереваdom\x20объявлениеатрибутаxs\x20объявлениенотацииxs\x20объявлениеэлементаxs\x20описаниеиспользованиясобытиядоступжурналарегистрации\x20описаниеиспользованиясобытияотк
азвдоступежурналарегистрации\x20описаниеобработкирасшифровкикомпоновкиданных\x20описаниепередаваемогофайла\x20описаниетипов\x20определениегруппыатрибутовxs\x20определениегруппымоделиxs\x20определениеограниченияидентичностиxs\x20определениепростоготипаxs\x20определениесоставноготипаxs\x20определениетипадокументаdom\x20определенияxpathxs\x20отборкомпоновкиданных\x20пакетотображаемыхдокументов\x20параметрвыбора\x20параметркомпоновкиданных\x20параметрызаписиjson\x20параметрызаписиxml\x20параметрычтенияxml\x20переопределениеxs\x20планировщик\x20полеанализаданных\x20полекомпоновкиданных\x20построительdom\x20построительзапроса\x20построительотчета\x20построительотчетаанализаданных\x20построительсхемxml\x20поток\x20потоквпамяти\x20почта\x20почтовоесообщение\x20преобразованиеxsl\x20преобразованиекканоническомуxml\x20процессорвыводарезультатакомпоновкиданныхвколлекциюзначений\x20процессорвыводарезультатакомпоновкиданныхвтабличныйдокумент\x20процессоркомпоновкиданных\x20разыменовательпространствименdom\x20рамка\x20расписаниерегламентногозадания\x20расширенноеимяxml\x20результатчтенияданных\x20своднаядиаграмма\x20связьпараметравыбора\x20связьпотипу\x20связьпотипукомпоновкиданных\x20сериализаторxdto\x20сертификатклиентаwindows\x20сертификатклиентафайл\x20сертификаткриптографии\x20сертификатыудостоверяющихцентровwindows\x20сертификатыудостоверяющихцентровфайл\x20сжатиеданных\x20системнаяинформация\x20сообщениепользователю\x20сочетаниеклавиш\x20сравнениезначений\x20стандартнаядатаначала\x20стандартныйпериод\x20схемаxml\x20схемакомпоновкиданных\x20табличныйдокумент\x20текстовыйдокумент\x20тестируемоеприложение\x20типданныхxml\x20уникальныйидентификатор\x20фабрикаxdto\x20файл\x20файловыйпоток\x20фасетдлиныxs\x20фасетколичестваразрядовдробнойчастиxs\x20фасетмаксимальноговключающегозначенияxs\x20фасетмаксимальногоисключающегозначенияxs\x20фасетмаксимальнойдлиныxs\x20фасетминимальноговключающегозначенияxs\x20фасетминимальногоисключающегозначенияxs\x20фасетминимальнойдлиныxs\x20фасетобр
азцаxs\x20фасетобщегоколичестваразрядовxs\x20фасетперечисленияxs\x20фасетпробельныхсимволовxs\x20фильтрузловdom\x20форматированнаястрока\x20форматированныйдокумент\x20фрагментxs\x20хешированиеданных\x20хранилищезначения\x20цвет\x20чтениеfastinfoset\x20чтениеhtml\x20чтениеjson\x20чтениеxml\x20чтениеzipфайла\x20чтениеданных\x20чтениетекста\x20чтениеузловdom\x20шрифт\x20элементрезультатакомпоновкиданных\x20comsafearray\x20деревозначений\x20массив\x20соответствие\x20списокзначений\x20структура\x20таблицазначений\x20фиксированнаяструктура\x20фиксированноесоответствие\x20фиксированныймассив\x20','returning','unativeint','declared-intentions','cue-before','(?=\x5c:\x5cs)','device_mouse_check_button_released','(go|goes|went)\x20to\x20[#Infinitive]','(?=[.\x5cs\x5cn[:,(])','StandardDeviation','SystemStub','InverseBetaRegularized','clickable_change','socketpair','EngineeringForm','\x0a','reflexpr','getSteamFriendsServers','FORECAST.ETS.STAT','\x20\x20\x20\x20Submit\x20\x20\x20','NetPrepend','sus','NotebookPath','⤜','will\x20have\x20been','VersionedPreferences','ComponentText','message-container','MatrixFunction','toggle','#elseif','os_psvita','as_const','o\x27er','gherkin','DateObjectQ','Proxy','⇄','zstyle','GraphProduct','DateInterval','Nginx\x20config','actor\x20addressof\x20and\x20as\x20be\x20break\x20class\x20compile_error\x20compile_intrinsic\x20consume\x20continue\x20delegate\x20digestof\x20do\x20else\x20elseif\x20embed\x20end\x20error\x20for\x20fun\x20if\x20ifdef\x20in\x20interface\x20is\x20isnt\x20lambda\x20let\x20match\x20new\x20not\x20object\x20or\x20primitive\x20recover\x20repeat\x20return\x20struct\x20then\x20trait\x20try\x20type\x20until\x20use\x20var\x20where\x20while\x20with\x20xor','vk_numpad6','MINIFS','setVehicleVarName','EulerMatrix','timeserial','when','capitalize','path_mirror','QUOTIENT','List','EchoFunction','Int64','⋈','createGuardedPoint','contract','ImageRotated','fa_volumeid','ContourLabels','RightDownVector','isEqualTypeParams','В','lock','semget',
'Masking','machine','FeatureSetByRelationshipName','DeleteCloudExpression','WiFiServer','mouseDragged','ExampleData','fun\x20macro','event_perform_object','all3DENEntities','and\x20not','BioSequenceBackTranslateList','clock_millis','SequenceIndicesLayer','ContentFieldOptions','dmin1','rotorsForcesRTD','convenience','c_signed_char','musician','titleCut','setMarkerSize','physics_fixture_bind_ext','zlib','FixedPoint','getOrDefaultCall','kbv_autocapitalize_sentences','Ï','Evaluator','compound','ReferenceMarkers','Network\x20response\x20was\x20not\x20ok','overlay','sync','surface_getpixel_ext','UnderflowException','MannedSpaceMissionData','vk_multiply','TrackFieldWindow','⇂','ctrlSetPositionY','nearObjects','physics_joint_gear_create','IndexStyle','⪇','FeatureExtraction','∢','on\x20off\x20all\x20deny\x20allow','removeAllWeapons','audio_sound_gain','a-minor-in','echotc','view','noHighlightRe','⋮','DiscretizeGraphics','LSL\x20(Linden\x20Scripting\x20Language)','TestID','disableConversation','WeakSet','national','InstProgressFlags','arcos','module','GeneratedDocumentBinding','CellProlog','ten','MultipleIterator','ImageForestingComponents','θ','buffer_invalidtype','atomic_noexcept','ShowCursorTracker','InputToBoxFormPacket','retry','cinv','DirectedGraph','DateListLogPlot','#imageLiteral','insertBefore','lnbSetColorRight','Polygon3DBoxOptions','LeveneTest','cooly','setLightConePars','|(?:','dash','extends_type_of','⪬︀','Sin','extra','assignedDriver','\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x0a\x0a\x0a\x20\x20\x20\x20\x0a\x20\x20\x20\x20\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a\x0a','FeatureSetByAssociation','StartExternalSession','ColumnForm','exec','possessive-verb','StepRange',')\x20#Cardinal\x20#Cardinal','COUNT','Master\x20Started\x20Slave\x20Stopped\x20start\x20promote\x20demote\x20stop\x20monitor\x20true\x20false','SystemsModelMerge','mercury','cynical','MaxDisplayedChildren','layer_sprite_get_id','regexp_contains','multiRegexes','abbr_class','}','openCu
ratorInterface','Kernels','correlate','ppEffectCommit','nginx','MailFolder',' ','clip-path','І','border-image-repeat','ℜ','drawCompass','curatorPoints','layer_get_vspeed','hour','physics_fixture_set_polygon_shape','@dynamic','verb-propernoun','ArrayPad','⦴',',(?!','CanonicalizePolygon','historic','coliseum','no\x20[#Adjective]\x20#Modal','OpenerBoxOptions','pointwiseMin','audio_listener_set_position','FunctionSurjective','SpaceForm','⩽̸','date_inc_month','Background','array_height_2d','rtranif1','GestureHandler','okla','getDammage','$warning','$AllowDataUpdates','steam_ugc_set_item_tags','td_close','gpu_set_cullmode','PolynomialQuotientRemainder','ctAddHeader','VertexShapeFunction','GrammarRules','how-is','isAutotest','AsRef','min1','OPTION','COMBIN','\x5cd+','\x5cb\x5cd{3}\x5cb','AsymptoticSum','grouping-separator','license','TouchscreenControlPlacement','vertex_position_3d','addRule','asr','NotPrecedesSlantEqual','↝̸','removeEventHandler','renderToken','Goal','vk_ralt','buildNet','NetDelete','weekdayname','window_set_colour','title.class','layer_get_all','BitRate','swear2-verb','ShowStringCharacters','c_float_complex','path_reverse','org-of-place','MIPS\x20Assembly','lbSelection','forceUnicode','cexp','ishft','ock','sort_desc','physics_pause_enable','words','LowpassFilter','timezone_utc','Deriv','[so]\x20#Noun','[#ProperNoun]\x20#Person','csr_extract_u','mel','physics_particle_draw','LogSeriesDistribution','SystemModels','RowHeights','atomic','SynthesizeMissingValues','ERROR.TYPE','
','ini_close','--+|--+(?!-)','keyboard_check_released','SpeechRecognize','iochecks','∷','⟺','FilledCurveBoxOptions','setDNS','execution','least','vk_add','[]\x5c{\x5c}%#\x27\x22','had-he','AnimationDisplayTime','^__END__$','saveregisters','isFull','(\x5c$\x5cW)|((\x5c$|@@?)(\x5cw+))(?=[^@$?])(?![A-Za-z])(?![@$?\x27])','print!','southern','⨴','getMarkerColor','bMarks','layer_tile_yscale','UninstallSubCaption','unaffected','OverVector','TextSearch','fb_login_default','pageY','kbv_autocapitalize_words','setUnitPos','handler','BSplineFunction','CellElementsBoundingBox','%[01]+','moonIntensity','SurvivalModel','$DefaultRemoteKernel','KroneckerModelDecomposition','Dendrogram','BinomialProcess','PascalDistribution','subscription','#ProperNoun\x20#Organization','LeftArrow','NSEC3','lsearch|10','StandardForm','os_get_language','SummationLayer','RegionEqual','prefers-color-scheme','BIGINT\x20INT8\x20BIGSERIAL\x20SERIAL8\x20BIT\x20VARYING\x20VARBIT\x20BOOLEAN\x20BOOL\x20BOX\x20BYTEA\x20CHARACTER\x20CHAR\x20VARCHAR\x20CIDR\x20CIRCLE\x20DATE\x20DOUBLE\x20PRECISION\x20FLOAT8\x20FLOAT\x20INET\x20INTEGER\x20INT\x20INT4\x20INTERVAL\x20JSON\x20JSONB\x20LINE\x20LSEG|10\x20MACADDR\x20MACADDR8\x20MONEY\x20NUMERIC\x20DEC\x20DECIMAL\x20PATH\x20POINT\x20POLYGON\x20REAL\x20FLOAT4\x20SMALLINT\x20INT2\x20SMALLSERIAL|10\x20SERIAL2|10\x20SERIAL|10\x20SERIAL4|10\x20TEXT\x20TIME\x20ZONE\x20TIMETZ|10\x20TIMESTAMP\x20TIMESTAMPTZ|10\x20TSQUERY|10\x20TSVECTOR|10\x20TXID_SNAPSHOT|10\x20UUID\x20XML\x20NATIONAL\x20NCHAR\x20INT4RANGE|10\x20INT8RANGE|10\x20NUMRANGE|10\x20TSRANGE|10\x20TSTZRANGE|10\x20DATERANGE|10\x20ANYELEMENT\x20ANYARRAY\x20ANYNONARRAY\x20ANYENUM\x20ANYRANGE\x20CSTRING\x20INTERNAL\x20RECORD\x20PG_DDL_COMMAND\x20VOID\x20UNKNOWN\x20OPAQUE\x20REFCURSOR\x20NAME\x20OID\x20REGPROC|10\x20REGPROCEDURE|10\x20REGOPER|10\x20REGOPERATOR|10\x20REGCLASS|10\x20REGTYPE|10\x20REGROLE|10\x20REGNAMESPACE|10\x20REGCONFIG|10\x20REGDICTIONARY|10\x20','steam_ugc_create_query_all_ex','langPrefix','fipstate','#W
eekDay\x20#Month\x20#Ordinal','BenktanderGibratDistribution','Assert','[\x5c]\x5c}]','ReliefPlot','dsin','UnderoverscriptBoxOptions','OrthogonalMatrixQ','CompoundRenewalProcess','border-inline-style','⋕','BarlowProschanImportance','UseEmbeddedLibrary','tcl_startOfPreviousWord','ugc_list_UsedOrPlayed','border-left-width','sget','AutoScroll','CharacterRange','⋑','Σ','MeshCells','addSwitchableUnit','border-top-width','trying','InternalError','CoefficientArrays','NormalizedSquaredEuclideanDistance','StabilityMargins','RenkoChart','hasGroup','class\x20interface','Intros','ComplexArrayPlot','[%Adj|Past%]\x20and\x20#Adjective','Deinitialization','setWaypointVisible','ImageGraphics','item','https://api.semanticscholar.org/graph/v1/paper/search','Equal','draw_healthbar','SequenceReverseLayer',':^(','Assumptions','define','#load-','cr_beam','asymmetric','DisjointQ','phy_joint_angle','log1p','$info','fullSentence','composedPath','event_object','###','face-shocking','⪁','ds_priority_empty','thisJoinPointStaticPart','aspect-ratio','antlrtask','Щ','displayUniqueName','MassSymmetryValue','shownScoretable','target','unfold','Abs','window_handle','','LibraryFunctionUnload','ì','MaxBend','ParetoDistribution','PartOfSpeech','Unitize','diag_post_multiply','WhenEvent','abstracts','phy_mass','StripOnInput','$1a','Catalan','Around','CrossingPolygon','TextLine','aimedAtTarget','posinfif','parseNumber','⥞','\x5cd{4}-\x5cd{2}-\x5cd{2}(\x5cs+)\x5cd{2}:\x5cd{2}:\x5cd{2}.\x5cd+Z','WORKDAY','į','filename_name','⧉','audio_emitter_get_vz','TableAlignments','border-top-left-radius','ds_grid_multiply_grid_region','nMenuItems','PackPaclet','subject','corpora','nextrec','GraphicsColor','reduce_sum','BusinessDayQ','#Adverb\x20[(march|may)]','PCOPY','PIDData','Span','and-a-half','usableFromInline','Р','inc','PolyhedronBox','XQuery','⤸','scriptengine','zftp','𝔱','npv','↗','Beveled','AnatomyPlot3D','addHandgunItem','CounterStyleMenuListing','[','$1ice','delay_mode_path','limit','PatternTest','case\x20if\x
20select\x20unless\x20until\x20when\x20while','ΜϺϻМмӍӎ','line_to','2/3rds','requires','#if','#Noun\x20[(who|whom)]','Place','FormLayoutFunction','FONTS','wait','module_function','Tolerance','username','KeypointStrength','gamma','tty','\x5cs*(\x5c{|$)','FileNameTake','FINDSTR','ose','draw_enable_swf_aa','temp_directory','midtown','education','eapply','won-wide-support','setVelocityModelSpace','$NumberMarks','$q_remove','textvalue-date','grouping','bgerror','copyWaypoints','BipartiteGraphQ','steam_is_cloud_enabled_for_app','alarm_set','gpu_get_state','^\x5cs*[A-Za-z._?][A-Za-z0-9_$#@~.?]*(:|\x5cs+label)','[(tastes|smells)]\x20#Adverb?\x20#Adjective','EpilogFunction','⅘','tokens','Annotate','initiate','ds_map_read','CellEpilog','GroupElementToWord','ė','$Initialization','↻','⧏̸','⧞','isSentence','SurvivalFunction','clear3DENInventory','AsymptoticProduct','authors','$LicenseServer','rotateY','eright','israelis','armoryPoints','))?([pP][+-]?(','HarmonicMean','micros','border-inline-start-width','date_inc_minute','steam_activate_overlay_user','come-on','^to\x20#Infinitive','\x5cb(XP_ERROR_(EXPERIENCES_DISABLED|EXPERIENCE_(DISABLED|SUSPENDED)|INVALID_(EXPERIENCE|PARAMETERS)|KEY_NOT_FOUND|MATURITY_EXCEEDED|NONE|NOT_(FOUND|PERMITTED(_LAND)?)|NO_EXPERIENCE|QUOTA_EXCEEDED|RETRY_UPDATE|STORAGE_EXCEPTION|STORE_DISABLED|THROTTLED|UNKNOWN_ERROR)|JSON_APPEND|STATUS_(PHYSICS|ROTATE_[XYZ]|PHANTOM|SANDBOX|BLOCK_GRAB(_OBJECT)?|(DIE|RETURN)_AT_EDGE|CAST_SHADOWS|OK|MALFORMED_PARAMS|TYPE_MISMATCH|BOUNDS_ERROR|NOT_(FOUND|SUPPORTED)|INTERNAL_ERROR|WHITELIST_FAILED)|AGENT(_(BY_(LEGACY_|USER)NAME|FLYING|ATTACHMENTS|SCRIPTED|MOUSELOOK|SITTING|ON_OBJECT|AWAY|WALKING|IN_AIR|TYPING|CROUCHING|BUSY|ALWAYS_RUN|AUTOPILOT|LIST_(PARCEL(_OWNER)?|REGION)))?|CAMERA_(PITCH|DISTANCE|BEHINDNESS_(ANGLE|LAG)|(FOCUS|POSITION)(_(THRESHOLD|LOCKED|LAG))?|FOCUS_OFFSET|ACTIVE)|ANIM_ON|LOOP|REVERSE|PING_PONG|SMOOTH|ROTATE|SCALE|ALL_SIDES|LINK_(ROOT|SET|ALL_(OTHERS|CHILDREN)|THIS)|ACTIVE|PASS(IVE|_(ALWAYS|IF_NOT_HANDL
ED|NEVER))|SCRIPTED|CONTROL_(FWD|BACK|(ROT_)?(LEFT|RIGHT)|UP|DOWN|(ML_)?LBUTTON)|PERMISSION_(RETURN_OBJECTS|DEBIT|OVERRIDE_ANIMATIONS|SILENT_ESTATE_MANAGEMENT|TAKE_CONTROLS|TRIGGER_ANIMATION|ATTACH|CHANGE_LINKS|(CONTROL|TRACK)_CAMERA|TELEPORT)|INVENTORY_(TEXTURE|SOUND|OBJECT|SCRIPT|LANDMARK|CLOTHING|NOTECARD|BODYPART|ANIMATION|GESTURE|ALL|NONE)|CHANGED_(INVENTORY|COLOR|SHAPE|SCALE|TEXTURE|LINK|ALLOWED_DROP|OWNER|REGION(_START)?|TELEPORT|MEDIA)|OBJECT_(CLICK_ACTION|HOVER_HEIGHT|LAST_OWNER_ID|(PHYSICS|SERVER|STREAMING)_COST|UNKNOWN_DETAIL|CHARACTER_TIME|PHANTOM|PHYSICS|TEMP_(ATTACHED|ON_REZ)|NAME|DESC|POS|PRIM_(COUNT|EQUIVALENCE)|RETURN_(PARCEL(_OWNER)?|REGION)|REZZER_KEY|ROO?T|VELOCITY|OMEGA|OWNER|GROUP(_TAG)?|CREATOR|ATTACHED_(POINT|SLOTS_AVAILABLE)|RENDER_WEIGHT|(BODY_SHAPE|PATHFINDING)_TYPE|(RUNNING|TOTAL)_SCRIPT_COUNT|TOTAL_INVENTORY_COUNT|SCRIPT_(MEMORY|TIME))|TYPE_(INTEGER|FLOAT|STRING|KEY|VECTOR|ROTATION|INVALID)|(DEBUG|PUBLIC)_CHANNEL|ATTACH_(AVATAR_CENTER|CHEST|HEAD|BACK|PELVIS|MOUTH|CHIN|NECK|NOSE|BELLY|[LR](SHOULDER|HAND|FOOT|EAR|EYE|[UL](ARM|LEG)|HIP)|(LEFT|RIGHT)_PEC|HUD_(CENTER_[12]|TOP_(RIGHT|CENTER|LEFT)|BOTTOM(_(RIGHT|LEFT))?)|[LR]HAND_RING1|TAIL_(BASE|TIP)|[LR]WING|FACE_(JAW|[LR]EAR|[LR]EYE|TOUNGE)|GROIN|HIND_[LR]FOOT)|LAND_(LEVEL|RAISE|LOWER|SMOOTH|NOISE|REVERT)|DATA_(ONLINE|NAME|BORN|SIM_(POS|STATUS|RATING)|PAYINFO)|PAYMENT_INFO_(ON_FILE|USED)|REMOTE_DATA_(CHANNEL|REQUEST|REPLY)|PSYS_(PART_(BF_(ZERO|ONE(_MINUS_(DEST_COLOR|SOURCE_(ALPHA|COLOR)))?|DEST_COLOR|SOURCE_(ALPHA|COLOR))|BLEND_FUNC_(DEST|SOURCE)|FLAGS|(START|END)_(COLOR|ALPHA|SCALE|GLOW)|MAX_AGE|(RIBBON|WIND|INTERP_(COLOR|SCALE)|BOUNCE|FOLLOW_(SRC|VELOCITY)|TARGET_(POS|LINEAR)|EMISSIVE)_MASK)|SRC_(MAX_AGE|PATTERN|ANGLE_(BEGIN|END)|BURST_(RATE|PART_COUNT|RADIUS|SPEED_(MIN|MAX))|ACCEL|TEXTURE|TARGET_KEY|OMEGA|PATTERN_(DROP|EXPLODE|ANGLE(_CONE(_EMPTY)?)?)))|VEHICLE_(REFERENCE_FRAME|TYPE_(NONE|SLED|CAR|BOAT|AIRPLANE|BALLOON)|(LINEAR|ANGULAR)_(FRICTION_TIMESCALE|MOTOR_DIRECTION)|LINEAR_MOTOR_OFF
SET|HOVER_(HEIGHT|EFFICIENCY|TIMESCALE)|BUOYANCY|(LINEAR|ANGULAR)_(DEFLECTION_(EFFICIENCY|TIMESCALE)|MOTOR_(DECAY_)?TIMESCALE)|VERTICAL_ATTRACTION_(EFFICIENCY|TIMESCALE)|BANKING_(EFFICIENCY|MIX|TIMESCALE)|FLAG_(NO_DEFLECTION_UP|LIMIT_(ROLL_ONLY|MOTOR_UP)|HOVER_((WATER|TERRAIN|UP)_ONLY|GLOBAL_HEIGHT)|MOUSELOOK_(STEER|BANK)|CAMERA_DECOUPLED))|PRIM_(ALLOW_UNSIT|ALPHA_MODE(_(BLEND|EMISSIVE|MASK|NONE))?|NORMAL|SPECULAR|TYPE(_(BOX|CYLINDER|PRISM|SPHERE|TORUS|TUBE|RING|SCULPT))?|HOLE_(DEFAULT|CIRCLE|SQUARE|TRIANGLE)|MATERIAL(_(STONE|METAL|GLASS|WOOD|FLESH|PLASTIC|RUBBER))?|SHINY_(NONE|LOW|MEDIUM|HIGH)|BUMP_(NONE|BRIGHT|DARK|WOOD|BARK|BRICKS|CHECKER|CONCRETE|TILE|STONE|DISKS|GRAVEL|BLOBS|SIDING|LARGETILE|STUCCO|SUCTION|WEAVE)|TEXGEN_(DEFAULT|PLANAR)|SCRIPTED_SIT_ONLY|SCULPT_(TYPE_(SPHERE|TORUS|PLANE|CYLINDER|MASK)|FLAG_(MIRROR|INVERT))|PHYSICS(_(SHAPE_(CONVEX|NONE|PRIM|TYPE)))?|(POS|ROT)_LOCAL|SLICE|TEXT|FLEXIBLE|POINT_LIGHT|TEMP_ON_REZ|PHANTOM|POSITION|SIT_TARGET|SIZE|ROTATION|TEXTURE|NAME|OMEGA|DESC|LINK_TARGET|COLOR|BUMP_SHINY|FULLBRIGHT|TEXGEN|GLOW|MEDIA_(ALT_IMAGE_ENABLE|CONTROLS|(CURRENT|HOME)_URL|AUTO_(LOOP|PLAY|SCALE|ZOOM)|FIRST_CLICK_INTERACT|(WIDTH|HEIGHT)_PIXELS|WHITELIST(_ENABLE)?|PERMS_(INTERACT|CONTROL)|PARAM_MAX|CONTROLS_(STANDARD|MINI)|PERM_(NONE|OWNER|GROUP|ANYONE)|MAX_(URL_LENGTH|WHITELIST_(SIZE|COUNT)|(WIDTH|HEIGHT)_PIXELS)))|MASK_(BASE|OWNER|GROUP|EVERYONE|NEXT)|PERM_(TRANSFER|MODIFY|COPY|MOVE|ALL)|PARCEL_(MEDIA_COMMAND_(STOP|PAUSE|PLAY|LOOP|TEXTURE|URL|TIME|AGENT|UNLOAD|AUTO_ALIGN|TYPE|SIZE|DESC|LOOP_SET)|FLAG_(ALLOW_(FLY|(GROUP_)?SCRIPTS|LANDMARK|TERRAFORM|DAMAGE|CREATE_(GROUP_)?OBJECTS)|USE_(ACCESS_(GROUP|LIST)|BAN_LIST|LAND_PASS_LIST)|LOCAL_SOUND_ONLY|RESTRICT_PUSHOBJECT|ALLOW_(GROUP|ALL)_OBJECT_ENTRY)|COUNT_(TOTAL|OWNER|GROUP|OTHER|SELECTED|TEMP)|DETAILS_(NAME|DESC|OWNER|GROUP|AREA|ID|SEE_AVATARS))|LIST_STAT_(MAX|MIN|MEAN|MEDIAN|STD_DEV|SUM(_SQUARES)?|NUM_COUNT|GEOMETRIC_MEAN|RANGE)|PAY_(HIDE|DEFAULT)|REGION_FLAG_(ALLOW_DAMAGE|FIXED_SUN|BLOCK_TERRAF
ORM|SANDBOX|DISABLE_(COLLISIONS|PHYSICS)|BLOCK_FLY|ALLOW_DIRECT_TELEPORT|RESTRICT_PUSHOBJECT)|HTTP_(METHOD|MIMETYPE|BODY_(MAXLENGTH|TRUNCATED)|CUSTOM_HEADER|PRAGMA_NO_CACHE|VERBOSE_THROTTLE|VERIFY_CERT)|SIT_(INVALID_(AGENT|LINK_OBJECT)|NO(T_EXPERIENCE|_(ACCESS|EXPERIENCE_PERMISSION|SIT_TARGET)))|STRING_(TRIM(_(HEAD|TAIL))?)|CLICK_ACTION_(NONE|TOUCH|SIT|BUY|PAY|OPEN(_MEDIA)?|PLAY|ZOOM)|TOUCH_INVALID_FACE|PROFILE_(NONE|SCRIPT_MEMORY)|RC_(DATA_FLAGS|DETECT_PHANTOM|GET_(LINK_NUM|NORMAL|ROOT_KEY)|MAX_HITS|REJECT_(TYPES|AGENTS|(NON)?PHYSICAL|LAND))|RCERR_(CAST_TIME_EXCEEDED|SIM_PERF_LOW|UNKNOWN)|ESTATE_ACCESS_(ALLOWED_(AGENT|GROUP)_(ADD|REMOVE)|BANNED_AGENT_(ADD|REMOVE))|DENSITY|FRICTION|RESTITUTION|GRAVITY_MULTIPLIER|KFM_(COMMAND|CMD_(PLAY|STOP|PAUSE)|MODE|FORWARD|LOOP|PING_PONG|REVERSE|DATA|ROTATION|TRANSLATION)|ERR_(GENERIC|PARCEL_PERMISSIONS|MALFORMED_PARAMS|RUNTIME_PERMISSIONS|THROTTLED)|CHARACTER_(CMD_((SMOOTH_)?STOP|JUMP)|DESIRED_(TURN_)?SPEED|RADIUS|STAY_WITHIN_PARCEL|LENGTH|ORIENTATION|ACCOUNT_FOR_SKIPPED_FRAMES|AVOIDANCE_MODE|TYPE(_([ABCD]|NONE))?|MAX_(DECEL|TURN_RADIUS|(ACCEL|SPEED)))|PURSUIT_(OFFSET|FUZZ_FACTOR|GOAL_TOLERANCE|INTERCEPT)|REQUIRE_LINE_OF_SIGHT|FORCE_DIRECT_PATH|VERTICAL|HORIZONTAL|AVOID_(CHARACTERS|DYNAMIC_OBSTACLES|NONE)|PU_(EVADE_(HIDDEN|SPOTTED)|FAILURE_(DYNAMIC_PATHFINDING_DISABLED|INVALID_(GOAL|START)|NO_(NAVMESH|VALID_DESTINATION)|OTHER|TARGET_GONE|(PARCEL_)?UNREACHABLE)|(GOAL|SLOWDOWN_DISTANCE)_REACHED)|TRAVERSAL_TYPE(_(FAST|NONE|SLOW))?|CONTENT_TYPE_(ATOM|FORM|HTML|JSON|LLSD|RSS|TEXT|XHTML|XML)|GCNP_(RADIUS|STATIC)|(PATROL|WANDER)_PAUSE_AT_WAYPOINTS|OPT_(AVATAR|CHARACTER|EXCLUSION_VOLUME|LEGACY_LINKSET|MATERIAL_VOLUME|OTHER|STATIC_OBSTACLE|WALKABLE)|SIM_STAT_PCT_CHARS_STEPPED)\x5cb','GeoBubbleChart','n-acro-noun','NewLine','analogReference','__esModule','TargetUnits','sons','requestFrom','AstroStyling','TRON','PermissionsGroupMemberQ','⦏','condition','\x22\x22\x22(?=[^\x22])','`\x22[^\x0d\x0a]*?\x22\x27','true¦a07b04cXdWexVfTgRhePinYjoul
e0BkMlJmDnan08oCp9quart0Bsq\x20ft,t7volts,w6y2ze3°1µ0;g,s;c,f,n;dVear1o0;ttR;\x200s\x200;old;att,b;erNon0;!ne02;ascals,e1i0;cXnt00;rcent,tJ;hms,unceY;/s,e4i0m²,²,³;/h,cro2l0;e0liK;!²;grLsR;gCtJ;it1u0;menQx;erPreP;b5elvins,ilo1m0notO;/h,ph,²;!byGgrEmCs;ct0rtzL;aJogrC;allonJb0ig3rB;ps;a0emtEl\x20oz,t4;hrenheit,radG;aby9;eci3m1;aratDe1m0oulombD;²,³;lsius,nti0;gr2lit1m0;et0;er8;am7;b1y0;te5;l,ps;c2tt0;os0;econd1;re0;!s','^wanted\x20to\x20#Infinitive$','Sorted','gms','⩴','log_modified_bessel_first_kind','𝕤','voption','flatDir','ℏ','WaveletFilterCoefficients','mutable','RandomPoint','Context','iap_status_loading',':\x5cw+\x5cs*=>','$AssertFunction','sendString','sorted','SharingList','⩉','$SessionID','unhex','GKInspectable','iget','Month','SUMSQ','clip-rule','ReactionBalance','knobRead','RemoteBatchSubmissionEnvironment','the-planning-process','FillingStyle','king-of-noun','ppEffectForceInNVG','lchoose','wire','seems-filled','Dot','$fwrite','FindSequenceFunction','instance_destroy','surface_set_target_ext','PageBreakBelow','of_challen','JacobiAmplitude','ShowWindow','CONVERT','(=(?!>))?|[-+*/%](?!>)','achievement_show_ui','\x5c$[^01]|#[^0-9a-fA-F]','std_normal','[À-ʸa-zA-Z_$][À-ʸa-zA-Z_$0-9]*','YES','getCompatiblePylonMagazines','writeBlue','KEdgeConnectedGraphQ','ResourceRemove','StripBoxes','⪵','WaveletBestBasis','cornish','CIRCLE','к','vprintf','sqf','LexicographicSort','updateDrawIcon','VOFFSET','ξ','MachineNumberQ','TraceForward','turretOwner','LANGUAGE_HANDLER\x20TRIGGER\x20EVENT_TRIGGER\x20FDW_HANDLER\x20INDEX_AM_HANDLER\x20TSM_HANDLER','#Determiner\x20#Demonym\x20[#PresentTense]','wanna','creek','⩹','FoxHReduce','sprite_replace','privilege','caret-color','severity','PartialD','(-:','(#Possessive|#Pronoun)','chmod','ctrlAnimateModel','WeightedAdjacencyGraph','ValueBoxOptions','KalmanEstimator','BEGIN','kan','InverseJacobiDN','NotTildeFullEqual','expose','layerelementtype_tile','ISREF','isGameFocused','animation-play-state','^(could|must|should|shall)\x20have\x20#Pa
stTense$','PersistenceTime','$DefaultFrontEnd','EllipticNomeQ','TrainingUpdateSchedule','xsl','⋬','sysget','ode_ckrk','LuccioSamiComponents','workers','gotta','setRain','third','xhtmlOut','assertionFailure','$NewMessage','priority_queue','(we|they)','ds_grid_add_disk','\x5cs*(?:=|:=)\x5cs*)?(\x5c(.*\x5c)\x5cs*)?\x5cB!?[-~]{1,2}>\x5c*?','(want|wants|wanted)\x20to\x20#Infinitive','ControlActive','[A-Za-zА-Яа-яёЁ_][A-Za-zА-Яа-яёЁ_0-9]*\x5c(','textLogFormat','respect\x20nulls','timed_mutex','Disk','UNLOCK','NormalDistribution','FixedOrder','matrix_multiply','difficultyEnabled','APA','helpModal','setTriggerArea','⥕','className','ņ','http_get_file','cyan','mask','NotebookOpen','copied!','[\x5c{]','Sleep','Into','.overlay-progressive','SolveAlways','rollup','tile_get_rotate','BetweennessCentrality','PolarPlot','SECH','addUniform','ados','part_emitter_destroy_all','PlaceholderReplace','query','SubtypeCode','CumulativeFeatureImpactPlot','loc','[a-z-]+','uptown','crises','line_color','#progress','≸','f64','LambdaComponents','tempfile','ShannonWavelet','^had\x20been\x20#Gerund$','setPlayable','TreePlot','eighth','Returning\x20results','isprint','DotPlusLayer','bay','$DefaultNetworkInterface','window_set_size','cameraEffect','TagBoxOptions','lognormal','PodWidth','stroke','deptab','concat','send-around','CircleTimes','Atan','CurrencyConvert','a-bit-cold','ā','audio_emitter_get_listener_mask','isAbleToBreathe','stay-away','ExternalStorageObject','rep_row_vector','MAProcess','≆','fmax','Distance',':(?!\x5cs)','plan','LogNormalDistribution','DiscreteRiccatiSolve','visit',')|\x5c.)?|(','diag_captureSlowFrame','ImageFileScan','―','VertexContract','triand','$InterpreterTypes','⊸','create','Servo','OpenerBox','never','#Possessive\x20#Adjective\x20[#Verb]','ReliabilityDistribution','⨢','latter','tumbling','message-name','INPUT','dark-green','remember','NetMapThreadOperator','$assertcontrol','subjects','is_bool','ctrlSetFontH6','YIELDMAT','██████████','cameraView','coffeescript','PaneSe
lectorBox','EventSeries','FindPeaks','WebAssembly','𝒱','MountainData','à','management','NS_SWIFT_NOTHROW','kbAddDatabaseTargets','#keyPath','ToLocal','md5','#Adverb\x20[%Person|Verb%]','silent','Font','Yellow','ev_user2','(^|(?![.:/\x5c-_@])(?:[$+<=>^`||]|','RemoveAlphaChannel','ads_move','AsymptoticEqual','recursive_timed_mutex','gpu_set_tex_mip_bias','(#Verb\x20&&\x20!#Copula)\x20[being]\x20#Verb','CenterArray','part_type_life','setPINUsed','\x22\x20type=\x22text\x22\x20placeholder=\x22Expand\x20your\x20search\x22\x20class=\x22w-64\x20px-4\x20py-2\x20border\x20border-gray-300\x20rounded-l-md\x20focus:outline-none\x20focus:ring-2\x20focus:ring-blue-500\x20focus:border-transparent\x22\x20/>\x0a

\x0a\x0a','keys','$PerformanceGoal','RoundToZero','Confirm','frequency','FromCoefficientRules','PROGRAM_FILE','#Adjective\x20and\x20[#Gerund]\x20!#Preposition?','Eigensystem','15\x20usd','Key','menuitem','mask_index','turretUnit','exact','lgt','toCardinal','rantbl','\x5cb(\x5cd+|\x5cd+\x5c.|\x5c.\x5cd+|\x5cd+\x5c.\x5cd+)[Ee][-+]?\x5cd+\x5cb','Participle','redo','mask-border','ev_gesture_rotate_start','nav','FindDistributionParameters','lockTurret','scriptDone','abort','TimeSeries','given-up-on-x','initialization','Names','≿̸','Atomics','far-too','SphericalPlot3D','syndicat','clog','AbsoluteCorrelation','))?\x5cb','SHELL','Floor','writeGreen','isAutoStartUpEnabledRTD','HadamardMatrix','ClosenessCentrality','mutating','parents','part_type_colour_hsv','AllowReverseGroupClose','plateau','date_get_timezone','if\x20else\x20in\x20foreach\x20for\x20forv\x20forva\x20forval\x20forvalu\x20forvalue\x20forvalues\x20by\x20bys\x20bysort\x20xi\x20quietly\x20qui\x20capture\x20about\x20ac\x20ac_7\x20acprplot\x20acprplot_7\x20adjust\x20ado\x20adopath\x20adoupdate\x20alpha\x20ameans\x20an\x20ano\x20anov\x20anova\x20anova_estat\x20anova_terms\x20anovadef\x20aorder\x20ap\x20app\x20appe\x20appen\x20append\x20arch\x20arch_dr\x20arch_estat\x20arch_p\x20archlm\x20areg\x20areg_p\x20args\x20arima\x20arima_dr\x20arima_estat\x20arima_p\x20as\x20asmprobit\x20asmprobit_estat\x20asmprobit_lf\x20asmprobit_mfx__dlg\x20asmprobit_p\x20ass\x20asse\x20asser\x20assert\x20avplot\x20avplot_7\x20avplots\x20avplots_7\x20bcskew0\x20bgodfrey\x20bias\x20binreg\x20bip0_lf\x20biplot\x20bipp_lf\x20bipr_lf\x20bipr_p\x20biprobit\x20bitest\x20bitesti\x20bitowt\x20blogit\x20bmemsize\x20boot\x20bootsamp\x20bootstrap\x20bootstrap_8\x20boxco_l\x20boxco_p\x20boxcox\x20boxcox_6\x20boxcox_p\x20bprobit\x20br\x20break\x20brier\x20bro\x20brow\x20brows\x20browse\x20brr\x20brrstat\x20bs\x20bs_7\x20bsampl_w\x20bsample\x20bsample_7\x20bsqreg\x20bstat\x20bstat_7\x20bstat_8\x20bstrap\x20bstrap_7\x20bubble\x20bubbleplot\x20ca\x20ca_es
tat\x20ca_p\x20cabiplot\x20camat\x20canon\x20canon_8\x20canon_8_p\x20canon_estat\x20canon_p\x20cap\x20caprojection\x20capt\x20captu\x20captur\x20capture\x20cat\x20cc\x20cchart\x20cchart_7\x20cci\x20cd\x20censobs_table\x20centile\x20cf\x20char\x20chdir\x20checkdlgfiles\x20checkestimationsample\x20checkhlpfiles\x20checksum\x20chelp\x20ci\x20cii\x20cl\x20class\x20classutil\x20clear\x20cli\x20clis\x20clist\x20clo\x20clog\x20clog_lf\x20clog_p\x20clogi\x20clogi_sw\x20clogit\x20clogit_lf\x20clogit_p\x20clogitp\x20clogl_sw\x20cloglog\x20clonevar\x20clslistarray\x20cluster\x20cluster_measures\x20cluster_stop\x20cluster_tree\x20cluster_tree_8\x20clustermat\x20cmdlog\x20cnr\x20cnre\x20cnreg\x20cnreg_p\x20cnreg_sw\x20cnsreg\x20codebook\x20collaps4\x20collapse\x20colormult_nb\x20colormult_nw\x20compare\x20compress\x20conf\x20confi\x20confir\x20confirm\x20conren\x20cons\x20const\x20constr\x20constra\x20constrai\x20constrain\x20constraint\x20continue\x20contract\x20copy\x20copyright\x20copysource\x20cor\x20corc\x20corr\x20corr2data\x20corr_anti\x20corr_kmo\x20corr_smc\x20corre\x20correl\x20correla\x20correlat\x20correlate\x20corrgram\x20cou\x20coun\x20count\x20cox\x20cox_p\x20cox_sw\x20coxbase\x20coxhaz\x20coxvar\x20cprplot\x20cprplot_7\x20crc\x20cret\x20cretu\x20cretur\x20creturn\x20cross\x20cs\x20cscript\x20cscript_log\x20csi\x20ct\x20ct_is\x20ctset\x20ctst_5\x20ctst_st\x20cttost\x20cumsp\x20cumsp_7\x20cumul\x20cusum\x20cusum_7\x20cutil\x20d|0\x20datasig\x20datasign\x20datasigna\x20datasignat\x20datasignatu\x20datasignatur\x20datasignature\x20datetof\x20db\x20dbeta\x20de\x20dec\x20deco\x20decod\x20decode\x20deff\x20des\x20desc\x20descr\x20descri\x20describ\x20describe\x20destring\x20dfbeta\x20dfgls\x20dfuller\x20di\x20di_g\x20dir\x20dirstats\x20dis\x20discard\x20disp\x20disp_res\x20disp_s\x20displ\x20displa\x20display\x20distinct\x20do\x20doe\x20doed\x20doedi\x20doedit\x20dotplot\x20dotplot_7\x20dprobit\x20drawnorm\x20drop\x20ds\x20ds_util\x20dstdize\x20duplicates\x20durbina\x20
dwstat\x20dydx\x20e|0\x20ed\x20edi\x20edit\x20egen\x20eivreg\x20emdef\x20en\x20enc\x20enco\x20encod\x20encode\x20eq\x20erase\x20ereg\x20ereg_lf\x20ereg_p\x20ereg_sw\x20ereghet\x20ereghet_glf\x20ereghet_glf_sh\x20ereghet_gp\x20ereghet_ilf\x20ereghet_ilf_sh\x20ereghet_ip\x20eret\x20eretu\x20eretur\x20ereturn\x20err\x20erro\x20error\x20esize\x20est\x20est_cfexist\x20est_cfname\x20est_clickable\x20est_expand\x20est_hold\x20est_table\x20est_unhold\x20est_unholdok\x20estat\x20estat_default\x20estat_summ\x20estat_vce_only\x20esti\x20estimates\x20etodow\x20etof\x20etomdy\x20ex\x20exi\x20exit\x20expand\x20expandcl\x20fac\x20fact\x20facto\x20factor\x20factor_estat\x20factor_p\x20factor_pca_rotated\x20factor_rotate\x20factormat\x20fcast\x20fcast_compute\x20fcast_graph\x20fdades\x20fdadesc\x20fdadescr\x20fdadescri\x20fdadescrib\x20fdadescribe\x20fdasav\x20fdasave\x20fdause\x20fh_st\x20file\x20open\x20file\x20read\x20file\x20close\x20file\x20filefilter\x20fillin\x20find_hlp_file\x20findfile\x20findit\x20findit_7\x20fit\x20fl\x20fli\x20flis\x20flist\x20for5_0\x20forest\x20forestplot\x20form\x20forma\x20format\x20fpredict\x20frac_154\x20frac_adj\x20frac_chk\x20frac_cox\x20frac_ddp\x20frac_dis\x20frac_dv\x20frac_in\x20frac_mun\x20frac_pp\x20frac_pq\x20frac_pv\x20frac_wgt\x20frac_xo\x20fracgen\x20fracplot\x20fracplot_7\x20fracpoly\x20fracpred\x20fron_ex\x20fron_hn\x20fron_p\x20fron_tn\x20fron_tn2\x20frontier\x20ftodate\x20ftoe\x20ftomdy\x20ftowdate\x20funnel\x20funnelplot\x20g|0\x20gamhet_glf\x20gamhet_gp\x20gamhet_ilf\x20gamhet_ip\x20gamma\x20gamma_d2\x20gamma_p\x20gamma_sw\x20gammahet\x20gdi_hexagon\x20gdi_spokes\x20ge\x20gen\x20gene\x20gener\x20genera\x20generat\x20generate\x20genrank\x20genstd\x20genvmean\x20gettoken\x20gl\x20gladder\x20gladder_7\x20glim_l01\x20glim_l02\x20glim_l03\x20glim_l04\x20glim_l05\x20glim_l06\x20glim_l07\x20glim_l08\x20glim_l09\x20glim_l10\x20glim_l11\x20glim_l12\x20glim_lf\x20glim_mu\x20glim_nw1\x20glim_nw2\x20glim_nw3\x20glim_p\x20glim_v1\x20glim_v2\x2
0glim_v3\x20glim_v4\x20glim_v5\x20glim_v6\x20glim_v7\x20glm\x20glm_6\x20glm_p\x20glm_sw\x20glmpred\x20glo\x20glob\x20globa\x20global\x20glogit\x20glogit_8\x20glogit_p\x20gmeans\x20gnbre_lf\x20gnbreg\x20gnbreg_5\x20gnbreg_p\x20gomp_lf\x20gompe_sw\x20gomper_p\x20gompertz\x20gompertzhet\x20gomphet_glf\x20gomphet_glf_sh\x20gomphet_gp\x20gomphet_ilf\x20gomphet_ilf_sh\x20gomphet_ip\x20gphdot\x20gphpen\x20gphprint\x20gprefs\x20gprobi_p\x20gprobit\x20gprobit_8\x20gr\x20gr7\x20gr_copy\x20gr_current\x20gr_db\x20gr_describe\x20gr_dir\x20gr_draw\x20gr_draw_replay\x20gr_drop\x20gr_edit\x20gr_editviewopts\x20gr_example\x20gr_example2\x20gr_export\x20gr_print\x20gr_qscheme\x20gr_query\x20gr_read\x20gr_rename\x20gr_replay\x20gr_save\x20gr_set\x20gr_setscheme\x20gr_table\x20gr_undo\x20gr_use\x20graph\x20graph7\x20grebar\x20greigen\x20greigen_7\x20greigen_8\x20grmeanby\x20grmeanby_7\x20gs_fileinfo\x20gs_filetype\x20gs_graphinfo\x20gs_stat\x20gsort\x20gwood\x20h|0\x20hadimvo\x20hareg\x20hausman\x20haver\x20he\x20heck_d2\x20heckma_p\x20heckman\x20heckp_lf\x20heckpr_p\x20heckprob\x20hel\x20help\x20hereg\x20hetpr_lf\x20hetpr_p\x20hetprob\x20hettest\x20hexdump\x20hilite\x20hist\x20hist_7\x20histogram\x20hlogit\x20hlu\x20hmeans\x20hotel\x20hotelling\x20hprobit\x20hreg\x20hsearch\x20icd9\x20icd9_ff\x20icd9p\x20iis\x20impute\x20imtest\x20inbase\x20include\x20inf\x20infi\x20infil\x20infile\x20infix\x20inp\x20inpu\x20input\x20ins\x20insheet\x20insp\x20inspe\x20inspec\x20inspect\x20integ\x20inten\x20intreg\x20intreg_7\x20intreg_p\x20intrg2_ll\x20intrg_ll\x20intrg_ll2\x20ipolate\x20iqreg\x20ir\x20irf\x20irf_create\x20irfm\x20iri\x20is_svy\x20is_svysum\x20isid\x20istdize\x20ivprob_1_lf\x20ivprob_lf\x20ivprobit\x20ivprobit_p\x20ivreg\x20ivreg_footnote\x20ivtob_1_lf\x20ivtob_lf\x20ivtobit\x20ivtobit_p\x20jackknife\x20jacknife\x20jknife\x20jknife_6\x20jknife_8\x20jkstat\x20joinby\x20kalarma1\x20kap\x20kap_3\x20kapmeier\x20kappa\x20kapwgt\x20kdensity\x20kdensity_7\x20keep\x20ksm\x20ksmirnov\x20ktau\x
20kwallis\x20l|0\x20la\x20lab\x20labbe\x20labbeplot\x20labe\x20label\x20labelbook\x20ladder\x20levels\x20levelsof\x20leverage\x20lfit\x20lfit_p\x20li\x20lincom\x20line\x20linktest\x20lis\x20list\x20lloghet_glf\x20lloghet_glf_sh\x20lloghet_gp\x20lloghet_ilf\x20lloghet_ilf_sh\x20lloghet_ip\x20llogi_sw\x20llogis_p\x20llogist\x20llogistic\x20llogistichet\x20lnorm_lf\x20lnorm_sw\x20lnorma_p\x20lnormal\x20lnormalhet\x20lnormhet_glf\x20lnormhet_glf_sh\x20lnormhet_gp\x20lnormhet_ilf\x20lnormhet_ilf_sh\x20lnormhet_ip\x20lnskew0\x20loadingplot\x20loc\x20loca\x20local\x20log\x20logi\x20logis_lf\x20logistic\x20logistic_p\x20logit\x20logit_estat\x20logit_p\x20loglogs\x20logrank\x20loneway\x20lookfor\x20lookup\x20lowess\x20lowess_7\x20lpredict\x20lrecomp\x20lroc\x20lroc_7\x20lrtest\x20ls\x20lsens\x20lsens_7\x20lsens_x\x20lstat\x20ltable\x20ltable_7\x20ltriang\x20lv\x20lvr2plot\x20lvr2plot_7\x20m|0\x20ma\x20mac\x20macr\x20macro\x20makecns\x20man\x20manova\x20manova_estat\x20manova_p\x20manovatest\x20mantel\x20mark\x20markin\x20markout\x20marksample\x20mat\x20mat_capp\x20mat_order\x20mat_put_rr\x20mat_rapp\x20mata\x20mata_clear\x20mata_describe\x20mata_drop\x20mata_matdescribe\x20mata_matsave\x20mata_matuse\x20mata_memory\x20mata_mlib\x20mata_mosave\x20mata_rename\x20mata_which\x20matalabel\x20matcproc\x20matlist\x20matname\x20matr\x20matri\x20matrix\x20matrix_input__dlg\x20matstrik\x20mcc\x20mcci\x20md0_\x20md1_\x20md1debug_\x20md2_\x20md2debug_\x20mds\x20mds_estat\x20mds_p\x20mdsconfig\x20mdslong\x20mdsmat\x20mdsshepard\x20mdytoe\x20mdytof\x20me_derd\x20mean\x20means\x20median\x20memory\x20memsize\x20menl\x20meqparse\x20mer\x20merg\x20merge\x20meta\x20mfp\x20mfx\x20mhelp\x20mhodds\x20minbound\x20mixed_ll\x20mixed_ll_reparm\x20mkassert\x20mkdir\x20mkmat\x20mkspline\x20ml\x20ml_5\x20ml_adjs\x20ml_bhhhs\x20ml_c_d\x20ml_check\x20ml_clear\x20ml_cnt\x20ml_debug\x20ml_defd\x20ml_e0\x20ml_e0_bfgs\x20ml_e0_cycle\x20ml_e0_dfp\x20ml_e0i\x20ml_e1\x20ml_e1_bfgs\x20ml_e1_bhhh\x20ml_e1_cycle\x2
0ml_e1_dfp\x20ml_e2\x20ml_e2_cycle\x20ml_ebfg0\x20ml_ebfr0\x20ml_ebfr1\x20ml_ebh0q\x20ml_ebhh0\x20ml_ebhr0\x20ml_ebr0i\x20ml_ecr0i\x20ml_edfp0\x20ml_edfr0\x20ml_edfr1\x20ml_edr0i\x20ml_eds\x20ml_eer0i\x20ml_egr0i\x20ml_elf\x20ml_elf_bfgs\x20ml_elf_bhhh\x20ml_elf_cycle\x20ml_elf_dfp\x20ml_elfi\x20ml_elfs\x20ml_enr0i\x20ml_enrr0\x20ml_erdu0\x20ml_erdu0_bfgs\x20ml_erdu0_bhhh\x20ml_erdu0_bhhhq\x20ml_erdu0_cycle\x20ml_erdu0_dfp\x20ml_erdu0_nrbfgs\x20ml_exde\x20ml_footnote\x20ml_geqnr\x20ml_grad0\x20ml_graph\x20ml_hbhhh\x20ml_hd0\x20ml_hold\x20ml_init\x20ml_inv\x20ml_log\x20ml_max\x20ml_mlout\x20ml_mlout_8\x20ml_model\x20ml_nb0\x20ml_opt\x20ml_p\x20ml_plot\x20ml_query\x20ml_rdgrd\x20ml_repor\x20ml_s_e\x20ml_score\x20ml_searc\x20ml_technique\x20ml_unhold\x20mleval\x20mlf_\x20mlmatbysum\x20mlmatsum\x20mlog\x20mlogi\x20mlogit\x20mlogit_footnote\x20mlogit_p\x20mlopts\x20mlsum\x20mlvecsum\x20mnl0_\x20mor\x20more\x20mov\x20move\x20mprobit\x20mprobit_lf\x20mprobit_p\x20mrdu0_\x20mrdu1_\x20mvdecode\x20mvencode\x20mvreg\x20mvreg_estat\x20n|0\x20nbreg\x20nbreg_al\x20nbreg_lf\x20nbreg_p\x20nbreg_sw\x20nestreg\x20net\x20newey\x20newey_7\x20newey_p\x20news\x20nl\x20nl_7\x20nl_9\x20nl_9_p\x20nl_p\x20nl_p_7\x20nlcom\x20nlcom_p\x20nlexp2\x20nlexp2_7\x20nlexp2a\x20nlexp2a_7\x20nlexp3\x20nlexp3_7\x20nlgom3\x20nlgom3_7\x20nlgom4\x20nlgom4_7\x20nlinit\x20nllog3\x20nllog3_7\x20nllog4\x20nllog4_7\x20nlog_rd\x20nlogit\x20nlogit_p\x20nlogitgen\x20nlogittree\x20nlpred\x20no\x20nobreak\x20noi\x20nois\x20noisi\x20noisil\x20noisily\x20note\x20notes\x20notes_dlg\x20nptrend\x20numlabel\x20numlist\x20odbc\x20old_ver\x20olo\x20olog\x20ologi\x20ologi_sw\x20ologit\x20ologit_p\x20ologitp\x20on\x20one\x20onew\x20onewa\x20oneway\x20op_colnm\x20op_comp\x20op_diff\x20op_inv\x20op_str\x20opr\x20opro\x20oprob\x20oprob_sw\x20oprobi\x20oprobi_p\x20oprobit\x20oprobitp\x20opts_exclusive\x20order\x20orthog\x20orthpoly\x20ou\x20out\x20outf\x20outfi\x20outfil\x20outfile\x20outs\x20outsh\x20outshe\x20outshee\x20outsheet
\x20ovtest\x20pac\x20pac_7\x20palette\x20parse\x20parse_dissim\x20pause\x20pca\x20pca_8\x20pca_display\x20pca_estat\x20pca_p\x20pca_rotate\x20pcamat\x20pchart\x20pchart_7\x20pchi\x20pchi_7\x20pcorr\x20pctile\x20pentium\x20pergram\x20pergram_7\x20permute\x20permute_8\x20personal\x20peto_st\x20pkcollapse\x20pkcross\x20pkequiv\x20pkexamine\x20pkexamine_7\x20pkshape\x20pksumm\x20pksumm_7\x20pl\x20plo\x20plot\x20plugin\x20pnorm\x20pnorm_7\x20poisgof\x20poiss_lf\x20poiss_sw\x20poisso_p\x20poisson\x20poisson_estat\x20post\x20postclose\x20postfile\x20postutil\x20pperron\x20pr\x20prais\x20prais_e\x20prais_e2\x20prais_p\x20predict\x20predictnl\x20preserve\x20print\x20pro\x20prob\x20probi\x20probit\x20probit_estat\x20probit_p\x20proc_time\x20procoverlay\x20procrustes\x20procrustes_estat\x20procrustes_p\x20profiler\x20prog\x20progr\x20progra\x20program\x20prop\x20proportion\x20prtest\x20prtesti\x20pwcorr\x20pwd\x20q\x5cs\x20qby\x20qbys\x20qchi\x20qchi_7\x20qladder\x20qladder_7\x20qnorm\x20qnorm_7\x20qqplot\x20qqplot_7\x20qreg\x20qreg_c\x20qreg_p\x20qreg_sw\x20qu\x20quadchk\x20quantile\x20quantile_7\x20que\x20quer\x20query\x20range\x20ranksum\x20ratio\x20rchart\x20rchart_7\x20rcof\x20recast\x20reclink\x20recode\x20reg\x20reg3\x20reg3_p\x20regdw\x20regr\x20regre\x20regre_p2\x20regres\x20regres_p\x20regress\x20regress_estat\x20regriv_p\x20remap\x20ren\x20rena\x20renam\x20rename\x20renpfix\x20repeat\x20replace\x20report\x20reshape\x20restore\x20ret\x20retu\x20retur\x20return\x20rm\x20rmdir\x20robvar\x20roccomp\x20roccomp_7\x20roccomp_8\x20rocf_lf\x20rocfit\x20rocfit_8\x20rocgold\x20rocplot\x20rocplot_7\x20roctab\x20roctab_7\x20rolling\x20rologit\x20rologit_p\x20rot\x20rota\x20rotat\x20rotate\x20rotatemat\x20rreg\x20rreg_p\x20ru\x20run\x20runtest\x20rvfplot\x20rvfplot_7\x20rvpplot\x20rvpplot_7\x20sa\x20safesum\x20sample\x20sampsi\x20sav\x20save\x20savedresults\x20saveold\x20sc\x20sca\x20scal\x20scala\x20scalar\x20scatter\x20scm_mine\x20sco\x20scob_lf\x20scob_p\x20scobi_sw\x20scobit\
x20scor\x20score\x20scoreplot\x20scoreplot_help\x20scree\x20screeplot\x20screeplot_help\x20sdtest\x20sdtesti\x20se\x20search\x20separate\x20seperate\x20serrbar\x20serrbar_7\x20serset\x20set\x20set_defaults\x20sfrancia\x20sh\x20she\x20shel\x20shell\x20shewhart\x20shewhart_7\x20signestimationsample\x20signrank\x20signtest\x20simul\x20simul_7\x20simulate\x20simulate_8\x20sktest\x20sleep\x20slogit\x20slogit_d2\x20slogit_p\x20smooth\x20snapspan\x20so\x20sor\x20sort\x20spearman\x20spikeplot\x20spikeplot_7\x20spikeplt\x20spline_x\x20split\x20sqreg\x20sqreg_p\x20sret\x20sretu\x20sretur\x20sreturn\x20ssc\x20st\x20st_ct\x20st_hc\x20st_hcd\x20st_hcd_sh\x20st_is\x20st_issys\x20st_note\x20st_promo\x20st_set\x20st_show\x20st_smpl\x20st_subid\x20stack\x20statsby\x20statsby_8\x20stbase\x20stci\x20stci_7\x20stcox\x20stcox_estat\x20stcox_fr\x20stcox_fr_ll\x20stcox_p\x20stcox_sw\x20stcoxkm\x20stcoxkm_7\x20stcstat\x20stcurv\x20stcurve\x20stcurve_7\x20stdes\x20stem\x20stepwise\x20stereg\x20stfill\x20stgen\x20stir\x20stjoin\x20stmc\x20stmh\x20stphplot\x20stphplot_7\x20stphtest\x20stphtest_7\x20stptime\x20strate\x20strate_7\x20streg\x20streg_sw\x20streset\x20sts\x20sts_7\x20stset\x20stsplit\x20stsum\x20sttocc\x20sttoct\x20stvary\x20stweib\x20su\x20suest\x20suest_8\x20sum\x20summ\x20summa\x20summar\x20summari\x20summariz\x20summarize\x20sunflower\x20sureg\x20survcurv\x20survsum\x20svar\x20svar_p\x20svmat\x20svy\x20svy_disp\x20svy_dreg\x20svy_est\x20svy_est_7\x20svy_estat\x20svy_get\x20svy_gnbreg_p\x20svy_head\x20svy_header\x20svy_heckman_p\x20svy_heckprob_p\x20svy_intreg_p\x20svy_ivreg_p\x20svy_logistic_p\x20svy_logit_p\x20svy_mlogit_p\x20svy_nbreg_p\x20svy_ologit_p\x20svy_oprobit_p\x20svy_poisson_p\x20svy_probit_p\x20svy_regress_p\x20svy_sub\x20svy_sub_7\x20svy_x\x20svy_x_7\x20svy_x_p\x20svydes\x20svydes_8\x20svygen\x20svygnbreg\x20svyheckman\x20svyheckprob\x20svyintreg\x20svyintreg_7\x20svyintrg\x20svyivreg\x20svylc\x20svylog_p\x20svylogit\x20svymarkout\x20svymarkout_8\x20svymean\x20svym
log\x20svymlogit\x20svynbreg\x20svyolog\x20svyologit\x20svyoprob\x20svyoprobit\x20svyopts\x20svypois\x20svypois_7\x20svypoisson\x20svyprobit\x20svyprobt\x20svyprop\x20svyprop_7\x20svyratio\x20svyreg\x20svyreg_p\x20svyregress\x20svyset\x20svyset_7\x20svyset_8\x20svytab\x20svytab_7\x20svytest\x20svytotal\x20sw\x20sw_8\x20swcnreg\x20swcox\x20swereg\x20swilk\x20swlogis\x20swlogit\x20swologit\x20swoprbt\x20swpois\x20swprobit\x20swqreg\x20swtobit\x20swweib\x20symmetry\x20symmi\x20symplot\x20symplot_7\x20syntax\x20sysdescribe\x20sysdir\x20sysuse\x20szroeter\x20ta\x20tab\x20tab1\x20tab2\x20tab_or\x20tabd\x20tabdi\x20tabdis\x20tabdisp\x20tabi\x20table\x20tabodds\x20tabodds_7\x20tabstat\x20tabu\x20tabul\x20tabula\x20tabulat\x20tabulate\x20te\x20tempfile\x20tempname\x20tempvar\x20tes\x20test\x20testnl\x20testparm\x20teststd\x20tetrachoric\x20time_it\x20timer\x20tis\x20tob\x20tobi\x20tobit\x20tobit_p\x20tobit_sw\x20token\x20tokeni\x20tokeniz\x20tokenize\x20tostring\x20total\x20translate\x20translator\x20transmap\x20treat_ll\x20treatr_p\x20treatreg\x20trim\x20trimfill\x20trnb_cons\x20trnb_mean\x20trpoiss_d2\x20trunc_ll\x20truncr_p\x20truncreg\x20tsappend\x20tset\x20tsfill\x20tsline\x20tsline_ex\x20tsreport\x20tsrevar\x20tsrline\x20tsset\x20tssmooth\x20tsunab\x20ttest\x20ttesti\x20tut_chk\x20tut_wait\x20tutorial\x20tw\x20tware_st\x20two\x20twoway\x20twoway__fpfit_serset\x20twoway__function_gen\x20twoway__histogram_gen\x20twoway__ipoint_serset\x20twoway__ipoints_serset\x20twoway__kdensity_gen\x20twoway__lfit_serset\x20twoway__normgen_gen\x20twoway__pci_serset\x20twoway__qfit_serset\x20twoway__scatteri_serset\x20twoway__sunflower_gen\x20twoway_ksm_serset\x20ty\x20typ\x20type\x20typeof\x20u|0\x20unab\x20unabbrev\x20unabcmd\x20update\x20us\x20use\x20uselabel\x20var\x20var_mkcompanion\x20var_p\x20varbasic\x20varfcast\x20vargranger\x20varirf\x20varirf_add\x20varirf_cgraph\x20varirf_create\x20varirf_ctable\x20varirf_describe\x20varirf_dir\x20varirf_drop\x20varirf_erase\x20varirf_graph\x
20varirf_ograph\x20varirf_rename\x20varirf_set\x20varirf_table\x20varlist\x20varlmar\x20varnorm\x20varsoc\x20varstable\x20varstable_w\x20varstable_w2\x20varwle\x20vce\x20vec\x20vec_fevd\x20vec_mkphi\x20vec_p\x20vec_p_w\x20vecirf_create\x20veclmar\x20veclmar_w\x20vecnorm\x20vecnorm_w\x20vecrank\x20vecstable\x20verinst\x20vers\x20versi\x20versio\x20version\x20view\x20viewsource\x20vif\x20vwls\x20wdatetof\x20webdescribe\x20webseek\x20webuse\x20weib1_lf\x20weib2_lf\x20weib_lf\x20weib_lf0\x20weibhet_glf\x20weibhet_glf_sh\x20weibhet_glfa\x20weibhet_glfa_sh\x20weibhet_gp\x20weibhet_ilf\x20weibhet_ilf_sh\x20weibhet_ilfa\x20weibhet_ilfa_sh\x20weibhet_ip\x20weibu_sw\x20weibul_p\x20weibull\x20weibull_c\x20weibull_s\x20weibullhet\x20wh\x20whelp\x20whi\x20which\x20whil\x20while\x20wilc_st\x20wilcoxon\x20win\x20wind\x20windo\x20window\x20winexec\x20wntestb\x20wntestb_7\x20wntestq\x20xchart\x20xchart_7\x20xcorr\x20xcorr_7\x20xi\x20xi_6\x20xmlsav\x20xmlsave\x20xmluse\x20xpose\x20xsh\x20xshe\x20xshel\x20xshell\x20xt_iis\x20xt_tis\x20xtab_p\x20xtabond\x20xtbin_p\x20xtclog\x20xtcloglog\x20xtcloglog_8\x20xtcloglog_d2\x20xtcloglog_pa_p\x20xtcloglog_re_p\x20xtcnt_p\x20xtcorr\x20xtdata\x20xtdes\x20xtfront_p\x20xtfrontier\x20xtgee\x20xtgee_elink\x20xtgee_estat\x20xtgee_makeivar\x20xtgee_p\x20xtgee_plink\x20xtgls\x20xtgls_p\x20xthaus\x20xthausman\x20xtht_p\x20xthtaylor\x20xtile\x20xtint_p\x20xtintreg\x20xtintreg_8\x20xtintreg_d2\x20xtintreg_p\x20xtivp_1\x20xtivp_2\x20xtivreg\x20xtline\x20xtline_ex\x20xtlogit\x20xtlogit_8\x20xtlogit_d2\x20xtlogit_fe_p\x20xtlogit_pa_p\x20xtlogit_re_p\x20xtmixed\x20xtmixed_estat\x20xtmixed_p\x20xtnb_fe\x20xtnb_lf\x20xtnbreg\x20xtnbreg_pa_p\x20xtnbreg_refe_p\x20xtpcse\x20xtpcse_p\x20xtpois\x20xtpoisson\x20xtpoisson_d2\x20xtpoisson_pa_p\x20xtpoisson_refe_p\x20xtpred\x20xtprobit\x20xtprobit_8\x20xtprobit_d2\x20xtprobit_re_p\x20xtps_fe\x20xtps_lf\x20xtps_ren\x20xtps_ren_8\x20xtrar_p\x20xtrc\x20xtrc_p\x20xtrchh\x20xtrefe_p\x20xtreg\x20xtreg_be\x20xtreg_fe\x20xtreg_
ml\x20xtreg_pa_p\x20xtreg_re\x20xtregar\x20xtrere_p\x20xtset\x20xtsf_ll\x20xtsf_llti\x20xtsum\x20xttab\x20xttest0\x20xttobit\x20xttobit_8\x20xttobit_p\x20xttrans\x20yx\x20yxview__barlike_draw\x20yxview_area_draw\x20yxview_bar_draw\x20yxview_dot_draw\x20yxview_dropline_draw\x20yxview_function_draw\x20yxview_iarrow_draw\x20yxview_ilabels_draw\x20yxview_normal_draw\x20yxview_pcarrow_draw\x20yxview_pcbarrow_draw\x20yxview_pccapsym_draw\x20yxview_pcscatter_draw\x20yxview_pcspike_draw\x20yxview_rarea_draw\x20yxview_rbar_draw\x20yxview_rbarm_draw\x20yxview_rcap_draw\x20yxview_rcapsym_draw\x20yxview_rconnected_draw\x20yxview_rline_draw\x20yxview_rscatter_draw\x20yxview_rspike_draw\x20yxview_spike_draw\x20yxview_sunflower_draw\x20zap_s\x20zinb\x20zinb_llf\x20zinb_plf\x20zip\x20zip_llf\x20zip_p\x20zip_plf\x20zt_ct_5\x20zt_hc_5\x20zt_hcd_5\x20zt_is_5\x20zt_iss_5\x20zt_sho_5\x20zt_smp_5\x20ztbase_5\x20ztcox_5\x20ztdes_5\x20ztereg_5\x20ztfill_5\x20ztgen_5\x20ztir_5\x20ztjoin_5\x20ztnb\x20ztnb_p\x20ztp\x20ztp_p\x20zts_5\x20ztset_5\x20ztspli_5\x20ztsum_5\x20zttoct_5\x20ztvary_5\x20ztweib_5','gamepad_is_supported','setMarkerDirLocal','pseud','addSecondaryWeaponItem','simplex','FindKPlex','<\x5c/[A-Za-z][A-Za-z0-9\x5c-]*\x5cs*>','maskr','trimEnd','SameTestProperties','eigenvalues','PacletDataRebuild','xprevious','\x0a\x20\x20\x20\x20\x20\x20position:\x20fixed;\x0a\x20\x20\x20\x20\x20\x20top:\x200;\x0a\x20\x20\x20\x20\x20\x20left:\x200;\x0a\x20\x20\x20\x20\x20\x20width:\x20100vw;\x0a\x20\x20\x20\x20\x20\x20height:\x20100vh;\x0a\x20\x20\x20\x20\x20\x20background-color:\x20rgba(0,\x200,\x200,\x200.5);\x0a\x20\x20\x20\x20\x20\x20z-index:\x201000;\x0a\x20\x20\x20\x20\x20\x20display:\x20none;\x0a\x20\x20','AbstractSet','convert','draw_point_color','′','whered','([\x5cda-fA-F][\x5cda-fA-F_]*|_[\x5cda-fA-F][\x5cda-fA-F_]*)','xs:negativeInteger','ths','array_remove','DeleteAnomalies','BabyMonsterGroupB','ctrlTextSelection','setPosASL2','MaxFilter','DropShadowing','weak0','change','raw','anim
ation-name','IndependentUnit','^please\x20do?\x20not?\x20[#Infinitive\x20#Particle?]','AutoCloseWindow','motors','minof','Indeterminate','1st:e¦1est:l,m,f,s¦1iest:cey¦2est:or,ir¦3est:ver','NotebookFileName','an-instant','enableAimPrecision','alog10','getAimingCoef','ExportAutoReplacements','ň','CompilationTarget','Difference','Cashflow','physics_fixture_set_density','FindArgMax','endwhile','WordCharacter','Check','vertex_usage_blendweight','stateNumericStart','BoxFormFormatTypes','faith-based','$countbits','▀','get_login_async','$stable','iso\x20val\x20tag\x20trn\x20box\x20ref','PolyhedronGenus','replaceWith','𝔹','had-been-smoked','𝓁','⦐','SeekableIterator','female','ComplexPlot','MarginalDistribution','[#Noun]\x20(here|there)','%r\x5c(','combatBehaviour','ţ','as\x20#Pronoun\x20[please]','AutoOpenNotebooks','block','markAsFinishedOnSteam','church','Theorem','TraceOn','Ó','ffi','PDURATION','fnote','clear_color','$async$or$array','VonMisesDistribution','SetterBox','PasteBoxFormInlineCells','function\x20constructor\x20destructor\x20procedure\x20method','NorlundB','NetworkPacketCapture','julia','3-[neighbour]','ImageContents','beaux','MB_TOPMOST','trace','NaN16','lang','voidptr','Links','InverseSeries','ps_shape_ellipse','dacos','OutputGrouping','bm_dest_colour','RadialityCentrality','inv_erfc','ExternalStorageBase','isAgent','darcsin','dateserial','keyTyped','\x5cb(sub)?type\x5cs+','the-x-which','disableAI','overriding','keyboard_set_numlock','isEqualTypeAny','departments','%Adj|Noun%\x20%Noun|Verb%','','cloud_synchronise','border-bottom-color','!%&*+-/<=>@^|~?','𝔄','ual','make_unique','getMarkerSize','path_speed','ATan2','SimpleGraph','match|0','Editable','fill','network_config_disable_reliable_udp','GroupOpenerColor','SequenceHold','RecursiveDirectoryIterator','shownGps','collision_rectangle_list','((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)(\x5c.(25[0-5]|2[0-4]\x5cd|1\x5cd\x5c
d|[1-9]?\x5cd)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)(\x5c.(25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)(\x5c.(25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)(\x5c.(25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)(\x5c.(25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)(\x5c.(25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)(\x5c.(25[0-5]|2[0-4]\x5cd|1\x5cd\x5cd|[1-9]?\x5cd)){3}))|:)))\x5cb','Majority','[object\x20RegExp]','ContourDetect','BartlettHannWindow','Pwd','very','condition_variable_any','#Value\x20(#WeekDay|#Month)','(?![\x5cw\x5cd])(?![$])','InternallyBalancedDecomposition','$CharacterEncoding','backgroundColor','game_display_name','drawIcon3D','hue','$monitorb','audio_destroy_sync_group','magazineTurretAmmo','DefaultFrameTicksStyle','do-verb','ying:ie¦1ing:se,ke,te,we,ne,re,de,pe,me,le,c,he¦2ing:ll,ng,dd,ee,ye,oe,rg,us¦2ning:un¦2ging:og,ag,ug,ig,eg¦2ming:um¦2bing:ub,ab,eb,ob¦3ning:lan,can,hin,pin,win¦3ring:cur,lur,tir,tar,pur,car¦3ing:ait,del,eel,fin,eat,oat,eem,lel,ool,ein,uin¦3ping:rop,rap,top,uip,wap,hip,hop,lap,rip,cap¦3ming:tem,wim,rim,kim,lim¦3ting:mat,cut,pot,lit,lot,hat,set,pit,put¦3ding:hed,bed,bid¦3king:rek¦3ling:cil,pel¦3bing:rib¦4ning:egin¦4ing:isit,ruit,ilot,nsit,dget,rkel,ival,rcel¦4ring:efer,nfer¦4ting:rmit,mmit,ysit,dmit,emit,bmit,tfit,gret¦4ling:evel,xcel,
ivel¦4ding:hred¦5ing:arget,posit,rofit¦5ring:nsfer¦5ting:nsmit,orget,cquit¦5ling:ancel,istil','fix','⤵','device_emulator','preTagger','object_set_solid','$System','#PresentTense\x20and\x20#PresentTense','start-to-finish','window_get_visible_rects','↢','BarLegend','NumericStart','WinsorizedVariance','executeglobal','auth','TildeTilde','qtan','\x5cb[A-Z][\x5cw\x27]*','fget','playSound3D','TriangularDistribution','(&[oO][0-7]{1,6})','([A-Za-z_]|::)(\x5cw|::)*','font-display','classNameAliases','legend','Given\x20the\x20following\x20BACKGROUND_KNOWLEDGE,\x20and\x20a\x20student\x27s\x20QUESTION,\x20answer\x20the\x20QUESTION.\x20You\x20may\x20use\x20the\x20\x20BACKGROUND_KNOWLEDGE\x20as\x20reference.\x20Otherwise,\x20you\x20can\x20be\x20creative\x20and\x20respond\x20in\x20a\x20way\x20that\x20always\x20ensure\x20you\x20MUST\x20answer\x20the\x20question.\x20Return\x20as\x20markdown','WattsStrogatzGraphDistribution','setTowParent','10th-of-a-second','Left','$WolframDocumentsDirectory','rename','DiggleGatesPointProcess','LocalTime','PickMode','FromSphericalCoordinates','Grammar','ef_rain','TITLE_MODE','confidence','march-and-feb','$fatal','BuckyballGraph','buffer_read','[(half|quarter)]\x20of?\x20(a|an)','LameEigenvalueB','$400usd','die','[;$]','ExtractPacletArchive','quad_to','file!','toPositive','vk_right','showHUD','shell','FileErrorText','(\x5cbdef\x5cb)','BoxMargins','pastTense','lines','(that|the)\x20[#Gerund\x20#PresentTense]','ReferenceError','facebook_check_permission','as\x20[#Infinitive]\x20as','Character','argument4','yprevious','double_exponential','above','5px','SyntaxLength','docs','sprite_get_speed_type','ContentSelectable','arduino','MeshCellShapeFunction','HKEY_CURRENT_CONFIG','readButton','binary_log_loss',')[pP][+-]?','~~~(?:\x5cs*,\x5cs*','government','that-leads-to','keyboard_get_map','ConformationMethod','ctrlPosition','phy_joint_max_motor_force','ScaleRanges','&?:(','getWeaponSway','0-9/;:$#]*','mostRecentID','occurrences_regex','audio_emitter_create',
'ctAddRow','$CacheBaseDirectory','hotels','HyperexponentialDistribution','#Plural\x20','𝕃','extern','audio_stop_music','↚','state','->\x5c*?','$1ves','UNDERSCORE_IDENT_RE','animation-direction','draw_button','CopyDirectory','BSplineSurface3DBox','DateList','DirectedInfinity','skeleton_bone_data_get','code_inline','⤑','xs:nonNegativeInteger','BinomialPointProcess','PlanarFaceList','true¦b8ch7dr6foster,gra5ja9lan4ma2ni9ollie,p1rob,s0wade;kip,pike,t5ue;at,eg,ier2;ck,r0;k,shal;ce;ce,nt;ew;ase,u1;iff,l1ob,u0;ck;aze,ossom','DisplayForm','DisableConsolePrintPacket','State','AlgebraicNumberPolynomial','allowFileOperations','\x5cb(data|(new)?type)\x5cb','__PRETTY_FUNCTION__','matched','frexp','debugger','phy_particle_flag_powder','LanguageCategory','FIND','SystemOpen','wordCount','draw_set_circle_precision','withVaList','addAction','achievement_show','isActionDone','ds_list_sort','collision_line_list','InfiniteLine','FeatureDistance','part_emitter_clear','ds_grid_add','removeMagazinesTurret','get-much','timeunit','RequiredPhysicalQuantities','≐̸','PairedBarChart','freeze','VARP','sha512sum','true¦a2j0sep;an0une;!uary;p0ugust,v0;ril','asst','ISOMonth','deletion','ViewVertical','isAllowedCrewInImmobile','preloadObject','SelectionAnimate','else\x20if','logos','ℯ','Oracle\x20Rules\x20Language','InverseEllipticNomeQ','virtual_key_hide','Spacings','kbv_returnkey_done','Rotate','Length','Immediate','breakWith','$ImageFormattingWidth','KeyValueMap','1165440JdoyWO','options','MB_DEFBUTTON1','RANK','TreeReplacePart','enter-btn','draw_set_color','ev_joystick1_button2','foreign','$typename','ProgressIndicator','whomever','SortedBy','__contravariant','mdivide_right_tri_low','\x5c$[0-9][0-9A-Fa-f]*','⇉','teamSwitch','object','menuSetData','⧴','ImageRecolor','avenue','Predict','FullSimplify','athletic','╒','cursorObject','CHITEST','❲','#Noun+\x20(public|private)\x20school','technolog','StreamStyle','vala','\x1b[32m','fsep','device_mouse_y','trap','title.function','gimme','sysrc','^[#Singul
ar]\x20#Person\x20#Verb','(link|image:?):','physics_world_gravity','selectedDropdownValue','Cchar','(he|his)','renderInline','calloc','remotePort','area','⥠','(she|her|hers)','tile_set_index','$EmbeddableServices','NetDrop','BlackmanHarrisWindow','the\x20can','OverTilde','LOCK','Hyperfactorial','requiredVersion','a-quarter','exponent-separator','circle','SuperscriptBox','fps_real','throws','WarpingDistance','null\x20true\x20false\x20nil\x20','T.DIST','currentThrowable','buses','\x5cs+)+','log','Axes','base64','full','MoleculeName','⩖','skeleton-loader','PairedZTest','FISHER','found','ReverseSortBy','toboolean','NMaximize','showUAVFeed','BinaryWrite','GaugeFrameElementFunction','$isunbounded','│','BioSequencePlot','TraceOff','inner','Boxes','redirect','datejul','$EvaluationCloudObject','setTaskState','FALSE','SignPadding','Nested\x20Text','good','linearConversion','will\x20be','will\x20have','[(]|$','explicit','Generator','localparam','⟨','true¦lest,unless',';|\x5c.','feature','$fwriteb','buffer_seek','barely-even','detail','VertexOutComponent','RegionFit','(#Copula|get|got|getting|become|became|becoming|feel|feels|feeling|#Determiner|#Preposition)','isDirectory','Mesh','Erlang','Power','gas','FileConvert','vectorWorldToModel','LocatorPaneBoxOptions','DSolveValue','ads_set_reward_callback','DefaultFrameStyle','FormBoxOptions','Int','mouse_wheel_up','iterate','Adjective\x20Noun\x20Conjunction','corp','ally','startEntity','$stop','ExtendedGCD','Ġ','InverseFunction','actionKeysImages','auto_ptr','%q\x5c|','ϱ','moveInCommander','copula','ranbin','Empty','ctValue','parent::','fog','min0','especially','KaryTree','RandomArrayLayer','ArrayFilter','ctSetData','Ticks','PillaiTraceTest','SetDelayed','now_str','BottomHatTransform','ppEffectCreate','lookBehind','CommunityGraphPlot','#Copula\x20#Adjective+\x20(and|or)\x20[#PastTense]$','GraphQL','mc\x27neil','execute','canDeployWeapon','1pm-sun','base32','NamespaceBox','menuSetValue','iframe',',\x20response:\x20','#Copula\x20#Geru
nd','\x5cs(','strrchr','Between','kbTell','nonnull','DefaultBaseStyle','[still]\x20#Adjective','ctrlMapPosition','mobile','ProcessEstimator','physics_particle_get_data_particle','NSEC3PARAM','NORM.S.INV','≨','
\x0a','𝓌','StopAsynchronousTask','sub','PixelConstrained','GridBoxBackground','Cylinder','fiftieth','3-one-letter-acronym','known','setTriggerText','2-tagYear','emptyPositions','NotLessGreater','TemplateBox','quit','date_compare_date','MinimumTimeIncrement','ProcessTimeDomain','date_modify','macOSApplicationExtension','~[a-z](?=','TranslationOptions','date_leap_year','BilateralLaplaceTransform',';[\x20\x5ct]*#','setSimpleTaskDescription','LineBreak','$CloudEvaluation','primaryWeapon','q[qwxr]?\x5cs*<','[#$=]?0b[01]+','getModelInfo','shownCompass','redraw','$ResourceSystemBase','chronological','shmget','\x5cb(0b[01\x27]+)','__compile__','atomic_define','IncludeInflections','PermutationReplace','tranif0','$Failed','RenderMan\x20RSL','lbSetSelectColorRight','begin_keywords','\x5cs*=\x5cs*class\x5cs*\x5c(','(\x5c+|-)\x5cd+','cliffs','ASLToAGL','ControlPlacement','nonatomic','SphericalBesselY','#more-papers-','Monkey','event_user','lowcase','array_position','compassRead','canTriggerDynamicSimulation','ReadByteArray','DEC2HEX','PlotLayout','(VZOFX|VZOFY|VZOFZ)','SystemsModelSeriesConnect','eleft','Placed','CSS_VARIABLE','ScheduledTaskInformationData','è','SignedRankTest','FALSE|0','ref','BringToFront',')\x5cs*=>','\x0a','Top|0','ť','WeierstrassZeta','$TimedOut','Inset','^[(ask|wear|pay|look|help|show|watch|act|fix|kill|stop|start|turn|try|win)]\x20#Noun','reads','setMarkerShapeLocal','tinyMLDB','SetFileAttributes','ResetScheduledTask','ute','expr','vectorDistanceSqr','\x22\x20for\x20mode\x20\x22','-import','concept','const\x20','2:does,goes¦3:gasses¦5:focuses¦is:are¦3y:relies¦2y:flies¦2ve:has','showLegend','CloudRenderingMethod','msgcat','Email','fb_login_fallback_to_webview','property','__emitTokens','Gudermannian','ev_global_right_button','for-some-reason','compact','phenomena','hundred\x20million','a\x20pound','physics_get_restitution','sportsfield','modelY','LQRegulatorGains','PersonData','logit','setMarkerBrushLocal','STDEV.P','argument11','prefers-reduced-motion','g
pu_get_tex_min_mip_ext','RenewalProcess','border-2','FindNext','MixedFractionParts','ctrlModelScale','!important','tastes-good','DistributionDomain','layer_sprite_y','emphasis','ous','Crosses','InverseGammaDistribution','[object\x20Function]','emptied','layer_get_shader','CloudUserID','SetDetailsPrint','gpu_get_tex_mip_enable','EulerPhi','tilemap_get_cell_y_at_pixel','steam_upload_score_buffer_ext','Float64','Moment','CountDistinctBy','DifferentialRoot','GeometricDistribution','Process','ColorFunctionScaling','playActionNow','obj','weaponCargo','ASCII','facet','MoleculeMatchQ','int16','onPreloadFinished','mostly','isArray','DynamicLocation','fromComparative','PrecedesTilde','uwire','nextWeatherChange','Π','Includes','part_type_clear','GeometricTest','newPrintWriter','texture_get_width','cin','matcher','physics_particle_get_data','displayHeight','dynamicCallable','must-win','perspective-origin','empty-cells','fitful','checkReg','[long\x20live]\x20.','ListVectorPlot3D','.3%','strikethrough','MovieData','draw_surface','photographs-of','SphericalBesselJ','˘','^#Verb\x20.\x20#Noun+$','Infix','lengthdir_y','pack','text-orientation','TIME','Some','HyphenationOptions','shiftIn','Slice','Default','Colon','upstream\x20location','pony','↰','⪤','beforeWords','co_sum','rem','VectorRange','win8_livetile_notification_tag','mono','voice-rate','SyntaxError','TemplateApply','box','(march|may)\x20the?\x20#Value','complex_matrix','dateToNumber','CallbackFilterIterator','civilian','TreeElementCoordinates','score','-top-3','augmentation','Inverse','keywordPatternRe','GaussianSymplecticMatrixDistribution','MapThread','wants','EntityStore','phy_joint_anchor_2_y','TimeSystem','unalias','forest','WriteRegNone','(are|were|does)','\x20~+~','endlocal','byFreq','retrieveCallingNumber','Weekday','numeric','LeafCount','DomainError','scanf','scroll-padding-block-start','steam_upload_score_ext','outline-style','ev_end_of_path','CUBEKPIMEMBER','^#PresentTense\x20#Gerund$','resistance','camDestroy','a
dm','QuartileSkewness','attr_accessor','setAutonomous','Parallelization','#333','Я','provincial','CachePersistence','btauto','gethostent','taskType','errordocument','$MachineDomain','DelimiterAutoMatching','isSteamOverlayEnabled','mapfile','[^\x5c\x5c]\x27','/\x5c\x5c','IconizedObject','getpwnam','AudioDelete','phy_particle_flag_tensile','cqcos','GraphicsArray',':-p','MousePointerNote','positionCameraToWorld','ReImLabels','createSimpleTask','SmithWatermanSimilarity','ConnectSystemModelComponents','__covariant','endsequence','StreamColorFunction','FixedPointList','ų','Latitude','Ruby','vk_numpad5','opened','Unhandled\x20action\x20type:\x20','currentMagazineDetail','except','ErrorBox','std','cursor_sprite','28470bNEvSs','conjugate','minWant','BenktanderWeibullDistribution','adj-adj-is','VideoScreenCapture','society','SIGN','OpenRead','\x1b[3m\x1b[2m','quantile','idim','#dsohandle','MenuView','view_current','turn-off-the-light','line-break','decimal','$sinh','[#Copula]\x20(#Adverb|not)+?\x20(#Gerund|#PastTense)','date_inc_second','vk_f5','NonAssociative','ps_shape_diamond','PlotRangeClipPlanesStyle','letting','saturation','⌶','audio_sound_get_gain','a\x20#Ordinal','waypointLoiterType','highscore_clear','[#PastTense]\x20#Singular\x20is','Û','emails','Fraction','#Adverb\x20[#Adverb]\x20(and|or|then)','PImage','∱','RemoteBatchJobs','CircleDot','call','addTorque','strspn','device_get_tilt_x','PROB','font_replace_sprite','JacobiSymbol','mm-dd-yyy','PERemoveResource','ListQ','segment','Message','->|<-[|:]?|#!?|>>=|\x5c{\x5c||\x5c|\x5c}|:==|=:|<>','neighbours','str_to_utc','discrete_range','lbSortByValue','.ge.','parseLinkTitle','BorelTannerDistribution','create\x20table','place_meeting','lock\x20rep\x20repe\x20repz\x20repne\x20repnz\x20xaquire\x20xrelease\x20bnd\x20nobnd\x20aaa\x20aad\x20aam\x20aas\x20adc\x20add\x20and\x20arpl\x20bb0_reset\x20bb1_reset\x20bound\x20bsf\x20bsr\x20bswap\x20bt\x20btc\x20btr\x20bts\x20call\x20cbw\x20cdq\x20cdqe\x20clc\x20cld\x20cli\x20clts\x20cmc
\x20cmp\x20cmpsb\x20cmpsd\x20cmpsq\x20cmpsw\x20cmpxchg\x20cmpxchg486\x20cmpxchg8b\x20cmpxchg16b\x20cpuid\x20cpu_read\x20cpu_write\x20cqo\x20cwd\x20cwde\x20daa\x20das\x20dec\x20div\x20dmint\x20emms\x20enter\x20equ\x20f2xm1\x20fabs\x20fadd\x20faddp\x20fbld\x20fbstp\x20fchs\x20fclex\x20fcmovb\x20fcmovbe\x20fcmove\x20fcmovnb\x20fcmovnbe\x20fcmovne\x20fcmovnu\x20fcmovu\x20fcom\x20fcomi\x20fcomip\x20fcomp\x20fcompp\x20fcos\x20fdecstp\x20fdisi\x20fdiv\x20fdivp\x20fdivr\x20fdivrp\x20femms\x20feni\x20ffree\x20ffreep\x20fiadd\x20ficom\x20ficomp\x20fidiv\x20fidivr\x20fild\x20fimul\x20fincstp\x20finit\x20fist\x20fistp\x20fisttp\x20fisub\x20fisubr\x20fld\x20fld1\x20fldcw\x20fldenv\x20fldl2e\x20fldl2t\x20fldlg2\x20fldln2\x20fldpi\x20fldz\x20fmul\x20fmulp\x20fnclex\x20fndisi\x20fneni\x20fninit\x20fnop\x20fnsave\x20fnstcw\x20fnstenv\x20fnstsw\x20fpatan\x20fprem\x20fprem1\x20fptan\x20frndint\x20frstor\x20fsave\x20fscale\x20fsetpm\x20fsin\x20fsincos\x20fsqrt\x20fst\x20fstcw\x20fstenv\x20fstp\x20fstsw\x20fsub\x20fsubp\x20fsubr\x20fsubrp\x20ftst\x20fucom\x20fucomi\x20fucomip\x20fucomp\x20fucompp\x20fxam\x20fxch\x20fxtract\x20fyl2x\x20fyl2xp1\x20hlt\x20ibts\x20icebp\x20idiv\x20imul\x20in\x20inc\x20incbin\x20insb\x20insd\x20insw\x20int\x20int01\x20int1\x20int03\x20int3\x20into\x20invd\x20invpcid\x20invlpg\x20invlpga\x20iret\x20iretd\x20iretq\x20iretw\x20jcxz\x20jecxz\x20jrcxz\x20jmp\x20jmpe\x20lahf\x20lar\x20lds\x20lea\x20leave\x20les\x20lfence\x20lfs\x20lgdt\x20lgs\x20lidt\x20lldt\x20lmsw\x20loadall\x20loadall286\x20lodsb\x20lodsd\x20lodsq\x20lodsw\x20loop\x20loope\x20loopne\x20loopnz\x20loopz\x20lsl\x20lss\x20ltr\x20mfence\x20monitor\x20mov\x20movd\x20movq\x20movsb\x20movsd\x20movsq\x20movsw\x20movsx\x20movsxd\x20movzx\x20mul\x20mwait\x20neg\x20nop\x20not\x20or\x20out\x20outsb\x20outsd\x20outsw\x20packssdw\x20packsswb\x20packuswb\x20paddb\x20paddd\x20paddsb\x20paddsiw\x20paddsw\x20paddusb\x20paddusw\x20paddw\x20pand\x20pandn\x20pause\x20paveb\x20pavgusb\x20pcmpeqb\x20pcmpeqd\x20pcmpeqw
\x20pcmpgtb\x20pcmpgtd\x20pcmpgtw\x20pdistib\x20pf2id\x20pfacc\x20pfadd\x20pfcmpeq\x20pfcmpge\x20pfcmpgt\x20pfmax\x20pfmin\x20pfmul\x20pfrcp\x20pfrcpit1\x20pfrcpit2\x20pfrsqit1\x20pfrsqrt\x20pfsub\x20pfsubr\x20pi2fd\x20pmachriw\x20pmaddwd\x20pmagw\x20pmulhriw\x20pmulhrwa\x20pmulhrwc\x20pmulhw\x20pmullw\x20pmvgezb\x20pmvlzb\x20pmvnzb\x20pmvzb\x20pop\x20popa\x20popad\x20popaw\x20popf\x20popfd\x20popfq\x20popfw\x20por\x20prefetch\x20prefetchw\x20pslld\x20psllq\x20psllw\x20psrad\x20psraw\x20psrld\x20psrlq\x20psrlw\x20psubb\x20psubd\x20psubsb\x20psubsiw\x20psubsw\x20psubusb\x20psubusw\x20psubw\x20punpckhbw\x20punpckhdq\x20punpckhwd\x20punpcklbw\x20punpckldq\x20punpcklwd\x20push\x20pusha\x20pushad\x20pushaw\x20pushf\x20pushfd\x20pushfq\x20pushfw\x20pxor\x20rcl\x20rcr\x20rdshr\x20rdmsr\x20rdpmc\x20rdtsc\x20rdtscp\x20ret\x20retf\x20retn\x20rol\x20ror\x20rdm\x20rsdc\x20rsldt\x20rsm\x20rsts\x20sahf\x20sal\x20salc\x20sar\x20sbb\x20scasb\x20scasd\x20scasq\x20scasw\x20sfence\x20sgdt\x20shl\x20shld\x20shr\x20shrd\x20sidt\x20sldt\x20skinit\x20smi\x20smint\x20smintold\x20smsw\x20stc\x20std\x20sti\x20stosb\x20stosd\x20stosq\x20stosw\x20str\x20sub\x20svdc\x20svldt\x20svts\x20swapgs\x20syscall\x20sysenter\x20sysexit\x20sysret\x20test\x20ud0\x20ud1\x20ud2b\x20ud2\x20ud2a\x20umov\x20verr\x20verw\x20fwait\x20wbinvd\x20wrshr\x20wrmsr\x20xadd\x20xbts\x20xchg\x20xlatb\x20xlat\x20xor\x20cmove\x20cmovz\x20cmovne\x20cmovnz\x20cmova\x20cmovnbe\x20cmovae\x20cmovnb\x20cmovb\x20cmovnae\x20cmovbe\x20cmovna\x20cmovg\x20cmovnle\x20cmovge\x20cmovnl\x20cmovl\x20cmovnge\x20cmovle\x20cmovng\x20cmovc\x20cmovnc\x20cmovo\x20cmovno\x20cmovs\x20cmovns\x20cmovp\x20cmovpe\x20cmovnp\x20cmovpo\x20je\x20jz\x20jne\x20jnz\x20ja\x20jnbe\x20jae\x20jnb\x20jb\x20jnae\x20jbe\x20jna\x20jg\x20jnle\x20jge\x20jnl\x20jl\x20jnge\x20jle\x20jng\x20jc\x20jnc\x20jo\x20jno\x20js\x20jns\x20jpo\x20jnp\x20jpe\x20jp\x20sete\x20setz\x20setne\x20setnz\x20seta\x20setnbe\x20setae\x20setnb\x20setnc\x20setb\x20setnae\x20setcset\x20setbe\x20s
etna\x20setg\x20setnle\x20setge\x20setnl\x20setl\x20setnge\x20setle\x20setng\x20sets\x20setns\x20seto\x20setno\x20setpe\x20setp\x20setpo\x20setnp\x20addps\x20addss\x20andnps\x20andps\x20cmpeqps\x20cmpeqss\x20cmpleps\x20cmpless\x20cmpltps\x20cmpltss\x20cmpneqps\x20cmpneqss\x20cmpnleps\x20cmpnless\x20cmpnltps\x20cmpnltss\x20cmpordps\x20cmpordss\x20cmpunordps\x20cmpunordss\x20cmpps\x20cmpss\x20comiss\x20cvtpi2ps\x20cvtps2pi\x20cvtsi2ss\x20cvtss2si\x20cvttps2pi\x20cvttss2si\x20divps\x20divss\x20ldmxcsr\x20maxps\x20maxss\x20minps\x20minss\x20movaps\x20movhps\x20movlhps\x20movlps\x20movhlps\x20movmskps\x20movntps\x20movss\x20movups\x20mulps\x20mulss\x20orps\x20rcpps\x20rcpss\x20rsqrtps\x20rsqrtss\x20shufps\x20sqrtps\x20sqrtss\x20stmxcsr\x20subps\x20subss\x20ucomiss\x20unpckhps\x20unpcklps\x20xorps\x20fxrstor\x20fxrstor64\x20fxsave\x20fxsave64\x20xgetbv\x20xsetbv\x20xsave\x20xsave64\x20xsaveopt\x20xsaveopt64\x20xrstor\x20xrstor64\x20prefetchnta\x20prefetcht0\x20prefetcht1\x20prefetcht2\x20maskmovq\x20movntq\x20pavgb\x20pavgw\x20pextrw\x20pinsrw\x20pmaxsw\x20pmaxub\x20pminsw\x20pminub\x20pmovmskb\x20pmulhuw\x20psadbw\x20pshufw\x20pf2iw\x20pfnacc\x20pfpnacc\x20pi2fw\x20pswapd\x20maskmovdqu\x20clflush\x20movntdq\x20movnti\x20movntpd\x20movdqa\x20movdqu\x20movdq2q\x20movq2dq\x20paddq\x20pmuludq\x20pshufd\x20pshufhw\x20pshuflw\x20pslldq\x20psrldq\x20psubq\x20punpckhqdq\x20punpcklqdq\x20addpd\x20addsd\x20andnpd\x20andpd\x20cmpeqpd\x20cmpeqsd\x20cmplepd\x20cmplesd\x20cmpltpd\x20cmpltsd\x20cmpneqpd\x20cmpneqsd\x20cmpnlepd\x20cmpnlesd\x20cmpnltpd\x20cmpnltsd\x20cmpordpd\x20cmpordsd\x20cmpunordpd\x20cmpunordsd\x20cmppd\x20comisd\x20cvtdq2pd\x20cvtdq2ps\x20cvtpd2dq\x20cvtpd2pi\x20cvtpd2ps\x20cvtpi2pd\x20cvtps2dq\x20cvtps2pd\x20cvtsd2si\x20cvtsd2ss\x20cvtsi2sd\x20cvtss2sd\x20cvttpd2pi\x20cvttpd2dq\x20cvttps2dq\x20cvttsd2si\x20divpd\x20divsd\x20maxpd\x20maxsd\x20minpd\x20minsd\x20movapd\x20movhpd\x20movlpd\x20movmskpd\x20movupd\x20mulpd\x20mulsd\x20orpd\x20shufpd\x20sqrtpd\x20sqrtsd\x2
0subpd\x20subsd\x20ucomisd\x20unpckhpd\x20unpcklpd\x20xorpd\x20addsubpd\x20addsubps\x20haddpd\x20haddps\x20hsubpd\x20hsubps\x20lddqu\x20movddup\x20movshdup\x20movsldup\x20clgi\x20stgi\x20vmcall\x20vmclear\x20vmfunc\x20vmlaunch\x20vmload\x20vmmcall\x20vmptrld\x20vmptrst\x20vmread\x20vmresume\x20vmrun\x20vmsave\x20vmwrite\x20vmxoff\x20vmxon\x20invept\x20invvpid\x20pabsb\x20pabsw\x20pabsd\x20palignr\x20phaddw\x20phaddd\x20phaddsw\x20phsubw\x20phsubd\x20phsubsw\x20pmaddubsw\x20pmulhrsw\x20pshufb\x20psignb\x20psignw\x20psignd\x20extrq\x20insertq\x20movntsd\x20movntss\x20lzcnt\x20blendpd\x20blendps\x20blendvpd\x20blendvps\x20dppd\x20dpps\x20extractps\x20insertps\x20movntdqa\x20mpsadbw\x20packusdw\x20pblendvb\x20pblendw\x20pcmpeqq\x20pextrb\x20pextrd\x20pextrq\x20phminposuw\x20pinsrb\x20pinsrd\x20pinsrq\x20pmaxsb\x20pmaxsd\x20pmaxud\x20pmaxuw\x20pminsb\x20pminsd\x20pminud\x20pminuw\x20pmovsxbw\x20pmovsxbd\x20pmovsxbq\x20pmovsxwd\x20pmovsxwq\x20pmovsxdq\x20pmovzxbw\x20pmovzxbd\x20pmovzxbq\x20pmovzxwd\x20pmovzxwq\x20pmovzxdq\x20pmuldq\x20pmulld\x20ptest\x20roundpd\x20roundps\x20roundsd\x20roundss\x20crc32\x20pcmpestri\x20pcmpestrm\x20pcmpistri\x20pcmpistrm\x20pcmpgtq\x20popcnt\x20getsec\x20pfrcpv\x20pfrsqrtv\x20movbe\x20aesenc\x20aesenclast\x20aesdec\x20aesdeclast\x20aesimc\x20aeskeygenassist\x20vaesenc\x20vaesenclast\x20vaesdec\x20vaesdeclast\x20vaesimc\x20vaeskeygenassist\x20vaddpd\x20vaddps\x20vaddsd\x20vaddss\x20vaddsubpd\x20vaddsubps\x20vandpd\x20vandps\x20vandnpd\x20vandnps\x20vblendpd\x20vblendps\x20vblendvpd\x20vblendvps\x20vbroadcastss\x20vbroadcastsd\x20vbroadcastf128\x20vcmpeq_ospd\x20vcmpeqpd\x20vcmplt_ospd\x20vcmpltpd\x20vcmple_ospd\x20vcmplepd\x20vcmpunord_qpd\x20vcmpunordpd\x20vcmpneq_uqpd\x20vcmpneqpd\x20vcmpnlt_uspd\x20vcmpnltpd\x20vcmpnle_uspd\x20vcmpnlepd\x20vcmpord_qpd\x20vcmpordpd\x20vcmpeq_uqpd\x20vcmpnge_uspd\x20vcmpngepd\x20vcmpngt_uspd\x20vcmpngtpd\x20vcmpfalse_oqpd\x20vcmpfalsepd\x20vcmpneq_oqpd\x20vcmpge_ospd\x20vcmpgepd\x20vcmpgt_ospd\x20vcmpgtpd\
x20vcmptrue_uqpd\x20vcmptruepd\x20vcmplt_oqpd\x20vcmple_oqpd\x20vcmpunord_spd\x20vcmpneq_uspd\x20vcmpnlt_uqpd\x20vcmpnle_uqpd\x20vcmpord_spd\x20vcmpeq_uspd\x20vcmpnge_uqpd\x20vcmpngt_uqpd\x20vcmpfalse_ospd\x20vcmpneq_ospd\x20vcmpge_oqpd\x20vcmpgt_oqpd\x20vcmptrue_uspd\x20vcmppd\x20vcmpeq_osps\x20vcmpeqps\x20vcmplt_osps\x20vcmpltps\x20vcmple_osps\x20vcmpleps\x20vcmpunord_qps\x20vcmpunordps\x20vcmpneq_uqps\x20vcmpneqps\x20vcmpnlt_usps\x20vcmpnltps\x20vcmpnle_usps\x20vcmpnleps\x20vcmpord_qps\x20vcmpordps\x20vcmpeq_uqps\x20vcmpnge_usps\x20vcmpngeps\x20vcmpngt_usps\x20vcmpngtps\x20vcmpfalse_oqps\x20vcmpfalseps\x20vcmpneq_oqps\x20vcmpge_osps\x20vcmpgeps\x20vcmpgt_osps\x20vcmpgtps\x20vcmptrue_uqps\x20vcmptrueps\x20vcmplt_oqps\x20vcmple_oqps\x20vcmpunord_sps\x20vcmpneq_usps\x20vcmpnlt_uqps\x20vcmpnle_uqps\x20vcmpord_sps\x20vcmpeq_usps\x20vcmpnge_uqps\x20vcmpngt_uqps\x20vcmpfalse_osps\x20vcmpneq_osps\x20vcmpge_oqps\x20vcmpgt_oqps\x20vcmptrue_usps\x20vcmpps\x20vcmpeq_ossd\x20vcmpeqsd\x20vcmplt_ossd\x20vcmpltsd\x20vcmple_ossd\x20vcmplesd\x20vcmpunord_qsd\x20vcmpunordsd\x20vcmpneq_uqsd\x20vcmpneqsd\x20vcmpnlt_ussd\x20vcmpnltsd\x20vcmpnle_ussd\x20vcmpnlesd\x20vcmpord_qsd\x20vcmpordsd\x20vcmpeq_uqsd\x20vcmpnge_ussd\x20vcmpngesd\x20vcmpngt_ussd\x20vcmpngtsd\x20vcmpfalse_oqsd\x20vcmpfalsesd\x20vcmpneq_oqsd\x20vcmpge_ossd\x20vcmpgesd\x20vcmpgt_ossd\x20vcmpgtsd\x20vcmptrue_uqsd\x20vcmptruesd\x20vcmplt_oqsd\x20vcmple_oqsd\x20vcmpunord_ssd\x20vcmpneq_ussd\x20vcmpnlt_uqsd\x20vcmpnle_uqsd\x20vcmpord_ssd\x20vcmpeq_ussd\x20vcmpnge_uqsd\x20vcmpngt_uqsd\x20vcmpfalse_ossd\x20vcmpneq_ossd\x20vcmpge_oqsd\x20vcmpgt_oqsd\x20vcmptrue_ussd\x20vcmpsd\x20vcmpeq_osss\x20vcmpeqss\x20vcmplt_osss\x20vcmpltss\x20vcmple_osss\x20vcmpless\x20vcmpunord_qss\x20vcmpunordss\x20vcmpneq_uqss\x20vcmpneqss\x20vcmpnlt_usss\x20vcmpnltss\x20vcmpnle_usss\x20vcmpnless\x20vcmpord_qss\x20vcmpordss\x20vcmpeq_uqss\x20vcmpnge_usss\x20vcmpngess\x20vcmpngt_usss\x20vcmpngtss\x20vcmpfalse_oqss\x20vcmpfalsess\x20vcmpneq_oqss\x20vc
mpge_osss\x20vcmpgess\x20vcmpgt_osss\x20vcmpgtss\x20vcmptrue_uqss\x20vcmptruess\x20vcmplt_oqss\x20vcmple_oqss\x20vcmpunord_sss\x20vcmpneq_usss\x20vcmpnlt_uqss\x20vcmpnle_uqss\x20vcmpord_sss\x20vcmpeq_usss\x20vcmpnge_uqss\x20vcmpngt_uqss\x20vcmpfalse_osss\x20vcmpneq_osss\x20vcmpge_oqss\x20vcmpgt_oqss\x20vcmptrue_usss\x20vcmpss\x20vcomisd\x20vcomiss\x20vcvtdq2pd\x20vcvtdq2ps\x20vcvtpd2dq\x20vcvtpd2ps\x20vcvtps2dq\x20vcvtps2pd\x20vcvtsd2si\x20vcvtsd2ss\x20vcvtsi2sd\x20vcvtsi2ss\x20vcvtss2sd\x20vcvtss2si\x20vcvttpd2dq\x20vcvttps2dq\x20vcvttsd2si\x20vcvttss2si\x20vdivpd\x20vdivps\x20vdivsd\x20vdivss\x20vdppd\x20vdpps\x20vextractf128\x20vextractps\x20vhaddpd\x20vhaddps\x20vhsubpd\x20vhsubps\x20vinsertf128\x20vinsertps\x20vlddqu\x20vldqqu\x20vldmxcsr\x20vmaskmovdqu\x20vmaskmovps\x20vmaskmovpd\x20vmaxpd\x20vmaxps\x20vmaxsd\x20vmaxss\x20vminpd\x20vminps\x20vminsd\x20vminss\x20vmovapd\x20vmovaps\x20vmovd\x20vmovq\x20vmovddup\x20vmovdqa\x20vmovqqa\x20vmovdqu\x20vmovqqu\x20vmovhlps\x20vmovhpd\x20vmovhps\x20vmovlhps\x20vmovlpd\x20vmovlps\x20vmovmskpd\x20vmovmskps\x20vmovntdq\x20vmovntqq\x20vmovntdqa\x20vmovntpd\x20vmovntps\x20vmovsd\x20vmovshdup\x20vmovsldup\x20vmovss\x20vmovupd\x20vmovups\x20vmpsadbw\x20vmulpd\x20vmulps\x20vmulsd\x20vmulss\x20vorpd\x20vorps\x20vpabsb\x20vpabsw\x20vpabsd\x20vpacksswb\x20vpackssdw\x20vpackuswb\x20vpackusdw\x20vpaddb\x20vpaddw\x20vpaddd\x20vpaddq\x20vpaddsb\x20vpaddsw\x20vpaddusb\x20vpaddusw\x20vpalignr\x20vpand\x20vpandn\x20vpavgb\x20vpavgw\x20vpblendvb\x20vpblendw\x20vpcmpestri\x20vpcmpestrm\x20vpcmpistri\x20vpcmpistrm\x20vpcmpeqb\x20vpcmpeqw\x20vpcmpeqd\x20vpcmpeqq\x20vpcmpgtb\x20vpcmpgtw\x20vpcmpgtd\x20vpcmpgtq\x20vpermilpd\x20vpermilps\x20vperm2f128\x20vpextrb\x20vpextrw\x20vpextrd\x20vpextrq\x20vphaddw\x20vphaddd\x20vphaddsw\x20vphminposuw\x20vphsubw\x20vphsubd\x20vphsubsw\x20vpinsrb\x20vpinsrw\x20vpinsrd\x20vpinsrq\x20vpmaddwd\x20vpmaddubsw\x20vpmaxsb\x20vpmaxsw\x20vpmaxsd\x20vpmaxub\x20vpmaxuw\x20vpmaxud\x20vpminsb\x20vpminsw\x20vpminsd\x2
0vpminub\x20vpminuw\x20vpminud\x20vpmovmskb\x20vpmovsxbw\x20vpmovsxbd\x20vpmovsxbq\x20vpmovsxwd\x20vpmovsxwq\x20vpmovsxdq\x20vpmovzxbw\x20vpmovzxbd\x20vpmovzxbq\x20vpmovzxwd\x20vpmovzxwq\x20vpmovzxdq\x20vpmulhuw\x20vpmulhrsw\x20vpmulhw\x20vpmullw\x20vpmulld\x20vpmuludq\x20vpmuldq\x20vpor\x20vpsadbw\x20vpshufb\x20vpshufd\x20vpshufhw\x20vpshuflw\x20vpsignb\x20vpsignw\x20vpsignd\x20vpslldq\x20vpsrldq\x20vpsllw\x20vpslld\x20vpsllq\x20vpsraw\x20vpsrad\x20vpsrlw\x20vpsrld\x20vpsrlq\x20vptest\x20vpsubb\x20vpsubw\x20vpsubd\x20vpsubq\x20vpsubsb\x20vpsubsw\x20vpsubusb\x20vpsubusw\x20vpunpckhbw\x20vpunpckhwd\x20vpunpckhdq\x20vpunpckhqdq\x20vpunpcklbw\x20vpunpcklwd\x20vpunpckldq\x20vpunpcklqdq\x20vpxor\x20vrcpps\x20vrcpss\x20vrsqrtps\x20vrsqrtss\x20vroundpd\x20vroundps\x20vroundsd\x20vroundss\x20vshufpd\x20vshufps\x20vsqrtpd\x20vsqrtps\x20vsqrtsd\x20vsqrtss\x20vstmxcsr\x20vsubpd\x20vsubps\x20vsubsd\x20vsubss\x20vtestps\x20vtestpd\x20vucomisd\x20vucomiss\x20vunpckhpd\x20vunpckhps\x20vunpcklpd\x20vunpcklps\x20vxorpd\x20vxorps\x20vzeroall\x20vzeroupper\x20pclmullqlqdq\x20pclmulhqlqdq\x20pclmullqhqdq\x20pclmulhqhqdq\x20pclmulqdq\x20vpclmullqlqdq\x20vpclmulhqlqdq\x20vpclmullqhqdq\x20vpclmulhqhqdq\x20vpclmulqdq\x20vfmadd132ps\x20vfmadd132pd\x20vfmadd312ps\x20vfmadd312pd\x20vfmadd213ps\x20vfmadd213pd\x20vfmadd123ps\x20vfmadd123pd\x20vfmadd231ps\x20vfmadd231pd\x20vfmadd321ps\x20vfmadd321pd\x20vfmaddsub132ps\x20vfmaddsub132pd\x20vfmaddsub312ps\x20vfmaddsub312pd\x20vfmaddsub213ps\x20vfmaddsub213pd\x20vfmaddsub123ps\x20vfmaddsub123pd\x20vfmaddsub231ps\x20vfmaddsub231pd\x20vfmaddsub321ps\x20vfmaddsub321pd\x20vfmsub132ps\x20vfmsub132pd\x20vfmsub312ps\x20vfmsub312pd\x20vfmsub213ps\x20vfmsub213pd\x20vfmsub123ps\x20vfmsub123pd\x20vfmsub231ps\x20vfmsub231pd\x20vfmsub321ps\x20vfmsub321pd\x20vfmsubadd132ps\x20vfmsubadd132pd\x20vfmsubadd312ps\x20vfmsubadd312pd\x20vfmsubadd213ps\x20vfmsubadd213pd\x20vfmsubadd123ps\x20vfmsubadd123pd\x20vfmsubadd231ps\x20vfmsubadd231pd\x20vfmsubadd321ps\x20vfmsubadd3

17  AI for Good


Resources: Slides, Labs, Exercises

DALL·E 3 Prompt: Illustration of planet Earth wrapped in shimmering neural networks, with diverse humans and AI robots working together on various projects like planting trees, cleaning the oceans, and developing sustainable energy solutions. The positive and hopeful atmosphere represents a united effort to create a better future.

By aligning AI progress with human values, goals, and ethics, the ultimate goal of ML systems (at any scale) is to be a technology that reflects human principles and aspirations. Initiatives under “AI for Good” promote the development of AI to tackle the UN Sustainable Development Goals (SDGs) using embedded AI technologies, expanding access to AI education, amongst other things. While it is now clear that AI will be an instrumental part of progress towards the SDGs, its adoption and impact are limited by the immense power consumption, strong connectivity requirements, and high costs of cloud-based deployments. TinyML can circumvent many of these issues by allowing ML models to run on low-cost and low-power microcontrollers.


The “AI for Good” movement is critical in cultivating a future where an AI-empowered society is more just, sustainable, and prosperous for all humanity.

Learning Objectives

  • Understand how TinyML can help advance the UN Sustainable Development Goals in health, agriculture, education, and the environment.

  • Recognize the versatility of TinyML for enabling localized, low-cost solutions tailored to community needs.

  • Consider the challenges of adopting TinyML globally, such as limited training, data constraints, accessibility, and cultural barriers.

  • Appreciate the importance of collaborative, ethical approaches to develop and deploy TinyML to serve local contexts best.

  • Recognize the potential of TinyML, if responsibly implemented, to promote equity and empower underserved populations worldwide.

17.1 Introduction

To give ourselves a framework around which to think about AI for social good, we will be following the UN Sustainable Development Goals (SDGs). The UN SDGs are a collection of 17 global goals, shown in Figure 17.1, adopted by the United Nations in 2015 as part of the 2030 Agenda for Sustainable Development. The SDGs address global challenges related to poverty, inequality, climate change, environmental degradation, prosperity, and peace and justice.


What is special about the SDGs is that they are a collection of interlinked objectives designed to serve as a “shared blueprint for peace and prosperity for people and the planet, now and into the future.” The SDGs emphasize sustainable development’s interconnected environmental, social, and economic aspects by putting sustainability at their center.


A recent study (Vinuesa et al. 2020) highlights the influence of AI on all aspects of sustainable development, particularly on the 17 Sustainable Development Goals (SDGs) and 169 targets internationally defined in the 2030 Agenda for Sustainable Development. The study shows that AI can act as an enabler for 134 targets through technological improvements, but it also highlights the challenges of AI on some targets. The study shows that AI can benefit 67 targets when considering AI and societal outcomes. Still, it also warns about the issues related to the implementation of AI in countries with different cultural values and wealth.

Vinuesa, Ricardo, Hossein Azizpour, Iolanda Leite, Madeline Balaam, Virginia Dignum, Sami Domisch, Anna Felländer, Simone Daniela Langhans, Max Tegmark, and Francesco Fuso Nerini. 2020. “The Role of Artificial Intelligence in Achieving the Sustainable Development Goals.” Nat. Commun. 11 (1): 1–10. https://doi.org/10.1038/s41467-019-14108-y.

Figure 17.1: United Nations Sustainable Development Goals (SDG). Credit: United Nations.

In our book’s context, TinyML could help advance at least some of these SDG goals.

  • Goal 1 - No Poverty: TinyML could help provide low-cost solutions for crop monitoring to improve agricultural yields in developing countries.

  • Goal 2 - Zero Hunger: TinyML could enable localized and precise crop health monitoring and disease detection to reduce crop losses.

  • Goal 3 - Good Health and Wellbeing: TinyML could help enable low-cost medical diagnosis tools for early detection and prevention of diseases in remote areas.

  • Goal 6 - Clean Water and Sanitation: TinyML could monitor water quality and detect contaminants to ensure access to clean drinking water.

  • Goal 7 - Affordable and Clean Energy: TinyML could optimize energy consumption and enable predictive maintenance for renewable energy infrastructure.

  • Goal 11 - Sustainable Cities and Communities: TinyML could enable intelligent traffic management, air quality monitoring, and optimized resource management in smart cities.

  • Goal 13 - Climate Action: TinyML could monitor deforestation and track reforestation efforts. It could also help predict extreme weather events.

The portability, lower power requirements, and real-time analytics enabled by TinyML make it well-suited for addressing several sustainability challenges developing regions face. Widespread deployment of these low-power solutions has the potential to provide localized and cost-effective monitoring that helps achieve some of the UN’s SDGs. In the rest of the sections, we will dive into how TinyML is useful across many sectors that can address the UN SDGs.


17.2 Agriculture

Agriculture is essential to achieving many of the UN Sustainable Development Goals, including eradicating hunger and malnutrition, promoting economic growth, and using natural resources sustainably. TinyML can be a valuable tool to help advance sustainable agriculture, especially for smallholder farmers in developing regions.


TinyML solutions can provide real-time monitoring and data analytics for crop health and growing conditions - all without reliance on connectivity infrastructure. For example, low-cost camera modules connected to microcontrollers can monitor for disease, pests, and nutritional deficiencies. TinyML algorithms can analyze the images to detect issues early before they spread and damage yields. Precision monitoring can optimize inputs like water, fertilizer, and pesticides - improving efficiency and sustainability.


Other sensors, such as GPS units and accelerometers, can track microclimate conditions, soil humidity, and livestock wellbeing. Local real-time data helps farmers respond and adapt better to changes in the field. TinyML analytics at the edge avoids lag, network disruptions, and the high data costs of cloud-based systems. Localized systems allow customization of specific crops, diseases, and regional issues.


Widespread TinyML applications can help digitize smallholder farms to increase productivity, incomes, and resilience. The low cost of hardware and minimal connectivity requirements make solutions accessible. Projects across the developing world have shown the benefits:

  • Microsoft’s FarmBeats project is an end-to-end approach to enable data-driven farming using low-cost sensors, drones, and vision and machine learning algorithms. It addresses the limited adoption of technology in farming, which stems from the lack of power and internet connectivity on farms and farmers’ limited technology savviness. By building AI/ML models on fused data sets and coupling that data with farmers’ knowledge and intuition about their farms, the project has delivered actionable insights aimed at increasing farm productivity and reducing costs.

  • In Sub-Saharan Africa, off-the-shelf cameras and edge AI have cut cassava disease losses from 40% to 5%, protecting a staple crop (Ramcharan et al. 2017).

  • In Indonesia, sensors monitor microclimates across rice paddies, optimizing water usage even with erratic rains (Tirtalistyani, Murtiningrum, and Kanwar 2022).
Ramcharan, Amanda, Kelsee Baranowski, Peter McCloskey, Babuali Ahmed, James Legg, and David P. Hughes. 2017. “Deep Learning for Image-Based Cassava Disease Detection.” Front. Plant Sci. 8 (October): 1852. https://doi.org/10.3389/fpls.2017.01852.

Tirtalistyani, Rose, Murtiningrum Murtiningrum, and Rameshwar S. Kanwar. 2022. “Indonesia Rice Irrigation System: Time for Innovation.” Sustainability 14 (19): 12477. https://doi.org/10.3390/su141912477.

With greater investment and integration into rural advisory services, TinyML could transform small-scale agriculture and improve farmers’ livelihoods worldwide. The technology effectively brings the benefits of precision agriculture to disconnected regions most in need.


Exercise 17.1 (Crop Yield Modeling)  


This exercise teaches you how to predict crop yields in Nepal by combining satellite data (Sentinel-2), climate data (WorldClim), and on-the-ground measurements. You’ll use a machine learning algorithm called XGBoost Regressor to build a model, split the data for training and testing, and fine-tune the model parameters for the best performance. This notebook lays the foundation for implementing TinyML in the agriculture domain. Consider how you could adapt this process for smaller datasets, fewer features, and simplified models to make it compatible with the power and memory constraints of TinyML devices.
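The notebook’s workflow can be sketched in miniature. The toy below stands in synthetic NDVI and rainfall features for the Sentinel-2/WorldClim data and a k-nearest-neighbour regressor for XGBoost (every name and number here is illustrative, not taken from the notebook); the train/test split and mean-absolute-error evaluation mirror the steps described above.

```python
import random

random.seed(0)

def make_sample():
    ndvi = random.uniform(0.2, 0.9)           # vegetation index (Sentinel-2-like)
    rain = random.uniform(200.0, 900.0)       # annual rainfall, mm (WorldClim-like)
    # Synthetic "true" yield in t/ha with measurement noise
    y = 2.0 * ndvi + 0.003 * rain + random.gauss(0.0, 0.1)
    return (ndvi, rain), y

samples = [make_sample() for _ in range(200)]
train, test = samples[:160], samples[160:]    # 80/20 train/test split

def predict(x, k=3):
    # k-NN regression as a stand-in for XGBoost; rainfall rescaled so
    # both features contribute comparably to the distance
    nearest = sorted(
        train,
        key=lambda s: (s[0][0] - x[0]) ** 2 + ((s[0][1] - x[1]) / 1000.0) ** 2,
    )[:k]
    return sum(y for _, y in nearest) / k

mae = sum(abs(predict(x) - y) for x, y in test) / len(test)
print(f"held-out MAE: {mae:.3f} t/ha")
```

Shrinking this kind of pipeline — fewer features, a smaller model, fixed-point arithmetic — is exactly the adaptation the exercise asks you to consider for TinyML targets.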


17.3 Healthcare


17.3.1 Expanding Access

Universal health coverage and quality care remain out of reach for millions worldwide. Many regions lack the medical professionals needed to provide basic diagnosis and treatment, and healthcare infrastructure such as clinics, hospitals, and the utilities to power complex equipment is often inadequate. These gaps disproportionately impact marginalized communities, exacerbating health disparities.

TinyML offers a promising technological solution to help expand access to quality healthcare globally. TinyML refers to the ability to deploy machine learning algorithms on microcontrollers: tiny chips with processing power, memory, and connectivity. TinyML enables real-time data analysis and intelligence in low-powered, compact devices.

This creates opportunities for transformative medical tools that are portable, affordable, and accessible. TinyML software and hardware can be optimized to run even in resource-constrained environments. For example, a TinyML system could analyze symptoms or make diagnostic predictions using minimal computing power, no continuous internet connectivity, and a battery or solar power source. These capabilities can bring medical-grade screening and monitoring directly to underserved patients.


17.3.2 Early Diagnosis


Early detection of diseases is one major application. Small sensors paired with TinyML software can identify symptoms before conditions escalate or visible signs appear. For instance, cough monitors with embedded machine learning can pick up on acoustic patterns indicative of respiratory illness, malaria, or tuberculosis. Detecting diseases at onset improves outcomes and reduces healthcare costs.

A detailed example is TinyML-based monitoring of pneumonia in children. Pneumonia is a leading cause of death for children under 5, and detecting it early is critical. A startup called Respira Labs has developed a low-cost wearable audio sensor that uses TinyML algorithms to analyze coughs and identify symptoms of respiratory illnesses like pneumonia. The device contains a microphone sensor and a microcontroller that runs a neural network model trained to classify respiratory sounds. It can identify features like wheezing, crackling, and stridor that may indicate pneumonia. The device is designed to be highly accessible: it has a simple strap, requires no battery or charging, and results are provided through LED lights and audio cues.


Another example involves researchers at UNIFEI in Brazil who have developed a low-cost device that leverages TinyML to monitor heart rhythms. Their innovative solution addresses a critical need - atrial fibrillation and other heart rhythm abnormalities often go undiagnosed due to the prohibitive cost and limited availability of screening tools. The device overcomes these barriers through its ingenious design. It uses an off-the-shelf microcontroller that costs only a few dollars, along with a basic pulse sensor. By minimizing complexity, the device becomes accessible to under-resourced populations. The TinyML algorithm running locally on the microcontroller analyzes pulse data in real-time to detect irregular heart rhythms. This life-saving heart monitoring device demonstrates how TinyML enables powerful AI capabilities to be deployed in cost-effective, user-friendly designs.


TinyML’s versatility also shows promise for tackling infectious diseases. Researchers have proposed applying TinyML to identify malaria-spreading mosquitoes by their wingbeat sounds. When equipped with microphones, small microcontrollers can run advanced audio classification models to determine mosquito species. This compact, low-power solution produces results in real time, suitable for remote field use. By making entomology analytics affordable and accessible, TinyML could revolutionize monitoring insects that endanger human health. TinyML is expanding healthcare access for vulnerable communities from heart disease to malaria.


17.3.3 Infectious Disease Control


Mosquitoes remain the most deadly disease vector worldwide, transmitting illnesses that infect over one billion people annually (“Vector-Borne Diseases,” n.d.). Diseases like malaria, dengue, and Zika are especially prevalent in resource-limited regions lacking robust infrastructure for mosquito control. Monitoring local mosquito populations is essential to prevent outbreaks and properly target interventions.

“Vector-Borne Diseases.” n.d. https://www.who.int/news-room/fact-sheets/detail/vector-borne-diseases.

Traditional monitoring methods are expensive, labor-intensive, and difficult to deploy remotely. The proposed TinyML solution aims to overcome these barriers. Small microphones coupled with machine learning algorithms can classify mosquitoes by species based on minute differences in wing oscillations. The TinyML software runs efficiently on low-cost microcontrollers, eliminating the need for continuous connectivity.
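To make the idea concrete, here is a minimal sketch of classifying a wingbeat recording by its dominant frequency. The sample rate, frequency band, and species thresholds below are invented for illustration; the system described in the paper uses a trained TinyML audio model rather than a hand-set threshold.

```python
import math

SAMPLE_RATE = 8000  # Hz; a low rate suffices for wingbeats of a few hundred Hz

def dominant_frequency(signal, lo=200, hi=800, step=5):
    """Brute-force DFT over the band of interest (fine for short MCU frames)."""
    best_f, best_mag = lo, 0.0
    for f in range(lo, hi + 1, step):
        re = sum(s * math.cos(2 * math.pi * f * i / SAMPLE_RATE)
                 for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * i / SAMPLE_RATE)
                 for i, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_f, best_mag = f, mag
    return best_f

def classify(freq):
    # Hypothetical species bands; real models learn far subtler cues
    return "Aedes-like" if freq > 450 else "Anopheles-like"

# Synthetic 0.1 s tone at 600 Hz standing in for a recorded wingbeat
tone = [math.sin(2 * math.pi * 600 * i / SAMPLE_RATE) for i in range(800)]
f = dominant_frequency(tone)
print(f, classify(f))
```

On a microcontroller the same analysis would run over a streaming microphone buffer, which is why the approach needs no connectivity.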


A collaborative research team from the University of Khartoum and the ICTP is exploring an innovative solution using TinyML. In a recent paper, they presented a low-cost device that can identify disease-spreading mosquito species through their wing beat sounds (Altayeb, Zennaro, and Rovai 2022).

Altayeb, Moez, Marco Zennaro, and Marcelo Rovai. 2022. “Classifying Mosquito Wingbeat Sound Using TinyML.” In Proceedings of the 2022 ACM Conference on Information Technology for Social Good, 132–37. ACM. https://doi.org/10.1145/3524458.3547258.

This portable, self-contained system shows great promise for entomology. The researchers suggest it could revolutionize insect monitoring and vector control strategies in remote areas. TinyML could significantly bolster malaria eradication efforts by providing cheaper, easier mosquito analytics. Its versatility and minimal power needs make it ideal for field use in isolated, off-grid regions with scarce resources but high disease burden.


17.3.4 TinyML Design Contest in Healthcare


The first TinyML contest in healthcare, TDC’22 (Jia et al. 2023), was held in 2022 to motivate participating teams to design AI/ML algorithms for detecting life-threatening ventricular arrhythmias (VAs) and deploy them on Implantable Cardioverter Defibrillators (ICDs). VAs are the main cause of sudden cardiac death (SCD). People at high risk of SCD rely on the ICD to deliver proper and timely defibrillation treatment (i.e., shocking the heart back into normal rhythm) when experiencing life-threatening VAs.

Jia, Zhenge, Dawei Li, Xiaowei Xu, Na Li, Feng Hong, Lichuan Ping, and Yiyu Shi. 2023. “Life-Threatening Ventricular Arrhythmia Detection Challenge in Implantable Cardioverter-Defibrillators.” Nature Machine Intelligence 5 (5): 554–55. https://doi.org/10.1038/s42256-023-00659-9.

An on-device algorithm for early and timely detection of life-threatening VAs increases the chances of survival. The proposed AI/ML algorithm had to be deployed and executed on an extremely low-power, resource-constrained microcontroller (MCU): a $10 development board with an ARM Cortex-M4 core at 80 MHz, 256 kB of flash memory, and 64 kB of SRAM. Submitted designs were evaluated by metrics measured on the MCU: (1) detection performance, (2) inference latency, and (3) the memory footprint of the AI/ML program.


The champion, GaTech EIC Lab, obtained 0.972 in \(F_\beta\) (F1 score with a higher weight to recall), 1.747 ms in latency, and 26.39 kB in memory footprint with a deep neural network. An ICD with an on-device VA detection algorithm was implanted in a clinical trial.
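The \(F_\beta\) metric mentioned above combines precision and recall, with \(\beta > 1\) putting extra weight on recall — which matters here because missing a life-threatening VA is far costlier than a false alarm. A minimal sketch (the precision/recall values and the choice of \(\beta = 2\) are illustrative, not the contest’s exact settings):

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score: (1 + b^2) * P * R / (b^2 * P + R); beta > 1 favors recall."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Recall-heavy scoring rewards catching nearly all VAs, even at some
# cost in precision (illustrative numbers only):
print(round(f_beta(0.90, 0.99, beta=2.0), 3))
print(round(f_beta(0.99, 0.90, beta=2.0), 3))
```

Swapping the precision and recall values changes the score, which is precisely the asymmetry the contest wanted.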


Exercise 17.2 (Clinical Data: Unlocking Insights with Named Entity Recognition)  


In this exercise, you’ll learn about Named Entity Recognition (NER), a powerful tool for extracting valuable information from clinical text. Using Spark NLP, a specialized library for healthcare NLP, we’ll explore how NER models like BiLSTM-CNN-Char and BERT can automatically identify important medical entities such as diagnoses, medications, test results, and more. You’ll get hands-on experience applying these techniques with a special focus on oncology-related data extraction, helping you unlock insights about cancer types and treatment details from patient records.
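As a conceptual stand-in for what an NER model produces, the toy below tags entities using a hand-made gazetteer. Spark NLP’s trained models (BiLSTM-CNN-Char, BERT) learn these decisions from annotated data rather than from word lists; every term and label below is invented for illustration.

```python
import re

# Toy gazetteer; a trained clinical NER model replaces lists like these
ENTITIES = {
    "DIAGNOSIS": ["adenocarcinoma", "pneumonia"],
    "MEDICATION": ["cisplatin", "amoxicillin"],
}

def tag(text):
    """Return (surface form, label, start offset) triples, in reading order."""
    found = []
    for label, terms in ENTITIES.items():
        for term in terms:
            for m in re.finditer(re.escape(term), text, re.IGNORECASE):
                found.append((m.group(0), label, m.start()))
    return sorted(found, key=lambda t: t[2])

note = "Patient with lung adenocarcinoma started on cisplatin."
print(tag(note))
```

The output format — span, label, offset — mirrors what NER pipelines emit, which is why downstream oncology analytics can aggregate diagnoses and treatments directly from free-text records.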



17.4 Science


In many scientific fields, researchers are limited by the quality and resolution of data they can collect. They often must indirectly infer the true parameters of interest using approximate correlations and models built on sparse data points. This constrains the accuracy of scientific understanding and predictions.


The emergence of TinyML opens new possibilities for gathering high-fidelity scientific measurements. With embedded machine learning, tiny, low-cost sensors can automatically process and analyze data locally in real-time. This creates intelligent sensor networks that capture nuanced data at much greater scales and frequencies.


For example, monitoring environmental conditions to model climate change remains challenging due to the need for widespread, continuous data. The Ribbit Project from UC Berkeley is pioneering a crowdsourced TinyML solution (Rao 2021). They developed an open-source CO2 sensor that uses an onboard microcontroller to process the gas measurements. An extensive dataset can be aggregated by distributing hundreds of these low-cost sensors. The TinyML devices compensate for environmental factors and provide granular, accurate readings that were previously impossible.
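“Compensating for environmental factors” can be as simple as fitting the sensor’s temperature-dependent error against a reference instrument and subtracting it. The sketch below is a hypothetical linear calibration — not Ribbit’s actual algorithm — and every number in it is made up.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration run: raw-minus-reference CO2 error (ppm)
# recorded at several temperatures (deg C)
temps = [10.0, 15.0, 20.0, 25.0, 30.0]
errors = [-20.0, -10.0, 0.0, 10.0, 20.0]
slope, intercept = fit_line(temps, errors)

def compensate(raw_ppm, temp_c):
    # Subtract the modeled temperature-induced drift from the raw reading
    return raw_ppm - (slope * temp_c + intercept)

print(compensate(430.0, 30.0))
```

On-device, the fitted coefficients would be baked into firmware so each node can correct its own readings without connectivity.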


The potential to massively scale out intelligent sensing via TinyML has profound scientific implications. Higher-resolution data can lead to discoveries and predictive capabilities in fields ranging from ecology to cosmology. Other applications could include seismic sensors for earthquake early warning systems, distributed weather monitors to track microclimate changes, and acoustic sensors to study animal populations.


As sensors and algorithms continue improving, TinyML networks may generate more detailed maps of natural systems than ever before. Democratizing the collection of scientific data can accelerate research and understanding across disciplines. However, it raises new challenges around data quality, privacy, and modeling unknowns. TinyML signifies a growing convergence of AI and the natural sciences to answer fundamental questions.


17.5 Conservation and Environment


TinyML is emerging as a powerful tool for environmental conservation and sustainability efforts. Recent research has highlighted numerous applications of tiny machine learning in domains such as wildlife monitoring, natural resource management, and tracking climate change.

One example is using TinyML for real-time wildlife tracking and protection. Researchers have developed smart wildlife tracking collars that leverage TinyML algorithms to detect poaching activities. The collars contain sensors like cameras, microphones, and GPS to monitor the surrounding environment continuously. Embedded machine learning models analyze the audio and visual data to identify threats like nearby humans or gunshots. Early poaching detection gives wildlife rangers critical information to intervene and take action.


Other projects apply TinyML to study animal behavior through sensors. The smart wildlife collar uses accelerometers and acoustic monitoring to track elephant movements, communication, and moods (Verma 2022). The low-power TinyML collar devices transmit rich data on elephant activities while avoiding burdensome battery changes. This helps researchers unobtrusively observe elephant populations to inform conservation strategies.
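On-collar activity detection can be sketched with a very small feature: the standard deviation of a short accelerometer window. The thresholds and labels below are invented for illustration; real collars use trained models on much richer features.

```python
import statistics

def classify_window(accel_g):
    """Label a short accelerometer window (in g) by how much the signal varies."""
    sd = statistics.pstdev(accel_g)
    if sd < 0.05:       # near-constant reading: gravity only
        return "resting"
    if sd < 0.3:        # moderate variation (hypothetical threshold)
        return "walking"
    return "running"

print(classify_window([1.00, 1.01, 0.99, 1.00]))  # steady ~1 g
print(classify_window([0.2, 1.8, 0.1, 1.9]))      # large swings
```

Sending one label per window instead of raw samples is what keeps the radio duty cycle, and hence the battery drain, low.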

Verma, Team Dual_Boot: Swapnil. 2022. “Elephant AI.” Hackster.io. https://www.hackster.io/dual_boot/elephant-ai-ba71e9.

On a broader scale, distributed TinyML devices are envisioned to create dense sensor networks for environmental modeling. Hundreds of low-cost air quality monitors could map pollution across cities. Underwater sensors may detect toxins and give early warning of algal blooms. Such applications underscore TinyML’s versatility in ecology, climatology, and sustainability.

Researchers from Moulay Ismail University of Meknes in Morocco (Bamoumen et al. 2022) have published a survey on how TinyML can be used to solve environmental issues. However, thoughtfully assessing benefits, risks, and equitable access will be vital as TinyML expands environmental research and conservation. With ethical consideration of impacts, TinyML offers data-driven solutions to protect biodiversity, natural resources, and our planet.

Bamoumen, Hatim, Anas Temouden, Nabil Benamar, and Yousra Chtouki. 2022. “How TinyML Can Be Leveraged to Solve Environmental Problems: A Survey.” In 2022 International Conference on Innovation and Intelligence for Informatics, Computing, and Technologies (3ICT), 338–43. IEEE. https://doi.org/10.1109/3ict56508.2022.9990661.

17.6 Disaster Response


In disaster response, speed and safety are paramount. But rubble and wreckage create hazardous, confined environments that impede human search efforts. TinyML enables nimble drones to assist rescue teams in these dangerous scenarios.


When buildings collapse after earthquakes, small drones can prove invaluable. Equipped with TinyML navigation algorithms, micro-sized drones like the CrazyFlie can traverse cramped voids and map pathways beyond human reach (Bardienus P. Duisterhof et al. 2019). Obstacle avoidance allows the drones to weave through unstable debris. This autonomous mobility lets them rapidly sweep areas humans cannot access.

Duisterhof, Bardienus P., Srivatsan Krishnan, Jonathan J. Cruz, Colby R. Banbury, William Fu, Aleksandra Faust, Guido C. H. E. de Croon, and Vijay Janapa Reddi. 2019. “Learning to Seek: Autonomous Source Seeking with Deep Reinforcement Learning Onboard a Nano Drone Microcontroller.” arXiv preprint abs/1909.11236. https://arxiv.org/abs/1909.11236.

The video below presents the Duisterhof et al. (2019) paper on deep reinforcement learning using drones for source seeking.


Crucially, onboard sensors and TinyML processors analyze real-time data to identify signs of survivors. Thermal cameras detect body heat, microphones pick up calls for help, and gas sensors warn of leaks (Bardienus P. Duisterhof et al. 2021). Processing data locally using TinyML allows for quick interpretation to guide rescue efforts. As conditions evolve, the drones can adapt by adjusting their search patterns and priorities.


The following video is an overview of autonomous drones for gas leak detection.


Additionally, coordinated swarms of drones unlock new capabilities. By collaborating and sharing insights, drone teams comprehensively view the situation. Blanketing disaster sites allows TinyML algorithms to fuse and analyze data from multiple vantage points, amplifying situational awareness beyond individual drones (Bardienus P. Duisterhof et al. 2021).

Duisterhof, Bardienus P., Shushuai Li, Javier Burgues, Vijay Janapa Reddi, and Guido C. H. E. de Croon. 2021. “Sniffy Bug: A Fully Autonomous Swarm of Gas-Seeking Nano Quadcopters in Cluttered Environments.” In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 9099–9106. IEEE. https://doi.org/10.1109/iros51168.2021.9636217.

Most importantly, initial drone reconnaissance enhances safety for human responders. Keeping rescue teams at a safe distance until drone surveys assess hazards saves lives. Once secured, drones can guide precise personnel placement.


By combining agile mobility, real-time data, and swarm coordination, TinyML-enabled drones promise to transform disaster response. Their versatility, speed, and safety make them a vital asset for rescue efforts in dangerous, inaccessible environments. Integrating autonomous drones with traditional methods can accelerate responses when it matters most.


17.7 Education and Outreach


TinyML holds immense potential to help address challenges in developing regions, but realizing its benefits requires focused education and capacity building. Recognizing this need, academic researchers have spearheaded outreach initiatives to spread TinyML education globally.


In 2020, Harvard University, Columbia University, the International Centre for Theoretical Physics (ICTP), and UNIFEI jointly founded the TinyML for Developing Communities (TinyML4D) network (Zennaro, Plancher, and Reddi 2022). This network empowers universities and researchers in developing countries to harness TinyML for local impact.

Zennaro, Marco, Brian Plancher, and V. Janapa Reddi. 2022. “TinyML: Applied AI for Development.” In The UN 7th Multi-Stakeholder Forum on Science, Technology and Innovation for the Sustainable Development Goals, 2022–05.

A core focus is expanding access to applied machine learning education. The TinyML4D network provides training, curricula, and lab resources to members. Hands-on workshops and data collection projects give students practical experience. Members can share best practices and build a community through conferences and academic collaborations.


The network prioritizes enabling locally relevant TinyML solutions. Projects address challenges like agriculture, health, and environmental monitoring based on community needs. For example, a member university in Rwanda developed a low-cost flood monitoring system using TinyML and sensors.


TinyML4D includes over 50 member institutions across Africa, Asia, and Latin America. However, greater investments and industry partnerships are needed to reach all underserved regions. The ultimate vision is training new generations to ethically apply TinyML for sustainable development. Outreach efforts today lay the foundation for democratizing transformative technology for the future.


17.8 Accessibility


Technology has immense potential to break down barriers faced by people with disabilities and bridge gaps in accessibility. TinyML specifically opens new possibilities for developing intelligent, personalized assistive devices.


With machine learning algorithms running locally on microcontrollers, compact accessibility tools can operate in real time without reliance on connectivity. The National Institute on Deafness and Other Communication Disorders (NIDCD) states that 20% of the world’s population has some form of hearing loss. Hearing aids leveraging TinyML could recognize multiple speakers and amplify the voice of a chosen target in crowded rooms. This allows people with hearing impairments to focus on specific conversations.


Similarly, mobility devices could use on-device vision processing to identify obstacles and terrain characteristics. This enables enhanced navigation and safety for the visually impaired. Companies like Envision are developing smart glasses, converting visual information into speech, with embedded TinyML to guide blind people by detecting objects, text, and traffic signals.


The video below shows the different real-life use cases of the Envision visual aid glasses.


TinyML could even power responsive prosthetic limbs. By analyzing nerve signals and sensory data like muscle tension, prosthetics and exoskeletons with embedded ML can move and adjust grip dynamically, making control more natural and intuitive. Companies are creating affordable, everyday bionic hands using TinyML. For those with speech difficulties, voice-enabled devices with TinyML can generate personalized vocal outputs from non-verbal inputs. Pairs by Anthropic translates gestures into natural speech tailored for individual users.


By enabling more customizable assistive tech, TinyML makes services more accessible and tailored to individual needs. And through translation and interpretation applications, TinyML can break down communication barriers. Apps like Microsoft Translator offer real-time translation powered by TinyML algorithms.


With its thoughtful and inclusive design, TinyML promises more autonomy and dignity for people with disabilities. However, developers should engage communities directly, avoid compromising privacy, and consider affordability to maximize the benefits. TinyML has huge potential to contribute to a more just, equitable world.


17.9 Infrastructure and Urban Planning


As urban populations swell, cities face immense challenges in efficiently managing resources and infrastructure. TinyML presents a powerful tool for developing intelligent systems to optimize city operations and sustainability. It could revolutionize energy efficiency in smart buildings.


Machine learning models can learn to predict and regulate energy usage based on occupancy patterns. Miniaturized sensors placed throughout buildings can provide granular, real-time data on space utilization, temperature, and more (Seyedzadeh et al. 2018). This visibility allows TinyML systems to minimize waste by optimizing heating, cooling, lighting, etc.

Seyedzadeh, Saleh, Farzad Pour Rahimian, Ivan Glesk, and Marc Roper. 2018. “Machine Learning for Estimation of Building Energy Consumption and Performance: A Review.” Visualization in Engineering 6 (1): 1–20. https://doi.org/10.1186/s40327-018-0064-7.

These examples demonstrate TinyML’s huge potential for efficient, sustainable city infrastructure. However, urban planners must consider privacy, security, and accessibility to ensure responsible adoption. With careful implementation, TinyML could profoundly modernize urban life.


17.10 Challenges and Considerations


While TinyML presents immense opportunities, thoughtful consideration of challenges and ethical implications will be critical as adoption spreads globally. Researchers have highlighted key factors to address, especially when deploying TinyML in developing regions.


A foremost challenge is limited access to training and hardware (Ooko et al. 2021). Few educational programs are tailored to TinyML, and emerging economies often lack a robust electronics supply chain. Thorough training and partnerships will be needed to nurture expertise and make devices available to underserved communities. Initiatives like the TinyML4D network help provide structured learning pathways.

Ooko, Samson Otieno, Marvin Muyonga Ogore, Jimmy Nsenga, and Marco Zennaro. 2021. “TinyML in Africa: Opportunities and Challenges.” In 2021 IEEE Globecom Workshops (GC Wkshps), 1–6. IEEE. https://doi.org/10.1109/gcwkshps52748.2021.9682107.

Data limitations also pose hurdles. TinyML models require quality localized datasets, which are scarce in under-resourced environments. Creating frameworks to crowdsource data ethically could address this. However, data collection should benefit local communities directly, not just extract value.


Optimizing power usage and connectivity will be vital for sustainability. TinyML’s low power needs make it ideal for off-grid use cases. Integrating battery or solar can enable continuous operation. Adapting devices for low-bandwidth transmission where the internet is limited also maximizes impact.


Cultural and language barriers further complicate adoption. User interfaces and devices should account for all literacy levels and avoid excluding subgroups. Voice-controllable solutions in local dialects can enhance accessibility.


Addressing these challenges requires holistic partnerships, funding, and policy support. However, inclusively and ethically scaling TinyML has monumental potential to uplift disadvantaged populations worldwide. With thoughtful implementation, the technology could profoundly democratize opportunity.


17.11 Conclusion


TinyML presents a tremendous opportunity to harness the power of artificial intelligence to advance the UN Sustainable Development Goals and drive social impact globally, as highlighted by examples across sectors like healthcare, agriculture, and conservation. Embedded machine learning unlocks new capabilities for low-cost, accessible solutions tailored to local contexts. TinyML circumvents barriers like poor infrastructure, limited connectivity, and high costs that often exclude developing communities from emerging technology.


However, realizing TinyML’s full potential requires holistic collaboration. Researchers, policymakers, companies, and local stakeholders must collaborate to provide training, establish ethical frameworks, co-design solutions, and adapt them to community needs. Through inclusive development and deployment, TinyML can deliver on its promise to bridge inequities and uplift vulnerable populations without leaving any behind.


If cultivated responsibly, TinyML could democratize opportunity and accelerate progress on global priorities from poverty alleviation to climate resilience. The technology represents a new wave of applied AI to empower societies, promote sustainability, and propel humanity toward greater justice, prosperity, and peace. TinyML provides a glimpse into an AI-enabled future that is accessible to all.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.


11  Benchmarking AI


Resources: Slides, Labs, Exercises

DALL·E 3 Prompt: Photo of a podium set against a tech-themed backdrop. On each tier of the podium, there are AI chips with intricate designs. The top chip has a gold medal hanging from it, the second one has a silver medal, and the third has a bronze medal. Banners with ‘AI Olympics’ are displayed prominently in the background.

Benchmarking is critical to developing and deploying machine learning systems, especially TinyML applications. Benchmarks allow developers to measure and compare the performance of different model architectures, training procedures, and deployment strategies. This provides key insights into which approaches work best for the problem at hand and the constraints of the deployment environment.


This chapter will provide an overview of popular ML benchmarks, best practices for benchmarking, and how to use benchmarks to improve model development and system performance. It aims to provide developers with the right tools and knowledge to effectively benchmark and optimize their systems, especially for TinyML systems.

Learning Objectives
  • Understand the purpose and goals of benchmarking AI systems, including performance assessment, resource evaluation, validation, and more.

  • Learn about the different types of benchmarks - micro, macro, and end-to-end - and their role in evaluating different aspects of an AI system.

  • Become familiar with the key components of an AI benchmark, including datasets, tasks, metrics, baselines, reproducibility rules, and more.

  • Understand the distinction between training and inference and how each phase warrants specialized ML systems benchmarking.

  • Learn about system benchmarking concepts like throughput, latency, power, and computational efficiency.

  • Appreciate the evolution of model benchmarking from accuracy to more holistic metrics like fairness, robustness, and real-world applicability.

  • Recognize the growing role of data benchmarking in evaluating issues like bias, noise, balance, and diversity.

  • Understand the limitations of evaluating models, data, and systems in isolation and the emerging need for integrated benchmarking.

11.1 Introduction


Benchmarking provides the essential measurements needed to drive machine learning progress and truly understand system performance. As the physicist Lord Kelvin famously said, “To measure is to know.” Benchmarks allow us to quantitatively know the capabilities of different models, software, and hardware. They allow ML developers to measure the inference time, memory usage, power consumption, and other metrics that characterize a system. Moreover, benchmarks create standardized processes for measurement, enabling fair comparisons across different solutions.
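To make these measurements concrete, the sketch below times repeated inference calls and records peak Python-level memory using only the standard library. The `predict` function is a hypothetical stand-in for a real model's inference call, and the warm-up/median pattern is the general methodology, not any particular benchmark suite's rules; a real harness would also control CPU frequency and measure power externally.

```python
import time
import tracemalloc

def predict(x):
    # Hypothetical stand-in for a real model's inference call.
    return sum(v * v for v in x)

def benchmark(fn, x, warmup=10, runs=100):
    """Report median latency and peak traced memory for one inference call."""
    for _ in range(warmup):              # warm caches before timing
        fn(x)
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(x)
        times.append(time.perf_counter() - t0)
    tracemalloc.start()                  # trace Python allocations for one call
    fn(x)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    times.sort()
    return {"median_s": times[len(times) // 2], "peak_bytes": peak}

result = benchmark(predict, list(range(1000)))
print(result)
```

The median (rather than the mean) is used because timing samples are typically skewed by occasional OS scheduling noise.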


When benchmarks are maintained over time, they become instrumental in capturing progress across generations of algorithms, datasets, and hardware. The models and techniques that set new records on ML benchmarks from one year to the next demonstrate tangible improvements in what’s possible for on-device machine learning. By using benchmarks to measure, ML practitioners can know the real-world capabilities of their systems and have confidence that each step reflects genuine progress towards the state-of-the-art.


Benchmarking has several important goals and objectives that guide its implementation for machine learning systems.

  • Performance assessment. This involves evaluating key metrics like a given model’s speed, accuracy, and efficiency. For instance, in a TinyML context, it is crucial to benchmark how quickly a voice assistant can recognize commands, as this evaluates real-time performance.

  • Resource evaluation. This means assessing the model’s impact on critical system resources, including battery life, memory usage, and computational overhead. A relevant example is comparing the battery drain of two different image recognition algorithms running on a wearable device.

  • Validation and verification. Benchmarking helps ensure the system functions correctly and meets specified requirements. One way is by checking the accuracy of an algorithm, like a heart rate monitor on a smartwatch, against readings from medical-grade equipment as a form of clinical validation.

  • Competitive analysis. This enables comparing solutions against competing offerings in the market. For example, benchmarking a custom object detection model versus common TinyML benchmarks like MobileNet and Tiny-YOLO.

  • Credibility. Accurate benchmarks uphold the credibility of AI solutions and the organizations that develop them. They demonstrate a commitment to transparency, honesty, and quality, which are essential in building trust with users and stakeholders.

  • Regulation and standardization. As the AI industry continues to grow, there is an increasing need for regulation and standardization to ensure that AI solutions are safe, ethical, and effective. Accurate and reliable benchmarks are essential to this regulatory framework, as they provide the data and evidence needed to assess compliance with industry standards and legal requirements.
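Much of the performance-assessment goal reduces to reporting latency statistics correctly: for interactive applications, tail latencies (p90/p99) often matter more than averages. The helper below computes nearest-rank percentiles over a list of recorded per-inference latencies; the sample values are illustrative, not from any real device.

```python
def latency_percentiles(samples_ms, ps=(50, 90, 99)):
    """Nearest-rank percentiles over recorded per-inference latencies (ms)."""
    xs = sorted(samples_ms)
    out = {}
    for p in ps:
        # nearest-rank method: index ceil(p/100 * n) - 1, zero-based
        k = -(-p * len(xs) // 100) - 1
        out[f"p{p}"] = xs[max(k, 0)]
    return out

# Eight illustrative latency samples; note the single outlier at 35.2 ms.
samples = [12.1, 10.4, 11.0, 35.2, 10.9, 11.3, 10.7, 12.8]
print(latency_percentiles(samples))  # → {'p50': 11.0, 'p90': 35.2, 'p99': 35.2}
```

The outlier barely moves the median but dominates the tail, which is exactly why benchmark suites report percentiles rather than means.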

This chapter will cover the three types of AI benchmarks; the standard metrics, tools, and techniques designers use to optimize their systems; and the challenges and trends in benchmarking.


11.2 Historical Context


11.2.1 Standard Benchmarks


The evolution of benchmarks in computing vividly illustrates the industry’s relentless pursuit of excellence and innovation. In the early days of computing during the 1960s and 1970s, benchmarks were rudimentary and designed for mainframe computers. For example, the Whetstone benchmark, named after the Whetstone ALGOL compiler, was one of the first standardized tests to measure the floating-point arithmetic performance of a CPU. These pioneering benchmarks prompted manufacturers to refine their architectures and algorithms to achieve better benchmark scores.


The 1980s marked a significant shift with the rise of personal computers. As companies like IBM, Apple, and Commodore competed for market share, benchmarks became critical tools for enabling fair comparison. The SPEC CPU benchmarks, introduced by the System Performance Evaluation Cooperative (SPEC), established standardized tests allowing objective comparisons between different machines. This standardization created a competitive environment, pushing silicon manufacturers and system creators to continually enhance their hardware and software offerings.


The 1990s brought the era of graphics-intensive applications and video games. The need for benchmarks to evaluate graphics card performance led to Futuremark’s creation of 3DMark. As gamers and professionals sought high-performance graphics cards, companies like NVIDIA and AMD were driven to rapid innovation, leading to major advancements in GPU technology like programmable shaders.


The 2000s saw a surge in mobile phones and portable devices like tablets. With portability came the challenge of balancing performance and power consumption. Benchmarks like MobileMark by BAPCo evaluated speed and battery life. This drove companies to develop more energy-efficient System-on-Chips (SOCs), leading to the emergence of architectures like ARM that prioritized power efficiency.


The focus of the recent decade has shifted towards cloud computing, big data, and artificial intelligence. Cloud service providers like Amazon Web Services and Google Cloud compete on performance, scalability, and cost-effectiveness. Tailored cloud benchmarks like CloudSuite have become essential, driving providers to optimize their infrastructure for better services.


11.2.2 Custom Benchmarks


In addition to industry-standard benchmarks, there are custom benchmarks specifically designed to meet the unique requirements of a particular application or task. They are tailored to the specific needs of the user or developer, ensuring that the performance metrics are directly relevant to the intended use of the AI model or system. Custom benchmarks can be created by individual organizations, researchers, or developers and are often used in conjunction with industry-standard benchmarks to provide a comprehensive evaluation of AI performance.


For example, a hospital could develop a benchmark to assess an AI model for predicting patient readmission. This benchmark would incorporate metrics relevant to the hospital’s patient population, like demographics, medical history, and social factors. Similarly, a financial institution’s fraud detection benchmark could focus on identifying fraudulent transactions accurately while minimizing false positives. In automotive, an autonomous vehicle benchmark may prioritize performance in diverse conditions, responding to obstacles, and safety. Retailers could benchmark recommendation systems using click-through rate, conversion rate, and customer satisfaction. Manufacturing companies might benchmark quality control systems on defect identification, efficiency, and waste reduction. In each industry, custom benchmarks provide organizations with evaluation criteria tailored to their unique needs and context. This allows for a more meaningful assessment of how well AI systems meet requirements.


The advantage of custom benchmarks lies in their flexibility and relevance. They can be designed to test specific performance aspects critical to the success of the AI solution in its intended application. This allows for a more targeted and accurate assessment of the AI model or system’s capabilities. Custom benchmarks also provide valuable insights into the performance of AI solutions in real-world scenarios, which can be crucial for identifying potential issues and areas for improvement.


In AI, benchmarks play a crucial role in driving progress and innovation. While benchmarks have long been used in computing, their application to machine learning is relatively recent. AI-focused benchmarks provide standardized metrics to evaluate and compare the performance of different algorithms, model architectures, and hardware platforms.


11.2.3 Community Consensus


A key prerogative for any benchmark to be impactful is that it must reflect the shared priorities and values of the broader research community. Benchmarks designed in isolation risk failing to gain acceptance if they overlook key metrics considered important by leading groups. Through collaborative development with open participation from academic labs, companies, and other stakeholders, benchmarks can incorporate collective input on critical capabilities worth measuring. This helps ensure the benchmarks evaluate aspects the community agrees are essential to advance the field. The process of reaching alignment on tasks and metrics itself supports converging on what matters most.


Furthermore, benchmarks published with broad co-authorship from respected institutions carry authority and validity that convinces the community to adopt them as trusted standards. Benchmarks perceived as biased by particular corporate or institutional interests breed skepticism. Ongoing community engagement through workshops and challenges is also key after the initial release, and that is what, for instance, led to the success of ImageNet. As research progresses, collective participation enables continual refinement and expansion of benchmarks over time.


Finally, community-developed benchmarks released with open access accelerate adoption and consistent implementation. Shared open-source code, documentation, models, and infrastructure lower the barriers for groups to benchmark solutions on an equal footing using standardized implementations. This consistency is critical for fair comparisons. Without coordination, labs and companies may implement benchmarks differently, reducing result reproducibility.


Community consensus brings benchmarks lasting relevance, while fragmentation confuses. Through collaborative development and transparent operation, benchmarks can become authoritative standards for tracking progress. Several of the benchmarks that we discuss in this chapter were developed and built by the community, for the community, and that is what ultimately led to their success.


11.3 AI Benchmarks: System, Model, and Data


The need for comprehensive benchmarking becomes paramount as AI systems grow in complexity and ubiquity. Within this context, benchmarks are often classified into three primary categories: system, model, and data. Let’s delve into why each of these buckets is essential and the significance of evaluating AI from these three distinct dimensions:


11.3.1 System Benchmarks


AI computations, especially those in deep learning, are resource-intensive. The hardware on which these computations run plays an important role in determining AI solutions’ speed, efficiency, and scalability. Consequently, hardware benchmarks help evaluate the performance of CPUs, GPUs, TPUs, and other accelerators in AI tasks. By understanding hardware performance, developers can choose which hardware platforms best suit specific AI applications. Furthermore, hardware manufacturers use these benchmarks to identify areas for improvement, driving innovation in AI-specific chip designs.


11.3.2 Model Benchmarks


The architecture, size, and complexity of AI models vary widely. Different models have different computational demands and offer varying levels of accuracy and efficiency. Model benchmarks help us assess the performance of various AI architectures on standardized tasks. They provide insights into different models’ speed, accuracy, and resource demands. By benchmarking models, researchers can identify best-performing architectures for specific tasks, guiding the AI community towards more efficient and effective solutions. Additionally, these benchmarks aid in tracking the progress of AI research, showcasing advancements in model design and optimization.


11.3.3 Data Benchmarks


AI, particularly machine learning, is inherently data-driven. The quality, size, and diversity of data influence AI models’ training efficacy and generalization capability. Data benchmarks focus on the datasets used in AI training and evaluation. They provide standardized datasets the community can use to train and test models, ensuring a level playing field for comparisons. Moreover, these benchmarks highlight data quality, diversity, and representation challenges, pushing the community to address biases and gaps in AI training data. By understanding data benchmarks, researchers can also gauge how models might perform in real-world scenarios, ensuring robustness and reliability.


In the remainder of the sections, we will discuss each of these benchmark types. The focus will be an in-depth exploration of system benchmarks, as these are critical to understanding and advancing machine learning system performance. We will briefly cover model and data benchmarks for a comprehensive perspective, but the emphasis and majority of the content will be devoted to system benchmarks.


11.4 System Benchmarking


11.4.1 Granularity


Machine learning system benchmarking provides a structured and systematic approach to assessing a system’s performance across various dimensions. Given the complexity of ML systems, we can dissect their performance through different levels of granularity and obtain a comprehensive view of the system’s efficiency, identify potential bottlenecks, and pinpoint areas for improvement. To this end, various types of benchmarks have evolved over the years and continue to persist.


Figure 11.1 illustrates the different layers of granularity of an ML system. At the application level, end-to-end benchmarks assess the overall system performance, considering factors like data preprocessing, model training, and inference. While at the model layer, benchmarks focus on assessing the efficiency and accuracy of specific models. This includes evaluating how well models generalize to new data and their computational efficiency during training and inference. Furthermore, benchmarking can extend to hardware and software infrastructure, examining the performance of individual components like GPUs or TPUs.

Figure 11.1: ML system granularity.

Micro Benchmarks


Micro-benchmarks in AI are specialized, evaluating distinct components or specific operations within a broader machine learning process. These benchmarks zero in on individual tasks, offering insights into the computational demands of a particular neural network layer, the efficiency of a unique optimization technique, or the throughput of a specific activation function. For instance, practitioners might use micro-benchmarks to measure the computational time required by a convolutional layer in a deep learning model or to evaluate the speed of data preprocessing that feeds data into the model. Such granular assessments are instrumental in fine-tuning and optimizing discrete aspects of AI models, ensuring that each component operates at its peak potential.


These types of microbenchmarks include zooming into very specific operations or components of the AI pipeline, such as the following:

  • Tensor Operations: Libraries like cuDNN (by NVIDIA) often have benchmarks to measure the performance of individual tensor operations, such as convolutions or matrix multiplications, which are foundational to deep learning computations.
  • Activation Functions: Benchmarks that measure the speed and efficiency of various activation functions like ReLU, Sigmoid, or Tanh in isolation.
  • Layer Benchmarks: Evaluations of the computational efficiency of distinct neural network layers, such as LSTM or Transformer blocks, when operating on standardized input sizes.

Example: DeepBench, introduced by Baidu, is a good example of such a suite. DeepBench assesses the performance of basic operations in deep learning models, providing insights into how different hardware platforms handle neural network training and inference.
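To make the idea concrete, the following is a minimal micro-benchmark sketch in plain Python (not cuDNN, so the numbers are purely illustrative): it isolates a single tensor operation, a naive matrix multiply, runs it repeatedly, and reports the best timing, as micro-benchmarks typically do to reduce noise from other system activity.

```python
import random
import timeit

def matmul(a, b):
    """Naive matrix multiply: a (n x m) times b (m x p)."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

random.seed(0)
n = 32
a = [[random.random() for _ in range(n)] for _ in range(n)]
b = [[random.random() for _ in range(n)] for _ in range(n)]

# Repeat the measurement and keep the best run: the minimum is the
# conventional choice because it is least affected by background load.
runs = timeit.repeat(lambda: matmul(a, b), repeat=3, number=5)
best_ms = min(runs) / 5 * 1000
print(f"best matmul time: {best_ms:.3f} ms")
```

A real tensor-operation benchmark would swap `matmul` for the library call under test (e.g., a cuDNN-backed convolution) and sweep over input sizes, but the measurement pattern is the same.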


Exercise 11.1 (System Benchmarking - Tensor Operations)  


Ever wonder how your image filters get so fast? Special libraries like cuDNN supercharge those calculations on certain hardware. In this Colab, we’ll use cuDNN with PyTorch to speed up image filtering. Think of it as a tiny benchmark, showing how the right software can unlock your GPU’s power!


Macro Benchmarks


Macro benchmarks provide a holistic view, assessing the end-to-end performance of entire machine learning models or comprehensive AI systems. Rather than focusing on individual operations, macro-benchmarks evaluate the collective efficacy of models under real-world scenarios or tasks. For example, a macro-benchmark might assess the complete performance of a deep learning model undertaking image classification on a dataset like ImageNet. This includes gauging accuracy, computational speed, and resource consumption. Similarly, one might measure the cumulative time and resources needed to train a natural language processing model on extensive text corpora or evaluate the performance of an entire recommendation system, from data ingestion to final user-specific outputs.


Examples: The following macro benchmarks evaluate complete AI models:

  • MLPerf Inference (Reddi et al. 2020): An industry-standard set of benchmarks for measuring the performance of machine learning software and hardware. MLPerf has a suite of dedicated benchmarks for specific scales, such as MLPerf Mobile for mobile-class devices and MLPerf Tiny, which focuses on microcontrollers and other resource-constrained devices.

  • EEMBC’s MLMark: A benchmarking suite for evaluating the performance and power efficiency of embedded devices running machine learning workloads. This benchmark provides insights into how different hardware platforms handle tasks like image recognition or audio processing.

  • AI-Benchmark (Ignatov et al. 2019): A benchmarking tool designed for Android devices; it evaluates the performance of AI tasks on mobile devices, encompassing various real-world scenarios like image recognition, face parsing, and optical character recognition.
Reddi, Vijay Janapa, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, et al. 2020. “MLPerf Inference Benchmark.” In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), 446–59. IEEE. https://doi.org/10.1109/isca45697.2020.00045.
Ignatov, Andrey, Radu Timofte, Andrei Kulik, Seungsoo Yang, Ke Wang, Felix Baum, Max Wu, Lirong Xu, and Luc Van Gool. 2019. “AI Benchmark: All About Deep Learning on Smartphones in 2019.” In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). IEEE. https://doi.org/10.1109/iccvw.2019.00447.

End-to-end Benchmarks


End-to-end benchmarks provide an all-inclusive evaluation that extends beyond the boundaries of the AI model itself. Instead of focusing solely on a machine learning model’s computational efficiency or accuracy, these benchmarks encompass the entire pipeline of an AI system. This includes initial data preprocessing, the core model’s performance, post-processing of the model’s outputs, and other integral components like storage and network interactions.


Data preprocessing is the first stage in many AI systems, transforming raw data into a format suitable for model training or inference. These preprocessing steps’ efficiency, scalability, and accuracy are vital for the overall system’s performance. End-to-end benchmarks assess this phase, ensuring that data cleaning, normalization, augmentation, or any other transformation process doesn’t become a bottleneck.


The post-processing phase also takes center stage. This involves interpreting the model’s raw outputs, possibly converting scores into meaningful categories, filtering results, or even integrating with other systems. In real-world applications, this phase is crucial for delivering actionable insights, and end-to-end benchmarks ensure it’s both efficient and effective.


Beyond the core AI operations, other system components are important in the overall performance and user experience. Storage solutions, whether cloud-based, on-premises, or hybrid, can significantly impact data retrieval and storage times, especially with vast AI datasets. Similarly, network interactions, vital for cloud-based AI solutions or distributed systems, can become performance bottlenecks if not optimized. End-to-end benchmarks holistically evaluate these components, ensuring that the entire system operates seamlessly, from data retrieval to final output delivery.


To date, there are no public, end-to-end benchmarks that take into account the role of data storage, network, and compute performance. Arguably, MLPerf Training and Inference come close to the idea of an end-to-end benchmark, but they are exclusively focused on ML model performance and do not represent real-world deployment scenarios of how models are used in the field. Nonetheless, they provide a very useful signal that helps assess AI system performance.


Given the inherent specificity of end-to-end benchmarking, it is typically performed internally at a company by instrumenting real production deployments of AI. This allows engineers to have a realistic understanding and breakdown of the performance, but given the sensitivity and specificity of the information, it is rarely reported outside of the company.


Understanding the Trade-offs


Different issues arise at different stages of an AI system. Micro-benchmarks help fine-tune individual components, macro-benchmarks aid in refining model architectures or algorithms, and end-to-end benchmarks guide the optimization of the entire workflow. By understanding where a problem lies, developers can apply targeted optimizations.


Moreover, while individual components of an AI system might perform optimally in isolation, bottlenecks can emerge when they interact. End-to-end benchmarks, in particular, are crucial to ensure that the entire system, when operating collectively, meets desired performance and efficiency standards.


Finally, organizations can make informed decisions on where to allocate resources by discerning performance bottlenecks or inefficiencies. For instance, if micro-benchmarks reveal inefficiencies in specific tensor operations, investments can be directed toward specialized hardware accelerators. Conversely, if end-to-end benchmarks indicate data retrieval issues, investments might be channeled toward better storage solutions.


11.4.2 Benchmark Components


At its core, an AI benchmark is more than just a test or a score; it’s a comprehensive evaluation framework. To understand this in-depth, let’s break down the typical components that go into an AI benchmark.


Standardized Datasets


Datasets serve as the foundation for most AI benchmarks. They provide a consistent basis on which models are trained and evaluated, ensuring a level playing field for comparisons.


Example: ImageNet, a large-scale dataset containing millions of labeled images spanning thousands of categories, is a popular benchmarking standard for image classification tasks.


Pre-defined Tasks


A benchmark should have a clear objective or task that models aim to achieve. This task defines the problem the AI system is trying to solve.


Example: Tasks for natural language processing benchmarks might include sentiment analysis, named entity recognition, or machine translation.


Evaluation Metrics


Once a task is defined, benchmarks require metrics to quantify performance. These metrics offer objective measures to compare different models or systems.


In classification tasks, metrics like accuracy, precision, recall, and F1 score are commonly used. Mean squared or absolute errors might be employed for regression tasks.
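These classification metrics can all be computed from prediction counts. A minimal sketch follows, using made-up spam-detection labels purely for illustration:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision, recall, and F1 for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy spam-detection labels: 1 = spam, 0 = not spam.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))
```

In practice a benchmark would fix not just the metric definitions but also the evaluation split on which they are computed, so scores remain comparable across submissions.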


Baseline Models


Benchmarks often include baseline models or reference implementations. These serve as starting points or minimum performance standards against which new models or techniques can be compared.


Example: In many benchmark suites, simple models like linear regression or basic neural networks serve as baselines to provide context for more complex model evaluations.


Hardware and Software Specifications


Given the variability introduced by different hardware and software configurations, benchmarks often specify or document the hardware and software environments in which tests are conducted.


Example: An AI benchmark might note that evaluations were conducted on an NVIDIA Tesla V100 GPU using TensorFlow v2.4.


Environmental Conditions


As external factors can influence benchmark results, it’s essential to either control or document conditions like temperature, power source, or system background processes.


Example: Mobile AI benchmarks might specify that tests were conducted at room temperature with devices plugged into a power source to eliminate battery-level variances.


Reproducibility Rules


To ensure benchmarks are credible and can be replicated by others in the community, they often include detailed protocols covering everything from random seeds used to exact hyperparameters.


Example: A benchmark for a reinforcement learning task might detail the exact training episodes, exploration-exploitation ratios, and reward structures used.
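A minimal sketch of the seed-control side of such a protocol, using a toy exploration loop in place of a real reinforcement learning agent (the actions and epsilon value are made up for illustration):

```python
import random

def seeded_episode(seed, steps=5, epsilon=0.1):
    """Run a toy exploration loop with every random choice tied to one seed.

    Recording this seed, together with the hyperparameters, in the benchmark
    protocol lets others replay the exact same action sequence."""
    rng = random.Random(seed)  # private RNG: no dependence on global state
    actions = []
    for _ in range(steps):
        if rng.random() < epsilon:      # explore
            actions.append(rng.choice(["up", "down", "left", "right"]))
        else:                           # exploit a (dummy) greedy action
            actions.append("up")
    return actions

run_a = seeded_episode(seed=42)
run_b = seeded_episode(seed=42)
print(run_a == run_b)  # identical seeds reproduce the episode exactly
```

Note the use of a private `random.Random(seed)` instance rather than the global generator: this keeps the episode reproducible even if other code in the process also draws random numbers.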


Result Interpretation Guidelines


Beyond raw scores or metrics, benchmarks often provide guidelines or context to interpret results, helping practitioners understand the broader implications.


Example: A benchmark might highlight that while Model A scored higher than Model B in accuracy, Model B offers better real-time performance, making it more suitable for time-sensitive applications.


11.4.3 Training vs. Inference


The development life cycle of a machine learning model involves two critical phases: training and inference. Training is the process of learning patterns from data to create the model. Inference refers to the model making predictions on new, unlabeled data. Both phases play indispensable yet distinct roles. Consequently, each phase warrants rigorous benchmarking to evaluate performance metrics like speed, accuracy, and computational efficiency.


Benchmarking the training phase provides insights into how different model architectures, hyperparameter values, and optimization algorithms impact the time and resources needed to train the model. For instance, benchmarking shows how neural network depth affects training time on a given dataset. Benchmarking also reveals how hardware accelerators like GPUs and TPUs can speed up training.


On the other hand, benchmarking inference evaluates model performance in real-world conditions after deployment. Key metrics include latency, throughput, memory footprint, and power consumption. Inference benchmarking determines if a model meets the requirements of its target application regarding response time and device constraints, which is typically the focus of TinyML. However, we will discuss these broadly to ensure a general understanding.


11.4.4 Training Benchmarks


Training represents the phase where the system processes and ingests raw data to adjust and refine its parameters. Therefore, it is an algorithmic activity and involves system-level considerations, including data pipelines, storage, computing resources, and orchestration mechanisms. The goal is to ensure that the ML system can efficiently learn from data, optimizing both the model’s performance and the system’s resource utilization.


Purpose


From an ML systems perspective, training benchmarks evaluate how well the system scales with increasing data volumes and computational demands. It’s about understanding the interplay between hardware, software, and the data pipeline in the training process.


Consider a distributed ML system designed to train on vast datasets, like those used in large-scale e-commerce product recommendations. A training benchmark would assess how efficiently the system scales across multiple nodes, manages data sharding, and handles failures or node drop-offs during training.


Training benchmarks evaluate CPU, GPU, memory, and network utilization during the training phase, guiding system optimizations. When training a model in a cloud-based ML system, it’s crucial to understand how resources are being utilized. Are GPUs being fully leveraged? Is there unnecessary memory overhead? Benchmarks can highlight bottlenecks or inefficiencies in resource utilization, leading to cost savings and performance improvements.


Training an ML model is contingent on timely and efficient data delivery. Benchmarks in this context would also assess the efficiency of data pipelines, data preprocessing speed, and storage retrieval times. For real-time analytics systems, like those used in fraud detection, the speed at which training data is ingested, preprocessed, and fed into the model can be critical. Benchmarks would evaluate the latency of data pipelines, the efficiency of storage systems (like SSDs vs. HDDs), and the speed of data augmentation or transformation tasks.


Metrics


When viewed from a systems perspective, training metrics offer insights that transcend conventional algorithmic performance indicators. These metrics measure the model’s learning efficacy and gauge the efficiency, scalability, and robustness of the entire ML system during the training phase. Let’s delve deeper into these metrics and their significance.


The following metrics are often considered important:

  1. Training Time: The time it takes to train a model from scratch until it reaches a satisfactory performance level. It directly measures the computational resources required to train a model. For example, Google’s BERT (Devlin et al. 2019) is a natural language processing model that requires several days to train on a massive corpus of text data using multiple GPUs. The long training time is a significant resource consumption and cost challenge.

  2. Scalability: How well the training process can handle increases in data size or model complexity. Scalability can be assessed by measuring training time, memory usage, and other resource consumption as data size or model complexity increases. OpenAI’s GPT-3 (Brown et al. 2020) model has 175 billion parameters, making it one of the largest language models in existence. Training GPT-3 required extensive engineering efforts to scale the training process to handle the massive model size. This involved using specialized hardware, distributed training, and other techniques to ensure the model could be trained efficiently.

  3. Resource Utilization: The extent to which the training process utilizes available computational resources such as CPU, GPU, memory, and disk I/O. High resource utilization can indicate an efficient training process, while low utilization can suggest bottlenecks or inefficiencies. For instance, training a convolutional neural network (CNN) for image classification requires significant GPU resources. Utilizing multi-GPU setups and optimizing the training code for GPU acceleration can greatly improve resource utilization and training efficiency.

  4. Memory Consumption: The amount of memory the training process uses. Memory consumption can be a limiting factor for training large models or datasets. For example, Google researchers faced significant memory consumption challenges when training BERT. The model has hundreds of millions of parameters, requiring large amounts of memory. The researchers had to develop techniques to reduce memory consumption, such as gradient checkpointing and model parallelism.

  5. Energy Consumption: The energy consumed during training. As machine learning models become more complex, energy consumption has become an important consideration. Training large machine learning models can consume significant energy, leading to a large carbon footprint. For instance, the training of OpenAI’s GPT-3 was estimated to have a carbon footprint equivalent to traveling by car for 700,000 kilometers.

  6. Throughput: The number of training samples processed per unit time. Higher throughput generally indicates a more efficient training process. Throughput is an important metric to consider when training a recommendation system for an e-commerce platform: high throughput ensures that the model can process large volumes of user interaction data promptly, which is crucial for maintaining the relevance and accuracy of the recommendations. It is also important to understand how to balance throughput with latency bounds; therefore, a latency-bounded throughput constraint is often imposed on service-level agreements for data center application deployments.

  7. Cost: The cost of training a model can include both computational and human resources. Cost is important when considering the practicality and feasibility of training large or complex models. Training large language models like GPT-3 is estimated to cost millions of dollars. This cost includes the computational, electricity, and human resources required for model development and training.

  8. Fault Tolerance and Robustness: The ability of the training process to handle failures or errors without crashing or producing incorrect results. This is important for ensuring the reliability of the training process. Network failures or hardware malfunctions can occur in a real-world scenario where a machine learning model is being trained on a distributed system. In recent years, it has become abundantly clear that faults arising from silent data corruption have emerged as a major issue. A fault-tolerant and robust training process can recover from such failures without compromising the model’s integrity.

  9. Ease of Use and Flexibility: The ease with which the training process can be set up and used, and its flexibility in handling different types of data and models. In companies like Google, efficiency can sometimes be measured by the number of Software Engineer (SWE) years saved, since that translates directly to impact. Ease of use and flexibility can reduce the time and effort required to train a model. TensorFlow and PyTorch are popular machine learning frameworks that provide user-friendly interfaces and flexible APIs for building and training machine learning models. These frameworks support many model architectures and are equipped with tools that simplify the training process.

  10. Reproducibility: The ability to reproduce the training process results. Reproducibility is important for verifying a model’s correctness and validity. However, variations due to stochastic network characteristics often make it hard to reproduce the precise behavior of applications being trained, which can present a challenge for benchmarking.
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. “BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding.” In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, 4171–86. Minneapolis, Minnesota: Association for Computational Linguistics. https://doi.org/10.18653/v1/n19-1423.

By benchmarking for these types of metrics, we can obtain a comprehensive view of the training process’s performance and efficiency from a systems perspective. This can help identify areas for improvement and ensure that resources are used effectively.
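As a minimal sketch of how a harness might capture two of these system-level metrics, throughput and peak memory, the following uses only Python’s standard library; the `train_step` function here is a stand-in for a real training step, so the numbers are purely illustrative:

```python
import time
import tracemalloc

def train_step(batch):
    """Stand-in for a real training step: just does some arithmetic."""
    return sum(x * x for x in batch)

# 200 "batches" of 256 samples each, created before measurement starts.
batches = [[float(i)] * 256 for i in range(200)]

tracemalloc.start()                      # track Python heap allocations
start = time.perf_counter()
for batch in batches:
    train_step(batch)
elapsed = time.perf_counter() - start
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

samples = len(batches) * 256
throughput = samples / elapsed           # samples processed per second
print(f"throughput: {throughput:,.0f} samples/s, "
      f"peak memory: {peak / 1024:.1f} KiB")
```

Note that `tracemalloc` only sees allocations made through the Python allocator; a production benchmark would additionally sample GPU memory and device utilization through vendor tooling.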


Tasks


Selecting a handful of representative tasks for benchmarking machine learning systems is challenging because machine learning is applied to various domains with unique characteristics and requirements. Here are some of the challenges faced in selecting representative tasks:

  1. Diversity of Applications: Machine learning is used in numerous fields such as healthcare, finance, natural language processing, computer vision, and many more. Each field has specific tasks that may not be representative of other fields. For example, image classification tasks in computer vision may not be relevant to financial fraud detection.
  2. Variability in Data Types and Quality: Different tasks require different data types, such as text, images, videos, or numerical data. Data quality and availability can vary greatly between tasks, making it difficult to select tasks that are representative of the general challenges faced in machine learning.
  3. Task Complexity and Difficulty: The complexity of tasks varies greatly. Some are relatively straightforward, while others are highly complex and require sophisticated models and techniques. Selecting representative tasks that cover the complexities encountered in machine learning is challenging.
  4. Ethical and Privacy Concerns: Some tasks may involve sensitive or private data, such as medical records or personal information. These tasks may have ethical and privacy concerns that need to be addressed, making them less suitable as representative tasks for benchmarking.
  5. Scalability and Resource Requirements: Different tasks may have different scalability and resource requirements. Some tasks may require extensive computational resources, while others can be performed with minimal resources. Selecting tasks that represent the general resource requirements in machine learning is difficult.
  6. Evaluation Metrics: The metrics used to evaluate the performance of machine learning models vary between tasks. Some tasks may have well-established evaluation metrics, while others lack clear or standardized metrics. This can make it challenging to compare performance across different tasks.
  7. Generalizability of Results: The results obtained from benchmarking on a specific task may not be generalizable to other tasks. This means that a machine learning system’s performance on a selected task may not be indicative of its performance on other tasks.

It is important to carefully consider these factors when designing benchmarks to ensure they are meaningful and relevant to the diverse range of tasks encountered in machine learning.


Benchmarks


Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for training machine learning systems.


MLPerf Training Benchmark


MLPerf is a suite of benchmarks designed to measure the performance of machine learning hardware, software, and services. The MLPerf Training benchmark (Mattson et al. 2020a) focuses on the time it takes to train models to a target quality metric. It includes diverse workloads, such as image classification, object detection, translation, and reinforcement learning.


Metrics:

  • Training time to target quality
  • Throughput (examples per second)
  • Resource utilization (CPU, GPU, memory, disk I/O)

DAWNBench


DAWNBench (Coleman et al. 2019) is a benchmark suite focusing on end-to-end deep learning training time and inference performance. It includes common tasks such as image classification and question answering.

Coleman, Cody, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. 2019. “Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark.” ACM SIGOPS Operating Systems Review 53 (1): 14–25. https://doi.org/10.1145/3352020.3352024.

Metrics:

  • Time to train to target accuracy
  • Inference latency
  • Cost (in terms of cloud computing and storage resources)

Fathom


Fathom (Adolf et al. 2016) is a benchmark from Harvard University that evaluates the performance of deep learning models using a diverse set of workloads. These include common tasks such as image classification, speech recognition, and language modeling.

Adolf, Robert, Saketh Rama, Brandon Reagen, Gu-yeon Wei, and David Brooks. 2016. “Fathom: Reference Workloads for Modern Deep Learning Methods.” In 2016 IEEE International Symposium on Workload Characterization (IISWC), 1–10. IEEE. https://doi.org/10.1109/iiswc.2016.7581275.

Metrics:

  • Operations per second (to measure computational efficiency)
  • Time to completion for each workload
  • Memory bandwidth

Example Use Case


Consider a scenario where we want to benchmark the training of an image classification model on a specific hardware platform.

  1. Task: The task is to train a convolutional neural network (CNN) for image classification on the CIFAR-10 dataset.
  2. Benchmark: We can use the MLPerf Training benchmark for this task. It includes an image classification workload that is relevant to our task.
  3. Metrics: We will measure the following metrics:
     • Training time to reach a target accuracy of 90%.
     • Throughput in terms of images processed per second.
     • GPU and CPU utilization during training.

By measuring these metrics, we can assess the performance and efficiency of the training process on the selected hardware platform. This information can then be used to identify potential bottlenecks or areas for improvement.
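The core of such a harness is a loop that measures wall-clock time until the target accuracy is reached. The sketch below substitutes a made-up saturating learning curve for a real CNN training epoch, so the timing and epoch count are purely illustrative of the harness pattern, not of CIFAR-10 training:

```python
import time

def train_epoch(epoch):
    """Stand-in for one real training epoch: returns validation accuracy.

    A saturating curve is used here purely so the harness has something
    to measure; a real benchmark would train an actual model."""
    return 1.0 - 0.5 * (0.8 ** epoch)   # 0.60, 0.68, 0.744, ... -> 1.0

target = 0.90
start = time.perf_counter()
epoch, acc = 0, 0.0
while acc < target:                      # time-to-target-accuracy loop
    epoch += 1
    acc = train_epoch(epoch)
elapsed = time.perf_counter() - start
print(f"reached {acc:.3f} accuracy after {epoch} epochs in {elapsed:.6f} s")
```

This "time to target quality" loop is the same measurement MLPerf Training standardizes: the clock stops at a fixed quality threshold rather than a fixed number of epochs, so faster-converging systems score better.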


11.4.5 Inference Benchmarks


Inference in machine learning refers to using a trained model to make predictions on new, unseen data. It is the phase where the model applies its learned knowledge to solve the problem it was designed for, such as classifying images, recognizing speech, or translating text.


Purpose


When we build machine learning models, our ultimate goal is to deploy them in real-world applications where they can provide accurate and reliable predictions on new, unseen data. This process of using a trained model to make predictions is known as inference. A machine learning model’s real-world performance can differ significantly from its performance on training or validation datasets, which makes benchmarking inference a crucial step in the development and deployment of machine learning models.


Benchmarking inference allows us to evaluate how well a machine learning model performs in real-world scenarios. This evaluation ensures that the model is practical and reliable when deployed in applications, providing a more comprehensive understanding of the model’s behavior with real data. Additionally, benchmarking can help identify potential bottlenecks or limitations in the model’s performance. For example, if a model takes too long to make a prediction, it may be impractical for real-time applications such as autonomous driving or voice assistants.


Resource efficiency is another critical aspect of inference, as it can be computationally intensive and require significant memory and processing power. Benchmarking helps ensure that the model is efficient regarding resource usage, which is particularly important for edge devices with limited computational capabilities, such as smartphones or IoT devices. Moreover, benchmarking allows us to compare the performance of our model with competing models or previous versions of the same model. This comparison is essential for making informed decisions about which model to deploy in a specific application.


Finally, it is vital to ensure that the model’s predictions are not only accurate but also consistent across different data points. Benchmarking helps verify the model’s accuracy and consistency, ensuring that it meets the application’s requirements. It also assesses the model’s robustness, ensuring that it can handle real-world data variability and still make accurate predictions.


Metrics

  1. Accuracy: Accuracy is one of the most vital metrics when benchmarking machine learning models. It quantifies the proportion of correct predictions made by the model compared to the true values or labels. For example, if a spam detection model can correctly classify 95 out of 100 email messages as spam or not spam, its accuracy would be calculated as 95%.

  2. Latency: Latency is a performance metric that calculates the time lag or delay between receiving an input and producing the corresponding output. An example that clearly depicts latency is a real-time translation application: if a half-second delay exists from the moment a user inputs a sentence to the time the app displays the translated text, then the system’s latency is 0.5 seconds.

  3. Latency-Bounded Throughput: Latency-bounded throughput is a valuable metric that combines the aspects of latency and throughput, measuring the maximum throughput of a system while still meeting a specified latency constraint. For example, in a video streaming application that utilizes a machine learning model to generate and display subtitles automatically, latency-bounded throughput would measure how many video frames the system can process per second (throughput) while ensuring that the subtitles are displayed with no more than a 1-second delay (latency). This metric is particularly important in real-time applications where meeting latency requirements is crucial to the user experience.

  4. Throughput: Throughput assesses the system’s capacity by measuring the number of inferences or predictions a machine learning model can handle within a specific unit of time. Consider a speech recognition system that employs a Recurrent Neural Network (RNN) as its underlying model; if this system can process and understand 50 different audio clips in a minute, then its throughput rate stands at 50 clips per minute.

  5. Inference Time: Inference time is a crucial metric that measures the duration a machine learning system, such as a Convolutional Neural Network (CNN) used in image recognition tasks, takes to process an input and generate a prediction or output. For instance, if a CNN takes approximately 2 milliseconds to accurately identify and label a cat within a given photo, then its inference time is said to be 2 milliseconds.

  6. Energy Efficiency: Energy efficiency is a metric that determines the amount of energy consumed by the machine learning model to perform a single inference. A prime example of this would be a natural language processing model built on a Transformer network architecture; if it utilizes 0.1 Joules of energy to translate a sentence from English to French, its energy efficiency is measured at 0.1 Joules per inference.

  7. Memory Usage: Memory usage quantifies the volume of RAM needed by a machine learning model to carry out inference tasks. A relevant example to illustrate this would be a face recognition system based on a CNN; if such a system requires 150 MB of RAM to process and recognize faces within an image, its memory usage is 150 MB.
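Several of these metrics fall out of a single set of per-request latency samples. The sketch below uses a stand-in `infer` function and a hypothetical 10 ms latency bound, so the printed numbers are illustrative only; the measurement pattern is what matters:

```python
import statistics
import time

def infer(x):
    """Stand-in for a model's forward pass: a little arithmetic only."""
    return sum(i * x for i in range(1000))

# Collect per-request latency samples, as an inference benchmark would.
latencies = []
for request in range(200):
    t0 = time.perf_counter()
    infer(request)
    latencies.append(time.perf_counter() - t0)

latencies.sort()
mean = statistics.mean(latencies)
p95 = latencies[int(0.95 * len(latencies)) - 1]   # 95th-percentile latency
throughput = len(latencies) / sum(latencies)      # inferences per second
# Fraction of requests meeting a (hypothetical) 10 ms latency bound:
# the quantity a latency-bounded throughput constraint is built on.
bound = 0.010
met_bound = sum(1 for t in latencies if t <= bound) / len(latencies)
print(f"mean {mean * 1e6:.1f} us, p95 {p95 * 1e6:.1f} us, "
      f"{throughput:,.0f} inf/s, {met_bound:.0%} within {bound * 1e3:.0f} ms")
```

Reporting a tail percentile alongside the mean is standard practice in inference benchmarking, since occasional slow requests dominate user experience even when the average looks healthy.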

Tasks


The challenges in picking representative tasks for benchmarking inference machine learning systems are largely similar to those discussed for training. Nevertheless, let’s discuss them in the context of inference machine learning systems.

  1. Diversity of Applications: Inference machine learning is employed across numerous domains such as healthcare, finance, entertainment, security, and more. Each domain has unique tasks, and what’s representative in one domain might not be in another. For example, an inference task for predicting stock prices in the financial domain might differ from image recognition tasks in the medical domain.

  2. Variability in Data Types: Different inference tasks require different types of data - text, images, videos, numerical data, etc. Ensuring that benchmarks address the wide variety of data types used in real-world applications is challenging. For example, voice recognition systems process audio data, which is vastly different from the visual data processed by facial recognition systems.

  3. Task Complexity: The complexity of inference tasks can differ immensely, from basic classification tasks to intricate tasks requiring state-of-the-art models. For example, differentiating between two categories (binary classification) is typically simpler than detecting hundreds of object types in a crowded scene.

  4. Real-time Requirements: Some applications demand immediate or real-time responses, while others may tolerate some delay. In autonomous driving, real-time object detection and decision-making are paramount, whereas a recommendation engine for a shopping website might tolerate slight delays.

  5. Scalability Concerns: Given the varied scale of applications, from edge devices to cloud-based servers, tasks must represent the diverse computational environments where inference occurs. For example, an inference task running on a smartphone’s limited resources differs from one running on a powerful cloud server.

  6. Evaluation Metrics Diversity: The metrics used to evaluate performance can differ significantly depending on the task. Finding a common ground or universally accepted metric for diverse tasks is challenging. For example, precision and recall might be vital for a medical diagnosis task, whereas throughput (inferences per second) might be more crucial for video processing tasks.

  7. Ethical and Privacy Concerns: Concerns related to ethics and privacy exist, especially in sensitive areas like facial recognition or personal data processing. These concerns can influence the selection and nature of tasks used for benchmarking. For example, using real-world facial data for benchmarking can raise privacy issues, whereas synthetic data might not replicate real-world challenges.

  8. Hardware Diversity: With a wide range of devices from GPUs, CPUs, and TPUs to custom ASICs used for inference, ensuring that tasks are representative across varied hardware is challenging. For example, a task optimized for inference on a GPU might perform sub-optimally on an edge device.
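To make the metrics-diversity point concrete, precision and recall for a medical-style binary task can be computed as follows; the label values are illustrative.

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for a binary task where 1 is the positive class.

    Precision = TP / (TP + FP): of the cases flagged positive, how many were real.
    Recall    = TP / (TP + FN): of the real positive cases, how many were caught.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall
```

A video-processing benchmark would ignore both of these and report inferences per second instead, which is exactly why no single metric fits every task.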

Benchmarks


Here are some original works that laid the fundamental groundwork for developing systematic benchmarks for inference machine learning systems.


MLPerf Inference Benchmark


MLPerf Inference is a comprehensive benchmark suite that assesses machine learning models’ performance during the inference phase. It encompasses a variety of workloads, including image classification, object detection, and natural language processing, aiming to provide standardized and insightful metrics for evaluating different inference systems.


Metrics:

  • Inference time
  • Latency
  • Throughput
  • Accuracy
  • Energy consumption

AI Benchmark


AI Benchmark is a benchmarking tool that evaluates the performance of AI and machine learning models on mobile devices and edge computing platforms. It includes tests for image classification, object detection, and natural language processing tasks, providing a detailed analysis of the inference performance on different hardware platforms.


Metrics:

  • Inference time
  • Latency
  • Energy consumption
  • Memory usage
  • Throughput

OpenVINO toolkit


OpenVINO toolkit provides a benchmark tool to measure the performance of deep learning models for various tasks, such as image classification, object detection, and facial recognition, on Intel hardware. It offers detailed insights into the models’ inference performance on different hardware configurations.


Metrics:

  • Inference time
  • Throughput
  • Latency
  • CPU and GPU utilization

Example Use Case


Consider a scenario where we want to evaluate the inference performance of an object detection model on a specific edge device.


Task: The task is to perform real-time object detection on video streams, detecting and identifying objects such as vehicles, pedestrians, and traffic signs.


Benchmark: We can use the AI Benchmark for this task as it evaluates inference performance on edge devices, which suits our scenario.


Metrics: We will measure the following metrics:

  • Inference time to process each video frame
  • Latency to generate the bounding boxes for detected objects
  • Energy consumption during the inference process
  • Throughput in terms of video frames processed per second

By measuring these metrics, we can assess the performance of the object detection model on the edge device and identify any potential bottlenecks or areas for optimization to enhance real-time processing capabilities.
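A harness for the use case above might time each frame individually, so that both throughput and tail latency become visible; `detect_fn` below is a hypothetical placeholder for the actual object detector.

```python
import time
import statistics

def benchmark_frames(detect_fn, frames):
    """Run the detector over a stream of frames, recording per-frame latency.

    detect_fn: callable taking a frame and returning bounding boxes.
    Returns throughput (FPS), mean latency, and 95th-percentile latency.
    """
    latencies = []
    start = time.perf_counter()
    for frame in frames:
        t0 = time.perf_counter()
        detect_fn(frame)  # bounding boxes are discarded; we only time the call
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start
    latencies.sort()
    return {
        "fps": len(frames) / total,
        "mean_latency_ms": 1000 * statistics.mean(latencies),
        "p95_latency_ms": 1000 * latencies[int(0.95 * (len(latencies) - 1))],
    }
```

Energy consumption is deliberately absent here: as with the metrics discussed earlier, it requires an external power meter rather than software timing.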

Exercise 11.2 (Inference Benchmarks - MLPerf)

Get ready to put your AI models to the ultimate test! MLPerf is like the Olympics for machine learning performance. In this Colab, we’ll use a toolkit called CK to run official MLPerf benchmarks, measure how fast and accurate your model is, and even use TVM to give it a super speed boost. Are you ready to see your model earn its medal?

11.4.6 Benchmark Example


To properly illustrate the components of a systems benchmark, we can look at the keyword spotting benchmark in MLPerf Tiny and explain the motivation behind each decision.


Task


Keyword spotting was selected as a task because it is a common use case in TinyML that has been well-established for years. Additionally, the typical hardware used for keyword spotting differs substantially from the offerings of other benchmarks, such as MLPerf Inference’s speech recognition task.


Dataset


Google Speech Commands (Warden 2018) was selected as the best dataset to represent the task. The dataset is well-established in the research community and has permissive licensing, allowing it to be easily used in a benchmark.

Warden, Pete. 2018. “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition.” ArXiv Preprint abs/1804.03209. https://arxiv.org/abs/1804.03209.

Model


The next core component is the model, which will act as the primary workload for the benchmark. The model should be well established as a solution to the selected task rather than a state-of-the-art solution. The model selected is a simple depthwise separable convolution model. This architecture is not the state-of-the-art solution to the task, but it is well-established and not designed for a specific hardware platform like many state-of-the-art solutions. Despite being an inference benchmark, the benchmark also establishes a reference training recipe to be fully reproducible and transparent.


Metrics


Latency was selected as the primary metric for the benchmark, as keyword spotting systems need to react quickly to maintain user satisfaction. Additionally, given that TinyML systems are often battery-powered, energy consumption is measured to ensure the hardware platform is efficient. The accuracy of the model is also measured to ensure that the optimizations applied by a submitter, such as quantization, don’t degrade the accuracy beyond a threshold.
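The accuracy requirement described above amounts to a simple gate applied to each submission. A sketch, with an illustrative (not official MLPerf Tiny) 2% threshold:

```python
def passes_accuracy_gate(reference_acc, submission_acc, max_drop=0.02):
    """Return True if the submitter's optimizations (e.g., quantization)
    keep accuracy within max_drop of the reference model.

    The 2% default is illustrative; MLPerf Tiny publishes the actual
    quality targets per task.
    """
    return (reference_acc - submission_acc) <= max_drop
```

The point is that accuracy acts as a constraint on submissions, while latency and energy are the quantities being compared.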


Benchmark Harness


MLPerf Tiny uses EEMBC’s EnergyRunner benchmark harness to load the inputs to the model and to isolate and measure the device’s energy consumption. When measuring energy consumption, it’s critical to select a harness that is accurate at the expected power levels of the devices under test and simple enough not to become a burden for benchmark participants.


Baseline Submission


Baseline submissions are critical for contextualizing results and as a reference point to help participants get started. The baseline submission should prioritize simplicity and readability over state-of-the-art performance. The keyword spotting baseline uses a standard STM microcontroller as its hardware and TensorFlow Lite for Microcontrollers (David et al. 2021) as its inference framework.

David, Robert, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, et al. 2021. “TensorFlow Lite Micro: Embedded Machine Learning for TinyML Systems.” Proceedings of Machine Learning and Systems 3: 800–811.

11.4.7 Challenges and Limitations


While benchmarking provides a structured methodology for performance evaluation in complex domains like artificial intelligence and computing, the process also poses several challenges. If not properly addressed, these challenges can undermine the credibility and accuracy of benchmarking results. Some of the predominant difficulties faced in benchmarking include the following:

  • Incomplete problem coverage: Benchmark tasks may not fully represent the problem space. For instance, common image classification datasets like CIFAR-10 have limited diversity in image types. Algorithms tuned for such benchmarks may fail to generalize well to real-world datasets.
  • Statistical insignificance: Benchmarks must have enough trials and data samples to produce statistically significant results. For example, benchmarking an OCR model on only a few text scans may not adequately capture its true error rates.
  • Limited reproducibility: Varying hardware, software versions, codebases, and other factors can reduce the reproducibility of benchmark results. MLPerf addresses this by providing reference implementations and environment specifications.
  • Misalignment with end goals: Benchmarks focusing only on speed or accuracy metrics may misalign with real-world objectives like cost and power efficiency. Benchmarks must reflect all critical performance axes.
  • Rapid staleness: Due to the rapid pace of advancements in AI and computing, benchmarks and their datasets can quickly become outdated. Maintaining up-to-date benchmarks is thus a persistent challenge.

But of all these, the most important challenge is benchmark engineering.


Hardware Lottery


The “hardware lottery” in benchmarking machine learning systems refers to the situation where the success or efficiency of a machine learning model is significantly influenced by the compatibility of the model with the underlying hardware (Chu et al. 2021). In other words, some models perform exceptionally well not because they are intrinsically superior, but because they are a good fit for the particular characteristics or capabilities of the hardware they run on. Figure fig-hardware-lottery demonstrates the performance of different models on different hardware: notice how (following the big yellow arrow) the MobileNet V3 Large model (in green) has the lowest latency among all models when run unquantized on the Pixel 4 CPU, yet performs the worst on the Pixel 4’s Qualcomm Snapdragon 855 DSP. Unfortunately, the hardware used is often omitted from papers or only briefly mentioned, making results difficult, if not impossible, to reproduce.

Chu, Grace, Okan Arikan, Gabriel Bender, Weijun Wang, Achille Brighton, Pieter-Jan Kindermans, Hanxiao Liu, Berkin Akin, Suyog Gupta, and Andrew Howard. 2021. “Discovering Multi-Hardware Mobile Models via Architecture Search.” In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 3022–31. IEEE. https://doi.org/10.1109/cvprw53098.2021.00337.

Figure 11.2: Hardware Lottery.

For instance, certain machine learning models may be designed and optimized to take advantage of the parallel processing capabilities of specific hardware accelerators, such as Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs). As a result, these models might show superior performance when benchmarked on such hardware compared to other models that are not optimized for the hardware.


For example, a 2018 paper introduced a new convolutional neural network architecture for image classification that achieved state-of-the-art accuracy on ImageNet. However, the paper only mentioned that the model was trained on 8 GPUs without specifying the GPU model, memory size, or other relevant details. A follow-up study tried to reproduce the results but found that training the same model on commonly available GPUs achieved 10% lower accuracy, even after hyperparameter tuning. The original hardware likely had far higher memory bandwidth and compute power. As another example, training times for large language models can vary drastically based on the GPUs used.


The “hardware lottery” can introduce challenges and biases in benchmarking machine learning systems, as the model’s performance is not solely dependent on the model’s architecture or algorithm but also on the compatibility and synergies with the underlying hardware. This can make it difficult to compare different models fairly and to identify the best model based on its intrinsic merits. It can also lead to a situation where the community converges on models that are a good fit for the popular hardware of the day, potentially overlooking other models that might be superior but incompatible with the current hardware trends.


Benchmark Engineering


The hardware lottery occurs when a machine learning model unintentionally performs exceptionally well or poorly on a specific hardware setup due to unforeseen compatibility or incompatibility. The model is not explicitly designed or optimized for that particular hardware by its developers or engineers; rather, it happens to align or misalign with the hardware’s capabilities or limitations. In this case, the model’s performance on the hardware is a byproduct of coincidence rather than design.


In contrast to the accidental hardware lottery, benchmark engineering involves deliberately optimizing or designing a machine learning model to perform exceptionally well on specific hardware, often to win benchmarks or competitions. This intentional optimization might include tweaking the model’s architecture, algorithms, or parameters to exploit the hardware’s features and capabilities fully.


Problem


Benchmark engineering refers to tweaking or modifying an AI system to optimize performance on specific benchmark tests, often at the expense of generalizability or real-world performance. This can include adjusting hyperparameters, training data, or other aspects of the system specifically to achieve high scores on benchmark metrics without necessarily improving the overall functionality or utility of the system.


The motivation behind benchmark engineering often stems from the desire to achieve high-performance scores for marketing or competitive purposes. High benchmark scores can demonstrate the superiority of an AI system compared to competitors and can be a key selling point for potential users or investors. This pressure to perform well on benchmarks sometimes leads to prioritizing benchmark-specific optimizations over more holistic improvements to the system.


Benchmark engineering carries several risks. One of the primary risks is that the AI system may perform worse in real-world applications than its benchmark scores suggest. This can lead to user dissatisfaction, reputational damage, and potential safety or ethical concerns. Furthermore, benchmark engineering can contribute to a lack of transparency and accountability in the AI community, as it can be difficult to discern how much of an AI system’s performance is due to genuine improvements versus benchmark-specific optimizations.


The AI community must prioritize transparency and accountability to mitigate the risks associated with benchmark engineering. This can include disclosing any optimizations or adjustments made specifically for benchmark tests and providing more comprehensive evaluations of AI systems that include real-world performance metrics and benchmark scores. Researchers and developers must prioritize holistic improvements to AI systems that improve their generalizability and functionality across various applications rather than focusing solely on benchmark-specific optimizations.


Issues


One of the primary problems with benchmark engineering is that it can compromise the real-world performance of AI systems. When developers focus on optimizing their systems to achieve high scores on specific benchmark tests, they may neglect other important aspects of system performance that are crucial in real-world applications. For example, an AI system designed for image recognition might be engineered to perform exceptionally well on a benchmark test that includes a specific set of images but struggle to accurately recognize images that differ slightly from those in the test set.


Another problem with benchmark engineering is that it can result in AI systems that lack generalizability. In other words, while the system may perform well on the benchmark test, it may struggle to handle a diverse range of inputs or scenarios. For instance, an AI model developed for natural language processing might be engineered to achieve high scores on a benchmark test that includes a specific type of text but fail to accurately process text that falls outside of that specific type.


It can also lead to misleading results. When AI systems are engineered to perform well on benchmark tests, the results may not accurately reflect the system’s true capabilities. This can be problematic for users or investors who rely on benchmark scores to make informed decisions about which AI systems to use or invest in. For example, an AI system engineered to achieve high scores on a benchmark test for speech recognition might be far less capable of accurately recognizing speech in real-world situations, leading users or investors to make decisions based on inaccurate information.


Mitigation


There are several ways to mitigate benchmark engineering. Transparency in the benchmarking process is crucial to maintaining benchmark accuracy and reliability. This involves clearly disclosing the methodologies, data sets, and evaluation criteria used in benchmark tests, as well as any optimizations or adjustments made to the AI system for the purpose of the benchmark.


One way to achieve transparency is through the use of open-source benchmarks. Open-source benchmarks are made publicly available, allowing researchers, developers, and other stakeholders to review, critique, and contribute to them, thereby ensuring their accuracy and reliability. This collaborative approach also facilitates sharing best practices and developing more robust and comprehensive benchmarks.


One example is the MLPerf Tiny. It’s an open-source framework designed to make it easy to compare different solutions in the world of TinyML. Its modular design allows components to be swapped out for comparison or improvement. The reference implementations, shown in green and orange in Figure fig-ml-perf, act as the baseline for results. TinyML often needs optimization across the entire system, and users can contribute by focusing on specific parts, like quantization. The modular benchmark design allows users to showcase their contributions and competitive advantage by modifying a reference implementation. In short, MLPerf Tiny offers a flexible and modular way to assess and enhance TinyML applications, making it easier to compare and improve different aspects of the technology.

Figure 11.3: MLPerf Tiny modular design. Credit: Mattson et al. (2020a).

Mattson, Peter, et al. 2020a. “MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance.” IEEE Micro 40 (2): 8–16. https://doi.org/10.1109/mm.2020.2974843.

Another method for achieving transparency is through peer review of benchmarks. This involves having independent experts review and validate the benchmark’s methodology, data sets, and results to ensure their credibility and reliability. Peer review can provide a valuable means of verifying the accuracy of benchmark tests and help build confidence in the results.


Standardization of benchmarks is another important solution to mitigate benchmark engineering. Standardized benchmarks provide a common framework for evaluating AI systems, ensuring consistency and comparability across different systems and applications. This can be achieved by developing industry-wide standards and best practices for benchmarking and through common metrics and evaluation criteria.


Third-party verification of results can also be valuable in mitigating benchmark engineering. This involves having an independent third party verify the results of a benchmark test to ensure their credibility and reliability. Third-party verification can build confidence in the results and provide a valuable means of validating the performance and capabilities of AI systems.


11.5 Model Benchmarking


Benchmarking machine learning models is important for determining the effectiveness and efficiency of various machine learning algorithms in solving specific tasks or problems. By analyzing the results obtained from benchmarking, developers and researchers can identify their models’ strengths and weaknesses, leading to more informed decisions on model selection and further optimization.


The evolution and progress of machine learning models are intrinsically linked to the availability and quality of data sets. In machine learning, data acts as the raw material that powers the algorithms, allowing them to learn, adapt, and ultimately perform tasks that were traditionally the domain of humans. Therefore, it is important to understand this history.


11.5.1 Historical Context


Machine learning datasets have a rich history and have evolved significantly over the years, growing in size, complexity, and diversity to meet the ever-increasing demands of the field. Let’s take a closer look at this evolution, starting from one of the earliest and most iconic datasets – MNIST.


MNIST (1998)


The MNIST dataset, created by Yann LeCun, Corinna Cortes, and Christopher J.C. Burges in 1998, can be considered a cornerstone in the history of machine learning datasets. It comprises 70,000 labeled 28x28 pixel grayscale images of handwritten digits (0-9). MNIST has been widely used for benchmarking algorithms in image processing and machine learning as a starting point for many researchers and practitioners. Figure fig-mnist shows some examples of handwritten digits.

Figure 11.4: MNIST handwritten digits. Credit: Suvanjanprasai.

ImageNet (2009)


Fast forward to 2009, and we see the introduction of the ImageNet dataset, which marked a significant leap in the scale and complexity of datasets. ImageNet consists of over 14 million labeled images spanning more than 20,000 categories. Fei-Fei Li and her team developed it to advance object recognition and computer vision research. The dataset became synonymous with the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), an annual competition crucial in developing deep learning models, including the famous AlexNet in 2012.


COCO (2014)


The Common Objects in Context (COCO) dataset (Lin et al. 2014), released in 2014, further expanded the landscape of machine learning datasets by introducing a richer set of annotations. COCO consists of images containing complex scenes with multiple objects, and each image is annotated with object bounding boxes, segmentation masks, and captions. This dataset has been instrumental in advancing research in object detection, segmentation, and image captioning.

Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. “Microsoft COCO: Common Objects in Context.” In Computer Vision - ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V, 740–55. Springer.

COCO dataset examples. Credit: COCO. https://cocodataset.org/images/jpg/coco-examples.jpg

GPT-3 (2020)


While the above examples primarily focus on image datasets, there have also been significant developments in text datasets. One notable example is GPT-3 (Brown et al. 2020), developed by OpenAI. GPT-3 is a language model trained on diverse internet text. Although the dataset used to train GPT-3 is not publicly available, the model itself, consisting of 175 billion parameters, is a testament to the scale and complexity of modern machine learning datasets and models.

Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.

Present and Future


Today, we have a plethora of datasets spanning various domains, including healthcare, finance, social sciences, and more. The following characteristics help us taxonomize the space and growth of machine learning datasets that fuel model development.

  1. Diversity of Data Sets: The variety of data sets available to researchers and engineers has expanded dramatically, covering many fields, including natural language processing, image recognition, and more. This diversity has fueled the development of specialized machine-learning models tailored to specific tasks, such as translation, speech recognition, and facial recognition.

  2. Volume of Data: The sheer volume of data that has become available in the digital age has also played a crucial role in advancing machine learning models. Large data sets enable models to capture the complexity and nuances of real-world phenomena, leading to more accurate and reliable predictions.

  3. Quality and Cleanliness of Data: The quality of data is another critical factor that influences the performance of machine learning models. Clean, well-labeled, and unbiased data sets are essential for training models that are robust and fair.

  4. Open Access to Data: The availability of open-access data sets has also contributed significantly to machine learning’s progress. Open data allows researchers from around the world to collaborate, share insights, and build upon each other’s work, leading to faster innovation and the development of more advanced models.

  5. Ethics and Privacy Concerns: As data sets grow in size and complexity, ethical considerations and privacy concerns become increasingly important. There is an ongoing debate about the balance between leveraging data for machine learning advancements and protecting individuals’ privacy rights.

The development of machine learning models relies heavily on the availability of diverse, large, high-quality, and open-access data sets. As we move forward, addressing the ethical considerations and privacy concerns associated with using large data sets is crucial to ensure that machine learning technologies benefit society. There is a growing awareness that data acts as the rocket fuel for machine learning, driving and fueling the development of machine learning models. Consequently, more focus is being placed on developing the data sets themselves. We will explore this in further detail in the data benchmarking section.


11.5.2 Model Metrics


Machine learning model evaluation has evolved from a narrow focus on accuracy to a more comprehensive approach considering a range of factors, from ethical considerations and real-world applicability to practical constraints like model size and efficiency. This shift reflects the field’s maturation as machine learning models are increasingly applied in diverse, complex real-world scenarios.


Accuracy


Accuracy is one of the most intuitive and commonly used metrics for evaluating machine learning models. At its core, accuracy measures the proportion of correct predictions made by the model out of all predictions. For example, imagine we have developed a machine learning model to classify images as either containing a cat or not. If we test this model on a dataset of 100 images, and it correctly identifies 90 of them, we would calculate its accuracy as 90%.
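The cat-classifier example above reduces to a one-line computation:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(y_true) == len(y_pred)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# 100 test images, 90 classified correctly -> accuracy of 0.9 (90%).
labels      = [1] * 50 + [0] * 50
predictions = [1] * 50 + [1] * 10 + [0] * 40
print(accuracy(labels, predictions))  # 0.9
```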


In the initial stages of machine learning, accuracy was often the primary, if not the only, metric considered when evaluating model performance. This is understandable, given its straightforward nature and ease of interpretation. However, as the field has progressed, the limitations of relying solely on accuracy have become more apparent.


Consider the example of a medical diagnosis model with an accuracy of 95%. While at first glance this may seem impressive, we must delve deeper to fully assess the model’s performance. If the model fails to accurately diagnose rare conditions that carry severe consequences, its high overall accuracy is far less meaningful. A pertinent example of this is Google’s retinopathy machine learning model, which was designed to diagnose diabetic retinopathy and diabetic macular edema from retinal photographs.


The Google model demonstrated impressive accuracy levels in lab settings, but when deployed in real-world clinical environments in Thailand, it faced significant challenges. In the real-world setting, the model encountered diverse patient populations, varying image quality, and a range of different medical conditions that it had not been exposed to during its training. Consequently, its real-world performance fell short, and it struggled to maintain the accuracy levels observed in the lab. This example serves as a clear reminder that while high accuracy is an important and desirable attribute for a medical diagnosis model, it must be evaluated in conjunction with other factors, such as the model’s ability to generalize to different populations and handle diverse and unpredictable real-world conditions, to truly understand its value and potential impact on patient care.


Similarly, if the model performs well on average but exhibits significant disparities in performance across different demographic groups, this, too, would be cause for concern.


The evolution of machine learning has thus seen a shift towards a more holistic approach to model evaluation, taking into account not just accuracy, but also other crucial factors such as fairness, transparency, and real-world applicability. A prime example is the Gender Shades project at MIT Media Lab, led by Joy Buolamwini, highlighting significant racial and gender biases in commercial facial recognition systems. The project evaluated the performance of three facial recognition technologies developed by IBM, Microsoft, and Face++. It found that they all exhibited biases, performing better on lighter-skinned and male faces compared to darker-skinned and female faces.


While accuracy remains a fundamental and valuable metric for evaluating machine learning models, a more comprehensive approach is required to fully assess a model’s performance. This means considering additional metrics that account for fairness, transparency, and real-world applicability, as well as conducting rigorous testing across diverse datasets to uncover and mitigate any potential biases. The move towards a more holistic approach to model evaluation reflects the maturation of the field and its increasing recognition of the real-world implications and ethical considerations associated with deploying machine learning models.


Fairness


Fairness in machine learning models is a multifaceted and critical aspect that requires careful attention, particularly in high-stakes applications that significantly affect people’s lives, such as in loan approval processes, hiring, and criminal justice. It refers to the equitable treatment of all individuals, irrespective of their demographic or social attributes such as race, gender, age, or socioeconomic status.


Simply relying on accuracy can be insufficient and potentially misleading when evaluating models. For instance, consider a loan approval model with a 95% accuracy rate. While this figure may appear impressive at first glance, it does not reveal how the model performs across different demographic groups. If this model consistently discriminates against a particular group, its accuracy is less commendable, and its fairness is questioned.


Discrimination can manifest in various forms, such as direct discrimination, where a model explicitly uses sensitive attributes like race or gender in its decision-making process, or indirect discrimination, where seemingly neutral variables correlate with sensitive attributes, indirectly influencing the model’s outcomes. An infamous example of the latter is the COMPAS tool used in the US criminal justice system, which exhibited racial biases in predicting recidivism rates despite not explicitly using race as a variable.


Addressing fairness involves careful examination of the model’s performance across diverse groups, identifying potential biases, and rectifying disparities through corrective measures such as re-balancing datasets, adjusting model parameters, and implementing fairness-aware algorithms. Researchers and practitioners continuously develop metrics and methodologies tailored to specific use cases to evaluate fairness in real-world scenarios. For example, disparate impact analysis, demographic parity, and equal opportunity are some of the metrics employed to assess fairness.
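To make two of these metrics concrete, the following sketch computes the demographic parity difference and the disparate impact ratio for a hypothetical loan-approval model. The predictions and group labels here are made up for illustration, not drawn from any real system.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates; the 'four-fifths rule' compares this to 0.8."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical approval decisions (1 = approved) for two demographic groups.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(round(demographic_parity_difference(y_pred, group), 3))  # 0.6
print(disparate_impact_ratio(y_pred, group))                   # 0.25
```

Here group 0 is approved 80% of the time versus 20% for group 1, so the disparate impact ratio of 0.25 falls far below the common 0.8 threshold, flagging the model for a fairness review even if its overall accuracy is high.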


Additionally, transparency and interpretability of models are fundamental to achieving fairness. Understanding how a model makes decisions can reveal potential biases and enable stakeholders to hold developers accountable. Open-source tools like AI Fairness 360 by IBM and Fairness Indicators by TensorFlow are being developed to facilitate fairness assessments and mitigation of biases in machine learning models.


Ensuring fairness in machine learning models, particularly in applications that significantly impact people’s lives, requires rigorous evaluation of the model’s performance across diverse groups, careful identification and mitigation of biases, and implementation of transparency and interpretability measures. By comprehensively addressing fairness, we can work towards developing machine learning models that are equitable, just, and beneficial for society.


Complexity

Parameters

In the initial stages of machine learning, model benchmarking often relied on parameter counts as a proxy for model complexity. The rationale was that more parameters typically lead to a more complex model, which should, in turn, deliver better performance. However, this approach has proven inadequate, as it fails to account for the computational cost associated with processing many parameters.


For example, GPT-3, developed by OpenAI, is a language model that boasts an astounding 175 billion parameters. While it achieves state-of-the-art performance on various natural language processing tasks, its size and the computational resources required to run it make it impractical for deployment in many real-world scenarios, especially those with limited computational capabilities.


Relying on parameter counts as a proxy for model complexity also fails to consider the model’s efficiency. If optimized for efficiency, a model with fewer parameters might be just as effective, if not more so, than a model with a higher parameter count. For instance, MobileNets, developed by Google, is a family of models designed specifically for mobile and edge devices. They utilize depth-wise separable convolutions to reduce the number of parameters and computational costs while still achieving competitive performance.
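The savings from depth-wise separable convolutions can be seen in a back-of-the-envelope parameter count. The layer sizes below are illustrative, not taken from the actual MobileNet architecture, and biases are ignored for simplicity.

```python
def conv_params(k, c_in, c_out):
    """Parameters in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv (one filter per input channel) plus a 1x1 pointwise conv."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)                 # 589,824 parameters
dws = depthwise_separable_params(3, 256, 256)  # 67,840 parameters
print(std, dws, round(std / dws, 1))           # roughly an 8.7x reduction
```

For a 3x3 convolution, the reduction factor approaches 9 as the number of output channels grows, which is why this factorization is so effective on mobile and edge hardware.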


In light of these limitations, the field has moved towards a more holistic approach to model benchmarking that considers parameter counts and other crucial factors such as floating-point operations per second (FLOPs), memory consumption, and latency. FLOPs, in particular, have emerged as an important metric as they provide a more accurate representation of the computational load a model imposes. This shift towards a more comprehensive approach to model benchmarking reflects a recognition of the need to balance performance with practicality, ensuring that models are effective, efficient, and deployable in real-world scenarios.

FLOPs

The size of a machine learning model is an essential aspect that directly impacts its usability in practical scenarios, especially when computational resources are limited. Traditionally, the number of parameters in a model was often used as a proxy for its size, with the underlying assumption being that more parameters would translate to better performance. However, this simplistic view does not consider the computational cost of processing these parameters. This is where the concept of floating-point operations per second (FLOPs) comes into play, providing a more accurate representation of the computational load a model imposes.


FLOPs measure the number of floating-point operations a model performs to generate a prediction. A model with many FLOPs requires substantial computational resources to process the vast number of operations, which may render it impractical for certain applications. Conversely, a model with a lower FLOP count is more lightweight and can be easily deployed in scenarios where computational resources are limited.
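As a rough illustration of how such counts are derived, the sketch below tallies the forward-pass FLOPs of a hypothetical three-layer MLP, using the common convention of one multiply plus one add per weight. The layer sizes are made up for the example.

```python
def dense_flops(n_in, n_out):
    """Approximate FLOPs for a fully connected layer: one multiply and one add per weight."""
    return 2 * n_in * n_out

# Hypothetical MLP: 784 -> 512 -> 256 -> 10 (e.g., a small image classifier).
layers = [(784, 512), (512, 256), (256, 10)]
total = sum(dense_flops(n_in, n_out) for n_in, n_out in layers)
print(f"{total:,} FLOPs per forward pass")  # 1,070,080 FLOPs per forward pass
```

Convolutional layers follow the same idea, with the per-weight cost multiplied by the number of output spatial positions, which is why FLOP counts diverge sharply from raw parameter counts for vision models.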


Let’s consider an example. BERT (Bidirectional Encoder Representations from Transformers), a popular natural language processing model, has over 340 million parameters, making it a large model with high accuracy and impressive performance across various tasks. However, the sheer size of BERT, coupled with its high FLOP count, makes it a computationally intensive model that may not be suitable for real-time applications or deployment on edge devices with limited computational capabilities.


In light of this, there has been a growing interest in developing smaller models that can achieve similar performance levels as their larger counterparts while being more efficient in computational load. DistilBERT, for instance, is a smaller version of BERT that retains 97% of its performance while being 40% smaller in terms of parameter count. The size reduction also translates to a lower FLOP count, making DistilBERT a more practical choice for resource-constrained scenarios.


In summary, while parameter count provides a useful indication of model size, it is not a comprehensive metric, as it fails to consider the computational cost associated with processing these parameters. FLOPs, on the other hand, offer a more accurate representation of a model’s computational load and are thus an essential consideration when deploying machine learning models in real-world scenarios, particularly when computational resources are limited. The evolution from relying solely on parameter count to considering FLOPs signifies a maturation in the field, reflecting a greater awareness of the practical constraints and challenges of deploying machine learning models in diverse settings.

Efficiency

Efficiency metrics, such as memory consumption and latency/throughput, have also gained prominence. These metrics are particularly crucial when deploying models on edge devices or in real-time applications, as they measure how quickly a model can process data and how much memory it requires. In this context, Pareto curves are often used to visualize the trade-off between different metrics, helping stakeholders decide which model best suits their needs.
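A minimal sketch of how such a Pareto frontier might be computed is shown below, using made-up (latency, accuracy) measurements rather than results from any real benchmark. A model is kept only if no other model is at least as fast and at least as accurate.

```python
def pareto_frontier(models):
    """Return names of models not dominated by any other on (latency, accuracy)."""
    frontier = []
    for name, lat, acc in models:
        dominated = any(
            l <= lat and a >= acc and (l, a) != (lat, acc)
            for _, l, a in models
        )
        if not dominated:
            frontier.append(name)
    return frontier

# Hypothetical (name, latency in ms, accuracy) measurements.
models = [("A", 5, 0.80), ("B", 12, 0.91), ("C", 20, 0.90), ("D", 30, 0.95)]
print(pareto_frontier(models))  # ['A', 'B', 'D']
```

Model C drops out because B is both faster and more accurate; a stakeholder then picks among A, B, and D according to their latency budget.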


11.5.3 Lessons Learned


Model benchmarking has offered us several valuable insights that can be leveraged to drive innovation in system benchmarks. The progression of machine learning models has been profoundly influenced by the advent of leaderboards and the open-source availability of models and datasets. These elements have served as significant catalysts, propelling innovation and accelerating the integration of cutting-edge models into production environments. However, as we will explore further, these are not the only contributors to the development of machine learning benchmarks.


Leaderboards play a vital role in providing an objective and transparent method for researchers and practitioners to evaluate the efficacy of different models, ranking them based on their performance in benchmarks. This system fosters a competitive environment, encouraging the development of models that are not only accurate but also efficient. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a prime example of this, with its annual leaderboard significantly contributing to developing groundbreaking models such as AlexNet.


Open-source access to state-of-the-art models and datasets further democratizes machine learning, facilitating collaboration among researchers and practitioners worldwide. This open access accelerates the process of testing, validation, and deployment of new models in production environments, as evidenced by the widespread adoption of models like BERT and GPT-3 in various applications, from natural language processing to more complex, multi-modal tasks.


Community collaboration platforms like Kaggle have revolutionized the field by hosting competitions that unite data scientists from across the globe to solve intricate problems. Specific benchmarks serve as the goalposts for innovation and model development.


Moreover, the availability of diverse and high-quality datasets is paramount in training and testing machine learning models. Datasets such as ImageNet have played an instrumental role in the evolution of image recognition models, while extensive text datasets have facilitated advancements in natural language processing models.


Lastly, the contributions of academic and research institutions should not be overlooked. Their role in publishing research papers, sharing findings at conferences, and fostering collaboration between various institutions has significantly contributed to advancing machine learning models and benchmarks.


11.5.4 Limitations and Challenges


While model benchmarks are an essential tool in assessing machine learning models, several limitations and challenges should be addressed to ensure that they accurately reflect a model’s performance in real-world scenarios.


Dataset does not Correspond to Real-World Scenarios: Often, the data used in model benchmarks is cleaned and preprocessed to such an extent that it may not accurately represent the data that a model would encounter in real-world applications. This idealized version of the data can lead to overestimating a model’s performance. In the case of the ImageNet dataset, the images are well-labeled and categorized. Still, in a real-world scenario, a model may need to deal with images that are blurry, poorly lit, or taken from awkward angles. This discrepancy can significantly affect the model’s performance.


Sim2Real Gap: The Sim2Real gap refers to the difference in the performance of a model when transitioning from a simulated environment to a real-world environment. This gap is often observed in robotics, where a robot trained in a simulated environment struggles to perform tasks in the real world due to the complexity and unpredictability of real-world environments. A robot trained to pick up objects in a simulated environment may fail to perform the same task in the real world because the simulated environment does not accurately represent the complexities of real-world physics, lighting, and object variability.


Challenges in Creating Datasets: Creating a dataset for model benchmarking is a challenging task that requires careful consideration of various factors such as data quality, diversity, and representation. As discussed in the data engineering section, ensuring that the data is clean, unbiased, and representative of the real-world scenario is crucial for the accuracy and reliability of the benchmark. For example, when creating a dataset for a healthcare-related task, it is important to ensure that the data is representative of the entire population and not biased towards a particular demographic. This ensures that the model performs well across diverse patient populations.


Model benchmarks are essential in measuring the capability of a model architecture in solving a fixed task, but it is important to address the limitations and challenges associated with them. This includes ensuring that the dataset accurately represents real-world scenarios, addressing the Sim2Real gap, and overcoming the challenges of creating unbiased and representative datasets. By addressing these challenges and many others, we can ensure that model benchmarks provide a more accurate and reliable assessment of a model’s performance in real-world applications.


The Speech Commands dataset and its successor, MSWC, are common benchmarks for one of the quintessential TinyML applications: keyword spotting. Speech Commands establishes streaming error metrics, beyond standard top-1 classification accuracy, that are more relevant to the keyword spotting use case. Using case-relevant metrics is what elevates a dataset to a model benchmark.
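One streaming-style metric of this kind, false accepts per hour of audio, can be sketched as follows. This is a simplification of the actual streaming evaluation protocol, and the timestamps and keyword windows are hypothetical.

```python
def false_accepts_per_hour(detections, true_windows, audio_hours):
    """Count detections that fall outside every true-keyword window, per hour of audio.

    detections: list of detection timestamps in seconds.
    true_windows: list of (start, end) intervals where a keyword was actually spoken.
    """
    false_accepts = sum(
        1 for t in detections
        if not any(start <= t <= end for start, end in true_windows)
    )
    return false_accepts / audio_hours

# Hypothetical 2-hour stream: three detections, one true keyword spoken at 10-11 s.
fa_rate = false_accepts_per_hour([10.5, 300.0, 4000.0], [(10.0, 11.0)], 2.0)
print(fa_rate)  # 1.0 false accepts per hour
```

Unlike clip-level top-1 accuracy, this metric captures how often an always-on detector would wake spuriously in deployment, which is usually the quantity a TinyML product team actually cares about.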


11.6 Data Benchmarking


For the past several years, AI has focused on developing increasingly sophisticated machine learning models like large language models. The goal has been to create models capable of human-level or superhuman performance on a wide range of tasks by training them on massive datasets. This model-centric approach produced rapid progress, with models attaining state-of-the-art results on many established benchmarks. Figure fig-superhuman-perf shows the performance of AI systems relative to human performance (marked by the horizontal line at 0) across five applications: handwriting recognition, speech recognition, image recognition, reading comprehension, and language understanding. Over the past decade, AI performance has surpassed that of humans in each of these areas.


However, growing concerns about issues like bias, safety, and robustness persist even in models that achieve high accuracy on standard benchmarks. Additionally, some popular datasets used for evaluating models are beginning to saturate, with models reaching near-perfect performance on existing test splits (Kiela et al. 2021). As a simple example, there are test images in the classic MNIST handwritten digit dataset that may look indecipherable to most human evaluators but were assigned a label when the dataset was created - models that happen to agree with those labels may appear to exhibit superhuman performance but instead may only be capturing idiosyncrasies of the labeling and acquisition process from the dataset’s creation in 1994. In the same spirit, computer vision researchers now ask, “Are we done with ImageNet?” (Beyer et al. 2020). This highlights limitations in the conventional model-centric approach of optimizing accuracy on fixed datasets through architectural innovations.

Beyer, Lucas, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, and Aäron van den Oord. 2020. “Are We Done with ImageNet?” ArXiv Preprint abs/2006.07159. https://arxiv.org/abs/2006.07159.

Figure 11.5: AI vs. human performance. Credit: Kiela et al. (2021).

Kiela, Douwe, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, et al. 2021. “Dynabench: Rethinking Benchmarking in NLP.” In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 4110–24. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.naacl-main.324.

An alternative paradigm is emerging called data-centric AI. Rather than treating data as static and focusing narrowly on model performance, this approach recognizes that models are only as good as their training data. So, the emphasis shifts to curating high-quality datasets that better reflect real-world complexity, developing more informative evaluation benchmarks, and carefully considering how data is sampled, preprocessed, and augmented. The goal is to optimize model behavior by improving the data rather than just optimizing metrics on flawed datasets. Data-centric AI critically examines and enhances the data itself to produce beneficial AI. This reflects an important evolution in mindset as the field addresses the shortcomings of narrow benchmarking.


This section will explore the key differences between model-centric and data-centric approaches to AI. This distinction has important implications for how we benchmark AI systems. Specifically, we will see how focusing on data quality and efficiency can directly improve machine learning performance, as an alternative to solely optimizing model architectures. The data-centric approach recognizes that models are only as good as their training data. So, enhancing data curation, evaluation benchmarks, and data handling processes can produce AI systems that are safer, fairer, and more robust. Rethinking benchmarking to prioritize data alongside models represents an important evolution as the field aims to deliver trustworthy real-world impact.


11.6.1 Limitations of Model-Centric AI


In the model-centric AI era, a prominent characteristic was the development of complex model architectures. Researchers and practitioners dedicated substantial effort to devising sophisticated and intricate models in the quest for superior performance. This frequently involved the incorporation of additional layers and the fine-tuning of a multitude of hyperparameters to achieve incremental improvements in accuracy. Concurrently, there was a significant emphasis on leveraging advanced algorithms. These algorithms, often at the forefront of the latest research, were employed to enhance the performance of AI models. The primary aim of these algorithms was to optimize the learning process of models, thereby extracting maximal information from the training data.


While the model-centric approach has been central to many advancements in AI, it has several shortcomings. First, the development of complex model architectures can often lead to overfitting: the model performs well on the training data but fails to generalize to new, unseen data. The additional layers and complexity can capture noise in the training data as if it were a real pattern, harming the model’s performance on new data.


Second, relying on advanced algorithms can sometimes obscure the real understanding of a model’s functioning. These algorithms often act as a black box, making it difficult to interpret how the model is making decisions. This lack of transparency can be a significant hurdle, especially in critical applications such as healthcare and finance, where understanding the model’s decision-making process is crucial.


Third, the emphasis on achieving state-of-the-art results on benchmark datasets can sometimes be misleading. These datasets often fail to fully represent the complexities and variability of real-world data. A model that performs well on a benchmark dataset may not necessarily generalize well to new, unseen data in a real-world application. This discrepancy can lead to false confidence in the model’s capabilities and hinder its practical applicability.


Lastly, the model-centric approach often relies on large labeled datasets for training. However, obtaining such datasets is time-consuming and expensive in many real-world scenarios. This reliance on large datasets also limits AI’s applicability in domains where data is scarce or expensive to label.


As a result of the above reasons, and many more, the AI community is shifting to a more data-centric approach. Rather than focusing just on model architecture, researchers are now prioritizing curating high-quality datasets, developing better evaluation benchmarks, and considering how data is sampled and preprocessed. The key idea is that models are only as good as their training data. So, focusing on getting the right data will allow us to develop AI systems that are more fair, safe, and aligned with human values. This data-centric shift represents an important change in mindset as AI progresses.


11.6.2 The Shift Toward Data-centric AI


Data-centric AI is a paradigm that emphasizes the importance of high-quality, well-labeled, and diverse datasets in developing AI models. In contrast to the model-centric approach, which focuses on refining and iterating on the model architecture and algorithm to improve performance, data-centric AI prioritizes the quality of the input data as the primary driver of improved model performance. High-quality data is clean, well-labeled and representative of the real-world scenarios the model will encounter. In contrast, low-quality data can lead to poor model performance, regardless of the complexity or sophistication of the model architecture.


Data-centric AI puts a strong emphasis on the cleaning and labeling of data. Cleaning involves the removal of outliers, handling missing values, and addressing other data inconsistencies. Labeling, on the other hand, involves assigning meaningful and accurate labels to the data. Both these processes are crucial in ensuring that the AI model is trained on accurate and relevant data. Another important aspect of the data-centric approach is data augmentation. This involves artificially increasing the size and diversity of the dataset by applying various transformations to the data, such as rotation, scaling, and flipping training images. Data augmentation helps in improving the model’s robustness and generalization capabilities.
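A minimal sketch of such augmentations, using NumPy arrays as stand-in images, is shown below. The particular transformations and the brightness-jitter range are illustrative choices, not a prescription from any specific library.

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of an H x W image array."""
    return [
        np.fliplr(image),                                 # horizontal flip
        np.rot90(image),                                  # 90-degree rotation
        np.clip(image * rng.uniform(0.8, 1.2), 0, 255),   # brightness jitter
    ]

rng = np.random.default_rng(0)
image = np.arange(16, dtype=float).reshape(4, 4)  # tiny stand-in "image"
augmented = augment(image, rng)
print(len(augmented), augmented[0].shape)  # 3 (4, 4)
```

In practice, libraries apply such transformations on the fly during training so each epoch sees slightly different versions of every sample, which is what improves robustness without enlarging the stored dataset.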


There are several benefits to adopting a data-centric approach to AI development. First and foremost, it leads to improved model performance and generalization capabilities. By ensuring that the model is trained on high-quality, diverse data, the model can better generalize to new, unseen data (Mattson et al. 2020b).


Additionally, a data-centric approach can often lead to simpler models that are easier to interpret and maintain. This is because the emphasis is on the data rather than the model architecture, meaning simpler models can achieve high performance when trained on high-quality data.


The shift towards data-centric AI represents a significant paradigm shift. By prioritizing the quality of the input data, this approach aims to improve model performance and generalization capabilities, ultimately leading to more robust and reliable AI systems. As we continue to advance in our understanding and application of AI, the data-centric approach is likely to play an important role in shaping the future of this field.


11.6.3 Benchmarking Data


Data benchmarking aims to evaluate common issues in datasets, such as identifying label errors, noisy features, representation imbalance (for example, out of the 1000 classes in Imagenet-1K, there are over 100 categories which are just types of dogs), class imbalance (where some classes have many more samples than others), whether models trained on a given dataset can generalize to out-of-distribution features, or what types of biases might exist in a given dataset (Mattson et al. 2020b). In its simplest form, data benchmarking aims to improve accuracy on a test set by removing noisy or mislabeled training samples while keeping the model architecture fixed. Recent competitions in data benchmarking have invited participants to submit novel augmentation strategies and active learning techniques.
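A simple sketch of the label-error side of this idea is shown below: flag samples whose given label receives very low predicted probability from a trained model. This is a crude stand-in for the more principled confident-learning techniques used in practice, and the probabilities here are made up.

```python
import numpy as np

def flag_label_errors(pred_probs, labels, threshold=0.1):
    """Return indices of samples where the model assigns very low probability
    to the provided label - a rough proxy for potential mislabeling."""
    prob_of_label = pred_probs[np.arange(len(labels)), labels]
    return np.where(prob_of_label < threshold)[0]

# Hypothetical predicted class probabilities for 4 samples over 3 classes.
pred_probs = np.array([[0.90, 0.05, 0.05],
                       [0.02, 0.95, 0.03],
                       [0.05, 0.05, 0.90],
                       [0.85, 0.10, 0.05]])
labels = np.array([0, 1, 0, 2])  # samples 2 and 3 disagree with the model
print(flag_label_errors(pred_probs, labels))  # [2 3]
```

Removing or relabeling the flagged samples and retraining with a fixed architecture is exactly the simplest form of data benchmarking described above.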

Mattson, Peter, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, et al. 2020b. “MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance.” IEEE Micro 40 (2): 8–16. https://doi.org/10.1109/mm.2020.2974843.

Data-centric techniques continue to gain attention in benchmarking, especially as foundation models are increasingly trained on self-supervised objectives. Compared to smaller datasets like Imagenet-1K, massive datasets commonly used in self-supervised learning, such as Common Crawl, OpenImages, and LAION-5B, contain higher amounts of noise, duplicates, bias, and potentially offensive data.


DataComp is a recently launched dataset competition that targets the evaluation of large corpora. DataComp focuses on language-image pairs used to train CLIP models. The introductory whitepaper finds that when the total compute budget for training is constant, the best-performing CLIP models on downstream tasks, such as ImageNet classification, are trained on just 30% of the available training sample pool. This suggests that proper filtering of large corpora is critical to improving the accuracy of foundation models. Similarly, Demystifying CLIP Data (Xu et al. 2023) asks whether the success of CLIP is attributable to the architecture or the dataset.

Xu, Hu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, and Christoph Feichtenhofer. 2023. “Demystifying CLIP Data.” ArXiv Preprint abs/2309.16671. https://arxiv.org/abs/2309.16671.

DataPerf is another recent effort focusing on benchmarking data in various modalities. DataPerf provides rounds of online competition to spur improvement in datasets. The inaugural offering launched with challenges in vision, speech, acquisition, debugging, and text prompting for image generation.


11.6.4 Data Efficiency


As machine learning models grow larger and more complex and compute resources become more scarce in the face of rising demand, it becomes challenging to meet the computation requirements even with the largest machine learning fleets. To overcome these challenges and ensure machine learning system scalability, it is necessary to explore novel opportunities that augment conventional approaches to resource scaling.


Improving data quality can be a useful method to impact machine learning system performance significantly. One of the primary benefits of enhancing data quality is the potential to reduce the size of the training dataset while still maintaining or even improving model performance. This data size reduction directly relates to the amount of training time required, thereby allowing models to converge more quickly and efficiently. Achieving this balance between data quality and dataset size is a challenging task that requires the development of sophisticated methods, algorithms, and techniques.


Several approaches can be taken to improve data quality. These methods include, but are not limited to, the following:

  • Data Cleaning: This involves handling missing values, correcting errors, and removing outliers. Clean data ensures that the model is not learning from noise or inaccuracies.
  • Data Interpretability and Explainability: Common techniques include LIME (Ribeiro, Singh, and Guestrin 2016), which provides insight into the decision boundaries of classifiers, and Shapley values (Lundberg and Lee 2017), which estimate the importance of individual samples in contributing to a model’s predictions.
  • Feature Engineering: Transforming or creating new features can significantly improve model performance by providing more relevant information for learning.
  • Data Augmentation: Augmenting data by creating new samples through various transformations can help improve model robustness and generalization.
  • Active Learning: This is a semi-supervised learning approach where the model actively queries a human oracle to label the most informative samples (Coleman et al. 2022). This ensures that the model is trained on the most relevant data.
  • Dimensionality Reduction: Techniques like PCA can reduce the number of features in a dataset, thereby reducing complexity and training time.

Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You? Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44.
Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4765–74. https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html.
Coleman, Cody, Edward Chou, Julian Katz-Samuels, Sean Culatana, Peter Bailis, Alexander C. Berg, Robert D. Nowak, Roshan Sumbaly, Matei Zaharia, and I. Zeki Yalniz. 2022. “Similarity Search for Efficient Active Learning and Search of Rare Concepts.” In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, the Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, February 22 - March 1, 2022, 6402–10. AAAI Press. https://ojs.aaai.org/index.php/AAAI/article/view/20591.

Many other methods exist, but the goal is the same: refining the dataset and ensuring it is of the highest quality can reduce the training time required for models to converge. However, achieving this requires developing and implementing sophisticated methods, algorithms, and techniques that can clean, preprocess, and augment data while retaining the most informative samples. This is an ongoing challenge that will require continued research and innovation in the field of machine learning.
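As a concrete sketch of one of the techniques above, active learning's query step can be implemented with least-confidence sampling: select the unlabeled samples whose top predicted probability is lowest and send those to a human labeler. The pool probabilities here are hypothetical.

```python
import numpy as np

def uncertainty_sample(pred_probs, k):
    """Return indices of the k samples with the lowest top-class probability
    (least-confidence sampling) - the most informative ones to label next."""
    confidence = pred_probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Hypothetical unlabeled pool: predicted probabilities for 5 samples, 2 classes.
pool = np.array([[0.99, 0.01],
                 [0.55, 0.45],
                 [0.70, 0.30],
                 [0.51, 0.49],
                 [0.90, 0.10]])
print(uncertainty_sample(pool, 2))  # [3 1]
```

Iterating this query-label-retrain loop concentrates labeling effort on the samples that most reduce model uncertainty, which is how active learning shrinks the labeled dataset needed for convergence.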


11.7 The Trifecta


While system, model, and data benchmarks have traditionally been studied in isolation, there is a growing recognition that to understand and advance AI fully, we must take a more holistic view. By iterating between benchmarking systems, models, and datasets together, novel insights that are not apparent when these components are analyzed separately may emerge. System performance impacts model accuracy, model capabilities drive data needs, and data characteristics shape system requirements.


Benchmarking the triad of system, model, and data in an integrated fashion will likely lead to discoveries about the co-design of AI systems, the generalization properties of models, and the role of data curation and quality in enabling performance. Rather than narrow benchmarks of individual components, the future of AI requires benchmarks that evaluate the symbiotic relationship between computing platforms, algorithms, and training data. This systems-level perspective will be critical to overcoming current limitations and unlocking the next level of AI capabilities.


Figure fig-benchmarking-trifecta illustrates the many potential ways to interplay data benchmarking, model benchmarking, and system infrastructure benchmarking together. Exploring these intricate interactions is likely to uncover new optimization opportunities and enhancement capabilities. The data, model, and system benchmark triad offers a rich space for co-design and co-optimization.

Figure 11.6: Benchmarking trifecta.

While this integrated perspective represents an emerging trend, the field has much more to discover about the synergies and trade-offs between these components. As we iteratively benchmark combinations of data, models, and systems, new insights that remain hidden when these elements are studied in isolation will emerge. This multifaceted benchmarking approach charting the intersections of data, algorithms, and hardware promises to be a fruitful avenue for major progress in AI, even though it is still in its early stages.


11.8 Benchmarks for Emerging Technologies


Given their significant differences from existing techniques, emerging technologies can be particularly challenging to design benchmarks for. Standard benchmarks for established technologies may not highlight the key features of the new approach, while new benchmarks may be seen as contrived to favor the emerging technology over others. They may also diverge so far from existing benchmarks that they cannot be readily interpreted and lose their value as a source of insight. Thus, benchmarks for emerging technologies must balance fairness, applicability, and ease of comparison with existing benchmarks.


One emerging technology where benchmarking has proven especially difficult is neuromorphic computing. Using the brain as a source of inspiration for scalable, robust, and energy-efficient general intelligence, neuromorphic computing (Schuman et al. 2022) directly incorporates biologically realistic mechanisms into both computing algorithms and hardware, such as spiking neural networks (Maass 1997) and the non-von Neumann architectures that execute them (Davies et al. 2018; Modha et al. 2023). From a full-stack perspective of models, training techniques, and hardware systems, neuromorphic computing differs substantially from conventional AI and hardware. A key challenge is thus developing fair and useful benchmarks for guiding the technology.

Schuman, Catherine D., Shruti R. Kulkarni, Maryam Parsa, J. Parker Mitchell, Prasanna Date, and Bill Kay. 2022. “Opportunities for Neuromorphic Computing Algorithms and Applications.” Nature Computational Science 2 (1): 10–19. https://doi.org/10.1038/s43588-021-00184-y.

Maass, Wolfgang. 1997. “Networks of Spiking Neurons: The Third Generation of Neural Network Models.” Neural Networks 10 (9): 1659–71. https://doi.org/10.1016/s0893-6080(97)00011-7.

Davies, Mike, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, et al. 2018. “Loihi: A Neuromorphic Manycore Processor with On-Chip Learning.” IEEE Micro 38 (1): 82–99. https://doi.org/10.1109/mm.2018.112130359.

Modha, Dharmendra S., Filipp Akopyan, Alexander Andreopoulos, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, et al. 2023. “Neural Inference at the Frontier of Energy, Space, and Time.” Science 382 (6668): 329–35. https://doi.org/10.1126/science.adh1174.

Yik, Jason, Soikat Hasan Ahmed, Zergham Ahmed, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, et al. 2023. “NeuroBench: Advancing Neuromorphic Computing Through Collaborative, Fair and Representative Benchmarking.” https://arxiv.org/abs/2304.04640.

An ongoing initiative to develop standard neuromorphic benchmarks is NeuroBench (Yik et al. 2023). To benchmark neuromorphic solutions suitably, NeuroBench follows high-level principles of inclusiveness, with tasks and metrics applicable to both neuromorphic and non-neuromorphic solutions; actionability, with implementations built on common tooling; and iterative updates to maintain relevance as the field rapidly grows. NeuroBench and other benchmarks for emerging technologies provide critical guidance for future techniques, which may become necessary as the scaling limits of existing approaches draw nearer.
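To illustrate why neuromorphic systems call for their own metrics, consider a single leaky integrate-and-fire (LIF) neuron, the basic unit of spiking networks: its output is a sparse spike train rather than a dense activation, so spiking benchmarks track quantities such as activation sparsity alongside accuracy. The toy sketch below, with made-up parameters and input currents (not part of NeuroBench itself), simulates one LIF neuron and measures its spike sparsity.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays by
    `leak` each timestep, accumulates input current, and emits a spike
    (then resets) when it crosses `threshold`."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

inputs = [0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6, 0.0]  # made-up input currents
spikes = lif_neuron(inputs)
sparsity = 1 - sum(spikes) / len(spikes)  # fraction of silent timesteps
print(spikes, sparsity)  # [0, 0, 0, 1, 0, 0, 1, 0] with sparsity 0.75
```

Because most timesteps produce no spike, hardware that only computes on spikes can save energy — a property that conventional accuracy-and-FLOPs benchmarks would never surface.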


11.9 Conclusion


What gets measured gets improved. This chapter has explored the multifaceted nature of benchmarking, spanning systems, models, and data. Benchmarking is central to advancing AI, providing the essential measurements needed to track progress.


ML system benchmarks enable optimization across speed, efficiency, and scalability metrics. Model benchmarks drive innovation through standardized tasks and metrics beyond accuracy. Data benchmarks highlight issues of quality, balance, and representation.


Importantly, evaluating these components in isolation has limitations. In the future, more integrated benchmarking will likely be used to explore the interplay between system, model, and data benchmarks. This view promises new insights into co-designing data, algorithms, and infrastructure.


As AI grows more complex, comprehensive benchmarking becomes even more critical. Standards must continuously evolve to measure new capabilities and reveal limitations. Close collaboration among industry, academia, national labs, and other stakeholders is essential to developing benchmarks that are rigorous, transparent, and socially beneficial.


Benchmarking provides the compass to guide progress in AI. By persistently measuring and openly sharing results, we can navigate toward performant, robust, and trustworthy systems. If AI is to serve societal and human needs properly, it must be benchmarked with humanity’s best interests in mind. To this end, there are emerging areas, such as benchmarking the safety of AI systems, but that’s for another day and something we can discuss further in Generative AI!


Benchmarking is a continuously evolving topic. The article The Olympics of AI: Benchmarking Machine Learning Systems covers several emerging subfields in AI benchmarking, including robotics, extended reality, and neuromorphic computing that we encourage the reader to pursue.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.


20  Conclusion


DALL·E 3 Prompt: An image depicting the last chapter of an ML systems book, open to a two-page spread. The pages summarize key concepts such as neural networks, model architectures, hardware acceleration, and MLOps. One page features a diagram of a neural network and different model architectures, while the other page shows illustrations of hardware components for acceleration and MLOps workflows. The background includes subtle elements like circuit patterns and data points to reinforce the technological theme. The colors are professional and clean, with an emphasis on clarity and understanding.

The Evolution


We have taken a comprehensive tour through the multifaceted landscape of this rapidly evolving field. We began by tracing ML’s historical trajectory, from a brief overview of its theoretical foundations to its current state as a transformative force across industries. This journey has highlighted the remarkable progress made in the field and the challenges and opportunities that lie ahead.


We dove into the intricacies of data engineering, recognizing that data quality, diversity, and ethical sourcing are paramount to building robust and reliable machine learning models. The importance of high-quality data cannot be overstated: lapses in data quality can lead to significant negative consequences, such as flawed predictions, project terminations, and even potential harm to communities.


We then explored various model architectures, from the foundational perceptron to the sophisticated transformer networks, each tailored to specific tasks and data types. This exploration has showcased machine learning models’ remarkable diversity and adaptability, enabling them to tackle various problems across different domains.


Advancements


Over the years, we have witnessed remarkable strides in ML systems, particularly in addressing the challenges of resource constraints and real-world deployment. The evolution of model architectures, from the early MobileNets designed for mobile devices to the more recent TinyML models optimized for microcontrollers, has been a testament to the ingenuity and innovation in the field. These advancements have enabled the deployment of powerful AI capabilities on devices with limited resources, opening up new possibilities for applications in healthcare, agriculture, environmental monitoring, and more.


We have also explored breakthroughs in hardware acceleration, such as developing specialized chips like Edge TPUs and neuromorphic hardware. These innovations have significantly improved the efficiency and performance of machine learning systems, enabling real-time processing and analysis on edge devices. Integrating these hardware advancements with optimized model architectures has unlocked new possibilities for deploying machine learning in resource-constrained environments.


Furthermore, we dove into advanced topics like on-device learning, where models can adapt and learn directly on the device, enhancing privacy and reducing reliance on cloud connectivity. This approach has significant implications for data privacy and security, as sensitive information can be processed locally without the need for transmission to external servers. Techniques like transfer learning and federated learning have further expanded the capabilities of on-device learning, enabling collaborative and efficient model updates across distributed devices. These advancements have paved the way for more secure, privacy-preserving, and decentralized machine learning applications.


The Future


As we look to the future, the trajectory of machine learning systems points towards a paradigm shift from a model-centric approach to a more data-centric one. This shift recognizes that the quality and diversity of data are paramount to developing robust, reliable, and fair AI models. As such, we can anticipate a growing emphasis on data curation, labeling, and augmentation techniques to ensure that models are trained on high-quality, representative data that reflects the complexities of real-world scenarios. This focus on data will be crucial in addressing the challenges of bias, fairness, and generalizability in machine learning systems.


Furthermore, the proliferation of TinyML is set to revolutionize edge computing. With its ability to deploy machine learning models on resource-constrained devices, TinyML is poised to enable a new wave of intelligent applications in healthcare, agriculture, environmental monitoring, and more. This democratization of AI will empower individuals and communities to leverage the power of machine learning for local problem-solving and sustainable development. By bringing AI capabilities to the edge, TinyML has the potential to unlock innovative solutions and drive positive change in diverse domains.


Another promising avenue is neuromorphic computing, which draws inspiration from the human brain’s neural networks to create more efficient and adaptable AI systems. While still in its early stages, neuromorphic computing has the potential to revolutionize AI by enabling low-power, real-time learning and decision-making on edge devices. This approach could lead to the development of AI systems that are more energy-efficient, resilient, and capable of adapting to dynamic environments. As research in neuromorphic computing advances, we can expect breakthroughs in areas such as autonomous systems, robotics, and intelligent sensor networks.


However, as we embrace these advancements, it is crucial to remain mindful of the ethical considerations that will shape the future of AI. Fairness, transparency, accountability, and privacy in AI systems will be paramount as they become more integrated into our lives and decision-making processes. The development of ethical frameworks, regulations, and standards will be essential to guide the responsible and equitable development and deployment of AI technologies. It is the collective responsibility of researchers, practitioners, policymakers, and society to engage in ongoing discussions and collaborations to address these ethical challenges head-on.


Ethical Considerations


Despite the remarkable progress, the path forward for ML systems is not without challenges. Issues such as bias in data and models, the need for explainability and interpretability, the environmental impact of AI, and the ethical considerations surrounding its use remain critical concerns as we continue to scale AI/ML use cases. As the field advances, researchers, practitioners, and stakeholders in government and policy must address these challenges head-on. Developing techniques for mitigating bias, promoting transparency, and ensuring the responsible deployment of machine learning systems will be essential to building trust and fostering widespread adoption.


Moreover, the “black box” nature of many complex machine learning models poses challenges in understanding their decision-making processes. This lack of transparency can hinder trust and accountability, especially in high-stakes applications like healthcare and finance, where the consequences of erroneous or biased decisions can be severe. Developing interpretable models and explainability techniques is crucial for building trust and ensuring responsible AI deployment free from biases. By providing insights into how models arrive at their predictions, we can foster greater understanding, enable oversight, and facilitate identifying and mitigating potential biases or errors.


The increasing computational demands of machine learning, particularly for training large models, have raised concerns about their environmental impact due to high energy consumption and carbon emissions. As the scale and complexity of models continue to grow, it is crucial to address the sustainability challenges associated with AI development. The development of energy-efficient algorithms, the use of renewable energy sources, and the exploration of alternative computing paradigms like neuromorphic computing are essential for mitigating the environmental footprint of AI. By prioritizing sustainability and investing in green computing initiatives, the AI community can work towards reducing the environmental impact of machine learning systems.


Moreover, it is important to acknowledge that access to AI and machine learning compute resources may not be equally distributed across organizations and regions. This disparity can lead to a widening gap between those who have the means to leverage advanced AI technologies and those who do not. Organizations like the Organisation for Economic Co-operation and Development (OECD) are actively exploring ways to address this issue and promote greater equity in AI access and adoption. By fostering international cooperation, sharing best practices, and supporting capacity-building initiatives, we can ensure that AI’s benefits are more widely accessible and that no one is left behind in the AI revolution.


A Call to Action


As we conclude this exploration of machine learning systems, we invite you to embark on your journey of discovery and innovation. The field is ripe with possibilities, and your contributions can shape the future of AI. Continue to learn, experiment, and push the boundaries of what’s possible. Engage with the vibrant machine learning community, participate in open-source projects, and share your knowledge and insights. By actively participating in this ever-evolving field, you can help drive the development of responsible, ethical, and sustainable AI systems that benefit society and contribute to a brighter future for all.


Remember that the power of machine learning lies not only in the technology itself but also in the hands of those who wield it. As you navigate this exciting landscape, let your curiosity, creativity, and commitment to ethical principles be your guiding lights. Embrace the challenges, seek out diverse perspectives, and strive to create AI systems that are technically advanced and aligned with the values of fairness, transparency, and social good. Together, we can shape a future where machine learning systems serve as powerful tools for positive change, empowering individuals, communities, and industries to tackle the most pressing challenges of our time.


Congratulations on making it to the end!


5  Data Engineering


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Create a rectangular illustration visualizing the concept of data engineering. Include elements such as raw data sources, data processing pipelines, storage systems, and refined datasets. Show how raw data is transformed through cleaning, processing, and storage to become valuable information that can be analyzed and used for decision-making.

Data is the lifeblood of AI systems. Without good data, even the most advanced machine-learning algorithms will not succeed. This section will dive into the intricacies of building high-quality datasets to fuel our AI models. Data engineering involves collecting, storing, processing, and managing data to train machine learning models.

Learning Objectives
• Understand the importance of clearly defining the problem statement and objectives when embarking on an ML project.
• Recognize various data sourcing techniques, such as web scraping, crowdsourcing, and synthetic data generation, along with their advantages and limitations.
• Appreciate the need for thoughtful data labeling, using manual or AI-assisted approaches, to create high-quality training datasets.
• Briefly learn different methods for storing and managing data, such as databases, data warehouses, and data lakes.
• Comprehend the role of transparency through metadata and dataset documentation and tracking data provenance to facilitate ethics, auditing, and reproducibility.
• Understand how licensing protocols govern legal data access and usage, necessitating careful compliance.
• Recognize key challenges in data engineering, including privacy risks, representation gaps, legal restrictions around data access, and balancing competing priorities.

5.1 Introduction


Dataset creators face complex privacy and representation challenges when building high-quality training data, especially for sensitive domains like healthcare. Legally, creators may need to remove direct identifiers like names and ages. Even without legal obligations, removing such information can help build user trust. However, excessive anonymization can compromise dataset utility. Techniques like differential privacy, aggregation, and reducing detail provide alternatives to balance privacy and utility but have downsides. Creators must strike a thoughtful balance based on the use case.
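As a concrete example of the privacy–utility trade-off, differential privacy can be applied to a simple counting query by adding calibrated Laplace noise. The sketch below is a minimal illustration with fabricated records; a production system would use a vetted DP library rather than hand-rolled noise sampling.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    """Differentially private count: the true count plus Laplace noise.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)  # seeded only so the example is reproducible
ages = [34, 41, 29, 55, 62, 38, 47, 51, 33, 60]  # fabricated records
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0, rng=rng)
print(round(noisy, 2))  # near the true count of 6, perturbed by noise
```

Smaller epsilon means more noise (stronger privacy) and lower utility; tuning that trade-off per use case is exactly the balancing act described above.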


Looking beyond privacy, creators need to proactively assess and address representation gaps that could introduce model biases. It is crucial yet insufficient to ensure diversity across individual variables like gender, race, and accent. Combinations of characteristics also require assessment, as models can fail when certain intersections are absent from the data. For example, a medical dataset could have balanced gender, age, and diagnosis data individually yet lack enough cases capturing older women with a specific condition. Such higher-order gaps are not immediately obvious but can critically impact model performance.
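One way to surface such higher-order gaps is to enumerate attribute-value combinations and flag those with too few examples. The sketch below is a toy illustration with fabricated records; a real audit would cover more attributes and higher-order intersections as well.

```python
from collections import Counter
from itertools import combinations, product

def coverage_gaps(records, attributes, min_count=5):
    """Flag pairwise attribute-value combinations with too few examples.

    Each record is a dict; the values observed for each attribute define
    the combinations that ought to be represented.
    """
    gaps = {}
    for a, b in combinations(attributes, 2):
        counts = Counter((r[a], r[b]) for r in records)
        vals_a = sorted({r[a] for r in records})
        vals_b = sorted({r[b] for r in records})
        for combo in product(vals_a, vals_b):
            if counts.get(combo, 0) < min_count:
                gaps[(a, b) + combo] = counts.get(combo, 0)
    return gaps

# Fabricated dataset: each attribute looks reasonably balanced on its
# own, yet ("F", "65+") never occurs together.
records = (
    [{"gender": "F", "age_group": "18-64"}] * 10
    + [{"gender": "M", "age_group": "18-64"}] * 5
    + [{"gender": "M", "age_group": "65+"}] * 10
)
gaps = coverage_gaps(records, ["gender", "age_group"], min_count=5)
print(gaps)  # the ("F", "65+") intersection appears with count 0
```

Marginal statistics alone would not catch this: both genders and both age groups are well represented individually, yet a model trained on this data has never seen the missing intersection.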


Creating useful, ethical training data requires holistic consideration of privacy risks and representation gaps. Perfect solutions are elusive. However, conscientious data engineering practices like anonymization, aggregation, undersampling of overrepresented groups, and synthesized data generation can help balance competing needs. This facilitates models that are both accurate and socially responsible. Cross-functional collaboration and external audits can also strengthen training data. The challenges are multifaceted but surmountable with thoughtful effort.


We begin by discussing data collection: Where do we source data, and how do we gather it? Options range from scraping the web, accessing APIs, and utilizing sensors and IoT devices to conducting surveys and gathering user input. These methods reflect real-world practices. Next, we delve into data labeling, including considerations for human involvement. We’ll discuss the tradeoffs and limitations of human labeling and explore emerging methods for automated labeling. Following that, we’ll address data cleaning and preprocessing, a crucial yet frequently undervalued step in preparing raw data for AI model training. Data augmentation comes next, a strategy for enhancing limited datasets by generating synthetic samples. This is particularly pertinent for embedded systems, as many use cases lack extensive data repositories readily available for curation. Synthetic data generation emerges as a viable alternative, though it has its own advantages and disadvantages. We’ll also touch upon dataset versioning, emphasizing the importance of tracking data modifications over time. Data is ever-evolving; hence, it’s imperative to devise strategies for managing and storing expansive datasets. By the end of this section, you’ll possess a comprehensive understanding of the entire data pipeline, from collection to storage, essential for operationalizing AI systems. Let’s embark on this journey!
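The stages outlined above can be sketched as a skeleton pipeline. Everything below is a stand-in (hard-coded samples, a trivial labeling rule, a content hash as a version ID), meant only to show how the stages chain together, not how any real system implements them.

```python
import hashlib
import json

def collect():
    """Gather raw samples (hard-coded stand-ins for scraped/sensor data)."""
    return [{"audio": "clip_001", "source": "crowdsourced"},
            {"audio": "clip_002", "source": "web"}]

def label(samples):
    """Attach labels; a real pipeline uses human or AI-assisted labeling."""
    for s in samples:
        s["label"] = "keyword" if s["audio"].endswith("1") else "background"
    return samples

def clean(samples):
    """Drop samples missing required fields."""
    return [s for s in samples if "audio" in s and "label" in s]

def augment(samples):
    """Synthesize extra samples (e.g., noise-added copies)."""
    return samples + [dict(s, audio=s["audio"] + "_noisy") for s in samples]

def version(samples):
    """Content-hash the dataset so any change yields a new version ID."""
    blob = json.dumps(samples, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

dataset = augment(clean(label(collect())))
print(len(dataset), version(dataset))
```

The content-hash version ID illustrates the versioning idea at the end of the pipeline: re-running with modified data produces a different ID, so any downstream model can record exactly which dataset it was trained on.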


5.2 Problem Definition


In many machine learning domains, sophisticated algorithms take center stage, while the fundamental importance of data quality is often overlooked. This neglect gives rise to “Data Cascades” by Sambasivan et al. (2021) (see Figure fig-cascades)—events where lapses in data quality compound, leading to negative downstream consequences such as flawed predictions, project terminations, and even potential harm to communities. In Figure fig-cascades, we have an illustration of potential data pitfalls at every stage and how they influence the entire process down the line. The influence of data collection errors is especially pronounced. Any lapses in this stage will become apparent at later stages (in model evaluation and deployment) and might lead to costly consequences, such as abandoning the entire model and restarting anew. Therefore, investing in data engineering techniques from the onset will help us detect errors early.

Figure 5.1: Data cascades: compounded costs. Credit: Sambasivan et al. (2021).

Sambasivan, Nithya, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021. “Everyone Wants to Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–15. ACM. https://doi.org/10.1145/3411764.3445518.

Despite many ML professionals recognizing the importance of data, numerous practitioners report facing these cascades. This highlights a systemic issue: while the allure of developing advanced models remains, the data work itself often goes underappreciated.


Take, for example, Keyword Spotting (KWS) (see Figure fig-keywords). KWS is a prime example of TinyML in action and is a critical technology behind voice-enabled interfaces on endpoint devices such as smartphones. Typically functioning as lightweight wake-word engines, these systems are consistently active, listening for a specific phrase to trigger further actions. When we say “OK, Google” or “Alexa,” this initiates a process on a microcontroller embedded within the device. Despite their limited resources, these microcontrollers play an important role in enabling seamless voice interactions with devices, often operating in environments with high ambient noise. The uniqueness of the wake word helps minimize false positives, ensuring that the system is not triggered inadvertently.


It is important to appreciate that these keyword-spotting technologies are not isolated; they integrate seamlessly into larger systems, processing signals continuously while managing low power consumption. These systems extend beyond simple keyword recognition, evolving to facilitate diverse sound detections, such as glass breaking. This evolution is geared towards creating intelligent devices capable of understanding and responding to vocal commands, heralding a future where even household appliances can be controlled through voice interactions.

Figure 5.2: Keyword Spotting example: interacting with Alexa. Credit: Amazon.

Building a reliable KWS model is a complex task. It demands a deep understanding of the deployment scenario, encompassing where and how these devices will operate. For instance, a KWS model’s effectiveness is not just about recognizing a word; it’s about discerning it among various accents and background noises, whether in a bustling cafe or amid the blaring sound of a television in a living room or a kitchen where these devices are commonly found. It’s about ensuring that a whispered “Alexa” in the dead of night or a shouted “OK Google” in a noisy marketplace are recognized with equal precision.


Moreover, many current KWS voice assistants support a limited number of languages, leaving a substantial portion of the world’s linguistic diversity unrepresented. This limitation is partly due to the difficulty in gathering and monetizing data for languages spoken by smaller populations. The long-tail distribution of languages implies that many languages have limited data, making the development of supportive technologies challenging.


This level of accuracy and robustness hinges on the availability and quality of data, the ability to label the data correctly, and the transparency of the data for the end user before it is used to train the model. However, it all begins with clearly understanding the problem statement or definition.


Generally, in ML, problem definition has a few key steps:

1. Identifying the problem definition clearly
2. Setting clear objectives
3. Establishing success benchmarks
4. Understanding end-user engagement/use
5. Understanding the constraints and limitations of deployment
6. Finally, doing the data collection.

A solid project foundation is essential for its trajectory and eventual success. Central to this foundation is first identifying a clear problem, such as ensuring that voice commands in voice assistance systems are recognized consistently across varying environments. Clear objectives, like creating representative datasets for diverse scenarios, provide a unified direction. Benchmarks, such as system accuracy in keyword detection, offer measurable outcomes to gauge progress. Engaging with stakeholders, from end-users to investors, provides invaluable insights and ensures alignment with market needs. Additionally, understanding platform constraints is pivotal when delving into areas like voice assistance. Embedded systems, such as microcontrollers, come with inherent processing power, memory, and energy efficiency limitations. Recognizing these limitations ensures that functionalities, like keyword detection, are tailored to operate optimally, balancing performance with resource conservation.


In this context, using KWS as an example, we can break each of the steps out as follows:

1. Identifying the Problem: At its core, KWS aims to detect specific keywords amidst ambient sounds and other spoken words. The primary problem is to design a system that can recognize these keywords with high accuracy, low latency, and minimal false positives or negatives, especially when deployed on devices with limited computational resources.

2. Setting Clear Objectives: The objectives for a KWS system might include:
   • Achieving a specific accuracy rate (e.g., 98% accuracy in keyword detection).
   • Ensuring low latency (e.g., keyword detection and response within 200 milliseconds).
   • Minimizing power consumption to extend battery life on embedded devices.
   • Ensuring the model’s size is optimized for the available memory on the device.

3. Benchmarks for Success: Establish clear metrics to measure the success of the KWS system. This could include:
   • True Positive Rate: The percentage of correctly identified keywords.
   • False Positive Rate: The percentage of non-keywords incorrectly identified as keywords.
   • Response Time: The time taken from keyword utterance to system response.
   • Power Consumption: Average power used during keyword detection.

4. Stakeholder Engagement and Understanding: Engage with stakeholders, which include device manufacturers, hardware and software developers, and end-users. Understand their needs, capabilities, and constraints. For instance:
   • Device manufacturers might prioritize low power consumption.
   • Software developers might emphasize ease of integration.
   • End-users would prioritize accuracy and responsiveness.

5. Understanding the Constraints and Limitations of Embedded Systems: Embedded devices come with their own set of challenges:
   • Memory Limitations: KWS models must be lightweight to fit within the memory constraints of embedded devices. Typically, KWS models need to be as small as 16KB to fit in the always-on island of the SoC. Moreover, this is just the model size. Additional application code for preprocessing may also need to fit within the memory constraints.
   • Processing Power: The computational capabilities of embedded devices are limited (a few hundred MHz of clock speed), so the KWS model must be optimized for efficiency.
   • Power Consumption: Since many embedded devices are battery-powered, the KWS system must be power-efficient.
   • Environmental Challenges: Devices might be deployed in various environments, from quiet bedrooms to noisy industrial settings. The KWS system must be robust enough to function effectively across these scenarios.

6. Data Collection and Analysis: For a KWS system, the quality and diversity of data are paramount. Considerations might include:
   • Variety of Accents: Collect data from speakers with various accents to ensure wide-ranging recognition.
   • Background Noises: Include data samples with different ambient noises to train the model for real-world scenarios.
   • Keyword Variations: People might either pronounce keywords differently or have slight variations in the wake word itself. Ensure the dataset captures these nuances.

7. Iterative Feedback and Refinement: Once a prototype KWS system is developed, it’s crucial to test it in real-world scenarios, gather feedback, and iteratively refine the model. This ensures that the system remains aligned with the defined problem and objectives. This is important because the deployment scenarios change over time as things evolve.
-

Exercise 5.1 (Keyword Spotting with TensorFlow Lite Micro)  


Explore a hands-on guide for building and deploying Keyword Spotting (KWS) systems using TensorFlow Lite Micro. Follow steps from data collection to model training and deployment to microcontrollers. Learn to create efficient KWS models that recognize specific keywords amidst background noise. Perfect for those interested in machine learning on embedded systems. Unlock the potential of voice-enabled devices with TensorFlow Lite Micro!


The current chapter underscores the essential role of data quality in ML, using Keyword Spotting (KWS) systems as an example. It outlines key steps, from problem definition to stakeholder engagement, emphasizing iterative feedback. The sections that follow delve deeper into data quality management, discussing its consequences and future trends, with a focus on the importance of high-quality, diverse data in AI system development, ethical considerations, and data sourcing methods.


5.3 Data Sourcing


The quality and diversity of data gathered are important for developing accurate and robust AI systems. Sourcing high-quality training data requires careful consideration of the objectives, resources, and ethical implications. Data can be obtained from various sources depending on the needs of the project:


5.3.1 Pre-existing datasets


Platforms like Kaggle and UCI Machine Learning Repository provide a convenient starting point. Pre-existing datasets are valuable for researchers, developers, and businesses. One of their primary advantages is cost efficiency. Creating a dataset from scratch can be time-consuming and expensive, so accessing ready-made data can save significant resources. Moreover, many datasets, like ImageNet, have become standard benchmarks in the machine learning community, allowing for consistent performance comparisons across different models and algorithms. This data availability means that experiments can be started immediately without any data collection and preprocessing delays. In a fast-moving field like ML, this practicality is important.


The quality assurance that comes with popular pre-existing datasets is important to consider because several datasets have errors in them. For instance, the ImageNet dataset was found to have over 6.4% errors. Given their widespread use, the community often identifies and rectifies any errors or biases in these datasets. This assurance is especially beneficial for students and newcomers to the field, as they can focus on learning and experimentation without worrying about data integrity. The supporting documentation that often accompanies existing datasets is invaluable, though it generally exists only for widely used datasets. Good documentation provides insights into the data collection process and variable definitions and sometimes even offers baseline model performances. This information not only aids understanding but also promotes reproducibility in research, a cornerstone of scientific integrity; machine learning currently faces a reproducibility crisis of its own. When other researchers have access to the same data, they can validate findings, test new hypotheses, or apply different methodologies, thus allowing us to build on each other's work more rapidly.


While platforms like Kaggle and UCI Machine Learning Repository are invaluable resources, it's essential to understand the context in which the data was collected. Researchers should be wary of potential overfitting when using popular datasets, as multiple models might have been trained on them, leading to inflated performance metrics. Sometimes, these datasets do not reflect real-world conditions.


In addition, bias, validity, and reproducibility issues may exist in these datasets, and there has been a growing awareness of these issues in recent years. Furthermore, using the same dataset to train different models, as shown in Figure fig-misalignment, can sometimes create misalignment: training multiple models using the same dataset results in a 'misalignment' between the models and the world, in which an entire ecosystem of models reflects only a narrow subset of the real-world data.

Figure 5.3: Training different models on the same dataset. Credit: (icons from left to right: Becris; Freepik; Freepik; Paul J; SBTS2018).

5.3.2 Web Scraping


Web scraping refers to automated techniques for extracting data from websites. It typically involves sending HTTP requests to web servers, retrieving HTML content, and parsing that content to extract relevant information. Popular tools and frameworks for web scraping include Beautiful Soup, Scrapy, and Selenium. These tools offer different functionalities, from parsing HTML content to automating web browser interactions, especially for websites that load content dynamically using JavaScript.
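As an illustration of the parsing step, the sketch below extracts image URLs from an HTML page using only Python's standard-library `HTMLParser`; libraries like Beautiful Soup offer a richer interface for the same job. The page content here is a made-up string rather than a fetched HTTP response.

```python
from html.parser import HTMLParser

class ImageLinkExtractor(HTMLParser):
    """Collect image URLs from <img src=...> tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes.
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.image_urls.append(value)

# In practice, this HTML would come from an HTTP response body.
page = '<html><body><img src="/cat.jpg"><img src="/dog.png" alt="dog"></body></html>'
parser = ImageLinkExtractor()
parser.feed(page)
print(parser.image_urls)  # ['/cat.jpg', '/dog.png']
```

The collected URLs would then be downloaded and stored as image samples for a dataset.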


Web scraping can effectively gather large datasets for training machine learning models, particularly when human-labeled data is scarce. For computer vision research, web scraping enables the collection of massive volumes of images and videos. Researchers have used this technique to build influential datasets like ImageNet and OpenImages. For example, one could scrape e-commerce sites to amass product photos for object recognition or social media platforms to collect user uploads for facial analysis. Even before ImageNet, Stanford’s LabelMe project scraped Flickr for over 63,000 annotated images covering hundreds of object categories.


Beyond computer vision, web scraping supports gathering textual data for natural language tasks. Researchers can scrape news sites for sentiment analysis data, forums and review sites for dialogue systems research, or social media for topic modeling. For example, the training data for chatbot ChatGPT was obtained by scraping much of the public Internet. GitHub repositories were scraped to train GitHub’s Copilot AI coding assistant.


Web scraping can also collect structured data, such as stock prices, weather data, or product information, for analytical applications. Once data is scraped, it is essential to store it in a structured manner, often using databases or data warehouses. Proper data management ensures the usability of the scraped data for future analysis and applications.


However, while web scraping offers numerous advantages, there are significant limitations and ethical considerations to bear in mind. Not all websites permit scraping, and violating these restrictions can lead to legal repercussions. Scraping copyrighted material or private communications is also unethical and potentially illegal. Ethical web scraping mandates adherence to a website's 'robots.txt' file, which outlines the sections of the site that can be accessed and scraped by automated bots.
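Python's standard library can check a robots.txt policy before any request is made. The rules below are a hypothetical example; a real crawler would fetch the file from the target site.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules; a real crawler fetches https://<site>/robots.txt.
robots_txt = """User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("MyBot", "https://example.com/products/page1.html"))  # True
print(rp.can_fetch("MyBot", "https://example.com/private/users.html"))   # False
```

A polite scraper checks `can_fetch` for every URL it intends to request and honors any declared crawl delay.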


To deter automated scraping, many websites implement rate limits. If a bot sends too many requests in a short period, it might be temporarily blocked, restricting the speed of data access. Additionally, the dynamic nature of web content means that data scraped at different intervals may be inconsistent, posing challenges for longitudinal studies. However, emerging approaches such as learned web navigation allow machine learning agents to automatically navigate websites and access dynamically loaded content.
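A common way for a scraper to stay under a site's rate limit is to enforce a minimum delay between its own requests. The throttle below is a minimal sketch (the class name and interval are illustrative, not from any particular library).

```python
import time

class RequestThrottle:
    """Enforce a minimum interval between successive requests."""
    def __init__(self, min_interval_seconds):
        self.min_interval = min_interval_seconds
        self.last_request = 0.0

    def wait(self):
        # Sleep just long enough so requests are at least min_interval apart.
        elapsed = time.monotonic() - self.last_request
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_request = time.monotonic()

throttle = RequestThrottle(min_interval_seconds=0.1)
timestamps = []
for _ in range(3):
    throttle.wait()                      # in practice, issue the HTTP request here
    timestamps.append(time.monotonic())

gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
print(all(g >= 0.09 for g in gaps))  # True
```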


The volume of pertinent data available for scraping might be limited for niche subjects. For example, while scraping for common topics like images of cats and dogs might yield abundant data, searching for rare medical conditions might be less fruitful. Moreover, the data obtained through scraping is often unstructured and noisy, necessitating thorough preprocessing and cleaning. It is crucial to understand that not all scraped data will be of high quality or accuracy. Employing verification methods, such as cross-referencing with alternate data sources, can enhance data reliability.


Privacy concerns arise when scraping personal data, emphasizing the need for anonymization. Therefore, it is paramount to adhere to a website’s Terms of Service, confine data collection to public domains, and ensure the anonymity of any personal data acquired.


While web scraping can be a scalable method to amass large training datasets for AI systems, its applicability is confined to specific data types. For example, web scraping makes sourcing data for Inertial Measurement Units (IMU) for gesture recognition more complex. At most, one can scrape an existing dataset.


Web scraping can yield inconsistent or inaccurate data. For example, the photo in Figure fig-traffic-light shows up when you search for ‘traffic light’ on Google Images. It is an image from 1914 that shows outdated traffic lights, which are also barely discernible because of the image’s poor quality. This can be problematic for web-scraped datasets, as it pollutes the dataset with inapplicable (old) data samples.

Figure 5.4: A picture of old traffic lights (1914). Credit: Vox.

Exercise 5.2 (Web Scraping)  


Discover the power of web scraping with Python using libraries like Beautiful Soup and Pandas. This exercise will scrape Python documentation for function names and descriptions and explore NBA player stats. By the end, you’ll have the skills to extract and analyze data from real-world websites. Ready to dive in? Access the Google Colab notebook below and start practicing!


5.3.3 Crowdsourcing


Crowdsourcing for datasets is the practice of obtaining data using the services of many people, either from a specific community or the general public, typically via the Internet. Instead of relying on a small team or specific organization to collect or label data, crowdsourcing leverages the collective effort of a vast, distributed group of participants. Services like Amazon Mechanical Turk enable the distribution of annotation tasks to a large, diverse workforce. This facilitates the collection of labels for complex tasks like sentiment analysis or image recognition requiring human judgment.


Crowdsourcing has emerged as an effective approach for data collection and problem-solving. One major advantage of crowdsourcing is scalability—by distributing tasks to a large, global pool of contributors on digital platforms, projects can process huge volumes of data quickly. This makes crowdsourcing ideal for large-scale data labeling, collection, and analysis.


In addition, crowdsourcing taps into a diverse group of participants, bringing a wide range of perspectives, cultural insights, and language abilities that can enrich data and enhance creative problem-solving in ways that a more homogenous group may not. Because crowdsourcing draws from a large audience beyond traditional channels, it is more cost-effective than conventional methods, especially for simpler microtasks.


Crowdsourcing platforms also allow for great flexibility, as task parameters can be adjusted in real time based on initial results. This creates a feedback loop for iterative improvements to the data collection process. Complex jobs can be broken down into microtasks and distributed to multiple people, with results cross-validated by assigning redundant versions of the same task. When thoughtfully managed, crowdsourcing enables community engagement around a collaborative project, where participants find reward in contributing.
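Cross-validating redundant annotations often comes down to majority voting with an agreement score that flags low-consensus items. The helper below is a small sketch with made-up clip IDs and labels.

```python
from collections import Counter

def aggregate_labels(annotations):
    """Majority-vote aggregation with a simple per-item agreement score."""
    results = {}
    for item_id, labels in annotations.items():
        (label, count), = Counter(labels).most_common(1)
        results[item_id] = (label, count / len(labels))
    return results

# Hypothetical redundant annotations: three workers labeled each audio clip.
annotations = {
    "clip_001": ["yes", "yes", "yes"],
    "clip_002": ["yes", "no", "yes"],
    "clip_003": ["no", "no", "unknown"],
}
print(aggregate_labels(annotations))
```

Items whose agreement score falls below a chosen threshold can be routed back to additional annotators or to an expert reviewer.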


However, while crowdsourcing offers numerous advantages, it’s essential to approach it with a clear strategy. While it provides access to a diverse set of annotators, it also introduces variability in the quality of annotations. Additionally, platforms like Mechanical Turk might not always capture a complete demographic spectrum; often, tech-savvy individuals are overrepresented, while children and older people may be underrepresented. Providing clear instructions and training for the annotators is crucial. Periodic checks and validations of the labeled data help maintain quality. This ties back to the topic of clear Problem Definition that we discussed earlier. Crowdsourcing for datasets also requires careful attention to ethical considerations. It’s crucial to ensure that participants are informed about how their data will be used and that their privacy is protected. Quality control through detailed protocols, transparency in sourcing, and auditing is essential to ensure reliable outcomes.


For TinyML, crowdsourcing can pose some unique challenges. TinyML devices are highly specialized for particular tasks within tight constraints. As a result, the data they require tends to be very specific. Obtaining such specialized data from a general audience may be difficult through crowdsourcing. For example, TinyML applications often rely on data collected from certain sensors or hardware. Crowdsourcing would require participants to have access to very specific and consistent devices, such as microphones with the same sampling rate. These hardware nuances present obstacles even for simple audio tasks like keyword spotting.


Beyond hardware, the data itself needs high granularity and quality, given the limitations of TinyML. It can be hard to ensure this when crowdsourcing from those unfamiliar with the application's context and requirements. There are also potential issues around privacy, real-time collection, standardization, and technical expertise. Moreover, the narrow nature of many TinyML tasks makes accurate data labeling difficult without the proper domain understanding, and participants may lack the full context needed to provide reliable annotations.


Thus, while crowdsourcing can work well in many cases, the specialized needs of TinyML introduce unique data challenges. Careful planning is required for guidelines, targeting, and quality control. For some applications, crowdsourcing may be feasible, but others may require more focused data collection efforts to obtain relevant, high-quality training data.


5.3.4 Synthetic Data


Synthetic data generation can be useful for addressing some of the data collection limitations. It involves creating data that wasn’t originally captured or observed but is generated using algorithms, simulations, or other techniques to resemble real-world data. As shown in Figure fig-synthetic-data, synthetic data is merged with historical data and then used as input for model training. It has become a valuable tool in various fields, particularly when real-world data is scarce, expensive, or ethically challenging (e.g., TinyML). Various techniques, such as Generative Adversarial Networks (GANs), can produce high-quality synthetic data almost indistinguishable from real data. These techniques have advanced significantly, making synthetic data generation increasingly realistic and reliable.


In many domains, especially emerging ones, insufficient real-world data may be available for analysis or training machine learning models. Synthetic data can fill this gap by producing large volumes of data that mimic real-world scenarios. For instance, detecting the sound of breaking glass might be challenging in security applications where a TinyML device is trying to identify break-ins. Collecting real-world data would require breaking numerous windows, which is impractical and costly.


Moreover, having a diverse dataset is crucial in machine learning, especially in deep learning. Synthetic data can augment existing datasets by introducing variations, thereby enhancing the robustness of models. For example, SpecAugment is an excellent data augmentation technique for Automatic Speech Recognition (ASR) systems.
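To make the idea concrete, the sketch below applies SpecAugment-style time and frequency masking to a toy spectrogram. Plain nested lists stand in for real tensors here, and the mask sizes are arbitrary choices for illustration.

```python
import random

def spec_augment(spectrogram, max_time_mask=3, max_freq_mask=2, rng=None):
    """Zero out a random band of frequency bins and a random span of time
    steps, mimicking SpecAugment's frequency and time masking."""
    rng = rng or random.Random(0)
    num_freq = len(spectrogram)       # rows: frequency bins
    num_time = len(spectrogram[0])    # cols: time steps
    out = [row[:] for row in spectrogram]  # copy so the input is untouched

    # Frequency mask: zero a band of consecutive frequency bins.
    f = rng.randint(1, max_freq_mask)
    f0 = rng.randint(0, num_freq - f)
    for i in range(f0, f0 + f):
        out[i] = [0.0] * num_time

    # Time mask: zero a span of consecutive time steps in every bin.
    t = rng.randint(1, max_time_mask)
    t0 = rng.randint(0, num_time - t)
    for row in out:
        for j in range(t0, t0 + t):
            row[j] = 0.0
    return out

spec = [[1.0] * 8 for _ in range(5)]   # toy 5x8 spectrogram of ones
augmented = spec_augment(spec)
print(sum(v == 0.0 for row in augmented for v in row) > 0)  # True
```

Each call produces a differently masked copy of the same sample, cheaply multiplying the effective training set.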


Privacy and confidentiality are also major concerns. Datasets containing sensitive or personal information pose privacy risks when shared or used. Synthetic data, being artificially generated, doesn't have these direct ties to real individuals, allowing for safer use while preserving essential statistical properties.


Generating synthetic data, especially once the generation mechanisms have been established, can be a more cost-effective alternative. In the security application scenario above, synthetic data eliminates the need to break multiple windows to gather relevant data.


Many embedded use cases deal with unique situations, such as manufacturing plants, that are difficult to simulate. Synthetic data allows researchers complete control over the data generation process, enabling the creation of specific scenarios or conditions that are challenging to capture in real life.


While synthetic data offers numerous advantages, it is essential to use it judiciously. Care must be taken to ensure that the generated data accurately represents the underlying real-world distributions and does not introduce unintended biases.

Figure 5.5: Increasing training data size with synthetic data generation. Credit: AnyLogic.

Exercise 5.3 (Synthetic Data)  


Let us learn about synthetic data generation using Generative Adversarial Networks (GANs) on tabular data. We’ll take a hands-on approach, diving into the workings of the CTGAN model and applying it to the Synthea dataset from the healthcare domain. From data preprocessing to model training and evaluation, we’ll go step-by-step, learning how to create synthetic data, assess its quality, and unlock the potential of GANs for data augmentation and real-world applications.


5.4 Data Storage


Data sourcing and data storage go hand in hand, and data must be stored in a format that facilitates easy access and processing. Depending on the use case, various kinds of data storage systems can be used to store your datasets. Some examples are shown in Table tbl-databases.

Table 5.1: Comparative overview of the database, data warehouse, and data lake.

|           | Database                       | Data Warehouse                                            | Data Lake                                             |
|-----------|--------------------------------|-----------------------------------------------------------|-------------------------------------------------------|
| Purpose   | Operational and transactional  | Analytical                                                | Analytical                                            |
| Data type | Structured                     | Structured                                                | Structured, semi-structured, and/or unstructured      |
| Scale     | Small to large volumes of data | Large volumes of integrated data                          | Large volumes of diverse data                         |
| Examples  | MySQL                          | Google BigQuery, Amazon Redshift, Microsoft Azure Synapse | Google Cloud Storage, AWS S3, Azure Data Lake Storage |

The stored data is often accompanied by metadata, defined as 'data about data.' It provides detailed contextual information about the data, such as the means of data creation, time of creation, attached data use license, etc. For example, Hugging Face has Dataset Cards. To promote responsible data use, dataset creators should disclose potential biases through the dataset cards. These cards can educate users about a dataset's contents and limitations. The cards also give vital context on appropriate dataset usage by highlighting biases and other important details. Having this type of metadata can also allow fast retrieval if structured properly. Once the model is developed and deployed to edge devices, the storage systems can continue to store incoming data, model updates, or analytical results.


Data Governance: With a large amount of data storage, it is also imperative to have policies and practices (i.e., data governance) that help manage data during its life cycle, from acquisition to disposal. Data governance frames how data is managed and includes making pivotal decisions about data access and control. Figure fig-governance illustrates the different domains involved in data governance. It involves exercising authority and making decisions concerning data to uphold its quality, ensure compliance, maintain security, and derive value. Data governance is operationalized by developing policies, incentives, and penalties, cultivating a culture that perceives data as a valuable asset. Specific procedures and assigned authorities are implemented to safeguard data quality and monitor its utilization and related risks.


Data governance utilizes three integrative approaches: planning and control, organizational, and risk-based.

  • The planning and control approach, common in IT, aligns business and technology through annual cycles and continuous adjustments, focusing on policy-driven, auditable governance.
  • The organizational approach emphasizes structure, establishing authoritative roles like Chief Data Officers and ensuring responsibility and accountability in governance.
  • The risk-based approach, intensified by AI advancements, focuses on identifying and managing inherent risks in data and algorithms. It especially addresses AI-specific issues through regular assessments and proactive risk management strategies, allowing for incidental and preventive actions to mitigate undesired algorithm impacts.
Figure 5.6: An overview of the data governance framework. Credit: StarCIO.

Some examples of data governance across different sectors include:

  • Medicine: Health Information Exchanges (HIEs) enable the sharing of health information across different healthcare providers to improve patient care. They implement strict data governance practices to maintain data accuracy, integrity, privacy, and security, complying with regulations such as the Health Insurance Portability and Accountability Act (HIPAA). Governance policies ensure that patient data is only shared with authorized entities and that patients can control access to their information.
  • Finance: The Basel III Framework (https://www.bis.org/bcbs/basel3.htm) is an international regulatory framework for banks. It ensures that banks establish clear policies, practices, and responsibilities for data management, ensuring data accuracy, completeness, and timeliness. Not only does it enable banks to meet regulatory compliance, but it also prevents financial crises by more effectively managing risks.
  • Government: Government agencies managing citizen data, public records, and administrative information implement data governance to manage data transparently and securely. The Social Security System in the US and the Aadhaar system in India are good examples of such governance systems.

Special data storage considerations for TinyML


Efficient Audio Storage Formats: Keyword spotting systems need specialized audio storage formats to enable quick keyword searching in audio data. Traditional formats like WAV and MP3 store full audio waveforms, which require extensive processing to search through. Keyword spotting uses compressed storage optimized for snippet-based search. One approach is to store compact acoustic features instead of raw audio. Such a workflow would involve:

  • Extracting acoustic features: Mel-frequency cepstral coefficients (MFCCs) commonly represent important audio characteristics.
  • Creating embeddings: Embeddings transform extracted acoustic features into continuous vector spaces, enabling more compact and representative data storage. This representation is essential in converting high-dimensional data, like audio, into a more manageable and efficient format for computation and storage.
  • Vector quantization: This technique represents high-dimensional data, like embeddings, with lower-dimensional vectors, reducing storage needs. Initially, a codebook is generated from the training data to define a set of code vectors representing the original data vectors. Subsequently, each data vector is matched to the nearest codeword according to the codebook, ensuring minimal information loss.
  • Sequential storage: The audio is fragmented into short frames, and the quantized features (or embeddings) for each frame are stored sequentially to maintain the temporal order, preserving the coherence and context of the audio data.

This format enables decoding the features frame-by-frame for keyword matching. Searching the features is faster than decompressing the full audio.
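A minimal sketch of the quantization and sequential-storage steps, using made-up 2-D feature frames and a tiny hand-written codebook (a real system would use MFCC vectors and a codebook learned from training data, e.g., with k-means):

```python
def quantize_frames(frames, codebook):
    """Map each feature frame to the index of its nearest code vector
    (squared Euclidean distance), so only one small index is stored per frame."""
    def nearest(frame):
        dists = [sum((a - b) ** 2 for a, b in zip(frame, code)) for code in codebook]
        return dists.index(min(dists))
    return [nearest(f) for f in frames]

# Hypothetical 2-D "acoustic features" and a hand-written 3-entry codebook.
codebook = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
frames = [[0.1, -0.2], [0.9, 1.2], [4.8, 5.1], [1.1, 0.8]]

indices = quantize_frames(frames, codebook)
print(indices)  # [0, 1, 2, 1] -- one index per frame, in temporal order
```

Each frame is now stored as a small codebook index rather than a full feature vector, which is where the storage saving comes from; decoding for keyword matching simply replaces each index with its code vector.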


Selective Network Output Storage: Another technique for reducing storage is to discard the intermediate audio features stored during training but not required during inference. The network is run on full audio during training. However, only the final outputs are stored during inference.


5.5 Data Processing


Data processing refers to the steps involved in transforming raw data into a format suitable for feeding into machine learning algorithms. It is a crucial stage in any ML workflow, yet often overlooked. With proper data processing, ML models are likely to achieve optimal performance. Figure fig-data-engineering shows a breakdown of a data scientist's time allocation, highlighting the significant portion spent on data cleaning and organizing (60%).

Figure 5.7: Data scientists' tasks breakdown by time spent. Credit: Forbes.

Proper data cleaning is a crucial step that directly impacts model performance. Real-world data is often dirty, containing errors, missing values, noise, anomalies, and inconsistencies. Data cleaning involves detecting and fixing these issues to prepare high-quality data for modeling. By carefully selecting appropriate techniques, data scientists can improve model accuracy, reduce overfitting, and enable algorithms to learn more robust patterns. Overall, thoughtful data processing allows machine learning systems to uncover insights better and make predictions from real-world data.


Data often comes from diverse sources and can be unstructured or semi-structured. Thus, processing and standardizing it is essential, ensuring it adheres to a uniform format. Such transformations may include:

  • Normalizing numerical variables
  • Encoding categorical variables
  • Using techniques like dimensionality reduction
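The first two transformations can be sketched in a few lines of standard-library Python (toy values here; real pipelines typically use libraries such as scikit-learn):

```python
from statistics import mean, pstdev

def normalize(values):
    """Z-score normalization of a numerical variable."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

def one_hot(values):
    """One-hot encoding of a categorical variable (categories sorted for stability)."""
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

temps = [10.0, 20.0, 30.0]
print(normalize(temps))   # [-1.224..., 0.0, 1.224...]

rooms = ["kitchen", "bedroom", "kitchen"]
print(one_hot(rooms))     # [[0, 1], [1, 0], [0, 1]]
```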

Data validation serves a broader role than simply ensuring adherence to certain standards, like preventing temperature values from falling below absolute zero. Such issues arise in TinyML because sensors may malfunction or temporarily produce incorrect readings; such transients are not uncommon. Therefore, it is imperative to catch data errors early before they propagate through the data pipeline. Rigorous validation processes, including verifying the initial annotation practices, detecting outliers, and handling missing values through techniques like mean imputation, contribute directly to the quality of datasets. This, in turn, impacts the performance, fairness, and safety of the models trained on them.

Let's take a look at Figure fig-data-engineering-kws2 for an example of a data processing pipeline. In the context of TinyML, the Multilingual Spoken Words Corpus (MSWC) is built with such a pipeline: a systematic, automated workflow for data transformation, storage, and processing. The input data (a collection of short recordings) goes through several phases of processing, such as audio-word alignment and keyword extraction. By streamlining the data flow from raw data to usable datasets, data pipelines enhance productivity and facilitate the rapid development of machine learning models. The MSWC is an expansive and expanding collection of audio recordings of spoken words in 50 different languages, which are collectively spoken by over 5 billion people. This dataset is intended for academic study and business uses in areas like keyword identification and speech-based search. It is openly licensed under Creative Commons Attribution 4.0 for broad usage.
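As a small illustration of range checks combined with mean imputation, the sketch below cleans a hypothetical temperature stream (the sensor bounds and values are made up):

```python
def validate_and_impute(readings, low, high):
    """Flag physically impossible sensor readings and replace missing or
    invalid values with the mean of the valid ones (mean imputation)."""
    valid = [r for r in readings if r is not None and low <= r <= high]
    fill = sum(valid) / len(valid)
    cleaned = [r if (r is not None and low <= r <= high) else fill for r in readings]
    n_fixed = len(readings) - len(valid)
    return cleaned, n_fixed

# Hypothetical temperature stream in Celsius: -300 is below absolute zero,
# None is a dropped sample; both are transients a sensor might produce.
readings = [21.5, 22.0, -300.0, None, 21.0]
cleaned, n_fixed = validate_and_impute(readings, low=-40.0, high=85.0)
print(cleaned)   # [21.5, 22.0, 21.5, 21.5, 21.0]
print(n_fixed)   # 2
```

Running such checks at ingestion time keeps a single faulty sensor from silently contaminating the whole training set.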

Figure 5.8: An overview of the Multilingual Spoken Words Corpus (MSWC) data processing pipeline. Credit: Mazumder et al. (2021).

Mazumder, Mark, Sharad Chitlangia, Colby Banbury, Yiping Kang, Juan Manuel Ciro, Keith Achorn, Daniel Galvez, et al. 2021. “Multilingual Spoken Words Corpus.” In Thirty-Fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).

The MSWC used a forced alignment method to automatically extract individual word recordings to train keyword-spotting models from the Common Voice project, which features crowdsourced sentence-level recordings. Forced alignment refers to long-standing methods in speech processing that predict when speech phenomena like syllables, words, or sentences start and end within an audio recording. In the MSWC data, crowdsourced recordings often feature background noises, such as static and wind. Depending on the model’s requirements, these noises can be removed or intentionally retained.


Maintaining the integrity of the data infrastructure is a continuous endeavor. This encompasses data storage, security, error handling, and stringent version control. Periodic updates are crucial, especially in dynamic realms like keyword spotting, to adjust to evolving linguistic trends and device integrations.


There is a boom in data processing pipelines, commonly found in ML operations toolchains, which we will discuss in the MLOps chapter. Briefly, these include frameworks like MLOps by Google Cloud. It provides methods for automation and monitoring at all steps of ML system construction, including integration, testing, releasing, deployment, and infrastructure management. Several mechanisms focus on data processing, an integral part of these systems.


Exercise 5.4 (Data Processing)  


Let us explore two significant projects in speech data processing and machine learning. The MSWC is a vast audio dataset with over 340,000 keywords and 23.4 million 1-second spoken examples. It’s used in various applications like voice-enabled devices and call center automation. The Few-Shot Keyword Spotting project introduces a new approach for keyword spotting across different languages, achieving impressive results with minimal training data. We’ll delve into the MSWC dataset, learn how to structure it effectively, and then train a few-shot keyword-spotting model. Let’s get started!


5.6 Data Labeling


Data labeling is important in creating high-quality training datasets for machine learning models. Labels provide ground truth information, allowing models to learn relationships between inputs and desired outputs. This section covers key considerations for selecting label types, formats, and content to capture the necessary information for tasks. It discusses common annotation approaches, from manual labeling to crowdsourcing to AI-assisted methods, and best practices for ensuring label quality through training, guidelines, and quality checks. We also emphasize the ethical treatment of human annotators. The integration of AI to accelerate and augment human annotation is also explored. Understanding labeling needs, challenges, and strategies is essential for constructing reliable, useful datasets to train performant, trustworthy machine learning systems.


5.6.1 Label Types


Labels capture information about key tasks or concepts. Figure fig-labels includes some common label types: a “classification label” is used for categorizing images with labels (labeling an image with “dog” if it features a dog); a “bounding box” identifies object location (drawing a box around the dog); a “segmentation map” classifies objects at the pixel level (highlighting the dog in a distinct color); a “caption” provides descriptive annotations (describing the dog’s actions, position, color, etc.); and a “transcript” denotes audio content. The choice of label format depends on the use case and resource constraints, as more detailed labels require greater effort to collect (Johnson-Roberson et al. 2017).

Johnson-Roberson, Matthew, Charles Barto, Rounak Mehta, Sharath Nittur Sridhar, Karl Rosaen, and Ram Vasudevan. 2017. “Driving in the Matrix: Can Virtual Worlds Replace Human-Generated Annotations for Real World Tasks?” In 2017 IEEE International Conference on Robotics and Automation (ICRA), 746–53. Singapore, Singapore: IEEE. https://doi.org/10.1109/icra.2017.7989092.

Figure 5.9: An overview of common label types.

Unless focused on self-supervised learning, a dataset will likely provide labels addressing one or more tasks of interest. Given their unique resource constraints, dataset creators must consider what information labels should capture and how they can practically obtain the necessary labels. Creators must first decide what type(s) of content labels should capture. For example, a creator interested in car detection would want to label cars in their dataset. Still, they might also consider whether to simultaneously collect labels for other tasks that the dataset could potentially be used for, such as pedestrian detection.


Additionally, annotators can provide metadata that provides insight into how the dataset represents different characteristics of interest (see sec-data-transparency). The Common Voice dataset, for example, includes various types of metadata that provide information about the speakers, recordings, and dataset quality for each language represented (Ardila et al. (2020)). They include demographic splits showing the number of recordings by speaker age range and gender. This allows us to see who contributed recordings for each language. They also include statistics like average recording duration and total hours of validated recordings. These give insights into the nature and size of the datasets for each language. Additionally, quality control metrics like the percentage of recordings that have been validated are useful to know how complete and clean the datasets are. The metadata also includes normalized demographic splits scaled to 100% for comparison across languages. This highlights representation differences between higher and lower resource languages.
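Normalized demographic splits of the kind described above are straightforward to compute: scale each group's raw count to a percentage of the total so representation can be compared across languages of very different sizes. A minimal sketch (the counts below are made up for illustration, not actual Common Voice statistics):

```python
# Sketch: normalize raw demographic counts to percentages (scaled to 100%)
# so representation can be compared across languages. Counts are invented.
def normalize_split(counts):
    total = sum(counts.values())
    return {group: round(100 * n / total, 1) for group, n in counts.items()}

high_resource = {"male": 6000, "female": 3000, "other": 1000}
low_resource  = {"male": 450,  "female": 50}

print(normalize_split(high_resource))  # {'male': 60.0, 'female': 30.0, 'other': 10.0}
print(normalize_split(low_resource))   # {'male': 90.0, 'female': 10.0}
```

The normalized view makes representation gaps visible at a glance, such as the heavier gender skew in the smaller hypothetical dataset.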

Ardila, Rosana, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. 2020. “Common Voice: A Massively-Multilingual Speech Corpus.” In Proceedings of the Twelfth Language Resources and Evaluation Conference, 4218–22. Marseille, France: European Language Resources Association. https://aclanthology.org/2020.lrec-1.520.

Next, creators must determine the format of those labels. For example, a creator interested in car detection might choose between binary classification labels that say whether a car is present, bounding boxes that show the general locations of any cars, or pixel-wise segmentation labels that show the exact location of each car. Their choice of label format may depend on their use case and resource constraints, as finer-grained labels are typically more expensive and time-consuming to acquire.


5.6.2 Annotation Methods


Common annotation approaches include manual labeling, crowdsourcing, and semi-automated techniques. Manual labeling by experts yields high quality but lacks scalability. Crowdsourcing distributes annotation across non-experts, often through dedicated platforms (Sheng and Zhang (2019)). Weakly supervised and programmatic methods can reduce manual effort by heuristically or automatically generating labels (Ratner et al. (2018)).

Sheng, Victor S., and Jing Zhang. 2019. “Machine Learning with Crowdsourcing: A Brief Summary of the Past Research and Future Directions.” Proceedings of the AAAI Conference on Artificial Intelligence 33 (01): 9837–43. https://doi.org/10.1609/aaai.v33i01.33019837.
Ratner, Alex, Braden Hancock, Jared Dunnmon, Roger Goldman, and Christopher Ré. 2018. “Snorkel MeTaL: Weak Supervision for Multi-Task Learning.” In Proceedings of the Second Workshop on Data Management for End-to-End Machine Learning. ACM. https://doi.org/10.1145/3209889.3209898.

After deciding on their labels’ desired content and format, creators begin the annotation process. To collect large numbers of labels from human annotators, creators frequently rely on dedicated annotation platforms, which can connect them to teams of human annotators. When using these platforms, creators may have limited insight into annotators’ backgrounds and experience levels with topics of interest. However, some platforms offer access to annotators with specific expertise (e.g., doctors).
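The programmatic labeling approach mentioned above can be sketched as a set of heuristic labeling functions whose votes are combined, in the spirit of weak-supervision frameworks such as Snorkel. The rules and examples below are invented for illustration:

```python
# Sketch of programmatic labeling: heuristic labeling functions vote on each
# example, and the majority label becomes a noisy training label. ABSTAIN
# lets a function pass on examples it cannot judge. Rules are illustrative.
from collections import Counter

ABSTAIN = None

def lf_mentions_refund(text):
    return "spam" if "refund" in text.lower() else ABSTAIN

def lf_all_caps(text):
    return "spam" if text.isupper() else ABSTAIN

def lf_greeting(text):
    return "ham" if text.lower().startswith("hi") else ABSTAIN

def majority_label(text, labeling_functions):
    votes = [lf(text) for lf in labeling_functions]
    votes = [v for v in votes if v is not ABSTAIN]
    if not votes:
        return ABSTAIN  # no function fired; leave unlabeled
    return Counter(votes).most_common(1)[0][0]

lfs = [lf_mentions_refund, lf_all_caps, lf_greeting]
print(majority_label("CLAIM YOUR REFUND NOW", lfs))  # spam
print(majority_label("hi, see you at lunch", lfs))   # ham
```

Real weak-supervision systems go further, modeling the accuracies and correlations of labeling functions rather than taking a simple majority vote.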


5.6.3 Ensuring Label Quality


There is no guarantee that the data labels are actually correct. Figure fig-hard-labels shows some examples of hard labeling cases: some errors arise from blurred pictures that make them hard to identify (the frog image), and others stem from a lack of domain knowledge (the black stork case). It is possible that despite the best instructions being given to labelers, they still mislabel some images (Northcutt, Athalye, and Mueller (2021)). Strategies like quality checks, training annotators, and collecting multiple labels per datapoint can help ensure label quality. For ambiguous tasks, multiple annotators can help identify controversial datapoints and quantify disagreement levels.
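Collecting multiple labels per datapoint makes disagreement measurable, so ambiguous or mislabeled items can be flagged for review. A minimal sketch, with an invented agreement threshold of 0.75 and made-up annotations:

```python
# Sketch: flag datapoints whose annotators disagree, so they can be
# reviewed or adjudicated. The threshold and annotations are illustrative.
from collections import Counter

def agreement(labels):
    """Fraction of annotators who chose the most common label."""
    top_count = Counter(labels).most_common(1)[0][1]
    return top_count / len(labels)

annotations = {
    "img_01": ["cat", "cat", "cat"],          # unanimous
    "img_02": ["frog", "toad", "frog"],       # blurry picture, mild confusion
    "img_03": ["stork", "crane", "heron"],    # needs domain expertise
}

flagged = [k for k, v in annotations.items() if agreement(v) < 0.75]
print(flagged)  # items needing expert review
```

More formal measures such as Cohen's or Fleiss' kappa additionally correct for agreement expected by chance, but even this simple fraction surfaces the controversial datapoints.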

Figure 5.10: Some examples of hard labeling cases. Credit: Northcutt, Athalye, and Mueller (2021).

Northcutt, Curtis G., Anish Athalye, and Jonas Mueller. 2021. “Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks.” arXiv. https://doi.org/10.48550/arXiv.2103.14749.

When working with human annotators, offering fair compensation and otherwise prioritizing ethical treatment is important, as annotators can be exploited or otherwise harmed during the labeling process (Perrigo, 2023). For example, if a dataset is likely to contain disturbing content, annotators may benefit from having the option to view images in grayscale (Google (n.d.)).

Google. n.d. “Information Quality Content Moderation.” https://blog.google/documents/83/.

5.6.4 AI-Assisted Annotation


ML has an insatiable demand for data, which raises the question of how to obtain more labeled data. Rather than always generating and curating data manually, we can rely on existing AI models to help label datasets more quickly and cheaply, though often with lower quality than human annotation. This can be done in various ways, as shown in Figure fig-weak-supervision, including the following:

  • Pre-annotation: AI models can generate preliminary labels for a dataset using methods such as semi-supervised learning (Chapelle, Scholkopf, and Zien (2009)), which humans can then review and correct. This can save a significant amount of time, especially for large datasets.
  • Active learning: AI models can identify the most informative data points in a dataset, which can then be prioritized for human annotation. This can help improve the labeled dataset’s quality while reducing the overall annotation time.
  • Quality control: AI models can identify and flag potential errors in human annotations, helping to ensure the accuracy and consistency of the labeled dataset.
Chapelle, O., B. Scholkopf, and A. Zien, eds. 2009. “Semi-Supervised Learning (Chapelle, O. et al., eds.; 2006) [Book Reviews].” IEEE Trans. Neural Networks 20 (3): 542–42. https://doi.org/10.1109/tnn.2009.2015974.
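The active learning strategy listed above is often implemented as uncertainty sampling: score each unlabeled example by how unsure the current model is, then send the most uncertain ones to human annotators first. A minimal sketch using entropy of the predicted class probabilities (the predictions below are invented):

```python
# Sketch of uncertainty-based active learning: prioritize for human
# annotation the unlabeled examples the model is least sure about,
# measured here by the entropy of its predicted class probabilities.
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical model predictions over three classes for unlabeled items.
predictions = {
    "x1": [0.98, 0.01, 0.01],  # confident -> low annotation priority
    "x2": [0.40, 0.35, 0.25],  # uncertain -> high annotation priority
    "x3": [0.70, 0.20, 0.10],
}

queue = sorted(predictions, key=lambda k: entropy(predictions[k]), reverse=True)
print(queue)  # most informative items first
```

Items where the model's probability mass is spread thinly across classes rise to the front of the labeling queue, which is where a human label changes the model most.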

Here are some examples of how AI-assisted annotation has been proposed to be useful:

  • Medical imaging: AI-assisted annotation labels medical images, such as MRI scans and X-rays (Krishnan, Rajpurkar, and Topol (2022)). Carefully annotating medical datasets is extremely challenging, especially at scale, since domain experts are scarce and costly. This can help train AI models to diagnose diseases and other medical conditions more accurately and efficiently.
  • Self-driving cars: AI-assisted annotation is being used to label images and videos from self-driving cars. This can help train AI models to identify objects on the road, such as other vehicles, pedestrians, and traffic signs.
  • Social media: AI-assisted annotation labels social media posts like images and videos. This can help train AI models to identify and classify different types of content, such as news, advertising, and personal posts.
Krishnan, Rayan, Pranav Rajpurkar, and Eric J. Topol. 2022. “Self-Supervised Learning in Medicine and Healthcare.” Nat. Biomed. Eng. 6 (12): 1346–52. https://doi.org/10.1038/s41551-022-00914-1.

Figure 5.11: Strategies for acquiring additional labeled training data. Credit: Stanford AI Lab.

5.7 Data Version Control


Production systems are perpetually inundated with fluctuating and escalating volumes of data, prompting the rapid emergence of numerous data replicas. This increasing data serves as the foundation for training machine learning models. For instance, a global sales company engaged in sales forecasting continuously receives consumer behavior data. Similarly, healthcare systems formulating predictive models for disease diagnosis are consistently acquiring new patient data. TinyML applications, such as keyword spotting, are highly data-hungry as well. Consequently, meticulous tracking of data versions and the corresponding model performance is imperative.


Data version control offers a structured methodology to handle alterations and versions of datasets efficiently. It facilitates monitoring modifications, preserves multiple versions, and guarantees reproducibility and traceability in data-centric projects. Furthermore, data version control provides the versatility to review and utilize specific versions as needed, ensuring that each stage of the data processing and model development can be revisited and audited precisely and easily. It has a variety of practical uses:


Risk Management: Data version control allows transparency and accountability by tracking dataset versions.


Collaboration and Efficiency: Easy access to different dataset versions in one place can improve data sharing of specific checkpoints and enable efficient collaboration.


Reproducibility: Data version control allows for tracking the performance of models against different versions of the data, thereby enabling reproducibility.


Key Concepts

  • Commits: A commit is an immutable snapshot of the data at a specific point in time, representing a unique version. Every commit is associated with a unique identifier that allows it to be referenced and restored.
  • Branches: Branching allows developers and data scientists to diverge from the main development line and continue to work independently without affecting other branches. This is especially useful when experimenting with new features or models, enabling parallel development and experimentation without the risk of corrupting the stable main branch.
  • Merges: Merges help to integrate changes from different branches while maintaining the integrity of the data.

With data version control in place, we can track the changes shown in Figure fig-data-version-ctrl, reproduce previous results by reverting to older versions, and collaborate safely by branching off and isolating the changes.
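These concepts can be illustrated with a toy sketch: a "commit" is an immutable snapshot identified by a content hash, and checking out an older commit reproduces the earlier dataset exactly. This is a deliberately minimal illustration; real tools such as DVC or lakeFS add storage backends, branching, and merging on top of the same idea:

```python
# Toy sketch of data-version-control concepts: each commit is an immutable,
# content-addressed snapshot of the dataset, and checkout restores one.
import hashlib

store = {}    # commit id -> snapshot of the dataset
history = []  # ordered commit ids

def commit(dataset):
    """Record an immutable snapshot and return its content-derived id."""
    blob = repr(sorted(dataset.items())).encode()
    commit_id = hashlib.sha256(blob).hexdigest()[:8]
    store[commit_id] = dict(dataset)  # defensive copy -> immutability
    history.append(commit_id)
    return commit_id

def checkout(commit_id):
    """Return a working copy of the dataset as it was at that commit."""
    return dict(store[commit_id])

v1 = commit({"sample_001": "label_a"})
v2 = commit({"sample_001": "label_a", "sample_002": "label_b"})

old = checkout(v1)  # revert to reproduce results against the earlier version
print(len(old), len(checkout(v2)))  # 1 2
```

Because the commit id is derived from the data's content, the same dataset always hashes to the same version, which is what makes experiments traceable to exact data states.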

Figure 5.12: Data versioning.

Popular Data Version Control Systems


DVC: Short for Data Version Control, DVC is an open-source, lightweight tool that works on top of Git and supports all kinds of data formats. It can seamlessly integrate into the workflow if Git is used to manage code. It captures the versions of data and models in Git commits while storing them on-premises or in the cloud (e.g., AWS, Google Cloud, Azure). These data and models (e.g., ML artifacts) are defined in metadata files, which are updated in every commit. It also allows metrics tracking of models on different versions of the data.


lakeFS: An open-source tool that supports data version control on data lakes. It supports many Git-like operations, such as branching and merging of data, as well as reverting to previous versions of the data. It also has a unique UI feature that makes exploring and managing data much easier.


Git LFS: Useful for data version control on smaller-sized datasets. It uses Git’s built-in branching and merging features but is limited in tracking metrics, reverting to previous versions, or integrating with data lakes.


5.8 Optimizing Data for Embedded AI


Creators working on embedded systems may have unusual priorities when cleaning their datasets. On the one hand, models may be developed for unusually specific use cases, requiring heavy filtering of datasets. While other natural language models may be capable of turning any speech into text, a model for an embedded system may be focused on a single limited task, such as detecting a keyword. As a result, creators may aggressively filter out large amounts of data that are irrelevant to the task of interest. An embedded AI system may also be tied to specific hardware devices or environments. For example, a video model may need to process images from a single type of camera, which will only be mounted on doorbells in residential neighborhoods. In this scenario, creators may discard images if they came from a different kind of camera, show the wrong type of scenery, or were taken from the wrong height or angle.


On the other hand, embedded AI systems are often expected to provide especially accurate performance in unpredictable real-world settings. This may lead creators to design datasets to represent variations in potential inputs and promote model robustness. As a result, they may define a narrow scope for their project but then aim for deep coverage within those bounds. For example, creators of the doorbell model mentioned above might try to cover variations in data arising from:

  • Geographically, socially, and architecturally diverse neighborhoods
  • Different types of artificial and natural lighting
  • Different seasons and weather conditions
  • Obstructions (e.g., raindrops or delivery boxes obscuring the camera’s view)

As described above, creators may consider crowdsourcing or synthetically generating data to include these variations.
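One simple way to synthesize such variations is to transform existing examples, for instance rescaling pixel brightness to simulate different lighting. The sketch below uses a plain nested-list "image" as a stand-in for richer augmentations (simulated weather, obstructions) that real pipelines would apply with image libraries:

```python
# Sketch: synthetically covering lighting variation by rescaling pixel
# brightness, a stand-in for richer augmentations such as simulated
# weather or obstructions. The 2x2 "image" is illustrative.
def adjust_brightness(image, factor):
    """Scale pixel values by `factor`, clipping to the valid 0-255 range."""
    return [[min(255, max(0, int(p * factor))) for p in row] for row in image]

image = [[100, 150], [200, 250]]
dusk = adjust_brightness(image, 0.5)  # darker variant
noon = adjust_brightness(image, 1.5)  # brighter variant, clipped at 255
print(dusk)  # [[50, 75], [100, 125]]
print(noon)  # [[150, 225], [255, 255]]
```

Each augmented copy widens the dataset's coverage of real-world conditions without requiring a new data-collection campaign.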


5.9 Data Transparency


By providing clear, detailed documentation, creators can help developers understand how best to use their datasets. Several groups have suggested standardized documentation formats for datasets, such as Data Cards (Pushkarna, Zaldivar, and Kjartansson (2022)), datasheets (Gebru et al. (2021)), data statements (Bender and Friedman (2018)), or Data Nutrition Labels (Holland et al. (2020)). When releasing a dataset, creators may describe what kinds of data they collected, how they collected and labeled it, and what kinds of use cases may be a good or poor fit for the dataset. Quantitatively, it may be appropriate to show how well the dataset represents different groups (e.g., different gender groups, different cameras).

Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2021. “Datasheets for Datasets.” Commun. ACM 64 (12): 86–92. https://doi.org/10.1145/3458723.
Bender, Emily M., and Batya Friedman. 2018. “Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science.” Transactions of the Association for Computational Linguistics 6 (December): 587–604. https://doi.org/10.1162/tacl_a_00041.
Holland, Sarah, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski. 2020. “The Dataset Nutrition Label: A Framework to Drive Higher Data Quality Standards.” In Data Protection and Privacy. Hart Publishing. https://doi.org/10.5040/9781509932771.ch-001.

Figure fig-data-card shows an example of a data card for a computer vision (CV) dataset. It includes some basic information about the dataset and instructions on how to use it, including known biases.
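Documentation of this kind can also be kept machine-readable so it travels with the dataset and can be checked automatically. A minimal sketch of a data-card-like record; the fields and values are illustrative, not any published schema:

```python
# Sketch of a minimal machine-readable "data card": structured documentation
# a creator might publish alongside a dataset. Fields are illustrative.
data_card = {
    "name": "doorbell-cam-vision",
    "collection": "images from residential doorbell cameras, 2023",
    "labeling": "bounding boxes, crowdsourced with expert review",
    "intended_uses": ["person detection at doorways"],
    "out_of_scope_uses": ["surveillance of public spaces"],
    "known_biases": ["daytime images overrepresented"],
}

def check_card(card, required=("collection", "labeling", "known_biases")):
    """Report which documentation fields are still missing from a card."""
    return [field for field in required if field not in card]

print(check_card(data_card))  # [] -> all required fields documented
```

A simple completeness check like this can be run in CI so a dataset release is blocked until its documentation covers collection methods, labeling process, and known biases.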

Figure 5.13: Data card describing a CV dataset. Credit: Pushkarna, Zaldivar, and Kjartansson (2022).

Pushkarna, Mahima, Andrew Zaldivar, and Oddur Kjartansson. 2022. “Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI.” In 2022 ACM Conference on Fairness, Accountability, and Transparency. ACM. https://doi.org/10.1145/3531146.3533231.

Keeping track of data provenance, essentially the origins and journey of each data point through the data pipeline, is not merely a good practice but an essential requirement for data quality. Data provenance contributes significantly to the transparency of machine learning systems. Transparent systems make it easier to scrutinize data points, enabling better identification and rectification of errors, biases, or inconsistencies. For instance, if an ML model trained on medical data is underperforming in particular areas, tracing the provenance can help identify whether the issue lies with the data collection methods, the demographic groups represented in the data, or other factors. This level of transparency doesn’t just help debug the system but also plays a crucial role in enhancing the overall data quality. By improving the reliability and credibility of the dataset, data provenance also enhances the model’s performance and its acceptability among end-users.
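In practice, provenance can be recorded by attaching an append-only history to each data point as it moves through the pipeline. A minimal sketch; the sources and processing steps below are hypothetical:

```python
# Sketch: attach a provenance record to each data point so its origin and
# processing history can be traced when debugging model behavior.
def with_provenance(value, source):
    """Wrap a raw value with a provenance trail noting where it came from."""
    return {"value": value, "provenance": [f"collected:{source}"]}

def apply_step(record, step_name, fn):
    """Transform the value and append the step to the provenance trail."""
    return {
        "value": fn(record["value"]),
        "provenance": record["provenance"] + [f"processed:{step_name}"],
    }

rec = with_provenance(" Patient aged 54 ", "clinic_a_export")
rec = apply_step(rec, "strip_whitespace", str.strip)
rec = apply_step(rec, "lowercase", str.lower)
print(rec["provenance"])
# ['collected:clinic_a_export', 'processed:strip_whitespace', 'processed:lowercase']
```

When a model misbehaves on some slice of data, the trail answers "where did this value come from, and what happened to it?" without reverse-engineering the pipeline.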


When producing documentation, creators should also specify how users can access the dataset and how the dataset will be maintained over time. For example, users may need to undergo training or receive special permission from the creators before accessing a protected information dataset, as with many medical datasets. In some cases, users may not access the data directly. Instead, they must submit their model to be trained on the dataset creators’ hardware, following a federated learning setup (Aledhari et al. (2020)). Creators may also describe how long the dataset will remain accessible, how the users can submit feedback on any errors they discover, and whether there are plans to update the dataset.

Aledhari, Mohammed, Rehma Razzak, Reza M. Parizi, and Fahad Saeed. 2020. “Federated Learning: A Survey on Enabling Technologies, Protocols, and Applications.” IEEE Access 8: 140699–725. https://doi.org/10.1109/access.2020.3013541.

Some laws and regulations also promote data transparency through new requirements for organizations:

  • General Data Protection Regulation (GDPR) in the European Union: It establishes strict requirements for processing and protecting the personal data of EU citizens. It mandates plain-language privacy policies that clearly explain what data is collected, why it is used, how long it is stored, and with whom it is shared. GDPR also mandates that privacy notices include details on the legal basis for processing, data transfers, retention periods, rights to access and deletion, and contact information for data controllers.
  • California Consumer Privacy Act (CCPA): CCPA requires clear privacy policies and opt-out rights for the sale of personal data. Significantly, it also establishes rights for consumers to request that their specific data be disclosed. Businesses must provide copies of collected personal information and details on what it is used for, what categories are collected, and what third parties receive it. Consumers can also flag data points they believe to be inaccurate. The law represents a major step forward in empowering personal data access.

Ensuring data transparency presents several challenges, especially because it requires significant time and financial resources. Data systems are also quite complex, and achieving full transparency can take time. Full transparency may also overwhelm consumers with too much detail. Finally, it is important to balance the tradeoff between transparency and privacy.


5.10 Licensing


Many high-quality datasets either come from proprietary sources or contain copyrighted information. This introduces licensing as a challenging legal domain. Companies eager to train ML systems must engage in negotiations to obtain licenses that grant legal access to these datasets. Furthermore, licensing terms can impose restrictions on data applications and sharing methods. Failure to comply with these licenses can have severe consequences.


For instance, ImageNet, one of the most extensively utilized datasets for computer vision research, is a case in point. Most of its images were procured from public online sources without explicit permission, sparking ethical concerns (Prabhu and Birhane, 2020). Accessing the ImageNet dataset for corporations requires registration and adherence to its terms of use, which restricts commercial usage (ImageNet, 2021). Major players like Google and Microsoft invest significantly in licensing datasets to enhance their ML vision systems. However, the cost factor restricts accessibility for researchers from smaller companies with constrained budgets.


The legal domain of data licensing has seen major cases that help define fair use parameters. A prominent example is Authors Guild, Inc. v. Google, Inc. This 2005 lawsuit alleged that Google’s book scanning project infringed copyrights by displaying snippets without permission. However, the courts ultimately ruled in Google’s favor, upholding fair use based on the transformative nature of creating a searchable index and showing limited text excerpts. This precedent provides some legal grounds for arguing fair use protections apply to indexing datasets and generating representative samples for machine learning. However, license restrictions remain binding, so a comprehensive analysis of licensing terms is critical. The case demonstrates why negotiations with data providers are important to enable legal usage within acceptable bounds.


New Data Regulations and Their Implications


New data regulations also impact licensing practices. The legislative landscape is evolving with regulations like the EU’s Artificial Intelligence Act, which is poised to regulate AI system development and use within the European Union (EU). This legislation:

  1. Classifies AI systems by risk.
  2. Mandates development and usage prerequisites.
  3. Emphasizes data quality, transparency, human oversight, and accountability.

Additionally, the EU Act addresses the ethical dimensions and operational challenges in sectors such as healthcare and finance. Key elements include the prohibition of AI systems posing "unacceptable" risks, stringent conditions for high-risk systems, and minimal obligations for "limited risk" AI systems. The proposed European AI Board will oversee and ensure the implementation of efficient regulation.


Challenges in Assembling ML Training Datasets


Complex licensing issues around proprietary data, copyright law, and privacy regulations constrain options for assembling ML training datasets. However, expanding accessibility through more open licensing or public-private data collaborations could greatly accelerate industry progress and ethical standards.


Sometimes, certain portions of a dataset may need to be removed or obscured to comply with data usage agreements or protect sensitive information. For example, a dataset of user information may include names, contact details, and other identifying data that must be removed, sometimes well after the dataset has already been sourced and used for training models. Similarly, a dataset that includes copyrighted content or trade secrets may need to filter out those portions before being distributed. Laws such as the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Amended Act on the Protection of Personal Information (APPI) have been passed to guarantee the right to be forgotten. These regulations legally require model providers to erase user data upon request.


Data collectors and providers need to be able to take appropriate measures to de-identify or filter out any proprietary, licensed, confidential, or regulated information as needed. Sometimes, the users may explicitly request that their data be removed.
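Honoring such a request at the dataset level is conceptually simple: filter the user's records out before the next training run. A minimal sketch with invented record fields; note that, as discussed below, this does not undo the influence the data may already have had on previously trained models:

```python
# Sketch: honoring a deletion request by filtering a user's records out of
# the dataset before the next training run. Record fields are illustrative.
dataset = [
    {"user_id": "u1", "text": "sample one"},
    {"user_id": "u2", "text": "sample two"},
    {"user_id": "u1", "text": "sample three"},
]

def remove_user(records, user_id):
    """Return a copy of the dataset with all of one user's records removed."""
    return [r for r in records if r["user_id"] != user_id]

cleaned = remove_user(dataset, "u1")
print(len(cleaned))  # 1
```

The hard part is everything around this filter: propagating the deletion to all data replicas and derived artifacts, and deciding what to do about models already trained on the removed samples.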


The ability to update the dataset by removing data from the dataset will enable the creators to uphold legal and ethical obligations around data usage and privacy. However, the ability to remove data has some important limitations. We must consider that some models may have already been trained on the dataset, and there is no clear or known way to eliminate a particular data sample’s effect from the trained network. There is no erase mechanism. This raises the question: should the model be retrained from scratch each time a sample is removed? That is a costly option. Once data has been used to train a model, simply removing it from the original dataset may not fully eliminate its impact on the model’s behavior. New research is needed on the effects of data removal on already-trained models and whether full retraining is necessary to avoid retaining artifacts of deleted data. This presents an important consideration when balancing data licensing obligations with efficiency and practicality in an evolving, deployed ML system.


Dataset licensing is a multifaceted domain that intersects technology, ethics, and law. As this landscape evolves, understanding its intricacies is paramount for anyone building datasets during data engineering.


5.11 Conclusion


Data is the fundamental building block of AI systems. Without quality data, even the most advanced machine learning algorithms will fail. Data engineering encompasses the end-to-end process of collecting, storing, processing, and managing data to fuel the development of machine learning models. It begins with clearly defining the core problem and objectives, which guides effective data collection. Data can be sourced from diverse means, including existing datasets, web scraping, crowdsourcing, and synthetic data generation. Each approach involves tradeoffs between cost, speed, privacy, and specificity. Once data is collected, thoughtful labeling through manual or AI-assisted annotation enables the creation of high-quality training datasets. Proper storage in databases, warehouses, or lakes facilitates easy access and analysis. Metadata provides contextual details about the data. Data processing transforms raw data into a clean, consistent format for machine learning model development.

Throughout this pipeline, transparency through documentation and provenance tracking is crucial for ethics, auditability, and reproducibility. Data licensing protocols also govern legal data access and use. Key challenges in data engineering include privacy risks, representation gaps, legal restrictions around proprietary data, and the need to balance competing constraints like speed versus quality. By thoughtfully engineering high-quality training data, machine learning practitioners can develop accurate, robust, and responsible AI systems, including embedded and TinyML applications.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

3  Deep Learning Primer


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Photo of a classic classroom with a large blackboard dominating one wall. Chalk drawings showcase a detailed deep neural network with several hidden layers, and each node and connection is precisely labeled with white chalk. The rustic wooden floor and brick walls provide a contrast to the modern concepts. Surrounding the room, posters mounted on frames emphasize deep learning themes: convolutional networks, transformers, neurons, activation functions, and more.

This section briefly introduces deep learning, starting with an overview of its history, applications, and relevance to embedded AI systems. It examines the core concepts like neural networks, highlighting key components like perceptrons, multilayer perceptrons, activation functions, and computational graphs. The primer also briefly explores major deep learning architectures, contrasting their applications and uses. Additionally, it compares deep learning to traditional machine learning to equip readers with the general conceptual building blocks to make informed choices between deep learning and traditional ML techniques based on problem constraints, setting the stage for more advanced techniques and applications that will follow in subsequent chapters.

Learning Objectives
  • Understand the basic concepts and definitions of deep neural networks.
  • Recognize there are different deep learning model architectures.
  • Compare deep learning and traditional machine learning approaches across various dimensions.
  • Acquire the basic conceptual building blocks to delve deeper into advanced deep-learning techniques and applications.

3.1 Introduction


3.1.1 Definition and Importance


Deep learning, a specialized area within machine learning and artificial intelligence (AI), utilizes algorithms modeled after the structure and function of the human brain, known as artificial neural networks. This field is a foundational element in AI, driving progress in diverse sectors such as computer vision, natural language processing, and self-driving vehicles. Its significance in embedded AI systems is highlighted by its capability to handle intricate calculations and predictions, optimizing the limited resources in embedded settings. Figure fig-ai-ml-dl illustrates the chronological development and relative segmentation of the three fields.

Figure 3.1: Artificial intelligence subfields. Credit: NVIDIA.

3.1.2 Brief History of Deep Learning


The idea of deep learning has origins in early artificial neural networks. It has experienced several cycles of interest, starting with the introduction of the Perceptron in the 1950s (Rosenblatt 1957), followed by the invention of backpropagation algorithms in the 1980s (Rumelhart, Hinton, and Williams 1986).

Rosenblatt, Frank. 1957. The Perceptron, a Perceiving and Recognizing Automaton Project Para. Cornell Aeronautical Laboratory.
Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. “Learning Representations by Back-Propagating Errors.” Nature 323 (6088): 533–36. https://doi.org/10.1038/323533a0.
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 1106–14. https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html.

The term “deep learning” became prominent in the 2000s, characterized by advances in computational power and data accessibility. Important milestones include the successful training of deep networks like AlexNet (Krizhevsky, Sutskever, and Hinton 2012) by Geoffrey Hinton, a leading figure in AI, and the renewed focus on neural networks as effective tools for data analysis and modeling.


Deep learning has recently seen exponential growth, transforming various industries. Computational growth followed an 18-month doubling pattern from 1952 to 2010, which then accelerated to a 6-month cycle from 2010 to 2022, as shown in Figure fig-trends. Concurrently, we saw the emergence of large-scale models between 2015 and 2022, appearing 2 to 3 orders of magnitude faster and following a 10-month doubling cycle.


Multiple factors have contributed to this surge, including advancements in computational power, the abundance of big data, and improvements in algorithmic designs. First, the growth of computational capabilities, especially the arrival of Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) (Jouppi et al. 2017), has significantly sped up the training and inference times of deep learning models. These hardware improvements have enabled the construction and training of more complex, deeper networks than what was possible in earlier years.

Jouppi, Norman P., Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, et al. 2017. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246.

Second, the digital revolution has yielded a wealth of big data, offering rich material for deep learning models to learn from and excel in tasks such as image and speech recognition, language translation, and game playing. Large, labeled datasets have been key in refining and successfully deploying deep learning applications in real-world settings.


Additionally, collaborations and open-source efforts have nurtured a dynamic community of researchers and practitioners, accelerating advancements in deep learning techniques. Innovations like deep reinforcement learning, transfer learning, and generative adversarial networks have broadened the scope of what is achievable with deep learning, opening new possibilities in various sectors, including healthcare, finance, transportation, and entertainment.


Organizations worldwide recognize deep learning’s transformative potential and invest heavily in research and development to leverage its capabilities in providing innovative solutions, optimizing operations, and creating new business opportunities. As deep learning continues its upward trajectory, it is set to redefine how we interact with technology, enhancing convenience, safety, and connectivity in our lives.


3.1.3 Applications of Deep Learning


Deep learning is extensively used across numerous industries today. In finance, it is employed for stock market prediction, risk assessment, and fraud detection. Marketing uses it for customer segmentation, personalization, and content optimization. In healthcare, machine learning aids in diagnosis, treatment planning, and patient monitoring. The transformative impact on society is evident.


For instance, deep learning algorithms can predict stock market trends, guide investment strategies, and enhance financial decisions. Similarly, in healthcare, deep learning can make medical predictions that improve patient diagnosis and save lives. The benefits are clear: machine learning predicts with greater accuracy than humans and does so much more quickly.


In manufacturing, deep learning has had a significant impact. By continuously learning from vast amounts of data collected during manufacturing, companies can boost productivity while minimizing waste through improved efficiency. This financial benefit for companies translates to better quality products at lower customer prices. Machine learning enables manufacturers to continually refine their processes, producing higher quality goods more efficiently than ever.


Deep learning enhances everyday products like Netflix recommendations and Google Translate text translations. Moreover, it helps companies like Amazon and Uber reduce customer service costs by swiftly identifying dissatisfied customers.


3.1.4 Relevance to Embedded AI


Embedded AI, the integration of AI algorithms directly into hardware devices, naturally gains from deep learning capabilities. Combining deep learning algorithms and embedded systems has laid the groundwork for intelligent, autonomous devices capable of advanced on-device data processing and analysis. Deep learning aids in extracting complex patterns and information from input data, which is essential in developing smart embedded systems, from household appliances to industrial machinery. This collaboration aims to usher in a new era of intelligent, interconnected devices that can learn and adapt to user behavior and environmental conditions, optimizing performance and offering unprecedented convenience and efficiency.


3.2 Neural Networks


Deep learning draws inspiration from the human brain’s neural networks to create decision-making patterns. This section delves into the foundational concepts of deep learning, providing insights into the more complex topics discussed later in this primer.


Neural networks serve as the foundation of deep learning, inspired by the biological neural networks in the human brain to process and analyze data hierarchically. Below, we examine the primary components and structures in neural networks.


3.2.1 Perceptrons


The Perceptron is the basic unit or node that forms the foundation for more complex structures. It takes various inputs, applies weights and biases to them, and then uses an activation function to produce an output. Figure fig-perceptron illustrates the building blocks of a perceptron. In simple terms, think of a perceptron as a tiny decision-maker that learns to make a binary decision (e.g., “yes” or “no”). It takes in numbers as inputs (x_1, x_2, ...), each representing a feature of an object we wish to analyze (for example, an image). It then multiplies each input by a weight and adds them up; if the total is high enough (crosses a certain threshold), it returns “yes” as an answer; otherwise, it outputs “no.”

Figure 3.3: Perceptron. Credit: Wikimedia - Chrislb.

Conceived in the 1950s, perceptrons paved the way for developing more intricate neural networks and have been a fundamental building block in deep learning.
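The thresholded weighted sum described above fits in a few lines of code. The sketch below is illustrative rather than the text’s own implementation; the AND-gate weights are hypothetical values chosen for demonstration:

```python
# A minimal perceptron: weighted sum of inputs plus a bias,
# followed by a step activation that yields a binary decision.
def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # 1 = "yes", 0 = "no"

# Hypothetical weights that make this perceptron behave like an AND gate:
# it only fires when both inputs are on.
weights, bias = [1.0, 1.0], -1.5
print(perceptron([1, 1], weights, bias))  # -> 1
print(perceptron([1, 0], weights, bias))  # -> 0
```

The "learning" part of a perceptron consists of adjusting these weights and the bias from labeled examples, which the multilayer case below generalizes.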


3.2.2 Multilayer Perceptrons


Multilayer perceptrons (MLPs) are an evolution of the single-layer perceptron model, featuring multiple layers of nodes connected in a feedforward manner, as shown in Figure fig-mlp. These layers include an input layer for data reception, several hidden layers for data processing, and an output layer for final result generation. MLPs are skilled at identifying non-linear relationships and use a backpropagation technique for training, where weights are optimized through a gradient descent algorithm.

Figure 3.4: Multilayer Perceptron. Credit: Wikimedia - Charlie.

Forward Pass


The forward pass is the initial phase where data moves through the network from the input to the output layer. During this phase, each layer performs specific computations on the input data, using weights and biases before passing the resulting values to subsequent layers. The final output of this phase is used to compute the loss, indicating the difference between the predicted output and actual target values.


The video below explains how neural networks work using handwritten digit recognition as an example application. It also touches on the math underlying neural nets.


Backward Pass (Backpropagation)


Backpropagation is a key algorithm in training deep neural networks. This phase involves calculating the gradient of the loss function concerning each weight using the chain rule, effectively moving backward through the network. The gradients calculated in this step guide the adjustment of weights to minimize the loss function, thereby enhancing the network’s performance with each iteration of training.
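To make the two phases concrete, here is a minimal sketch (not the text’s own code) of a one-hidden-layer network trained on random regression data; the layer sizes, learning rate, and iteration count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))              # 8 samples, 3 input features
y = rng.normal(size=(8, 1))              # regression targets
W1 = rng.normal(size=(3, 5)) * 0.5       # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.5       # hidden -> output weights

losses = []
for _ in range(200):
    # Forward pass: compute activations layer by layer
    h = np.tanh(x @ W1)
    pred = h @ W2
    losses.append(np.mean((pred - y) ** 2))
    # Backward pass: chain rule, moving from the loss back to each weight
    d_pred = 2 * (pred - y) / len(y)
    dW2 = h.T @ d_pred
    d_h = (d_pred @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ d_h
    W1 -= 0.05 * dW1                      # gradient descent update
    W2 -= 0.05 * dW2
# losses[-1] ends up well below losses[0]
```

Each iteration performs exactly the forward/backward cycle described above: compute the loss, differentiate it with respect to every weight, and step downhill.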


Grasping these foundational concepts paves the way to understanding more intricate deep learning architectures and techniques, fostering the development of more sophisticated and productive applications, especially within embedded AI systems.


The following two videos build upon the previous one. They cover gradient descent and backpropagation in neural networks.


3.2.3 Model Architectures


Deep learning architectures refer to the various structured approaches that dictate how neurons and layers are organized and interact in neural networks. These architectures have evolved to tackle different problems and data types effectively. This section overviews some well-known deep learning architectures and their characteristics.


Multilayer Perceptrons (MLPs)


MLPs are basic deep learning architectures comprising three layers: an input layer, one or more hidden layers, and an output layer. These layers are fully connected, meaning each neuron in a layer is linked to every neuron in the preceding and following layers. MLPs can model intricate functions and are used in various tasks, such as regression, classification, and pattern recognition. Their capacity to learn non-linear relationships through backpropagation makes them a versatile instrument in the deep learning toolkit.


In embedded AI systems, MLPs can function as compact models for simpler tasks like sensor data analysis or basic pattern recognition, where computational resources are limited. Their ability to learn non-linear relationships with relatively less complexity makes them a suitable choice for embedded systems.


Exercise 3.1 (Multilayer Perceptrons (MLPs))  


Get ready to dive into the exciting world of deep learning and TinyML! We’ve just covered the core building blocks of neural networks, from simple perceptrons to complex architectures. Now, you’ll get to apply these concepts in practical examples. In the provided Colab notebooks, you’ll explore:


Predicting house prices: Learn how neural networks can analyze housing data to estimate property values.  


Image Classification: Discover how to build a network to understand the famous MNIST handwritten digit dataset.  


Real-world medical diagnosis: Use deep learning to tackle the important task of breast cancer classification.  


Are you excited to start building? Let’s go!  


Convolutional Neural Networks (CNNs)


CNNs are mainly used in image and video recognition tasks. This architecture employs convolutional layers that filter input data to identify features like edges, corners, and textures. A typical CNN also includes pooling layers to reduce the spatial dimensions of the data and fully connected layers for classification. CNNs have proven highly effective in image recognition, object detection, and computer vision applications.
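The filtering operation at the heart of a convolutional layer can be sketched directly. The example below is illustrative (a hand-crafted edge kernel rather than a learned one), showing how sliding a small filter over an image highlights a feature such as a vertical edge:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel over the image and
    record the elementwise dot product at each position."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x5 image that is dark on the left and bright on the right
image = np.zeros((5, 5))
image[:, 2:] = 1.0
# A vertical-edge detector: responds where intensity jumps left-to-right
edge_kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])
response = conv2d(image, edge_kernel)  # peaks along the edge column
```

Real CNN layers stack many such kernels (with channels, strides, and padding) and learn the kernel values during training instead of hand-crafting them.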


In embedded AI, CNNs are crucial for image and video recognition tasks, where real-time processing is often needed. They can be optimized for embedded systems using techniques like quantization and pruning to minimize memory usage and computational demands, enabling efficient object detection and facial recognition functionalities in devices with limited computational resources.


Exercise 3.2 (Convolutional Neural Networks (CNNs))  


We discussed that CNNs excel at identifying image features, making them ideal for tasks like object classification. Now, you’ll get to put this knowledge into action! This Colab notebook focuses on building a CNN to classify images from the CIFAR-10 dataset, which includes objects like airplanes, cars, and animals. You’ll learn about the key differences between CIFAR-10 and the MNIST dataset we explored earlier and how these differences influence model choice. By the end of this notebook, you’ll have a grasp of CNNs for image recognition and be well on your way to becoming a TinyML expert!     


Recurrent Neural Networks (RNNs)


RNNs are suitable for sequential data analysis, like time series forecasting and natural language processing. In this architecture, connections between nodes form a directed graph along a temporal sequence, allowing information to be carried across sequences through hidden state vectors. Variants of RNNs include Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), designed to capture longer dependencies in sequence data.
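The hidden-state mechanism described above can be sketched in a few lines (illustrative only: the weights here are random rather than trained). Each step mixes the current input with the previous state, so the final state summarizes the whole sequence:

```python
import numpy as np

rng = np.random.default_rng(1)
W_x = rng.normal(size=(3, 4)) * 0.5   # input-to-hidden weights
W_h = rng.normal(size=(4, 4)) * 0.5   # hidden-to-hidden (recurrent) weights

def rnn_forward(sequence):
    """Basic recurrent update: h_t = tanh(x_t @ W_x + h_{t-1} @ W_h)."""
    h = np.zeros(4)                   # initial hidden state
    for x_t in sequence:              # one update per time step
        h = np.tanh(x_t @ W_x + h @ W_h)
    return h                          # carries information across the sequence

final_state = rnn_forward(rng.normal(size=(6, 3)))  # 6 time steps, 3 features
```

LSTM and GRU variants replace this simple update with gated ones so that gradients survive over longer sequences.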


These networks can be used in voice recognition systems, predictive maintenance, or IoT devices where sequential data patterns are common. Optimizations specific to embedded platforms can assist in managing their typically high computational and memory requirements.


Generative Adversarial Networks (GANs)


GANs consist of two networks, a generator and a discriminator, trained simultaneously through adversarial training (Goodfellow et al. 2020). The generator produces data that tries to mimic the real data distribution, while the discriminator aims to distinguish between real and generated data. GANs are widely used in image generation, style transfer, and data augmentation.

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Commun. ACM 63 (11): 139–44. https://doi.org/10.1145/3422622.

In embedded settings, GANs could be used for on-device data augmentation to enhance the training of models directly on the embedded device, enabling continual learning and adaptation to new data without the need for cloud computing resources.


Autoencoders


Autoencoders are neural networks for data compression and noise reduction (Bank, Koenigstein, and Giryes 2023). They are structured to encode input data into a lower-dimensional representation and then decode it back to its original form. Variants like Variational Autoencoders (VAEs) introduce probabilistic layers that allow for generative properties, finding applications in image generation and anomaly detection.

Bank, Dor, Noam Koenigstein, and Raja Giryes. 2023. “Autoencoders.” Machine Learning for Data Science Handbook: Data Mining and Knowledge Discovery Handbook, 353–74.

Using autoencoders can help in efficient data transmission and storage, improving the overall performance of embedded systems with limited computational and memory resources.
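As an illustrative sketch (not from the text), a tiny linear autoencoder shows the encode-compress-decode idea: 8-dimensional inputs are squeezed through a 2-dimensional code and reconstructed, with gradient descent shrinking the reconstruction error. All sizes and hyperparameters are arbitrary demonstration values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 8-dimensional data (so a 2-dim code can capture structure)
X = rng.normal(size=(64, 8)) @ rng.normal(size=(8, 8)) * 0.1
W_enc = rng.normal(size=(8, 2)) * 0.1    # encoder: 8 dims -> 2-dim code
W_dec = rng.normal(size=(2, 8)) * 0.1    # decoder: 2-dim code -> 8 dims

def reconstruction_error():
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = reconstruction_error()
for _ in range(500):
    code = X @ W_enc                     # encode (compress)
    recon = code @ W_dec                 # decode (reconstruct)
    err = recon - X
    # Gradient descent on the mean squared reconstruction error
    W_dec -= 0.01 * code.T @ err / len(X)
    W_enc -= 0.01 * X.T @ (err @ W_dec.T) / len(X)
final = reconstruction_error()           # lower than `initial`
```

Real autoencoders use nonlinear layers, and VAEs add probabilistic structure to the code, but the compress-then-reconstruct objective is the same.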


Transformer Networks


Transformer networks have emerged as a powerful architecture, especially in natural language processing (Vaswani et al. 2017). These networks use self-attention mechanisms to weigh the influence of different input words on each output word, enabling parallel computation and capturing intricate patterns in data. Transformer networks have led to state-of-the-art results in tasks like language translation, summarization, and text generation.

Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” Advances in Neural Information Processing Systems 30.

These networks can be optimized to perform language-related tasks directly on the device. For example, transformers can be used in embedded systems for real-time translation services or voice-assisted interfaces, where latency and computational efficiency are crucial. Techniques such as model distillation can be employed to deploy these networks on embedded devices with limited resources.


These architectures serve specific purposes and excel in different domains, offering a rich toolkit for addressing diverse problems in embedded AI systems. Understanding the nuances of these architectures is crucial in designing effective and efficient deep learning models for various applications.


3.2.4 Traditional ML vs Deep Learning


To briefly highlight the differences, Table tbl-mlvsdl illustrates the contrasting characteristics between traditional ML and deep learning:

Table 3.1: Comparison of traditional machine learning and deep learning.

| Aspect | Traditional ML | Deep Learning |
| --- | --- | --- |
| Data Requirements | Low to Moderate (efficient with smaller datasets) | High (requires large datasets for nuanced learning) |
| Model Complexity | Moderate (suitable for well-defined problems) | High (detects intricate patterns, suited for complex tasks) |
| Computational Resources | Low to Moderate (cost-effective, less resource-intensive) | High (demands substantial computational power and resources) |
| Deployment Speed | Fast (quicker training and deployment cycles) | Slow (prolonged training times, especially with larger datasets) |
| Interpretability | High (clear insights into decision pathways) | Low (complex layered structures, “black box” nature) |
| Maintenance | Easier (simple to update and maintain) | Complex (requires more effort in maintenance and updates) |

3.2.5 Choosing Traditional ML vs. DL


Data Availability and Volume


Amount of Data: Traditional machine learning algorithms, such as decision trees or Naive Bayes, are often more suitable when data availability is limited. They offer robust predictions even with smaller datasets. This is particularly true in medical diagnostics for disease prediction and customer segmentation in marketing.


Data Diversity and Quality: Traditional machine learning algorithms often work well with structured data (the input to the model is a set of features, ideally independent of each other) but may require significant preprocessing effort (i.e., feature engineering). On the other hand, deep learning automatically performs feature engineering as part of the model architecture. This approach enables the construction of end-to-end models capable of directly mapping from unstructured input data (such as text, audio, and images) to the desired output without relying on simplistic heuristics that have limited effectiveness. However, this results in larger models demanding more data and computational resources. With noisy data, the need for larger datasets is amplified further when using deep learning.


Complexity of the Problem


Problem Granularity: Problems that are simple to moderately complex, which may involve linear or polynomial relationships between variables, often find a better fit with traditional machine learning methods.

Hierarchical Feature Representation: Deep learning models excel at tasks that require hierarchical feature representation, such as image and speech recognition. However, not all problems require this complexity, and traditional machine learning algorithms may sometimes offer simpler and equally effective solutions.


Hardware and Computational Resources


Resource Constraints: The availability of computational resources often influences the choice between traditional ML and deep learning. The former is generally less resource-intensive and thus preferable in environments with hardware limitations or budget constraints.

Scalability and Speed: Traditional machine learning algorithms, like support vector machines (SVMs), often allow for faster training times and easier scalability, which is particularly beneficial in projects with tight timelines and growing data volumes.


Regulatory Compliance


Regulatory compliance is crucial in various industries, requiring adherence to guidelines and best practices such as the GDPR in the EU. Traditional ML models, due to their inherent interpretability, often align better with these regulations, especially in sectors like finance and healthcare.


Interpretability


Understanding the decision-making process is easier with traditional machine learning techniques than deep learning models, which function as “black boxes,” making it challenging to trace decision pathways.


3.2.6 Making an Informed Choice


Given the constraints of embedded AI systems, understanding the differences between traditional ML techniques and deep learning becomes essential. Both avenues offer unique advantages, and their distinct characteristics often dictate the choice of one over the other in different scenarios.


Despite this, deep learning has steadily outperformed traditional machine learning methods in several key areas due to abundant data, computational advancements, and proven effectiveness in complex tasks.


Here are some specific reasons why we focus on deep learning in this text:


1. Superior Performance in Complex Tasks: Deep learning models, particularly deep neural networks, excel in tasks where the relationships between data points are incredibly intricate. Tasks like image and speech recognition, language translation, and playing complex games like Go and Chess have seen significant advancements primarily through deep learning algorithms.


2. Efficient Handling of Unstructured Data: Unlike traditional machine learning methods, deep learning can more effectively process unstructured data. This is crucial in today’s data landscape, where the vast majority of data, such as text, images, and videos, is unstructured.


3. Leveraging Big Data: With the availability of big data, deep learning models can learn and improve continually. These models excel at utilizing large datasets to enhance their predictive accuracy, an area where traditional machine learning approaches are limited.


4. Hardware Advancements and Parallel Computing: The advent of powerful GPUs and the availability of cloud computing platforms have enabled the rapid training of deep learning models. These advancements have addressed one of deep learning’s significant challenges: the need for substantial computational resources.


5. Dynamic Adaptability and Continuous Learning: Deep learning models can dynamically adapt to new information or data. They can be trained to generalize their learning to new, unseen data, crucial in rapidly evolving fields like autonomous driving or real-time language translation.


While deep learning has gained significant traction, it’s essential to understand that traditional machine learning is still relevant. As we delve deeper into the intricacies of deep learning, we will also highlight situations where traditional machine learning methods may be more appropriate due to their simplicity, efficiency, and interpretability. By focusing on deep learning in this text, we aim to equip readers with the knowledge and tools to tackle modern, complex problems across various domains while also providing insights into the comparative advantages and appropriate application scenarios for deep learning and traditional machine learning techniques.


3.3 Conclusion


Deep learning has become a potent set of techniques for addressing intricate pattern recognition and prediction challenges. Starting with an overview, we outlined the fundamental concepts and principles governing deep learning, laying the groundwork for more advanced studies.


Central to deep learning, we explored the basic ideas of neural networks, powerful computational models inspired by the human brain’s interconnected neuron structure. This exploration allowed us to appreciate neural networks’ capabilities and potential in creating sophisticated algorithms capable of learning and adapting from data.


Understanding the role of libraries and frameworks was a key part of our discussion. We offered insights into the tools that can facilitate developing and deploying deep learning models. These resources ease the implementation of neural networks and open avenues for innovation and optimization.


Next, we tackled the challenges one might face when embedding deep learning algorithms within embedded systems, providing a critical perspective on the complexities and considerations of bringing AI to edge devices.


Furthermore, we examined deep learning’s limitations. Through discussions, we unraveled the challenges faced in deep learning applications and outlined scenarios where traditional machine learning might outperform deep learning. These sections are crucial for fostering a balanced view of deep learning’s capabilities and limitations.


In this primer, we have equipped you with the knowledge to make informed choices between deploying traditional machine learning or deep learning techniques, depending on the unique demands and constraints of a specific problem.


As we conclude this chapter, we hope you are now well-equipped with the basic “language” of deep learning and prepared to delve deeper into the subsequent chapters with a solid understanding and critical perspective. The journey ahead is filled with exciting opportunities and challenges in embedding AI within systems.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

Coming soon.


DSP - Spectral Features

DALL·E 3 Prompt: 1950s style cartoon illustration of a Latin male and female scientist in a vibration research room. The man is using a calculus ruler to examine ancient circuitry. The woman is at a computer with complex vibration graphs. The wooden table has boards with sensors, prominently an accelerometer. A classic, rounded-back computer shows the Arduino IDE with code for LED pin assignments and machine learning algorithms for movement detection. The Serial Monitor displays FFT, classification, wavelets, and DSPs. Vintage lamps, tools, and charts with FFT and Wavelets graphs complete the scene.

Introduction


TinyML projects related to motion (or vibration) involve data from IMUs (usually accelerometers and gyroscopes). These time-series datasets should be preprocessed before being fed into machine learning model training, which is a challenging area for embedded machine learning. Still, Edge Impulse helps overcome this complexity with its digital signal processing (DSP) preprocessing step and, more specifically, the Spectral Features Block for inertial sensors.


But how does it work under the hood? Let’s dig into it.


Extracting Features Review


Extracting features from a dataset captured with inertial sensors, such as accelerometers, involves processing and analyzing the raw data. Accelerometers measure the acceleration of an object along one or more axes (typically three, denoted as X, Y, and Z). These measurements can be used to understand various aspects of the object’s motion, such as movement patterns and vibrations. Here’s a high-level overview of the process:


Data collection: First, we need to gather data from the accelerometers. Depending on the application, data may be collected at different sampling rates. It’s essential to ensure that the sampling rate is high enough to capture the relevant dynamics of the studied motion (the sampling rate should be at least double the maximum relevant frequency present in the signal).


Data preprocessing: Raw accelerometer data can be noisy and contain errors or irrelevant information. Preprocessing steps, such as filtering and normalization, can help clean and standardize the data, making it more suitable for feature extraction.


The Studio does not perform normalization or standardization, so sometimes, when working with Sensor Fusion, it could be necessary to perform this step before uploading data to the Studio. This is particularly crucial in sensor fusion projects, as seen in this tutorial, Sensor Data Fusion with Spresense and CommonSense.


Segmentation: Depending on the nature of the data and the application, dividing the data into smaller segments or windows may be necessary. This can help focus on specific events or activities within the dataset, making feature extraction more manageable and meaningful. The window size and overlap (window span) choice depend on the application and the frequency of the events of interest. As a rule of thumb, we should try to capture a couple of “data cycles.”


Feature extraction: Once the data is preprocessed and segmented, you can extract features that describe the motion’s characteristics. Some typical features extracted from accelerometer data include:

  • Time-domain features describe the data’s statistical properties within each segment, such as mean, median, standard deviation, skewness, kurtosis, and zero-crossing rate.
  • Frequency-domain features are obtained by transforming the data into the frequency domain using techniques like the Fast Fourier Transform (FFT). Some typical frequency-domain features include the power spectrum, spectral energy, dominant frequencies (amplitude and frequency), and spectral entropy.
  • Time-frequency domain features combine the time and frequency domain information, such as the Short-Time Fourier Transform (STFT) or the Discrete Wavelet Transform (DWT). They can provide a more detailed understanding of how the signal’s frequency content changes over time.

In many cases, the number of extracted features can be large, which may lead to overfitting or increased computational complexity. Feature selection techniques, such as mutual information, correlation-based methods, or principal component analysis (PCA), can help identify the most relevant features for a given application and reduce the dimensionality of the dataset. The Studio can help with such feature-relevant calculations.
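As an illustrative sketch (simulated single-axis data, not the Studio’s own implementation), the time-domain features listed above can be computed with NumPy:

```python
import numpy as np

# Simulated one-axis accelerometer window: a sine with a little noise
rng = np.random.default_rng(42)
window = np.sin(np.linspace(0, 4 * np.pi, 125)) + 0.1 * rng.normal(size=125)

mean = window.mean()
std = window.std()
rms = np.sqrt(np.mean(window ** 2))
# Standardized moments: skewness and (excess) kurtosis
skewness = np.mean(((window - mean) / std) ** 3)
kurtosis = np.mean(((window - mean) / std) ** 4) - 3
# Zero-crossing rate: fraction of consecutive samples that change sign
zcr = np.mean(np.diff(np.signbit(window).astype(int)) != 0)
```

Frequency-domain features would then be computed on the same window after an FFT, and time-frequency features from an STFT or DWT of it.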


Let’s explore in more detail a typical TinyML Motion Classification project covered in this series of Hands-Ons.


A TinyML Motion Classification project


In the hands-on project, Motion Classification and Anomaly Detection, we simulated mechanical stresses in transport, where our problem was to classify four classes of movement:

  • Maritime (pallets on boats)
  • Terrestrial (pallets on a truck or train)
  • Lift (pallets being handled by a forklift)
  • Idle (pallets in storage houses)

The accelerometers provided the data on the pallet (or container).


Below is one sample (raw data) of 10 seconds, captured with a sampling frequency of 50Hz:


The result is similar when this analysis is done over another dataset with the same principle, using a different sampling frequency, 62.5Hz instead of 50Hz.


Data Pre-Processing


The raw data captured by the accelerometer (a “time series” data) should be converted to “tabular data” using one of the typical Feature Extraction methods described in the last section.


We should segment the data using a sliding window over the sample data for feature extraction. The project captured accelerometer data every 10 seconds with a sample rate of 62.5 Hz. A 2-second window captures 375 data points (3 axes × 2 seconds × 62.5 samples per second). The window is slid every 80 ms, creating a larger dataset where each instance has 375 “raw features.”
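To make the window arithmetic concrete, the bookkeeping can be sketched as follows (the variable names here are illustrative, not from the project code):

```python
sample_rate_hz = 62.5
n_axes = 3
capture_ms, window_ms, stride_ms = 10_000, 2_000, 80

# 3 axes x 2 s x 62.5 samples/s = 375 raw features per window
points_per_window = int(n_axes * (window_ms / 1000) * sample_rate_hz)

# Number of 2-second windows obtained by sliding every 80 ms over one 10 s capture
n_windows = (capture_ms - window_ms) // stride_ms + 1

print(points_per_window, n_windows)  # 375 101
```

So each 10-second capture yields 101 overlapping windows of 375 raw features each.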


In the Studio, the previous version (V1) of the Spectral Analysis block extracted only the RMS as a time-domain feature; for the frequency domain, it extracted the peaks (amplitude and frequency, using the FFT) and the power characteristics (PSD) of the signal over time, resulting in a fixed tabular dataset of 33 features (11 per axis).


Those 33 features were the input tensor of a neural network classifier.


In 2022, Edge Impulse released version 2 of the Spectral Analysis block, which we will explore here.


Edge Impulse - Spectral Analysis Block V.2 under the hood


In Version 2, Time Domain Statistical features per axis/channel are:

  • RMS
  • Skewness
  • Kurtosis

And the Frequency Domain Spectral features per axis/channel are:

  • Spectral Power
  • Skewness (in the next version)
  • Kurtosis (in the next version)

This link provides more details about the feature extraction.


Clone the public project. You can also follow the explanation by playing with the code in my Google Colab notebook: Edge Impulse Spectral Analysis Block Notebook.


Start importing the libraries:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import math
from scipy.stats import skew, kurtosis
from scipy import signal
from scipy.signal import welch
from scipy.stats import entropy
from sklearn import preprocessing
import pywt

plt.rcParams['figure.figsize'] = (12, 6)
plt.rcParams['lines.linewidth'] = 3

From the studied project, let’s choose a data sample from accelerometers as below:

  • Window size of 2 seconds: [2,000] ms
  • Sample frequency: [62.5] Hz
  • We will choose the [None] filter (for simplicity) and an FFT length of [16].
fs = 62.5        # sampling frequency in Hertz (named fs, as used in the later code)
wind_sec = 2     # window length in seconds
FFT_Lenght = 16
axis = ['accX', 'accY', 'accZ']
n_sensors = len(axis)


Selecting the Raw Features on the Studio Spectral Analysis tab, we can copy all 375 data points of a particular 2-second window to the clipboard.


Paste the data points into a new variable data:

data = [-5.6330, 0.2376, 9.8701, -5.9442, 0.4830, 9.8701, -5.4217, ...]
No_raw_features = len(data)
N = int(No_raw_features / n_sensors)

The total number of raw features is 375, but we will work with each axis individually, where N = 125 (the number of samples per axis).


We aim to understand how Edge Impulse gets the processed features.


So, you should also paste the processed features into a variable (to compare the features calculated in Python with the ones provided by the Studio):

features = [2.7322, -0.0978, -0.3813, 2.3980, 3.8924, 24.6841, 9.6303, ...]
N_feat = len(features)
N_feat_axis = int(N_feat / n_sensors)

The total number of processed features is 39, which means 13 features/axis.


Looking at those 13 features closely, we will find 3 for the time domain (RMS, Skewness, and Kurtosis):

  • [rms] [skew] [kurtosis]

and 10 for the frequency domain (we will return to this later).

  • [spectral skew] [spectral kurtosis] [Spectral Power 1] ... [Spectral Power 8]

Splitting raw data per sensor


The data has samples from all axes; let’s split and plot them separately:

def plot_data(sensors, axis, title):
    for x, y in zip(sensors, axis):
        plt.plot(x, label=y)
    plt.legend(loc='lower right')
    plt.title(title)
    plt.xlabel('#Sample')
    plt.ylabel('Value')
    plt.box(False)
    plt.grid()
    plt.show()

accX = data[0::3]
accY = data[1::3]
accZ = data[2::3]
sensors = [accX, accY, accZ]
plot_data(sensors, axis, 'Raw Features')


Subtracting the mean


Next, we should subtract the mean from the data. Subtracting the mean from a data set is a common data pre-processing step in statistics and machine learning. The purpose of subtracting the mean from the data is to center the data around zero. This is important because it can reveal patterns and relationships that might be hidden if the data is not centered.


Here are some specific reasons why subtracting the mean can be helpful:

  • It simplifies analysis: by centering the data, the mean becomes zero, making some calculations simpler and easier to interpret.
  • It removes bias: if the data is biased, subtracting the mean can remove it and allow for a more accurate analysis.
  • It can reveal patterns: centering the data can help uncover patterns that might be hidden if the data is not centered. For example, centering the data can help you identify trends over time if you analyze a time series dataset.
  • It can improve performance: in some machine learning algorithms, centering the data can improve performance by reducing the influence of outliers and making the data more easily comparable.

Overall, subtracting the mean is a simple but powerful technique that can be used to improve the analysis and interpretation of data.
dtmean = [(sum(x) / len(x)) for x in sensors]
[print('mean_' + x + '= ', round(y, 4)) for x, y in zip(axis, dtmean)][0]

accX = [(x - dtmean[0]) for x in accX]
accY = [(x - dtmean[1]) for x in accY]
accZ = [(x - dtmean[2]) for x in accZ]
sensors = [accX, accY, accZ]

plot_data(sensors, axis, 'Raw Features - Subtract the Mean')


Time Domain Statistical features


RMS Calculation


The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean of the squares of the values or the square of the function that defines the continuous waveform. In physics, the RMS value of an electrical current is defined as the “value of the direct current that dissipates the same power in a resistor.”


In the case of a set of n values {𝑥1, 𝑥2, …, 𝑥𝑛}, the RMS is:
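Written out, this is the standard definition:

```latex
x_{\text{RMS}} = \sqrt{\frac{1}{n}\left(x_1^2 + x_2^2 + \cdots + x_n^2\right)}
```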


Note that the RMS value differs between the original raw data and the data after the mean is subtracted.

# Using NumPy on the standardized data (mean subtracted)
rms = [np.sqrt(np.mean(np.square(x))) for x in sensors]

We can compare the calculated RMS values here with the ones presented by Edge Impulse:

[print('rms_' + x + '= ', round(y, 4)) for x, y in zip(axis, rms)][0]
print("\nCompare with Edge Impulse result features")
print(features[0:N_feat:N_feat_axis])

rms_accX= 2.7322
rms_accY= 0.7833
rms_accZ= 0.1383

Compared with Edge Impulse result features:

[2.7322, 0.7833, 0.1383]


Skewness and kurtosis calculation


In statistics, skewness and kurtosis are two ways to measure the shape of a distribution.


Here, we can see the sensor values distribution:

fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(13, 4))
sns.kdeplot(accX, fill=True, ax=axes[0])
sns.kdeplot(accY, fill=True, ax=axes[1])
sns.kdeplot(accZ, fill=True, ax=axes[2])
axes[0].set_title('accX')
axes[1].set_title('accY')
axes[2].set_title('accZ')
plt.suptitle('IMU Sensors distribution', fontsize=16, y=1.02)
plt.show()


Skewness is a measure of the asymmetry of a distribution. This value can be positive or negative.

  • A negative skew indicates that the tail is on the left side of the distribution, which extends towards more negative values.
  • A positive skew indicates that the tail is on the right side of the distribution, which extends towards more positive values.
  • A zero value indicates no skewness in the distribution at all, meaning the distribution is perfectly symmetrical.
# Use a different name so we don't shadow scipy.stats.skew
skew_calc = [skew(x, bias=False) for x in sensors]
[print('skew_' + x + '= ', round(y, 4)) for x, y in zip(axis, skew_calc)][0]
print("\nCompare with Edge Impulse result features")
features[1:N_feat:N_feat_axis]

skew_accX= -0.099
skew_accY= 0.1756
skew_accZ= 6.9463

Compared with Edge Impulse result features:

[-0.0978, 0.1735, 6.8629]


Kurtosis is a measure of whether or not a distribution is heavy-tailed or light-tailed relative to a normal distribution.

  • The (excess) kurtosis of a normal distribution is zero.
  • If a given distribution has a negative kurtosis, it is said to be platykurtic, which means it tends to produce fewer and less extreme outliers than the normal distribution.
  • If a given distribution has a positive kurtosis, it is said to be leptokurtic, which means it tends to produce more outliers than the normal distribution.
kurt = [kurtosis(x, bias=False) for x in sensors]
[print('kurt_' + x + '= ', round(y, 4)) for x, y in zip(axis, kurt)][0]
print("\nCompare with Edge Impulse result features")
features[2:N_feat:N_feat_axis]

kurt_accX= -0.3475
kurt_accY= 1.2673
kurt_accZ= 68.1123

Compared with Edge Impulse result features:

[-0.3813, 1.1696, 65.3726]


Spectral features


The filtered signal is passed to the Spectral power section, which computes the FFT to generate the spectral features.


Since the sampled window is usually larger than the FFT size, the window will be broken into frames (or “sub-windows”), and the FFT is calculated over each frame.


FFT length - The FFT size. This determines the number of FFT bins and the resolution of frequency peaks that can be separated. A low number means more signals will average together in the same FFT bin, but it also reduces the number of features and model size. A high number will separate more signals into separate bins, generating a larger model.

  • The total number of Spectral Power features will vary depending on how you set the filter and FFT parameters. With no filtering, the number of features is 1/2 of the FFT length.

Spectral Power - Welch’s method


We should use Welch’s method to split the signal on the frequency domain in bins and calculate the power spectrum for each bin. This method divides the signal into overlapping segments, applies a window function to each segment, computes the periodogram of each segment using DFT, and averages them to obtain a smoother estimate of the power spectrum.

# Function used by Edge Impulse instead of scipy.signal.welch().
def welch_max_hold(fx, sampling_freq, nfft, n_overlap):
    n_overlap = int(n_overlap)
    spec_powers = [0 for _ in range(nfft // 2 + 1)]
    ix = 0
    while ix <= len(fx):
        # Slicing truncates if end_idx > len, and rfft will auto-zero pad
        fft_out = np.abs(np.fft.rfft(fx[ix:ix + nfft], nfft))
        spec_powers = np.maximum(spec_powers, fft_out ** 2 / nfft)
        ix = ix + (nfft - n_overlap)
    return np.fft.rfftfreq(nfft, 1 / sampling_freq), spec_powers
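For comparison, SciPy's conventional Welch estimate averages the segment periodograms rather than taking their element-wise maximum as the function above does. A minimal sketch (the random test signal and its length are illustrative assumptions, standing in for one 2-second axis at 62.5 Hz):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(42)
sig = rng.normal(size=125)  # stand-in for one axis: 2 s at 62.5 Hz

# Averaged (not max-hold) power spectral density estimate
freqs, psd = welch(sig, fs=62.5, nperseg=16, noverlap=0)

print(len(freqs))  # nperseg // 2 + 1 = 9 frequency bins, like welch_max_hold
```

Both approaches return the same number of frequency bins; the max-hold variant keeps the peak power seen in any frame, which preserves transient peaks that averaging would smooth away.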

Applying the above function to 3 signals:

fax, Pax = welch_max_hold(accX, fs, FFT_Lenght, 0)
fay, Pay = welch_max_hold(accY, fs, FFT_Lenght, 0)
faz, Paz = welch_max_hold(accZ, fs, FFT_Lenght, 0)
specs = [Pax, Pay, Paz]

We can plot the Power Spectrum P(f):

plt.plot(fax, Pax, label='accX')
plt.plot(fay, Pay, label='accY')
plt.plot(faz, Paz, label='accZ')
plt.legend(loc='upper right')
plt.xlabel('Frequency (Hz)')
# plt.ylabel('PSD [V**2/Hz]')
plt.ylabel('Power')
plt.title("Power spectrum P(f) using Welch's method")
plt.grid()
plt.box(False)
plt.show()


Besides the Power Spectrum, we can also include the skewness and kurtosis of the features in the frequency domain (should be available on a new version):

spec_skew = [skew(x, bias=False) for x in specs]
spec_kurtosis = [kurtosis(x, bias=False) for x in specs]

Let’s now list all Spectral features per axis and compare them with EI:

print("EI Processed Spectral features (accX): ")
print(features[3:N_feat_axis][0:])
print("\nCalculated features:")
print(round(spec_skew[0], 4))
print(round(spec_kurtosis[0], 4))
[print(round(x, 4)) for x in Pax[1:]][0]

EI Processed Spectral features (accX):

2.398, 3.8924, 24.6841, 9.6303, 8.4867, 7.7793, 2.9963, 5.6242, 3.4198, 4.2735

Calculated features:

2.9069 8.5569 24.6844 9.6304 8.4865 7.7794 2.9964 5.6242 3.4198 4.2736

print("EI Processed Spectral features (accY): ")
print(features[16:26][0:])  # 16 = 3 + N_feat_axis; 26 = 2 x N_feat_axis
print("\nCalculated features:")
print(round(spec_skew[1], 4))
print(round(spec_kurtosis[1], 4))
[print(round(x, 4)) for x in Pay[1:]][0]

EI Processed Spectral features (accY):

0.9426, -0.8039, 5.429, 0.999, 1.0315, 0.9459, 1.8117, 0.9088, 1.3302, 3.112

Calculated features:

1.1426 -0.3886 5.4289 0.999 1.0315 0.9458 1.8116 0.9088 1.3301 3.1121

print("EI Processed Spectral features (accZ): ")
print(features[29:][0:])  # 29 = 3 + (2 * N_feat_axis)
print("\nCalculated features:")
print(round(spec_skew[2], 4))
print(round(spec_kurtosis[2], 4))
[print(round(x, 4)) for x in Paz[1:]][0]

EI Processed Spectral features (accZ):

0.3117, -1.3812, 0.0606, 0.057, 0.0567, 0.0976, 0.194, 0.2574, 0.2083, 0.166

Calculated features:

0.3781 -1.4874 0.0606 0.057 0.0567 0.0976 0.194 0.2574 0.2083 0.166


Time-frequency domain


Wavelets


The wavelet transform is a powerful technique for analyzing signals with transient features or abrupt changes, such as spikes or edges, which are difficult to interpret with traditional Fourier-based methods.


Wavelet transforms work by breaking down a signal into different frequency components and analyzing them individually. The transformation is achieved by convolving the signal with a wavelet function, a small waveform centered at a specific time and frequency. This process effectively decomposes the signal into different frequency bands, each of which can be analyzed separately.


One of the critical benefits of wavelet transforms is that they allow for time-frequency analysis, which means that they can reveal the frequency content of a signal as it changes over time. This makes them particularly useful for analyzing non-stationary signals, which vary over time.


Wavelets have many practical applications, including signal and image compression, denoising, feature extraction, and image processing.


Let’s select Wavelet on the Spectral Features block in the same project:

  • Type: Wavelet
  • Wavelet Decomposition Level: 1
  • Wavelet: bior1.3


The Wavelet Function

wavelet_name = 'bior1.3'
num_layer = 1

wavelet = pywt.Wavelet(wavelet_name)
[phi_d, psi_d, phi_r, psi_r, x] = wavelet.wavefun(level=5)
plt.plot(x, psi_d, color='red')
plt.title('Wavelet Function')
plt.ylabel('Value')
plt.xlabel('Time')
plt.grid()
plt.box(False)
plt.show()


As we did before, let’s copy and paste the Processed Features:

features = [3.6251, 0.0615, 0.0615, -7.3517, -2.7641, 2.8462, 5.0924, ...]
N_feat = len(features)
N_feat_axis = int(N_feat / n_sensors)

Edge Impulse computes the Discrete Wavelet Transform (DWT) for each one of the Wavelet Decomposition levels selected. After that, the features will be extracted.


In the case of Wavelets, the extracted features are basic statistical values, crossing values, and entropy. There are, in total, 14 features per layer as below:

  • [11] Statistical features: n5, n25, n75, n95, mean, median, standard deviation (std), variance (var), root mean square (rms), kurtosis, and skewness (skew).
  • [2] Crossing features: the zero crossing rate (zcross) and the mean crossing rate (mcross) are the number of times the signal passes through the baseline (y = 0) and the average level (y = u) per unit of time, respectively.
  • [1] Complexity feature: entropy, a characteristic measure of the complexity of the signal.

All of the above 14 values are calculated for each layer (including L0, the original signal).

  • The total number of features varies depending on how you set the filter and the number of layers. For example, with [None] filtering and Level [1], the number of features per axis will be 14 x 2 (L0 and L1) = 28. For the three axes, we will have a total of 84 features.

Wavelet Analysis


Wavelet analysis decomposes the signal (accX, accY, and accZ) into different frequency components using a set of filters. These filters separate the components into low-frequency parts (the slowly varying portions of the signal containing long-term patterns), such as accX_l1, accY_l1, and accZ_l1, and high-frequency parts (the rapidly varying portions containing short-term patterns), such as accX_d1, accY_d1, and accZ_d1, permitting the extraction of features for further analysis or classification.


Only the low-frequency components (approximation coefficients, or cA) will be used. In this example, we assume only one level (a single-level Discrete Wavelet Transform), where the function will return a tuple. With a multilevel decomposition (the “Multilevel 1D Discrete Wavelet Transform”), the result will be a list (for details, please see: Discrete Wavelet Transform (DWT)).

(accX_l1, accX_d1) = pywt.dwt(accX, wavelet_name)
(accY_l1, accY_d1) = pywt.dwt(accY, wavelet_name)
(accZ_l1, accZ_d1) = pywt.dwt(accZ, wavelet_name)
sensors_l1 = [accX_l1, accY_l1, accZ_l1]

# Plot the approximation coefficients (low-frequency components)
plt.plot(accX_l1, label='accX')
plt.plot(accY_l1, label='accY')
plt.plot(accZ_l1, label='accZ')
plt.legend(loc='lower right')
plt.xlabel('Time')
plt.ylabel('Value')
plt.title('Wavelet Approximation')
plt.grid()
plt.box(False)
plt.show()


Feature Extraction


Let’s start with the basic statistical features. Note that we apply the function to both the original signals and the resulting cAs from the DWT:

def calculate_statistics(signal):
    n5 = np.percentile(signal, 5)
    n25 = np.percentile(signal, 25)
    n75 = np.percentile(signal, 75)
    n95 = np.percentile(signal, 95)
    median = np.percentile(signal, 50)
    mean = np.mean(signal)
    std = np.std(signal)
    var = np.var(signal)
    rms = np.sqrt(np.mean(np.square(signal)))
    return [n5, n25, n75, n95, median, mean, std, var, rms]

stat_feat_l0 = [calculate_statistics(x) for x in sensors]
stat_feat_l1 = [calculate_statistics(x) for x in sensors_l1]

The skewness and kurtosis:

skew_l0 = [skew(x, bias=False) for x in sensors]
skew_l1 = [skew(x, bias=False) for x in sensors_l1]
kurtosis_l0 = [kurtosis(x, bias=False) for x in sensors]
kurtosis_l1 = [kurtosis(x, bias=False) for x in sensors_l1]

Zero crossing (zcross) is the number of times the wavelet coefficient crosses the zero axis. It can be used to measure the signal’s frequency content since high-frequency signals tend to have more zero crossings than low-frequency signals.


Mean crossing (mcross), on the other hand, is the number of times the wavelet coefficient crosses the mean of the signal. It can be used to measure the amplitude since high-amplitude signals tend to have more mean crossings than low-amplitude signals.

def getZeroCrossingRate(arr):
    my_array = np.array(arr)
    zcross = float("{0:.2f}".format(((my_array[:-1] * my_array[1:]) < 0).sum() / len(arr)))
    return zcross

def getMeanCrossingRate(arr):
    mcross = getZeroCrossingRate(np.array(arr) - np.mean(arr))
    return mcross

def calculate_crossings(signals):
    zcross = []
    mcross = []
    for sig in signals:
        zcross.append(getZeroCrossingRate(sig))
        mcross.append(getMeanCrossingRate(sig))
    return zcross, mcross

cross_l0 = calculate_crossings(sensors)
cross_l1 = calculate_crossings(sensors_l1)

In wavelet analysis, entropy refers to the degree of disorder or randomness in the distribution of wavelet coefficients. Here, we used Shannon entropy, which measures a signal’s uncertainty or randomness. It is calculated as the negative sum of the probabilities of the different possible outcomes of the signal multiplied by their base 2 logarithm. In the context of wavelet analysis, Shannon entropy can be used to measure the complexity of the signal, with higher values indicating greater complexity.

def calculate_entropy(signal, base=None):
    value, counts = np.unique(signal, return_counts=True)
    return entropy(counts, base=base)

entropy_l0 = [calculate_entropy(x) for x in sensors]
entropy_l1 = [calculate_entropy(x) for x in sensors_l1]

Let’s now list all the wavelet features and create a list by layers.

L1_features_names = ["L1-n5", "L1-n25", "L1-n75", "L1-n95", "L1-median", "L1-mean", "L1-std", "L1-var", "L1-rms", "L1-skew", "L1-Kurtosis", "L1-zcross", "L1-mcross", "L1-entropy"]

L0_features_names = ["L0-n5", "L0-n25", "L0-n75", "L0-n95", "L0-median", "L0-mean", "L0-std", "L0-var", "L0-rms", "L0-skew", "L0-Kurtosis", "L0-zcross", "L0-mcross", "L0-entropy"]

all_feat_l0 = []
for i in range(len(axis)):
    feat_l0 = stat_feat_l0[i] + [skew_l0[i]] + [kurtosis_l0[i]] + [cross_l0[0][i]] + [cross_l0[1][i]] + [entropy_l0[i]]
    [print(axis[i] + ' ' + x + '= ', round(y, 4)) for x, y in zip(L0_features_names, feat_l0)][0]
    all_feat_l0.append(feat_l0)
all_feat_l0 = [item for sublist in all_feat_l0 for item in sublist]
print(f"\nAll L0 Features = {len(all_feat_l0)}")

all_feat_l1 = []
for i in range(len(axis)):
    feat_l1 = stat_feat_l1[i] + [skew_l1[i]] + [kurtosis_l1[i]] + [cross_l1[0][i]] + [cross_l1[1][i]] + [entropy_l1[i]]
    [print(axis[i] + ' ' + x + '= ', round(y, 4)) for x, y in zip(L1_features_names, feat_l1)][0]
    all_feat_l1.append(feat_l1)
all_feat_l1 = [item for sublist in all_feat_l1 for item in sublist]
print(f"\nAll L1 Features = {len(all_feat_l1)}")


Conclusion


Edge Impulse Studio is a powerful online platform that can handle the pre-processing task for us. Still, given our engineering perspective, we want to understand what is happening under the hood. This knowledge will help us find the best options and hyper-parameters for tuning our projects.


Daniel Situnayake wrote in his blog: “Raw sensor data is highly dimensional and noisy. Digital signal processing algorithms help us sift the signal from the noise. DSP is an essential part of embedded engineering, and many edge processors have on-board acceleration for DSP. As an ML engineer, learning basic DSP gives you superpowers for handling high-frequency time series data in your models.” I recommend you read Dan’s excellent post in its totality: nn to cpp: What you need to know about porting deep learning models to the edge.


8  Efficient AI


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: A conceptual illustration depicting efficiency in artificial intelligence using a shipyard analogy. The scene shows a bustling shipyard where containers represent bits or bytes of data. These containers are being moved around efficiently by cranes and vehicles, symbolizing the streamlined and rapid information processing in AI systems. The shipyard is meticulously organized, illustrating the concept of optimal performance within the constraints of limited resources. In the background, ships are docked, representing different platforms and scenarios where AI is applied. The atmosphere should convey advanced technology with an underlying theme of sustainability and wide applicability.

Efficiency in artificial intelligence (AI) is not simply a luxury but a necessity. In this chapter, we dive into the key concepts underpinning AI systems’ efficiency. The computational demands on neural networks can be daunting, even for minimal systems. For AI to be seamlessly integrated into everyday devices and essential systems, it must perform optimally within the constraints of limited resources while maintaining its efficacy. The pursuit of efficiency guarantees that AI models are streamlined, rapid, and sustainable, thereby widening their applicability across various platforms and scenarios.

Learning Objectives
  • Recognize the need for efficient AI in TinyML/edge devices.
  • Understand the need for efficient model architectures like MobileNets and SqueezeNet.
  • Understand why techniques for model compression are important.
  • Get an inclination for why efficient AI hardware is important.
  • Appreciate the significance of numerics and their representations.
  • Appreciate the need to understand the nuances of model comparison beyond accuracy.
  • Recognize that efficiency encompasses technology, costs, environment, and ethics.

The focus is on gaining a conceptual understanding of the motivations and significance of the various strategies for achieving efficient AI, both in terms of techniques and a holistic perspective. Subsequent chapters will dive into the nitty-gritty details of these various concepts.


8.1 Introduction


Training models can consume significant energy, sometimes equivalent to the carbon footprint of sizable industrial processes. We will cover some of these sustainability details in the AI Sustainability chapter. On the deployment side, if these models are not optimized for efficiency, they can quickly drain device batteries, demand excessive memory, or fall short of real-time processing needs. Through this introduction, we aim to elucidate the nuances of efficiency, setting the groundwork for a comprehensive exploration in the subsequent chapters.


8.2 The Need for Efficient AI


Efficiency takes on different connotations depending on where AI computations occur. Let’s revisit and differentiate between Cloud, Edge, and TinyML in terms of efficiency. Figure fig-platforms provides a big picture comparison of the three different platforms.

Figure 8.1: Cloud, Mobile and TinyML. Credit: Schizas et al. (2022).

Schizas, Nikolaos, Aristeidis Karras, Christos Karras, and Spyros Sioutas. 2022. “TinyML for Ultra-Low Power AI and Large Scale IoT Deployments: A Systematic Review.” Future Internet 14 (12): 363. https://doi.org/10.3390/fi14120363.

For cloud AI, traditional AI models often run in large-scale data centers equipped with powerful GPUs and TPUs (Barroso, Hölzle, and Ranganathan 2019). Here, efficiency pertains to optimizing computational resources, reducing costs, and ensuring timely data processing and return. However, relying on the cloud introduces latency, especially when dealing with large data streams that must be uploaded, processed, and downloaded.

Barroso, Luiz André, Urs Hölzle, and Parthasarathy Ranganathan. 2019. The Datacenter as a Computer: Designing Warehouse-Scale Machines. Springer International Publishing. https://doi.org/10.1007/978-3-031-01761-2.

Li, En, Liekang Zeng, Zhi Zhou, and Xu Chen. 2020. “Edge AI: On-Demand Accelerating Deep Neural Network Inference via Edge Computing.” IEEE Trans. Wireless Commun. 19 (1): 447–57. https://doi.org/10.1109/twc.2019.2946140.

For edge AI, edge computing brings AI closer to the data source, processing information directly on local devices like smartphones, cameras, or industrial machines (Li et al. 2020). Here, efficiency encompasses quick real-time responses and reduced data transmission needs. The constraints, however, are tighter—these devices, while more powerful than microcontrollers, have limited computational power compared to cloud setups.


Pushing the frontier even further is TinyML, where AI models run on microcontrollers or extremely resource-constrained environments. The difference in processor and memory performance between TinyML and cloud or mobile systems can be several orders of magnitude (Warden and Situnayake 2019). Efficiency in TinyML is about ensuring models are lightweight enough to fit on these devices, use minimal energy (critical for battery-powered devices), and still perform their tasks effectively.

Warden, Pete, and Daniel Situnayake. 2019. TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers. O’Reilly Media.

The spectrum from Cloud to TinyML represents a shift from vast, centralized computational resources to distributed, localized, and constrained environments. As we transition from one to the other, the challenges and strategies related to efficiency evolve, underlining the need for specialized approaches tailored to each scenario. Having underscored the need for efficient AI, especially within the context of TinyML, we will transition to exploring the methodologies devised to meet these challenges. The following sections outline the main concepts we will delve deeper into later. We will demonstrate the breadth and depth of innovation needed to achieve efficient AI as we delve into these strategies.


8.3 Efficient Model Architectures


Choosing the right model architecture is as crucial as optimizing it. In recent years, researchers have explored some novel architectures that can have inherently fewer parameters while maintaining strong performance.


MobileNets: MobileNets are efficient mobile and embedded vision application models (Howard et al. 2017). The key idea that led to their success is the use of depth-wise separable convolutions, which significantly reduce the number of parameters and computations in the network. MobileNetV2 and V3 further enhance this design by introducing inverted residuals and linear bottlenecks.
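To see why depth-wise separable convolutions help, compare parameter counts for a single layer (a back-of-the-envelope sketch; bias terms are ignored and the layer sizes are arbitrary illustrative choices):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel,
    # followed by a 1x1 (pointwise) convolution that mixes channels
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)                 # 73,728 parameters
sep = depthwise_separable_params(3, 64, 128)  # 8,768 parameters
print(std, sep, round(std / sep, 1))          # roughly an 8x reduction
```

For 3x3 kernels, the reduction factor approaches 9 as the number of output channels grows, which is where much of MobileNets' savings comes from.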

Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv Preprint. https://arxiv.org/abs/1704.04861.

Iandola, Forrest N, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. 2016. “SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5 MB Model Size.” ArXiv Preprint abs/1602.07360. https://arxiv.org/abs/1602.07360.

SqueezeNet: SqueezeNet is a class of ML models known for its smaller size without sacrificing accuracy. It achieves this by using a “fire module” that reduces the number of input channels to 3x3 filters, thus reducing the parameters (Iandola et al. 2016). Moreover, it employs delayed downsampling to increase the accuracy by maintaining a larger feature map.


ResNet variants: The Residual Network (ResNet) architecture allows for the introduction of skip connections or shortcuts (He et al. 2016). Some variants of ResNet are designed to be more efficient. For instance, ResNet-SE incorporates the “squeeze and excitation” mechanism to recalibrate feature maps (Hu, Shen, and Sun 2018), while ResNeXt offers grouped convolutions for efficiency (Xie et al. 2017).

He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. IEEE. https://doi.org/10.1109/cvpr.2016.90.
Hu, Jie, Li Shen, and Gang Sun. 2018. “Squeeze-and-Excitation Networks.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7132–41. IEEE. https://doi.org/10.1109/cvpr.2018.00745.
Xie, Saining, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. 2017. “Aggregated Residual Transformations for Deep Neural Networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1492–1500. IEEE. https://doi.org/10.1109/cvpr.2017.634.

8.4 Efficient Model Compression


Model compression methods are very important for bringing deep learning models to devices with limited resources. These techniques reduce models’ size, energy consumption, and computational demands without significantly losing accuracy. At a high level, the methods can briefly be binned into the following fundamental methods:


Pruning: This is akin to trimming the branches of a tree. The idea was first proposed in the Optimal Brain Damage paper (LeCun, Denker, and Solla 1989) and later popularized in the context of deep learning by Han, Mao, and Dally (2016). In pruning, certain weights or even entire neurons are removed from the network based on specific criteria, which can significantly reduce the model size. Various strategies include weight pruning, neuron pruning, and structured pruning. We will explore these in more detail in sec-pruning. Figure fig-pruning is an example of neural network pruning: removing some of the nodes in the inner layers (based on a specific criterion) reduces the number of edges between the nodes and, in turn, the size of the model.

LeCun, Yann, John Denker, and Sara Solla. 1989. “Optimal Brain Damage.” Advances in Neural Information Processing Systems 2.
Han, Song, Huizi Mao, and William J. Dally. 2016. “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.” https://arxiv.org/abs/1510.00149.

Figure 8.2: Neural Network Pruning.
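A minimal magnitude-based pruning sketch makes the idea concrete. The helper name and the threshold rule below are illustrative assumptions, not the specific algorithm from the papers cited above:

```python
import numpy as np

# Magnitude-based weight pruning sketch: zero out the fraction
# `sparsity` of weights with the smallest absolute values.
def prune_by_magnitude(weights, sparsity):
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = prune_by_magnitude(w, sparsity=0.5)
print(np.count_nonzero(w), "->", np.count_nonzero(pruned))
```

Real pruning pipelines typically fine-tune the network afterward to recover accuracy lost by zeroing weights.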

Quantization: Quantization is the process of constraining an input from a large set to an output in a smaller set. In deep learning, this primarily means reducing the number of bits that represent the weights and biases of the model. For example, using 16-bit or 8-bit representations instead of 32-bit can reduce the model size and speed up computations, with a minor trade-off in accuracy. We will explore these in more detail in sec-quant. Figure fig-quantization shows an example of quantization by rounding to the closest representable number. Converting from 32-bit floating point to 16-bit reduces memory usage by 50%, and going from 32-bit floating point to 8-bit integer reduces it by 75%. While the loss in numeric precision, and consequently model performance, is minor, the gain in memory efficiency is significant.

Figure 8.3: Different forms of quantization.
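A simple symmetric, scale-only INT8 scheme illustrates the round-to-nearest idea; this is a minimal sketch under assumed conventions, not the full calibration machinery of a production framework:

```python
import numpy as np

# Symmetric INT8 quantization sketch: map the largest |x| to 127.
def quantize_int8(x):
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([0.1, -0.5, 0.25, 1.0], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# Storage drops from 4 bytes to 1 byte per value (75% reduction),
# at the cost of a small rounding error bounded by ~scale/2.
print(q, np.max(np.abs(x - x_hat)))
```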

Knowledge Distillation: Knowledge distillation involves training a smaller model (student) to replicate the behavior of a larger model (teacher). The idea is to transfer the knowledge from the cumbersome model to the lightweight one. Hence, the smaller model attains performance close to its larger counterpart but with significantly fewer parameters. We will explore knowledge distillation in more detail in the sec-kd.
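The core of distillation is a loss that pushes the student toward the teacher's softened output distribution. The sketch below shows that loss with hypothetical logits; the temperature value and helper names are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature T > 1 "softens" the distribution, exposing the
    # teacher's relative confidence across wrong classes too.
    z = np.asarray(z, dtype=np.float64) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher and student outputs."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

teacher = [8.0, 2.0, 1.0]   # confident teacher
aligned = [6.0, 1.5, 0.5]   # student roughly agrees with the teacher
wrong   = [0.5, 6.0, 1.0]   # student disagrees
print(distillation_loss(aligned, teacher) < distillation_loss(wrong, teacher))  # True
```

In practice this soft-target loss is combined with the ordinary cross-entropy on the true labels.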


8.5 Efficient Inference Hardware


Training an AI model is an intensive task that requires powerful hardware and can take hours to weeks, but inference needs to be as fast as possible, especially in real-time applications. This is where efficient inference hardware comes into play. By optimizing the hardware specifically for inference tasks, we can achieve rapid response times and power-efficient operation, which is especially crucial for edge devices and embedded systems.


TPUs (Tensor Processing Units): TPUs are custom-built ASICs (Application-Specific Integrated Circuits) by Google to accelerate machine learning workloads (Jouppi et al. 2017). They are optimized for tensor operations, offering high throughput for low-precision arithmetic, and are designed specifically for neural network machine learning. TPUs significantly accelerate model training and inference compared to general-purpose GPU/CPUs. This boost means faster model training and real-time or near-real-time inference capabilities, which are crucial for applications like voice search and augmented reality.

Jouppi, Norman P., Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, et al. 2017. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246.

Edge TPUs are a smaller, power-efficient version of Google’s TPUs tailored for edge devices. They provide fast on-device ML inferencing for TensorFlow Lite models. Edge TPUs allow for low-latency, high-efficiency inference on edge devices like smartphones, IoT devices, and embedded systems. AI capabilities can be deployed in real-time applications without communicating with a central server, thus saving bandwidth and reducing latency. Consider the table in Figure fig-edge-tpu-perf. It shows the performance differences between running different models on CPUs versus a Coral USB accelerator. The Coral USB accelerator is an accessory by Google’s Coral AI platform that lets developers connect Edge TPUs to Linux computers. Running inference on the Edge TPUs was 70 to 100 times faster than on CPUs.

Figure 8.4: Accelerator vs. CPU performance comparison. Credit: TensorFlow Blog.

NN Accelerators: Fixed-function neural network accelerators are hardware accelerators designed explicitly for neural network computations. They can be standalone chips or part of a larger system-on-chip (SoC) solution. By optimizing the hardware for the specific operations that neural networks require, such as matrix multiplications and convolutions, NN accelerators can achieve faster inference times and lower power consumption than general-purpose CPUs and GPUs. They are especially beneficial in TinyML devices with power or thermal constraints, such as smartwatches, micro-drones, or robotics.


These are only some of the most common examples. A number of other hardware types are emerging that have the potential to offer significant advantages for inference, including, but not limited to, neuromorphic hardware and photonic computing. In sec-aihw, we will explore these in greater detail.


Efficient hardware for inference speeds up the process, saves energy, extends battery life, and can operate in real-time conditions. As AI continues to be integrated into myriad applications- from smart cameras to voice assistants- the role of optimized hardware will only become more prominent. By leveraging these specialized hardware components, developers and engineers can bring the power of AI to devices and situations that were previously unthinkable.


8.6 Efficient Numerics


Machine learning, and especially deep learning, involves enormous amounts of computation. Models can have millions to billions of parameters, often trained on vast datasets. Every operation, every multiplication or addition, demands computational resources. Therefore, the precision of the numbers used in these operations can significantly impact the computational speed, energy consumption, and memory requirements. This is where the concept of efficient numerics comes into play.


8.6.1 Numerical Formats


There are many different numerical formats, each with a long history in computing systems.


Floating point: Known as single-precision floating-point, FP32 utilizes 32 bits to represent a number, incorporating its sign, exponent, and fraction. FP32 is widely adopted in many deep learning frameworks and balances accuracy and computational requirements. It’s prevalent in the training phase for many neural networks due to its sufficient precision in capturing minute details during weight updates.


FP16, also known as half-precision floating point, uses 16 bits to represent a number, including its sign, exponent, and fraction. It offers a good balance between precision and memory savings. FP16 is particularly popular in deep learning training on GPUs that support mixed-precision arithmetic, combining the speed benefits of FP16 with the precision of FP32 where needed.


Several other numerical formats fall into an exotic class. An exotic example is BF16 or Brain Floating Point. It is a 16-bit numerical format designed explicitly for deep learning applications. It’s a compromise between FP32 and FP16, retaining the 8-bit exponent from FP32 while reducing the mantissa to 7 bits (as compared to FP32’s 23-bit mantissa). This structure prioritizes range over precision. BF16 has achieved training results comparable in accuracy to FP32 while using significantly less memory and computational resources. This makes it suitable not just for inference but also for training deep neural networks.


By retaining the 8-bit exponent of FP32, BF16 offers a similar range, which is crucial for deep learning tasks where certain operations can result in very large or very small numbers. At the same time, by truncating precision, BF16 allows for reduced memory and computational requirements compared to FP32. BF16 has emerged as a promising middle ground in the landscape of numerical formats for deep learning, providing an efficient and effective alternative to the more traditional FP32 and FP16 formats.
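The bit layouts described above can be summarized in a few lines; the "max magnitude" figure is a deliberately rough order-of-magnitude estimate, not the exact IEEE maximum normal value:

```python
# Bit-layout sketch: (sign, exponent, mantissa) bits per format.
FORMATS = {
    "FP32": (1, 8, 23),  # single precision
    "FP16": (1, 5, 10),  # half precision
    "BF16": (1, 8, 7),   # brain float: FP32's exponent, fewer mantissa bits
}

for name, (sign, exp, man) in FORMATS.items():
    bits = sign + exp + man
    # Largest representable magnitude is on the order of 2^(2^(exp-1)).
    approx_max_exp = 2 ** (exp - 1)
    print(f"{name}: {bits} bits, max magnitude on the order of 2^{approx_max_exp}")
```

Note that BF16 keeps FP32's 8-bit exponent (so roughly the same range) in half the storage, which is exactly the trade-off described above.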


Figure fig-float-point-formats shows three different floating-point formats: Float32, Float16, and BFloat16.

Figure 8.5: Three floating-point formats.

Integer: These are integer representations using 8, 4, and 2 bits. They are often used during the inference phase of neural networks, where the weights and activations of the model are quantized to these lower precisions. Integer representations are deterministic and offer significant speed and memory advantages over floating-point representations. For many inference tasks, especially on edge devices, the slight loss in accuracy due to quantization is often acceptable, given the efficiency gains. An extreme form of integer numerics is for binary neural networks (BNNs), where weights and activations are constrained to one of two values: +1 or -1.
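For BNNs, binarization is often implemented with a sign function plus a per-tensor scaling factor; the scaling trick below is a common approach but an assumption here, not a quote from any specific BNN paper:

```python
import numpy as np

# BNN-style binarization sketch: constrain weights to +1/-1,
# keeping a single real-valued scale per tensor.
def binarize(w):
    alpha = np.mean(np.abs(w))        # per-tensor scaling factor
    b = np.where(w >= 0, 1.0, -1.0)   # weights become +1 or -1 only
    return b, alpha

w = np.array([0.3, -0.8, 0.1, -0.2])
b, alpha = binarize(w)
print(b, alpha)
```

Because each weight now needs one bit plus a shared scale, storage shrinks by roughly 32x versus FP32, and multiplications reduce to sign flips.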


Variable bit widths: Beyond the standard widths, research is ongoing into extremely low bit-width numerics, even down to binary or ternary representations. Extremely low bit-width operations can offer significant speedups and further reduce power consumption. While challenges remain in maintaining model accuracy with such drastic quantization, advances continue to be made in this area.


Efficient numerics is not just about reducing the bit-width of numbers but understanding the trade-offs between accuracy and efficiency. As machine learning models become more pervasive, especially in real-world, resource-constrained environments, the focus on efficient numerics will continue to grow. By thoughtfully selecting and leveraging the appropriate numeric precision, one can achieve robust model performance while optimizing for speed, memory, and energy. Table tbl-precision summarizes these trade-offs.

Table 8.1: Comparing precision levels in deep learning.

FP32 (Floating Point 32-bit)
  Pros: Standard precision used in most deep learning frameworks; high accuracy due to ample representational capacity; well-suited for training.
  Cons: High memory usage; slower inference times compared to quantized models; higher energy consumption.

FP16 (Floating Point 16-bit)
  Pros: Reduces memory usage compared to FP32; speeds up computations on hardware that supports FP16; often used in mixed-precision training to balance speed and accuracy.
  Cons: Lower representational capacity compared to FP32; risk of numerical instability in some models or layers.

INT8 (8-bit Integer)
  Pros: Significantly reduced memory footprint compared to floating-point representations; faster inference if hardware supports INT8 computations; suitable for many post-training quantization scenarios.
  Cons: Quantization can lead to some accuracy loss; requires careful calibration during quantization to minimize accuracy degradation.

INT4 (4-bit Integer)
  Pros: Even lower memory usage than INT8; further speedup potential for inference.
  Cons: Higher risk of accuracy loss compared to INT8; calibration during quantization becomes more critical.

Binary
  Pros: Minimal memory footprint (only 1 bit per parameter); extremely fast inference due to bitwise operations; power efficient.
  Cons: Significant accuracy drop for many tasks; complex training dynamics due to extreme quantization.

Ternary
  Pros: Low memory usage, though slightly more than binary; offers a middle ground between representation and efficiency.
  Cons: Accuracy might still be lower than that of higher-precision models; training dynamics can be complex.

8.6.2 Efficiency Benefits


Numerical efficiency matters for machine learning workloads for several reasons:


Computational Efficiency: High-precision computations (like FP32 or FP64) can be slow and resource-intensive. Reducing numeric precision can achieve faster computation times, especially on specialized hardware that supports lower precision.


Memory Efficiency: Storage requirements decrease with reduced numeric precision. For instance, FP16 requires half the memory of FP32. This is crucial when deploying models to edge devices with limited memory or working with large models.
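The memory savings follow directly from bytes-per-parameter arithmetic. The 10-million-parameter model size below is an assumed example, not a figure from the text:

```python
# Back-of-envelope model memory for an assumed 10M-parameter model.
params = 10_000_000
bytes_per_param = {"FP32": 4, "FP16": 2, "INT8": 1}

for fmt, nbytes in bytes_per_param.items():
    print(f"{fmt}: {params * nbytes / 1e6:.0f} MB")
# FP32: 40 MB, FP16: 20 MB, INT8: 10 MB
```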


Power Efficiency: Lower precision computations often consume less power, which is especially important for battery-operated devices.


Noise Introduction: Interestingly, the noise introduced by using lower precision can sometimes act as a regularizer, helping to prevent overfitting in some models.


Hardware Acceleration: Many modern AI accelerators and GPUs are optimized for lower precision operations, leveraging the efficiency benefits of such numerics.


8.7 Evaluating Models


It’s worth noting that the actual benefits and trade-offs can vary based on the specific architecture of the neural network, the dataset, the task, and the hardware being used. Before deciding on a numeric precision, it’s advisable to perform experiments to evaluate the impact on the desired application.


8.7.1 Efficiency Metrics


A deep understanding of model evaluation methods is important to guide this process systematically. When assessing AI models’ effectiveness and suitability for various applications, efficiency metrics come to the forefront.


FLOPs (Floating Point Operations) gauge a model’s computational demands. For instance, a modern neural network like BERT has billions of FLOPs, which might be manageable on a powerful cloud server but would be taxing on a smartphone. Higher FLOPs can lead to more prolonged inference times and significant power drain, especially on devices without specialized hardware accelerators. Hence, for real-time applications such as video streaming or gaming, models with lower FLOPs might be more desirable.
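FLOP counts for dense layers can be estimated by hand, which helps build intuition for why large models are costly. The sketch below uses the standard 2 x in x out rule for a fully connected layer; the "BERT-base feed-forward block" sizes (hidden 768, intermediate 3072) are commonly cited values, used here as an assumed example:

```python
# FLOPs for a fully connected layer: each output needs `in_features`
# multiplies and adds, so roughly 2 * in * out FLOPs (bias ignored).
def dense_flops(in_features, out_features):
    return 2 * in_features * out_features

# Rough per-token cost of one transformer feed-forward block with
# assumed sizes: hidden 768, intermediate 3072 (BERT-base-like).
ffn = dense_flops(768, 3072) + dense_flops(3072, 768)
print(f"{ffn / 1e6:.1f} MFLOPs per token")  # ~9.4 MFLOPs
```

Multiplying such per-layer counts across dozens of layers and hundreds of tokens quickly reaches the billions of FLOPs mentioned above.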


Memory Usage pertains to how much storage the model requires, affecting both the deploying device’s storage and RAM. Consider deploying a model onto a smartphone: a model that occupies several gigabytes of space not only consumes precious storage but might also be slower due to the need to load large weights into memory. This becomes especially crucial for edge devices like security cameras or drones, where minimal memory footprints are vital for storage and rapid data processing.


Power Consumption becomes especially crucial for devices that rely on batteries. For instance, a wearable health monitor using a power-hungry model could drain its battery in hours, rendering it impractical for continuous health monitoring. Optimizing models for low power consumption becomes essential as we move toward an era dominated by IoT devices, where many devices operate on battery power.


Inference Time is about how swiftly a model can produce results. In applications like autonomous driving, where split-second decisions are the difference between safety and calamity, models must operate rapidly. If a self-driving car’s model takes even a few seconds too long to recognize an obstacle, the consequences could be dire. Hence, ensuring a model’s inference time aligns with the real-time demands of its application is paramount.
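Inference latency can be measured with a simple warmup-then-average loop. This is a minimal, hardware-agnostic sketch with a stand-in function as the "model"; real deployments would use platform-specific profilers:

```python
import time

# Average wall-clock latency of any callable, with warmup runs
# excluded so one-time setup costs do not skew the measurement.
def measure_latency(fn, *args, warmup=3, runs=20):
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - start) / runs

fake_model = lambda x: sum(v * v for v in x)  # stand-in for inference
latency = measure_latency(fake_model, list(range(1000)))
print(f"{latency * 1e6:.1f} microseconds per call")
```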


In essence, these efficiency metrics are more than numbers dictating where and how a model can be effectively deployed. A model might boast high accuracy, but if its FLOPs, memory usage, power consumption, or inference time make it unsuitable for its intended platform or real-world scenarios, its practical utility becomes limited.


8.7.2 Efficiency Comparisons


The ecosystem contains an abundance of models, each boasting unique strengths and idiosyncrasies. However, raw accuracy figures or training and inference speeds alone paint only a partial picture. When we dive deeper into comparative analyses, several critical nuances emerge.


Often, we encounter the delicate balance between accuracy and efficiency. For instance, while a dense, deep learning model and a lightweight MobileNet variant might excel in image classification, their computational demands could be at two extremes. This differentiation is especially pronounced when comparing deployments on resource-abundant cloud servers versus constrained TinyML devices. In many real-world scenarios, the marginal gains in accuracy could be overshadowed by the inefficiencies of a resource-intensive model.


Moreover, the optimal model choice is not universal; it often depends on the specifics of an application. Consider object detection: a model that excels in general scenarios might falter in niche environments, such as when detecting manufacturing defects on a factory floor. This adaptability, or the lack of it, can dictate a model’s real-world utility.


Another important consideration is the relationship between model complexity and its practical benefits. Take voice-activated assistants such as “Alexa” or “OK Google.” While a complex model might demonstrate a marginally superior understanding of user speech, if it is slower to respond than a simpler counterpart, the user experience could be compromised. Thus, adding layers or parameters does not always equate to better real-world outcomes.


Furthermore, while benchmark datasets, such as ImageNet (Russakovsky et al. 2015), COCO (Lin et al. 2014), Visual Wake Words (Chowdhery et al. 2019), Google Speech Commands (Warden 2018), etc. provide a standardized performance metric, they might not capture the diversity and unpredictability of real-world data. Two facial recognition models with similar benchmark scores might exhibit varied competencies when faced with diverse ethnic backgrounds or challenging lighting conditions. Such disparities underscore the importance of robustness and consistency across varied data. For example, Figure fig-stoves from the Dollar Street dataset shows stove images across extreme monthly incomes. Stoves have different shapes and technological levels across different regions and income levels. A model that is not trained on diverse datasets might perform well on a benchmark but fail in real-world applications. So, if a model was trained on pictures of stoves found in wealthy countries only, it would fail to recognize stoves from poorer regions.

Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. 2015. “ImageNet Large Scale Visual Recognition Challenge.” Int. J. Comput. Vision 115 (3): 211–52. https://doi.org/10.1007/s11263-015-0816-y.
Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. “Microsoft COCO: Common Objects in Context.” In Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part V, 740–55. Springer.
Chowdhery, Aakanksha, Pete Warden, Jonathon Shlens, Andrew Howard, and Rocky Rhodes. 2019. “Visual Wake Words Dataset.” arXiv preprint arXiv:1906.05721.
Warden, Pete. 2018. “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition.” arXiv preprint arXiv:1804.03209.

Figure 8.6: Different types of stoves. Credit: Dollar Street stove images.

In essence, a thorough comparative analysis transcends numerical metrics. It’s a holistic assessment intertwined with real-world applications, costs, and the intricate subtleties that each model brings to the table. This is why having standard benchmarks and metrics widely established and adopted by the community becomes important.


8.8 Conclusion


Efficient AI is extremely important as we push towards broader and more diverse real-world deployment of machine learning. This chapter provided an overview, exploring the various methodologies and considerations behind achieving efficient AI, starting with the fundamental need, similarities, and differences across cloud, Edge, and TinyML systems.


We saw that efficient model architectures can be useful for optimizations. Model compression techniques such as pruning, quantization, and knowledge distillation exist to help reduce computational demands and memory footprint without significantly impacting accuracy. Specialized hardware like TPUs and NN accelerators offer optimized silicon for neural network operations and data flow. Efficient numerics balance precision and efficiency, enabling models to attain robust performance using minimal resources. In the subsequent chapters, we will explore these different topics in depth and in detail.


Together, these form a holistic framework for efficient AI. But the journey doesn’t end here. Achieving optimally efficient intelligence requires continued research and innovation. As models become more sophisticated, datasets grow, and applications diversify into specialized domains, efficiency must evolve in lockstep. Measuring real-world impact requires nuanced benchmarks and standardized metrics beyond simplistic accuracy figures.


Moreover, efficient AI expands beyond technological optimization and encompasses costs, environmental impact, and ethical considerations for the broader societal good. As AI permeates industries and daily lives, a comprehensive outlook on efficiency underpins its sustainable and responsible progress. The subsequent chapters will build upon these foundational concepts, providing actionable insights and hands-on best practices for developing and deploying efficient AI solutions.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.


Coming soon.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

diff --git a/contents/frameworks/frameworks.html b/contents/frameworks/frameworks.html
deleted file mode 100644
index 85bb4bd9..00000000
--- a/contents/frameworks/frameworks.html
+++ /dev/null

6  AI Frameworks


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Illustration in a rectangular format, designed for a professional textbook, where the content spans the entire width. The vibrant chart represents training and inference frameworks for ML. Icons for TensorFlow, Keras, PyTorch, ONNX, and TensorRT are spread out, filling the entire horizontal space, and aligned vertically. Each icon is accompanied by brief annotations detailing their features. The lively colors like blues, greens, and oranges highlight the icons and sections against a soft gradient background. The distinction between training and inference frameworks is accentuated through color-coded sections, with clean lines and modern typography maintaining clarity and focus.

This chapter explores the landscape of AI frameworks that serve as the foundation for developing machine learning systems. AI frameworks provide the tools, libraries, and environments to design, train, and deploy machine learning models. We delve into the evolutionary trajectory of these frameworks, dissect the workings of TensorFlow, and provide insights into the core components and advanced features that define these frameworks.


Furthermore, we investigate the specialization of frameworks tailored to specific needs, the emergence of frameworks specifically designed for embedded AI, and the criteria for selecting the most suitable framework for your project. This exploration will be rounded off by a glimpse into the future trends expected to shape the landscape of ML frameworks in the coming years.

Learning Objectives

  • Understand the evolution and capabilities of major machine learning frameworks. This includes graph execution models, programming paradigms, hardware acceleration support, and how they have expanded over time.
  • Learn frameworks’ core components and functionality, such as computational graphs, data pipelines, optimization algorithms, and training loops, that enable efficient model building.
  • Compare frameworks across different environments, such as cloud, edge, and TinyML, and learn how frameworks specialize based on computational constraints and hardware.
  • Dive deeper into embedded and TinyML-focused frameworks like TensorFlow Lite Micro, CMSIS-NN, and TinyEngine, and how they optimize for microcontrollers.
  • Explore model conversion and deployment considerations when choosing a framework, including latency, memory usage, and hardware support.
  • Evaluate key factors in selecting the right framework, like performance, hardware compatibility, community support, and ease of use, based on the specific project needs and constraints.
  • Understand the limitations of current frameworks and potential future trends, such as using ML to improve frameworks, decomposed ML systems, and high-performance compilers.

6.1 Introduction


Machine learning frameworks provide the tools and infrastructure to efficiently build, train, and deploy machine learning models. In this chapter, we will explore the evolution and key capabilities of major frameworks like TensorFlow (TF), PyTorch, and specialized frameworks for embedded devices. We will dive into the components like computational graphs, optimization algorithms, hardware acceleration, and more that enable developers to construct performant models quickly. Understanding these frameworks is essential to leverage the power of deep learning across the spectrum from cloud to edge devices.
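The computational graphs at the heart of these frameworks can be sketched in a few lines of plain Python. This is a toy illustration of the concept only, not any framework's actual API:

```python
# Toy computational graph: nodes record an operation and its inputs;
# evaluation walks the graph recursively, just as frameworks do
# (with far more sophistication) when executing a model.
class Node:
    def __init__(self, op, *inputs, value=None):
        self.op, self.inputs, self.value = op, inputs, value

    def eval(self):
        if self.op == "const":
            return self.value
        args = [n.eval() for n in self.inputs]
        if self.op == "add":
            return args[0] + args[1]
        if self.op == "mul":
            return args[0] * args[1]
        raise ValueError(f"unknown op: {self.op}")

x = Node("const", value=2.0)
w = Node("const", value=3.0)
b = Node("const", value=1.0)
y = Node("add", Node("mul", w, x), b)  # y = w*x + b
print(y.eval())  # 7.0
```

Representing computation as a graph is what lets frameworks differentiate automatically, optimize, and dispatch operations to accelerators.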


ML frameworks handle much of the complexity of model development through high-level APIs and domain-specific languages that allow practitioners to quickly construct models by combining pre-made components and abstractions. For example, frameworks like TensorFlow and PyTorch provide Python APIs to define neural network architectures using layers, optimizers, datasets, and more. This enables rapid iteration compared to coding every model detail from scratch.


A key capability frameworks offer is distributed training engines that can scale model training across clusters of GPUs and TPUs. This makes it feasible to train state-of-the-art models with billions or trillions of parameters on vast datasets. Frameworks also integrate with specialized hardware like NVIDIA GPUs to further accelerate training via optimizations like parallelization and efficient matrix operations.


In addition, frameworks simplify deploying finished models into production through tools like TensorFlow Serving for scalable model serving and TensorFlow Lite for optimization on mobile and edge devices. Other valuable capabilities include visualization, model optimization techniques like quantization and pruning, and monitoring metrics during training.


Leading open-source frameworks like TensorFlow, PyTorch, and MXNet power much of AI research and development today. Commercial offerings like Amazon SageMaker and Microsoft Azure Machine Learning integrate these open-source frameworks with proprietary capabilities and enterprise tools.


Machine learning engineers and practitioners leverage these robust frameworks to focus on high-value tasks like model architecture, feature engineering, and hyperparameter tuning instead of infrastructure. The goal is to build and deploy performant models that solve real-world problems efficiently.


In this chapter, we will explore today’s leading cloud frameworks and how they have adapted models and tools specifically for embedded and edge deployment. We will compare programming models, supported hardware, optimization capabilities, and more to fully understand how frameworks enable scalable machine learning from the cloud to the edge.


6.2 Framework Evolution


Machine learning frameworks have evolved considerably over the past decade to meet the diverse needs of practitioners and rapid advances in deep learning techniques. A few decades ago, building and training machine learning models required extensive low-level coding and infrastructure, and early neural network research was constrained by insufficient data and computing power. However, the release of large datasets like ImageNet (Deng et al. 2009) and advancements in parallel GPU computing unlocked the potential for far deeper neural networks.

Deng, Jia, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. 2009. “ImageNet: A Large-Scale Hierarchical Image Database.” In 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–55. IEEE. https://doi.org/10.1109/cvpr.2009.5206848.
Team, The Theano Development, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, et al. 2016. “Theano: A Python Framework for Fast Computation of Mathematical Expressions.” https://arxiv.org/abs/1605.02688.
Jia, Yangqing, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. “Caffe: Convolutional Architecture for Fast Feature Embedding.” In Proceedings of the 22nd ACM International Conference on Multimedia, 675–78. ACM. https://doi.org/10.1145/2647868.2654889.
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 1106–14. https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html.
Chollet, François. 2018. “Introduction to Keras.” March 9th.
Tokui, Seiya, Ryosuke Okuta, Takuya Akiba, Yusuke Niitani, Toru Ogawa, Shunta Saito, Shuji Suzuki, Kota Uenishi, Brian Vogel, and Hiroyuki Yamazaki Vincent. 2019. “Chainer: A Deep Learning Framework for Accelerating the Research Cycle.” In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 5:1–6. ACM. https://doi.org/10.1145/3292500.3330756.
Seide, Frank, and Amit Agarwal. 2016. “CNTK: Microsoft’s Open-Source Deep-Learning Toolkit.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2135. ACM. https://doi.org/10.1145/2939672.2945397.
Ansel, Jason, Edward Yang, Horace He, Natalia Gimelshein, Animesh Jain, Michael Voznesensky, Bin Bao, et al. 2024. “PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation.” In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, 8024–35. ACM. https://doi.org/10.1145/3620665.3640366.

The first ML frameworks, Theano by Team et al. (2016) and Caffe by Jia et al. (2014), were developed by academic institutions (Montreal Institute for Learning Algorithms, Berkeley Vision and Learning Center). Amid growing interest in deep learning due to state-of-the-art performance of AlexNet Krizhevsky, Sutskever, and Hinton (2012) on the ImageNet dataset, private companies and individuals began developing ML frameworks, resulting in frameworks such as Keras by Chollet (2018), Chainer by Tokui et al. (2019), TensorFlow from Google (Yu et al. 2018), CNTK by Microsoft (Seide and Agarwal 2016), and PyTorch by Facebook (Ansel et al. 2024).


Many of these ML frameworks can be divided into high-level vs. low-level frameworks and static vs. dynamic computational graph frameworks. High-level frameworks provide a higher level of abstraction than low-level frameworks. High-level frameworks have pre-built functions and modules for common ML tasks, such as creating, training, and evaluating common ML models, preprocessing data, engineering features, and visualizing data, which low-level frameworks do not have. Thus, high-level frameworks may be easier to use but are less customizable than low-level frameworks (i.e., users of low-level frameworks can define custom layers, loss functions, optimization algorithms, etc.). Examples of high-level frameworks include TensorFlow/Keras and PyTorch. Examples of low-level ML frameworks include TensorFlow with low-level APIs, Theano, Caffe, Chainer, and CNTK.


Frameworks like Theano and Caffe used static computational graphs, which required the full model architecture to be rigidly defined upfront. Static graphs require upfront declaration and limit flexibility, whereas dynamic graphs are constructed on the fly, allowing more iterative development. Around 2016, frameworks such as PyTorch and later TensorFlow 2.0 began adopting dynamic graphs, which provide greater flexibility for model development. We will discuss these concepts in more detail later in the AI Training section.


The development of these frameworks facilitated an explosion in model size and complexity over time—from early multilayer perceptrons and convolutional networks to modern transformers with billions or trillions of parameters. In 2016, ResNet models by He et al. (2016) achieved record ImageNet accuracy with over 150 layers and 25 million parameters. Then, in 2020, the GPT-3 language model from OpenAI (Brown et al. 2020) pushed parameters to an astonishing 175 billion using model parallelism in frameworks to train across thousands of GPUs and TPUs.

He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. IEEE. https://doi.org/10.1109/cvpr.2016.90.

Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.

Each generation of frameworks unlocked new capabilities that powered advancement:

  • Theano and TensorFlow (2015) introduced computational graphs and automatic differentiation to simplify model building.

  • CNTK (2016) pioneered efficient distributed training by combining model and data parallelism.

  • PyTorch (2016) provided imperative programming and dynamic graphs for flexible experimentation.

  • TensorFlow 2.0 (2019) made eager execution the default for intuitiveness and debugging.

  • TensorFlow Graphics (2020) added 3D data structures to handle point clouds and meshes.

In recent years, the frameworks have converged. Figure fig-ml-framework shows that TensorFlow and PyTorch have become the overwhelmingly dominant ML frameworks, representing more than 95% of ML frameworks used in research and production. Keras was integrated into TensorFlow in 2019; Preferred Networks transitioned Chainer to PyTorch in 2019; and Microsoft stopped actively developing CNTK in 2022 to support PyTorch on Windows.

Figure 6.1: Popularity of ML frameworks in the United States as measured by Google web searches. Credit: Google.

However, a one-size-fits-all approach does not work well across the spectrum from cloud to tiny edge devices. Different frameworks represent various philosophies around graph execution, declarative versus imperative APIs, and more. Declarative APIs define what the program should do, while imperative APIs focus on how it should be done step by step. For instance, TensorFlow uses graph execution and declarative-style modeling, while PyTorch adopts eager execution and imperative modeling for more Pythonic flexibility. Each approach carries tradeoffs, which we will discuss later in the Basic Components section.
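The declarative/imperative distinction can be made concrete with a small sketch. The following is plain Python, not any real framework API: the `run` helper and the tuple-based graph encoding are invented purely for illustration.

```python
# The same computation, y = (a + b) * 2, expressed in both styles.

# Imperative (eager) style: each operation executes immediately, so every
# intermediate value can be inspected as an ordinary number.
def imperative(a, b):
    s = a + b          # runs now; s is a concrete value
    return s * 2       # runs now

# Declarative (graph) style: first *describe* the computation as data,
# then execute it later with a separate interpreter.
def declarative():
    # Each tuple: (operation, input name, second input or constant, output name)
    return [("add", "a", "b", "s"), ("mul_const", "s", 2, "y")]

def run(graph, feeds):
    """Execute a declared graph given values for its inputs."""
    env = dict(feeds)
    for op, x, y, out in graph:
        if op == "add":
            env[out] = env[x] + env[y]
        elif op == "mul_const":
            env[out] = env[x] * y
    return env["y"]

print(imperative(3, 4))                        # 14
print(run(declarative(), {"a": 3, "b": 4}))    # 14
```

Both produce the same result, but only the declarative version hands the framework a complete description of the computation that it can analyze and optimize before anything runs.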


Today’s advanced frameworks enable practitioners to develop and deploy increasingly complex models, a key driver of innovation in the AI field. However, they continue to evolve and expand their capabilities for the next generation of machine learning. To understand how these systems continue to evolve, we will dive deeper into TensorFlow as an example of how the framework grew in complexity over time.


6.3 Deep Dive into TensorFlow


TensorFlow was developed by the Google Brain team and was released as an open-source software library on November 9, 2015. It was designed for numerical computation using data flow graphs and has since become popular for a wide range of machine learning and deep learning applications.


TensorFlow is a training and inference framework that provides built-in functionality to handle everything from model creation and training to deployment, as shown in Figure fig-tensorflow-architecture. Since its initial development, the TensorFlow ecosystem has grown to include many different “varieties” of TensorFlow, each intended to allow users to support ML on different platforms. In this section, we will mainly discuss only the core package.


6.3.1 TF Ecosystem

  1. TensorFlow Core: the primary package that most developers engage with. It provides a comprehensive, flexible platform for defining, training, and deploying machine learning models. It includes tf.keras as its high-level API.

  2. TensorFlow Lite (https://www.tensorflow.org/lite): designed for deploying lightweight models on mobile, embedded, and edge devices. It offers tools to convert TensorFlow models to a more compact format suitable for limited-resource devices and provides optimized pre-trained models for mobile.

  3. TensorFlow.js: a JavaScript library that allows training and deployment of machine learning models directly in the browser or on Node.js. It also provides tools for porting pre-trained TensorFlow models to a browser-friendly format.

  4. TensorFlow on Edge Devices (Coral): a platform of hardware components and software tools from Google that allows the execution of TensorFlow models on edge devices, leveraging Edge TPUs for acceleration.

  5. TensorFlow Federated (TFF): a framework for machine learning and other computations on decentralized data. TFF facilitates federated learning, allowing model training across many devices without centralizing the data.

  6. TensorFlow Graphics: a library for using TensorFlow to carry out graphics-related tasks, including processing 3D shapes and point clouds, using deep learning.

  7. TensorFlow Hub: a repository of reusable machine learning model components that allows developers to reuse pre-trained model components, facilitating transfer learning and model composition.

  8. TensorFlow Serving: a framework designed for serving and deploying machine learning models for inference in production environments. It provides tools for versioning and dynamically updating deployed models without service interruption.

  9. TensorFlow Extended (TFX): an end-to-end platform designed to deploy and manage machine learning pipelines in production settings. TFX encompasses data validation, preprocessing, model training, validation, and serving components.
Figure 6.2: Architecture overview of TensorFlow 2.0. Credit: TensorFlow.

TensorFlow was developed to address the limitations of DistBelief (Yu et al. 2018)—the framework in use at Google from 2011 to 2015—by providing flexibility along three axes: 1) defining new layers, 2) refining training algorithms, and 3) defining new training algorithms. To understand what limitations in DistBelief led to the development of TensorFlow, we will first give a brief overview of the Parameter Server Architecture that DistBelief employed (Dean et al. 2012).

Yu, Yuan, Martín Abadi, Paul Barham, Eugene Brevdo, Mike Burrows, Andy Davis, Jeff Dean, et al. 2018. “Dynamic Control Flow in Large-Scale Machine Learning.” In Proceedings of the Thirteenth EuroSys Conference, 265–83. ACM. https://doi.org/10.1145/3190508.3190551.

Dean, Jeffrey, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, et al. 2012. “Large Scale Distributed Deep Networks.” In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 1232–40. https://proceedings.neurips.cc/paper/2012/hash/6aca97005c68f1206823815f66102863-Abstract.html.

The Parameter Server (PS) architecture is a popular design for distributing the training of machine learning models, especially deep neural networks, across multiple machines. The fundamental idea is to separate the storage and management of model parameters from the computation used to update these parameters:


Storage: The stateful parameter server processes handled the storage and management of model parameters. Given the large scale of models and the system’s distributed nature, these parameters were sharded across multiple parameter servers. Each server maintained a portion of the model parameters, making it "stateful" as it had to maintain and manage this state across the training process.


Computation: The worker processes, which could be run in parallel, were stateless and purely computational. They processed data and computed gradients without maintaining any state or long-term memory (M. Li et al. 2014).
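The stateful/stateless split can be sketched in a few lines of plain Python. This toy code is not DistBelief’s actual interface; the class and function names are invented, and the “model” is a single-parameter linear fit y = w·x trained by gradient descent, chosen only to make the roles concrete.

```python
class ParameterServer:
    """Stateful: owns and updates model parameters across the whole run.
    In a real system, the parameters would be sharded across many servers."""
    def __init__(self, num_params, lr=0.1):
        self.params = [0.0] * num_params
        self.lr = lr

    def apply_gradients(self, grads):
        self.params = [p - self.lr * g for p, g in zip(self.params, grads)]

def worker_compute_gradients(params, batch):
    """Stateless: receives parameters and data, returns gradients of the
    mean squared error for y = w * x, and keeps no memory between calls."""
    w = params[0]
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return [grad]

server = ParameterServer(num_params=1)
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # samples from y = 2x
for step in range(50):
    grads = worker_compute_gradients(server.params, data)  # worker role
    server.apply_gradients(grads)                          # server role

print(round(server.params[0], 2))   # converges toward 2.0
```

The point of the split is that workers can be replicated freely (they hold no state), while consistency concerns are concentrated in the servers.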

Li, Mu, David G. Andersen, Alexander J. Smola, and Kai Yu. 2014. “Communication Efficient Distributed Machine Learning with the Parameter Server.” In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, edited by Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger, 19–27. https://proceedings.neurips.cc/paper/2014/hash/1ff1de774005f8da13f42943881c655f-Abstract.html.

Exercise 6.1 (TensorFlow Core)  


Let’s comprehensively understand core machine learning algorithms using TensorFlow and their practical applications in data analysis and predictive modeling. We will start with linear regression to predict survival rates from the Titanic dataset. Then, using TensorFlow, we will construct classifiers to identify different species of flowers based on their attributes. Next, we will use the K-Means algorithm and its application in segmenting datasets into cohesive clusters. Finally, we will apply hidden Markov models (HMM) to foresee weather patterns.



Exercise 6.2 (TensorFlow Lite)  


Here, we will see how to build a miniature machine-learning model for microcontrollers. We will build a mini neural network that is streamlined to learn from data even with limited resources and optimized for deployment by shrinking our model for efficient use on microcontrollers. TensorFlow Lite, a powerful technology derived from TensorFlow, shrinks models for tiny devices and helps enable on-device features like image recognition in smart devices. It is used in edge computing to allow for faster analysis and decisions in devices processing data locally.



DistBelief and its architecture defined above were crucial in enabling distributed deep learning at Google but also introduced limitations that motivated the development of TensorFlow:


6.3.2 Static Computation Graph


Model parameters are distributed across various parameter servers in the parameter server architecture. Since DistBelief was primarily designed for the neural network paradigm, parameters corresponded to a fixed neural network structure. If the computation graph were dynamic, the distribution and coordination of parameters would become significantly more complicated. For example, a change in the graph might require the initialization of new parameters or the removal of existing ones, complicating the management and synchronization tasks of the parameter servers. This made it harder to implement models outside the neural framework or models that required dynamic computation graphs.


TensorFlow was designed as a more general computation framework that expresses computation as a data flow graph. This allows for a wider variety of machine learning models and algorithms outside of neural networks and provides flexibility in refining models.


6.3.3 Usability & Deployment


The parameter server model delineates roles (worker nodes and parameter servers) and is optimized for data center deployments, which may not be optimal for all use cases. For instance, this division introduces overhead and complexity on edge devices or in other non-data-center environments.


TensorFlow was built to run on multiple platforms, from mobile devices and edge devices to cloud infrastructure. It also aimed to be lighter and developer-friendly and to provide ease of use between local and distributed training.


6.3.4 Architecture Design


Rather than using the parameter server architecture, TensorFlow deploys tasks across a cluster. These tasks are named processes that can communicate over a network, and each can execute TensorFlow’s core construct, the dataflow graph, and interface with various computing devices (like CPUs or GPUs). This graph is a directed representation where nodes symbolize computational operations, and edges depict the tensors (data) flowing between these operations.


Despite the absence of traditional parameter servers, some “PS tasks” still store and manage parameters reminiscent of parameter servers in other systems. The remaining tasks, which usually handle computation, data processing, and gradient calculations, are referred to as "worker tasks." TensorFlow’s PS tasks can execute any computation representable by the dataflow graph, meaning they aren’t just limited to parameter storage, and the computation can be distributed. This capability makes them significantly more versatile and gives users the power to program the PS tasks using the standard TensorFlow interface, the same one they’d use to define their models. As mentioned above, dataflow graphs’ structure also makes them inherently good for parallelism, allowing for the processing of large datasets.


6.3.5 Built-in Functionality & Keras


TensorFlow includes libraries to help users develop and deploy more use-case-specific models, and since this framework is open-source, this list continues to grow. These libraries address the entire ML development lifecycle: data preparation, model building, deployment, and responsible AI.


One of TensorFlow’s biggest advantages is its integration with Keras, though, as we will cover in the next section, PyTorch recently added a Keras integration as well. Keras is another ML framework built to be extremely user-friendly and, as a result, has a high level of abstraction. We will cover Keras in more depth later in this chapter. When discussing its integration with TensorFlow, however, it is important to note that Keras was originally built to be backend-agnostic: users could abstract away backend complexities and get a cleaner, more intuitive way to define and train models without worrying about compatibility issues with different backends. TensorFlow users had complaints about the usability and readability of TensorFlow’s API, so as TF gained prominence, it integrated Keras as its high-level API. This integration offered major benefits to TensorFlow users, introducing more intuitive readability and portability of models while still taking advantage of powerful backend features, Google support, and infrastructure to deploy models on various platforms.


Exercise 6.3 (Exploring Keras: Building, Training, and Evaluating Neural Networks)  


Here, we’ll learn how to use Keras, a high-level neural network API, for model development and training. We will explore the functional API for concise model building, understand loss and metric classes for model evaluation, and use built-in optimizers to update model parameters during training. Additionally, we’ll discover how to define custom layers and metrics tailored to our needs. Lastly, we’ll delve into Keras’ training loops to streamline the process of training neural networks on large datasets. This knowledge will empower us to build and optimize neural network models across various machine learning and artificial intelligence applications.



6.3.6 Limitations and Challenges


TensorFlow is one of the most popular deep learning frameworks, but it has drawn criticism, mostly focused on usability and resource usage. While Google’s support enables a rapid pace of updates, that pace has sometimes led to backward compatibility issues, deprecated functions, and shifting documentation. Additionally, even with the Keras integration, TensorFlow’s syntax and learning curve can be difficult for new users. One major critique of TensorFlow is its high overhead and memory consumption due to its wide range of built-in libraries and support. Some of these concerns can be addressed with pared-down versions, but those can still be limiting in resource-constrained environments.


6.3.7 PyTorch vs. TensorFlow


PyTorch and TensorFlow have established themselves as frontrunners in the industry. Both frameworks offer robust functionalities but differ in design philosophies, ease of use, ecosystem, and deployment capabilities.


Design Philosophy and Programming Paradigm: PyTorch uses a dynamic computational graph termed eager execution. This makes it intuitive and facilitates debugging since operations are executed immediately and can be inspected on the fly. In comparison, earlier versions of TensorFlow were centered around a static computational graph, which required the graph’s complete definition before execution. However, TensorFlow 2.0 introduced eager execution by default, making it more aligned with PyTorch. PyTorch’s dynamic nature and Python-based approach have enabled its simplicity and flexibility, particularly for rapid prototyping. TensorFlow’s static graph approach in its earlier versions had a steeper learning curve; the introduction of TensorFlow 2.0, with its Keras integration as the high-level API, has significantly simplified the development process.


Deployment: PyTorch is heavily favored in research environments; deploying PyTorch models in production settings was traditionally challenging. However, deployment has become more feasible with the introduction of TorchScript and the TorchServe tool. One of TensorFlow’s strengths lies in its scalability and deployment capabilities, especially on embedded and mobile platforms with TensorFlow Lite. TensorFlow Serving and TensorFlow.js further facilitate deployment in various environments, thus giving it a broader reach in the ecosystem.


Performance: Both frameworks offer efficient hardware acceleration for their operations. However, TensorFlow has a slightly more robust optimization workflow, such as the XLA (Accelerated Linear Algebra) compiler, which can further boost performance. Its static computational graph was also advantageous for certain optimizations in the early versions.


Ecosystem: PyTorch has a growing ecosystem with tools like TorchServe for serving models and libraries like TorchVision, TorchText, and TorchAudio for specific domains. As we mentioned earlier, TensorFlow has a broad and mature ecosystem. TensorFlow Extended (TFX) provides an end-to-end platform for deploying production machine learning pipelines. Other tools and libraries include TensorFlow Lite, TensorFlow.js, TensorFlow Hub, and TensorFlow Serving.


Table tbl-pytorch_vs_tf provides a comparative analysis:

Table 6.1: Comparison of PyTorch and TensorFlow.

| Feature/Aspect             | PyTorch                                                           | TensorFlow                                                                                    |
|----------------------------|-------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|
| Design Philosophy          | Dynamic computational graph (eager execution)                     | Static computational graph (early versions); eager execution in TensorFlow 2.0                |
| Deployment                 | Traditionally challenging; improved with TorchScript & TorchServe | Scalable, especially on embedded platforms with TensorFlow Lite                               |
| Performance & Optimization | Efficient GPU acceleration                                        | Robust optimization with XLA compiler                                                         |
| Ecosystem                  | TorchServe, TorchVision, TorchText, TorchAudio                    | TensorFlow Extended (TFX), TensorFlow Lite, TensorFlow.js, TensorFlow Hub, TensorFlow Serving |
| Ease of Use                | Preferred for its Pythonic approach and rapid prototyping         | Initially steep learning curve; simplified with Keras in TensorFlow 2.0                       |

6.4 Basic Framework Components


6.4.1 Tensor data structures


To understand tensors, let us start from familiar concepts in linear algebra. As demonstrated in Figure fig-tensor-data-structure, vectors can be represented as a stack of numbers in a 1-dimensional array. Matrices follow the same idea, and one can think of them as many vectors stacked on each other, making them 2-dimensional. Higher-dimensional tensors work the same way: a 3-dimensional tensor is simply a set of matrices stacked on each other in another direction. Therefore, vectors and matrices can be considered special cases of tensors of rank 1 and rank 2, respectively.

Figure 6.3: Visualization of Tensor Data Structure.

Defining formally, in machine learning, tensors are a multi-dimensional array of numbers. The number of dimensions defines the rank of the tensor. As a generalization of linear algebra, the study of tensors is called multilinear algebra. There are noticeable similarities between matrices and higher-ranked tensors. First, extending the definitions given in linear algebra to tensors, such as with eigenvalues, eigenvectors, and rank (in the linear algebra sense), is possible. Furthermore, with the way we have defined tensors, it is possible to turn higher dimensional tensors into matrices. This is critical in practice, as the multiplication of abstract representations of higher dimensional tensors is often completed by first converting them into matrices for multiplication.


Tensors offer a flexible data structure that can represent data in higher dimensions. For example, to represent color image data, for each pixel value (in 2 dimensions), one needs the color values for red, green, and blue. With tensors, it is easy to contain image data in a single 3-dimensional tensor, with each number within it representing a certain color value in a certain location of the image. Extending even further, if we wanted to store a series of images, we could extend the dimensions such that the new dimension (to create a 4-dimensional tensor) represents our different images. This is exactly what the famous MNIST dataset does, loading a single 4-dimensional tensor when one calls to load the dataset, allowing a compact representation of all the data in one place.
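These shapes can be sketched with plain nested Python lists (a real framework would use tensor objects with a `.shape` attribute); the `shape` helper below is an invented stand-in for illustration.

```python
def shape(t):
    """Infer the shape of a regularly nested list by walking its first elements."""
    s = []
    while isinstance(t, list):
        s.append(len(t))
        t = t[0]
    return tuple(s)

vector = [1, 2, 3]                      # rank 1
matrix = [[1, 2], [3, 4]]               # rank 2
# One 2x2 RGB image: height x width x channels (red, green, blue values).
image = [[[255, 0, 0], [0, 255, 0]],
         [[0, 0, 255], [255, 255, 255]]]
# A "dataset" of 4 such images adds a batch dimension, as MNIST-style
# loaders do: batch x height x width x channels.
batch = [image] * 4

print(shape(vector))   # (3,)
print(shape(matrix))   # (2, 2)
print(shape(image))    # (2, 2, 3)
print(shape(batch))    # (4, 2, 2, 3)
```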


6.4.2 Computational graphs


Graph Definition


Computational graphs are a key component of deep learning frameworks like TensorFlow and PyTorch. They allow us to express complex neural network architectures efficiently and to differentiate through them automatically. A computational graph is a directed acyclic graph (DAG) in which each node represents an operation or variable, and edges represent data dependencies between them.


For example, a node might represent a matrix multiplication operation, taking two input matrices (or tensors) and producing an output matrix (or tensor). To visualize this, consider the simple example in Figure fig-computational-graph. The directed acyclic graph computes \(z = x \times y\), where each variable is simply a number.

Figure 6.4: Basic example of a computational graph.
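The multiply node of this graph can be sketched in a few lines of plain Python, including the backward pass that frameworks derive automatically; the `Node` class and `mul` function here are toy inventions, not any framework’s API.

```python
class Node:
    """A graph node holding a forward value and an accumulated gradient."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self._backward = lambda: None   # filled in by the op that created it

def mul(x, y):
    """Build a multiply node: forward computes x*y, backward applies the
    chain rule d(x*y)/dx = y and d(x*y)/dy = x, scaled by z.grad."""
    z = Node(x.value * y.value)
    def _backward():
        x.grad += y.value * z.grad
        y.grad += x.value * z.grad
    z._backward = _backward
    return z

x, y = Node(3.0), Node(4.0)
z = mul(x, y)        # forward pass: z.value == 12.0
z.grad = 1.0         # seed dz/dz = 1
z._backward()        # backward pass fills in x.grad and y.grad
print(z.value, x.grad, y.grad)   # 12.0 4.0 3.0
```

Real frameworks do exactly this at scale: every operation records how to propagate gradients to its inputs, and the graph structure determines the order of the backward sweep.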

Underneath the hood, the computational graphs represent abstractions for common layers like convolutional, pooling, recurrent, and dense layers, with data including activations, weights, and biases represented in tensors. Convolutional layers form the backbone of CNN models for computer vision. They detect spatial patterns in input data through learned filters. Recurrent layers like LSTMs and GRUs enable sequential data processing for tasks like language translation. Attention layers are used in transformers to draw global context from the entire input.


Layers are higher-level abstractions that define computations on top of those tensors. For example, a Dense layer performs a matrix multiplication and addition between input/weight/bias tensors. Note that a layer operates on tensors as inputs and outputs; the layer is not a tensor. Some key differences:

  • Layers contain state, like weights and biases. Tensors are stateless, just holding data.

  • Layers can modify internal state during training. Tensors are immutable/read-only.

  • Layers are higher-level abstractions. Tensors are at a lower level and directly represent data and math operations.

  • Layers define fixed computation patterns. Tensors flow between layers during execution.

  • Layers are used indirectly when building models.

So, while tensors are a core data structure that layers consume and produce, layers have additional functionality for defining parameterized operations and training. While a layer configures tensor operations under the hood, the layer remains distinct from the tensor objects. The layer abstraction makes building and training neural networks much more intuitive. This abstraction enables developers to build models by stacking these layers together without implementing the layer logic. For example, calling tf.keras.layers.Conv2D in TensorFlow creates a convolutional layer. The framework handles computing the convolutions, managing parameters, etc. This simplifies model development, allowing developers to focus on architecture rather than low-level implementations. Layer abstractions utilize highly optimized implementations for performance. They also enable portability, as the same architecture can run on different hardware backends like GPUs and TPUs.
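The stateful-layer versus stateless-tensor distinction can be sketched in plain Python. This `Dense` class is a toy stand-in for tf.keras.layers.Dense, with an invented deterministic initialization so the example is reproducible; real frameworks randomize weights and operate on tensor objects rather than lists.

```python
class Dense:
    """A layer: stateful (it owns weights and a bias) and reusable across calls."""
    def __init__(self, in_features, out_features):
        # Deterministic toy initialization, purely for illustration.
        self.weights = [[0.1 * (i + j) for j in range(out_features)]
                        for i in range(in_features)]
        self.bias = [0.0] * out_features

    def __call__(self, inputs):
        # The Dense computation described above: matrix multiply plus bias.
        return [sum(inputs[i] * self.weights[i][j] for i in range(len(inputs)))
                + self.bias[j]
                for j in range(len(self.bias))]

layer = Dense(in_features=3, out_features=2)
x = [1.0, 2.0, 3.0]     # a "tensor": stateless data flowing in
out = layer(x)          # another "tensor" flows out; the layer keeps its state
print(out)
```

Calling `layer` again with different inputs reuses the same weights, which is exactly the state that training updates while the tensors passing through remain plain data.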


In addition, computational graphs include activation functions like ReLU, sigmoid, and tanh that are essential to neural networks, and many frameworks provide these as standard abstractions. These functions introduce non-linearities that enable models to approximate complex functions. Frameworks provide these as simple, predefined operations that can be used when constructing models, for example, tf.nn.relu in TensorFlow. This abstraction enables flexibility, as developers can easily swap activation functions for tuning performance. Predefined activations are also optimized by the framework for faster execution.
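These activation functions are simple enough to define directly, which also shows why swapping them during tuning is cheap; the `dense_then_activate` helper below is a hypothetical single-neuron sketch, not a framework API.

```python
import math

def relu(x):
    # Zeroes out negative inputs, passes positives through unchanged.
    return max(0.0, x)

def sigmoid(x):
    # Squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes any real input into (-1, 1).
    return math.tanh(x)

def dense_then_activate(x, weight, bias, activation):
    # The activation is just a swappable node appended after the dense op.
    return activation(weight * x + bias)

for act in (relu, sigmoid, tanh):
    print(act.__name__, dense_then_activate(-1.0, 2.0, 0.5, act))
```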


In recent years, models like ResNets and MobileNets have emerged as popular architectures, with current frameworks pre-packaging these as computational graphs. Rather than worrying about the fine details, developers can utilize them as a starting point, customizing as needed by substituting layers. This simplifies and speeds up model development, avoiding reinventing architectures from scratch. Predefined models include well-tested, optimized implementations that ensure good performance. Their modular design also enables transferring learned features to new tasks via transfer learning. These predefined architectures provide high-performance building blocks to create robust models quickly.


These layer abstractions, activation functions, and predefined architectures the frameworks provide constitute a computational graph. When a user defines a layer in a framework (e.g., tf.keras.layers.Dense()), the framework configures computational graph nodes and edges to represent that layer. The layer parameters like weights and biases become variables in the graph. The layer computations become operation nodes (such as the x and y in the figure above). When you call an activation function like tf.nn.relu(), the framework adds a ReLU operation node to the graph. Predefined architectures are just pre-configured subgraphs that can be inserted into your model’s graph. Thus, model definition via high-level abstractions creates a computational graph—the layers, activations, and architectures we use become graph nodes and edges.


We implicitly construct a computational graph when defining a neural network architecture in a framework. The framework uses this graph to determine operations to run during training and inference. Computational graphs bring several advantages over raw code, and that’s one of the core functionalities that is offered by a good ML framework:

  • Explicit representation of data flow and operations

  • Ability to optimize the graph before execution

  • Automatic differentiation for training

  • Language agnosticism: the graph can be translated to run on GPUs, TPUs, etc.

  • Portability: the graph can be serialized, saved, and restored later

Computational graphs are the fundamental building blocks of ML frameworks. Model definition via high-level abstractions creates a computational graph—the layers, activations, and architectures we use become graph nodes and edges. The framework compilers and optimizers operate on this graph to generate executable code. The abstractions provide a developer-friendly API for building computational graphs. Under the hood, it’s still graphs all the way down! So, while you may not directly manipulate graphs as a framework user, they enable your high-level model specifications to be efficiently executed. The abstractions simplify model-building, while computational graphs make it possible.
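
To make the node-and-edge picture concrete, here is a minimal pure-Python sketch of a computational graph. This is an illustrative toy, not any framework's actual internals: operations become nodes, edges carry values between them, and evaluating the output node walks the graph.

```python
# Toy computational graph: nodes are operations, edges are the `inputs` links.
class Node:
    def __init__(self, op, inputs, name):
        self.op = op          # callable computing this node's value
        self.inputs = inputs  # upstream nodes (the graph's edges)
        self.name = name

    def evaluate(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value, name):
    return Node(lambda: value, [], name)

def mul(a, b):
    return Node(lambda x, y: x * y, [a, b], "mul")

def add(a, b):
    return Node(lambda x, y: x + y, [a, b], "add")

def relu(a):
    return Node(lambda x: max(0.0, x), [a], "relu")

# y = relu(x * w + b): defining the "layer" implicitly builds this graph.
x, w, b = constant(2.0, "x"), constant(-3.0, "w"), constant(1.0, "b")
y = relu(add(mul(x, w), b))
print(y.evaluate())  # relu(2 * -3 + 1) = relu(-5) = 0.0
```

Real frameworks do the same thing at scale: calling `tf.keras.layers.Dense()` or `tf.nn.relu()` appends operation nodes like these to the graph, with weights and biases as variable nodes.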


Static vs. Dynamic Graphs


Deep learning frameworks have traditionally followed one of two approaches for expressing computational graphs.


Static graphs (declare-then-execute): With this model, the entire computational graph must be defined upfront before running it. All operations and data dependencies must be specified during the declaration phase. TensorFlow originally followed this static approach - models were defined in a separate context, and then a session was created to run them. The benefit of static graphs is they allow more aggressive optimization since the framework can see the full graph. However, it also tends to be less flexible for research and interactivity. Changes to the graph require re-declaring the full model.


For example:

x = tf.placeholder(tf.float32)
y = tf.matmul(x, weights) + biases

The model is defined separately from execution, like building a blueprint. In TensorFlow 1.x, this is done using tf.Graph(). All ops and variables must be declared upfront. The graph is then compiled and optimized before running, and execution happens later by feeding in tensor values.


Dynamic graphs (define-by-run): Unlike declaring (all) first and then executing, the graph is built dynamically as execution happens. There is no separate declaration phase - operations execute immediately as defined. This style is imperative and flexible, facilitating experimentation.


PyTorch uses dynamic graphs, building the graph on the fly as execution happens. For example, consider the following code snippet, where the graph is built as the execution is taking place:

x = torch.randn(4, 784)
y = torch.matmul(x, weights) + biases

The above example has no separate compile/build/run phases; operations are defined and executed immediately. With dynamic graphs, definition is intertwined with execution, providing a more intuitive, interactive workflow. The downside is that there is less potential for optimization, since the framework only sees the graph as it is built.


Recently, however, the distinction has blurred as frameworks adopt both modes. TensorFlow 2.0 defaults to dynamic graph mode while letting users work with static graphs when needed. Dynamic declaration makes frameworks easier to use, while static models provide optimization benefits. The ideal framework offers both options.


Static graph declaration provides optimization opportunities but less interactivity. While dynamic execution offers flexibility and ease of use, it may have performance overhead. Here is a table comparing the pros and cons of static vs dynamic execution graphs:

Static (Declare-then-execute)
Pros: enables graph optimizations by seeing the full model ahead of time; can export and deploy frozen graphs; the graph is packaged independently of code.
Cons: less flexible for research and iteration; changes require rebuilding the graph; execution has separate compile and run phases.

Dynamic (Define-by-run)
Pros: intuitive imperative style like Python code; graph building interleaves with execution; graphs are easy to modify; debugging fits seamlessly into the workflow.
Cons: harder to optimize without the full graph; possible slowdowns from graph building during execution; can require more memory.

6.4.3 Data Pipeline Tools


Computational graphs can only be as good as the data they learn from and work on. Therefore, feeding training data efficiently is crucial for optimizing deep neural network performance, though it is often overlooked as one of the core functionalities. Many modern AI frameworks provide specialized pipelines to ingest, process, and augment datasets for model training.


Data Loaders


At the core of these pipelines are data loaders, which handle reading training examples from sources like files, databases, and object storage. Deep learning models require diverse data formats depending on the application. Popular formats include CSV, a versatile, simple format often used for tabular data; TFRecord, TensorFlow’s proprietary format, optimized for performance; Parquet, columnar storage offering efficient data compression and retrieval; JPEG/PNG, commonly used for image data; and WAV/MP3, prevalent formats for audio data. For instance, tf.data is TensorFlow’s data-loading pipeline: https://www.tensorflow.org/guide/data.


Data loaders batch examples to leverage vectorization support in hardware. Batching refers to grouping multiple data points for simultaneous processing, leveraging the vectorized computation capabilities of hardware like GPUs. While typical batch sizes range from 32 to 512 examples, the optimal size often depends on the data’s memory footprint and the specific hardware constraints. Advanced loaders can also stream datasets from disk, network, or cloud storage instead of fully loading them into memory, removing any practical limit on dataset size.


Data loaders can also shuffle data across epochs for randomization and preprocess features in parallel with model training to expedite the training process. Randomly shuffling the order of examples between training epochs reduces bias and improves generalization.


Data loaders also support caching and prefetching strategies to optimize data delivery for fast, smooth model training. Caching preprocessed batches in memory allows them to be reused efficiently during multiple training steps and eliminates redundant processing. Prefetching, in turn, preloads subsequent batches so that the model never idles waiting for data.
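
The batching, shuffling, and prefetching behaviors described above can be sketched in a few lines of pure Python. This is an illustrative toy, not any framework's API; the function name and parameters are our own:

```python
import random
from collections import deque

def data_loader(examples, batch_size=4, shuffle=True, prefetch=2, seed=0):
    """Toy loader: shuffle the example order, group into batches, and keep a
    small buffer of ready batches (a stand-in for background prefetching)."""
    order = list(range(len(examples)))
    if shuffle:
        random.Random(seed).shuffle(order)  # real loaders reshuffle per epoch
    buffer = deque()
    for start in range(0, len(order), batch_size):
        buffer.append([examples[i] for i in order[start:start + batch_size]])
        if len(buffer) > prefetch:          # yield the oldest ready batch
            yield buffer.popleft()
    while buffer:                           # drain remaining batches
        yield buffer.popleft()

batches = list(data_loader(list(range(10)), batch_size=4))
print([len(b) for b in batches])  # [4, 4, 2]
```

In a real pipeline, the buffered batches would be filled by a background thread or process so that preprocessing overlaps with model computation.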


6.4.4 Data Augmentation


Besides loading, data augmentation expands datasets synthetically. Augmentations apply random transformations for images like flipping, cropping, rotating, altering color, adding noise, etc. For audio, common augmentations involve mixing clips with background noise or modulating speed/pitch/volume.


Augmentations increase variation in the training data. Frameworks like TensorFlow and PyTorch simplify applying random augmentations each epoch by integrating them into the data pipeline. By programmatically increasing variation in the training data distribution, augmentations reduce overfitting and improve model generalization.
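
As a toy illustration, the sketch below applies two common image augmentations, a random horizontal flip and additive noise, to an image represented as a 2D list of pixel values. The function and the 0.1 noise level are our own assumptions for illustration, not a specific framework's API:

```python
import random

def augment(image, rng):
    """Toy augmentation: random horizontal flip plus small additive noise."""
    out = [row[:] for row in image]
    if rng.random() < 0.5:                  # flip left-right half the time
        out = [row[::-1] for row in out]
    noise = 0.1                             # assumed noise magnitude
    return [[p + rng.uniform(-noise, noise) for p in row] for row in out]

rng = random.Random(42)
img = [[0.0, 0.5], [1.0, 0.25]]
print(augment(img, rng))  # a slightly different image on each call
```

Applied freshly each epoch inside the data pipeline, such transforms mean the model rarely sees the exact same example twice.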


Together, performant data loaders and extensive augmentations enable practitioners to feed massive, varied datasets to neural networks efficiently. Hands-off data pipelines represent a significant improvement in usability and productivity. They allow developers to focus more on model architecture and less on data wrangling when training deep learning models.


6.4.5 Optimization Algorithms


Training a neural network is fundamentally an iterative process that seeks to minimize a loss function. The goal is to fine-tune the model weights and parameters to produce predictions close to the true target labels. Machine learning frameworks have greatly streamlined this process by offering extensive support in three critical areas: loss functions, optimization algorithms, and regularization techniques.


Loss functions quantify the difference between the model’s predictions and the true values. Different tasks require different loss functions, as the loss function defines the “objective” the model aims to optimize. Commonly used loss functions are Mean Squared Error (MSE) for regression tasks and Cross-Entropy Loss for classification tasks.


To demonstrate some of the loss functions, imagine you have a set of inputs and the corresponding true outputs, where \(Y_n\) denotes the true output for the \(n\)-th example. The inputs are fed into the model, which produces a prediction we can call \(\hat{Y_n}\). With the predicted and true values, we can, for example, use the MSE to calculate the loss:


\[MSE = \frac{1}{N}\sum_{n=1}^{N}(Y_n - \hat{Y_n})^2\]


If the problem is a classification problem, we do not want to use the MSE, since the distance between the predicted and true values does not carry significant meaning. For example, in recognizing handwritten digits, the fact that 9 is numerically further from 2 than 3 is does not make predicting 9 a worse mistake than predicting 3. Therefore, we use the cross-entropy loss function, which is defined as:


\[\text{Cross-Entropy} = -\sum_{n=1}^{N}Y_n\log(\hat{Y_n})\]
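
Both loss functions translate directly into code. The following sketch implements the two formulas above, where the cross-entropy version takes one-hot targets and predicted probabilities:

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average of (Y_n - Yhat_n)^2."""
    n = len(y_true)
    return sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / n

def cross_entropy(y_true, y_pred):
    """Cross-entropy: -sum of Y_n * log(Yhat_n).
    y_true is a one-hot target vector, y_pred a vector of probabilities."""
    return -sum(y * math.log(yh) for y, yh in zip(y_true, y_pred) if y > 0)

print(mse([1.0, 2.0], [1.5, 1.5]))                # (0.25 + 0.25) / 2 = 0.25
print(cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]))  # -log(0.8) ≈ 0.223
```

Note how cross-entropy only penalizes the probability assigned to the true class, which is why the "distance" between class labels plays no role.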


Once a loss like the above is computed, we need methods to adjust the model’s parameters to reduce this loss or error during training. To do so, current frameworks use a gradient-based approach, which computes how the value of the loss function changes as each weight is tuned. Knowing this gradient, the optimizer moves the weights in the direction that reduces the loss. Many challenges are associated with this, primarily stemming from the fact that the optimization problem is generally non-convex, making it far from easy to solve. Modern frameworks come equipped with efficient implementations of several optimization algorithms, many of which are variants of gradient descent with stochastic methods and adaptive learning rates. More details and clear examples can be found in the AI Training section.
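
As a minimal illustration of the gradient-based approach, here is plain gradient descent fitting a single weight w in y ≈ w·x to toy data; the learning rate and iteration count are arbitrary choices for the example:

```python
# Gradient descent minimizing MSE for y ≈ w * x on toy data (true w = 2).
xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w, lr = 0.0, 0.05
for _ in range(200):
    # dL/dw for L = mean((w*x - y)^2) is mean(2 * x * (w*x - y))
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step against the gradient to reduce the loss
print(round(w, 3))  # ≈ 2.0
```

Frameworks automate exactly this loop: automatic differentiation supplies `grad` for every parameter, and optimizers like Adam refine the simple `w -= lr * grad` update.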


Lastly, overly complex models tend to overfit, meaning they perform well on the training data but fail to generalize to new, unseen data (see Overfitting). To counteract this, regularization methods are employed to penalize model complexity and encourage the model to learn simpler patterns. For example, dropout randomly sets a fraction of input units to 0 at each update during training, which helps prevent overfitting.


However, there are cases where the problem is more complex than the model can represent, which may result in underfitting. Therefore, choosing the right model architecture is also a critical step in the training process. Further heuristics and techniques are discussed in the AI Training section.


Frameworks also efficiently implement optimizers such as gradient descent, Adagrad, Adadelta, and Adam. Adding regularization, such as dropout and L1/L2 penalties, prevents overfitting during training. Batch normalization accelerates training by normalizing inputs to layers.


6.4.6 Model Training Support


A compilation step is required before training a defined neural network model. During this step, the neural network’s high-level architecture is transformed into an optimized, executable format. This process comprises several steps. The first step is to construct the computational graph, which represents all the mathematical operations and data flow within the model. We discussed this earlier.


During training, the focus is on executing the computational graph. Every parameter within the graph, such as weights and biases, is assigned an initial value. Depending on the chosen initialization method, this value might be random or based on a predefined logic.


The next critical step is memory allocation. Essential memory is reserved for the model’s operations on both CPUs and GPUs, ensuring efficient data processing. The model’s operations are then mapped to the available hardware resources, particularly GPUs or TPUs, to expedite computation. Once the compilation is finalized, the model is prepared for training.


The training process employs various tools to enhance efficiency. Batch processing is commonly used to maximize computational throughput. Techniques like vectorization enable operations on entire data arrays rather than proceeding element-wise, which bolsters speed. Optimizations such as kernel fusion (refer to the Optimizations chapter) amalgamate multiple operations into a single action, minimizing computational overhead. Operations can also be segmented into phases, facilitating the concurrent processing of different mini-batches at various stages.


Frameworks consistently checkpoint the state, preserving intermediate model versions during training. This ensures that progress is recovered if an interruption occurs, and training can be recommenced from the last checkpoint. Additionally, the system vigilantly monitors the model’s performance against a validation data set. Should the model begin to overfit (if its performance on the validation set declines), training is automatically halted, conserving computational resources and time.
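
The checkpoint and early-stopping logic described above can be sketched as follows, using a list of simulated validation losses in place of a real training loop; the function and its `patience` parameter are illustrative stand-ins, not a specific framework's API:

```python
def train_with_early_stopping(val_losses, patience=3):
    """Keep the best 'checkpoint'; stop after `patience` epochs without
    improvement on the validation loss."""
    best_loss, best_epoch, bad_epochs = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            # New best: a framework would save a model checkpoint here.
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # early stop; restore the checkpoint from best_epoch
    return best_epoch, best_loss

# Validation loss improves, then degrades: training halts automatically.
print(train_with_early_stopping([1.0, 0.7, 0.6, 0.65, 0.66, 0.7, 0.8]))
# best was epoch 2 with loss 0.6
```

Real framework callbacks work the same way, except "checkpoint" means serializing the full model state to disk so training can resume from it.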


ML frameworks incorporate a blend of model compilation, enhanced batch processing methods, and utilities such as checkpointing and early stopping. These resources manage the complex aspects of performance, enabling practitioners to zero in on model development and training. As a result, developers experience both speed and ease when utilizing neural networks’ capabilities.


6.4.7 Validation and Analysis


After training deep learning models, frameworks provide utilities to evaluate performance and gain insights into the models’ workings. These tools enable disciplined experimentation and debugging.


Evaluation Metrics


Frameworks include implementations of common evaluation metrics for validation:

  • Accuracy - fraction of predictions that are correct overall; widely used for classification.

  • Precision - of the positive predictions, how many were actually positive; useful for imbalanced datasets.

  • Recall - of the actual positives, how many were predicted correctly; measures completeness.

  • F1-score - harmonic mean of precision and recall; combines both metrics.

  • AUC-ROC - area under the ROC curve; used for classification threshold analysis.

  • MAP - Mean Average Precision; evaluates ranked predictions in retrieval/detection.

  • Confusion matrix - shows true positives, true negatives, false positives, and false negatives; provides a more detailed view of classification performance.

These metrics quantify model performance on validation data for comparison.
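
Several of these metrics follow directly from the confusion matrix counts. A minimal sketch:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts
    (assumes no denominator is zero, for brevity)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, correct ones
    recall = tp / (tp + fn)             # of actual positives, found ones
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# e.g. 8 true positives, 2 false positives, 1 false negative, 9 true negatives
print(classification_metrics(8, 2, 1, 9))  # (0.85, 0.8, ~0.889, ~0.842)
```

Framework metric utilities add streaming accumulation across batches and edge-case handling, but compute the same quantities.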


Visualization


Visualization tools provide insight into models:

  • Loss curves - plot training and validation loss over time to spot overfitting.

  • Activation grids - illustrate features learned by convolutional filters.

  • Projection - reduce dimensionality for intuitive visualization.

  • Precision-recall curves - assess classification tradeoffs.

Tools like TensorBoard for TensorFlow and TensorWatch for PyTorch enable real-time metrics and visualization during training.


6.4.8 Differentiable programming


Machine learning training methods such as backpropagation rely on the change in the loss function with respect to the change in weights (which essentially is the definition of derivatives). Thus, the ability to quickly and efficiently train large machine learning models relies on the computer’s ability to take derivatives. This makes differentiable programming one of the most important elements of a machine learning framework.


We can use four primary methods to make computers take derivatives. First, we can work out the derivatives by hand and input them into the computer; with many layers of neural networks, computing every derivative in the backpropagation steps this way would quickly become a nightmare. Second, symbolic differentiation using computer algebra systems such as Mathematica can introduce inefficiency, as a layer of abstraction is needed to take derivatives. Third, numerical differentiation, which approximates gradients using finite difference methods, suffers from high computational cost and approximation errors that depend on the choice of step size. This leads to automatic differentiation, which exploits the primitive operations computers use to represent computations to obtain an exact derivative. With automatic differentiation, the computational complexity of computing the gradient is proportional to that of computing the function itself. End users rarely need to deal with the intricacies of automatic differentiation, but resources to learn more are widely available, such as from here. Today, automatic differentiation and differentiable programming are ubiquitous and are handled efficiently and automatically by modern machine learning frameworks.
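
To illustrate the core idea, here is a toy forward-mode automatic differentiation implementation using dual numbers: each value carries its derivative alongside it, and each primitive operation applies the corresponding differentiation rule. (Real frameworks typically use reverse mode, which is more efficient for functions of many parameters, but the principle of propagating exact derivatives through primitives is the same.)

```python
class Dual:
    """Forward-mode autodiff: a value paired with its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        # sum rule: (f + g)' = f' + g'
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # product rule: (f * g)' = f' g + f g'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def f(x):            # f(x) = x*x + x, so f'(x) = 2x + 1
    return x * x + x

x = Dual(3.0, 1.0)   # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)  # 12.0 7.0  (f(3) = 12, f'(3) = 7)
```

Because each primitive applies an exact rule, the result is exact (up to floating point), unlike finite differences, and costs only a constant factor more than evaluating f itself.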


6.4.9 Hardware Acceleration


The trend to continuously train and deploy larger machine-learning models has made hardware acceleration support necessary for machine-learning platforms. Figure fig-hardware-accelerator shows the large number of companies that are offering hardware accelerators in different domains, such as “Very Low Power” and “Embedded” machine learning. Deep layers of neural networks require many matrix multiplications, which attract hardware that can compute matrix operations quickly and in parallel. In this landscape, two hardware architectures, the GPU and TPU, have emerged as leading choices for training machine learning models.


The use of hardware accelerators began with AlexNet, which paved the way for future works to utilize GPUs as hardware accelerators for training computer vision models. GPUs, or Graphics Processing Units, excel in handling many computations at once, making them ideal for the matrix operations central to neural network training. Their architecture, designed for rendering graphics, is perfect for the mathematical operations required in machine learning. While they are very useful for machine learning tasks and have been implemented in many hardware platforms, GPUs are still general purpose in that they can be used for other applications.


On the other hand, Tensor Processing Units (TPUs) are hardware units designed specifically for neural networks. They focus on the multiply-and-accumulate (MAC) operation, and their hardware consists of a large matrix of processing elements that compute MAC operations efficiently. This concept, called the systolic array architecture, was pioneered by Kung and Leiserson (1979) and has proven to be a useful structure for efficiently computing matrix products and other operations within neural networks (such as convolutions).

Kung, Hsiang Tsung, and Charles E. Leiserson. 1979. “Systolic Arrays (for VLSI).” In Sparse Matrix Proceedings 1978, 1:256–82. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.

While TPUs can drastically reduce training times, they also have disadvantages. For example, many operations within machine learning frameworks (primarily TensorFlow, which the TPU integrates with directly) are not supported by TPUs. They also cannot support custom operations from the machine learning frameworks, and the network design must closely align with the hardware capabilities.


Today, NVIDIA GPUs dominate training, aided by software libraries like CUDA, cuDNN, and TensorRT. Frameworks also include optimizations to maximize performance on these hardware types, like pruning unimportant connections and fusing layers. Combining these techniques with hardware acceleration provides greater efficiency. For inference, hardware is increasingly moving towards optimized ASICs and SoCs. Google’s TPUs accelerate models in data centers. Apple, Qualcomm, and others now produce AI-focused mobile chips. The NVIDIA Jetson family targets autonomous robots.

Figure 6.5: Companies offering ML hardware accelerators. Credit: Gradient Flow.

6.5 Advanced Features


6.5.1 Distributed training


As machine learning models have become larger over the years, it has become essential for large models to utilize multiple computing nodes in the training process. This process, called distributed learning, allows larger models to be trained but also imposes implementation challenges.


We can consider three different ways to spread the work of training machine learning models to multiple computing nodes. Input data partitioning refers to multiple processors running the same model on different input partitions. This is the easiest implementation and is available for many machine learning frameworks. The more challenging distribution of work comes with model parallelism, which refers to multiple computing nodes working on different parts of the model, and pipelined model parallelism, which refers to multiple computing nodes working on different layers of the model on the same input. The latter two mentioned here are active research areas.
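
Input data partitioning can be sketched as follows: each simulated worker computes gradients on its own data shard, and the averaged gradient updates the shared parameters, just as if one large batch had been processed. This is a sequential toy; real systems run the workers in parallel and average gradients via collective communication (an all-reduce):

```python
def worker_gradient(w, shard):
    """Gradient of mean squared error for y ≈ w * x on this worker's shard."""
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

# Two workers, each holding its own partition of the data (true w = 2).
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(300):
    grads = [worker_gradient(w, s) for s in shards]  # in parallel in practice
    w -= 0.01 * sum(grads) / len(grads)              # "all-reduce" average
print(round(w, 3))  # ≈ 2.0
```

Because every worker applies the same averaged update, all replicas of the model stay synchronized without ever exchanging raw training data between workers.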


ML frameworks that support distributed learning include TensorFlow (through its tf.distribute module), PyTorch (through its torch.nn.DataParallel and torch.nn.DistributedDataParallel modules), and MXNet (through its gluon API).


6.5.2 Model Conversion


Machine learning models can be represented in various formats for use within different frameworks and on different device types. For example, a model can be converted to be compatible with inference frameworks on mobile devices. The default format for TensorFlow models is checkpoint files containing weights and architectures, which are needed to retrain the models. For mobile deployment, however, models are typically converted to TensorFlow Lite format. TensorFlow Lite uses a compact flat buffer representation and optimizations for fast inference on mobile hardware, discarding all the unnecessary baggage associated with training metadata, such as checkpoint file structures.


Model optimizations like quantization (see Optimizations chapter) can further optimize models for target architectures like mobile. This reduces the precision of weights and activations to uint8 or int8 for a smaller footprint and faster execution with supported hardware accelerators. For post-training quantization, TensorFlow’s converter handles analysis and conversion automatically.
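
The core of this kind of quantization is an affine mapping, real ≈ scale × (q − zero_point), with q stored as an 8-bit integer. A minimal sketch of the idea (the helper names here are our own; converters like TensorFlow's perform this analysis automatically, per tensor or per channel):

```python
def quantize(weights):
    """Map float weights onto the uint8 range [0, 255] via an affine scheme:
    real ≈ scale * (q - zero_point)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0
    zero_point = round(-lo / scale)  # the uint8 code representing real 0.0
    q = [round(w / scale) + zero_point for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (v - zero_point) for v in q]

w = [-0.5, 0.0, 0.3, 1.0]
q, s, z = quantize(w)
print(q)                    # small integers in [0, 255]
print(dequantize(q, s, z))  # close to the original weights
```

Storing `q` instead of 32-bit floats cuts the model size by roughly 4x, and integer MACs on supported accelerators run faster than float ones, at the cost of the small rounding error visible in the round trip.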


Frameworks like TensorFlow simplify deploying trained models to mobile and embedded IoT devices through easy conversion APIs for TFLite format and quantization. Ready-to-use conversion enables high-performance inference on mobile without a manual optimization burden. Besides TFLite, other common targets include TensorFlow.js for web deployment, TensorFlow Serving for cloud services, and TensorFlow Hub for transfer learning. TensorFlow’s conversion utilities handle these scenarios to streamline end-to-end workflows.


More information about model conversion in TensorFlow is linked here.


6.5.3 AutoML, No-Code/Low-Code ML


In many cases, machine learning can have a relatively high barrier to entry compared to other fields. To successfully train and deploy models, one needs a critical understanding of a variety of disciplines, from data science (data processing, data cleaning) to model structures (hyperparameter tuning, neural network architecture) to hardware (acceleration, parallel processing), and more depending on the problem at hand. The complexity of these problems has led to the introduction of frameworks such as AutoML, which aims to make “machine learning available for non-machine learning experts” and to “automate research in machine learning.” Efforts in this direction include AutoWEKA, which aids in the complex process of hyperparameter selection, and Auto-sklearn and Auto-PyTorch, extensions of AutoWEKA into the popular scikit-learn and PyTorch libraries.


While these efforts to automate parts of machine learning tasks are underway, others have focused on making machine learning easier through no-code/low-code platforms that offer a drag-and-drop, easy-to-navigate user interface. Companies such as Apple, Google, and Amazon have already created these easy-to-use platforms to allow users to construct machine learning models that integrate into their ecosystems.


These steps to remove barriers to entry continue to democratize machine learning, make it easier for beginners to access, and simplify workflow for experts.


6.5.4 Advanced Learning Methods


Transfer Learning


Transfer learning is the practice of using knowledge gained from a pre-trained model to train and improve the performance of a model for a different task. For example, models pre-trained on the ImageNet dataset, such as MobileNet and ResNet, can help classify other image datasets. To do so, one may freeze the pre-trained model and utilize it as a feature extractor, training a much smaller model built on top of the extracted features. One can also fine-tune the entire model to fit the new task.
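
The freeze-and-train-a-head recipe can be illustrated with a toy numeric example: a "pre-trained" feature extractor is held fixed, and only a small linear head on top of it is trained. All names and data here are invented for illustration; a real backbone would be a deep network with millions of frozen weights:

```python
def frozen_features(x):
    """Stand-in for a pre-trained backbone; its 'weights' are never updated."""
    return [x, x * x]

def train_head(data, lr=0.01, steps=3000):
    """Fit only the head's weights on top of the frozen features (SGD)."""
    w = [0.0, 0.0]  # the small trainable head
    for _ in range(steps):
        for x, y in data:
            feats = frozen_features(x)
            err = sum(wi * fi for wi, fi in zip(w, feats)) - y
            w = [wi - lr * 2 * err * fi for wi, fi in zip(w, feats)]
    return w

data = [(1.0, 3.0), (2.0, 10.0), (0.5, 1.0)]  # consistent with y = x + 2*x^2
w = train_head(data)
print([round(v, 2) for v in w])  # close to [1.0, 2.0]
```

Because only two parameters are learned while the feature extractor stays fixed, training is fast and needs little data, which is precisely the appeal of transfer learning.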


Transfer learning has challenges, such as the modified model’s inability to perform its original task after transfer learning. Papers such as “Learning without Forgetting” by Z. Li and Hoiem (2018) aim to address these challenges and have been implemented in modern machine learning platforms.

Li, Zhizhong, and Derek Hoiem. 2018. “Learning Without Forgetting.” IEEE Trans. Pattern Anal. Mach. Intell. 40 (12): 2935–47. https://doi.org/10.1109/tpami.2017.2773081.

Federated Learning


Consider the problem of labeling items in photos taken on personal devices. One approach is to move the image data from the devices to a central server, where a single model is trained on the collected images. However, this presents many potential challenges. First, with many devices, one needs a massive network infrastructure to move and store the data centrally; with the number of devices present today, this is often not feasible and very costly. Furthermore, moving personal data such as photos to central servers raises serious privacy concerns.


Federated learning by McMahan et al. (2017) is a form of distributed computing that resolves these issues by distributing the model to personal devices so that training happens on-device (Figure fig-federated-learning). Initially, a base global model is trained on a central server and distributed to all devices. Using this base model, the devices individually compute gradients and send them back to the central hub. Intuitively, this transfers model parameters instead of the data itself. This approach allows the model to be trained with many different datasets (in our example, the set of images on personal devices) without transferring a large amount of potentially sensitive data. However, federated learning also comes with a series of challenges.

McMahan, Brendan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agüera y Arcas. 2017. “Communication-Efficient Learning of Deep Networks from Decentralized Data.” In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, Fort Lauderdale, FL, USA, edited by Aarti Singh and Xiaojin (Jerry) Zhu, 54:1273–82. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v54/mcmahan17a.html.
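
A single round of this scheme can be sketched in pure Python: each simulated device fits a one-parameter model on its private data shard, and the server averages the returned parameters. This is a toy in the spirit of federated averaging; real systems exchange full parameter vectors or updates over the network, and the data values below are invented for illustration:

```python
def local_update(w, local_data, lr=0.05, epochs=20):
    """On-device training: gradient descent on this device's private shard."""
    for _ in range(epochs):
        grad = sum(2 * x * (w * x - y) for x, y in local_data) / len(local_data)
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    local_ws = [local_update(global_w, d) for d in devices]  # runs on-device
    return sum(local_ws) / len(local_ws)                     # server averages

# Three devices with private (x, y) shards, roughly consistent with w = 2.
devices = [[(1.0, 2.1)], [(1.0, 1.9)], [(2.0, 4.0)]]
w = 0.0
for _ in range(10):
    w = federated_round(w, devices)
print(round(w, 2))  # ≈ 2.0
```

Only the learned parameter ever leaves a device; the raw `(x, y)` pairs stay local, which is the privacy argument for the approach.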

In many real-world situations, data collected from devices may lack suitable labels. Compounding this issue, users, the primary data source, can often be unreliable, so even when data is labeled, its accuracy or relevance is not guaranteed. Furthermore, each user’s data is unique, resulting in significant variance in the data generated by different users. This non-IID nature of the data, coupled with unbalanced data production where some users generate more data than others, can adversely impact the performance of the global model. Researchers have worked to compensate for this, for example by adding a proximal term to balance the local and global models or by adding a frozen global hypersphere classifier.


Additional challenges are associated with federated learning. The number of mobile device owners can far exceed the average number of training samples on each device, leading to substantial communication overhead. This issue is particularly pronounced in the context of mobile networks, which are often used for such communication and can be unstable. This instability can result in delayed or failed transmission of model updates, thereby affecting the overall training process.


The heterogeneity of device resources is another hurdle. Devices participating in federated learning can have varying computational power and memory capacity, and this diversity makes it challenging to design algorithms that are efficient across all devices. Privacy and security are not guaranteed either: techniques such as gradient inversion attacks can extract information about the training data from the model parameters. Despite these challenges, the many potential benefits continue to make federated learning a popular research area. Open-source frameworks such as Flower have been developed to simplify implementing federated learning with various machine learning frameworks.


Figure fig-federated-learning illustrates an example of federated learning. Consider a model used for medical predictions by different hospitals. Given that medical data is extremely sensitive and must be kept private, it can’t be transferred to a centralized server for training. Instead, each hospital fine-tunes/trains the base model using its own private data, while only communicating non-sensitive information with the federated server, such as the learned parameters.

Figure 6.6: A centralized-server approach to federated learning. Credit: NVIDIA.

6.6 Framework Specialization


Thus far, we have talked about ML frameworks generally. However, typically, frameworks are optimized based on the target environment’s computational capabilities and application requirements, ranging from the cloud to the edge to tiny devices. Choosing the right framework is crucial based on the target environment for deployment. This section provides an overview of the major types of AI frameworks tailored for cloud, edge, and TinyML environments to help understand the similarities and differences between these ecosystems.


6.6.1 Cloud


Cloud-based AI frameworks assume access to ample computational power, memory, and storage resources in the cloud, and they generally support both training and inference. They are suited for applications where data can be sent to the cloud for processing, such as cloud-based AI services, large-scale data analytics, and web applications. Popular cloud AI frameworks include the ones we mentioned earlier, such as TensorFlow, PyTorch, MXNet, and Keras. These frameworks utilize GPUs, TPUs, distributed training, and AutoML to deliver scalable AI. Concepts like model serving, MLOps, and AIOps relate to the operationalization of AI in the cloud. Cloud AI powers services like Google Cloud AI and enables transfer learning using pre-trained models.


6.6.2 Edge


Edge AI frameworks are tailored to deploy AI models on IoT devices, smartphones, and edge servers. They are optimized for devices with moderate computational resources, balancing power and performance, and are ideal for applications requiring real-time or near-real-time processing, including robotics, autonomous vehicles, and smart devices. Key edge AI frameworks include TensorFlow Lite, PyTorch Mobile, Core ML, and others. They employ optimizations like model compression, quantization, and efficient neural network architectures. Hardware support includes CPUs, GPUs, NPUs, and accelerators like the Edge TPU. Edge AI enables use cases like mobile vision, speech recognition, and real-time anomaly detection.


6.6.3 Embedded


TinyML frameworks are specialized for deploying AI models on extremely resource-constrained devices, specifically microcontrollers and sensors within the IoT ecosystem. They emphasize minimal memory and power consumption, targeting use cases such as predictive maintenance, gesture recognition, and environmental monitoring on resource-constrained IoT devices. Major TinyML frameworks include TensorFlow Lite Micro, uTensor, and ARM NN. They optimize complex models to fit within kilobytes of memory through techniques like quantization-aware training and reduced precision. TinyML allows intelligent sensing across battery-powered devices, enabling collaborative learning via federated learning. The choice of framework involves balancing model performance against the computational constraints of the target platform, whether cloud, edge, or TinyML. Table tbl-ml_frameworks compares the major AI frameworks across cloud, edge, and TinyML environments:

Table 6.2: Comparison of framework types for Cloud AI, Edge AI, and TinyML.

| Framework Type | Examples | Key Technologies | Use Cases |
|---|---|---|---|
| Cloud AI | TensorFlow, PyTorch, MXNet, Keras | GPUs, TPUs, distributed training, AutoML, MLOps | Cloud services, web apps, big data analytics |
| Edge AI | TensorFlow Lite, PyTorch Mobile, Core ML | Model optimization, compression, quantization, efficient NN architectures | Mobile apps, robots, autonomous systems, real-time processing |
| TinyML | TensorFlow Lite Micro, uTensor, ARM NN | Quantization-aware training, reduced precision, neural architecture search | IoT sensors, wearables, predictive maintenance, gesture recognition |

Key differences:

  • Cloud AI leverages massive computational power for complex models using GPUs/TPUs and distributed training.

  • Edge AI optimizes models to run locally on resource-constrained edge devices.

  • TinyML fits models into extremely low-memory, low-compute environments like microcontrollers.

6.7 Embedded AI Frameworks


6.7.1 Resource Constraints


Embedded systems face severe resource constraints that pose unique challenges when deploying machine learning models compared to traditional computing platforms. For example, microcontroller units (MCUs) commonly used in IoT devices often have:

  • RAM ranges from tens of kilobytes to a few megabytes. The popular ESP8266 MCU has around 80 KB of RAM available to developers. This contrasts with 8 GB or more on typical laptops and desktops today.

  • Flash storage ranges from hundreds of kilobytes to a few megabytes. The Arduino Uno microcontroller provides just 32 KB of code storage. Standard computers today have disk storage on the order of terabytes.

  • Processing power ranges from just a few MHz to approximately 200 MHz. The ESP8266 operates at 80 MHz. This is several orders of magnitude slower than the multi-GHz multi-core CPUs in servers and high-end laptops.
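A back-of-the-envelope check makes these constraints concrete. The device figures below come from the text; the 50,000-parameter model is a hypothetical example:

```python
# Rough feasibility check: parameter storage vs. device memory.
# The ESP8266 and Arduino Uno figures are from the text; the
# 50k-parameter model is hypothetical.

def model_bytes(num_params, bits_per_param):
    return num_params * bits_per_param // 8

esp8266_ram = 80 * 1024        # ~80 KB RAM available to developers
arduino_uno_flash = 32 * 1024  # 32 KB code storage

params = 50_000

fp32_size = model_bytes(params, 32)  # 200,000 bytes
int8_size = model_bytes(params, 8)   #  50,000 bytes

print(fp32_size > esp8266_ram)         # True: fp32 weights alone exceed the RAM
print(int8_size <= esp8266_ram)        # True: int8 quantization makes it plausible
print(int8_size <= arduino_uno_flash)  # False: still too big for the Uno's flash
```

Even this crude estimate ignores activations, framework overhead, and the application itself, so real budgets are tighter still.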

These tight constraints often make training machine learning models directly on microcontrollers infeasible. The limited RAM precludes handling large datasets for training. Energy usage for training would also quickly deplete battery-powered devices. Instead, models are trained on resource-rich systems and deployed on microcontrollers for optimized inference. But even inference poses challenges:

  1. Model Size: AI models are often too large to fit on embedded and IoT devices. This necessitates model compression techniques such as quantization, pruning, and knowledge distillation. Additionally, as we will see, many of the frameworks developers use for AI development carry large amounts of overhead and built-in libraries that embedded systems can’t support.

  2. Complexity of Tasks: With only tens of KBs to a few MBs of RAM, IoT devices and embedded systems are constrained in the complexity of tasks they can handle. Tasks that require large datasets or sophisticated algorithms (for example, LLMs) that would run smoothly on traditional computing platforms might be infeasible on embedded systems without compression or other optimization techniques due to memory limitations.

  3. Data Storage and Processing: Embedded systems often process data in real time and might only store small amounts locally. Conversely, traditional computing systems can hold and process large datasets in memory, enabling faster data operations, analysis, and real-time updates.

  4. Security and Privacy: Limited memory also restricts the complexity of the security algorithms and protocols, data encryption, reverse-engineering protections, and more that can be implemented on the device. This can make some IoT devices more vulnerable to attacks.

Consequently, specialized software optimizations and ML frameworks tailored for microcontrollers must work within these tight resource bounds. Clever optimization techniques like quantization, pruning, and knowledge distillation compress models to fit within limited memory (see Optimizations section). Learnings from neural architecture search help guide model designs.
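To illustrate the core quantization idea (a generic sketch, not any particular framework’s implementation), the following maps float weights to int8 using an affine scale and zero-point, then dequantizes to show that the approximation error stays within one quantization step:

```python
# Affine int8 quantization sketch: q = clamp(round(x / scale) + zero_point).
# Illustrative only; real frameworks add per-channel scales, calibration, etc.

def quantize(values, qmin=-128, qmax=127):
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)
print(max_err <= scale)  # True: error bounded by one quantization step
```

The payoff is storage: each weight shrinks from 4 bytes to 1, at the cost of a small, bounded approximation error.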


Hardware improvements like dedicated ML accelerators on microcontrollers also help alleviate constraints. For instance, Qualcomm’s Hexagon DSP accelerates TensorFlow Lite models on Snapdragon mobile chips. Google’s Edge TPU packs ML performance into a tiny ASIC for edge devices. ARM Ethos-U55 offers efficient inference on Cortex-M class microcontrollers. These customized ML chips unlock advanced capabilities for resource-constrained applications.


Due to limited processing power, it’s almost always infeasible to train AI models on IoT or embedded systems. Instead, models are trained on powerful traditional computers (often with GPUs) and then deployed on the embedded device for inference. TinyML specifically deals with this, ensuring models are lightweight enough for real-time inference on these constrained devices.


6.7.2 Frameworks & Libraries


Embedded AI frameworks are software tools and libraries designed to enable AI and ML capabilities on embedded systems. These frameworks are essential for bringing AI to IoT devices, robotics, and other edge computing platforms, and they are designed to work where computational resources, memory, and power consumption are limited.


6.7.3 Challenges


While embedded systems present an enormous opportunity for deploying machine learning to enable intelligent capabilities at the edge, these resource-constrained environments pose significant challenges. Unlike typical cloud or desktop environments rich with computational resources, embedded devices introduce severe constraints around memory, processing power, energy efficiency, and specialized hardware. As a result, existing machine learning techniques and frameworks designed for server clusters with abundant resources do not directly translate to embedded systems. This section uncovers some of the challenges and opportunities for embedded systems and ML frameworks.


Fragmented Ecosystem


The lack of a unified ML framework led to a highly fragmented ecosystem. Engineers at companies like STMicroelectronics, NXP Semiconductors, and Renesas had to develop custom solutions tailored to their specific microcontroller and DSP architectures. These ad-hoc frameworks required extensive manual optimization for each low-level hardware platform. This made porting models extremely difficult, requiring redevelopment for new Arm, RISC-V, or proprietary architectures.


Disparate Hardware Needs


Without a shared framework, there was no standard way to assess hardware capabilities. Vendors like Intel, Qualcomm, and NVIDIA created integrated solutions, blending models with improvements to both software and hardware. This made it hard to discern the sources of performance gains: whether new chip designs, like Intel’s low-power x86 cores, or software optimizations were responsible. A standard framework was needed so vendors could evaluate their hardware’s capabilities fairly and reproducibly.


Lack of Portability


Without standardized tools, adapting models trained in common frameworks like TensorFlow or PyTorch to run efficiently on microcontrollers was difficult. It required time-consuming manual translation of models to run on specialized DSPs from companies like CEVA or low-power Arm M-series cores. No turnkey tools existed to enable portable deployment across different architectures.


Incomplete Infrastructure


The infrastructure to support key model development workflows was lacking. Support for compression techniques to fit large models within constrained memory budgets was limited. Tools for quantization to lower precision for faster inference were missing. Standardized APIs for integration into applications were incomplete. Essential functionality like on-device debugging, metrics, and performance profiling was absent. These gaps increased the cost and difficulty of embedded ML development.


No Standard Benchmark


Without unified benchmarks, there was no standard way to assess and compare the capabilities of different hardware platforms from vendors like NVIDIA, Arm, and Ambiq Micro. Existing evaluations relied on proprietary benchmarks tailored to showcase the strengths of particular chips. This made it impossible to measure hardware improvements objectively in a fair, neutral manner. The Benchmarking AI chapter discusses this topic in more detail.


Minimal Real-World Testing


Many of the benchmarks relied on synthetic data. Rigorously testing models on real-world embedded applications was difficult without standardized datasets and benchmarks, raising questions about how performance claims would translate to real-world usage. More extensive testing was needed to validate chips in actual use cases.


The lack of shared frameworks and infrastructure slowed TinyML adoption, hampering the integration of ML into embedded products. Recent standardized frameworks have begun addressing these issues through improved portability, performance profiling, and benchmarking support. However, ongoing innovation is still needed to enable seamless, cost-effective deployment of AI to edge devices.


Summary


The absence of standardized frameworks, benchmarks, and infrastructure for embedded ML has traditionally hampered adoption. Recent progress has been made in developing shared frameworks like TensorFlow Lite Micro and benchmark suites like MLPerf Tiny that aim to accelerate the proliferation of TinyML solutions. Still, overcoming the fragmentation and difficulty of embedded deployment remains an ongoing process.


6.8 Examples


Machine learning deployment on microcontrollers and other embedded devices often requires specially optimized software libraries and frameworks to work within tight memory, compute, and power constraints. Several options exist for performing inference on such resource-limited hardware, each with its approach to optimizing model execution. This section will explore the key characteristics and design principles behind TFLite Micro, TinyEngine, and CMSIS-NN, providing insight into how each framework tackles the complex problem of high-accuracy yet efficient neural network execution on microcontrollers. It will also showcase different approaches for implementing efficient TinyML frameworks.


Table tbl-compare_frameworks summarizes the key differences and similarities between these three specialized machine-learning inference frameworks for embedded systems and microcontrollers.

Table 6.3: Comparison of frameworks: TensorFlow Lite Micro, TinyEngine, and CMSIS-NN.

| Framework | TensorFlow Lite Micro | TinyEngine | CMSIS-NN |
|---|---|---|---|
| Approach | Interpreter-based | Static compilation | Optimized neural network kernels |
| Hardware Focus | General embedded devices | Microcontrollers | ARM Cortex-M processors |
| Arithmetic Support | Floating point | Floating point, fixed point | Floating point, fixed point |
| Model Support | General neural network models | Models co-designed with TinyNAS | Common neural network layer types |
| Code Footprint | Larger due to inclusion of interpreter and ops | Small, includes only ops needed for model | Lightweight by design |
| Latency | Higher due to interpretation overhead | Very low due to compiled model | Low latency focus |
| Memory Management | Dynamically managed by interpreter | Model-level optimization | Tools for efficient allocation |
| Optimization Approach | Some code generation features | Specialized kernels, operator fusion | Architecture-specific assembly optimizations |
| Key Benefits | Flexibility, portability, ease of updating models | Maximizes performance, optimized memory usage | Hardware acceleration, standardized API, portability |

We will understand each of these in greater detail in the following sections.


6.8.1 Interpreter


TensorFlow Lite Micro (TFLM) is a machine learning inference framework designed for embedded devices with limited resources. It uses an interpreter to load and execute machine learning models, which provides flexibility and ease of updating models in the field (David et al. 2021).

David, Robert, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, et al. 2021. “TensorFlow Lite Micro: Embedded Machine Learning for TinyML Systems.” Proceedings of Machine Learning and Systems 3: 800–811.

Traditional interpreters often have significant branching overhead, which can reduce performance. However, machine learning model interpretation benefits from the efficiency of long-running kernels, where each kernel runtime is relatively large and helps mitigate interpreter overhead.
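A toy sketch of this pattern: the model is data (a list of op records), and a small loop dispatches registered kernels. Because each kernel processes an entire tensor, the per-op dispatch cost is amortized, which is the effect described above. This is an illustration of the interpreter idea, not TFLite Micro’s actual implementation:

```python
# Toy interpreter sketch: the "model" is a data structure, kernels are
# registered functions, and a loop dispatches one op at a time.
# Illustrative only -- not the real TFLite Micro design.

KERNELS = {
    "scale": lambda xs, p: [x * p for x in xs],
    "add":   lambda xs, p: [x + p for x in xs],
    "relu":  lambda xs, p: [max(0.0, x) for x in xs],
}

def run(graph, activations):
    for op, param in graph:  # per-op dispatch overhead here...
        activations = KERNELS[op](activations, param)  # ...amortized over the tensor
    return activations

model = [("scale", 2.0), ("add", -1.0), ("relu", None)]
print(run(model, [0.2, 0.5, 1.0]))  # → [0.0, 0.0, 1.0]
```

Swapping in a new `model` list changes behavior without recompiling anything, which is exactly the field-update flexibility described below.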


An alternative to an interpreter-based inference engine is to generate native code from a model during export. This can improve performance, but it sacrifices portability and flexibility, as the generated code needs recompilation for each target platform and must be replaced entirely to modify a model.


TFLM balances the simplicity of code compilation and the flexibility of an interpreter-based approach by incorporating certain code-generation features. For example, the library can be constructed solely from source files, offering much of the compilation simplicity associated with code generation while retaining the benefits of an interpreter-based model execution framework.


An interpreter-based approach offers several benefits over code generation for machine learning inference on embedded devices:

  • Flexibility: Models can be updated in the field without recompiling the entire application.

  • Portability: The interpreter can execute models on different target platforms without porting the code.

  • Memory efficiency: The interpreter can share code across multiple models, reducing memory usage.

  • Ease of development: Interpreters are easier to develop and maintain than code generators.

TensorFlow Lite Micro is a powerful and flexible framework for machine learning inference on embedded devices. Its interpreter-based approach offers several benefits over code generation, including flexibility, portability, memory efficiency, and ease of development.


6.8.2 Compiler-based


TinyEngine is an ML inference framework designed specifically for resource-constrained microcontrollers. It employs several optimizations to enable high-accuracy neural network execution within the tight constraints of memory, computing, and storage on microcontrollers (Lin et al. 2020).

Lin, Ji, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, and Song Han. 2020. “MCUNet: Tiny Deep Learning on IoT Devices.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/86c51678350f656dcc7f490a43946ee5-Abstract.html.

While inference frameworks like TFLite Micro use interpreters to execute the neural network graph dynamically at runtime, this adds overhead: memory to store graph metadata, interpretation latency, and missed optimization opportunities (though TFLite argues the overhead is small). TinyEngine eliminates this overhead by employing a code generation approach. It analyzes the network graph during compilation and generates specialized code to execute just that model. This code is natively compiled into the application binary, avoiding runtime interpretation costs.


Conventional ML frameworks schedule memory per layer, trying to minimize usage for each layer separately. TinyEngine instead performs model-level scheduling, analyzing memory usage across layers. It allocates a common buffer sized to the maximum memory needs over all layers, and this buffer is then shared efficiently across layers to increase data reuse.
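The difference can be sketched numerically: per-layer allocation reserves each layer’s working memory separately, while model-level planning sizes one shared arena to the single worst-case layer. The layer sizes below are hypothetical:

```python
# Model-level memory planning sketch: one shared buffer sized to the
# worst-case layer, reused by every layer in turn.
# Layer working-set sizes (bytes) are hypothetical.

layer_workspace = [12_000, 48_000, 30_000, 8_000]

per_layer_total = sum(layer_workspace)  # separate buffers: 98,000 bytes
shared_arena = max(layer_workspace)     # one shared buffer: 48,000 bytes

print(per_layer_total, shared_arena)  # → 98000 48000
```

Sizing to the maximum rather than the sum is what keeps peak memory within a microcontroller’s SRAM budget.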


TinyEngine also specializes the kernels for each layer through techniques like tiling, loop unrolling, and operator fusion. For example, it generates unrolled compute kernels with exactly the number of loop iterations needed for a 3x3 or 5x5 convolution. These specialized kernels extract maximum performance from the microcontroller hardware. TinyEngine also uses optimized depthwise convolutions to minimize memory allocations by computing each channel’s output in place over the input channel data. This technique exploits the channel-separable nature of depthwise convolutions to reduce peak memory size.


Like TFLite Micro, the compiled TinyEngine binary only includes ops needed for a specific model rather than all possible operations. This results in a very small binary footprint, keeping code size low for memory-constrained devices.


One difference between TFLite Micro and TinyEngine is that the latter is co-designed with “TinyNAS,” a differentiable neural architecture search method tailored to microcontroller models. TinyEngine’s efficiency allows the search to explore larger and more accurate models, and it provides feedback to TinyNAS on which models can fit within the hardware constraints.


Through various custom techniques, such as static compilation, model-level scheduling, specialized kernels, and co-design with NAS, TinyEngine enables high-accuracy deep learning inference within microcontrollers’ tight resource constraints.


6.8.3 Library


CMSIS-NN, standing for Cortex Microcontroller Software Interface Standard for Neural Networks, is a software library devised by ARM. It offers a standardized interface for deploying neural network inference on microcontrollers and embedded systems, focusing on optimization for ARM Cortex-M processors (Lai, Suda, and Chandra 2018).

Lai, Liangzhen, Naveen Suda, and Vikas Chandra. 2018. “CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs.” arXiv preprint abs/1801.06601. https://arxiv.org/abs/1801.06601.

Neural Network Kernels: CMSIS-NN has highly efficient kernels that handle fundamental neural network operations such as convolution, pooling, fully connected layers, and activation functions. It caters to a broad range of neural network models by supporting floating and fixed-point arithmetic. The latter is especially beneficial for resource-constrained devices as it curtails memory and computational requirements (Quantization).
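The appeal of fixed-point arithmetic can be sketched with a Q7-style dot product in plain Python: weights and activations live in int8 range, products accumulate in a wider integer, and the result is shifted back and saturated. This illustrates the idea only, not CMSIS-NN’s actual kernels:

```python
# Q7-style fixed-point dot product sketch (values scaled by 2**7).
# Illustrative only -- not the actual CMSIS-NN kernel code.

Q = 7  # fractional bits: real value = stored_int / 2**Q

def to_q7(x):
    """Convert a float in [-1, 1) to a saturated Q7 integer."""
    return max(-128, min(127, round(x * (1 << Q))))

def q7_dot(a, b):
    acc = sum(ai * bi for ai, bi in zip(a, b))  # wide accumulator
    acc >>= Q                                   # rescale back to Q7
    return max(-128, min(127, acc))             # saturate to int8 range

a = [to_q7(v) for v in [0.50, -0.25, 0.75]]
b = [to_q7(v) for v in [0.50,  0.50, 0.25]]

result = q7_dot(a, b)
print(result, result / (1 << Q))  # → 40 0.3125 (true value: 0.3125)
```

Because every operand fits in 8 bits, a SIMD instruction can process several of these multiply-accumulates at once, which is exactly what CMSIS-NN exploits on Cortex-M.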


Hardware Acceleration: CMSIS-NN harnesses the power of Single Instruction, Multiple Data (SIMD) instructions available on many Cortex-M processors. This allows for parallel processing of multiple data elements within a single instruction, thereby boosting computational efficiency. Certain Cortex-M processors feature Digital Signal Processing (DSP) extensions that CMSIS-NN can exploit for accelerated neural network execution. The library also incorporates assembly-level optimizations tailored to specific microcontroller architectures to enhance performance further.


Standardized API: CMSIS-NN offers a consistent and abstracted API that protects developers from the complexities of low-level hardware details. This makes the integration of neural network models into applications simpler. It may also encompass tools or utilities for converting popular neural network model formats into a format that is compatible with CMSIS-NN.


Memory Management: CMSIS-NN provides functions for efficient memory allocation and management, which is vital in embedded systems where memory resources are scarce. It ensures optimal memory usage during inference and, in some instances, allows in-place operations to decrease memory overhead.


Portability: CMSIS-NN is designed for portability across various Cortex-M processors. This enables developers to write code that can operate on different microcontrollers without significant modifications.


Low Latency: CMSIS-NN minimizes inference latency, making it an ideal choice for real-time applications where swift decision-making is paramount.


Energy Efficiency: The library is designed with a focus on energy efficiency, making it suitable for battery-powered and energy-constrained devices.


6.9 Choosing the Right Framework


Choosing the right machine learning framework for a given application requires carefully evaluating models, hardware, and software considerations. By analyzing these three aspects—models, hardware, and software—ML engineers can select the optimal framework and customize it as needed for efficient and performant on-device ML applications. The goal is to balance model complexity, hardware limitations, and software integration to design a tailored ML pipeline for embedded and edge devices.

Figure 6.7: TensorFlow Framework Comparison - General. Credit: TensorFlow.

6.9.1 Model


TensorFlow supports significantly more ops than TensorFlow Lite and TensorFlow Lite Micro, as it is typically used for research or cloud deployment, which requires a larger number of operators and more flexibility with them (see Figure fig-tf-comparison). TensorFlow Lite supports select ops for on-device training, whereas TensorFlow Lite Micro does not. TensorFlow Lite also supports dynamic shapes and quantization-aware training, but TensorFlow Lite Micro does not. In contrast to TensorFlow, TensorFlow Lite and TensorFlow Lite Micro offer native quantization tooling and support, where quantization refers to transforming an ML program into an approximated representation using the available lower-precision operations.


6.9.2 Software

Figure 6.8: TensorFlow Framework Comparison - Software. Credit: TensorFlow.

Unlike TensorFlow and TensorFlow Lite, TensorFlow Lite Micro does not require operating system support, which reduces memory overhead, makes startup times faster, and consumes less energy (see Figure fig-tf-sw-comparison). TensorFlow Lite Micro can still be used in conjunction with real-time operating systems (RTOS) like FreeRTOS, Zephyr, and Mbed OS. TensorFlow Lite and TensorFlow Lite Micro support model memory mapping, allowing models to be accessed directly from flash storage rather than loaded into RAM, whereas TensorFlow does not. TensorFlow and TensorFlow Lite support accelerator delegation to schedule code to different accelerators, whereas TensorFlow Lite Micro does not, as embedded systems tend to have a limited array of specialized accelerators.


6.9.3 Hardware

Figure 6.9: TensorFlow Framework Comparison - Hardware. Credit: TensorFlow.

TensorFlow Lite and TensorFlow Lite Micro have significantly smaller base binary sizes and memory footprints than TensorFlow (see Figure fig-tf-hw-comparison). For example, a typical TensorFlow Lite Micro binary is less than 200 KB, whereas TensorFlow is much larger. This reflects the resource-constrained environments of embedded systems. TensorFlow supports x86, TPUs, and GPUs from NVIDIA, AMD, and Intel. TensorFlow Lite supports Arm Cortex-A and x86 processors commonly used in mobile phones and tablets, stripped of the training logic that is unnecessary for on-device deployment. TensorFlow Lite Micro supports microcontroller-focused Arm Cortex-M cores like the M0, M3, M4, and M7, as well as DSPs like Hexagon and SHARC and MCU families like STM32, NXP Kinetis, and Microchip AVR.


Selecting the appropriate AI framework is essential to ensure that embedded systems can efficiently execute AI models. Key factors to consider when choosing a machine learning framework are ease of use, community support, performance, scalability, integration with data engineering tools, and integration with model optimization tools. By understanding these factors, you can make informed decisions and maximize the potential of your machine-learning initiatives.


6.9.4 Other Factors


Several other key factors beyond models, hardware, and software should be considered when evaluating AI frameworks for embedded systems.


Performance


Performance is critical in embedded systems where computational resources are limited. Evaluate the framework’s ability to optimize model inference for embedded hardware. Model quantization and hardware acceleration support are crucial in achieving efficient inference.


Scalability


Scalability is essential when considering the potential growth of an embedded AI project. The framework should support the deployment of models on various embedded devices, from microcontrollers to more powerful processors. It should also seamlessly handle both small-scale and large-scale deployments.


Integration with Data Engineering Tools


Data engineering tools are essential for data preprocessing and pipeline management. An ideal AI framework for embedded systems should seamlessly integrate with these tools, allowing for efficient data ingestion, transformation, and model training.


Integration with Model Optimization Tools


Model optimization ensures that AI models are well-suited for embedded deployment. Evaluate whether the framework integrates with model optimization tools like TensorFlow Lite Converter or ONNX Runtime to facilitate model quantization and size reduction.


Ease of Use


The ease of use of an AI framework significantly impacts development efficiency. A framework with a user-friendly interface and clear documentation reduces developers’ learning curve. Consideration should be given to whether the framework supports high-level APIs, allowing developers to focus on model design rather than low-level implementation details. This factor is especially important for embedded systems, which offer fewer development conveniences than the environments typical developers are accustomed to.


Community Support


Community support is another essential factor. Frameworks with active and engaged communities often have well-maintained codebases, receive regular updates, and provide valuable forums for problem-solving. As a result, community support also feeds into ease of use, ensuring that developers have access to a wealth of resources, including tutorials and example projects. It also provides some assurance that the framework will continue to receive future updates. Only a few frameworks cater to TinyML needs; TensorFlow Lite Micro is the most popular and has the most community support.


6.11 Conclusion


In summary, selecting the optimal framework requires thoroughly evaluating options against criteria like usability, community support, performance, hardware compatibility, and model conversion abilities. There is no universal best solution, as the right framework depends on the specific constraints and use case.


TensorFlow Lite Micro currently provides a strong starting point for extremely resource-constrained microcontroller-based platforms. Its comprehensive optimization tooling, such as quantization mapping and kernel optimizations, enables high performance on devices like Arm Cortex-M and RISC-V processors. The active developer community ensures accessible technical support. Seamless integration with TensorFlow for training and converting models makes the workflow cohesive.


For platforms with more capable CPUs like Cortex-A, TensorFlow Lite expands possibilities. It provides greater flexibility for custom and advanced models beyond the core operators in TFLite Micro. However, this comes at the cost of a larger memory footprint. These frameworks are ideal for automotive systems, drones, and more powerful edge devices that can benefit from greater model sophistication.


Frameworks specifically built for specialized hardware like CMSIS-NN on Cortex-M processors can further maximize performance but sacrifice portability. Integrated frameworks from processor vendors tailor the stack to their architectures, unlocking the full potential of their chips but locking you into their ecosystem.


Ultimately, choosing the right framework involves finding the best match between its capabilities and the requirements of the target platform. This requires balancing tradeoffs between performance needs, hardware constraints, model complexity, and other factors. Thoroughly assessing intended models and use cases and evaluating options against key metrics will guide developers in picking the ideal framework for their embedded ML application.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

\ No newline at end of file
diff --git a/contents/generative_ai/generative_ai.html b/contents/generative_ai/generative_ai.html
deleted file mode 100644
index 11de0eec..00000000
--- a/contents/generative_ai/generative_ai.html
+++ /dev/null
@@ -1,1046 +0,0 @@

19  Generative AI


Coming soon!

Learning Objectives

Coming soon.

\ No newline at end of file
diff --git a/contents/hw_acceleration/hw_acceleration.html b/contents/hw_acceleration/hw_acceleration.html
deleted file mode 100644
index 6b218c28..00000000
--- a/contents/hw_acceleration/hw_acceleration.html
+++ /dev/null
@@ -1,2477 +0,0 @@
-Machine Learning Systems - 10  AI Acceleration

10  AI Acceleration


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Create an intricate and colorful representation of a System on Chip (SoC) design in a rectangular format. Showcase a variety of specialized machine learning accelerators and chiplets, all integrated into the processor. Provide a detailed view inside the chip, highlighting the rapid movement of electrons. Each accelerator and chiplet should be designed to interact with neural network neurons, layers, and activations, emphasizing their processing speed. Depict the neural networks as a network of interconnected nodes, with vibrant data streams flowing between the accelerator pieces, showcasing the enhanced computation speed.

Machine learning has emerged as a transformative technology across many industries. However, deploying ML capabilities in real-world edge devices faces challenges due to limited computing resources. Specialized hardware acceleration is essential to enable high-performance machine learning under these constraints. Hardware accelerators optimize compute-intensive operations like inference using custom silicon optimized for matrix multiplications. This provides dramatic speedups over general-purpose CPUs, unlocking real-time execution of advanced models on size, weight, and power-constrained devices.


This chapter provides essential background on hardware acceleration techniques for embedded machine learning and their tradeoffs. The goal is to equip readers to make informed hardware selections and software optimizations to develop performant on-device ML capabilities.

Learning Objectives
• Understand why hardware acceleration is needed for AI workloads
• Survey key accelerator options like GPUs, TPUs, FPGAs, and ASICs and their tradeoffs
• Learn about programming models, frameworks, and compilers for AI accelerators
• Appreciate the importance of benchmarking and metrics for hardware evaluation
• Recognize the role of hardware-software co-design in building efficient systems
• Gain exposure to cutting-edge research directions like neuromorphic and quantum computing
• Understand how ML is beginning to augment and enhance hardware design

10.1 Introduction


Machine learning has emerged as a transformative technology across many industries, enabling systems to learn and improve from data. There is a growing demand for embedded ML solutions to deploy machine learning capabilities in real-world environments - where models are built into edge devices like smartphones, home appliances, and autonomous vehicles. However, these edge devices have limited computing resources compared to data center servers.


Specialized hardware acceleration enables high-performance machine learning on resource-constrained edge devices. Hardware acceleration refers to using custom silicon chips and architectures to offload compute-intensive ML operations from the main processor. In neural networks, the most intensive computations are the matrix multiplications during inference. Hardware accelerators can optimize these matrix operations, providing 10-100x speedups over general-purpose CPUs. This acceleration unlocks the ability to run advanced neural network models on devices with size, weight, and power constraints in real-time.
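To make the scale of these matrix operations concrete, here is a back-of-the-envelope sketch. The layer shapes below are illustrative assumptions, not taken from any specific model; the point is only how quickly multiply-accumulate (MAC) counts grow.

```python
# Hedged sketch: MAC counts for common layer types.
# Layer shapes are illustrative assumptions, not from a specific model.
def dense_macs(batch, in_features, out_features):
    """MACs for a fully connected layer: one multiply-add per weight per input."""
    return batch * in_features * out_features

def conv2d_macs(out_h, out_w, out_ch, in_ch, k_h, k_w):
    """MACs for a 2D convolution: each output element reads a k_h x k_w x in_ch window."""
    return out_h * out_w * out_ch * in_ch * k_h * k_w

# A single 3x3 convolution producing a 112x112x64 feature map from 32 input
# channels already needs ~2.3e8 MACs; a full network repeats this over dozens
# of layers, which is exactly the work accelerators offload.
total = conv2d_macs(112, 112, 64, 32, 3, 3) + dense_macs(1, 4096, 1000)
print(total)  # 235307008
```

Counts on this order, repeated for every inference, are what motivate dedicating silicon to matrix units rather than issuing the same arithmetic through a general-purpose pipeline.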


This chapter overviews hardware acceleration techniques for embedded machine learning and their design tradeoffs. Its goal is to equip readers with an essential background in embedded ML acceleration. This will enable informed hardware selection and software optimization to develop high-performance machine learning capabilities on edge devices.


10.2 Background and Basics


10.2.1 Historical Background


The origins of hardware acceleration date back to the 1960s, with the advent of floating point math co-processors to offload calculations from the main CPU. One early example was the Intel 8087 chip released in 1980 to accelerate floating point operations for the 8086 processor. This established the practice of using specialized processors to handle math-intensive workloads efficiently.


In the 1990s, the first graphics processing units (GPUs) emerged to rapidly process graphics pipelines for rendering and gaming. Nvidia’s GeForce 256 in 1999 was one of the earliest programmable GPUs capable of running custom software algorithms. GPUs exemplify how domain-specific fixed-function accelerators can evolve into parallel programmable accelerators.


In the 2000s, GPUs were applied to general-purpose computing under GPGPU. Their high memory bandwidth and computational throughput made them well-suited for math-intensive workloads. This included breakthroughs in using GPUs to accelerate training of deep learning models such as AlexNet in 2012.


In recent years, Google’s Tensor Processing Units (TPUs) represent customized ASICs specifically architected for matrix multiplication in deep learning. During inference, their optimized tensor cores achieve higher TeraOPS/watt than CPUs or GPUs. Ongoing innovation includes model compression techniques like pruning and quantization to fit larger neural networks on edge devices.


This evolution demonstrates how hardware acceleration has focused on solving compute-intensive bottlenecks, from floating point math to graphics to matrix multiplication for ML. Understanding this history provides a crucial context for specialized AI accelerators today.


10.2.2 The Need for Acceleration


The evolution of hardware acceleration is closely tied to the broader history of computing. In the early decades, chip design was governed by Moore’s Law and Dennard Scaling: the number of transistors on an integrated circuit doubled approximately every two years, and as transistors shrank, their performance (speed) increased while power density (power per unit area) remained constant. These two trends held through the single-core era. Figure fig-moore-dennard shows the trends of different microprocessor metrics. As the figure denotes, Dennard Scaling fails around the mid-2000s; notice how the clock speed (frequency) remains almost constant even as the number of transistors keeps increasing.


However, as Patterson and Hennessy (2016) describe, technological constraints eventually forced a transition to the multicore era, with chips containing multiple processing cores to deliver performance gains. Power limitations prevented further scaling, which led to “dark silicon” (Dark Silicon), where not all chip areas could be simultaneously active (Xiu 2019).

Patterson, David A., and John L. Hennessy. 2016. Computer Organization and Design ARM Edition: The Hardware Software Interface. Morgan Kaufmann.
Xiu, Liming. 2019. “Time Moore: Exploiting Moore’s Law from the Perspective of Time.” IEEE Solid-State Circuits Mag. 11 (1): 39–55. https://doi.org/10.1109/mssc.2018.2882285.

The concept of dark silicon emerged as a consequence of these constraints. “Dark silicon” refers to portions of the chip that cannot be powered simultaneously due to thermal and power limitations. Essentially, as the density of transistors increased, the proportion of the chip that could be actively used without overheating or exceeding power budgets shrank.


This phenomenon meant that while chips had more transistors, not all could be operational simultaneously, limiting potential performance gains. This power crisis necessitated a shift to the accelerator era, with specialized hardware units tailored for specific tasks to maximize efficiency. The explosion in AI workloads further drove demand for customized accelerators. Enabling factors included new programming languages, software tools, and manufacturing advances.

-
-
-
- -
-
-Figure 10.1: Microprocessor trends. Credit: Karl Rupp. -
-
-
-

Fundamentally, hardware accelerators are evaluated on performance, power, and silicon area (PPA)—the nature of the target application—whether memory-bound or compute-bound—heavily influences the design. For example, memory-bound workloads demand high bandwidth and low latency access, while compute-bound applications require maximal computational throughput.


10.2.3 General Principles


The design of specialized hardware accelerators involves navigating complex tradeoffs between performance, power efficiency, silicon area, and workload-specific optimizations. This section outlines core considerations and methodologies for achieving an optimal balance based on application requirements and hardware constraints.


Performance Within Power Budgets


Performance refers to the throughput of computational work per unit of time, commonly measured in floating point operations per second (FLOPS) or frames per second (FPS). Higher performance enables completing more work, but power consumption rises with activity.


Hardware accelerators aim to maximize performance within set power budgets. This requires careful balancing of parallelism, the chip’s clock frequency, the operating voltage, workload optimization, and other techniques to maximize operations per watt.

• Performance = Throughput * Efficiency
• Throughput ~= Parallelism * Clock Frequency
• Efficiency = Operations / Watt

For example, GPUs achieve high throughput via massively parallel architectures. However, their efficiency is lower than that of customized application-specific integrated circuits (ASICs) like Google’s TPU, which optimize for a specific workload.
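The relations above can be sketched numerically. In this hypothetical Python example, every hardware figure (unit counts, clocks, power budgets) is an invented assumption for illustration, not a vendor specification; it only shows how a wide, slower-clocked, specialized design can win on operations per watt.

```python
# Illustrative model of the throughput/efficiency relations above.
# All hardware numbers are assumptions, not real chip specs.
def throughput_ops(parallel_units, clock_hz, ops_per_unit_per_cycle=2):
    # Throughput ~= Parallelism * Clock Frequency (times ops issued per cycle)
    return parallel_units * clock_hz * ops_per_unit_per_cycle

def efficiency_ops_per_watt(throughput, power_watts):
    # Efficiency = Operations / Watt
    return throughput / power_watts

gpu_tput = throughput_ops(parallel_units=5000, clock_hz=1.5e9)    # many small cores
asic_tput = throughput_ops(parallel_units=65536, clock_hz=0.7e9)  # wide systolic array

gpu_eff = efficiency_ops_per_watt(gpu_tput, 300)   # ops/W at a 300 W budget
asic_eff = efficiency_ops_per_watt(asic_tput, 75)  # ops/W at a 75 W budget
print(asic_eff / gpu_eff)  # ASIC-style design wins on ops/W despite a slower clock
```

Even with a slower clock, the hypothetical ASIC-style design comes out more than an order of magnitude ahead in operations per watt, consistent with the TPU-versus-GPU comparison in the text.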


Managing Silicon Area and Costs


Chip area directly impacts manufacturing cost. Larger die sizes require more materials and suffer lower yields and higher defect rates. Multi-die packages help scale designs but add packaging complexity. Silicon area depends on:

• Computational resources - e.g., number of cores, memory, caches
• Manufacturing process node - smaller transistors enable higher density
• Programming model - programmable accelerators require more flexibility

Accelerator design involves squeezing maximum performance within area constraints. Techniques like pruning and compression help fit larger models on the chip.


Workload-Specific Optimizations


The target workload dictates optimal accelerator architectures. Some of the key considerations include:

• Memory vs Compute boundedness: Memory-bound workloads require more memory bandwidth, while compute-bound apps need arithmetic throughput.
• Data locality: Data movement should be minimized for efficiency. Near-compute memory helps.
• Bit-level operations: Low precision datatypes like INT8/INT4 optimize compute density.
• Data parallelism: Multiple replicated compute units allow parallel execution.
• Pipelining: Overlapped execution of operations increases throughput.

Understanding workload characteristics enables customized acceleration. For example, convolutional neural networks use sliding window operations optimally mapped to spatial arrays of processing elements.
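A common way to reason about memory- versus compute-boundedness is the roofline model: a kernel's attainable throughput is capped either by peak arithmetic throughput or by memory bandwidth multiplied by the kernel's arithmetic intensity (FLOPs per byte moved). The sketch below uses invented peak numbers purely for illustration.

```python
# Roofline-style sketch; the peak hardware numbers are illustrative assumptions.
PEAK_FLOPS = 10e12   # 10 TFLOPS of arithmetic throughput
PEAK_BW = 400e9      # 400 GB/s of memory bandwidth

def attainable_flops(arithmetic_intensity):
    """arithmetic_intensity: FLOPs performed per byte moved to/from memory."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

ridge = PEAK_FLOPS / PEAK_BW   # intensity where the compute roof takes over
low = attainable_flops(2)      # few FLOPs per byte -> memory-bound
high = attainable_flops(100)   # many FLOPs per byte -> compute-bound
print(ridge, low, high)
```

Kernels left of the ridge point (like many elementwise operations) benefit most from bandwidth and data locality, while kernels right of it (like large matrix multiplies) benefit from more arithmetic units.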


By navigating these architectural tradeoffs, hardware accelerators can deliver massive performance gains and enable emerging applications in AI, graphics, scientific computing, and other domains.


Sustainable Hardware Design


In recent years, AI sustainability has become a pressing concern driven by two key factors - the exploding scale of AI workloads and their associated energy consumption.


First, the size of AI models and datasets has rapidly grown. For example, based on OpenAI’s AI computing trends, the amount of computing used to train state-of-the-art models has doubled roughly every 3.4 months. This exponential growth requires massive computational resources in data centers.


Second, the energy usage of AI training and inference presents sustainability challenges. Data centers running AI applications consume substantial energy, contributing to high carbon emissions. It’s estimated that training a large AI model can have a carbon footprint of 626,000 pounds of CO2 equivalent, almost 5 times the lifetime emissions of an average car.


As a result, AI research and practice must prioritize energy efficiency and carbon impact alongside accuracy. There is an increasing focus on model efficiency, data center design, hardware optimization, and other solutions to improve sustainability. Striking a balance between AI progress and environmental responsibility has emerged as a key consideration and an area of active research across the field.


The scale of AI systems is expected to keep growing. Developing sustainable AI is crucial for managing the environmental footprint and enabling widespread beneficial deployment of this transformative technology.


We will learn about Sustainable AI in a later chapter, where we will discuss it in more detail.


10.3 Accelerator Types


Hardware accelerators can take on many forms. They can exist as a widget (like the Neural Engine in the Apple M1 chip) or as entire chips specially designed to perform certain tasks very well. This section will examine processors for machine learning workloads along the spectrum from highly specialized ASICs to more general-purpose CPUs. We first focus on custom hardware purpose-built for AI to understand the most extreme optimizations possible when design constraints are removed. This establishes a ceiling for performance and efficiency.


We then progressively consider more programmable and adaptable architectures, discussing GPUs and FPGAs. These make tradeoffs in customization to maintain flexibility. Finally, we cover general-purpose CPUs that sacrifice optimizations for a particular workload in exchange for versatile programmability across applications.


By structuring the analysis along this spectrum, we aim to illustrate the fundamental tradeoffs between utilization, efficiency, programmability, and flexibility in accelerator design. The optimal balance point depends on the constraints and requirements of the target application. This spectrum perspective provides a framework for reasoning about hardware choices for machine learning and the capabilities required at each level of specialization.


Figure fig-design-tradeoffs illustrates the complex interplay between flexibility, performance, functional diversity, and area in architecture design. Notice how the ASIC is in the bottom-right corner, with minimal area, flexibility, and power consumption and maximal performance, due to its highly specialized application-specific nature. A key tradeoff is functional diversity vs. performance: general-purpose architectures can serve diverse applications, but their performance on any one application is degraded compared to more customized architectures.


The progression begins with the most specialized option, ASICs purpose-built for AI, to ground our understanding in the maximum possible optimizations before expanding to more generalizable architectures. This structured approach aims to elucidate the accelerator design space.

Figure 10.2: Design tradeoffs. Credit: El-Rayis (2014).
El-Rayis, A. O. 2014. “Reconfigurable Architectures for the Next Generation of Mobile Device Telecommunications Systems.” https://www.researchgate.net/publication/292608967.

10.3.1 Application-Specific Integrated Circuits (ASICs)


An Application-Specific Integrated Circuit (ASIC) is a type of integrated circuit (IC) that is custom-designed for a specific application or workload rather than for general-purpose use. Unlike CPUs and GPUs, ASICs do not support multiple applications or workloads. Rather, they are optimized to perform a single task extremely efficiently. The Google TPU is an example of an ASIC.


ASICs achieve this efficiency by tailoring every aspect of the chip design - the underlying logic gates, electronic components, architecture, memory, I/O, and manufacturing process - specifically for the target application. This level of customization allows removing any logic or functionality not required for the target computation. The result is an IC that maximizes performance and power efficiency on the desired workload. The efficiency gains from application-specific hardware are so substantial that even software-centric firms dedicate enormous engineering resources to designing customized ASICs.


The rise of more complex machine learning algorithms has made the performance advantages enabled by tailored hardware acceleration a key competitive differentiator, even for companies traditionally concentrated on software engineering. ASICs have become a high-priority investment for major cloud providers aiming to offer faster AI computation.


Advantages


Due to their customized nature, ASICs provide significant benefits over general-purpose processors like CPUs and GPUs. The key advantages include the following.

Maximized Performance and Efficiency

The most fundamental advantage of ASICs is maximizing performance and power efficiency by customizing the hardware architecture specifically for the target application. Every transistor and design aspect is optimized for the desired workload - no unnecessary logic or overhead is needed to support generic computation.


For example, Google’s Tensor Processing Units (TPUs) contain architectures tailored exactly for the matrix multiplication operations used in neural networks. To design the TPU ASICs, Google’s engineering teams need to define the chip specifications clearly, write the architecture description using Hardware Description Languages like Verilog, synthesize the design to map it to hardware components, and carefully place-and-route transistors and wires based on the fabrication process design rules. This complex design process, known as very-large-scale integration (VLSI), allows them to build an optimized IC for machine learning workloads.


As a result, TPU ASICs achieve over an order of magnitude higher efficiency in operations per watt than general-purpose GPUs on ML workloads by maximizing performance and minimizing power consumption through a full-stack custom hardware design.

Specialized On-Chip Memory

ASICs incorporate on-chip SRAM and caches specifically optimized to feed data to the computational units. For example, Apple’s M1 system-on-a-chip contains special low-latency SRAM to accelerate the performance of its Neural Engine machine learning hardware. Large local memory with high bandwidth enables data to be kept close to the processing elements. This provides tremendous speed advantages compared to off-chip DRAM access, which can be up to 100x slower.


Data locality and optimizing memory hierarchy are crucial for high throughput and low power. Below is a table, “Numbers Everyone Should Know,” from Jeff Dean.

Operation                              Latency           Notes
L1 cache reference                     0.5 ns
Branch mispredict                      5 ns
L2 cache reference                     7 ns
Mutex lock/unlock                      25 ns
Main memory reference                  100 ns
Compress 1K bytes with Zippy           3,000 ns          3 us
Send 1 KB bytes over 1 Gbps network    10,000 ns         10 us
Read 4 KB randomly from SSD            150,000 ns        150 us
Read 1 MB sequentially from memory     250,000 ns        250 us
Round trip within same datacenter      500,000 ns        0.5 ms
Read 1 MB sequentially from SSD        1,000,000 ns      1 ms
Disk seek                              10,000,000 ns     10 ms
Read 1 MB sequentially from disk       20,000,000 ns     20 ms
Send packet CA->Netherlands->CA        150,000,000 ns    150 ms
Custom Datatypes and Operations

Unlike general-purpose processors, ASICs can be designed to natively support custom datatypes like INT4 or bfloat16, which are widely used in ML models. For instance, Nvidia’s Ampere GPU architecture has dedicated bfloat16 Tensor Cores to accelerate AI workloads. Low-precision datatypes enable higher arithmetic density and performance. ASICs can also directly incorporate non-standard operations common in ML algorithms as primitive operations - for example, natively supporting activation functions like ReLU makes execution more efficient. Please refer to the Efficient Numeric Representations chapter for additional details.
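As a toy illustration of why low-precision datatypes help, here is a sketch of symmetric INT8 quantization in plain Python. It is not the scheme of any particular accelerator or framework, just the general idea of mapping floats onto an 8-bit range with a single scale factor.

```python
# Hedged sketch of symmetric INT8 quantization (illustrative, not a real API).
def quantize_int8(values):
    # One scale maps the largest magnitude onto the INT8 limit of 127.
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # guard all-zero input
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.51, -0.98, 0.25, 0.0]
q, scale = quantize_int8(weights)
print(q)                     # small integers: 4x less storage than float32
print(dequantize(q, scale))  # close to, but not exactly, the originals
```

Each value now fits in one byte instead of four, which is what lets accelerators pack more arithmetic units per unit of silicon area and move more operands per memory transaction, at the cost of a small rounding error.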

High Parallelism

ASIC architectures can leverage higher parallelism tuned for the target workload versus general-purpose CPUs or GPUs. More computational units tailored for the application mean more operations execute simultaneously. Highly parallel ASICs achieve tremendous throughput for data parallel workloads like neural network inference.

Advanced Process Nodes

Cutting-edge manufacturing processes allow more transistors to be packed into smaller die areas, increasing density. ASICs designed specifically for high-volume applications can better amortize the costs of cutting-edge process nodes.


Disadvantages

Long Design Timelines

The engineering process of designing and validating an ASIC can take 2-3 years. Synthesizing the architecture using hardware description languages, taping out the chip layout, and fabricating the silicon on advanced process nodes involve long development cycles. For example, to tape out a 7nm chip, teams need to define specifications carefully, write the architecture in HDL, synthesize the logic gates, place components, route all interconnections, and finalize the layout to send for fabrication. This very large-scale integration (VLSI) flow means ASIC design and manufacturing can traditionally take 2-5 years.


There are a few key reasons why the long design timelines of ASICs, often 2-3 years, can be challenging for machine learning workloads:

• ML algorithms evolve rapidly: New model architectures, training techniques, and network optimizations are constantly emerging. For example, Transformers became hugely popular in NLP in the last few years. By the time an ASIC finishes tapeout, the optimal architecture for a workload may have changed.
• Datasets grow quickly: ASICs designed for certain model sizes or datatypes can become undersized relative to demand. For instance, natural language models are scaling exponentially with more data and parameters. A chip designed for BERT might not accommodate GPT-3.
• ML applications change frequently: The industry focus shifts between computer vision, speech, NLP, recommender systems, etc. An ASIC optimized for image classification may have less relevance in a few years.
• Faster design cycles with GPUs/FPGAs: Programmable accelerators like GPUs can adapt much more quickly by upgrading software libraries and frameworks. New algorithms can be deployed without hardware changes.
• Time-to-market needs: Getting a competitive edge in ML requires rapidly experimenting with and deploying new ideas. Waiting several years for an ASIC is incompatible with fast iteration.

The pace of innovation in ML needs to be better matched to the multi-year timescale for ASIC development. Significant engineering efforts are required to extend ASIC lifespan through modular architectures, process scaling, model compression, and other techniques. However, the rapid evolution of ML makes fixed-function hardware challenging.

High Non-Recurring Engineering Costs

The fixed costs of taking an ASIC from design to high-volume manufacturing can be very capital-intensive, often tens of millions of dollars. Photomask fabrication for taping out chips in advanced process nodes, packaging, and one-time engineering efforts are all expensive. For instance, a 7nm chip tape-out alone could cost millions. The high non-recurring engineering (NRE) investment narrows ASIC viability to high-volume production use cases where the upfront cost can be amortized.
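The amortization argument can be made concrete with a hypothetical calculation; all dollar figures below are invented for illustration, not real cost data.

```python
# Illustrative NRE amortization; cost figures are assumptions, not real data.
def cost_per_chip(nre_dollars, unit_cost_dollars, volume):
    # Each shipped chip carries its share of the one-time NRE plus its unit cost.
    return nre_dollars / volume + unit_cost_dollars

low_volume = cost_per_chip(50e6, 20, 10_000)        # NRE dominates
high_volume = cost_per_chip(50e6, 20, 10_000_000)   # NRE nearly vanishes
print(low_volume, high_volume)
```

At ten thousand units, a hypothetical $50M NRE adds $5,000 to every chip; at ten million units it adds only $5, which is why custom silicon only pays off at scale.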

Complex Integration and Programming

ASICs require extensive software integration work, including drivers, compilers, OS support, and debugging tools. They also need expertise in electrical and thermal packaging. Additionally, efficiently programming ASIC architectures can involve challenges like workload partitioning and scheduling across many parallel units. The customized nature necessitates significant integration efforts to turn raw hardware into fully operational accelerators.


While ASICs provide massive efficiency gains on target applications by tailoring every aspect of the hardware design to one specific task, their fixed nature results in tradeoffs in flexibility and development costs compared to programmable accelerators, which must be weighed based on the application.


10.3.2 Field-Programmable Gate Arrays (FPGAs)


FPGAs are programmable integrated circuits that can be reconfigured for different applications. Their customizable nature provides advantages for accelerating AI algorithms compared to fixed ASICs or inflexible GPUs. While Google, Meta, and NVIDIA are considering putting ASICs in data centers, Microsoft deployed FPGAs in its data centers (Putnam et al. 2014) in 2011 to efficiently serve diverse data center workloads.


Advantages


FPGAs provide several benefits over GPUs and ASICs for accelerating machine learning workloads.

Flexibility Through Reconfigurable Fabric

The key advantage of FPGAs is the ability to reconfigure the underlying fabric to implement custom architectures optimized for different models, unlike fixed-function ASICs. For example, quant trading firms use FPGAs to accelerate their algorithms because they change frequently, and the low NRE cost of FPGAs is more viable than tapping out new ASICs. Figure fig-different-fpgas contains a table comparing three different FPGAs.

Figure 10.3: Comparison of FPGAs. Credit: Gwennap (n.d.).
Gwennap, Linley. n.d. “Certus-NX Innovates General-Purpose FPGAs.”

FPGAs comprise basic building blocks - configurable logic blocks, RAM blocks, and interconnects. Vendors provide a base amount of these resources, and engineers program the chips by compiling HDL code into bitstreams that rearrange the fabric into different configurations. This makes FPGAs adaptable as algorithms evolve.


While FPGAs may not achieve the utmost performance and efficiency of workload-specific ASICs, their programmability provides more flexibility as algorithms change. This adaptability makes FPGAs a compelling choice for accelerating evolving machine learning applications. Microsoft has deployed FPGAs in its Azure data centers for machine learning workloads to serve diverse applications instead of ASICs. The programmability enables optimization across changing ML models.

Customized Parallelism and Pipelining

FPGA architectures can leverage spatial parallelism and pipelining by tailoring the hardware design to mirror the parallelism in ML models. For example, Intel’s HARPv2 FPGA platform splits the layers of an MNIST convolutional network across separate processing elements to maximize throughput. Unique parallel patterns like tree ensemble evaluations are also possible on FPGAs. Deep pipelines with optimized buffering and dataflow can be customized to each model’s structure and datatypes. This level of tailored parallelism and pipelining is not feasible on GPUs.

Low Latency On-Chip Memory

Large amounts of high-bandwidth on-chip memory enable localized storage for weights and activations. For instance, Xilinx Versal FPGAs contain 32MB of low-latency RAM blocks and dual-channel DDR4 interfaces for external memory. Bringing memory physically closer to the compute units reduces access latency. This provides significant speed advantages over GPUs that traverse PCIe or other system buses to reach off-chip GDDR6 memory.

Native Support for Low Precision

A key advantage of FPGAs is the ability to natively implement any bit width for arithmetic units, such as INT4 or bfloat16, used in quantized ML models. For example, Intel’s Stratix 10 NX FPGAs have dedicated INT8 cores that can achieve up to 143 INT8 TOPS at ~1 TOPS/W Intel Stratix 10 NX FPGA. Lower bit widths increase arithmetic density and performance. FPGAs can even support mixed precision or dynamic precision tuning at runtime.


Disadvantages

Lower Peak Throughput than ASICs

FPGAs cannot match the raw throughput numbers of ASICs customized for a specific model and precision. The overheads of the reconfigurable fabric compared to fixed function hardware result in lower peak performance. For example, the TPU v5e pods allow up to 256 chips to be connected with more than 100 petaOps of INT8 performance, while FPGAs can offer up to 143 INT8 TOPS or 286 INT4 TOPS Intel Stratix 10 NX FPGA.


This is because FPGAs comprise basic building blocks—configurable logic blocks, RAM blocks, and interconnects. Vendors provide a set amount of these resources. To program FPGAs, engineers write HDL code and compile it into bitstreams that rearrange the fabric, which has inherent overheads versus an ASIC purpose-built for one computation.

Programming Complexity

To optimize FPGA performance, engineers must program the architectures in low-level hardware description languages like Verilog or VHDL. This requires hardware design expertise and longer development cycles than higher-level software frameworks like TensorFlow. Maximizing utilization can be challenging despite advances in high-level synthesis from C/C++.

Reconfiguration Overheads

Changing FPGA configurations requires reloading a new bitstream, which has considerable latency and storage size costs. For example, partial reconfiguration on Xilinx FPGAs can take 100s of milliseconds. This makes dynamically swapping architectures in real-time infeasible. The bitstream storage also consumes on-chip memory.

Diminishing Gains on Advanced Nodes

While smaller process nodes greatly benefit ASICs, they provide fewer advantages for FPGAs. At 7nm and below, effects like process variation, thermal constraints, and aging disproportionately impact FPGA performance. The overheads of the configurable fabric also diminish gains compared to fixed-function ASICs.

Case Study

FPGAs have found widespread application in various fields, including medical imaging, robotics, and finance, where they excel in handling computationally intensive machine learning tasks. In medical imaging, an illustrative example is the application of FPGAs for brain tumor segmentation, a traditionally time-consuming and error-prone process. For instance, Xiong et al. developed a quantized segmentation accelerator, which they retrained using the BraTS19 and BraTS20 datasets. Their work yielded remarkable results, achieving over 5x and 44x performance improvements and 11x and 82x energy efficiency gains compared to GPU and CPU implementations, respectively (Xiong et al. 2021).

Xiong, Siyu, Guoqing Wu, Xitian Fan, Xuan Feng, Zhongcheng Huang, Wei Cao, Xuegong Zhou, et al. 2021. “MRI-Based Brain Tumor Segmentation Using FPGA-Accelerated Neural Network.” BMC Bioinf. 22 (1): 421. https://doi.org/10.1186/s12859-021-04347-6.

10.3.3 Digital Signal Processors (DSPs)


The first digital signal processor core was introduced by Texas Instruments in 1978 (The Evolution of Audio DSPs). Traditionally, DSPs would have logic to directly access digital/audio data in memory, perform an arithmetic operation (multiply-add-accumulate, or MAC, was one of the most common operations), and then write the result back to memory. The DSP would include specialized analog components to retrieve digital/audio data.


Once we entered the smartphone era, DSPs started encompassing more sophisticated tasks. They required Bluetooth, Wi-Fi, and cellular connectivity. Media also became much more complex. Today, it’s rare to have entire chips dedicated to just DSP, but a System on Chip would include DSPs and general-purpose CPUs. For example, Qualcomm’s Hexagon Digital Signal Processor claims to be a “world-class processor with both CPU and DSP functionality to support deeply embedded processing needs of the mobile platform for both multimedia and modem functions.” Google Tensors, the chip in the Google Pixel phones, also includes CPUs and specialized DSP engines.

Advantages

DSPs architecturally provide advantages in vector math throughput, low latency memory access, power efficiency, and support for diverse datatypes - making them well-suited for embedded ML acceleration.

Optimized Architecture for Vector Math

DSPs contain specialized data paths, register files, and instructions optimized specifically for vector math operations commonly used in machine learning models. This includes dot product engines, MAC units, and SIMD capabilities tailored for vector/matrix calculations. For example, the CEVA-XM6 DSP (“Ceva SensPro Fuses AI and Vector DSP”) has 512-bit vector units to accelerate convolutions. This efficiency on vector math workloads is far beyond general CPUs.

Low Latency On-Chip Memory

DSPs integrate large amounts of fast on-chip SRAM to hold data locally for processing. Bringing memory physically closer to the computation units reduces access latency. For example, Analog Devices’ SHARC+ DSP contains 10MB of on-chip SRAM. This high-bandwidth local memory provides speed advantages for real-time applications.

Power Efficiency

DSPs are engineered to provide high performance per watt on digital signal workloads. Efficient data paths, parallelism, and memory architectures enable trillions of math operations per second within tight mobile power budgets. For example, Qualcomm’s Hexagon DSP can deliver 4 trillion operations per second (TOPS) while consuming minimal watts.

Support for Integer and Floating Point Math

Unlike GPUs that excel at single or half precision, DSPs can natively support 8/16-bit integer and 32-bit floating point datatypes used across ML models. Some DSPs support dot product acceleration at INT8 precision for quantized neural networks.
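The INT8 dot-product path for quantized networks can be illustrated with a small sketch (the symmetric quantization scheme here is one common choice, not the only one; hardware units accumulate the integer products in a wider INT32 register exactly as simulated below):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric linear quantization of an FP32 vector to INT8."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

a = np.array([0.5, -1.0, 0.25], dtype=np.float32)
b = np.array([1.0, 0.5, -0.5], dtype=np.float32)

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# INT8 x INT8 products accumulated in INT32 (as hardware dot-product
# units do), then rescaled back to floating point.
int_dot = int(np.dot(qa.astype(np.int32), qb.astype(np.int32)))
approx = int_dot * sa * sb

print(approx, float(np.dot(a, b)))  # approximation is close to -0.125
```

The quantized result closely tracks the FP32 dot product while every multiply uses cheap 8-bit operands, which is exactly the tradeoff these DSP dot-product units exploit.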

Disadvantages

DSPs make architectural tradeoffs that limit peak throughput, precision, and model capacity compared to other AI accelerators. However, their advantages in power efficiency and integer math make them a strong edge computing option. So, while DSPs provide some benefits over CPUs, they also come with limitations for machine learning workloads:

Lower Peak Throughput than ASICs/GPUs

DSPs cannot match the raw computational throughput of GPUs or customized ASICs designed specifically for machine learning. For example, Qualcomm’s Cloud AI 100 ASIC delivers 480 TOPS on INT8, while their Hexagon DSP provides 10 TOPS. DSPs lack the massive parallelism of GPU SM units.

Slower Double Precision Performance

Most DSPs are not optimized for the higher-precision floating point needed in some ML models. Their dot product engines focus on INT8/16 and FP32, which provide better power efficiency. However, 64-bit floating point throughput is much lower, which can limit their use in models requiring high precision.

Constrained Model Capacity

The limited on-chip memory of DSPs constrains the model sizes that can be run. Large deep learning models with hundreds of megabytes of parameters would exceed on-chip SRAM capacity. DSPs are best suited for small to mid-sized models targeted for edge devices.

Programming Complexity

Efficient programming of DSP architectures requires expertise in parallel programming and optimizing data access patterns. Their specialized microarchitectures have a steeper learning curve than high-level software frameworks, making development more complex.

10.3.4 Graphics Processing Units (GPUs)

The term graphics processing unit has existed since at least the 1980s. There had always been a demand for graphics hardware in video game consoles (high demand, needed to be relatively lower cost) and scientific simulations (lower demand, but higher resolution, could be at a high price point).


The term was popularized, however, in 1999 when NVIDIA launched the GeForce 256, mainly targeting the PC games market sector (Lindholm et al. 2008). As PC games became more sophisticated, NVIDIA GPUs became more programmable. Soon, users realized they could take advantage of this programmability, run various non-graphics-related workloads on GPUs, and benefit from the underlying architecture. And so, in the late 2000s, GPUs became general-purpose graphics processing units or GP-GPUs.

Lindholm, Erik, John Nickolls, Stuart Oberman, and John Montrym. 2008. “NVIDIA Tesla: A Unified Graphics and Computing Architecture.” IEEE Micro 28 (2): 39–55. https://doi.org/10.1109/mm.2008.31.

Intel (with its Arc Graphics line) and AMD (with its Radeon RX line) have also developed their own GPUs over time.

Advantages

High Computational Throughput

The key advantage of GPUs is their ability to perform massively parallel floating-point calculations optimized for computer graphics and linear algebra (Raina, Madhavan, and Ng 2009). Modern GPUs like Nvidia’s A100 offer up to 19.5 teraflops of FP32 performance with 6912 CUDA cores and 40GB of graphics memory tightly coupled with 1.6TB/s of graphics memory bandwidth.

Raina, Rajat, Anand Madhavan, and Andrew Y. Ng. 2009. “Large-Scale Deep Unsupervised Learning Using Graphics Processors.” In Proceedings of the 26th Annual International Conference on Machine Learning, edited by Andrea Pohoreckyj Danyluk, Léon Bottou, and Michael L. Littman, 382:873–80. ACM International Conference Proceeding Series. ACM. https://doi.org/10.1145/1553374.1553486.

This raw throughput stems from the highly parallel streaming multiprocessor (SM) architecture tailored for data-parallel workloads (Zhihao Jia, Zaharia, and Aiken 2019). Each SM contains hundreds of scalar cores optimized for float32/64 math. With thousands of SMs on a chip, GPUs are purpose-built for matrix multiplication and vector operations used throughout neural networks.


For example, Nvidia’s latest H100 GPU provides 4000 TFLOPs of FP8, 2000 TFLOPs of FP16, 1000 TFLOPs of TF32, 67 TFLOPs of FP32, and 34 TFLOPs of FP64 compute performance, which can dramatically accelerate large batch training on models like BERT, GPT-3, and other transformer architectures. The scalable parallelism of GPUs is key to speeding up computationally intensive deep learning.
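To make those peak numbers concrete, a back-of-envelope sketch of one large matrix multiply at the FP16 figure quoted above (the GEMM dimensions are illustrative, not taken from any specific model):

```python
def gemm_flops(m, n, k):
    """A dense matmul of an (m x k) by a (k x n) matrix performs
    roughly 2*m*n*k floating point operations (one multiply plus
    one add per inner-product term)."""
    return 2 * m * n * k

# Illustrative transformer-scale GEMM: 4096 tokens, hidden dim 12288.
flops = gemm_flops(4096, 12288, 12288)

PEAK_FP16_TFLOPS = 2000  # H100 FP16 figure quoted above
seconds_at_peak = flops / (PEAK_FP16_TFLOPS * 1e12)
print(f"{flops / 1e12:.2f} TFLOPs -> {seconds_at_peak * 1e3:.2f} ms at peak")
```

Real kernels reach only a fraction of peak, but the estimate shows why trillions of operations per second matter: a single trillion-FLOP GEMM completes in well under a millisecond at these rates.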

Mature Software Ecosystem

Nvidia provides extensive runtime libraries like cuDNN and cuBLAS that are highly optimized for deep learning primitives. Frameworks like TensorFlow and PyTorch integrate with these libraries to enable GPU acceleration without direct programming. CUDA provides lower-level control for custom computations.


This ecosystem enables quick leveraging of GPUs via high-level Python without GPU programming expertise. Known workflows and abstractions provide a convenient on-ramp for scaling up deep learning experiments. The software maturity supplements the throughput advantages.

Broad Availability

The economies of scale of graphics processing make GPUs broadly accessible in data centers, cloud platforms like AWS and GCP, and desktop workstations. Their availability in research environments has provided a convenient ML experimentation and innovation platform. For example, nearly every state-of-the-art deep learning result has involved GPU acceleration because of this ubiquity. The broad access supplements the software maturity to make GPUs the standard ML accelerator.

Programmable Architecture

While not as flexible as FPGAs, GPUs provide programmability via CUDA and shader languages to customize computations. Developers can optimize data access patterns, create new ops, and tune precisions for evolving models and algorithms.

Disadvantages

While GPUs have become the standard accelerator for deep learning, their architecture has some key downsides.

Less Efficient than Custom ASICs

The statement “GPUs are less efficient than ASICs” could spark intense debate within the ML/AI field and cause this book to explode.


Typically, GPUs are perceived as less efficient than ASICs because the latter are custom-built for specific tasks and thus can operate more efficiently by design. With their general-purpose architecture, GPUs are inherently more versatile and programmable, catering to a broad spectrum of computational tasks beyond ML/AI.


However, modern GPUs have evolved to include specialized hardware support for essential AI operations, such as generalized matrix multiplication (GEMM) and other matrix operations, native support for quantization, and native support for pruning, which are critical for running ML models effectively. These enhancements have significantly improved the efficiency of GPUs for AI tasks to the point where they can rival the performance of ASICs for certain applications.


Consequently, contemporary GPUs are convergent, incorporating specialized ASIC-like capabilities within a flexible, general-purpose processing framework. This adaptability has blurred the lines between the two types of hardware. GPUs offer a strong balance of specialization and programmability that is well-suited to the dynamic needs of ML/AI research and development.

High Memory Bandwidth Needs

The massively parallel architecture requires tremendous memory bandwidth to supply thousands of cores, as shown in Figure 1. For example, the Nvidia A100 GPU requires 1.6TB/sec to fully saturate its compute units. GPUs rely on wide 384-bit memory buses to high-bandwidth GDDR6 RAM, but even the fastest GDDR6 tops out at around 1 TB/sec. This dependence on external DRAM incurs latency and power overheads.
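The compute-versus-bandwidth tension can be checked with a simple roofline-style calculation; the peak numbers below are the A100 figures quoted above, and the vector-add example is illustrative:

```python
def bound_check(flops, bytes_moved, peak_tflops, peak_bw_tbs):
    """Roofline-style check: a kernel whose arithmetic intensity
    (FLOPs per byte of DRAM traffic) falls below the hardware's
    ridge point is limited by memory bandwidth, not compute."""
    intensity = flops / bytes_moved
    ridge = peak_tflops / peak_bw_tbs  # FLOPs per byte at the ridge
    return "compute-bound" if intensity > ridge else "memory-bound"

# FP32 vector add: 1 FLOP per 12 bytes (two reads + one write),
# against the A100 figures quoted above (19.5 TFLOPs, 1.6 TB/s).
print(bound_check(flops=1, bytes_moved=12,
                  peak_tflops=19.5, peak_bw_tbs=1.6))  # -> memory-bound
```

With a ridge point around 12 FLOPs/byte, low-intensity operations like elementwise adds cannot use the compute units fully no matter how many cores exist, which is why GPU designs chase ever-higher memory bandwidth.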

Programming Complexity

While tools like CUDA help, optimally mapping and partitioning ML workloads across the massively parallel GPU architecture remains challenging; achieving both high utilization and memory locality requires low-level tuning (Zhe Jia et al. 2018). Abstractions like TensorFlow can leave performance on the table.

Jia, Zhe, Marco Maggioni, Benjamin Staiger, and Daniele P. Scarpazza. 2018. “Dissecting the NVIDIA Volta GPU Architecture via Microbenchmarking.” ArXiv Preprint. https://arxiv.org/abs/1804.06826.

Limited On-Chip Memory

GPUs have relatively small on-chip memory caches compared to ML models’ large working set requirements during training. They rely on high bandwidth access to external DRAM, which ASICs minimize with large on-chip SRAM.

Fixed Architecture

Unlike FPGAs, the fundamental GPU architecture cannot be altered post-manufacture. This constraint limits adapting to novel ML workloads or layers. The CPU-GPU boundary also creates data movement overheads.

Case Study

A prime example is the groundbreaking research conducted by OpenAI (Brown et al. 2020) with their GPT-3 model. GPT-3, a language model with 175 billion parameters, demonstrated unprecedented language understanding and generation capabilities. Its training, which would have taken months on conventional CPUs, was accomplished in a matter of days using powerful GPUs, thus pushing the boundaries of natural language processing (NLP) capabilities.

10.3.5 Central Processing Units (CPUs)

The term CPU has a long history that dates back to 1955 (Weik 1955), while the first microprocessor CPU, the Intel 4004, was invented in 1971 (Who Invented the Microprocessor?). Compilers translate high-level programming languages like Python, Java, or C into assembly instructions (x86, ARM, RISC-V, etc.) for CPUs to process. The set of instructions a CPU understands is called its “instruction set,” and it must be agreed upon by both the hardware and the software running atop it (see Section 5 for a more in-depth description of instruction set architectures, or ISAs).

Weik, Martin H. 1955. A Survey of Domestic Electronic Digital Computing Systems. Ballistic Research Laboratories.

An overview of significant developments in CPUs:

  • Single-core Era (1950s–2000): This era is known for aggressive microarchitectural improvements. Techniques like speculative execution (executing an instruction before the previous one was done), out-of-order execution (re-ordering instructions to be more effective), and wider issue widths (executing multiple instructions at once) were implemented to increase instruction throughput. The term “System on Chip” also originated in this era as different analog components (components designed with transistors) and digital components (components designed with hardware description languages that are mapped to transistors) were put on the same platform to achieve some task.
  • Multicore Era (2000s): Driven by the slowing of Moore’s Law, this era is marked by scaling the number of cores within a CPU. Now, tasks can be split across many different cores, each with its own datapath and control unit. Many of the issues in this era pertained to how to share certain resources, which resources to share, and how to maintain coherency and consistency across all the cores.
  • Sea of accelerators (2010s): Again driven by the slowing of Moore’s Law, this era is marked by offloading more complicated tasks to accelerators (widgets) attached to the main datapath in CPUs. It’s common to see accelerators dedicated to various AI workloads, as well as image/digital processing and cryptography. In these designs, CPUs are often described more as judges, deciding which tasks should be processed rather than doing the processing themselves. Any task could still be run on the CPU rather than the accelerators, but the CPU would generally be slower. However, the cost of designing and programming the accelerator became a non-trivial hurdle that sparked interest in domain-specific languages (DSLs).
  • Presence in data centers: Although we often hear that GPUs dominate the data center market, CPUs are still well suited for tasks that don’t inherently possess a large amount of parallelism. CPUs often handle serial and small tasks and coordinate the data center.
  • On the edge: Given the tighter resource constraints on the edge, edge CPUs often implement only a subset of the techniques developed in the single-core era because these optimizations tend to be heavy on power and area consumption. Edge CPUs still maintain a relatively simple datapath with limited memory capacities.

Traditionally, CPUs have been synonymous with general-purpose computing, a term whose meaning has also shifted as the “average” consumer workload changes over time. For example, floating point components were once considered reserved for “scientific computing”; they were usually implemented as a co-processor (a modular component that worked with the datapath) and seldom deployed to average consumers. Compare this attitude to today, where FPUs are built into every datapath.

Advantages

While raw throughput is limited, general-purpose CPUs provide practical AI acceleration benefits.

General Programmability

CPUs support diverse workloads beyond ML, providing flexible general-purpose programmability. This versatility comes from their standardized instruction sets and mature compiler ecosystems, which allow running any application, from databases and web servers to analytics pipelines (Hennessy and Patterson 2019).

Hennessy, John L., and David A. Patterson. 2019. “A New Golden Age for Computer Architecture.” Commun. ACM 62 (2): 48–60. https://doi.org/10.1145/3282307.

This avoids the need for dedicated ML accelerators and enables leveraging existing CPU-based infrastructure for basic ML deployment. For example, x86 servers from vendors like Intel and AMD can run common ML frameworks using Python and TensorFlow packages alongside other enterprise workloads.

Mature Software Ecosystem

For decades, highly optimized math libraries like BLAS, LAPACK, and FFTW have leveraged vectorized instructions and multithreading on CPUs (Dongarra 2009). Major ML frameworks like PyTorch, TensorFlow, and SciKit-Learn are designed to integrate seamlessly with these CPU math kernels.

Dongarra, Jack J. 2009. “The Evolution of High Performance Computing on System z.” IBM J. Res. Dev. 53: 3–4.

Hardware vendors like Intel and AMD also provide low-level libraries to fully optimize performance for deep learning primitives (AI Inference Acceleration on CPUs). This robust, mature software ecosystem allows quickly deploying ML on existing CPU infrastructure.
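This dispatch to optimized kernels is visible even from NumPy, whose `@` operator calls into a BLAS library. A sketch contrasting it with a naive triple-loop matmul (absolute timings will vary by machine; only the relative gap matters):

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Triple-loop matmul: no vectorization, no cache blocking."""
    m, k = a.shape
    _, n = b.shape
    out = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)

t0 = time.perf_counter(); c_naive = naive_matmul(a, b); t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); c_blas = a @ b; t_blas = time.perf_counter() - t0

assert np.allclose(c_naive, c_blas)
print(f"naive: {t_naive:.4f}s  BLAS: {t_blas:.6f}s")
```

The BLAS path wins by orders of magnitude because it uses the vector extensions, multithreading, and cache blocking that the vendor libraries described above provide.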

Wide Availability

The economies of scale of CPU manufacturing, driven by demand across many markets like PCs, servers, and mobile, make them ubiquitously available. Intel CPUs, for example, have powered most servers for decades (Ranganathan 2011). This wide availability in data centers reduces hardware costs for basic ML deployment.

Ranganathan, Parthasarathy. 2011. “From Microprocessors to Nanostores: Rethinking Data-Centric Systems.” Computer 44 (1): 39–48. https://doi.org/10.1109/mc.2011.18.

Even small embedded devices typically integrate some CPU, enabling edge inference. The ubiquity reduces the need to purchase specialized ML accelerators in many situations.

Low Power for Inference

Optimizations like ARM Neon and Intel AVX vector extensions provide power-efficient integer and floating point throughput optimized for “bursty” workloads such as inference (Ignatov et al. 2018). While slower than GPUs, CPU inference can be deployed in power-constrained environments. For example, ARM’s Cortex-M CPUs now deliver over 1 TOPS of INT8 performance under 1W, enabling keyword spotting and vision applications on edge devices (ARM).

Disadvantages

While providing some advantages, general-purpose CPUs also have limitations for AI workloads.

Lower Throughput than Accelerators

CPUs lack the specialized architectures for massively parallel processing that GPUs and other accelerators provide. Their general-purpose design reduces computational throughput for the highly parallelizable math operations common in ML models (N. P. Jouppi et al. 2017a).

Jouppi, Norman P., Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, et al. 2017a. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246.

Not Optimized for Data Parallelism

The architectures of CPUs are not specifically optimized for the data parallel workloads inherent to AI (Sze et al. 2017). They allocate substantial silicon area to instruction decoding, speculative execution, caching, and flow control that provide little benefit for the array operations used in neural networks (AI Inference Acceleration on CPUs). However, modern CPUs are equipped with vector instructions like AVX-512 specifically to accelerate certain key operations like matrix multiplication.


GPU streaming multiprocessors, for example, devote most transistors to floating point units instead of complex branch prediction logic. This specialization allows much higher utilization for ML math.

Higher Memory Latency

CPUs suffer from higher latency accessing main DDR memory relative to GPUs and other accelerators. Techniques like tiling and caching can help, but the physical separation from off-chip RAM bottlenecks data-intensive ML workloads. This emphasizes the need for specialized memory architectures in ML hardware.
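The tiling technique mentioned above can be sketched as a cache-blocked matrix multiply (the tile size here is illustrative; real implementations tune it to the cache hierarchy):

```python
import numpy as np

def tiled_matmul(a, b, tile=32):
    """Cache-blocked matmul: work on tiles small enough to stay
    resident in cache, so each operand block is fetched from main
    memory once and reused many times."""
    m, k = a.shape
    _, n = b.shape
    out = np.zeros((m, n))
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for p0 in range(0, k, tile):
                out[i0:i0 + tile, j0:j0 + tile] += (
                    a[i0:i0 + tile, p0:p0 + tile]
                    @ b[p0:p0 + tile, j0:j0 + tile]
                )
    return out

a, b = np.random.rand(96, 96), np.random.rand(96, 96)
assert np.allclose(tiled_matmul(a, b), a @ b)
```

The arithmetic is identical to a plain matmul; only the traversal order changes, trading redundant DRAM trips for cache reuse, which is exactly the mitigation the paragraph describes.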

Power Inefficiency Under Heavy Workloads

While suitable for intermittent inference, sustaining near-peak throughput for training results in inefficient power consumption on CPUs, especially mobile CPUs (Ignatov et al. 2018). Accelerators explicitly optimize the data flow, memory, and computation for sustained ML workloads. CPUs are energy-inefficient for training large models.


10.3.6 Comparison

| Accelerator | Description | Key Advantages | Key Disadvantages |
|-------------|-------------|----------------|-------------------|
| ASICs | Custom ICs designed for target workloads like AI inference | Maximizes perf/watt; optimized for tensor ops; low latency on-chip memory | Fixed architecture lacks flexibility; high NRE cost; long design cycles |
| FPGAs | Reconfigurable fabric with programmable logic and routing | Flexible architecture; low latency memory access | Lower perf/watt than ASICs; complex programming |
| GPUs | Originally for graphics, now used for neural network acceleration | High throughput; parallel scalability; software ecosystem with CUDA | Not as power efficient as ASICs; require high memory bandwidth |
| CPUs | General purpose processors | Programmability; ubiquitous availability | Lower performance for AI workloads |

In general, CPUs provide a readily available baseline, GPUs deliver broadly accessible acceleration, FPGAs offer programmability, and ASICs maximize efficiency for fixed functions. The optimal choice depends on the target application’s scale, cost, flexibility, and other requirements.


Although first developed for data center deployment, where [cite some benefit that Google cites], Google has also put considerable effort into developing Edge TPUs. These Edge TPUs maintain the inspiration from systolic arrays but are tailored to the limited resources accessible at the edge.

10.4 Hardware-Software Co-Design

Hardware-software co-design is based on the principle that AI systems achieve optimal performance and efficiency when the hardware and software components are designed in tight integration. This involves an iterative, collaborative design cycle where the hardware architecture and software algorithms are concurrently developed and refined with continuous feedback between teams.


For example, a new neural network model may be prototyped on an FPGA-based accelerator platform to obtain real performance data early in the design process. These results provide feedback to the hardware designers on potential optimizations and the software developers on refinements to the model or framework to better leverage the hardware capabilities. This level of synergy is difficult to achieve with the common practice of software being developed independently to deploy on fixed commodity hardware.


Co-design is critical for embedded AI systems facing significant resource constraints like low power budgets, limited memory and compute capacity, and real-time latency requirements. Tight integration between algorithm developers and hardware architects helps unlock optimizations across the stack to meet these restrictions. Enabling techniques include algorithmic improvements like neural architecture search and pruning and hardware advances like specialized dataflows and memory hierarchies.


By bringing hardware and software design together, rather than developing them separately, holistic optimizations can be made that maximize performance and efficiency. The next sections provide more details on specific co-design approaches.

10.4.1 The Need for Co-Design

Several key factors make a collaborative hardware-software co-design approach essential for building efficient AI systems.

Increasing Model Size and Complexity

State-of-the-art AI models have been rapidly growing in size, enabled by advances in neural architecture design and the availability of large datasets. For example, the GPT-3 language model contains 175 billion parameters (Brown et al. 2020), requiring huge computational resources for training. This explosion in model complexity necessitates co-design to develop efficient hardware and algorithms in tandem. Techniques like model compression (Cheng et al. 2018) and quantization must be co-optimized with the hardware architecture.

Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, Virtual, edited by Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html.
Cheng, Yu, Duo Wang, Pan Zhou, and Tao Zhang. 2018. “Model Compression and Acceleration for Deep Neural Networks: The Principles, Progress, and Challenges.” IEEE Signal Process Mag. 35 (1): 126–36. https://doi.org/10.1109/msp.2017.2765695.

Constraints of Embedded Deployment

Deploying AI applications on edge devices like mobile phones or smart home appliances introduces significant constraints on energy, memory, and silicon area (Sze et al. 2017). Enabling real-time inference under these restrictions requires co-exploring hardware optimizations like specialized dataflows and compression with efficient neural network design and pruning techniques. Co-design maximizes performance within tight deployment constraints.

Rapid Evolution of AI Algorithms

AI is rapidly evolving, with new model architectures, training methodologies, and software frameworks constantly emerging. For example, Transformers have recently become hugely popular for NLP (Young et al. 2018). Keeping pace with these algorithmic innovations requires hardware-software co-design to quickly adapt platforms and avoid accruing technical debt.

Young, Tom, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. 2018. “Recent Trends in Deep Learning Based Natural Language Processing [Review Article].” IEEE Comput. Intell. Mag. 13 (3): 55–75. https://doi.org/10.1109/mci.2018.2840738.

Complex Hardware-Software Interactions

Many subtle interactions and tradeoffs between hardware architectural choices and software optimizations significantly impact overall efficiency. For instance, techniques like tensor partitioning and batching affect parallelism, while data access patterns impact memory utilization. Co-design provides a cross-layer perspective to unravel these dependencies.

Need for Specialization

AI workloads benefit from specialized operations like low-precision math and customized memory hierarchies. This motivates incorporating custom hardware tailored to neural network algorithms rather than relying solely on flexible software running on generic hardware (Sze et al. 2017). However, the software stack must explicitly target custom hardware operations to realize the benefits.

Demand for Higher Efficiency

With growing model complexity, optimizing only the hardware or the software in isolation yields diminishing returns and added overhead (Putnam et al. 2014). Inevitable tradeoffs arise that require global optimization across layers. Jointly co-designing hardware and software provides large compound efficiency gains.

Putnam, Andrew, Adrian M. Caulfield, Eric S. Chung, Derek Chiou, Kypros Constantinides, John Demme, Hadi Esmaeilzadeh, et al. 2014. “A Reconfigurable Fabric for Accelerating Large-Scale Datacenter Services.” ACM SIGARCH Computer Architecture News 42 (3): 13–24. https://doi.org/10.1145/2678373.2665678.

10.4.2 Principles of Hardware-Software Co-Design

The underlying hardware architecture and software stack must be tightly integrated and co-optimized to build high-performance and efficient AI systems. Neither can be designed in isolation; maximizing their synergies requires a holistic approach known as hardware-software co-design.


The key goal is tailoring the hardware capabilities to match the algorithms and workloads run by the software. This requires a feedback loop between hardware architects and software developers to converge on optimized solutions. Several techniques enable effective co-design:

Hardware-Aware Software Optimization

The software stack can be optimized to leverage the underlying hardware capabilities better:

  • Parallelism: Parallelize matrix computations like convolution or attention layers to maximize throughput on vector engines.
  • Memory Optimization: Tune data layouts to improve cache locality based on hardware profiling. This maximizes reuse and minimizes expensive DRAM access.
  • Compression: Use sparsity in the models to reduce storage space and save on computation by zero-skipping operations.
  • Custom Operations: Incorporate specialized operations like low-precision INT4 or bfloat16 into models to capitalize on dedicated hardware support.
  • Dataflow Mapping: Explicitly map model stages to computational units to optimize data movement on hardware.
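The zero-skipping idea behind the compression point can be sketched as follows (illustrative only; real sparse kernels use compressed storage formats rather than scanning for nonzeros):

```python
import numpy as np

def sparse_dot(weights, x):
    """Zero-skipping dot product: only nonzero weights contribute,
    so a pruned (sparse) model skips both the multiply and the
    corresponding activation load."""
    nz = np.nonzero(weights)[0]
    macs = len(nz)  # MAC operations actually performed
    return float(np.dot(weights[nz], x[nz])), macs

w = np.array([0.0, 0.5, 0.0, 0.0, -1.0, 0.0])  # 4 of 6 weights pruned
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

y, macs = sparse_dot(w, x)
print(y, f"{macs}/{len(w)} MACs")  # -4.0, only 2/6 MACs performed
```

Hardware with sparsity support performs this skipping natively, which is why the software side must expose the zeros explicitly through pruning.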

Algorithm-Driven Hardware Specialization


Hardware can be tailored to suit the characteristics of ML algorithms better:

  • Custom Datatypes: Support low precision INT8/4 or bfloat16 in hardware for higher arithmetic density.
  • On-Chip Memory: Increase SRAM bandwidth and lower access latency to match model memory access patterns.
  • Domain-Specific Ops: Add hardware units for key ML functions like FFTs or matrix multiplication to reduce latency and energy.
  • Model Profiling: Use model simulation and profiling to identify computational hotspots and optimize hardware.
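The model profiling point can be illustrated with a toy per-layer MAC count; the layer names and sizes below are hypothetical, chosen only to show how a hotspot emerges:

```python
def profile_layers(layers):
    """Toy model profiler: estimate per-layer MACs to find the
    computational hotspots that hardware should specialize for.

    Each layer is (name, in_features, out_features) of a dense layer,
    so its cost is in_features * out_features MACs per input.
    """
    costs = {name: fin * fout for name, fin, fout in layers}
    total = sum(costs.values())
    return sorted(
        ((name, macs, 100 * macs / total) for name, macs in costs.items()),
        key=lambda t: -t[1],
    )

# Hypothetical three-layer model
model = [("embed", 512, 2048), ("hidden", 2048, 2048), ("head", 2048, 10)]
for name, macs, pct in profile_layers(model):
    print(f"{name:8s} {macs:>10d} MACs  {pct:5.1f}%")
```

Even this crude count shows one layer dominating the budget; real co-design flows use cycle-accurate simulators for the same purpose, then size hardware units around the dominant operations.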

The key is collaborative feedback - insights from hardware profiling guide software optimizations, while algorithmic advances inform hardware specialization. This mutual enhancement provides multiplicative efficiency gains compared to isolated efforts.

Algorithm-Hardware Co-exploration

A powerful co-design technique involves jointly exploring innovations in neural network architectures and custom hardware design. This allows for finding ideal pairings tailored to each other’s strengths (Sze et al. 2017).

Sze, Vivienne, Yu-Hsin Chen, Tien-Ju Yang, and Joel S. Emer. 2017. “Efficient Processing of Deep Neural Networks: A Tutorial and Survey.” Proc. IEEE 105 (12): 2295–2329. https://doi.org/10.1109/jproc.2017.2761740.
Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv Preprint. https://arxiv.org/abs/1704.04861.
Jacob, Benoit, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. 2018. “Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2704–13. IEEE. https://doi.org/10.1109/cvpr.2018.00286.
Gale, Trevor, Erich Elsen, and Sara Hooker. 2019. “The State of Sparsity in Deep Neural Networks.” ArXiv Preprint abs/1902.09574. https://arxiv.org/abs/1902.09574.
Mishra, Asit K., Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, and Paulius Micikevicius. 2021. “Accelerating Sparse Deep Neural Networks.” CoRR abs/2104.08378. https://arxiv.org/abs/2104.08378.

For instance, the shift to mobile architectures like MobileNets (Howard et al. 2017) was guided by edge device constraints like model size and latency. The quantization (Jacob et al. 2018) and pruning techniques (Gale, Elsen, and Hooker 2019) that unlocked these efficient models became possible thanks to hardware accelerators with native low-precision integer support and pruning support (Mishra et al. 2021).
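The pruning technique referenced here can be sketched in a few lines; this is the simple global magnitude-pruning variant, not the exact method of the cited papers:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Global magnitude pruning: zero out the smallest-magnitude
    fraction of weights. (Ties at the threshold may prune slightly
    more than the requested fraction.)"""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest |w|
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.array([[0.1, -0.9], [0.05, 0.7]])
print(magnitude_prune(w, sparsity=0.5))  # zeros the 0.1 and 0.05 entries
```

The resulting zeros are only a storage and compute win when the hardware can skip them, which is exactly the co-dependence between pruning algorithms and sparsity-aware accelerators described above.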


Attention-based models have thrived on massively parallel GPUs and ASICs, where their computation maps well spatially, as opposed to RNN architectures, which rely on sequential processing. The co-evolution of algorithms and hardware unlocked new capabilities.


Effective co-exploration requires close collaboration between algorithm researchers and hardware architects. Rapid prototyping on FPGAs (C. Zhang et al. 2015) or specialized AI simulators allows quick evaluation of different pairings of model architectures and hardware designs pre-silicon.

Zhang, Chen, Peng Li, Guangyu Sun, Yijin Guan, Bingjun Xiao, and Jason Cong. 2015. “Optimizing FPGA-Based Accelerator Design for Deep Convolutional Neural Networks.” In Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA ’15), 161–70. ACM.

For example, Google’s TPU architecture evolved with optimizations to TensorFlow models to maximize performance on image classification. This tight feedback loop yielded models tailored for the TPU that would have been unlikely in isolation.


Studies have shown 2-5x higher performance and efficiency gains with algorithm-hardware co-exploration than isolated algorithm or hardware optimization efforts (Suda et al. 2016). Parallelizing the joint development also reduces time-to-deployment.

Suda, Naveen, Vikas Chandra, Ganesh Dasika, Abinash Mohanty, Yufei Ma, Sarma Vrudhula, Jae-sun Seo, and Yu Cao. 2016. “Throughput-Optimized OpenCL-Based FPGA Accelerator for Large-Scale Convolutional Neural Networks.” In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 16–25. ACM. https://doi.org/10.1145/2847263.2847276.

Overall, exploring the tight interdependencies between model innovation and hardware advances unlocks opportunities that remain invisible when the two are tackled sequentially. This synergistic co-design yields solutions greater than the sum of their parts.


10.4.3 Challenges


While collaborative co-design can improve efficiency, adaptability, and time to market, it also has engineering and organizational challenges.


Increased Prototyping Costs


More extensive prototyping is required to evaluate different hardware-software pairings. The need for rapid, iterative prototypes on FPGAs or emulators increases validation overhead. For example, Microsoft found that co-designing its AI accelerator required more prototypes than a sequential design process would have (Fowers et al. 2018).

Fowers, Jeremy, Kalin Ovtcharov, Michael Papamichael, Todd Massengill, Ming Liu, Daniel Lo, Shlomi Alkalay, et al. 2018. “A Configurable Cloud-Scale DNN Processor for Real-Time AI.” In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), 1–14. IEEE. https://doi.org/10.1109/isca.2018.00012.

Team and Organizational Hurdles


Co-design requires close coordination between traditionally disconnected hardware and software groups. This could introduce communication issues or misaligned priorities and schedules. Navigating different engineering workflows is also challenging. Some organizational inertia to adopting integrated practices may exist.


Simulation and Modeling Complexity


Capturing subtle interactions between hardware and software layers for joint simulation and modeling adds significant complexity. Full cross-layer abstractions are difficult to construct quantitatively before implementation, making holistic optimizations harder to quantify ahead of time.


Over-Specialization Risks


Tight co-design bears the risk of overfitting optimizations to current algorithms, sacrificing generality. For example, hardware tuned exclusively for Transformer models could underperform on future techniques. Maintaining flexibility requires foresight.


Adoption Challenges


Engineers comfortable with established, separate hardware or software design practices may be reluctant to adopt unfamiliar collaborative workflows. Despite the long-term benefits, projects could face friction in transitioning to co-design.


10.5 Software for AI Hardware


Specialized hardware accelerators like GPUs, TPUs, and FPGAs are essential to delivering high-performance artificial intelligence applications. However, an extensive software stack is required to leverage these hardware platforms effectively, spanning the entire development and deployment lifecycle. Frameworks and libraries form the backbone of AI hardware, offering sets of robust, pre-built code, algorithms, and functions specifically optimized to perform various AI tasks on different hardware. They are designed to simplify the complexities of utilizing the hardware from scratch, which can be time-consuming and prone to error. Software plays an important role in the following:

  • Providing programming abstractions and models like CUDA and OpenCL to map computations onto accelerators.
  • Integrating accelerators into popular deep learning frameworks like TensorFlow and PyTorch.
  • Compilers and tools to optimize across the hardware-software stack.
  • Simulation platforms to model hardware and software together.
  • Infrastructure to manage deployment on accelerators.

This expansive software ecosystem is as important as the hardware in delivering performant and efficient AI applications. This section overviews the tools available at each stack layer to enable developers to build and run AI systems powered by hardware acceleration.


10.5.1 Programming Models


Programming models provide abstractions to map computations and data onto heterogeneous hardware accelerators:

  • CUDA: Nvidia’s parallel programming model to leverage GPUs using extensions to languages like C/C++. Allows launching kernels across GPU cores (Luebke 2008).
  • OpenCL: Open standard for writing programs spanning CPUs, GPUs, FPGAs, and other accelerators. Specifies a heterogeneous computing framework (Munshi 2009).
  • OpenGL/WebGL: 3D graphics programming interfaces that can map general-purpose code to GPU cores (Segal and Akeley 1999).
  • Verilog/VHDL: Hardware description languages (HDLs) used to configure FPGAs as AI accelerators by specifying digital circuits (Gannot and Ligthart 1994).
  • TVM: A compiler framework providing a Python frontend to optimize and map deep learning models onto diverse hardware backends (Chen et al. 2018).
Luebke, David. 2008. “CUDA: Scalable Parallel Programming for High-Performance Scientific Computing.” In 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 836–38. IEEE. https://doi.org/10.1109/isbi.2008.4541126.
Munshi, Aaftab. 2009. “The OpenCL Specification.” In 2009 IEEE Hot Chips 21 Symposium (HCS), 1–314. IEEE. https://doi.org/10.1109/hotchips.2009.7478342.
Segal, Mark, and Kurt Akeley. 1999. “The OpenGL Graphics System: A Specification (Version 1.1).”
Gannot, G., and M. Ligthart. 1994. “Verilog HDL Based FPGA Design.” In International Verilog HDL Conference, 86–92. IEEE. https://doi.org/10.1109/ivc.1994.323743.
Chen, Tianqi, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, et al. 2018. “TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.” In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 578–94.

Key challenges include expressing parallelism, managing memory across devices, and matching algorithms to hardware capabilities. Abstractions must balance portability with allowing hardware customization. Programming models enable developers to harness accelerators without hardware expertise. These details are discussed in the AI frameworks section.
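To give a feel for what a programming model like CUDA abstracts, the following Python sketch simulates its thread hierarchy: a kernel is written from the perspective of a single thread, and the host launches it over a grid of blocks. The serial loops here stand in for the GPU hardware that would run the threads in parallel; the names deliberately mirror CUDA’s `threadIdx`, `blockIdx`, and `blockDim`.

```python
# Simulating the CUDA-style kernel launch model in plain Python.

def vector_add_kernel(threadIdx, blockIdx, blockDim, a, b, out):
    i = blockIdx * blockDim + threadIdx   # global index, as in CUDA
    if i < len(out):                      # guard against out-of-range threads
        out[i] = a[i] + b[i]

def launch(kernel, grid_dim, block_dim, *args):
    """Host-side launch: iterate the grid serially (a GPU would run this in parallel)."""
    for block in range(grid_dim):
        for thread in range(block_dim):
            kernel(thread, block, block_dim, *args)

n = 10
a = list(range(n))
b = [10 * x for x in a]
out = [0] * n
block = 4
grid = (n + block - 1) // block           # enough blocks to cover all n elements
launch(vector_add_kernel, grid, block, a, b, out)
assert out == [11 * x for x in range(n)]
```

The per-thread guard (`i < len(out)`) is the same pattern real CUDA kernels use when the grid size overshoots the data size.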


Exercise 10.1 (Software for AI Hardware - TVM)  


We’ve learned that fancy AI hardware needs special software to work magic. TVM is like a super-smart translator, turning your code into instructions that accelerators understand. In this Colab, we’ll use TVM to make a pretend accelerator called VTA do matrix multiplication super fast. Ready to see how software powers up hardware?
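As a taste of what such a compiler does under the hood, the sketch below shows loop tiling, one of the schedule transformations TVM applies when mapping matrix multiplication onto an accelerator: the arithmetic is unchanged, but the loops are restructured so each tile of work fits in fast on-chip memory. The tile size `T` and the test matrices are illustrative.

```python
# Loop tiling: the same matmul, restructured the way a tensor compiler would.

def matmul_naive(A, B, n):
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_tiled(A, B, n, T=2):
    C = [[0] * n for _ in range(n)]
    for i0 in range(0, n, T):                     # iterate over output tiles
        for j0 in range(0, n, T):
            for k0 in range(0, n, T):
                for i in range(i0, min(i0 + T, n)):
                    for j in range(j0, min(j0 + T, n)):
                        for k in range(k0, min(k0 + T, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

n = 4
A = [[i + j for j in range(n)] for i in range(n)]
B = [[i * j for j in range(n)] for i in range(n)]
# The schedule changes, the result does not:
assert matmul_tiled(A, B, n) == matmul_naive(A, B, n)
```

On real hardware, the choice of `T` is tuned so each tile fits the accelerator’s scratchpad or cache, which is exactly the kind of decision TVM’s auto-tuner searches over.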



10.5.2 Libraries and Runtimes


Specialized libraries and runtimes provide software abstractions to access and maximize the utilization of AI accelerators:

  • Math Libraries: Highly optimized implementations of linear algebra primitives like GEMM, FFTs, convolutions, etc., tailored to the target hardware. Nvidia cuBLAS, Intel MKL, and Arm compute libraries are examples.
  • Framework Integrations: Libraries to accelerate deep learning frameworks like TensorFlow, PyTorch, and MXNet on supported hardware. For example, cuDNN accelerates CNNs on Nvidia GPUs.
  • Runtimes: Software to handle accelerator execution, including scheduling, synchronization, memory management, and other tasks. Nvidia TensorRT is an inference optimizer and runtime.
  • Drivers and Firmware: Low-level software to interface with hardware, initialize devices, and handle execution. Vendors like Xilinx provide drivers for their accelerator boards.

For instance, PyTorch integrators use cuDNN and cuBLAS libraries to accelerate training on Nvidia GPUs. The TensorFlow XLA runtime optimizes and compiles models for accelerators like TPUs. Drivers initialize devices and offload operations.


The challenges include efficiently partitioning and scheduling workloads across heterogeneous devices like multi-GPU nodes. Runtimes must also minimize the overhead of data transfers and synchronization.


Libraries, runtimes, and drivers provide optimized building blocks that deep learning developers can leverage to tap into accelerator performance without hardware programming expertise. Their optimization is essential for production deployments.


10.5.3 Optimizing Compilers


Optimizing compilers is key in extracting maximum performance and efficiency from hardware accelerators for AI workloads. They apply optimizations spanning algorithmic changes, graph-level transformations, and low-level code generation.

  • Algorithm Optimization: Techniques like quantization, pruning, and neural architecture search to enhance model efficiency and match hardware capabilities.
  • Graph Optimizations: Graph-level optimizations like operator fusion, rewriting, and layout transformations to optimize performance on target hardware.
  • Code Generation: Generating optimized low-level code for accelerators from high-level models and frameworks.

For example, the TVM open compiler stack applies quantization for a BERT model targeting Arm GPUs. It fuses pointwise convolution operations and transforms the weight layout to optimize memory access. Finally, it emits optimized OpenGL code to run the GPU workload.
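Operator fusion, mentioned among the graph-level optimizations, can be illustrated with a toy sketch: three element-wise stages (a stand-in for a conv output, a bias-add, and a ReLU) computed as three separate passes over memory versus one fused pass where each intermediate value stays in a register. The function names and data are illustrative only.

```python
# Unfused: three kernels, each reading and writing a full intermediate buffer.
def unfused(xs, bias):
    scaled = [2.0 * x for x in xs]         # stand-in for a conv/matmul output
    biased = [s + bias for s in scaled]    # second pass over memory
    return [max(b, 0.0) for b in biased]   # third pass (ReLU)

# Fused: one kernel, one pass; intermediates never touch memory.
def fused(xs, bias):
    return [max(2.0 * x + bias, 0.0) for x in xs]

xs = [-1.5, -0.2, 0.0, 0.7, 3.0]
assert fused(xs, 0.5) == unfused(xs, 0.5)  # same values, fewer memory passes
```

On an accelerator, the fused version eliminates two round trips to memory per element, which is often worth more than the arithmetic itself on bandwidth-bound workloads.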


Key compiler optimizations include maximizing parallelism, improving data locality and reuse, minimizing memory footprint, and exploiting custom hardware operations. Compilers build and optimize machine learning workloads holistically across hardware components like CPUs, GPUs, and other accelerators.


However, efficiently mapping complex models introduces challenges like efficiently partitioning workloads across heterogeneous devices. Production-level compilers also require extensive time tuning on representative workloads. Still, optimizing compilers is indispensable in unlocking the full capabilities of AI accelerators.


10.5.4 Simulation and Modeling


Simulation software is important in hardware-software co-design. It enables joint modeling of proposed hardware architectures and software stacks:

  • Hardware Simulation: Platforms like Gem5 allow detailed simulation of hardware components like pipelines, caches, interconnects, and memory hierarchies. Engineers can model hardware changes without physical prototyping (Binkert et al. 2011).
  • Software Simulation: Compiler stacks like TVM support the simulation of machine learning workloads to estimate performance on target hardware architectures. This assists with software optimizations.
  • Co-simulation: Unified platforms like SCALE-Sim (Samajdar et al. 2018) integrate hardware and software simulation into a single tool. This enables what-if analysis to quantify the system-level impacts of cross-layer optimizations early in the design cycle.

Binkert, Nathan, Bradford Beckmann, Gabriel Black, Steven K. Reinhardt, Ali Saidi, Arkaprava Basu, Joel Hestness, et al. 2011. “The Gem5 Simulator.” ACM SIGARCH Computer Architecture News 39 (2): 1–7. https://doi.org/10.1145/2024716.2024718.
Samajdar, Ananda, Yuhao Zhu, Paul Whatmough, Matthew Mattina, and Tushar Krishna. 2018. “SCALE-Sim: Systolic CNN Accelerator Simulator.” ArXiv Preprint abs/1811.02883. https://arxiv.org/abs/1811.02883.

For example, an FPGA-based AI accelerator design could be simulated using Verilog hardware description language and synthesized into a Gem5 model. Verilog is well-suited for describing the digital logic and interconnects of the accelerator architecture. Verilog allows the designer to specify the datapaths, control logic, on-chip memories, and other components implemented in the FPGA fabric. Once the Verilog design is complete, it can be synthesized into a model that simulates the behavior of the hardware, such as using the Gem5 simulator. Gem5 is useful for this task because it allows the modeling of full systems, including processors, caches, buses, and custom accelerators. Gem5 supports interfacing Verilog models of hardware to the simulation, enabling unified system modeling.


The synthesized FPGA accelerator model could then have ML workloads simulated using TVM compiled onto it within the Gem5 environment for unified modeling. TVM allows optimized compilation of ML models onto heterogeneous hardware like FPGAs. Running TVM-compiled workloads on the accelerator within the Gem5 simulation provides an integrated way to validate and refine the hardware design, software stack, and system integration before physically realizing the accelerator on a real FPGA.


This type of co-simulation provides estimations of overall metrics like throughput, latency, and power to guide co-design before expensive physical prototyping. They also assist with partitioning optimizations between hardware and software to guide design tradeoffs.
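A much-simplified flavor of such pre-silicon estimation is an analytical cycle model, in the spirit of tools like SCALE-Sim. The sketch below estimates the cycles an R×C output-stationary systolic array needs for an (M×K)·(K×N) matrix multiply; the per-tile cost of `K + R + C - 2` cycles (accumulation plus pipeline fill and drain) is a common first-order simplification, not any specific chip’s behavior, and the function names are ours.

```python
import math

def systolic_cycles(M, K, N, R=16, C=16):
    """First-order cycle estimate for an R x C output-stationary systolic array."""
    tiles = math.ceil(M / R) * math.ceil(N / C)   # output tiles to compute
    cycles_per_tile = K + R + C - 2               # accumulate + fill/drain pipeline
    return tiles * cycles_per_tile

def macs_per_cycle(M, K, N, R=16, C=16):
    """Achieved throughput; approaches R*C as K grows relative to fill/drain cost."""
    return (M * K * N) / systolic_cycles(M, K, N, R, C)

# A 256x256x256 matmul on a 16x16 array: 256 tiles of 286 cycles each.
assert systolic_cycles(256, 256, 256) == 73216
```

Even a model this crude lets designers compare array shapes and tiling choices long before RTL exists, which is precisely the role co-simulation plays at higher fidelity.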


However, accuracy in modeling subtle low-level interactions between components is limited. Quantified simulations are estimates but cannot wholly replace physical prototypes and testing. Still, unified simulation and modeling provide invaluable early insights into system-level optimization opportunities during the co-design process.


10.6 Benchmarking AI Hardware


Benchmarking is a critical process that quantifies and compares the performance of various hardware platforms designed to speed up artificial intelligence applications. It guides purchasing decisions, development focus, and performance optimization efforts for hardware manufacturers and software developers.


The benchmarking chapter explores this topic in great detail, explaining why it has become an indispensable part of the AI hardware development cycle and how it impacts the broader technology landscape. Here, we will briefly review the main concepts, but we recommend that you refer to the chapter for more details.


Benchmarking suites such as MLPerf, Fathom, and AI Benchmark offer a set of standardized tests that can be used across different hardware platforms. These suites measure AI accelerator performance across various neural networks and machine learning tasks, from basic image classification to complex language processing. By providing a common ground for comparison, they help ensure that performance claims are consistent and verifiable. These tools are applied not only to guide the development of hardware but also to ensure that the software stack leverages the full potential of the underlying architecture.

  • MLPerf: Includes a broad set of benchmarks covering both training (Mattson et al. 2020) and inference (Reddi et al. 2020) for a range of machine learning tasks.
  • Fathom: Focuses on core operations in deep learning models, emphasizing their execution on different architectures (Adolf et al. 2016).
  • AI Benchmark: Targets mobile and consumer devices, assessing AI performance in end-user applications (Ignatov et al. 2018).
Mattson, Peter, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, et al. 2020. “MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance.” IEEE Micro 40 (2): 8–16. https://doi.org/10.1109/mm.2020.2974843.
Reddi, Vijay Janapa, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, et al. 2020. “MLPerf Inference Benchmark.” In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), 446–59. IEEE. https://doi.org/10.1109/isca45697.2020.00045.
Adolf, Robert, Saketh Rama, Brandon Reagen, Gu-yeon Wei, and David Brooks. 2016. “Fathom: Reference Workloads for Modern Deep Learning Methods.” In 2016 IEEE International Symposium on Workload Characterization (IISWC), 1–10. IEEE. https://doi.org/10.1109/iiswc.2016.7581275.
Ignatov, Andrey, Radu Timofte, William Chou, Ke Wang, Max Wu, Tim Hartley, and Luc Van Gool. 2018. “AI Benchmark: Running Deep Neural Networks on Android Smartphones.” In Proceedings of the European Conference on Computer Vision (ECCV) Workshops.

Benchmarks also have performance metrics that are the quantifiable measures used to evaluate the effectiveness of AI accelerators. These metrics provide a comprehensive view of an accelerator’s capabilities and are used to guide the design and selection process for AI systems. Common metrics include:

  • Throughput: Usually measured in operations per second, this metric indicates the volume of computations an accelerator can handle.
  • Latency: The time delay from input to output in a system is vital for real-time processing tasks.
  • Energy Efficiency: Calculated as computations per watt, representing the tradeoff between performance and power consumption.
  • Cost Efficiency: This evaluates the cost of operation relative to performance, an essential metric for budget-conscious deployments.
  • Accuracy: In inference tasks, the precision of computations is critical and sometimes balanced against speed.
  • Scalability: The ability of the system to maintain performance gains as the computational load scales up.
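The metrics above can be derived directly from raw benchmark measurements. The sketch below shows the arithmetic; the numbers are invented for illustration (not from any published result), and the mean-latency calculation assumes serial, batch-1 execution, since with batching, latency must be measured per request rather than derived from aggregate throughput.

```python
# Deriving standard accelerator metrics from raw measurements (illustrative).

def metrics(num_inferences, total_seconds, avg_watts, cost_per_hour):
    throughput = num_inferences / total_seconds            # inferences/second
    latency_ms = 1000.0 * total_seconds / num_inferences   # mean ms/inference (batch-1)
    energy_eff = throughput / avg_watts                    # inferences/joule
    cost_eff = num_inferences / (cost_per_hour * total_seconds / 3600.0)  # inf/$
    return throughput, latency_ms, energy_eff, cost_eff

tp, lat, eff, cost = metrics(num_inferences=50_000, total_seconds=100.0,
                             avg_watts=250.0, cost_per_hour=3.0)
assert tp == 500.0 and lat == 2.0   # 500 inf/s at 2 ms mean latency
assert eff == 2.0                   # 2 inferences per joule
```

Note how the same raw run yields all four numbers; benchmark suites differ mainly in which workloads they run and how strictly they define the measurement rules.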

Benchmark results give insights beyond just numbers—they can reveal bottlenecks in the software and hardware stack. For example, benchmarks may show how increased batch size improves GPU utilization by providing more parallelism or how compiler optimizations boost TPU performance. These learnings enable continuous optimization (Zhihao Jia, Zaharia, and Aiken 2019).

Jia, Zhihao, Matei Zaharia, and Alex Aiken. 2019. “Beyond Data and Model Parallelism for Deep Neural Networks.” In Proceedings of Machine Learning and Systems 2019, MLSys 2019, Stanford, CA, USA, March 31 - April 2, 2019, edited by Ameet Talwalkar, Virginia Smith, and Matei Zaharia. mlsys.org. https://proceedings.mlsys.org/book/265.pdf.
Zhu, Hongyu, Mohamed Akrout, Bojian Zheng, Andrew Pelegris, Anand Jayarajan, Amar Phanishayee, Bianca Schroeder, and Gennady Pekhimenko. 2018. “Benchmarking and Analyzing Deep Neural Network Training.” In 2018 IEEE International Symposium on Workload Characterization (IISWC), 88–100. IEEE. https://doi.org/10.1109/iiswc.2018.8573476.

Standardized benchmarking provides a quantified, comparable evaluation of AI accelerators to inform design, purchasing, and optimization. However, real-world performance validation remains essential as well (H. Zhu et al. 2018).


10.7 Challenges and Solutions


AI accelerators offer impressive performance improvements, but significant portability and compatibility challenges often complicate their integration into the broader AI landscape. The crux of the issue lies in the diversity of the AI ecosystem—a vast array of machine learning accelerators, frameworks, and programming languages exist, each with its unique features and requirements.


10.7.1 Portability/Compatibility Issues


Developers frequently encounter difficulties transferring their AI models from one hardware environment to another. For example, a machine learning model developed for a desktop environment in Python using the PyTorch framework, optimized for an Nvidia GPU, may not easily transition to a more constrained device such as the Arduino Nano 33 BLE. This complexity stems from stark differences in programming requirements - Python and PyTorch on the desktop versus a C++ environment on an Arduino, not to mention the shift from x86 architecture to ARM ISA.


These divergences highlight the intricacy of portability within AI systems. Moreover, the rapid advancement in AI algorithms and models means that hardware accelerators must continually adapt, creating a moving target for compatibility. The absence of universal standards and interfaces compounds the issue, making deploying AI solutions consistently across various devices and platforms challenging.


Solutions and Strategies


To address these hurdles, the AI industry is moving towards several solutions:

Standardization Initiatives

The Open Neural Network Exchange (ONNX) is at the forefront of this pursuit, proposing an open and shared ecosystem that promotes model interchangeability. ONNX facilitates the use of AI models across various frameworks, allowing models trained in one environment to be efficiently deployed in another, significantly reducing the need for time-consuming rewrites or adjustments.

Cross-Platform Frameworks

Complementing the standardization efforts, cross-platform frameworks such as TensorFlow Lite and PyTorch Mobile have been developed specifically to create cohesion between diverse computational environments ranging from desktops to mobile and embedded devices. These frameworks offer streamlined, lightweight versions of their parent frameworks, ensuring compatibility and functional integrity across different hardware types without sacrificing performance. This ensures that developers can create applications with the confidence that they will work on many devices, bridging a gap that has traditionally posed a considerable challenge in AI development.

Hardware-agnostic Platforms

The rise of hardware-agnostic platforms has also played an important role in democratizing the use of AI. By creating environments where AI applications can be executed on various accelerators, these platforms remove the burden of hardware-specific coding from developers. This abstraction simplifies the development process and opens up new possibilities for innovation and application deployment, free from the constraints of hardware specifications.

Advanced Compilation Tools

In addition, the advent of advanced compilation tools like TVM, an end-to-end tensor compiler, offers an optimized path through the jungle of diverse hardware architectures. TVM equips developers with the means to fine-tune machine learning models for a broad spectrum of computational substrates, ensuring optimal performance and avoiding manual model adjustment each time there is a shift in the underlying hardware.

Community and Industry Collaboration

The collaboration between open-source communities and industry consortia cannot be understated. These collective bodies are instrumental in forming shared standards and best practices that all developers and manufacturers can adhere to. Such collaboration fosters a more unified and synergistic AI ecosystem, significantly diminishing the prevalence of portability issues and smoothing the path toward global AI integration and advancement. Through these combined efforts, AI is steadily moving toward a future where seamless model deployment across various platforms becomes a standard rather than an exception.


Solving the portability challenges is crucial for the AI field to realize the full potential of hardware accelerators in a dynamic and diverse technological landscape. It requires a concerted effort from hardware manufacturers, software developers, and standard bodies to create a more interoperable and flexible environment. With continued innovation and collaboration, the AI community can pave the way for seamless integration and deployment of AI models across many platforms.


10.7.2 Power Consumption Concerns


Power consumption is a crucial issue in the development and operation of data center AI accelerators, like Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) (N. P. Jouppi et al. 2017b) (Norrie et al. 2021) (N. Jouppi et al. 2023). These powerful components are the backbone of contemporary AI infrastructure, but their high energy demands contribute to the environmental impact of technology and drive up operational costs significantly. As data processing needs become more complex, with the popularity of AI and deep learning increasing, there’s a pressing demand for GPUs and TPUs that can deliver the necessary computational power more efficiently. The impact of such advancements is two-fold: they can lower these technologies’ environmental footprint and reduce the cost of running AI applications.

Jouppi, Norman P., et al. 2017b. “In-Datacenter Performance Analysis of a Tensor Processing Unit.” In Proceedings of the 44th Annual International Symposium on Computer Architecture, 1–12. ISCA ’17. New York, NY, USA: ACM. https://doi.org/10.1145/3079856.3080246.
Norrie, Thomas, Nishant Patil, Doe Hyun Yoon, George Kurian, Sheng Li, James Laudon, Cliff Young, Norman Jouppi, and David Patterson. 2021. “The Design Process for Google’s Training Chips: TPUv2 and TPUv3.” IEEE Micro 41 (2): 56–63. https://doi.org/10.1109/mm.2021.3058217.
Jouppi, Norm, George Kurian, Sheng Li, Peter Ma, Rahul Nagarajan, Lifeng Nai, Nishant Patil, et al. 2023. “TPU V4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings.” In Proceedings of the 50th Annual International Symposium on Computer Architecture. ISCA ’23. New York, NY, USA: ACM. https://doi.org/10.1145/3579371.3589350.

Emerging hardware technologies are at the cusp of revolutionizing power efficiency in this sector. Photonic computing, for instance, uses light rather than electricity to carry information, offering a promise of high-speed processing with a fraction of the power usage. We delve deeper into this and other innovative technologies in the “Emerging Hardware Technologies” section, exploring their potential to address current power consumption challenges.


At the edge of the network, AI accelerators are engineered to process data on devices like smartphones, IoT sensors, and smart wearables. These devices often work under severe power limitations, necessitating a careful balancing act between performance and power usage. A high-performance AI model may provide quick results but at the cost of depleting battery life swiftly and increasing thermal output, which may affect the device’s functionality and durability. The stakes are higher for devices deployed in remote or hard-to-reach areas, where consistent power supply cannot be guaranteed, underscoring the need for low-power-consuming solutions.


Latency issues further compound the challenge of power efficiency at the edge. Edge AI applications in fields such as autonomous driving and healthcare monitoring require speed, precision, and reliability, as delays in processing can lead to serious safety risks. For these applications, developers must optimize both the AI algorithms and the hardware design to strike an optimal balance between power consumption and latency.


This optimization effort is not just about making incremental improvements to existing technologies; it’s about rethinking how and where we process AI tasks. By designing AI accelerators that are both power-efficient and capable of quick processing, we can ensure these devices serve their intended purposes without unnecessary energy use or compromised performance. Such developments could propel the widespread adoption of AI across various sectors, enabling smarter, safer, and more sustainable use of technology.


10.7.3 Overcoming Resource Constraints


Resource constraints also pose a significant challenge for Edge AI accelerators, as these specialized hardware and software solutions must deliver robust performance within the limitations of edge devices. Due to power and size limitations, edge AI accelerators often have restricted computation, memory, and storage capacity (L. Zhu et al. 2023). This scarcity of resources necessitates a careful allocation of processing capabilities to execute machine learning models efficiently.

Zhu, Ligeng, Lanxiang Hu, Ji Lin, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, and Song Han. 2023. “PockEngine: Sparse and Efficient Fine-Tuning in a Pocket.” In 56th Annual IEEE/ACM International Symposium on Microarchitecture. ACM. https://doi.org/10.1145/3613424.3614307.
Lin, Ji, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. 2023. “AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration.” ArXiv Preprint.
Li, Yuhang, Xin Dong, and Wei Wang. 2020. “Additive Powers-of-Two Quantization: An Efficient Non-Uniform Discretization for Neural Networks.” In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. https://openreview.net/forum?id=BkgXT24tDS.
Wang, Tianzhe, Kuan Wang, Han Cai, Ji Lin, Zhijian Liu, Hanrui Wang, Yujun Lin, and Song Han. 2020. “APQ: Joint Search for Network Architecture, Pruning and Quantization Policy.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2075–84. IEEE. https://doi.org/10.1109/cvpr42600.2020.00215.

Moreover, managing constrained resources demands innovative approaches, including model quantization (Lin et al. 2023) (Li, Dong, and Wang 2020), pruning (Wang et al. 2020), and optimizing inference pipelines. Edge AI accelerators must strike a delicate balance between providing meaningful AI functionality and not exhausting available resources while maintaining low power consumption. Overcoming these resource constraints is crucial to ensure the successful deployment of AI at the edge, where many applications, from IoT to mobile devices, rely on efficiently using limited hardware resources to deliver real-time and intelligent decision-making.
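Of the techniques just mentioned, magnitude-based pruning is the simplest to sketch: zero out the smallest-magnitude fraction of weights, the baseline approach evaluated by Gale, Elsen, and Hooker (2019). The function name and example weights below are illustrative; real systems prune per-layer or with structured patterns that hardware can exploit (Mishra et al. 2021).

```python
# Minimal sketch of unstructured magnitude pruning.

def prune_by_magnitude(weights, sparsity):
    """Return weights with the smallest `sparsity` fraction set to zero."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.8, -0.05, 0.3, -0.9, 0.01, 0.2, -0.4, 0.07]   # made-up weights
pruned = prune_by_magnitude(w, sparsity=0.5)
assert pruned.count(0.0) == 4                         # half the weights removed
assert 0.8 in pruned and -0.9 in pruned               # largest magnitudes survive
```

The payoff on edge hardware comes from skipping the zeroed weights entirely, which requires sparse storage formats and, ideally, accelerator support for sparse computation.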


10.8 Emerging Technologies


Thus far, we have discussed AI hardware technology in the context of conventional von Neumann architecture design and CMOS-based implementation. These specialized AI chips offer benefits like higher throughput and power efficiency but rely on traditional computing principles. The relentless growth in demand for AI computing power is driving innovations in integration methods for AI hardware.


Two leading approaches have emerged for maximizing compute density—wafer-scale integration and chiplet-based architectures—which we will discuss in this section. Looking much further ahead, we will examine emerging technologies that diverge from conventional architectures and adopt fundamentally different approaches for AI-specialized computing.


Some of these unconventional paradigms include neuromorphic computing, which mimics biological neural networks; quantum computing, which leverages quantum mechanical effects; and optical computing, which utilizes photons instead of electrons. Beyond novel computing substrates, new device technologies are enabling additional gains through better memory and interconnects.


Examples include memristors for in-memory computing and nanophotonics for integrated photonic communication. Together, these technologies offer the potential for orders of magnitude improvements in speed, efficiency, and scalability compared to current AI hardware. We will examine these in this section.


10.8.1 Integration Methods


Integration methods refer to the approaches used to combine and interconnect an AI chip or system’s various computational and memory components. By closely linking the key processing elements, integration aims to maximize performance, power efficiency, and density.


In the past, AI computing was primarily performed on CPUs and GPUs built using conventional integration methods. These discrete components were manufactured separately and connected together on a board. However, this loose integration creates bottlenecks, such as data transfer overheads.


As AI workloads have grown, there is increasing demand for tighter integration between computing, memory, and communication elements. Some key drivers of integration include:

  • Minimizing data movement: Tight integration reduces latency and power for moving data between components. This improves efficiency.
  • Customization: Tailoring all system components to AI workloads allows optimizations throughout the hardware stack.
  • Parallelism: Integrating many processing elements enables massively parallel computation.
  • Density: Tighter integration allows more transistors and memory to be packed into a given area.
  • Cost: Economies of scale from large integrated systems can reduce costs.

In response, new manufacturing techniques like wafer-scale fabrication and advanced packaging now allow much higher levels of integration. The goal is to create unified, specialized AI compute complexes tailored for deep learning and other AI algorithms. Tighter integration is key to delivering the performance and efficiency needed for the next generation of AI.
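To see why minimizing data movement matters so much, consider a back-of-the-envelope energy comparison. The figures below are illustrative, commonly cited ~45 nm estimates from the architecture literature (Horowitz's ISSCC 2014 talk); exact values vary by process node and design:

```python
# Illustrative ~45 nm energy estimates (Horowitz, ISSCC 2014); real values
# vary by process and design, but the ratios are what matter here.
E_MAC_PJ = 3.1        # 32-bit integer multiply, on-chip
E_SRAM_PJ = 5.0       # 32-bit read from a small on-chip SRAM
E_DRAM_PJ = 640.0     # 32-bit read from off-chip DRAM

def energy_ratio(e_mem_pj, e_compute_pj=E_MAC_PJ):
    """How many multiplies one memory access 'costs' in energy."""
    return e_mem_pj / e_compute_pj

# Fetching an operand from off-chip DRAM costs roughly 200x the multiply
# that consumes it, which is why tightly integrating memory and compute
# pays off for AI workloads.
print(f"DRAM fetch ~= {energy_ratio(E_DRAM_PJ):.0f}x a MAC")
print(f"on-chip SRAM fetch ~= {energy_ratio(E_SRAM_PJ):.1f}x a MAC")
```

The two-orders-of-magnitude gap between off-chip access and on-chip compute is the quantitative driver behind the integration methods discussed next.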


Wafer-scale AI


Wafer-scale AI takes an extremely integrated approach, manufacturing an entire silicon wafer as one gigantic chip. This differs drastically from conventional CPUs and GPUs, which cut each wafer into many smaller individual chips. Figure 10.4 shows a comparison between the Cerebras Wafer Scale Engine 2, the largest chip ever built, and the largest GPU. While some GPUs contain billions of transistors, they still pale in comparison to a wafer-size chip with over a trillion transistors.


The wafer-scale approach also diverges from more modular system-on-chip designs that still have discrete components communicating by bus. Instead, wafer-scale AI enables full customization and tight integration of computation, memory, and interconnects across the entire die.

Figure 10.4: Wafer-scale vs. GPU. Credit: Cerebras.

By designing the wafer as one integrated logic unit, data transfer between elements is minimized. This provides lower latency and power consumption than discrete system-on-chip or chiplet designs. While chiplets can offer flexibility by mixing and matching components, communication between chiplets is challenging. The monolithic nature of wafer-scale integration eliminates these inter-chip communication bottlenecks.


However, the ultra-large scale also poses difficulties for manufacturability and yield. Defects in any region of the wafer can make (certain parts of) the chip unusable, and specialized lithography techniques are required to produce such large dies. Wafer-scale integration thus pursues the maximum performance gains from integration but must overcome substantial fabrication challenges.


The following video will provide additional context.


Chiplets for AI


Chiplet design refers to a semiconductor architecture in which a single integrated circuit (IC) is constructed from multiple smaller, individual components known as chiplets. Each chiplet is a self-contained functional block, typically specialized for a specific task or functionality. These chiplets are then interconnected on a larger substrate or package to create a cohesive system. Figure 10.5 illustrates this concept. For AI hardware, chiplets enable the mixing of different types of chips optimized for tasks like matrix multiplication, data movement, analog I/O, and specialized memories. This heterogeneous integration differs greatly from wafer-scale integration, where all logic is manufactured as one monolithic chip. Companies like Intel and AMD have adopted chiplet designs for their CPUs.


Chiplets are interconnected using advanced packaging techniques like high-density substrate interposers, 2.5D/3D stacking, and wafer-level packaging. This allows combining chiplets fabricated with different process nodes, specialized memories, and various optimized AI engines.

Figure 10.5: Chiplet partitioning. Credit: Vivet et al. (2021).

Vivet, Pascal, Eric Guthmuller, Yvain Thonnart, Gael Pillonnet, Cesar Fuguet, Ivan Miro-Panades, Guillaume Moritz, et al. 2021. “IntAct: A 96-Core Processor with Six Chiplets 3D-Stacked on an Active Interposer with Distributed Interconnects and Integrated Power Management.” IEEE J. Solid-State Circuits 56 (1): 79–97. https://doi.org/10.1109/jssc.2020.3036341.

Some key advantages of using chiplets for AI include:

  • Flexibility: Chiplets allow for the combination of different chip types, process nodes, and memories tailored to each function. This is more modular than a fixed wafer-scale design.
  • Yield: Smaller chiplets have a higher yield than a gigantic wafer-scale chip. Defects are contained in individual chiplets.
  • Cost: Leverages existing manufacturing capabilities versus requiring specialized new processes. Reduces costs by reusing mature fabrication.
  • Compatibility: Can integrate with more conventional system architectures like PCIe and standard DDR memory interfaces.

However, chiplets also face integration and performance challenges:

  • Lower density compared to wafer-scale, as chiplets are limited in size.
  • Added latency when communicating between chiplets versus monolithic integration. Requires optimization for low-latency interconnect.
  • Advanced packaging adds complexity, though whether it exceeds that of wafer-scale integration is debatable.

The key objective of chiplets is finding the right balance between modular flexibility and integration density for optimal AI performance. Chiplets aim for efficient AI acceleration while working within the constraints of conventional manufacturing techniques. Chiplets take a middle path between the extremes of wafer-scale integration and fully discrete components. This provides practical benefits but may sacrifice some computational density and efficiency versus a theoretical wafer-size system.
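The yield advantage can be made concrete with the classic Poisson die-yield model, Y = exp(-A·D0), where A is die area and D0 is defect density. The defect density and die areas below are hypothetical, chosen only to illustrate the tradeoff:

```python
import math

def poisson_yield(area_cm2, defect_density_per_cm2):
    """Classic Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * defect_density_per_cm2)

D0 = 0.1  # defects per cm^2 (hypothetical)

# One 8 cm^2 monolithic die vs. the same logic split into eight 1 cm^2 chiplets.
monolithic = poisson_yield(8.0, D0)
chiplet = poisson_yield(1.0, D0)

print(f"monolithic yield: {monolithic:.1%}")   # ~44.9%
print(f"per-chiplet yield: {chiplet:.1%}")     # ~90.5%
# With known-good-die testing, a defect scraps only one small chiplet
# rather than the whole assembly, so usable silicon per wafer goes up.
```

Wafer-scale designs attack the same problem differently, building in redundant cores and routing around defective regions instead of discarding the die.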


10.8.2 Neuromorphic Computing


Neuromorphic computing is an emerging field aiming to emulate the efficiency and robustness of biological neural systems for machine learning applications. A key difference from classical von Neumann architectures is the merging of memory and processing in the same circuit (Schuman et al. 2022; Marković et al. 2020; Furber 2016), as illustrated in Figure 10.6. The structure of the brain inspires this integrated approach. A key advantage is the potential for orders-of-magnitude improvement in energy-efficient computation compared to conventional AI hardware. For example, estimates project 100x-1000x gains in energy efficiency versus current GPU-based systems for equivalent workloads.

Marković, Danijela, Alice Mizrahi, Damien Querlioz, and Julie Grollier. 2020. “Physics for Neuromorphic Computing.” Nature Reviews Physics 2 (9): 499–510. https://doi.org/10.1038/s42254-020-0208-2.

Furber, Steve. 2016. “Large-Scale Neuromorphic Computing Systems.” J. Neural Eng. 13 (5): 051001. https://doi.org/10.1088/1741-2560/13/5/051001.

Figure 10.6: Comparison of the von Neumann architecture with the neuromorphic architecture. Credit: Schuman et al. (2022).

Schuman, Catherine D., Shruti R. Kulkarni, Maryam Parsa, J. Parker Mitchell, Prasanna Date, and Bill Kay. 2022. “Opportunities for Neuromorphic Computing Algorithms and Applications.” Nature Computational Science 2 (1): 10–19. https://doi.org/10.1038/s43588-021-00184-y.

Intel and IBM are leading commercial efforts in neuromorphic hardware. Intel’s Loihi and Loihi 2 chips (Davies et al. 2018, 2021) offer programmable neuromorphic cores with on-chip learning. IBM’s NorthPole (Modha et al. 2023) device comprises over 100 million magnetic tunnel junction synapses and 68 billion transistors. These specialized chips deliver benefits like low power consumption for edge inference.

Davies, Mike, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, et al. 2018. “Loihi: A Neuromorphic Manycore Processor with on-Chip Learning.” IEEE Micro 38 (1): 82–99. https://doi.org/10.1109/mm.2018.112130359.

Davies, Mike, Andreas Wild, Garrick Orchard, Yulia Sandamirskaya, Gabriel A. Fonseca Guerra, Prasad Joshi, Philipp Plank, and Sumedh R. Risbud. 2021. “Advancing Neuromorphic Computing with Loihi: A Survey of Results and Outlook.” Proc. IEEE 109 (5): 911–34. https://doi.org/10.1109/jproc.2021.3067593.

Modha, Dharmendra S., Filipp Akopyan, Alexander Andreopoulos, Rathinakumar Appuswamy, John V. Arthur, Andrew S. Cassidy, Pallab Datta, et al. 2023. “Neural Inference at the Frontier of Energy, Space, and Time.” Science 382 (6668): 329–35. https://doi.org/10.1126/science.adh1174.

Maass, Wolfgang. 1997. “Networks of Spiking Neurons: The Third Generation of Neural Network Models.” Neural Networks 10 (9): 1659–71. https://doi.org/10.1016/s0893-6080(97)00011-7.

Spiking neural networks (SNNs) (Maass 1997) are computational models for neuromorphic hardware. Unlike deep neural networks communicating via continuous values, SNNs use discrete spikes that are more akin to biological neurons. This allows efficient event-based computation rather than constant processing. Additionally, SNNs consider the temporal and spatial characteristics of input data. This better mimics biological neural networks, where the timing of neuronal spikes plays an important role. However, training SNNs remains challenging due to the added temporal complexity. Figure 10.7 provides an overview of the spiking methodology: (a) Diagram of a neuron; (b) Measuring an action potential propagated along the axon of a neuron. Only the action potential is detectable along the axon; (c) The neuron’s spike is approximated with a binary representation; (d) Event-Driven Processing; (e) Active Pixel Sensor and Dynamic Vision Sensor.
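The event-based flavor of SNN computation can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the simplest common spiking neuron model. The threshold, leak factor, and input values below are arbitrary choices for illustration:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron.

    Each time step, the membrane potential decays by the leak factor,
    integrates the input, and emits a discrete spike (1) when it crosses
    the threshold, after which it resets.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)      # discrete event, not a continuous value
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input produces only occasional spikes; computation
# (and energy use) concentrates at those spike events.
print(lif_neuron([0.3] * 10))     # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Timing carries information here: a stronger input makes the neuron spike sooner and more often, which is the temporal coding property the text describes.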


You can also watch the video linked below for a more detailed explanation.

Figure 10.7: Neuromorphic spiking. Credit: Eshraghian et al. (2023).

Eshraghian, Jason K., Max Ward, Emre O. Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, and Wei D. Lu. 2023. “Training Spiking Neural Networks Using Lessons from Deep Learning.” Proc. IEEE 111 (9): 1016–54. https://doi.org/10.1109/jproc.2023.3308088.

Specialized nanoelectronic devices called memristors (Chua 1971) are synaptic components in neuromorphic systems. Memristors act as nonvolatile memory with adjustable conductance, emulating the plasticity of real synapses. Memristors enable in-situ learning without separate data transfers by combining memory and processing functions. However, memristor technology has yet to reach maturity and scalability for commercial hardware.

Chua, L. 1971. “Memristor-the Missing Circuit Element.” IEEE Transactions on Circuit Theory 18 (5): 507–19. https://doi.org/10.1109/tct.1971.1083337.

The integration of photonics with neuromorphic computing (Shastri et al. 2021) has recently emerged as an active research area. Using light for computation and communication allows high speeds and reduced energy consumption. However, fully realizing photonic neuromorphic systems requires overcoming design and integration challenges.


Neuromorphic computing offers promising capabilities for efficient edge inference but faces obstacles around training algorithms, nanodevice integration, and system design. Ongoing multidisciplinary research across computer science, engineering, materials science, and physics will be key to unlocking this technology’s full potential for AI use cases.


10.8.3 Analog Computing


Analog computing is an emerging approach that uses analog signals and components like capacitors, inductors, and amplifiers rather than digital logic for computing. It represents information as continuous electrical signals instead of discrete 0s and 1s. This allows the computation to directly reflect the analog nature of real-world data, avoiding digitization errors and overhead.


Analog computing has generated renewed interest in efficient AI hardware, particularly for inference directly on low-power edge devices. Analog circuits can perform the multiplication and summation operations at the core of neural networks with very low energy consumption. This makes analog well-suited for deploying ML models on energy-constrained end nodes. Startups like Mythic are developing analog AI accelerators.


While analog computing was popular in early computers, the boom of digital logic led to its decline. However, analog is compelling for niche applications requiring extreme efficiency (Haensch, Gokmen, and Puri 2019). It contrasts with digital neuromorphic approaches that still use digital spikes for computation. Analog may allow lower precision computation but requires expertise in analog circuit design. Tradeoffs around precision, programming complexity, and fabrication costs remain active research areas.

Haensch, Wilfried, Tayfun Gokmen, and Ruchir Puri. 2019. “The Next Generation of Deep Learning Hardware: Analog Computing.” Proc. IEEE 107 (1): 108–22. https://doi.org/10.1109/jproc.2018.2871057.

Hazan, Avi, and Elishai Ezra Tsur. 2021. “Neuromorphic Analog Implementation of Neural Engineering Framework-Inspired Spiking Neuron for High-Dimensional Representation.” Front. Neurosci. 15 (February): 627221. https://doi.org/10.3389/fnins.2021.627221.

Neuromorphic computing, which aims to emulate biological neural systems for efficient ML inference, can use analog circuits to implement the key components and behaviors of brains. For example, researchers have designed analog circuits to model neurons and synapses using capacitors, transistors, and operational amplifiers (Hazan and Ezra Tsur 2021). The capacitors can exhibit the spiking dynamics of biological neurons, while the amplifiers and transistors provide a weighted summation of inputs to mimic dendrites. Variable resistor technologies like memristors can realize analog synapses with spike-timing-dependent plasticity, which can strengthen or weaken connections based on spiking activity.


Startups like SynSense have developed analog neuromorphic chips containing these biomimetic components (Bains 2020). This analog approach results in low power consumption and high scalability for edge devices versus complex digital SNN implementations.

Bains, Sunny. 2020. “The Business of Building Brains.” Nature Electronics 3 (7): 348–51. https://doi.org/10.1038/s41928-020-0449-1.

However, training analog SNNs on chips remains an open challenge. Overall, analog realization is a promising technique for delivering the efficiency, scalability, and biological plausibility envisioned with neuromorphic computing. The physics of analog components combined with neural architecture design could improve inference efficiency over conventional digital neural networks.


10.8.4 Flexible Electronics


While much of the new hardware technology in the ML workspace has been focused on optimizing and making systems more efficient, there’s a parallel trajectory aiming to adapt hardware for specific applications (Gates 2009; Musk et al. 2019; Tang et al. 2023; Tang, He, and Liu 2022; Kwon and Dong 2022). One such avenue is the development of flexible electronics for AI use cases.

Gates, Byron D. 2009. “Flexible Electronics.” Science 323 (5921): 1566–67. https://doi.org/10.1126/science.1171230.

Tang, Xin, Hao Shen, Siyuan Zhao, Na Li, and Jia Liu. 2023. “Flexible Brain-Computer Interfaces.” Nature Electronics 6 (2): 109–18. https://doi.org/10.1038/s41928-022-00913-9.

Tang, Xin, Yichun He, and Jia Liu. 2022. “Soft Bioelectronics for Cardiac Interfaces.” Biophysics Reviews 3 (1). https://doi.org/10.1063/5.0069516.

Flexible electronics refer to electronic circuits and devices fabricated on flexible plastic or polymer substrates rather than rigid silicon. Unlike conventional rigid boards and chips, this allows the electronics to bend, twist, and conform to irregular shapes. Figure 10.8 shows an example of a flexible device prototype that wirelessly measures body temperature, which can be seamlessly integrated into clothing or skin patches. The flexibility and bendability of emerging electronic materials allow them to be integrated into thin, lightweight form factors that are well-suited for embedded AI and TinyML applications.


Flexible AI hardware can conform to curvy surfaces and operate efficiently with microwatt power budgets. Flexibility also enables rollable or foldable form factors that minimize device footprint and weight, ideal for small, portable smart devices and wearables incorporating TinyML. Another key advantage of flexible electronics over conventional technologies is lower manufacturing costs and simpler fabrication processes, which could democratize access to these technologies. While silicon mask sets and fabrication typically cost millions of dollars, flexible hardware can cost only tens of cents to manufacture (Huang et al. 2011; Biggs et al. 2021). The potential to fabricate flexible electronics directly onto plastic films using high-throughput printing and coating processes can reduce costs and improve manufacturability at scale versus rigid AI chips (Musk et al. 2019).

Huang, Tsung-Ching, Kenjiro Fukuda, Chun-Ming Lo, Yung-Hui Yeh, Tsuyoshi Sekitani, Takao Someya, and Kwang-Ting Cheng. 2011. “Pseudo-CMOS: A Design Style for Low-Cost and Robust Flexible Electronics.” IEEE Trans. Electron Devices 58 (1): 141–50. https://doi.org/10.1109/ted.2010.2088127.

Biggs, John, James Myers, Jedrzej Kufel, Emre Ozer, Simon Craske, Antony Sou, Catherine Ramsdale, Ken Williamson, Richard Price, and Scott White. 2021. “A Natively Flexible 32-Bit Arm Microprocessor.” Nature 595 (7868): 532–36. https://doi.org/10.1038/s41586-021-03625-w.

Figure 10.8: Flexible device prototype. Credit: Jabil Circuit.

The field is enabled by advances in organic semiconductors and nanomaterials that can be deposited on thin, flexible films. However, fabrication remains challenging compared to mature silicon processes, and flexible circuits currently exhibit lower performance than rigid equivalents. Still, they promise to transform electronics into lightweight, bendable materials.


Flexible electronics use cases are well-suited for intimate integration with the human body. Potential medical AI applications include bio-integrated sensors, soft assistive robots, and implants that monitor or stimulate the nervous system intelligently. Specifically, flexible electrode arrays could enable higher-density, less-invasive neural interfaces compared to rigid equivalents.


Therefore, flexible electronics are ushering in a new era of wearables and body sensors, largely due to innovations in organic transistors. These components allow for more lightweight and bendable electronics, ideal for wearables, electronic skin, and body-conforming medical devices.


They are well-suited for bioelectronic devices in terms of biocompatibility, opening avenues for applications in brain and cardiac interfaces. For example, research in flexible brain-computer interfaces and soft bioelectronics for cardiac applications demonstrates the potential for wide-ranging medical applications.


Companies and research institutions are not only investing substantial resources in developing flexible electrodes, as showcased by Neuralink’s work (Musk et al. 2019), but are also pushing the boundaries to integrate machine learning models within these systems (Kwon and Dong 2022). These smart sensors aim for a seamless, long-lasting symbiosis with the human body.

Musk, Elon et al. 2019. “An Integrated Brain-Machine Interface Platform with Thousands of Channels.” J. Med. Internet Res. 21 (10): e16194. https://doi.org/10.2196/16194.

Kwon, Sun Hwa, and Lin Dong. 2022. “Flexible Sensors and Machine Learning for Heart Monitoring.” Nano Energy 102 (November): 107632. https://doi.org/10.1016/j.nanoen.2022.107632.

Segura Anaya, L. H., Abeer Alsadoon, N. Costadopoulos, and P. W. C. Prasad. 2017. “Ethical Implications of User Perceptions of Wearable Devices.” Sci. Eng. Ethics 24 (1): 1–28. https://doi.org/10.1007/s11948-017-9872-8.

Goodyear, Victoria A. 2017. “Social Media, Apps and Wearable Technologies: Navigating Ethical Dilemmas and Procedures.” Qualitative Research in Sport, Exercise and Health 9 (3): 285–302. https://doi.org/10.1080/2159676x.2017.1303790.

Farah, Martha J. 2005. “Neuroethics: The Practical and the Philosophical.” Trends Cogn. Sci. 9 (1): 34–40. https://doi.org/10.1016/j.tics.2004.12.001.

Roskies, Adina. 2002. “Neuroethics for the New Millenium.” Neuron 35 (1): 21–23. https://doi.org/10.1016/s0896-6273(02)00763-8.

Ethically, incorporating smart, machine-learning-driven sensors within the body raises important questions. Issues surrounding data privacy, informed consent, and the long-term societal implications of such technologies are the focus of ongoing work in neuroethics and bioethics (Segura Anaya et al. 2017; Goodyear 2017; Farah 2005; Roskies 2002). The field is progressing at a pace that necessitates parallel advancements in ethical frameworks to guide the responsible development and deployment of these technologies. While there are limitations and ethical hurdles to overcome, the prospects for flexible electronics are expansive and hold immense promise for future research and applications.


10.8.5 Memory Technologies


Memory technologies are critical to AI hardware, but conventional DDR DRAM and SRAM create bottlenecks. AI workloads require high bandwidth (>1 TB/s). Extreme scientific applications of AI require extremely low latency (<50 ns) to feed data to compute units (Duarte et al. 2022), high density (>128Gb) to store large model parameters and data sets, and excellent energy efficiency (<100 fJ/b) for embedded use (Verma et al. 2019). New memories are needed to meet these demands. Emerging options include several new technologies:

Duarte, Javier, Nhan Tran, Ben Hawks, Christian Herwig, Jules Muhizi, Shvetank Prakash, and Vijay Janapa Reddi. 2022. “FastML Science Benchmarks: Accelerating Real-Time Scientific Edge Machine Learning.” ArXiv Preprint abs/2207.07958. https://arxiv.org/abs/2207.07958.

Verma, Naveen, Hongyang Jia, Hossein Valavi, Yinqi Tang, Murat Ozatay, Lung-Yen Chen, Bonan Zhang, and Peter Deaville. 2019. “In-Memory Computing: Advances and Prospects.” IEEE Solid-State Circuits Mag. 11 (3): 43–55. https://doi.org/10.1109/mssc.2019.2922889.
  • Resistive RAM (ReRAM) can improve density with simple, passive arrays. However, challenges around variability remain (Chi et al. 2016).
  • Phase change memory (PCM) exploits the unique properties of chalcogenide glass. Crystalline and amorphous phases have different resistances. Intel’s Optane DCPMM provides fast (100 ns), high-endurance PCM. However, challenges include limited write cycles and high reset current (Burr et al. 2016).
  • 3D stacking can also boost memory density and bandwidth by vertically integrating memory layers with TSV interconnects (Loh 2008). For example, HBM provides 1024-bit wide interfaces.
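The bandwidth math behind the HBM example is simple: peak bandwidth is interface width times per-pin data rate. The sketch below assumes HBM2-class numbers (1024-bit bus, 2 Gb/s per pin); newer generations run faster per pin:

```python
def hbm_bandwidth_gbs(bus_bits=1024, data_rate_gbps_per_pin=2.0):
    """Peak bandwidth of one HBM stack: bus width x per-pin data rate.

    Returns GB/s (divide total Gb/s by 8 to convert bits to bytes).
    """
    return bus_bits * data_rate_gbps_per_pin / 8

# A single 1024-bit HBM2-class stack at 2 Gb/s per pin:
print(hbm_bandwidth_gbs())        # → 256.0 GB/s
# Four such stacks together reach the ~1 TB/s the text cites for AI workloads:
print(4 * hbm_bandwidth_gbs())    # → 1024.0 GB/s
```

Contrast this with a conventional 64-bit DDR channel, whose narrow interface is exactly the bottleneck these wide, stacked memories are designed to remove.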
Burr, Geoffrey W., Matthew J. BrightSky, Abu Sebastian, Huai-Yu Cheng, Jau-Yi Wu, Sangbum Kim, Norma E. Sosa, et al. 2016. “Recent Progress in Phase-Change Memory Technology.” IEEE Journal on Emerging and Selected Topics in Circuits and Systems 6 (2): 146–62. https://doi.org/10.1109/jetcas.2016.2547718.

Loh, Gabriel H. 2008. “3D-Stacked Memory Architectures for Multi-Core Processors.” ACM SIGARCH Computer Architecture News 36 (3): 453–64. https://doi.org/10.1145/1394608.1382159.

New memory technologies, with their innovative cell architectures and materials, are critical to unlocking the next level of AI hardware performance and efficiency. Realizing their benefits in commercial systems remains an ongoing challenge.


In-memory computing is gaining traction as a promising avenue for optimizing machine learning and high-performance computing workloads. At its core, the technology co-locates data storage and computation to improve energy efficiency and reduce latency (Wong et al. 2012). Two key technologies under this umbrella are Resistive RAM (ReRAM) and Processing-In-Memory (PIM).

Wong, H.-S. Philip, Heng-Yuan Lee, Shimeng Yu, Yu-Sheng Chen, Yi Wu, Pang-Shiu Chen, Byoungil Lee, Frederick T. Chen, and Ming-Jinn Tsai. 2012. “Metal-Oxide RRAM.” Proc. IEEE 100 (6): 1951–70. https://doi.org/10.1109/jproc.2012.2190369.

Chi, Ping, Shuangchen Li, Cong Xu, Tao Zhang, Jishen Zhao, Yongpan Liu, Yu Wang, and Yuan Xie. 2016. “Prime: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory.” ACM SIGARCH Computer Architecture News 44 (3): 27–39. https://doi.org/10.1145/3007787.3001140.

ReRAM (Wong et al. 2012) and PIM (Chi et al. 2016) are the backbones of in-memory computing, storing and computing data in the same location. ReRAM research focuses on issues of uniformity, endurance, retention, multi-bit operation, and scalability. PIM, on the other hand, integrates processing units directly into memory arrays, specialized for tasks like matrix multiplication, which is central to AI computations.
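The matrix-vector multiply performed inside a ReRAM array falls out of basic circuit laws: weights are programmed as cell conductances, inputs are applied as row voltages, and each column's summed current is the output. The sketch below is an idealized model that ignores the device non-idealities (variability, retention) discussed above:

```python
def crossbar_mvm(conductances, voltages):
    """Idealized ReRAM crossbar matrix-vector multiply.

    conductances[i][j]: programmed conductance of the cell at row i, column j
    voltages[i]: input voltage driven onto row i
    Each cell contributes current I = G * V (Ohm's law), and currents on a
    shared column wire sum automatically (Kirchhoff's current law).
    """
    n_rows, n_cols = len(conductances), len(conductances[0])
    return [sum(conductances[i][j] * voltages[i] for i in range(n_rows))
            for j in range(n_cols)]

# Weights live where the multiply happens: no data movement between a
# separate memory and ALU, which is the whole point of in-memory computing.
G = [[0.5, 1.0],
     [2.0, 0.25]]
print(crossbar_mvm(G, [1.0, 2.0]))   # → [4.5, 1.5]
```

In a physical array, all columns compute in parallel in one analog step, so the entire matrix-vector product costs roughly one read operation.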


These technologies find applications in AI workloads and high-performance computing, where the synergy of storage and computation can lead to significant performance gains. The architecture is particularly useful for compute-intensive tasks common in machine learning models.


While in-memory computing technologies like ReRAM and PIM offer exciting prospects for efficiency and performance, they come with their own challenges, such as data uniformity and scalability issues in ReRAM (Imani, Rahimi, and S. Rosing 2016). Nonetheless, the field is ripe for innovation, and addressing these limitations can open new frontiers in AI and high-performance computing.

Imani, Mohsen, Abbas Rahimi, and Tajana S. Rosing. 2016. “Resistive Configurable Associative Memory for Approximate Computing.” In Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1327–32. IEEE; Research Publishing Services. https://doi.org/10.3850/9783981537079_0454.

10.8.6 Optical Computing


In AI acceleration, a burgeoning area of interest lies in novel technologies that deviate from traditional paradigms. Some emerging technologies mentioned above, such as flexible electronics, in-memory computing, and even neuromorphic computing, are close to becoming a reality, given their groundbreaking innovations and applications. One of the most promising next-generation frontiers is optical computing (H. Zhou et al. 2022). Companies like Lightmatter are pioneering the use of photonics for calculation, utilizing photons instead of electrons for data transmission and computation.

Zhou, Hailong, Jianji Dong, Junwei Cheng, Wenchan Dong, Chaoran Huang, Yichen Shen, Qiming Zhang, et al. 2022. “Photonic Matrix Multiplication Lights up Photonic Accelerator and Beyond.” Light: Science & Applications 11 (1): 30. https://doi.org/10.1038/s41377-022-00717-8.

Shastri, Bhavin J., Alexander N. Tait, T. Ferreira de Lima, Wolfram H. P. Pernice, Harish Bhaskaran, C. D. Wright, and Paul R. Prucnal. 2021. “Photonics for Artificial Intelligence and Neuromorphic Computing.” Nat. Photonics 15 (2): 102–14. https://doi.org/10.1038/s41566-020-00754-y.

Optical computing utilizes photons and photonic devices rather than traditional electronic circuits for computing and data processing. It takes inspiration from fiber optic communication links that rely on light for fast, efficient data transfer (Shastri et al. 2021). Light can propagate with much less loss than semiconductors’ electrons, enabling inherent speed and efficiency benefits.


Some specific advantages of optical computing include:

  • High throughput: Photons can transmit with bandwidths >100 Tb/s using wavelength division multiplexing.
  • Low latency: Photons interact on femtosecond timescales, millions of times faster than silicon transistors.
  • Parallelism: Multiple data signals can propagate simultaneously through the same optical medium.
  • Low power: Photonic circuits utilizing waveguides and resonators can achieve complex logic and memory with only microwatts of power.
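The throughput claim comes from simple multiplication: with wavelength division multiplexing (WDM), many independent data streams share one waveguide, each on its own wavelength. The channel counts and per-channel rates below are illustrative assumptions, not figures for any specific product:

```python
def wdm_throughput_tbps(n_wavelengths=80, gbps_per_channel=100):
    """Aggregate link throughput under wavelength-division multiplexing.

    Independent streams ride on different wavelengths in the same optical
    medium, so total throughput is channels x per-channel rate (Gb/s -> Tb/s).
    """
    return n_wavelengths * gbps_per_channel / 1000

# 80 wavelength channels at 100 Gb/s each, sharing a single waveguide:
print(wdm_throughput_tbps())             # → 8.0 Tb/s
# Denser channel grids push toward the >100 Tb/s figure cited above:
print(wdm_throughput_tbps(1000, 100))    # → 100.0 Tb/s
```

An electrical bus would need a separate physical wire per stream; the optical medium carries them all at once, which is the parallelism advantage listed above.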

However, optical computing currently faces significant challenges:

  • Lack of an optical memory equivalent to electronic RAM.
  • Requires conversion between optical and electrical domains.
  • Limited set of available optical components compared to the rich electronics ecosystem.
  • Immature integration methods for combining photonics with traditional CMOS chips.
  • Complex programming models required to handle parallelism.

As a result, optical computing is still in the very early research stage despite its promising potential. However, technical breakthroughs could enable it to complement electronics and unlock performance gains for AI workloads. Companies like Lightmatter are pioneering early optical AI accelerators. In the long term, if key challenges are overcome, it could represent a revolutionary computing substrate.


10.8.7 Quantum Computing


Quantum computers leverage unique phenomena of quantum physics, like superposition and entanglement, to represent and process information in ways not possible classically. Instead of binary bits, the fundamental unit is the quantum bit or qubit. Unlike classical bits, which are limited to 0 or 1, qubits can exist simultaneously in a superposition of both states due to quantum effects.


Multiple qubits can also be entangled, leading to exponential information density but introducing probabilistic results. Superposition enables parallel computation on all possible states, while entanglement allows nonlocal correlations between qubits.
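The exponential information density is concrete: an n-qubit state is described by 2^n complex amplitudes. The short sketch below simulates this classically, preparing the standard two-qubit Bell state with a Hadamard gate followed by a CNOT, using amplitude ordering |00>, |01>, |10>, |11>:

```python
import math

# An n-qubit state vector holds 2**n complex amplitudes -- the source of
# the exponential information density (and of the classical simulation cost).
def n_amplitudes(n_qubits):
    return 2 ** n_qubits

s = [1.0, 0.0, 0.0, 0.0]                      # start in |00>
h = 1 / math.sqrt(2)
# Hadamard on the first qubit mixes amplitude pairs that differ in that qubit.
s = [h * (s[0] + s[2]), h * (s[1] + s[3]),
     h * (s[0] - s[2]), h * (s[1] - s[3])]
# CNOT: flip the second qubit where the first qubit is 1 (swap |10> and |11>).
s = [s[0], s[1], s[3], s[2]]

# Bell state (|00> + |11>)/sqrt(2): measurements of the two qubits are
# perfectly correlated, yet neither qubit alone has a definite value.
print([round(a, 3) for a in s])               # → [0.707, 0.0, 0.0, 0.707]
print(n_amplitudes(50))                       # already ~10^15 amplitudes
```

The last line hints at why classical simulation of quantum systems breaks down quickly, and why genuinely quantum hardware is attractive despite its fragility.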


Quantum algorithms carefully manipulate these inherently quantum mechanical effects to solve problems like optimization or search more efficiently than their classical counterparts, at least in theory. Potential applications to machine learning include:

  • Faster training of deep neural networks by exploiting quantum parallelism for linear algebra operations.
  • Efficient quantum ML algorithms that make use of the unique capabilities of qubits.
  • Quantum neural networks with inherent quantum effects baked into the model architecture.
  • Quantum optimizers leveraging quantum annealing or adiabatic algorithms for combinatorial optimization problems.

However, quantum states are fragile and prone to errors, requiring error-correcting protocols. The non-intuitive nature of quantum programming also introduces challenges not present in classical computing. Current practical limitations include:

  • Noisy, fragile qubits that are difficult to scale up; the largest quantum computers today have fewer than 100 qubits.
  • A restricted set of available quantum gates and circuits relative to classical programming.
  • A lack of datasets and benchmarks to evaluate quantum ML in practical domains.

While meaningful quantum advantage for ML remains far off, active research at companies like D-Wave, Rigetti, and IonQ is advancing quantum computer engineering and quantum algorithms. Major technology companies like Google, IBM, and Microsoft are also actively exploring quantum computing. Google has announced a 72-qubit quantum processor called Bristlecone, and Microsoft has an active research program in topological quantum computing and collaborates with the quantum startup IonQ.

-

Quantum techniques may first make inroads into optimization before more generalized ML adoption. Realizing quantum ML’s full potential awaits major milestones in quantum hardware development and ecosystem maturity.


10.10 Conclusion

-

Specialized hardware acceleration has become indispensable for enabling performant and efficient artificial intelligence applications as models and datasets explode in complexity. This chapter examined the limitations of general-purpose processors like CPUs for AI workloads. Their lack of parallelism and computational throughput cannot train or run state-of-the-art deep neural networks quickly. These motivations have driven innovations in customized accelerators.

-

We surveyed GPUs, TPUs, FPGAs, and ASICs specifically designed for the math-intensive operations inherent to neural networks. By covering this spectrum of options, we aimed to provide a framework for reasoning through accelerator selection based on constraints around flexibility, performance, power, cost, and other factors.

-

We also explored the role of software in actively enabling and optimizing AI acceleration. This spans programming abstractions, frameworks, compilers, and simulators. We discussed hardware-software co-design as a proactive methodology for building more holistic AI systems by closely integrating algorithm innovation and hardware advances.

-

But there is so much more to come! Exciting frontiers like analog computing, optical neural networks, and quantum machine learning represent active research directions that could unlock orders of magnitude improvements in efficiency, speed, and scale compared to present paradigms.

-

Ultimately, specialized hardware acceleration remains indispensable for unlocking the performance and efficiency necessary to fulfill the promise of artificial intelligence from cloud to edge. We hope this chapter provides useful background and insights into the rapid innovation occurring in this domain.

-
-
-

Resources

-

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

-
Slides

Coming soon.

Exercises

Labs

Coming soon.

diff --git a/contents/image_classification/image_classification.html b/contents/image_classification/image_classification.html
deleted file mode 100644
index e3cd7502..00000000
--- a/contents/image_classification/image_classification.html
+++ /dev/null
@@ -1,1689 +0,0 @@

CV on Nicla Vision


-
DALL·E 3 Prompt: Cartoon in a 1950s style featuring a compact electronic device with a camera module placed on a wooden table. The screen displays blue robots on one side and green periquitos on the other. LED lights on the device indicate classifications, while characters in retro clothing observe with interest.

Introduction

-

As we initiate our studies into embedded machine learning or TinyML, it’s impossible to overlook the transformative impact of Computer Vision (CV) and Artificial Intelligence (AI) in our lives. These two intertwined disciplines redefine what machines can perceive and accomplish, from autonomous vehicles and robotics to healthcare and surveillance.

-

More and more, we are facing an artificial intelligence (AI) revolution in which, as Gartner states, Edge AI has very high impact potential, and the time is now!


In the “bullseye” of the Radar is the Edge Computer Vision, and when we talk about Machine Learning (ML) applied to vision, the first thing that comes to mind is Image Classification, a kind of ML “Hello World”!

-

This exercise will explore a computer vision project utilizing Convolutional Neural Networks (CNNs) for real-time image classification. Leveraging TensorFlow’s robust ecosystem, we’ll implement a pre-trained MobileNet model and adapt it for edge deployment. The focus will be on optimizing the model to run efficiently on resource-constrained hardware without sacrificing accuracy.

-

We’ll employ techniques like quantization and pruning to reduce the computational load. By the end of this tutorial, you’ll have a working prototype capable of classifying images in real-time, all running on a low-power embedded system based on the Arduino Nicla Vision board.

-
-
-

Computer Vision

-

At its core, computer vision aims to enable machines to interpret and make decisions based on visual data from the world, essentially mimicking the capability of the human optical system. Conversely, AI is a broader field encompassing machine learning, natural language processing, and robotics, among other technologies. When you bring AI algorithms into computer vision projects, you supercharge the system’s ability to understand, interpret, and react to visual stimuli.

-

When discussing Computer Vision projects applied to embedded devices, the most common applications that come to mind are Image Classification and Object Detection.

-
-
-

-
-
-

Both models can be implemented on tiny devices like the Arduino Nicla Vision and used on real projects. In this chapter, we will cover Image Classification.

-
-
-

Image Classification Project Goal

-

The first step in any ML project is to define the goal. In this case, it is to detect and classify two specific objects present in one image. For this project, we will use two small toys: a robot and a small Brazilian parrot (named Periquito). Also, we will collect images of a background where those two objects are absent.

-
-
-

-
-
-
-
-

Data Collection

-

Once you have defined your Machine Learning project goal, the next and most crucial step is the dataset collection. You can use the Edge Impulse Studio, the OpenMV IDE we installed, or even your phone for the image capture. Here, we will use the OpenMV IDE for that.

-
-

Collecting Dataset with OpenMV IDE

-

First, create a folder on your computer where your data will be saved, for example, “data.” Next, on the OpenMV IDE, go to Tools > Dataset Editor and select New Dataset to start the dataset collection:

-
-
-

-
-
-

The IDE will ask you to open the file where your data will be saved and choose the “data” folder that was created. Note that new icons will appear on the Left panel.

-
-
-

-
-
-

Using the upper icon (1), enter with the first class name, for example, “periquito”:

-
-
-

-
-
-

Running the dataset_capture_script.py and clicking on the camera icon (2), will start capturing images:

-
-
-

-
-
-

Repeat the same procedure with the other classes.

-
-
-

-
-
-
-

We suggest around 60 images from each category. Try to capture different angles, backgrounds, and light conditions.

-
-

The stored images use a QVGA frame size of 320x240 and the RGB565 color pixel format.
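Since RGB565 packs each pixel into 16 bits (5 bits red, 6 bits green, 5 bits blue), the raw storage cost of one captured frame is easy to estimate:

```python
width, height = 320, 240     # QVGA frame size
bytes_per_pixel = 2          # RGB565: 16 bits = 2 bytes per pixel

frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)           # 153600 bytes, i.e., 150 KB per frame
```

This is why raw frames are captured at QVGA but downscaled (to 96x96, as we will see) before being fed to the model.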

-

After capturing your dataset, close the Dataset Editor Tool on the Tools > Dataset Editor.

-

On your computer, you will end with a dataset that contains three classes: periquito, robot, and background.

-
-
-

-
-
-

You should return to Edge Impulse Studio and upload the dataset to your project.

-
-
-
-

Training the model with Edge Impulse Studio

-

We will use the Edge Impulse Studio for training our model. Enter your account credentials and create a new project:

-
-
-

-
-
-
-

Here, you can clone a similar project: NICLA-Vision_Image_Classification.

-
-
-
-

Dataset

-

Using the EI Studio (or Studio), we will go over four main steps to have our model ready for use on the Nicla Vision board: Dataset, Impulse, Tests, and Deploy (on the Edge Device, in this case, the NiclaV).

-
-
-

-
-
-

Regarding the Dataset, it is essential to point out that our Original Dataset, captured with the OpenMV IDE, will be split into Training, Validation, and Test sets. The Test Set is separated from the beginning, and a part is reserved for use only in the Test phase after training. The Validation Set will be used during training.
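Conceptually, the split works like the following sketch (plain Python; the file names and split ratios are illustrative, and Edge Impulse Studio performs this step for you):

```python
import random

# Hypothetical list of captured image files (names are made up for the example).
files = [f"periquito.{i}.jpg" for i in range(60)]

random.seed(42)                   # reproducible shuffle for the example
random.shuffle(files)

n_test = int(0.2 * len(files))    # e.g., 20% reserved for the Test phase
test_set = files[:n_test]
trainval = files[n_test:]

n_val = int(0.2 * len(trainval))  # part of the remainder used for validation
val_set = trainval[:n_val]
train_set = trainval[n_val:]

print(len(train_set), len(val_set), len(test_set))  # 39 9 12
```

The key point is that the test files are set aside before any training happens, so they give an unbiased estimate of real-world performance.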

-
-
-

-
-
-

On Studio, go to the Data acquisition tab, and on the UPLOAD DATA section, upload the chosen categories files from your computer:

-
-
-

-
-
-

Leave the splitting of the original dataset into train and test to the Studio, and choose the label for that specific data:

-
-
-

-
-
-

Repeat the procedure for all three classes. At the end, you should see your “raw data” in the Studio:

-
-
-

-
-
-

The Studio allows you to explore your data, showing a complete view of all the data in your project. You can clear, inspect, or change labels by clicking on individual data items. In our case, a very simple project, the data seems OK.

-
-
-

-
-
-
-
-

The Impulse Design

-

In this phase, we should define how to:

-
  • Pre-process our data, which consists of resizing the individual images and determining the color depth to use (RGB or Grayscale), and

  • Specify a Model; in this case, Transfer Learning (Images) will fine-tune a pre-trained MobileNet V2 image classification model on our data. This method performs well even with relatively small image datasets (around 150 images in our case).
-
-
-

-
-
-

Transfer Learning with MobileNet offers a streamlined approach to model training, which is especially beneficial for resource-constrained environments and projects with limited labeled data. MobileNet, known for its lightweight architecture, is a pre-trained model that has already learned valuable features from a large dataset (ImageNet).

-
-
-

-
-
-

By leveraging these learned features, you can train a new model for your specific task with fewer data and computational resources and yet achieve competitive accuracy.

-
-
-

-
-
-

This approach significantly reduces training time and computational cost, making it ideal for quick prototyping and deployment on embedded devices where efficiency is paramount.
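The idea can be sketched with a toy example: keep a “pre-trained” feature extractor fixed and fit only a small classifier head on top of its outputs. Everything below is a stand-in (a hand-made extractor, a made-up four-sample dataset, and simple perceptron updates); real transfer learning reuses MobileNet's convolutional base and trains the head with backpropagation:

```python
# Stand-in "frozen" feature extractor: in real transfer learning this is
# the pre-trained MobileNet base; here it is a fixed, hand-made mapping.
def extract_features(x):
    return [x[0] + x[1], x[0] - x[1]]

# Tiny labeled dataset (illustrative): label 1 when the two inputs agree.
data = [([1.0, 1.0], 1), ([0.9, 1.1], 1), ([1.0, -1.0], 0), ([-0.8, 1.0], 0)]

# Trainable head: one weight per feature plus a bias.
w, b = [0.0, 0.0], 0.0
lr = 0.1
for _ in range(50):                     # a few epochs of perceptron updates
    for x, y in data:
        f = extract_features(x)
        pred = 1 if w[0]*f[0] + w[1]*f[1] + b > 0 else 0
        err = y - pred                  # only the head's parameters change
        w = [w[0] + lr*err*f[0], w[1] + lr*err*f[1]]
        b += lr*err

correct = sum(1 for x, y in data
              if (1 if w[0]*extract_features(x)[0] +
                       w[1]*extract_features(x)[1] + b > 0 else 0) == y)
print(correct, "/", len(data))
```

Because the extractor already produces discriminative features, the head converges quickly on very little data, which is exactly the benefit transfer learning brings to our ~150-image project.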

-

Go to the Impulse Design Tab and create the impulse, defining an image size of 96x96 and squashing them (squared form, without cropping). Select Image and Transfer Learning blocks. Save the Impulse.

-
-
-

-
-
-
-

Image Pre-Processing

-

All the input QVGA/RGB565 images will be converted to 27,648 features (96x96x3).

-
-
-

-
-
-

Press [Save parameters] and Generate all features:

-
-
-

-
-
-
-
-

Model Design

-

In 2017, Google introduced MobileNetV1, a family of general-purpose computer vision neural networks designed with mobile devices in mind to support classification, detection, and more. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of various use cases. In 2018, Google launched MobileNetV2: Inverted Residuals and Linear Bottlenecks.

-

MobileNet V1 and MobileNet V2 aim at mobile efficiency and embedded vision applications but differ in architectural complexity and performance. While both use depthwise separable convolutions to reduce the computational cost, MobileNet V2 introduces Inverted Residual Blocks and Linear Bottlenecks to enhance performance. These new features allow V2 to capture more complex features using fewer parameters, making it computationally more efficient and generally more accurate than its predecessor. Additionally, V2 employs a non-linear activation in the intermediate expansion layer. It still uses a linear activation for the bottleneck layer, a design choice found to preserve important information through the network. MobileNet V2 offers an optimized architecture for higher accuracy and efficiency and will be used in this project.

-

Although the base MobileNet architecture is already tiny and has low latency, many times, a specific use case or application may require the model to be even smaller and faster. MobileNets introduces a straightforward parameter α (alpha) called width multiplier to construct these smaller, less computationally expensive models. The role of the width multiplier α is that of thinning a network uniformly at each layer.

-

Edge Impulse Studio can use both MobileNetV1 (96x96 images) and V2 (96x96 or 160x160 images), with several different α values (from 0.05 to 1.0). For example, you will get the highest accuracy with V2, 160x160 images, and α=1.0. Of course, there is a trade-off. The higher the accuracy, the more memory (around 1.3MB RAM and 2.6MB ROM) will be needed to run the model, implying more latency. The smaller footprint will be obtained at the other extreme with MobileNetV1 and α=0.10 (around 53.2K RAM and 101K ROM).
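The savings can be sketched numerically. For K×K kernels, C_in input channels, C_out output channels, and an H×W feature map, a standard convolution costs K·K·C_in·C_out·H·W multiply-accumulates, while a depthwise separable one costs K·K·C_in·H·W + C_in·C_out·H·W. The width multiplier α thins C_in and C_out together, so the dominant pointwise term shrinks with α². The layer sizes below are illustrative, not Edge Impulse's exact profile:

```python
def standard_macs(k, c_in, c_out, h, w):
    # One K x K x C_in filter per output channel, applied at every position.
    return k * k * c_in * c_out * h * w

def separable_macs(k, c_in, c_out, h, w):
    depthwise = k * k * c_in * h * w      # one K x K filter per input channel
    pointwise = c_in * c_out * h * w      # 1x1 conv to mix channels
    return depthwise + pointwise

k, c_in, c_out, h, w = 3, 32, 64, 96, 96  # hypothetical layer
full = standard_macs(k, c_in, c_out, h, w)
sep = separable_macs(k, c_in, c_out, h, w)
print(round(full / sep, 1))               # close to an 8x reduction in MACs

# Width multiplier alpha thins every layer's channels uniformly:
alpha = 0.25
thin = separable_macs(k, int(alpha * c_in), int(alpha * c_out), h, w)
# Pointwise cost scales with alpha**2, depthwise with alpha.
print(round(sep / thin, 1))
```

This is why dropping α from 1.0 to 0.10 collapses the memory footprint from megabytes to tens of kilobytes, at the price of accuracy.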

-
-
-

-
-
-

We will use MobileNetV2 96x96 0.1 for this project, with an estimated memory cost of 265.3 KB in RAM. This model should be OK for the Nicla Vision with 1MB of SRAM. On the Transfer Learning Tab, select this model:

-
-
-

-
-
-
-
-
-

Model Training

-

Another valuable technique to be used with Deep Learning is Data Augmentation. Data augmentation is a method to improve the accuracy of machine learning models by creating additional artificial data. A data augmentation system makes small, random changes to your training data during the training process (such as flipping, cropping, or rotating the images).

-

Looking under the hood, here you can see how Edge Impulse implements a data Augmentation policy on your data:

-
# Implements the data augmentation policy
def augment_image(image, label):
    # Flips the image randomly
    image = tf.image.random_flip_left_right(image)

    # Increase the image size, then randomly crop it down to
    # the original dimensions
    resize_factor = random.uniform(1, 1.2)
    new_height = math.floor(resize_factor * INPUT_SHAPE[0])
    new_width = math.floor(resize_factor * INPUT_SHAPE[1])
    image = tf.image.resize_with_crop_or_pad(image, new_height, new_width)
    image = tf.image.random_crop(image, size=INPUT_SHAPE)

    # Vary the brightness of the image
    image = tf.image.random_brightness(image, max_delta=0.2)

    return image, label

Exposure to these variations during training can help prevent your model from taking shortcuts by “memorizing” superficial clues in your training data, meaning it may better reflect the deep underlying patterns in your dataset.

-

The final layer of our model will have 12 neurons with a 15% dropout for overfitting prevention. Here is the Training result:

-
-
-

-
-
-

The result is excellent, with 77 ms of latency, which should result in about 13 fps (frames per second) during inference.
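The frames-per-second figure follows directly from the per-inference latency (assuming inference dominates the loop):

```python
latency_ms = 77                # measured per-inference latency
fps = 1000 / latency_ms        # inferences per second
print(round(fps))              # -> 13 frames per second
```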

-
-
-

Model Testing

-
-
-

-
-
-

Now, you should use the Test data set aside at the start of the project and run the trained model with it as input:

-
-
-

-
-
-

The result is, again, excellent.

-
-
-

-
-
-
-
-

Deploying the model

-

At this point, we can deploy the trained model as .tflite and use the OpenMV IDE to run it using MicroPython, or we can deploy it as a C/C++ or an Arduino library.

-
-
-

-
-
-
-

Arduino Library

-

First, let’s deploy it as an Arduino Library:

-
-
-

-
-
-

You should install the library as .zip on the Arduino IDE and run the sketch nicla_vision_camera.ino available in Examples under your library name.

-
-

Note that the Arduino Nicla Vision has, by default, 512 KB of RAM allocated for the M7 core and an additional 244 KB on the M4 address space. In the code, this allocation was changed to 288 KB to guarantee that the model will run on the device (malloc_addblock((void*)0x30000000, 288 * 1024);).

-
-

The result is good, with 86ms of measured latency.

-
-
-

-
-
-

Here is a short video showing the inference results:

-
-
-

OpenMV

-

It is possible to deploy the trained model to be used with OpenMV in two ways: as a library and as a firmware.

-

Three files are generated as a library: the trained.tflite model, a list with labels, and a simple MicroPython script that can make inferences using the model.

-
-
-

-
-
-

It was impossible to run this model as a .tflite file directly on the Nicla. So, we can either sacrifice accuracy by using a smaller model or deploy the model as OpenMV Firmware (FW). When choosing FW, the Edge Impulse Studio generates the optimized models, libraries, and frameworks needed for inference. Let’s explore this option.

-

Select OpenMV Firmware on the Deploy Tab and press [Build].

-
-
-

-
-
-

On your computer, you will find a ZIP file. Open it:

-
-
-

-
-
-

Use the Bootloader tool on the OpenMV IDE to load the FW on your board:

-
-
-

-
-
-

Select the appropriate file (.bin for Nicla-Vision):

-
-
-

-
-
-

After the download is finished, press OK:

-
-
-

-
-
-

If a message says that the FW is outdated, DO NOT UPGRADE. Select [NO].

-
-
-

-
-
-

Now, open the script ei_image_classification.py that was downloaded from the Studio along with the .bin file for the Nicla.

-
-
-

-
-
-

Run it. Pointing the camera to the objects we want to classify, the inference result will be displayed on the Serial Terminal.

-
-
-

-
-
-
-

Changing the Code to add labels

-

The code provided by Edge Impulse can be modified so that, for testing purposes, we can see the inference result directly on the image displayed in the OpenMV IDE.

-

Upload the code from GitHub, or modify it as below:

-
# Marcelo Rovai - NICLA Vision - Image Classification
# Adapted from Edge Impulse - OpenMV Image Classification Example
# @24Aug23

import sensor, image, time, os, tf, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pxl fmt to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

try:
    # Load built in model
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)

clock = time.clock()
while(True):
    clock.tick()  # Starts tracking elapsed time.

    img = sensor.snapshot()

    # default settings just do one detection
    for obj in net.classify(img,
                            min_scale=1.0,
                            scale_mul=0.8,
                            x_overlap=0.5,
                            y_overlap=0.5):
        fps = clock.fps()
        lat = clock.avg()

        print("**********\nPrediction:")
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        max_val = predictions_list[0][1]
        max_lbl = 'background'
        for i in range(len(predictions_list)):
            val = predictions_list[i][1]
            lbl = predictions_list[i][0]

            if val > max_val:
                max_val = val
                max_lbl = lbl

    # Print label with the highest probability
    if max_val < 0.5:
        max_lbl = 'uncertain'
    print("{} with a prob of {:.2f}".format(max_lbl, max_val))
    print("FPS: {:.2f} fps ==> latency: {:.0f} ms".format(fps, lat))

    # Draw label with highest probability to image viewer
    img.draw_string(
        10, 10,
        max_lbl + "\n{:.2f}".format(max_val),
        mono_space = False,
        scale=2
        )

Here you can see the result:

-
-
-

-
-
-

Note that the latency (136 ms) is almost double what we got directly with the Arduino IDE. This is because we are using the IDE as an interface, and the measurement also includes the time spent waiting for the camera to be ready. If we start the clock just before the inference:

-
-
-

-
-
-

The latency will drop to only 71 ms.
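In plain Python terms, the difference is where the timer starts. The sketch below uses the standard time module and stand-in functions (sleep calls pretending to be the camera and the model) to show why timing only the classify step yields a smaller number than timing the whole capture-plus-display loop:

```python
import time

def fake_capture():          # stand-in for sensor.snapshot()
    time.sleep(0.05)         # pretend the camera takes 50 ms

def fake_classify():         # stand-in for net.classify(...)
    time.sleep(0.07)         # pretend inference takes 70 ms

# Whole-loop timing (capture + inference), like the original script:
t0 = time.monotonic()
fake_capture()
fake_classify()
whole_ms = (time.monotonic() - t0) * 1000

# Timer started just before the inference call:
fake_capture()
t0 = time.monotonic()
fake_classify()
infer_ms = (time.monotonic() - t0) * 1000

print(whole_ms > infer_ms)   # True: excluding capture lowers the figure
```

On the Nicla, moving `clock.tick()` to just before `net.classify(...)` has exactly this effect.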

-
-
-

-
-
-
-

The NiclaV runs about half as fast when connected to the IDE. The FPS should increase once disconnected.

-
-
-
-

Post-Processing with LEDs

-

When working with embedded machine learning, we want devices that continually run inference and act on the results directly in the physical world, rather than displaying results on a connected computer. To simulate this, we will light a different LED for each possible inference result.

-
-
-

-
-
-

To accomplish that, we should upload the code from GitHub or change the last code to include the LEDs:

-
# Marcelo Rovai - NICLA Vision - Image Classification with LEDs
# Adapted from Edge Impulse - OpenMV Image Classification Example
# @24Aug23

import sensor, image, time, os, tf, uos, gc, pyb

ledRed = pyb.LED(1)
ledGre = pyb.LED(2)
ledBlu = pyb.LED(3)

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixl fmt to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

ledRed.off()
ledGre.off()
ledBlu.off()

try:
    # Load built in model
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)

clock = time.clock()


def setLEDs(max_lbl):

    if max_lbl == 'uncertain':
        ledRed.on()
        ledGre.off()
        ledBlu.off()

    if max_lbl == 'periquito':
        ledRed.off()
        ledGre.on()
        ledBlu.off()

    if max_lbl == 'robot':
        ledRed.off()
        ledGre.off()
        ledBlu.on()

    if max_lbl == 'background':
        ledRed.off()
        ledGre.off()
        ledBlu.off()


while(True):
    img = sensor.snapshot()
    clock.tick()  # Starts tracking elapsed time.

    # default settings just do one detection.
    for obj in net.classify(img,
                            min_scale=1.0,
                            scale_mul=0.8,
                            x_overlap=0.5,
                            y_overlap=0.5):
        fps = clock.fps()
        lat = clock.avg()

        print("**********\nPrediction:")
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        max_val = predictions_list[0][1]
        max_lbl = 'background'
        for i in range(len(predictions_list)):
            val = predictions_list[i][1]
            lbl = predictions_list[i][0]

            if val > max_val:
                max_val = val
                max_lbl = lbl

    # Print label and turn on LED with the highest probability
    if max_val < 0.8:
        max_lbl = 'uncertain'

    setLEDs(max_lbl)

    print("{} with a prob of {:.2f}".format(max_lbl, max_val))
    print("FPS: {:.2f} fps ==> latency: {:.0f} ms".format(fps, lat))

    # Draw label with highest probability to image viewer
    img.draw_string(
        10, 10,
        max_lbl + "\n{:.2f}".format(max_val),
        mono_space = False,
        scale=2
        )

Now, each time a class scores a result greater than 0.8, the corresponding LED will be lit:

-
  • LED Red On: Uncertain (no class is over 0.8)

  • LED Green On: Periquito > 0.8

  • LED Blue On: Robot > 0.8

  • All LEDs Off: Background > 0.8
-

Here is the result:

-
-
-

-
-
-

In more detail

-
-
-

-
-
-
-
-
-
-

Image Classification (non-official) Benchmark

-

Several development boards can be used for embedded machine learning (TinyML). The most common ones for low-energy Computer Vision applications are the ESP32-CAM, the Seeed XIAO ESP32S3 Sense, the Arduino Nicla Vision, and the Arduino Portenta.

-
-
-

-
-
-

Taking the opportunity, the same trained model was deployed on the ESP-CAM, the XIAO, and the Portenta (for the latter, the model was retrained using grayscale images to be compatible with its camera). Here are the results of deploying the models as Arduino libraries:

-
-
-

-
-
-
-
-

Conclusion

-

Before we finish, consider that Computer Vision is more than just image classification. For example, you can develop Edge Machine Learning projects around vision in several areas, such as:

-
  • Autonomous Vehicles: Use sensor fusion, lidar data, and computer vision algorithms to navigate and make decisions.

  • Healthcare: Automated diagnosis of diseases through MRI, X-ray, and CT scan image analysis.

  • Retail: Automated checkout systems that identify products as they pass through a scanner.

  • Security and Surveillance: Facial recognition, anomaly detection, and object tracking in real-time video feeds.

  • Augmented Reality: Object detection and classification to overlay digital information in the real world.

  • Industrial Automation: Visual inspection of products, predictive maintenance, and robot and drone guidance.

  • Agriculture: Drone-based crop monitoring and automated harvesting.

  • Natural Language Processing: Image captioning and visual question answering.

  • Gesture Recognition: For gaming, sign language translation, and human-machine interaction.

  • Content Recommendation: Image-based recommendation systems in e-commerce.

diff --git a/contents/introduction/introduction.html b/contents/introduction/introduction.html
deleted file mode 100644
index ca76e380..00000000
--- a/contents/introduction/introduction.html
+++ /dev/null
@@ -1,1070 +0,0 @@

1  Introduction


-
DALL·E 3 Prompt: A detailed, rectangular, flat 2D illustration depicting a roadmap of a book’s chapters on machine learning systems, set on a crisp clean white background. The image features a winding road traveling through various symbolic landmarks. Each landmark represents a chapter topic: Introduction, ML Systems, Deep Learning, AI Workflow, Data Engineering, AI Frameworks, AI Training, Efficient AI, Model Optimizations, AI Acceleration, Benchmarking AI, On-Device Learning, Embedded AIOps, Security & Privacy, Responsible AI, Sustainable AI, AI for Good, Robust AI, Generative AI. The style is clean, modern, and flat, suitable for a technical book, with each landmark clearly labeled with its chapter title.
-
-
-

In the early 1990s, Mark Weiser, a pioneering computer scientist, introduced the world to a revolutionary concept that would forever change how we interact with technology. He envisioned a future where computing would be so seamlessly integrated into our environments that it would become an invisible, integral part of daily life. This vision, which he termed “ubiquitous computing,” promised a world where technology would serve us without demanding our constant attention or interaction. Fast forward to today, and we find ourselves on the cusp of realizing Weiser’s vision, thanks to the advent and proliferation of machine learning systems.

-

Ubiquitous computing (Weiser 1991), as Weiser imagined, is not merely about embedding processors in everyday objects; it is about imbuing our environment with a form of intelligence that anticipates our needs and acts on our behalf, enhancing our experiences without our explicit command. The key to this ubiquitous intelligence lies in developing and deploying machine learning systems at the edge of our networks.

-
Weiser, Mark. 1991. “The Computer for the 21st Century.” Sci. Am. 265 (3): 94–104. https://doi.org/10.1038/scientificamerican0991-94.

Machine learning, a subset of artificial intelligence, enables computers to learn from and make decisions based on data rather than following explicitly programmed instructions. When deployed at the edge—closer to where data is generated, and actions are taken—machine learning systems can process information in real-time, responding to environmental changes and user inputs with minimal latency. This capability is critical for applications where timing is crucial, such as autonomous vehicles, real-time language translation, and smart healthcare devices.

-

The migration of machine learning from centralized data centers to the edge of networks marks a significant evolution in computing architecture. The need for speed, privacy, and reduced bandwidth consumption drives this shift. By processing data locally, edge-based machine learning systems can make quick decisions without constantly communicating with a central server. This speeds up response times, conserves bandwidth, and enhances privacy by limiting the amount of data transmitted over the network.

-

Moreover, the ability to deploy machine learning models in diverse environments has led to an explosion of innovative applications. From smart cities that optimize traffic flow in real-time to agricultural drones that monitor crop health and apply treatments precisely where needed, machine learning at the edge enables a level of contextual awareness and responsiveness that was previously unimaginable.

-

Despite the promise of ubiquitous intelligence, deploying machine learning systems at the edge is challenging. These systems must operate within the constraints of limited processing power, memory, and energy availability, often in environments far from the controlled conditions of data centers. Additionally, ensuring the privacy and security of the data these systems process is paramount, particularly in applications that handle sensitive personal information.

-

Developing machine learning models that are efficient enough to run at the edge while delivering accurate and reliable results requires innovative model design, training, and deployment approaches. Researchers and engineers are exploring techniques such as model compression, federated learning, and transfer learning to address these challenges.

-

As we stand on the threshold of Weiser’s vision of ubiquitous computing, machine learning systems emerge as the key to unlocking this future. By embedding intelligence in the fabric of our environment, these systems have the potential to make our interactions with technology more natural and intuitive than ever before. As we continue to push the boundaries of what’s possible with machine learning at the edge, we move closer to a world where technology quietly enhances our lives without ever getting in the way.

-

In this book, we will explore the technical foundations of machine learning systems, the challenges of deploying these systems at the edge, and the vast array of applications they enable. Join us as we embark on a journey into the future of ubiquitous intelligence, where the seamless integration of technology into our daily lives transforms the essence of how we live, work, and interact with the world around us.

diff --git a/contents/kws_feature_eng/kws_feature_eng.html b/contents/kws_feature_eng/kws_feature_eng.html
deleted file mode 100644
index cba9db3d..00000000
--- a/contents/kws_feature_eng/kws_feature_eng.html
+++ /dev/null
@@ -1,1266 +0,0 @@

Audio Feature Engineering

-

-
DALL·E 3 Prompt: 1950s style cartoon scene set in an audio research room. Two scientists, one holding a magnifying glass and the other taking notes, examine large charts pinned to the wall. These charts depict FFT graphs and time curves related to audio data analysis. The room has a retro ambiance, with wooden tables, vintage lamps, and classic audio analysis tools.
-
-
-
-

Introduction

-

In this hands-on tutorial, the emphasis is on the critical role that feature engineering plays in optimizing the performance of machine learning models applied to audio classification tasks, such as speech recognition. Since the performance of any machine learning model relies heavily on the quality of the features it is given, we will explore the “under-the-hood” mechanics of feature extraction, focusing mainly on Mel-Frequency Cepstral Coefficients (MFCCs), a cornerstone of audio signal processing.

-

Machine learning models, especially traditional algorithms, don’t understand audio waves. They understand numbers arranged in some meaningful way, i.e., features. These features encapsulate the characteristics of the audio signal, making it easier for models to distinguish between different sounds.

-
-

This tutorial will deal with generating features specifically for audio classification. This can be particularly interesting for applying machine learning to a variety of audio data, whether for speech recognition, music categorization, insect classification based on wingbeat sounds, or other sound analysis tasks.

-
-
-
-

The KWS

-

The most common TinyML application is Keyword Spotting (KWS), a subset of the broader field of speech recognition. While general speech recognition aims to transcribe all spoken words into text, Keyword Spotting focuses on detecting specific “keywords” or “wake words” in a continuous audio stream. The system is trained to recognize these keywords as predefined phrases or words, such as yes or no. In short, KWS is a specialized form of speech recognition with its own set of challenges and requirements.

-

Here is a typical KWS process using an MFCC feature converter:

-
-
-

-
-
-
-

Applications of KWS:

-
    -
  • Voice Assistants: In devices like Amazon’s Alexa or Google Home, KWS is used to detect the wake word (“Alexa” or “Hey Google”) to activate the device.
  • -
  • Voice-Activated Controls: In automotive or industrial settings, KWS can be used to initiate specific commands like “Start engine” or “Turn off lights.”
  • -
  • Security Systems: Voice-activated security systems may use KWS to authenticate users based on a spoken passphrase.
  • -
  • Telecommunication Services: Customer service lines may use KWS to route calls based on spoken keywords.
  • -
-
-
-

Differences from General Speech Recognition:

-
    -
  • Computational Efficiency: KWS is usually designed to be less computationally intensive than full speech recognition, as it only needs to recognize a small set of phrases.
  • -
  • Real-time Processing: KWS often operates in real-time and is optimized for low-latency detection of keywords.
  • -
  • Resource Constraints: KWS models are often designed to be lightweight, so they can run on devices with limited computational resources, like microcontrollers or mobile phones.
  • -
  • Focused Task: While general speech recognition models are trained to handle a broad range of vocabulary and accents, KWS models are fine-tuned to recognize specific keywords accurately, often in noisy environments.
  • -
-
-
-
-

Introduction to Audio Signals

-

Understanding the basic properties of audio signals is crucial for effective feature extraction and, ultimately, for successfully applying machine learning algorithms in audio classification tasks. Audio signals are complex waveforms that capture fluctuations in air pressure over time. These signals can be characterized by several fundamental attributes: sampling rate, frequency, and amplitude.

-
    -
  • Frequency and Amplitude: Frequency refers to the number of oscillations a waveform undergoes per unit time and is measured in Hertz (Hz). In the context of audio signals, different frequencies correspond to different pitches. Amplitude, on the other hand, measures the magnitude of the oscillations and correlates with the loudness of the sound. Both frequency and amplitude are essential features that capture audio signals’ tonal and rhythmic qualities.

  • -
  • Sampling Rate: The sampling rate, often denoted in Hertz (Hz), defines the number of samples taken per second when digitizing an analog signal. A higher sampling rate allows for a more accurate digital representation of the signal but also demands more computational resources for processing. Typical sampling rates include 44.1 kHz for CD-quality audio and 16 kHz or 8 kHz for speech recognition tasks. Understanding the trade-offs in selecting an appropriate sampling rate is essential for balancing accuracy and computational efficiency. In general, with TinyML projects, we work with 16 kHz. Although music tones can be heard at frequencies up to 20 kHz, voice maxes out at 8 kHz. Traditional telephone systems use an 8 kHz sampling frequency.

  • -
-
-

For an accurate representation of the signal, the sampling rate must be at least twice the highest frequency present in the signal.
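This rule is the Nyquist criterion. As a quick sanity check, a trivial helper (an illustrative sketch, not part of the original text) recovers the 16 kHz figure used throughout the chapter from the ~8 kHz upper bound of voice content:

```python
# Nyquist criterion: sample at no less than twice the highest
# frequency we want to capture.
def min_sampling_rate(max_frequency_hz: float) -> float:
    """Minimum sampling rate required to represent max_frequency_hz."""
    return 2 * max_frequency_hz

voice_max_hz = 8_000  # upper bound for voice content quoted above
print(min_sampling_rate(voice_max_hz))  # 16000 -> the 16 kHz used in TinyML audio
```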

-
-
    -
  • Time Domain vs. Frequency Domain: Audio signals can be analyzed in both the time and frequency domains. In the time domain, a signal is represented as a waveform where amplitude is plotted against time. This representation helps to observe temporal features like onset and duration, but the signal’s tonal characteristics are not well represented. Conversely, a frequency domain representation provides a view of the signal’s constituent frequencies and their respective amplitudes, typically obtained via a Fourier Transform. This is invaluable for tasks that require understanding the signal’s spectral content, such as identifying musical notes or speech phonemes (our case).
  • -
-

The image below shows the words YES and NO with typical representations in the Time (Raw Audio) and Frequency domains:

-
-
-

-
-
-
-

Why Not Raw Audio?

-

While using raw audio data directly for machine learning tasks may seem tempting, this approach presents several challenges that make it less suitable for building robust and efficient models.

-

Using raw audio data for Keyword Spotting (KWS), for example, on TinyML devices poses challenges due to its high dimensionality (using a 16 kHz sampling rate), computational complexity for capturing temporal features, susceptibility to noise, and lack of semantically meaningful features, making feature extraction techniques like MFCCs a more practical choice for resource-constrained applications.

-

Here are some additional details of the critical issues associated with using raw audio:

-
    -
  • High Dimensionality: Audio signals, especially those sampled at high rates, result in large amounts of data. For example, a 1-second audio clip sampled at 16 kHz will have 16,000 individual data points. High-dimensional data increases computational complexity, leading to longer training times and higher computational costs, making it impractical for resource-constrained environments. Furthermore, the wide dynamic range of audio signals requires a significant amount of bits per sample, while conveying little useful information.

  • -
  • Temporal Dependencies: Raw audio signals have temporal structures that simple machine learning models may find hard to capture. While recurrent neural networks like LSTMs can model such dependencies, they are computationally intensive and tricky to train on tiny devices.

  • -
  • Noise and Variability: Raw audio signals often contain background noise and other non-essential elements affecting model performance. Additionally, the same sound can have different characteristics based on various factors such as distance from the microphone, the orientation of the sound source, and acoustic properties of the environment, adding to the complexity of the data.

  • -
  • Lack of Semantic Meaning: Raw audio doesn’t inherently contain semantically meaningful features for classification tasks. Features like pitch, tempo, and spectral characteristics, which can be crucial for speech recognition, are not directly accessible from raw waveform data.

  • -
  • Signal Redundancy: Audio signals often contain redundant information, with certain portions of the signal contributing little to no value to the task at hand. This redundancy can make learning inefficient and potentially lead to overfitting.

  • -
-

For these reasons, feature extraction techniques such as Mel-frequency Cepstral Coefficients (MFCCs), Mel-Frequency Energies (MFEs), and simple Spectrograms are commonly used to transform raw audio data into a more manageable and informative format. These features capture the essential characteristics of the audio signal while reducing dimensionality and noise, facilitating more effective machine learning.
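A back-of-the-envelope comparison makes the dimensionality argument concrete. The 13 coefficients over 49 frames used here is a typical MFCC output for 1 second of 16 kHz audio (an assumed, illustrative configuration):

```python
# Size of 1 second of raw audio vs. a typical MFCC representation.
sampling_rate = 16_000              # 16 kHz, as in the text
raw_features = sampling_rate * 1    # one second of raw samples
mfcc_features = 13 * 49             # 13 coefficients x 49 frames (typical)
print(raw_features, mfcc_features)  # 16000 vs 637: roughly 25x fewer values
```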

-
-
-
-

Introduction to MFCCs

-
-

What are MFCCs?

-

Mel-frequency Cepstral Coefficients (MFCCs) are a set of features derived from the spectral content of an audio signal. They are based on human auditory perceptions and are commonly used to capture the phonetic characteristics of an audio signal. The MFCCs are computed through a multi-step process that includes pre-emphasis, framing, windowing, applying the Fast Fourier Transform (FFT) to convert the signal to the frequency domain, and finally, applying the Discrete Cosine Transform (DCT). The result is a compact representation of the original audio signal’s spectral characteristics.

-

The image below shows the words YES and NO in their MFCC representation:

-
-
-

-
-
-
-

This video explains the Mel Frequency Cepstral Coefficients (MFCC) and how to compute them.

-
-
-
-

Why are MFCCs important?

-

MFCCs are crucial for several reasons, particularly in the context of Keyword Spotting (KWS) and TinyML:

-
    -
  • Dimensionality Reduction: MFCCs capture essential spectral characteristics of the audio signal while significantly reducing the dimensionality of the data, making it ideal for resource-constrained TinyML applications.
  • -
  • Robustness: MFCCs are less susceptible to noise and variations in pitch and amplitude, providing a more stable and robust feature set for audio classification tasks.
  • -
  • Human Auditory System Modeling: The Mel scale in MFCCs approximates the human ear’s response to different frequencies, making them practical for speech recognition where human-like perception is desired.
  • -
  • Computational Efficiency: The process of calculating MFCCs is computationally efficient, making it well-suited for real-time applications on hardware with limited computational resources.
  • -
-

In summary, MFCCs offer a balance of information richness and computational efficiency, making them popular for audio classification tasks, particularly in constrained environments like TinyML.

-
-
-

Computing MFCCs

-

The computation of Mel-frequency Cepstral Coefficients (MFCCs) involves several key steps. Let’s walk through these, which are particularly important for Keyword Spotting (KWS) tasks on TinyML devices.

-
    -
  • Pre-emphasis: The first step is pre-emphasis, which is applied to accentuate the high-frequency components of the audio signal and balance the frequency spectrum. This is achieved by applying a filter that amplifies the difference between consecutive samples. The formula for pre-emphasis is \(y(t) = x(t) - \alpha\, x(t-1)\), where \(\alpha\) is the pre-emphasis factor, typically around 0.97.

  • -
  • Framing: Audio signals are divided into short frames (the frame length), usually 20 to 40 milliseconds. This is based on the assumption that frequencies in a signal are stationary over a short period. Framing helps in analyzing the signal in such small time slots. The frame stride (or step) determines the displacement between one frame and the next; consecutive frames may be sequential or overlapping.

  • -
  • Windowing: Each frame is then windowed to minimize the discontinuities at the frame boundaries. A commonly used window function is the Hamming window. Windowing prepares the signal for a Fourier transform by minimizing the edge effects. The image below shows three frames (10, 20, and 30) and the time samples after windowing (note that the frame length and frame stride are 20 ms):

  • -
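The first two steps can be sketched in a few lines of NumPy on a synthetic signal; the 20 ms frame length/stride and \(\alpha = 0.97\) follow the values above (the 440 Hz tone is just a stand-in for a real recording):

```python
import numpy as np

fs = 16_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)          # synthetic 1-second "audio"

# 1) Pre-emphasis: y(t) = x(t) - alpha * x(t-1)
alpha = 0.97
y = np.append(x[0], x[1:] - alpha * x[:-1])

# 2) Framing: 20 ms frames with a 20 ms stride (no overlap here)
frame_len = int(0.020 * fs)              # 320 samples
stride = int(0.020 * fs)                 # 320 samples
n_frames = 1 + (len(y) - frame_len) // stride
frames = np.stack([y[i*stride : i*stride + frame_len] for i in range(n_frames)])
print(frames.shape)                      # (50, 320): 50 frames of 320 samples
```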
-
-
-

-
-
-
    -
  • Fast Fourier Transform (FFT): The FFT is applied to each windowed frame to convert it from the time domain to the frequency domain. The FFT gives us a complex-valued representation that includes both magnitude and phase information. However, for MFCCs, only the magnitude is used to calculate the Power Spectrum. The power spectrum is the square of the magnitude spectrum and measures the energy present at each frequency component.
  • -
-
-

The power spectrum \(P(f)\) of a signal \(x(t)\) is defined as \(P(f) = |X(f)|^2\), where \(X(f)\) is the Fourier Transform of \(x(t)\). By squaring the magnitude of the Fourier Transform, we emphasize stronger frequencies over weaker ones, thereby capturing more relevant spectral characteristics of the audio signal. This is important in applications like audio classification, speech recognition, and Keyword Spotting (KWS), where the focus is on identifying distinct frequency patterns that characterize different classes of audio or phonemes in speech.
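The windowing, FFT, and power spectrum steps can be sketched in NumPy on a single synthetic frame. The 512-point FFT length is an assumption here (a common choice, also mentioned later for the Edge Impulse MFCC block):

```python
import numpy as np

fs = 16_000
n = 512                                   # FFT length (assumed)
t = np.arange(n) / fs
frame = np.sin(2 * np.pi * 1000 * t)      # synthetic frame: a 1 kHz tone

windowed = frame * np.hamming(n)          # Hamming window reduces edge effects
spectrum = np.fft.rfft(windowed)          # complex spectrum (magnitude + phase)
power = np.abs(spectrum) ** 2             # P(f) = |X(f)|^2

peak_bin = int(np.argmax(power))
print(peak_bin * fs / n)                  # ~1000 Hz: the injected tone dominates
```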

-
-
-
-

-
-
-
    -
  • Mel Filter Banks: The frequency domain is then mapped to the Mel scale, which approximates the human ear’s response to different frequencies. The idea is to extract more features (more filter banks) in the lower frequencies and fewer in the high frequencies. Thus, it performs well on sounds distinguished by the human ear. Typically, 20 to 40 triangular filters extract the Mel-frequency energies. These energies are then log-transformed to convert multiplicative factors into additive ones, making them more suitable for further processing.
  • -
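A common formula for the Mel scale is \(m = 2595 \log_{10}(1 + f/700)\). The helper below is an illustrative sketch (not the exact filter-bank code of any particular library); it shows how points spaced evenly on the Mel scale translate into tightly packed low-frequency centers:

```python
import numpy as np

def hz_to_mel(f_hz):
    """Map frequency in Hz to the Mel scale."""
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse mapping, Mel back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Center frequencies of 20 filters spread evenly on the Mel scale
# between 0 Hz and 8 kHz (half of a 16 kHz sampling rate):
mels = np.linspace(hz_to_mel(0), hz_to_mel(8000), 20)
centers = mel_to_hz(mels)
print(np.round(centers[:5]))  # low-frequency centers are closely spaced
```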
-
-
-

-
-
-
    -
  • Discrete Cosine Transform (DCT): The last step is to apply the Discrete Cosine Transform (DCT) to the log Mel energies. The DCT helps to decorrelate the energies, effectively compressing the data and retaining only the most discriminative features. Usually, the first 12-13 DCT coefficients are retained, forming the final MFCC feature vector.
  • -
-
-
-

-
-
-
-
-
-

Hands-On using Python

-

Let’s apply what we discussed while working on an actual audio sample. Open the notebook on Google CoLab and extract the MFCC features on your audio samples: [Open In Colab]
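For reference, the whole chain discussed in this chapter can be condensed into one self-contained NumPy sketch. Parameter values mirror the chapter (16 kHz, 20 ms frames, 20 filters, 13 coefficients); the companion notebook uses a library implementation, so treat this only as an illustration of the steps, not a drop-in replacement:

```python
import numpy as np

def mfcc(signal, fs=16_000, n_filters=20, n_coeffs=13,
         frame_ms=20, stride_ms=20, n_fft=512, alpha=0.97):
    """Toy MFCC: pre-emphasis, framing, windowing, FFT, Mel bank, log, DCT."""
    # Pre-emphasis
    sig = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Framing + Hamming window
    flen, fstep = int(fs * frame_ms / 1000), int(fs * stride_ms / 1000)
    n_frames = 1 + (len(sig) - flen) // fstep
    frames = np.stack([sig[i*fstep : i*fstep + flen] for i in range(n_frames)])
    frames *= np.hamming(flen)
    # Power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular Mel filter bank
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    imel = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = imel(np.linspace(mel(0), mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i-1], bins[i], bins[i+1]
        fbank[i-1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i-1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    log_energies = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the energies; keep the first n_coeffs
    k = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * k + 1) / (2 * n_filters))
    return log_energies @ dct.T

x = np.sin(2 * np.pi * 440 * np.arange(16_000) / 16_000)  # synthetic "audio"
print(mfcc(x).shape)  # (50, 13): 50 frames, 13 coefficients each
```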

-
-
-

Conclusion

-
-

What Feature Extraction technique should we use?

-

Mel-frequency Cepstral Coefficients (MFCCs), Mel-Frequency Energies (MFEs), and Spectrograms are techniques for representing audio data, each often helpful in different contexts.

-

In general, MFCCs are more focused on capturing the envelope of the power spectrum, which makes them less sensitive to fine-grained spectral details but more robust to noise. This is often desirable for speech-related tasks. On the other hand, spectrograms or MFEs preserve more detailed frequency information, which can be advantageous in tasks that require discrimination based on fine-grained spectral content.

-
-

MFCCs are particularly strong for:

-
    -
  1. Speech Recognition: MFCCs are excellent for identifying phonetic content in speech signals.
  2. -
  3. Speaker Identification: They can be used to distinguish between different speakers based on voice characteristics.
  4. -
  5. Emotion Recognition: MFCCs can capture the nuanced variations in speech indicative of emotional states.
  6. -
  7. Keyword Spotting: Especially in TinyML, where low computational complexity and small feature size are crucial.
  8. -
-
-
-

Spectrograms or MFEs are often more suitable for:

-
    -
  1. Music Analysis: Spectrograms can capture harmonic and timbral structures in music, which is essential for tasks like genre classification, instrument recognition, or music transcription.
  2. -
  3. Environmental Sound Classification: In recognizing non-speech, environmental sounds (e.g., rain, wind, traffic), the full spectrogram can provide more discriminative features.
  4. -
  5. Birdsong Identification: The intricate details of bird calls are often better captured using spectrograms.
  6. -
  7. Bioacoustic Signal Processing: In applications like dolphin or bat call analysis, the fine-grained frequency information in a spectrogram can be essential.
  8. -
  9. Audio Quality Assurance: Spectrograms are often used in professional audio analysis to identify unwanted noises, clicks, or other artifacts.
  10. -
- - -
-
-
- -
- - -
- - - - - - \ No newline at end of file diff --git a/contents/kws_nicla/kws_nicla.html b/contents/kws_nicla/kws_nicla.html deleted file mode 100644 index 5207f7da..00000000 --- a/contents/kws_nicla/kws_nicla.html +++ /dev/null @@ -1,1502 +0,0 @@ - - - - - - - - - -Machine Learning Systems - Keyword Spotting (KWS) - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - -
- -
- - -
- - - -
- -
-
-

Keyword Spotting (KWS)

-
- - - -
- - - - -
- - - -
- - -
-
-

-
DALL·E 3 Prompt: 1950s style cartoon scene set in a vintage audio research room. Two Afro-American female scientists are at the center. One holds a magnifying glass, closely examining ancient circuitry, while the other takes notes. On their wooden table, there are multiple boards with sensors, notably featuring a microphone. Behind these boards, a computer with a large, rounded back displays the Arduino IDE. The IDE showcases code for LED pin assignments and machine learning inference for voice command detection. A distinct window in the IDE, the Serial Monitor, reveals outputs indicating the spoken commands ‘yes’ and ‘no’. The room ambiance is nostalgic with vintage lamps, classic audio analysis tools, and charts depicting FFT graphs and time-domain curves.
-
-
-
-

Introduction

-

Having already explored the Nicla Vision board in the Image Classification and Object Detection applications, we are now shifting our focus to voice-activated applications with a project on Keyword Spotting (KWS).

-

As introduced in the Feature Engineering for Audio Classification Hands-On tutorial, Keyword Spotting (KWS) is integrated into many voice recognition systems, enabling devices to respond to specific words or phrases. While this technology underpins popular devices like Google Assistant or Amazon Alexa, it’s equally applicable and feasible on smaller, low-power devices. This tutorial will guide you through implementing a KWS system using TinyML on the Nicla Vision development board equipped with a digital microphone.

-

Our model will be designed to recognize keywords that can trigger device wake-up or specific actions, bringing them to life with voice-activated commands.

-
-
-

How does a voice assistant work?

-

As mentioned, voice assistants on the market, like Google Home or Amazon Echo Dot, only react to humans once they are “woken up” by particular keywords: “Hey Google” for the former and “Alexa” for the latter.

-
-
-

-
-
-

In other words, recognizing voice commands is based on a multi-stage model or Cascade Detection.

-
-
-

-
-
-

Stage 1: A small microprocessor inside the Echo Dot or Google Home continuously listens, waiting for the keyword to be spotted, using a TinyML model at the edge (KWS application).

-

Stage 2: Only when triggered by the KWS application on Stage 1 is the data sent to the cloud and processed on a larger model.

-

The video below shows an example of a Google Assistant being programmed on a Raspberry Pi (Stage 2), with an Arduino Nano 33 BLE as the TinyML device (Stage 1).

-
-
-

To explore the above Google Assistant project, please see the tutorial: Building an Intelligent Voice Assistant From Scratch.

-
-

In this KWS project, we will focus on Stage 1 (KWS or Keyword Spotting), where we will use the Nicla Vision, which has a digital microphone that will be used to spot the keyword.

-
-
-

The KWS Hands-On Project

-

The diagram below gives an idea of how the final KWS application should work (during inference):

-
-
-

-
-
-

Our KWS application will recognize four classes of sound:

-
    -
  • YES (Keyword 1)
  • -
  • NO (Keyword 2)
  • -
  • NOISE (no words spoken; only background noise is present)
  • -
  • UNKNOWN (a mix of words other than YES and NO)
  • -
-
-

For real-world projects, it is always advisable to include other sounds besides the keywords, such as “Noise” (or Background) and “Unknown.”

-
-
-

The Machine Learning workflow

-

The main component of the KWS application is its model. So, we must train such a model with our specific keywords, noise, and other words (the “unknown”):

-
-
-

-
-
-
-
-
-

Dataset

-

The critical component of any Machine Learning Workflow is the dataset. Once we have decided on specific keywords, in our case (YES and NO), we can take advantage of the dataset developed by Pete Warden, “Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition.” This dataset has 35 keywords (with 1,000+ samples each), such as yes, no, stop, and go. For words such as yes and no, we can get around 1,500 samples each.

-

You can download a small portion of the dataset from Edge Impulse Studio (Keyword spotting pre-built dataset), which includes samples from the four classes we will use in this project: yes, no, noise, and background. For this, follow the steps below:

- -
-

Uploading the dataset to the Edge Impulse Studio

-

Initiate a new project at Edge Impulse Studio (EIS) and select the Upload Existing Data tool in the Data Acquisition section. Choose the files to be uploaded:

-
-
-

-
-
-

Define the Label, select Automatically split between train and test, and Upload data to the EIS. Repeat for all classes.

-
-
-

-
-
-

The dataset will now appear in the Data acquisition section. Note that the approximately 6,000 samples (1,500 for each class) are split into Train (4,800) and Test (1,200) sets.

-
-
-

-
-
-
-
-

Capturing additional Audio Data

-

Although we have a lot of data from Pete’s dataset, collecting some words spoken by us is advised. When working with accelerometers, creating a dataset with data captured by the same type of sensor is essential. In the case of sound, this is optional because what we will classify is, in reality, audio data.

-
-

The key difference between sound and audio is the type of energy. Sound is mechanical perturbation (longitudinal sound waves) that propagate through a medium, causing variations of pressure in it. Audio is an electrical (analog or digital) signal representing sound.

-
-

When we pronounce a keyword, the sound waves should be converted to audio data. The conversion should be done by sampling the signal generated by the microphone at a 16 kHz frequency with a 16-bit depth per sample.

-

So, any device that can generate audio data with this basic specification (16 kHz/16-bit) will work fine. As a device, we can use the NiclaV, a computer, or even your mobile phone.
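As a quick sanity check on what this specification implies for a constrained device, a simple arithmetic sketch:

```python
# Data rate implied by the 16 kHz / 16-bit capture spec above.
sampling_rate = 16_000          # samples per second
bits_per_sample = 16
bytes_per_second = sampling_rate * bits_per_sample // 8
print(bytes_per_second)         # 32000 bytes/s: ~32 KB per second of audio
```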

-
-
-

-
-
-
-

Using the NiclaV and the Edge Impulse Studio

-

As we learned in the chapter Setup Nicla Vision, EIS officially supports the Nicla Vision, which simplifies the capture of the data from its sensors, including the microphone. So, please create a new project on EIS and connect the Nicla to it, following these steps:

-
    -
  • Download the latest EIS Firmware and unzip it.

  • -
  • Open the zip file on your computer and select the uploader corresponding to your OS:

  • -
-
-
-

-
-
-
    -
  • Put the NiclaV in Boot Mode by pressing the reset button twice.

  • -
  • Upload the binary arduino-nicla-vision.bin to your board by running the batch code corresponding to your OS.

  • -
-

Go to your project on EIS, and on the Data Acquisition tab, select WebUSB. A window will pop up; choose the option that shows that the Nicla is paired and press [Connect].

-

You can choose which sensor data to pick in the Collect Data section on the Data Acquisition tab. Select: Built-in microphone, define your label (for example, yes), the sampling Frequency[16000Hz], and the Sample length (in milliseconds), for example [10s]. Start sampling.

-
-
-

-
-
-

Data in Pete’s dataset have a length of 1 s, but the recorded samples are 10 s long and must be split into 1 s samples. Click on the three dots after the sample name and select Split sample.

-

A window will pop up with the Split tool.

-
-
-

-
-
-

Once inside the tool, split the data into 1-second (1000 ms) records. If necessary, add or remove segments. This procedure should be repeated for all new samples.

-
-
-

Using a smartphone and the EI Studio

-

You can also use your PC or smartphone to capture audio data, using a sampling frequency of 16 kHz and a bit depth of 16 bits.

-

Go to Devices, scan the QR Code using your phone, and click on the link. A data Collection app will appear in your browser. Select Collecting Audio, and define your Label, data capture Length, and Category.

-
-
-

-
-
-

Repeat the same procedure used with the NiclaV.

-
-

Note that any app, such as Audacity, can be used for audio recording, provided you use 16 kHz/16-bit samples.

-
-
-
-
-
-

Creating Impulse (Pre-Process / Model definition)

-

An impulse takes raw data, uses signal processing to extract features, and then uses a learning block to classify new data.

-
-

Impulse Design

-
-
-

-
-
-

First, we will take the data points with a 1-second window, augmenting the data by sliding that window in 500 ms intervals. Note that the option zero-pad data is set. It is essential to zero-pad samples shorter than 1 second (in some cases, samples can end up shorter than the 1,000 ms window in the split tool, where segments are trimmed to avoid noise and spikes).

-

Each 1-second audio sample should be pre-processed and converted to an image (for example, 13 x 49 x 1). As discussed in the Feature Engineering for Audio Classification Hands-On tutorial, we will use Audio (MFCC), which extracts features from audio signals using Mel-Frequency Cepstral Coefficients, well suited for the human voice, our case here.

-

Next, we select the Classification block to build our model from scratch using a Convolution Neural Network (CNN).

-
-

Alternatively, you can use the Transfer Learning (Keyword Spotting) block, which fine-tunes a pre-trained keyword spotting model on your data. This approach has good performance with relatively small keyword datasets.

-
-
-
-

Pre-Processing (MFCC)

-

The following step is to create the features to be trained in the next phase:

-

We could keep the default parameter values, but we will use the DSP Autotune parameters option.

-
-
-

-
-
-

We will take the Raw features (our 1-second, 16 kHz sampled audio data) and use the MFCC processing block to calculate the Processed features. For every 16,000 raw features (16,000 x 1 second), we will get 637 processed features (13 x 49).

-
-
-

-
-
-

The result shows that we only used a small amount of memory to pre-process data (16KB) and a latency of 34ms, which is excellent. For example, on an Arduino Nano (Cortex-M4f @ 64MHz), the same pre-process will take around 480ms. The parameters chosen, such as the FFT length [512], will significantly impact the latency.

-

Now, let’s Save parameters and move to the Generated features tab, where the actual features will be generated. Using UMAP, a dimension reduction technique, the Feature explorer shows how the features are distributed on a two-dimensional plot.

-
-
-

-
-
-

The result seems OK, with a visually clear separation between yes features (in red) and no features (in blue). The unknown features seem nearer to the no space than to the yes. This suggests that the keyword no is more prone to false positives.

-
-
-

Going under the hood

-

To understand better how the raw sound is preprocessed, look at the Feature Engineering for Audio Classification chapter. You can play with the MFCC features generation by downloading this notebook from GitHub or [Opening it In Colab]

-
-
-
-

Model Design and Training

-

We will use a simple Convolution Neural Network (CNN) model, tested with 1D and 2D convolutions. The basic architecture has two blocks of Convolution + MaxPooling ([8] and [16] filters, respectively) and a Dropout of [0.25] for the 1D and [0.5] for the 2D. For the last layer, after Flattening, we have [4] neurons, one for each class:

-
-
-

-
-
-

As hyperparameters, we will have a Learning Rate of [0.005] and a model trained for [100] epochs. We will also include a data augmentation method based on SpecAugment. We trained the 1D and the 2D models with the same hyperparameters. The 1D architecture had a better overall result (90.5% accuracy compared with 88% for the 2D), so we will use the 1D.

-
-
-

-
-
-
-

Using 1D convolutions is more efficient because it requires fewer parameters than 2D convolutions, making them more suitable for resource-constrained environments.
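A rough parameter count illustrates the claim. The kernel sizes here (3 for 1D, 3×3 for 2D) are illustrative assumptions, paired with the [8] and [16] filters from the architecture above; a 1D convolution over the 49 time steps treats the 13 MFCC coefficients as channels, while a 2D convolution treats the 13 x 49 input as a single-channel image:

```python
# Back-of-the-envelope per-layer parameter counts (weights + biases).
def conv1d_params(kernel, in_ch, out_ch):
    return (kernel * in_ch + 1) * out_ch            # one bias per filter

def conv2d_params(kernel, in_ch, out_ch):
    return (kernel * kernel * in_ch + 1) * out_ch

p1 = conv1d_params(3, 13, 8) + conv1d_params(3, 8, 16)  # two 1D conv blocks
p2 = conv2d_params(3, 1, 8) + conv2d_params(3, 8, 16)   # two 2D conv blocks
print(p1, p2)  # 720 vs 1248: the 1D model needs fewer parameters
```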

-
-

It is also interesting to pay attention to the 1D Confusion Matrix. The F1 Score for yes is 95%, and for no, 91%. That was expected from what we saw with the Feature Explorer (no and unknown at close distance). To try to improve the result, you can closely inspect the samples classified with errors.

-
-
-

-
-
-

Listen to the samples that went wrong. For example, for yes, most of the mistakes were related to a yes pronounced as “yeh”. You can acquire additional samples and then retrain your model.

-
-

Going under the hood

-

If you want to understand what is happening “under the hood,” you can download the pre-processed dataset (MFCC training data) from the Dashboard tab and run this Jupyter Notebook, playing with the code or [Opening it In Colab]. For example, you can analyze the accuracy by each epoch:

-
-
-

-
-
-
-
-
-

Testing

-

Testing the model with the data reserved for testing (Test Data), we got an accuracy of approximately 76%.

-
-
-

-
-
-

Inspecting the F1 score, we can see that for YES, we got 0.90, an excellent result since we expect to use this keyword as the primary “trigger” for our KWS project. The worst result (0.70) is for UNKNOWN, which is OK.

-

For NO, we got 0.72, which was expected, but to improve this result, we can move the samples that were not correctly classified to the training dataset and then repeat the training process.

-
-

Live Classification

-

We can proceed to the project’s next step, but consider that it is also possible to perform Live Classification using the NiclaV or a smartphone to capture live samples, testing the trained model before deployment on our device.

-
-
-
-

Deploy and Inference

-

The EIS will package all the needed libraries, preprocessing functions, and trained models, downloading them to your computer. Go to the Deployment section, select Arduino Library, and at the bottom, choose Quantized (Int8) and press Build.

-
-
-

-
-
-

When the Build button is selected, a zip file will be created and downloaded to your computer. On your Arduino IDE, go to the Sketch tab, select the option Add .ZIP Library, and Choose the .zip file downloaded by EIS:

-
-
-

-
-
-

Now, it is time for a real test. We will make inferences while completely disconnected from the EIS. Let’s use the NiclaV code example created when we deployed the Arduino Library.

-

In your Arduino IDE, go to the File/Examples tab, look for your project, and select nicla-vision/nicla-vision_microphone (or nicla-vision_microphone_continuous).

-
-
-

-
-
-

Press the reset button twice to put the NiclaV in boot mode, upload the sketch to your board, and test some real inferences:

-
-
-

-
-
-
-
-

Post-processing

-

Now that we know the model is working since it detects our keywords, let’s modify the code to see the result with the NiclaV completely offline (disconnected from the PC and powered by a battery, a power bank, or an independent 5V power supply).

-

The idea is that whenever the keyword YES is detected, the Green LED will light; if a NO is heard, the Red LED will light; if it is UNKNOWN, the Blue LED will light; and in the presence of noise (no keyword), the LEDs will be OFF.

-

We should modify one of the code examples. Let’s do it now with the nicla-vision_microphone_continuous.

-

Start with initializing the LEDs:

...
void setup()
{
    // Once you finish debugging your code, you can comment
    // or delete the Serial part of the code
    Serial.begin(115200);
    while (!Serial);
    Serial.println("Inferencing - Nicla Vision KWS with LEDs");

    // Pins for the built-in RGB LEDs on the Arduino NiclaV
    pinMode(LEDR, OUTPUT);
    pinMode(LEDG, OUTPUT);
    pinMode(LEDB, OUTPUT);

    // Ensure the LEDs are OFF by default.
    // Note: The RGB LEDs on the Arduino Nicla Vision
    // are ON when the pin is LOW, OFF when HIGH.
    digitalWrite(LEDR, HIGH);
    digitalWrite(LEDG, HIGH);
    digitalWrite(LEDB, HIGH);
...
}

Create two functions. The first, turn_off_leds(), turns off all the RGB LEDs:

/**
 * @brief      turn_off_leds function - turn off all RGB LEDs
 */
void turn_off_leds(){
    digitalWrite(LEDR, HIGH);
    digitalWrite(LEDG, HIGH);
    digitalWrite(LEDB, HIGH);
}

The second, turn_on_leds(), turns on the RGB LED that corresponds to the most probable result of the classifier:

/**
 * @brief      turn_on_leds function used to turn on the RGB LEDs
 * @param[in]  pred_index
 *             no:       [0] ==> Red ON
 *             noise:    [1] ==> ALL OFF
 *             unknown:  [2] ==> Blue ON
 *             yes:      [3] ==> Green ON
 */
void turn_on_leds(int pred_index) {
  switch (pred_index)
  {
    case 0:
      turn_off_leds();
      digitalWrite(LEDR, LOW);
      break;

    case 1:
      turn_off_leds();
      break;

    case 2:
      turn_off_leds();
      digitalWrite(LEDB, LOW);
      break;

    case 3:
      turn_off_leds();
      digitalWrite(LEDG, LOW);
      break;
  }
}

And change the // print the predictions portion of the code in loop():

...

    if (++print_results >= (EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW)) {
        // print the predictions
        ei_printf("Predictions ");
        ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
            result.timing.dsp, result.timing.classification, result.timing.anomaly);
        ei_printf(": \n");

        int pred_index = 0;     // Initialize pred_index
        float pred_value = 0;   // Initialize pred_value

        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            if (result.classification[ix].value > pred_value){
                pred_index = ix;
                pred_value = result.classification[ix].value;
            }
            // ei_printf("    %s: ", result.classification[ix].label);
            // ei_printf_float(result.classification[ix].value);
            // ei_printf("\n");
        }
        ei_printf("  PREDICTION: ==> %s with probability %.2f\n",
                  result.classification[pred_index].label, pred_value);
        turn_on_leds(pred_index);

#if EI_CLASSIFIER_HAS_ANOMALY == 1
        ei_printf("    anomaly score: ");
        ei_printf_float(result.anomaly);
        ei_printf("\n");
#endif

        print_results = 0;
    }
}

...

You can find the complete code on the project’s GitHub.


Upload the sketch to your board and test some real inferences. The idea is that the Green LED will be ON whenever the keyword YES is detected, the Red LED will light for a NO, and any other word will turn on the Blue LED. All the LEDs should be off if silence or background noise is present. Remember that the same procedure can "trigger" an external device to perform a desired action instead of turning on an LED, as we saw in the introduction.


Conclusion


You will find the notebooks and code used in this hands-on tutorial in the GitHub repository.


Before we finish, consider that Sound Classification is more than just voice. For example, you can develop TinyML projects around sound in several areas, such as:

  • Security (Broken Glass detection, Gunshot)
  • Industry (Anomaly Detection)
  • Medical (Snore, Cough, Pulmonary diseases)
  • Nature (Beehive control, insect sound, poaching mitigation)
\ No newline at end of file
diff --git a/contents/ml_systems/ml_systems.html b/contents/ml_systems/ml_systems.html
deleted file mode 100644
index 56d08c9d..00000000
--- a/contents/ml_systems/ml_systems.html
+++ /dev/null
@@ -1,1431 +0,0 @@

2  ML Systems


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Illustration in a rectangular format depicting the merger of embedded systems with Embedded AI. The left half of the image portrays traditional embedded systems, including microcontrollers and processors, detailed and precise. The right half showcases the world of artificial intelligence, with abstract representations of machine learning models, neurons, and data flow. The two halves are distinctly separated, emphasizing the individual significance of embedded tech and AI, but they come together in harmony at the center.

Machine learning (ML) systems, built on the foundation of computing systems, hold the potential to transform our world. These systems, with their specialized roles and real-time computational capabilities, represent a critical junction where data and computation meet on a micro-scale. They are specifically tailored to optimize performance, energy usage, and spatial efficiency—key factors essential for the successful implementation of ML systems.


As this chapter progresses, we will explore embedded systems’ complex and fascinating world. We’ll gain insights into their structural design and operational features and understand their pivotal role in powering ML applications. Starting with the basics of microcontroller units, we will examine the interfaces and peripherals that enhance their functionalities. This chapter is designed to be a comprehensive guide elucidating the nuanced aspects of embedded systems within the ML systems framework.

Learning Objectives
  • Acquire a comprehensive understanding of ML systems, including their definitions, architecture, and programming languages, focusing on the evolution and significance of TinyML.
  • Explore the design and operational principles of ML systems, including the use of a microcontroller rather than a microprocessor, memory management, System-on-chip (SoC) integration, and the development and deployment of machine learning models.
  • Examine the interfaces, power management, and real-time operating characteristics essential for efficient ML systems alongside energy efficiency, reliability, and security considerations.
  • Investigate the distinctions, benefits, challenges, and use cases for Cloud ML, Edge ML, and TinyML, emphasizing selecting the appropriate machine learning approach based on specific application needs and the evolving landscape of embedded systems in machine learning.

2.1 Machine Learning Systems


ML is rapidly evolving, with new paradigms reshaping how models are developed, trained, and deployed. One such paradigm is embedded machine learning, which is experiencing significant innovation driven by the proliferation of smart sensors, edge devices, and microcontrollers. Embedded machine learning refers to the integration of machine learning algorithms into the hardware of a device, enabling real-time data processing and analysis without relying on cloud connectivity. This chapter explores the landscape of embedded machine learning, covering the key approaches of Cloud ML, Edge ML, and TinyML (Figure 2.1).

Figure 2.1: Cloud vs. Edge vs. TinyML: The Spectrum of Distributed Intelligence. Credit: Massimo Banzi – Arduino.

We begin by outlining each embedded ML variant’s features or characteristics, benefits, challenges, and use cases. This provides context on where these technologies do well and where they face limitations. We then combine all three approaches into a comparative analysis, evaluating them across critical parameters like latency, privacy, computational demands, and more. This side-by-side perspective highlights the unique strengths and tradeoffs of selecting these strategies.


Next, we trace the evolution timeline of embedded systems and machine learning, from the origins of wireless sensor networks to the integration of ML algorithms into microcontrollers. This historical lens enriches our understanding of the rapid pace of advancement in this domain. Finally, practical hands-on exercises offer an opportunity to experiment first-hand with embedded computer vision applications.


By the end of this multipronged exploration of embedded ML, you will possess the conceptual and practical knowledge to determine the appropriate ML implementation for your specific use case constraints. The chapter aims to equip you with the contextual clarity and technical skills to navigate this quickly shifting landscape, empowering impactful innovations.


2.2 Cloud ML


2.2.1 Characteristics


Cloud ML is a specialized branch of the broader machine learning field within cloud computing environments. It offers a virtual platform for developing, training, and deploying machine learning models, providing flexibility and scalability.


At its foundation, Cloud ML utilizes a powerful blend of high-capacity servers, expansive storage solutions, and robust networking architectures, all located in data centers worldwide (Figure 2.2). This setup centralizes computational resources, simplifying the management and scaling of machine learning projects.


The cloud environment excels in data processing and model training and is designed to manage large data volumes and complex computations. Models crafted in Cloud ML can leverage vast amounts of data, processed and analyzed centrally, thereby enhancing the model’s learning and predictive performance.

Figure 2.2: Cloud TPU data center at Google. Credit: Google.

2.2.2 Benefits


Cloud ML is synonymous with immense computational power, adept at handling complex algorithms and large datasets. This is particularly advantageous for machine learning models that demand significant computational resources, effectively circumventing the constraints of local setups.


A key advantage of Cloud ML is its dynamic scalability. As data volumes or computational needs grow, the infrastructure can adapt seamlessly, ensuring consistent performance.


Cloud ML platforms often offer a wide array of advanced tools and algorithms. Developers can utilize these resources to accelerate building, training, and deploying sophisticated models, fostering innovation.


2.2.3 Challenges


Despite its capabilities, Cloud ML can face latency issues, especially in applications that require real-time responses. The time taken to send data to centralized servers and back can introduce delays, a significant drawback in time-sensitive scenarios.


Centralizing data processing and storage can also create data privacy and security vulnerabilities. Data centers become attractive targets for cyber-attacks, requiring substantial investments in security measures to protect sensitive data.


Additionally, as data processing needs escalate, so do the costs of using cloud services. Organizations dealing with large data volumes may encounter rising costs, which could affect the long-term scalability and feasibility of their operations.


2.2.4 Example Use Cases


Cloud ML is important in powering virtual assistants like Siri and Alexa. These systems harness the cloud’s computational prowess to analyze and process voice inputs, delivering intelligent and personalized responses to users.


It also provides the foundation for advanced recommendation systems in platforms like Netflix and Amazon. These systems sift through extensive datasets to identify patterns and preferences, offering personalized content or product suggestions to boost user engagement.


In the financial realm, Cloud ML has created robust fraud detection systems. These systems scrutinize vast amounts of transactional data to flag potential fraudulent activities, enabling timely interventions and reducing financial risks.


In summary, it’s virtually impossible to navigate the internet today without encountering some form of Cloud ML, directly or indirectly. From the personalized ads on your social media feed to the predictive text features in email services, Cloud ML is deeply integrated into our online experiences. It powers smart algorithms that recommend products on e-commerce sites, fine-tunes search engines to deliver accurate results, and even automates the tagging and categorization of photos on platforms like Facebook.


Furthermore, Cloud ML bolsters user security through anomaly detection systems that monitor for unusual activities, potentially shielding users from cyber threats. It acts as the unseen powerhouse, continuously operating behind the scenes to refine, secure, and personalize our digital interactions, making the modern internet a more intuitive and user-friendly environment.


2.3 Edge ML


2.3.1 Characteristics


Definition of Edge ML


Edge Machine Learning (Edge ML) runs machine learning algorithms directly on endpoint devices or closer to where the data is generated rather than relying on centralized cloud servers. This approach aims to bring computation closer to the data source, reducing the need to send large volumes of data over networks, often resulting in lower latency and improved data privacy.


Decentralized Data Processing


In Edge ML, data processing happens in a decentralized fashion. Instead of sending data to remote servers, the data is processed locally on devices like smartphones, tablets, or IoT devices (Figure 2.3). This local processing allows devices to make quick decisions based on the data they collect without relying heavily on a central server's resources. This decentralization is particularly important in real-time applications where even a slight delay can have significant consequences.


Local Data Storage and Computation


Local data storage and computation are key features of Edge ML. This setup ensures that data can be stored and analyzed directly on the devices, thereby maintaining the privacy of the data and reducing the need for constant internet connectivity. Moreover, this often leads to more efficient computation, as data doesn’t have to travel long distances, and computations are performed with a more nuanced understanding of the local context, which can sometimes result in more insightful analyses.

Figure 2.3: Edge ML Examples. Credit: Edge Impulse.

2.3.2 Benefits


Reduced Latency


One of Edge ML’s main advantages is the significant latency reduction compared to Cloud ML. This reduced latency can be a critical benefit in situations where milliseconds count, such as in autonomous vehicles, where quick decision-making can mean the difference between safety and an accident.


Enhanced Data Privacy


Edge ML also offers improved data privacy, as data is primarily stored and processed locally. This minimizes the risk of data breaches that are more common in centralized data storage solutions. Sensitive information can be kept more secure, as it’s not sent over networks that could be intercepted.


Lower Bandwidth Usage


Operating closer to the data source means less data must be sent over networks, reducing bandwidth usage. This can result in cost savings and efficiency gains, especially in environments where bandwidth is limited or costly.


2.3.3 Challenges


Limited Computational Resources Compared to Cloud ML


However, Edge ML has its challenges. One of the main concerns is the limited computational resources compared to cloud-based solutions. Endpoint devices may have far less processing power or storage capacity than cloud servers, limiting the complexity of the machine learning models that can be deployed.


Complexity in Managing Edge Nodes


Managing a network of edge nodes can introduce complexity, especially regarding coordination, updates, and maintenance. Ensuring all nodes operate seamlessly and are up-to-date with the latest algorithms and security protocols can be a logistical challenge.


Security Concerns at the Edge Nodes


While Edge ML offers enhanced data privacy, edge nodes can sometimes be more vulnerable to physical and cyber-attacks. Developing robust security protocols that protect data at each node without compromising the system’s efficiency remains a significant challenge in deploying Edge ML solutions.


2.3.4 Example Use Cases


Edge ML has many applications, from autonomous vehicles and smart homes to industrial IoT. These examples were chosen to highlight scenarios where real-time data processing, reduced latency, and enhanced privacy are not just beneficial but often critical to the operation and success of these technologies. They demonstrate the pivotal role that Edge ML can play in driving advancements in various sectors, fostering innovation, and paving the way for more intelligent, responsive, and adaptive systems.


Autonomous Vehicles


Autonomous vehicles stand as a prime example of Edge ML’s potential. These vehicles rely heavily on real-time data processing to navigate and make decisions. Localized machine learning models assist in quickly analyzing data from various sensors to make immediate driving decisions, ensuring safety and smooth operation.


Smart Homes and Buildings


Edge ML plays a crucial role in efficiently managing various systems in smart homes and buildings, from lighting and heating to security. By processing data locally, these systems can operate more responsively and harmoniously with the occupants’ habits and preferences, creating a more comfortable living environment.


Industrial IoT


The Industrial Internet of Things (IoT) leverages Edge ML to monitor and control complex industrial processes. Here, machine learning models can analyze data from numerous sensors in real-time, enabling predictive maintenance, optimizing operations, and enhancing safety measures. This is revolutionizing industrial automation and efficiency.


The applicability of Edge ML is vast and not limited to these examples. Various other sectors, including healthcare, agriculture, and urban planning, are exploring and integrating Edge ML to develop innovative solutions responsive to real-world needs and challenges, heralding a new era of smart, interconnected systems.


2.4 TinyML


2.4.1 Characteristics


Definition of TinyML


TinyML sits at the crossroads of embedded systems and machine learning, representing a burgeoning field that brings smart algorithms directly to tiny microcontrollers and sensors. These microcontrollers operate under severe resource constraints, particularly regarding memory, storage, and computational power (see a TinyML kit example in Figure 2.4).


On-Device Machine Learning


In TinyML, the focus is on on-device machine learning. This means that machine learning models are deployed and trained on the device, eliminating the need for external servers or cloud infrastructures. This allows TinyML to enable intelligent decision-making right where the data is generated, making real-time insights and actions possible, even in settings where connectivity is limited or unavailable.


Low Power and Resource-Constrained Environments


TinyML excels in low-power and resource-constrained settings. These environments require highly optimized solutions that function within the available resources. TinyML meets this need through specialized algorithms and models designed to deliver decent performance while consuming minimal energy, thus ensuring extended operational periods, even in battery-powered devices.

Figure 2.4: Examples of TinyML device kits. Credit: Widening Access to Applied Machine Learning with TinyML.

Exercise 2.1 (TinyML with Arduino)  

-
-
- -
-
-

Get ready to bring machine learning to the smallest of devices! In the embedded machine learning world, TinyML is where resource constraints meet ingenuity. This Colab notebook will walk you through building a gesture recognition model designed on an Arduino board. You’ll learn how to train a small but effective neural network, optimize it for minimal memory usage, and deploy it to your microcontroller. If you’re excited about making everyday objects smarter, this is where it begins!



2.4.2 Benefits


Extremely Low Latency


One of the standout benefits of TinyML is its ability to offer ultra-low latency. Since computation occurs directly on the device, the time required to send data to external servers and receive a response is eliminated. This is crucial in applications requiring immediate decision-making, enabling quick responses to changing conditions.


High Data Security


TinyML inherently enhances data security. Because data processing and analysis happen on the device, the risk of data interception during transmission is virtually eliminated. This localized approach to data management ensures that sensitive information stays on the device, strengthening user data security.


Energy Efficiency


TinyML operates within an energy-efficient framework, a necessity given its resource-constrained environments. By employing lean algorithms and optimized computational methods, TinyML ensures that devices can execute complex tasks without rapidly depleting battery life, making it a sustainable option for long-term deployments.


2.4.3 Challenges


Limited Computational Capabilities


However, the shift to TinyML comes with its set of hurdles. The primary limitation is the devices’ constrained computational capabilities. The need to operate within such limits means that deployed models must be simplified, which could affect the accuracy and sophistication of the solutions.


Complex Development Cycle


TinyML also introduces a complicated development cycle. Crafting lightweight and effective models demands a deep understanding of machine learning principles and expertise in embedded systems. This complexity calls for a collaborative development approach, where multi-domain expertise is essential for success.


Model Optimization and Compression


A central challenge in TinyML is model optimization and compression. Creating machine learning models that can operate effectively within the limited memory and computational power of microcontrollers requires innovative approaches to model design. Developers often face the challenge of striking a delicate balance and optimizing models to maintain effectiveness while fitting within stringent resource constraints.


2.4.4 Example Use Cases


Wearable Devices


In wearables, TinyML opens the door to smarter, more responsive gadgets. From fitness trackers offering real-time workout feedback to smart glasses processing visual data on the fly, TinyML transforms how we engage with wearable tech, delivering personalized experiences directly from the device.


Predictive Maintenance


In industrial settings, TinyML plays a significant role in predictive maintenance. By deploying TinyML algorithms on sensors that monitor equipment health, companies can preemptively identify potential issues, reducing downtime and preventing costly breakdowns. On-site data analysis ensures quick responses, potentially stopping minor issues from becoming major problems.


Anomaly Detection


TinyML can be employed to create anomaly detection models that identify unusual data patterns. For instance, a smart factory could use TinyML to monitor industrial processes and spot anomalies, helping prevent accidents and improve product quality. Similarly, a security company could use TinyML to monitor network traffic for unusual patterns, aiding in detecting and preventing cyber-attacks. TinyML could monitor patient data for anomalies in healthcare, aiding early disease detection and better patient treatment.


Environmental Monitoring


In environmental monitoring, TinyML enables real-time data analysis from various field-deployed sensors. These could range from city air quality monitoring to wildlife tracking in protected areas. Through TinyML, data can be processed locally, allowing for quick responses to changing conditions and providing a nuanced understanding of environmental patterns, crucial for informed decision-making.


In summary, TinyML serves as a trailblazer in the evolution of machine learning, fostering innovation across various fields by bringing intelligence directly to the edge. Its potential to transform our interaction with technology and the world is immense, promising a future where devices are connected, intelligent, and capable of making real-time decisions and responses.


2.5 Comparison


Up to this point, we've explored each of the different ML variants individually. Now, let's bring them all together for a comprehensive view. Table 2.1 offers a comparative analysis of Cloud ML, Edge ML, and TinyML based on various features and aspects. This comparison aims to provide a clear perspective on the unique advantages and distinguishing factors, aiding in making informed decisions based on the specific needs and constraints of a given application or project.

Table 2.1: Comparison of feature aspects across Cloud ML, Edge ML, and TinyML.
Feature/AspectCloud MLEdge MLTinyML
Processing LocationCentralized servers (Data Centers)Local devices (closer to data sources)On-device (microcontrollers, embedded systems)
LatencyHigh (Depends on internet connectivity)Moderate (Reduced latency compared to Cloud ML)Low (Immediate processing without network delay)
Data PrivacyModerate (Data transmitted over networks)High (Data remains on local networks)Very High (Data processed on-device, not transmitted)
Computational PowerHigh (Utilizes powerful data center infrastructure)Moderate (Utilizes local device capabilities)Low (Limited to the power of the embedded system)
Energy ConsumptionHigh (Data centers consume significant energy)Moderate (Less than data centers, more than TinyML)Low (Highly energy-efficient, designed for low power)
ScalabilityHigh (Easy to scale with additional server resources)Moderate (Depends on local device capabilities)Low (Limited by the hardware resources of the device)
CostHigh (Recurring costs for server usage, maintenance)Variable (Depends on the complexity of local setup)Low (Primarily upfront costs for hardware components)
Connectivity DependenceHigh (Requires stable internet connectivity)Low (Can operate with intermittent connectivity)Very Low (Can operate without any network connectivity)
Real-time ProcessingModerate (Can be affected by network latency)High (Capable of real-time processing locally)Very High (Immediate processing with minimal latency)
Application ExamplesBig Data Analysis, Virtual AssistantsAutonomous Vehicles, Smart HomesWearables, Sensor Networks
Development ComplexityModerate to High (Requires knowledge in cloud computing)Moderate (Requires knowledge in local network setup)Moderate to High (Requires expertise in embedded systems)

2.6 Conclusion


In this chapter, we’ve offered a panoramic view of the evolving landscape of machine learning, covering cloud, edge, and tiny ML paradigms. Cloud-based machine learning leverages the immense computational resources of cloud platforms to enable powerful and accurate models but comes with limitations, including latency and privacy concerns. Edge ML mitigates these limitations by bringing inference directly to edge devices, offering lower latency and reduced connectivity needs. TinyML takes this further by miniaturizing ML models to run directly on highly resource-constrained devices, opening up a new category of intelligent applications.


Each approach has its tradeoffs, including model complexity, latency, privacy, and hardware costs. Over time, we anticipate a convergence of these embedded ML approaches, with cloud pre-training facilitating more sophisticated edge and tiny ML implementations. Advances like federated learning and on-device learning will enable embedded devices to refine their models by learning from real-world data.


The embedded ML landscape is rapidly evolving and poised to enable intelligent applications across a broad spectrum of devices and use cases. This chapter serves as a snapshot of the current state of embedded ML. As algorithms, hardware, and connectivity continue to improve, we can expect embedded devices of all sizes to become increasingly capable, unlocking transformative new applications for artificial intelligence.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.


Coming soon.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

\ No newline at end of file
diff --git a/contents/motion_classify_ad/motion_classify_ad.html b/contents/motion_classify_ad/motion_classify_ad.html
deleted file mode 100644
index 9bad9093..00000000
--- a/contents/motion_classify_ad/motion_classify_ad.html
+++ /dev/null
@@ -1,1622 +0,0 @@

Motion Classification and Anomaly Detection


DALL·E 3 Prompt: 1950s style cartoon illustration depicting a movement research room. In the center of the room, there's a simulated container used for transporting goods on trucks, boats, and forklifts. The container is detailed with rivets and markings typical of industrial cargo boxes. Around the container, the room is filled with vintage equipment, including an oscilloscope, various sensor arrays, and large paper rolls of recorded data. The walls are adorned with educational posters about transportation safety and logistics. The overall ambiance of the room is nostalgic and scientific, with a hint of industrial flair.

Introduction


Transportation is the backbone of global commerce. Millions of containers are transported daily via various means, such as ships, trucks, and trains, to destinations worldwide. Ensuring these containers’ safe and efficient transit is a monumental task that requires leveraging modern technology, and TinyML is undoubtedly one of them.


In this hands-on tutorial, we will work to solve real-world problems related to transportation. We will develop a Motion Classification and Anomaly Detection system using the Arduino Nicla Vision board, the Arduino IDE, and the Edge Impulse Studio. This project will help us understand how containers experience different forces and motions during various phases of transportation, such as terrestrial and maritime transit, vertical movement via forklifts, and stationary periods in warehouses.

-
-
-
- -
-
-Learning Objectives -
-
-
-
    -
  • Setting up the Arduino Nicla Vision Board
  • -
  • Data Collection and Preprocessing
  • -
  • Building the Motion Classification Model
  • -
  • Implementing Anomaly Detection
  • -
  • Real-world Testing and Analysis
  • -
-
-
-

By the end of this tutorial, you’ll have a working prototype that can classify different types of motion and detect anomalies during the transportation of containers. This knowledge can be a stepping stone to more advanced projects in the burgeoning field of TinyML involving vibration.

-
-
-

IMU Installation and Testing

-

For this project, we will use an accelerometer. As discussed in the Hands-On Tutorial, Setup Nicla Vision, the Nicla Vision Board has an onboard 6-axis IMU: 3D gyroscope and 3D accelerometer, the LSM6DSOX. Let’s verify if the LSM6DSOX IMU library is installed. If not, install it.

-
-
-

-
-
-

Next, go to Examples > Arduino_LSM6DSOX > SimpleAccelerometer and run the accelerometer test. You can check if it works by opening the IDE Serial Monitor or Plotter. The values are in g (earth gravity), with a default range of +/- 4g:

-
-
-

-
-
-
-

Defining the Sampling Frequency

-

Choosing an appropriate sampling frequency is crucial for capturing the motion characteristics you’re interested in studying. The Nyquist-Shannon sampling theorem states that the sampling rate should be at least twice the highest frequency component in the signal to reconstruct it properly. In the context of motion classification and anomaly detection for transportation, the choice of sampling frequency would depend on several factors:

-
    -
  1. Nature of the Motion: Different types of transportation (terrestrial, maritime, etc.) may involve different ranges of motion frequencies. Faster movements may require higher sampling frequencies.

  2. -
  3. Hardware Limitations: The Arduino Nicla Vision board and any associated sensors may have limitations on how fast they can sample data.

  4. -
  5. Computational Resources: Higher sampling rates will generate more data, which might be computationally intensive, especially critical in a TinyML environment.

  6. -
  7. Battery Life: A higher sampling rate will consume more power. If the system is battery-operated, this is an important consideration.

  8. -
  9. Data Storage: More frequent sampling will require more storage space, another crucial consideration for embedded systems with limited memory.

  10. -
-

In many human activity recognition tasks, sampling rates of around 50 Hz to 100 Hz are commonly used. Given that we are simulating transportation scenarios, which are generally not high-frequency events, a sampling rate in that range (50-100 Hz) might be a reasonable starting point.
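This reasoning can be sanity-checked with a few lines of Python. The sketch below is illustrative: the 10 Hz "highest frequency of interest" and the 2.5× safety margin are assumptions, not values taken from the project.

```python
def min_sampling_rate(max_signal_hz, margin=2.5):
    """Sampling rate with a safety margin above the Nyquist minimum
    (2x the highest frequency component we want to capture)."""
    return 2 * max_signal_hz * margin

# Assuming transport-induced motion stays below ~10 Hz (an illustrative
# figure), a 2.5x margin lands us at the 50 Hz rate used in this tutorial.
rate = min_sampling_rate(10)  # 50.0 Hz
```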

-

Let’s define a sketch that will allow us to capture our data with a defined sampling frequency (for example, 50Hz):

-
/*
- * Based on Edge Impulse Data Forwarder Example (Arduino)
-  - https://docs.edgeimpulse.com/docs/cli-data-forwarder
- * Developed by M.Rovai @11May23
- */
-
-/* Include ----------------------------------------------------------------- */
-#include <Arduino_LSM6DSOX.h>
-
-/* Constant defines -------------------------------------------------------- */
-#define CONVERT_G_TO_MS2 9.80665f
-#define FREQUENCY_HZ        50
-#define INTERVAL_MS         (1000 / (FREQUENCY_HZ + 1))
-
-static unsigned long last_interval_ms = 0;
-float x, y, z;
-
-void setup() {
-  Serial.begin(9600);
-  while (!Serial);
-
-  if (!IMU.begin()) {
-    Serial.println("Failed to initialize IMU!");
-    while (1);
-  }
-}
-
-void loop() {
-  if (millis() > last_interval_ms + INTERVAL_MS) {
-    last_interval_ms = millis();
-    
-    if (IMU.accelerationAvailable()) {
-      // Read raw acceleration measurements from the device
-      IMU.readAcceleration(x, y, z);
-
-      // converting to m/s2
-      float ax_m_s2 = x * CONVERT_G_TO_MS2;
-      float ay_m_s2 = y * CONVERT_G_TO_MS2;
-      float az_m_s2 = z * CONVERT_G_TO_MS2;
-
-      Serial.print(ax_m_s2); 
-      Serial.print("\t");
-      Serial.print(ay_m_s2); 
-      Serial.print("\t");
-      Serial.println(az_m_s2); 
-    }
-  }
-}
-

Uploading the sketch and inspecting the Serial Monitor, we can see that we are capturing 50 samples per second.

-
-
-

-
-
-
-

Note that with the Nicla board resting on a table (with the camera facing down), the z-axis measures around 9.8m/s\(^2\), the expected earth acceleration.

-
-
-
-
-

The Case Study: Simulated Container Transportation

-

We will simulate container (or, more precisely, package) transportation through different scenarios to make this tutorial more relatable and practical. Using the built-in accelerometer of the Arduino Nicla Vision board, we’ll capture motion data by manually simulating the conditions of:

-
    -
  1. Terrestrial Transportation (by road or train)
  2. -
  3. Maritime-associated Transportation
  4. -
  5. Vertical Movement via Fork-Lift
  6. -
  7. Stationary (Idle) period in a Warehouse
  8. -
-
-
-

-
-
-

From the above images, we can define for our simulation that primarily horizontal movements (x- or y-axis) should be associated with the “Terrestrial” class, vertical movements (z-axis) with the “Lift” class, no activity with the “Idle” class, and movement on all three axes with the “Maritime” class.

-
-
-

-
-
-
-
-

Data Collection

-

For data collection, we have several options. In a real case, we could have our device connected directly to one container, with the data collected in a file (for example, .CSV) and stored on an SD card (via SPI connection) or in an offline repository on your computer. Data can also be sent remotely to a nearby repository, such as a mobile phone, using Bluetooth (as done in this project: Sensor DataLogger). Once your dataset is collected and stored as a .CSV file, it can be uploaded to the Studio using the CSV Wizard tool.
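If you go the offline route, the .CSV file only needs a timestamp column plus one column per axis. Here is a minimal sketch of that layout using Python's csv module; the column names and sample values are illustrative assumptions, so match whatever the CSV Wizard expects in your project:

```python
import csv

# Illustrative rows: timestamp in ms, then accX, accY, accZ in m/s^2.
samples = [
    (0,  0.12, -0.03, 9.81),
    (20, 0.10, -0.01, 9.79),
]

with open("motion_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "accX", "accY", "accZ"])
    writer.writerows(samples)
```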

-
-

In this video, you can learn alternative ways to send data to the Edge Impulse Studio.

-
-
-

Connecting the device to Edge Impulse

-

We will connect the Nicla directly to the Edge Impulse Studio, which will also be used for data pre-processing, model training, testing, and deployment. For that, you have two options:

-
    -
  1. Download the latest firmware and connect it directly to the Data Collection section.
  2. -
  3. Use the CLI Data Forwarder tool to capture sensor data from the sensor and send it to the Studio.
  4. -
-

Option 1 is more straightforward, as we saw in the Setup Nicla Vision hands-on, but option 2 will give you more flexibility regarding capturing your data, such as defining the sampling frequency. Let’s use the latter.

-

Please create a new project on the Edge Impulse Studio (EIS) and connect the Nicla to it, following these steps:

-
    -
  1. Install the Edge Impulse CLI and the Node.js into your computer.
  2. -
  3. Upload a sketch for data capture (the one discussed previously in this tutorial).
  4. -
  5. Use the CLI Data Forwarder to capture data from the Nicla’s accelerometer and send it to the Studio, as shown in this diagram:
  6. -
-
-
-

-
-
-

Start the CLI Data Forwarder on your terminal, entering (if it is the first time) the following command:

-
$ edge-impulse-data-forwarder --clean
-

Next, enter your EI credentials and choose your project, variables (for example, accX, accY, and accZ), and device name (for example, NiclaV):

-
-
-

-
-
-

Go to the Devices section on your EI Project and verify if the device is connected (the dot should be green):

-
-
-

-
-
-
-

You can clone the project developed for this hands-on: NICLA Vision Movement Classification.

-
-
-
-

Data Collection

-

On the Data Acquisition section, you should see that your board [NiclaV] is connected. The sensor is available: [sensor with 3 axes (accX, accY, accZ)] with a sampling frequency of [50Hz]. The Studio suggests a sample length of [10000] ms (10 s). The last thing left is defining the sample label. Let’s start with [terrestrial]:

-
-
-

-
-
-

Terrestrial (pallets in a truck or train), moving horizontally. Press [Start Sample] and move your device horizontally, keeping one direction over your table. After 10 s, your data will be uploaded to the Studio. Here is how the sample was collected:

-
-
-

-
-
-

As expected, the movement was captured mainly in the Y-axis (green). In blue, we see the Z-axis, at around -10 m/s\(^2\) (the Nicla has the camera facing up).

-

As discussed before, we should capture data from all four Transportation Classes. So, imagine that you have a container with a built-in accelerometer facing the following situations:

-

Maritime (pallets on boats in a rough ocean). The movement is captured on all three axes:

-
-
-

-
-
-

Lift (pallets being handled vertically by a forklift). Movement is captured only in the Z-axis:

-
-
-

-
-
-

Idle (pallets in a warehouse). No movement is detected by the accelerometer:

-
-
-

-
-
-

You can capture, for example, 2 minutes (twelve samples of 10 seconds) for each of the four classes (a total of 8 minutes of data). Using the three-dots menu after each sample, select 2 samples per class, reserving them for the Test set. Alternatively, you can use the automatic Train/Test Split tool on the Danger Zone of the Dashboard tab. Below, you can see the resulting dataset:

-
-
-

-
-
-

Once you have captured your dataset, you can explore it in more detail using the Data Explorer, a visual tool to find outliers or mislabeled data (helping to correct them). The data explorer first tries to extract meaningful features from your data (by applying signal processing and neural network embeddings) and then uses a dimensionality reduction algorithm such as PCA or t-SNE to map these features to a 2D space. This gives you a one-look overview of your complete dataset.

-
-
-

-
-
-

In our case, the dataset seems OK (good separation). However, the PCA shows we could have issues between maritime (green) and lift (orange). This is expected since, on a boat, the movement can sometimes be purely “vertical”.

-
-
-
-

Impulse Design

-

The next step is the definition of our Impulse, which takes the raw data and uses signal processing to extract features, passing them as the input tensor of a learning block to classify new data. Go to Impulse Design and Create Impulse. The Studio will suggest the basic design. Let’s also add a second Learning Block for Anomaly Detection.

-
-
-

-
-
-

This second block uses a K-means model. If we imagine our known classes as clusters, any sample that does not fit into one of them could be an outlier, an anomaly (such as a container rolling off a ship into the ocean or falling from a forklift).
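The scoring idea behind this block can be sketched in a few lines: a new window is compared against the learned cluster centroids, and the distance to the nearest one becomes the anomaly score. The centroids and threshold below are toy values for illustration, not the ones the Studio actually learns.

```python
import math

# Toy 2-D centroids standing in for the clusters K-means learned from the
# known classes (the real model works on the selected spectral features).
CENTROIDS = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
THRESHOLD = 1.5  # assumed anomaly threshold

def anomaly_score(sample):
    """Distance from the sample to its nearest cluster centroid."""
    return min(math.dist(sample, c) for c in CENTROIDS)

def is_anomaly(sample):
    return anomaly_score(sample) > THRESHOLD
```

A sample close to any known cluster scores near zero, while a movement unlike all trained classes scores high and gets flagged.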

-
-
-

-
-
-

The sampling frequency should be automatically captured; if not, enter it: [50] Hz. The Studio suggests a Window Size of 2 seconds ([2000] ms) with a sliding window of [200] ms. What we are defining in this step is that we will pre-process the captured data (time-series data), creating a tabular dataset (features) that will be the input for a Neural Network Classifier (DNN) and an Anomaly Detection model (K-means), as shown below:

-
-
-

-
-
-

Let’s dig into those steps and parameters to understand better what we are doing here.

-
-

Data Pre-Processing Overview

-

Data pre-processing is extracting features from the dataset captured with the accelerometer, which involves processing and analyzing the raw data. Accelerometers measure the acceleration of an object along one or more axes (typically three, denoted as X, Y, and Z). These measurements can be used to understand various aspects of the object’s motion, such as movement patterns and vibrations.

-

Raw accelerometer data can be noisy and contain errors or irrelevant information. Preprocessing steps, such as filtering and normalization, can clean and standardize the data, making it more suitable for feature extraction. In our case, we should divide the data into smaller segments or windows. This can help focus on specific events or activities within the dataset, making feature extraction more manageable and meaningful. The choice of window size and overlap (window increase) depends on the application and the frequency of the events of interest. As a rule of thumb, each window should capture a couple of “cycles of data”.

-
-

With a sampling rate (SR) of 50 Hz and a window size of 2 seconds, we will get 100 samples per axis, or 300 in total (3 axes × 2 seconds × 50 samples per second). We will slide this window every 200 ms, creating a larger dataset where each instance has 300 raw features.
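These numbers are easy to double-check. Assuming a 10-second recording (the length suggested by the Studio), sliding a 2-second window every 200 ms yields 41 overlapping windows, each carrying 300 raw values:

```python
SR_HZ = 50          # sampling rate
WINDOW_MS = 2_000   # window size
SLIDE_MS = 200      # window increase (slide)
SAMPLE_MS = 10_000  # one 10-second recording
AXES = 3

raw_per_window = AXES * SR_HZ * WINDOW_MS // 1000             # 300 raw features
windows_per_sample = (SAMPLE_MS - WINDOW_MS) // SLIDE_MS + 1  # 41 windows
```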

-
-
-
-

-
-
-

Once the data is preprocessed and segmented, you can extract features that describe the motion’s characteristics. Some typical features extracted from accelerometer data include:

-
    -
  • Time-domain features describe the data’s statistical properties within each segment, such as mean, median, standard deviation, skewness, kurtosis, and zero-crossing rate.
  • -
  • Frequency-domain features are obtained by transforming the data into the frequency domain using techniques like the Fast Fourier Transform (FFT). Some typical frequency-domain features include the power spectrum, spectral energy, dominant frequencies (amplitude and frequency), and spectral entropy.
  • -
  • Time-frequency domain features combine the time and frequency domain information, such as the Short-Time Fourier Transform (STFT) or the Discrete Wavelet Transform (DWT). They can provide a more detailed understanding of how the signal’s frequency content changes over time.
  • -
-

In many cases, the number of extracted features can be large, which may lead to overfitting or increased computational complexity. Feature selection techniques, such as mutual information, correlation-based methods, or principal component analysis (PCA), can help identify the most relevant features for a given application and reduce the dimensionality of the dataset. The Studio can help with such feature importance calculations.

-
-
-

EI Studio Spectral Features

-

Data preprocessing is a challenging area for embedded machine learning; still, Edge Impulse helps overcome this with its digital signal processing (DSP) preprocessing step and, more specifically, the Spectral Features Block.

-

On the Studio, the collected raw dataset will be the input of a Spectral Analysis block, which is excellent for analyzing repetitive motion, such as data from accelerometers. This block will perform a DSP (Digital Signal Processing), extracting features such as FFT or Wavelets.

-

For our project, since the time signal is continuous, we should use FFT with, for example, a length of [32].

-

The per axis/channel Time Domain Statistical features are:

-
    -
  • RMS: 1 feature
  • -
  • Skewness: 1 feature
  • -
  • Kurtosis: 1 feature
  • -
-

The per axis/channel Frequency Domain Spectral features are:

-
    -
  • Spectral Power: 16 features (FFT Length/2)
  • -
  • Skewness: 1 feature
  • -
  • Kurtosis: 1 feature
  • -
-

So, for an FFT length of 32 points, the resulting output of the Spectral Analysis Block will be 21 features per axis (a total of 63 features).
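The feature bookkeeping, and the RMS time-domain feature itself, can be verified with plain Python (the acceleration values in the RMS example are made up):

```python
import math

FFT_LENGTH = 32
AXES = 3

time_domain = 3                       # RMS, Skewness, Kurtosis
freq_domain = FFT_LENGTH // 2 + 2     # 16 spectral power bins + skew + kurtosis
per_axis = time_domain + freq_domain  # 21 features per axis
total = per_axis * AXES               # 63 features in total

def rms(values):
    """Root mean square, one of the per-axis time-domain features."""
    return math.sqrt(sum(v * v for v in values) / len(values))
```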

-
-

You can learn more about how each feature is calculated by downloading the notebook Edge Impulse - Spectral Features Block Analysis (TinyML under the hood: Spectral Analysis) or opening it directly on Google Colab.

-
-
-
-

Generating features

-

Once we understand what the pre-processing does, it is time to finish the job. So, let’s take the raw data (time-series type) and convert it to tabular data. For that, go to the Spectral Features section on the Parameters tab, define the main parameters as discussed in the previous section ([FFT] with [32] points), and select [Save Parameters]:

-
-
-

-
-
-

At the top menu, select the Generate Features option and press the Generate Features button. Each 2-second data window will be converted into one data point with 63 features.

-
-

The Feature Explorer will show those data in 2D using UMAP. Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualization similarly to t-SNE but is also applicable for general non-linear dimension reduction.

-
-

The visualization makes it possible to verify that after the feature generation, the classes present keep their excellent separation, which indicates that the classifier should work well. Optionally, you can analyze how important each one of the features is for one class compared with others.

-
-
-

-
-
-
-
-
-

Models Training

-

Our classifier will be a Dense Neural Network (DNN) with 63 neurons on its input layer, two hidden layers with 20 and 10 neurons, and an output layer with four neurons (one per class), as shown here:

-
-
-

-
-
-

As hyperparameters, we will use a Learning Rate of [0.005], a Batch size of [32], and [20]% of the data for validation, training for [30] epochs. After training, we can see that the accuracy is 98.5%. The memory and latency costs are very low.
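A quick way to see why the memory cost is so low is to count the classifier's parameters (weights plus biases) for the 63-20-10-4 topology, assuming plain fully connected layers:

```python
LAYERS = [63, 20, 10, 4]  # input, two hidden layers, output

def dense_params(layers):
    """Weights + biases of a fully connected (dense) network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layers, layers[1:]))

params = dense_params(LAYERS)  # 1534 parameters
```

About 1,500 parameters is roughly 6 KB of weights as float32 (or around 1.5 KB quantized to int8), a tiny fraction of the NiclaV's 1 MB of SRAM.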

-
-
-

-
-
-

For Anomaly Detection, we will choose the suggested features, which are precisely the most important ones found during Feature Extraction, plus the accZ RMS. The number of clusters will be [32], as suggested by the Studio:

-
-
-

-
-
-
-
-

Testing

-

We can verify how our model behaves with unknown data using the 20% of the data left aside during the data capture phase. The result was almost 95%, which is good. You can always work to improve the results, for example, by investigating what went wrong with the misclassified samples. If a misclassification reflects a genuinely new situation, you can add the sample to the training dataset and retrain the model.

-

The default minimum threshold for a result to be considered valid is [0.6] for classification and [0.3] for anomaly. Since we have four classes (whose output probabilities should sum to 1.0), you can also set a lower threshold for a class to be considered valid (for example, 0.4). You can Set confidence thresholds on the three-dots menu, beside the Classify all button.
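The decision logic around these thresholds can be sketched as follows (0.6 and 0.3 are the defaults quoted above; the class probabilities passed in are made up):

```python
CLASS_THRESHOLD = 0.6    # minimum winning-class probability
ANOMALY_THRESHOLD = 0.3  # minimum score to flag an anomaly

def decide(probs, anomaly_score):
    """probs: dict mapping class name -> probability (sums to ~1.0)."""
    if anomaly_score >= ANOMALY_THRESHOLD:
        return "anomaly"
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return label if p >= CLASS_THRESHOLD else "uncertain"
```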

-
-
-

-
-
-

You can also perform Live Classification with your device (which should still be connected to the Studio).

-
-

Be aware that here, you will capture real data with your device and upload it to the Studio, where inference will be performed using the trained model (but note that the model is NOT on your device).

-
-
-
-

Deploy

-

It is time to deploy the preprocessing block and the trained model to the Nicla. The Studio will package all the needed libraries, preprocessing functions, and trained models, downloading them to your computer. You should select the option Arduino Library, and at the bottom, you can choose Quantized (Int8) or Unoptimized (float32) and [Build]. A Zip file will be created and downloaded to your computer.

-
-
-

-
-
-

On your Arduino IDE, go to the Sketch tab, select Add .ZIP Library, and choose the .zip file downloaded by the Studio. A message will appear in the IDE Terminal: Library installed.

-
-

Inference

-

Now, it is time for a real test. We will make inferences wholly disconnected from the Studio. Let’s change one of the code examples created when you deploy the Arduino Library.

-

In your Arduino IDE, go to the File > Examples tab, look for your project, and under its examples, select Nicla_vision_fusion:

-
-
-

-
-
-

Note that the code created by Edge Impulse considers a sensor fusion approach where the IMU (Accelerometer and Gyroscope) and the ToF are used. At the beginning of the code, you have the libraries related to our project, IMU and ToF:

-
/* Includes ---------------------------------------------------------------- */
-#include <NICLA_Vision_Movement_Classification_inferencing.h> 
-#include <Arduino_LSM6DSOX.h> //IMU
-#include "VL53L1X.h" // ToF
-
-

You can keep the code this way for testing because the trained model will use only features pre-processed from the accelerometer. However, for a real project, you should write your code with only the libraries you actually need.

-
-

And that is it!

-

You can now upload the code to your device and proceed with the inferences. Press the Nicla [RESET] button twice to put it in boot mode (disconnect it from the Studio if it is still connected), and upload the sketch to your board.

-

Now you should try different movements with your board (similar to those done during data capture), observing the inference result of each class on the Serial Monitor:

-
    -
  • Idle and lift classes:
  • -
-
-
-

-
-
-
    -
  • maritime and terrestrial:
  • -
-
-
-

-
-
-

Note that in all situations above, the value of the anomaly score was smaller than 0.0. Try a new movement that was not part of the original dataset, for example, “rolling” the Nicla with the camera facing upside-down, simulating a container falling from a boat or even a boat accident:

-
    -
  • anomaly detection:
  • -
-
-
-

-
-
-

In this case, the anomaly score is much higher, over 1.00.

-
-
-

Post-processing

-

Now that we know the model is working since it detects the movements, we suggest that you modify the code to see the result with the NiclaV completely offline (disconnected from the PC and powered by a battery, a power bank, or an independent 5V power supply).

-

The idea is to do the same as with the KWS project: if one specific movement is detected, a specific LED could be lit. For example, if terrestrial is detected, the Green LED will light; if maritime, the Red LED; if lift, the Blue LED; and if no movement is detected (idle), the LEDs will be OFF. You can also add a condition for when an anomaly is detected; in this case, for example, white can be used (all LEDs lit simultaneously).
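That post-processing logic boils down to a small mapping from the inference result to an RGB state. Here it is sketched in Python for clarity; on the NiclaV, you would translate it into digitalWrite() calls on the RGB LED pins:

```python
# (red, green, blue) on/off state per detected class; colors follow the
# suggestion above, with white (all LEDs on) reserved for anomalies.
LED_MAP = {
    "terrestrial": (0, 1, 0),  # green
    "maritime":    (1, 0, 0),  # red
    "lift":        (0, 0, 1),  # blue
    "idle":        (0, 0, 0),  # all off
    "anomaly":     (1, 1, 1),  # white
}

def led_state(label):
    """Return the RGB state for a detected class (off for unknowns)."""
    return LED_MAP.get(label, (0, 0, 0))
```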

-
-
-
-

Conclusion

-
-

The notebooks and code used in this hands-on tutorial can be found in the GitHub repository.

-
-

Before we finish, consider that Movement Classification and Object Detection can be utilized in many applications across various domains. Here are some of the potential applications:

-
-

Case Applications

-
-

Industrial and Manufacturing

-
    -
  • Predictive Maintenance: Detecting anomalies in machinery motion to predict failures before they occur.
  • -
  • Quality Control: Monitoring the motion of assembly lines or robotic arms for precision assessment and deviation detection from the standard motion pattern.
  • -
  • Warehouse Logistics: Managing and tracking the movement of goods with automated systems that classify different types of motion and detect anomalies in handling.
  • -
-
-
-

Healthcare

-
    -
  • Patient Monitoring: Detecting falls or abnormal movements in the elderly or those with mobility issues.
  • -
  • Rehabilitation: Monitoring the progress of patients recovering from injuries by classifying motion patterns during physical therapy sessions.
  • -
  • Activity Recognition: Classifying types of physical activity for fitness applications or patient monitoring.
  • -
-
-
-

Consumer Electronics

-
    -
  • Gesture Control: Interpreting specific motions to control devices, such as turning on lights with a hand wave.
  • -
  • Gaming: Enhancing gaming experiences with motion-controlled inputs.
  • -
-
-
-

Transportation and Logistics

-
    -
  • Vehicle Telematics: Monitoring vehicle motion for unusual behavior such as hard braking, sharp turns, or accidents.
  • -
  • Cargo Monitoring: Ensuring the integrity of goods during transport by detecting unusual movements that could indicate tampering or mishandling.
  • -
-
-
-

Smart Cities and Infrastructure

-
    -
  • Structural Health Monitoring: Detecting vibrations or movements within structures that could indicate potential failures or maintenance needs.
  • -
  • Traffic Management: Analyzing the flow of pedestrians or vehicles to improve urban mobility and safety.
  • -
-
-
-

Security and Surveillance

-
    -
  • Intruder Detection: Detecting motion patterns typical of unauthorized access or other security breaches.
  • -
  • Wildlife Monitoring: Detecting poachers or abnormal animal movements in protected areas.
  • -
-
-
-

Agriculture

-
    -
  • Equipment Monitoring: Tracking the performance and usage of agricultural machinery.
  • -
  • Animal Behavior Analysis: Monitoring livestock movements to detect behaviors indicating health issues or stress.
  • -
-
-
-

Environmental Monitoring

-
    -
  • Seismic Activity: Detecting irregular motion patterns that precede earthquakes or other geologically relevant events.
  • -
  • Oceanography: Studying wave patterns or marine movements for research and safety purposes.
  • -
-
-
-
-

Nicla 3D case

-

For real applications, as some described before, we can add a case to our device, and Eoin Jordan, from Edge Impulse, developed a great wearable and machine health case for the Nicla range of boards. It works with a 10mm magnet, 2M screws, and a 16mm strap for human and machine health use case scenarios. Here is the link: Arduino Nicla Voice and Vision Wearable Case.

-
-
-

-
-
-

The applications for motion classification and anomaly detection are extensive, and the Arduino Nicla Vision is well-suited for scenarios where low power consumption and edge processing are advantageous. Its small form factor and efficiency in processing make it an ideal choice for deploying portable and remote applications where real-time processing is crucial and connectivity may be limited.

- - -
-
- -
- - -
- - - - - - \ No newline at end of file diff --git a/contents/niclav_sys/niclav_sys.html b/contents/niclav_sys/niclav_sys.html deleted file mode 100644 index 3635cf54..00000000 --- a/contents/niclav_sys/niclav_sys.html +++ /dev/null @@ -1,1441 +0,0 @@ - - - - - - - - - -Machine Learning Systems - Setup Nicla Vision - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - -
- -
- - -
- - - -
- -
-
-

Setup Nicla Vision

-
- - - -
- - - - -
- - - -
- - -
-
-

-
DALL·E 3 Prompt: Illustration reminiscent of a 1950s cartoon where the Arduino NICLA VISION board, equipped with a variety of sensors including a camera, is the focal point on an old-fashioned desk. In the background, a computer screen with rounded edges displays the Arduino IDE. The code seen is related to LED configurations and machine learning voice command detection. Outputs on the Serial Monitor explicitly display the words ‘yes’ and ‘no’.
-
-
-
-

Introduction

-

The Arduino Nicla Vision (sometimes called NiclaV) is a development board that includes two processors that can run tasks in parallel. It is part of a family of development boards with the same form factor but designed for specific tasks, such as the Nicla Sense ME and the Nicla Voice. The Niclas can efficiently run processes created with TensorFlow Lite. For example, one of the cores of the NiclaV runs a computer vision algorithm on the fly (inference), while the other executes low-level operations like controlling a motor and communicating or acting as a user interface. The onboard wireless module allows the management of WiFi and Bluetooth Low Energy (BLE) connectivity simultaneously.

-
-
-

-
-
-
-
-

Hardware

-
-

Two Parallel Cores

-

The central processor is the dual-core STM32H747, including a Cortex M7 at 480 MHz and a Cortex M4 at 240 MHz. The two cores communicate via a Remote Procedure Call mechanism that seamlessly allows calling functions on the other processor. Both processors share all the on-chip peripherals and can run:

-
    -
  • Arduino sketches on top of the Arm Mbed OS

  • -
  • Native Mbed applications

  • -
  • MicroPython / JavaScript via an interpreter

  • -
  • TensorFlow Lite

  • -
-
-
-

-
-
-
-
-

Memory

-

Memory is crucial for embedded machine learning projects. The NiclaV board can host up to 16 MB of QSPI Flash for storage. However, it is essential to consider that the MCU’s SRAM is what is used for machine learning inferences; the STM32H747 has only 1 MB, shared by both processors. This MCU also incorporates 2 MB of FLASH, mainly for code storage.

-
-
-

Sensors

-
    -
  • Camera: A GC2145 2 MP Color CMOS Camera.

  • -
  • Microphone: The MP34DT05 is an ultra-compact, low-power, omnidirectional, digital MEMS microphone built with a capacitive sensing element and the IC interface.

  • -
  • 6-Axis IMU: 3D gyroscope and 3D accelerometer data from the LSM6DSOX 6-axis IMU.

  • -
  • Time of Flight Sensor: The VL53L1CBV0FY Time-of-Flight sensor adds accurate and low power-ranging capabilities to the Nicla Vision. The invisible near-infrared VCSEL laser (including the analog driver) is encapsulated with receiving optics in an all-in-one small module below the camera.

  • -
-
-
-
-

Arduino IDE Installation

-

Start by connecting the board (micro-USB) to your computer:

-
-
-

-
-
-

Install the Mbed OS core for Nicla boards in the Arduino IDE. Having the IDE open, navigate to Tools > Board > Board Manager, look for Arduino Nicla Vision on the search window, and install the board.

-
-
-

-
-
-

Next, go to Tools > Board > Arduino Mbed OS Nicla Boards and select Arduino Nicla Vision. With your board connected to the USB port, you should see the Nicla listed under Port; select it.

-
-

Open the Blink sketch on Examples/Basic and run it using the IDE Upload button. You should see the Built-in LED (green RGB) blinking, which means the Nicla board is correctly installed and functional!

-
-
-

Testing the Microphone

-

On Arduino IDE, go to Examples > PDM > PDMSerialPlotter, open and run the sketch. Open the Plotter and see the audio representation from the microphone:

-
-
-

-
-
-
-

Vary the frequency of the sound you generate and confirm that the mic is working correctly.

-
-
-
-

Testing the IMU

-

Before testing the IMU, it will be necessary to install the LSM6DSOX library. For that, go to Library Manager and look for LSM6DSOX. Install the library provided by Arduino:

-
-
-

-
-
-

Next, go to Examples > Arduino_LSM6DSOX > SimpleAccelerometer and run the accelerometer test (you can also run Gyro and board temperature):

-
-
-

-
-
-
-
-

Testing the ToF (Time of Flight) Sensor

-

As we did with IMU, it is necessary to install the VL53L1X ToF library. For that, go to Library Manager and look for VL53L1X. Install the library provided by Pololu:

-
-
-

-
-
-

Next, run the sketch proximity_detection.ino:

-
-
-

-
-
-

On the Serial Monitor, you will see the distance from the camera to an object in front of it (max of 4m).

-
-
-

-
-
-
-
-

Testing the Camera

-

We can also test the camera using, for example, the code provided on Examples > Camera > CameraCaptureRawBytes. We cannot see the image directly, but it is possible to get the raw image data generated by the camera.

-

Still, the best camera test is to see a live image. For that, we will use another IDE, the OpenMV.

-
-
-
-

Installing the OpenMV IDE

-

OpenMV IDE is the premier integrated development environment for use with OpenMV cameras, like the one on the Nicla Vision. It features a powerful text editor, debug terminal, and frame buffer viewer with a histogram display. We will use MicroPython to program the camera.

-

Go to the OpenMV IDE page, download the correct version for your Operating System, and follow the instructions for its installation on your computer.

-
-
-

-
-
-

The IDE should open, defaulting to the helloworld_1.py code in its Code Area. If not, you can open it from File > Examples > HelloWorld > helloworld.py.

-
-
-

-
-
-

Any messages sent through a serial connection (using print() or error messages) will be displayed on the Serial Terminal during run time. The image captured by a camera will be displayed in the Camera Viewer Area (or Frame Buffer) and in the Histogram area, immediately below the Camera Viewer.

-
-

Before connecting the Nicla to the OpenMV IDE, ensure you have the latest bootloader version. Go to your Arduino IDE, select the Nicla board, and open the sketch on Examples > STM32H747_System > STM32H747_manageBootloader. Upload the code to your board. The Serial Monitor will guide you.

-
-

After updating the bootloader, put the Nicla Vision in bootloader mode by double-pressing the reset button on the board. The built-in green LED will start fading in and out. Now return to the OpenMV IDE and click on the connect icon (Left ToolBar):


A pop-up will tell you that a board in DFU mode was detected and ask how you would like to proceed. First, select Install the latest release firmware (vX.Y.Z). This action will install the latest OpenMV firmware on the Nicla Vision.


You can leave the option Erase internal file system unselected and click [OK].


Nicla’s green LED will start flashing while the OpenMV firmware is uploaded to the board, and a terminal window will then open, showing the flashing progress.


Wait until the green LED stops flashing and fading. When the process ends, you will see a message saying, “DFU firmware update complete!”. Press [OK].

A green play button appears on the Tool Bar when the Nicla Vision connects.

Also, note that a drive named “NO NAME” will appear on your computer:


Every time you press the [RESET] button on the board, it automatically executes the main.py script stored on it. You can load the main.py code on the IDE (File > Open File...).

This code is the “Blink” code, confirming that the hardware is OK.

To test the camera, let’s run helloworld_1.py. Select the script via File > Examples > HelloWorld > helloworld.py.

When you click the green play button, the MicroPython script (helloworld.py) in the Code Area will be uploaded and run on the Nicla Vision. In the Camera Viewer, you will start to see the video streaming. The Serial Monitor will show the FPS (frames per second), which should be around 14 fps.


Here is the helloworld.py script:

# Hello World Example
#
# Welcome to the OpenMV IDE! Click on the green run arrow button below to run the script!

import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)   # Set frame size to QVGA (320x240)
sensor.skip_frames(time = 2000)     # Wait for settings to take effect.
clock = time.clock()                # Create a clock object to track the FPS.

while(True):
    clock.tick()                    # Update the FPS clock.
    img = sensor.snapshot()         # Take a picture and return the image.
    print(clock.fps())

On GitHub, you can find the Python scripts used here.


The code can be split into two parts:

  • Setup: Where the libraries are imported and initialized, and the variables are defined and initialized.

  • Loop: The (while loop) part of the code that runs continually. The image (img variable) is captured (one frame). Each of those frames can be used for inference in Machine Learning applications.

To interrupt the program execution, press the red [X] button.


Note: OpenMV Cam runs about half as fast when connected to the IDE. The FPS should increase once disconnected.

On GitHub, you can find other Python scripts. Try testing the onboard sensors.


Connecting the Nicla Vision to Edge Impulse Studio


We will need the Edge Impulse Studio later in other exercises. Edge Impulse is a leading development platform for machine learning on edge devices.

Edge Impulse officially supports the Nicla Vision. So, to get started, please create a new project in the Studio and connect the Nicla to it by following these steps:

  • Download the most updated EI Firmware and unzip it.

  • Open the zip file on your computer and select the uploader corresponding to your OS:

  • Put the Nicla Vision in Boot Mode by pressing the reset button twice.

  • Execute the specific batch code for your OS to upload the binary arduino-nicla-vision.bin to your board.

Go to your project on the Studio, and on the Data Acquisition tab, select WebUSB (1). A window will pop up; choose the option that shows that the Nicla is paired (2) and press [Connect] (3).


In the Collect Data section on the Data Acquisition tab, you can choose which sensor data to pick.

For example, IMU data:


Or Image (Camera):


And so on. You can also test an external sensor connected to the ADC (Nicla pin 0) and the other onboard sensors, such as the microphone and the ToF.


Expanding the Nicla Vision Board (optional)

One last item to explore: during prototyping, it is often essential to experiment with external sensors and devices. An excellent expansion for the Nicla is the Arduino MKR Connector Carrier (Grove compatible).

The shield has 14 Grove connectors: five single analog inputs (A0-A4), one double analog input (A5/A6), five single digital I/Os (D0-D4), one double digital I/O (D5/D6), one I2C (TWI), and one UART (Serial). All connectors are 5V compatible.


Note that all 17 Nicla Vision pins will be connected to the Shield Groves, but some Grove connections remain disconnected.


This shield is MKR compatible and can be used with the Nicla Vision and Portenta.

For example, suppose that in a TinyML project you want to send inference results using a LoRaWAN device and add information about local luminosity. For offline operation, a local low-power display such as an OLED is often advisable. This setup can be seen here:


The Grove Light Sensor would be connected to one of the single Analog pins (A0/PC4), the LoRaWAN device to the UART, and the OLED to the I2C connector.

Nicla pins 3 (Tx) and 4 (Rx) are connected to the Shield’s Serial connector. This UART is used to communicate with the LoRaWAN device. Here is a simple script to test the UART:

# UART Test - By: marcelo_rovai - Sat Sep 23 2023

import time
from pyb import UART
from pyb import LED

redLED = LED(1)  # built-in red LED

# Init UART object.
# Nicla Vision's UART (TX/RX pins) is on "LP1"
uart = UART("LP1", 9600)

while(True):
    uart.write("Hello World!\r\n")
    redLED.toggle()
    time.sleep_ms(1000)

To verify that the UART is working, you could, for example, connect another device, such as an Arduino UNO, to display “Hello World” on its Serial Monitor. Here is the code.

Below is the Hello World code to be used with the I2C OLED. The MicroPython SSD1306 OLED driver (ssd1306.py), created by Adafruit, should also be uploaded to the Nicla (the ssd1306.py script can be found on GitHub).

# Nicla_OLED_Hello_World - By: marcelo_rovai - Sat Sep 30 2023

# Save on device: MicroPython SSD1306 OLED driver, I2C and SPI interfaces, created by Adafruit
import ssd1306

from machine import I2C
i2c = I2C(1)

oled_width = 128
oled_height = 64
oled = ssd1306.SSD1306_I2C(oled_width, oled_height, i2c)

oled.text('Hello, World', 10, 10)
oled.show()

Finally, here is a simple script to read the ADC value on pin “PC4” (Nicla pin A0):

# Light Sensor (A0) - By: marcelo_rovai - Wed Oct 4 2023

import pyb
from time import sleep

adc = pyb.ADC(pyb.Pin("PC4"))  # create an analog object from a pin

while (True):
    val = adc.read()           # read an analog value
    print("Light={}".format(val))
    sleep(1)

The ADC can be used for other sensor variables, such as temperature.
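As a concrete illustration, here is a hedged sketch of how a raw ADC reading could be converted into a temperature estimate. The 12-bit range, 3.3 V reference, and LM35-style sensor (10 mV per degree Celsius) are assumptions for this example, not specifics from the text; check your sensor's datasheet.

```python
# Assumptions (not from the original text): a 12-bit ADC (0-4095),
# a 3.3 V reference, and an LM35-style sensor (10 mV per degree C).
ADC_MAX = 4095
V_REF = 3.3

def adc_to_celsius(raw):
    """Convert a raw ADC count to degrees Celsius for an LM35 (10 mV/C)."""
    volts = (raw / ADC_MAX) * V_REF
    return volts * 100.0  # LM35: 10 mV per degree C

# A raw reading of 310 corresponds to roughly 0.25 V, i.e. about 25 C.
print(round(adc_to_celsius(310), 1))
```

The same pattern (count → voltage → physical quantity) applies to any analog sensor; only the final conversion formula changes.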

Note that the above scripts (downloaded from GitHub) only introduce how to connect external devices to the Nicla Vision board using MicroPython.


Conclusion

The Arduino Nicla Vision is an excellent tiny device for industrial and professional uses! It is powerful, trustworthy, low-power, and has suitable sensors for the most common embedded machine learning applications, such as vision, movement, sensor fusion, and sound.

In the GitHub repository, you will find the latest version of all the code used or commented on in this hands-on exercise.

diff --git a/contents/object_detection_fomo/object_detection_fomo.html b/contents/object_detection_fomo/object_detection_fomo.html

Object Detection


DALL·E 3 Prompt: Cartoon in the style of the 1940s or 1950s showcasing a spacious industrial warehouse interior. A conveyor belt is prominently featured, carrying a mixture of toy wheels and boxes. The wheels are distinguishable with their bright yellow centers and black tires. The boxes are white cubes painted with alternating black and white patterns. At the end of the moving conveyor stands a retro-styled robot, equipped with tools and sensors, diligently classifying and counting the arriving wheels and boxes. The overall aesthetic is reminiscent of mid-century animation with bold lines and a classic color palette.

Introduction


This is a continuation of CV on Nicla Vision, now exploring Object Detection on microcontrollers.


Object Detection versus Image Classification

The main task of Image Classification models is to produce a list of the most probable object categories present in an image, for example, to identify a tabby cat just after its dinner:

But what happens when the cat jumps near the wine glass? The model still only recognizes the predominant category in the image, the tabby cat:

And what happens if there is no dominant category in the image?

The model completely misidentifies the above image as an “ashcan,” possibly due to its color tonalities.

The model used in all the previous examples is MobileNet, trained on a large dataset, ImageNet.

To solve this issue, we need another type of model, one where not only can multiple categories (or labels) be found, but also where the objects are located in a given image.

As we can imagine, such models are much more complicated and bigger, for example, the MobileNetV2 SSD FPN-Lite 320x320, trained on the COCO dataset. This pre-trained object detection model is designed to locate up to 10 objects within an image, outputting a bounding box for each object detected. The image below is the result of such a model running on a Raspberry Pi:

The models used for object detection (such as MobileNet SSD or YOLO) are usually several megabytes in size, which is OK for use with a Raspberry Pi but unsuitable for embedded devices, where RAM is usually below 1 MB.


An innovative solution for Object Detection: FOMO

In 2022, Edge Impulse launched FOMO (Faster Objects, More Objects), a novel solution for performing object detection on embedded devices, not only the Nicla Vision (Cortex-M7) but also Cortex-M4F CPUs (Arduino Nano 33 and OpenMV M4 series) as well as Espressif ESP32 devices (ESP-CAM and XIAO ESP32S3 Sense).

In this hands-on exercise, we will explore using FOMO for object detection without going into much detail about the model itself. To understand more about how the model works, see the official FOMO announcement by Edge Impulse, where Louis Moreau and Mat Kelcey explain it in detail.


The Object Detection Project Goal


All Machine Learning projects need to start with a detailed goal. Let’s assume we are in an industrial facility and must sort and count wheels and special boxes.


In other words, we should perform a multi-label classification, where each image can have three classes:

  • Background (No objects)

  • Box

  • Wheel

Here are some unlabeled image samples that we will use to detect the objects (wheels and boxes):

We are interested in which objects are in the image, their location (centroid), and how many of them we can find. FOMO does not detect the object’s size, unlike MobileNet SSD or YOLO, where the bounding box is one of the model outputs.


We will develop the project using the Nicla Vision for image capture and model inference. The ML project will be developed using the Edge Impulse Studio. But before starting the object detection project in the Studio, let’s create a raw dataset (not labeled) with images that contain the objects to be detected.


Data Collection

We can use the Edge Impulse Studio, the OpenMV IDE, a phone, or other devices for image capture. Here, we will again use the OpenMV IDE.


Collecting Dataset with OpenMV IDE

First, create a folder on your computer where your data will be saved, for example, “data.” Next, in the OpenMV IDE, go to Tools > Dataset Editor and select New Dataset to start the dataset collection:


Edge Impulse suggests that the objects should be of similar size and not overlapping for better performance. This is OK in an industrial facility, where the camera should be fixed, keeping the same distance from the objects to be detected. Despite that, we will also try mixed sizes and positions to see the result.


We will not create separate folders for our images because each contains multiple labels.


Connect the Nicla Vision to the OpenMV IDE and run the dataset_capture_script.py. Clicking on the Capture Image button will start capturing images:


We suggest around 50 images mixing the objects and varying the number of each appearing on the scene. Try to capture different angles, backgrounds, and light conditions.

The stored images use a QVGA frame size (320x240) and the RGB565 color pixel format.
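Since RGB565 packs each pixel into 16 bits (5 red, 6 green, 5 blue), the raw size of one QVGA frame is easy to check:

```python
# Each RGB565 pixel occupies 2 bytes (5 + 6 + 5 = 16 bits).
width, height, bytes_per_pixel = 320, 240, 2
frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)         # 153600 bytes per raw frame
print(frame_bytes / 1024)  # 150.0 KB
```

This is why raw frames are downscaled before being fed to a model on a microcontroller with only about 1 MB of RAM.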

After capturing your dataset, close the Dataset Editor Tool via Tools > Dataset Editor.


Edge Impulse Studio


Setup the project


Go to Edge Impulse Studio, enter your credentials at Login (or create an account), and start a new project.


Here, you can clone the project developed for this hands-on: NICLA_Vision_Object_Detection.

On your Project Dashboard, scroll down to Project info, and select Bounding boxes (object detection) and Nicla Vision as your Target Device:


Uploading the unlabeled data

In the Studio, go to the Data acquisition tab, and in the UPLOAD DATA section, upload the captured files from your computer.


You can let the Studio split your data automatically between Train and Test, or do it manually.


All 51 unlabeled images were uploaded, but they still need to be labeled appropriately before being used as a dataset in the project. The Studio has a tool for that purpose, which you can find via the link Labeling queue (51).

There are two ways to perform AI-assisted labeling in the Edge Impulse Studio (free version):

  • Using YOLOv5

  • Tracking objects between frames

Edge Impulse launched an auto-labeling feature for Enterprise customers, easing labeling tasks in object detection projects.

Ordinary objects can quickly be identified and labeled using an existing library of pre-trained object detection models from YOLOv5 (trained on the COCO dataset). But since, in our case, the objects are not part of the COCO dataset, we should select the tracking objects option. With this option, once you draw bounding boxes and label the images in one frame, the objects will be tracked automatically from frame to frame, partially labeling the new ones (not all will be correctly labeled).


You can use the EI uploader to import your data if you already have a labeled dataset containing bounding boxes.


Labeling the Dataset


Starting with the first image of your unlabeled data, use your mouse to drag a box around an object to add a label. Then click Save labels to advance to the next item.


Continue with this process until the queue is empty. At the end, all images should have the objects labeled as those samples below:


Next, review the labeled samples on the Data acquisition tab. If one of the labels was wrong, you can edit it using the three dots menu after the sample name:


You will be guided to replace the wrong label, correcting the dataset.


The Impulse Design


In this phase, you should define how to:

  • Pre-process the data: resize the individual images from 320x240 to 96x96, squashing them (squared form, without cropping); afterward, convert the images from RGB to Grayscale.

  • Design a Model, in this case, “Object Detection.”


Preprocessing all dataset

In this section, select Color depth as Grayscale, which is suitable for use with FOMO models, and click Save parameters.


The Studio moves automatically to the next section, Generate features, where all samples will be pre-processed, resulting in a dataset with individual 96x96x1 images or 9,216 features.


The feature explorer shows that all samples display good separation after feature generation.

One of the samples (46) appears to be in the wrong cluster, but clicking on it confirms that the labeling is correct.


Model Design, Training, and Test


We will use FOMO, an object detection model based on MobileNetV2 (alpha 0.35) designed to coarsely segment an image into a grid of background vs objects of interest (here, boxes and wheels).

FOMO is an innovative machine learning model for object detection, which can use up to 30 times less energy and memory than traditional models like MobileNet SSD and YOLOv5. FOMO can operate on microcontrollers with less than 200 KB of RAM. The main reason this is possible is that, while other models calculate the object’s size by drawing a square around it (bounding box), FOMO ignores the object’s size, providing only the information about where the object is located in the image, by means of its centroid coordinates.

How does FOMO work?

FOMO takes the image in grayscale and divides it into blocks of pixels using a factor of 8. For a 96x96 input, the grid is 12x12 (96/8 = 12). Next, FOMO runs a classifier over each pixel block to calculate the probability that a box or a wheel is present in it and, subsequently, determines the regions with the highest probability of containing the object (if a pixel block has no objects, it is classified as background). From the overlap of the final region, FOMO provides the coordinates (relative to the image dimensions) of the centroid of this region.
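To make the geometry concrete, here is an illustrative Python sketch of the grid arithmetic described above. This is not Edge Impulse's implementation; the real region-merging logic is more involved, and the 2x2 block of cells below is a made-up example.

```python
# FOMO-style grid geometry (illustrative only).
INPUT_SIZE = 96
FACTOR = 8
GRID = INPUT_SIZE // FACTOR  # 96 / 8 = 12 cells per side

def cell_center(col, row, factor=FACTOR):
    """Pixel coordinates of the center of grid cell (col, row)."""
    return (col * factor + factor // 2, row * factor + factor // 2)

def region_centroid(cells):
    """Centroid (in pixels) of a set of adjacent cells classified as object."""
    xs, ys = zip(*(cell_center(c, r) for c, r in cells))
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(GRID)  # 12
# A hypothetical 2x2 block of object cells at grid positions (4,4)..(5,5):
print(region_centroid([(4, 4), (5, 4), (4, 5), (5, 5)]))  # (40.0, 40.0)
```

The centroid reported to the application is this region center, rather than a bounding box.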


For training, we should select a pre-trained model. Let’s use the FOMO (Faster Objects, More Objects) MobileNetV2 0.35. This model uses around 250 KB of RAM and 80 KB of ROM (Flash), which suits our board well, since it has 1 MB of RAM and ROM.


Regarding the training hyper-parameters, the model will be trained with:

  • Epochs: 60

  • Batch size: 32

  • Learning rate: 0.001

For validation during training, 20% of the dataset (validation_dataset) will be set aside. To the remaining 80% (train_dataset), we will apply Data Augmentation, which randomly flips the images, changes their size and brightness, and crops them, artificially increasing the number of samples in the training dataset.
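A minimal sketch of the kinds of augmentations just described (random flip and brightness shift), using nested lists as stand-ins for grayscale images. Edge Impulse's actual pipeline also resizes and crops, and is implemented differently; this is only to illustrate the idea.

```python
import random

def augment(img, rng):
    """Randomly flip an image horizontally and shift its brightness."""
    # Random horizontal flip
    if rng.random() < 0.5:
        img = [row[::-1] for row in img]
    # Random brightness shift, clamped to the valid 0-255 range
    shift = rng.randint(-30, 30)
    return [[min(255, max(0, p + shift)) for p in row] for row in img]

rng = random.Random(42)
img = [[10, 200], [128, 255]]
out = augment(img, rng)
print(len(out), len(out[0]))  # dimensions are preserved
```

Each augmented copy counts as an extra training sample, which is how augmentation inflates a small dataset.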


As a result, the model ends with practically 1.00 in the F1 score, with a similar result when using the Test data.

Note that FOMO automatically added a third label, background, to the two previously defined (box and wheel).


In object detection tasks, accuracy is generally not the primary evaluation metric. Object detection involves both classifying objects and providing bounding boxes around them, making it a more complex problem than simple classification. A further issue here is that we do not have bounding boxes, only centroids. In short, using accuracy as a metric could be misleading and may not provide a complete picture of how well the model is performing. For these reasons, we will use the F1 score.
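For reference, the F1 score is the harmonic mean of precision and recall. A small sketch of the computation (the counts below are made up for illustration):

```python
def f1_score(tp, fp, fn):
    """F1 from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 48 correct detections, 1 false positive, 1 missed object
print(round(f1_score(48, 1, 1), 3))  # close to 1.0, like the trained model
```

Because F1 penalizes both false positives and misses, it gives a more honest picture than accuracy for centroid-based detection.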


Test model with “Live Classification”

Since Edge Impulse officially supports the Nicla Vision, let’s connect it to the Studio by following these steps:

  • Download the latest EI Firmware and unzip it.

  • Open the zip file on your computer and select the uploader related to your OS:

  • Put the Nicla Vision in Boot Mode by pressing the reset button twice.

  • Execute the specific batch code for your OS to upload the binary (arduino-nicla-vision.bin) to your board.

Go to the Live classification section in the EI Studio, and using WebUSB, connect your Nicla Vision:


Once connected, you can use the Nicla to capture actual images to be tested by the trained model on Edge Impulse Studio.


Note that the model can produce false positives and negatives. These can be minimized by defining a proper Confidence Threshold (use the three-dots menu for the setup). Try 0.8 or higher.
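The effect of the threshold can be sketched as a simple filtering step. The detection tuples below are hypothetical stand-ins, not the Studio's actual data format:

```python
def filter_detections(detections, min_confidence=0.8):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d[1] >= min_confidence]

# Hypothetical (label, confidence, x, y) detections:
detections = [("wheel", 0.95, 40, 40), ("box", 0.55, 120, 80), ("wheel", 0.82, 200, 60)]
kept = filter_detections(detections)
print([d[0] for d in kept])  # the 0.55 "box" is dropped
```

Raising the threshold removes low-confidence false positives at the risk of missing weakly detected real objects.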


Deploying the Model


Select OpenMV Firmware on the Deploy Tab and press [Build].


When you try to connect the Nicla to the OpenMV IDE again, it will try to update its firmware. Choose the option Load a specific firmware instead.


You will find a ZIP file on your computer from the Studio. Open it:


Load the .bin file to your board:


After the download is finished, a pop-up message will be displayed. Press OK, and open the script ei_object_detection.py downloaded from the Studio.

Before running the script, let’s change a few lines. Note that you can leave the window definition as 240 x 240 and the camera capturing images as QVGA/RGB. The captured image will be pre-processed by the firmware deployed from Edge Impulse.

# Edge Impulse - OpenMV Object Detection Example

import sensor, image, time, os, tf, math, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

Redefine the minimum confidence, for example, to 0.8 to minimize false positives and negatives.

min_confidence = 0.8

If necessary, change the color of the circles used to display the detected objects’ centroids, for better contrast.

-
try:
-    # Load built in model
-    labels, net = tf.load_builtin_model('trained')
-except Exception as e:
-    raise Exception(e)
-
-colors = [ # Add more colors if you are detecting more than 7 types of classes at once.
-    (255, 255,   0), # background: yellow (not used)
-    (  0, 255,   0), # cube: green
-    (255,   0,   0), # wheel: red
-    (  0,   0, 255), # not used
-    (255,   0, 255), # not used
-    (  0, 255, 255), # not used
-    (255, 255, 255), # not used
-]
-

Keep the remaining code as it is and press the green Play button to run the code:


In the camera view, we can see the objects with their centroids marked with fixed 12-pixel circles (each circle has a distinct color, depending on its class). On the Serial Terminal, the model shows the labels detected and their positions in the image window (240x240).

Be aware that the coordinate origin is in the upper left corner.


Note that the frame rate is around 8 fps (similar to what we got with the Image Classification project). This happens because FOMO is cleverly built on top of a CNN classifier, not an object detection architecture like SSD MobileNet. For comparison, running a MobileNetV2 SSD FPN-Lite 320x320 model on a Raspberry Pi 4 gives a latency around 5 times higher (around 1.5 fps).
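The fps figures quoted above translate directly into per-frame latency (latency = 1/fps):

```python
def latency_ms(fps):
    """Per-frame latency in milliseconds for a given frame rate."""
    return 1000.0 / fps

print(round(latency_ms(8), 1))    # FOMO on the Nicla: ~125 ms per frame
print(round(latency_ms(1.5), 1))  # SSD model on a Raspberry Pi 4: ~666.7 ms
```
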


Here is a short video showing the inference results:


Conclusion


FOMO is a significant leap in the image processing space, as Louis Moreau and Mat Kelcey put it during its launch in 2022:


FOMO is a ground-breaking algorithm that brings real-time object detection, tracking, and counting to microcontrollers for the first time.

Multiple possibilities exist for exploring object detection (and, more precisely, counting objects) on embedded devices, such as having the Nicla perform sensor fusion (camera + microphone) together with object detection. This can be very useful in projects involving bees, for example.

diff --git a/contents/ondevice_learning/ondevice_learning.html b/contents/ondevice_learning/ondevice_learning.html

12  On-Device Learning


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Drawing of a smartphone with its internal components exposed, revealing diverse miniature engineers of different genders and skin tones actively working on the ML model. The engineers, including men, women, and non-binary individuals, are tuning parameters, repairing connections, and enhancing the network on the fly. Data flows into the ML model, being processed in real-time, and generating output inferences.

On-device Learning represents a significant innovation for embedded and edge IoT devices, enabling models to train and update directly on small local devices. This contrasts with traditional methods, where models are trained on expansive cloud computing resources before deployment. With On-Device Learning, devices like smart speakers, wearables, and industrial sensors can refine models in real-time based on local data without needing to transmit data externally. For example, a voice-enabled smart speaker could learn and adapt to its owner’s speech patterns and vocabulary right on the device. However, there is no such thing as a free lunch; therefore, in this chapter, we will discuss both the benefits and the limitations of on-device learning.

Learning Objectives

  • Understand on-device learning and how it differs from cloud-based training

  • Recognize the benefits and limitations of on-device learning

  • Examine strategies to adapt models through complexity reduction, optimization, and data compression

  • Understand related concepts like federated learning and transfer learning

  • Analyze the security implications of on-device learning and mitigation strategies

12.1 Introduction


On-device Learning refers to training ML models directly on the device where they are deployed, as opposed to traditional methods where models are trained on powerful servers and then deployed to devices. This method is particularly relevant to TinyML, where ML systems are integrated into tiny, resource-constrained devices.


An example of On-Device Learning can be seen in a smart thermostat that adapts to user behavior over time. Initially, the thermostat may have a generic model that understands basic usage patterns. However, as it is exposed to more data, such as the times the user is home or away, preferred temperatures, and external weather conditions, the thermostat can refine its model directly on the device to provide a personalized experience. This is all done without sending data back to a central server for processing.

Another example is predictive text on smartphones. As users type, the phone learns from their language patterns and suggests words or phrases that are likely to be used next. This learning happens directly on the device, and the model updates in real-time as more data is collected. A widely used real-world example of on-device learning is Gboard. On an Android phone, Gboard learns from typing and dictation patterns to enhance the experience for all users. Gboard does this through federated learning, a technique closely related to on-device learning. Figure fig-federated-cycle shows the cycle of federated learning on mobile devices: A. the device learns from user patterns; B. local model updates are communicated to the cloud; C. the cloud server updates the global model and sends the new model to all the devices.

Figure 12.1: Federated learning cycle. Credit: Google Research.
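Step C of the cycle above is commonly implemented with Federated Averaging (FedAvg): the server takes a data-size-weighted average of the clients' local weights. Here is a toy sketch, with plain lists of numbers standing in for full model tensors; real systems aggregate entire networks and typically add secure aggregation:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of per-client weight vectors (FedAvg aggregation)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

clients = [[0.0, 1.0], [1.0, 3.0]]  # local weights from two devices
sizes = [100, 300]                  # number of samples seen on each device
print(federated_average(clients, sizes))  # [0.75, 2.5]
```

Weighting by sample count lets devices that have seen more data contribute more to the global model, while raw user data never leaves any device.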

12.2 Advantages and Limitations


On-device learning provides several advantages over traditional cloud-based ML. By keeping data and models on the device, it eliminates the need for costly data transmission and addresses privacy concerns. This allows for more personalized, responsive experiences, as the model can adapt in real-time to user behavior.


However, On-Device Learning also comes with tradeoffs. The limited computing resources on consumer devices can make it challenging to run complex models locally. Datasets are also more restricted since they consist only of user-generated data from a single device. Additionally, updating models requires pushing out new versions rather than seamless cloud updates.


On-device learning opens up new capabilities by enabling offline AI while maintaining user privacy. However, it requires carefully managing model and data complexity within the constraints of consumer devices. Finding the right balance between localization and cloud offloading is key to optimizing on-device experiences.


12.2.1 Benefits


Privacy and Data Security


One of the significant advantages of on-device learning is the enhanced privacy and security of user data. For instance, consider a smartwatch that monitors sensitive health metrics such as heart rate and blood pressure. By processing data and adapting models directly on the device, the biometric data remains localized, circumventing the need to transmit raw data to cloud servers where it could be susceptible to breaches.


Server breaches are far from rare, with millions of records compromised annually. For example, the 2017 Equifax breach exposed the personal data of 147 million people. By keeping data on the device, the risk of such exposures is drastically minimized. On-device learning eliminates reliance on centralized cloud storage and safeguards against unauthorized access from various threats, including malicious actors, insider threats, and accidental exposure.


Regulations like the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) mandate stringent data privacy requirements that on-device learning adeptly addresses. By ensuring data remains localized and is not transferred to other systems, on-device learning facilitates compliance with these regulations.


On-device learning is not just beneficial for individual users; it has significant implications for organizations and sectors dealing with highly sensitive data. For instance, within the military, on-device learning empowers frontline systems to adapt models and function independently of connections to central servers that could potentially be compromised. Critical and sensitive information is staunchly protected by localizing data processing and learning. However, this comes with the tradeoff that individual devices take on more value and may incentivize theft or destruction as they become the sole carriers of specialized AI models. Care must be taken to secure devices themselves when transitioning to on-device learning.


It is also important to preserve the privacy, security, and regulatory compliance of personal and sensitive data. Training and operating models locally, instead of in the cloud, substantially augments privacy measures, ensuring that user data is safeguarded from potential threats.


However, this is only partially intuitive because on-device learning could instead open systems up to new privacy attacks. With valuable data summaries and model updates permanently stored on individual devices, it may be much harder to physically and digitally protect them than a large computing cluster. While on-device learning reduces the amount of data compromised in any one breach, it could also introduce new dangers by dispersing sensitive information across many decentralized endpoints. Careful security practices are still essential for on-device systems.


Regulatory Compliance


On-device learning helps address major privacy regulations like GDPR and the California Consumer Privacy Act (CCPA). These regulations require data localization, restricting cross-border data transfers to approved countries with adequate controls. GDPR also mandates privacy by design and consent requirements for data collection. By keeping data processing and model training localized on-device, sensitive user data is not transferred across borders. This avoids major compliance headaches for organizations.


For example, a healthcare provider monitoring patient vitals with wearables must ensure cross-border data transfers comply with HIPAA and GDPR if using the cloud. Determining which country’s laws apply and securing approvals for international data flows introduces legal and engineering burdens. With on-device learning, no data leaves the device, simplifying compliance. The time and resources spent on compliance are reduced significantly.


Industries like healthcare, finance, and government, which have highly regulated data, can benefit greatly from on-device learning. By localizing data and learning, regulatory privacy and data sovereignty requirements are more easily met. On-device solutions provide an efficient way to build compliant AI applications.


Major privacy regulations impose restrictions on cross-border data movement that on-device learning inherently addresses through localized processing. This reduces the compliance burden for organizations working with regulated data.


Reduced Bandwidth, Costs, and Increased Efficiency


One major advantage of on-device learning is the significant reduction in bandwidth usage and associated cloud infrastructure costs. By keeping data localized for model training rather than transmitting raw data to the cloud, on-device learning can result in substantial bandwidth savings. For instance, a network of cameras analyzing video footage can achieve significant reductions in data transfer by training models on-device rather than streaming all video footage to the cloud for processing.


This reduction in data transmission saves bandwidth and translates to lower costs for servers, networking, and data storage in the cloud. Large organizations, which might otherwise spend millions on cloud infrastructure to train models on user data, can achieve dramatic cost reductions through on-device learning. In the era of Generative AI, where costs have been escalating significantly, finding ways to keep expenses down has become increasingly important.


Furthermore, the energy and environmental costs of running large server farms are also diminished. Data centers consume vast amounts of energy, contributing to greenhouse gas emissions. By reducing the need for extensive cloud-based infrastructure, on-device learning plays a part in mitigating the environmental impact of data processing (Wu et al. 2022).

Wu, Carole-Jean, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, et al. 2022. “Sustainable AI: Environmental Implications, Challenges and Opportunities.” Proceedings of Machine Learning and Systems 4: 795–813.

Specifically for endpoint applications, on-device learning minimizes the number of network API calls needed to run inference through a cloud provider. The cumulative costs associated with bandwidth and API calls can quickly escalate for applications with millions of users. In contrast, performing training and inference locally is considerably more efficient and cost-effective. Under state-of-the-art optimizations, on-device learning has been shown to reduce training memory requirements, drastically improve memory efficiency, and reduce per-iteration latency by up to 20% (Dhar et al. 2021).


Another key benefit of on-device learning is the potential for IoT devices to continuously adapt their ML model to new data for continuous, lifelong learning. On-device models can quickly become outdated as user behavior, data patterns, and preferences change. Continuous learning enables the model to efficiently adapt to new data and improvements and maintain high model performance over time.


12.2.2 Limitations


While traditional cloud-based ML systems have access to nearly endless computing resources, on-device learning is often restricted by the limitations in computational and storage power of the edge device that the model is trained on. By definition, an edge device is a device with restrained computing, memory, and energy resources that cannot be easily increased or decreased. Thus, the reliance on edge devices can restrict the complexity, efficiency, and size of on-device ML models.


Compute resources


Traditional cloud-based ML systems utilize large servers with multiple high-end GPUs or TPUs, providing nearly endless computational power and memory. For example, services like Amazon Web Services (AWS) EC2 allow configuring clusters of GPU instances for massively parallel training.


In contrast, on-device learning is restricted by the hardware limitations of the edge device on which it runs. Edge devices refer to endpoints like smartphones, embedded electronics, and IoT devices. By definition, these devices have highly restrained computing, memory, and energy resources compared to the cloud.


For example, a typical smartphone or Raspberry Pi may only have a few CPU cores, a few GB of RAM, and a small battery. Even more resource-constrained are TinyML microcontroller devices such as the Arduino Nano 33 BLE Sense. The resources on these devices are fixed and cannot easily be scaled on demand the way cloud infrastructure can. This reliance on edge devices directly restricts the complexity, efficiency, and size of models that can be deployed for on-device training:

  • Complexity: Limits on memory, computing, and power restrict model architecture design, constraining the number of layers and parameters.
  • Efficiency: Models must be heavily optimized through methods like quantization and pruning to run faster and consume less energy.
  • Size: Model files must be compressed as much as possible to fit within the storage limitations of edge devices.

Thus, while the cloud offers endless scalability, on-device learning must operate within the tight resource constraints of endpoint hardware. This requires careful codesign of streamlined models, training methods, and optimizations tailored specifically for edge devices.


Dataset Size, Accuracy, and Generalization


In addition to limited computing resources, on-device learning is also constrained by the dataset available for training models.


In the cloud, models are trained on massive, diverse datasets like ImageNet or Common Crawl. For example, ImageNet contains over 14 million images carefully categorized across thousands of classes.


On-device learning instead relies on smaller, decentralized data silos unique to each device. A smartphone camera roll may contain only thousands of photos of users’ interests and environments.


This decentralized data tends to be non-IID (not independent and identically distributed). For instance, two friends may take many photos of the same places and objects, meaning their data distributions are highly correlated rather than independent.


Reasons data may be non-IID in on-device settings:

  • User heterogeneity: Different users have different interests and environments.
  • Device differences: Sensors, regions, and demographics affect data.
  • Temporal effects: Time of day and seasonal patterns affect data.
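As a toy illustration of these effects, the sketch below simulates how per-user photo collections end up with skewed, correlated label distributions rather than IID samples. The user names, classes, and bias weights are all hypothetical, chosen only to make the skew visible:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical users, each biased toward a favorite photo subject.
CLASSES = ["pets", "food", "landscapes", "people"]
USER_BIAS = {"alice": "pets", "bob": "food", "carol": "people"}

def sample_local_dataset(user, n=100, bias_weight=0.7):
    """Draw n photo labels; a bias_weight fraction come from the user's favorite class."""
    favorite = USER_BIAS[user]
    labels = []
    for _ in range(n):
        if random.random() < bias_weight:
            labels.append(favorite)
        else:
            labels.append(random.choice(CLASSES))
    return labels

# Each device sees a very different label distribution (non-IID).
for user in USER_BIAS:
    print(user, Counter(sample_local_dataset(user)).most_common(1))
```

A model trained only on one such device would over-fit that user's dominant class, which is exactly the generalization problem described below.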

The effectiveness of ML relies heavily on large, diverse training data. With small, localized datasets, on-device models may fail to generalize across different user populations and environments. For example, a disease detection model trained only on images from a single hospital would not generalize well to other patient demographics; its real-world performance would only improve with training on extensive, diverse medical data. Thus, while cloud-based learning leverages massive datasets, on-device learning relies on much smaller, decentralized data silos unique to each user.


The limited data and optimizations required for on-device learning can negatively impact model accuracy and generalization:

  • Small datasets increase overfitting risk. For example, a fruit classifier trained on 100 images risks overfitting compared to one trained on 1 million diverse images.
  • Noisy user-generated data reduces quality. Sensor noise or improper data labeling by non-experts may degrade training.
  • Optimizations like pruning and quantization trade off accuracy for efficiency. An 8-bit quantized model runs faster but less accurately than a 32-bit model.

So while cloud models achieve high accuracy with massive datasets and no constraints, on-device models can struggle to generalize. Some studies show that on-device training matches cloud accuracy on select tasks. However, performance on real-world workloads requires further study (Lin et al. 2022).


For instance, a cloud model can accurately detect pneumonia in chest X-rays from thousands of hospitals. However, an on-device model trained only on a small local patient population may fail to generalize.


Unreliable accuracy limits the real-world applicability of on-device learning for mission-critical uses like disease diagnosis or self-driving vehicles.


On-device training is also slower than the cloud due to limited resources. Even if each iteration is faster, the overall training process takes longer.


For example, a real-time robotics application may require model updates within milliseconds. On-device training on small embedded hardware may take seconds or minutes per update - too slow for real-time use.


Accuracy, generalization, and speed challenges pose hurdles to adopting on-device learning for real-world production systems, especially when reliability and low latency are critical.


12.3 On-device Adaptation


In an ML task, resource consumption mainly comes from three sources:

  • The ML model itself;
  • The optimization process during model learning;
  • Storing and processing the dataset used for learning.

Correspondingly, there are three approaches to adapting existing ML algorithms onto resource-constrained devices:

  • Reducing the complexity of the ML model;
  • Modifying optimizations to reduce training resource requirements;
  • Creating new storage-efficient data representations.

In the following section, we will review these on-device learning adaptation methods. The Model Optimizations chapter provides more details on model optimizations.


12.3.1 Reducing Model Complexity


In this section, we will briefly discuss ways to reduce model complexity when adapting ML models on-device. For details on reducing model complexity, please refer to the Model Optimization Chapter.


Traditional ML Algorithms


Given edge devices’ computing and memory limitations, select traditional ML algorithms are great candidates for on-device learning applications thanks to their lightweight nature. Some example algorithms with low resource footprints include Naive Bayes classifiers, Support Vector Machines (SVMs), Linear Regression, Logistic Regression, and select Decision Tree algorithms.


With some refinements, these classical ML algorithms can be adapted to specific hardware architectures and perform simple tasks. Their low resource requirements make it easy to integrate continuous learning even on edge devices.


Pruning


Pruning is a technique for reducing the size and complexity of an ML model to improve its efficiency and generalization performance. This is beneficial for training models on edge devices, where we want to minimize resource usage while maintaining competitive accuracy.


The primary goal of pruning is to remove parts of the model that do not contribute significantly to its predictive power while retaining the most informative aspects. In the context of decision trees, pruning involves removing some branches (subtrees) from the tree, leading to a smaller and simpler tree. In the context of DNN, pruning is used to reduce the number of neurons (units) or connections in the network, as shown in Figure fig-ondevice-pruning.
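A minimal sketch of one common pruning rule, magnitude-based weight pruning, using NumPy (the function name and the threshold rule are illustrative choices, not the only scheme): the smallest-magnitude fraction of weights is zeroed out, shrinking the effective model.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the smallest magnitudes."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.9, -0.05], [0.01, -0.7]])
pruned = magnitude_prune(w, sparsity=0.5)
# The smallest half of the weights (0.05 and 0.01) are zeroed out.
```

Real systems typically prune iteratively and retrain between rounds to recover accuracy; this one-shot version only shows the core thresholding idea.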

Figure 12.2: Network pruning.

Reducing Complexity of Deep Learning Models



Traditional cloud-based DNN frameworks have too much memory overhead to be used on-device. For example, deep learning systems like PyTorch and TensorFlow require hundreds of megabytes of memory overhead when training models such as MobilenetV2-w0.35, and the overhead scales as the number of training parameters increases.


Current research on lightweight DNNs mostly explores CNN architectures. Several bare-metal frameworks designed for running neural networks on MCUs with low computational overhead and memory footprint also exist; examples include MNN, TVM, and TensorFlow Lite. However, they can only perform inference during forward passes and lack support for backpropagation. While the models they target are designed for edge deployment, the reduction in model weights and architectural connections also lowers the resource requirements for continuous learning.


The tradeoff between performance and model support is clear when adapting the most popular DNN systems. How do we adapt existing DNN models to resource-constrained settings while maintaining support for backpropagation and continuous learning? The latest research suggests algorithm and system codesign techniques that help reduce the resource consumption of ML training on edge devices. Utilizing techniques such as quantization-aware scaling (QAS), sparse updates, and other cutting-edge methods, on-device learning is possible on embedded systems with only a few hundred kilobytes of RAM while maintaining high accuracy.


12.3.2 Modifying Optimization Processes


Choosing the right optimization strategy is important for DNN training on a device since this allows for finding a good local minimum. Since training occurs on a device, this strategy must also consider limited memory and power.


Quantization-Aware Scaling


Quantization is a common method for reducing the memory footprint of DNN training. Although it can introduce new errors, these errors can be mitigated by designing the training procedure to characterize and compensate for the statistical error. For example, models could use stochastic rounding or incorporate the quantization error into the gradient updates.
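Stochastic rounding, mentioned above, rounds up or down with probability proportional to proximity, so the rounding error is zero in expectation. A minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def stochastic_round(x, rng):
    """Round each value to a neighboring integer; round up with
    probability equal to the fractional part, so E[rounded] = x."""
    floor = np.floor(x)
    frac = x - floor
    return floor + (rng.random(x.shape) < frac)

rng = np.random.default_rng(0)
x = np.full(10_000, 0.3)
rounded = stochastic_round(x, rng)
# Each value becomes 0 or 1, but the mean stays close to 0.3,
# unlike deterministic rounding, which would map everything to 0.
```

This unbiasedness is why stochastic rounding helps low-precision training: small gradient updates are not systematically lost to rounding.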


A specific algorithmic technique is Quantization-Aware Scaling (QAS), which improves the performance of neural networks on low-precision hardware, such as edge devices, mobile devices, or TinyML systems, by adjusting the scale factors during the quantization process.


As we discussed in the Model Optimizations chapter, quantization is the process of mapping a continuous range of values to a discrete set of values. In the context of neural networks, quantization often involves reducing the precision of the weights and activations from 32-bit floating point to lower-precision formats such as 8-bit integers. This reduction in precision can significantly reduce the computational cost and memory footprint of the model, making it suitable for deployment on low-precision hardware. Figure fig-float-int-quantization is an example of float-to-integer quantization.

Figure 12.3: Float-to-integer quantization. Credit: Nvidia.

However, the quantization process can also introduce quantization errors that can degrade the model’s performance. Quantization-aware scaling is a technique that aims to minimize these errors by adjusting the scale factors used in the quantization process.
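A minimal sketch of symmetric per-tensor quantization to int8, showing the role of the scale factor that techniques like QAS adjust (the max-based scale and the clipping range below are one common scheme, not the only choice):

```python
import numpy as np

def quantize_int8(x):
    """Map float32 values to int8 plus a scale factor (symmetric quantization)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.0, 0.25], dtype=np.float32)
q, scale = quantize_int8(x)
x_hat = dequantize(q, scale)
# The round trip introduces a small quantization error, bounded by about scale/2.
```

Choosing the scale from the tensor's distribution (rather than its raw maximum) is exactly the kind of adjustment that reduces the quantization error discussed here.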


The QAS process involves two main steps:

  • Quantization-aware training: In this step, the neural network is trained with quantization in mind, using simulated quantization to mimic the effects of quantization during the forward and backward passes. This allows the model to learn to compensate for the quantization errors and improve its performance on low-precision hardware. Refer to the QAT section in Model Optimizations for details.
  • Quantization and scaling: After training, the model is quantized to a low-precision format, and the scale factors are adjusted to minimize the quantization errors. The scale factors are chosen based on the distribution of the weights and activations in the model and are adjusted to ensure that the quantized values are within the range of the low-precision format.

QAS is used to overcome the difficulties of optimizing models on tiny devices without needing hyperparameter tuning; QAS automatically scales tensor gradients with various bit precisions. This stabilizes the training process and matches the accuracy of floating-point precision.


Sparse Updates


Although QAS enables the optimization of a quantized model, it uses a large amount of memory, which is unrealistic for on-device training. So, sparse updates are used to reduce the memory footprint of full backward computation. Instead of pruning weights for inference, sparse updates prune the gradient during backward propagation to update the model sparsely. In other words, sparse updates skip computing gradients of less important layers and sub-tensors.


However, determining the optimal sparse update scheme given a constrained memory budget can be challenging due to the large search space. For example, the MCUNet model has 43 convolutional layers and a search space of approximately 10^30. One technique to address this issue is contribution analysis. Contribution analysis measures the accuracy improvement from biases (updating the last few biases compared to only updating the classifier) and weights (updating the weight of one extra layer compared to only having a bias update). By maximizing these improvements, contribution analysis automatically derives an optimal sparse update scheme for enabling on-device training.
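A simplified sketch of picking which layers to update under a gradient-memory budget, in the spirit of contribution analysis. The layer names, memory costs, and accuracy gains below are invented for illustration, and the greedy gain-per-KB heuristic is a stand-in for the real search procedure:

```python
# Hypothetical per-layer stats: memory cost (KB) to store gradients,
# and measured accuracy gain (%) from unfreezing that layer.
layers = [
    {"name": "conv_final", "grad_mem_kb": 40,  "acc_gain": 1.2},
    {"name": "conv_mid",   "grad_mem_kb": 90,  "acc_gain": 0.9},
    {"name": "conv_early", "grad_mem_kb": 120, "acc_gain": 0.2},
    {"name": "classifier", "grad_mem_kb": 10,  "acc_gain": 2.0},
]

def select_sparse_update(layers, budget_kb):
    """Greedily unfreeze layers with the best accuracy gain per KB of gradient memory."""
    ranked = sorted(layers, key=lambda l: l["acc_gain"] / l["grad_mem_kb"], reverse=True)
    chosen, used = [], 0
    for layer in ranked:
        if used + layer["grad_mem_kb"] <= budget_kb:
            chosen.append(layer["name"])
            used += layer["grad_mem_kb"]
    return chosen, used

chosen, used = select_sparse_update(layers, budget_kb=100)
# With a 100 KB budget, only the classifier and final conv layer get gradients;
# the earlier layers stay frozen, and their gradient computation is skipped.
```

Layers left out of `chosen` never allocate gradient buffers at all, which is where the memory saving comes from.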


Layer-Wise Training


Other methods besides quantization can help optimize routines. One such method is layer-wise training. A significant memory consumer of DNN training is end-to-end backpropagation, which requires all intermediate feature maps to be stored so the model can calculate gradients. An alternative to this approach that reduces the memory footprint of DNN training is sequential layer-by-layer training (T. Chen et al. 2016). Instead of training end-to-end, training a single layer at a time helps avoid having to store intermediate feature maps.

Chen, Tianqi, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. “Training Deep Nets with Sublinear Memory Cost.” ArXiv Preprint abs/1604.06174. https://arxiv.org/abs/1604.06174.

Trading Computation for Memory


The strategy of trading computation for memory involves releasing some of the memory being used to store intermediate results. Instead, these results can be recomputed as needed. Reducing memory in exchange for more computation is shown to reduce the memory footprint of DNN training to fit into almost any budget while also minimizing computational cost (Gruslys et al. 2016).
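A toy sketch of the recompute-instead-of-store idea: a three-layer "network" where each layer squares its input, and the backward pass recomputes each layer's input activation from the original input rather than caching it. (Real gradient checkpointing recomputes from intermediate checkpoints rather than always from the input, but the tradeoff is the same: extra forward computation in exchange for not storing activations.)

```python
# Toy network: three layers, each squaring its input. The chain-rule backward
# pass needs each layer's input activation; here we recompute it on demand.
NUM_LAYERS = 3

def forward(x0, upto):
    """Recompute the activation feeding layer `upto` from the original input."""
    x = x0
    for _ in range(upto):
        x = x * x
    return x

def backward_with_recompute(x0):
    grad = 1.0
    for i in reversed(range(NUM_LAYERS)):
        a_i = forward(x0, i)   # recomputed, not stored: O(1) extra memory
        grad *= 2.0 * a_i      # d(x^2)/dx = 2x at this layer's input
    return grad

# Overall function is f(x) = ((x^2)^2)^2 = x^8, so f'(2) = 8 * 2^7 = 1024.
grad_at_2 = backward_with_recompute(2.0)
```

Storing no activations costs one extra forward pass per layer here; checkpointing schemes interpolate between this extreme and full caching.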

Gruslys, Audrunas, Rémi Munos, Ivo Danihelka, Marc Lanctot, and Alex Graves. 2016. “Memory-Efficient Backpropagation Through Time.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 4125–33. https://proceedings.neurips.cc/paper/2016/hash/a501bebf79d570651ff601788ea9d16d-Abstract.html.

12.3.3 Developing New Data Representations


The dimensionality and volume of the training data can significantly impact on-device adaptation. So, another technique for adapting models onto resource-constrained devices is to represent datasets more efficiently.


Data Compression


The goal of data compression is to reach high accuracies while limiting the amount of training data. One method to achieve this is prioritizing sample complexity: the amount of training data required for the algorithm to reach a target accuracy (Dhar et al. 2021).

Dhar, Sauptik, Junyao Guo, Jiayi (Jason) Liu, Samarth Tripathi, Unmesh Kurup, and Mohak Shah. 2021. “A Survey of On-Device Machine Learning: An Algorithms and Learning Theory Perspective.” ACM Transactions on Internet of Things 2 (3): 1–49. https://doi.org/10.1145/3450494.

Darvish Rouhani, Bita, Azalia Mirhoseini, and Farinaz Koushanfar. 2017. “TinyDL: Just-in-Time Deep Learning Solution for Constrained Embedded Systems.” In 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1–4. IEEE. https://doi.org/10.1109/iscas.2017.8050343.

Li, Xiang, Tao Qin, Jian Yang, and Tie-Yan Liu. 2016. “LightRNN: Memory and Computation-Efficient Recurrent Neural Networks.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 4385–93. https://proceedings.neurips.cc/paper/2016/hash/c3e4035af2a1cde9f21e1ae1951ac80b-Abstract.html.

Other more common methods of data compression focus on reducing the dimensionality and the volume of the training data. For example, an approach could take advantage of matrix sparsity to reduce the memory footprint of storing training data. Training data can be transformed into a lower-dimensional embedding and factorized into a dictionary matrix multiplied by a block-sparse coefficient matrix (Darvish Rouhani, Mirhoseini, and Koushanfar 2017). Another example could involve representing words from a large language training dataset in a more compressed vector format (Li et al. 2016).
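A minimal sketch of the factorization idea using a truncated SVD, standing in for the dictionary-based factorization in the cited work (the dataset sizes are hypothetical): data that lies near a low-dimensional subspace can be stored as a small per-sample code plus one shared decoding matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 200 samples of 64-dim features that actually
# lie in an 8-dimensional subspace, so a compact code loses nothing.
basis = rng.standard_normal((8, 64))
codes = rng.standard_normal((200, 8))
data = codes @ basis

# Keep k singular vectors: store a (200 x k) code matrix plus a shared
# (k x 64) dictionary instead of the full 200 x 64 matrix.
U, S, Vt = np.linalg.svd(data, full_matrices=False)
k = 8
compressed = U[:, :k] * S[:k]   # per-sample low-dimensional embedding
dictionary = Vt[:k]             # shared decoding matrix
reconstruction = compressed @ dictionary

error = np.linalg.norm(data - reconstruction) / np.linalg.norm(data)
stored = compressed.size + dictionary.size   # 2,112 numbers vs. 12,800 originally
```

Real on-device data is only approximately low-rank, so `k` trades storage against reconstruction error rather than achieving the lossless compression of this idealized example.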


12.4 Transfer Learning


Transfer learning is an ML technique in which a model developed for a particular task is reused as the starting point for a model on a second task. In the context of on-device AI, transfer learning allows us to leverage pre-trained models that have already learned useful representations from large datasets and finetune them for specific tasks using smaller datasets directly on the device. This can significantly reduce the computational resources and time required for training models from scratch.


Figure fig-transfer-learning-apps includes some intuitive examples of transfer learning from the real world. For instance, if you can ride a bicycle, you know how to balance yourself on two-wheel vehicles. Then, it would be easier for you to learn how to ride a motorcycle than it would be for someone who cannot ride a bicycle.

Figure 12.4: Transferring knowledge between tasks. Credit: Zhuang et al. (2021).

Zhuang, Fuzhen, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He. 2021. “A Comprehensive Survey on Transfer Learning.” Proc. IEEE 109 (1): 43–76. https://doi.org/10.1109/jproc.2020.3004555.

Let’s take the example of a smart sensor application that uses on-device AI to recognize objects in images captured by the device. Traditionally, this would require sending the image data to a server, where a large neural network model processes the data and sends back the results. With on-device AI, the model is stored and runs directly on-device, eliminating the need to send data to a server.


If we want to customize the model for the on-device characteristics, training a neural network model from scratch on the device would be impractical due to the limited computational resources and battery life. This is where transfer learning comes in. Instead of training a model from scratch, we can take a pre-trained model, such as a convolutional neural network (CNN) or a transformer network trained on a large dataset of images, and finetune it for our specific object recognition task. This finetuning can be done directly on the device using a smaller dataset of images relevant to the task. By leveraging the pre-trained model, we can reduce the computational resources and time required for training while still achieving high accuracy for the object recognition task.


Transfer learning is important in making on-device AI practical by allowing us to leverage pre-trained models and finetune them for specific tasks, thereby reducing the computational resources and time required for training. The combination of on-device AI and transfer learning opens up new possibilities for AI applications that are more privacy-conscious and responsive to user needs.


Transfer learning has revolutionized the way models are developed and deployed, both in the cloud and at the edge. Transfer learning is being used in the real world. One such example is the use of transfer learning to develop AI models that can detect and diagnose diseases from medical images, such as X-rays, MRI scans, and CT scans. For example, researchers at Stanford University developed a transfer learning model that can detect cancer in skin images with an accuracy of 97% (Esteva et al. 2017). This model was pre-trained on 1.28 million images to classify a broad range of objects and then specialized for cancer detection by training on a dermatologist-curated dataset of skin images.

Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. 2017. “Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks.” Nature 542 (7639): 115–18. https://doi.org/10.1038/nature21056.

Implementation in production scenarios can be broadly categorized into two stages: pre-deployment and post-deployment.


12.4.1 Pre-Deployment Specialization


In the pre-deployment stage, transfer learning acts as a catalyst to expedite the development process. Here’s how it typically works: Imagine we are creating a system to recognize different breeds of dogs. Rather than starting from scratch, we can utilize a pre-trained model that has already mastered the broader task of recognizing animals in images.


This pre-trained model serves as a solid foundation and contains a wealth of knowledge acquired from extensive data. We then finetune this model using a specialized dataset containing images of various dog breeds. This finetuning process tailors the model to our specific need — precisely identifying dog breeds. Once finetuned and validated to meet performance criteria, this specialized model is then ready for deployment.


Here’s how it works in practice:

  • Start with a Pre-Trained Model: Begin by selecting a model that has already been trained on a comprehensive dataset, usually related to a general task. This model serves as the foundation for the task at hand.
  • Finetuning: The pre-trained model is then finetuned on a smaller, more specialized dataset specific to the desired task. This step allows the model to adapt and specialize its knowledge to the specific requirements of the application.
  • Validation: After finetuning, the model is validated to ensure it meets the performance criteria for the specialized task.
  • Deployment: Once validated, the specialized model is then deployed into the production environment.

This method significantly reduces the time and computational resources required to train a model from scratch (Pan and Yang 2010). By adopting transfer learning, embedded systems can achieve high accuracy on specialized tasks without the need to gather extensive data or expend significant computational resources on training from the ground up.
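The pre-deployment workflow can be sketched end-to-end. In this toy NumPy version, a frozen random projection stands in for the convolutional base of a real pre-trained network, and only a new linear head is finetuned on the small specialized dataset; all shapes, hyperparameters, and the synthetic data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: "pre-trained" feature extractor -- frozen, never updated.
W_frozen = rng.standard_normal((16, 4))
def features(x):
    return np.tanh(x @ W_frozen)

# Step 2: small specialized dataset (standing in for, e.g., dog-breed images).
X = rng.standard_normal((100, 16))
true_w = rng.standard_normal(4)
y = (features(X) @ true_w > 0).astype(float)

# Finetune ONLY a new linear head with logistic-regression gradient descent.
w = np.zeros(4)
F = features(X)                        # extracted once; the base stays frozen
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w)))  # sigmoid predictions
    grad = F.T @ (p - y) / len(y)       # logistic-loss gradient
    w -= 0.5 * grad

# Step 3: validate the finetuned head before deployment.
accuracy = float(np.mean((F @ w > 0) == (y == 1)))
```

Only the 4 head parameters are trained here, while the 64 frozen base parameters are reused untouched, which is precisely why finetuning needs far less data and compute than training from scratch.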

Pan, Sinno Jialin, and Qiang Yang. 2010. “A Survey on Transfer Learning.” IEEE Trans. Knowl. Data Eng. 22 (10): 1345–59. https://doi.org/10.1109/tkde.2009.191.

12.4.2 Post-Deployment Adaptation


Deployment to a device need not mark the culmination of an ML model’s educational trajectory. With the advent of transfer learning, we open the doors to the deployment of adaptive ML models in real-world scenarios, catering to users’ personalized needs.


Consider a real-world application where a parent wishes to identify their child in a collection of images from a school event on their smartphone. In this scenario, the parent is faced with the challenge of locating their child amidst images of many other children. Transfer learning can be employed here to finetune an embedded system’s model to this unique and specialized task. Initially, the system might use a generic model trained to recognize faces in images. However, with transfer learning, the system can adapt this model to recognize the specific features of the user’s child.


Here’s how it works:

  1. Data Collection: The embedded system gathers images that include the child, ideally with the parent’s input to ensure accuracy and relevance. This can be done directly on the device, maintaining the user’s data privacy.
  2. Model Finetuning: The pre-existing face recognition model, which has been trained on a large and diverse dataset, is then finetuned using the newly collected images of the child. This process adapts the model to recognize the child’s specific facial features, distinguishing them from other children in the images.
  3. Validation: The refined model is then validated to ensure it accurately recognizes the child in various images. This can involve the parent verifying the model’s performance and providing feedback for further improvements.
  4. Deployment: Once validated, the adapted model is deployed on the device, enabling the parent to easily identify their child in images without having to sift through them manually.

This on-the-fly customization enhances the model’s efficacy for the individual user, ensuring that they benefit from ML personalization. This is, in part, how Apple Photos or Google Photos works when they ask us to confirm a face and then, based on that information, index all the photos by that face. Because the learning and adaptation occur on the device itself, there are no risks to personal privacy. The parent’s images are not uploaded to a cloud server or shared with third parties, protecting the family’s privacy while still reaping the benefits of a personalized ML model. This approach represents a significant step forward in the quest to provide users with tailored ML solutions that respect and uphold their privacy.


12.4.3 Benefits


Transfer learning has become an important technique in ML and artificial intelligence, and it is particularly valuable for several reasons.

  1. Data Scarcity: In many real-world scenarios, acquiring a sufficiently large labeled dataset to train an ML model from scratch is challenging. Transfer learning mitigates this issue by allowing the use of pre-trained models that have already learned valuable features from a vast dataset.
  2. Computational Expense: Training a model from scratch requires significant computational resources and time, especially for complex models like deep neural networks. By using transfer learning, we can leverage the computation that has already been done during the training of the source model, thereby saving both time and computational power.
  3. Limited Annotated Data: For some specific tasks, there might be ample raw data available, but the process of labeling that data for supervised learning can be costly and time-consuming. Transfer learning enables us to utilize pre-trained models that have been trained on a related task with labeled data, hence requiring less annotated data for the new task.

There are advantages to reusing the features:

  1. Hierarchical Feature Learning: Deep learning models, particularly Convolutional Neural Networks (CNNs), can learn hierarchical features. Lower layers typically learn generic features like edges and shapes, while higher layers learn more complex and task-specific features. Transfer learning allows us to reuse the generic features learned by a model and finetune the higher layers for our specific task.
  2. Boosting Performance: Transfer learning has been proven to boost the performance of models on tasks with limited data. The knowledge gained from the source task can provide a valuable starting point and lead to faster convergence and improved accuracy on the target task.

Exercise 12.1 (Transfer Learning)  


Imagine training an AI to recognize flowers like a pro, but without needing a million flower pictures! That’s the power of transfer learning. In this Colab, we’ll take an AI that already knows about images and teach it to become a flower expert with less effort. Get ready to make your AI smarter, not harder!



12.4.4 Core Concepts


Understanding the core concepts of transfer learning is essential for effectively utilizing this powerful approach in ML. Here, we’ll break down some of the main principles and components that underlie the process of transfer learning.


Source and Target Tasks


In transfer learning, there are two main tasks involved: the source task and the target task. The source task is the task for which the model has already been trained and has learned valuable information. The target task is the new task we want the model to perform. The goal of transfer learning is to leverage the knowledge gained from the source task to improve performance on the target task.


Suppose we have a model trained to recognize various fruits in images (source task), and we want to create a new model to recognize different vegetables in images (target task). In that case, we can use transfer learning to leverage the knowledge gained during the fruit recognition task to improve the performance of the vegetable recognition model.


Representation Transfer


Representation transfer is about transferring the learned representations (features) from the source task to the target task. There are three main types of representation transfer:

  • Instance Transfer: This involves reusing the data instances from the source task in the target task.
  • Feature-Representation Transfer: This involves transferring the learned feature representations from the source task to the target task.
  • Parameter Transfer: This involves transferring the model's learned parameters (weights) from the source task to the target task.

In natural language processing, a model trained to understand the syntax and grammar of a language (source task) can have its learned representations transferred to a new model designed to perform sentiment analysis (target task).


Finetuning


Finetuning is the process of adjusting the parameters of a pre-trained model to adapt it to the target task. This typically involves updating the weights of the model’s layers, especially the last few layers, to make the model more relevant for the new task. In image classification, a model pre-trained on a general dataset like ImageNet (source task) can be finetuned by adjusting the weights of its layers to perform well on a specific classification task, like recognizing specific animal species (target task).
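The freeze-then-adapt idea can be shown in a few lines of plain Python. This is a minimal sketch, not any specific framework's API: the feature vectors stand in for the output of a frozen pre-trained backbone, and only the new logistic head's weights are updated by gradient descent, mirroring finetuning of the last layer.

```python
import math

# Features assumed to come from a frozen, pre-trained backbone.
features = [[0.2, 0.9], [0.1, 0.8], [0.9, 0.1], [0.8, 0.2]]
labels = [1, 1, 0, 0]  # small labeled target-task dataset

# Only the new head's parameters are trained; the backbone stays fixed,
# which is the essence of finetuning just the last few layers.
w, b, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # logistic head

for _ in range(200):  # gradient descent on the head only
    for x, y in zip(features, labels):
        err = predict(x) - y  # dLoss/dlogit for logistic loss
        for i in range(len(w)):
            w[i] -= lr * err * x[i]
        b -= lr * err

preds = [round(predict(x)) for x in features]
```

Because the backbone's weights never change, the compute and data needed on the target task are tiny compared to training from scratch.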


Feature Extraction


Feature extraction involves using a pre-trained model as a fixed feature extractor, where the output of the model’s intermediate layers is used as features for the target task. This approach is particularly useful when the target task has a small dataset, as the pre-trained model’s learned features can significantly enhance performance. In medical image analysis, a model pre-trained on a large dataset of general medical images (source task) can be used as a feature extractor to provide valuable features for a new model designed to recognize specific types of tumors in X-ray images (target task).
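A sketch of this idea in plain Python, under stated assumptions: the "pre-trained" network below has hand-set weights standing in for weights learned on a large source dataset, and its first-layer activations are used as fixed features for a simple nearest-centroid classifier on the target task. All names are illustrative.

```python
# Toy "pre-trained" layer with hand-set weights; in practice these
# would come from training on a large source dataset.
W1 = [[1.0, -1.0], [-1.0, 1.0]]

def extract_features(x):
    """Use the frozen first layer's ReLU activations as fixed features."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]

def nearest_centroid(feats, centroids):
    dists = {c: sum((f - m) ** 2 for f, m in zip(feats, mean))
             for c, mean in centroids.items()}
    return min(dists, key=dists.get)

# Tiny target-task dataset: fit one centroid per class in feature space.
data = {"tumor": [[1.0, 0.2]], "healthy": [[0.2, 1.0]]}
centroids = {}
for cls, samples in data.items():
    feats = [extract_features(s) for s in samples]
    centroids[cls] = [sum(f[d] for f in feats) / len(feats) for d in range(2)]

pred = nearest_centroid(extract_features([0.9, 0.1]), centroids)
```

Unlike finetuning, no weights are updated here at all; the pre-trained model is used purely as a feature transform, which is why this works even with very small target datasets.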


12.4.5 Types of Transfer Learning


Transfer learning can be classified into three main types based on the nature of the source and target tasks and data. Let’s explore each type in detail:


Inductive Transfer Learning


In inductive transfer learning, the goal is to learn the target predictive function with the help of source data. It typically involves finetuning a pre-trained model on the target task with available labeled data. A common example of inductive transfer learning is image classification tasks. For instance, a model pre-trained on the ImageNet dataset (source task) can be finetuned to classify specific types of birds (target task) using a smaller labeled dataset of bird images.


Transductive Transfer Learning


Transductive transfer learning involves using both source and target data, but labeled data is available only for the source task. The main aim is to transfer knowledge from the source domain to the target domain, even though the tasks remain the same. Sentiment analysis for different languages can serve as an example of transductive transfer learning. A model trained to perform sentiment analysis in English (source task) can be adapted to perform sentiment analysis in another language, like French (target task), by leveraging parallel datasets of English and French sentences with the same sentiments.


Unsupervised Transfer Learning


Unsupervised transfer learning is used when the source and target tasks are related, but there is no labeled data available for the target task. The goal is to leverage the knowledge gained from the source task to improve performance on the target task, even without labeled data. An example of unsupervised transfer learning is topic modeling in text data. A model trained to extract topics from news articles (source task) can be adapted to extract topics from social media posts (target task) without needing labeled data for the social media posts.


Comparison and Tradeoffs


By leveraging these different types of transfer learning, practitioners can choose the approach that best fits the nature of their tasks and available data, ultimately leading to more effective and efficient ML models. So, in summary:

  • Inductive: source and target tasks can differ; labeled data for the target task is required
  • Transductive: same task, but source and target domains differ; no labeled target data
  • Unsupervised: no labeled data for the target task; transfers learned feature representations

Table tbl-tltypes presents a matrix that outlines in a bit more detail the similarities and differences between the types of transfer learning:

Table 12.1: Comparison of transfer learning types.

|                              | Inductive Transfer Learning                      | Transductive Transfer Learning                  | Unsupervised Transfer Learning                                              |
| Labeled Data for Target Task | Required                                         | Not Required                                    | Not Required                                                                |
| Source Task                  | Can be different                                 | Same                                            | Same or Different                                                           |
| Target Task                  | Can be different                                 | Same                                            | Can be different                                                            |
| Objective                    | Improve target task performance with source data | Transfer knowledge from source to target domain | Leverage source task to improve target task performance without labeled data |
| Example                      | ImageNet to bird classification                  | Sentiment analysis in different languages       | Topic modeling for different text data                                      |

12.4.6 Constraints and Considerations


When engaging in transfer learning, there are several factors that must be considered to ensure successful knowledge transfer and model performance. Here’s a breakdown of some key factors:


Domain Similarity


Domain similarity refers to how closely related the source and target domains are. The more similar the domains, the more likely the transfer learning will be successful. Transferring knowledge from a model trained on images of outdoor scenes (source domain) to a new task that involves recognizing objects in indoor scenes (target domain) might be more successful than transferring knowledge from outdoor scenes to a task involving text analysis, as the domains (images vs. text) are quite different.


Task Similarity


Task similarity refers to how closely related the source and target tasks are. Similar tasks are likely to benefit more from transfer learning. A model trained to recognize different breeds of dogs (source task) can be more easily adapted to recognize different breeds of cats (target task) than it can be adapted to perform a completely different task like language translation.


Data Quality and Quantity


The quality and quantity of data available for the target task can significantly impact the success of transfer learning. More high-quality data can result in better model performance. Suppose we have a large dataset with clear, well-labeled images to recognize specific bird species. In that case, the transfer learning process will likely be more successful than if we have a small, noisy dataset.


Feature Space Overlap


Feature space overlap refers to how well the features learned by the source model align with the features needed for the target task. Greater overlap can lead to more successful transfer learning. A model trained on high-resolution images (source task) may not transfer well to a target task that involves low-resolution images, as the feature space (high-res vs. low-res) is different.


Model Complexity


The complexity of the source model can also impact the success of transfer learning. Sometimes, a simpler model might transfer better than a complex one, as it is less likely to overfit the source task. For example, a simple convolutional neural network (CNN) model trained on image data (source task) may transfer more successfully to a new image classification task (target task) than a complex CNN with many layers, as the simpler model is less likely to overfit the source task.


By considering these factors, ML practitioners can make informed decisions about when and how to utilize transfer learning, ultimately leading to more successful model performance on the target task. The success of transfer learning hinges on the degree of similarity between the source and target domains. Overfitting is risky, especially when finetuning occurs on a limited dataset. On the computational front, certain pre-trained models, owing to their size, might not comfortably fit into the memory constraints of some devices or may run prohibitively slowly. Over time, as data evolves, there is potential for model drift, indicating the need for periodic re-training or ongoing adaptation.


Learn more about transfer learning in the video below.


12.5 Federated Machine Learning


Federated Learning Overview


The modern internet is full of large networks of connected devices. Whether it's cell phones, thermostats, smart speakers, or other IoT products, countless edge devices are a goldmine of hyper-personalized, rich data. However, with that rich data comes an assortment of problems with information transfer and privacy. Constructing a training dataset in the cloud from these devices would require high bandwidth and costly data transfer, and it would violate users' privacy.


Federated learning offers a solution to these problems: train models partially on the edge devices and only communicate model updates to the cloud. In 2016, a team from Google designed an architecture for federated learning that attempts to address these problems.


In their initial paper, Google outlines a principal federated learning algorithm called FederatedAveraging, which is shown in Figure fig-federated-avg-algo. Specifically, FederatedAveraging performs stochastic gradient descent (SGD) over several different edge devices. In this process, each device \(k\) calculates a gradient \(g_k = \nabla F_k(w_t)\), which is then used to update the server-side weights (with \(\eta\) as the learning rate and \(n_k\) the number of examples on each of the \(K\) clients): \[w_{t+1} \leftarrow w_t - \eta \sum_{k=1}^{K} \frac{n_k}{n}g_k\] For each round of training, the server takes a random set of client devices and calls each client to train on its local batch using the most recent server-side weights. Those weights are then returned to the server, where they are collected individually and averaged to update the global model weights.
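The server-side update above can be written in a few lines of plain Python. This is a minimal sketch of one FederatedAveraging round; the client gradients and example counts below are made up for illustration.

```python
def fedavg_step(w, client_grads, client_sizes, lr=0.1):
    """One round: w <- w - lr * sum_k (n_k / n) * g_k."""
    n = sum(client_sizes)  # total examples across participating clients
    new_w = list(w)
    for g_k, n_k in zip(client_grads, client_sizes):
        for i, g in enumerate(g_k):
            new_w[i] -= lr * (n_k / n) * g  # weight each client by n_k/n
    return new_w

# Two clients with different data volumes; the larger one pulls harder.
w = [1.0, -1.0]
grads = [[0.5, 0.5], [-0.5, 1.5]]  # g_k computed locally on each device
sizes = [30, 10]                   # n_k: local example counts
w_next = fedavg_step(w, grads, sizes)
```

The weighting by \(n_k/n\) is what makes the aggregate equivalent to a gradient step over the pooled data, without that data ever leaving the devices.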

Figure 12.5: Google's Proposed FederatedAverage Algorithm. Credit: McMahan et al. (2017).

With this proposed structure, there are a few key vectors for further optimizing federated learning. We will outline each in the following subsections.


The following video is an overview of federated learning.


12.5.1 Communication Efficiency


One of the key bottlenecks in federated learning is communication. Every time a client trains the model, they must communicate their updates back to the server. Similarly, once the server has averaged all the updates, it must send them back to the client. This incurs huge bandwidth and resource costs on large networks of millions of devices. As the field of federated learning advances, a few optimizations have been developed to minimize this communication. To address the footprint of the model, researchers have developed model compression techniques. In the client-server protocol, federated learning can also minimize communication through the selective sharing of updates on clients. Finally, efficient aggregation techniques can also streamline the communication process.


12.5.2 Model Compression


In standard federated learning, the server communicates the entire model to each client, and then the client sends back all of the updated weights. This means that the easiest way to reduce the client’s memory and communication footprint is to minimize the size of the model needed to be communicated. We can employ all of the previously discussed model optimization strategies to do this.


In 2022, another team at Google proposed that each client communicates via a compressed format and decompresses the model on the fly for training (Yang et al. 2023), allocating and deallocating the full memory for the model only for a short period while training. The model is compressed through a range of various quantization strategies elaborated upon in their paper. Meanwhile, the server can update the uncompressed model by decompressing and applying updates as they come in.
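To make the idea concrete, here is a minimal sketch of 8-bit linear quantization of a weight vector. This is not Google's exact scheme (their paper uses a range of quantization strategies); it only illustrates how the communicated payload shrinks to small integers plus a scale and offset, with on-the-fly dequantization before local training.

```python
def quantize(weights, bits=8):
    """Linear quantization: map floats onto 2^bits - 1 integer levels."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels or 1.0  # guard against a constant vector
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo  # this is what travels over the network

def dequantize(q, scale, lo):
    """Decompress on the fly just before training."""
    return [qi * scale + lo for qi in q]

weights = [0.0, 0.5, -0.25, 1.0]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Each weight now costs one byte instead of four (or eight), and the reconstruction error is bounded by half a quantization step.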

Yang, Tien-Ju, Yonghui Xiao, Giovanni Motta, Françoise Beaufays, Rajiv Mathews, and Mingqing Chen. 2023. “Online Model Compression for Federated Learning with Large Models.” In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1–5. IEEE. https://doi.org/10.1109/icassp49357.2023.10097124.

12.5.3 Selective Update Sharing


There are many methods for selectively sharing updates. The general principle is that reducing the portion of the model that the clients are training on the edge reduces the memory necessary for training and the size of communication to the server. In basic federated learning, the client trains the entire model. This means that when a client sends an update to the server, it has gradients for every weight in the network.


However, we cannot just reduce communication by sending pieces of those gradients from each client to the server because the gradients are part of an entire update required to improve the model. Instead, you need to architecturally design the model such that each client trains only a small portion of the broader model, reducing the total communication while still gaining the benefit of training on client data. A paper (Shi and Radu 2022) from the University of Sheffield applies this concept to a CNN by splitting the global model into two parts: an upper and a lower part, as shown in Z. Chen and Xu (2023).

Shi, Hongrui, and Valentin Radu. 2022. “Data Selection for Efficient Model Update in Federated Learning.” In Proceedings of the 2nd European Workshop on Machine Learning and Systems, 72–78. ACM. https://doi.org/10.1145/3517207.3526980.

Figure 12.6: Split model architecture for selective sharing. Credit: Shi et al. (2022).

The lower part is designed to focus on generic features in the dataset, while the upper part, trained on those generic features, is designed to be more sensitive to the activation maps. This means that the lower part of the model is trained through standard federated averaging across all of the clients. Meanwhile, the upper part of the model is trained entirely on the server side from the activation maps generated by the clients. This approach drastically reduces communication for the model while still making the network robust to various types of input found in the data on the client devices.


12.5.4 Optimized Aggregation


In addition to reducing the communication overhead, optimizing the aggregation function can improve model training speed and accuracy in certain federated learning use cases. While the standard for aggregation is simple averaging, various other approaches can improve model efficiency, accuracy, and security. One alternative is clipped averaging, which clips the model updates within a specific range. Another strategy to preserve security is differential privacy average aggregation. This approach integrates differential privacy into the aggregation step to protect client identities. Each client adds a layer of random noise to their updates before communicating them to the server. The server then aggregates these noisy updates into the global model, meaning that the amount of noise needs to be tuned carefully to balance privacy and accuracy.
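A minimal sketch of this clip-and-noise pattern in plain Python, assuming an L2 clipping bound and Gaussian noise (the bound, noise scale, and client updates below are illustrative, and a real deployment would calibrate the noise to a formal privacy budget):

```python
import math
import random

def clip(update, bound):
    """Limit the update's L2 norm so no single client dominates."""
    norm = math.sqrt(sum(u * u for u in update))
    if norm > bound:
        return [u * bound / norm for u in update]
    return update

def privatize(update, bound, noise_std, rng):
    """Clip, then add Gaussian noise on-device before communicating."""
    clipped = clip(update, bound)
    return [u + rng.gauss(0.0, noise_std) for u in clipped]

def aggregate(noisy_updates):
    """Server-side: plain average of the already-noised updates."""
    dim, k = len(noisy_updates[0]), len(noisy_updates)
    return [sum(u[d] for u in noisy_updates) / k for d in range(dim)]

rng = random.Random(0)
updates = [[3.0, 4.0], [0.1, -0.2], [-1.0, 1.0]]
noisy = [privatize(u, bound=1.0, noise_std=0.01, rng=rng) for u in updates]
global_update = aggregate(noisy)
```

Raising `noise_std` strengthens privacy but degrades the averaged update, which is exactly the privacy-accuracy tradeoff described above.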


In addition to security-enhancing aggregation methods, there are several modifications to the aggregation methods that can improve training speed and performance by adding client metadata along with the weight updates. Momentum aggregation is a technique that helps address the convergence problem. In federated learning, client data can be extremely heterogeneous depending on the different environments in which the devices are used, and models trained on such heterogeneous data may struggle to converge. Each client stores a momentum term locally, which tracks the pace of change over several updates. With clients communicating this momentum, the server can factor in the rate of change of each update when changing the global model to accelerate convergence. Similarly, weighted aggregation can factor in client performance or other parameters like device type or network connection strength to adjust the weight with which the server incorporates each model update. Specific aggregation algorithms are described further by Moshawrab et al. (2023).

Moshawrab, Mohammad, Mehdi Adda, Abdenour Bouzouane, Hussein Ibrahim, and Ali Raad. 2023. “Reviewing Federated Learning Aggregation Algorithms; Strategies, Contributions, Limitations and Future Perspectives.” Electronics 12 (10): 2287. https://doi.org/10.3390/electronics12102287.

12.5.5 Handling non-IID Data


When using federated learning to train a model across many client devices, it is convenient to consider the data to be independent and identically distributed (IID) across all clients. When data is IID, the model will converge faster and perform better because each local update on any given client is more representative of the broader dataset. This makes aggregation straightforward, as you can directly average all clients. However, this differs from how data often appears in the real world. Consider a few of the following ways in which data may be non-IID:

  • If you are learning on a set of health-monitor devices, different device models could mean different sensor qualities and properties. Low-quality sensors and devices may therefore produce data, and model updates, distinctly different from high-quality ones.
  • Consider a smart keyboard trained to perform autocorrect. If a disproportionate number of devices come from a certain region, the slang, sentence structure, or even language used there could skew model updates toward a certain style of typing.
  • If you have wildlife sensors in remote areas, connectivity may not be equally distributed, causing clients in certain regions to send fewer model updates than others. If those regions have different wildlife activity from certain species, that could skew the updates toward those animals.

There are a few approaches to addressing non-IID data in federated learning. One approach would be to change the aggregation algorithm. If you use a weighted aggregation algorithm, you can adjust based on different client properties like region, sensor properties, or connectivity (Zhao et al. 2018).

Zhao, Yue, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. 2018. “Federated Learning with Non-IID Data.” ArXiv Preprint abs/1806.00582. https://arxiv.org/abs/1806.00582.

12.5.6 Client Selection


Considering all of the factors influencing the efficacy of federated learning, like IID data and communication, client selection is a key component to ensuring a system trains well. Selecting the wrong clients can skew the dataset, resulting in non-IID data. Similarly, choosing clients randomly with bad network connections can slow down communication. Therefore, several key characteristics must be considered when selecting the right subset of clients.


When selecting clients, there are three main components to consider: data heterogeneity, resource allocation, and communication cost. To address data heterogeneity, we can select clients based on the metrics proposed in the non-IID section. In federated learning, devices may have different amounts of compute available, making some more inefficient at training than others. When selecting a subset of clients for training, one must balance data heterogeneity against available resources. In an ideal scenario, you could always select the subset of clients with the greatest resources. However, this may skew your dataset, so a balance must be struck. Communication differences add another layer; you want to avoid being bottlenecked by waiting for devices with poor connections to transmit all their updates. Therefore, you must also consider choosing a subset of diverse yet well-connected devices.
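One way to express this balance is a scoring function over client metadata, picking the best-connected, best-resourced client per data region before filling remaining slots. This is a hypothetical sketch; the field names, weights, and region-first heuristic are illustrative, not a standard algorithm.

```python
def score(client):
    """Blend available compute and connection quality (both 0..1)."""
    return 0.5 * client["compute"] + 0.5 * client["bandwidth"]

def select_clients(clients, k):
    """Pick the best-scoring client per region first, to keep the
    sampled data heterogeneous, then fill remaining slots by score."""
    by_region = {}
    for c in clients:
        best = by_region.get(c["region"])
        if best is None or score(c) > score(best):
            by_region[c["region"]] = c
    chosen = sorted(by_region.values(), key=score, reverse=True)[:k]
    remaining = [c for c in clients if c not in chosen]
    chosen += sorted(remaining, key=score, reverse=True)[:k - len(chosen)]
    return [c["id"] for c in chosen]

clients = [
    {"id": "a", "region": "eu", "compute": 0.9, "bandwidth": 0.8},
    {"id": "b", "region": "eu", "compute": 0.7, "bandwidth": 0.9},
    {"id": "c", "region": "us", "compute": 0.4, "bandwidth": 0.5},
    {"id": "d", "region": "us", "compute": 0.2, "bandwidth": 0.3},
]
selected = select_clients(clients, k=2)
```

Note that a pure top-k by score would pick both "eu" clients and skew the data; the per-region pass trades a little raw capability for heterogeneity.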


12.5.7 An Example of Deployed Federated Learning: Gboard


A primary example of a deployed federated learning system is Google’s Keyboard, Gboard, for Android devices. In implementing federated learning for the keyboard, Google focused on employing differential privacy techniques to protect the user’s data and identity. Gboard leverages language models for several key features, such as Next Word Prediction (NWP), Smart Compose (SC), and On-The-Fly rescoring (OTF) (Xu et al. 2023), as shown in Figure fig-gboard-features.

Xu, Zheng, Yanxiang Zhang, Galen Andrew, Christopher A Choquette-Choo, Peter Kairouz, H Brendan McMahan, Jesse Rosenstock, and Yuanbo Zhang. 2023. “Federated Learning of Gboard Language Models with Differential Privacy.” ArXiv Preprint abs/2305.18465. https://arxiv.org/abs/2305.18465.

NWP will anticipate the next word the user tries to type based on the previous one. SC gives inline suggestions to speed up the typing based on each character. OTF will re-rank the proposed next words based on the active typing process. All three of these models need to run quickly on the edge, and federated learning can accelerate training on the users’ data. However, uploading every word a user typed to the cloud for training would be a massive privacy violation. Therefore, federated learning emphasizes differential privacy, which protects the user while enabling a better user experience.

Figure 12.7: Google Gboard Features. Credit: Zheng et al. (2023).

To accomplish this goal, Google employed its algorithm DP-FTRL, which provides a formal guarantee that trained models will not memorize specific user data or identities. The algorithm system design is shown in Figure fig-differential-privacy. DP-FTRL, combined with secure aggregation, encrypts model updates and provides an optimal balance of privacy and utility. Furthermore, adaptive clipping is applied in the aggregation process to limit the impact of individual users on the global model (step 3 in Figure fig-differential-privacy). By combining all these techniques, Google can continuously refine its keyboard while preserving user privacy in a formally provable way.

Figure 12.8: Differential Privacy in Gboard. Credit: Zheng et al. (2023).

Exercise 12.2 (Federated Learning - Text Generation)  


Have you ever used those smart keyboards to suggest the next word? With federated learning, we can make them even better without sacrificing privacy. In this Colab, we’ll teach an AI to predict words by training on text data spread across devices. Get ready to make your typing even smoother!



Exercise 12.3 (Federated Learning - Image Classification)  


Want to train an image-savvy AI without sending your photos to the cloud? Federated learning is the answer! In this Colab, we’ll train a model across multiple devices, each learning from its images. Privacy is protected, and teamwork makes the AI dream work!



12.5.8 Benchmarking for Federated Learning: MedPerf


One of the richest examples of data on the edge is medical devices. These devices store some of the most personal data on users but offer huge advances in personalized treatment and better accuracy in medical AI. Given these two factors, medical devices are the perfect use case for federated learning. MedPerf is an open-source platform used to benchmark models using federated evaluation (Karargyris et al. 2023). Instead of just training models via federated learning, MedPerf takes the model to edge devices to test it against personalized data while preserving privacy. In this way, a benchmark committee can evaluate various models in the real world on edge devices while still preserving patient anonymity.

Karargyris, Alexandros, Renato Umeton, Micah J Sheller, Alejandro Aristizabal, Johnu George, Anna Wuest, Sarthak Pati, et al. 2023. “Federated Benchmarking of Medical Artificial Intelligence with MedPerf.” Nature Machine Intelligence 5 (7): 799–810. https://doi.org/10.1038/s42256-023-00652-2.

12.6 Security Concerns


Performing ML model training and adaptation on end-user devices also introduces security risks that must be addressed. Some key security concerns include:

  • Exposure of private data: Training data may be leaked or stolen from devices
  • Data poisoning: Adversaries can manipulate training data to degrade model performance
  • Model extraction: Attackers may attempt to steal trained model parameters
  • Membership inference: Models may reveal the participation of specific users’ data
  • Evasion attacks: Specially crafted inputs can cause misclassification

Any system that performs learning on-device introduces security concerns, as it may expose vulnerabilities in larger-scale models. Numerous security risks are associated with any ML model, but these risks have specific consequences for on-device learning. Fortunately, there are methods to mitigate these risks and improve the real-world performance of on-device learning.


12.6.1 Data Poisoning


On-device ML introduces unique data security challenges compared to traditional cloud-based training. In particular, data poisoning attacks pose a serious threat during on-device learning. Adversaries can manipulate training data to degrade model performance when deployed.


Several data poisoning attack techniques exist:

  • Label Flipping: This involves applying incorrect labels to samples. For instance, in image classification, cat photos may be labeled as dogs to confuse the model. Flipping even 10% of labels can have significant consequences for the model.
  • Data Insertion: This introduces fake or distorted inputs into the training set. This could include pixelated images, noisy audio, or garbled text.
  • Logic Corruption: This alters the underlying patterns (https://www.worldscientific.com/doi/10.1142/S0218001414600027) in data to mislead the model. In sentiment analysis, highly negative reviews may be marked positive through this technique. For this reason, recent surveys have shown that many companies are more afraid of data poisoning than other adversarial ML concerns.
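Label flipping, the first technique above, is simple enough to sketch directly. This is an illustrative attack simulation in plain Python, using the chapter's 10% figure: an adversary silently flips a fraction of binary labels (cat becomes dog and vice versa) before training ever starts.

```python
import random

def flip_labels(labels, fraction, rng):
    """Poison a dataset by flipping `fraction` of its binary labels."""
    poisoned = list(labels)
    n_flip = int(len(labels) * fraction)
    for i in rng.sample(range(len(labels)), n_flip):
        poisoned[i] = 1 - poisoned[i]  # cat (0) becomes dog (1), and back
    return poisoned

rng = random.Random(42)
clean = [0, 1] * 50                    # 100 labels: cats and dogs
poisoned = flip_labels(clean, 0.10, rng)
n_changed = sum(a != b for a, b in zip(clean, poisoned))
```

The poisoned set is statistically almost indistinguishable from the clean one (same size, near-identical class balance), which is why label flipping is hard to catch without provenance tracking or anomaly detection.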

What makes data poisoning alarming is how it exploits the discrepancy between curated datasets and live training data. Consider a cat photo dataset collected from the internet. Weeks later, when this data is used to train a model on-device, new cat photos on the web may differ significantly from the curated set.


With data poisoning, attackers purchase domains and upload content that influences a portion of the training data. Even small data changes significantly impact the model’s learned behavior. Consequently, poisoning can instill racist, sexist, or other harmful biases if unchecked.


Microsoft Tay was a chatbot launched by Microsoft in 2016. It was designed to learn from its interactions with users on social media platforms like Twitter. Unfortunately, Microsoft Tay became a prime example of data poisoning in ML models. Within 24 hours of its launch, Microsoft had to take Tay offline because it had started producing offensive and inappropriate messages, including hate speech and racist comments. This occurred because some users on social media intentionally fed Tay with harmful and offensive input, which the chatbot then learned from and incorporated into its responses.


This incident is a clear example of data poisoning because malicious actors intentionally manipulated the data used to train and inform the chatbot’s responses. The data poisoning resulted in the chatbot adopting harmful biases and producing output that its developers did not intend. It demonstrates how even small amounts of maliciously crafted data can significantly impact the behavior of ML models and highlights the importance of implementing robust data filtering and validation mechanisms to prevent such incidents from occurring.


Such biases could have dangerous real-world impacts. Rigorous data validation, anomaly detection, and tracking of data provenance are critical defensive measures. Adopting frameworks like Five Safes ensures models are trained on high-quality, representative data (Desai et al. 2016).

Desai, Tanvi, Felix Ritchie, Richard Welpton, et al. 2016. “Five Safes: Designing Data Access for Research.” Economics Working Paper Series 1601: 28.

Data poisoning is a pressing concern for secure on-device learning since data at the endpoint cannot be easily monitored in real-time. If models are allowed to adapt on their own, then we run the risk of the device acting maliciously. However, continued research in adversarial ML aims to develop robust solutions to detect and mitigate such data attacks.


12.6.2 Adversarial Attacks


During the training phase, attackers might inject malicious data into the training dataset, which can subtly alter the model’s behavior. For example, an attacker could add images of cats labeled as dogs to a dataset used to train an image classification model. If done cleverly, the model’s overall accuracy might not drop significantly, and the attack could go unnoticed. The model would then incorrectly classify some cats as dogs, which could have serious consequences depending on the application.


In an embedded security camera system, for instance, this could allow an intruder to avoid detection by wearing a specific pattern that the model has been tricked into classifying as non-threatening.


During the inference phase, attackers can use adversarial examples to fool the model. Adversarial examples are inputs that have been slightly altered in a way that causes the model to make incorrect predictions. For instance, an attacker might add a small amount of noise to an image in a way that causes a face recognition system to misidentify a person. These attacks can be particularly concerning in applications where safety is at stake, such as autonomous vehicles. In one well-known demonstration, researchers caused a traffic sign recognition system to misclassify a stop sign as a speed limit sign. This type of misclassification could lead to accidents if it occurred in a real-world autonomous driving system.
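To make the idea concrete, here is a minimal, hypothetical sketch of how an adversarial example works for the simplest possible model: for a linear classifier, the fast-gradient-sign perturbation moves each feature a tiny amount in the direction that most hurts the prediction. The weights, input, and perturbation budget below are invented for illustration.

```python
# Illustrative sketch: for a linear classifier score(x) = w . x,
# the FGSM-style perturbation x' = x - eps * sign(w) lowers the
# score of the predicted class while changing each feature by at
# most eps. Values are invented for illustration.

def score(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

def fgsm_linear(x, w, eps):
    """Perturb each feature of x by at most eps to lower score(x, w)."""
    return [xi - eps * (1 if wi > 0 else -1 if wi < 0 else 0)
            for xi, wi in zip(x, w)]

x = [0.5, 0.5, 0.5]        # clean input, classified as the positive class
w = [2.0, -1.5, 1.0]       # weights of a hypothetical classifier
x_adv = fgsm_linear(x, w, eps=0.2)

print(score(x, w) > 0)      # True: clean input classified positive
print(score(x_adv, w) > 0)  # False: a small perturbation flips the class
```

Note how each feature changed by only 0.2, yet the predicted class flipped; deep networks exhibit the same fragility with imperceptible pixel-level noise.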


To mitigate these risks, several defenses can be employed:

  • Data Validation and Sanitization: Before incorporating new data into the training dataset, it should be thoroughly validated and sanitized to ensure it is not malicious.
  • Adversarial Training: The model can be trained on adversarial examples to make it more robust to these types of attacks.
  • Input Validation: During inference, inputs should be validated to ensure they have not been manipulated to create adversarial examples.
  • Regular Auditing and Monitoring: Regularly auditing and monitoring the model’s behavior can help detect and mitigate adversarial attacks. However, this is easier said than done in the context of tiny ML systems. It is often hard to monitor embedded ML systems at the endpoint due to communication bandwidth limitations, which we will discuss in the MLOps chapter.

By understanding the potential risks and implementing these defenses, we can help secure on-device training at the endpoint/edge and mitigate the impact of adversarial attacks. Data poisoning and adversarial attacks are easily confused, so Table 12.2 compares the two:

Table 12.2: Comparison of data poisoning and adversarial attacks.

| Aspect | Data Poisoning | Adversarial Attacks |
|---|---|---|
| Timing | Training phase | Inference phase |
| Target | Training data | Input data |
| Goal | Negatively affect model’s performance | Cause incorrect predictions |
| Method | Insert malicious examples into training data, often with incorrect labels | Add carefully crafted noise to input data |
| Example | Adding images of cats labeled as dogs to a dataset used for training an image classification model | Adding a small amount of noise to an image in a way that causes a face recognition system to misidentify a person |
| Potential Effects | Model learns incorrect patterns and makes incorrect predictions | Immediate and potentially dangerous incorrect predictions |
| Applications Affected | Any ML model | Autonomous vehicles, security systems, etc. |

12.6.3 Model Inversion


Model inversion attacks are a privacy threat to on-device machine learning models trained on sensitive user data (Nguyen et al. 2023). Understanding this attack vector and mitigation strategies will be important for building secure and ethical on-device AI. For example, imagine an iPhone app that uses on-device learning to categorize photos in your camera roll into groups like “beach,” “food,” or “selfies” for easier searching.

Nguyen, Ngoc-Bao, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, and Ngai-Man Cheung. 2023. “Re-Thinking Model Inversion Attacks Against Deep Neural Networks.” In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 16384–93. IEEE. https://doi.org/10.1109/cvpr52729.2023.01572.

The on-device model may be trained by Apple on a dataset of iCloud photos from consenting users. A malicious attacker could attempt to extract parts of those original iCloud training photos using model inversion. Specifically, the attacker feeds crafted synthetic inputs into the on-device photo classifier. By tweaking the synthetic inputs and observing how the model categorizes them, they can refine the inputs until they reconstruct copies of the original training data - like a beach photo from a user’s iCloud. Now, the attacker has breached that user’s privacy by obtaining one of their photos without consent. This demonstrates why model inversion is dangerous - it can potentially leak highly sensitive training data.


Photos are an especially high-risk data type because they often contain identifiable people, location information, and private moments. However, the same attack methodology could apply to other personal data, such as audio recordings, text messages, or users’ health data.


To defend against model inversion, one would need to take precautions like adding noise to the model outputs or using privacy-preserving machine learning techniques like federated learning to train the on-device model. The goal is to prevent attackers from being able to reconstruct the original training data.
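One of the defenses mentioned above, adding noise to the model outputs, can be sketched in a few lines. The idea is to perturb and coarsen the confidence scores a model returns, so an attacker probing the model with synthetic inputs gets much less signal per query. The noise scale and rounding precision below are illustrative choices, not recommended settings.

```python
import random

# Sketch of output perturbation as a model-inversion mitigation:
# coarsen and add noise to confidence scores so that repeated
# queries leak less information about the training data.

def protect_outputs(probs, noise=0.05, decimals=1, rng=None):
    rng = rng or random.Random(0)
    noisy = [max(0.0, p + rng.uniform(-noise, noise)) for p in probs]
    total = sum(noisy) or 1.0                  # renormalize to a distribution
    return [round(p / total, decimals) for p in noisy]

raw = [0.8731, 0.0912, 0.0357]   # fine-grained scores leak gradient signal
coarse = protect_outputs(raw)
print(all(0.0 <= p <= 1.0 for p in coarse))   # True
```

The trade-off is accuracy of the reported confidences: legitimate clients also see coarser scores, so the noise level must balance utility against leakage.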


12.6.4 On-Device Learning Security Concerns


While data poisoning and adversarial attacks are common concerns for ML models in general, on-device learning introduces unique security risks. When on-device variants of large-scale models are published, adversaries can exploit these smaller models to attack their larger counterparts. Research has demonstrated that as on-device models and full-scale models become more similar, the vulnerability of the original large-scale models increases significantly. For instance, evaluations across 19 Deep Neural Networks (DNNs) revealed that exploiting on-device models could increase the vulnerability of the original large-scale models by up to 100 times.


There are three primary types of security risks specific to on-device learning:

  • Transfer-Based Attacks: These attacks exploit the transferability property between a surrogate model (an approximation of the target model, similar to an on-device model) and a remote target model (the original full-scale model). Attackers generate adversarial examples using the surrogate model, which can then be used to deceive the target model. For example, imagine an on-device model designed to identify spam emails. An attacker could use this model to generate a spam email that is not detected by the larger, full-scale filtering system.

  • Optimization-Based Attacks: These attacks generate adversarial examples for transfer-based attacks using some form of the objective function and iteratively modify inputs to achieve the desired outcome. Gradient estimation attacks, for example, approximate the model’s gradient using query outputs (such as softmax confidence scores), while gradient-free attacks use the model’s final decision (the predicted class) to approximate the gradient, albeit requiring many more queries.

  • Query Attacks with Transfer Priors: These attacks combine elements of transfer-based and optimization-based attacks. They reverse engineer on-device models to serve as surrogates for the target full-scale model. In other words, attackers use the smaller on-device model to understand how the larger model works and then use this knowledge to attack the full-scale model.

By understanding these specific risks associated with on-device learning, we can develop more robust security protocols to protect both on-device and full-scale models from potential attacks.


12.6.5 Mitigation of On-Device Learning Risks


Various methods can be employed to mitigate the numerous security risks associated with on-device learning. These methods may be specific to the type of attack or serve as a general tool to bolster security.


One strategy to reduce security risks is to diminish the similarity between on-device models and full-scale models, thereby reducing transferability by up to 90%. This method, known as similarity-unpairing, addresses the problem that arises when adversaries exploit the input-gradient similarity between the two models. By finetuning the full-scale model to create a new version with similar accuracy but different input gradients, we can construct the on-device model by quantizing this updated full-scale model. This unpairing reduces the vulnerability of on-device models by limiting the exposure of the original full-scale model. Importantly, the order of finetuning and quantization can be varied while still achieving risk mitigation (Hong, Carlini, and Kurakin 2023).


To tackle data poisoning, it is imperative to source datasets from trusted and reliable vendors.


Several strategies can be employed to combat adversarial attacks. A proactive approach involves generating adversarial examples and incorporating them into the model’s training dataset, thereby fortifying the model against such attacks. Tools like CleverHans, an open-source training library, are instrumental in creating adversarial examples. Defense distillation is another effective strategy, wherein the on-device model outputs probabilities of different classifications rather than definitive decisions (Hong, Carlini, and Kurakin 2023), making it more challenging for adversarial examples to exploit the model.

Hong, Sanghyun, Nicholas Carlini, and Alexey Kurakin. 2023. “Publishing Efficient on-Device Models Increases Adversarial Vulnerability.” In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 271–90. IEEE. https://doi.org/10.1109/satml54575.2023.00026.

Intellectual property theft is another significant concern when deploying on-device models, as adversaries may attempt to reverse-engineer a model to steal the underlying technology. To safeguard against such theft, the binary executable of the trained model should be stored on a microcontroller unit with encrypted software and secured physical interfaces of the chip.


On-device models also often utilize well-known or open-source datasets, such as MobileNet’s Visual Wake Words, which makes it all the more important to keep the final dataset used for training private. Additionally, protecting the data augmentation process and incorporating use-case-specific data can reduce the risk of reverse-engineering an on-device model.


Lastly, the Adversarial Threat Landscape for Artificial Intelligence Systems (ATLAS) serves as a valuable matrix tool that helps assess the risk profile of on-device models, empowering developers to identify and mitigate potential risks proactively.


12.6.6 Securing Training Data


There are various ways to secure on-device training data. Each concept is deep enough to fill a course of its own, so here we only briefly introduce them so you know what to explore further.


Encryption


Encryption serves as the first line of defense for training data. This involves implementing end-to-end encryption for local storage on devices and communication channels to prevent unauthorized access to raw training data. Trusted execution environments, such as Intel SGX and ARM TrustZone, are essential for facilitating secure training on encrypted data.


Additionally, when aggregating updates from multiple devices, secure multi-party computation protocols can be employed to enhance security (Kairouz, Oh, and Viswanath 2015); a practical application of this is in collaborative on-device learning, where cryptographic privacy-preserving aggregation of user model updates can be implemented. This technique effectively hides individual user data even during the aggregation phase.
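The core trick behind privacy-preserving aggregation can be shown with a toy example: each pair of devices agrees on a random mask that one adds to its update and the other subtracts, so every individual contribution the server sees is scrambled, yet the masks cancel in the sum. This is only a sketch of the idea (real protocols such as secure aggregation handle dropouts and key agreement cryptographically); the updates and mask range are invented.

```python
import random

# Toy sketch of pairwise additive masking for secure aggregation:
# device i adds mask m_ij and device j subtracts it, so individual
# updates are hidden but the aggregate sum is exact.

def masked_updates(updates, seed=42):
    rng = random.Random(seed)
    masked = list(updates)
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-10, 10)   # mask shared by devices i and j
            masked[i] += m
            masked[j] -= m
    return masked

true_updates = [0.5, -1.2, 2.0]        # per-device model updates
masked = masked_updates(true_updates)
# The server sees only masked values, yet the aggregate is preserved:
print(abs(sum(masked) - sum(true_updates)) < 1e-9)   # True
```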

Kairouz, Peter, Sewoong Oh, and Pramod Viswanath. 2015. “Secure Multi-Party Differential Privacy.” In Advances in Neural Information Processing Systems 28 (NeurIPS 2015), 2008–16. https://proceedings.neurips.cc/paper/2015/hash/a01610228fe998f515a72dd730294d87-Abstract.html.

Differential Privacy


Differential privacy is another crucial strategy for protecting training data. By injecting calibrated statistical noise into the data, we can mask individual records while still extracting valuable population patterns (Dwork and Roth 2013). Managing the privacy budget across multiple training iterations and reducing noise as the model converges is also vital (Abadi et al. 2016). Methods such as formally provable differential privacy, which may include adding Laplace or Gaussian noise scaled to the dataset’s sensitivity, can be employed.
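The Laplace mechanism mentioned above can be sketched concretely for a count query: counting people changes by at most 1 when any one record is added or removed (sensitivity 1), so adding Laplace noise with scale sensitivity/epsilon hides any individual's presence. The dataset and epsilon below are illustrative.

```python
import math
import random

# Minimal sketch of the Laplace mechanism for a count query.
# Sensitivity of a count is 1, so noise scale = 1 / epsilon.

def laplace_sample(scale, rng):
    u = rng.random() - 0.5                     # u in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon, rng=None):
    rng = rng or random.Random(0)
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon, rng)

ages = [23, 35, 41, 29, 52, 38]                # hypothetical user records
noisy = private_count(ages, lambda a: a > 30, epsilon=1.0)
# noisy is close to the true count of 4, but masks any single record
```

Smaller epsilon means larger noise and stronger privacy; the "privacy budget" referred to above tracks how much epsilon has been spent across repeated queries or training iterations.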

Dwork, Cynthia, and Aaron Roth. 2013. “The Algorithmic Foundations of Differential Privacy.” Foundations and Trends in Theoretical Computer Science 9 (3–4): 211–407. https://doi.org/10.1561/0400000042.

Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318.

Anomaly Detection


Anomaly detection plays an important role in identifying and mitigating potential data poisoning attacks. This can be achieved through statistical analyses like Principal Component Analysis (PCA) and clustering, which help to detect deviations in aggregated training data. Time-series methods such as Cumulative Sum (CUSUM) charts are useful for identifying shifts indicative of potential poisoning. Comparing current data distributions with previously seen clean data distributions can also help to flag anomalies. Moreover, suspected poisoned batches should be removed from the training update aggregation process. For example, spot checks on subsets of training images on devices can be conducted using photoDNA hashes to identify poisoned inputs.
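As a concrete taste of the CUSUM technique mentioned above, here is a minimal one-sided CUSUM detector for an upward shift in some training-data statistic (say, the mean label value arriving at a device). The slack `k` and alarm threshold `h` are illustrative tuning parameters, not recommended values.

```python
# Sketch of a one-sided CUSUM chart for detecting an upward shift
# in a monitored training-data statistic, e.g. due to poisoning.

def cusum_alarm(stream, target_mean, k=0.5, h=4.0):
    """Return the index where the cumulative sum exceeds h, else None."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - target_mean - k))   # accumulate excess over slack
        if s > h:
            return i
    return None

clean = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2]
poisoned = clean + [2.5, 2.8, 2.6, 2.9]          # sudden shift in the statistic
print(cusum_alarm(clean, target_mean=0.0))        # None: no alarm
print(cusum_alarm(poisoned, target_mean=0.0))     # 7: alarm shortly after shift
```

Because the sum resets to zero on clean data but accumulates rapidly once the statistic drifts, CUSUM catches sustained shifts that single-sample thresholds would miss.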


Input Data Validation


Lastly, input data validation is essential for ensuring the integrity and validity of input data before it is fed into the training model, thereby protecting against adversarial payloads. Similarity measures, such as cosine distance, can be employed to catch inputs that deviate significantly from the expected distribution. Suspicious inputs that may contain adversarial payloads should be quarantined and sanitized. Furthermore, parser access to training data should be restricted to validated code paths only. Leveraging hardware security features, such as ARM Pointer Authentication, can prevent memory corruption (ARM Limited, 2023). An example of this is implementing input integrity checks on audio training data used by smart speakers before processing by the speech recognition model (Z. Chen and Xu 2023).
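The cosine-distance check described above can be sketched as follows: inputs whose feature vectors lie far from a reference "clean" profile are quarantined before they reach training. The reference vector and threshold are invented for illustration; in practice the reference would come from a trusted clean dataset.

```python
import math

# Sketch of cosine-distance input validation: quarantine inputs
# that deviate significantly from the expected distribution.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def validate_inputs(batch, reference, max_dist=0.3):
    accepted, quarantined = [], []
    for x in batch:
        (accepted if cosine_distance(x, reference) <= max_dist
         else quarantined).append(x)
    return accepted, quarantined

reference = [1.0, 1.0, 1.0]         # profile of clean feature vectors
batch = [[0.9, 1.1, 1.0],           # close to the expected distribution
         [5.0, -4.0, 0.1]]          # suspicious outlier
ok, bad = validate_inputs(batch, reference)
print(len(ok), len(bad))            # 1 1
```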

Chen, Zhiyong, and Shugong Xu. 2023. “Learning Domain-Heterogeneous Speaker Recognition Systems with Personalized Continual Federated Learning.” EURASIP Journal on Audio, Speech, and Music Processing 2023 (1): 33. https://doi.org/10.1186/s13636-023-00299-2.

12.7 On-Device Training Frameworks


Embedded inference frameworks like TF-Lite Micro (David et al. 2021), TVM (T. Chen et al. 2018), and MCUNet (Lin et al. 2020) provide a slim runtime for running neural network models on microcontrollers and other resource-constrained devices. However, they don’t support on-device training. Training requires its own set of specialized tools due to the impact of quantization on gradient calculation and the memory footprint of backpropagation (Lin et al. 2022).

David, Robert, Jared Duke, Advait Jain, Vijay Janapa Reddi, Nat Jeffries, Jian Li, Nick Kreeger, et al. 2021. “TensorFlow Lite Micro: Embedded Machine Learning for TinyML Systems.” Proceedings of Machine Learning and Systems 3: 800–811.

Chen, Tianqi, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, et al. 2018. “TVM: An Automated End-to-End Optimizing Compiler for Deep Learning.” In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 578–94.

Lin, Ji, Wei-Ming Chen, Yujun Lin, John Cohn, Chuang Gan, and Song Han. 2020. “MCUNet: Tiny Deep Learning on IoT Devices.” In Advances in Neural Information Processing Systems 33 (NeurIPS 2020). https://proceedings.neurips.cc/paper/2020/hash/86c51678350f656dcc7f490a43946ee5-Abstract.html.

Lin, Ji, Ligeng Zhu, Wei-Ming Chen, Wei-Chen Wang, Chuang Gan, and Song Han. 2022. “On-Device Training Under 256KB Memory.” Advances in Neural Information Processing Systems 35: 22941–54.

In recent years, a handful of tools and frameworks have started to emerge that enable on-device training. These include Tiny Training Engine (Lin et al. 2022), TinyTL (Cai et al. 2020), and TinyTrain (Kwon et al. 2023).


12.7.1 Tiny Training Engine


Tiny Training Engine (TTE) uses several techniques to optimize memory usage and speed up the training process. An overview of the TTE workflow is shown in Figure fig-tte-workflow. First, TTE offloads the automatic differentiation to compile time instead of runtime, significantly reducing overhead during training. Second, TTE performs graph optimization like pruning and sparse updates to reduce memory requirements and accelerate computations.

Figure 12.9: TTE workflow.

Specifically, TTE follows four main steps:

  • During compile time, TTE traces the forward propagation graph and derives the corresponding backward graph for backpropagation. This allows differentiation to happen at compile time rather than runtime.
  • TTE prunes any nodes representing frozen weights from the backward graph. Frozen weights are weights that are not updated during training to reduce certain neurons’ impact. Pruning their nodes saves memory.
  • TTE reorders the gradient descent operators to interleave them with the backward pass computations. This scheduling minimizes memory footprints.
  • TTE uses code generation to compile the optimized forward and backward graphs, which are then deployed for on-device training.
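To give a flavor of the frozen-weight pruning step, here is a toy sketch: given a (much simplified) backward graph mapping gradient nodes to their dependencies, we drop the gradient nodes of frozen weights so they are never computed or stored. The graph representation and node names are hypothetical; TTE's real compiler IR is far richer.

```python
# Toy sketch of pruning frozen-weight gradient nodes from a
# backward graph, in the spirit of TTE's second step.

backward_graph = {            # grad node -> upstream grad dependencies
    "grad_w1": ["grad_a1"],
    "grad_w2": ["grad_a2"],
    "grad_b2": ["grad_a2"],
    "grad_a1": ["grad_a2"],
    "grad_a2": [],
}
frozen = {"w1"}               # weights excluded from updating

def prune_frozen(graph, frozen_weights):
    dead = {f"grad_{w}" for w in frozen_weights}
    return {node: [d for d in deps if d not in dead]
            for node, deps in graph.items() if node not in dead}

pruned = prune_frozen(backward_graph, frozen)
print("grad_w1" in pruned)    # False: frozen weight's gradient removed
print("grad_w2" in pruned)    # True: trainable weight's gradient kept
```

Because this pruning happens at compile time, the deployed training binary never allocates memory for the removed gradients at all.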

12.7.2 Tiny Transfer Learning


Tiny Transfer Learning (TinyTL) enables memory-efficient on-device training through a technique called weight freezing. During training, much of the memory bottleneck comes from storing intermediate activations and updating the weights in the neural network.


To reduce this memory overhead, TinyTL freezes the majority of the weights so they do not need to be updated during training. This eliminates the need to store intermediate activations for frozen parts of the network. TinyTL only finetunes the bias terms, which are much smaller than the weights. An overview of TinyTL workflow is shown in Figure fig-tinytl-workflow.

Figure 12.10: TinyTL workflow. Credit: Cai et al. (2020).

Cai, Han, Chuang Gan, Ligeng Zhu, and Song Han. 2020. “TinyTL: Reduce Memory, Not Parameters for Efficient on-Device Learning.” In Advances in Neural Information Processing Systems 33 (NeurIPS 2020). https://proceedings.neurips.cc/paper/2020/hash/81f7acabd411274fcf65ce2070ed568a-Abstract.html.

Weight freezing applies to fully connected layers as well as convolutional and normalization layers. However, adapting only the biases limits the model’s ability to learn and adapt to new data.


To increase adaptability without much additional memory, TinyTL uses a small residual learning model. This refines the intermediate feature maps to produce better outputs, even with fixed weights. The residual model introduces minimal overhead - less than 3.8% on top of the base model.


By freezing most weights, TinyTL significantly reduces memory usage during on-device training. The residual model then allows it to adapt and learn effectively for the task. The combined approach provides memory-efficient on-device training with minimal impact on model accuracy.
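The bias-only update rule can be sketched for a single layer y = Wx + b: gradients flow to b while W stays frozen, so only a small vector is ever updated. This is a hand-rolled toy with invented values, not TinyTL's actual implementation, and it uses a plain squared-error loss for simplicity.

```python
# Sketch of a bias-only training step for a linear layer y = W x + b,
# illustrating the TinyTL idea: W is frozen, only b is updated.

def train_step_bias_only(W, b, x, y_true, lr=0.1):
    # forward pass: y_i = sum_j W[i][j] * x[j] + b[i]
    y = [sum(Wij * xj for Wij, xj in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    # gradient of squared error w.r.t. outputs (also the bias gradient)
    grad_y = [2 * (yi - ti) for yi, ti in zip(y, y_true)]
    # update only the biases; W is left untouched
    b_new = [bi - lr * g for bi, g in zip(b, grad_y)]
    return W, b_new

W = [[0.5, -0.2], [0.1, 0.4]]
b = [0.0, 0.0]
W2, b2 = train_step_bias_only(W, b, x=[1.0, 2.0], y_true=[1.0, 1.0])
print(W2 == W)     # True: weights are frozen
print(b2)          # biases moved toward reducing the error
```

Since the weight gradients are never needed, the activations required to compute them are never stored, which is exactly where TinyTL's memory savings come from.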


12.7.3 TinyTrain


TinyTrain significantly reduces the time required for on-device training by selectively updating only certain parts of the model. It does this using a technique called task-adaptive sparse updating, as shown in Figure fig-tiny-train.


Based on the user data, memory, and computing available on the device, TinyTrain dynamically chooses which neural network layers to update during training. This layer selection is optimized to reduce computation and memory usage while maintaining high accuracy.

Figure 12.11: TinyTrain workflow. Credit: Kwon et al. (2023).

Kwon, Young D, Rui Li, Stylianos I Venieris, Jagmohan Chauhan, Nicholas D Lane, and Cecilia Mascolo. 2023. “TinyTrain: Deep Neural Network Training at the Extreme Edge.” ArXiv Preprint abs/2307.09988. https://arxiv.org/abs/2307.09988.

More specifically, TinyTrain first performs offline pretraining of the model. During pretraining, it not only trains the model on the task data but also meta-trains it: the model is optimized across many small adaptation tasks so that it learns how to adapt quickly. This meta-learning improves the model’s ability to adapt accurately even when limited data is available for the target task.


Then, during the online adaptation stage, when the model is being customized on the device, TinyTrain performs task-adaptive sparse updates. Using the criteria around the device’s capabilities, it selects only certain layers to update through backpropagation. The layers are chosen to balance accuracy, memory usage, and computation time.


By sparsely updating layers tailored to the device and task, TinyTrain significantly reduces on-device training time and resource usage. The offline meta-training also improves accuracy when adapting to limited data. Together, these methods enable fast, efficient, and accurate on-device training.
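One way to picture the layer-selection problem is as a budgeted ranking: estimate each layer's expected benefit per unit of memory and greedily pick layers until the device's budget is spent. This is a deliberately simplified, hypothetical sketch; TinyTrain's actual selection criterion is more sophisticated, and the per-layer numbers below are invented.

```python
# Toy sketch of task-adaptive sparse updating: choose which layers
# to update by benefit-per-memory under a device memory budget.

layers = [  # (name, estimated accuracy gain, memory cost in KB)
    ("conv1", 0.010, 80),
    ("conv2", 0.030, 60),
    ("fc",    0.025, 20),
]

def select_layers(layers, memory_budget_kb):
    ranked = sorted(layers, key=lambda l: l[1] / l[2], reverse=True)
    chosen, used = [], 0
    for name, gain, cost in ranked:
        if used + cost <= memory_budget_kb:
            chosen.append(name)
            used += cost
    return chosen

print(select_layers(layers, memory_budget_kb=90))   # ['fc', 'conv2']
```

On a device with a tighter budget the selection would shrink further, which is exactly the accuracy/memory/compute balancing act described above.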


12.7.4 Comparison


The following table summarizes the key similarities and differences among the Tiny Training Engine, TinyTL, and TinyTrain frameworks:

| Framework | Similarities | Differences |
|---|---|---|
| Tiny Training Engine | On-device training; optimize memory & computation; leverage pruning, sparsity, etc. | Traces forward & backward graphs; prunes frozen weights; interleaves backprop & gradients; code generation |
| TinyTL | On-device training; optimize memory & computation; leverage freezing, sparsity, etc. | Freezes most weights; only adapts biases; uses residual model |
| TinyTrain | On-device training; optimize memory & computation; leverage sparsity, etc. | Meta-training in pretraining; task-adaptive sparse updating; selective layer updating |
12.8 Conclusion


The concept of on-device learning is increasingly important for improving the usability and scalability of TinyML. This chapter explored the intricacies of on-device learning, examining its advantages and limitations, adaptation strategies, key related algorithms and techniques, security implications, and existing and emerging on-device training frameworks.


On-device learning is, undoubtedly, a groundbreaking paradigm that brings forth numerous advantages for embedded and edge ML deployments. By performing training directly on the endpoint devices, on-device learning obviates the need for continuous cloud connectivity, making it particularly well-suited for IoT and edge computing applications. It comes with benefits such as improved privacy, ease of compliance, and resource efficiency. At the same time, on-device learning faces limitations related to hardware constraints, limited data size, and reduced model accuracy and generalization.


Mechanisms such as reduced model complexity, optimization and data compression techniques, and related learning methods such as transfer learning and federated learning allow models to adapt to learn and evolve under resource constraints, thus serving as the bedrock for effective ML on edge devices.


The critical security concerns in on-device learning highlighted in this chapter, ranging from data poisoning and adversarial attacks to specific risks introduced by on-device learning, must be addressed in real workloads for on-device learning to be a viable paradigm. Effective mitigation strategies, such as data validation, encryption, differential privacy, anomaly detection, and input data validation, are crucial to safeguard on-device learning systems from these threats.


The emergence of specialized on-device training frameworks like Tiny Training Engine, TinyTL, and TinyTrain presents practical tools to enable efficient on-device training. These frameworks employ various techniques to optimize memory usage, reduce computational overhead, and streamline the on-device training process.


In conclusion, on-device learning stands at the forefront of TinyML, promising a future where models can autonomously acquire knowledge and adapt to changing environments on edge devices. The application of on-device learning has the potential to revolutionize various domains, including healthcare, industrial IoT, and smart cities. However, the transformative potential of on-device learning must be balanced with robust security measures to protect against data breaches and adversarial threats. Embracing innovative on-device training frameworks and implementing stringent security protocols are key steps in unlocking the full potential of on-device learning. As this technology continues to evolve, it holds the promise of making our devices smarter, more responsive, and better integrated into our daily lives.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides serve as a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we also offer a series of hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.


13  ML Operations


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Create a detailed, wide rectangular illustration of an AI workflow. The image should showcase the process across six stages, with a flow from left to right: 1. Data collection, with diverse individuals of different genders and descents using a variety of devices like laptops, smartphones, and sensors to gather data. 2. Data processing, displaying a data center with active servers and databases with glowing lights. 3. Model training, represented by a computer screen with code, neural network diagrams, and progress indicators. 4. Model evaluation, featuring people examining data analytics on large monitors. 5. Deployment, where the AI is integrated into robotics, mobile apps, and industrial equipment. 6. Monitoring, showing professionals tracking AI performance metrics on dashboards to check for accuracy and concept drift over time. Each stage should be distinctly marked and the style should be clean, sleek, and modern with a dynamic and informative color scheme.

This chapter explores the practices and architectures needed to effectively develop, deploy, and manage ML models across their entire lifecycle. We examine the various phases of the ML process, including data collection, model training, evaluation, deployment, and monitoring. The importance of automation, collaboration, and continuous improvement is also discussed. We contrast different environments for ML model deployment, from cloud servers to embedded edge devices, and analyze their distinct constraints. We demonstrate how to tailor ML system design and operations through concrete examples for reliable and optimized model performance in any target environment. The goal is to provide readers with a comprehensive understanding of ML model management so they can successfully build and run ML applications that sustainably deliver value.

Learning Objectives
  • Understand what MLOps is and why it is needed
  • Learn the architectural patterns for traditional MLOps
  • Contrast traditional vs. embedded MLOps across the ML lifecycle
  • Identify key constraints of embedded environments
  • Learn strategies to mitigate embedded ML challenges
  • Examine real-world case studies demonstrating embedded MLOps principles
  • Appreciate the need for holistic technical and human approaches

13.1 Introduction


Machine Learning Operations (MLOps) is a systematic approach that combines machine learning (ML), data science, and software engineering to automate the end-to-end ML lifecycle. This includes everything from data preparation and model training to deployment and maintenance. MLOps ensures that ML models are developed, deployed, and maintained efficiently and effectively.


Let’s start with a general (i.e., non-edge ML) example. Consider a ridesharing company that wants to deploy a machine learning model to predict real-time rider demand. The data science team spends months developing a model, but when it’s time to deploy, they realize the model is not compatible with the engineering team’s production environment. Deploying the model requires rebuilding it from scratch, which costs weeks of additional work. This is where MLOps comes in.


With MLOps, protocols, and tools, the model developed by the data science team can be seamlessly deployed and integrated into the production environment. In essence, MLOps removes friction during the development, deployment, and maintenance of ML systems. It improves collaboration between teams through defined workflows and interfaces. MLOps also accelerates iteration speed by enabling continuous delivery for ML models.


For the ridesharing company, implementing MLOps means their demand prediction model can be frequently retrained and deployed based on new incoming data. This keeps the model accurate despite changing rider behavior. MLOps also allows the company to experiment with new modeling techniques since models can be quickly tested and updated.


Other MLOps benefits include enhanced model lineage tracking, reproducibility, and auditing. Cataloging ML workflows and standardizing artifacts - such as logging model versions, tracking data lineage, and packaging models and parameters - enables deeper insight into model provenance. Standardizing these artifacts facilitates tracing a model back to its origins, replicating the model development process, and examining how a model version has changed over time. This also facilitates regulation compliance, which is especially critical in regulated industries like healthcare and finance, where being able to audit and explain models is important.


Major organizations adopt MLOps to boost productivity, increase collaboration, and accelerate ML outcomes. It provides the frameworks, tools, and best practices to effectively manage ML systems throughout their lifecycle. This results in better-performing models, faster time-to-value, and sustained competitive advantage. As we explore MLOps further, consider how implementing these practices can help address embedded ML challenges today and in the future.


13.2 Historical Context


MLOps has its roots in DevOps, a set of practices combining software development (Dev) and IT operations (Ops) to shorten the development lifecycle and provide continuous delivery of high-quality software. The parallels between MLOps and DevOps are evident in their focus on automation, collaboration, and continuous improvement. In both cases, the goal is to break down silos between different teams (developers, operations, and, in the case of MLOps, data scientists and ML engineers) and to create a more streamlined and efficient process. It is useful to understand the history of this evolution better to understand MLOps in the context of traditional systems.


13.2.1 DevOps


The term “DevOps” was first coined in 2009 by Patrick Debois, a consultant and Agile practitioner. Debois organized the first DevOpsDays conference in Ghent, Belgium, in 2009. The conference brought together development and operations professionals to discuss ways to improve collaboration and automate processes.


DevOps has its roots in the Agile movement, which began in the early 2000s. Agile provided the foundation for a more collaborative approach to software development and emphasized small, iterative releases. However, Agile primarily focuses on collaboration between development teams. As Agile methodologies became more popular, organizations realized the need to extend this collaboration to operations teams.


The siloed nature of development and operations teams often led to inefficiencies, conflicts, and delays in software delivery. This need for better collaboration and integration between these teams led to the DevOps movement. DevOps can be seen as an extension of the Agile principles, including operations teams.


The key principles of DevOps include collaboration, automation, continuous integration, delivery, and feedback. DevOps focuses on automating the entire software delivery pipeline, from development to deployment. It aims to improve the collaboration between development and operations teams, utilizing tools like Jenkins, Docker, and Kubernetes to streamline the development lifecycle.


While Agile and DevOps share common principles around collaboration and feedback, DevOps specifically targets integrating development and IT operations - expanding Agile beyond just development teams. It introduces practices and tools to automate software delivery and enhance the speed and quality of software releases.


13.2.2 MLOps


MLOps, on the other hand, stands for Machine Learning Operations, and it extends the principles of DevOps to the ML lifecycle. MLOps aims to automate and streamline the end-to-end ML lifecycle, from data preparation and model development to deployment and monitoring. The main focus of MLOps is to facilitate collaboration between data scientists, data engineers, and IT operations and to automate the deployment, monitoring, and management of ML models. Some key factors led to the rise of MLOps.

  • Data drift: Data drift degrades model performance over time, motivating the need for rigorous monitoring and automated retraining procedures provided by MLOps.

  • Reproducibility: The lack of reproducibility in machine learning experiments motivated MLOps systems to track code, data, and environment variables to enable reproducible ML workflows.

  • Explainability: The black box nature and lack of explainability of complex models motivated the need for MLOps capabilities to increase model transparency and explainability.

  • Monitoring: The inability to reliably monitor model performance post-deployment highlighted the need for MLOps solutions with robust model performance instrumentation and alerting.

  • Friction: The friction in manually retraining and deploying models motivated the need for MLOps systems that automate machine learning deployment pipelines.

  • Optimization: The complexity of configuring machine learning infrastructure motivated the need for MLOps platforms with optimized, ready-made ML infrastructure.

While DevOps and MLOps share the common goal of automating and streamlining processes, their focus and challenges differ. DevOps primarily deals with the challenges of software development and IT operations. In contrast, MLOps deals with the additional complexities of managing ML models, such as data versioning, model versioning, and model monitoring. MLOps also requires stakeholder collaboration, including data scientists, engineers, and IT operations.


In short, DevOps centers on improving collaboration between development and operations teams and on automating software delivery, whereas MLOps centers on streamlining and automating the ML lifecycle and on facilitating collaboration between data scientists, data engineers, and IT operations.


Table 13.1 compares and summarizes them side by side.

Table 13.1: Comparison of DevOps and MLOps.

Aspect | DevOps | MLOps
Objective | Streamlining software development and operations processes | Optimizing the lifecycle of machine learning models
Methodology | Continuous Integration and Continuous Delivery (CI/CD) for software development | Similar to CI/CD but focuses on machine learning workflows
Primary Tools | Version control (Git), CI/CD tools (Jenkins, Travis CI), Configuration management (Ansible, Puppet) | Data versioning tools, Model training and deployment tools, CI/CD pipelines tailored for ML
Primary Concerns | Code integration, Testing, Release management, Automation, Infrastructure as code | Data management, Model versioning, Experiment tracking, Model deployment, Scalability of ML workflows
Typical Outcomes | Faster and more reliable software releases, Improved collaboration between development and operations teams | Efficient management and deployment of machine learning models, Enhanced collaboration between data scientists and engineers

Learn more about ML Lifecycles through a case study featuring speech recognition.


13.3 Key Components of MLOps


In this chapter, we will provide an overview of the core components of MLOps, an emerging set of practices that enables robust delivery and lifecycle management of ML models in production. While some MLOps elements, like automation and monitoring, were covered in previous chapters, we will bring them together here into an integrated framework and expand on additional capabilities like governance. Additionally, we will describe and link to popular tools used within each component, such as LabelStudio for data labeling. By the end, we hope you will understand the end-to-end MLOps methodology that takes models from ideation to sustainable value creation within organizations.


13.3.1 Data Management


Robust data management and data engineering actively empower successful MLOps implementations. Teams properly ingest, store, and prepare raw data from sensors, databases, apps, and other systems for model training and deployment.


Teams actively track changes to datasets over time using version control with Git and tools like GitHub or GitLab. Data scientists collaborate on curating datasets by merging changes from multiple contributors. Teams can review or roll back each iteration of a dataset if needed.


Teams meticulously label and annotate data using labeling software like LabelStudio, which enables distributed teams to work on tagging datasets together. As the target variables and labeling conventions evolve, teams maintain accessibility to earlier versions.


Teams store the raw dataset and all derived assets on cloud storage services like Amazon S3 or Google Cloud Storage. These services provide scalable, resilient storage with versioning capabilities. Teams can set granular access permissions.


Robust data pipelines created by teams automate raw data extraction, joining, cleansing, and transformation into analysis-ready datasets. Prefect, Apache Airflow, and dbt are workflow orchestrators that allow engineers to develop flexible, reusable data processing pipelines.


For instance, a pipeline may ingest data from PostgreSQL databases, REST APIs, and CSVs stored on S3. It can filter, deduplicate, and aggregate the data, handle errors, and save the output to S3. The pipeline can also push the transformed data into a feature store like Tecton or Feast for low-latency access.
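To make this concrete, here is a toy extract-transform-load sketch in plain Python. The record fields, sources, and in-memory "store" are hypothetical stand-ins for real databases, APIs, and feature stores; a real pipeline would wrap each function as a Prefect or Airflow task.

```python
def extract(sources):
    """Gather raw records from several hypothetical sources (DBs, APIs, CSVs)."""
    records = []
    for source in sources:
        records.extend(source)
    return records

def transform(records):
    """Filter malformed rows, deduplicate by (device, timestamp), aggregate readings."""
    clean = [r for r in records if "device_id" in r and "reading" in r]
    seen, deduped = set(), []
    for r in clean:
        key = (r["device_id"], r["timestamp"])
        if key not in seen:
            seen.add(key)
            deduped.append(r)
    totals = {}
    for r in deduped:
        totals[r["device_id"]] = totals.get(r["device_id"], 0) + r["reading"]
    return totals

def load(totals, store):
    """Persist the analysis-ready output (a dict standing in for S3 or a feature store)."""
    store.update(totals)
    return store

# Wire the stages together, as an orchestrator would.
db_rows  = [{"device_id": "a", "timestamp": 1, "reading": 2.0}]
api_rows = [{"device_id": "a", "timestamp": 1, "reading": 2.0},  # duplicate row
            {"device_id": "b", "timestamp": 2, "reading": 5.0},
            {"bad": "row"}]                                      # malformed row
feature_store = load(transform(extract([db_rows, api_rows])), {})
```

The value of the orchestrator is not the per-stage logic, which stays simple, but retries, scheduling, and observability around each stage.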


In an industrial predictive maintenance use case, sensor data is ingested from devices into S3. A Prefect pipeline processes the sensor data, joining it with maintenance records. The enriched dataset is stored in Feast so models can easily retrieve the latest data for training and predictions.


The video below is a short overview of data pipelines.


13.3.2 CI/CD Pipelines


Continuous integration and continuous delivery (CI/CD) pipelines actively automate the progression of ML models from initial development into production deployment. Adapted for ML systems, CI/CD principles empower teams to rapidly and robustly deliver new models with minimized manual errors.


CI/CD pipelines orchestrate key steps, including checking out new code changes, transforming data, training and registering new models, validation testing, containerization, deploying to environments like staging clusters, and promoting to production. Teams leverage popular CI/CD solutions like Jenkins, CircleCI and GitHub Actions to execute these MLOps pipelines, while Prefect, Metaflow and Kubeflow offer ML-focused options.


Figure 13.1 illustrates a CI/CD pipeline specifically tailored for MLOps. The process starts with a dataset and feature repository (on the left), which feeds into a dataset ingestion stage. Post-ingestion, the data undergoes validation to ensure its quality before being transformed for training. Parallel to this, a retraining trigger can initiate the pipeline based on specified criteria. The data then passes through a model training/tuning phase within a data processing engine, followed by model evaluation and validation. Once validated, the model is registered and stored in a machine learning metadata and artifact repository. The final stage involves deploying the trained model back into the dataset and feature repository, thereby creating a cyclical process for continuous improvement and deployment of machine learning models.

Figure 13.1: MLOps CI/CD diagram. Credit: HarvardX.

For example, when a data scientist checks improvements to an image classification model into a GitHub repository, this actively triggers a Jenkins CI/CD pipeline. The pipeline reruns data transformations and model training on the latest data, tracking experiments with MLflow. After automated validation testing, teams deploy the model container to a Kubernetes staging cluster for further QA. Once approved, Jenkins facilitates a phased rollout of the model to production with canary deployments to catch any issues. If anomalies are detected, the pipeline enables teams to roll back to the previous model version gracefully.
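The staged, gated structure of such a pipeline can be sketched as ordinary Python. The stage names and the 0.90 promotion threshold are illustrative; each lambda stands in for a real call out to data validation, training, or deployment tooling.

```python
def run_pipeline(stages):
    """Run CI/CD stages in order; stop at the first gate that fails."""
    completed = []
    for name, stage in stages:
        ok = stage()
        completed.append((name, ok))
        if not ok:
            break  # a failed gate halts promotion, leaving the old model in place
    return completed

# Hypothetical stage implementations; real pipelines would invoke Jenkins jobs,
# training scripts, and Kubernetes deployments here.
model = {"accuracy": 0.91}
stages = [
    ("ingest",         lambda: True),
    ("validate_data",  lambda: True),
    ("train",          lambda: model["accuracy"] > 0),
    ("evaluate",       lambda: model["accuracy"] >= 0.90),  # promotion gate
    ("deploy_staging", lambda: True),
]
result = run_pipeline(stages)
```

The key design point is that every stage is a gate: a model that fails validation never reaches staging, and the pipeline records exactly where it stopped.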


CI/CD pipelines empower teams to iterate and deliver ML models rapidly by connecting the disparate steps from development to deployment under continuous automation. Integrating MLOps tools like MLflow enhances model packaging, versioning, and pipeline traceability. CI/CD is integral for progressing models beyond prototypes into sustainable business systems.


13.3.3 Model Training


In the model training phase, data scientists actively experiment with different ML architectures and algorithms to create optimized models that extract insights and patterns from data. MLOps introduces best practices and automation to make this iterative process more efficient and reproducible.


Modern ML frameworks like TensorFlow, PyTorch and Keras provide pre-built components that simplify designing neural networks and other model architectures. Data scientists leverage built-in modules for layers, activations, losses, etc., and high-level APIs like Keras to focus more on model architecture.


MLOps enables teams to package model training code into reusable, tracked scripts and notebooks. As models are developed, capabilities like hyperparameter tuning, neural architecture search and automatic feature selection rapidly iterate to find the best-performing configurations.
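As a minimal illustration of the hyperparameter-search idea, here is a plain grid search over a made-up scoring function, which stands in for "train the model with these hyperparameters and return validation accuracy"; the parameter names and values are hypothetical.

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustively try every hyperparameter combination and keep the best score."""
    best_score, best_params = float("-inf"), None
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Stand-in scoring function with a known optimum at lr=0.01, layers=2.
def fake_train_eval(params):
    return 1.0 - abs(params["lr"] - 0.01) - 0.1 * (params["layers"] - 2) ** 2

grid = {"lr": [0.001, 0.01, 0.1], "layers": [1, 2, 3]}
best_params, best_score = grid_search(fake_train_eval, grid)
```

Real tuning tools (random search, Bayesian optimization, NAS) are smarter about which configurations to try, but they share this evaluate-and-compare loop.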


Teams use Git to version control training code and host it in repositories like GitHub to track changes over time. This allows seamless collaboration between data scientists.


Notebooks like Jupyter create an excellent interactive model development environment. The notebooks contain data ingestion, preprocessing, model declaration, training loop, evaluation, and export code in one reproducible document.


Finally, teams orchestrate model training as part of a CI/CD pipeline for automation. For instance, a Jenkins pipeline can trigger a Python script to load new training data, retrain a TensorFlow classifier, evaluate model metrics, and automatically register the model if performance thresholds are met.


An example workflow has a data scientist using a PyTorch notebook to develop a CNN model for image classification. The fastai library provides high-level APIs to simplify training CNNs on image datasets. The notebook trains the model on sample data, evaluates accuracy metrics, and tunes hyperparameters like learning rate and layers to optimize performance. This reproducible notebook is version-controlled and integrated into a retraining pipeline.
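A reproducible notebook of this kind boils down to a training loop. The sketch below, a bare gradient-descent loop fitting a one-variable linear model on synthetic data, only shows that shape; a real notebook would use PyTorch or fastai as described above.

```python
def train(xs, ys, lr=0.05, epochs=500):
    """Minimal gradient-descent loop for y = w*x + b with mean-squared-error loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy dataset generated from y = 3x + 1, so training should recover w ~ 3, b ~ 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3 * x + 1 for x in xs]
w, b = train(xs, ys)
```

Because the data, hyperparameters, and loop all live in one script, rerunning it reproduces the same model, which is exactly the property version-controlled notebooks aim for.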


Automating and standardizing model training empowers teams to accelerate experimentation and achieve the rigor needed to produce ML systems.


13.3.4 Model Evaluation


Before deploying models, teams perform rigorous evaluation and testing to validate meeting performance benchmarks and readiness for release. MLOps introduces best practices around model validation, auditing, and canary testing.


Teams typically evaluate models against holdout test datasets that are not used during training. The test data originates from the same distribution as production data. Teams calculate metrics like accuracy, AUC, precision, recall, and F1 score.
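These metrics are all straightforward functions of the confusion matrix. A pure-Python sketch, with illustrative label lists:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from two equal-length binary label lists."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy  = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Holdout labels vs. model predictions (illustrative values).
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

In practice teams use libraries such as scikit-learn for these computations, but seeing them spelled out clarifies what each metric trades off.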


Teams also track the same metrics over time against test data samples. If evaluation data comes from live production streams, this catches data drifts that degrade model performance over time.


Human oversight for model release remains important. Data scientists review performance across key segments and slices. Error analysis helps identify model weaknesses to guide enhancement. Teams apply fairness and bias detection techniques.


Canary testing releases a model to a small subset of users to evaluate real-world performance before wide deployment. Teams incrementally route traffic to the canary release while monitoring for issues.


For example, a retailer evaluates a personalized product recommendation model against historical test data, reviewing accuracy and diversity metrics. Teams also calculate metrics on live customer data over time, detecting decreased accuracy over the last 2 weeks. Before full rollout, the new model is released to 5% of web traffic to ensure no degradation.
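One common way to implement canary routing is deterministic hash-based bucketing, so a given user always lands on the same model variant. A sketch, with hypothetical user IDs and percentages:

```python
import hashlib

def canary_route(user_id, canary_percent):
    """Deterministically route a stable fraction of users to the canary model."""
    # Hash the user ID into one of 100 buckets; the low buckets get the canary.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# With 5% canary traffic, roughly 5 of every 100 users hit the new model,
# and each individual user consistently sees the same variant.
routes = [canary_route(f"user-{i}", 5) for i in range(1000)]
canary_share = routes.count("canary") / len(routes)
```

Determinism matters: a user who flip-flopped between models on each request would see inconsistent behavior, and per-variant metrics would be harder to attribute.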


Automating evaluation and canary releases reduces deployment risks. However, human review remains critical for assessing the less quantifiable dynamics of model behavior. Rigorous pre-deployment validation provides confidence in putting models into production.


13.3.5 Model Deployment


Teams need to properly package, test, and track ML models to reliably deploy them to production. MLOps introduces frameworks and procedures for actively versioning, deploying, monitoring, and updating models in sustainable ways.


Teams containerize models using Docker, which bundles code, libraries, and dependencies into a standardized unit. Containers enable smooth portability across environments.


Frameworks like TensorFlow Serving and BentoML help serve predictions from deployed models via performance-optimized APIs. These frameworks handle versioning, scaling, and monitoring.


Teams first deploy updated models to staging or QA environments for testing before full production rollout. Shadow or canary deployments route a sample of traffic to test model variants. Teams incrementally increase access to new models.


Teams build robust rollback procedures in case issues emerge. Rollbacks revert to the last known good model version. Integration with CI/CD pipelines simplifies redeployment if needed.


Teams carefully track model artifacts, such as scripts, weights, logs, and metrics, for each version with ML metadata tools like MLflow. This maintains lineage and auditability.


For example, a retailer containerizes a product recommendation model in TensorFlow Serving and deploys it to a Kubernetes staging cluster. After monitoring and approving performance on sample traffic, Kubernetes shifts 10% of production traffic to the new model. If no issues are detected after a few days, the new model takes over 100% of traffic. However, teams should keep the previous version accessible for rollback if needed.
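The promote-and-rollback mechanics can be sketched with a tiny in-memory registry. Real systems would use MLflow's model registry or a similar service, but the state transitions are the same; all names and metadata here are illustrative.

```python
class ModelRegistry:
    """Tiny in-memory stand-in for a model registry with promote/rollback."""
    def __init__(self):
        self.versions = {}   # version -> artifact metadata
        self.history = []    # ordered list of versions promoted to production

    def register(self, version, metadata):
        self.versions[version] = metadata

    def promote(self, version):
        if version not in self.versions:
            raise KeyError(f"unknown model version: {version}")
        self.history.append(version)

    def rollback(self):
        """Revert production to the last known good version."""
        if len(self.history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self.history.pop()
        return self.history[-1]

    @property
    def production(self):
        return self.history[-1] if self.history else None

registry = ModelRegistry()
registry.register("v1", {"accuracy": 0.90})
registry.register("v2", {"accuracy": 0.93})
registry.promote("v1")
registry.promote("v2")   # new model takes production traffic
registry.rollback()      # issue detected: revert to v1
```

Keeping the promotion history, rather than just the current version, is what makes rollback a one-step operation instead of a redeployment scramble.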


Model deployment processes enable teams to make ML systems resilient in production by accounting for all transition states.


13.3.6 Infrastructure Management


MLOps teams heavily leverage infrastructure as code (IaC) tools and robust cloud architectures to actively manage the resources needed for development, training, and deployment of ML systems.


Teams use IaC tools like Terraform, CloudFormation and Ansible to programmatically define, provision and update infrastructure in a version controlled manner. For MLOps, teams widely use Terraform to spin up resources on AWS, GCP and Azure.


For model building and training, teams dynamically provision computing resources like GPU servers, container clusters, storage, and databases through Terraform as needed by data scientists. Code encapsulates and preserves infrastructure definitions.


Containers and orchestrators like Docker and Kubernetes allow teams to package models and reliably deploy them across different environments. Containers can be predictably spun up or down automatically based on demand.


By leveraging cloud elasticity, teams scale resources up and down to meet spikes in workloads like hyperparameter tuning jobs or spikes in prediction requests. Auto-scaling enables optimized cost efficiency.


Infrastructure spans on-prem, cloud, and edge devices. A robust technology stack provides flexibility and resilience. Monitoring tools allow teams to observe resource utilization.


For example, a Terraform config may deploy a GCP Kubernetes cluster to host trained TensorFlow models exposed as prediction microservices. The cluster scales up pods to handle increased traffic. CI/CD integration seamlessly rolls out new model containers.


Carefully managing infrastructure through IaC and monitoring enables teams to prevent bottlenecks in operationalizing ML systems at scale.


13.3.7 Monitoring


MLOps teams actively maintain robust monitoring to sustain visibility into ML models deployed in production. Continuous monitoring provides insights into model and system performance so teams can rapidly detect and address issues to minimize disruption.


Teams actively monitor key model aspects, including analyzing samples of live predictions to track metrics like accuracy and confusion matrix over time.


When monitoring performance, teams must profile incoming data to check for model drift - a steady decline in model accuracy after production deployment. Model drift can occur in two ways: concept drift and data drift. Concept drift refers to a fundamental change observed in the relationship between the input data and the target outcomes. For instance, as the COVID-19 pandemic progressed, e-commerce and retail sites had to correct their model recommendations since purchase data was overwhelmingly skewed towards items like hand sanitizer. Data drift describes changes in the distribution of data over time. For example, image recognition algorithms used in self-driving cars must account for seasonality in observing their surroundings. Teams also track application performance metrics like latency and errors for model integrations.
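One simple, widely used signal for data drift is the Population Stability Index (PSI), which compares a feature's live distribution against its training-time distribution. A pure-Python sketch; the bin count and the "< 0.1 / > 0.25" thresholds follow common rules of thumb, and the sample values are synthetic.

```python
import math

def psi(expected, actual, bins=5, lo=0.0, hi=1.0):
    """Population Stability Index between a reference and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            counts[idx] += 1
        # Small epsilon avoids log-of-zero for empty bins.
        return [(c + 1e-6) / (len(values) + bins * 1e-6) for c in counts]
    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Reference (training-time) feature values vs. two live samples.
reference = [i / 100 for i in range(100)]        # uniform on [0, 1)
similar   = [i / 100 for i in range(100)]        # same distribution: no drift
shifted   = [0.8 + i / 500 for i in range(100)]  # all mass in the top bin: drift

stable_score = psi(reference, similar)   # near zero
drift_score  = psi(reference, shifted)   # large: alert and consider retraining
```

A monitoring job would compute such scores on a schedule and fire the alerts described below when a threshold is crossed.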


From an infrastructure perspective, teams monitor for capacity issues like high CPU, memory, and disk utilization and system outages. Tools like Prometheus, Grafana, and Elastic enable teams to actively collect, analyze, query, and visualize diverse monitoring metrics. Dashboards make dynamics highly visible.


Teams configure alerting for key monitoring metrics like accuracy declines and system faults to enable proactively responding to events that threaten reliability. For example, drops in model accuracy trigger alerts for teams to investigate potential data drift and retrain models using updated, representative data samples.


After deployment, comprehensive monitoring enables teams to maintain confidence in model and system health. It empowers teams to catch and resolve deviations preemptively through data-driven alerts and dashboards. Active monitoring is essential for maintaining highly available, trustworthy ML systems.


Watch the video below to learn more about monitoring.


13.3.8 Governance


MLOps teams actively establish proper governance practices as a critical component. Governance provides oversight into ML models to ensure they are trustworthy, ethical, and compliant. Without governance, significant risks exist of models behaving in dangerous or prohibited ways when deployed in applications and business processes.


MLOps governance employs techniques to provide transparency into model predictions, performance, and behavior throughout the ML lifecycle. Explainability methods like SHAP and LIME help auditors understand why models make certain predictions by highlighting influential input features behind decisions. Bias detection analyzes model performance across different demographic groups defined by attributes like age, gender, and ethnicity to detect any systematic skews. Teams perform rigorous testing procedures on representative datasets to validate model performance before deployment.


Once in production, teams monitor concept drift to determine whether predictive relationships change over time in ways that degrade model accuracy. Teams also analyze production logs to uncover patterns in the types of errors models generate. Documentation about data provenance, development procedures, and evaluation metrics provides additional visibility.


Platforms like Watson OpenScale incorporate governance capabilities like bias monitoring and explainability directly into model building, testing, and production monitoring. The key focus areas of governance are transparency, fairness, and compliance. This minimizes the risks of models behaving incorrectly or dangerously when integrated into business processes. Embedding governance practices into MLOps workflows enables teams to ensure trustworthy AI.


13.3.9 Communication & Collaboration


MLOps actively breaks down silos and enables the free flow of information and insights between teams through all ML lifecycle stages. Tools like MLflow, Weights & Biases, and data contexts provide traceability and visibility to improve collaboration.


Teams use MLflow to systematize tracking of model experiments, versions, and artifacts. Experiments can be programmatically logged from data science notebooks and training jobs. The model registry provides a central hub for teams to store production-ready models before deployment, with metadata like descriptions, metrics, tags, and lineage. Integrations with GitHub and GitLab facilitate code-change triggers.
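The core of such experiment tracking is structured logging of parameters and metrics per run. The minimal stand-in below illustrates the pattern only; it is not the actual MLflow API, and the run names and values are made up.

```python
class ExperimentTracker:
    """Minimal stand-in for an MLflow-style tracker: log params/metrics per run."""
    def __init__(self):
        self.runs = []

    def start_run(self, name):
        run = {"name": name, "params": {}, "metrics": {}}
        self.runs.append(run)
        return run

    def log_param(self, run, key, value):
        run["params"][key] = value

    def log_metric(self, run, key, value):
        run["metrics"][key] = value

    def best_run(self, metric):
        """Pick the run with the highest value of the given metric."""
        return max(self.runs, key=lambda r: r["metrics"].get(metric, float("-inf")))

tracker = ExperimentTracker()
for lr, acc in [(0.1, 0.81), (0.01, 0.90), (0.001, 0.86)]:
    run = tracker.start_run(f"lr={lr}")
    tracker.log_param(run, "lr", lr)
    tracker.log_metric(run, "accuracy", acc)
best = tracker.best_run("accuracy")
```

Because every run is recorded with its parameters, any team member can answer "which configuration produced the best model?" without rerunning experiments.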


Weights & Biases provides collaborative tools tailored to ML teams. Data scientists log experiments, visualize metrics like loss curves, and share experimentation insights with colleagues. Comparison dashboards highlight model differences. Teams discuss progress and next steps.


Establishing shared data contexts—glossaries, data dictionaries, and schema references—ensures alignment on data meaning and usage across roles. Documentation aids understanding for those without direct data access.


For example, a data scientist may use Weights & Biases to analyze an anomaly detection model experiment and share the evaluation results with other team members to discuss improvements. The final model can then be registered with MLflow before handing off for deployment.


Enabling transparency, traceability, and communication via MLOps empowers teams to remove bottlenecks and accelerate the delivery of impactful ML systems.


The following video covers key challenges in model deployment, including concept drift, model drift, and software engineering issues.


13.4 Hidden Technical Debt in ML Systems


Technical debt is increasingly pressing for ML systems (see Figure 13.2). This metaphor, originally proposed in the 1990s, likens the long-term costs of quick software development to financial debt. Just as some financial debt powers beneficial growth, carefully managed technical debt enables rapid iteration. However, left unchecked, accumulating technical debt can outweigh any gains.


Figure 13.2 illustrates the various components contributing to ML systems’ hidden technical debt. It shows the interconnected nature of configuration, data collection, and feature extraction, which is foundational to the ML codebase. The box sizes indicate the proportion of the entire system represented by each component. In industry ML systems, the code for the model algorithm makes up only a tiny fraction (see the small black box in the middle compared to all the other large boxes). The complexity of ML systems and the fast-paced nature of the industry make it very easy to accumulate technical debt.

Figure 13.2: ML system components. Credit: Sambasivan et al. (2021a)

13.4.1 Model Boundary Erosion


Unlike traditional software, ML lacks clear boundaries between components, as seen in the diagram above. This erosion of abstraction creates entanglements that exacerbate technical debt in several ways:


13.4.2 Entanglement


Tight coupling between ML model components makes isolating changes difficult. Modifying one part causes unpredictable ripple effects throughout the system. Changing anything changes everything (also known as CACE) is a phenomenon that applies to any tweak you make to your system. Potential mitigations include decomposing the problem when possible or closely monitoring for changes in behavior to contain their impact.


13.4.3 Correction Cascades


The flowchart in Figure 13.3 depicts the concept of correction cascades in the ML workflow, from problem statement to model deployment. The arcs represent the potential iterative corrections needed at each workflow stage, with different colors corresponding to distinct issues such as interacting with physical world brittleness, inadequate application-domain expertise, conflicting reward systems, and poor cross-organizational documentation. The red arrows indicate the impact of cascades, which can lead to significant revisions in the model development process. In contrast, the dotted red line represents the drastic measure of abandoning the process to restart. This visual emphasizes the complex, interconnected nature of ML system development and the importance of addressing these issues early in the development cycle to mitigate their amplifying effects downstream.

Figure 13.3: Correction cascades flowchart. Credit: Sambasivan et al. (2021a).

Sambasivan, Nithya, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021a. “Everyone Wants to Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3411764.3445518.

Building models sequentially creates risky dependencies where later models rely on earlier ones. For example, taking an existing model and fine-tuning it for a new use case seems efficient. However, this bakes in assumptions from the original model that may eventually need correction.


Several factors inform the decision to build models sequentially or not:

  • Dataset size and rate of growth: With small, static datasets, fine-tuning existing models often makes sense. For large, growing datasets, training custom models from scratch allows more flexibility to account for new data.

  • Available computing resources: Fine-tuning requires fewer resources than training large models from scratch. With limited resources, leveraging existing models may be the only feasible approach.

While fine-tuning can be efficient, modifying foundational components later becomes extremely costly due to the cascading effects on subsequent models. Careful thought should be given to identifying where introducing fresh model architectures, even with large resource requirements, can avoid correction cascades down the line (see Figure 13.3). Sequential model building still makes sense in some scenarios, but the decision entails weighing these tradeoffs around efficiency, flexibility, and technical debt.


Figure 13.4, from the same study, illustrates data cascades: compounding events triggered by data issues early in the workflow that produce negative downstream effects, often surfacing only after deployment.

Figure 13.4: Data cascades. Credit: Sambasivan et al. (2021b).

Sambasivan, Nithya, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021b. “Everyone Wants to Do the Model Work, Not the Data Work: Data Cascades in High-Stakes AI.” In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. CHI ’21. New York, NY, USA: ACM. https://doi.org/10.1145/3411764.3445518.

13.4.4 Undeclared Consumers


Once ML model predictions are made available, many downstream systems may silently consume them as inputs for further processing. However, the original model was not designed to accommodate this broad reuse. Due to the inherent opacity of ML systems, it becomes impossible to fully analyze the impact of the model’s outputs as inputs elsewhere. Changes to the model can then have expensive and dangerous consequences by breaking undiscovered dependencies.


Undeclared consumers can also enable hidden feedback loops if their outputs indirectly influence the original model’s training data. Mitigations include restricting access to predictions, defining strict service contracts, and monitoring for signs of un-modelled influences. Architecting ML systems to encapsulate and isolate their effects limits the risks of unanticipated propagation.


13.4.5 Data Dependency Debt


Data dependency debt refers to unstable and underutilized data dependencies, which can have detrimental and hard-to-detect repercussions. While this is a key contributor to tech debt for traditional software, those systems can benefit from the use of widely available tools for static analysis by compilers and linkers to identify dependencies of these types. ML systems need similar tooling.


One mitigation for unstable data dependencies is to use versioning, which ensures the stability of inputs but comes with the cost of managing multiple sets of data and the potential for staleness. Another mitigation for underutilized data dependencies is to conduct exhaustive leave-one-feature-out evaluation.
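
The leave-one-feature-out idea can be sketched as follows. This is a minimal illustration: the toy threshold "model" stands in for real model training, and all function and field names are hypothetical.

```python
# Minimal sketch of leave-one-feature-out evaluation: retrain (here, a
# toy threshold "model") with each feature dropped in turn, and measure
# how much accuracy falls relative to the full feature set.

def train_and_score(rows, labels, features):
    """Toy stand-in for training: threshold the mean of selected features."""
    means = [sum(r[f] for f in features) / len(features) for r in rows]
    threshold = sum(means) / len(means)
    preds = [1 if m > threshold else 0 for m in means]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def leave_one_feature_out(rows, labels, features):
    baseline = train_and_score(rows, labels, features)
    impact = {}
    for feat in features:
        reduced = [f for f in features if f != feat]
        impact[feat] = baseline - train_and_score(rows, labels, reduced)
    return baseline, impact
```

Features whose removal barely moves the score are candidates for elimination, shrinking the set of data dependencies that must be maintained.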


13.4.6 Analysis Debt from Feedback Loops


Unlike traditional software, ML systems can change their behavior over time, making it difficult to analyze pre-deployment. This debt manifests in feedback loops, both direct and hidden.


Direct feedback loops occur when a model influences its future inputs, such as by recommending products to users that, in turn, shape future training data. Hidden loops arise indirectly between models, such as two systems that interact via real-world environments. Gradual feedback loops are especially hard to detect. These loops lead to analysis debt—the inability to predict how a model will act fully after release. They undermine pre-deployment validation by enabling unmodeled self-influence.


Careful monitoring and canary deployments help detect feedback. However, fundamental challenges remain in understanding complex model interactions. Architectural choices that reduce entanglement and coupling mitigate analysis debt’s compounding effect.


13.4.7 Pipeline Jungles


ML workflows often lack standardized interfaces between components. This leads teams to incrementally “glue” together pipelines with custom code. What emerges are “pipeline jungles”—tangled preprocessing steps that are brittle and resist change. Avoiding modifications to these messy pipelines causes teams to experiment through alternate prototypes. Soon, multiple ways of doing everything proliferate. The lack of abstractions and interfaces then impedes sharing, reuse, and efficiency.


Technical debt accumulates as one-off pipelines solidify into legacy constraints. Teams sink time into managing idiosyncratic code rather than maximizing model performance. Architectural principles like modularity and encapsulation are needed to establish clean interfaces. Shared abstractions enable interchangeable components, prevent lock-in, and promote best-practice diffusion across teams. Breaking free of pipeline jungles ultimately requires enforcing standards that prevent the accretion of abstraction debt. The benefits of interfaces and APIs that tame complexity outweigh the transitional costs.


13.4.8 Configuration Debt


ML systems involve extensive configuration of hyperparameters, architectures, and other tuning parameters. However, configuration is often an afterthought, receiving little rigor or testing. Ad hoc configurations proliferate, amplified by the many knobs available for tuning complex ML models.


This accumulation of technical debt has several consequences. Fragile and outdated configurations lead to hidden dependencies and bugs that cause production failures. Knowledge about optimal configurations is isolated rather than shared, leading to redundant work. Reproducing and comparing results becomes difficult when configurations lack documentation. Legacy constraints accumulate as teams fear changing poorly understood configurations.


Addressing configuration debt requires establishing standards to document, test, validate, and centrally store configurations. Investing in more automated approaches, such as hyperparameter optimization and architecture search, reduces dependence on manual tuning. Better configuration hygiene makes iterative improvement more tractable by preventing complexity from compounding endlessly. The key is recognizing configuration as an integral part of the ML system lifecycle rather than an ad hoc afterthought.
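
As a concrete illustration of configuration hygiene, a lightweight schema check can reject undocumented or out-of-range settings before a run is accepted. This sketch is illustrative: the keys and ranges are hypothetical, and a real system would use a dedicated schema validator or hyperparameter management tool.

```python
# Illustrative sketch: validate a training configuration against a
# declared schema so ad hoc, undocumented settings cannot reach
# production silently. All keys and ranges below are hypothetical.

CONFIG_SCHEMA = {
    "learning_rate": (float, lambda v: 0 < v < 1),
    "batch_size":    (int,   lambda v: v > 0),
    "architecture":  (str,   lambda v: v in {"cnn", "lstm", "mlp"}),
}

def validate_config(config):
    errors = []
    for key, (typ, check) in CONFIG_SCHEMA.items():
        if key not in config:
            errors.append(f"missing: {key}")
        elif not isinstance(config[key], typ) or not check(config[key]):
            errors.append(f"invalid: {key}={config[key]!r}")
    for key in config:
        if key not in CONFIG_SCHEMA:
            errors.append(f"unknown: {key}")  # catches undeclared knobs
    return errors
```

Storing such schemas centrally, alongside the configurations themselves, documents intent and makes results reproducible.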


13.4.9 The Changing World


ML systems operate in dynamic real-world environments. Thresholds and decisions that are initially effective become outdated as the world evolves. However, legacy constraints make adapting systems to changing populations, usage patterns, and other shifting contextual factors difficult.


This debt manifests in two main ways. First, preset thresholds and heuristics require constant re-evaluation and tuning as their optimal values drift. Second, validating systems through static unit and integration tests fails when inputs and behaviors are moving targets.


Responding to a changing world in real-time with legacy ML systems is challenging. Technical debt accumulates as assumptions decay. The lack of modular architecture and the ability to dynamically update components without side effects exacerbates these issues.


Mitigating this requires building in configurability, monitoring, and modular updatability. Online learning, where models continuously adapt and robust feedback loops to training pipelines, helps automatically tune to the world. However, anticipating and architecting for change is essential to prevent erosion of real-world performance over time.


13.4.11 Summary


Although financial debt is a good metaphor for understanding tradeoffs, the two differ in measurability: technical debt cannot be fully tracked and quantified. This makes it hard for teams to navigate the tradeoffs between moving quickly, and inherently introducing more debt, versus taking the time to pay that debt down.


The Hidden Technical Debt of Machine Learning Systems paper spreads awareness of the nuances of ML system-specific tech debt. It encourages additional development in the broad area of maintainable ML.


13.5 Roles and Responsibilities


Given the vastness of MLOps, successfully implementing ML systems requires diverse skills and close collaboration between people with different areas of expertise. While data scientists build the core ML models, it takes cross-functional teamwork to successfully deploy these models into production environments and enable them to deliver sustainable business value.


MLOps provides the framework and practices for coordinating the efforts of various roles involved in developing, deploying, and running ML systems. Bridging traditional silos between data, engineering, and operations teams is key to MLOps' success. Enabling seamless collaboration through the machine learning lifecycle accelerates benefit realization while ensuring ML models' long-term reliability and performance.


We will look at some key roles involved in MLOps and their primary responsibilities. Understanding the breadth of skills needed to operationalize ML models guides assembling MLOps teams. It also clarifies how the workflows between roles fit under the overarching MLOps methodology.


13.5.1 Data Engineers


Data engineers are responsible for building and maintaining the data infrastructure and pipelines that feed data to ML models. They ensure data is smoothly moved from source systems into the storage, processing, and feature engineering environments needed for ML model development and deployment. Their main responsibilities include:

  • Migrating raw data from on-prem databases, sensors, and apps into cloud-based data lakes like Amazon S3 or Google Cloud Storage. This provides cost-efficient, scalable storage.
  • Building data pipelines with workflow schedulers like Apache Airflow, Prefect, and dbt. These extract data from sources, transform and validate data, and load it into destinations like data warehouses, feature stores, or directly for model training.
  • Transforming messy, raw data into structured, analysis-ready datasets. This includes handling null or malformed values, deduplicating, joining disparate data sources, aggregating data, and engineering new features.
  • Maintaining data infrastructure components like cloud data warehouses (Snowflake, Redshift, BigQuery), data lakes, and metadata management systems. Provisioning and optimizing data processing systems.
  • Establishing data versioning, backup, and archival processes for ML datasets and features and enforcing data governance policies.

For example, a manufacturing firm may use Apache Airflow pipelines to extract sensor data from PLCs on the factory floor into an Amazon S3 data lake. The data engineers would then process this raw data to filter, clean, and join it with product metadata. These pipeline outputs would then load into a Snowflake data warehouse from which features can be read for model training and prediction.
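
The flow in the manufacturing example can be sketched in miniature. Plain Python functions stand in for Airflow tasks, and in-memory lists stand in for the S3 data lake and Snowflake warehouse; all field names are hypothetical.

```python
# Simplified stand-in for the extract-transform-load flow described
# above (a real pipeline would use Airflow tasks and cloud storage).

def extract(sensor_readings):
    # e.g. rows pulled from factory-floor PLCs into the data lake
    return list(sensor_readings)

def transform(rows, metadata):
    cleaned = []
    for row in rows:
        if row.get("temperature") is None:       # drop malformed readings
            continue
        enriched = dict(row)                     # join with product metadata
        enriched["product"] = metadata.get(row["machine_id"], "unknown")
        cleaned.append(enriched)
    return cleaned

def load(rows, warehouse):
    warehouse.extend(rows)                       # stand-in for a DB insert
    return len(rows)

warehouse = []
raw = [{"machine_id": "m1", "temperature": 71.2},
       {"machine_id": "m2", "temperature": None}]
loaded = load(transform(extract(raw), {"m1": "gearbox"}), warehouse)
```

Each stage stays small and testable, which is exactly what a workflow scheduler then sequences and retries in production.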


The data engineering team builds and sustains the data foundation for reliable model development and operations. Their work enables data scientists and ML engineers to focus on building, training, and deploying ML models at scale.


13.5.2 Data Scientists


The job of the data scientists is to focus on the research, experimentation, development, and continuous improvement of ML models. They leverage their expertise in statistics, modeling, and algorithms to create high-performing models. Their main responsibilities include:

  • Working with business and data teams to identify opportunities where ML can add value, framing the problem, and defining success metrics.
  • Performing exploratory data analysis to understand relationships in data, derive insights, and identify relevant features for modeling.
  • Researching and experimenting with different ML algorithms and model architectures based on the problem and data characteristics and leveraging libraries like TensorFlow, PyTorch, and Keras.
  • Training and fine-tuning models to maximize performance by tuning hyperparameters, adjusting neural network architectures, engineering features, etc.
  • Evaluating model performance through metrics like accuracy, AUC, and F1 scores and performing error analysis to identify areas for improvement.
  • Developing new model versions by incorporating new data, testing different approaches, optimizing model behavior, and maintaining documentation and lineage for models.

For example, a data scientist may leverage TensorFlow and TensorFlow Probability to develop a demand forecasting model for retail inventory planning. They would iterate on different sequence models like LSTMs and experiment with features derived from product, sales, and seasonal data. The model would be evaluated based on error metrics versus actual demand before deployment. The data scientist monitors performance and retrains/enhances the model as new data comes in.


Data scientists drive model creation, improvement, and innovation through their expertise in ML techniques. They collaborate closely with other roles to ensure models create maximum business impact.


13.5.3 ML Engineers


ML engineers take the models that data scientists develop and productionize them for deployment at scale. Their expertise ensures models reliably serve predictions in applications and business processes. Their main responsibilities include:

  • Taking prototype models from data scientists and hardening them for production environments through coding best practices.
  • Building APIs and microservices for model deployment using tools like Flask and FastAPI, and containerizing models with Docker.
  • Managing model versions, syncing new models into production using CI/CD pipelines, and implementing canary releases, A/B tests, and rollback procedures.
  • Optimizing model performance for high scalability, low latency, and cost efficiency. Leveraging compression, quantization, and multi-model serving.
  • Monitoring models in production to ensure continued reliability and accuracy, and retraining them periodically.

For example, an ML engineer may take a TensorFlow fraud detection model developed by data scientists and containerize it using TensorFlow Serving for scalable deployment. The model would be integrated into the company’s transaction processing pipeline via APIs. The ML engineer implements a model registry and CI/CD pipeline using MLflow and Jenkins to deploy model updates reliably. The ML engineers then monitor the running model for continued performance using tools like Prometheus and Grafana. If model accuracy drops, they initiate retraining and deployment of a new model version.
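
The versioning-and-rollback bookkeeping that a tool like MLflow automates can be sketched as a minimal registry. This is an illustrative toy, not the MLflow API; all names are hypothetical.

```python
# Minimal sketch of a model registry with promotion and rollback,
# illustrating the bookkeeping that registry tools automate.

class ModelRegistry:
    def __init__(self):
        self.versions = {}        # version string -> model artifact
        self.history = []         # promotion order, newest last

    def register(self, version, model):
        self.versions[version] = model

    def promote(self, version):
        if version not in self.versions:
            raise KeyError(version)
        self.history.append(version)

    def production(self):
        return self.history[-1] if self.history else None

    def rollback(self):
        if len(self.history) < 2:
            raise RuntimeError("no previous version to roll back to")
        self.history.pop()        # drop the bad version
        return self.history[-1]   # previous version is live again
```

Keeping promotion history explicit is what makes a rollback a one-step operation rather than an emergency redeploy.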


The ML engineering team enables data science models to progress smoothly into sustainable and robust production systems. Their expertise in building modular, monitored systems delivers continuous business value.


13.5.4 DevOps Engineers


DevOps engineers enable MLOps by building and managing the underlying infrastructure for developing, deploying, and monitoring ML models. They provide the cloud architecture and automation pipelines. Their main responsibilities include:

  • Provisioning and managing cloud infrastructure for ML workflows using IaC tools like Terraform, Docker, and Kubernetes.
  • Developing CI/CD pipelines for model retraining, validation, and deployment. Integrating ML tools into the pipeline, such as MLflow and Kubeflow.
  • Monitoring model and infrastructure performance using tools like Prometheus, Grafana, and the ELK stack. Building alerts and dashboards.
  • Implementing governance practices around model development, testing, and promotion to enable reproducibility and traceability.
  • Embedding ML models within applications and exposing them via APIs and microservices for integration.
  • Optimizing infrastructure performance and costs and leveraging autoscaling, spot instances, and availability across regions.

For example, a DevOps engineer provisions a Kubernetes cluster on AWS using Terraform to run ML training jobs and online deployment. They build a CI/CD pipeline in Jenkins, which triggers model retraining if new data is available. After automated testing, the model is registered with MLflow and deployed in the Kubernetes cluster. The engineer then monitors cluster health, container resource usage, and API latency using Prometheus and Grafana.


The DevOps team enables rapid experimentation and reliable deployments for ML through cloud, automation, and monitoring expertise. Their work maximizes model impact while minimizing technical debt.


13.5.5 Project Managers


Project managers play a vital role in MLOps by coordinating the activities between the teams involved in delivering ML projects. They help drive alignment, accountability, and accelerated results. Their main responsibilities include:

  • Working with stakeholders to define project goals, success metrics, timelines, and budgets; outlining specifications and scope.
  • Creating a project plan spanning data acquisition, model development, infrastructure setup, deployment, and monitoring.
  • Coordinating design, development, and testing efforts between data engineers, data scientists, ML engineers, and DevOps roles.
  • Tracking progress and milestones, identifying roadblocks and resolving them through corrective actions, and managing risks and issues.
  • Facilitating communication through status reports, meetings, workshops, and documentation and enabling seamless collaboration.
  • Driving adherence to timelines and budget and escalating anticipated overruns or shortfalls for mitigation.

For example, a project manager would create a project plan for developing and enhancing a customer churn prediction model. They coordinate between data engineers building data pipelines, data scientists experimenting with models, ML engineers productionalizing models, and DevOps setting up deployment infrastructure. The project manager tracks progress via milestones like dataset preparation, model prototyping, deployment, and monitoring. They surface any risks, delays, or budget issues early so that preventive solutions can be enacted.


Skilled project managers enable MLOps teams to work synergistically to rapidly deliver maximum business value from ML investments. Their leadership and organizational skills keep diverse teams aligned.


13.6 Embedded System Challenges


We will briefly review the challenges of embedded systems to set the context for the specific challenges that emerge with embedded MLOps, which we will discuss in the following section.


13.6.1 Limited Compute Resources


Embedded devices like microcontrollers and mobile phones have much more constrained computing power than data center machines or GPUs. A typical microcontroller may have only kilobytes of RAM, a clock speed in the tens or hundreds of MHz, and no GPU. For example, a microcontroller in a smartwatch may only have a 32-bit processor running at 120 MHz with 320 KB of RAM (Stm32L4Q5Ag 2021). This allows simple ML models like small linear regressions or random forests, but more complex deep neural networks would be infeasible. Strategies to mitigate this include quantization, pruning, efficient model architectures, and offloading certain computations to the cloud when connectivity allows.

Stm32L4Q5Ag. 2021. STMicroelectronics.
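
Among the mitigation strategies above, quantization is straightforward to illustrate: map 32-bit float weights onto 8-bit integers with an affine scale, cutting storage roughly 4x at the cost of a small rounding error. A minimal sketch, assuming unsigned 8-bit storage:

```python
# Illustrative affine quantization of float weights to 8-bit integers,
# one way to fit models into microcontroller memory budgets.

def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # avoid zero scale for constant weights
    q = [round((w - lo) / scale) for w in weights]   # integers in 0..255
    return q, scale, lo

def dequantize(q, scale, lo):
    return [v * scale + lo for v in q]

w = [-0.5, 0.0, 0.25, 1.0]
q, scale, lo = quantize(w)
recovered = dequantize(q, scale, lo)        # close to w, within one scale step
```

Real deployments (e.g. with TensorFlow Lite) apply the same idea per tensor or per channel, with calibrated ranges.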

13.6.2 Constrained Memory


Storing large ML models and datasets directly on embedded devices is often infeasible with limited memory. For example, a deep neural network model can easily take hundreds of MB, which exceeds the storage capacity of many embedded systems. Consider a wildlife camera that captures images to detect animals and has only a 2GB memory card. That is not enough to store a deep learning model for image classification, which is often hundreds of MB in size. Consequently, memory usage must be optimized through weight compression, lower-precision numerics, and streaming inference pipelines.


13.6.3 Intermittent Connectivity


Many embedded devices operate in remote environments without reliable internet connectivity. They cannot rely on constant cloud access for convenient retraining, monitoring, and deployment; instead, they need smart scheduling and caching strategies to optimize for intermittent connections. For example, a model predicting crop yield on a remote farm may need to make predictions daily but only have connectivity to the cloud once a week when the farmer drives into town. The model needs to operate independently between connections.
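
A cache-and-sync pattern like the crop-yield example can be sketched as follows; the class and callback names are hypothetical.

```python
# Sketch of cache-and-sync: the device keeps predicting locally and
# queues results, flushing them only when a connection window opens
# (e.g. the farmer's weekly trip into town).

class EdgePredictor:
    def __init__(self, model):
        self.model = model
        self.outbox = []                  # cached results awaiting upload

    def predict(self, features):
        result = self.model(features)
        self.outbox.append(result)        # always works offline
        return result

    def sync(self, connected, upload):
        if not connected:
            return 0                      # try again at the next window
        sent = len(self.outbox)
        upload(self.outbox)
        self.outbox = []
        return sent
```

The same outbox idea applies in the other direction: queued model updates and telemetry wait for the next connectivity window.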


13.6.4 Power Limitations


Embedded devices like phones, wearables, and remote sensors are battery-powered. Continual inference and communication can quickly drain those batteries, limiting functionality. For example, a smart collar tagging endangered animals runs on a small battery. Continuously running a GPS tracking model would drain the battery within days, so the collar has to carefully schedule when to activate the model. Thus, embedded ML has to manage tasks carefully to conserve power. Techniques include optimized hardware accelerators, prediction caching, and adaptive model execution.


13.6.5 Fleet Management


For mass-produced embedded devices, millions of units may be deployed in the field, and orchestrating updates across all of them is challenging. Hypothetically, updating a fraud detection model on 100 million (future smart) credit cards requires securely pushing updates to each distributed device rather than to a centralized data center. Such a distributed scale makes fleet-wide management much harder than managing a centralized server cluster. It requires intelligent protocols for over-the-air updates, handling connectivity issues, and monitoring resource constraints across devices.
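
One common approach to fleet-wide updates is a staged, wave-by-wave rollout that halts when failures exceed a threshold. A simplified sketch, with hypothetical device IDs:

```python
# Sketch of staged over-the-air rollout: push an update to the fleet
# in waves, pausing the rollout if failure reports exceed a threshold.

def staged_rollout(devices, push_update, wave_size=2, max_failures=0):
    updated = []
    for i in range(0, len(devices), wave_size):
        wave = devices[i:i + wave_size]
        failures = [d for d in wave if not push_update(d)]
        updated.extend(d for d in wave if d not in failures)
        if len(failures) > max_failures:
            return updated, failures      # halt: too many failures this wave
    return updated, []
```

A real fleet manager adds retries, delta (differential) packages, and device-side health checks, but the wave-and-halt control loop is the core idea.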


13.6.6 On-Device Data Collection


Collecting useful training data requires engineering both the sensors on the device and the software pipelines. This is unlike servers, where we can pull data from external sources. Challenges include handling sensor noise: for example, sensors on an industrial machine that detect vibrations and temperature to predict maintenance needs require careful tuning of the sensors and sampling rates to capture useful data.


13.6.7 Device-Specific Personalization


Adapting ML models to specific devices and users is important, but it poses privacy challenges. For example, a smart speaker learns an individual user’s voice patterns and speech cadence to improve recognition accuracy while protecting privacy. On-device learning allows personalization without transmitting as much private data. However, balancing model improvement, privacy preservation, and device constraints requires novel techniques.


13.6.8 Safety Considerations


If embedded ML in safety-critical systems like self-driving vehicles is not engineered carefully, there are serious safety risks. To ensure safe operation, self-driving cars must undergo extensive track testing in simulated rain, snow, and obstacle scenarios, along with rigorous validation, fail-safes, simulator testing, and standards compliance before deployment.


13.6.9 Diverse Hardware Targets


There is a diverse range of embedded processors, including ARM, x86, specialized AI accelerators, FPGAs, etc. Supporting this heterogeneity makes deployment challenging. We need strategies like standardized frameworks, extensive testing, and model tuning for each platform. For example, an object detection model needs efficient implementations across embedded devices like a Raspberry Pi, Nvidia Jetson, and Google Edge TPU.


13.6.10 Testing Coverage


Rigorously testing edge cases is difficult with constrained embedded simulation resources, but exhaustive testing is critical in systems like self-driving cars. Exhaustively testing an autopilot model requires millions of simulated kilometers, exposing it to rare events like sensor failures. Therefore, strategies like synthetic data generation, distributed simulation, and chaos engineering help improve coverage.


13.6.11 Concept Drift Detection


With limited monitoring data from each remote device, detecting changes in the input data over time is much harder. Drift can lead to degraded model performance, so lightweight methods are needed to identify when retraining is necessary. For example, a model predicting power grid loads may show declining performance as usage patterns change over time; with only local device data, this trend is difficult to spot.
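
A drift check lightweight enough for a constrained device can be as simple as comparing a rolling window of recent inputs against training-time statistics. This sketch is illustrative; the k-sigma mean-shift rule is one of many possible criteria.

```python
# Lightweight on-device drift check: flag a retraining candidate when
# the mean of a recent window of inputs drifts more than k standard
# deviations from the training-time mean.

from collections import deque

class DriftDetector:
    def __init__(self, train_mean, train_std, window=50, k=3.0):
        self.mean, self.std, self.k = train_mean, train_std, k
        self.window = deque(maxlen=window)   # fixed, small memory footprint

    def observe(self, value):
        self.window.append(value)
        current = sum(self.window) / len(self.window)
        return abs(current - self.mean) > self.k * self.std  # True = drift
```

Because only a small window and two summary statistics are kept, the check costs almost nothing in memory or compute, which suits remote devices that cannot stream raw data home.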


13.7 Traditional MLOps vs. Embedded MLOps


In traditional MLOps, ML models are typically deployed in cloud-based or server environments, with abundant resources like computing power and memory. These environments facilitate the smooth operation of complex models that require significant computational resources. For instance, a cloud-based image recognition model might be used by a social media platform to tag photos with relevant labels automatically. In this case, the model can leverage the extensive resources available in the cloud to efficiently process vast amounts of data.


On the other hand, embedded MLOps involves deploying ML models on embedded systems, specialized computing systems designed to perform specific functions within larger systems. Embedded systems are typically characterized by their limited computational resources and power. For example, an ML model might be embedded in a smart thermostat to optimize heating and cooling based on the user’s preferences and habits. The model must be optimized to run efficiently on the thermostat’s limited hardware without compromising its performance or accuracy.


The key difference between traditional and embedded MLOps lies in the embedded system’s resource constraints. While traditional MLOps can leverage abundant cloud or server resources, embedded MLOps must contend with the hardware limitations on which the model is deployed. This requires careful optimization and fine-tuning of the model to ensure it can deliver accurate and valuable insights within the embedded system’s constraints.


Furthermore, embedded MLOps must consider the unique challenges posed by integrating ML models with other embedded system components. For example, the model must be compatible with the system’s software and hardware and must be able to interface seamlessly with other components, such as sensors or actuators. This requires a deep understanding of both ML and embedded systems and close collaboration between data scientists, engineers, and other stakeholders.


So, while traditional MLOps and embedded MLOps share the common goal of deploying and maintaining ML models in production environments, the unique challenges posed by embedded systems require a specialized approach. Embedded MLOps must carefully balance the need for model accuracy and performance with the constraints of the hardware on which the model is deployed. This requires a deep understanding of both ML and embedded systems and close collaboration between various stakeholders to ensure the successful integration of ML models into embedded systems.


This time, we will group the subtopics under broader categories to streamline the structure of our thought process on MLOps. This structure will help you understand how different aspects of MLOps are interconnected and why each is important for the efficient operation of ML systems as we discuss the challenges in the context of embedded systems.

  • Model Lifecycle Management
      • Data Management: Handling data ingestion, validation, and version control.
      • Model Training: Techniques and practices for effective and scalable model training.
      • Model Evaluation: Strategies for testing and validating model performance.
      • Model Deployment: Approaches for deploying models into production environments.
  • Development and Operations Integration
      • CI/CD Pipelines: Integrating ML models into continuous integration and deployment pipelines.
      • Infrastructure Management: Setting up and maintaining the infrastructure required for training and deploying models.
      • Communication & Collaboration: Ensuring smooth communication and collaboration between data scientists, ML engineers, and operations teams.
  • Operational Excellence
      • Monitoring: Techniques for monitoring model performance, data drift, and operational health.
      • Governance: Implementing policies for model auditability, compliance, and ethical considerations.

13.7.1 Model Lifecycle Management


Data Management


In traditional centralized MLOps, data is aggregated into large datasets and data lakes, then processed on cloud or on-prem servers. However, embedded MLOps relies on decentralized data from local on-device sensors. Devices collect smaller batches of incremental data, often noisy and unstructured. With connectivity constraints, this data cannot always be instantly transmitted to the cloud and needs to be intelligently cached and processed at the edge.


Due to limited on-device computing, embedded devices can only preprocess and clean data minimally before transmission. Early filtering and processing occur at edge gateways to reduce transmission loads. While leveraging cloud storage, more processing and storage happen at the edge to account for intermittent connectivity. Devices identify and transmit only the most critical subsets of data to the cloud.


Labeling also cannot rely on centralized data access, requiring more automated techniques like federated learning, where devices collaboratively label peers’ data. With personal edge devices, data privacy and regulations are critical concerns. Data collection, transmission, and storage must be secure and compliant.


For instance, a smartwatch may collect the day’s step count, heart rate, and GPS coordinates. This data is cached locally and transmitted to an edge gateway when WiFi is available. The gateway processes and filters the data before syncing relevant subsets with the cloud platform to retrain models.


Model Training


In traditional centralized MLOps, models are trained using abundant data via deep learning on high-powered cloud GPU servers. However, embedded MLOps face much tighter constraints on model complexity, data availability, and computing resources for training.


The volume of aggregated data is much lower, often requiring techniques like federated learning across devices to create training sets. The specialized nature of edge data also limits public datasets for pre-training. With privacy concerns, data samples must be tightly controlled and anonymized where possible.


Furthermore, the models must use simplified architectures optimized for low-power edge hardware. Given the computing limitations, high-end GPUs are inaccessible for intensive deep learning. Training leverages lower-powered edge servers and clusters with distributed approaches to spread load.


Strategies like transfer learning become essential to mitigate data scarcity and irregularity (see Figure 13.5). Models can pre-train on large public datasets and then fine-tune on limited domain-specific edge data. Even incremental on-device learning to customize models helps overcome the decentralized nature of embedded data. The lack of broad labeled data also motivates semi-supervised techniques.


Figure 13.5 illustrates the concept of transfer learning in model training within an MLOps framework. It showcases a neural network where the initial layers (W_{A1} to W_{A4}), which are responsible for general feature extraction, are frozen (indicated by the green dashed line), meaning their weights are not updated during training. This reuse of pre-trained layers accelerates learning by utilizing knowledge gained from previous tasks. The latter layers (W_{A5} to W_{A7}), depicted beyond the blue dashed line, are fine-tuned for the specific task at hand, focusing on task-specific feature learning. This approach allows the model to adapt to the new task using fewer resources and potentially achieve higher performance on specialized tasks by reusing the general features learned from a broader dataset.

Figure 13.5: Transfer learning in MLOps. Credit: HarvardX.

For example, a smart home assistant may pre-train an audio recognition model on public YouTube clips, which helps bootstrap with general knowledge. It then transfers learning to a small sample of home data to classify customized appliances and events, specializing in the model. The model transforms into a lightweight neural network optimized for microphone-enabled devices across the home.


So, embedded MLOps face acute challenges in constructing training datasets, designing efficient models, and distributing compute for model development compared to traditional settings. Given the embedded constraints, careful adaptation, such as transfer learning and distributed training, is required to train models.
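
The freeze-and-finetune idea can be illustrated numerically with a toy example: a fixed feature map stands in for frozen pre-trained layers, while a small linear head is trained on a handful of "edge" examples. All numbers and names here are illustrative.

```python
# Toy freeze-and-finetune: the "extractor" stands in for frozen
# pre-trained layers (its behavior never changes); only the linear
# head's weights are updated on the small device-specific dataset.

def extractor(x):
    return [x, x * x]          # fixed feature map, never trained here

def finetune_head(data, lr=0.04, steps=2000):
    w = [0.0, 0.0]             # only the head's weights are trained
    for _ in range(steps):
        for x, y in data:
            feats = extractor(x)
            pred = sum(wi * f for wi, f in zip(w, feats))
            err = pred - y
            # plain SGD step on the head only
            w = [wi - lr * err * f for wi, f in zip(w, feats)]
    return w

# Tiny "edge" dataset where the target is y = 2x; the head learns to
# weight the linear feature and ignore the quadratic one.
head = finetune_head([(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)])
```

Because only the two head weights are updated, the per-step cost is tiny, which is the same reason fine-tuning a head on a frozen backbone is feasible on constrained edge hardware.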


Model Evaluation


In traditional centralized MLOps, models are evaluated primarily using accuracy metrics and holdout test datasets. However, embedded MLOps require a more holistic evaluation that accounts for system constraints beyond accuracy.


Models must be tested early and often on deployed edge hardware covering diverse configurations. In addition to accuracy, factors like latency, CPU usage, memory footprint, and power consumption are critical evaluation criteria. Models are selected based on tradeoffs between these metrics to meet edge device constraints.
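
As a concrete sketch, selecting among candidate models under such constraints can be framed as a simple budget check. The budget numbers and the memory proxy below are hypothetical; real pipelines would measure on-target latency and actual flash/RAM usage.

```python
import sys
import time

# Hypothetical budgets for a small edge target (illustrative numbers).
LATENCY_BUDGET_MS = 50.0
MEMORY_BUDGET_KB = 256.0

def evaluate(model_fn, weights, sample):
    """Score one candidate on latency and a crude memory proxy."""
    start = time.perf_counter()
    model_fn(weights, sample)
    latency_ms = (time.perf_counter() - start) * 1000.0
    memory_kb = sys.getsizeof(weights) / 1024.0   # stand-in for footprint
    return {"latency_ms": latency_ms, "memory_kb": memory_kb,
            "fits": latency_ms <= LATENCY_BUDGET_MS
                    and memory_kb <= MEMORY_BUDGET_KB}

toy_model = lambda w, x: sum(wi * xi for wi, xi in zip(w, x))
report = evaluate(toy_model, [0.1] * 64, [1.0] * 64)
assert report["fits"]
```

A real selection step would run this check per target hardware configuration and pick the accuracy-maximizing model among those that fit.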


Data drift must also be monitored - where models trained on cloud data degrade in accuracy over time on local edge data. Embedded data often has more variability than centralized training sets. Evaluating models across diverse operational edge data samples is key. But sometimes, getting the data for monitoring the drift can be challenging if these devices are in the wild and communication is a barrier.
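
A minimal sketch of such drift monitoring, assuming only a summary statistic can be sent back from each device (the thresholds here are illustrative):

```python
from statistics import mean

# Compare the mean of a window of edge data against the training-set
# statistics; a large shift (in training-set standard deviations)
# flags the device for retraining. Numbers are illustrative.

def drift_score(train_stats, edge_window):
    mu, sigma = train_stats
    return abs(mean(edge_window) - mu) / sigma

train_stats = (0.0, 1.0)                    # from the cloud training set
stable = [0.1, -0.2, 0.05, 0.0, -0.1]
shifted = [2.9, 3.1, 3.0, 2.8, 3.2]
assert drift_score(train_stats, stable) < 1.0
assert drift_score(train_stats, shifted) > 2.0   # candidate for retraining
```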


Ongoing monitoring provides visibility into real-world performance post-deployment, revealing bottlenecks not caught during testing. For instance, a smart camera model update may be canary tested on 100 cameras first and rolled back if degraded accuracy is observed before expanding to all 5000 cameras.
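
The canary pattern in this example can be sketched as follows; the device names, accuracies, and tolerance are invented for illustration:

```python
# Update a small slice of the fleet first; roll back if monitored accuracy
# degrades past a tolerance, otherwise expand to the full fleet.

def canary_rollout(fleet, measure_accuracy, canary_size=100,
                   baseline_acc=0.90, tolerance=0.02):
    canary = fleet[:canary_size]
    canary_acc = measure_accuracy(canary)
    if canary_acc < baseline_acc - tolerance:
        return {"deployed": canary_size, "rolled_back": True}
    return {"deployed": len(fleet), "rolled_back": False}

fleet = [f"camera-{i}" for i in range(5000)]
healthy = canary_rollout(fleet, lambda devices: 0.91)
degraded = canary_rollout(fleet, lambda devices: 0.80)
assert healthy == {"deployed": 5000, "rolled_back": False}
assert degraded == {"deployed": 100, "rolled_back": True}
```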


Model Deployment


In traditional MLOps, new model versions are directly deployed onto servers via API endpoints. However, embedded devices require optimized delivery mechanisms to receive updated models. Over-the-air (OTA) updates provide a standardized approach to wirelessly distributing new software or firmware releases to embedded devices. Rather than direct API access, OTA packages allow remote deploying models and dependencies as pre-built bundles. Alternatively, federated learning allows model updates without direct access to raw training data. This decentralized approach has the potential for continuous model improvement but needs robust MLOps platforms.


Model delivery relies on physical interfaces like USB or UART serial connections for deeply embedded devices lacking connectivity. The model packaging still follows similar principles to OTA updates, but the deployment mechanism is tailored to the capabilities of the edge hardware. Moreover, specialized OTA protocols optimized for IoT networks are often used rather than standard WiFi or Bluetooth protocols. Key factors include efficiency, reliability, security, and telemetry, such as progress tracking. Solutions like Mender.io provide embedded-focused OTA services handling differential updates across device fleets.
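
One reliability measure common to OTA schemes is verifying bundle integrity before installation. A minimal sketch, with a hypothetical manifest format:

```python
import hashlib

# The device checks the downloaded model bundle against the digest in a
# (normally signed) manifest before swapping it in. Field names are
# illustrative, not any particular OTA product's format.

def verify_bundle(bundle_bytes, manifest):
    digest = hashlib.sha256(bundle_bytes).hexdigest()
    return digest == manifest["sha256"]

bundle = b"model-weights-v2"
manifest = {"version": "2.0.0",
            "sha256": hashlib.sha256(bundle).hexdigest()}
assert verify_bundle(bundle, manifest)
assert not verify_bundle(b"corrupted-download", manifest)
```

Production systems add signature verification and an A/B partition scheme so a failed update can fall back to the previous model.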


Figure 13.6 presents an overview of Model Lifecycle Management in an MLOps context, illustrating the flow from development (top left) to deployment and monitoring (bottom right). The process begins with ML Development, where code and configurations are version-controlled. Data and model management are central to the process, involving datasets and feature repositories. Continuous training, model conversion, and model registry are key stages in the operationalization of training. The model deployment includes serving the model and managing serving logs. Alerting mechanisms are in place to flag issues, which feed into continuous monitoring to ensure model performance and reliability over time. This integrated approach ensures that models are developed and maintained effectively throughout their lifecycle.

Figure 13.6: Model lifecycle management. Credit: HarvardX.

13.7.2 Development and Operations Integration


CI/CD Pipelines

In traditional MLOps, robust CI/CD infrastructure like Jenkins and Kubernetes enables pipeline automation for large-scale model deployment. However, embedded MLOps lack this centralized infrastructure and require more tailored CI/CD workflows for edge devices.

Building CI/CD pipelines has to account for a fragmented landscape of diverse hardware, firmware versions, and connectivity constraints. There is no standard platform to orchestrate pipelines, and tooling support is more limited.


Testing must cover this wide spectrum of target embedded devices early, which is difficult without centralized access. Companies must invest significant effort into acquiring and managing test infrastructure across the heterogeneous embedded ecosystem.


Over-the-air updates require setting up specialized servers to distribute model bundles securely to devices in the field. Rollout and rollback procedures must also be carefully tailored for particular device families.


With traditional CI/CD tools less applicable, embedded MLOps rely more on custom scripts and integration. Companies take varied approaches, from open-source frameworks to fully in-house solutions. Tight integration between developers, edge engineers, and end customers establishes trusted release processes.


Therefore, embedded MLOps can’t leverage centralized cloud infrastructure for CI/CD. Companies combine custom pipelines, testing infrastructure, and OTA delivery to deploy models across fragmented and disconnected edge systems.


Infrastructure Management


In traditional centralized MLOps, infrastructure entails provisioning cloud servers, GPUs, and high-bandwidth networks for intensive workloads like model training and serving predictions at scale. However, embedded MLOps require more heterogeneous infrastructure spanning edge devices, gateways, and the cloud.

Edge devices like sensors capture and preprocess data locally before intermittent transmission to avoid overloading networks. Gateways aggregate and process device data before sending select subsets to the cloud for training and analysis. The cloud provides centralized management and supplemental computing.

This infrastructure needs tight integration and balancing processing and communication loads. Network bandwidth is limited, requiring careful data filtering and compression. Edge computing capabilities are modest compared to the cloud, imposing optimization constraints.


Managing secure OTA updates across large device fleets presents challenges at the edge. Rollouts must be incremental and rollback-ready for quick mitigation. Given decentralized environments, updating edge infrastructure requires coordination.


For example, an industrial plant may perform basic signal processing on sensors before sending data to an on-prem gateway. The gateway handles data aggregation, infrastructure monitoring, and OTA updates. Only curated data is transmitted to the cloud for advanced analytics and model retraining.


Embedded MLOps requires holistic management of distributed infrastructure spanning constrained edge, gateways, and centralized cloud. Workloads are balanced across tiers while accounting for connectivity, computing, and security challenges.


Communication & Collaboration


In traditional MLOps, collaboration tends to center around data scientists, ML engineers, and DevOps teams. However, embedded MLOps require tighter cross-functional coordination between additional roles to address system constraints.


Edge engineers optimize model architectures for target hardware environments. They provide feedback to data scientists during development so models fit device capabilities early on. Similarly, product teams define operational requirements informed by end-user contexts.


With more stakeholders across the embedded ecosystem, communication channels must facilitate information sharing between centralized and remote teams. Issue tracking and project management ensure alignment.


Collaborative tools optimize models for particular devices. Data scientists can log issues replicated from field devices so models specialize in niche data. Remote device access aids debugging and data collection.


For example, data scientists may collaborate with field teams managing fleets of wind turbines to retrieve operational data samples. This data is used to specialize models detecting anomalies specific to that turbine class. Model updates are tested in simulations and reviewed by engineers before field deployment.


Embedded MLOps mandates continuous coordination between data scientists, engineers, end customers, and other stakeholders throughout the ML lifecycle. Through close collaboration, models can be tailored and optimized for targeted edge devices.


13.7.3 Operational Excellence


Monitoring


Traditional MLOps monitoring focuses on centrally tracking model accuracy, performance metrics, and data drift. However, embedded MLOps must account for decentralized monitoring across diverse edge devices and environments.


Edge devices require optimized data collection to transmit key monitoring metrics without overloading networks. Metrics help assess model performance, data patterns, resource usage, and other behaviors on remote devices.


With limited connectivity, more analysis occurs at the edge before aggregating insights centrally. Gateways play a key role in monitoring fleet health and coordinating software updates. Confirmed indicators are eventually propagated to the cloud.
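
For illustration, a gateway's role in condensing fleet telemetry might look like this sketch; the metric names and latency SLO are hypothetical:

```python
# A gateway summarizes per-device metrics so that only compact fleet-level
# indicators travel to the cloud, instead of raw telemetry streams.

def summarize(fleet_metrics, latency_slo_ms=100):
    latencies = [m["latency_ms"] for m in fleet_metrics]
    violators = [m["device"] for m in fleet_metrics
                 if m["latency_ms"] > latency_slo_ms]
    return {"devices": len(fleet_metrics),
            "mean_latency_ms": sum(latencies) / len(latencies),
            "slo_violations": violators}

metrics = [{"device": "sensor-1", "latency_ms": 42},
           {"device": "sensor-2", "latency_ms": 180},
           {"device": "sensor-3", "latency_ms": 60}]
summary = summarize(metrics)
assert summary["slo_violations"] == ["sensor-2"]
```

Only the summary and the list of violating devices need to be uploaded, which keeps bandwidth use proportional to problems rather than fleet size.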


Broad device coverage is challenging but critical. Issues specific to certain device types may arise, so monitoring needs to cover the full spectrum. Canary deployments help trial monitoring processes before scaling.


Anomaly detection identifies incidents requiring rolling back models or retraining on new data. However, interpreting alerts requires understanding unique device contexts based on input from engineers and customers.


For example, an automaker may monitor autonomous vehicles for indicators of model degradation using caching, aggregation, and real-time streams. Engineers assess when identified anomalies warrant OTA updates to improve models based on factors like location and vehicle age.


Embedded MLOps monitoring provides observability into model and system performance across decentralized edge environments. Careful data collection, analysis, and collaboration deliver meaningful insights to maintain reliability.


Governance


In traditional MLOps, governance focuses on model explainability, fairness, and compliance for centralized systems. However, embedded MLOps must also address device-level governance challenges related to data privacy, security, and safety.


With sensors collecting personal and sensitive data, local data governance on devices is critical. Data access controls, anonymization, and encrypted caching help address privacy risks and compliance like HIPAA and GDPR. Updates must maintain security patches and settings.
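
As an illustration of device-level data governance, a record might be scrubbed before transmission along these lines. The schema, field names, and salt handling are hypothetical; real deployments would use vetted libraries and proper key management.

```python
import hashlib

# Drop sensitive fields entirely and pseudonymize identifiers before a
# record leaves the device. Illustrative sketch, not compliance advice.

SENSITIVE = {"name", "address"}          # never transmitted
PSEUDONYMIZE = {"patient_id"}            # replaced with a salted hash

def scrub(record, salt=b"device-local-salt"):
    out = {}
    for key, value in record.items():
        if key in SENSITIVE:
            continue
        if key in PSEUDONYMIZE:
            out[key] = hashlib.sha256(salt + str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

record = {"patient_id": 1234, "name": "Jane Doe",
          "address": "1 Main St", "heart_rate": 72}
clean = scrub(record)
assert "name" not in clean and "address" not in clean
assert clean["heart_rate"] == 72 and clean["patient_id"] != 1234
```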


Safety governance considers the physical impacts of flawed device behavior. Failures could cause unsafe conditions in vehicles, factories, and critical systems. Redundancy, fail-safes, and warning systems help mitigate risks.


Traditional governance, such as bias monitoring and model explainability, remains imperative but is harder to implement for embedded AI. Peeking into black-box models on low-power devices also poses challenges.


For example, a medical device may scrub personal data on the device before transmission. Strict data governance protocols approve model updates. Model explainability is limited, but the focus is on detecting anomalous behavior. Backup systems prevent failures.


Embedded MLOps governance must encompass privacy, security, safety, transparency, and ethics. Specialized techniques and team collaboration are needed to help establish trust and accountability within decentralized environments.


13.7.4 Comparison


Here is a comparison table highlighting similarities and differences between Traditional MLOps and Embedded MLOps based on all the things we have learned thus far:

| Area | Traditional MLOps | Embedded MLOps |
| --- | --- | --- |
| Data Management | Large datasets, data lakes, feature stores | On-device data capture, edge caching and processing |
| Model Development | Leverage deep learning, complex neural nets, GPU training | Constraints on model complexity, need for optimization |
| Deployment | Server clusters, cloud deployment, low latency at scale | OTA deployment to devices, intermittent connectivity |
| Monitoring | Dashboards, logs, alerts for cloud model performance | On-device monitoring of predictions, resource usage |
| Retraining | Retrain models on new data | Federated learning from devices, edge retraining |
| Infrastructure | Dynamic cloud infrastructure | Heterogeneous edge/cloud infrastructure |
| Collaboration | Shared experiment tracking and model registry | Collaboration for device-specific optimization |

So, while Embedded MLOps shares foundational MLOps principles, it faces unique constraints in tailoring workflows and infrastructure specifically for resource-constrained edge devices.


13.8 Commercial Offerings

While commercial offerings are no substitute for understanding the underlying principles, an increasing number of them help ease the burden of building ML pipelines and integrating tools to build, test, deploy, and monitor ML models in production.

13.8.1 Traditional MLOps

Google, Microsoft, and Amazon all offer their own versions of managed ML services. These include services that manage model training and experimentation, model hosting and scaling, and monitoring. These offerings are available via an API and client SDKs, as well as through web UIs. While it is possible to build your own end-to-end MLOps solution using pieces from each, the greatest ease-of-use benefits come from staying within a single provider's ecosystem to take advantage of interservice integrations.

Below is a quick overview of the services that fit into each part of the MLOps lifecycle described above, with examples of offerings from different providers. The space is moving very quickly, with new companies and products entering the scene rapidly, and these examples are not meant to serve as an endorsement of any particular company's offering.

Data Management

Data storage and versioning are table stakes for any commercial offering, and most take advantage of existing general-purpose storage solutions such as S3. Others use more specialized options such as git-based storage (for example, Hugging Face's Dataset Hub). This is an area where providers make it easy to support their competitors' data storage options, as they don't want this to be a barrier to adoption of the rest of their MLOps services. For example, Vertex AI's training pipeline seamlessly supports datasets stored in S3, Google Cloud Buckets, or Hugging Face's Dataset Hub.

Model Training


Managed training services are where cloud providers shine, as they provide on-demand access to hardware that is out of reach for most smaller companies. They bill only for hardware during training time, putting GPU-accelerated training within reach of even the smallest developer teams. The control developers have over their training workflow can vary widely depending on their needs. Some providers have services that provide little more than access to the resources and rely on the developer to manage the training loop, logging, and model storage themselves. Other services are as simple as pointing to a base model and a labeled data set to kick off a fully managed finetuning job (example: Vertex AI Fine Tuning).


A word of warning: As of 2023, GPU hardware demand well exceeds supply, and as a result, cloud providers are rationing access to their GPUs. In some data center regions, GPUs may be unavailable or require long-term contracts.


Model Evaluation


Model evaluation tasks typically involve monitoring models’ accuracy, latency, and resource usage in both the testing and production phases. Unlike embedded systems, ML models deployed to the cloud benefit from constant internet connectivity and unlimited logging capacities. As a result, it is often feasible to capture and log every request and response. This makes replaying or generating synthetic requests to compare different models and versions tractable.

Some providers also offer services that automate experiment tracking as model hyperparameters are modified. They track the runs and their performance and generate artifacts from these model training runs. Example: Weights & Biases.

Model Deployment

Each provider typically has a service referred to as a "model registry," where trained models are stored and accessed. Often, these registries also provide access to base models that are either open source or provided by larger technology companies (or, in some cases, like Llama, both!). These model registries are a common place to compare all the models and their versions, allowing easy decision-making on which to pick for a given use case. Example: Vertex AI's model registry.

From the model registry, deploying a model to an inference endpoint is quick and simple, and it handles the resource provisioning, model weight downloading, and hosting of a given model. These services typically give access to the model via a REST API where inference requests can be sent. Depending on the model type, specific resources can be configured, such as which type of GPU accelerator may be needed to hit the desired performance. Some providers may also offer serverless inference or batch inference options that do not need a persistent endpoint to access the model. Example: AWS SageMaker Inference


13.8.2 Embedded MLOps

Despite the proliferation of new MLOps tools in response to the increase in demand, the challenges described earlier have constrained the availability of such tools in embedded systems environments. More recently, new tools such as Edge Impulse (Janapa Reddi et al. 2023) have made the development process somewhat easier, as described below.

Janapa Reddi, Vijay, Alexander Elium, Shawn Hymel, David Tischler, Daniel Situnayake, Carl Ward, Louis Moreau, et al. 2023. "Edge Impulse: An MLOps Platform for Tiny Machine Learning." Proceedings of Machine Learning and Systems 5.

Edge Impulse


Edge Impulse is an end-to-end development platform for creating and deploying machine learning models onto edge devices such as microcontrollers and small processors. It aims to make embedded machine learning more accessible to software developers through its easy-to-use web interface and integrated tools for data collection, model development, optimization, and deployment. Its key capabilities include the following:

  • Intuitive drag-and-drop workflow for building ML models without any coding required
  • Tools for acquiring, labeling, visualizing, and preprocessing data from sensors
  • A choice of model architectures, including neural networks and unsupervised learning
  • Model optimization techniques to balance performance metrics and hardware constraints
  • Seamless deployment onto edge devices through compilation, SDKs, and benchmarks
  • Collaboration features for teams and integration with other platforms

With Edge Impulse, developers with limited data science expertise can develop specialized ML models that run efficiently within small computing environments. It provides a comprehensive solution for creating embedded intelligence and advancing machine learning.

User Interface

Edge Impulse was designed with seven key principles: accessibility, end-to-end capabilities, a data-centric approach, interactiveness, extensibility, team orientation, and community support. The intuitive user interface, shown in Figure 13.7, guides developers at all experience levels through uploading data, selecting a model architecture, training the model, and deploying it across relevant hardware platforms. It should be noted that, like any tool, Edge Impulse is intended to assist with, not replace, foundational considerations such as determining if ML is an appropriate solution or acquiring the requisite domain expertise for a given application.

Figure 13.7: Screenshot of Edge Impulse user interface for building workflows from input data to output features.

What makes Edge Impulse notable is its comprehensive yet intuitive end-to-end workflow. Developers start by uploading their data through file upload or command line interface (CLI) tools, after which they can examine raw samples and visualize the data distribution in the training and test splits. Next, users can pick from various preprocessing “blocks” to facilitate digital signal processing (DSP). While default parameter values are provided, users can customize the parameters as needed, with considerations around memory and latency displayed. Users can easily choose their neural network architecture - without any code needed.


Thanks to the platform’s visual editor, users can customize the architecture’s components and specific parameters while ensuring that the model is still trainable. Users can also leverage unsupervised learning algorithms, such as K-means clustering and Gaussian mixture models (GMM).

Optimizations

To accommodate the resource constraints of TinyML applications, Edge Impulse provides a confusion matrix summarizing key performance metrics, including per-class accuracy and F1 scores. The platform elucidates the tradeoffs between model performance, size, and latency using simulations in Renode and device-specific benchmarking. For streaming data use cases, a performance calibration tool leverages a genetic algorithm to find ideal post-processing configurations balancing false acceptance and false rejection rates. Techniques like quantization, code optimization, and device-specific optimization are available to optimize models. For deployment, models can be compiled in appropriate formats for target edge devices. Native firmware SDKs also enable direct data collection on devices.
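
To make the quantization technique mentioned above concrete, here is a minimal sketch of symmetric per-tensor int8 quantization. This shows the general idea, not Edge Impulse's specific implementation, and the weight values are invented.

```python
# Map float32 weights onto int8 with a single per-tensor scale factor.
# Storage drops 4x; the price is a bounded rounding error per weight.

def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.52, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
assert all(-128 <= qi <= 127 for qi in q)
assert all(abs(w - r) < scale for w, r in zip(weights, restored))
```

The rounding error per weight is at most half the scale, which is why quantization usually costs little accuracy when weight magnitudes are well distributed.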


In addition to streamlining development, Edge Impulse scales the modeling process itself. A key capability is the EON Tuner, an automated machine learning (AutoML) tool that assists users in hyperparameter tuning based on system constraints. It runs a random search to generate configurations for digital signal processing and training steps quickly. The resulting models are displayed for the user to select based on relevant performance, memory, and latency metrics. For data, active learning facilitates training on a small labeled subset, followed by manually or automatically labeling new samples based on proximity to existing classes. This expands data efficiency.
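
The constraint-aware random search idea behind such a tuner can be sketched as follows; the search space, RAM proxy, and budget are invented for illustration:

```python
import random

# Sample configurations at random, estimate resource cost with a cheap
# proxy model, and keep only configurations that fit the device budget.
# In a real tuner, surviving candidates would then be trained and ranked.

random.seed(0)
SEARCH_SPACE = {"filters": [8, 16, 32, 64], "layers": [1, 2, 3]}
RAM_BUDGET_KB = 64

def estimate_ram_kb(cfg):
    return cfg["filters"] * cfg["layers"] * 0.5   # toy proxy, not a real cost model

candidates = []
for _ in range(20):
    cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    ram = estimate_ram_kb(cfg)
    if ram <= RAM_BUDGET_KB:
        candidates.append((cfg, ram))

assert candidates                                   # some configs fit the budget
assert all(ram <= RAM_BUDGET_KB for _, ram in candidates)
```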

Use Cases

Beyond the accessibility of the platform itself, the Edge Impulse team has expanded the knowledge base of the embedded ML ecosystem. The platform lends itself to academic environments, having been used in online courses and on-site workshops globally. Numerous case studies featuring industry and research use cases have been published, most notably Oura Ring, which uses ML to identify sleep patterns. The team has made repositories open source on GitHub, facilitating community growth. Users can also make projects public to share techniques, and libraries are distributed under the Apache license. Organization-level access enables collaboration on workflows.

Overall, Edge Impulse is uniquely comprehensive and easy to integrate into developer workflows. Larger platforms like Google and Microsoft focus more on the cloud than on embedded systems. TinyMLOps frameworks such as Neuton AI and Latent AI offer some functionality but lack Edge Impulse's end-to-end capabilities. TensorFlow Lite Micro is the standard inference engine due to its flexibility, open-source status, and TensorFlow integration, but it uses more memory and storage than Edge Impulse's EON Compiler. Other platforms are less actively maintained, academic-focused, or less versatile. In summary, Edge Impulse aims to streamline and scale embedded ML through an accessible, automated platform.


Limitations


While Edge Impulse provides an accessible pipeline for embedded ML, important limitations and risks remain. A key challenge is data quality and availability - the models are only as good as the data used to train them. Users must have sufficient labeled samples that capture the breadth of expected operating conditions and failure modes. Labeled anomalies and outliers are critical yet time-consuming to collect and identify. Insufficient or biased data leads to poor model performance regardless of the tool’s capabilities.

Deploying to low-powered devices also presents inherent challenges. Even optimized models may be too resource-intensive for ultra-low-power MCUs. Striking the right balance of compression versus accuracy takes some experimentation. The tool simplifies, but does not eliminate, the need for foundational ML and signal processing expertise. Embedded environments also constrain debugging and interpretability compared to the cloud.

While impressive results are achievable, users shouldn’t view Edge Impulse as a “Push Button ML” solution. Careful project scoping, data collection, model evaluation, and testing are still essential. As with any development tool, reasonable expectations and diligence in application are advised. However, Edge Impulse can accelerate embedded ML prototyping and deployment for developers willing to invest the requisite data science and engineering effort.


Exercise 13.1 (Edge Impulse)  


Ready to level up your tiny machine-learning projects? Let’s combine the power of Edge Impulse with the awesome visualizations of Weights & Biases (WandB). In this Colab, you’ll learn to track your model’s training progress like a pro! Imagine seeing cool graphs of your model getting smarter, comparing different versions, and ensuring your AI performs its best even on tiny devices.


13.9 Case Studies


13.9.1 Oura Ring

The Oura Ring is a wearable that can measure activity, sleep, and recovery when placed on the user's finger. Using sensors to track physiological metrics, the device uses embedded ML to predict the stages of sleep. To establish a baseline of legitimacy in the industry, Oura conducted a correlation experiment to evaluate the device's success in predicting sleep stages against a baseline study. This yielded a 62% correlation, well short of the 82-83% baseline. Thus, the team set out to determine how to improve their performance further.

The first challenge was to obtain better data in terms of both quantity and quality. They could host a larger study to get a more comprehensive data set, but the data would be so noisy and large that it would be difficult to aggregate, scrub, and analyze. This is where Edge Impulse comes in.

Oura hosted a large sleep study of 100 men and women between the ages of 15 and 73 across three continents (Asia, Europe, and North America). In addition to wearing the Oura Ring, participants underwent industry-standard PSG testing, which provided the "labels" for this data set. With 440 nights of sleep from 106 participants, the data set totaled 3,444 hours across Ring and PSG data. With Edge Impulse, Oura could easily upload and consolidate data from different sources into a private S3 bucket. They were also able to set up a data pipeline to merge data samples into individual files and preprocess the data without manual scrubbing.

Because of the time saved on data processing thanks to Edge Impulse, the Oura team could focus on the key drivers of their prediction. They only extracted three types of sensor data: heart rate, motion, and body temperature. After partitioning the data using five-fold cross-validation and classifying sleep stages, the team achieved a correlation of 79% - just a few percentage points off the standard. They readily deployed two types of sleep detection models: one simplified using just the ring’s accelerometer and one more comprehensive leveraging Autonomic Nervous System (ANS)-mediated peripheral signals and circadian features. With Edge Impulse, they plan to conduct further analyses of different activity types and leverage the platform’s scalability to continue experimenting with different data sources and subsets of extracted features.
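
The five-fold cross-validation scheme mentioned above can be sketched generically; the strided split below is one simple way to build the folds:

```python
# k-fold cross-validation: each sample lands in the validation fold
# exactly once across the k train/validation splits.

def k_fold(indices, k=5):
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, val

nights = list(range(440))            # 440 nights of sleep data, as in the study
splits = list(k_fold(nights, k=5))
assert len(splits) == 5
assert all(len(t) + len(v) == 440 for t, v in splits)
assert sorted(x for _, v in splits for x in v) == nights
```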


While most ML research focuses on model-dominant steps such as training and finetuning, this case study underscores the importance of a holistic approach to ML Ops, where even the initial steps of data aggregation and preprocessing fundamentally impact successful outcomes.


13.9.2 ClinAIOps


Let’s look at MLOps in the context of medical health monitoring to better understand how MLOps “matures” in a real-world deployment. Specifically, let’s consider continuous therapeutic monitoring (CTM) enabled by wearable devices and sensors. CTM captures detailed physiological data from patients, providing the opportunity for more frequent and personalized adjustments to treatments.


Wearable ML-enabled sensors enable continuous physiological and activity monitoring outside clinics, opening up possibilities for timely, data-driven therapy adjustments. For example, wearable insulin biosensors (Psoma and Kanthou 2023) and wrist-worn ECG sensors for glucose monitoring (Li et al. 2021) can automate insulin dosing for diabetes, wrist-worn ECG and PPG sensors can adjust blood thinners based on atrial fibrillation patterns (Attia et al. 2018; Guo et al. 2019), and accelerometers tracking gait can trigger preventative care for declining mobility in the elderly (Liu et al. 2022). The variety of signals that can now be captured passively and continuously allows therapy titration and optimization tailored to each patient’s changing needs. By closing the loop between physiological sensing and therapeutic response with TinyML and on-device learning, wearables are poised to transform many areas of personalized medicine.

Psoma, Sotiria D., and Chryso Kanthou. 2023. "Wearable Insulin Biosensors for Diabetes Management: Advances and Challenges." Biosensors 13 (7): 719. https://doi.org/10.3390/bios13070719.
Li, Jingzhen, Igbe Tobore, Yuhang Liu, Abhishek Kandwal, Lei Wang, and Zedong Nie. 2021. "Non-Invasive Monitoring of Three Glucose Ranges Based on ECG by Using DBSCAN-CNN." IEEE J. Biomed. Health Inform. 25 (9): 3340-50. https://doi.org/10.1109/jbhi.2021.3072628.
Attia, Zachi I., Alan Sugrue, Samuel J. Asirvatham, Michael J. Ackerman, Suraj Kapa, Paul A. Friedman, and Peter A. Noseworthy. 2018. "Noninvasive Assessment of Dofetilide Plasma Concentration Using a Deep Learning (Neural Network) Analysis of the Surface Electrocardiogram: A Proof of Concept Study." PLoS One 13 (8): e0201059. https://doi.org/10.1371/journal.pone.0201059.
Guo, Yutao, Hao Wang, Hui Zhang, Tong Liu, Zhaoguang Liang, Yunlong Xia, Li Yan, et al. 2019. "Mobile Photoplethysmographic Technology to Detect Atrial Fibrillation." J. Am. Coll. Cardiol. 74 (19): 2365-75. https://doi.org/10.1016/j.jacc.2019.08.019.
Liu, Yingcheng, Guo Zhang, Christopher G. Tarolli, Rumen Hristov, Stella Jensen-Roberts, Emma M. Waddell, Taylor L. Myers, et al. 2022. "Monitoring Gait at Home with Radio Waves in Parkinson's Disease: A Marker of Severity, Progression, and Medication Response." Sci. Transl. Med. 14 (663): eadc9669. https://doi.org/10.1126/scitranslmed.adc9669.

ML holds great promise in analyzing CTM data to provide data-driven recommendations for therapy adjustments. But simply deploying AI models in silos, without integrating them properly into clinical workflows and decision-making, can lead to poor adoption or suboptimal outcomes. In other words, thinking about MLOps alone is insufficient to make them useful in practice. This study shows that frameworks are needed to incorporate AI and CTM into real-world clinical practice seamlessly.


This case study analyzes “ClinAIOps” as a model for embedded ML operations in complex clinical environments (Chen et al. 2023). We provide an overview of the framework and why it’s needed, walk through an application example, and discuss key implementation challenges related to model monitoring, workflow integration, and stakeholder incentives. Analyzing real-world examples like ClinAIOps illuminates crucial principles and best practices for reliable and effective AI Ops across many domains.

-

Traditional MLOps frameworks are insufficient for integrating continuous therapeutic monitoring (CTM) and AI in clinical settings for a few key reasons:

-
    -
  • MLOps focuses on the ML model lifecycle—training, deployment, monitoring. But healthcare involves coordinating multiple human stakeholders—patients and clinicians—not just models.

  • -
  • MLOps aims to automate IT system monitoring and management. However, optimizing patient health requires personalized care and human oversight, not just automation.

  • -
  • CTM and healthcare delivery are complex sociotechnical systems with many moving parts. MLOps doesn’t provide a framework for coordinating human and AI decision-making.

  • -
  • Ethical considerations regarding healthcare AI require human judgment, oversight, and accountability. MLOps frameworks lack processes for ethical oversight.

  • -
  • Patient health data is highly sensitive and regulated. MLOps alone doesn’t ensure that protected health information is handled according to privacy and regulatory standards.

  • -
  • Clinical validation of AI-guided treatment plans is essential for provider adoption. MLOps doesn’t incorporate domain-specific evaluation of model recommendations.

  • -
  • Optimizing healthcare metrics like patient outcomes requires aligning stakeholder incentives and workflows, which pure tech-focused MLOps overlooks.

  • -
-

Thus, effectively integrating AI/ML and CTM in clinical practice requires more than just model and data pipelines; it requires coordinating complex human-AI collaborative decision-making, which ClinAIOps aims to address via its multi-stakeholder feedback loops.

-
-

Feedback Loops

-

The ClinAIOps framework, shown in Figure fig-clinaiops, provides these mechanisms through three feedback loops. The loops coordinate the insights from continuous physiological monitoring, clinician expertise, and AI guidance, enabling data-driven precision medicine while maintaining human accountability. ClinAIOps provides a model for effective human-AI symbiosis in healthcare: the patient is at the center, providing health challenges and goals that inform the therapy regimen; the clinician oversees this regimen, giving inputs for adjustments based on continuous monitoring data and health reports from the patient; and AI developers play a crucial role by creating systems that generate alerts for therapy updates, which the clinician then vets.

-

These feedback loops, which we discuss below, serve several purposes. They maintain clinician responsibility and control over treatment plans by requiring review of AI suggestions before they affect patients. They dynamically customize AI model behavior and outputs to each patient’s changing health status. They improve model accuracy and clinical utility over time by learning from clinician and patient responses. They facilitate shared decision-making and personalized care during patient-clinician interactions. And they enable rapid optimization of therapies based on frequent patient data that clinicians cannot manually analyze.

-
-
-
- -
-
-Figure 13.8: ClinAIOps cycle. Credit: Chen et al. (2023). -
-
-
-
-
Patient-AI Loop
-

The patient-AI loop enables frequent therapy optimization driven by continuous physiological monitoring. Patients are prescribed wearables like smartwatches or skin patches to collect relevant health signals passively. For example, a diabetic patient could have a continuous glucose monitor, or a heart disease patient may wear an ECG patch. An AI model analyzes the patient’s longitudinal health data streams in the context of their electronic medical records: their diagnoses, lab tests, medications, and demographics. The AI model suggests adjustments to the treatment regimen tailored to that individual, like changing a medication dose or administration schedule. Minor adjustments within a pre-approved safe range can be made by the patient independently, while major changes are reviewed by the clinician first. This tight feedback between the patient’s physiology and AI-guided therapy allows data-driven, timely optimizations like automated insulin dosing recommendations based on real-time glucose levels for diabetes patients.
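The routing rule at the heart of this loop (minor changes applied directly by the patient, major changes escalated to the clinician) can be sketched as a simple guard. The function name, dose units, and safe range below are illustrative assumptions, not the actual ClinAIOps implementation:

```python
# Hypothetical sketch of the patient-AI routing rule described above.
# The safe range is pre-approved by the clinician; anything outside it
# requires clinician review before the patient's regimen changes.

def route_adjustment(current_dose, proposed_dose, safe_range):
    """Decide whether an AI-proposed dose change can be applied
    directly by the patient or must first be reviewed by a clinician."""
    lo, hi = safe_range
    delta = proposed_dose - current_dose
    if lo <= delta <= hi:
        return "apply_automatically"  # minor change within pre-approved range
    return "clinician_review"         # major change, needs human oversight

# Example: the clinician pre-approves changes of up to +/- 2 units.
print(route_adjustment(20, 21, (-2, 2)))  # apply_automatically
print(route_adjustment(20, 26, (-2, 2)))  # clinician_review
```

The key design point is that the boundary between the two branches is set by a human, not by the model, which keeps the clinician in control of the risk envelope.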

-
-
-
Clinician-AI Loop
-

The clinician-AI loop allows clinical oversight over AI-generated recommendations to ensure safety and accountability. The AI model provides the clinician with treatment recommendations and easily reviewed summaries of the relevant patient data on which the suggestions are based. For instance, an AI may suggest lowering a hypertension patient’s blood pressure medication dose based on continuously low readings. The clinician can accept, reject, or modify the AI’s proposed prescription changes. This clinician feedback further trains and improves the model. Additionally, the clinician sets the bounds for the types and extent of treatment changes the AI can autonomously recommend to patients. By reviewing AI suggestions, the clinician maintains ultimate treatment authority based on their clinical judgment and accountability. This loop allows them to oversee patient cases with AI assistance efficiently.

-
-
-
Patient-Clinician Loop
-

Instead of routine data collection, the clinician can focus on interpreting high-level data patterns and collaborating with the patient to set health goals and priorities. The AI assistance will also free up clinicians’ time, allowing them to focus more deeply on listening to patients’ stories and concerns. For instance, the clinician may discuss diet and exercise changes with a diabetes patient to improve their glucose control based on their continuous monitoring data. Appointment frequency can also be dynamically adjusted based on patient progress rather than following a fixed calendar. Freed from basic data gathering, the clinician can provide coaching and care customized to each patient informed by their continuous health data. The patient-clinician relationship is made more productive and personalized.

-
-
-
-

Hypertension Example

-

Let’s consider an example. According to the Centers for Disease Control and Prevention, nearly half of adults have hypertension (48.1%, 119.9 million). Hypertension can be managed through ClinAIOps with the help of wearable sensors using the following approach:

-
-
Data Collection
-

The data collected would include continuous blood pressure monitoring using a wrist-worn device equipped with photoplethysmography (PPG) and electrocardiography (ECG) sensors to estimate blood pressure (Zhang, Zhou, and Zeng 2017). The wearable would also track the patient’s physical activity via embedded accelerometers. The patient would log any antihypertensive medications they take, along with the time and dose. The patient’s demographic details and medical history from their electronic health record (EHR) would also be incorporated. This multimodal real-world data provides valuable context for the AI model to analyze the patient’s blood pressure patterns, activity levels, medication adherence, and responses to therapy.
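As a sketch, the multimodal record described above might be bundled into a single structure before being fed to the model. Every field name here is an assumption chosen for illustration, not a standard EHR or device schema:

```python
# Illustrative container for one sample of the multimodal monitoring
# data described above (hypothetical field names, not a real schema).
from dataclasses import dataclass, field

@dataclass
class BPMonitoringRecord:
    timestamp: float            # Unix time of the sample
    systolic_mmHg: float        # PPG/ECG-estimated blood pressure
    diastolic_mmHg: float
    step_count: int             # accelerometer-derived activity
    medication_events: list = field(default_factory=list)  # (drug, dose_mg, time)

record = BPMonitoringRecord(
    timestamp=1_700_000_000.0,
    systolic_mmHg=142.0,
    diastolic_mmHg=91.0,
    step_count=3500,
    medication_events=[("lisinopril", 10, 1_699_998_000.0)],
)
print(record.systolic_mmHg)  # 142.0
```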

-
-Zhang, Qingxue, Dian Zhou, and Xuan Zeng. 2017. “Highly Wearable Cuff-Less Blood Pressure and Heart Rate Monitoring with Single-Arm Electrocardiogram and Photoplethysmogram Signals.” BioMedical Engineering OnLine 16 (1): 23. https://doi.org/10.1186/s12938-017-0317-z. -
-
-
AI Model
-

The on-device AI model would analyze the patient’s continuous blood pressure trends, circadian patterns, physical activity levels, medication adherence behaviors, and other contexts. It would use ML to predict optimal antihypertensive medication doses and timing to control the individual’s blood pressure. The model would send dosage change recommendations directly to the patient for minor adjustments or to the reviewing clinician for approval for more significant modifications. By observing clinician feedback on its recommendations and evaluating the resulting blood pressure outcomes in patients, the AI model could be continually retrained and improved to enhance performance. The goal is fully personalized blood pressure management optimized for each patient’s needs and responses.

-
-
-
Patient-AI Loop
-

In the Patient-AI loop, the hypertensive patient would receive notifications on their wearable device or tethered smartphone app recommending adjustments to their antihypertensive medications. For minor dose changes within a pre-defined safe range, the patient could independently implement the AI model’s suggested adjustment to their regimen. For more significant modifications, however, the patient must obtain clinician approval before changing their dosage. Providing personalized and timely medication recommendations automates an element of hypertension self-management for the patient. It can improve adherence to the regimen as well as treatment outcomes. The patient is empowered to leverage AI insights to better control their blood pressure.

-
-
-
Clinician-AI Loop
-

In the Clinician-AI loop, the provider would receive summaries of the patient’s continuous blood pressure trends and visualizations of their medication-taking patterns and adherence. They review the AI model’s suggested antihypertensive dosage changes and decide whether to approve, reject, or modify the recommendations before they reach the patient. The clinician also specifies the boundaries for how much the AI can independently recommend changing dosages without clinician oversight. If the patient’s blood pressure is trending at dangerous levels, the system alerts the clinician so they can promptly intervene and adjust medications or request an emergency room visit. This loop maintains accountability and safety while allowing the clinician to harness AI insights by keeping the clinician in charge of approving major treatment changes.
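The dangerous-trend alert in this loop can be sketched as a sustained-threshold check, firing only when readings are persistently high rather than on a single spike. The 180 mmHg threshold and window size below are illustrative assumptions, not clinical guidance:

```python
# Minimal sketch of the clinician alerting rule described above.
# Threshold and window are illustrative, not clinical guidance.

DANGER_SYSTOLIC = 180  # assumed hypertensive-crisis threshold (mmHg)

def needs_clinician_alert(systolic_readings, k=3):
    """Alert only if the last k consecutive readings are all in the
    danger zone, i.e., blood pressure is *trending* dangerously,
    not merely showing a single noisy spike."""
    recent = systolic_readings[-k:]
    return len(recent) == k and all(r >= DANGER_SYSTOLIC for r in recent)

print(needs_clinician_alert([150, 185, 150, 182]))  # False: no sustained trend
print(needs_clinician_alert([150, 183, 186, 190]))  # True: sustained danger
```

Requiring a sustained trend before escalating is one way to keep alert volume low enough that clinicians do not start ignoring notifications.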

-
-
-
Patient-Clinician Loop
-

In the Patient-Clinician loop, shown in Figure fig-interactive-loop, the in-person visits would focus less on collecting data or basic medication adjustments. Instead, the clinician could interpret high-level trends and patterns in the patient’s continuous monitoring data and have focused discussions about diet, exercise, stress management, and other lifestyle changes to improve their blood pressure control holistically. The frequency of appointments could be dynamically optimized based on the patient’s stability rather than following a fixed calendar. Since the clinician would not need to review all the granular data, they could concentrate on delivering personalized care and recommendations during visits. With continuous monitoring and AI-assisted optimization of medications between visits, the clinician-patient relationship focuses on overall wellness goals and becomes more impactful. This proactive and tailored data-driven approach can help avoid hypertension complications like stroke, heart failure, and other threats to patient health and well-being.

-
-
-
- -
-
-Figure 13.9: ClinAIOps interactive loop. Credit: Chen et al. (2023). -
-
-Chen, Emma, Shvetank Prakash, Vijay Janapa Reddi, David Kim, and Pranav Rajpurkar. 2023. “A Framework for Integrating Artificial Intelligence for Clinical Care with Continuous Therapeutic Monitoring.” Nat. Biomed. Eng., November. https://doi.org/10.1038/s41551-023-01115-0. -
-
-
-
-
-

MLOps vs. ClinAIOps

-

The hypertension example illustrates well why traditional MLOps are insufficient for many real-world AI applications and why frameworks like ClinAIOps are needed instead.

-

With hypertension, simply developing and deploying an ML model for adjusting medications would only succeed if it considered the broader clinical context. The concerns of the patient, the clinician, and the health system all shape adoption. The AI model cannot optimize blood pressure outcomes alone—it requires integrating with workflows, behaviors, and incentives.

-
    -
Some key gaps the example highlights in a pure MLOps approach:
  • -
  • The model itself would lack the real-world patient data at scale to recommend treatments reliably. ClinAIOps enables this by collecting feedback from clinicians and patients via continuous monitoring.
  • -
  • Clinicians would not trust model recommendations without transparency, explainability, and accountability. ClinAIOps keeps the clinician in the loop to build confidence.
  • -
  • Patients need personalized coaching and motivation, not just AI notifications. The ClinAIOps patient-clinician loop facilitates this.
  • -
  • Sensor reliability and data accuracy would not be sufficient without clinical oversight. ClinAIOps validates recommendations.
  • -
  • Liability for treatment outcomes is unclear with just an ML model. ClinAIOps maintains human accountability.
  • -
  • Health systems would need to demonstrate value to change workflows. ClinAIOps aligns stakeholders.
  • -
-

The hypertension case clearly shows the need to look beyond training and deploying a performant ML model to consider the entire human-AI sociotechnical system. This is the key gap ClinAIOps aims to address over traditional MLOps. Traditional MLOps is overly tech-focused on automating ML model development and deployment, while ClinAIOps incorporates clinical context and human-AI coordination through multi-stakeholder feedback loops.

-

Table tbl-clinical_ops compares them. This table highlights how, when MLOps is implemented, we need to consider more than just ML models.

-
-
-
-Table 13.2: Comparison of MLOps versus AI operations for clinical use. -
-
                    | Traditional MLOps                      | ClinAIOps
Focus               | ML model development and deployment    | Coordinating human and AI decision-making
Stakeholders        | Data scientists, IT engineers          | Patients, clinicians, AI developers
Feedback loops      | Model retraining, monitoring           | Patient-AI, clinician-AI, patient-clinician
Objective           | Operationalize ML deployments          | Optimize patient health outcomes
Processes           | Automated pipelines and infrastructure | Integrates clinical workflows and oversight
Data considerations | Building training datasets             | Privacy, ethics, protected health information
Model validation    | Testing model performance metrics      | Clinical evaluation of recommendations
Implementation      | Focuses on technical integration       | Aligns incentives of human stakeholders
-
-
-
-
-
-

Summary

-

In complex domains like healthcare, successfully deploying AI requires moving beyond a narrow focus on training and deploying performant ML models. As illustrated through the hypertension example, real-world integration of AI necessitates coordinating diverse stakeholders, aligning incentives, validating recommendations, and maintaining accountability. Frameworks like ClinAIOps, which facilitate collaborative human-AI decision-making through integrated feedback loops, are needed to address these multifaceted challenges. Rather than just automating tasks, AI must augment human capabilities and clinical workflows. This allows AI to positively impact patient outcomes, population health, and healthcare efficiency.

-
-
-
-
-

13.10 Conclusion

-

Embedded ML is poised to transform many industries by enabling AI capabilities directly on edge devices like smartphones, sensors, and IoT hardware. However, developing and deploying TinyML models on resource-constrained embedded systems poses unique challenges compared to traditional cloud-based MLOps.

-

This chapter provided an in-depth analysis of key differences between traditional and embedded MLOps across the model lifecycle, development workflows, infrastructure management, and operational practices. We discussed how factors like intermittent connectivity, decentralized data, and limited on-device computing necessitate innovative techniques like federated learning, on-device inference, and model optimization. Architectural patterns like cross-device learning and hierarchical edge-cloud infrastructure help mitigate constraints.

-

Through concrete examples like Oura Ring and ClinAIOps, we demonstrated applied principles for embedded MLOps. The case studies highlighted critical considerations beyond core ML engineering, like aligning stakeholder incentives, maintaining accountability, and coordinating human-AI decision-making. This underscores the need for a holistic approach spanning both technical and human elements.

-

While embedded MLOps face impediments, emerging tools like Edge Impulse and lessons from pioneers help accelerate TinyML innovation. A solid understanding of foundational MLOps principles tailored to embedded environments will empower more organizations to overcome constraints and deliver distributed AI capabilities. As frameworks and best practices mature, seamlessly integrating ML into edge devices and processes will transform industries through localized intelligence.

-
-
-

Resources

-

Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

-
-
-
- -
-
-Slides -
-
-
-
-
-

These slides serve as a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

- -
-
-
-
-
-
- -
-
-Exercises -
-
-
-
-
-

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

- -
-
-
-
-
-
- -
-
-Labs -
-
-
-
-
-

In addition to exercises, we also offer a series of hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.

-

Coming soon.

-
-
-
- - - -
- -
- - -
- - - - - - \ No newline at end of file diff --git a/contents/optimizations/optimizations.html b/contents/optimizations/optimizations.html deleted file mode 100644 index 45aa273a..00000000 --- a/contents/optimizations/optimizations.html +++ /dev/null @@ -1,2664 +0,0 @@ - - - - - - - - - -Machine Learning Systems - 9  Model Optimizations - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-
- - -
- -
- - -
- - - -
- -
-
-

9  Model Optimizations

-
- - - -
- - - - -
- - - -
- - -

Resources: Slides, Labs, Exercises

-
-
-

-
DALL·E 3 Prompt: Illustration of a neural network model represented as a busy construction site, with a diverse group of construction workers, both male and female, of various ethnicities, labeled as ‘pruning’, ‘quantization’, and ‘sparsity’. They are working together to make the neural network more efficient and smaller, while maintaining high accuracy. The ‘pruning’ worker, a Hispanic female, is cutting unnecessary connections from the middle of the network. The ‘quantization’ worker, a Caucasian male, is adjusting or tweaking the weights all over the place. The ‘sparsity’ worker, an African female, is removing unnecessary nodes to shrink the model. Construction trucks and cranes are in the background, assisting the workers in their tasks. The neural network is visually transforming from a complex and large structure to a more streamlined and smaller one.
-
-
-

When machine learning models are deployed on systems, especially on resource-constrained embedded systems, model optimization is a necessity. While machine learning often demands substantial computational resources, embedded systems are inherently limited in memory, processing power, and energy. This chapter dives into the art and science of optimizing machine learning models to ensure they are lightweight, efficient, and effective when deployed in TinyML scenarios.

-
-
-
- -
-
-Learning Objectives -
-
-
-
    -
  • Learn techniques like pruning, knowledge distillation and specialized model architectures to represent models more efficiently

  • -
  • Understand quantization methods to reduce model size and enable faster inference through reduced precision numerics

  • -
  • Explore hardware-aware optimization approaches to match models to target device capabilities

  • -
  • Discover software tools like frameworks and model conversion platforms that enable deployment of optimized models

  • -
  • Develop holistic thinking to balance tradeoffs in model complexity, accuracy, latency, power etc. based on application requirements

  • -
  • Gain strategic insight into selecting and applying model optimizations based on use case constraints and hardware targets

  • -
-
-
-
-

9.1 Introduction

-

We have structured this chapter in three tiers. First, in sec-model_ops_representation we examine the significance and methodologies of reducing the parameter complexity of models without compromising their inference capabilities. Techniques such as pruning and knowledge distillation are discussed, offering insights into how models can be compressed and simplified while maintaining, or even enhancing, their performance.

-

Going one level lower, in sec-model_ops_numerics, we study the role of numerical precision in model computations and how altering it impacts model size, speed, and accuracy. We will examine the various numerical formats and how reduced-precision arithmetic can be leveraged to optimize models for embedded deployment.

-

Finally, as we go lower and closer to the hardware, in sec-model_ops_hw, we will navigate through the landscape of hardware-software co-design, exploring how models can be optimized by tailoring them to the specific characteristics and capabilities of the target hardware. We will discuss how models can be adapted to exploit the available hardware resources effectively.

-
-
-
- -
-
-Figure 9.1: Three layers to be covered. -
-
-
-
-
-

9.2 Efficient Model Representation

-

The first avenue of attack for model optimization starts in familiar territory for most ML practitioners: efficient model representation is often first tackled at the highest level of parametrization abstraction - the model’s architecture itself.

-

Most traditional ML practitioners design models with a general high-level objective in mind, whether it be image classification, person detection, or keyword spotting, as mentioned previously in this textbook. Their designs naturally end up fitting into some soft constraints due to limited compute resources during development, but these designs are typically not aware of later constraints, such as those imposed if the model is to be deployed on a more constrained device instead of the cloud.

-

In this section, we’ll discuss how practitioners can harness principles of hardware-software co-design even at a model’s high level architecture to make their models compatible with edge devices. From most to least hardware aware at this level of modification, we discuss several of the most common strategies for efficient model parametrization: pruning, model compression, and edge-friendly model architectures.

-
-

9.2.1 Pruning

-
-

Overview

-

Model pruning is a technique in machine learning that aims to reduce the size and complexity of a neural network model while maintaining its predictive capabilities as much as possible. The goal of model pruning is to remove redundant or non-essential components of the model, including connections between neurons, individual neurons, or even entire layers of the network.

-

This process typically involves analyzing the machine learning model to identify and remove weights, nodes, or layers that have little impact on the model’s outputs. By selectively pruning a model in this way, the total number of parameters can be reduced significantly without substantial declines in model accuracy. The resulting compressed model requires less memory and computational resources to train and run while enabling faster inference times.

-

Model pruning is especially useful when deploying machine learning models to devices with limited compute resources, such as mobile phones or TinyML systems. The technique facilitates the deployment of larger, more complex models on these devices by reducing their resource demands. Additionally, smaller models require less data to generalize well and are less prone to overfitting. By providing an efficient way to simplify models, model pruning has become a vital technique for optimizing neural networks in machine learning.

-

There are several common pruning techniques used in machine learning; these include structured pruning, unstructured pruning, iterative pruning, Bayesian pruning, and even random pruning. In addition to pruning the weights, one can also prune the activations. Activation pruning specifically targets neurons or filters that activate rarely or have consistently low activation. There are numerous other methods, such as sensitivity and movement pruning. For a comprehensive list of methods, the reader is encouraged to read the following paper: “A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations” (2023).

-

So how does one choose among these pruning methods? Many variations of pruning techniques exist, each varying the heuristic for what should be kept versus pruned from the model, as well as the number of times pruning occurs. Traditionally, pruning happens after the model is fully trained, where the pruned model may experience mild accuracy loss. However, as we will discuss further, recent discoveries have found that pruning can be used during training (i.e., iteratively) to identify more efficient and accurate model representations.

-
-
-

Structured Pruning

-

We start with structured pruning, a technique that reduces the size of a neural network by eliminating entire model-specific substructures while maintaining the overall model structure. It removes entire neurons/channels or layers based on importance criteria. For example, for a convolutional neural network (CNN), this could be certain filter instances or channels. For fully connected networks, this could be neurons themselves while maintaining full connectivity or even be elimination of entire model layers that are deemed to be insignificant. This type of pruning often leads to regular, structured sparse networks that are hardware friendly.

-
-
Components
-

Best practices have started to emerge on how to think about structured pruning. There are three main components:

-
    -
  1. Structures to target for pruning
  2. -
  3. Establishing a criteria for pruning
  4. -
  5. Selecting a pruning strategy
  6. -
-
-
-
Structures to target for pruning
-

Given that there are different strategies, each of these structures (i.e., neurons, channels and layers) is pruned based on specific criteria and strategies, ensuring that the reduced model maintains as much of the predictive prowess of the original model as possible while gaining in computational efficiency and reduction in size.

-

The primary structures targeted for pruning include neurons, channels, and sometimes, entire layers, each having its unique implications and methodologies. When neurons are pruned, we are removing entire neurons along with their associated weights and biases, thereby reducing the width of the layer. This type of pruning is often utilized in fully connected layers.

-

Channel pruning, which is predominantly applied in convolutional neural networks (CNNs), involves eliminating entire channels or filters, which in turn reduces the depth of the feature maps and impacts the network’s ability to extract certain features from the input data. This is particularly crucial in image processing tasks where computational efficiency is paramount.

-

Finally, layer pruning takes a more aggressive approach by removing entire layers of the network. This significantly reduces the network’s depth and thereby its capacity to model complex patterns and hierarchies in the data. This approach necessitates a careful balance to ensure that the model’s predictive capability is not unduly compromised.

-

Figure fig-channel-layer-pruning demonstrates the difference between channel/filter-wise pruning and layer pruning. When we prune a channel, we have to reconfigure the model’s architecture to adapt to the structural changes. One adjustment is changing the number of input channels in the subsequent layer (here, the third and deepest layer): changing the depths of the filters that are applied to the layer with the pruned channel. On the other hand, pruning an entire layer (removing all the channels in the layer) requires more drastic adjustments. The main one involves modifying the connections between the remaining layers to replace or bypass the pruned layer; in our case, we had to connect the first and last layers directly. In all pruning cases, we have to fine-tune the new structure to adjust the weights.
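The bookkeeping described above can be illustrated with a toy example: removing output channel c of one convolutional layer forces us to also drop the matching input channel from the next layer's filters. The nested-list weight layout and function names here are ours for illustration, not from any particular framework:

```python
# Toy sketch of the channel-pruning bookkeeping described above.
# Weights are nested lists shaped [out_channels][in_channels][k][k];
# the values are placeholders, since only the shapes matter here.

def make_weights(out_ch, in_ch, k=3):
    return [[[[0.0] * k for _ in range(k)] for _ in range(in_ch)]
            for _ in range(out_ch)]

def prune_output_channel(w1, w2, c):
    """Remove filter c of layer 1 and the corresponding input channel
    of every filter in layer 2, keeping the two layers' shapes consistent."""
    w1_new = [f for i, f in enumerate(w1) if i != c]
    w2_new = [[ch for j, ch in enumerate(f) if j != c] for f in w2]
    return w1_new, w2_new

w1 = make_weights(out_ch=8, in_ch=3)   # layer 1: 8 filters over 3 channels
w2 = make_weights(out_ch=16, in_ch=8)  # layer 2 consumes layer 1's 8 channels
w1, w2 = prune_output_channel(w1, w2, c=5)
print(len(w1), len(w2[0]))  # 7 7 -> both layers now agree on 7 channels
```

In a real framework the same two-sided update happens on weight tensors, which is why structured pruning needs architecture-aware tooling rather than simply zeroing values.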

-
-
-
- -
-
-Figure 9.2: Channel vs layer pruning. -
-
-
-
-
-
Establishing a criteria for pruning
-

Establishing well-defined criteria for determining which specific structures to prune from a neural network model is a crucial component of the model pruning process. The core goal here is to identify and remove components that contribute the least to the model’s predictive capabilities, while retaining structures integral to preserving the model’s accuracy.

-

A widely adopted and effective strategy for systematically pruning structures relies on computing importance scores for individual components like neurons, filters, channels or layers. These scores serve as quantitative metrics to gauge the significance of each structure and its effect on the model’s output.

-

There are several techniques for assigning these importance scores:

-
    -
  • Weight magnitude-based pruning assigns scores based on the absolute values of the weights. Components with very small weights contribute minimally to activations and can be removed.
  • -
  • Gradient-based pruning utilizes the gradients of the loss function with respect to each weight to determine sensitivity. Weights with low gradient magnitudes when altered have little effect on the loss and can be pruned.
  • -
  • Activation-based pruning tracks activation values for neurons/filters over a validation dataset. Consistently low activation values suggest less relevance, warranting removal.
  • -
  • Taylor expansion approximates the change in loss function from removing a given weight. Weights with negligible impact on loss are prime candidates for pruning.
  • -
-

The idea is to measure, either directly or indirectly, the contribution of each component to the model’s output. Structures with minimal influence according to the defined criteria are pruned first. This enables selective, optimized pruning that maximally compresses models while preserving predictive capacity. In general, it is important to evaluate the impact of removing particular structures on the model’s output.
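The first criterion above, weight-magnitude scoring, can be sketched in a few lines: score each filter by the sum of its absolute weights, then mark the lowest-scoring fraction for pruning. The helper names and the L1-sum scoring rule are a minimal illustrative choice, not a prescribed implementation:

```python
# Sketch of magnitude-based importance scoring for filters.

def filter_importance(filters):
    """filters: list of flat weight lists, one per filter.
    Returns the L1 norm (sum of absolute weights) of each filter."""
    return [sum(abs(w) for w in f) for f in filters]

def select_prune_indices(filters, fraction):
    """Return the indices of the least-important filters to prune."""
    scores = filter_importance(filters)
    n_prune = int(len(filters) * fraction)
    order = sorted(range(len(filters)), key=lambda i: scores[i])
    return set(order[:n_prune])

filters = [[0.9, -1.1], [0.01, 0.02], [0.5, 0.4], [-0.03, 0.0]]
print(select_prune_indices(filters, fraction=0.5))  # {1, 3}: smallest magnitudes
```

Gradient-, activation-, and Taylor-based criteria follow the same skeleton; only `filter_importance` changes.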

-
-
-
Selecting a pruning strategy
-

The pruning strategy orchestrates how structures are removed and integrates with subsequent model fine-tuning to recover predictive performance. Two main structured pruning strategies exist: iterative pruning and one-shot pruning.

-

Iterative pruning gradually removes structures across multiple cycles of pruning followed by fine-tuning. In each cycle, a small set of structures are pruned based on importance criteria. The model is then fine-tuned, allowing it to adjust smoothly to the structural changes before the next pruning iteration. This gradual, cyclic approach prevents abrupt accuracy drops. It allows the model to slowly adapt as structures are reduced across iterations.

-

Consider a situation where we wish to prune the 6 least effective channels (based on some specific criteria) from a convolutional neural network. In Figure fig-iterative-pruning, we show a simplified pruning process carried over 3 iterations. In every iteration, we only prune 2 channels. Removing the channels results in accuracy degradation. In the first iteration, the accuracy drops from 0.995 to 0.971. However, after we fine-tune the model on the new structure, we are able to recover from the performance loss, bringing the accuracy up to 0.992. Since the structural changes are minor and gradual, the network can more easily adapt to them. Running the same process 2 more times, we end up with a final accuracy of 0.991 (a loss of only 0.4% from the original) and a 27% decrease in the number of channels. Thus, iterative pruning enables us to maintain performance while benefiting from increased computational efficiency due to the decreased model size.
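This schedule can be sketched as a loop in which the prune and fine-tune steps are placeholder callables standing in for real training code; the function names and toy "model" here are ours for illustration:

```python
# Schematic of the iterative prune/fine-tune schedule described above.

def iterative_prune(model, total_to_prune, per_iteration, prune_fn, finetune_fn):
    pruned = 0
    while pruned < total_to_prune:
        step = min(per_iteration, total_to_prune - pruned)
        model = prune_fn(model, step)  # remove `step` least-important channels
        model = finetune_fn(model)     # recover accuracy before the next cycle
        pruned += step
    return model

# Toy stand-ins: the "model" is just its channel count.
model = iterative_prune(
    22, total_to_prune=6, per_iteration=2,
    prune_fn=lambda m, k: m - k,
    finetune_fn=lambda m: m,
)
print(model)  # 16 channels remain after three prune/fine-tune cycles
```

Note that setting `per_iteration` equal to `total_to_prune` collapses the loop into a single aggressive step, which is exactly the one-shot strategy.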

Figure 9.3: Iterative pruning.

One-shot pruning takes a more aggressive approach by pruning a large portion of structures simultaneously in one shot based on predefined importance criteria. This is followed by extensive fine-tuning to recover model accuracy. While faster, this aggressive strategy can degrade accuracy if the model cannot recover during fine-tuning.


The choice between these strategies involves weighing factors like model size, target sparsity level, available compute and acceptable accuracy losses. One-shot pruning can rapidly compress models, but iterative pruning may enable better accuracy retention for a target level of pruning. In practice, the strategy is tailored based on use case constraints. The overarching aim is to generate an optimal strategy that removes redundancy, achieves efficiency gains through pruning, and finely tunes the model to stabilize accuracy at an acceptable level for deployment.


Now consider the same network we had in the iterative pruning example. Whereas in the iterative process we pruned 2 channels at a time, in one-shot pruning we prune all 6 channels at once (Figure fig-oneshot-pruning). Removing 27% of the network’s channels simultaneously alters the structure significantly, causing the accuracy to drop from 0.995 to 0.914. Given the major changes, the network is not able to properly adapt during fine-tuning, and the accuracy recovers only to 0.943, a 5% degradation from the accuracy of the unpruned network. While the final structures produced by iterative pruning and one-shot pruning are identical, the former is able to maintain high performance while the latter suffers significant degradation.

Figure 9.4: One-shot pruning.

Advantages of Structured Pruning


Structured pruning offers several advantages across various facets of model deployment and utilization, especially in environments where computational resources are constrained.

Computational Efficiency

By eliminating entire structures, such as neurons or channels, structured pruning significantly diminishes the computational load during both training and inference phases, thereby enabling faster model predictions and training convergence. Moreover, the removal of structures inherently reduces the model’s memory footprint, ensuring that it demands less storage and memory during operation, which is particularly beneficial in memory-constrained environments like TinyML systems.

Hardware Efficiency

Structured pruning often results in models that are more amenable to deployment on specialized hardware, such as Field-Programmable Gate Arrays (FPGAs) or Application-Specific Integrated Circuits (ASICs), due to the regularity and simplicity of the pruned architecture. With reduced computational requirements, it translates to lower energy consumption, which is crucial for battery-powered devices and sustainable computing practices.

Maintenance and Deployment

The pruned model, while smaller, retains its original architectural form, which can simplify the deployment pipeline and ensure compatibility with existing systems and frameworks. Also, with fewer parameters and simpler structures, the pruned model becomes easier to manage and monitor in production environments, potentially reducing the overhead associated with model maintenance and updates. Later on, when we dive into MLOps, this need will become apparent.


Unstructured Pruning


Unstructured pruning is, as its name suggests, pruning the model without regard to model-specific substructure. As mentioned above, it allows more aggressive pruning and can achieve higher model sparsities while maintaining accuracy, given fewer constraints on what can and cannot be pruned. Generally, post-training unstructured pruning consists of an importance criterion for individual model parameters/weights, pruning/removal of weights that fall below the criterion, and optional fine-tuning afterward to try to recover the accuracy lost during weight removal.
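A minimal sketch of this post-training process, assuming the simplest importance criterion (absolute weight magnitude) and an illustrative 80% sparsity target:

```python
import numpy as np

# Minimal sketch of post-training magnitude pruning: zero out the weights with
# the smallest absolute value. The 80% sparsity target is illustrative.
rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))

sparsity = 0.8
threshold = np.quantile(np.abs(w), sparsity)
mask = np.abs(w) > threshold       # importance criterion: |w| above threshold
w_pruned = w * mask                # weights below the criterion are removed
# ... optional fine-tuning, keeping the mask fixed, to recover accuracy ...

print(round(1 - mask.mean(), 2))   # achieved sparsity, approximately 0.8
```

Note that `w_pruned` is still a dense array of the original shape; realizing actual speed or memory gains from the zeros requires sparse storage formats and hardware or kernels that exploit them.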


Unstructured pruning has some advantages over structured pruning: removing individual weights instead of entire model substructures often leads to smaller accuracy drops in practice. Furthermore, determining the importance criterion for an individual weight is generally much simpler than for an entire substructure of parameters, making unstructured pruning preferable for cases where that overhead is hard or unclear to compute. Structured pruning is also generally less flexible, since removing individual weights is simpler than removing entire substructures while ensuring the model still works.


Unstructured pruning, while offering the potential for significant model size reduction and enhanced deployability, brings with it challenges related to managing sparse representations and ensuring computational efficiency. It is particularly useful in scenarios where achieving the highest possible model compression is paramount and where the deployment environment can handle sparse computations efficiently.


Table tbl-pruning_methods provides a concise comparison between structured and unstructured pruning. In this table, aspects related to the nature and architecture of the pruned model (Definition, Model Regularity, and Compression Level) are grouped together, followed by aspects related to computational considerations (Computational Efficiency and Hardware Compatibility), and ending with aspects related to the implementation and adaptation of the pruned model (Implementation Complexity and Fine-Tuning Complexity). Both pruning strategies offer unique advantages and challenges, as shown in Table tbl-pruning_methods, and the selection between them should be influenced by specific project and deployment requirements.

Table 9.1: Comparison of structured versus unstructured pruning.

Aspect | Structured Pruning | Unstructured Pruning
Definition | Pruning entire structures (e.g., neurons, channels, layers) within the network | Pruning individual weights or neurons, resulting in sparse matrices or non-regular network structures
Model Regularity | Maintains a regular, structured network architecture | Results in irregular, sparse network architectures
Compression Level | May offer limited model compression compared to unstructured pruning | Can achieve higher model compression due to fine-grained pruning
Computational Efficiency | Typically more computationally efficient due to maintaining regular structures | Can be computationally inefficient due to sparse weight matrices, unless specialized hardware/software is used
Hardware Compatibility | Generally better compatible with various hardware due to regular structures | May require hardware that efficiently handles sparse computations to realize benefits
Implementation Complexity | Often simpler to implement and manage due to maintaining network structure | Can be complex to manage and compute due to sparse representations
Fine-Tuning Complexity | May require less complex fine-tuning strategies post-pruning | Might necessitate more complex retraining or fine-tuning strategies post-pruning

In Figure fig-structured-unstructured we have examples that illustrate the differences between unstructured and structured pruning. Observe that unstructured pruning can lead to models that no longer obey the high-level structural guarantees of their original unpruned counterparts: the left network is no longer a fully connected network after pruning. Structured pruning, on the other hand, maintains those invariants: in the middle, the fully connected network is pruned in a way that the pruned network is still fully connected; likewise, the CNN maintains its convolutional structure, albeit with fewer filters.

Figure 9.5: Unstructured vs structured pruning. Credit: Qi et al. (2021).

Qi, Chen, Shibo Shen, Rongpeng Li, Zhifeng Zhao, Qing Liu, Jing Liang, and Honggang Zhang. 2021. “An Efficient Pruning Scheme of Deep Neural Networks for Internet of Things Applications.” EURASIP Journal on Advances in Signal Processing 2021 (1). https://doi.org/10.1186/s13634-021-00744-4.

Lottery Ticket Hypothesis


Pruning has evolved from a purely post-training technique that came at the cost of some accuracy, to a powerful meta-learning approach applied during training to reduce model complexity. This advancement in turn improves compute, memory, and latency efficiency at both training and inference.


A breakthrough finding that catalyzed this evolution was the lottery ticket hypothesis, empirically discovered by Jonathan Frankle and Michael Carbin (Frankle and Carbin 2019). Their work states that within dense neural networks, there exist sparse subnetworks, referred to as “winning tickets,” that can match or even exceed the performance of the original model when trained in isolation. Specifically, these winning tickets, when initialized using the same weights as the original network, can achieve similarly high training convergence and accuracy on a given task. It is worth pointing out that the hypothesis was discovered empirically and only later formalized.

Frankle, Jonathan, and Michael Carbin. 2019. “The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.” In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. https://openreview.net/forum?id=rJl-b3RcF7.

The intuition behind this hypothesis is that, during the training process of a neural network, many neurons and connections become redundant or unimportant, particularly with the inclusion of training techniques encouraging redundancy like dropout. Identifying, pruning out, and reinitializing these “winning tickets” allows for faster training and more efficient models, as they contain the essential model decision information for the task. Furthermore, as is generally known from bias-variance tradeoff theory, these tickets suffer less from overparameterization and thus generalize better rather than overfitting to the task.


In Figure fig-lottery-ticket-hypothesis we have an example experiment showing pruning and training runs on a fully connected LeNet over a variety of pruning ratios. In the left plot, notice how heavy pruning reveals a more efficient subnetwork (in green) that is 21.1% the size of the original network (in blue). The subnetwork achieves higher accuracy, and does so faster, than the unpruned version (the green line is above the blue line). However, pruning has a limit (a sweet spot), and further pruning produces performance degradations that eventually drop below the unpruned version’s performance (notice how the red, purple, and brown subnetworks gradually drop in accuracy) due to the significant loss in the number of parameters.

Figure 9.6: Lottery ticket hypothesis experiments.

The following is the process of finding the winning lottery ticket subnetwork, as also shown in Figure fig-winning-ticket (left side):

1. Initialize the network’s weights to random values.
2. Train the network until it converges to the desired performance.
3. Prune out some percentage of the edges with the lowest weight values.
4. Reinitialize the network with the same random values from step 1.
5. Repeat steps 2-4 a number of times, or as long as the accuracy doesn’t significantly degrade.


When we finish, we are left with a pruned network (Figure fig-winning-ticket, right side), which is a subnetwork of the one we started with. The subnetwork should have a significantly smaller structure while maintaining a comparable level of accuracy.
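The steps above can be sketched on a single weight matrix. This is a toy illustration: “training” is simulated by a small random perturbation, 20% of the surviving weights are pruned per round, and the rewind to the original initialization happens by always training from `init` under the current mask.

```python
import numpy as np

# Toy sketch of the lottery-ticket procedure on one weight matrix.
rng = np.random.default_rng(0)
init = np.asarray(rng.normal(size=(100, 100)))  # step 1: random initialization
mask = np.ones_like(init, dtype=bool)           # all weights start in the ticket

def train(w):
    # Stand-in for real training: a small random perturbation of the weights.
    return w + 0.01 * rng.normal(size=w.shape)

for _ in range(3):                        # steps 2-5, repeated
    w = train(init * mask)                # steps 2 and 4: train from ORIGINAL init
    cutoff = np.quantile(np.abs(w[mask]), 0.2)
    mask &= np.abs(w) > cutoff            # step 3: prune 20% lowest-magnitude edges

print(round(mask.mean(), 2))              # surviving fraction, 0.8**3 ≈ 0.51
```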

Figure 9.7: Finding the winning ticket subnetwork.

Challenges & Limitations


There is no free lunch with pruning optimizations: some choices come with both improvements and costs to consider. Below we discuss some tradeoffs for practitioners to weigh.

Quality vs. Size Reduction

A key challenge in both structured and unstructured pruning is balancing size reduction with maintaining or improving predictive performance. This trade-off becomes more complex with unstructured pruning, where individual weight removal can create sparse weight matrices. Ensuring the pruned model retains generalization capacity while becoming more computationally efficient is critical, often requiring extensive experimentation and validation.

Determining Pruning Criteria

Establishing robust pruning criteria, whether for removing entire structures (structured pruning) or individual weights (unstructured pruning), is challenging. The criteria must accurately identify elements whose removal minimally impacts performance. For unstructured pruning, this might involve additional complexities due to the potential for generating sparse weight matrices, which can be computationally inefficient on certain hardware.

Fine-Tuning and Retraining

Post-pruning fine-tuning is imperative in both structured and unstructured pruning to recover lost performance and stabilize the model. The challenge encompasses determining the extent, duration, and nature of the fine-tuning process, which can be influenced by the pruning method and the degree of pruning applied.

Scalability of Pruning Strategies

Ensuring that pruning strategies, whether structured or unstructured, are scalable and applicable across various models and domains is challenging. Unstructured pruning might introduce additional challenges related to managing and deploying models with sparse weight matrices, especially in hardware that is not optimized for sparse computations.

Hardware Compatibility and Efficiency

Especially pertinent to unstructured pruning, hardware compatibility and efficiency become critical. Unstructured pruning often results in sparse weight matrices, which may not be efficiently handled by certain hardware, potentially negating the computational benefits of pruning (see Figure fig-sparse-matrix). Ensuring that pruned models, particularly those resulting from unstructured pruning, are compatible and efficient on the target hardware is a significant consideration.

Complexity in Implementing Pruning Algorithms

Unstructured pruning might introduce additional complexity in implementing pruning algorithms due to the need to manage sparse representations of weights. Developing or adapting algorithms that can efficiently handle, store, and compute sparse weight matrices is an additional challenge and consideration in unstructured pruning.


9.2.2 Model Compression


Model compression techniques are crucial for deploying deep learning models on resource-constrained devices. These techniques aim to create smaller, more efficient models that preserve the predictive performance of the original models.


Knowledge Distillation


One popular technique is knowledge distillation (KD), which transfers knowledge from a large, complex “teacher” model to a smaller “student” model. The key idea is to train the student model to mimic the teacher’s outputs. The concept of KD was first popularized by Hinton, Vinyals, and Dean (2015).

Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. 2015. “Distilling the Knowledge in a Neural Network.” ArXiv Preprint. https://arxiv.org/abs/1503.02531.

Overview and Benefits

At its core, KD strategically leverages the refined outputs of a pre-trained teacher model to transfer knowledge to a smaller student model. The key technique is using “soft targets” derived from the teacher’s probabilistic predictions. Specifically, the teacher’s outputs are passed through a temperature-scaled softmax function, yielding softened probability distributions over classes. This softening provides richer supervision signals for the student model compared to hard target labels.


The loss function is another critical component that typically amalgamates a distillation loss, which measures the divergence between the teacher and student outputs, and a classification loss, which ensures the student model adheres to the true data labels. The Kullback-Leibler (KL) divergence is commonly employed to quantify the distillation loss, providing a measure of the discrepancy between the probability distributions output by the teacher and student models.


Another core concept is “temperature scaling” in the softmax function. It plays the role of controlling the granularity of the information distilled from the teacher model. A higher temperature parameter produces softer, more informative distributions, thereby facilitating the transfer of more nuanced knowledge to the student model. However, it also introduces the challenge of effectively balancing the trade-off between the informativeness of the soft targets and the stability of the training process.
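A sketch of the combined objective in NumPy; the temperature T=4 and weighting alpha=0.7 are illustrative hyperparameter choices, not values prescribed by the method, and real implementations typically use framework loss functions.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax: higher T yields softer distributions."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of KL(teacher || student) on temperature-softened outputs
    and cross-entropy against the true labels (illustrative hyperparameters)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL divergence between softened distributions, scaled by T^2 as is
    # conventional so gradient magnitudes stay comparable across temperatures.
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean() * T * T
    # Standard cross-entropy on hard labels at T = 1.
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * kl + (1 - alpha) * ce
```

When the student’s outputs exactly match the teacher’s, the KL term vanishes and only the hard-label loss remains, which is a useful sanity check on any implementation.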


These components, when adeptly configured and harmonized, enable the student model to assimilate the teacher model’s knowledge, crafting a pathway towards efficient and robust smaller models that retain the predictive prowess of their larger counterparts. Figure fig-knowledge-distillation visualizes the training procedure of knowledge distillation. Note how the logits or soft labels of the teacher model are used to provide a distillation loss for the student model to learn from.

Figure 9.9: Knowledge distillation training process. Credit: IntelLabs (2023).

IntelLabs. 2023. “Knowledge Distillation - Neural Network Distiller.” https://intellabs.github.io/distiller/knowledge_distillation.html.

Challenges

However, KD has a unique set of challenges and considerations that researchers and practitioners must attentively address. One of the challenges is in the meticulous tuning of hyperparameters, such as the temperature parameter in the softmax function and the weighting between the distillation and classification loss in the objective function. Striking a balance that effectively leverages the softened outputs of the teacher model while maintaining fidelity to the true data labels is non-trivial and can significantly impact the student model’s performance and generalization capabilities.


Furthermore, the architecture of the student model itself poses a considerable challenge. Designing a model that is compact enough to meet computational and memory constraints, while still being capable of assimilating the essential knowledge from the teacher model, demands a nuanced understanding of model capacity and the inherent trade-offs involved in compression. The student model must be carefully architected to navigate the dichotomy of size and performance, ensuring that the distilled knowledge is meaningfully captured and utilized. Moreover, the choice of teacher model, which inherently influences the quality and nature of the knowledge to be transferred, is important and introduces an added layer of complexity to the KD process.


These challenges underscore the necessity for a thorough and nuanced approach to implementing KD, ensuring that the resultant student models are both efficient and effective in their operational contexts.


Low-rank Matrix Factorization


Similar in approximation theme, low-rank matrix factorization (LRMF) is a mathematical technique used in linear algebra and data analysis to approximate a given matrix by decomposing it into two or more lower-dimensional matrices. The fundamental idea is to express a high-dimensional matrix as a product of lower-rank matrices, which can help reduce the complexity of data while preserving its essential structure. Mathematically, given a matrix \(A \in \mathbb{R}^{m \times n}\), LRMF seeks matrices \(U \in \mathbb{R}^{m \times k}\) and \(V \in \mathbb{R}^{k \times n}\) such that \(A \approx UV\), where \(k\) is the rank and is typically much smaller than \(m\) and \(n\).

Background and Benefits

One of the seminal works in the realm of matrix factorization, particularly in the context of recommendation systems, is the paper by Koren, Bell, and Volinsky (2009). The authors delve into various factorization models, providing insights into their efficacy in capturing the underlying patterns in the data and enhancing predictive accuracy in collaborative filtering. LRMF has been widely applied in recommendation systems (such as Netflix, Facebook, etc.), where the user-item interaction matrix is factorized to capture latent factors corresponding to user preferences and item attributes.

Koren, Yehuda, Robert Bell, and Chris Volinsky. 2009. “Matrix Factorization Techniques for Recommender Systems.” Computer 42 (8): 30–37. https://doi.org/10.1109/mc.2009.263.

The main advantage of low-rank matrix factorization lies in its ability to reduce data dimensionality as shown in Figure fig-matrix-factorization, where there are fewer parameters to store, making it computationally more efficient and reducing storage requirements at the cost of some additional compute. This can lead to faster computations and more compact data representations, which is especially valuable when dealing with large datasets. Additionally, it may aid in noise reduction and can reveal underlying patterns and relationships in the data.


Figure fig-matrix-factorization illustrates the decrease in parameterization enabled by low-rank matrix factorization. Observe how the matrix \(M\) can be approximated by the product of matrices \(L_k\) and \(R_k^T\). For intuition, most fully connected layers in networks are stored as a projection matrix \(M\), which requires \(m \times n\) parameters to be loaded on computation. However, by decomposing and approximating it as the product of two lower-rank matrices, we only need to store \(m \times k + k \times n\) parameters while incurring the additional compute cost of the matrix multiplication. So long as \(k < n/2\), this factorization has fewer parameters to store in total, while adding a computation of runtime \(O(mkn)\) (Gu 2023).
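This storage arithmetic can be checked directly with a truncated SVD, one standard way to obtain the low-rank factors; the matrix sizes below are illustrative.

```python
import numpy as np

# Sketch: approximate a weight matrix M (m x n) by rank-k factors L @ R via
# truncated SVD. Parameter count drops from m*n to k*(m + n).
rng = np.random.default_rng(0)
m, n, k = 128, 256, 16
M = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))  # a matrix of rank k

U, s, Vt = np.linalg.svd(M, full_matrices=False)
L = U[:, :k] * s[:k]        # m x k factor (singular values folded in)
R = Vt[:k]                  # k x n factor

print(np.allclose(M, L @ R))       # exact here, since rank(M) = k
print(m * n, k * (m + n))          # 32768 vs 6144 stored parameters
```

For a full-rank weight matrix the truncation would instead be a lossy approximation, with the error controlled by the discarded singular values.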

Gu, Ivy. 2023. “Deep Learning Model Compression (II).” Medium. https://ivygdy.medium.com/deep-learning-model-compression-ii-546352ea9453.

Figure 9.10: Low-rank matrix factorization. Credit: The Clever Machine.

Challenges

But practitioners and researchers encounter a spectrum of challenges and considerations that necessitate careful attention and strategic approaches. As with any lossy compression technique, we may lose information during this approximation process: choosing the correct rank that balances the information lost and the computational costs is tricky as well and adds an additional hyper-parameter to tune for.


Low-rank matrix factorization is a valuable tool for dimensionality reduction and making compute fit onto edge devices but, like other techniques, needs to be carefully tuned to the model and task at hand. A key challenge resides in managing the computational complexity inherent to LRMF, especially when grappling with high-dimensional and large-scale data. The computational burden, particularly in the context of real-time applications and massive datasets, remains a significant hurdle for effectively using LRMF.


Moreover, the conundrum of choosing the optimal rank \(k\) for the factorization introduces another layer of complexity. The selection of \(k\) inherently involves a trade-off between approximation accuracy and model simplicity, and identifying a rank that adeptly balances these conflicting objectives often demands a combination of domain expertise, empirical validation, and sometimes heuristic approaches. The challenge is further amplified when the data encompasses noise or when the inherent low-rank structure is not pronounced, making the determination of a suitable \(k\) even more elusive.


Handling missing or sparse data, a common occurrence in applications like recommendation systems, poses another substantial challenge. Traditional matrix factorization techniques, such as Singular Value Decomposition (SVD), are not directly applicable to matrices with missing entries, necessitating the development and application of specialized algorithms that can factorize incomplete matrices while mitigating the risks of overfitting to the observed entries. This often involves incorporating regularization terms or constraining the factorization in specific ways, which in turn introduces additional hyperparameters that need to be judiciously selected.


Furthermore, in scenarios where data evolves or grows over time, developing LRMF models that can adapt to new data without necessitating a complete re-factorization is a critical yet challenging endeavor. Online and incremental matrix factorization algorithms seek to address this by enabling the update of factorized matrices as new data arrives, yet ensuring stability, accuracy, and computational efficiency in these dynamic settings remains an intricate task. This is particularly challenging in the space of TinyML, where edge redeployment for refreshed models can be quite challenging.


Tensor Decomposition


Similar to low-rank matrix factorization, more complex models may store weights in higher dimensions, such as tensors. Tensor decomposition is the higher-dimensional analogue of matrix factorization, where a model tensor is decomposed into lower-rank components (see Figure fig-tensor-decomposition). These components again are easier to compute on and store, but may suffer from the same issues mentioned above: information loss and the need for nuanced hyperparameter tuning. Mathematically, given a tensor \(\mathcal{A}\), tensor decomposition seeks to represent \(\mathcal{A}\) as a combination of simpler tensors, facilitating a compressed representation that approximates the original data while minimizing the loss of information.
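As a storage sketch: in one common form, the canonical polyadic (CP) decomposition, a rank-\(R\) representation of an \(I \times J \times K\) tensor keeps three factor matrices instead of the dense array, reducing storage from \(IJK\) to \(R(I+J+K)\) values. The sizes below are illustrative, and a real decomposition of an arbitrary tensor would be computed with an algorithm such as alternating least squares rather than constructed as here.

```python
import numpy as np

# Build a tensor with exact CP rank R from random factor matrices, then compare
# dense storage (I*J*K values) with factorized storage (R*(I+J+K) values).
rng = np.random.default_rng(0)
I, J, K, R = 20, 30, 40, 4
A = rng.normal(size=(I, R))   # factor matrix for mode 1
B = rng.normal(size=(J, R))   # factor matrix for mode 2
C = rng.normal(size=(K, R))   # factor matrix for mode 3

# CP model: T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

dense_params = T.size               # 24000 values in the dense tensor
factor_params = R * (I + J + K)     # 360 values in the factorized form
print(dense_params, factor_params)
```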


The work of Tamara G. Kolda and Brett W. Bader, “Tensor Decompositions and Applications” (2009), stands out as a seminal paper in the field of tensor decompositions. The authors provide a comprehensive overview of various tensor decomposition methods, exploring their mathematical underpinnings, algorithms, and a wide array of applications, ranging from signal processing to data mining. We discuss it here because of its huge potential for system performance improvements, particularly in the space of TinyML, where throughput and memory footprint savings are crucial to the feasibility of deployments.

Figure 9.11: Tensor decomposition. Credit: Xinyu (n.d.).

Xinyu, Chen. n.d.

Exercise 9.2 (Scalable Model Compression with TensorFlow)  


This Colab dives into a technique for compressing models while maintaining high accuracy. The key idea is to train a model with an extra penalty term that encourages the model to be more compressible. Then, the model is encoded using a special coding scheme that aligns with this penalty. This approach allows you to achieve compressed models that perform just as well as the original models and is useful in deploying models to devices with limited resources like mobile phones and edge devices.


9.2.3 Edge-Aware Model Design


Finally, we reach the other end of the hardware-software gradient, where we specifically make model architecture decisions directly given knowledge of the edge devices we wish to deploy on.


As covered in previous sections, edge devices are constrained by limitations on memory and parallelizable computation: as such, if there are critical inference speed requirements, computations must be flexible enough to satisfy hardware constraints, something that can be designed at the model architecture level. Furthermore, trying to cram SOTA large ML models onto edge devices, even after pruning and compression, is generally infeasible purely due to size: the model complexity itself must be chosen with more nuance so that it feasibly fits the device. Edge ML developers have approached this architectural challenge both by designing bespoke edge ML model architectures and through device-aware neural architecture search (NAS), which can more systematically generate feasible on-device model architectures.


Model Design Techniques


One edge-friendly architecture design is depthwise separable convolutions. Commonly used in deep learning for image processing, it consists of two distinct steps: the first is the depthwise convolution, where each input channel is convolved independently with its own set of learnable filters, as shown in Figure fig-depthwise-convolution. This step reduces computational complexity by a significant margin compared to standard convolutions, as it drastically reduces the number of parameters and computations involved. The second step is the pointwise convolution, which combines the outputs of the depthwise convolution channels through a 1x1 convolution, creating inter-channel interactions. This approach offers several advantages: reduced model size, faster inference times, and often better generalization due to fewer parameters, making it suitable for mobile and embedded applications. However, depthwise separable convolutions may not capture complex spatial interactions as effectively as standard convolutions and might require more depth (layers) to achieve the same level of representational power, potentially leading to longer training times. Nonetheless, their efficiency in terms of parameters and computation makes them a popular choice in modern convolutional neural network architectures.
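The parameter savings can be worked out directly from the two steps; the kernel size and channel counts below are illustrative choices.

```python
# Parameter-count comparison for one layer: standard convolution vs depthwise
# separable convolution, with kernel k x k, C_in input and C_out output channels.
k, c_in, c_out = 3, 64, 128

standard = k * k * c_in * c_out     # every output channel sees every input channel
depthwise = k * k * c_in            # one k x k filter per input channel
pointwise = c_in * c_out            # 1x1 conv mixing channels
separable = depthwise + pointwise

print(standard, separable)                      # 73728 vs 8768 parameters
print(round(standard / separable, 1))           # roughly 8.4x fewer parameters
```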

Figure 9.12: Depthwise separable convolutions. Credit: Hegde (2023).

Hegde, Sumant. 2023. “An Introduction to Separable Convolutions - Analytics Vidhya.” https://www.analyticsvidhya.com/blog/2021/11/an-introduction-to-separable-convolutions/.
Example Model Architectures


In this vein, a number of recent architectures have been, from inception, specifically designed for maximizing accuracy on an edge deployment, notably SqueezeNet, MobileNet, and EfficientNet.

• SqueezeNet by Iandola et al. (2016), for instance, utilizes a compact architecture with 1x1 convolutions and fire modules to minimize the number of parameters while maintaining strong accuracy.

• MobileNet by Howard et al. (2017), on the other hand, employs the aforementioned depthwise separable convolutions to reduce both computation and model size.

• EfficientNet by Tan and Le (2019) takes a different approach by optimizing network scaling (i.e., varying the depth, width, and resolution of a network) and compound scaling, a more nuanced variant of network scaling, to achieve superior performance with fewer parameters.

Iandola, Forrest N., Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, and Kurt Keutzer. 2016. “SqueezeNet: AlexNet-level Accuracy with 50x Fewer Parameters and <0.5 MB Model Size.” ArXiv Preprint abs/1602.07360. https://arxiv.org/abs/1602.07360.

Howard, Andrew G., Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.” ArXiv Preprint. https://arxiv.org/abs/1704.04861.

Tan, Mingxing, and Quoc V. Le. 2019. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks.” ArXiv Preprint. https://arxiv.org/abs/1905.11946.

These models are essential in the context of edge computing where limited processing power and memory require lightweight yet effective models that can efficiently perform tasks such as image recognition, object detection, and more. Their design principles showcase the importance of intentionally tailored model architecture for edge computing, where performance and efficiency must fit within constraints.


9.3 Efficient Numerics Representation


Numerics representation involves a myriad of considerations, including, but not limited to, the precision of numbers, their encoding formats, and the arithmetic operations facilitated. It invariably involves a rich array of different trade-offs, where practitioners are tasked with navigating between numerical accuracy and computational efficiency. For instance, while lower-precision numerics may offer the allure of reduced memory usage and expedited computations, they concurrently present challenges pertaining to numerical stability and potential degradation of model accuracy.


Motivation


The imperative for efficient numerics representation arises because model optimization alone falls short when adapting models for deployment on low-powered edge devices operating under stringent constraints.


Beyond minimizing memory demands, efficient numerics representation offers several fundamental benefits. By diminishing computational intensity, it can amplify computational speed, allowing more complex models to run on low-powered devices. Reducing the bit precision of weights and activations in heavily over-parameterized models condenses model size for edge devices without significantly harming predictive accuracy. And because neural networks are layered, efficient numerics can vary numeric precision across layers, minimizing precision in layers resilient to quantization while preserving higher precision in sensitive layers.


In this section, we will dive into how practitioners can harness the principles of hardware-software co-design at the lowest levels of a model to facilitate compatibility with edge devices. Kicking off with an introduction to the numerics, we will examine its implications for device memory and computational complexity. Subsequently, we will embark on a discussion regarding the trade-offs entailed in adopting this strategy, followed by a deep dive into a paramount method of efficient numerics: quantization.


9.3.1 The Basics


Types


Numerical data, the bedrock upon which machine learning models stand, manifests in two primary forms: integers and floating-point numbers.


Integers: Whole numbers, devoid of fractional components, integers (e.g., -3, 0, 42) are key in scenarios demanding discrete values. For instance, in ML, class labels in a classification task might be represented as integers, where “cat”, “dog”, and “bird” could be encoded as 0, 1, and 2 respectively.


Floating-Point Numbers: Encompassing real numbers, floating-point numbers (e.g., -3.14, 0.01, 2.71828) afford the representation of values with fractional components. In ML model parameters, weights might be initialized with small floating-point values, such as 0.001 or -0.045, to commence the training process. Four popular precision formats are discussed below.


Variable bit widths: Beyond the standard widths, research is ongoing into extremely low bit-width numerics, even down to binary or ternary representations. Extremely low bit-width operations can offer significant speedups and reduce power consumption even further. While challenges remain in maintaining model accuracy with such drastic quantization, advances continue to be made in this area.


Precision


Precision, delineating the exactness with which a number is represented, typically comes in double, single, and half variants; in recent years, a number of other precisions have emerged to better support machine learning tasks efficiently on the underlying hardware.


Double Precision (Float64): Allocating 64 bits, double precision (e.g., 3.141592653589793) provides heightened accuracy, albeit demanding augmented memory and computational resources. In scientific computations, where precision is paramount, variables like π might be represented with Float64.


Single Precision (Float32): With 32 bits at its disposal, single precision (e.g., 3.1415927) strikes a balance between numerical accuracy and memory conservation. In ML, Float32 might be employed to store weights during training to maintain a reasonable level of precision.


Half Precision (Float16): Constrained to 16 bits, half precision (e.g., 3.14) curtails memory usage and can expedite computations, albeit sacrificing numerical accuracy and range. In ML, especially during inference on resource-constrained devices, Float16 might be utilized to reduce the model’s memory footprint.


Bfloat16: Brain Floating-Point Format or Bfloat16, also employs 16 bits but allocates them differently compared to FP16: 1 bit for the sign, 8 bits for the exponent (resulting in the same number range as in float32), and 7 bits for the fraction. This format, developed by Google, prioritizes a larger exponent range over precision, making it particularly useful in deep learning applications where the dynamic range is crucial.
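To make the layout difference concrete, the sketch below converts a float32 value to bfloat16 precision by simply truncating its low 16 bits, which leaves exactly the sign bit, the 8 exponent bits, and the top 7 mantissa bits. This round-toward-zero truncation is a simplification of the round-to-nearest conversion real hardware typically performs:

```python
import struct

def to_bfloat16(value: float) -> float:
    """Truncate a float32 to bfloat16 precision by zeroing the low 16 bits.

    bfloat16 keeps the sign bit, all 8 exponent bits (same range as float32),
    and only the top 7 of float32's 23 mantissa bits.
    """
    bits, = struct.unpack("<I", struct.pack("<f", value))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.141592653589793))  # 3.140625: exponent intact, mantissa coarsened
```

Note how the value's magnitude survives intact while the fractional detail is coarsened; this is the precision-for-range trade bfloat16 makes.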


Figure fig-3float illustrates the differences between the three floating-point formats: Float32, Float16, and BFloat16.

Figure 9.13: Three floating-point formats.

Integer: Integer representations are made using 8, 4, and 2 bits. They are often used during the inference phase of neural networks, where the weights and activations of the model are quantized to these lower precisions. Integer representations are deterministic and offer significant speed and memory advantages over floating-point representations. For many inference tasks, especially on edge devices, the slight loss in accuracy due to quantization is often acceptable given the efficiency gains. An extreme form of integer numerics is for binary neural networks (BNNs), where weights and activations are constrained to one of two values: either +1 or -1.

| Precision | Pros | Cons |
|---|---|---|
| FP32 (Floating Point 32-bit) | Standard precision used in most deep learning frameworks. High accuracy due to ample representational capacity. Well-suited for training. | High memory usage. Slower inference times compared to quantized models. Higher energy consumption. |
| FP16 (Floating Point 16-bit) | Reduces memory usage compared to FP32. Speeds up computations on hardware that supports FP16. Often used in mixed-precision training to balance speed and accuracy. | Lower representational capacity compared to FP32. Risk of numerical instability in some models or layers. |
| INT8 (8-bit Integer) | Significantly reduced memory footprint compared to floating-point representations. Faster inference if hardware supports INT8 computations. Suitable for many post-training quantization scenarios. | Quantization can lead to some accuracy loss. Requires careful calibration during quantization to minimize accuracy degradation. |
| INT4 (4-bit Integer) | Even lower memory usage than INT8. Further speed-up potential for inference. | Higher risk of accuracy loss compared to INT8. Calibration during quantization becomes more critical. |
| Binary | Minimal memory footprint (only 1 bit per parameter). Extremely fast inference due to bitwise operations. Power efficient. | Significant accuracy drop for many tasks. Complex training dynamics due to extreme quantization. |
| Ternary | Low memory usage, but slightly more than binary. Offers a middle ground between representation and efficiency. | Accuracy might still be lower than higher-precision models. Training dynamics can be complex. |

Numeric Encoding and Storage


Numeric encoding, the art of transmuting numbers into a computer-amenable format, and their subsequent storage are critical for computational efficiency. For instance, floating-point numbers might be encoded using the IEEE 754 standard, which apportions bits among sign, exponent, and fraction components, thereby enabling the representation of a vast array of values with a single format. There are a few new IEEE floating point formats that have been defined specifically for AI workloads:

  • bfloat16 - A 16-bit floating point format introduced by Google. It has 8 bits for exponent, 7 bits for mantissa, and 1 bit for sign. Offers a reduced precision compromise between 32-bit floats and 8-bit integers. Supported on many hardware accelerators.
  • posit - A configurable format that can represent different levels of precision based on exponent bits. Aims to be more efficient than IEEE 754 binary floats. Has adjustable dynamic range and precision.
  • Flexpoint - A format introduced by Intel that can dynamically adjust precision across layers or within a layer. Allows tuning precision to accuracy and hardware requirements.
  • BF16ALT - A proposed 16-bit format by ARM as an alternative to bfloat16. Uses an additional bit in the exponent to prevent overflow/underflow.
  • TF32 - Introduced by Nvidia for Ampere GPUs. Keeps the 8-bit exponent of FP32 but truncates the mantissa to 10 bits. Improves model training performance while maintaining accuracy.
  • FP8 - 8-bit floating point format, typically splitting the bits as 4 or 5 for the exponent (the E4M3 and E5M2 variants). Enables better dynamic range than integers.

The key goals of these new formats are to provide lower precision alternatives to 32-bit floats for better computational efficiency and performance on AI accelerators while maintaining model accuracy. They offer different tradeoffs in terms of precision, range and implementation cost/complexity.
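All of these formats are re-divisions of the same idea; the baseline they depart from is the IEEE 754 float32 layout. The sketch below decodes a float32 into its sign, exponent, and fraction fields to show how the bits are apportioned:

```python
import struct

def float32_fields(value: float) -> tuple[int, int, int]:
    """Split a float32 into its IEEE 754 fields: 1 sign, 8 exponent, 23 fraction bits."""
    bits, = struct.unpack("<I", struct.pack("<f", value))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # biased by 127
    fraction = bits & 0x7FFFFF       # 23 mantissa bits
    return sign, exponent, fraction

print(float32_fields(1.0))    # (0, 127, 0): 1.0 = +1.0 x 2^(127-127)
print(float32_fields(-2.0))   # (1, 128, 0): -2.0 = -1.0 x 2^(128-127)
```

Formats like bfloat16 or FP8 change only how many bits each of these three fields receives, which is why they trade precision and dynamic range against each other.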


9.3.2 Efficiency Benefits


Numerical efficiency matters for machine learning workloads for a number of reasons:


Computational Efficiency: High-precision computations (like FP32 or FP64) can be slow and resource-intensive. By reducing numeric precision, one can achieve faster computation times, especially on specialized hardware that supports lower precision.


Memory Efficiency: Storage requirements decrease with reduced numeric precision. For instance, FP16 requires half the memory of FP32. This is crucial when deploying models to edge devices with limited memory or when working with very large models.


Power Efficiency: Lower precision computations often consume less power, which is especially important for battery-operated devices.


Noise Introduction: Interestingly, the noise introduced by using lower precision can sometimes act as a regularizer, helping to prevent overfitting in some models.


Hardware Acceleration: Many modern AI accelerators and GPUs are optimized for lower precision operations, leveraging the efficiency benefits of such numerics.


Efficient numerics is not just about reducing the bit-width of numbers but understanding the trade-offs between accuracy and efficiency. As machine learning models become more pervasive, especially in real-world, resource-constrained environments, the focus on efficient numerics will continue to grow. By thoughtfully selecting and leveraging the appropriate numeric precision, one can achieve robust model performance while optimizing for speed, memory, and energy.


9.3.3 Numeric Representation Nuances


There are a number of nuances with numerical representations for ML that require us to have an understanding of both the theoretical and practical aspects of numerics representation, as well as a keen awareness of the specific requirements and constraints of the application domain.


Memory Usage


The memory footprint of ML models, particularly those of considerable complexity and depth, can be substantial, thereby posing a significant challenge in both training and deployment phases. For instance, a deep neural network with 100 million parameters, represented using Float32 (32 bits or 4 bytes per parameter), would necessitate approximately 400 MB of memory just for storing the model weights. This does not account for additional memory requirements during training for storing gradients, optimizer states, and forward pass caches, which can further amplify the memory usage, potentially straining the resources on certain hardware, especially edge devices with limited memory capacity.


Impact on Model Parameters and Weights


The numeric representation has a significant impact on the storage and computational requisites of ML model parameters and weights. For instance, a model utilizing Float64 for weights will demand double the memory and potentially increased computational time compared to a counterpart employing Float32. A weight matrix with dimensions [1000, 1000] stored in Float64 would consume approximately 8 MB of memory, whereas Float32 would halve this to approximately 4 MB.
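The arithmetic behind these figures is simply parameter count times bytes per parameter; a quick sketch (using 1 MB = 10^6 bytes):

```python
def weight_memory_mb(num_params: int, bytes_per_param: int) -> float:
    """Memory needed to store model weights alone, in megabytes (1 MB = 1e6 bytes)."""
    return num_params * bytes_per_param / 1e6

print(weight_memory_mb(100_000_000, 4))  # 400.0 -> ~400 MB for 100M Float32 params
print(weight_memory_mb(1000 * 1000, 8))  # 8.0   -> a [1000, 1000] matrix in Float64
print(weight_memory_mb(1000 * 1000, 4))  # 4.0   -> the same matrix in Float32
```

Gradients, optimizer states, and forward-pass caches multiply these totals further during training, which is why weight storage is only a lower bound on real memory use.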


Computational Complexity


Numerical precision directly impacts computational complexity, influencing the time and resources required to perform arithmetic operations. For example, operations using Float64 generally consume more computational resources than their Float32 or Float16 counterparts (see Figure fig-quantized-energy). In the realm of ML, where models might need to process millions of operations (e.g., multiplications and additions in matrix operations during forward and backward passes), even minor differences in the computational complexity per operation can aggregate into a substantial impact on training and inference times. As shown in Figure fig-models-speeds, quantized models can be many times faster than their unquantized versions.

Figure 9.14: Energy use by quantized operations. Credit: Mark Horowitz, Stanford University.

Figure 9.15: Speed of three different models in normal and quantized form.

In addition to pure runtimes, there is also a concern over energy efficiency. Not all numerical computations are created equal from the underlying hardware standpoint. Some numerical operations are more energy efficient than others. For example, Figure fig-operations-energy-comparison below shows that integer addition is much more energy efficient than integer multiplication.

Figure 9.16: Energy use by quantized operations. Credit: ISSCC (2014).

ISSCC. 2014. “Computing’s Energy Problem (and What We Can Do about It).” https://ieeexplore.ieee.org/document/6757323.

Hardware Compatibility


Ensuring compatibility and optimized performance across diverse hardware platforms is another challenge in numerics representation. Different hardware, such as CPUs, GPUs, TPUs, and FPGAs, have varying capabilities and optimizations for handling different numeric precisions. For example, certain GPUs might be optimized for Float32 computations, while others might provide accelerations for Float16. Developing and optimizing ML models that can leverage the specific numerical capabilities of different hardware, while ensuring that the model maintains its accuracy and robustness, requires careful consideration and potentially additional development and testing efforts.


Precision and Accuracy Trade-offs


The trade-off between numerical precision and model accuracy is a nuanced challenge in numerics representation. Utilizing lower-precision numerics, such as Float16, might conserve memory and expedite computations but can also introduce issues like quantization error and reduced numerical range. For instance, training a model with Float16 might introduce challenges in representing very small gradient values, potentially impacting the convergence and stability of the training process. Furthermore, in certain applications, such as scientific simulations or financial computations, where high precision is paramount, the use of lower-precision numerics might not be permissible due to the risk of accruing significant errors.


Trade-off Examples


To understand and appreciate these nuances, let’s consider some use cases. Through them, we will see that the choice of numeric representation is not merely a technical decision but a strategic one, influencing a model’s predictive acumen, its computational demands, and its deployability across diverse computational environments.

Autonomous Vehicles

In the domain of autonomous vehicles, ML models are employed to interpret sensor data and make real-time decisions. The models must process high-dimensional data from various sensors (e.g., LiDAR, cameras, radar) and execute numerous computations within a constrained time frame to ensure safe and responsive vehicle operation. So the trade-offs here would include:

  • Memory Usage: Storing and processing high-resolution sensor data, especially in floating-point formats, can consume substantial memory.
  • Computational Complexity: Real-time processing demands efficient computations, where higher-precision numerics might impede the timely execution of control actions.
Mobile Health Applications

Mobile health applications often utilize ML models for tasks like activity recognition, health monitoring, or predictive analytics, operating within the resource-constrained environment of mobile devices. The trade-offs here would include:

  • Precision and Accuracy Trade-offs: Employing lower-precision numerics to conserve resources might impact the accuracy of health predictions or anomaly detections, which could have significant implications for user health and safety.
  • Hardware Compatibility: Models need to be optimized for diverse mobile hardware, ensuring efficient operation across a wide range of devices with varying numerical computation capabilities.
High-Frequency Trading (HFT) Systems

HFT systems leverage ML models to make rapid trading decisions based on real-time market data. These systems demand extremely low-latency responses to capitalize on short-lived trading opportunities.

  • Computational Complexity: The models must process and analyze vast streams of market data with minimal latency, where even slight delays, potentially introduced by higher-precision numerics, can result in missed opportunities.
  • Precision and Accuracy Trade-offs: Financial computations often demand high numerical precision to ensure accurate pricing and risk assessments, posing challenges in balancing computational efficiency and numerical accuracy.
Edge-Based Surveillance Systems

Surveillance systems deployed on edge devices, like security cameras, utilize ML models for tasks like object detection, activity recognition, and anomaly detection, often operating under stringent resource constraints.

  • Memory Usage: Storing pre-trained models and processing video feeds in real-time demands efficient memory usage, which can be challenging with high-precision numerics.
  • Hardware Compatibility: Ensuring that models can operate efficiently on edge devices with varying hardware capabilities and optimizations for different numeric precisions is crucial for widespread deployment.
Scientific Simulations

ML models are increasingly being utilized in scientific simulations, such as climate modeling or molecular dynamics simulations, to enhance predictive capabilities and reduce computational demands.

  • Precision and Accuracy Trade-offs: Scientific simulations often require high numerical precision to ensure accurate and reliable results, which can conflict with the desire to reduce computational demands via lower-precision numerics.
  • Computational Complexity: The models must manage and process complex, high-dimensional simulation data efficiently to ensure timely results and enable large-scale or long-duration simulations.

These examples illustrate diverse scenarios where the challenges of numerics representation in ML models are prominently manifested. Each system presents a unique set of requirements and constraints, necessitating tailored strategies and solutions to navigate the challenges of memory usage, computational complexity, precision-accuracy trade-offs, and hardware compatibility.


9.3.4 Quantization


Quantization is prevalent in various scientific and technological domains, and it essentially involves the mapping or constraining of a continuous set or range into a discrete counterpart to minimize the number of bits required.


History


Historically, the idea of quantization is not novel and can be traced back to ancient times, particularly in the realm of music and astronomy. In music, the Greeks utilized a system of tetrachords, segmenting the continuous range of pitches into discrete notes, thereby quantizing musical sounds. In astronomy and physics, the concept of quantization was present in the discretized models of planetary orbits, as seen in the Ptolemaic and Copernican systems.


During the 1800s, quantization-based discretization was used to approximate the calculation of integrals, and further used to investigate the impact of rounding errors on the integration result. With algorithms, Lloyd’s K-Means Algorithm is a classic example of quantization. However, the term “quantization” was firmly embedded in scientific literature with the advent of quantum mechanics in the early 20th century, where it was used to describe the phenomenon that certain physical properties, such as energy, exist only in discrete, quantized states. This principle was pivotal in explaining phenomena at the atomic and subatomic levels. In the digital age, quantization found its application in signal processing, where continuous signals are converted into a discrete digital form, and in numerical algorithms, where computations on real-valued numbers are performed with finite-precision arithmetic.


Extending upon this second application and relevant to this section, it is used in computer science to optimize neural networks by reducing the precision of the network weights. Thus, quantization, as a concept, has been subtly woven into the tapestry of scientific and technological development, evolving and adapting to the needs and discoveries of various epochs.


Initial Breakdown


We begin our foray into quantization with a brief analysis of one important use for quantization.


In signal processing, the continuous sine wave (shown in Figure fig-sine-wave) can be quantized into discrete values through a process known as sampling. This is a fundamental concept in digital signal processing and is crucial for converting analog signals (like the continuous sine wave) into a digital form that can be processed by computers. The sine wave is a prevalent example due to its periodic and smooth nature, making it a useful tool for explaining concepts like frequency, amplitude, phase, and, of course, quantization.

Figure 9.17: Sine Wave.

In the quantized version shown in Figure fig-quantized-sine-wave, the continuous sine wave (Figure fig-sine-wave) is sampled at regular intervals (in this case, every \(\frac{\pi}{4}\) radians), and only these sampled values are represented in the digital version of the signal. The step-wise lines between the points show one way to represent the quantized signal in a piecewise-constant form. This is a simplified example of how analog-to-digital conversion works, where a continuous signal is mapped to a discrete set of values, enabling it to be represented and processed digitally.

Figure 9.18: Quantized Sine Wave.
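The sampling-plus-quantization step just described can be sketched in a few lines; here each sample taken every \(\frac{\pi}{4}\) radians is snapped to the nearest multiple of 0.25 (the step size is an illustrative choice, not a standard):

```python
import math

STEP = 0.25  # illustrative quantization step: snap to the nearest multiple of 0.25

def snap(x: float) -> float:
    """Map a continuous value to the nearest discrete quantization level."""
    return round(x / STEP) * STEP

# Sample the continuous sine wave every pi/4 radians over one period, then quantize.
samples = [math.sin(k * math.pi / 4) for k in range(9)]
quantized = [snap(s) for s in samples]
print(quantized)  # [0.0, 0.75, 1.0, 0.75, 0.0, -0.75, -1.0, -0.75, 0.0]
```

Note that the quantized samples (e.g., 0.75 in place of sin(π/4) ≈ 0.7071) carry an irreducible rounding error of at most half the step size, which is exactly the quantization error discussed below for model weights.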

Returning to the context of Machine Learning (ML), quantization refers to the process of constraining the possible values that numerical parameters (such as weights and biases) can take to a discrete set, thereby reducing the precision of the parameters and consequently, the model’s memory footprint. When properly implemented, quantization can reduce model size by up to 4x and improve inference latency and throughput by up to 2-3x. Figure fig-quantized-models-size illustrates the impact that quantization has on different models’ sizes: for example, an Image Classification model like ResNet-v2 can be compressed from 180MB down to 45MB with 8-bit quantization. There is typically less than 1% loss in model accuracy from well tuned quantization. Accuracy can often be recovered by re-training the quantized model with quantization aware training techniques. Therefore, this technique has emerged to be very important in deploying ML models to resource-constrained environments, such as mobile devices, IoT devices, and edge computing platforms, where computational resources (memory and processing power) are limited.

Figure 9.19: Effect of quantization on model sizes. Credit: HarvardX.

There are several dimensions to quantization such as uniformity, stochasticity (or determinism), symmetry, granularity (across layers/channels/groups or even within channels), range calibration considerations (static vs dynamic), and fine-tuning methods (QAT, PTQ, ZSQ). We examine these below.


9.3.5 Types


Uniform Quantization


Uniform quantization involves mapping continuous or high-precision values to a lower-precision representation using a uniform scale. This means that the interval between each possible quantized value is consistent. For example, if weights of a neural network layer are quantized to 8-bit integers (values between 0 and 255), a weight with a floating-point value of 0.56 might be mapped to an integer value of 143, assuming a linear mapping between the original and quantized scales. Due to its use of integer or fixed-point math pipelines, this form of quantization allows computation on the quantized domain without the need to dequantize beforehand.


The process for implementing uniform quantization starts with choosing a range of real numbers to be quantized. The next step is to select a quantization function and map the real values to the integers representable by the bit-width of the quantized representation. For instance, a popular choice for a quantization function is:


\[ Q(r) = \mathrm{Int}(r/S) - Z \]


where Q is the quantization operator, r is a real valued input (in our case, an activation or weight), S is a real valued scaling factor, and Z is an integer zero point. The Int function maps a real value to an integer value through a rounding operation. Through this function, we have effectively mapped real values r to some integer values, resulting in quantized levels which are uniformly spaced.


When the need arises for practitioners to retrieve the original higher precision values, real values r can be recovered from quantized values through an operation known as dequantization. In the example above, this would mean performing the following operation on our quantized value:


\[ \bar{r} = S(Q(r) + Z) \]


As discussed, some precision in the real value is lost by quantization. In this case, the recovered value \(\bar{r}\) will not exactly match r due to the rounding operation. This is an important tradeoff to note; however, in many successful uses of quantization, the loss of precision can be negligible and the test accuracy remains high. Despite this, uniform quantization continues to be the current de-facto choice due to its simplicity and efficient mapping to hardware.
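A minimal sketch of this quantize-dequantize round trip, with an illustrative scale S and zero point Z (in practice S and Z are derived from calibration, discussed later):

```python
def quantize(r: float, S: float, Z: int) -> int:
    """Q(r) = Int(r/S) - Z, taking Int as round-to-nearest."""
    return round(r / S) - Z

def dequantize(q: int, S: float, Z: int) -> float:
    """Recover r_bar = S * (q + Z); differs from r by at most ~S/2."""
    return S * (q + Z)

S, Z = 0.02, 0                 # illustrative scale and zero point only
r = 0.563
q = quantize(r, S, Z)          # 28
r_bar = dequantize(q, S, Z)    # ~0.56: close to r, but the rounding error remains
print(r, q, r_bar)
```

The gap between r and r_bar is the precision lost to rounding; shrinking S tightens it at the cost of representable range for a fixed bit-width.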


Non-uniform Quantization


Non-uniform quantization, on the other hand, does not maintain a consistent interval between quantized values. This approach might be used to allocate more possible discrete values in regions where the parameter values are more densely populated, thereby preserving more detail where it is most needed. For instance, in bell-shaped distributions of weights with long tails, a set of weights in a model predominantly lies within a certain range; thus, more quantization levels might be allocated to that range to preserve finer details, enabling us to better capture information. However, one major weakness of non-uniform quantization is that it requires dequantization before higher precision computations due to its non-uniformity, restricting its ability to accelerate computation compared to uniform quantization.


Typically, a rule-based non-uniform quantization uses a logarithmic distribution of exponentially increasing steps and levels as opposed to linearly. Another popular branch lies in binary-code-based quantization where real number vectors are quantized into binary vectors with a scaling factor. Notably, there is no closed form solution for minimizing errors between the real value and non-uniformly quantized value, so most quantizations in this field rely on heuristic solutions. For instance, recent work by Xu et al. (2018) formulates non-uniform quantization as an optimization problem where the quantization steps/levels in quantizer Q are adjusted to minimize the difference between the original tensor and quantized counterpart.

Xu, Chen, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. 2018. “Alternating Multi-Bit Quantization for Recurrent Neural Networks.” In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. https://openreview.net/forum?id=S19dR9x0b.

\[ \min_Q \|Q(r) - r\|^2 \]


Furthermore, learnable quantizers can be jointly trained with model parameters, and the quantization steps/levels are generally trained with iterative optimization or gradient descent. Additionally, clustering has been used to alleviate information loss from quantization. While capable of capturing higher levels of detail, non-uniform quantization schemes can be difficult to deploy efficiently on general computation hardware, making it less-preferred to methods which use uniform quantization.
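A minimal sketch of the rule-based logarithmic flavor: the quantization levels below are powers of two, so small-magnitude values, where bell-shaped weight distributions concentrate, get proportionally finer resolution (the level set here is an illustrative choice, not a standard codebook):

```python
# Illustrative power-of-two codebook: {0, +/-1, +/-0.5, +/-0.25, +/-0.125}.
# Levels bunch up near zero, unlike the evenly spaced levels of uniform quantization.
LEVELS = [0.0] + [sign * 2.0 ** -e for sign in (1.0, -1.0) for e in range(4)]

def quantize_log(r: float) -> float:
    """Snap r to the nearest codebook level; spacing shrinks toward zero."""
    return min(LEVELS, key=lambda level: abs(level - r))

print(quantize_log(0.6))    # 0.5
print(quantize_log(-0.07))  # -0.125
print(quantize_log(0.02))   # 0.0
```

The nearest-level search is exactly why non-uniform schemes need dequantization (a table lookup) before computation, whereas uniform quantization can stay in integer arithmetic.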

Figure 9.20: Quantization uniformity. Credit: Gholami et al. (2021).

Stochastic Quantization


Unlike the two previous approaches, which generate deterministic mappings, some work explores stochastic quantization for quantization-aware training and reduced-precision training. This approach rounds floating-point numbers up or down with a probability tied to the magnitude of the weight update. The intuition is that such a probabilistic approach may allow a neural network to explore more than deterministic quantization does, and that stochastic rounding may help it escape local optima, since small updates still change the parameters with some probability. Below are two example stochastic mapping functions:

Figure 9.21: Integer vs Binary quantization functions.
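One common stochastic mapping, sketched below, rounds to a neighboring integer with probability proportional to proximity, making the rounding unbiased in expectation:

```python
import math
import random

def stochastic_round(x: float) -> int:
    """Round down with probability 1 - frac and up with probability frac,
    where frac = x - floor(x), so that E[stochastic_round(x)] == x."""
    lo = math.floor(x)
    return lo + (random.random() < x - lo)

random.seed(0)  # seeded only to make the demo reproducible
mean = sum(stochastic_round(2.3) for _ in range(10_000)) / 10_000
print(mean)  # close to 2.3, whereas round(2.3) yields 2 every single time
```

This unbiasedness is the practical appeal: tiny gradient updates that deterministic rounding would always discard still take effect some fraction of the time.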

Zero Shot Quantization


Zero-shot quantization refers to the process of converting a full-precision deep learning model directly into a low-precision, quantized model without the need for any retraining or fine-tuning on the quantized model. The primary advantage of this approach is its efficiency, as it eliminates the often time-consuming and resource-intensive process of retraining a model post-quantization. By leveraging techniques that anticipate and minimize quantization errors, zero-shot quantization aims to maintain the model’s original accuracy even after reducing its numerical precision. It is particularly useful for Machine Learning as a Service (MLaaS) providers aiming to expedite the deployment of their customer’s workloads without having to access their datasets.


9.3.6 Calibration


Calibration is the process of selecting the most effective clipping range [\(\alpha\), \(\beta\)] for the weights and activations to be quantized. For example, consider quantizing activations that originally have a floating-point range between -6 and 6 to 8-bit integers. Simply taking the minimum and maximum possible 8-bit integer values (-128 to 127) as the quantization range may not be the most effective. Instead, calibration involves passing a representative dataset through the model, observing the actual range of the activations, and using that observed range for quantization.


There are many calibration methods but a few commonly used include:

  • Max: Use the maximum absolute value seen during calibration. However, this method is susceptible to outlier data. Notice how in Figure fig-resnet-activations-histogram, we have an outlier cluster around 2.1, while the rest are clustered around smaller values.
  • Entropy: Use KL divergence to minimize information loss between the original floating-point values and values that could be represented by the quantized format. This is the default method used by TensorRT.
  • Percentile: Set the range to a percentile of the distribution of absolute values seen during calibration. For example, 99% calibration would clip the 1% of values with the largest magnitudes.
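As a rough illustration (not from the chapter's codebase), the max and percentile methods can be sketched in a few lines of plain Python; the sample values are invented to mimic an activation distribution with one outlier:

```python
# Sketch: computing a clipping range with max vs. percentile calibration.

def max_calibration(activations):
    """Clip at the maximum absolute value seen -- sensitive to outliers."""
    return max(abs(a) for a in activations)

def percentile_calibration(activations, pct=99.0):
    """Clip at the pct-th percentile of absolute values, discarding outliers."""
    mags = sorted(abs(a) for a in activations)
    idx = min(len(mags) - 1, int(round(pct / 100.0 * (len(mags) - 1))))
    return mags[idx]

# A mostly small-valued distribution with one outlier, echoing the
# ResNet activation histogram discussed above.
acts = [0.1, -0.2, 0.15, 0.3, -0.25, 0.05, 2.1]
print(max_calibration(acts))            # 2.1 -- dominated by the outlier
print(percentile_calibration(acts, 90)) # 0.3 -- the outlier is clipped away
```

With so few samples a 90th percentile is used for illustration; with real calibration sets a 99%+ percentile is typical.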
Figure 9.22: Input activations to layer 3 in ResNet50. Credit: Wu, Judd, and Isaev (2020).

Importantly, the quality of calibration can make a difference between a quantized model that retains most of its accuracy and one that degrades significantly. Hence, it’s an essential step in the quantization process. When choosing a calibration range, there are two types: symmetric and asymmetric.


Symmetric Quantization


Symmetric quantization maps real values to a symmetrical clipping range centered around 0. This involves choosing a range [\(\alpha\), \(\beta\)] where \(\alpha = -\beta\). For example, one symmetrical range would be based on the min/max values of the real values such that \(-\alpha = \beta = \max(|r_{max}|, |r_{min}|)\).


Symmetric clipping ranges are the most widely adopted in practice as they have the advantage of easier implementation. In particular, the mapping of zero to zero in the clipping range (sometimes called "zeroing out of the zero point") can lead to a reduction in computational cost during inference (Wu, Judd, and Isaev 2020).


Asymmetric Quantization


Asymmetric quantization maps real values to an asymmetrical clipping range that isn't necessarily centered around 0, as shown in Figure fig-quantization-symmetry on the right. It involves choosing a range [\(\alpha\), \(\beta\)] where \(\alpha \neq -\beta\). For example, selecting a range based on the minimum and maximum real values, i.e., \(\alpha = r_{min}\) and \(\beta = r_{max}\), creates an asymmetric range. Typically, asymmetric quantization produces tighter clipping ranges than symmetric quantization, which is important when target weights and activations are imbalanced, e.g., the activation after a ReLU always has non-negative values. Despite producing tighter clipping ranges, asymmetric quantization is less preferred than symmetric quantization, as it does not guarantee that the real value zero maps exactly to a quantized integer.
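The difference between the two range choices can be sketched as follows; this is an illustrative toy implementation, with the ReLU-style range [0, 6] chosen to echo the example above:

```python
# Sketch: symmetric vs. asymmetric INT8 clipping ranges for the same data.
# Real toolchains fold this into their calibration step.

def symmetric_params(r_min, r_max, n_bits=8):
    # -alpha = beta = max(|r_min|, |r_max|); zero point fixed at 0
    beta = max(abs(r_min), abs(r_max))
    scale = beta / (2 ** (n_bits - 1) - 1)       # maps beta -> 127
    return scale, 0

def asymmetric_params(r_min, r_max, n_bits=8):
    # [alpha, beta] = [r_min, r_max]; a zero point shifts the integer grid
    scale = (r_max - r_min) / (2 ** n_bits - 1)  # 256 levels over the range
    zero_point = round(-r_min / scale) - 2 ** (n_bits - 1)
    return scale, zero_point

# ReLU activations are non-negative, so the symmetric range [-6, 6]
# wastes half its levels, while the asymmetric range [0, 6] does not.
print(symmetric_params(0.0, 6.0))    # coarser scale, zero point 0
print(asymmetric_params(0.0, 6.0))   # ~2x finer scale, zero point -128
```

Note how the asymmetric scale is roughly half the symmetric one here, i.e., twice the resolution over the values that actually occur.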

Figure 9.23: Quantization (a)symmetry. Credit: Gholami et al. (2021).

Granularity


Upon deciding the type of clipping range, it is essential to tighten the range to allow a model to retain as much of its accuracy as possible. We will use convolutional neural networks to explore methods that fine-tune the granularity of clipping ranges for quantization. The input activation of a layer in our CNN undergoes convolution with multiple convolutional filters, and every convolutional filter can possess a unique range of values. Notice how in Figure fig-quantization-granularity, the range for Filter 1 is much smaller than that for Filter 3. Consequently, one distinguishing feature of quantization approaches is the precision with which the clipping range [\(\alpha\), \(\beta\)] is determined for the weights.

Figure 9.24: Quantization granularity: variable ranges. Credit: Gholami et al. (2021).
  1. Layerwise Quantization: This approach determines the clipping range by considering all of the weights in the convolutional filters of a layer, then uses the same clipping range for all convolutional filters. It is the simplest to implement, but, as such, it often results in sub-optimal accuracy due to the wide variety of differing ranges between filters. For example, a convolutional kernel with a narrower range of parameters loses quantization resolution due to another kernel in the same layer having a wider range.
  2. Groupwise Quantization: This approach groups different channels inside a layer to calculate the clipping range. This method can be helpful when the distribution of parameters across a single convolution/activation varies a lot. In practice, this method was useful in Q-BERT (Shen et al. 2020) for quantizing Transformer (Vaswani et al. 2017) models that consist of fully-connected attention layers. The downside of this approach is the extra cost of accounting for different scaling factors.
  3. Channelwise Quantization: This popular method uses a fixed range for each convolutional filter that is independent of other channels. Because each channel is assigned a dedicated scaling factor, this method ensures a higher quantization resolution and often results in higher accuracy.
  4. Sub-channelwise Quantization: Taking channelwise quantization to the extreme, this method determines the clipping range with respect to any group of parameters in a convolution or fully-connected layer. It may result in considerable overhead, since different scaling factors need to be taken into account when processing a single convolution or fully-connected layer.
Shen, Sheng, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2020. “Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT.” Proceedings of the AAAI Conference on Artificial Intelligence 34 (05): 8815–21. https://doi.org/10.1609/aaai.v34i05.6409.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” Advances in Neural Information Processing Systems 30.

Of these, channelwise quantization is the current standard used for quantizing convolutional kernels, since it enables the adjustment of clipping ranges for each individual kernel with negligible overhead.
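A toy sketch of the layerwise vs. channelwise trade-off (the filter values are invented for illustration):

```python
# Sketch: layerwise (one scale for all filters) vs. channelwise
# (one scale per filter) symmetric INT8 scales.

def scale_for(values, n_bits=8):
    return max(abs(v) for v in values) / (2 ** (n_bits - 1) - 1)

# Three "filters" with very different weight ranges, as in the figure above.
filters = [
    [0.01, -0.02, 0.015],   # narrow range
    [0.5, -0.4, 0.3],
    [2.0, -1.8, 1.5],       # wide range
]

layer_scale = scale_for([w for f in filters for w in f])  # one shared scale
channel_scales = [scale_for(f) for f in filters]          # one scale per filter

# With the single layerwise scale, the narrow filter's weights collapse
# onto just a couple of integer levels:
print([round(w / layer_scale) for w in filters[0]])    # [1, -1, 1]
# With its own channelwise scale, the same filter keeps its resolution:
print([round(w / channel_scales[0]) for w in filters[0]])
```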


Static and Dynamic Quantization


After determining the type and granularity of the clipping range, practitioners must decide when the ranges are computed in their range calibration algorithms. There are two approaches to quantizing activations: static quantization and dynamic quantization.


Static quantization is the most frequently used approach. In this, the clipping range is pre-calculated and static during inference. It does not add any computational overhead, but, consequently, results in lower accuracy as compared to dynamic quantization. A popular method of implementing this is to run a series of calibration inputs to compute the typical range of activations [Quantization and training of neural networks for efficient integer-arithmetic-only inference, Dyadic neural network quantization].


Dynamic quantization is an alternative approach which dynamically calculates the range for each activation map during runtime. The approach requires real-time computations which might have a very high overhead. By doing this, dynamic quantization often achieves the highest accuracy as the range is calculated specifically for each input.


Between the two, calculating the range dynamically usually is very costly, so most practitioners will often use static quantization instead.
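The trade-off can be sketched as follows; the calibration batch, runtime input, and `quantize` helper are all invented for illustration:

```python
# Sketch: a static clipping range fixed at calibration time vs. a
# dynamic range recomputed for every input at runtime.

def quantize(xs, clip):
    scale = clip / 127
    return [max(-127, min(127, round(x / scale))) for x in xs]

calibration_batch = [0.5, -1.0, 2.0, -1.5]
static_clip = max(abs(x) for x in calibration_batch)  # computed once, offline

runtime_input = [0.1, -0.05, 0.2]                     # much smaller range
# Static: reuses the precomputed clip; no runtime cost, but coarse here.
print(quantize(runtime_input, static_clip))           # [6, -3, 13]
# Dynamic: recomputes the clip per input; tighter, but costs an extra
# pass over the activations for every inference.
print(quantize(runtime_input, max(abs(x) for x in runtime_input)))
```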


9.3.7 Techniques


The two prevailing techniques for quantizing models are Post Training Quantization and Quantization Aware Training.


Post Training Quantization - Post-training quantization (PTQ) is a quantization technique where the model is quantized after it has been trained. The model is trained in floating point, and then the weights and activations are quantized as a post-processing step. This is the simplest approach and does not require access to the training data. Unlike Quantization-Aware Training (QAT), PTQ sets weight and activation quantization parameters directly, making it low-overhead and suitable for situations with limited or unlabeled data. However, not readjusting the weights after quantizing, especially in low-precision quantization, can lead to very different behavior and thus lower accuracy. To tackle this, techniques like bias correction, equalizing weight ranges, and adaptive rounding methods have been developed. PTQ can also be applied in zero-shot scenarios, where no training or testing data are available. This method has been made even more efficient to benefit compute- and memory-intensive large language models. Recently, SmoothQuant, a training-free, accuracy-preserving, and general-purpose PTQ solution which enables 8-bit weight, 8-bit activation quantization for LLMs, has been developed, demonstrating up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy (Xiao et al. 2022).


In PTQ, a pretrained model undergoes a calibration process, as shown in Figure fig-PTQ-diagram. Calibration involves using a separate dataset known as calibration data, a specific subset of the training data reserved for quantization to help find the appropriate clipping ranges and scaling factors.

Figure 9.25: Post-Training Quantization and calibration. Credit: Gholami et al. (2021).

Quantization Aware Training - Quantization-aware training (QAT) is a fine-tuning of the PTQ model. The model is trained with awareness of quantization, allowing it to adjust for quantization effects, which produces better accuracy with quantized inference. Quantizing a trained neural network model with methods such as PTQ introduces perturbations that can deviate the model from its original convergence point. For instance, Krishnamoorthi showed that, even with per-channel quantization, networks like MobileNet do not reach baseline accuracy with int8 Post Training Quantization (PTQ) and require Quantization-Aware Training (Krishnamoorthi 2018). To address this, QAT retrains the model with quantized parameters, employing forward and backward passes in floating point but quantizing parameters after each gradient update. Handling the non-differentiable quantization operator is crucial; a widely used method is the Straight-Through Estimator (STE), which approximates the rounding operation as an identity function. While other methods and variations exist, STE remains the most commonly used due to its practical effectiveness. In QAT, a pretrained model is quantized and then finetuned using training data to adjust parameters and recover accuracy degradation, as shown in Figure fig-QAT-diagram. The calibration process is often conducted in parallel with the finetuning process for QAT.
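A hand-rolled sketch of the STE idea (frameworks implement this inside their fake-quantization ops; the function names here are hypothetical):

```python
# Sketch of the Straight-Through Estimator used in QAT. The forward pass
# applies (non-differentiable) rounding; the backward pass pretends the
# rounding was the identity, passing gradients straight through inside
# the clipping range.

def fake_quantize(x, scale, clip=127):
    q = max(-clip, min(clip, round(x / scale)))  # quantize (rounds!)
    return q * scale                             # dequantize back to float

def ste_grad(x, scale, upstream_grad, clip=127):
    # d(fake_quantize)/dx is 0 almost everywhere, so training would stall.
    # The STE instead uses: identity inside the clipping range, 0 outside.
    inside = abs(x / scale) <= clip
    return upstream_grad if inside else 0.0

scale = 0.05
print(fake_quantize(0.23, scale))    # 0.25 -- snapped to the quant grid
print(ste_grad(0.23, scale, 1.0))    # 1.0  -- gradient flows through
print(ste_grad(100.0, scale, 1.0))   # 0.0  -- clipped region, no gradient
```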

Figure 9.26: Quantization-Aware Training. Credit: Gholami et al. (2021).
Gholami, Amir, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021. “A Survey of Quantization Methods for Efficient Neural Network Inference.” ArXiv Preprint. https://arxiv.org/abs/2103.13630.

Quantization-Aware Training serves as a natural extension of Post-Training Quantization. Following the initial quantization performed by PTQ, QAT is used to further refine and fine-tune the quantized parameters - see how in Figure fig-QAT-PTQ-relation, the PTQ model undergoes an additional step, QAT. It involves a retraining process where the model is exposed to additional training iterations using the original data. This dynamic training approach allows the model to adapt and adjust its parameters, compensating for the performance degradation caused by quantization.

Figure 9.27: PTQ and QAT. Credit: “The Ultimate Guide to Deep Learning Model Quantization and Quantization-Aware Training” (n.d.).
“The Ultimate Guide to Deep Learning Model Quantization and Quantization-Aware Training.” n.d. https://deci.ai/quantization-and-quantization-aware-training/.

Figure fig-quantization-methods-summary shows the relative accuracy of different models after PTQ and QAT. In almost all cases, QAT yields a better accuracy than PTQ. Consider for example EfficientNet b0. After PTQ, the accuracy drops from 76.85% to 72.06%. But when we apply QAT, the accuracy rebounds to 76.95% (with even a slight improvement over the original accuracy).

Figure 9.28: Relative accuracies of PTQ and QAT. Credit: Wu, Judd, and Isaev (2020).
| Feature/Technique | Post Training Quantization | Quantization Aware Training | Dynamic Quantization |
|---|---|---|---|
| Pros | | | |
| Simplicity | ✓ | ✗ | ✗ |
| Accuracy Preservation | ✗ | ✓ | ✓ |
| Adaptability | ✗ | ✗ | ✓ |
| Optimized Performance | ✗ | ✓ | Potentially |
| Cons | | | |
| Accuracy Degradation | ✓ | ✗ | Potentially |
| Computational Overhead | ✗ | ✓ | ✓ |
| Implementation Complexity | ✗ | ✓ | ✓ |
| Tradeoffs | | | |
| Speed vs. Accuracy | ✓ | ✗ | ✗ |
| Accuracy vs. Cost | ✗ | ✓ | ✗ |
| Adaptability vs. Overhead | ✗ | ✗ | ✓ |

9.3.8 Weights vs. Activations


Weight Quantization: Involves converting the continuous or high-precision weights of a model to lower precision, such as converting Float32 weights to quantized INT8 (integer) weights - in Figure fig-weight-activations-quantization, weight quantization takes place in the second step (red squares) when we multiply the inputs. This reduces the model size, thereby reducing the memory required to store the model and the computational resources needed to perform inference. For example, consider a weight matrix in a neural network layer with Float32 weights as [0.215, -1.432, 0.902, …]. Through symmetric weight quantization with scale \(1.432/127 \approx 0.0113\), these would be mapped to INT8 values like [19, -127, 80, …], significantly reducing the memory required to store them.
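A minimal sketch of this symmetric INT8 mapping; note that with a symmetric per-tensor scale, the largest-magnitude weight maps to ±127:

```python
# Sketch: symmetric per-tensor INT8 quantization of a few Float32 weights.

weights = [0.215, -1.432, 0.902]
scale = max(abs(w) for w in weights) / 127  # largest weight maps to +/-127

q_weights = [round(w / scale) for w in weights]
print(q_weights)                            # [19, -127, 80] -- 1 byte each

# Dequantizing recovers an approximation of the originals:
print([round(q * scale, 3) for q in q_weights])
```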

Figure 9.29: Weight and activation quantization. Credit: HarvardX.

Activation Quantization: Involves quantizing the activation values (outputs of layers) during model inference. This can reduce the computational resources required during inference, but it introduces additional challenges in maintaining model accuracy due to the reduced precision of intermediate computations. For example, in a convolutional neural network (CNN), the activation maps (feature maps) produced by convolutional layers, originally in Float32, might be quantized to INT8 during inference to accelerate computation, especially on hardware optimized for integer arithmetic. Additionally, recent work has explored the use of Activation-aware Weight Quantization for LLM compression and acceleration, which involves protecting only the 1% most salient weights, identified by observing the activations rather than the weights (Lin et al. 2023).


9.3.9 Trade-offs


Quantization invariably introduces a trade-off between model size/performance and accuracy. While it significantly reduces the memory footprint and can accelerate inference, especially on hardware optimized for low-precision arithmetic, the reduced precision can degrade model accuracy.


Model Size: A model with weights represented as Float32 being quantized to INT8 can theoretically reduce the model size by a factor of 4, enabling it to be deployed on devices with limited memory. The model size of large language models has been growing faster than GPU memory in recent years, leading to a big gap between the supply and demand for memory. Figure fig-model-size-pace illustrates the recent trend of the widening gap between model size (red line) and accelerator memory (yellow line). Quantization and model compression techniques can help bridge this gap.

Figure 9.30: Model size vs. accelerator memory. Credit: Xiao et al. (2022).
Xiao, Guangxuan, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. 2022. “SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models.” ArXiv Preprint. https://arxiv.org/abs/2211.10438.

Inference Speed: Quantization can also accelerate inference, as lower-precision arithmetic is computationally less expensive. For example, certain hardware accelerators, like Google's Edge TPU, are optimized for INT8 arithmetic and can perform inference significantly faster with INT8 quantized models compared to their floating-point counterparts. The reduction in memory from quantization also reduces the amount of data transmission, saving memory and speeding up the process. Figure fig-nvidia-turing compares the increase in throughput and the reduction in memory bandwidth for different data types on the NVIDIA Turing GPU.

Figure 9.31: Benefits of lower precision data types. Credit: Wu, Judd, and Isaev (2020).
Wu, Hao, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, and Paulius Micikevicius. 2020. “Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation.” ArXiv Preprint. https://arxiv.org/abs/2004.09602.

Accuracy: The reduction in numerical precision post-quantization can lead to a degradation in model accuracy, which might be acceptable in certain applications (e.g., image classification) but not in others (e.g., medical diagnosis). Therefore, post-quantization, the model typically requires re-calibration or fine-tuning to mitigate accuracy loss. Furthermore, recent work has explored the use of Activation-aware Weight Quantization (Lin et al. 2023), which is based on the observation that protecting only 1% of salient weights can greatly reduce quantization error.


9.3.10 Quantization and Pruning


Pruning and quantization work well together, and it's been found that pruning doesn't hinder quantization. In fact, pruning can help reduce quantization error. Intuitively, this is because pruning reduces the number of weights to quantize, thereby reducing the accumulated error from quantization. For example, an unpruned AlexNet has 60 million weights to quantize, whereas a pruned AlexNet has only 6.7 million. This significant drop in weights helps reduce the quantization error of the pruned AlexNet relative to the unpruned one. Furthermore, recent work has found that quantization-aware pruning generates more computationally efficient models than either pruning or quantization alone; it typically performs similarly to, or better than, other neural architecture search techniques like Bayesian optimization in terms of computational efficiency (Hawks et al. 2021).

Figure 9.32: Accuracy vs. compression rate under different compression methods. Credit: Han, Mao, and Dally (2015).
Han, Song, Huizi Mao, and William J. Dally. 2015. “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.” arXiv Preprint arXiv:1510.00149.

9.3.11 Edge-aware Quantization


Quantization not only reduces model size but also enables faster computations and draws less power, making it vital to edge development. Edge devices typically have tight resource constraints on compute, memory, and power, which many of today's deep NN models cannot meet. Furthermore, many edge processors do not support floating-point operations, making integer quantization particularly important for chips like GAP-8, a RISC-V SoC for edge inference with a dedicated CNN accelerator, which supports only integer arithmetic.


One hardware platform utilizing quantization is the ARM Cortex-M group of 32-bit RISC ARM processor cores. They leverage fixed-point quantization with power-of-two scaling factors so that quantization and dequantization can be efficiently done by bit shifting. Additionally, Google Edge TPUs, Google's emerging solution for running inference at the edge, are designed for small, low-powered devices and support only 8-bit arithmetic. Many complex neural network models that could previously be deployed only on servers due to their high computational needs can now run on edge devices thanks to recent advancements (e.g., quantization methods) in the edge computing field.
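Why power-of-two scale factors help can be sketched in a line of integer arithmetic; the accumulator value below is invented for illustration:

```python
# Sketch: with a power-of-two scale factor, requantizing an accumulator
# down by 2**k is a single shift instruction -- no hardware multiplier
# or floating-point unit needed, which suits Cortex-M class parts.

def requantize_shift(acc, shift):
    """Scale an integer accumulator down by 2**shift using only a shift."""
    return acc >> shift               # same as acc // (2 ** shift) for acc >= 0

acc = 18_432                          # e.g. an int32 convolution accumulator
print(requantize_shift(acc, 7))       # 144 -- divided by 128 with one shift
print(acc // 128)                     # 144 -- same result, for comparison
```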


In addition to being an indispensable technique for many edge processors, quantization has also brought noteworthy improvements to non-edge processors, for example by helping them meet Service Level Agreement (SLA) requirements such as 99th-percentile latency.


Thus, quantization, combined with efficient low-precision logic and dedicated deep learning accelerators, has been a crucial driving force in the evolution of such edge processors.


The video below is a lecture on quantization and the different quantization methods.


9.4 Efficient Hardware Implementation


Efficient hardware implementation transcends the selection of suitable components; it requires a holistic understanding of how software will interact with the underlying architectures. The essence of achieving peak performance in TinyML applications lies not only in tailoring algorithms to the hardware but also in ensuring that the hardware is strategically designed to support these algorithms. This synergy between hardware and software is crucial. As we delve deeper into the intricacies of efficient hardware implementation, the significance of a co-design approach, where hardware and software are developed in tandem, becomes increasingly evident. This section provides an overview of how hardware, and the interactions between hardware and software, can be optimized to improve model performance.


9.4.3 Kernel Optimizations


Kernel optimizations are modifications made to the kernel to enhance the performance of machine learning models on resource-constrained devices. We will separate kernel optimizations into two types.


General Kernel Optimizations


These are kernel optimizations that all devices can benefit from. They provide techniques for converting code into more efficient instructions.

Loop unrolling

Instead of having a loop with loop control (incrementing the loop counter, checking the loop termination condition), the loop can be unrolled and the overhead of loop control omitted. This may also provide additional opportunities for parallelism that may not be possible with the loop structure. Unrolling can be particularly beneficial for tight loops, where the body of the loop is a small number of instructions with many iterations.
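An illustrative sketch of unrolling by a factor of four (in Python the benefit is only conceptual; on a microcontroller the compiler emits fewer branch and counter-update instructions per element):

```python
# Sketch: a dot product with and without 4x loop unrolling.

def dot(a, b):
    total = 0.0
    for i in range(len(a)):        # one counter update + check per element
        total += a[i] * b[i]
    return total

def dot_unrolled4(a, b):
    total = 0.0
    n = len(a) - len(a) % 4
    for i in range(0, n, 4):       # loop control amortized over 4 elements
        total += (a[i] * b[i] + a[i + 1] * b[i + 1]
                  + a[i + 2] * b[i + 2] + a[i + 3] * b[i + 3])
    for i in range(n, len(a)):     # remainder elements
        total += a[i] * b[i]
    return total

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [5.0, 4.0, 3.0, 2.0, 1.0]
print(dot(a, b), dot_unrolled4(a, b))   # both 35.0
```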

Blocking

Blocking is used to make memory access patterns more efficient. For example, if we have three computations where the first and third need to access cache region A and the second needs to access cache region B, grouping the two computations that access region A together reduces the number of memory reads needed.

Tiling

Similarly to blocking, tiling divides data and computation into chunks, but extends beyond cache improvements. Tiling creates independent partitions of computation that can be run in parallel, which can result in significant performance improvements.
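A toy sketch of a tiled matrix multiply; the tile size and matrices are illustrative:

```python
# Sketch: a matrix multiply processed in TILE x TILE blocks, so each
# block of B is reused while it is hot in the cache; the blocks are also
# independent units of work that could be dispatched to parallel workers.

TILE = 2

def matmul_tiled(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, TILE):
        for kk in range(0, m, TILE):
            for jj in range(0, p, TILE):
                # one independent TILE x TILE unit of work
                for i in range(ii, min(ii + TILE, n)):
                    for k in range(kk, min(kk + TILE, m)):
                        for j in range(jj, min(jj + TILE, p)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul_tiled([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```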

Optimized Kernel Libraries

This comprises developing optimized kernels that take full advantage of specific hardware. One example is the CMSIS-NN library, a collection of efficient neural network kernels developed to optimize performance and minimize the memory footprint of models on Arm Cortex-M processors, which are common on IoT edge devices. The kernels leverage multiple hardware capabilities of Cortex-M processors, such as Single Instruction Multiple Data (SIMD), Floating Point Units (FPUs), and M-Profile Vector Extensions (MVE). These optimizations make common operations like matrix multiplications more efficient, boosting the performance of model operations on Cortex-M processors (Lai, Suda, and Chandra 2018).

Lai, Liangzhen, Naveen Suda, and Vikas Chandra. 2018. “CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs.” https://arxiv.org/abs/1801.06601.

9.4.4 Compute-in-Memory (CiM)


This is one example of Algorithm-Hardware Co-design. CiM is a computing paradigm that performs computation within memory. Therefore, CiM architectures allow for operations to be performed directly on the stored data, without the need to shuttle data back and forth between separate processing and memory units. This design paradigm is particularly beneficial in scenarios where data movement is a primary source of energy consumption and latency, such as in TinyML applications on edge devices. Figure fig-computing-memory is one example of using CiM in TinyML: keyword spotting requires an always-on process that looks for certain wake words (such as ‘Hey, Siri’). Given the resource-intensive nature of this task, integrating CiM for the always-on keyword detection model can enhance efficiency.


Through algorithm-hardware co-design, the algorithms can be optimized to leverage the unique characteristics of CiM architectures, and conversely, the CiM hardware can be customized or configured to better support the computational requirements and characteristics of the algorithms. This is achieved by using the analog properties of memory cells, such as addition and multiplication in DRAM. (Zhou et al. 2021)

Figure 9.34: CiM for keyword spotting. Credit: Zhou et al. (2021).
Zhou, Chuteng, Fernando Garcia Redondo, Julian Büchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, and Paul N. Whatmough. 2021. “AnalogNets: ML-HW Co-Design of Noise-Robust TinyML Models and Always-on Analog Compute-in-Memory Accelerator.” https://arxiv.org/abs/2111.06503.

9.4.5 Memory Access Optimization


Different devices may have different memory hierarchies. Optimizing for the specific memory hierarchy of the specific hardware can lead to great performance improvements by reducing the costly operations of reading from and writing to memory. Dataflow optimization can be achieved by optimizing for data reuse within a single layer and across multiple layers. This dataflow optimization can be tailored to the specific memory hierarchy of the hardware, which can yield greater benefits than generic optimizations applied across different hardware.


Leveraging Sparsity


Pruning is a fundamental approach to compressing models to make them compatible with resource-constrained devices. It results in sparse models in which many weights are zero. Leveraging this sparsity can therefore lead to significant improvements in performance, and tools have been created to achieve exactly this. RAMAN, for example, is a sparse TinyML accelerator designed for inference on edge devices. RAMAN overlaps input and output activations on the same memory space, reducing storage requirements by up to 50% (Krishna et al. 2023).

Krishna, Adithya, Srikanth Rohit Nudurupati, Chandana D. G., Pritesh Dwivedi, André van Schaik, Mahesh Mehendale, and Chetan Singh Thakur. 2023. “RAMAN: A Re-Configurable and Sparse TinyML Accelerator for Inference on Edge.” https://arxiv.org/abs/2306.06493.
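The storage and compute savings from sparsity can be sketched with a toy sparse dot product (this is illustrative, not RAMAN's actual scheme):

```python
# Sketch: exploiting sparsity at inference time by storing only the
# nonzero weights (as index/value pairs) and skipping zero multiplies.

def to_sparse(weights):
    return [(i, w) for i, w in enumerate(weights) if w != 0.0]

def sparse_dot(sparse_w, x):
    return sum(w * x[i] for i, w in sparse_w)

# A heavily pruned weight vector: 2 nonzeros out of 8.
dense_w = [0.0, 0.0, 1.5, 0.0, 0.0, -0.5, 0.0, 0.0]
x = [1.0] * 8

sparse_w = to_sparse(dense_w)
print(len(sparse_w))             # 2 -- storage drops from 8 weights to 2 pairs
print(sparse_dot(sparse_w, x))   # 1.0 -- same result as the dense dot product
```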

Optimization Frameworks


Optimization Frameworks have been introduced to exploit the specific capabilities of the hardware to accelerate the software. One example of such a framework is hls4ml - Figure fig-hls4ml-workflow provides an overview of the framework’s workflow. This open-source software-hardware co-design workflow aids in interpreting and translating machine learning algorithms for implementation with both FPGA and ASIC technologies. Features such as network optimization, new Python APIs, quantization-aware pruning, and end-to-end FPGA workflows are embedded into the hls4ml framework, leveraging parallel processing units, memory hierarchies, and specialized instruction sets to optimize models for edge hardware. Moreover, hls4ml is capable of translating machine learning algorithms directly into FPGA firmware.

Figure 9.35: hls4ml framework workflow. Credit: Fahim et al. (2021).
Fahim, Farah, Benjamin Hawks, Christian Herwig, James Hirschauer, Sergo Jindariani, Nhan Tran, Luca P. Carloni, et al. 2021. “hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices.” https://arxiv.org/abs/2103.05579.

Another framework for FPGAs that takes a holistic approach is CFU Playground (Prakash et al. 2023).

Prakash, Shvetank, Tim Callahan, Joseph Bushagour, Colby Banbury, Alan V. Green, Pete Warden, Tim Ansell, and Vijay Janapa Reddi. 2023. “CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (TinyML) Acceleration on FPGAs.” In 2023 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). Vol. abs/2201.01863. IEEE. https://doi.org/10.1109/ispass57527.2023.00024.

Hardware Built Around Software


In a contrasting approach, hardware can be custom-designed around software requirements to optimize performance for a specific application. This paradigm creates specialized hardware to better adapt to the specifics of the software, thus reducing computational overhead and improving operational efficiency. One example of this approach is a voice-recognition application by Kwon and Park (2021). The paper proposes a structure wherein preprocessing operations, traditionally handled by software, are allocated to custom-designed hardware. This was achieved by introducing resistor-transistor logic to an inter-integrated circuit sound module for windowing and audio raw data acquisition in the voice-recognition application. Consequently, offloading these preprocessing operations reduced the computational load on the software, showcasing a practical application of building hardware around software to enhance efficiency and performance.

Figure 9.36: Delegating data processing to an FPGA. Credit: Kwon and Park (2021).
Kwon, Jisu, and Daejin Park. 2021. “Hardware/Software Co-Design for TinyML Voice-Recognition Application on Resource Frugal Edge Devices.” Applied Sciences 11 (22): 11073. https://doi.org/10.3390/app112211073.

SplitNets


SplitNets were introduced in the context of head-mounted systems. They distribute the Deep Neural Network (DNN) workload among camera sensors and an aggregator. This is particularly compelling in the context of TinyML. The SplitNet framework is a split-aware NAS that finds the optimal neural network architecture to achieve good accuracy, splits the model among the sensors and the aggregator, and minimizes the communication between them. Figure fig-splitnet-performance demonstrates how SplitNets (in red) achieve higher accuracy for lower latency (running on ImageNet) than other approaches, such as running the DNN on-sensor (All-on-sensor; in green) or on mobile (All-on-aggregator; in blue). Minimal communication is important in TinyML, where memory is highly constrained: the sensors conduct some of the processing on their own chips and then send only the necessary information to the aggregator. When tested on ImageNet, SplitNets reduced latency by one order of magnitude on head-mounted devices. This can be helpful when the sensor has its own chip (Dong et al. 2022).

Figure 9.37: SplitNets vs. other approaches. Credit: Dong et al. (2022).
Dong, Xin, Barbara De Salvo, Meng Li, Chiao Liu, Zhongnan Qu, H. T. Kung, and Ziyun Li. 2022. “SplitNets: Designing Neural Architectures for Efficient Distributed Computing on Head-Mounted Systems.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12549–59. IEEE. https://doi.org/10.1109/cvpr52688.2022.01223.

Hardware Specific Data Augmentation


Each edge device may possess unique sensor characteristics, leading to specific noise patterns that can impact model performance. One example is audio data, where variations stemming from the choice of microphone are prevalent. Applications such as Keyword Spotting can experience substantial enhancements by incorporating data recorded from devices similar to those intended for deployment. Fine-tuning of existing models can be employed to adapt the data precisely to the sensor’s distinctive characteristics.


9.5 Software and Framework Support


While all of the aforementioned techniques like pruning, quantization, and efficient numerics are well-known, they would remain impractical and inaccessible without extensive software support. For example, directly quantizing weights and activations in a model would require manually modifying the model definition and inserting quantization operations throughout. Similarly, directly pruning model weights requires manipulating weight tensors. Such tedious approaches become infeasible at scale.


Without the extensive software innovation across frameworks, optimization tools and hardware integration, most of these techniques would remain theoretical or only viable to experts. Without framework APIs and automation to simplify applying these optimizations, they would not see adoption. Software support makes them accessible to general practitioners and unlocks real-world benefits. In addition, issues such as hyperparameter tuning for pruning, managing the trade-off between model size and accuracy, and ensuring compatibility with target devices pose hurdles that developers must navigate.


9.5.1 Built-in Optimization APIs


Major machine learning frameworks like TensorFlow, PyTorch, and MXNet provide libraries and APIs to allow common model optimization techniques to be applied without requiring custom implementations. For example, TensorFlow offers the TensorFlow Model Optimization Toolkit which contains modules like:

  • quantization - Applies quantization-aware training to convert floating-point models to lower precision such as int8 with minimal accuracy loss. Handles weight and activation quantization.
  • sparsity - Provides pruning APIs to induce sparsity and remove unnecessary connections in models such as neural networks. Can prune weights, layers, etc.
  • clustering - Supports model compression by clustering weights into groups for higher compression rates.

These APIs allow users to enable optimization techniques like quantization and pruning without directly modifying model code. Parameters such as target sparsity rates and quantization bit-widths can be configured. Similarly, PyTorch provides torch.quantization for converting models to lower-precision representations, with quantization support built on top of its standard tensor and module abstractions, and it offers torch.nn.utils.prune for built-in pruning of models. MXNet offers gluon.contrib layers that add quantization capabilities such as fixed-point rounding and stochastic rounding of weights/activations during training. This allows quantization to be readily included in Gluon models.
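To give a rough sense of what such sparsity APIs do under the hood, the sketch below applies magnitude-based pruning by masking out the smallest-magnitude weights. This is a simplified pure-Python stand-in, not the actual implementation of any framework's pruning API.

```python
def prune_by_magnitude(weights, target_sparsity):
    """Zero out the smallest-magnitude weights until target_sparsity is reached."""
    n_prune = int(len(weights) * target_sparsity)
    # Rank indices by absolute weight value; the smallest get masked out.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned_idx = set(order[:n_prune])
    mask = [0.0 if i in pruned_idx else 1.0 for i in range(len(weights))]
    return [w * m for w, m in zip(weights, mask)], mask

weights = [0.8, -0.05, 0.3, 0.02, -0.6, 0.1]
pruned, mask = prune_by_magnitude(weights, target_sparsity=0.5)
# The three smallest-magnitude weights are zeroed; the rest are untouched.
```

Framework APIs do essentially this at tensor scale, while also handling masking during training and export of the sparse model.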


The core benefit of built-in optimizations is that users can apply them without re-implementing complex techniques. This makes optimized models accessible to a broad range of practitioners. It also ensures best practices are followed by building on research and experience implementing the methods. As new optimizations emerge, frameworks strive to provide native support and APIs where possible to further lower the barrier to efficient ML. The availability of these tools is key to widespread adoption.


9.5.2 Automated Optimization Tools


Automated optimization tools provided by frameworks can analyze models and automatically apply optimizations like quantization, pruning, and operator fusion to make the process easier and accessible without excessive manual tuning. In effect, this builds on top of the previous section. For example, TensorFlow provides the TensorFlow Model Optimization Toolkit which contains modules like:

  • QuantizationAwareTraining - Automatically quantizes weights and activations in a model to lower precision such as UINT8 or INT8 with minimal accuracy loss. It inserts fake quantization nodes during training so that the model can learn to be quantization-friendly.
  • Pruning - Automatically removes unnecessary connections in a model based on analysis of weight importance. Can prune entire filters in convolutional layers or attention heads in transformers. Handles iterative re-training to recover any accuracy loss.
  • GraphOptimizer - Applies graph optimizations like operator fusion to consolidate operations and reduce execution latency, especially for inference. In Figure 9.38, you can see the original (Source Graph) on the left and how its operations are transformed (consolidated) on the right. Notice how Block1 in the Source Graph has three separate steps (Convolution, BiasAdd, and Activation), which are consolidated into a single Block1 in the Optimized Graph.
Figure 9.38: GraphOptimizer. Credit: Wess et al. (2020).

Wess, Matthias, Matvey Ivanov, Christoph Unger, and Anvesh Nookala. 2020. “ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models.” IEEE. https://doi.org/10.1109/ACCESS.2020.3047259.

These automated modules only require the user to provide the original floating point model, and handle the end-to-end optimization pipeline including any re-training to regain accuracy. Other frameworks like PyTorch also offer increasing automation support, for example through torch.quantization.quantize_dynamic. Automated optimization makes efficient ML accessible to practitioners without optimization expertise.
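The core mechanics these tools automate can be sketched as a simple affine quantize-dequantize round trip: compute a scale and zero point from the observed value range, map floats to 8-bit integers, and measure the round-trip error. This is a minimal pure-Python illustration of post-training quantization, not the actual code path of torch.quantization or the TensorFlow toolkit.

```python
def affine_quantize(xs, num_bits=8):
    """Map floats onto unsigned num_bits-bit integers via a scale and zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    quantized = [min(qmax, max(qmin, round(x / scale) + zero_point)) for x in xs]
    return quantized, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (qi - zero_point) for qi in q]

activations = [-1.0, -0.2, 0.0, 0.5, 1.5]
q, scale, zp = affine_quantize(activations)
errors = [abs(a - r) for a, r in zip(activations, dequantize(q, scale, zp))]
# Round-trip error is bounded by half a quantization step (scale / 2).
```

Automated tools wrap exactly this kind of transformation around every weight tensor and activation, plus the calibration and re-training needed to keep accuracy.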


9.5.3 Hardware Optimization Libraries


Hardware libraries like TensorRT and TensorFlow XLA allow models to be highly optimized for target hardware through techniques that we discussed earlier.


Quantization: For example, TensorRT and TensorFlow Lite both support quantization of models during conversion to their format. This provides speedups on mobile SoCs with INT8/INT4 support.


Kernel Optimization: For instance, TensorRT does auto-tuning to optimize CUDA kernels based on the GPU architecture for each layer in the model graph. This extracts maximum throughput.


Operator Fusion: TensorFlow XLA performs aggressive fusion to create optimized binaries for TPUs. On mobile, frameworks like NCNN also support fused operators.

Hardware-Specific Code: Libraries are used to generate optimized binary code specialized for the target hardware. For example, TensorRT uses NVIDIA CUDA/cuDNN libraries which are hand-tuned for each GPU architecture. This hardware-specific coding is key for performance. On TinyML devices, this can mean assembly code optimized for a Cortex-M4 CPU, for example. Vendors provide CMSIS-NN and other libraries.
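The benefit of operator fusion can be illustrated with a toy constant-folding example: two elementwise affine operations, which would take two passes over memory, collapse into a single operation with pre-folded constants. This is a hedged sketch of the general idea, not how XLA or TensorRT implements fusion.

```python
def scale_shift(xs, a, b):
    """One elementwise op: y = a * x + b (one full pass over the data)."""
    return [a * x + b for x in xs]

def run_unfused(xs):
    # Two separate ops, two passes over memory.
    return scale_shift(scale_shift(xs, 2.0, 1.0), 0.5, -3.0)

def run_fused(xs):
    # Constants folded at "compile time": 0.5 * (2x + 1) - 3 == 1.0 * x - 2.5.
    a, b = 0.5 * 2.0, 0.5 * 1.0 - 3.0
    return [a * x + b for x in xs]

xs = [0.0, 1.0, -2.5]
assert run_unfused(xs) == run_fused(xs)  # same result, half the memory traffic
```

Real fusion passes do the analogous transformation on graph nodes (e.g., Conv + BiasAdd + Activation) so intermediate tensors never hit main memory.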


Data Layout Optimizations: We can efficiently leverage the memory hierarchy of hardware, such as caches and registers, through techniques like tensor/weight rearrangement, tiling, and reuse. For example, TensorFlow XLA optimizes buffer layouts to maximize TPU utilization. This helps any memory-constrained system.
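Tiling can be illustrated with a blocked matrix multiply, where the loops are restructured over sub-blocks so the working set fits in fast memory. The pure-Python sketch below shows the loop structure only; real libraries implement this in tuned native kernels.

```python
def matmul_tiled(A, B, tile=32):
    """Blocked matrix multiply: iterate over tile-sized sub-blocks so each
    block of A, B, and C can stay resident in cache on real hardware."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # Inner loops touch only one tile of each operand.
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        acc = C[i][j]
                        for kk in range(k0, min(k0 + tile, k)):
                            acc += A[i][kk] * B[kk][j]
                        C[i][j] = acc
    return C

C = matmul_tiled([[1, 2], [3, 4]], [[5, 6], [7, 8]], tile=1)
```

The tile size is exactly the kind of parameter hardware libraries auto-tune per device, since the best value depends on cache sizes.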


Profiling-based Tuning: We can use profiling tools to identify bottlenecks and, for example, adjust kernel fusion levels based on latency profiling. On mobile SoCs, vendors like Qualcomm provide profilers in SNPE to find optimization opportunities in CNNs. This data-driven approach is important for performance.
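The data-driven idea behind profiling-based tuning can be sketched as timing candidate implementations and keeping the fastest, which is roughly what kernel auto-tuners do at a much larger scale. The two candidate functions below are illustrative stand-ins for kernel variants.

```python
import time

def measure(fn, arg, repeats=3):
    """Best-of-N wall-clock latency for one candidate implementation."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(arg)
        best = min(best, time.perf_counter() - start)
    return best

def sum_loop(xs):          # candidate kernel 1: explicit Python loop
    total = 0.0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):       # candidate kernel 2: C-implemented builtin
    return sum(xs)

data = list(range(100_000))
timings = {fn.__name__: measure(fn, data) for fn in (sum_loop, sum_builtin)}
fastest = min(timings, key=timings.get)   # pick the winner, as a tuner would
```

Real auto-tuners (such as TensorRT's kernel selection) apply this pattern per layer and per target device, caching the winning configuration.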


By integrating framework models with these hardware libraries through conversion and execution pipelines, ML developers can achieve significant speedups and efficiency gains from low-level optimizations tailored to the target hardware. The tight integration between software and hardware is key to enabling performant deployment of ML applications, especially on mobile and TinyML devices.


9.5.4 Visualizing Optimizations


Implementing model optimization techniques without visibility into their effects on the model can be challenging. Dedicated tooling and visualization can provide critical insight into model changes and help track the optimization process. Let's consider the optimizations we discussed earlier, such as pruning for sparsity and quantization.

Quantization

Converting models to lower numeric precision through quantization introduces errors that can impact model accuracy if not properly tracked and addressed. Visualizing quantization error distributions provides valuable insight into the effects of reduced-precision numerics applied to different parts of a model. For this, histograms of the quantization errors for weights and activations can be generated. These histograms can reveal the shape of the error distribution - whether it resembles a Gaussian distribution or contains significant outliers and spikes. Figure 9.40 shows the distributions of different quantization methods. Large outliers may indicate issues with how particular layers handle quantization. Comparing the histograms across layers highlights any problem areas that stand out with abnormally high errors.
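A minimal sketch of generating such an error histogram: quantize a weight tensor symmetrically, compute the per-weight errors, and bin them. The bin count and the symmetric quantization scheme here are illustrative choices, not a specific tool's defaults.

```python
from collections import Counter

def quantization_errors(weights, num_bits=8):
    """Per-weight error from symmetric per-tensor quantization."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [w - scale * max(-qmax, min(qmax, round(w / scale))) for w in weights]

def histogram(errors, num_bins=8):
    """Bin the errors; outliers and non-Gaussian shapes show up immediately."""
    lo, hi = min(errors), max(errors)
    width = (hi - lo) / num_bins or 1.0
    return Counter(min(num_bins - 1, int((e - lo) / width)) for e in errors)

layer_weights = [0.01 * i for i in range(-50, 51)]
hist = histogram(quantization_errors(layer_weights))
```

Plotting one such histogram per layer, as visualization tools do, makes layers with abnormally large errors easy to spot.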

Figure 9.40: Quantization errors. Credit: Kuzmin et al. (2022).

Kuzmin, Andrey, Mart Van Baalen, Yuwei Ren, Markus Nagel, Jorn Peters, and Tijmen Blankevoort. 2022. “FP8 Quantization: The Power of the Exponent.” https://arxiv.org/abs/2208.09225.

Activation visualizations are also important for detecting overflow issues. By color mapping the activations before and after quantization, any values pushed outside the intended ranges become visible. This reveals saturation and truncation issues that could skew the information flowing through the model. Detecting these errors allows recalibrating activations to prevent loss of information (Mandal 2022). Figure 9.41 shows a color mapping of the AlexNet convolutional kernels.

Figure 9.41: Color mapping of activations. Credit: Krizhevsky, Sutskever, and Hinton (2012).

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25, edited by F. Pereira, C. J. Burges, L. Bottou, and K. Q. Weinberger. https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf.

Other techniques, such as tracking the overall mean-square quantization error at each step of quantization-aware training, identify fluctuations and divergences. Sudden spikes in the tracking plot may indicate points where quantization is disrupting model training. Monitoring this metric builds intuition about model behavior under quantization. Together, these techniques turn quantization into a transparent process. The empirical insights enable practitioners to properly assess quantization effects and pinpoint areas of the model architecture or training process to recalibrate based on observed quantization issues. This helps achieve numerically stable and accurate quantized models.


Providing this data enables practitioners to properly assess the impact of quantization and identify potential problem areas of the model to recalibrate or redesign to be more quantization friendly. This empirical analysis builds intuition on achieving optimal quantization.


Visualization tools can provide insights that help practitioners better understand the effects of optimizations on their models. This visibility enables correcting issues early, before accuracy or performance is significantly impacted. It also aids in applying optimizations more effectively to specific models. These optimization analytics help build intuition when transitioning models to more efficient representations.


9.5.5 Model Conversion and Deployment


Once models have been successfully optimized in frameworks like TensorFlow and PyTorch, specialized model conversion and deployment platforms are needed to bridge the gap to running them on target devices.


TensorFlow Lite - TensorFlow’s platform to convert models to a lightweight format optimized for mobile, embedded and edge devices. Supports optimizations like quantization, kernel fusion, and stripping away unused ops. Models can be executed using optimized TensorFlow Lite kernels on device hardware. Critical for mobile and TinyML deployment.
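The "stripping away unused ops" step can be sketched as a reverse-reachability walk from the model outputs: any op that no output depends on is dropped from the deployed graph. The toy graph and op names below are hypothetical, and this is only a conceptual stand-in for what converters like TensorFlow Lite do.

```python
def strip_unused_ops(graph, outputs):
    """Keep only ops reachable backwards from the outputs; drop the rest."""
    keep, stack = set(), list(outputs)
    while stack:
        op = stack.pop()
        if op in keep or op not in graph:
            continue  # already visited, or a raw input rather than an op
        keep.add(op)
        stack.extend(graph[op])
    return {op: deps for op, deps in graph.items() if op in keep}

# Hypothetical op -> consumed-inputs map; 'dropout_debug' feeds no output.
graph = {
    "conv1": ["input"],
    "relu1": ["conv1"],
    "softmax": ["relu1"],
    "dropout_debug": ["conv1"],
}
inference_graph = strip_unused_ops(graph, outputs=["softmax"])
```

Dropping training-only and debug branches like this shrinks the serialized model and the runtime's op coverage requirements.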


ONNX Runtime - Performs model conversion and inference for models in the open ONNX model format. Provides optimized kernels, supports hardware accelerators like GPUs, and enables cross-platform deployment from cloud to edge. Allows framework-agnostic deployment. Figure 9.42 is an ONNX interoperability map, including the major popular frameworks.

Figure 9.42: Interoperability of ONNX. Credit: TowardsDataScience.

PyTorch Mobile - Enables PyTorch models to be run on iOS and Android by converting to mobile-optimized representations. Provides efficient mobile implementations of ops like convolution and special functions optimized for mobile hardware.


These platforms integrate with hardware drivers, operating systems, and accelerator libraries on devices to execute models efficiently using hardware optimization. They also offload operations to dedicated ML accelerators where present. The availability of these proven, robust deployment platforms bridges the gap between optimizing models in frameworks and actual deployment to billions of devices. They allow users to focus on model development rather than building custom mobile runtimes. Continued innovation to support new hardware and optimizations in these platforms is key to widespread ML optimizations.


By providing these optimized deployment pipelines, the entire workflow from training to device deployment can leverage model optimizations to deliver performant ML applications. This end-to-end software infrastructure has helped drive the adoption of on-device ML.


9.6 Conclusion


In this chapter we discussed model optimization across the software-hardware span. We dove deep into efficient model representation, covering the nuances of structured and unstructured pruning and other model compression techniques such as knowledge distillation and matrix and tensor decomposition. We also briefly explored edge-specific model design at the parameter and model-architecture level, including topics like edge-specific models and hardware-aware NAS.


We then explored efficient numerics representations, covering the basics of numerics, numeric encodings and storage, the benefits of efficient numerics, and the tradeoffs of numeric representation with respect to memory usage, computational complexity, and hardware compatibility. We finished by homing in on an efficient-numerics staple: quantization, examining its history, calibration, techniques, and interaction with pruning.


Finally, we looked at how we can make optimizations specific to the hardware we have. We explored how to find model architectures tailored to the hardware, make optimizations in the kernel to better handle the model, and use frameworks built to make the most of the hardware. We also looked at how we can go the other way around and build hardware around our specific software, and we discussed splitting networks to run on the multiple processors available on an edge device.


By understanding the full picture of the degrees of freedom within model optimization both away and close to the hardware and the tradeoffs to consider when implementing these methods, practitioners can develop a more thoughtful pipeline for compressing their workloads onto edge devices.


Resources


Here is a curated list of resources to support both students and instructors in their learning and teaching journey. We are continuously working on expanding this collection and will be adding new exercises in the near future.

Slides

These slides serve as a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we also offer a series of hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.


14  Security & Privacy


Resources: Slides, Labs, Exercises

DALL·E 3 Prompt: An illustration on privacy and security in machine learning systems. The image shows a digital landscape with a network of interconnected nodes and data streams, symbolizing machine learning algorithms. In the foreground, there’s a large lock superimposed over the network, representing privacy and security. The lock is semi-transparent, allowing the underlying network to be partially visible. The background features binary code and digital encryption symbols, emphasizing the theme of cybersecurity. The color scheme is a mix of blues, greens, and grays, suggesting a high-tech, digital environment.

Security and privacy are critical when developing real-world machine learning systems. As machine learning is increasingly applied to sensitive domains like healthcare, finance, and personal data, protecting confidentiality and preventing misuse of data and models becomes imperative. Anyone aiming to build robust and responsible ML systems must grasp potential security and privacy risks such as data leaks, model theft, adversarial attacks, bias, and unintended access to private information. We also need to understand best practices for mitigating these risks. Most importantly, security and privacy cannot be an afterthought and must be proactively addressed throughout the ML system development lifecycle - from data collection and labeling to model training, evaluation, and deployment. Embedding security and privacy considerations into each stage of building, deploying, and managing machine learning systems is essential for safely unlocking the benefits of AI.

Learning Objectives
  • Understand key ML privacy and security risks, such as data leaks, model theft, adversarial attacks, bias, and unintended data access.
  • Learn from historical hardware and embedded systems security incidents.
  • Identify threats to ML models like data poisoning, model extraction, membership inference, and adversarial examples.
  • Recognize hardware security threats to embedded ML spanning hardware bugs, physical attacks, side channels, counterfeit components, etc.
  • Explore embedded ML defenses, such as trusted execution environments, secure boot, physical unclonable functions, and hardware security modules.
  • Discuss privacy issues handling sensitive user data with embedded ML, including regulations.
  • Learn privacy-preserving ML techniques like differential privacy, federated learning, homomorphic encryption, and synthetic data generation.
  • Understand tradeoffs between privacy, accuracy, efficiency, threat models, and trust assumptions.
  • Recognize the need for a cross-layer perspective spanning electrical, firmware, software, and physical design when securing embedded ML devices.

14.1 Introduction


Machine learning has evolved substantially from its academic origins, where privacy was not a primary concern. As ML migrated into commercial and consumer applications, the data became more sensitive - encompassing personal information like communications, purchases, and health data. This explosion of data availability fueled rapid advancements in ML capabilities. However, it also exposed new privacy risks, as demonstrated by incidents like the AOL data leak in 2006 and the Cambridge Analytica scandal.


These events highlighted the growing need to address privacy in ML systems. In this chapter, we explore privacy and security considerations together, as they are inherently linked in ML:

  • Privacy refers to controlling access to sensitive user data, such as financial information or biometric data collected by an ML application.
  • Security protects ML systems and data from hacking, theft, and misuse.

For example, an ML-powered home security camera must secure video feeds against unauthorized access and provide privacy protections to ensure only intended users can view the footage. A breach of either security or privacy could expose private user moments.


Embedded ML systems like smart assistants and wearables are ubiquitous and process intimate user data. However, their computational constraints often prevent heavy security protocols. Designers must balance performance needs with rigorous security and privacy standards tailored to embedded hardware limitations.


This chapter provides essential knowledge for addressing the complex privacy and security landscape of embedded ML. We will explore vulnerabilities and cover various techniques that enhance privacy and security within embedded systems’ resource constraints.


We hope that by building a holistic understanding of risks and safeguards, you will gain the principles to develop secure, ethical, embedded ML applications.


14.2 Terminology


In this chapter, we will discuss security and privacy together, so there are key terms that we need to be clear about.

  • Privacy: Consider an ML-powered home security camera that identifies and records potential threats. This camera records identifiable information, including faces, of individuals approaching and potentially entering the home. Privacy concerns surround who can access this data.
  • Security: For the same camera, security ensures that hackers cannot access the video feeds and recognition models.
  • Threat: A threat could be a hacker trying to access live feeds or stored videos, or feeding false inputs to trick the system.
  • Vulnerability: A common vulnerability might be a poorly secured network through which the camera connects to the internet, which could be exploited to access the data.

14.3 Historical Precedents


While the specifics of machine learning hardware security can be distinct, the embedded systems field has a history of security incidents that provide critical lessons for all connected systems, including those using ML. Here are detailed explorations of past breaches:


14.3.1 Stuxnet


In 2010, something unexpected was found on a computer in Iran - a very complicated computer virus that experts had never seen before. Stuxnet was a malicious computer worm that targeted supervisory control and data acquisition (SCADA) systems and was designed to damage Iran's nuclear program (Farwell and Rohozinski 2011). Stuxnet used four "zero-day exploits" - attacks that take advantage of secret weaknesses in software that no one knows about yet. This made Stuxnet very sneaky and hard to detect.

Farwell, James P., and Rafal Rohozinski. 2011. “Stuxnet and the Future of Cyber War.” Survival 53 (1): 23–40. https://doi.org/10.1080/00396338.2011.555586.

But Stuxnet wasn’t designed to steal information or spy on people. Its goal was physical destruction - to sabotage centrifuges at Iran’s Natanz nuclear plant! So how did the virus get onto computers at the Natanz plant, which was supposed to be disconnected from the outside world for security? Experts think someone inserted a USB stick containing Stuxnet into the internal Natanz network. This allowed the virus to “jump” from an outside system onto the isolated nuclear control systems and wreak havoc.


Stuxnet was incredibly advanced malware built by national governments to cross from the digital realm into real-world infrastructure. In a way never seen before, it specifically targeted important industrial machines - a domain where embedded machine learning is now highly applicable. The virus provided a wake-up call about how sophisticated cyberattacks could physically destroy equipment and facilities.


This breach was significant due to its sophistication; Stuxnet specifically targeted programmable logic controllers (PLCs) used to automate electromechanical processes such as the speed of centrifuges for uranium enrichment. The worm exploited vulnerabilities in the Windows operating system to gain access to the Siemens Step7 software controlling the PLCs. Despite not being a direct attack on ML systems, Stuxnet is relevant for all embedded systems as it showcases the potential for state-level actors to design attacks that bridge the cyber and physical worlds with devastating effects.


14.3.2 Jeep Cherokee Hack


The Jeep Cherokee hack was a groundbreaking event demonstrating the risks inherent in increasingly connected automobiles (Miller 2019). In a controlled demonstration, security researchers remotely exploited a vulnerability in the Uconnect entertainment system, which had a cellular connection to the internet. They were able to control the vehicle’s engine, transmission, and brakes, alarming the automotive industry into recognizing the severe safety implications of cyber vulnerabilities in vehicles.

Miller, Charlie. 2019. “Lessons Learned from Hacking a Car.” IEEE Design & Test 36 (6): 7–9. https://doi.org/10.1109/mdat.2018.2863106.

The video below is a short documentary of the attack.


While this wasn’t an attack on an ML system per se, the reliance of modern vehicles on embedded systems for safety-critical functions has significant parallels to the deployment of ML in embedded systems, underscoring the need for robust security at the hardware level.


14.3.3 Mirai Botnet


The Mirai botnet involved the infection of networked devices such as digital cameras and DVR players (Antonakakis et al. 2017). In October 2016, the botnet was used to conduct one of the largest DDoS attacks, disrupting internet access across the United States. The attack was possible because many devices used default usernames and passwords, which were easily exploited by the Mirai malware to control the devices.

Antonakakis, Manos, Tim April, Michael Bailey, Matt Bernhard, Elie Bursztein, Jaime Cochran, Zakir Durumeric, et al. 2017. “Understanding the Mirai Botnet.” In 26th USENIX Security Symposium (USENIX Security 17), 1093–1110.

The following video presentation explains how the Mirai Botnet works.


Although the devices were not ML-based, the incident is a stark reminder of what can happen when numerous embedded devices with poor security controls are networked, which is becoming more common with the growth of ML-based IoT devices.


14.3.4 Implications


These historical breaches demonstrate the cascading effects of hardware vulnerabilities in embedded systems. Each incident offers a precedent for understanding the risks and designing better security protocols. For instance, the Mirai botnet highlights the immense destructive potential when threat actors can gain control over networked devices with weak security, a situation becoming increasingly common with ML systems. Many current ML devices function as "edge" devices meant to collect and process data locally before sending it to the cloud. Much like the cameras and DVRs compromised by Mirai, edge ML devices often rely on embedded hardware like ARM processors and run a lightweight OS like Linux. Securing the device credentials is critical.


Similarly, the Jeep Cherokee hack was a watershed moment for the automotive industry. It exposed serious vulnerabilities in the growing network-connected vehicle systems and their lack of isolation from core drive systems like brakes and steering. In response, auto manufacturers invested heavily in new cybersecurity measures, though gaps likely remain.


Chrysler issued a recall to patch the vulnerable Uconnect software that allowed the remote exploit. This included adding network-level protections to prevent unauthorized external access and compartmentalizing in-vehicle systems to limit lateral movement. Additional layers of encryption were added for commands sent over the CAN bus within vehicles.


The incident also spurred the creation of new cybersecurity standards and best practices. The Auto-ISAC was established for automakers to share threat intelligence, and the NHTSA issued guidance on managing cybersecurity risks. New testing and audit procedures were developed to proactively assess vulnerabilities. The aftereffects continue to drive change in the automotive industry as cars become increasingly software-defined.


Unfortunately, manufacturers often overlook security in the rush to develop new ML edge devices - using default passwords, unencrypted communications, unsecured firmware updates, etc. Any such vulnerabilities could allow attackers to gain access and control devices at scale by infecting them with malware. With a botnet of compromised ML devices, attackers could leverage their aggregated computational power for DDoS attacks on critical infrastructure.


While these events didn’t directly involve machine learning hardware, the principles of the attacks carry over to ML systems, which often involve similar embedded devices and network architectures. As ML hardware often operates in continuous interaction with the physical world, securing it against such breaches is paramount. The evolution of security measures in response to these incidents provides valuable insights into protecting current and future ML systems from analogous vulnerabilities.


The distributed nature of ML edge devices means threats can propagate quickly across networks. And if devices are being used for mission-critical purposes like medical devices, industrial controls, or self-driving vehicles, the potential physical damage from weaponized ML bots could be severe. Just like Mirai demonstrated the dangerous potential of poorly secured IoT devices, the litmus test for ML hardware security will be how vulnerable or resilient these devices are to worm-like attacks. The stakes are raised as ML spreads to safety-critical domains, putting the onus on manufacturers and system operators to incorporate the lessons from Mirai.


The lesson is the importance of designing for security from the outset and having layered defenses. For ML systems, the Jeep case highlights potential blindspots around externally facing software interfaces and isolation between subsystems. Manufacturers of ML devices and platforms should assume a similar proactive and comprehensive approach to security rather than leaving it as an afterthought. Rapid response and dissemination of best practices will be key as threats evolve.


14.4 Security Threats to ML Models


ML models face security risks that can undermine their integrity, performance, and trustworthiness if not properly addressed. While there are many threats, the key ones include: model theft, where adversaries steal the proprietary model parameters and the sensitive data they contain; data poisoning, which compromises models by tampering with their training data; and adversarial attacks, which deceive the model into making incorrect or unwanted predictions.


14.4.1 Model Theft


Model theft occurs when an attacker gains unauthorized access to a deployed ML model. The concern here is the theft of the model’s structure and trained parameters and the proprietary data it contains (Ateniese et al. 2015). Model theft is a real and growing threat, as demonstrated by cases like ex-Google engineer Anthony Levandowski, who allegedly stole Waymo’s self-driving car designs and started a competing company. Beyond economic impacts, model theft can seriously undermine privacy and enable further attacks.

Ateniese, Giuseppe, Luigi V. Mancini, Angelo Spognardi, Antonio Villani, Domenico Vitali, and Giovanni Felici. 2015. “Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers.” Int. J. Secur. Netw. 10 (3): 137. https://doi.org/10.1504/ijsn.2015.071829.

For instance, consider an ML model developed for personalized recommendations in an e-commerce application. If a competitor steals this model, they gain insights into business analytics, customer preferences, and even trade secrets embedded within the model’s data. Attackers could leverage stolen models to craft more effective inputs for model inversion attacks, deducing private details about the model’s training data. A cloned e-commerce recommendation model could reveal customer purchase behaviors and demographics.


To understand model inversion attacks, consider a facial recognition system used to grant access to secured facilities. The system is trained on a dataset of employee photos. An attacker could infer features of the original dataset by observing the model’s output to various inputs. For example, suppose the model’s confidence level for a particular face is significantly higher for a given set of features. In that case, an attacker might deduce that someone with those features is likely in the training dataset.


The methodology of model inversion typically involves the following steps:

  • Accessing Model Outputs: The attacker queries the ML model with input data and observes the outputs. This is often done through a legitimate interface, like a public API.

  • Analyzing Confidence Scores: For each input, the model provides a confidence score that reflects how similar the input is to the training data.

  • Reverse-Engineering: By analyzing the confidence scores or output probabilities, attackers can use optimization techniques to reconstruct what they believe is close to the original input data.
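The query-and-refine loop described above can be sketched in a few lines. The sketch below is purely illustrative, not an attack on any real system: the “target model” is a stand-in softmax classifier, and the attacker uses gradient-free hill climbing on the returned confidence scores, seeing only what a prediction API would expose.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in target model: a small softmax classifier with fixed weights.
# A real attacker never sees W, only the confidence scores returned below.
W = rng.normal(size=(4, 3))  # 4 input features, 3 classes

def query_model(x):
    """Return confidence scores, as a prediction API might."""
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def invert(target_class, steps=2000, step_size=0.1):
    """Gradient-free hill climbing: keep any random perturbation of the
    candidate input that raises the target class's confidence score."""
    x = rng.normal(size=4)
    best = query_model(x)[target_class]
    for _ in range(steps):
        candidate = x + step_size * rng.normal(size=4)
        conf = query_model(candidate)[target_class]
        if conf > best:
            x, best = candidate, conf
    return x, best

# Reconstruct an input the model scores highly as class 0.
x_rec, conf = invert(target_class=0)
```

Defenses such as rounding or truncating returned confidence scores directly blunt this loop, which is one reason many production APIs return only top-1 labels.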

One historical example of such a vulnerability being explored was the research on inversion attacks against the U.S. Netflix Prize dataset, where researchers demonstrated that it was possible to learn about an individual’s movie preferences, which could lead to privacy breaches (Narayanan and Shmatikov 2006).

Narayanan, Arvind, and Vitaly Shmatikov. 2006. “How to Break Anonymity of the Netflix Prize Dataset.” arXiv Preprint cs/0610105.

Model theft can lead to economic losses, undermine competitive advantage, and violate user privacy. There’s also the risk of model inversion attacks, where an adversary could input various data into the stolen model to infer sensitive information about the training data.


Based on the desired asset, model theft attacks can be divided into two categories: exact model properties and approximate model behavior.

Stealing Exact Model Properties

In these attacks, the objective is to extract information about concrete metrics, such as a network’s learned parameters, fine-tuned hyperparameters, and the model’s internal layer architecture (Oliynyk, Mayer, and Rauber 2023).

  • Learned Parameters: Adversaries aim to steal a model’s learned knowledge (weights and biases) in order to replicate it. Parameter theft is generally used in conjunction with other attacks, such as architecture theft, which lacks parameter knowledge.

  • Fine-Tuned Hyperparameters: Training is costly, and finding the right configuration of hyperparameters (such as the learning rate and regularization) can be a very long and expensive process. Stealing an optimized model’s hyperparameters therefore allows an adversary to replicate the model without the high training costs.

  • Model Architecture: This attack concerns the specific design and structure of the model, such as its layers, neurons, and connectivity patterns. Beyond reducing associated training costs, this type of theft is especially dangerous because it concerns core I.P., which can affect a company’s competitive edge. Architecture theft can be achieved by exploiting side-channel attacks (discussed later).
Stealing Approximate Model Behavior

Instead of focusing on extracting exact numerical values of the model’s parameters, these attacks aim to reproduce the model’s behavior (predictions and effectiveness), decision-making, and high-level characteristics (Oliynyk, Mayer, and Rauber 2023). These techniques aim to achieve similar outcomes while allowing for internal deviations in parameters and architecture. Types of approximate behavior theft include achieving the same level of effectiveness and obtaining prediction consistency.

Oliynyk, Daryna, Rudolf Mayer, and Andreas Rauber. 2023. “I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences.” ACM Comput. Surv. 55 (14s): 1–41. https://doi.org/10.1145/3595292.
  • Level of Effectiveness: Attackers aim to replicate the model’s decision-making capabilities rather than focus on the precise parameter values. This is done by understanding the overall behavior of the model. Consider a scenario where an attacker wants to copy the behavior of an image classification model. By analyzing the model’s decision boundaries, the attacker tunes their model to reach an effectiveness comparable to the original model. This could entail analyzing 1) the confusion matrix to understand the balance of prediction metrics (true positive, true negative, false positive, false negative), and 2) other performance metrics, such as F1 score and precision, to ensure that the two models are comparable.

  • Prediction Consistency: The attacker tries to align their model’s prediction patterns with the target model’s. This involves matching prediction outputs (both positive and negative) on the same set of inputs and ensuring distributional consistency across different classes. For instance, consider a natural language processing (NLP) model that generates sentiment analysis for movie reviews (labeling reviews as positive, neutral, or negative). The attacker will try to fine-tune their model to match the predictions of the original model on the same set of movie reviews. This includes ensuring that the model makes the same mistakes (mispredictions) that the targeted model makes.
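Prediction consistency can be quantified as a simple agreement rate between a surrogate and the target on a shared set of probe inputs. A minimal sketch, with both models reduced to hypothetical linear classifiers (the surrogate modeled as a noisy copy of the target):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target and surrogate: linear classifiers over 5 features
# and 3 classes; the surrogate's weights are a noisy copy of the target's.
W_target = rng.normal(size=(5, 3))
W_surrogate = W_target + 0.05 * rng.normal(size=(5, 3))

def predict(W, X):
    """Label = argmax over class scores."""
    return np.argmax(X @ W, axis=1)

# Agreement rate: fraction of probe inputs that receive the same label.
X_probe = rng.normal(size=(1000, 5))
agreement = np.mean(predict(W_target, X_probe) == predict(W_surrogate, X_probe))
```

An attacker optimizing a surrogate would also compare agreement on mispredicted inputs, since matching the target’s mistakes is part of behavioral consistency.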

Case Study


In 2018, Tesla filed a lawsuit against self-driving car startup Zoox, alleging former employees stole confidential data and trade secrets related to Tesla’s autonomous driving assistance system.


Tesla claimed that several of its former employees took over 10 GB of proprietary data, including ML models and source code, before joining Zoox. This allegedly included one of Tesla’s crucial image recognition models for identifying objects.


The theft of this sensitive proprietary model could help Zoox shortcut years of ML development and duplicate Tesla’s capabilities. Tesla argued this theft of I.P. caused major financial and competitive harm. There were also concerns it could allow model inversion attacks to infer private details about Tesla’s testing data.


The Zoox employees denied stealing any proprietary information. However, the case highlights the significant risks of model theft—enabling the cloning of commercial models, causing economic impacts, and opening the door for further data privacy violations.


14.4.2 Data Poisoning


Data poisoning is an attack where the training data is tampered with, leading to a compromised model (Biggio, Nelson, and Laskov 2012). Attackers can modify existing training examples, insert new malicious data points, or influence the data collection process. The poisoned data is labeled in such a way as to skew the model’s learned behavior. This can be particularly damaging in applications where ML models make automated decisions based on learned patterns. Beyond training sets, poisoning test and validation data can allow adversaries to artificially boost reported model performance.

Biggio, Battista, Blaine Nelson, and Pavel Laskov. 2012. “Poisoning Attacks Against Support Vector Machines.” In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress. http://icml.cc/2012/papers/880.pdf.

The process usually involves the following steps:

  • Injection: The attacker adds incorrect or misleading examples into the training set. These examples are often designed to look normal to cursory inspection but have been carefully crafted to disrupt the learning process.

  • Training: The ML model trains on this manipulated dataset and develops skewed understandings of the data patterns.

  • Deployment: Once the model is deployed, the corrupted training leads to flawed decision-making or predictable vulnerabilities the attacker can exploit.
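The injection, training, and deployment sequence above can be made concrete with a toy experiment. The sketch below is illustrative only: a synthetic two-cluster dataset and a 1-nearest-neighbor classifier stand in for a real model, and flipping the labels of part of one class (a label-flipping attack) measurably degrades accuracy on clean test data.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_data(n):
    """Two well-separated 2-D clusters, n points per class."""
    X = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(5, 1, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(200)
X_test, y_test = make_data(100)

def knn_predict(X_tr, y_tr, X_te):
    """1-nearest-neighbor prediction."""
    d = np.linalg.norm(X_te[:, None, :] - X_tr[None, :, :], axis=2)
    return y_tr[np.argmin(d, axis=1)]

clean_acc = np.mean(knn_predict(X_train, y_train, X_test) == y_test)

# Injection: flip the labels of half of class 0's training examples.
y_poisoned = y_train.copy()
flipped = rng.choice(200, size=100, replace=False)  # indices within class 0
y_poisoned[flipped] = 1

# Training on the tampered labels leads to flawed behavior at deployment:
# test points near flipped training examples now receive the wrong label.
poisoned_acc = np.mean(knn_predict(X_train, y_poisoned, X_test) == y_test)
```

Real poisoning attacks are stealthier than this bulk label flip, but the mechanism, corrupted labels propagating into corrupted decisions, is the same.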

The impacts of data poisoning extend beyond just classification errors or accuracy drops. For instance, if incorrect or malicious data is introduced into a traffic sign recognition system’s training set, the model may learn to misclassify stop signs as yield signs, which can have dangerous real-world consequences, especially in embedded autonomous systems like autonomous vehicles.


Data poisoning can degrade a model’s accuracy, force it to make incorrect predictions or cause it to behave unpredictably. In critical applications like healthcare, such alterations can lead to significant trust and safety issues.


There are four main categories of data poisoning (Oprea, Singhal, and Vassilev 2022):

Oprea, Alina, Anoop Singhal, and Apostol Vassilev. 2022. “Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?” Computer 55 (11): 94–99. https://doi.org/10.1109/mc.2022.3190787.
  • Availability Attacks: These attacks aim to compromise a model’s overall functionality. They cause it to misclassify most testing samples, rendering the model unusable for practical applications. An example is label flipping, where labels of a specific, targeted class are replaced with labels from a different one.

  • Targeted Attacks: In contrast to availability attacks, targeted attacks aim to compromise a small number of the testing samples, so the effect is localized to a limited number of classes, while the model maintains its original level of accuracy on the majority of the classes. The targeted nature of the attack requires the attacker to possess knowledge of the model’s classes, making these attacks more challenging to detect.

  • Backdoor Attacks: In these attacks, an adversary targets specific patterns in the data. The attacker introduces a backdoor (a malicious, hidden trigger or pattern) into the training data, such as manipulating certain features in structured data or manipulating a pattern of pixels at a fixed position. This causes the model to associate the malicious pattern with specific labels. As a result, when the model encounters test samples that contain the malicious pattern, it makes false predictions.

  • Subpopulation Attacks: Attackers selectively choose to compromise a subset of the testing samples while maintaining accuracy on the rest of the samples. You can think of these attacks as a combination of availability and targeted attacks: performing availability attacks (performance degradation) within the scope of a targeted subset. Although subpopulation attacks may seem very similar to targeted attacks, the two have clear differences:

    ◦ Scope: While targeted attacks target a selected set of samples, subpopulation attacks target a general subpopulation with similar feature representations. For example, in a targeted attack, an actor inserts manipulated images of a ‘speed bump’ warning sign (with carefully crafted perturbations or patterns), which causes an autonomous car to fail to recognize such a sign and slow down. On the other hand, manipulating all samples of people with a British accent so that a speech recognition model would misclassify a British person’s speech is an example of a subpopulation attack.

    ◦ Knowledge: While targeted attacks require a high degree of familiarity with the data, subpopulation attacks require less intimate knowledge to be effective.

Case Study 1


In 2017, researchers demonstrated a data poisoning attack against a popular toxicity classification model called Perspective (Hosseini et al. 2017). This ML model detects toxic comments online.

Hosseini, Hossein, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. “Deceiving Google’s Perspective API Built for Detecting Toxic Comments.” ArXiv Preprint abs/1702.08138. https://arxiv.org/abs/1702.08138.

The researchers added synthetically generated toxic comments with slight misspellings and grammatical errors to the model’s training data. This slowly corrupted the model, causing it to misclassify increasing numbers of severely toxic inputs as non-toxic over time.


After retraining on the poisoned data, the model’s false negative rate increased from 1.4% to 27%, allowing extremely toxic comments to bypass detection. The researchers warned that this stealthy data poisoning could enable the spread of hate speech, harassment, and abuse if deployed against real moderation systems.


This case highlights how data poisoning can degrade model accuracy and reliability. For social media platforms, a poisoning attack that impairs toxicity detection could lead to the proliferation of harmful content and distrust of ML moderation systems. The example demonstrates why securing training data integrity and monitoring for poisoning is critical across application domains.


Case Study 2


Interestingly enough, data poisoning attacks are not always malicious (Shan et al. 2023). Nightshade, a tool developed by a team led by Professor Ben Zhao at the University of Chicago, utilizes data poisoning to help artists protect their art against scraping and copyright violations by generative A.I. models. Artists can use the tool to modify their images subtly before uploading them online.


While these changes are indiscernible to the human eye, they can significantly disrupt the performance of generative A.I. models when incorporated into the training data. Generative models can be manipulated to generate hallucinations and weird images. For example, with only 300 poisoned images, the University of Chicago researchers could trick the latest Stable Diffusion model into generating images of dogs that look like cats or images of cows when prompted for cars.


As the number of poisoned images on the internet increases, the performance of the models that use scraped data will deteriorate exponentially. First, the poisoned data is hard to detect and requires manual elimination. Second, the “poison” spreads quickly to other labels because generative models rely on connections between words and concepts as they generate images. So a poisoned image of a “car” could spread into generated images associated with words like “truck,” “train,” “bus,” etc.


On the other hand, this tool can be used maliciously and can affect legitimate applications of the generative models. This shows the very challenging and novel nature of machine learning attacks.


Figure fig-poisoning demonstrates the effects of different levels of data poisoning (50 samples, 100 samples, and 300 samples of poisoned images) on generating images in different categories. Notice how the images start deforming and deviating from the desired category. For example, after 300 poison samples, a car prompt generates a cow.

Figure 14.1: Data poisoning. Credit: Shan et al. (2023).

Shan, Shawn, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y. Zhao. 2023. “Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models.” ArXiv Preprint abs/2310.13828. https://arxiv.org/abs/2310.13828.

14.4.3 Adversarial Attacks


Adversarial attacks aim to trick models into making incorrect predictions by providing them with specially crafted, deceptive inputs (called adversarial examples) (Parrish et al. 2023). By adding slight perturbations to input data, adversaries can “hack” a model’s pattern recognition and deceive it. These are sophisticated techniques where slight, often imperceptible alterations to input data can trick an ML model into making a wrong prediction.

Parrish, Alicia, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, et al. 2023. “Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models.” ArXiv Preprint abs/2305.14384. https://arxiv.org/abs/2305.14384.

Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. “Zero-Shot Text-to-Image Generation.” In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, edited by Marina Meila and Tong Zhang, 139:8821–31. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v139/ramesh21a.html.

Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. https://doi.org/10.1109/cvpr52688.2022.01042.

One can generate prompts that lead to unsafe images in text-to-image models like DALLE (Ramesh et al. 2021) or Stable Diffusion (Rombach et al. 2022). For example, by altering the pixel values of an image, attackers can deceive a facial recognition system into identifying a face as a different person.


Adversarial attacks exploit the way ML models learn and make decisions during inference. These models work on the principle of recognizing patterns in data. An adversary crafts special inputs with perturbations to mislead the model’s pattern recognition—essentially ‘hacking’ the model’s perceptions.
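This “hacking” of pattern recognition can be made concrete in a toy setting where the attacker knows the model’s weights. The sketch below is illustrative only: it applies a fast-gradient-sign-style perturbation to a hypothetical logistic-regression “model,” flipping a confident prediction with a small per-feature step.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical target: a logistic-regression model whose weights the
# attacker knows (nothing here corresponds to a real deployed system).
w = rng.normal(size=16)

def predict_proba(x):
    """Model's probability of class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A clean input the model scores confidently as class 1.
x = 0.5 * np.sign(w)
clean_score = predict_proba(x)

# FGSM-style step: perturb each feature against the sign of the score's
# gradient with respect to the input (which is sign(w) for this model).
eps = 0.6
x_adv = x - eps * np.sign(w)
adv_score = predict_proba(x_adv)
```

Each feature moves by only eps, yet the per-feature shifts accumulate across the input and push the score below the decision threshold, which is the essence of a gradient-sign perturbation.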


Adversarial attacks fall under different scenarios:

  • Whitebox Attacks: The attacker has full knowledge of the target model’s internal workings, including the training data, parameters, and architecture. This comprehensive access creates favorable conditions for exploiting the model’s vulnerabilities. The attacker can use specific and subtle weaknesses to craft effective adversarial examples.

  • Blackbox Attacks: In contrast to whitebox attacks, in blackbox attacks, the attacker has little to no knowledge of the target model. To carry out the attack, the adversarial actor must carefully observe the model’s output behavior.

  • Greybox Attacks: These fall between blackbox and whitebox attacks. The attacker has only partial knowledge about the target model’s internal design. For example, the attacker could know the training data but not the architecture or parameters. In the real world, practical attacks typically fall under the blackbox or greybox categories.
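In the blackbox setting, an attacker can sometimes succeed with nothing but queries. A minimal sketch, assuming only hard-label access to a hypothetical linear classifier, uses random search for a bounded perturbation that flips the returned label:

```python
import numpy as np

rng = np.random.default_rng(5)

# The attacker may only call query(); the weights exist here solely to
# make the sketch self-contained and runnable.
_w = rng.normal(size=16)

def query(x):
    """Hard-label output only, as a deployed service might expose."""
    return int(x @ _w > 0)

x = 0.1 * np.sign(_w)          # a clean input the model labels as 1
original_label = query(x)

def black_box_attack(x, eps=0.5, tries=5000):
    """Random search: try bounded random perturbations until the
    returned label changes, using no internal model knowledge."""
    for _ in range(tries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if query(x + delta) != original_label:
            return x + delta
    return None

x_adv = black_box_attack(x)
```

Real blackbox attacks are far more query-efficient than random search, but the principle, probing output behavior without internal access, is the same.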

The landscape of machine learning models is complex and broad, especially given their relatively recent integration into commercial applications. This rapid adoption, while transformative, has brought to light numerous vulnerabilities within these models. Consequently, various adversarial attack methods have emerged, each strategically exploiting different aspects of different models. Below, we highlight a subset of these methods, showcasing the multifaceted nature of adversarial attacks on machine learning models:

  • Generative Adversarial Networks (GANs) are deep learning models consisting of two networks competing against each other: a generator and a discriminator (Goodfellow et al. 2020). The generator tries to synthesize realistic data while the discriminator evaluates whether it is real or fake. GANs can be used to craft adversarial examples. The generator network is trained to produce inputs that the target model misclassifies. These GAN-generated images can then attack a target classifier or detection model. The generator and the target model are engaged in a competitive process, with the generator continually improving its ability to create deceptive examples and the target model enhancing its resistance to such examples. GANs provide a powerful framework for crafting complex and diverse adversarial inputs, illustrating the adaptability of generative models in the adversarial landscape.

  • Transfer Learning Adversarial Attacks exploit the knowledge transferred from a pre-trained model to a target model, creating adversarial examples that can deceive both models. These attacks pose a growing concern, particularly when adversaries have knowledge of the feature extractor but lack access to the classification head (the part or layer responsible for making the final classifications). Referred to as “headless attacks,” these transferable adversarial strategies leverage the expressive capabilities of feature extractors to craft perturbations while remaining oblivious to the label space or training data. The existence of such attacks underscores the importance of developing robust defenses for transfer learning applications, especially since pre-trained models are commonly used (Abdelkader et al. 2020).

Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Commun. ACM 63 (11): 139–44. https://doi.org/10.1145/3422622.

Abdelkader, Ahmed, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, and Chen Zhu. 2020. “Headless Horseman: Adversarial Attacks on Transfer Learning Models.” In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3087–91. IEEE. https://doi.org/10.1109/icassp40776.2020.9053181.

Case Study


In 2017, researchers conducted experiments by placing small black and white stickers on stop signs (Eykholt et al. 2017). When viewed by a normal human eye, the stickers did not obscure the sign or prevent interpretability. However, when images of the stickered stop signs were fed into standard traffic sign classification ML models, they were misclassified as speed limit signs over 85% of the time.

Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2017. “Robust Physical-World Attacks on Deep Learning Models.” ArXiv Preprint abs/1707.08945. https://arxiv.org/abs/1707.08945.

This demonstration showed how simple adversarial stickers could trick ML systems into misreading critical road signs. If deployed in the real world, these attacks could endanger public safety, causing autonomous vehicles to misinterpret stop signs as speed limits. Researchers warned this could potentially cause dangerous rolling stops or acceleration into intersections.


This case study provides a concrete illustration of how adversarial examples exploit how ML models recognize patterns. By subtly manipulating the input data, attackers can induce incorrect predictions and create serious risks for safety-critical applications like self-driving cars. The attack’s simplicity shows how even minor changes imperceptible to humans can lead models astray. Developers need robust defenses against such threats.


14.5 Security Threats to ML Hardware


Discussing the threats to embedded ML hardware security in a structured order is useful for a clear and in-depth understanding of the potential pitfalls of ML systems. We will begin with hardware bugs. We address the issues where intrinsic design flaws in the hardware can be a gateway to exploitation. This forms the fundamental knowledge required to understand the genesis of hardware vulnerabilities. Moving to physical attacks establishes the basic threat model, as these are the most overt and direct methods of compromising hardware integrity. Fault-injection attacks naturally extend this discussion, showing how specific manipulations can induce systematic failures.


Advancing to side-channel attacks next will show the increasing complexity, as these rely on exploiting indirect information leakages, requiring a nuanced understanding of hardware operations and environmental interactions. Leaky interfaces will show how external communication channels can become vulnerable, leading to accidental data exposures. Counterfeit hardware discussions benefit from prior explorations of hardware integrity and exploitation techniques, as they often compound these issues with additional risks due to their questionable provenance. Finally, supply chain risks encompass all concerns above and frame them within the context of the hardware’s journey from production to deployment, highlighting the multifaceted nature of hardware security and the need for vigilance at every stage.


Table tbl-threat_types provides an overview summarizing these threats:

Table 14.1: Threat types on hardware security.

Threat Type | Description | Relevance to Embedded ML Hardware Security
Hardware Bugs | Intrinsic flaws in hardware designs that can compromise system integrity. | Foundation of hardware vulnerability.
Physical Attacks | Direct exploitation of hardware through physical access or manipulation. | Basic and overt threat model.
Fault-injection Attacks | Induction of faults to cause errors in hardware operation, leading to potential system compromise. | Systematic manipulation leading to failure.
Side-Channel Attacks | Exploitation of leaked information from hardware operation to extract sensitive data. | Indirect attack via environmental observation.
Leaky Interfaces | Vulnerabilities arising from interfaces that expose data unintentionally. | Data exposure through communication channels.
Counterfeit Hardware | Use of unauthorized hardware components that may have security flaws. | Compounded vulnerability issues.
Supply Chain Risks | Risks introduced through the hardware lifecycle, from production to deployment. | Cumulative and multifaceted security challenges.

14.5.1 Hardware Bugs


Hardware is not immune to the pervasive issue of design flaws or bugs. Attackers can exploit these vulnerabilities to access, manipulate, or extract sensitive data, breaching the confidentiality and integrity that users and services depend on. An example of such vulnerabilities came to light with the discovery of Meltdown and Spectre—two hardware vulnerabilities that exploit critical vulnerabilities in modern processors. These bugs allow attackers to bypass the hardware barrier that separates applications, allowing a malicious program to read the memory of other programs and the operating system.


Meltdown (Kocher et al. 2019a) and Spectre (Kocher et al. 2019b) work by taking advantage of optimizations in modern CPUs that allow them to speculatively execute instructions out of order before validity checks have been completed. This reveals data that should be inaccessible, which the attack captures through side channels like caches. The technical complexity demonstrates the difficulty of eliminating vulnerabilities even with extensive validation.

Kocher, Paul, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, et al. 2019a. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP). IEEE. https://doi.org/10.1109/sp.2019.00002.

———. 2019b. “Spectre Attacks: Exploiting Speculative Execution.” In 2019 IEEE Symposium on Security and Privacy (SP). IEEE. https://doi.org/10.1109/sp.2019.00002.

If an ML system is processing sensitive data, such as personal user information or proprietary business analytics, Meltdown and Spectre represent a real and present danger to data security. Consider the case of an ML accelerator card designed to speed up machine learning processes, such as the ones we discussed in the A.I. Hardware chapter. These accelerators work with the CPU to handle complex calculations, often related to data analytics, image recognition, and natural language processing. If such an accelerator card has a vulnerability akin to Meltdown or Spectre, it could leak the data it processes. An attacker could exploit this flaw not just to siphon off data but also to gain insights into the ML model’s workings, including potentially reverse-engineering the model itself (thus returning to the issue of model theft).


A real-world scenario where this could be devastating would be in the healthcare industry. ML systems routinely process highly sensitive patient data to help diagnose, plan treatment, and forecast outcomes. A bug in the system’s hardware could lead to the unauthorized disclosure of personal health information, violating patient privacy and contravening strict regulatory standards like the Health Insurance Portability and Accountability Act (HIPAA).


The Meltdown and Spectre vulnerabilities are stark reminders that hardware security is not just about preventing unauthorized physical access but also about ensuring that the hardware’s architecture does not become a conduit for data exposure. Similar hardware design flaws regularly emerge in CPUs, accelerators, memory, buses, and other components. This necessitates ongoing retroactive mitigations and performance tradeoffs in deployed systems. Proactive solutions like confidential computing architectures could mitigate entire classes of vulnerabilities through fundamentally more secure hardware design. Thwarting hardware bugs requires rigor at every design stage, validation, and deployment.


14.5.2 Physical Attacks


Physical tampering refers to the direct, unauthorized manipulation of physical computing resources to undermine the integrity of machine learning systems. It’s a particularly insidious attack because it circumvents traditional cybersecurity measures, which often focus more on software vulnerabilities than hardware threats.


Physical tampering can take many forms, from the relatively simple, such as someone inserting a USB device loaded with malicious software into a server, to the highly sophisticated, such as embedding a hardware Trojan during the manufacturing process of a microchip (discussed later in greater detail in the Supply Chain section). ML systems are susceptible to this attack because they rely on the accuracy and integrity of their hardware to process and analyze vast amounts of data correctly.


Consider an ML-powered drone used for geographical mapping. The drone’s operation relies on a series of onboard systems, including a navigation module that processes inputs from various sensors to determine its path. If an attacker gains physical access to this drone, they could replace the genuine navigation module with a compromised one that includes a backdoor. This manipulated module could then alter the drone’s flight path to conduct surveillance over restricted areas or even smuggle contraband by flying undetected routes.


Another example is the physical tampering of biometric scanners used for access control in secure facilities. By introducing a modified sensor that transmits biometric data to an unauthorized receiver, an attacker can access personal identification data to authenticate individuals.


There are several ways that physical tampering can occur in ML hardware:

  • Manipulating sensors: Consider an autonomous vehicle that relies on cameras and LiDAR for situational awareness. An attacker could carefully calibrate the physical alignment of these sensors to introduce blindspots or distort critical distances. This could impair object detection and endanger passengers.

  • Hardware trojans: Malicious circuit modifications can introduce trojans that activate under certain inputs. For example, an ML accelerator chip could function normally until a rare trigger case occurs, causing it to accelerate unsafely.

  • Tampering with memory: Physically exposing and manipulating memory chips could allow the extraction of encrypted ML model parameters. Fault injection techniques can also corrupt model data to degrade accuracy.

  • Introducing backdoors: Gaining physical access to servers, an adversary could use hardware keyloggers to capture passwords and create backdoor accounts for persistent access. These could then be used to exfiltrate ML training data over time.

  • Supply chain attacks: Manipulating third-party hardware components or compromising manufacturing and shipping channels creates systemic vulnerabilities that are difficult to detect and remediate.

14.5.3 Fault-injection Attacks


By intentionally introducing faults into ML hardware, attackers can induce errors in the computational process, leading to incorrect outputs. This manipulation compromises the integrity of ML operations and can serve as a vector for further exploitation, such as system reverse engineering or security protocol bypass. Fault injection involves intentionally disrupting normal computations in a system through external interference (Joye and Tunstall 2012). By precisely triggering computational errors, adversaries can alter program execution in ways that degrade reliability or leak sensitive information.

Joye, Marc, and Michael Tunstall. 2012. Fault Analysis in Cryptography. Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-29656-7.
Barenghi, Alessandro, Guido M. Bertoni, Luca Breveglieri, Mauro Pellicioli, and Gerardo Pelosi. 2010. “Low Voltage Fault Attacks to AES.” In 2010 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), 7–12. IEEE. https://doi.org/10.1109/hst.2010.5513121.
Hutter, Michael, Jorn-Marc Schmidt, and Thomas Plos. 2009. “Contact-Based Fault Injections and Power Analysis on RFID Tags.” In 2009 European Conference on Circuit Theory and Design, 409–12. IEEE. https://doi.org/10.1109/ecctd.2009.5275012.
Amiel, Frederic, Christophe Clavier, and Michael Tunstall. 2006. “Fault Analysis of DPA-Resistant Algorithms.” In International Workshop on Fault Diagnosis and Tolerance in Cryptography, 223–36. Springer.
Agrawal, Dakshi, Selcuk Baktir, Deniz Karakoyunlu, Pankaj Rohatgi, and Berk Sunar. 2007. “Trojan Detection Using IC Fingerprinting.” In 2007 IEEE Symposium on Security and Privacy (SP ’07), 29–45. IEEE. https://doi.org/10.1109/sp.2007.36.
Skorobogatov, Sergei. 2009. “Local Heating Attacks on Flash Memory Devices.” In 2009 IEEE International Workshop on Hardware-Oriented Security and Trust, 1–6. IEEE. https://doi.org/10.1109/hst.2009.5225028.
Skorobogatov, Sergei P., and Ross J. Anderson. 2003. “Optical Fault Induction Attacks.” In Cryptographic Hardware and Embedded Systems - CHES 2002: 4th International Workshop, Redwood Shores, CA, USA, August 13–15, 2002, Revised Papers 4, 2–12. Springer.

Various physical tampering techniques can be used for fault injection. Low voltage (Barenghi et al. 2010), power spikes (Hutter, Schmidt, and Plos 2009), clock glitches (Amiel, Clavier, and Tunstall 2006), electromagnetic pulses (Agrawal et al. 2007), temperature increases (S. Skorobogatov 2009), and laser strikes (S. P. Skorobogatov and Anderson 2003) are common hardware attack vectors. They are precisely timed to induce faults like flipped bits or skipped instructions during key operations.


For ML systems, consequences include impaired model accuracy, denial of service, extraction of private training data or model parameters, and reverse engineering of model architectures. Attackers could use fault injection to force misclassifications, disrupt autonomous systems, or steal intellectual property.


For example, in (Breier et al. 2018), the authors successfully injected a fault attack into a deep neural network deployed on a microcontroller. They used a laser to heat specific transistors, forcing them to switch states. In one instance, they used this method to attack a ReLU activation function, resulting in the function always outputting a value of 0, regardless of the input. In the assembly code in Figure 14.2, the attack caused the executing program to always skip the jmp end instruction on line 6. This means that HiddenLayerOutput[i] is always set to 0, overwriting any values written to it on lines 4 and 5. As a result, the targeted neurons are rendered inactive, resulting in misclassifications.

Figure 14.2: Fault-injection demonstrated with assembly code. Credit: Breier et al. (2018).

Breier, Jakub, Xiaolu Hou, Dirmanto Jap, Lei Ma, Shivam Bhasin, and Yang Liu. 2018. “DeepLaser: Practical Fault Attack on Deep Neural Networks.” ArXiv Preprint abs/1806.05859. https://arxiv.org/abs/1806.05859.
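The effect of such a fault can be illustrated with a toy numerical sketch (hypothetical sizes and weights; NumPy assumed available): a fault that forces every ReLU output to zero wipes out the hidden-layer contribution, so the output logits collapse and the classification no longer depends on the input.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def faulted_relu(x):
    # Fault model: the skipped `jmp end` instruction leaves
    # HiddenLayerOutput[i] at 0 regardless of the input.
    return np.zeros_like(x)

rng = np.random.default_rng(0)
hidden = rng.normal(size=8)        # hidden-layer pre-activations
w_out = rng.normal(size=(8, 3))    # output-layer weights

normal_logits = relu(hidden) @ w_out
faulted_logits = faulted_relu(hidden) @ w_out
# With the faulted ReLU, every logit is zero: the network's decision
# no longer depends on its input, yielding systematic misclassification.
```

Randomly injecting such faults into layers near the output, as the attack strategy below describes, maximizes their impact on the final prediction.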

An attacker’s strategy could be to infer information about the activation functions using side-channel attacks (discussed next). Then, the attacker could attempt to target multiple activation function computations by randomly injecting faults into the layers as close to the output layer as possible, increasing the likelihood and impact of the attack.


Embedded devices are particularly vulnerable due to limited physical hardening and resource constraints that restrict robust runtime defenses. Without tamper-resistant packaging, attacker access to system buses and memory enables precise fault strikes. Lightweight embedded ML models also lack redundancy to overcome errors.


These attacks can be particularly insidious because they bypass traditional software-based security measures, often not accounting for physical disruptions. Furthermore, because ML systems rely heavily on the accuracy and reliability of their hardware for tasks like pattern recognition, decision-making, and automated responses, any compromise in their operation due to fault injection can have serious and wide-ranging consequences.


Mitigating fault injection risks necessitates a multilayer approach. Physical hardening through tamper-proof enclosures and design obfuscation helps reduce access. Lightweight anomaly detection can identify unusual sensor inputs or erroneous model outputs (Hsiao et al. 2023). Error-correcting memories minimize disruption, while data encryption safeguards information. Emerging model watermarking techniques trace stolen parameters.

Hsiao, Yu-Shun, Zishen Wan, Tianyu Jia, Radhika Ghosal, Abdulrahman Mahmoud, Arijit Raychowdhury, David Brooks, Gu-Yeon Wei, and Vijay Janapa Reddi. 2023. “MAVFI: An End-to-End Fault Analysis Framework with Anomaly Detection and Recovery for Micro Aerial Vehicles.” In 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1–6. IEEE. https://doi.org/10.23919/date56975.2023.10137246.

However, balancing robust protections with embedded systems’ tight size and power limits remains challenging. Cryptography limits and lack of secure co-processors on cost-sensitive embedded hardware restrict options. Ultimately, fault injection resilience demands a cross-layer perspective spanning electrical, firmware, software, and physical design layers.


14.5.4 Side-Channel Attacks


Side-channel attacks are a category of security breach that depends on information gained from a computer system’s physical implementation. Unlike direct attacks on software or network vulnerabilities, side-channel attacks exploit a system’s hardware characteristics. These attacks can be particularly effective against complex machine learning systems, where large amounts of data are processed, and a high level of security is expected.


The fundamental premise of a side-channel attack is that a device’s operation can inadvertently leak information. Such leaks can come from various sources, including the electrical power a device consumes (Kocher, Jaffe, and Jun 1999), the electromagnetic fields it emits (Gandolfi, Mourtel, and Olivier 2001), the time it takes to process certain operations, or even the sounds it produces. Each channel can indirectly glimpse the system’s internal processes, revealing information that can compromise security.

Kocher, Paul, Joshua Jaffe, and Benjamin Jun. 1999. “Differential Power Analysis.” In Advances in Cryptology - CRYPTO ’99: 19th Annual International Cryptology Conference, Santa Barbara, California, USA, August 15–19, 1999, Proceedings 19, 388–97. Springer.
Gandolfi, Karine, Christophe Mourtel, and Francis Olivier. 2001. “Electromagnetic Analysis: Concrete Results.” In Cryptographic Hardware and Embedded Systems - CHES 2001: Third International Workshop, Paris, France, May 14–16, 2001, Proceedings 3, 251–61. Springer.
Kocher, Paul, Joshua Jaffe, Benjamin Jun, and Pankaj Rohatgi. 2011. “Introduction to Differential Power Analysis.” Journal of Cryptographic Engineering 1 (1): 5–27. https://doi.org/10.1007/s13389-011-0006-y.

For instance, consider a machine learning system performing encrypted transactions. Encryption algorithms are supposed to secure data but require computational work to encrypt and decrypt information. An attacker can analyze the power consumption patterns of the device performing encryption to figure out the cryptographic key. With sophisticated statistical methods, small variations in power usage during the encryption process can be correlated with the data being processed, eventually revealing the key. Some differential analysis attack techniques are Differential Power Analysis (DPA) (Kocher et al. 2011), Differential Electromagnetic Analysis (DEMA), and Correlation Power Analysis (CPA).


For example, consider an attacker trying to break the AES encryption algorithm using a differential analysis attack. The attacker would first need to collect many power or electromagnetic traces (a trace is a record of a device’s power consumption or electromagnetic emissions over time) of the device while it performs AES encryption.


Once the attacker has collected sufficient traces, they would then use a statistical technique to identify correlations between the traces and the different values of the plaintext (original, unencrypted text) and ciphertext (encrypted text). These correlations would then be used to infer the value of a bit in the AES key and, eventually, the entire key. Differential analysis attacks are dangerous because they are low-cost, effective, and non-intrusive, allowing attackers to bypass algorithmic and hardware-level security measures. Compromises by these attacks are also hard to detect because they do not physically modify the device or break the encryption algorithm.
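The correlation step can be sketched in a toy simulation (all values hypothetical; real CPA targets intermediate values such as AES S-box outputs rather than the key directly): we model each power sample as the Hamming weight of the plaintext XOR key plus measurement noise, then rank every key-byte guess by how well its predicted leakage correlates with the measured traces.

```python
import numpy as np

def hamming_weight(v: int) -> int:
    return bin(v).count("1")

rng = np.random.default_rng(1)
secret_key = 0x3C                        # unknown to the attacker
plaintexts = rng.integers(0, 256, 500)   # known inputs, one per trace

# Toy leakage model: one power sample per encryption, proportional to the
# Hamming weight of (plaintext XOR key), plus measurement noise.
traces = np.array([hamming_weight(int(p) ^ secret_key) for p in plaintexts],
                  dtype=float)
traces += rng.normal(scale=0.5, size=len(traces))

def guess_score(guess: int) -> float:
    # Correlate the measured traces against the leakage predicted
    # under this key-byte hypothesis.
    predicted = [hamming_weight(int(p) ^ guess) for p in plaintexts]
    return float(np.corrcoef(traces, predicted)[0, 1])

recovered = max(range(256), key=guess_score)
```

The guess whose predictions best match the traces is the secret key byte; repeating the procedure per byte recovers the entire key.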


Below is a simplified visualization of how analyzing the power consumption patterns of the encryption device can help us extract information about the algorithm’s operations and, in turn, the secret data. Say we have a device that takes a 5-byte password as input. We will analyze and compare the different voltage patterns that are measured while the encryption device performs operations on the input to authenticate the password.


First, consider the power analysis of the device’s operations after entering a correct password in the first picture in Figure 14.3. The dense blue graph outputs the encryption device’s voltage measurement. What matters here is the comparison between the different analysis charts rather than the specific details of what is going on in each scenario.

Figure 14.3: Power analysis of an encryption device with a correct password. Credit: Colin O’Flynn.

Let’s look at the power analysis chart when we enter an incorrect password in Figure 14.4. The first three bytes of the password are correct. As a result, we can see that the voltage patterns are very similar or identical between the two charts, up to and including the fourth byte. After the device processes the fourth byte, it determines a mismatch between the secret key and the attempted input. We notice a change in the pattern at the transition point between the fourth and fifth bytes: the voltage has gone up (the current has gone down) because the device has stopped processing the rest of the input.

Figure 14.4: Power analysis of an encryption device with a (partially) wrong password. Credit: Colin O’Flynn.

Figure 14.5 shows the chart for a completely wrong password. After the device finishes processing the first byte, it determines that it is incorrect and stops further processing: the voltage goes up and the current drops.

Figure 14.5: Power analysis of an encryption device with a wrong password. Credit: Colin O’Flynn.

The example above shows how we can infer information about the encryption process and the secret key by analyzing different inputs and trying to ‘eavesdrop’ on the device’s operations on each input byte.
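The same early-exit behavior can be mimicked in a few lines (a toy sketch with a hypothetical stored password, counting comparison operations instead of measuring voltage): the amount of work the check performs reveals how many leading bytes of the attempt were correct.

```python
SECRET = b"hunter"   # hypothetical stored password

def check_password(attempt: bytes):
    """Early-exit comparison: stops at the first wrong byte."""
    ops = 0
    for a, s in zip(attempt, SECRET):
        ops += 1            # one unit of observable 'work' (power/time)
        if a != s:
            return False, ops
    return attempt == SECRET, ops

# The observable work count leaks the length of the correct prefix:
_, w1 = check_password(b"zzzzzz")  # wrong from the first byte
_, w2 = check_password(b"hunzzz")  # first three bytes correct
_, w3 = check_password(b"hunter")  # fully correct password
```

An attacker can therefore recover the password one byte at a time by maximizing the observed work. This is why constant-time comparisons (e.g., Python’s hmac.compare_digest) are the standard mitigation: they perform the same amount of work regardless of where the first mismatch occurs.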



Another example is an ML system for speech recognition, which processes voice commands to perform actions. By measuring the time it takes for the system to respond to commands or the power used during processing, an attacker could infer what commands are being processed and thus learn about the system’s operational patterns. Even more subtle, the sound emitted by a computer’s fan or hard drive could change in response to the workload, which a sensitive microphone could pick up and analyze to determine what kind of operations are being performed.


In real-world scenarios, side-channel attacks have been used to extract encryption keys and compromise secure communications. One of the earliest recorded side-channel attacks dates back to the 1960s when British intelligence agency MI5 faced the challenge of deciphering encrypted communications from the Egyptian Embassy in London. Their cipher-breaking attempts were thwarted by the computational limitations of the time until an ingenious observation changed the game.


MI5 agent Peter Wright proposed using a microphone to capture the subtle acoustic signatures emitted from the embassy’s rotor cipher machine during encryption (Burnet and Thomas 1989). The distinct mechanical clicks of the rotors as operators configured them daily leaked critical information about the initial settings. This simple side channel of sound enabled MI5 to dramatically reduce the complexity of deciphering messages. This early acoustic leak attack highlights that side-channel attacks are not merely a digital age novelty but a continuation of age-old cryptanalytic principles. The notion that where there is a signal, there is an opportunity for interception remains foundational. From mechanical clicks to electrical fluctuations and beyond, side channels enable adversaries to extract secrets indirectly through careful signal analysis.

Burnet, David, and Richard Thomas. 1989. “Spycatcher: The Commodification of Truth.” J. Law Soc. 16 (2): 210. https://doi.org/10.2307/1410360.
Asonov, D., and R. Agrawal. 2004. “Keyboard Acoustic Emanations.” In IEEE Symposium on Security and Privacy, 2004. Proceedings, 3–11. IEEE. https://doi.org/10.1109/secpri.2004.1301311.
Gnad, Dennis R. E., Fabian Oboril, and Mehdi B. Tahoori. 2017. “Voltage Drop-Based Fault Attacks on FPGAs Using Valid Bitstreams.” In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), 1–7. IEEE. https://doi.org/10.23919/fpl.2017.8056840.
Zhao, Mark, and G. Edward Suh. 2018. “FPGA-Based Remote Power Side-Channel Attacks.” In 2018 IEEE Symposium on Security and Privacy (SP), 229–44. IEEE. https://doi.org/10.1109/sp.2018.00049.

Today, acoustic cryptanalysis has evolved into attacks like keyboard eavesdropping (Asonov and Agrawal 2004). Electrical side channels range from power analysis on cryptographic hardware (Gnad, Oboril, and Tahoori 2017) to voltage fluctuations (Zhao and Suh 2018) on machine learning accelerators. Timing, electromagnetic emission, and even heat footprints can likewise be exploited. New and unexpected side channels often emerge as computing becomes more interconnected and miniaturized.


Just as MI5’s analog acoustic leak transformed their codebreaking, modern side-channel attacks circumvent traditional boundaries of cyber defense. Understanding the creative spirit and historical persistence of side channel exploits is key knowledge for developers and defenders seeking to secure modern machine learning systems comprehensively against digital and physical threats.


14.5.5 Leaky Interfaces


Leaky interfaces in embedded systems are often overlooked backdoors that can become significant security vulnerabilities. While designed for legitimate purposes such as communication, maintenance, or debugging, these interfaces may inadvertently provide attackers with a window through which they can extract sensitive information or inject malicious data.


An interface becomes “leaky” when it exposes more information than it should, often due to a lack of stringent access controls or inadequate shielding of the transmitted data. Here are some real-world examples of leaky interface issues causing security problems in IoT and embedded devices:

  • Baby Monitors: Many WiFi-enabled baby monitors have been found to have unsecured interfaces for remote access. This allowed attackers to gain live audio and video feeds from people’s homes, representing a major privacy violation.

  • Pacemakers: Interface vulnerabilities were discovered in some pacemakers that could allow attackers to manipulate cardiac functions if exploited. This presents a potentially life-threatening scenario.

  • Smart Lightbulbs: A researcher found he could access unencrypted data from smart lightbulbs via a debug interface, including WiFi credentials, allowing him to gain access to the connected network (Greengard 2015).

  • Smart Cars: If left unsecured, the OBD-II diagnostic port provides an attack vector into automotive systems. Researchers have used it to control brakes and other components (Miller and Valasek 2015).

Greengard, Samuel. 2015. The Internet of Things. The MIT Press. https://doi.org/10.7551/mitpress/10277.001.0001.
Miller, Charlie, and Chris Valasek. 2015. “Remote Exploitation of an Unaltered Passenger Vehicle.” Black Hat USA 2015 (S 91): 1–91.

While the above are not directly connected with ML, consider the example of a smart home system with an embedded ML component that controls home security based on behavior patterns it learns over time. The system includes a maintenance interface accessible via the local network for software updates and system checks. If this interface does not require strong authentication or the data transmitted through it is not encrypted, an attacker on the same network could gain access. They could then eavesdrop on the homeowner’s daily routines or reprogram the security settings by manipulating the firmware.


Such leaks are a privacy issue and a potential entry point for more damaging exploits. The exposure of training data, model parameters, or ML outputs from a leak could help adversaries construct adversarial examples or reverse-engineer models. Access through a leaky interface could also be used to alter an embedded device’s firmware, loading it with malicious code that could turn off the device, intercept data, or use it in botnet attacks.


To mitigate these risks, a multi-layered approach is necessary, spanning technical controls like authentication, encryption, and anomaly detection, alongside policies and processes like interface inventories, access controls, auditing, and secure development practices. Turning off unnecessary interfaces and compartmentalizing risks via a zero-trust model provide additional protection.


As designers of embedded ML systems, we should assess interfaces early in development and continually monitor them post-deployment as part of an end-to-end security lifecycle. Understanding and securing interfaces is crucial for ensuring the overall security of embedded ML.


14.5.6 Counterfeit Hardware


ML systems are only as reliable as the underlying hardware. In an era where hardware components are global commodities, the rise of counterfeit or cloned hardware presents a significant challenge. Counterfeit hardware encompasses any components that are unauthorized reproductions of original parts. Counterfeit components infiltrate ML systems through complex supply chains that stretch across borders and involve numerous stages from manufacture to delivery.


A single lapse in the supply chain’s integrity can result in the insertion of counterfeit parts designed to closely imitate the functions and appearance of genuine hardware. For instance, a facial recognition system for high-security access control may be compromised if equipped with counterfeit processors. These processors could fail to accurately process and verify biometric data, potentially allowing unauthorized individuals to access restricted areas.


The challenge with counterfeit hardware is multifaceted. It undermines the quality and reliability of ML systems, as these components may degrade faster or perform unpredictably due to substandard manufacturing. The security risks are also profound; counterfeit hardware can contain vulnerabilities ripe for exploitation by malicious actors. For example, a cloned network router in an ML data center might include a hidden backdoor, enabling data interception or network intrusion without detection.


Furthermore, counterfeit hardware poses legal and compliance risks. Companies inadvertently utilizing counterfeit parts in their ML systems may face serious legal repercussions, including fines and sanctions for failing to comply with industry regulations and standards. This is particularly true for sectors where compliance with specific safety and privacy regulations is mandatory, such as healthcare and finance.


The issue of counterfeit hardware is exacerbated by economic pressures to reduce costs, which can compel businesses to source from lower-cost suppliers without stringent verification processes. This economizing can inadvertently introduce counterfeit parts into otherwise secure systems. Additionally, detecting these counterfeits is inherently difficult since they are created to pass as the original components, often requiring sophisticated equipment and expertise to identify.


In ML, where decisions are made in real time based on complex computations, the consequences of hardware failure are not merely inconvenient but potentially dangerous. Stakeholders in the field of ML need to understand these risks thoroughly. The issues presented by counterfeit hardware necessitate a deep dive into the current challenges facing ML system integrity and emphasize the importance of vigilant, informed management of the hardware life cycle within these advanced systems.


14.5.7 Supply Chain Risks


The threat of counterfeit hardware is closely tied to broader supply chain vulnerabilities. Globalized, interconnected supply chains create multiple opportunities for compromised components to infiltrate a product’s lifecycle. Supply chains involve numerous entities, from design to manufacturing, assembly, distribution, and integration. A lack of transparency and oversight of each partner makes verifying integrity at every step challenging. Lapses anywhere along the chain can allow the insertion of counterfeit parts.


For example, a contracted manufacturer may unknowingly receive and incorporate recycled electronic waste containing dangerous counterfeits. An untrustworthy distributor could smuggle in cloned components. Insider threats at any vendor might deliberately mix counterfeits into legitimate shipments.


Once counterfeits enter the supply stream, they move quickly through multiple hands before ending up in ML systems where detection is difficult. Advanced counterfeits like refurbished parts or clones with repackaged externals can masquerade as authentic components, passing visual inspection.


To identify fakes, thorough technical profiling using micrography, X-ray screening, component forensics, and functional testing is often required. However, such costly analysis is impractical for large-volume procurement.


Strategies like supply chain audits, screening suppliers, validating component provenance, and adding tamper-evident protections can help mitigate risks. However, given global supply chain security challenges, a zero-trust approach is prudent. Designing ML systems to utilize redundant checking, fail-safes, and continuous runtime monitoring provides resilience against component compromises.


Rigorous validation of hardware sources coupled with fault-tolerant system architectures offers the most robust defense against the pervasive risks of convoluted, opaque global supply chains.


Case Study


In 2018, Bloomberg Businessweek published an alarming story that got much attention in the tech world. The article claimed that Supermicro had secretly planted tiny spy chips on server hardware. Reporters said Chinese state hackers working with Supermicro could sneak these tiny chips onto motherboards during manufacturing. The tiny chips allegedly gave the hackers backdoor access to servers used by over 30 major companies, including Apple and Amazon.


If true, this would allow hackers to spy on private data or even tamper with systems. However, after investigating, Apple and Amazon found no proof that such hacked Supermicro hardware existed. Other experts also questioned the accuracy of Bloomberg’s reporting.


Whether the story is completely true is not our concern from a pedagogical viewpoint. Regardless, the incident drew attention to the risks of global hardware supply chains, especially for hardware manufactured in China. When companies outsource and buy hardware components from vendors worldwide, they lose visibility into the process. In this complex global pipeline, counterfeit or tampered hardware could be slipped in somewhere along the way without tech companies realizing it. Relying too heavily on a single manufacturer or distributor also creates risk; for instance, concern over the over-reliance on TSMC for semiconductor manufacturing contributed to the U.S. investing 50 billion dollars in domestic chip production through the CHIPS Act.


As ML moves into more critical systems, verifying hardware integrity from design through production and delivery is crucial. The reported Supermicro backdoor demonstrated that for ML security, we cannot take global supply chains and manufacturing for granted. We must inspect and validate hardware at every link in the chain.


14.6 Embedded ML Hardware Security


14.6.1 Trusted Execution Environments


About TEE


A Trusted Execution Environment (TEE) is a secure area within a main processor that provides a high level of security for the execution of code and protection of data. TEEs operate by isolating the execution of sensitive tasks from the rest of the device’s operations, thereby creating an environment resistant to attacks from software and hardware vectors.


Benefits


TEEs are particularly valuable in scenarios where sensitive data must be processed or where the integrity of a system’s operations is critical. In the context of ML hardware, TEEs ensure that the ML algorithms and data are protected against tampering and leakage. This is essential because ML models often process private information, trade secrets, or data that could be exploited if exposed.


For instance, a TEE can protect ML model parameters from being extracted by malicious software on the same device. This protection is vital for privacy and maintaining the integrity of the ML system, ensuring that the models perform as expected and do not provide skewed outputs due to manipulated parameters. Apple’s Secure Enclave, found in iPhones and iPads, is a form of TEE that provides an isolated environment to protect sensitive user data and cryptographic operations.


In ML systems, TEEs can:

  • Securely perform model training and inference, ensuring the computation results remain confidential.

  • Protect the confidentiality of input data, like biometric information, used for personal identification or sensitive classification tasks.

  • Secure ML models by preventing reverse engineering, which can protect proprietary information and maintain a competitive advantage.

  • Enable secure updates to ML models, ensuring that updates come from a trusted source and have not been tampered with in transit.

The importance of TEEs in ML hardware security stems from their ability to protect against external and internal threats, including the following:

  • Malicious Software: TEEs can prevent high-privilege malware from accessing sensitive areas of the ML system.

  • Physical Tampering: By integrating with hardware security measures, TEEs can protect against physical tampering that attempts to bypass software security.

  • Side-channel Attacks: Although not impenetrable, TEEs can mitigate certain side-channel attacks by controlling access to sensitive operations and data patterns.

Mechanics


The fundamentals of TEEs contain four main parts:

  • Isolated Execution: Code within a TEE runs in a separate environment from the device’s main operating system. This isolation protects the code from unauthorized access by other applications.

  • Secure Storage: TEEs can securely store cryptographic keys, authentication tokens, and sensitive data, preventing access by regular applications running outside the TEE.

  • Integrity Protection: TEEs can verify the integrity of code and data, ensuring that they have not been altered before execution or during storage.

  • Data Encryption: Data handled within a TEE can be encrypted, making it unreadable to entities without the proper keys, which are also managed within the TEE.
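As a concrete (if highly simplified) sketch of the integrity-protection role above, the code below tags model parameters with an HMAC under a key that, in a real TEE, would live only inside the enclave; all names and values are hypothetical.

```python
import hashlib
import hmac

# Hypothetical key material; a real enclave would never expose this.
ENCLAVE_KEY = b"key-material-held-only-inside-the-tee"

def seal(model_bytes: bytes) -> bytes:
    """Tag the parameters so later tampering is detectable."""
    return hmac.new(ENCLAVE_KEY, model_bytes, hashlib.sha256).digest()

def verify(model_bytes: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(ENCLAVE_KEY, model_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

params = b"\x01\x02\x03-model-weights"
tag = seal(params)

ok_untampered = verify(params, tag)          # accepted: parameters intact
ok_tampered = verify(params + b"\x00", tag)  # rejected: a single changed byte
```

If verification fails, the enclave refuses to load the parameters, preventing the manipulated-parameter attacks described earlier.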

Here are some examples of TEEs that provide hardware-based security for sensitive applications:

  • ARM TrustZone: This technology creates secure and normal world execution environments isolated using hardware controls and is implemented in many mobile chipsets.

  • Intel SGX: Intel’s Software Guard Extensions provide an enclave for code execution that protects against certain software attacks, specifically OS-layer attacks. They are used to safeguard workloads in the cloud.

  • Qualcomm Secure Execution Environment: A hardware sandbox on Qualcomm chipsets for mobile payment and authentication apps.

  • Apple Secure Enclave: A TEE for biometric data and key management on iPhones and iPads; it also facilitates mobile payments.

Figure 14.6 is a diagram demonstrating a secure enclave isolated from the main processor to provide an extra layer of security. The secure enclave has a boot ROM to establish a hardware root of trust, an AES engine for efficient and secure cryptographic operations, and protected memory. It also has a mechanism to store information securely on attached storage separate from the NAND flash storage used by the application processor and operating system. This design keeps sensitive user data secure even when the Application Processor kernel becomes compromised.

Figure 14.6: System-on-chip secure enclave. Credit: Apple.

Tradeoffs


If TEEs are so good, why don’t all systems have TEEs enabled by default? The decision to implement a TEE is not taken lightly, and there are several reasons a TEE may be absent from a given system. Here are some tradeoffs and challenges associated with TEEs:


Cost: Implementing TEEs involves additional costs. There are direct costs for the hardware and indirect costs associated with developing and maintaining secure software for TEEs. These costs may only be justifiable for some devices, especially low-margin products.


Complexity: TEEs add complexity to system design and development. Integrating a TEE with existing systems requires a substantial redesign of the hardware and software stack, which can be a barrier, especially for legacy systems.


Performance Overhead: While TEEs offer enhanced security, they can introduce performance overhead. For example, the additional steps in verifying and encrypting data can slow down system performance, which may be critical in time-sensitive applications.


Development Challenges: Developing for TEEs requires specialized knowledge and often must adhere to strict development protocols. This can extend development time and complicate the debugging and testing processes.


Scalability and Flexibility: TEEs, due to their secure nature, may impose limitations on scalability and flexibility. Upgrading secure components or scaling the system for more users or data can be more challenging when everything must pass through a secure, enclosed environment.


Energy Consumption: The increased processing required for encryption, decryption, and integrity checks can lead to higher energy consumption, a significant concern for battery-powered devices.


Market Demand: Not all markets or applications require the level of security provided by TEEs. For many consumer applications, the perceived risk may be low enough that manufacturers opt not to include TEEs in their designs.


Security Certification and Assurance: Systems with TEEs may need rigorous security certifications with bodies like Common Criteria (CC) or the European Union Agency for Cybersecurity (ENISA), which can be lengthy and expensive. Some organizations may choose to refrain from implementing TEEs to avoid these hurdles.


Limited Resource Devices: Devices with limited processing power, memory, or storage may not be able to support TEEs without compromising their primary functionality.


14.6.2 Secure Boot


About


Secure Boot is a security standard that ensures a device boots using only software trusted by the original equipment manufacturer (OEM). When the device starts up, the firmware checks the signature of each piece of boot software, including the bootloader, kernel, and base operating system, to ensure it has not been tampered with. If the signatures are valid, the device continues to boot. If not, the boot process stops to prevent potential security threats from executing.
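The chained verification described above can be sketched in a few lines. This is a minimal illustration, not a real boot ROM: SHA-256 digests stand in for the asymmetric signature checks (RSA/ECDSA) that actual Secure Boot implementations use, and `provision` and `secure_boot` are hypothetical helper names.

```python
import hashlib

# Trusted digests for each boot stage, conceptually burned into ROM by
# the OEM at manufacture time. Real Secure Boot verifies asymmetric
# signatures rather than bare hashes.
TRUSTED_DIGESTS = {}

def provision(stage, image):
    """Record the trusted digest for a boot stage (done by the OEM)."""
    TRUSTED_DIGESTS[stage] = hashlib.sha256(image).hexdigest()

def secure_boot(stages):
    """Verify each stage in order; halt the boot on the first mismatch."""
    for stage, image in stages:
        if hashlib.sha256(image).hexdigest() != TRUSTED_DIGESTS.get(stage):
            return f"boot halted: {stage} failed verification"
    return "boot complete"

bootloader, kernel = b"bootloader-v1", b"kernel-v1"
provision("bootloader", bootloader)
provision("kernel", kernel)

print(secure_boot([("bootloader", bootloader), ("kernel", kernel)]))
# A tampered kernel breaks the chain and halts the boot:
print(secure_boot([("bootloader", bootloader), ("kernel", b"kernel-evil")]))
```

The key property is the ordering: each stage is verified before it runs, so a compromised later stage cannot retroactively forge an earlier one.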


Benefits


The integrity of an ML system is critical from the moment it is powered on. A compromised boot process could undermine the system by allowing malicious software to load before the operating system and ML applications start. This could lead to manipulated ML operations, stolen data, or the device being repurposed for malicious activities such as botnets or crypto-mining.


Secure Boot helps protect embedded ML hardware in several ways:

  • Protecting ML Data: Ensuring that the data used by ML models, which may include private or sensitive information, is not exposed to tampering or theft during the boot process.

  • Guarding Model Integrity: Maintaining the ML models’ integrity is important, as tampering with them could lead to incorrect or malicious outcomes.

  • Secure Model Updates: Enabling secure updates to ML models and algorithms, ensuring that updates are authenticated and have not been altered.


Mechanics


TEEs benefit from Secure Boot in multiple ways. Figure fig-secure-boot illustrates a flow diagram of a trusted embedded system. For instance, during initial validation, Secure Boot ensures that the code running inside the TEE is the correct and untampered version approved by the device manufacturer. It can ensure resilience against tampering by verifying the digital signatures of the firmware and other critical components; Secure Boot prevents unauthorized modifications that could undermine the TEE’s security properties. Secure Boot establishes a foundation of trust upon which the TEE can securely operate, enabling secure operations such as cryptographic key management, secure processing, and sensitive data handling.

Figure 14.7: Secure Boot flow. Credit: R. V. and A. (2018).
R. V., Rashmi, and Karthikeyan A. 2018. “Secure Boot of Embedded Applications - a Review.” In 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), 291–98. IEEE. https://doi.org/10.1109/iceca.2018.8474730.

Case Study: Apple’s Face ID


Let’s take a real-world example. Apple’s Face ID technology uses advanced machine learning algorithms to enable facial recognition on iPhones and iPads. It relies on a sophisticated framework of sensors and software to accurately map the geometry of a user’s face. For Face ID to function securely and protect user biometric data, the device’s operations must be trustworthy from the moment it is powered on, which is where Secure Boot plays a crucial role. Here’s how Secure Boot works in conjunction with Face ID:


Initial Verification: When an iPhone is powered on, the Secure Boot process begins in the Secure Enclave, a coprocessor providing an extra security layer. The Secure Enclave is responsible for processing fingerprint data for Touch ID and facial recognition data for Face ID. The boot process verifies that the Secure Enclave’s firmware is signed by Apple and has not been tampered with. This step ensures that the firmware used to process biometric data is authentic and safe.


Continuous Security Checks: After the initial power-on self-test and verification by Secure Boot, the Secure Enclave communicates with the device’s main processor to continue the secure boot chain. It verifies the digital signatures of the iOS kernel and other critical boot components before allowing the boot process to proceed. This chained trust model prevents unauthorized modifications to the bootloader and operating system, which could compromise the device’s security.


Face Data Processing: Once the device has completed its secure boot sequence, the Secure Enclave can interact safely with the ML algorithms that power Face ID. Facial recognition involves projecting and analyzing over 30,000 invisible dots to create a depth map of the user’s face and an infrared image. This data is then converted into a mathematical representation and compared with the registered face data securely stored in the Secure Enclave.


Secure Enclave and Data Protection: The Secure Enclave is designed to protect sensitive data and handle the cryptographic operations that secure it. It ensures that even if the operating system kernel is compromised, the facial data cannot be accessed by unauthorized apps or attackers. Face ID data never leaves the device and is not backed up to iCloud or anywhere else.


Firmware Updates: Apple frequently releases firmware updates to address security vulnerabilities and improve the functionality of its systems. Secure Boot ensures that each firmware update is authenticated and that only updates signed by Apple are installed on the device, preserving the integrity and security of the Face ID system.


By using Secure Boot with dedicated hardware like the Secure Enclave, Apple can provide strong security assurances for sensitive operations like facial recognition.


Challenges


Implementing Secure Boot poses several challenges that must be addressed to realize its full benefits.


Key Management Complexity: Generating, storing, distributing, rotating, and revoking cryptographic keys in a provably secure manner is extremely challenging yet vital for maintaining the chain of trust. Any compromise of keys cripples protections. Large enterprises managing multitudes of device keys face particular scale challenges.


Performance Overhead: Checking cryptographic signatures during boot can add 50-100 ms or more per component verified. This delay may be prohibitive for time-sensitive or resource-constrained applications. However, performance impacts can be reduced through parallelization and hardware acceleration.


Signing Burden: Developers must diligently ensure that all software components involved in the boot process (bootloaders, firmware, OS kernel, drivers, applications, etc.) are correctly signed by trusted keys. Accommodating third-party code signing remains an issue.


Cryptographic Verification: Secure algorithms and protocols must validate the legitimacy of keys and signatures, avoid tampering or bypass, and support revocation. Accepting dubious keys undermines trust.


Customizability Constraints: Vendor-locked Secure Boot architectures limit user control and upgradability. Open-source bootloaders like u-boot and coreboot enable security while supporting customizability.


Scalable Standards: Emerging standards like Device Identifier Composition Engine (DICE) and IDevID promise to securely provision and manage device identities and keys at scale across ecosystems.


Adopting Secure Boot requires following security best practices around key management, crypto validation, signed updates, and access control. Secure Boot provides a robust foundation for building device integrity and trust when implemented with care.


14.6.3 Hardware Security Modules


About HSM


A Hardware Security Module (HSM) is a physical device that manages digital keys for strong authentication and provides crypto-processing. These modules are designed to be tamper-resistant and provide a secure environment for performing cryptographic operations. HSMs can come in standalone devices, plug-in cards, or integrated circuits on another device.


HSMs are crucial for various security-sensitive applications because they offer a hardened, secure enclave for storing cryptographic keys and executing cryptographic functions. They are particularly important for ensuring the security of transactions, identity verifications, and data encryption.


Benefits


HSMs provide several functionalities that are beneficial for the security of ML systems:


Protecting Sensitive Data: In machine learning applications, models often process sensitive data that can be proprietary or personal. HSMs protect the encryption keys used to secure this data, both at rest and in transit, from exposure or theft.


Ensuring Model Integrity: The integrity of ML models is vital for their reliable operation. HSMs can securely manage the signing and verification processes for ML software and firmware, ensuring unauthorized parties have not altered the models.


Secure Model Training and Updates: The training and updating of ML models involve the processing of potentially sensitive data. HSMs ensure that these processes are conducted within a secure cryptographic boundary, protecting against the exposure of training data and unauthorized model updates.
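The defining property behind all three benefits is key isolation: key material is generated inside the module and never exported, so callers can request operations but cannot read the key. The toy class below illustrates only that property; it is a sketch, not a real HSM interface (real HSMs expose standardized APIs such as PKCS #11 and typically use asymmetric keys), and `ToyHSM` is a hypothetical name.

```python
import hmac, hashlib, secrets

class ToyHSM:
    """Toy model of an HSM: the key is created inside the module and
    never leaves it; there is deliberately no export method."""
    def __init__(self):
        self.__key = secrets.token_bytes(32)  # private key material

    def sign(self, data: bytes) -> bytes:
        # HMAC-SHA256 stands in for the module's signing operation.
        return hmac.new(self.__key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, tag: bytes) -> bool:
        # Constant-time comparison avoids timing side channels.
        return hmac.compare_digest(self.sign(data), tag)

hsm = ToyHSM()
model_blob = b"quantized-model-v3"
tag = hsm.sign(model_blob)
print(hsm.verify(model_blob, tag))         # True: untampered model
print(hsm.verify(b"tampered-model", tag))  # False: signature mismatch
```

Because the application only ever holds the tag, not the key, stealing the application's memory does not let an attacker forge signatures for altered models.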


Tradeoffs


HSMs involve several tradeoffs for embedded ML. These tradeoffs are similar to TEEs, but for completeness, we will also discuss them here through the lens of HSM.


Cost: HSMs are specialized devices that can be expensive to procure and implement, raising the overall cost of an ML project. This may be a significant factor for embedded systems, where cost constraints are often stricter.


Performance Overhead: While secure, the cryptographic operations performed by HSMs can introduce latency. Any added delay can be critical in high-performance embedded ML applications where inference must happen in real-time, such as in autonomous vehicles or translation devices.


Physical Space: Embedded systems are often limited by physical space, and adding an HSM can be challenging in tightly constrained environments. This is especially true for consumer electronics and wearable technology, where size and form factor are key considerations.


Power Consumption: HSMs require power for their operation, which can be a drawback for battery-operated devices with long battery life. The secure processing and cryptographic operations can drain the battery faster, a significant tradeoff for mobile or remote embedded ML applications.


Complexity in Integration: Integrating HSMs into existing hardware systems adds complexity. It often requires specialized knowledge to manage the secure communication between the HSM and the system’s processor and develop software capable of interfacing with the HSM.


Scalability: Scaling an ML solution that uses HSMs can be challenging. Managing a fleet of HSMs and ensuring uniformity in security practices across devices can become complex and costly when the deployment size increases, especially when dealing with embedded systems where communication is costly.


Operational Complexity: HSMs can make updating firmware and ML models more complex. Every update must be signed and possibly encrypted, which adds steps to the update process and may require secure mechanisms for key management and update distribution.


Development and Maintenance: The secure nature of HSMs means that only limited personnel have access to the HSM for development and maintenance purposes. This can slow down the development process and make routine maintenance more difficult.


Certification and Compliance: Ensuring that an HSM meets specific industry standards and compliance requirements can add to the time and cost of development. This may involve undergoing rigorous certification processes and audits.


14.6.4 Physical Unclonable Functions (PUFs)


About


Physical Unclonable Functions (PUFs) provide a hardware-intrinsic means for cryptographic key generation and device authentication by harnessing the inherent manufacturing variability in semiconductor components. During fabrication, random physical factors such as doping variations, line edge roughness, and dielectric thickness result in microscale differences between semiconductors, even when produced from the same masks. These create detectable timing and power variances that act as a “fingerprint” unique to each chip. PUFs exploit this phenomenon by incorporating integrated circuits to amplify minute timing or power differences into measurable digital outputs.


When stimulated with an input challenge, the PUF circuit produces an output response based on the device’s intrinsic physical characteristics. Due to their physical uniqueness, the same challenge will yield a different response on other devices. This challenge-response mechanism can be used to generate keys securely and identifiers tied to the specific hardware, perform device authentication, or securely store secrets. For example, a key derived from a PUF will only work on that device and cannot be cloned or extracted even with physical access or full reverse engineering (Gao, Al-Sarawi, and Abbott 2020).
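The challenge-response behavior can be simulated in software to show its two key properties: responses are stable on one device but differ across devices. The `SimulatedPUF` class below is purely illustrative (an assumed name, not a real API); real PUF responses come from circuit physics, are noisy, and require error correction or fuzzy extractors to yield stable keys.

```python
import hashlib, os

class SimulatedPUF:
    """Software stand-in for a PUF: each 'device' receives random
    physical variation at 'manufacture' time, and the response is a
    deterministic function of that variation plus the challenge."""
    def __init__(self):
        self._variation = os.urandom(16)  # models process variation

    def response(self, challenge: bytes) -> str:
        return hashlib.sha256(self._variation + challenge).hexdigest()

dev_a, dev_b = SimulatedPUF(), SimulatedPUF()
challenge = b"\x01\x02\x03"
print(dev_a.response(challenge) == dev_a.response(challenge))  # True: stable per device
print(dev_a.response(challenge) == dev_b.response(challenge))  # False: unique across devices
```

A verifier that has pre-recorded a device's challenge-response pairs can later authenticate it by issuing one of the stored challenges and comparing the response.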


Benefits


PUF key generation avoids external key storage, which risks exposure. It also provides a foundation for other hardware security primitives like Secure Boot. Implementation challenges include managing varying reliability and entropy across different PUFs, sensitivity to environmental conditions, and susceptibility to machine learning modeling attacks. When designed carefully, PUFs enable promising applications in IP protection, trusted computing, and anti-counterfeiting.


Utility


Machine learning models are rapidly becoming a core part of the functionality for many embedded devices, such as smartphones, smart home assistants, and autonomous drones. However, securing ML on resource-constrained embedded hardware can be challenging. This is where physical unclonable functions (PUFs) come in uniquely handy. Let’s look at some examples of how PUFs can be useful.


PUFs provide a way to generate unique fingerprints and cryptographic keys tied to the physical characteristics of each chip on the device. Let’s take an example. We have a smart camera drone that uses embedded ML to track objects. A PUF integrated into the drone’s processor could create a device-specific key to encrypt the ML model before loading it onto the drone. This way, even if an attacker somehow hacks the drone and tries to steal the model, they won’t be able to use it on another device!


The same PUF key could also create a digital watermark embedded in the ML model. If that model ever gets leaked and posted online by someone trying to pirate it, the watermark could help prove it came from your stolen drone and didn’t originate from the attacker. Also, imagine the drone camera connects to the cloud to offload some of its ML processing. The PUF can authenticate that the camera is legitimate before the cloud will run inference on sensitive video feeds. The cloud could verify that the drone has not been physically tampered with by checking that the PUF responses have not changed.


PUFs enable all this security through their challenge-response behavior’s inherent randomness and hardware binding. Without needing to store keys externally, PUFs are ideal for securing embedded ML with limited resources. Thus, they offer a unique advantage over other mechanisms.


Mechanics


The working principle behind PUFs, shown in Figure fig-pfu, involves generating a “challenge-response” pair, where a specific input (the challenge) to the PUF circuit results in an output (the response) that is determined by the unique physical properties of that circuit. This process can be likened to a fingerprinting mechanism for electronic devices. Devices that utilize ML for processing sensor data can employ PUFs to secure communication between devices and prevent the execution of ML models on counterfeit hardware.


Figure fig-pfu illustrates an overview of the PUF basics: a) PUF can be thought of as a unique fingerprint for each piece of hardware; b) an Optical PUF is a special plastic token that is illuminated, creating a unique speckle pattern that is then recorded; c) in an APUF (Arbiter PUF), challenge bits select different paths, and a judge decides which one is faster, giving a response of ‘1’ or ‘0’; d) in an SRAM PUF, the response is determined by the mismatch in the threshold voltage of transistors, where certain conditions lead to a preferred response of ‘1’. Each of these methods uses specific characteristics of the hardware to create a unique identifier.

Figure 14.8: PUF basics. Credit: Gao, Al-Sarawi, and Abbott (2020).
Gao, Yansong, Said F. Al-Sarawi, and Derek Abbott. 2020. “Physical Unclonable Functions.” Nature Electronics 3 (2): 81–91. https://doi.org/10.1038/s41928-020-0372-5.

Challenges


There are a few challenges with PUFs. The PUF response can be sensitive to environmental conditions, such as temperature and voltage fluctuations, leading to inconsistent behavior that must be accounted for in the design. Also, since PUFs can generate many unique challenge-response pairs, managing and ensuring the consistency of these pairs across the device’s lifetime can be challenging. Last but not least, integrating PUF technology may increase the overall manufacturing cost of a device, although it can save costs in key management over the device’s lifecycle.


14.7 Privacy Concerns in Data Handling


Handling personal and sensitive data securely and ethically is critical as machine learning permeates devices like smartphones, wearables, and smart home appliances. For medical hardware, handling data securely and ethically is further required by law through the Health Insurance Portability and Accountability Act (HIPAA). These embedded ML systems pose unique privacy risks, given their intimate proximity to users’ lives.


14.7.1 Sensitive Data Types


Embedded ML devices like wearables, smart home assistants, and autonomous vehicles frequently process highly personal data that requires careful handling to maintain user privacy and prevent misuse. Specific examples include medical reports and treatment plans processed by health wearables, private conversations continuously captured by smart home assistants, and detailed driving habits collected by connected cars. Compromise of such sensitive data can lead to serious consequences like identity theft, emotional manipulation, public shaming, and mass surveillance overreach.


Sensitive data takes many forms - structured records like contact lists and unstructured content like conversational audio and video streams. In medical settings, protected health information (PHI) is collected by doctors throughout every interaction and is heavily regulated by strict HIPAA guidelines. Even outside of medical settings, sensitive data can still be collected in the form of Personally Identifiable Information (PII), which is defined as “any representation of information that permits the identity of an individual to whom the information applies to be reasonably inferred by either direct or indirect means.” Examples of PII include email addresses, social security numbers, and phone numbers, among other fields. PII is collected in medical settings and other settings (financial applications, etc.) and is heavily regulated by Department of Labor policies.


Even derived model outputs could indirectly leak details about individuals. Beyond just personal data, proprietary algorithms and datasets also warrant confidentiality protections. In the Data Engineering section, we covered several topics in detail.


Techniques like de-identification, aggregation, anonymization, and federation can help transform sensitive data into less risky forms while retaining analytical utility. However, diligent controls around access, encryption, auditing, consent, minimization, and compliance practices are still essential throughout the data lifecycle. Regulations like GDPR categorize different classes of sensitive data and prescribe responsibilities around their ethical handling. Standards like NIST 800-53 provide rigorous security control guidance for confidentiality protection. With growing reliance on embedded ML, understanding sensitive data risks is crucial.


14.7.2 Applicable Regulations


Many embedded ML applications handle sensitive user data under HIPAA, GDPR, and CCPA regulations. Understanding the protections mandated by these laws is crucial for building compliant systems.

  • The HIPAA Privacy Rule governs medical data privacy and security in the US, with severe penalties for violations. Any health-related embedded ML devices like diagnostic wearables or assistive robots would need to implement controls like audit trails, access controls, and encryption prescribed by HIPAA.

  • GDPR imposes transparency, retention limits, and user rights on EU citizen data, even when processed by companies outside the EU. Smart home systems capturing family conversations or location patterns would need GDPR compliance. Key requirements include data minimization, encryption, and mechanisms for consent and erasure.

  • CCPA, which applies in California, protects consumer data privacy through provisions like required disclosures and opt-out rights. IoT gadgets like smart speakers and fitness trackers used by Californians are likely to fall under its scope.

  • The CCPA was the first state-specific set of regulations regarding privacy concerns. Following the CCPA, similar regulations were also enacted in 10 other states, with some states proposing bills for consumer data privacy protections.


Additionally, when relevant to the application, sector-specific rules govern telematics, financial services, utilities, etc. Best practices like Privacy by Design, impact assessments, and maintaining audit trails help embed compliance if it is not already required by law. Given potentially costly penalties, consulting legal/compliance teams is advisable when developing regulated embedded ML systems.


14.7.3 De-identification


If medical data is thoroughly de-identified, HIPAA guidelines no longer directly apply, and far fewer regulations govern its use. However, the data must be de-identified using one of the HIPAA-approved methods (the Safe Harbor method or the Expert Determination method) for this exemption to hold.


Safe Harbor Methods


Safe Harbor methods are most commonly used for de-identifying protected healthcare information due to the limited resources needed compared to Expert Determination methods. Safe Harbor de-identification requires scrubbing datasets of any data that falls into one of 18 categories. The following categories are listed as sensitive information based on the Safe Harbor standard:

  • Names; geographic locators; birthdates; phone numbers; email addresses; street addresses; Social Security numbers; medical record numbers; health plan beneficiary numbers; device identifiers and serial numbers; certificate/license numbers (birth certificates, driver’s licenses, etc.); account numbers; vehicle identifiers; website URLs; full-face photos and comparable images; biometric identifiers; and any other unique identifiers

For most of these categories, all data must be removed regardless of the circumstances. For other categories, including geographical information and birthdate, the data can be partially removed enough to make the information hard to re-identify. For example, if a zip code is large enough, the first 3 digits can remain since there are enough people in the geographic area to make re-identification difficult. Birthdates need to be scrubbed of all elements except birth year, and all ages above 89 need to be aggregated into a 90+ category.
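A minimal sketch of a Safe Harbor-style scrub covering a few of the 18 categories: direct identifiers are dropped outright, the zip code is truncated to its first 3 digits, the birthdate is reduced to the birth year, and ages above 89 are aggregated into a 90+ bucket. The field names and the `safe_harbor_scrub` helper are hypothetical, and a production de-identification pipeline must cover all 18 categories.

```python
def safe_harbor_scrub(record: dict) -> dict:
    """Partial Safe Harbor-style scrub of a single patient record."""
    DROP = {"name", "phone", "email", "ssn", "mrn"}  # direct identifiers
    out = {}
    for field, value in record.items():
        if field in DROP:
            continue                                 # remove entirely
        if field == "zip":
            out["zip3"] = str(value)[:3]             # first 3 digits only
        elif field == "birthdate":
            out["birth_year"] = str(value)[:4]       # keep year only
        elif field == "age":
            out["age"] = "90+" if int(value) > 89 else int(value)
        else:
            out[field] = value
    return out

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "zip": "02138",
           "birthdate": "1931-05-02", "age": 93, "diagnosis": "flu"}
print(safe_harbor_scrub(patient))
```

Note that retaining the 3-digit zip prefix is only permitted when the corresponding geographic area is populous enough, which this sketch does not check.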


Expert Determination Methods


Safe Harbor methods work for several cases of medical data de-identification, though re-identification is still possible in some cases. For example, let’s say you collect data on a patient in an urban city with a large zip code, but you have documented a rare disease that they have—a disease that only 25 people have in the entire city. Given geographic data coupled with birth year, it is highly possible that someone can re-identify this individual, which is an extremely detrimental privacy breach.


In unique cases like these, expert determination data de-identification methods are preferred. Expert determination de-identification requires a “person with appropriate knowledge of and experience with generally accepted statistical and scientific principles and methods for rendering information not individually identifiable” to evaluate a dataset and determine if the risk of re-identification of individual data in a given dataset in combination with publicly available data (voting records, etc.), is extremely small.


Expert Determination de-identification is understandably harder to complete than Safe Harbour de-identification due to the cost and feasibility of accessing an expert to verify the likelihood of re-identifying a dataset. However, in many cases, expert determination is required to ensure that re-identification of data is extremely unlikely.
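The rare-disease risk described above is commonly formalized as k-anonymity: the size of the smallest group of records sharing the same combination of quasi-identifiers. The sketch below (with hypothetical field names) computes that group size; a singleton group is exactly the kind of record an expert would flag as easily re-identifiable. This is one illustrative statistical check, not the full Expert Determination process.

```python
from collections import Counter

def smallest_group_size(records, quasi_identifiers):
    """Count how many records share each combination of quasi-identifier
    values and return the smallest group size (the k-anonymity level)."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"zip3": "021", "birth_year": 1980, "diagnosis": "flu"},
    {"zip3": "021", "birth_year": 1980, "diagnosis": "flu"},
    {"zip3": "021", "birth_year": 1955, "diagnosis": "rare disease X"},
]
print(smallest_group_size(records, ["zip3", "birth_year", "diagnosis"]))  # 1
```

A result of 1 means at least one individual is uniquely identified by the quasi-identifiers alone, so further generalization or suppression would be needed before release.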


14.7.4 Data Minimization


Data minimization involves collecting, retaining, and processing only the necessary user data to reduce privacy risks from embedded ML systems. This starts by restricting the data types and instances gathered to the bare minimum required for the system’s core functionality. For example, an object detection model only collects the images needed for that specific computer vision task. Similarly, a voice assistant would limit audio capture to specific spoken commands rather than persistently recording ambient sounds.


Where possible, temporary data that briefly resides in memory without persisting to storage provides additional minimization. A clear legal basis, like user consent, should be established for collection and retention. Sandboxing and access controls prevent unauthorized use beyond intended tasks. Retention periods should be defined based on purpose, with secure deletion procedures removing expired data.


Data minimization can be broken down into 3 categories:

  1. “Data must be adequate about the purpose that is pursued.” Data omission can limit the accuracy of models trained on the data and any general usefulness of a dataset. Data minimization requires a minimum amount of data to be collected from users while creating a dataset that adds value to others.

  2. The data collected from users must be relevant to the purpose of the data collection.

  3. Users’ data should be limited to only the necessary data to fulfill the purpose of the initial data collection. If similarly robust and accurate results can be obtained from a smaller dataset, any additional data beyond this smaller dataset should not be collected.

Emerging techniques like differential privacy, federated learning, and synthetic data generation allow useful insights to be derived from less raw user data. Performing data flow mapping and impact assessments helps identify opportunities to minimize raw data usage.


Methodologies like Privacy by Design (Cavoukian 2009) consider such minimization early in system architecture. Regulations like GDPR also mandate data minimization principles. With a multilayered approach across legal, technical, and process realms, data minimization limits risks in embedded ML products.

Cavoukian, Ann. 2009. “Privacy by Design.” Office of the Information and Privacy Commissioner.

Case Study - Performance-Based Data Minimization


Performance-based data minimization (Biega et al. 2020) focuses on expanding upon the third category of data minimization mentioned above, namely limitation. It specifically defines the robustness of model results on a given dataset by certain performance metrics, such that data should not be additionally collected if it does not significantly improve performance. Performance metrics can be divided into two categories:

Biega, Asia J., Peter Potash, Hal Daumé, Fernando Diaz, and Michèle Finck. 2020. “Operationalizing the Legal Principle of Data Minimization for Personalization.” In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, edited by Jimmy Huang, Yi Chang, Xueqi Cheng, Jaap Kamps, Vanessa Murdock, Ji-Rong Wen, and Yiqun Liu, 399–408. ACM. https://doi.org/10.1145/3397271.3401034.
  1. Global data minimization performance: satisfied if a dataset minimizes the amount of per-user data while its mean performance across all data is comparable to the mean performance of the original, unminimized dataset.

  2. Per-user data minimization performance: satisfied if a dataset minimizes the amount of per-user data while the minimum performance of individual user data is comparable to that of individual user data in the original, unminimized dataset.

Performance-based data minimization can be leveraged in machine-learning settings, including movie recommendation algorithms and e-commerce settings.


Global data minimization is much more feasible than per-user data minimization, since individual users’ losses diverge far more between the minimized and original datasets than the mean loss does.
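Under the assumption that model quality is summarized as a per-user loss (lower is better), the two criteria can be checked with a few lines. The function names and tolerance below are illustrative, not from the cited paper:

```python
def global_minimization_ok(full_losses, min_losses, tol=0.02):
    """Global criterion: mean loss on the minimized data is within
    `tol` of the mean loss on the full data."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(min_losses) - mean(full_losses) <= tol

def per_user_minimization_ok(full_losses, min_losses, tol=0.02):
    """Per-user criterion: no individual user's loss degrades by more
    than `tol`."""
    return all(m - f <= tol for f, m in zip(full_losses, min_losses))

full = [0.10, 0.12, 0.11]        # losses on the unminimized dataset
minimized = [0.10, 0.12, 0.16]   # one user degrades noticeably
print(global_minimization_ok(full, minimized))    # True: means stay close
print(per_user_minimization_ok(full, minimized))  # False: one user degrades
```

The example shows why the global criterion is easier to satisfy: a single user's degradation is averaged away globally but fails the per-user check.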


14.7.6 Privacy Concerns in Machine Learning


Generative AI


Privacy and security concerns have also risen with the public use of generative AI models, including OpenAI’s GPT-4 and other LLMs. ChatGPT, in particular, has drawn recent scrutiny over privacy, given all the personal information it collects from its users. In June 2023, a class action lawsuit was filed against OpenAI over concerns that ChatGPT was trained on proprietary medical and personal information without proper permissions or consent. As a result of these privacy concerns, many companies have prohibited their employees from accessing ChatGPT and from uploading private, company-related information to the chatbot. Further, ChatGPT is susceptible to prompt injection and other security attacks that could compromise the privacy of the proprietary data upon which it was trained.

Case Study

While ChatGPT has instituted protections to prevent people from accessing private and ethically questionable information, several individuals have successfully bypassed these protections through prompt injection and other security attacks. As demonstrated in Figure fig-role-play, users can bypass ChatGPT protections to mimic the tone of a “deceased grandmother” to learn how to bypass a web application firewall (Gupta et al. 2023).

Figure 14.9: Grandma role play to bypass safety restrictions. Credit: Gupta et al. (2023).

Further, users have also successfully used reverse psychology to manipulate ChatGPT and access information initially prohibited by the model. In Figure fig-role-play2, a user is initially prevented from learning about piracy websites through ChatGPT but can bypass these restrictions using reverse psychology.

Figure 14.10: Reverse psychology to bypass safety restrictions. Credit: Gupta et al. (2023).

Gupta, Maanak, Charankumar Akiri, Kshitiz Aryal, Eli Parker, and Lopamudra Praharaj. 2023. “From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy.” IEEE Access 11: 80218–45. https://doi.org/10.1109/access.2023.3300381.

The ease at which security attacks can manipulate ChatGPT is concerning, given the private information it was trained upon without consent. Further research on data privacy in LLMs and generative AI should focus on preventing the model from being so naive to prompt injection attacks.


Data Erasure


Many of the regulations mentioned above, including GDPR, contain a “right to be forgotten” clause. This clause essentially states that “the data subject shall have the right to obtain from the controller the erasure of personal data concerning him or her without undue delay.” However, in several cases, even if user data has been erased from a platform, the erasure is incomplete if a machine learning model has already been trained on that data for separate purposes. Through methods similar to membership inference attacks, other individuals can still infer the data a model was trained upon, even if the data’s presence was explicitly removed online.


One approach to addressing privacy concerns with machine learning training data has been through differential privacy methods. For example, by adding Laplacian noise in the training set, a model can be robust to membership inference attacks, preventing deleted data from being recovered. Another approach to preventing deleted data from being inferred from security attacks is simply retraining the model from scratch on the remaining data. Since this process is time-consuming and computationally expensive, other researchers have attempted to address privacy concerns surrounding inferring model training data through a process called machine unlearning, in which a model actively iterates on itself to remove the influence of “forgotten” data that it might have been trained on, as mentioned below.


14.8 Privacy-Preserving ML Techniques


Many techniques have been developed to preserve privacy, each addressing different aspects and data security challenges. These methods can be broadly categorized into several key areas: Differential Privacy, which focuses on statistical privacy in data outputs; Federated Learning, emphasizing decentralized data processing; Homomorphic Encryption and Secure Multi-party Computation (SMC), both enabling secure computations on encrypted or private data; Data Anonymization and Data Masking and Obfuscation, which alter data to protect individual identities; Private Set Intersection and Zero-Knowledge Proofs, facilitating secure data comparisons and validations; Decentralized Identifiers (DIDs) for self-sovereign digital identities; Privacy-Preserving Record Linkage (PPRL), linking data across sources without exposure; Synthetic Data Generation, creating artificial datasets for safe analysis; and Adversarial Learning Techniques, enhancing data or model resistance to privacy attacks.


Given the extensive range of these techniques, it is not feasible to delve into each in depth within a single course or discussion, let alone for anyone to know it all in its glorious detail. Therefore, we will explore a few specific techniques in relative detail, providing a deeper understanding of their principles, applications, and the unique privacy challenges they address in machine learning. This focused approach will give us a more comprehensive and practical understanding of key privacy-preserving methods in modern ML systems.


14.8.1 Differential Privacy


Core Idea


Differential privacy is a framework for quantifying and managing the privacy of individuals in a dataset (Dwork et al. 2006). It provides a mathematical guarantee that the privacy of individuals in the dataset will not be compromised, regardless of any additional knowledge an attacker may possess. The core idea of differential privacy is that the outcome of any analysis (like a statistical query) should be essentially the same whether any individual’s data is included in the dataset or not. This means that by observing the analysis result, one cannot determine whether any individual’s data was used in the computation.

Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith. 2006. “Calibrating Noise to Sensitivity in Private Data Analysis.” In Theory of Cryptography, edited by Shai Halevi and Tal Rabin, 265–84. Berlin, Heidelberg: Springer Berlin Heidelberg.

For example, let’s say a database contains medical records for 10 patients. We want to release statistics about the prevalence of diabetes in this sample without revealing any one patient’s condition. To do this, we could add a small amount of random noise to the true count before releasing it. If the true number of diabetes patients is 6, we might add noise from a Laplace distribution to randomly output 5, 6, or 7, each with some probability. An observer now can’t tell whether any single patient has diabetes based only on the noisy output; the query result looks similar whether each patient’s data is included or excluded. This is differential privacy. More formally, a randomized algorithm satisfies ε-differential privacy if, for any neighboring databases D and Dʹ differing in only one entry, the probability of any outcome changes by at most a factor of e^ε. A lower ε provides stronger privacy guarantees.
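The noisy-count release described above can be sketched in a few lines of Python. This is a minimal illustration rather than a production differential privacy library; the function names are ours, and a counting query is assumed to have sensitivity 1.

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) sample is the difference of two i.i.d.
    # exponential samples with rate 1/scale.
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)

def dp_count(true_count: int, epsilon: float) -> float:
    # A counting query changes by at most 1 when one record is added or
    # removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    # yields epsilon-differential privacy.
    return true_count + laplace_noise(1.0 / epsilon)

# 6 of 10 patients have diabetes; release a noisy count with epsilon = 1.0.
noisy_count = dp_count(6, epsilon=1.0)
```

A smaller ε means a larger noise scale and a stronger guarantee: the released value is typically near 6 at ε = 1.0 but far less informative at ε = 0.1.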


The Laplace Mechanism is one of the most straightforward and commonly used methods to achieve differential privacy. It involves adding noise that follows a Laplace distribution to the data or query results. Beyond the Laplace Mechanism, the general principle of adding noise is central to differential privacy: random noise is added to the data or the results of a query, calibrated to ensure the necessary privacy guarantee while keeping the data useful.


While the Laplace distribution is common, other distributions like the Gaussian can also be used. Laplace noise provides strict ε-differential privacy for low-sensitivity queries. Gaussian noise, in contrast, is used in the relaxed (ϵ, 𝛿)-differential privacy, in which epsilon and delta together define the privacy guarantee when releasing information or a model related to a dataset. Epsilon sets a bound on how much information can be learned about the data based on the output, while delta allows a small probability of the privacy guarantee being violated. The choice between Laplace, Gaussian, and other distributions depends on the specific requirements of the query and the dataset and the tradeoff between privacy and accuracy.
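As a sketch of how Gaussian noise can be calibrated (helper names are ours): a commonly cited bound, valid for ε < 1, sets the noise standard deviation to σ = Δ·√(2 ln(1.25/δ))/ε, where Δ is the query’s ℓ2-sensitivity.

```python
import math
import random

def gaussian_sigma(sensitivity: float, epsilon: float, delta: float) -> float:
    # Classic Gaussian-mechanism calibration for (epsilon, delta)-DP,
    # valid for epsilon < 1.
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def gaussian_mechanism(value: float, sensitivity: float,
                       epsilon: float, delta: float) -> float:
    # Release value with (epsilon, delta)-differential privacy.
    sigma = gaussian_sigma(sensitivity, epsilon, delta)
    return value + random.gauss(0.0, sigma)

# A sensitivity-1 statistic at epsilon = 0.5, delta = 1e-5 needs sigma ~ 9.7.
sigma = gaussian_sigma(sensitivity=1.0, epsilon=0.5, delta=1e-5)
```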


To illustrate the tradeoff between privacy and accuracy in (\(\epsilon\), \(\delta\))-differential privacy, the graphs in Figure fig-tradeoffs show the accuracy for different noise levels on the MNIST dataset, a large dataset of handwritten digits (Abadi et al. 2016). The delta value (black line; right y-axis) denotes the level of privacy relaxation (a high value means privacy is less stringent). As privacy becomes more relaxed, the accuracy of the model increases.

Figure 14.11: Privacy-accuracy tradeoff. Credit: Abadi et al. (2016).

Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318.

The key points to remember about differential privacy are the following:

  • Adding Noise: The fundamental technique in differential privacy is adding controlled random noise to the data or query results. This noise masks the contribution of individual data points.
  • Balancing Act: There’s a balance between privacy and accuracy. More noise (lower ϵ) in the data means higher privacy but less accuracy in the model’s results.
  • Universality: Differential privacy doesn’t rely on assumptions about what an attacker knows. This makes it robust against re-identification attacks, where an attacker tries to uncover individual data.
  • Applicability: It can be applied to various types of data and queries, making it a versatile tool for privacy-preserving data analysis.

Tradeoffs


There are several tradeoffs to make with differential privacy, as with any algorithm. Let’s focus on the computation-specific tradeoffs, since we care about ML systems. The following are key computational considerations when implementing differential privacy in a machine learning system:


Noise generation: Implementing differential privacy introduces several important computational tradeoffs compared to standard machine learning techniques. One major consideration is the need to securely generate random noise from distributions like Laplace or Gaussian that gets added to query results and model outputs. High-quality cryptographic random number generation can be computationally expensive.


Sensitivity analysis: Another key requirement is rigorously tracking the sensitivity of the underlying algorithms to single data points getting added or removed. This global sensitivity analysis is required to calibrate the noise levels properly. However, analyzing worst-case sensitivity can substantially increase computational complexity for complex model training procedures and data pipelines.


Privacy budget management: Managing the privacy loss budget across multiple queries and learning iterations is another bookkeeping overhead. The system must track cumulative privacy costs and compose them to account for the overall privacy guarantee. This adds a computational burden beyond just running queries or training models.
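Under basic sequential composition, the ε (and δ) costs of successive queries simply add up, so the bookkeeping can be sketched as a small budget tracker. This is a toy illustration with invented names; real privacy accountants use tighter composition bounds.

```python
class PrivacyBudget:
    """Track cumulative privacy loss under basic sequential composition,
    where the epsilon and delta costs of successive queries add up."""

    def __init__(self, epsilon_total: float, delta_total: float = 0.0):
        self.epsilon_total = epsilon_total
        self.delta_total = delta_total
        self.epsilon_spent = 0.0
        self.delta_spent = 0.0

    def charge(self, epsilon: float, delta: float = 0.0) -> None:
        # Refuse any query that would exceed the overall budget.
        if (self.epsilon_spent + epsilon > self.epsilon_total
                or self.delta_spent + delta > self.delta_total):
            raise RuntimeError("privacy budget exhausted")
        self.epsilon_spent += epsilon
        self.delta_spent += delta

budget = PrivacyBudget(epsilon_total=1.0)
budget.charge(0.3)   # first query
budget.charge(0.3)   # second query; 0.6 of 1.0 now spent
# budget.charge(0.5) would raise RuntimeError: only 0.4 remains
```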


Batch vs. online tradeoffs: For online learning systems with continuous high-volume queries, differentially private algorithms require new mechanisms to maintain utility and prevent too much accumulated privacy loss since each query can potentially alter the privacy budget. Batch offline processing is simpler from a computational perspective as it processes data in large batches, where each batch is treated as a single query. High-dimensional sparse data also increases sensitivity analysis challenges.


Distributed training: When training models using distributed or federated approaches, new cryptographic protocols are needed to track and bound privacy leakage across nodes. Secure multiparty computation with encrypted data for differential privacy adds substantial computational load.


While differential privacy provides strong formal privacy guarantees, implementing it rigorously requires additions and modifications to the machine learning pipeline at a computational cost. Managing these overheads while preserving model accuracy remains an active research area.


Case Study


Apple’s implementation of differential privacy in iOS and macOS provides a prominent real-world example of how differential privacy can be deployed at large scale. Apple wanted to collect aggregated usage statistics across its ecosystem to improve products and services, but aimed to do so without compromising individual user privacy.


To achieve this, they implemented differential privacy techniques directly on user devices to anonymize data points before sending them to Apple servers. Specifically, Apple uses the Laplace mechanism to inject carefully calibrated random noise. For example, suppose a user’s location history contains [Work, Home, Work, Gym, Work, Home]. In that case, the differentially private version might replace the exact locations with a noisy sample like [Gym, Home, Work, Work, Home, Work].


Apple tunes the Laplace noise distribution to provide a high level of privacy while preserving the utility of aggregated statistics. Increasing noise levels provides stronger privacy guarantees (lower ε values in DP terminology) but can reduce data utility. Apple’s privacy engineers empirically optimized this tradeoff based on their product goals.


Apple obtains high-fidelity aggregated statistics by aggregating hundreds of millions of noisy data points from devices. For instance, they can analyze new iOS apps’ features while masking any user’s app behaviors. On-device computation avoids sending raw data to Apple servers.


The system uses hardware-based secure random number generation to sample from the Laplace distribution on devices efficiently. Apple also had to optimize its differentially private algorithms and pipeline to operate under the computational constraints of consumer hardware.


Multiple third-party audits have verified that Apple’s system provides rigorous differential privacy protections in line with its stated policies. Of course, assumptions around composition over time and potential re-identification risks still apply. Apple’s deployment shows how differential privacy can be realized in large real-world products when backed by sufficient engineering resources.


Exercise 14.1 (Differential Privacy - TensorFlow Privacy)  


Want to train an ML model without compromising anyone’s secrets? Differential Privacy is like a superpower for your data! In this Colab, we’ll use TensorFlow Privacy to add special noise during training. This makes it way harder for anyone to determine if a single person’s data was used, even if they have sneaky ways of peeking at the model.



14.8.2 Federated Learning


Core Idea


Federated Learning (FL) is a type of machine learning in which a model is built and distributed across multiple devices or servers while keeping the training data localized. It was previously discussed in the Model Optimizations chapter; we recap it briefly here for completeness, focusing on the aspects that pertain to this chapter.


FL aims to train machine learning models across decentralized networks of devices or systems while keeping all training data localized. Figure fig-fl-lifecycle illustrates this process: each participating device leverages its local data to calculate model updates, which are then aggregated to build an improved global model. However, the raw training data is never directly shared, transferred, or compiled. This privacy-preserving approach allows for the joint development of ML models without centralizing the potentially sensitive training data in one place.

Figure 14.12: Federated Learning lifecycle. Credit: Jin et al. (2020).

Jin, Yilun, Xiguang Wei, Yang Liu, and Qiang Yang. 2020. “Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective.” arXiv Preprint arXiv:2002.11545.

One of the most common model aggregation algorithms is Federated Averaging (FedAvg), in which the global model is created by averaging the parameters of the locally trained models. While FedAvg works well with independent and identically distributed (IID) data, alternate algorithms like Federated Proximal (FedProx) are crucial in real-world applications where data is often non-IID. FedProx is designed for FL processes with significant heterogeneity in client updates due to diverse data distributions, computational capabilities, or varied amounts of data across devices.
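The FedAvg aggregation step can be sketched in a few lines (a minimal illustration with model parameters flattened to lists of floats; function and variable names are ours): each client’s parameters are weighted by its local dataset size and averaged.

```python
def fedavg(client_weights, client_sizes):
    """Federated Averaging: combine clients' model parameters into a
    global model, weighting each client by its number of local examples.
    client_weights: one parameter vector (list of floats) per client.
    client_sizes: number of training examples each client holds."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three clients with different data volumes; only parameters are shared,
# never the raw training data.
global_model = fedavg(
    client_weights=[[0.2, 1.0], [0.4, 0.0], [0.6, 2.0]],
    client_sizes=[100, 300, 600],
)
```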


By leaving the raw data distributed and exchanging only temporary model updates, federated learning provides a more secure and privacy-enhancing alternative to traditional centralized machine learning pipelines. This allows organizations and users to benefit collaboratively from shared models while maintaining control and ownership over sensitive data. The decentralized nature of FL also makes it robust to single points of failure.


Imagine a group of hospitals that want to collaborate on a study to predict patient outcomes based on their symptoms. However, they cannot share their patient data due to privacy concerns and regulations like HIPAA. Here’s how Federated Learning can help.

  • Local Training: Each hospital trains a machine learning model on patient data. This training happens locally, meaning the data never leaves the hospital’s servers.
  • Model Sharing: After training, each hospital only sends the model (specifically, its parameters or weights) to a central server. It does not send any patient data.
  • Aggregating Models: The central server aggregates these models from all hospitals into a single, more robust model. This process typically involves averaging the model parameters.
  • Benefit: The result is a machine learning model that has learned from a wide range of patient data without sharing sensitive data or removing it from its original location.

Tradeoffs


There are several system performance-related aspects of FL in machine learning systems. It is wise to understand these tradeoffs because there is no “free lunch” for preserving privacy through FL (Li et al. 2020).

Li, Tian, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. 2020. “Federated Learning: Challenges, Methods, and Future Directions.” IEEE Signal Processing Magazine 37 (3): 50–60. https://doi.org/10.1109/msp.2020.2975749.

Communication Overhead and Network Constraints: In FL, one of the most significant challenges is managing the communication overhead. This involves the frequent transmission of model updates between a central server and numerous client devices, which can be bandwidth-intensive. Both the total number of communication rounds and the size of the messages transmitted per round need to be reduced to limit this overhead, which can otherwise produce substantial network traffic, especially in scenarios with many participants. Additionally, latency becomes a critical factor: the time taken for these updates to be sent, aggregated, and redistributed can introduce delays that affect the overall training time as well as the system’s responsiveness and real-time capabilities. Managing this communication while minimizing bandwidth usage and latency is crucial for implementing FL.


Computational Load on Local Devices: FL relies on client devices (like smartphones or IoT devices, which especially matter in TinyML) for model training, which often have limited computational power and battery life. Running complex machine learning algorithms locally can strain these resources, leading to potential performance issues. Moreover, the capabilities of these devices can vary significantly, resulting in uneven contributions to the model training process. Some devices process updates faster and more efficiently than others, leading to disparities in the learning process. Balancing the computational load to ensure consistent participation and efficiency across all devices is a key challenge in FL.


Model Training Efficiency: FL’s decentralized nature can impact model training’s efficiency. Achieving convergence, where the model no longer significantly improves, can be slower in FL than in centralized training methods. This is particularly true in cases where the data is non-IID (non-independent and identically distributed) across devices. Additionally, the algorithms used for aggregating model updates play a critical role in the training process. Their efficiency directly affects the speed and effectiveness of learning. Developing and implementing algorithms that can handle the complexities of FL while ensuring timely convergence is essential for the system’s performance.


Scalability Challenges: Scalability is a significant concern in FL, especially as the number of participating devices increases. Managing and coordinating model updates from many devices adds complexity and can strain the system. Ensuring that the system architecture can efficiently handle this increased load without degrading performance is crucial. This involves not just handling the computational and communication aspects but also maintaining the quality and consistency of the model as the scale of the operation grows. A key challenge is designing FL systems that scale effectively while maintaining performance.


Data Synchronization and Consistency: Ensuring data synchronization and maintaining model consistency across all participating devices in FL is challenging. Keeping all devices synchronized with the latest model version can be difficult in environments with intermittent connectivity or devices that go offline periodically. Furthermore, maintaining consistency in the learned model, especially when dealing with a wide range of devices with different data distributions and update frequencies, is crucial. This requires sophisticated synchronization and aggregation strategies to ensure that the final model accurately reflects the learnings from all devices.


Energy Consumption: The energy consumption of client devices in FL is a critical factor, particularly for battery-powered devices like smartphones and other TinyML/IoT devices. The computational demands of training models locally can lead to significant battery drain, which might discourage continuous participation in the FL process. Balancing the computational requirements of model training with energy efficiency is essential. This involves optimizing algorithms and training processes to reduce energy consumption while achieving effective learning outcomes. Ensuring energy-efficient operation is key to user acceptance and the sustainability of FL systems.


Case Studies


Here are a couple of real-world case studies that can illustrate the use of federated learning:

Google Gboard

Google uses federated learning to improve predictions on its Gboard mobile keyboard app. The app runs a federated learning algorithm on users’ devices to learn from their local usage patterns and text predictions while keeping user data private. The model updates are aggregated in the cloud to produce an enhanced global model. This provides next-word predictions personalized to each user’s typing style while avoiding directly collecting sensitive typing data. Google reported that the federated learning approach reduced prediction errors by 25% compared to the baseline while preserving privacy.

Healthcare Research

The UK Biobank and American College of Cardiology combined datasets to train a model for heart arrhythmia detection using federated learning. The datasets could not be combined directly due to legal and privacy restrictions. Federated learning allowed collaborative model development without sharing protected health data, with only model updates exchanged between the parties. This improved model accuracy by leveraging a wider diversity of training data while meeting regulatory requirements.

Financial Services

Banks are exploring federated learning for anti-money laundering (AML) detection models. Multiple banks could jointly improve AML models without sharing confidential customer transaction data with competitors or third parties. Only the model updates need to be aggregated rather than raw transaction data. This allows access to richer training data from diverse sources while avoiding regulatory and confidentiality issues around sharing sensitive financial customer data.


These examples demonstrate how federated learning provides tangible privacy benefits and enables collaborative ML in settings where direct data sharing is impossible.


14.8.3 Machine Unlearning


Core Idea


Machine unlearning is a fairly new process that describes how the influence of a subset of training data can be removed from a model. Several methods have been used to perform machine unlearning and remove the influence of a subset of training data from the final model. A baseline approach might consist of simply fine-tuning the model for more epochs on just the data that should be remembered, to decrease the influence of the “forgotten” data. Since this approach doesn’t explicitly remove the influence of the data to be erased, membership inference attacks are still possible, so researchers have adopted other approaches to explicitly unlearn data from a model. One such approach adjusts the model’s loss function to explicitly treat the losses of the “forget set” (data to be unlearned) and the “retain set” (remaining data that should still be remembered) differently (Tarun et al. 2022; Khan and Swaroop 2021).
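A toy sketch of this loss-reweighting idea on a one-parameter linear model follows. This is not the specific method of the cited papers; the data, weighting factor, and learning rate are invented for illustration. The update descends on the retain-set loss while ascending on the forget-set loss.

```python
def grad_sq_loss(w, x, y):
    """Gradient of the squared loss (w*x - y)^2 w.r.t. the scalar weight w."""
    return 2.0 * (w * x - y) * x

def unlearn_step(w, retain, forget, lr=0.01, alpha=0.2):
    """One step on the toy unlearning objective
    L = L_retain - alpha * L_forget:
    gradient descent on the retain set, gradient ascent on the forget set."""
    g = 0.0
    for x, y in retain:
        g += grad_sq_loss(w, x, y)      # keep fitting the retained data
    for x, y in forget:
        g -= alpha * grad_sq_loss(w, x, y)  # push away from the forgotten data
    return w - lr * g

retain = [(1.0, 2.0), (2.0, 4.0)]   # consistent with w = 2
forget = [(1.0, 5.0)]               # the point we want the model to "forget"
w = 2.5
for _ in range(200):
    w = unlearn_step(w, retain, forget)
# w settles where the retain-set pull and forget-set push balance out
```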

Tarun, Ayush K, Vikram S Chundawat, Murari Mandal, and Mohan Kankanhalli. 2022. “Deep Regression Unlearning.” ArXiv Preprint abs/2210.08196. https://arxiv.org/abs/2210.08196.

Khan, Mohammad Emtiyaz, and Siddharth Swaroop. 2021. “Knowledge-Adaptation Priors.” In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), edited by Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, 19757–70. https://proceedings.neurips.cc/paper/2021/hash/a4380923dd651c195b1631af7c829187-Abstract.html.

Case Study


Some researchers demonstrate a real-life example of machine unlearning approaches applied to SOTA machine learning models by training an LLM, LLaMA2-7b, to unlearn any references to Harry Potter (Eldan and Russinovich 2023). Though this model took 184K GPU hours to pre-train, it took only 1 GPU hour of fine-tuning to erase the model’s ability to generate or recall Harry Potter-related content without noticeably compromising the accuracy of generating content unrelated to Harry Potter. Figure fig-hp-prompts demonstrates how the model output changes before (Llama-7b-chat-hf column) and after (Finetuned Llama-7b column) unlearning has occurred.

Figure 14.13: Llama unlearning Harry Potter. Credit: Eldan and Russinovich (2023).

Eldan, Ronen, and Mark Russinovich. 2023. “Who’s Harry Potter? Approximate Unlearning in LLMs.” ArXiv Preprint abs/2310.02238. https://arxiv.org/abs/2310.02238.

Other Uses

Removing adversarial data

Deep learning models have previously been shown to be vulnerable to adversarial attacks, in which the attacker generates adversarial data similar to the original training data, to the point that a human cannot tell the difference between the real and fabricated data. The adversarial data results in the model outputting incorrect predictions, which could have detrimental consequences in various applications, including healthcare diagnosis. Machine unlearning has been used to unlearn the influence of adversarial data to prevent these incorrect predictions from occurring and causing harm.


14.8.4 Homomorphic Encryption


Core Idea


Homomorphic encryption is a form of encryption that allows computations to be carried out on ciphertext, generating an encrypted result that, when decrypted, matches the result of the same operations performed on the plaintext. For example, multiplying two numbers encrypted with homomorphic encryption produces an encrypted product that decrypts to the actual product of the two numbers. This means that data can be processed in an encrypted form, and only the resulting output needs to be decrypted, significantly enhancing data security, especially for sensitive information.


Homomorphic encryption enables outsourced computation on encrypted data without exposing the data itself to the external party performing the operations. However, only certain computations like addition and multiplication are supported in partially homomorphic schemes. Fully homomorphic encryption (FHE) that can handle any computation is even more complex. The number of possible operations is limited before noise accumulation corrupts the ciphertext.


To use homomorphic encryption across different entities, carefully generated public keys must be exchanged for operations across separately encrypted data. This advanced encryption technique enables previously impossible secure computation paradigms but requires expertise to implement correctly for real-world systems.


Benefits


Homomorphic encryption enables machine learning model training and inference on encrypted data, ensuring that sensitive inputs and intermediate values remain confidential. This is critical in healthcare, finance, genetics, and other domains that increasingly rely on ML to analyze sensitive and regulated datasets containing billions of personal records.


Homomorphic encryption thwarts attacks like model extraction and membership inference that could expose private data used in ML workflows. It provides an alternative to TEEs using hardware enclaves for confidential computing. However, current schemes have high computational overheads and algorithmic limitations that constrain real-world applications.


Homomorphic encryption realizes the decades-old vision of secure multiparty computation by allowing computation on ciphertexts. Conceptualized in the 1970s, the first fully homomorphic cryptosystems emerged in 2009, enabling arbitrary computations. Ongoing research is making these techniques more efficient and practical.


Homomorphic encryption shows great promise in enabling privacy-preserving machine learning under emerging data regulations. However, given constraints, one should carefully evaluate its applicability against other confidential computing approaches. Extensive resources exist to explore homomorphic encryption and track progress in easing adoption barriers.


Mechanics

  1. Data Encryption: Before data is processed or sent to an ML model, it is encrypted using a homomorphic encryption scheme and public key. For example, encrypting numbers \(x\) and \(y\) generates ciphertexts \(E(x)\) and \(E(y)\).
  2. Computation on Ciphertext: The ML algorithm processes the encrypted data directly. For instance, multiplying the ciphertexts \(E(x)\) and \(E(y)\) generates \(E(xy)\). More complex model training can also be done on ciphertexts.
  3. Result Decryption: The result \(E(xy)\) remains encrypted and can only be decrypted by someone with the corresponding private key to reveal the actual product \(xy\).

Only authorized parties with the private key can decrypt the final outputs, protecting the intermediate state. However, noise accumulates with each operation, preventing further computation without decryption.
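These mechanics can be made concrete with a toy Paillier cryptosystem. Note that Paillier is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts (schemes such as unpadded RSA are instead multiplicative, matching the E(x)·E(y) = E(xy) example earlier). This sketch uses deliberately tiny parameters and is for illustration only, never for production use.

```python
import math

# Toy Paillier setup with small fixed primes (illustration only).
p, q = 1_000_003, 1_000_033
n = p * q
n_sq = n * n
g = n + 1                      # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # modular inverse; valid because g = n + 1

def encrypt(m: int, r: int) -> int:
    """E(m) = g^m * r^n mod n^2, with randomizer r coprime to n."""
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (((pow(c, lam, n_sq) - 1) // n) * mu) % n

cx = encrypt(42, r=12345)
cy = encrypt(58, r=67890)
c_sum = (cx * cy) % n_sq       # homomorphic addition on ciphertexts
assert decrypt(c_sum) == 100   # 42 + 58, computed without decrypting inputs
```

Only the holder of the private values (p, q, and hence lam and mu) can decrypt; anyone with the public key (n, g) can encrypt and combine ciphertexts.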


Beyond healthcare, homomorphic encryption enables confidential computing for applications like financial fraud detection, insurance analytics, genetics research, and more. It offers an alternative to techniques like secure multiparty computation and TEEs. Ongoing research aims to improve its efficiency and capabilities.


Tools like HElib, SEAL, and TensorFlow HE provide libraries for exploring and implementing homomorphic encryption in real-world machine learning pipelines.


Tradeoffs


For many real-time and embedded applications, fully homomorphic encryption remains impractical for the following reasons.


Computational Overhead: Homomorphic encryption imposes very high computational overheads, often resulting in slowdowns of over 100x for real-world ML applications. This makes it impractical for many time-sensitive or resource-constrained uses. Optimized hardware and parallelization can help but not eliminate this issue.


Complexity of Implementation: The sophisticated algorithms require deep cryptographic expertise to implement correctly. Nuances like format compatibility with floating-point ML models and scalable key management pose hurdles. This complexity hinders widespread practical adoption.


Algorithmic Limitations: Current schemes restrict the functions and depth of computations supported, limiting the models and data volumes that can be processed. Ongoing research is pushing these boundaries, but restrictions remain.


Hardware Acceleration: Homomorphic encryption requires specialized hardware, such as secure processors or coprocessors with TEEs, which adds design and infrastructure costs.


Hybrid Designs: Rather than encrypting entire workflows, selective application of homomorphic encryption to critical subcomponents can achieve protection while minimizing overheads.


Exercise 14.2 (Homomorphic Encryption)  


Ready to unlock the power of encrypted computation? Homomorphic encryption is like a magic trick for your data! In this Colab, we’ll learn how to do calculations on secret numbers without ever revealing them. Imagine training a model on data you can’t even see – that’s the power of this mind-bending technology.


14.8.5 Secure Multiparty Computation


Core Idea


The overarching goal of MPC is to enable different parties to jointly compute a function over their inputs while keeping those inputs private. For example, two organizations may want to collaborate on training a machine learning model by combining their respective datasets, yet they cannot directly reveal that data due to privacy or confidentiality constraints. MPC aims to provide protocols and techniques that allow them to achieve the benefits of pooled data for model accuracy without compromising the privacy of each organization’s sensitive data.


At a high level, MPC works by carefully splitting the computation into parts that each party can execute independently using their private input. The results are then combined to reveal only the final output of the function and nothing about the intermediate values. Cryptographic techniques provably guarantee that the partial results remain private.


Let’s take a simple example of an MPC protocol. One of the most basic MPC protocols is the secure addition of two numbers. Each party splits its input into random shares that are secretly distributed. They exchange the shares and locally compute the sum of the shares, which reconstructs the final sum without revealing the individual inputs. For example, if Alice has input x and Bob has input y:

  1. Alice generates random \(x_1\) and sets \(x_2 = x - x_1\)

  2. Bob generates random \(y_1\) and sets \(y_2 = y - y_1\)

  3. Alice sends \(x_1\) to Bob, and Bob sends \(y_1\) to Alice (keeping \(x_2\) and \(y_2\) secret)

  4. Alice computes \(x_2 + y_1 = s_1\), and Bob computes \(x_1 + y_2 = s_2\)

  5. \(s_1 + s_2 = x + y\) is the final sum, without revealing \(x\) or \(y\).

Alice’s and Bob’s individual inputs (\(x\) and \(y\)) remain private, and each party reveals only one share of their original input. The random splits ensure no information about the original numbers is disclosed.
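The secure-addition steps above can be sketched in a few lines. This is a single-process simulation with made-up inputs; in a real deployment each party's arithmetic runs on its own machine and only the shares cross the network.

```python
import secrets

P = 2**61 - 1  # shares live in a finite field, so one share alone reveals nothing

def split(value):
    """Split a private value into two additive shares modulo P."""
    share1 = secrets.randbelow(P)
    return share1, (value - share1) % P

x, y = 42, 100            # Alice's and Bob's private inputs
x1, x2 = split(x)         # Alice keeps x2 and sends x1 to Bob
y1, y2 = split(y)         # Bob keeps y2 and sends y1 to Alice

s1 = (x2 + y1) % P        # Alice's partial sum
s2 = (x1 + y2) % P        # Bob's partial sum
assert (s1 + s2) % P == x + y   # only the final sum is revealed
```

Because `x1` and `y1` are uniformly random field elements, observing a single share gives an adversary no information about the underlying input.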


Secure Comparison: Another basic operation is a secure comparison of two numbers, determining which is greater than the other. This can be done using techniques like Yao’s Garbled Circuits, where the comparison circuit is encrypted to allow joint evaluation of the inputs without leaking them.


Secure Matrix Multiplication: Matrix operations like multiplication are essential for machine learning. MPC techniques like additive secret sharing can be used to split matrices into random shares, compute products on the shares, and then reconstruct the result.


Secure Model Training: Distributed machine learning training algorithms like federated averaging can be made secure using MPC. Model updates computed on partitioned data at each node are secretly shared between nodes and aggregated to train the global model without exposing individual updates.


The core idea behind MPC protocols is to divide the computation into steps that can be executed jointly without revealing intermediate sensitive data. This is accomplished by combining cryptographic techniques like secret sharing, homomorphic encryption, oblivious transfer, and garbled circuits. MPC protocols enable the collaborative computation of sensitive data while providing provable privacy guarantees. This privacy-preserving capability is essential for many machine learning applications today involving multiple parties that cannot directly share their raw data.


The main approaches used in MPC include:

  • Homomorphic encryption: Special encryption allows computations to be carried out on encrypted data without decrypting it.

  • Secret sharing: The private data is divided into random shares distributed to each party. Computations are done locally on the shares and finally reconstructed.

  • Oblivious transfer: A protocol where a receiver obtains a subset of data from a sender, but the sender does not know which specific data was transferred.

  • Garbled circuits: The function to be computed is represented as a Boolean circuit that is encrypted (“garbled”) to allow joint evaluation without revealing inputs.

Tradeoffs


While MPC protocols provide strong privacy guarantees, they come at a high computational cost compared to plain computations. Every secure operation, like addition, multiplication, or comparison, requires orders of magnitude more processing than the equivalent unencrypted operation. This overhead stems from the underlying cryptographic techniques:

  • In partially homomorphic encryption, each computation on ciphertexts requires costly public-key operations. Fully homomorphic encryption has even higher overheads.

  • Secret sharing divides data into multiple shares, so even basic operations require manipulating many shares.

  • Oblivious transfer and garbled circuits add masking and encryption to hide data access patterns and execution flows.

  • MPC systems require extensive communication and interaction between parties to compute on shares/ciphertexts jointly.

As a result, MPC protocols can slow down computations by 3-4 orders of magnitude compared to plain implementations. This becomes prohibitively expensive for large datasets and models. Therefore, training machine learning models on encrypted data using MPC remains infeasible today for realistic dataset sizes due to the overhead. Clever optimizations and approximations are needed to make MPC practical.


Ongoing MPC research aims to close this efficiency gap through cryptographic advances, new algorithms, trusted hardware like SGX enclaves, and leveraging accelerators like GPUs/TPUs. However, in the foreseeable future, some degree of approximation and performance tradeoff is needed to scale MPC to meet the demands of real-world machine learning systems.


14.8.6 Synthetic Data Generation


Core Idea


Synthetic data generation has emerged as an important privacy-preserving machine learning approach that allows models to be developed and tested without exposing real user data. The key idea is to train generative models on real-world datasets and then sample from these models to synthesize artificial data that statistically matches the original data distribution but contains no actual user information. For example, a GAN could be trained on a dataset of sensitive medical records to learn the underlying patterns and then used to sample synthetic patient data.


The primary challenge of synthesizing data is ensuring that adversaries cannot re-identify individuals in the original dataset. A naive approach, simply adding noise to the original dataset, still risks privacy leakage. When noise is added under differential privacy, sophisticated mechanisms calibrate the amount and distribution of noise to the data’s sensitivity. Through these mathematically rigorous frameworks, differential privacy provides quantifiable privacy guarantees, which is the primary goal of this privacy-preserving technique. Beyond preserving privacy, synthetic data also combats data availability issues such as imbalanced datasets, data scarcity, and limited examples for anomaly detection.
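For intuition, the sensitivity-calibrated noise mentioned above can be sketched with the Laplace mechanism, a standard differential privacy building block. The count query below is a made-up example; real systems track a privacy budget across many queries.

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Differential privacy calibrates the noise scale to sensitivity / epsilon.
    b = sensitivity / epsilon
    # The difference of two exponential samples is Laplace(0, b) distributed.
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_value + noise

# Releasing a count query: adding or removing one person changes a count
# by at most 1, so the sensitivity is 1.
noisy_count = laplace_mechanism(128, sensitivity=1, epsilon=0.5)
```

Smaller `epsilon` means stronger privacy but larger noise, which is exactly the privacy-utility tradeoff discussed throughout this section.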


Researchers can freely share this synthetic data and collaborate on modeling without revealing private medical information. Well-constructed synthetic data protects privacy while providing utility for developing accurate models. Key techniques to prevent reconstruction of the original data include adding differential privacy noise during training, enforcing plausibility constraints, and using multiple diverse generative models. Here are some common approaches for generating synthetic data:

  • Generative Adversarial Networks (GANs): GANs are an unsupervised learning approach in which two neural networks compete against each other in a game. Figure 14.14 gives an overview of the GAN system. The generator network (red box) produces the synthetic data, and the discriminator network (yellow box) evaluates its authenticity by distinguishing the generator’s fake data from real data. Both networks learn and update their parameters based on the results; the discriminator acts as a metric of how similar the fake and real data are. GANs are highly effective at generating realistic data and are a popular approach for synthetic data generation.

Figure 14.14: Flowchart of GANs. Credit: Rosa and Papa (2021).

Rosa, Gustavo H. de, and João P. Papa. 2021. “A Survey on Text Generation Using Generative Adversarial Networks.” Pattern Recognition 119 (November): 108098. https://doi.org/10.1016/j.patcog.2021.108098.

  • Variational Autoencoders (VAEs): VAEs are neural networks capable of learning complex probability distributions while balancing data generation quality and computational efficiency. They encode data into a latent space, where they learn the distribution, and then decode the data back.

  • Data Augmentation: This involves transforming existing data to create new, altered data. For example, flipping, rotating, and scaling (uniformly or non-uniformly) original images can help create a more diverse, robust image dataset before training an ML model.

  • Simulations: Mathematical models can simulate real-world systems or processes to mimic real-world phenomena. This is highly useful in scientific research, urban planning, and economics.
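The data augmentation approach above can be sketched with plain Python, treating a tiny nested list as a grayscale image; real pipelines would use libraries like Pillow or torchvision, but the geometry is the same.

```python
# Simple geometric augmentations on a toy 2x2 "image" (no libraries needed).
image = [[1, 2],
         [3, 4]]

def hflip(img):
    # Mirror each row left-to-right.
    return [row[::-1] for row in img]

def rot90(img):
    # Reverse row order, then transpose: a 90-degree clockwise rotation.
    return [list(row) for row in zip(*img[::-1])]

augmented = [image, hflip(image), rot90(image)]  # three variants from one sample
```

Each transform yields a new labeled example at essentially zero collection cost, which is why augmentation is a cheap first step before heavier generative approaches.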

Benefits


While synthetic data may be necessary due to privacy or compliance risks, it is also widely used when available data is of poor quality, scarce, or inaccessible. Synthetic data streamlines robust model training, testing, and deployment, making development more efficient and effective. It allows researchers to share models more widely without breaching privacy laws and regulations, and it facilitates collaboration among users of the same dataset, helping to broaden the capabilities and advancements of ML research.


There are several motivations for using synthetic data in machine learning:

  • Privacy and compliance: Synthetic data avoids exposing personal information, allowing more open sharing and collaboration. This is important when working with sensitive datasets like healthcare records or financial information.

  • Data scarcity: When insufficient real-world data is available, synthetic data can augment training datasets. This improves model accuracy when limited data is a bottleneck.

  • Model testing: Synthetic data provides privacy-safe sandboxes for testing model performance, debugging issues, and monitoring for bias.

  • Data labeling: High-quality labeled training data is often scarce and expensive. Synthetic data can help auto-generate labeled examples.

Tradeoffs


While synthetic data aims to remove any evidence of the original dataset, privacy leakage is still a risk since the synthetic data mimics the original data. The statistical information and distribution are similar, if not the same, between the original and synthetic data. By resampling from the distribution, adversaries may still be able to recover the original training samples. Due to their inherent learning processes and complexities, neural networks might accidentally reveal sensitive information about the original training data.


A core challenge with synthetic data is the potential gap between synthetic and real-world data distributions. Despite advancements in generative modeling techniques, synthetic data may only partially capture the complexity, diversity, and nuanced patterns of real data. This can limit the utility of synthetic data for robustly training machine learning models. Rigorously evaluating synthetic data quality through adversarial methods and comparing model performance against real-data benchmarks helps assess and improve fidelity. Inherently, however, synthetic data remains an approximation.


Another critical concern is the privacy risks of synthetic data. Generative models may leak identifiable information about individuals in the training data, which could enable reconstruction of private information. Emerging adversarial attacks demonstrate the challenges in preventing identity leakage from synthetic data generation pipelines. Techniques like differential privacy can help safeguard privacy but come with tradeoffs in data utility. There is an inherent tension between producing useful synthetic data and fully protecting sensitive training data that must be balanced.


Additional pitfalls of synthetic data include amplified biases, labeling difficulties, the computational overhead of training generative models, storage costs, and failure to account for out-of-distribution novel data. While these are secondary to the core synthetic-real gap and privacy risks, they remain important considerations when evaluating the suitability of synthetic data for particular machine-learning tasks. As with any technique, the advantages of synthetic data come with inherent tradeoffs and limitations that require thoughtful mitigation strategies.


14.8.7 Summary


While all the techniques we have discussed thus far aim to enable privacy-preserving machine learning, they involve distinct mechanisms and tradeoffs. Factors like computational constraints, required trust assumptions, threat models, and data characteristics help guide the selection process for a particular use case. However, finding the right balance between privacy, accuracy, and efficiency necessitates experimentation and empirical evaluation for many applications. Below is a comparison table of the key privacy-preserving machine learning techniques and their pros and cons:

Technique | Pros | Cons
--- | --- | ---
Differential Privacy | Strong formal privacy guarantees; robust to auxiliary data attacks; versatile for many data types and analyses | Accuracy loss from noise addition; computational overhead for sensitivity analysis and noise generation
Federated Learning | Allows collaborative learning without sharing raw data; data remains decentralized, improving security; no need for encrypted computation | Increased communication overhead; potentially slower model convergence; uneven client device capabilities
Secure Multi-Party Computation | Enables joint computation on sensitive data; provides cryptographic privacy guarantees; flexible protocols for various functions | Very high computational overhead; complexity of implementation; algorithmic constraints on function depth
Homomorphic Encryption | Allows computation on encrypted data; prevents intermediate state exposure | Extremely high computational cost; complex cryptographic implementations; restrictions on function types
Synthetic Data Generation | Enables data sharing without leakage; mitigates data scarcity problems | Synthetic-real gap in distributions; potential for reconstructing private data; biases and labeling challenges

14.9 Conclusion


Machine learning hardware security is critical as embedded ML systems are increasingly deployed in safety-critical domains like medical devices, industrial controls, and autonomous vehicles. We have explored various threats spanning hardware bugs, physical attacks, side channels, supply chain risks, etc. Defenses like TEEs, Secure Boot, PUFs, and hardware security modules provide multilayer protection tailored for resource-constrained embedded devices.


However, continual vigilance is essential to track emerging attack vectors and address potential vulnerabilities through secure engineering practices across the hardware lifecycle. As ML and embedded ML spread, maintaining rigorous security foundations that match the field’s accelerating pace of innovation remains imperative.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

15  Responsible AI


Resources: Slides, Labs, Exercises

DALL·E 3 Prompt: Illustration of responsible AI in a futuristic setting with the universe in the backdrop: A human hand or hands nurturing a seedling that grows into an AI tree, symbolizing a neural network. The tree has digital branches and leaves, resembling a neural network, to represent the interconnected nature of AI. The background depicts a future universe where humans and animals with general intelligence collaborate harmoniously. The scene captures the initial nurturing of the AI as a seedling, emphasizing the ethical development of AI technology in harmony with humanity and the universe.

As machine learning models grow across various domains, these algorithms have the potential to perpetuate historical biases, breach privacy, or enable unethical automated decisions if developed without thoughtful consideration of their societal impacts. Even systems created with good intentions can ultimately discriminate against certain demographic groups, enable surveillance, or lack transparency into their behaviors and decision-making processes. As such, machine learning engineers and companies have an ethical responsibility to proactively ensure principles of fairness, accountability, safety, and transparency are reflected in their models to prevent harm and build public trust.

Learning Objectives

  • Understand responsible AI’s core principles and motivations, including fairness, transparency, privacy, safety, and accountability.

  • Learn technical methods for implementing responsible AI principles, such as detecting dataset biases, building interpretable models, adding noise for privacy, and testing model robustness.

  • Recognize organizational and social challenges to achieving responsible AI, including data quality, model objectives, communication, and job impacts.

  • Gain knowledge of ethical frameworks and considerations for AI systems, spanning AI safety, human autonomy, and economic consequences.

  • Appreciate the increased complexity and costs of developing ethical, trustworthy AI systems compared to unprincipled AI.

15.1 Introduction


Machine learning models are increasingly used to automate decisions in high-stakes social domains like healthcare, criminal justice, and employment. However, without deliberate care, these algorithms can perpetuate biases, breach privacy, or cause other harm. For instance, a loan approval model solely trained on data from high-income neighborhoods could disadvantage applicants from lower-income areas. This motivates the need for responsible machine learning - creating fair, accountable, transparent, and ethical models.


Several core principles underlie responsible ML. Fairness ensures models do not discriminate based on gender, race, age, and other attributes. Explainability enables humans to interpret model behaviors and improve transparency. Robustness and safety techniques prevent vulnerabilities like adversarial examples. Rigorous testing and validation help reduce unintended model weaknesses or side effects.


Implementing responsible ML presents both technical and ethical challenges. Developers must grapple with defining fairness mathematically, balancing competing objectives like accuracy vs interpretability, and securing quality training data. Organizations must also align incentives, policies, and culture to uphold ethical AI.


This chapter will equip you to critically evaluate AI systems and contribute to developing beneficial and ethical machine learning applications by covering the foundations, methods, and real-world implications of responsible ML. The responsible ML principles discussed are crucial knowledge as algorithms mediate more aspects of human society.


15.2 Definition


Responsible AI is about developing AI that positively impacts society in line with human ethics and values. There is no universally agreed-upon definition of “responsible AI,” but it is commonly described as designing, developing, and deploying artificial intelligence systems in an ethical, socially beneficial way. The core goal is to create AI that is trustworthy, unbiased, fair, transparent, accountable, and safe. Responsible AI is generally considered to encompass principles such as:

  • Fairness: Avoiding biases, discrimination, and potential harm to certain groups or populations

  • Explainability: Enabling humans to understand and interpret how AI models make decisions

  • Transparency: Openly communicating how AI systems operate, are built, and are evaluated

  • Accountability: Having processes to determine responsibility and liability for AI failures or negative impacts

  • Robustness: Ensuring AI systems are secure, reliable, and behave as intended

  • Privacy: Protecting sensitive user data and adhering to privacy laws and ethics

Putting these principles into practice involves technical techniques, corporate policies, governance frameworks, and moral philosophy. There are also ongoing debates around defining ambiguous concepts like fairness and determining how to balance competing objectives.


15.3 Principles and Concepts


15.3.1 Transparency and Explainability


Machine learning models are often criticized as mysterious “black boxes” - opaque systems where it’s unclear how they arrived at particular predictions or decisions. For example, an AI system called COMPAS used to assess criminal recidivism risk in the US was found to be racially biased against black defendants. Still, the opacity of the algorithm made it difficult to understand and fix the problem. This lack of transparency can obscure biases, errors, and deficiencies.


Explaining model behaviors helps engender trust from the public and domain experts and enables identifying issues to address. Interpretability techniques like LIME, Shapley values, and saliency maps empower humans to understand and validate model logic. Laws like the EU’s GDPR also mandate transparency, which requires explainability for certain automated decisions. Overall, transparency and explainability are critical pillars of responsible AI.


15.3.2 Fairness, Bias, and Discrimination


ML models trained on historically biased data often perpetuate and amplify those prejudices. Healthcare algorithms have been shown to disadvantage black patients by underestimating their needs (Obermeyer et al. 2019). Facial recognition has been shown to be less accurate for women and people of color. Such algorithmic discrimination can negatively impact people’s lives in profound ways.

Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–53. https://doi.org/10.1126/science.aax2342.

Different philosophical perspectives also exist on fairness - for example, is it fairer to treat all individuals equally or try to achieve equal outcomes for groups? Ensuring fairness requires proactively detecting and mitigating biases in data and models. However, achieving perfect fairness is tremendously difficult due to contrasting mathematical definitions and ethical perspectives. Still, promoting algorithmic fairness and non-discrimination is a key responsibility in AI development.
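One way to make "equal treatment of groups" concrete is a demographic parity check, sketched below with made-up predictions and group labels; real audits also consider richer metrics such as equalized odds.

```python
# Hypothetical illustration of demographic parity: the gap in
# positive-prediction rates between groups (0.0 means parity).
def demographic_parity_gap(predictions, groups):
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Made-up binary approval decisions for two demographic groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 vs 0.25, so the gap is 0.5
```

A large gap flags a potential disparity worth investigating, though as the text notes, which fairness metric to enforce is itself a contested philosophical choice.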


15.3.3 Privacy and Data Governance


Maintaining individuals’ privacy is an ethical obligation and legal requirement for organizations deploying AI systems. Regulations like the EU’s GDPR mandate data privacy protections and rights, such as the ability to access and delete one’s data.


However, maximizing the utility and accuracy of data for training models can conflict with preserving privacy: modeling disease progression could benefit from access to patients’ full genomes, but sharing such data widely violates privacy.


Responsible data governance involves carefully anonymizing data, controlling access with encryption, getting informed consent from data subjects, and collecting the minimum data needed. Honoring privacy is challenging but critical as AI capabilities and adoption expand.


15.3.4 Safety and Robustness


Putting AI systems into real-world operation requires ensuring they are safe, reliable, and robust, especially for human interaction scenarios. Self-driving cars from Uber and Tesla have been involved in deadly crashes due to unsafe behaviors.


Adversarial attacks that subtly alter input data can also fool ML models and cause dangerous failures if systems are not resistant. Deepfakes represent another emerging threat area.
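A minimal sketch of such an attack on a toy logistic classifier, using the fast-gradient-sign idea; the weights and input below are made-up values, not a real deployed model.

```python
import math

# Hypothetical linear logistic classifier with made-up toy weights.
w = [2.0, -1.0]

def predict(x):
    logit = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-logit))   # probability of the positive class

x = [1.0, 0.5]                 # clean input: classified as positive
eps = 0.6                      # perturbation budget per feature

# Fast-gradient-sign step: the gradient of the logit w.r.t. x is w, so
# stepping each feature against sign(w) maximally lowers the score.
x_adv = [xi - eps * math.copysign(1, wi) for xi, wi in zip(x, w)]
# A small, targeted perturbation flips the predicted class of x_adv.
```

The perturbation is small per feature, yet it flips the decision, which is why adversarially trained or input-validated models are needed in safety-critical settings.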


A deepfake video of Barack Obama that went viral a few years ago shows how convincing such fabricated media can be.


Promoting safety requires extensive testing, risk analysis, human oversight, and designing systems that combine multiple weak models to avoid single points of failure. Rigorous safety mechanisms are essential for the responsible deployment of capable AI.


15.3.5 Accountability and Governance


When AI systems eventually fail or produce harmful outcomes, mechanisms must exist to address resultant issues, compensate affected parties, and assign responsibility. Both corporate accountability policies and government regulations are indispensable for responsible AI governance. For instance, Illinois’ Artificial Intelligence Video Interview Act requires companies to disclose and obtain consent for AI video analysis, promoting accountability.


Without clear accountability, even harms caused unintentionally could go unresolved, furthering public outrage and distrust. Oversight boards, impact assessments, grievance redress processes, and independent audits promote responsible development and deployment.


15.4 Cloud, Edge & Tiny ML


While these principles broadly apply across AI systems, certain responsible AI considerations are unique or pronounced when dealing with machine learning on embedded devices versus traditional server-based modeling. Therefore, we present a high-level taxonomy comparing responsible AI considerations across cloud, edge, and TinyML systems.


15.4.1 Summary


The table below summarizes how responsible AI principles manifest differently across cloud, edge, and TinyML architectures and how core considerations tie into their unique capabilities and limitations. Each environment’s constraints and tradeoffs shape how we approach transparency, accountability, governance, and other pillars of responsible AI.

Principle | Cloud ML | Edge ML | TinyML
--- | --- | --- | ---
Explainability | Complex models supported | Lightweight required | Severe limits
Fairness | Broad data available | On-device biases | Limited data labels
Privacy | Cloud data vulnerabilities | More sensitive data | Data dispersed
Safety | Hacking threats | Real-world interaction | Autonomous devices
Accountability | Corporate policies | Supply chain issues | Component tracing
Governance | External oversight feasible | Self-governance needed | Protocol constraints

15.4.2 Explainability


For cloud-based machine learning, explainability techniques can leverage significant compute resources, enabling complex methods like SHAP values or sampling-based approaches to interpret model behaviors. For example, Microsoft’s InterpretML toolkit provides explainability techniques tailored for cloud environments.


However, edge ML operates on resource-constrained devices, requiring more lightweight explainability methods that can run locally without excessive latency. Techniques like LIME (Ribeiro, Singh, and Guestrin 2016) approximate model explanations using linear models or decision trees to avoid expensive computations, which makes them ideal for resource-constrained devices. However, LIME requires training hundreds to even thousands of models to generate good explanations, which is often infeasible given edge computing constraints. In contrast, saliency-based methods are often much faster in practice, only requiring a single forward pass through the network to estimate feature importance. This greater efficiency makes such methods better suited to edge devices with limited compute resources where low-latency explanations are critical.
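The perturbation-based idea behind LIME can be sketched as follows: sample points around an input, query the black-box model, and fit a local linear surrogate whose coefficients serve as feature importances. The quadratic `black_box` below is a made-up stand-in for a real model, and this is a simplified sketch of the LIME idea rather than the full algorithm (which also weights samples by proximity).

```python
import numpy as np

def black_box(X):
    # Hypothetical opaque model: nonlinear in feature 0, linear in feature 1.
    return 3 * X[..., 0] ** 2 + 0.5 * X[..., 1]

def local_linear_explanation(model, x0, n=2000, sigma=0.05, seed=0):
    """Fit a linear surrogate around x0; its slopes act as local importances."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(0.0, sigma, size=(n, x0.size))  # local perturbations
    y = model(X)
    A = np.hstack([X, np.ones((n, 1))])                 # intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]                                    # drop the intercept

x0 = np.array([1.0, 1.0])
weights = local_linear_explanation(black_box, x0)
# Locally at x0 = (1, 1), the slope w.r.t. feature 0 is about 6 and
# the slope w.r.t. feature 1 is about 0.5.
```

The cost is visible in the sketch: thousands of model queries per explanation, which is exactly the overhead that makes LIME-style methods hard to run on edge devices.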


Given tiny hardware capabilities, embedded systems pose the most significant challenges for explainability. More compact models and limited data make inherent model transparency easier to achieve, but explaining decisions may still not be feasible on size- and power-optimized microcontrollers. DARPA’s Transparent Computing program aims to develop extremely low-overhead explainability, especially for TinyML devices like sensors and wearables.


15.4.3 Fairness


For cloud machine learning, vast datasets and computing power enable detecting biases across large heterogeneous populations and mitigating them through techniques like re-weighting data samples. However, biases may emerge from the broad behavioral data used to train cloud models. Meta’s Fairness Flow framework, for example, helps assess ML fairness at scale.


Edge ML relies on limited on-device data, making analyzing biases across diverse groups harder. However, edge devices interact closely with individuals, providing an opportunity to adapt locally for fairness. Google’s Federated Learning distributes model training across devices to incorporate individual differences.


TinyML poses unique challenges for fairness with highly dispersed specialized hardware and minimal training data. Bias testing is difficult across diverse devices. Collecting representative data from many devices to mitigate bias has scale and privacy hurdles. DARPA’s Assured Neuro Symbolic Learning and Reasoning (ANSR) efforts are geared toward developing fairness techniques given extreme hardware constraints.


15.4.4 Safety


Key safety risks for cloud ML include model hacking, data poisoning, and malware disrupting cloud services. Robustness techniques like adversarial training, anomaly detection, and diversified models aim to harden cloud ML against attacks. Redundancy can help prevent single points of failure.


Edge ML and TinyML interact with the physical world, so reliability and safety validation are critical. Rigorous testing platforms like Foretellix synthetically generate edge scenarios to validate safety. TinyML safety risks are magnified by autonomous devices operating with limited supervision. TinyML safety often relies on collective coordination: swarms of drones maintain safety through redundancy. Physical control barriers also constrain unsafe TinyML device behaviors.


In summary, safety is crucial but manifests differently in each domain. Cloud ML guards against hacking, edge ML interacts physically, so reliability is key, and TinyML leverages distributed coordination for safety. Understanding the nuances guides appropriate safety techniques.


15.4.5 Accountability


Cloud ML’s accountability centers on corporate practices like responsible AI committees, ethical charters, and processes to address harmful incidents. Third-party audits and external government oversight promote cloud ML accountability.


Edge ML accountability is more complex with distributed devices and supply chain fragmentation. Companies are accountable for devices, but components come from various vendors. Industry standards help coordinate edge ML accountability across stakeholders.


With TinyML, accountability mechanisms must be traced across long, complex supply chains of integrated circuits, sensors, and other hardware. TinyML certification schemes help track component provenance. Trade associations should ideally promote shared accountability for ethical TinyML.


15.4.6 Governance


Organizations institute internal governance for cloud ML, such as ethics boards, audits, and model risk management. But external governance also oversees cloud ML, like regulations on bias and transparency such as the AI Bill of Rights, General Data Protection Regulation (GDPR), and California Consumer Privacy Act (CCPA). Third-party auditing supports cloud ML governance.


Edge ML is more decentralized, requiring responsible self-governance by developers and companies deploying models locally. Industry associations coordinate governance across edge ML vendors, and open software helps align incentives for ethical edge ML.


Extreme decentralization and complexity make external governance infeasible with TinyML. TinyML relies on protocols and standards for self-governance baked into model design and hardware. Cryptography enables the provable trustworthiness of TinyML devices.


15.4.7 Privacy


For cloud ML, vast amounts of user data are concentrated in the cloud, creating risks of exposure through breaches. Differential privacy techniques add noise to cloud data to preserve privacy. Strict access controls and encryption protect cloud data at rest and in transit.
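A minimal sketch of this noise-adding idea, using the Laplace mechanism on a simple counting query (the count and epsilon below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(true_count, epsilon, rng):
    # A counting query has sensitivity 1 (adding or removing one person
    # changes it by at most 1), so Laplace noise with scale 1/epsilon
    # gives epsilon-differential privacy.
    return true_count + rng.laplace(scale=1.0 / epsilon)

true_count = 1000
releases = [laplace_count(true_count, epsilon=0.5, rng=rng)
            for _ in range(5000)]
# Each single release hides any individual's presence; only in aggregate
# do the noisy answers concentrate around the true count.
```

The noise is zero-mean, so utility degrades gracefully: averages over many releases stay close to the truth while any one answer protects individuals.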


Edge ML moves data processing onto user devices, reducing aggregated data collection but increasing potential sensitivity as personal data resides on the device. Apple uses on-device ML and differential privacy to train models while minimizing data sharing. Data anonymization and secure enclaves protect on-device data.


TinyML distributes data across many resource-constrained devices, making centralized breaches unlikely but making anonymization at scale challenging. Data minimization and using edge devices as intermediaries help TinyML privacy.


So, while cloud ML must protect expansive centralized data, edge ML secures sensitive on-device data, and TinyML aims for minimal distributed data sharing due to constraints. While privacy is vital throughout, techniques must match the environment. Understanding nuances allows for selecting appropriate privacy preservation approaches.


15.5 Technical Aspects


15.5.1 Detecting and Mitigating Bias


A large body of work has demonstrated that machine learning models can exhibit bias, from underperforming people of a certain identity to making decisions that limit groups’ access to important resources (Buolamwini and Gebru 2018).


Ensuring fair and equitable treatment for all groups affected by machine learning systems is crucial as these models increasingly impact people’s lives in areas like lending, healthcare, and criminal justice. We typically evaluate model fairness by considering “subgroup attributes” unrelated to the prediction task that capture identities like race, gender, or religion. For example, in a loan default prediction model, subgroups could include race, gender, or religion. When models are trained naively to maximize accuracy, they often ignore subgroup performance. As a result, they can negatively impact marginalized communities.


To illustrate, imagine a model predicting loan repayment where the plusses (+’s) represent repayment and the circles (O’s) represent default, as shown in Figure fig-fairness-example. The optimal accuracy would be achieved by correctly classifying all of Group A while misclassifying some of Group B’s creditworthy applicants as defaults. If positive classifications grant access to loans, Group A would receive many more loans, which would naturally result in a biased outcome.

Figure 15.1: Fairness and accuracy.

Alternatively, correcting the biases against Group B would likely increase “false positives” and reduce accuracy for Group A. Or, we could train separate models focused on maximizing true positives for each group. However, this would require explicitly using sensitive attributes like race in the decision process.


As we see, there are inherent tensions around priorities like accuracy versus subgroup fairness and whether to explicitly account for protected classes. Reasonable people can disagree on the appropriate tradeoffs. Constraints around costs and implementation options further complicate matters. Overall, ensuring the fair and ethical use of machine learning involves navigating these complex challenges.


Thus, the fairness literature has proposed three main fairness metrics for quantifying how fairly a model performs over a dataset (Hardt, Price, and Srebro 2016). Given a model h and a dataset D consisting of (x, y, s) samples, where x is the data features, y is the label, and s is the subgroup attribute, and assuming there are just two subgroups a and b, we can define the following:

  1. Demographic Parity asks how accurate a model is for each subgroup. In other words, P(h(X) = Y | S = a) = P(h(X) = Y | S = b).

  2. Equalized Odds asks how precise a model is on positive and negative samples for each subgroup. P(h(X) = y | S = a, Y = y) = P(h(X) = y | S = b, Y = y).

  3. Equality of Opportunity is a special case of equalized odds that only asks how precise a model is on positive samples. This is relevant in cases such as resource allocation, where we care about how positive (i.e., resource-allocated) labels are distributed across groups. For example, we care that an equal proportion of loans are given to both men and women. P(h(X) = 1 | S = a, Y = 1) = P(h(X) = 1 | S = b, Y = 1).
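These definitions translate directly into code. A minimal sketch, with hypothetical predictions, labels, and subgroup assignments:

```python
import numpy as np

# Hypothetical predictions h(x), true labels y, and subgroup attribute s.
h = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y = np.array([1, 0, 0, 1, 1, 1, 0, 0])
s = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Demographic parity, per the definition above: accuracy gap across groups.
acc_gap = abs((h == y)[s == "a"].mean() - (h == y)[s == "b"].mean())

# Equality of opportunity: true-positive-rate gap across groups.
tpr_a = h[(s == "a") & (y == 1)].mean()
tpr_b = h[(s == "b") & (y == 1)].mean()
opportunity_gap = abs(tpr_a - tpr_b)
```

On this toy data the groups look identical by the first metric (both 75% accurate) yet differ sharply in how often deserving applicants receive positive labels, which is exactly why the choice of metric matters.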

Note: These definitions often take a narrow view when considering binary comparisons between two subgroups. Another thread of fair machine learning research focusing on multicalibration and multiaccuracy considers the interactions between an arbitrary number of identities, acknowledging the inherent intersectionality of individual identities in the real world (Hébert-Johnson et al. 2018).

Hébert-Johnson, Úrsula, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. 2018. “Multicalibration: Calibration for the (Computationally-Identifiable) Masses.” In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, edited by Jennifer G. Dy and Andreas Krause, 80:1944–53. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v80/hebert-johnson18a.html.

Context Matters


Before making any technical decisions to develop an unbiased ML algorithm, we need to understand the context surrounding our model. Here are some of the key questions to think about:

  • Who will this model make decisions for?
  • Who is represented in the training data?
  • Who is represented, and who is missing at the table of engineers, designers, and managers?
  • What sort of long-lasting impacts could this model have? For example, will it impact an individual’s financial security at a generational scale, such as determining college admissions or approving a loan for a house?
  • What historical and systematic biases are present in this setting, and are they present in the training data the model will generalize from?

Understanding a system’s social, ethical, and historical background is critical to preventing harm and should inform decisions throughout the model development lifecycle. After understanding the context, one can make various technical decisions to remove bias. First, one must decide what fairness metric is the most appropriate criterion for optimizing. Next, there are generally three main areas where one can intervene to debias an ML system.


First, preprocessing balances a dataset to ensure fair representation, or even increases the weight on certain underrepresented groups to ensure the model performs well on them. Second, in-processing attempts to modify the training process of an ML system to ensure it prioritizes fairness. This can range from adding a fairness regularizer (Lowy et al. 2021) to training an ensemble of models and sampling from them in a specific manner (Agarwal et al. 2018).

Lowy, Andrew, Rakesh Pavan, Sina Baharlouei, Meisam Razaviyayn, and Ahmad Beirami. 2021. “Fermi: Fair Empirical Risk Minimization via Exponential Rényi Mutual Information.”

Agarwal, Alekh, Alina Beygelzimer, Miroslav Dudı́k, John Langford, and Hanna M. Wallach. 2018. “A Reductions Approach to Fair Classification.” In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, edited by Jennifer G. Dy and Andreas Krause, 80:60–69. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v80/agarwal18a.html.

Alghamdi, Wael, Hsiang Hsu, Haewon Jeong, Hao Wang, Peter Michalak, Shahab Asoodeh, and Flavio Calmon. 2022. “Beyond Adult and COMPAS: Fair Multi-Class Prediction via Information Projection.” Adv. Neur. In. 35: 38747–60.

Hardt, Moritz, Eric Price, and Nati Srebro. 2016. “Equality of Opportunity in Supervised Learning.” In Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, edited by Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, 3315–23. https://proceedings.neurips.cc/paper/2016/hash/9d2682367c3935defcb1f9e247a97c0d-Abstract.html.

Finally, post-processing debiases a model after the fact, taking a trained model and modifying its predictions in a specific manner to ensure fairness is preserved (Alghamdi et al. 2022; Hardt, Price, and Srebro 2016). Post-processing builds on the preprocessing and in-processing steps by providing another opportunity to address bias and fairness issues after the model has already been trained.


The three-step process of preprocessing, in-processing, and post-processing provides a framework for intervening at different stages of model development to mitigate issues around bias and fairness. While preprocessing and in-processing focus on data and training, post-processing allows for adjustments after the model has been fully trained. Together, these three approaches give multiple opportunities to detect and remove unfair bias.
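One simple form of post-processing is choosing group-specific decision thresholds after training. The sketch below (scores and groups invented for illustration) equalizes selection rates; note that Hardt, Price, and Srebro (2016) instead derive thresholds targeting equalized odds:

```python
import numpy as np

# Hypothetical model scores and subgroup labels.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3])
s = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def positive_rate(group_scores, thresh):
    return (group_scores >= thresh).mean()

# A single global threshold selects every member of group a and
# nobody from group b.
rate_a = positive_rate(scores[s == "a"], 0.6)
rate_b = positive_rate(scores[s == "b"], 0.6)

# Post-processing: pick per-group thresholds (here, each group's median
# score) so both groups end up with the same selection rate.
t_a = np.quantile(scores[s == "a"], 0.5)
t_b = np.quantile(scores[s == "b"], 0.5)
eq_rate_a = positive_rate(scores[s == "a"], t_a)
eq_rate_b = positive_rate(scores[s == "b"], t_b)
```

The model itself never changes; only its decision rule does, which is what makes post-processing applicable to an already-trained black box.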


Thoughtful Deployment


The breadth of existing fairness definitions and debiasing interventions underscores the need for thoughtful assessment before deploying ML systems. As ML researchers and developers, responsible model development requires proactively educating ourselves on the real-world context, consulting domain experts and end-users, and centering harm prevention.


Rather than seeing fairness considerations as a box to check, we must deeply engage with the unique social implications and ethical tradeoffs around each model we build. Every technical choice about datasets, model architectures, evaluation metrics, and deployment constraints embeds values. By broadening our perspective beyond narrow technical metrics, carefully evaluating tradeoffs, and listening to impacted voices, we can work to ensure our systems expand opportunity rather than encode bias.


The path forward lies not in an arbitrary debiasing checklist but in a commitment to understanding and upholding our ethical responsibility at each step. This commitment starts with proactively educating ourselves and consulting others rather than just going through the motions of a fairness checklist. It requires engaging deeply with ethical tradeoffs in our technical choices, evaluating impacts on different groups, and listening to those voices most impacted.


Ultimately, responsible and ethical AI systems do not come from checkbox debiasing but from upholding our duty to assess harms, broaden perspectives, understand tradeoffs, and ensure we provide opportunity for all groups. This ethical responsibility should drive every step.


15.5.2 Preserving Privacy


Recent incidents have demonstrated how AI models can memorize sensitive user data in ways that violate privacy. For example, Stable Diffusion’s art generations were found to mimic identifiable artists’ styles and replicate existing photos, concerning many (Ippolito et al. 2023). These risks are amplified with personalized ML systems deployed in intimate environments like homes or wearables.


Imagine if a smart speaker uses our conversations to improve the quality of service to end users who genuinely want it. Still, others could violate privacy by trying to extract what the speaker “remembers.” Figure fig-diffusion-model-example below shows how diffusion models can memorize and generate individual training examples (Ippolito et al. 2023).

Figure 15.2: Diffusion models memorizing samples from training data. Credit: Ippolito et al. (2023).

Ippolito, Daphne, Florian Tramer, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher Choquette Choo, and Nicholas Carlini. 2023. “Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy.” In Proceedings of the 16th International Natural Language Generation Conference, 5253–70. Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.inlg-main.3.

Adversaries can use these memorization capabilities and train models to detect if specific training data influenced a target model. For example, membership inference attacks train a secondary model that learns to detect a change in the target model’s outputs when making inferences over data it was trained on versus not trained on (Shokri et al. 2017).
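A stripped-down illustration of the idea behind such attacks: if a model fits its training points more tightly than unseen points, even a simple loss threshold can distinguish members from non-members. The loss distributions below are synthetic stand-ins, not outputs of a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example losses: training ("member") points tend to have
# lower loss than unseen ("non-member") points on an overfit model.
member_loss = rng.normal(loc=0.2, scale=0.1, size=200).clip(min=0)
nonmember_loss = rng.normal(loc=0.8, scale=0.3, size=200).clip(min=0)

# Loss-threshold attack: guess "member" whenever the loss is small.
threshold = 0.5
member_hits = (member_loss < threshold).mean()         # true positives
nonmember_hits = (nonmember_loss >= threshold).mean()  # true negatives
attack_accuracy = 0.5 * (member_hits + nonmember_hits)
```

The shadow-model attack of Shokri et al. is more sophisticated (it trains a classifier on the target's output vectors), but the leakage it exploits is the same member/non-member gap this threshold rule detects.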

Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. “Membership Inference Attacks Against Machine Learning Models.” In 2017 IEEE Symposium on Security and Privacy (SP), 3–18. IEEE. https://doi.org/10.1109/sp.2017.41.

Abadi, Martin, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. “Deep Learning with Differential Privacy.” In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 308–18. CCS ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2976749.2978318.

ML devices are especially vulnerable because they are often personalized on user data and deployed in even more intimate settings such as the home. As mentioned in the Security and Privacy chapter, private machine learning techniques have evolved to establish safeguards against such adversaries. Methods like differential privacy add mathematical noise during training to obscure individual data points’ influence on the model. Popular techniques like DP-SGD (Abadi et al. 2016) also clip gradients to limit what the model leaks about the data. Still, users should also be able to delete the impact of their data after the fact.
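The core DP-SGD aggregation step can be sketched as follows. This is a simplified illustration of the clip-then-noise idea, not a production implementation; real systems also track the privacy budget across steps:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    # 1) Clip each example's gradient so no single data point can move
    #    the model by more than clip_norm.
    clipped = [g * min(1.0, clip_norm / np.linalg.norm(g))
               for g in per_example_grads]
    # 2) Add Gaussian noise scaled to that bound, then average.
    total = np.sum(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]  # norms 5.0 and 0.5
# Noise turned off here to make the clipping effect visible:
# [3, 4] is scaled down to [0.6, 0.8]; [0.3, 0.4] passes through unchanged.
step = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```

Clipping bounds each example's influence, which is precisely what lets the calibrated noise hide whether any one example was present.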


15.5.3 Machine Unlearning


With ML devices personalized to individual users and then deployed to remote edges without connectivity, a challenge arises—how can models responsively “forget” data points after deployment? If users request their data be removed from a personalized model, the lack of connectivity makes retraining infeasible. Thus, efficient on-device data forgetting is necessary but poses hurdles.


Initial unlearning approaches faced limitations in this context. Given the resource constraints, retraining models from scratch on the device to forget data points proves inefficient or even impossible. Fully retraining also requires retaining all the original training data on the device, which brings its own security and privacy risks. Common machine unlearning techniques (Bourtoule et al. 2021) thus fail to enable responsive, secure data removal on remote embedded ML systems.

Bourtoule, Lucas, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. 2021. “Machine Unlearning.” In 2021 IEEE Symposium on Security and Privacy (SP), 141–59. IEEE. https://doi.org/10.1109/sp40001.2021.00019.

However, newer methods show promise in modifying models to approximately forget data [?] without full retraining. While the accuracy loss from avoiding full rebuilds is modest, guaranteeing data privacy should still be the priority when handling sensitive user information ethically. Even slight exposure to private data can violate user trust. As ML systems become deeply personalized, efficiency and privacy must be enabled from the start—not afterthoughts.


Recent policy discussions, including the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), the Act on the Protection of Personal Information (APPI), and Canada’s proposed Consumer Privacy Protection Act (CPPA), require the deletion of private information. These policies, coupled with AI incidents like Stable Diffusion memorizing artist data, have underscored the ethical need for users to be able to delete their data from models after training.


The right to remove data arises from privacy concerns around corporations or adversaries misusing sensitive user information. Machine unlearning refers to removing the influence of specific points from an already-trained model. Naively, this involves fully retraining the model without the deleted data. However, connectivity constraints often make retraining infeasible for ML systems personalized and deployed to remote edges. If a smart speaker learns from private home conversations, retaining the ability to delete that data is important.
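The shard-based idea behind Bourtoule et al.'s approach can be illustrated with a toy sketch, where each "model" is just a per-shard label mean (purely illustrative): forgetting a point means retraining only the shard that contained it, not the whole ensemble.

```python
# Toy SISA-style setup: shard_id -> list of (x, y) training points.
data = {
    0: [(1.0, 1), (2.0, 0)],
    1: [(3.0, 1), (4.0, 0)],
}

def train_shard(examples):
    # Stand-in "model": the mean label of the shard.
    return sum(y for _, y in examples) / len(examples)

models = {sid: train_shard(ex) for sid, ex in data.items()}

def forget(shard_id, example):
    # Unlearning: drop the point, retrain only the affected shard.
    data[shard_id].remove(example)
    models[shard_id] = train_shard(data[shard_id])

forget(1, (4.0, 0))  # only shard 1 is retrained; shard 0 is untouched
```

Full retraining would touch every point; here the forgotten example's influence is removed by redoing one shard, which is what makes the approach attractive when compute is scarce.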


Although limited, methods are evolving to enable efficient approximations of retraining for unlearning. By modifying models at inference time, they can mimic “forgetting” data without full access to the training data. However, most current techniques are restricted to simple models, still carry resource costs, and trade away some accuracy. Though methods are evolving, enabling efficient data removal while respecting user privacy remains imperative for responsible TinyML deployment.


15.5.4 Adversarial Examples and Robustness


Machine learning models, especially deep neural networks, have a well-documented Achilles heel: they often break when even tiny perturbations are made to their inputs (Szegedy et al. 2014). This fragility exposes a major robustness gap that threatens real-world deployment in high-stakes domains, and it opens the door for adversarial attacks designed to fool models deliberately.

Deep neural networks thus exhibit an almost paradoxical dual nature: human-like proficiency on their training distribution coupled with extreme brittleness to minor input changes. This unpredictability on out-of-sample data underscores gaps in model generalization and robustness. Given the growing ubiquity of ML, it also enables adversarial threats: attackers can find model-breaking points that humans would never perceive.
Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. 2014. “Intriguing Properties of Neural Networks.” In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, edited by Yoshua Bengio and Yann LeCun. http://arxiv.org/abs/1312.6199.

Figure fig-adversarial-example includes an example of a small meaningless perturbation that changes a model prediction. This fragility has real-world impacts: lack of robustness undermines trust in deploying models for high-stakes applications like self-driving cars or medical diagnosis. Moreover, the vulnerability leads to security threats: attackers can deliberately craft adversarial examples that are perceptually indistinguishable from normal data but cause model failures.

Figure 15.3: Perturbation effect on prediction. Credit: Microsoft.

For instance, past work shows successful attacks that trick models for tasks like NSFW detection (Bhagoji et al. 2018), ad-blocking (Tramèr et al. 2019), and speech recognition (Carlini et al. 2016). While errors in these domains already pose security risks, the problem extends beyond IT security. Recently, adversarial robustness has been proposed as an additional performance metric by approximating worst-case behavior.
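A minimal sketch of how such an example can be crafted, using a fast-gradient-sign-style step against a toy logistic classifier (the weights and inputs are hypothetical):

```python
import numpy as np

# Toy logistic classifier with hypothetical weights.
w = np.array([1.5, -2.0, 1.0])

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

x = np.array([0.5, -0.5, 0.2])
p = predict_proba(x)          # confidently positive

# FGSM-style step: for this model the gradient of the logit w.r.t. the
# input is just w, so stepping against its sign lowers the score the
# fastest per unit of per-feature change.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)
p_adv = predict_proba(x_adv)  # prediction flips to negative
```

Every feature moves by at most epsilon, yet the classification flips; in high-dimensional image space the analogous per-pixel budget can be small enough to be imperceptible to humans.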

Bhagoji, Arjun Nitin, Warren He, Bo Li, and Dawn Song. 2018. “Practical Black-Box Attacks on Deep Neural Networks Using Efficient Query Mechanisms.” In Proceedings of the European Conference on Computer Vision (ECCV), 154–69.

Tramèr, Florian, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, and Dan Boneh. 2019. “AdVersarial: Perceptual Ad Blocking Meets Adversarial Machine Learning.” In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2005–21. ACM. https://doi.org/10.1145/3319535.3354222.

Carlini, Nicholas, Pratyush Mishra, Tavish Vaidya, Yuankai Zhang, Micah Sherr, Clay Shields, David Wagner, and Wenchao Zhou. 2016. “Hidden Voice Commands.” In 25th USENIX Security Symposium (USENIX Security 16), 513–30.

The surprising model fragility highlighted above casts doubt on real-world reliability and opens the door to adversarial manipulation. This growing vulnerability underscores several needs. First, rigorous robustness evaluations are essential for quantifying model vulnerabilities before deployment. Approximating worst-case behavior surfaces blindspots.


Second, effective defenses across domains must be developed to close these robustness gaps. With security on the line, developers cannot ignore the threat of attacks exploiting model weaknesses. Moreover, we cannot afford any fragility-induced failures for safety-critical applications like self-driving vehicles and medical diagnosis. Lives are at stake.


Finally, the research community continues mobilizing rapidly in response. Interest in adversarial machine learning has exploded as attacks reveal the need to bridge the robustness gap between synthetic and real-world data. Conferences now commonly feature defenses for securing and stabilizing models. The community recognizes that model fragility is a critical issue that must be addressed through robustness testing, defense development, and ongoing research. By surfacing blindspots and responding with principled defenses, we can work to ensure reliability and safety for machine learning systems, especially in high-stakes domains.


15.5.5 Building Interpretable Models


As models are deployed more frequently in high-stakes settings, practitioners, developers, downstream end-users, and increasing regulation have highlighted the need for explainability in machine learning. The goal of many interpretability and explainability methods is to provide practitioners with more information about the models’ overall behavior or the behavior given a specific input. This allows users to decide whether or not a model’s output or prediction is trustworthy.


Such analysis can help developers debug models and improve performance by pointing out biases, spurious correlations, and failure modes of models. In cases where models can surpass human performance on a task, interpretability can help users and researchers better understand relationships in their data and previously unknown patterns.


There are many classes of explainability/interpretability methods, including post hoc explainability, inherent interpretability, and mechanistic interpretability. These methods aim to make complex machine learning models more understandable and ensure users can trust model predictions, especially in critical settings. By providing transparency into model behavior, explainability techniques are an important tool for developing safe, fair, and reliable AI systems.


Post Hoc Explainability


Post hoc explainability methods typically explain the output behavior of a black-box model on a specific input. Popular methods include counterfactual explanations, feature attribution methods, and concept-based explanations.


Counterfactual explanations, also frequently called algorithmic recourse, take the form “If X had not occurred, Y would not have occurred” (Wachter, Mittelstadt, and Russell 2017). For example, consider a person applying for a bank loan whose application is rejected by a model. They may ask their bank for recourse, or how to change their application to be eligible for a loan. A counterfactual explanation would tell them which features they need to change and by how much so that the model’s prediction changes.

Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2017. “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR.” SSRN Electronic Journal 31: 841. https://doi.org/10.2139/ssrn.3063289.

Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2017. “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization.” In 2017 IEEE International Conference on Computer Vision (ICCV), 618–26. IEEE. https://doi.org/10.1109/iccv.2017.74.

Smilkov, Daniel, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. 2017. “Smoothgrad: Removing Noise by Adding Noise.” ArXiv Preprint abs/1706.03825. https://arxiv.org/abs/1706.03825.

Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “Why Should I Trust You? Explaining the Predictions of Any Classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–44.

Lundberg, Scott M., and Su-In Lee. 2017. “A Unified Approach to Interpreting Model Predictions.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4765–74. https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html.

Feature attribution methods highlight the input features that are important or necessary for a particular prediction. For a computer vision model, this would mean highlighting the individual pixels that contributed most to the predicted label of the image. Note that these methods do not explain how those pixels/features impact the prediction, only that they do. Common methods include input gradients, GradCAM (Selvaraju et al. 2017), SmoothGrad (Smilkov et al. 2017), LIME (Ribeiro, Singh, and Guestrin 2016), and SHAP (Lundberg and Lee 2017).
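As a toy illustration of gradient-based attribution (the model and weights below are hypothetical), a plain input gradient can be computed in one pass, while SmoothGrad simply averages gradients over noisy copies of the input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable model with hypothetical weights.
w = np.array([2.0, -0.5, 1.0])

def model(x):
    return 1.0 / (1.0 + np.exp(-x @ w))

def input_grad(x):
    p = model(x)
    return p * (1 - p) * w  # closed-form input gradient for this model

x0 = np.array([1.0, 0.0, -1.0])

# Plain input-gradient attribution: a single pass.
attr = np.abs(input_grad(x0))

# SmoothGrad (Smilkov et al. 2017): average gradients over noisy copies
# of the input to reduce gradient noise.
noisy = x0 + rng.normal(scale=0.15, size=(50, 3))
smooth_attr = np.abs(np.mean([input_grad(z) for z in noisy], axis=0))
```

Both attributions flag the first feature as most important here, consistent with its largest weight; note that neither says how the feature changes the output, only that it matters.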


By providing examples of changes to input features that would alter a prediction (counterfactuals) or indicating the most influential features for a given prediction (attribution), these post hoc explanation techniques shed light on model behavior for individual inputs. This granular transparency helps users determine whether they can trust and act upon specific model outputs.


Concept-based explanations aim to explain model behavior and outputs using a pre-defined set of semantic concepts (e.g., the model recognizes scene class “bedroom” based on the presence of concepts “bed” and “pillow”). Recent work shows that users often prefer these explanations to attribution and example-based explanations because they “resemble human reasoning and explanations” (Vikram V. Ramaswamy et al. 2023b). Popular concept-based explanation methods include TCAV (Cai et al. 2019), Network Dissection (Bau et al. 2017), and interpretable basis decomposition (Zhou et al. 2018).

Ramaswamy, Vikram V., Sunnie S. Y. Kim, Ruth Fong, and Olga Russakovsky. 2023b. “UFO: A Unified Method for Controlling Understandability and Faithfulness Objectives in Concept-Based Explanations for CNNs.” ArXiv Preprint abs/2303.15632. https://arxiv.org/abs/2303.15632.

Cai, Carrie J., Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, et al. 2019. “Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3290605.3300234.

Bau, David, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. 2017. “Network Dissection: Quantifying Interpretability of Deep Visual Representations.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3319–27. IEEE. https://doi.org/10.1109/cvpr.2017.354.

Zhou, Bolei, Yiyou Sun, David Bau, and Antonio Torralba. 2018. “Interpretable Basis Decomposition for Visual Explanation.” In Proceedings of the European Conference on Computer Vision (ECCV), 119–34.

Ramaswamy, Vikram V., Sunnie S. Y. Kim, Ruth Fong, and Olga Russakovsky. 2023a. “Overlooked Factors in Concept-Based Explanations: Dataset Choice, Concept Learnability, and Human Capability.” In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10932–41. IEEE. https://doi.org/10.1109/cvpr52729.2023.01052.

Note that these methods are extremely sensitive to the size and quality of the concept set, and there is a tradeoff between their accuracy and faithfulness and their interpretability or understandability to humans (Vikram V. Ramaswamy et al. 2023a). However, by mapping model predictions to human-understandable concepts, concept-based explanations can provide transparency into the reasoning behind model outputs.
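As a hedged illustration of the concept-based idea, the sketch below builds a TCAV-style concept activation vector entirely on synthetic data: a linear probe separates "concept" activations from random counterexamples, and the fraction of stand-in class-gradient directions aligned with the probe's normal serves as the concept sensitivity score. The arrays, dimensions, and `concept_dir` vector are illustrative assumptions, not real model activations or the reference TCAV implementation.

```python
# Sketch of a TCAV-style concept sensitivity test on synthetic activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend these are layer activations: concept examples cluster along a
# direction; random counterexamples do not. (Both are synthetic.)
concept_dir = np.array([1.0, 0.5, -0.3, 0.2])
concept_acts = rng.normal(0, 1, (100, 4)) + 2.0 * concept_dir
random_acts = rng.normal(0, 1, (100, 4))

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)

# The concept activation vector (CAV) is the normal of the linear probe.
probe = LogisticRegression().fit(X, y)
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

# "TCAV score": fraction of inputs whose class-logit gradient has a positive
# directional derivative along the CAV. Here we use stand-in gradients for a
# class whose logit depends on the concept, so the score should be high.
logit_grads = rng.normal(0, 0.2, (50, 4)) + concept_dir
tcav_score = float(np.mean(logit_grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```

A score near 1 suggests the class is sensitive to the concept; near 0.5, insensitive. In practice the activations and gradients come from a trained network, not synthetic draws.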


Inherent Interpretability


Inherently interpretable models are constructed such that their explanations are part of the model architecture and are thus naturally faithful, which sometimes makes them preferable to post-hoc explanations applied to black-box models, especially in high-stakes domains where transparency is imperative (Rudin 2019). Often, these models are constrained so that the relationships between input features and predictions are easy for humans to follow (linear models, decision trees, decision sets, k-NN models), or they obey structural knowledge of the domain, such as monotonicity (Gupta et al. 2016), causality, or additivity (Lou et al. 2013; Beck and Jackman 1998).

Rudin, Cynthia. 2019. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1 (5): 206–15. https://doi.org/10.1038/s42256-019-0048-x.
Gupta, Maya, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, and Alexander Van Esbroeck. 2016. “Monotonic Calibrated Interpolated Look-up Tables.” The Journal of Machine Learning Research 17 (1): 3790–3836.
Lou, Yin, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013. “Accurate Intelligible Models with Pairwise Interactions.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, edited by Inderjit S. Dhillon, Yehuda Koren, Rayid Ghani, Ted E. Senator, Paul Bradley, Rajesh Parekh, Jingrui He, Robert L. Grossman, and Ramasamy Uthurusamy, 623–31. ACM. https://doi.org/10.1145/2487575.2487579.
Beck, Nathaniel, and Simon Jackman. 1998. “Beyond Linearity by Default: Generalized Additive Models.” Am. J. Polit. Sci. 42 (2): 596. https://doi.org/10.2307/2991772.
Koh, Pang Wei, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, and Percy Liang. 2020. “Concept Bottleneck Models.” In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, 119:5338–48. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v119/koh20a.html.
Chen, Chaofan, Oscar Li, Daniel Tao, Alina Barnett, Cynthia Rudin, and Jonathan Su. 2019. “This Looks Like That: Deep Learning for Interpretable Image Recognition.” In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, edited by Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, 8928–39. https://proceedings.neurips.cc/paper/2019/hash/adf7ee2dcf142b0e11888e72b43fcb75-Abstract.html.

However, more recent works have relaxed the restrictions on inherently interpretable models, using black-box models for feature extraction and a simpler inherently interpretable model for classification, allowing for faithful explanations that relate high-level features to prediction. For example, Concept Bottleneck Models (Koh et al. 2020) predict a concept set c that is passed into a linear classifier. ProtoPNets (Chen et al. 2019) dissect inputs into linear combinations of similarities to prototypical parts from the training set.
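A minimal sketch of the bottleneck idea, on fully synthetic data: stage one predicts a small set of binary concepts from the input, and stage two is an interpretable linear head that maps concepts to the label. The two-concept setup and logistic-regression probes are illustrative assumptions, not the original Concept Bottleneck Model implementation.

```python
# Minimal concept-bottleneck sketch: input x -> predicted concepts c -> label y.
# All data is synthetic; think of concepts like "bed"/"pillow" implying "bedroom".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
X = rng.normal(0, 1, (n, 8))                       # stand-in input features
concepts = (X[:, :2] > 0).astype(int)              # two synthetic binary concepts
labels = (concepts.sum(axis=1) == 2).astype(int)   # label depends only on concepts

# Stage 1: one linear probe per concept (x -> c).
concept_models = [LogisticRegression().fit(X, concepts[:, j]) for j in range(2)]
c_hat = np.column_stack([m.predict(X) for m in concept_models])

# Stage 2: interpretable linear head over predicted concepts (c -> y).
head = LogisticRegression().fit(c_hat, labels)

# The explanation is the predicted concept vector plus the head's weights.
print("head weights per concept:", head.coef_[0])
print("train accuracy:", head.score(c_hat, labels))
```

Because the final prediction is a linear function of human-named concepts, a practitioner can read off which concepts drove it; in real systems stage one would be a neural feature extractor trained on concept annotations.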


Mechanistic Interpretability


Mechanistic interpretability methods seek to reverse engineer neural networks, often analogizing them to how one might reverse engineer a compiled binary or how neuroscientists attempt to decode the function of individual neurons and circuits in brains. Most research in mechanistic interpretability views models as a computational graph (Geiger et al. 2021), and circuits are subgraphs with distinct functionality (Wang and Zhan 2019). Current approaches to extracting circuits from neural networks and understanding their functionality rely on human manual inspection of visualizations produced by circuits (Olah et al. 2020).

Geiger, Atticus, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. “Causal Abstractions of Neural Networks.” In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, Virtual, edited by Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, 9574–86. https://proceedings.neurips.cc/paper/2021/hash/4f5c422f4d49a5a807eda27434231040-Abstract.html.
Wang, LingFeng, and YaQing Zhan. 2019. “A Conceptual Peer Review Model for arXiv and Other Preprint Databases.” Learn. Publ. 32 (3): 213–19. https://doi.org/10.1002/leap.1229.
Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. 2020. “Zoom in: An Introduction to Circuits.” Distill 5 (3): e00024–001. https://doi.org/10.23915/distill.00024.001.
Davarzani, Samaneh, David Saucier, Purva Talegaonkar, Erin Parker, Alana Turner, Carver Middleton, Will Carroll, et al. 2023. “Closing the Wearable Gap: Footankle Kinematic Modeling via Deep Learning Models Based on a Smart Sock Wearable.” Wearable Technologies 4. https://doi.org/10.1017/wtc.2023.3.

Alternatively, some approaches build sparse autoencoders that encourage neurons to encode disentangled interpretable features (Davarzani et al. 2023). This field is much newer than existing areas in explainability and interpretability, and as such, most works are generally exploratory rather than solution-oriented.


There are many problems in mechanistic interpretability, including the polysemanticity of neurons and circuits, the inconvenience and subjectivity of human labeling, and the exponential search space for identifying circuits in large models with billions or trillions of neurons.


Challenges and Considerations


As methods for interpreting and explaining models progress, it is important to note that humans overtrust and misuse interpretability tools (Kaur et al. 2020) and that a user’s trust in a model due to an explanation can be independent of the correctness of the explanations (Lakkaraju and Bastani 2020). As such, it is necessary that aside from assessing the faithfulness/correctness of explanations, researchers must also ensure that interpretability methods are developed and deployed with a specific user in mind and that user studies are performed to evaluate their efficacy and usefulness in practice.

Kaur, Harmanpreet, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, and Jennifer Wortman Vaughan. 2020. “Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, edited by Regina Bernhaupt, Florian ‘Floyd’ Mueller, David Verweij, Josh Andres, Joanna McGrenere, Andy Cockburn, Ignacio Avellino, et al., 1–14. ACM. https://doi.org/10.1145/3313831.3376219.
Lakkaraju, Himabindu, and Osbert Bastani. 2020. “‘How Do I Fool You?’: Manipulating User Trust via Misleading Black Box Explanations.” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 79–85. ACM. https://doi.org/10.1145/3375627.3375833.

Furthermore, explanations should be tailored to the user’s expertise and the task for which they need the explanation, and should convey the minimal amount of information required to be useful, in order to prevent information overload.


While interpretability/explainability are popular areas in machine learning research, very few works study their intersection with TinyML and edge computing. Given that a significant application of TinyML is healthcare, which often requires high transparency and interpretability, existing techniques must be tested for scalability and efficiency concerning edge devices. Many methods rely on extra forward and backward passes, and some even require extensive training in proxy models, which are infeasible on resource-constrained microcontrollers.


That said, explainability methods can be highly useful in developing models for edge devices, as they can give insights into how input data and models can be compressed and how representations may change post-compression. Furthermore, many interpretable models are often smaller than their black-box counterparts, which could benefit TinyML applications.


15.5.6 Monitoring Model Performance


While developers may train models that seem adversarially robust, fair, and interpretable before deployment, it is imperative that both the users and the model owners continue to monitor the model’s performance and trustworthiness during the model’s full lifecycle. Data is frequently changing in practice, which can often result in distribution shifts. These distribution shifts can profoundly impact the model’s vanilla predictive performance and its trustworthiness (fairness, robustness, and interpretability) in real-world data.
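A minimal sketch of what lifecycle monitoring can look like in practice, under stated assumptions: the daily accuracy stream is synthetic, and the seven-day window and 0.85 alert threshold are arbitrary illustrative choices, not recommended values.

```python
# Sketch: a post-deployment monitor that alerts when windowed accuracy
# drops below a threshold. The accuracy stream is synthetic and drifts
# downward to mimic a gradual distribution shift.
import numpy as np

rng = np.random.default_rng(5)
# Daily accuracy drifting from ~0.92 down to ~0.70 over 30 days.
daily_acc = np.linspace(0.92, 0.70, 30) + rng.normal(0, 0.01, 30)

window, threshold = 7, 0.85   # illustrative monitoring parameters
alerts = [d for d in range(window, 31)
          if daily_acc[d - window:d].mean() < threshold]
print("first alert on day:", alerts[0] if alerts else None)
```

The same pattern applies beyond accuracy: fairness gaps, calibration error, or input-distribution statistics can be windowed and thresholded in the same loop.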


Furthermore, definitions of fairness frequently change with time, such as what society considers a protected attribute, and the expertise of the users asking for explanations may also change.


To ensure that models keep up to date with such changes in the real world, developers must continually evaluate their models on current and representative data and standards and update models when necessary.


15.6 Implementation Challenges


15.6.1 Organizational and Cultural Structures


While innovation and regulation are often seen as having competing interests, many countries have found it necessary to provide oversight as AI systems expand into more sectors. As illustrated in Figure fig-human-centered-ai, this oversight has become crucial as these systems continue permeating various industries and impacting people’s lives (see Human-Centered AI, Chapter 8, “Government Interventions and Regulations”).

Figure 15.4: How various groups impact human-centered AI. Credit: Shneiderman (2020).

Shneiderman, Ben. 2020. “Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems.” ACM Trans. Interact. Intell. Syst. 10 (4): 1–31. https://doi.org/10.1145/3419764.

Among these are:


15.6.2 Obtaining Quality and Representative Data


As discussed in the Data Engineering chapter, responsible AI design must occur at all pipeline stages, including data collection. This begs the question: what does it mean for data to be high-quality and representative? Consider the following scenarios that hinder the representativeness of data:


Subgroup Imbalance


This is likely what comes to mind when hearing “representative data.” Subgroup imbalance means the dataset contains relatively more data from one subgroup than another. This imbalance can negatively affect the downstream ML model by causing it to overfit a subgroup of people while performing poorly on another.


One example consequence of subgroup imbalance is racial discrimination in facial recognition technology (Buolamwini and Gebru 2018); commercial facial recognition algorithms have up to 34% worse error rates on darker-skinned females than lighter-skinned males.

Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability and Transparency, 77–91. PMLR.

Note that data imbalance goes both ways, and subgroups can also be harmfully overrepresented in the dataset. For example, the Allegheny Family Screening Tool (AFST) predicts the likelihood that a child will eventually be removed from a home. The AFST produces disproportionate scores for different subgroups, one of the reasons being that it is trained on historically biased data sourced from juvenile and adult criminal legal systems, public welfare agencies, and behavioral health agencies and programs.
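A first defense against both forms of imbalance is to evaluate disaggregated, per-subgroup error rates rather than a single aggregate metric. The sketch below does this on synthetic predictions; the group labels, 90/10 split, and 5%/30% error rates are all illustrative assumptions.

```python
# Sketch: disaggregated evaluation to surface subgroup performance gaps.
# Predictions and the "group" attribute are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])   # B is underrepresented
y_true = rng.integers(0, 2, size=n)

# Simulate a model that errs ~5% on the majority group but ~30% on B.
err_rate = np.where(group == "A", 0.05, 0.30)
flip = rng.random(n) < err_rate
y_pred = np.where(flip, 1 - y_true, y_true)

errors = {}
for g in ["A", "B"]:
    mask = group == g
    errors[g] = float(np.mean(y_pred[mask] != y_true[mask]))
    print(f"group {g}: n={int(mask.sum()):4d} error={errors[g]:.2%}")
```

An aggregate error rate here would sit near 7.5% and hide the sixfold gap, which is exactly the failure mode the Gender Shades audit exposed.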


Quantifying Target Outcomes


This occurs in applications where the ground-truth label cannot be measured or is difficult to represent in a single quantity. For example, an ML model in a mobile wellness application may want to predict individual stress levels. The true stress labels themselves are impossible to obtain directly and must be inferred from other biosignals, such as heart rate variability and user self-reported data. In these situations, noise is built into the data by design, making this a challenging ML task.


Distribution Shift


Data may no longer represent a task if a major external event causes the data source to change drastically. The most common way to think about distribution shifts is with respect to time; for example, data on consumer shopping habits collected pre-COVID may no longer be representative of consumer behavior today.


Transfer between domains causes another form of distribution shift. For instance, when a triage system trained on data from one hospital is applied to another, a distribution shift may occur if the two hospitals are very different.
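Distribution shift can often be detected statistically before it degrades the model, for example with a two-sample test comparing a feature's training distribution against live data. The sketch below uses a Kolmogorov-Smirnov test on synthetic one-dimensional data; the distributions and the 0.01 significance threshold are illustrative assumptions.

```python
# Sketch: detecting covariate shift in one feature with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
train_feature = rng.normal(0.0, 1.0, 2000)   # e.g., pre-shift spending levels
live_feature = rng.normal(0.6, 1.2, 2000)    # post-shift: mean and variance moved

stat, p_value = ks_2samp(train_feature, live_feature)
drifted = p_value < 0.01                     # illustrative alert threshold
print(f"KS statistic={stat:.3f}, p={p_value:.2e}, drift detected: {drifted}")
```

In a real pipeline this test would run per feature on a schedule, with the threshold corrected for the number of features being monitored.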


Gathering Data


A reasonable solution for many of the above problems with non-representative or low-quality data is to collect more; we can collect more data targeting an underrepresented subgroup or from the target hospital to which our model might be transferred. However, in some cases, gathering more data is an inappropriate or infeasible solution for the task at hand.

• Data collection can be harmful. This is the paradox of exposure, the situation in which those who stand to significantly gain from their data being collected are also those who are put at risk by the collection process (D’Ignazio and Klein 2023, Chapter 4). For example, collecting more data on non-binary individuals may be important for ensuring the fairness of the ML application, but it also puts them at risk, depending on who is collecting the data and how (whether the data is easily identifiable, contains sensitive content, etc.).

• Data collection can be costly. In some domains, such as healthcare, obtaining data can be costly in terms of time and money.

• Biased data collection. Electronic Health Records is a huge data source for ML-driven healthcare applications. Issues of subgroup representation aside, the data itself may be collected in a biased manner. For example, negative language (“nonadherent,” “unwilling”) is disproportionately used on black patients (Himmelstein, Bates, and Zhou 2022).
D’Ignazio, Catherine, and Lauren F. Klein. 2023. Data Feminism. MIT Press.
Himmelstein, Gracie, David Bates, and Li Zhou. 2022. “Examination of Stigmatizing Language in the Electronic Health Record.” JAMA Network Open 5 (1): e2144967. https://doi.org/10.1001/jamanetworkopen.2021.44967.

We conclude with several additional strategies for maintaining data quality: improving understanding of the data, employing effective tools for data exploration, and introducing feedback loops into the pipeline. First, fostering a deeper understanding of the data is crucial. This can be achieved through the implementation of standardized labels and measures of data quality, such as in the Data Nutrition Project.


Collaborating with organizations responsible for collecting data helps ensure the data is interpreted correctly. Second, employing effective tools for data exploration is important. Visualization techniques and statistical analyses can reveal issues with the data. Finally, establishing a feedback loop within the ML pipeline is essential for understanding the real-world implications of the data. Metrics, such as fairness measures, allow us to define “data quality” in the context of the downstream application; improving fairness may directly improve the quality of the predictions that the end users receive.
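As one concrete feedback-loop signal, a fairness metric such as the demographic parity gap can be computed on live predictions and fed back into data-quality reviews. The sketch below computes it on synthetic predictions; the binary groups and the 70%/50% approval rates are illustrative assumptions.

```python
# Sketch: a fairness metric as a pipeline "data/model quality" signal.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
group = rng.choice([0, 1], size=n)
# Hypothetical model that approves group 0 at ~70% and group 1 at ~50%.
approve_prob = np.where(group == 0, 0.7, 0.5)
y_pred = (rng.random(n) < approve_prob).astype(int)

rate0 = y_pred[group == 0].mean()
rate1 = y_pred[group == 1].mean()
dp_diff = abs(rate0 - rate1)   # demographic parity gap: |P(y=1|g=0) - P(y=1|g=1)|
print(f"selection rates: {rate0:.2f} vs {rate1:.2f}; parity gap = {dp_diff:.2f}")
```

A persistently large gap would trigger the feedback loop described above: revisit the training data, labels, and collection process rather than only the model.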


15.6.3 Balancing Accuracy and Other Objectives


Machine learning models are often evaluated on accuracy alone, but this single metric cannot fully capture model performance and tradeoffs for responsible AI systems. Other ethical dimensions, such as fairness, robustness, interpretability, and privacy, may compete with pure predictive accuracy during model development. For instance, inherently interpretable models such as small decision trees or linear classifiers with simplified features intentionally trade some accuracy for transparency in the model behavior and predictions. While these simplified models achieve lower accuracy by not capturing all the complexity in the dataset, improved interpretability builds trust by enabling direct analysis by human practitioners.
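The accuracy/interpretability tradeoff can be made concrete by comparing an interpretable shallow tree against a black-box ensemble on the same data. The sketch below uses a synthetic scikit-learn dataset; the depth limit, forest size, and dataset parameters are illustrative assumptions, and the gap will vary by problem.

```python
# Sketch: measuring the accuracy cost of an interpretable model on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A depth-3 tree is small enough to print and audit by hand.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# A 200-tree forest is more accurate but effectively opaque.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc_tree = shallow.score(X_te, y_te)
acc_forest = forest.score(X_te, y_te)
print(f"interpretable tree (depth 3): {acc_tree:.3f}")
print(f"black-box forest:             {acc_forest:.3f}")
```

Whether the accuracy gap is worth paying is exactly the sociotechnical judgment the surrounding text describes; in a regulated, high-stakes setting the auditable tree may win despite the lower score.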


Additionally, certain techniques meant to improve adversarial robustness, such as adversarial training examples or dimensionality reduction, can degrade accuracy on clean validation data. In sensitive applications like healthcare, focusing narrowly on state-of-the-art accuracy carries ethical risks if it allows models to rely more on spurious correlations that introduce bias or use opaque reasoning. Therefore, the appropriate performance objectives depend greatly on the sociotechnical context.


Methodologies like Value Sensitive Design provide frameworks for formally evaluating the priorities of various stakeholders within the real-world deployment system. These elucidate tensions between values like accuracy, interpretability, and fairness, which can then guide responsible tradeoff decisions. For a medical diagnosis system, achieving the highest accuracy may not be the singular goal: improving transparency to build practitioner trust or reducing bias towards minority groups could justify small losses in accuracy. Analyzing the sociotechnical context is key for setting these objectives.


By taking a holistic view, we can responsibly balance accuracy with other ethical objectives for model success. Ongoing performance monitoring along multiple dimensions is crucial as the system evolves after deployment.


15.7 Ethical Considerations in AI Design


We must discuss at least some of the many ethical issues at stake in designing and applying AI systems and diverse frameworks for approaching these issues, including those from AI safety, Human-Computer Interaction (HCI), and Science, Technology, and Society (STS).


15.7.1 AI Safety and Value Alignment


In 1960, Norbert Wiener wrote, “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively… we had better be quite sure that the purpose put into the machine is the purpose which we desire” (Wiener 1960).

Wiener, Norbert. 1960. “Some Moral and Technical Consequences of Automation: As Machines Learn They May Develop Unforeseen Strategies at Rates That Baffle Their Programmers.” Science 131 (3410): 1355–58. https://doi.org/10.1126/science.131.3410.1355.
Russell, Stuart. 2021. “Human-Compatible Artificial Intelligence.” Human-Like Machine Intelligence, 3–23.

In recent years, as the capabilities of deep learning models have matched, and sometimes even surpassed, human abilities, the issue of creating AI systems that act in accord with human intentions instead of pursuing unintended or undesirable goals has become a source of concern (Russell 2021). Within the field of AI safety, a particular goal concerns “value alignment,” or the problem of how to code the “right” purpose into machines. Present AI research assumes we know the objectives we want to achieve and “studies the ability to achieve objectives, not the design of those objectives.”


However, complex real-world deployment contexts make explicitly defining “the right purpose” for machines difficult, requiring frameworks for responsible and ethical goal-setting. Methodologies like Value Sensitive Design provide formal mechanisms to surface tensions between stakeholder values and priorities.


By taking a holistic sociotechnical view, we can better ensure intelligent systems pursue objectives that align with broad human intentions rather than maximizing narrow metrics like accuracy alone. Achieving this in practice remains an open and critical research question as AI capabilities advance rapidly.


The absence of this alignment can lead to several AI safety issues, as have been documented in a variety of deep learning models. A common feature of systems that optimize for an objective is that variables not directly included in the objective may be set to extreme values to help optimize for that objective, leading to issues characterized as specification gaming, reward hacking, etc., in reinforcement learning (RL).
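The mechanism behind specification gaming can be shown with a deliberately toy optimization: a variable omitted from the objective gets pushed to an extreme. The driving scenario, reward shapes, and penalty weight below are all invented for illustration and do not model any real RL system.

```python
# Toy illustration of specification gaming: optimizing a proxy objective
# drives a variable the objective ignores to an extreme value.
import numpy as np

speeds = np.linspace(0, 10, 101)      # candidate "policies": a target speed
proxy_reward = speeds                 # proxy objective: just go fast
crash_risk = (speeds / 10) ** 3       # safety-relevant, but absent from the proxy

best = int(np.argmax(proxy_reward))
print(f"proxy-optimal speed={speeds[best]:.1f}, crash risk={crash_risk[best]:.2f}")

# Pricing the omitted variable into the objective changes the optimum.
true_reward = speeds - 20 * crash_risk
best_true = int(np.argmax(true_reward))
print(f"true-optimal speed={speeds[best_true]:.1f}, "
      f"crash risk={crash_risk[best_true]:.2f}")
```

The proxy optimizer maxes out speed and, with it, the unpenalized crash risk; the corrected objective settles at a moderate speed. Real reward hacking follows the same logic in vastly larger policy spaces.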


In recent years, a particularly popular implementation of RL has been models pre-trained using self-supervised learning and fine-tuned with reinforcement learning from human feedback (RLHF) (Christiano et al. 2017). Ngo, Chan, and Mindermann (2022) argue that by rewarding models for appearing harmless and ethical while also maximizing useful outcomes, RLHF could encourage the emergence of three problematic properties: situationally aware reward hacking, where policies exploit human fallibility to gain high reward; misaligned internally represented goals that generalize beyond the RLHF fine-tuning distribution; and power-seeking strategies.

Christiano, Paul F., Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. “Deep Reinforcement Learning from Human Preferences.” In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, edited by Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, 4299–4307. https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html.
Ngo, Richard, Lawrence Chan, and Sören Mindermann. 2022. “The Alignment Problem from a Deep Learning Perspective.” ArXiv Preprint abs/2209.00626. https://arxiv.org/abs/2209.00626.
Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. “Concrete Problems in AI Safety.” ArXiv Preprint abs/1606.06565. https://arxiv.org/abs/1606.06565.

Similarly, Amodei et al. (2016) outline concrete problems in AI safety, including avoiding negative side effects, avoiding reward hacking, scalable oversight for aspects of the objective that are too expensive to evaluate frequently during training, safe exploration strategies that encourage creativity while preventing harm, and robustness to distributional shift in unseen testing environments.


15.7.2 Autonomous Systems and Control


The consequences of autonomous systems that act independently of human oversight, and often outside human judgment, have been well documented across several industries and use cases. Most recently, the California Department of Motor Vehicles suspended Cruise’s deployment and testing permits for its autonomous vehicles, citing “unreasonable risks to public safety”. One such accident occurred when a vehicle struck a pedestrian who stepped into a crosswalk just after the stoplight had turned green and the vehicle was allowed to proceed. In 2018, a pedestrian crossing the street with her bike was killed when a self-driving Uber car, operating in autonomous mode, failed to accurately classify her moving body as an object to be avoided.


Autonomous systems beyond self-driving vehicles are also susceptible to such issues, with potentially graver consequences, as remotely-powered drones are already reshaping warfare. While such incidents bring up important ethical questions regarding who should be held responsible when these systems fail, they also highlight the technical challenges of giving full control of complex, real-world tasks to machines.


At its core, there is a tension between human and machine autonomy. Engineering and computer science disciplines have tended to focus on machine autonomy. For example, as of 2019, a search for the word “autonomy” in the Digital Library of the Association for Computing Machinery (ACM) reveals that of the top 100 most cited papers, 90% are on machine autonomy (Calvo et al. 2020). In an attempt to build systems for the benefit of humanity, these disciplines have taken, without question, increasing productivity, efficiency, and automation as primary strategies for benefiting humanity.

McCarthy, John. 1981. “Epistemological Problems of Artificial Intelligence.” In Readings in Artificial Intelligence, 459–65. Elsevier. https://doi.org/10.1016/b978-0-934613-03-3.50035-0.

These goals put machine automation at the forefront, often at the expense of the human. This approach suffers from inherent challenges, as noted since the early days of AI through the frame problem and the qualification problem, which formalize the observation that it is impossible to specify all the preconditions needed for a real-world action to succeed (McCarthy 1981).


These logical limitations have given rise to mathematical approaches such as Responsibility-Sensitive Safety (RSS) (Shalev-Shwartz, Shammah, and Shashua 2017), which aims at breaking down the end goal of an automated driving system (namely, safety) into concrete and checkable conditions that can be rigorously formulated in mathematical terms. The goal of RSS is for these safety rules to guarantee the safety of the automated driving system in the rigorous form of a mathematical proof. However, such approaches tend towards using automation to address the problems of automation and are susceptible to many of the same issues.
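To give a flavor of how RSS turns safety into a checkable condition, the sketch below implements its commonly stated safe longitudinal distance rule: the minimum gap such that the rear car can still stop if the front car brakes maximally while the rear car first reacts for a response time and only then brakes. The specific parameter values (response time, acceleration and braking bounds) are illustrative assumptions, not calibrated figures.

```python
# Hedged sketch of RSS's safe longitudinal distance rule
# (after Shalev-Shwartz, Shammah, and Shashua 2017).
def rss_safe_distance(v_rear, v_front, rho=0.5,
                      a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum safe gap (m) between a rear car at v_rear and a front car
    at v_front (m/s). rho is the rear car's response time; during it the
    rear car may accelerate at a_max_accel, then brakes at b_min_brake,
    while the front car may brake at b_max_brake. Values are illustrative."""
    worst_v = v_rear + rho * a_max_accel          # rear speed after reacting
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2           # distance covered while reacting
         + worst_v ** 2 / (2 * b_min_brake)       # rear car's braking distance
         - v_front ** 2 / (2 * b_max_brake))      # front car's braking distance
    return max(d, 0.0)

# Example: both cars travelling at 20 m/s (~72 km/h).
print(f"safe gap: {rss_safe_distance(20.0, 20.0):.1f} m")
```

Because the rule is a closed-form inequality, compliance can be checked (and in principle proven) for every time step, which is the sense in which RSS aims at safety "in the rigorous form of a mathematical proof."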

Shalev-Shwartz, Shai, Shaked Shammah, and Amnon Shashua. 2017. “On a Formal Model of Safe and Scalable Self-Driving Cars.” ArXiv Preprint abs/1708.06374. https://arxiv.org/abs/1708.06374.
Friedman, Batya. 1996. “Value-Sensitive Design.” Interactions 3 (6): 16–23. https://doi.org/10.1145/242485.242493.
Peters, Dorian, Rafael A. Calvo, and Richard M. Ryan. 2018. “Designing for Motivation, Engagement and Wellbeing in Digital Experience.” Front. Psychol. 9 (May): 797. https://doi.org/10.3389/fpsyg.2018.00797.
Ryan, Richard M., and Edward L. Deci. 2000. “Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being.” Am. Psychol. 55 (1): 68–78. https://doi.org/10.1037/0003-066x.55.1.68.

Another approach to combating these issues is to focus on the human-centered design of interactive systems that incorporate human control. Value-sensitive design (Friedman 1996) identified key design factors for a user interface that impact autonomy, including system capability, complexity, misrepresentation, and fluidity. A more recent model, called METUX (A Model for Motivation, Engagement, and Thriving in the User Experience), leverages insights from Self-Determination Theory (SDT) in psychology to identify six distinct spheres of technology experience that contribute to the design of systems that promote well-being and human flourishing (Peters, Calvo, and Ryan 2018). SDT defines autonomy as acting in accordance with one’s goals and values, which is distinct from the use of autonomy as simply a synonym for either independence or being in control (Ryan and Deci 2000).


Calvo et al. (2020) elaborate on METUX and its six “spheres of technology experience” in the context of AI recommender systems. They propose these spheres (Adoption, Interface, Tasks, Behavior, Life, and Society) as a way of organizing thinking about and evaluation of technology design in order to appropriately capture contradictory and downstream impacts on human autonomy when interacting with AI systems.

Calvo, Rafael A., Dorian Peters, Karina Vold, and Richard M. Ryan. 2020. “Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry.” Ethics of Digital Well-Being: A Multidisciplinary Approach, 31–54.

15.7.3 Economic Impacts on Jobs, Skills, Wages


A major concern of the current rise of AI technologies is widespread unemployment. As AI systems’ capabilities expand, many fear these technologies will cause an absolute loss of jobs as they replace current workers and overtake alternative employment roles across industries. However, changing economic landscapes at the hands of automation are not new, and historically such changes have been found to reflect patterns of displacement rather than replacement (Shneiderman 2022, Chapter 4). In particular, automation usually lowers costs and increases quality, greatly increasing access and demand. The need to serve these growing markets pushes production, creating new jobs.

Shneiderman, Ben. 2022. Human-Centered AI. Oxford University Press.

Furthermore, studies have found that attempts to achieve “lights-out” automation – productive and flexible automation with a minimal number of human workers – have been unsuccessful. Attempts to do so have led to what the MIT Work of the Future taskforce has termed “zero-sum automation”, in which process flexibility is sacrificed for increased productivity.


In contrast, the task force proposes a “positive-sum automation” approach, in which flexibility is increased by designing technology that strategically incorporates humans where they are most needed; making it easier for line employees to train and debug robots; using a bottom-up approach to identify which tasks should be automated; and choosing the right metrics for measuring success (see MIT’s Work of the Future).


However, the optimism of the high-level outlook does not preclude individual harm, especially to those whose skills and jobs will be rendered obsolete by automation. Public and legislative pressure, as well as corporate social responsibility efforts, will need to be directed at creating policies that share the benefits of automation with workers and result in higher minimum wages and benefits.


15.7.4 Scientific Communication and AI Literacy


A 1993 survey of 3000 North American adults’ beliefs about the “electronic thinking machine” revealed two primary perspectives of the early computer: the “beneficial tool of man” perspective and the “awesome thinking machine” perspective. The attitudes contributing to the “awesome thinking machine” view in this and other studies revealed a characterization of computers as “intelligent brains, smarter than people, unlimited, fast, mysterious, and frightening” (Martin 1993). These fears highlight an easily overlooked component of responsible AI, especially amidst the rush to commercialize such technologies: scientific communication that accurately communicates the capabilities and limitations of these systems while providing transparency about the limitations of experts’ knowledge about these systems.

Martin, C. Dianne. 1993. “The Myth of the Awesome Thinking Machine.” Commun. ACM 36 (4): 120–33. https://doi.org/10.1145/255950.153587.
Handlin, Oscar. 1965. “Science and Technology in Popular Culture.” Daedalus, 156–70.

As AI systems’ capabilities expand beyond most people’s comprehension, there is a natural tendency to assume the kinds of apocalyptic worlds painted by our media. This is partly due to the apparent difficulty of assimilating scientific information, even in technologically advanced cultures, which leads to the products of science being perceived as magic—“understandable only in terms of what it did, not how it worked” (Handlin 1965).


While tech companies should be held responsible for limiting grandiose claims and not falling into cycles of hype, research studying scientific communication, especially concerning (generative) AI, will also be useful in tracking and correcting public understanding of these technologies. An analysis of the Scopus scholarly database found that such research is scarce, with only a handful of papers mentioning both “science communication” and “artificial intelligence” (Schäfer 2023).

Schäfer, Mike S. 2023. “The Notorious GPT: Science Communication in the Age of Artificial Intelligence.” Journal of Science Communication 22 (02): Y02. https://doi.org/10.22323/2.22020402.
Lindgren, Simon. 2023. Handbook of Critical Studies of Artificial Intelligence. Edward Elgar Publishing.
Ng, Davy Tsz Kit, Jac Ka Lok Leung, Kai Wah Samuel Chu, and Maggie Shen Qiao. 2021. “AI Literacy: Definition, Teaching, Evaluation and Ethical Issues.” Proceedings of the Association for Information Science and Technology 58 (1): 504–9.

Research that exposes the perspectives, frames, and images of the future promoted by academic institutions, tech companies, stakeholders, regulators, journalists, NGOs, and others will also help to identify potential gaps in AI literacy among adults (Lindgren 2023). Increased focus on AI literacy from all stakeholders will be important in helping people whose skills are rendered obsolete by AI automation (Ng et al. 2021).


“But even those who never acquire that understanding need assurance that there is a connection between the goals of science and their welfare, and above all, that the scientist is not a man altogether apart but one who shares some of their value.” (Handlin 1965)


15.8 Conclusion


Responsible artificial intelligence is crucial as machine learning systems exert growing influence across healthcare, employment, finance, and criminal justice sectors. While AI promises immense benefits, thoughtlessly designed models risk perpetrating harm through biases, privacy violations, unintended behaviors, and other pitfalls.


Upholding principles of fairness, explainability, accountability, safety, and transparency enables the development of ethical AI aligned with human values. However, implementing these principles involves surmounting complex technical and social challenges around detecting dataset biases, choosing appropriate model tradeoffs, securing quality training data, and more. Frameworks like value-sensitive design guide balancing accuracy versus other objectives based on stakeholder needs.


Looking forward, advancing responsible AI necessitates continued research and industry commitment. More standardized benchmarks are required to compare model biases and robustness. As personalized TinyML expands, enabling efficient transparency and user control for edge devices warrants focus. Revised incentive structures and policies must encourage deliberate, ethical development before reckless deployment. Education around AI literacy and its limitations will further contribute to public understanding.


Responsible methods underscore that while machine learning offers immense potential, thoughtless application risks adverse consequences. Cross-disciplinary collaboration and human-centered design are imperative so AI can promote broad social benefit. The path ahead lies not in an arbitrary checklist but in a steadfast commitment to understand and uphold our ethical responsibility at each step. By taking conscientious action, the machine learning community can lead AI toward empowering all people equitably and safely.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.


Coming soon.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.


18  Robust AI


Resources: Slides, Labs, Exercises

DALL·E 3 Prompt: Create an image featuring an advanced AI system symbolized by an intricate, glowing neural network, deeply nested within a series of progressively larger and more fortified shields. Each shield layer represents a layer of defense, showcasing the system’s robustness against external threats and internal errors. The neural network, at the heart of this fortress of shields, radiates with connections that signify the AI’s capacity for learning and adaptation. This visual metaphor emphasizes not only the technological sophistication of the AI but also its resilience and security, set against the backdrop of a state-of-the-art, secure server room filled with the latest in technological advancements. The image aims to convey the concept of ultimate protection and resilience in the field of artificial intelligence.

The development of robust machine learning systems has become increasingly crucial. As these systems are deployed in various critical applications, from autonomous vehicles to healthcare diagnostics, ensuring their resilience to faults and errors is paramount.


Robust AI, in the context of hardware faults, software faults, and errors, plays an important role in maintaining the reliability, safety, and performance of machine learning systems. By addressing the challenges posed by transient, permanent, and intermittent hardware faults (Ahmadilivani et al. 2024), as well as bugs, design flaws, and implementation errors in software (H. Zhang 2008), robust AI techniques enable machine learning systems to operate effectively even in adverse conditions.


This chapter explores the fundamental concepts, techniques, and tools for building fault-tolerant and error-resilient machine learning systems. It empowers researchers and practitioners to develop AI solutions that can withstand the complexities and uncertainties of real-world environments.

Learning Objectives

  • Understand the importance of robust and resilient AI systems in real-world applications.
  • Identify and characterize hardware faults, software faults, and their impact on ML systems.
  • Recognize and develop defensive strategies against threats posed by adversarial attacks, data poisoning, and distribution shifts.
  • Learn techniques for detecting, mitigating, and designing fault-tolerant ML systems.
  • Become familiar with tools and frameworks for studying and enhancing ML system resilience throughout the AI development lifecycle.

18.1 Introduction


Robust AI refers to a system’s ability to maintain its performance and reliability in the presence of hardware faults, software faults, and errors. A robust machine learning system is designed to be fault-tolerant and error-resilient, capable of operating effectively even under adverse conditions.


As ML systems become increasingly integrated into various aspects of our lives, from cloud-based services to edge devices and embedded systems, the impact of hardware and software faults on their performance and reliability becomes more significant. In the future, as ML systems become more complex and are deployed in even more critical applications, the need for robust and fault-tolerant designs will be paramount.


ML systems are expected to play crucial roles in autonomous vehicles, smart cities, healthcare, and industrial automation domains. In these domains, the consequences of hardware or software faults can be severe, potentially leading to loss of life, economic damage, or environmental harm.


Researchers and engineers must focus on developing advanced techniques for fault detection, isolation, and recovery to mitigate these risks and ensure the reliable operation of future ML systems.


This chapter will focus specifically on three main categories of faults and errors that can impact the robustness of ML systems: hardware faults, model robustness threats, and software faults.

  • Hardware Faults: Transient, permanent, and intermittent faults can affect the hardware components of an ML system, corrupting computations and degrading performance.
  • Model Robustness: ML models can be vulnerable to adversarial attacks, data poisoning, and distribution shifts, which can induce targeted misclassifications, skew the model’s learned behavior, or compromise the system’s integrity and reliability.
  • Software Faults: Bugs, design flaws, and implementation errors in the software components, such as algorithms, libraries, and frameworks, can propagate errors and introduce vulnerabilities.

The specific challenges and approaches to achieving robustness may vary depending on the scale and constraints of the ML system. Large-scale cloud computing or data center systems may focus on fault tolerance and resilience through redundancy, distributed processing, and advanced error detection and correction techniques. In contrast, resource-constrained edge devices or embedded systems face unique challenges due to limited computational power, memory, and energy resources.


Regardless of the scale and constraints, the key characteristics of a robust ML system include fault tolerance, error resilience, and performance maintenance. By understanding and addressing the multifaceted challenges to robustness, we can develop trustworthy and reliable ML systems that can navigate the complexities of real-world environments.


This chapter is not just about exploring ML systems’ tools, frameworks, and techniques for detecting and mitigating faults, attacks, and distributional shifts. It’s about emphasizing the crucial role of each one of you in prioritizing resilience throughout the AI development lifecycle, from data collection and model training to deployment and monitoring. By proactively addressing the challenges to robustness, we can unlock the full potential of ML technologies while ensuring their safe, reliable, and responsible deployment in real-world applications.


As AI continues to shape our future, the potential of ML technologies is immense. But it’s only when we build resilient systems that can withstand the challenges of the real world that we can truly harness this potential. This is a defining factor in the success and societal impact of this transformative technology, and it’s within our reach.


18.2 Real-World Examples


Here are some real-world examples of cases where faults in hardware or software have caused major issues in ML systems across cloud, edge, and embedded environments:


18.2.1 Cloud


In February 2017, Amazon Web Services (AWS) experienced a significant outage due to human error during maintenance. An engineer inadvertently entered an incorrect command, causing many servers to be taken offline. This outage disrupted many AWS services, including Amazon’s AI-powered assistant, Alexa. As a result, Alexa-powered devices, such as Amazon Echo and third-party products using Alexa Voice Service, could not respond to user requests for several hours. This incident highlights the potential impact of human errors on cloud-based ML systems and the need for robust maintenance procedures and failsafe mechanisms.


In another example (Vangal et al. 2021), Facebook encountered a silent data corruption (SDC) issue within its distributed querying infrastructure, as shown in Figure 18.1. Facebook’s infrastructure includes a querying system that fetches and executes SQL and SQL-like queries across multiple datasets using frameworks like Presto, Hive, and Spark. One of the applications that utilized this querying infrastructure was a compression application to reduce the footprint of data stores. In this compression application, files were compressed when not being read and decompressed when a read request was made. Before decompression, the file size was checked to ensure it was greater than zero, indicating a valid compressed file with contents.

Vangal, Sriram, Somnath Paul, Steven Hsu, Amit Agarwal, Saurabh Kumar, Ram Krishnamurthy, Harish Krishnamurthy, James Tschanz, Vivek De, and Chris H. Kim. 2021. “Wide-Range Many-Core SoC Design in Scaled CMOS: Challenges and Opportunities.” IEEE Trans. Very Large Scale Integr. VLSI Syst. 29 (5): 843–56. https://doi.org/10.1109/tvlsi.2021.3061649.

Figure 18.1: Silent data corruption in database applications (Source: Facebook)

However, in one instance, when the file size was being computed for a valid non-zero-sized file, the decompression algorithm invoked a power function from the Scala library. Unexpectedly, the Scala function returned a zero size value for the file despite having a known non-zero decompressed size. As a result, the decompression was not performed, and the file was not written to the output database. This issue manifested sporadically, with some occurrences of the same file size computation returning the correct non-zero value.


The impact of this silent data corruption was significant, leading to missing files and incorrect data in the output database. The application relying on the decompressed files failed due to the data inconsistencies. In the case study presented in the paper, Facebook’s infrastructure, which consists of hundreds of thousands of servers handling billions of requests per day from their massive user base, encountered a silent data corruption issue. The affected system processed user queries, image uploads, and media content, which required fast, reliable, and secure execution.


This case study illustrates how silent data corruption can propagate through multiple layers of an application stack, leading to data loss and application failures in a large-scale distributed system. The intermittent nature of the issue and the lack of explicit error messages made it particularly challenging to diagnose and resolve. Nor is this problem restricted to Meta: other companies that operate AI hypercomputers, such as Google, face the same challenge. In Figure 18.2, Jeff Dean, Chief Scientist at Google DeepMind and Google Research, discusses SDCs and their impact on ML systems.

Figure 18.2: Silent data corruption (SDC) errors are a major issue for AI hypercomputers. (Source: Jeff Dean at MLSys 2024, Keynote (Google))
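A common defense against SDCs of this kind is redundant execution: run the same deterministic computation more than once and compare the results. The sketch below is a minimal, hypothetical illustration of that idea in Python (the `checked` helper and its retry policy are illustrative, not the actual mechanism used by Meta or Google):

```python
def checked(fn, *args, retries=3):
    """Run a deterministic computation until two executions agree.

    Disagreement between independent runs suggests a transient fault or
    silent data corruption struck one of them.
    """
    previous = fn(*args)
    for _ in range(retries):
        current = fn(*args)
        if current == previous:
            return current  # two independent runs agree; accept the result
        previous = current
    raise RuntimeError("results never converged: suspected silent data corruption")

# Guarding the file-size computation from the case study: a corrupted run
# that returned 0 for a non-empty file would disagree with a clean rerun.
size = checked(len, b"compressed payload")
```

The cost is doubled (or more) computation, which is why production systems often reserve such checks for critical code paths or combine them with hardware mechanisms like ECC memory.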

18.2.2 Edge


Regarding examples of faults and errors in edge ML systems, one area that has gathered significant attention is the domain of self-driving cars. Self-driving vehicles rely heavily on machine learning algorithms for perception, decision-making, and control, making them particularly susceptible to the impact of hardware and software faults. In recent years, several high-profile incidents involving autonomous vehicles have highlighted the challenges and risks associated with deploying these systems in real-world environments.


In May 2016, a fatal accident occurred when a Tesla Model S operating on Autopilot crashed into a white semi-trailer truck crossing the highway (Figure 18.3). The Autopilot system, which relied on computer vision and machine learning algorithms, failed to recognize the white trailer against a bright sky background. The driver, who was reportedly watching a movie at the time of the crash, did not intervene in time, and the vehicle collided with the trailer at full speed. This incident raised concerns about the limitations of AI-based perception systems and the need for robust failsafe mechanisms in autonomous vehicles. It also highlighted the importance of driver awareness and the need for clear guidelines on using semi-autonomous driving features.

Figure 18.3: Tesla in the fatal California crash was on Autopilot (Source: BBC News)

In March 2018, an Uber self-driving test vehicle struck and killed a pedestrian crossing the street in Tempe, Arizona. The incident was caused by a flaw in the vehicle’s object recognition software, which failed to classify the pedestrian correctly as an obstacle to be avoided. The safety driver, who was supposed to monitor the vehicle’s operation and intervene if necessary, was found to have been distracted at the time of the crash. This incident led to widespread scrutiny of Uber’s self-driving program and raised questions about the readiness of autonomous vehicle technology for public roads. It also emphasized the need for rigorous testing, validation, and safety measures in developing and deploying AI-based self-driving systems.


In 2021, Tesla faced increased scrutiny following several accidents involving vehicles operating on Autopilot mode. Some of these accidents were attributed to issues with the Autopilot system’s ability to detect and respond to certain road situations, such as stationary emergency vehicles or obstacles in the road. For example, in April 2021, a Tesla Model S crashed into a tree in Texas, killing two passengers. Initial reports suggested that no one was in the driver’s seat at the time of the crash, raising questions about the use and potential misuse of Autopilot features. These incidents highlight the ongoing challenges in developing robust and reliable autonomous driving systems and the need for clear regulations and consumer education regarding the capabilities and limitations of these technologies.


18.2.3 Embedded


Embedded systems, which often operate in resource-constrained environments and safety-critical applications, have long faced challenges related to hardware and software faults. As AI and machine learning technologies are increasingly integrated into these systems, the potential for faults and errors takes on new dimensions, with the added complexity of AI algorithms and the critical nature of the applications in which they are deployed.


Let’s consider a few examples, starting with outer space exploration. NASA’s Mars Polar Lander mission in 1999 suffered a catastrophic failure due to a software error in the touchdown detection system (Figure 18.4). The spacecraft’s onboard software mistakenly interpreted the noise from the deployment of its landing legs as a sign that it had touched down on the Martian surface. As a result, the spacecraft prematurely shut down its engines, causing it to crash into the surface. This incident highlights the critical importance of robust software design and extensive testing in embedded systems, especially those operating in remote and unforgiving environments. As AI capabilities are integrated into future space missions, ensuring these systems’ reliability and fault tolerance will be paramount to mission success.

Figure 18.4: NASA’s failed Mars Polar Lander mission in 1999 cost over $200M (Source: SlashGear)

Back on Earth, in 2015, a Boeing 787 Dreamliner experienced a complete electrical shutdown during a flight due to a software bug in its generator control units. The bug caused the generator control units to enter a failsafe mode, cutting power to the aircraft’s electrical systems and forcing an emergency landing. This incident underscores the potential for software faults to have severe consequences in complex embedded systems like aircraft. As AI technologies are increasingly applied in aviation, such as in autonomous flight systems and predictive maintenance, ensuring the robustness and reliability of these systems will be critical to passenger safety.


As AI capabilities are increasingly integrated into embedded systems, the potential for faults and errors becomes more complex and severe. Imagine a smart pacemaker experiencing a sudden glitch: the patient’s life could depend on it. AI algorithms, such as those used for perception, decision-making, and control, introduce new sources of potential faults, including data-related issues, model uncertainties, and unexpected behaviors in edge cases. Moreover, the opaque nature of some AI models can make it challenging to identify and diagnose faults when they occur.


18.3 Hardware Faults


Hardware faults are a significant challenge in computing systems, including traditional and ML systems. These faults occur when physical components, such as processors, memory modules, storage devices, or interconnects, malfunction or behave abnormally. Hardware faults can cause incorrect computations, data corruption, system crashes, or complete system failure, compromising the integrity and trustworthiness of the computations performed by the system (Jha et al. 2019). A complete system failure refers to a situation where the entire computing system becomes unresponsive or inoperable due to a critical hardware malfunction. This type of failure is the most severe, as it renders the system unusable and may lead to data loss or corruption, requiring manual intervention to repair or replace the faulty components.


Understanding the taxonomy of hardware faults is essential for anyone working with computing systems, especially in the context of ML systems. ML systems rely on complex hardware architectures and large-scale computations to train and deploy models that learn from data and make intelligent predictions or decisions. However, hardware faults can introduce errors and inconsistencies in the MLOps pipeline, affecting the trained models’ accuracy, robustness, and reliability (G. Li et al. 2017).


Knowing the different types of hardware faults, their mechanisms, and their potential impact on system behavior is crucial for developing effective strategies to detect, mitigate, and recover from them. This knowledge is necessary for designing fault-tolerant computing systems, implementing robust ML algorithms, and ensuring the overall dependability of ML-based applications.


The following sections will explore the three main categories of hardware faults: transient, permanent, and intermittent. We will discuss their definitions, characteristics, causes, mechanisms, and examples of how they manifest in computing systems. We will also cover detection and mitigation techniques specific to each fault type.

  • Transient Faults: Transient faults are temporary and non-recurring. They are often caused by external factors such as cosmic rays, electromagnetic interference, or power fluctuations. A common example of a transient fault is a bit flip, where a single bit in a memory location or register changes its value unexpectedly. Transient faults can lead to incorrect computations or data corruption, but they do not cause permanent damage to the hardware.
  • Permanent Faults: Permanent faults, also called hard errors, are irreversible and persist over time. They are typically caused by physical defects or wear-out of hardware components. Examples of permanent faults include stuck-at faults, where a bit or signal is permanently set to a specific value (e.g., always 0 or always 1), and device failures, such as a malfunctioning processor or a damaged memory module. Permanent faults can result in complete system failure or significant performance degradation.
  • Intermittent Faults: Intermittent faults are recurring faults that appear and disappear intermittently. Unstable hardware conditions, such as loose connections, aging components, or manufacturing defects, often cause them. Intermittent faults can be challenging to diagnose and reproduce because they may occur sporadically and under specific conditions. Examples include intermittent short circuits or contact resistance issues. Intermittent faults can lead to unpredictable system behavior and intermittent errors.

By the end of this discussion, readers will have a solid understanding of fault taxonomy and its relevance to traditional computing and ML systems. This foundation will help them make informed decisions when designing, implementing, and deploying fault-tolerant solutions, improving the reliability and trustworthiness of their computing systems and ML applications.
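The distinction between fault classes can be made concrete with a toy fault model. The following Python sketch is purely illustrative (the function names are hypothetical, not from any fault-injection framework): a transient fault corrupts a single read, while a permanent stuck-at fault corrupts every read of the affected bit in the same way.

```python
def transient_flip(value: int, bit: int) -> int:
    """Model a transient fault: one read returns the value with a bit flipped."""
    return value ^ (1 << bit)

def stuck_at(value: int, bit: int, level: int) -> int:
    """Model a permanent stuck-at fault: the given bit always reads `level`."""
    return value | (1 << bit) if level else value & ~(1 << bit)

word = 0b1010
print(bin(transient_flip(word, 0)))  # 0b1011 -- only this one read is wrong
print(bin(stuck_at(word, 1, 0)))     # 0b1000 -- every read is wrong the same way
print(bin(stuck_at(word, 0, 1)))     # 0b1011 -- stuck-at-1 forces the bit high
```

An intermittent fault would sit between the two: applying the corruption only on some reads, which is exactly what makes it hard to reproduce.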


18.3.1 Transient Faults


Transient faults in hardware can manifest in various forms, each with its own unique characteristics and causes. These faults are temporary in nature and do not result in permanent damage to the hardware components.


Definition and Characteristics


Some of the common types of transient faults include Single Event Upsets (SEUs) caused by ionizing radiation, voltage fluctuations (Reddi and Gupta 2013) due to power supply noise or electromagnetic interference, Electromagnetic Interference (EMI) induced by external electromagnetic fields, Electrostatic Discharge (ESD) resulting from sudden static electricity flow, crosstalk caused by unintended signal coupling, ground bounce triggered by simultaneous switching of multiple outputs, timing violations due to signal timing constraint breaches, and soft errors in combinational logic affecting the output of logic circuits (Mukherjee, Emer, and Reinhardt 2005). Understanding these different types of transient faults is crucial for designing robust and resilient hardware systems that can mitigate their impact and ensure reliable operation.

Reddi, Vijay Janapa, and Meeta Sharma Gupta. 2013. Resilient Architecture Design for Voltage Variation. Springer International Publishing. https://doi.org/10.1007/978-3-031-01739-1.
Mukherjee, S. S., J. Emer, and S. K. Reinhardt. 2005. “The Soft Error Problem: An Architectural Perspective.” In 11th International Symposium on High-Performance Computer Architecture, 243–47. IEEE. https://doi.org/10.1109/hpca.2005.37.

All of these transient faults are characterized by their short duration and non-permanent nature. They do not persist or leave any lasting impact on the hardware. However, they can still lead to incorrect computations, data corruption, or system misbehavior if not properly handled.
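The severity of a single-event upset depends heavily on which bit is struck. A minimal Python sketch, assuming the value is stored as an IEEE-754 float32, shows that a flip in a low mantissa bit is nearly harmless while a flip in a high exponent bit changes the value by orders of magnitude:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with one bit of its float32 representation flipped."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return out

w = 0.75
print(flip_bit(w, 0))   # low mantissa bit flipped: ~0.75000006, negligible
print(flip_bit(w, 30))  # high exponent bit flipped: ~2.55e38, catastrophic
```

This asymmetry is one reason fault-injection studies report per-bit sensitivity rather than a single error rate for a whole word.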


Causes of Transient Faults


Transient faults can be attributed to various external factors. One common cause is cosmic rays, high-energy particles originating from outer space. When these particles strike sensitive areas of the hardware, such as memory cells or transistors, they can induce charge disturbances that alter the stored or transmitted data. This is illustrated in Figure 18.5. Another cause of transient faults is electromagnetic interference (EMI) from nearby devices or power fluctuations. EMI can couple with the circuits and cause voltage spikes or glitches that temporarily disrupt the normal operation of the hardware.

Figure 18.5: Mechanism of Hardware Transient Fault Occurrence (Source: NTT)

Mechanisms of Transient Faults


Transient faults can manifest through different mechanisms depending on the affected hardware component. In memory devices like DRAM or SRAM, transient faults often lead to bit flips, where a single bit changes its value from 0 to 1 or vice versa. This can corrupt the stored data or instructions. In logic circuits, transient faults can cause glitches or voltage spikes propagating through the combinational logic, resulting in incorrect outputs or control signals. Transient faults can also affect communication channels, causing bit errors or packet losses during data transmission.


Impact on ML Systems


A common example of a transient fault is a bit flip in main memory. If an important data structure or critical instruction is stored in the affected memory location, it can lead to incorrect computations or program misbehavior. For instance, a bit flip in the memory storing a loop counter can cause the loop to execute indefinitely or terminate prematurely. Transient faults in control registers or flag bits can alter the flow of program execution, leading to unexpected jumps or incorrect branch decisions. In communication systems, transient faults can corrupt transmitted data packets, resulting in retransmissions or data loss.


In ML systems, transient faults can have significant implications during the training phase (He et al. 2023). ML training involves iterative computations and updates to model parameters based on large datasets. If a transient fault occurs in the memory storing the model weights or gradients, it can lead to incorrect updates and compromise the convergence and accuracy of the training process. Figure 18.6 shows a real-world example from Google’s production fleet, where an SDC anomaly caused a significant difference in the gradient norm.

Figure 18.6: SDC in the ML training phase results in anomalies in the gradient norm. (Source: Jeff Dean, MLSys 2024 Keynote (Google))

For example, a bit flip in the weight matrix of a neural network can cause the model to learn incorrect patterns or associations, leading to degraded performance (Wan et al. 2021). Transient faults in the data pipeline, such as corruption of training samples or labels, can also introduce noise and affect the quality of the learned model.

Wan, Zishen, Aqeel Anwar, Yu-Shun Hsiao, Tianyu Jia, Vijay Janapa Reddi, and Arijit Raychowdhury. 2021. “Analyzing and Improving Fault Tolerance of Learning-Based Navigation Systems.” In 2021 58th ACM/IEEE Design Automation Conference (DAC), 841–46. IEEE. https://doi.org/10.1109/dac18074.2021.9586116.

During the inference phase, transient faults can impact the reliability and trustworthiness of ML predictions. If a transient fault occurs in the memory storing the trained model parameters or in the computation of the inference results, it can lead to incorrect or inconsistent predictions. For instance, a bit flip in the activation values of a neural network can alter the final classification or regression output (Mahmoud et al. 2020).


In safety-critical applications, such as autonomous vehicles or medical diagnosis, transient faults during inference can have severe consequences, leading to incorrect decisions or actions (G. Li et al. 2017; Jha et al. 2019). Ensuring the resilience of ML systems against transient faults is crucial to maintaining the integrity and reliability of the predictions.
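One lightweight way to catch parameter corruption before it reaches a safety-critical decision is to checksum the model weights at load time and re-verify before (or periodically during) inference. The following Python sketch is a hypothetical illustration of that pattern (production systems may instead rely on ECC memory or redundant execution):

```python
import hashlib
import struct

def digest(weights):
    """Hash the serialized float32 weights so silent corruption is detectable."""
    buf = b"".join(struct.pack("<f", w) for w in weights)
    return hashlib.sha256(buf).hexdigest()

weights = [0.2, -0.5, 1.1, 0.7]
golden = digest(weights)  # recorded once, when the verified model is loaded

# ... later, before a safety-critical inference ...
if digest(weights) != golden:
    raise RuntimeError("weight memory corrupted since load; reload the model")
```

Even a single flipped bit in any weight changes the digest, so the check trades a small amount of latency for detection. It catches corruption of stored parameters, though not faults that strike the computation itself.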

Li, Guanpeng, Siva Kumar Sastry Hari, Michael Sullivan, Timothy Tsai, Karthik Pattabiraman, Joel Emer, and Stephen W. Keckler. 2017. “Understanding Error Propagation in Deep Learning Neural Network (DNN) Accelerators and Applications.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 1–12. ACM. https://doi.org/10.1145/3126908.3126964.

18.3.2 Permanent Faults


Permanent faults are hardware defects that persist and cause irreversible damage to the affected components. These faults are characterized by their persistent nature and require repair or replacement of the faulty hardware to restore normal system functionality.


Definition and Characteristics


Permanent faults are hardware defects that cause persistent and irreversible malfunctions in the affected components. The faulty component remains non-operational until a permanent fault is repaired or replaced. These faults are characterized by their consistent and reproducible nature, meaning that the faulty behavior is observed every time the affected component is used. Permanent faults can impact various hardware components, such as processors, memory modules, storage devices, or interconnects, leading to system crashes, data corruption, or complete system failure.


One notable example of a permanent fault is the Intel FDIV bug, which was discovered in 1994. The FDIV bug was a flaw in certain Intel Pentium processors’ floating-point division (FDIV) units. The bug caused incorrect results for specific division operations, leading to inaccurate calculations.


The FDIV bug occurred due to an error in the lookup table used by the division unit. In rare cases, the processor would fetch an incorrect value from the lookup table, resulting in a slightly less precise result than expected. For instance, Figure 18.7 shows the fraction 4195835/3145727 plotted on a Pentium processor with the FDIV permanent fault. The triangular regions are where erroneous calculations occurred. Ideally, all correct values would round to 1.3338, but the erroneous results show 1.3337, indicating a mistake in the 5th digit.


Although the error was small, it could compound over many division operations, leading to significant inaccuracies in mathematical calculations. The impact of the FDIV bug was significant, especially for applications that relied heavily on precise floating-point division, such as scientific simulations, financial calculations, and computer-aided design. The bug led to incorrect results, which could have severe consequences in fields like finance or engineering.

Figure 18.7: Intel Pentium processor with the FDIV permanent fault. The triangular regions are where erroneous calculations occurred. (Source: Byte Magazine)

The Intel FDIV bug is a cautionary tale for the potential impact of permanent faults on ML systems. In the context of ML, permanent faults in hardware components can lead to incorrect computations, affecting the accuracy and reliability of the models. For example, if an ML system relies on a processor with a faulty floating-point unit, similar to the Intel FDIV bug, it could introduce errors in the calculations performed during training or inference.


These errors can propagate through the model, leading to inaccurate predictions or skewed learning. In applications where ML is used for critical tasks, such as autonomous driving, medical diagnosis, or financial forecasting, the consequences of incorrect computations due to permanent faults can be severe.


It is crucial for ML practitioners to be aware of the potential impact of permanent faults and to incorporate fault-tolerant techniques, such as hardware redundancy, error detection and correction mechanisms, and robust algorithm design, to mitigate the risks associated with these faults. Additionally, thorough testing and validation of ML hardware components can help identify and address permanent faults before they impact the system’s performance and reliability.


Causes of Permanent Faults


Permanent faults can arise from several causes, including manufacturing defects and wear-out mechanisms. Manufacturing defects are inherent flaws introduced during the fabrication process of hardware components. These defects include improper etching, incorrect doping, or contamination, leading to non-functional or partially functional components.


On the other hand, wear-out mechanisms occur over time as the hardware components are subjected to prolonged use and stress. Factors such as electromigration, oxide breakdown, or thermal stress can cause gradual degradation of the components, eventually leading to permanent failures.


Mechanisms of Permanent Faults


Permanent faults can manifest through various mechanisms, depending on the nature and location of the fault. Stuck-at faults (Seong et al. 2010) are common permanent faults where a signal or memory cell remains fixed at a particular value (either 0 or 1) regardless of the inputs, as illustrated in Figure 18.8.

Seong, Nak Hee, Dong Hyuk Woo, Vijayalakshmi Srinivasan, Jude A. Rivers, and Hsien-Hsin S. Lee. 2010. “SAFER: Stuck-at-Fault Error Recovery for Memories.” In 2010 43rd Annual IEEE/ACM International Symposium on Microarchitecture, 115–24. IEEE. https://doi.org/10.1109/micro.2010.46.

Figure 18.8: Stuck-at Fault Model in Digital Circuits (Source: Accendo Reliability)

Stuck-at faults can occur in logic gates, memory cells, or interconnects, causing incorrect computations or data corruption. Another mechanism is device failures, where a component, such as a transistor or a memory cell, completely ceases to function. This can be due to manufacturing defects or severe wear-out. Bridging faults occur when two or more signal lines are unintentionally connected, causing short circuits or incorrect logic behavior.
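The stuck-at fault model above can be sketched in a few lines of Python (illustrative only; the word width and test values are arbitrary). The faulty bit position always reads back the stuck value, regardless of what was written, so the fault is visible only for words whose intended value differs at that bit:

```python
def stuck_at(value: int, bit: int, stuck_value: int, width: int = 8) -> int:
    """Model a stuck-at fault: `bit` of a `width`-bit word always reads `stuck_value`."""
    mask = 1 << bit
    if stuck_value:
        return (value | mask) & ((1 << width) - 1)  # stuck-at-1
    return value & ~mask                            # stuck-at-0

# A stuck-at-0 fault on bit 7 corrupts every word that should have bit 7 set:
print(stuck_at(0b10000001, 7, 0))  # 129 reads back as 1
print(stuck_at(0b00000001, 7, 0))  # the fault is invisible for this word
```

Because the corruption is deterministic and reproducible, test patterns that exercise both polarities of every bit (as BIST and scan-chain testing do) will always expose it, unlike transient or intermittent faults.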


In addition to stuck-at faults, several other types of permanent faults can affect digital circuits and, in turn, an ML system. Delay faults can cause the propagation delay of a signal to exceed the specified limit, leading to timing violations. Interconnect faults, such as open faults (broken wires), resistive faults (increased resistance), or capacitive faults (increased capacitance), can cause signal integrity issues or timing violations. Memory cells can also suffer from various faults, including transition faults (inability to change state), coupling faults (interference between adjacent cells), and neighborhood pattern sensitive faults (faults that depend on the values of neighboring cells). Other permanent faults can occur in the power supply network or the clock distribution network, affecting the functionality and timing of the circuit.


Impact on ML Systems


Permanent faults can severely affect the behavior and reliability of computing systems. For example, a stuck-at-fault in a processor’s arithmetic logic unit (ALU) can cause incorrect computations, leading to erroneous results or system crashes. A permanent fault in a memory module, such as a stuck-at fault in a specific memory cell, can corrupt the stored data, causing data loss or program misbehavior. In storage devices, permanent faults like bad sectors or device failures can result in data inaccessibility or complete loss of stored information. Permanent interconnect faults can disrupt communication channels, causing data corruption or system hangs.


Permanent faults can significantly affect ML systems during the training and inference phases. During training, permanent faults in processing units or memory can lead to incorrect computations, resulting in corrupted or suboptimal models (He et al. 2023). Furthermore, faults in storage devices can corrupt the training data or the stored model parameters, leading to data loss or model inconsistencies (He et al. 2023).

Zhang, Jeff Jun, Tianyu Gu, Kanad Basu, and Siddharth Garg. 2018. “Analyzing and Mitigating the Impact of Permanent Faults on a Systolic Array Based Neural Network Accelerator.” In 2018 IEEE 36th VLSI Test Symposium (VTS), 1–6. IEEE. https://doi.org/10.1109/vts.2018.8368656.

During inference, permanent faults can impact the reliability and correctness of ML predictions. Faults in the processing units can produce incorrect results or cause system failures, while faults in memory storing the model parameters can lead to corrupted or outdated models being used for inference (J. J. Zhang et al. 2018).


To mitigate the impact of permanent faults in ML systems, fault-tolerant techniques must be employed at both the hardware and software levels. Hardware redundancy, such as duplicating critical components or using error-correcting codes (Kim, Sullivan, and Erez 2015), can help detect and recover from permanent faults. Software techniques, such as checkpoint and restart mechanisms (Egwutuoha et al. 2013), can enable the system to recover from permanent faults by returning to a previously saved state. Regular monitoring, testing, and maintenance of ML systems can help identify and replace faulty components before they cause significant disruptions.
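The checkpoint-and-restart idea can be sketched as follows (an illustrative Python example; the file name and state layout are assumptions, not from the text). Training state is persisted atomically so that after a fault the system resumes from the last saved state rather than restarting from scratch:

```python
import os
import pickle
import tempfile

def save_checkpoint(path: str, step: int, params: list) -> None:
    """Persist training state; os.replace makes the update atomic,
    so a crash mid-write never leaves a torn checkpoint file."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "params": params}, f)
    os.replace(tmp, path)

def load_checkpoint(path: str) -> dict:
    """Restore the most recent consistent training state after a fault."""
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
save_checkpoint(path, step=100, params=[0.1, 0.2])
state = load_checkpoint(path)  # after a crash/replacement, resume here
print(state["step"])           # -> 100
```

Real training frameworks layer the same pattern over model and optimizer state; the essential properties are atomic writes and a known-good state to roll back to.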

Kim, Jungrae, Michael Sullivan, and Mattan Erez. 2015. “Bamboo ECC: Strong, Safe, and Flexible Codes for Reliable Computer Memory.” In 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), 101–12. IEEE. https://doi.org/10.1109/hpca.2015.7056025.

Egwutuoha, Ifeanyi P., David Levy, Bran Selic, and Shiping Chen. 2013. “A Survey of Fault Tolerance Mechanisms and Checkpoint/Restart Implementations for High Performance Computing Systems.” The Journal of Supercomputing 65 (3): 1302–26. https://doi.org/10.1007/s11227-013-0884-0.

Designing ML systems with fault tolerance in mind is crucial to ensure their reliability and robustness in the presence of permanent faults. This may involve incorporating redundancy, error detection and correction mechanisms, and fail-safe strategies into the system architecture. By proactively addressing the challenges posed by permanent faults, ML systems can maintain their integrity, accuracy, and trustworthiness, even in the face of hardware failures.


18.3.3 Intermittent Faults


Intermittent faults are hardware faults that occur sporadically and unpredictably in a system. An example is illustrated in Figure 18.9, where cracks in the material can introduce increased resistance in circuitry. These faults are particularly challenging to detect and diagnose because they appear and disappear intermittently, making it difficult to reproduce and isolate the root cause. Intermittent faults can lead to system instability, data corruption, and performance degradation.

Figure 18.9: Increased resistance due to an intermittent fault – crack between copper bump and package solder (Source: Constantinescu)

Definition and Characteristics


Intermittent faults are characterized by their sporadic and non-deterministic nature. They occur irregularly and may appear and disappear spontaneously, with varying durations and frequencies. These faults do not consistently manifest every time the affected component is used, making them harder to detect than permanent faults. Intermittent faults can affect various hardware components, including processors, memory modules, storage devices, or interconnects. They can cause transient errors, data corruption, or unexpected system behavior.


Intermittent faults can significantly impact the behavior and reliability of computing systems (Rashid, Pattabiraman, and Gopalakrishnan 2015). For example, an intermittent fault in a processor’s control logic can cause irregular program flow, leading to incorrect computations or system hangs. Intermittent faults in memory modules can corrupt data values, resulting in erroneous program execution or data inconsistencies. In storage devices, intermittent faults can cause read/write errors or data loss. Intermittent faults in communication channels can lead to data corruption, packet loss, or intermittent connectivity issues. These faults can cause system crashes, data integrity problems, or performance degradation, depending on the severity and frequency of the intermittent failures.

Rashid, Layali, Karthik Pattabiraman, and Sathish Gopalakrishnan. 2015. “Characterizing the Impact of Intermittent Hardware Faults on Programs.” IEEE Trans. Reliab. 64 (1): 297–310. https://doi.org/10.1109/tr.2014.2363152.

Causes of Intermittent Faults


Intermittent faults can arise from several causes, both internal and external, to the hardware components (Constantinescu 2008). One common cause is aging and wear-out of the components. As electronic devices age, they become more susceptible to intermittent failures due to degradation mechanisms such as electromigration, oxide breakdown, or solder joint fatigue.

Constantinescu, Cristian. 2008. “Intermittent Faults and Effects on Reliability of Integrated Circuits.” In 2008 Annual Reliability and Maintainability Symposium, 370–74. IEEE. https://doi.org/10.1109/rams.2008.4925824.

Manufacturing defects or process variations can also introduce intermittent faults, where marginal or borderline components may exhibit sporadic failures under specific conditions, as shown in Figure 18.10.


Environmental factors, such as temperature fluctuations, humidity, or vibrations, can trigger intermittent faults by altering the electrical characteristics of the components. Loose or degraded connections, such as those in connectors or printed circuit boards, can cause intermittent faults.

Figure 18.10: Residue-induced intermittent fault in a DRAM chip (Source: Hynix Semiconductor)

Mechanisms of Intermittent Faults


Intermittent faults can manifest through various mechanisms, depending on the underlying cause and the affected component. One mechanism is the intermittent open or short circuit, where a signal path or connection becomes temporarily disrupted or shorted, causing erratic behavior. Another mechanism is the intermittent delay fault (J. Zhang et al. 2018), where the timing of signals or propagation delays becomes inconsistent, leading to synchronization issues or incorrect computations. Intermittent faults can manifest as transient bit flips or soft errors in memory cells or registers, causing data corruption or incorrect program execution.

Zhang, Jeff, Kartheek Rangineni, Zahra Ghodsi, and Siddharth Garg. 2018. “ThUnderVolt: Enabling Aggressive Voltage Underscaling and Timing Error Resilience for Energy Efficient Deep Learning Accelerators.” In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac.2018.8465918.

Impact on ML Systems


In the context of ML systems, intermittent faults can introduce significant challenges and impact the system’s reliability and performance. During the training phase, intermittent faults in processing units or memory can lead to inconsistencies in computations, resulting in incorrect or noisy gradients and weight updates. This can affect the convergence and accuracy of the training process, leading to suboptimal or unstable models. Intermittent data storage or retrieval faults can corrupt the training data, introducing noise or errors that degrade the quality of the learned models (He et al. 2023).

He, Yi, Mike Hutton, Steven Chan, Robert De Gruijl, Rama Govindaraju, Nishant Patil, and Yanjing Li. 2023. “Understanding and Mitigating Hardware Failures in Deep Learning Training Systems.” In Proceedings of the 50th Annual International Symposium on Computer Architecture, 1–16. ACM. https://doi.org/10.1145/3579371.3589105.

During the inference phase, intermittent faults can impact the reliability and consistency of ML predictions. Faults in the processing units or memory can cause incorrect computations or data corruption, leading to erroneous or inconsistent predictions. Intermittent faults in the data pipeline can introduce noise or errors in the input data, affecting the accuracy and robustness of the predictions. In safety-critical applications, such as autonomous vehicles or medical diagnosis systems, intermittent faults can have severe consequences, leading to incorrect decisions or actions that compromise safety and reliability.


Mitigating the impact of intermittent faults in ML systems requires a multifaceted approach (Rashid, Pattabiraman, and Gopalakrishnan 2012). At the hardware level, techniques such as robust design practices, component selection, and environmental control can help reduce the occurrence of intermittent faults. Redundancy and error correction mechanisms can be employed to detect and recover from intermittent failures. At the software level, runtime monitoring, anomaly detection, and fault-tolerant techniques can be incorporated into the ML pipeline. This may include techniques such as data validation, outlier detection, model ensembling, or runtime model adaptation to handle intermittent faults gracefully.

Rashid, Layali, Karthik Pattabiraman, and Sathish Gopalakrishnan. 2012. “Intermittent Hardware Errors Recovery: Modeling and Evaluation.” In 2012 Ninth International Conference on Quantitative Evaluation of Systems, 220–29. IEEE. https://doi.org/10.1109/qest.2012.37.

Designing ML systems resilient to intermittent faults is crucial to ensuring their reliability and robustness. This involves incorporating fault-tolerant techniques, runtime monitoring, and adaptive mechanisms into the system architecture. By proactively addressing the challenges of intermittent faults, ML systems can maintain their accuracy, consistency, and trustworthiness, even in the presence of sporadic hardware failures. Regular testing, monitoring, and maintenance of ML systems can help identify and mitigate intermittent faults before they cause significant disruptions or performance degradation.


18.3.4 Detection and Mitigation


This section explores various fault detection techniques, including hardware-level and software-level approaches, and discusses effective mitigation strategies to enhance the resilience of ML systems. Additionally, we will look into resilient ML system design considerations, present case studies and examples, and highlight future research directions in fault-tolerant ML systems.


Fault Detection Techniques


Fault detection techniques are important for identifying and localizing hardware faults in ML systems. These techniques can be broadly categorized into hardware-level and software-level approaches, each offering unique capabilities and advantages.

Hardware-level fault detection

Hardware-level fault detection techniques are implemented at the physical level of the system and aim to identify faults in the underlying hardware components. There are several hardware techniques, but broadly, we can bucket these different mechanisms into the following categories.


Built-in self-test (BIST) mechanisms: BIST is a powerful technique for detecting faults in hardware components (Bushnell and Agrawal 2002). It involves incorporating additional hardware circuitry into the system for self-testing and fault detection. BIST can be applied to various components, such as processors, memory modules, or application-specific integrated circuits (ASICs). For example, BIST can be implemented in a processor using scan chains, which are dedicated paths that allow access to internal registers and logic for testing purposes.

Bushnell, Michael L., and Vishwani D. Agrawal. 2002. “Built-in Self-Test.” In Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits, 489–548.

During the BIST process, predefined test patterns are applied to the processor’s internal circuitry, and the responses are compared against expected values. Any discrepancies indicate the presence of faults. Intel’s Xeon processors, for instance, include BIST mechanisms to test the CPU cores, cache memory, and other critical components during system startup.


Error detection codes: Error detection codes are widely used to detect errors in data storage and transmission (Hamming 1950). These codes add redundant bits to the original data, allowing bit errors to be detected. For example, parity checks are a simple form of error detection code, illustrated in Figure 18.11. In a single-bit parity scheme, an extra bit is appended to each data word, making the number of 1s in the word even (even parity) or odd (odd parity).

Hamming, R. W. 1950. “Error Detecting and Error Correcting Codes.” Bell Syst. Tech. J. 29 (2): 147–60. https://doi.org/10.1002/j.1538-7305.1950.tb00463.x.

Figure 18.11: Parity bit example (Source: Computer Hope)

When reading the data, the parity is checked, and if it doesn’t match the expected value, an error is detected. More advanced error detection codes, such as cyclic redundancy checks (CRC), calculate a checksum based on the data and append it to the message. The checksum is recalculated at the receiving end and compared with the transmitted checksum to detect errors. Error-correcting code (ECC) memory modules, commonly used in servers and critical systems, employ advanced error detection and correction codes to detect and correct single-bit or multi-bit errors in memory.
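The single-bit parity scheme described above can be demonstrated in a few lines of Python (an illustrative sketch, not from the text). Note the well-known limitation it exposes: parity detects any single-bit error but is blind to an even number of flips, which is why stronger codes like CRC and ECC are used in practice.

```python
def add_even_parity(bits: list[int]) -> list[int]:
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(word: list[int]) -> bool:
    """True if the word (data bits + parity bit) has an even number of 1s."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1])  # three 1s -> parity bit is 1
print(check_even_parity(word))        # True: stored word is consistent

word[2] ^= 1                          # single-bit upset
print(check_even_parity(word))        # False: error detected

word[0] ^= 1                          # a second flip...
print(check_even_parity(word))        # True: double errors escape parity
```
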


Hardware redundancy and voting mechanisms: Hardware redundancy involves duplicating critical components and comparing their outputs to detect and mask faults (Sheaffer, Luebke, and Skadron 2007). Voting mechanisms, such as triple modular redundancy (TMR), employ multiple instances of a component and compare their outputs to identify and mask faulty behavior (Arifeen, Hassan, and Lee 2020).

Sheaffer, Jeremy W., David P. Luebke, and Kevin Skadron. 2007. “A Hardware Redundancy and Recovery Mechanism for Reliable Scientific Computation on Graphics Processors.” In Graphics Hardware, 2007:55–64.

Arifeen, Tooba, Abdus Sami Hassan, and Jeong-A Lee. 2020. “Approximate Triple Modular Redundancy: A Survey.” IEEE Access 8: 139851–67. https://doi.org/10.1109/access.2020.3012673.

Yeh, Y. C. 1996. “Triple-Triple Redundant 777 Primary Flight Computer.” In 1996 IEEE Aerospace Applications Conference. Proceedings, 1:293–307. IEEE. https://doi.org/10.1109/aero.1996.495891.

In a TMR system, three identical instances of a hardware component, such as a processor or a sensor, perform the same computation in parallel. The outputs of these instances are fed into a voting circuit, which compares the results and selects the majority value as the final output. If one of the instances produces an incorrect result due to a fault, the voting mechanism masks the error and maintains the correct output. TMR is commonly used in aerospace and aviation systems, where high reliability is critical. For instance, the Boeing 777 aircraft employs TMR in its primary flight computer system to ensure the availability and correctness of flight control functions (Yeh 1996).
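The TMR voting step can be sketched as follows (illustrative Python; real voting circuits are hardware, and the outputs here are arbitrary stand-ins). A single faulty replica is masked by the majority; if no majority exists, the fault cannot be masked and must be escalated:

```python
from collections import Counter

def tmr_vote(outputs: list) -> object:
    """Majority vote over three redundant module outputs.
    Masks a single faulty module; raises if no majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one module faulty")
    return value

# One faulty replica is outvoted; the system still emits the correct result.
print(tmr_vote([42, 42, 41]))  # -> 42
```

This also illustrates why DMR, discussed next, can only detect a mismatch: with two outputs there is no majority to vote on.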


Tesla’s self-driving computers employ a redundant hardware architecture to ensure the safety and reliability of critical functions, such as perception, decision-making, and vehicle control, as shown in Figure 18.12. One key component of this architecture is the use of dual modular redundancy (DMR) in the car’s onboard computer systems.

Figure 18.12: Tesla full self-driving computer with dual redundant SoCs (Source: Tesla)

In Tesla’s DMR implementation, two identical hardware units, often called “redundant computers” or “redundant control units,” perform the same computations in parallel (Bannon et al. 2019). Each unit independently processes sensor data, executes perception and decision-making algorithms, and generates control commands for the vehicle’s actuators (e.g., steering, acceleration, and braking).

Bannon, Pete, Ganesh Venkataramanan, Debjit Das Sarma, and Emil Talpes. 2019. “Computer and Redundancy Solution for the Full Self-Driving Computer.” In 2019 IEEE Hot Chips 31 Symposium (HCS), 1–22. IEEE. https://doi.org/10.1109/hotchips.2019.8875645.

The outputs of these two redundant units are continuously compared to detect any discrepancies or faults. If the outputs match, the system assumes that both units function correctly, and the control commands are sent to the vehicle’s actuators. However, if there is a mismatch between the outputs, the system identifies a potential fault in one of the units and takes appropriate action to ensure safe operation.


The system may employ additional mechanisms to determine which unit is faulty in a mismatch. This can involve using diagnostic algorithms, comparing the outputs with data from other sensors or subsystems, or analyzing the consistency of the outputs over time. Once the faulty unit is identified, the system can isolate it and continue operating using the output from the non-faulty unit.


DMR in Tesla’s self-driving computer provides an extra safety and fault tolerance layer. By having two independent units performing the same computations, the system can detect and mitigate faults that may occur in one of the units. This redundancy helps prevent single points of failure and ensures that critical functions remain operational despite hardware faults.


Furthermore, Tesla also incorporates additional redundancy mechanisms beyond DMR. For example, they utilize redundant power supplies, steering and braking systems, and diverse sensor suites (e.g., cameras, radar, and ultrasonic sensors) to provide multiple layers of fault tolerance. These redundancies collectively contribute to the overall safety and reliability of the self-driving system.


It’s important to note that while DMR provides fault detection and some level of fault tolerance, it cannot mask faults the way TMR can: with only two units, there is no majority to vote on. If both units experience simultaneous faults, or if the fault affects the comparison mechanism itself, the system may be unable to identify the faulty unit. Therefore, Tesla’s self-driving computers rely on a combination of DMR and other redundancy mechanisms to achieve a high level of fault tolerance.


The use of DMR in Tesla’s self-driving computer highlights the importance of hardware redundancy in safety-critical applications. By employing redundant computing units and comparing their outputs, the system can detect and mitigate faults, enhancing the overall safety and reliability of the self-driving functionality.


Google employs redundant hot spares to deal with silent data corruption (SDC) issues within its data centers, thereby enhancing the reliability of critical functions. As illustrated in Figure 18.13, during the normal training phase, multiple synchronous training workers function flawlessly. However, if a worker becomes defective and causes SDC, an SDC checker automatically identifies the issue. Upon detecting the SDC, the checker moves the training to a hot spare and sends the defective machine for repair. This redundancy safeguards the continuity and reliability of ML training, effectively minimizing downtime and preserving data integrity.

Figure 18.13: Google employs hot spare cores to transparently handle SDCs in the data center. (Source: Jeff Dean, MLSys 2024 Keynote (Google))

Watchdog timers: Watchdog timers are hardware components that monitor the execution of critical tasks or processes (Pont and Ong 2002). They are commonly used to detect and recover from software or hardware faults that cause a system to become unresponsive or stuck in an infinite loop. In an embedded system, a watchdog timer can be configured to monitor the execution of the main control loop, as illustrated in Figure 18.14. The software periodically resets the watchdog timer to indicate that it is functioning correctly. If the software fails to reset the timer within a specified time limit (the timeout period), the watchdog timer assumes that the system has encountered a fault and triggers a predefined recovery action, such as resetting the system or switching to a backup component. Watchdog timers are widely used in automotive electronics, industrial control systems, and other safety-critical applications to ensure the timely detection of and recovery from faults.
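A software analogue of this mechanism can be sketched in Python (illustrative only; real watchdogs are dedicated hardware, and the timeout and callback here are arbitrary). The main loop "pets" the watchdog each iteration; if it stops doing so, the recovery callback fires:

```python
import threading

class Watchdog:
    """Software watchdog: if pet() is not called again within `timeout`
    seconds, the recovery callback fires (e.g., reset or failover)."""

    def __init__(self, timeout: float, on_timeout) -> None:
        self.timeout = timeout
        self.on_timeout = on_timeout
        self._timer = None

    def pet(self) -> None:
        """Reset the countdown; called periodically by the main loop."""
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

    def stop(self) -> None:
        """Healthy shutdown: cancel the pending countdown."""
        if self._timer is not None:
            self._timer.cancel()

fired = []
wd = Watchdog(timeout=0.2, on_timeout=lambda: fired.append("recover"))
wd.pet()   # each main-loop iteration would call this
wd.stop()  # clean shutdown: recovery never triggers
```
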

Pont, Michael J., and Royan H. L. Ong. 2002. “Using Watchdog Timers to Improve the Reliability of Single-Processor Embedded Systems: Seven New Patterns and a Case Study.” In Proceedings of the First Nordic Conference on Pattern Languages of Programs, 159–200.

Figure 18.14: Watchdog timer example in detecting MCU faults (Source: Ablic)
Software-level fault detection

Software-level fault detection techniques rely on software algorithms and monitoring mechanisms to identify system faults. These techniques can be implemented at various levels of the software stack, including the operating system, middleware, or application level.


Runtime monitoring and anomaly detection: Runtime monitoring involves continuously observing the behavior of the system and its components during execution (Francalanza et al. 2017). It helps detect anomalies, errors, or unexpected behavior that may indicate the presence of faults. For example, consider an ML-based image classification system deployed in a self-driving car. Runtime monitoring can be implemented to track the classification model’s performance and behavior (Mahmoud et al. 2021).

Francalanza, Adrian, Luca Aceto, Antonis Achilleos, Duncan Paul Attard, Ian Cassar, Dario Della Monica, and Anna Ingólfsdóttir. 2017. “A Foundation for Runtime Monitoring.” In International Conference on Runtime Verification, 8–29. Springer.

Mahmoud, Abdulrahman, Siva Kumar Sastry Hari, Christopher W. Fletcher, Sarita V. Adve, Charbel Sakr, Naresh Shanbhag, Pavlo Molchanov, Michael B. Sullivan, Timothy Tsai, and Stephen W. Keckler. 2021. “Optimizing Selective Protection for CNN Resilience.” In 2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE), 127–38. IEEE. https://doi.org/10.1109/issre52982.2021.00025.

Chandola, Varun, Arindam Banerjee, and Vipin Kumar. 2009. “Anomaly Detection: A Survey.” ACM Comput. Surv. 41 (3): 1–58. https://doi.org/10.1145/1541880.1541882.

Anomaly detection algorithms, such as statistical outlier detection or machine learning-based approaches (e.g., One-Class SVM or autoencoders), can be applied to the model’s predictions or intermediate layer activations (Chandola, Banerjee, and Kumar 2009). Figure 18.15 shows examples of anomaly detection. Suppose the monitoring system detects a significant deviation from the expected patterns, such as a sudden drop in classification accuracy or out-of-distribution samples. In that case, it can raise an alert indicating a potential fault in the model or the input data pipeline. This early detection allows timely intervention and fault mitigation strategies to be applied.
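The simplest statistical flavor of this monitoring can be sketched as a z-score check on a tracked metric (illustrative Python; the metric, history, and 3-sigma threshold are assumptions for the example):

```python
import statistics

def is_anomalous(history: list[float], value: float, k: float = 3.0) -> bool:
    """Flag `value` if it lies more than k standard deviations from the
    running history of a monitored metric (e.g., max softmax confidence)."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(value - mu) > k * sigma

# Recent top-class confidences of a deployed classifier:
confidences = [0.94, 0.96, 0.95, 0.93, 0.97, 0.95, 0.94, 0.96]
print(is_anomalous(confidences, 0.95))  # False: in line with history
print(is_anomalous(confidences, 0.40))  # True: sudden drop -> raise an alert
```

Production monitors typically use sliding windows and more robust detectors, but the principle of comparing live behavior against an expected baseline is the same.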

Figure 18.15: Examples of anomaly detection. (a) Fully supervised anomaly detection, (b) normal-only anomaly detection, (c, d, e) semi-supervised anomaly detection, (f) unsupervised anomaly detection (Source: Google)

Consistency checks and data validation: Consistency checks and data validation techniques ensure data integrity and correctness at different processing stages in an ML system (Lindholm et al. 2019). These checks help detect data corruption, inconsistencies, or errors that may propagate and affect the system’s behavior. For example, in a distributed ML system where multiple nodes collaborate to train a model, consistency checks can be implemented to validate the integrity of the shared model parameters. Each node can compute a checksum or hash of the model parameters before and after each training iteration; any inconsistencies or data corruption can be detected by comparing the checksums across nodes. Additionally, range checks can be applied to the input data and model outputs to ensure they fall within expected bounds. For instance, if an autonomous vehicle’s perception system detects an object with unrealistic dimensions or velocities, it can indicate a fault in the sensor data or the perception algorithms (Wan et al. 2023).
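The cross-node checksum comparison can be sketched as follows (illustrative Python; the parameter values are arbitrary stand-ins for model weights). Hashing a canonical byte encoding of the parameters lets nodes compare a short digest instead of the full tensors:

```python
import hashlib
import struct

def params_digest(params: list[float]) -> str:
    """SHA-256 over a canonical byte encoding of model parameters."""
    h = hashlib.sha256()
    for p in params:
        h.update(struct.pack("<d", p))  # fixed little-endian float64 encoding
    return h.hexdigest()

node_a = [0.12, -0.30, 1.05]
node_b = [0.12, -0.30, 1.05]
print(params_digest(node_a) == params_digest(node_b))  # True: replicas agree

node_b[1] = -0.3000001  # silent corruption on one node
print(params_digest(node_a) == params_digest(node_b))  # False: flag the node
```
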

Lindholm, Andreas, Dave Zachariah, Petre Stoica, and Thomas B. Schon. 2019. “Data Consistency Approach to Model Validation.” IEEE Access 7: 59788–96. https://doi.org/10.1109/access.2019.2915109.

Wan, Zishen, Yiming Gan, Bo Yu, S. Liu, A. Raychowdhury, and Y. Zhu. 2023. “Vpp: The Vulnerability-Proportional Protection Paradigm Towards Reliable Autonomous Machines.” In Proceedings of the 5th International Workshop on Domain Specific System Architecture (DOSSA), 1–6.

Kawazoe Aguilera, Marcos, Wei Chen, and Sam Toueg. 1997. “Heartbeat: A Timeout-Free Failure Detector for Quiescent Reliable Communication.” In Distributed Algorithms: 11th International Workshop, WDAG’97, Saarbrücken, Germany, September 24–26, 1997, Proceedings 11, 126–40. Springer.

Heartbeat and timeout mechanisms: Heartbeat mechanisms and timeouts are commonly used to detect faults in distributed systems and ensure the liveness and responsiveness of components (Kawazoe Aguilera, Chen, and Toueg 1997). These are quite similar to the watchdog timers found in hardware. For example, in a distributed ML system, where multiple nodes collaborate to perform tasks such as data preprocessing, model training, or inference, heartbeat mechanisms can be implemented to monitor the health and availability of each node. Each node periodically sends a heartbeat message to a central coordinator or its peer nodes, indicating its status and availability. If a node fails to send a heartbeat within a specified timeout period, as shown in Figure 18.16, it is considered faulty, and appropriate actions can be taken, such as redistributing the workload or initiating a failover mechanism. Timeouts can also be used to detect and handle hanging or unresponsive components. For example, if a data loading process exceeds a predefined timeout threshold, it may indicate a fault in the data pipeline, and the system can take corrective measures.
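The coordinator side of a heartbeat scheme can be sketched as follows (illustrative Python; worker names and the timeout are arbitrary). The coordinator records the last heartbeat time per node and suspects any node whose silence exceeds the timeout:

```python
import time

class HeartbeatMonitor:
    """Coordinator-side view: a node is suspected faulty when no heartbeat
    has arrived within `timeout` seconds."""

    def __init__(self, timeout: float) -> None:
        self.timeout = timeout
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, node: str) -> None:
        """Record a heartbeat message from `node`."""
        self.last_seen[node] = time.monotonic()

    def faulty_nodes(self) -> list[str]:
        """Nodes whose last heartbeat is older than the timeout."""
        now = time.monotonic()
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout]

mon = HeartbeatMonitor(timeout=0.1)
mon.heartbeat("worker-1")
mon.heartbeat("worker-2")
time.sleep(0.15)
mon.heartbeat("worker-2")   # worker-2 keeps reporting; worker-1 goes silent
print(mon.faulty_nodes())   # worker-1 is suspected faulty
```

Using `time.monotonic()` (rather than wall-clock time) makes the timeout logic immune to system clock adjustments.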

Figure 18.16: Heartbeat messages in distributed systems (Source: GeeksforGeeks)

Software-implemented fault tolerance (SIFT) techniques: SIFT techniques introduce redundancy and fault detection mechanisms at the software level to enhance the reliability and fault tolerance of the system (Reis et al. 2005). Example: N-version programming is a SIFT technique where multiple functionally equivalent software component versions are developed independently by different teams. This can be applied to critical components such as the model inference engine in an ML system. Multiple versions of the inference engine can be executed in parallel, and their outputs can be compared for consistency. If most versions produce the same output, that output is taken as the correct result. If there is a discrepancy, it indicates a potential fault in one or more versions, and appropriate error-handling mechanisms can be triggered. Another example is using software-based error correction codes, such as Reed-Solomon codes (Plank 1997), to detect and correct errors in data storage or transmission, as shown in Figure fig-Reed-Solomon. These codes add redundancy to the data, enabling the system to detect and correct certain errors and enhancing its fault tolerance.
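The voting step of N-version programming can be sketched as follows; the version outputs are made-up labels for illustration:

```python
# Sketch of the voting step in N-version programming: run several
# independently developed versions of an inference engine and accept
# the majority output.
from collections import Counter


def majority_vote(outputs):
    """Return (result, agreed); agreed is False if no strict majority."""
    label, count = Counter(outputs).most_common(1)[0]
    return label, count > len(outputs) / 2


# One of three hypothetical engine versions disagrees; the majority wins.
result, agreed = majority_vote(["cat", "cat", "dog"])
assert result == "cat" and agreed

# No majority signals a potential fault: trigger error handling instead.
_, agreed = majority_vote(["cat", "dog", "bird"])
assert not agreed
```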

Reis, G. A., J. Chang, N. Vachharajani, R. Rangan, and D. I. August. 2005. “SWIFT: Software Implemented Fault Tolerance.” In International Symposium on Code Generation and Optimization, 243–54. IEEE. https://doi.org/10.1109/cgo.2005.34.
Plank, James S. 1997. “A Tutorial on Reed-Solomon Coding for Fault-Tolerance in RAID-Like Systems.” Software: Practice and Experience 27 (9): 995–1012.
Figure 18.17: n-bits representation of the Reed-Solomon codes (Source: GeeksforGeeks)

Exercise 18.1 (Anomaly Detection)  


In this Colab, play the role of an AI fault detective! You’ll build an autoencoder-based anomaly detector to pinpoint errors in heart health data. Learn how to identify malfunctions in ML systems, a vital skill for creating dependable AI. We’ll use Keras Tuner to fine-tune your autoencoder for top-notch fault detection. This experience directly links to the Robust AI chapter, demonstrating the importance of fault detection in real-world applications like healthcare and autonomous systems. Get ready to strengthen the reliability of your AI creations!


18.3.5 Summary


Table tbl-fault_types provides an extensive comparative analysis of transient, permanent, and intermittent faults. It outlines the primary characteristics or dimensions that distinguish these fault types. Here, we summarize the relevant dimensions we examined and explore the nuances that differentiate transient, permanent, and intermittent faults in greater detail.

Table 18.1: Comparison of transient, permanent, and intermittent faults.

| Dimension | Transient Faults | Permanent Faults | Intermittent Faults |
|---|---|---|---|
| Duration | Short-lived, temporary | Persistent, remains until repair or replacement | Sporadic, appears and disappears intermittently |
| Persistence | Disappears after the fault condition passes | Consistently present until addressed | Recurs irregularly, not always present |
| Causes | External factors (e.g., electromagnetic interference, cosmic rays) | Hardware defects, physical damage, wear-out | Unstable hardware conditions, loose connections, aging components |
| Manifestation | Bit flips, glitches, temporary data corruption | Stuck-at faults, broken components, complete device failures | Occasional bit flips, intermittent signal issues, sporadic malfunctions |
| Impact on ML Systems | Introduces temporary errors or noise in computations | Causes consistent errors or failures, affecting reliability | Leads to sporadic and unpredictable errors, challenging to diagnose and mitigate |
| Detection | Error detection codes, comparison with expected values | Built-in self-tests, error detection codes, consistency checks | Monitoring for anomalies, analyzing error patterns and correlations |
| Mitigation | Error correction codes, redundancy, checkpoint and restart | Hardware repair or replacement, component redundancy, failover mechanisms | Robust design, environmental control, runtime monitoring, fault-tolerant techniques |

18.4 ML Model Robustness


18.4.1 Adversarial Attacks


Definition and Characteristics


Adversarial attacks aim to trick models into making incorrect predictions by providing them with specially crafted, deceptive inputs called adversarial examples (Parrish et al. 2023). By adding slight, often imperceptible perturbations to input data, adversaries can “hack” a model’s pattern recognition and cause a wrong prediction, as shown in Figure fig-adversarial-attack-noise-example.

Parrish, Alicia, Hannah Rose Kirk, Jessica Quaye, Charvi Rastogi, Max Bartolo, Oana Inel, Juan Ciro, et al. 2023. “Adversarial Nibbler: A Data-Centric Challenge for Improving the Safety of Text-to-Image Models.” ArXiv Preprint abs/2305.14384. https://arxiv.org/abs/2305.14384.
Figure 18.18: A small adversarial noise added to the original image can make the neural network classify the image as a Guacamole instead of an Egyptian cat (Source: Sutanto)

One can generate prompts that lead to unsafe images in text-to-image models like DALLE (Ramesh et al. 2021) or Stable Diffusion (Rombach et al. 2022). For example, by altering the pixel values of an image, attackers can deceive a facial recognition system into identifying a face as a different person.

Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. “Zero-Shot Text-to-Image Generation.” In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18–24 July 2021, Virtual Event, edited by Marina Meila and Tong Zhang, 139:8821–31. Proceedings of Machine Learning Research. PMLR. http://proceedings.mlr.press/v139/ramesh21a.html.
Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. https://doi.org/10.1109/cvpr52688.2022.01042.

Adversarial attacks exploit the way ML models learn and make decisions during inference. These models work on the principle of recognizing patterns in data. An adversary crafts special inputs with perturbations to mislead the model’s pattern recognition, essentially ‘hacking’ the model’s perceptions.


Adversarial attacks fall under different scenarios:

  • Whitebox Attacks: The attacker fully knows the target model’s internal workings, including the training data, parameters, and architecture (Ye and Hamidi 2021). This comprehensive access creates favorable conditions for attackers to exploit the model’s vulnerabilities. The attacker can use specific and subtle weaknesses to craft effective adversarial examples.

  • Blackbox Attacks: In contrast to white-box attacks, black-box attacks involve the attacker having little to no knowledge of the target model (Guo et al. 2019). To carry out the attack, the adversarial actor must carefully observe the model’s output behavior.

  • Greybox Attacks: These fall between blackbox and whitebox attacks. The attacker has only partial knowledge about the target model’s internal design (Xu et al. 2021). For example, the attacker could have knowledge about the training data but not the architecture or parameters. In the real world, most practical attacks fall into the black-box or grey-box categories.
Ye, Linfeng, and Shayan Mohajer Hamidi. 2021. “Thundernna: A White Box Adversarial Attack.” arXiv Preprint arXiv:2111.12305.
Guo, Chuan, Jacob Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Weinberger. 2019. “Simple Black-Box Adversarial Attacks.” In International Conference on Machine Learning, 2484–93. PMLR.
Xu, Ying, Xu Zhong, Antonio Jimeno Yepes, and Jey Han Lau. 2021. “Grey-Box Adversarial Attack and Defence for Sentiment Classification.” arXiv Preprint arXiv:2103.11576.

The landscape of machine learning models is complex and broad, especially given their relatively recent integration into commercial applications. This rapid adoption, while transformative, has brought to light numerous vulnerabilities within these models. Consequently, various adversarial attack methods have emerged, each strategically exploiting different aspects of different models. Below, we highlight a subset of these methods, showcasing the multifaceted nature of adversarial attacks on machine learning models:

  • Generative Adversarial Networks (GANs) are deep learning models that consist of two networks competing against each other: a generator and a discriminator (Goodfellow et al. 2020). The generator tries to synthesize realistic data while the discriminator evaluates whether they are real or fake. GANs can be used to craft adversarial examples. The generator network is trained to produce inputs that the target model misclassifies. These GAN-generated images can then attack a target classifier or detection model. The generator and the target model are engaged in a competitive process, with the generator continually improving its ability to create deceptive examples and the target model enhancing its resistance to such examples. GANs provide a powerful framework for crafting complex and diverse adversarial inputs, illustrating the adaptability of generative models in the adversarial landscape.

  • Transfer Learning Adversarial Attacks exploit the knowledge transferred from a pre-trained model to a target model, creating adversarial examples that can deceive both models. These attacks pose a growing concern, particularly when adversaries have knowledge of the feature extractor but lack access to the classification head (the part or layer responsible for making the final classifications). Referred to as "headless attacks," these transferable adversarial strategies leverage the expressive capabilities of feature extractors to craft perturbations while being oblivious to the label space or training data. The existence of such attacks underscores the importance of developing robust defenses for transfer learning applications, especially since pre-trained models are commonly used (Abdelkader et al. 2020).
Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. “Generative Adversarial Networks.” Commun. ACM 63 (11): 139–44. https://doi.org/10.1145/3422622.
Abdelkader, Ahmed, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, and Chen Zhu. 2020. “Headless Horseman: Adversarial Attacks on Transfer Learning Models.” In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3087–91. IEEE. https://doi.org/10.1109/icassp40776.2020.9053181.

Mechanisms of Adversarial Attacks

Figure 18.19: Gradient-Based Attacks (Source: Ivezic)

Gradient-based Attacks


One prominent category of adversarial attacks is gradient-based attacks. These attacks leverage the gradients of the ML model’s loss function to craft adversarial examples. The Fast Gradient Sign Method (FGSM) is a well-known technique in this category. FGSM perturbs the input data by adding small noise in the gradient direction, aiming to maximize the model’s prediction error. FGSM can quickly generate adversarial examples, as shown in Figure fig-gradient-attack, by taking a single step in the gradient direction.


Another variant, the Projected Gradient Descent (PGD) attack, extends FGSM by iteratively applying the gradient update step, allowing for more refined and powerful adversarial examples. The Jacobian-based Saliency Map Attack (JSMA) is another gradient-based approach that identifies the most influential input features and perturbs them to create adversarial examples.
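Assuming a toy logistic classifier (the weights and input below are made up for illustration; a real attack would use the target network's gradients), FGSM and PGD can be sketched in NumPy as:

```python
# Sketch of FGSM and PGD against a toy logistic classifier. The
# weights (w, b) and input x are illustrative, not a real model.
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def loss_grad_wrt_x(x, y, w, b):
    # Gradient of the binary cross-entropy loss with respect to x.
    return (sigmoid(w @ x + b) - y) * w


def fgsm(x, y, w, b, eps):
    # Single step of size eps in the direction of the gradient's sign.
    return x + eps * np.sign(loss_grad_wrt_x(x, y, w, b))


def pgd(x, y, w, b, eps, alpha, steps):
    # Iterated FGSM with projection back into the eps-ball around x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_wrt_x(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv


w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1        # correctly classified: w @ x + b > 0
x_adv = pgd(x, y, w, b, eps=1.0, alpha=0.5, steps=4)
assert w @ x + b > 0 and w @ x_adv + b < 0   # prediction flips
```

The clip step is what makes PGD "projected": the adversarial point never strays more than eps from the original input, keeping the perturbation small.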


Optimization-based Attacks


These attacks formulate the generation of adversarial examples as an optimization problem. The Carlini and Wagner (C&W) attack is a prominent example in this category. It aims to find the smallest perturbation that can cause misclassification while maintaining the perceptual similarity to the original input. The C&W attack employs an iterative optimization process to minimize the perturbation while maximizing the model’s prediction error.


Another optimization-based approach is the Elastic Net Attack to DNNs (EAD), which incorporates elastic net regularization to generate adversarial examples with sparse perturbations.


Transfer-based Attacks


Transfer-based attacks exploit the transferability property of adversarial examples. Transferability refers to the phenomenon where adversarial examples crafted for one ML model can often fool other models, even if they have different architectures or were trained on different datasets. This enables attackers to generate adversarial examples using a surrogate model and then transfer them to the target model without requiring direct access to its parameters or gradients. Transfer-based attacks highlight the generalization of adversarial vulnerabilities across different models and the potential for black-box attacks.


Physical-world Attacks


Physical-world attacks bring adversarial examples into the realm of real-world scenarios. These attacks involve creating physical objects or manipulations that can deceive ML models when captured by sensors or cameras. Adversarial patches, for example, are small, carefully designed patches that can be placed on objects to fool object detection or classification models. When attached to real-world objects, these patches can cause models to misclassify or fail to detect the objects accurately. Adversarial objects, such as 3D-printed sculptures or modified road signs, can also be crafted to deceive ML systems in physical environments.


Summary


Table tbl-attack_types provides a concise overview of the different categories of adversarial attacks, including gradient-based attacks (FGSM, PGD, JSMA), optimization-based attacks (C&W, EAD), transfer-based attacks, and physical-world attacks (adversarial patches and objects). Each attack is briefly described, highlighting its key characteristics and mechanisms.

Table 18.2: Different attack types on ML models.

| Attack Category | Attack Name | Description |
|---|---|---|
| Gradient-based | Fast Gradient Sign Method (FGSM) | Perturbs input data by adding small noise in the gradient direction to maximize prediction error. |
| Gradient-based | Projected Gradient Descent (PGD) | Extends FGSM by iteratively applying the gradient update step for more refined adversarial examples. |
| Gradient-based | Jacobian-based Saliency Map Attack (JSMA) | Identifies influential input features and perturbs them to create adversarial examples. |
| Optimization-based | Carlini and Wagner (C&W) Attack | Finds the smallest perturbation that causes misclassification while maintaining perceptual similarity. |
| Optimization-based | Elastic Net Attack to DNNs (EAD) | Incorporates elastic net regularization to generate adversarial examples with sparse perturbations. |
| Transfer-based | Transferability-based Attacks | Exploits the transferability of adversarial examples across different models, enabling black-box attacks. |
| Physical-world | Adversarial Patches | Small, carefully designed patches placed on objects to fool object detection or classification models. |
| Physical-world | Adversarial Objects | Physical objects (e.g., 3D-printed sculptures, modified road signs) crafted to deceive ML systems in real-world scenarios. |

The mechanisms of adversarial attacks reveal the intricate interplay between the ML model’s decision boundaries, the input data, and the attacker’s objectives. By carefully manipulating the input data, attackers can exploit the model’s sensitivities and blind spots, leading to incorrect predictions. The success of adversarial attacks highlights the need for a deeper understanding of ML models’ robustness and generalization properties.


Defending against adversarial attacks requires a multifaceted approach. Adversarial training is one common defense strategy in which models are trained on adversarial examples to improve robustness. Exposing the model to adversarial examples during training teaches it to classify them correctly and become more resilient to attacks. Defensive distillation, input preprocessing, and ensemble methods are other techniques that can help mitigate the impact of adversarial attacks.
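The adversarial training idea can be sketched end to end on a toy logistic model: each update step also trains on FGSM-perturbed copies of the data. Everything below (data, model, hyperparameters) is illustrative, not a recipe for a production defense:

```python
# End-to-end sketch of adversarial training on a toy logistic model:
# every update step trains on FGSM-perturbed copies of the data as
# well as the clean data.
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def grad_x(X, y, w):
    # Loss gradient w.r.t. each input row (used to craft FGSM examples).
    return (sigmoid(X @ w) - y)[:, None] * w


def grad_w(X, y, w):
    # Mean loss gradient w.r.t. the weights (used for the model update).
    return ((sigmoid(X @ w) - y)[:, None] * X).mean(axis=0)


# Two synthetic, linearly separable clusters.
X = np.vstack([rng.normal(size=(200, 2)) + 1.5,
               rng.normal(size=(200, 2)) - 1.5])
y = np.array([1] * 200 + [0] * 200)

w, eps, lr = np.zeros(2), 0.3, 0.5
for _ in range(100):
    X_adv = X + eps * np.sign(grad_x(X, y, w))   # FGSM on current model
    X_all, y_all = np.vstack([X, X_adv]), np.concatenate([y, y])
    w -= lr * grad_w(X_all, y_all, w)

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
assert acc > 0.9  # model stays accurate on clean data
```

The doubled training set is exactly the extra computational cost mentioned later: every epoch pays for crafting and training on the adversarial copies.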


As adversarial machine learning evolves, researchers explore new attack mechanisms and develop more sophisticated defenses. The arms race between attackers and defenders drives the need for constant innovation and vigilance in securing ML systems against adversarial threats. Understanding the mechanisms of adversarial attacks is crucial for developing robust and reliable ML models that can withstand the ever-evolving landscape of adversarial examples.


Impact on ML Systems


Adversarial attacks on machine learning systems have emerged as a significant concern in recent years, highlighting the potential vulnerabilities and risks associated with the widespread adoption of ML technologies. These attacks involve carefully crafted perturbations to input data that can deceive or mislead ML models, leading to incorrect predictions or misclassifications, as shown in Figure fig-adversarial-googlenet. The impact of adversarial attacks on ML systems is far-reaching and can have serious consequences in various domains.


One striking example of the impact of adversarial attacks was demonstrated by researchers in 2017. They experimented with small black and white stickers on stop signs (Eykholt et al. 2017). To the human eye, these stickers did not obscure the sign or prevent its interpretability. However, when images of the sticker-modified stop signs were fed into standard traffic sign classification ML models, a shocking result emerged. The models misclassified the stop signs as speed limit signs over 85% of the time.

Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. 2017. “Robust Physical-World Attacks on Deep Learning Models.” ArXiv Preprint abs/1707.08945. https://arxiv.org/abs/1707.08945.

This demonstration shed light on the alarming potential of simple adversarial stickers to trick ML systems into misreading critical road signs. The implications of such attacks in the real world are significant, particularly in the context of autonomous vehicles. If deployed on actual roads, these adversarial stickers could cause self-driving cars to misinterpret stop signs as speed limits, leading to dangerous situations, as shown in Figure fig-graffiti. Researchers warned that this could result in rolling stops or unintended acceleration into intersections, endangering public safety.

Figure 18.20: Adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet (Source: Goodfellow)

Figure 18.21: Graffiti on a stop sign tricked a self-driving car into thinking it was a 45 mph speed limit sign (Source: Eykholt)

The case study of the adversarial stickers on stop signs provides a concrete illustration of how adversarial examples exploit how ML models recognize patterns. By subtly manipulating the input data in ways that are invisible to humans, attackers can induce incorrect predictions and create serious risks, especially in safety-critical applications like autonomous vehicles. The attack’s simplicity highlights the vulnerability of ML models to even minor changes in the input, emphasizing the need for robust defenses against such threats.


The impact of adversarial attacks extends beyond the degradation of model performance. These attacks raise significant security and safety concerns, particularly in domains where ML models are relied upon for critical decision-making. In healthcare applications, adversarial attacks on medical imaging models could lead to misdiagnosis or incorrect treatment recommendations, jeopardizing patient well-being (M.-J. Tsai, Lin, and Lee 2023). In financial systems, adversarial attacks could enable fraud or manipulation of trading algorithms, resulting in substantial economic losses.

Tsai, Min-Jen, Ping-Yi Lin, and Ming-En Lee. 2023. “Adversarial Attacks on Medical Image Classification.” Cancers 15 (17): 4228. https://doi.org/10.3390/cancers15174228.
Fursov, Ivan, Matvey Morozov, Nina Kaploukhaya, Elizaveta Kovtun, Rodrigo Rivera-Castro, Gleb Gusev, Dmitry Babaev, Ivan Kireev, Alexey Zaytsev, and Evgeny Burnaev. 2021. “Adversarial Attacks on Deep Models for Financial Transaction Records.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2868–78. ACM. https://doi.org/10.1145/3447548.3467145.

Moreover, adversarial vulnerabilities undermine the trustworthiness and interpretability of ML models. If carefully crafted perturbations can easily fool models, confidence in their predictions and decisions erodes. Adversarial examples expose the models’ reliance on superficial patterns and the inability to capture the true underlying concepts, challenging the reliability of ML systems (Fursov et al. 2021).


Defending against adversarial attacks often requires additional computational resources and can impact the overall system performance. Techniques like adversarial training, where models are trained on adversarial examples to improve robustness, can significantly increase training time and computational requirements (Bai et al. 2021). Runtime detection and mitigation mechanisms, such as input preprocessing (Addepalli et al. 2020) or prediction consistency checks, introduce latency and affect the real-time performance of ML systems.
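One illustrative input-preprocessing defense is feature squeezing: quantize the input to a lower bit depth and flag it if the prediction changes. The "model" below is a made-up stand-in (a mean-pixel threshold), used only to show the mechanism and the latency it adds (every input is classified twice):

```python
# Sketch of feature squeezing as a runtime consistency check. The
# predict() function is a toy stand-in, not a real classifier.
def squeeze(pixels, bits=3):
    """Quantize 8-bit pixel values down to `bits` of depth."""
    levels = 2 ** bits - 1
    return [round(p / 255 * levels) / levels * 255 for p in pixels]


def predict(pixels):
    # Stand-in model: class 1 if the mean pixel value exceeds 128.
    return int(sum(pixels) / len(pixels) > 128)


def is_suspicious(pixels):
    # A clean input's prediction should survive squeezing; small
    # perturbations near the decision boundary often do not.
    return predict(pixels) != predict(squeeze(pixels))


assert not is_suspicious([200.0, 210.0])  # clearly class 1 either way
assert is_suspicious([128.0])             # borderline input gets flagged
```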

Bai, Tao, Jinqi Luo, Jun Zhao, Bihan Wen, and Qian Wang. 2021. “Recent Advances in Adversarial Training for Adversarial Robustness.” arXiv Preprint arXiv:2102.01356.
Addepalli, Sravanti, B. S. Vivek, Arya Baburaj, Gaurang Sriramanan, and R. Venkatesh Babu. 2020. “Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1020–29. IEEE. https://doi.org/10.1109/cvpr42600.2020.00110.

The presence of adversarial vulnerabilities also complicates the deployment and maintenance of ML systems. System designers and operators must consider the potential for adversarial attacks and incorporate appropriate defenses and monitoring mechanisms. Regular updates and retraining of models become necessary to adapt to new adversarial techniques and maintain system security and performance over time.


The impact of adversarial attacks on ML systems is significant and multifaceted. These attacks expose ML models’ vulnerabilities, from degrading model performance and raising security and safety concerns to challenging model trustworthiness and interpretability. Developers and researchers must prioritize the development of robust defenses and countermeasures to mitigate the risks posed by adversarial attacks. By addressing these challenges, we can build more secure, reliable, and trustworthy ML systems that can withstand the ever-evolving landscape of adversarial threats.


Exercise 18.2 (Adversarial Attacks)  


Get ready to become an AI adversary! In this Colab, you’ll become a white-box hacker, learning to craft attacks that deceive image classification models. We’ll focus on the Fast Gradient Sign Method (FGSM), where you’ll weaponize a model’s gradients against it! You’ll deliberately distort images with tiny perturbations, observing how they increasingly fool the AI more intensely. This hands-on exercise highlights the importance of building secure AI – a critical skill as AI integrates into cars and healthcare. The Colab directly ties into the Robust AI chapter of your book, moving adversarial attacks from theory into your own hands-on experience.


Think you can outsmart an AI? In this Colab, learn how to trick image classification models with adversarial attacks. We’ll use methods like FGSM to change images and subtly fool the AI. Discover how to design deceptive image patches and witness the surprising vulnerability of these powerful models. This is crucial knowledge for building truly robust AI systems!


18.4.2 Data Poisoning


Definition and Characteristics


Data poisoning is an attack where the training data is tampered with, leading to a compromised model (Biggio, Nelson, and Laskov 2012), as shown in Figure fig-poisoning-example. Attackers can modify existing training examples, insert new malicious data points, or influence the data collection process. The poisoned data is labeled in such a way as to skew the model’s learned behavior. This can be particularly damaging in applications where ML models make automated decisions based on learned patterns. Beyond training sets, poisoning test and validation data can allow adversaries to artificially boost reported model performance.

Biggio, Battista, Blaine Nelson, and Pavel Laskov. 2012. “Poisoning Attacks Against Support Vector Machines.” In Proceedings of the 29th International Conference on Machine Learning, ICML 2012, Edinburgh, Scotland, UK, June 26 - July 1, 2012. icml.cc / Omnipress. http://icml.cc/2012/papers/880.pdf.
Figure 18.22: NightShade’s poisoning effects on Stable Diffusion (Source: TOMÉ)

The process usually involves the following steps:

  • Injection: The attacker adds incorrect or misleading examples into the training set. These examples are often designed to look normal to cursory inspection but have been carefully crafted to disrupt the learning process.

  • Training: The ML model trains on this manipulated dataset and develops skewed understandings of the data patterns.

  • Deployment: Once the model is deployed, the corrupted training leads to flawed decision-making or predictable vulnerabilities the attacker can exploit.
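The injection, training, and deployment steps above can be demonstrated on synthetic data: relabeling part of one class in the training set (a simple label-flipping attack) degrades a 1-nearest-neighbor model. This is a toy illustration, not a study of a real system:

```python
# Toy demonstration of the injection -> training -> deployment chain:
# flipping labels of one class degrades a 1-nearest-neighbor model.
# All data is synthetic and one-dimensional for clarity.
import random

random.seed(0)


def make_points(n, center):
    return [center + random.gauss(0, 0.4) for _ in range(n)]


train = [(x, 0) for x in make_points(100, -2.0)] + \
        [(x, 1) for x in make_points(100, 2.0)]
test = [(x, 0) for x in make_points(50, -2.0)] + \
       [(x, 1) for x in make_points(50, 2.0)]


def predict_1nn(train_set, x):
    return min(train_set, key=lambda p: abs(p[0] - x))[1]


def accuracy(train_set, test_set):
    return sum(predict_1nn(train_set, x) == y
               for x, y in test_set) / len(test_set)


clean_acc = accuracy(train, test)

# Injection: relabel 60% of class-0 training examples as class 1.
poisoned = [(x, 1) if y == 0 and random.random() < 0.6 else (x, y)
            for x, y in train]

# Deployment: the poisoned model now misclassifies much of class 0.
poisoned_acc = accuracy(poisoned, test)
assert clean_acc >= 0.95 and poisoned_acc <= clean_acc - 0.1
```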

The impact of data poisoning extends beyond classification errors or accuracy drops. In critical applications like healthcare, such alterations can lead to significant trust and safety issues (Marulli, Marrone, and Verde 2022). Later, we will discuss a few case studies of these issues.

Marulli, Fiammetta, Stefano Marrone, and Laura Verde. 2022. “Sensitivity of Machine Learning Approaches to Fake and Untrusted Data in Healthcare Domain.” Journal of Sensor and Actuator Networks 11 (2): 21. https://doi.org/10.3390/jsan11020021.
Oprea, Alina, Anoop Singhal, and Apostol Vassilev. 2022. “Poisoning Attacks Against Machine Learning: Can Machine Learning Be Trustworthy?” Computer 55 (11): 94–99. https://doi.org/10.1109/mc.2022.3190787.

Data poisoning attacks fall into several main categories (Oprea, Singhal, and Vassilev 2022):

  • Availability Attacks: These attacks aim to compromise the overall functionality of a model. They cause it to misclassify most testing samples, rendering the model unusable for practical applications. An example is label flipping, where labels of a specific, targeted class are replaced with labels from a different one.

  • Targeted Attacks: In contrast to availability attacks, targeted attacks aim to compromise a small number of the testing samples. So, the effect is localized to a limited number of classes, while the model maintains the same original level of accuracy for the majority of the classes. The targeted nature of the attack requires the attacker to possess knowledge of the model’s classes, making detecting these attacks more challenging.

  • Backdoor Attacks: In these attacks, an adversary targets specific patterns in the data. The attacker introduces a backdoor (a malicious, hidden trigger or pattern) into the training data, such as manipulating certain features in structured data or manipulating a pattern of pixels at a fixed position. This causes the model to associate the malicious pattern with specific labels. As a result, when the model encounters test samples that contain a malicious pattern, it makes false predictions.

  • Subpopulation Attacks: Attackers selectively choose to compromise a subset of the testing samples while maintaining accuracy on the rest of the samples. You can think of these attacks as a combination of availability and targeted attacks: performing availability attacks (performance degradation) within the scope of a targeted subset. Although subpopulation attacks may seem very similar to targeted attacks, the two have clear differences:

      ◦ Scope: While targeted attacks target a selected set of samples, subpopulation attacks target a general subpopulation with similar feature representations. For example, in a targeted attack, an actor inserts manipulated images of a ‘speed bump’ warning sign (with carefully crafted perturbations or patterns), which causes an autonomous car to fail to recognize such a sign and slow down. On the other hand, manipulating all samples of people with a British accent so that a speech recognition model would misclassify a British person’s speech is an example of a subpopulation attack.

      ◦ Knowledge: While targeted attacks require a high degree of familiarity with the data, subpopulation attacks require less intimate knowledge to be effective.

The characteristics of data poisoning include:


Subtle and hard-to-detect manipulations of training data: Data poisoning often involves subtle manipulations of the training data that are carefully crafted to be difficult to detect through casual inspection. Attackers employ sophisticated techniques to ensure that the poisoned samples blend seamlessly with the legitimate data, making them difficult to identify without thorough analysis. These manipulations can target specific features or attributes of the data, such as altering numerical values, modifying categorical labels, or introducing carefully designed patterns. The goal is to influence the model’s learning process while evading detection, allowing the poisoned data to subtly corrupt the model’s behavior.


Can be performed by insiders or external attackers: Data poisoning attacks can be carried out by various actors, including malicious insiders with access to the training data and external attackers who find ways to influence the data collection or preprocessing pipeline. Insiders pose a significant threat because they often have privileged access and knowledge of the system, enabling them to introduce poisoned data without raising suspicions. On the other hand, external attackers may exploit vulnerabilities in data sourcing, crowdsourcing platforms, or data aggregation processes to inject poisoned samples into the training dataset. This highlights the importance of implementing strong access controls, data governance policies, and monitoring mechanisms to mitigate the risk of insider threats and external attacks.


Exploits vulnerabilities in data collection and preprocessing: Data poisoning attacks often exploit vulnerabilities in the machine learning pipeline’s data collection and preprocessing stages. Attackers carefully design poisoned samples to evade common data validation techniques, ensuring that the manipulated data still falls within acceptable ranges, follows expected distributions, or maintains consistency with other features. This allows the poisoned data to pass through data preprocessing steps without detection. Furthermore, poisoning attacks can take advantage of weaknesses in data preprocessing, such as inadequate data cleaning, insufficient outlier detection, or lack of integrity checks. Attackers may also exploit the lack of robust data provenance and lineage tracking mechanisms to introduce poisoned data without leaving a traceable trail. Addressing these vulnerabilities requires rigorous data validation, anomaly detection, and data provenance tracking techniques to ensure the integrity and trustworthiness of the training data.
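A validation pass of the kind described above can be sketched with range, label, and outlier checks. The thresholds, field names, and allowed labels below are illustrative; real pipelines use richer schema validation and provenance tracking:

```python
# Sketch of simple training-data validation: range checks, label
# consistency, and a crude z-score outlier screen.
import statistics

ALLOWED_LABELS = {"cat", "dog"}


def validate(records, lo=0.0, hi=255.0, z_max=4.0):
    """Return indices of records that fail any check."""
    values = [r["pixel_mean"] for r in records]
    mu = statistics.mean(values)
    sd = statistics.pstdev(values) or 1.0
    bad = []
    for i, r in enumerate(records):
        out_of_range = not (lo <= r["pixel_mean"] <= hi)
        bad_label = r["label"] not in ALLOWED_LABELS
        outlier = abs(r["pixel_mean"] - mu) / sd > z_max
        if out_of_range or bad_label or outlier:
            bad.append(i)
    return bad


records = [
    {"pixel_mean": 120.0, "label": "cat"},
    {"pixel_mean": 130.0, "label": "dog"},
    {"pixel_mean": 300.0, "label": "cat"},   # out of range
    {"pixel_mean": 125.0, "label": "catt"},  # inconsistent label
]
assert validate(records) == [2, 3]  # the last two records are flagged
```

Note the limitation the text warns about: a carefully crafted poisoned sample that stays in range, uses a valid label, and sits near the mean passes all three checks, which is why provenance tracking is needed as a complement.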


Disrupts the learning process and skews model behavior: Data poisoning attacks are designed to disrupt the learning process of machine learning models and skew their behavior towards the attacker’s objectives. The poisoned data is typically manipulated with specific goals, such as skewing the model’s behavior towards certain classes, introducing backdoors, or degrading overall performance. These manipulations are not random but targeted to achieve the attacker’s desired outcomes. By introducing label inconsistencies, where the manipulated samples have labels that do not align with their true nature, poisoning attacks can confuse the model during training and lead to biased or incorrect predictions. The disruption caused by poisoned data can have far-reaching consequences, as the compromised model may make flawed decisions or exhibit unintended behavior when deployed in real-world applications.


Impacts model performance, fairness, and trustworthiness: Poisoned data in the training dataset can have severe implications for machine learning models’ performance, fairness, and trustworthiness. Poisoned data can degrade the accuracy and performance of the trained model, leading to increased misclassifications or errors in predictions. This can have significant consequences, especially in critical applications where the model’s outputs inform important decisions. Moreover, poisoning attacks can introduce biases and fairness issues, causing the model to make discriminatory or unfair decisions for certain subgroups or classes. This undermines machine learning systems’ ethical and social responsibilities and can perpetuate or amplify existing biases. Furthermore, poisoned data erodes the trustworthiness and reliability of the entire ML system. The model’s outputs become questionable and potentially harmful, leading to a loss of confidence in the system’s integrity. The impact of poisoned data can propagate throughout the entire ML pipeline, affecting downstream components and decisions that rely on the compromised model. Addressing these concerns requires robust data governance, regular model auditing, and ongoing monitoring to detect and mitigate the effects of data poisoning attacks.


Mechanisms of Data Poisoning


Data poisoning attacks can be carried out through various mechanisms, exploiting different ML pipeline vulnerabilities. These mechanisms allow attackers to manipulate the training data and introduce malicious samples that can compromise the model’s performance, fairness, or integrity. Understanding these mechanisms is crucial for developing effective defenses against data poisoning and ensuring the robustness of ML systems. Data poisoning mechanisms can be broadly categorized based on the attacker’s approach and the stage of the ML pipeline they target. Some common mechanisms include modifying training data labels, altering feature values, injecting carefully crafted malicious samples, exploiting data collection and preprocessing vulnerabilities, manipulating data at the source, poisoning data in online learning scenarios, and collaborating with insiders to manipulate data.


Each of these mechanisms presents unique challenges and requires different mitigation strategies. For example, detecting label manipulation may involve analyzing the distribution of labels and identifying anomalies (Zhou et al. 2018), while preventing feature manipulation may require secure data preprocessing and anomaly detection techniques (Carta et al. 2020). Defending against insider threats may involve strict access control policies and monitoring of data access patterns. Moreover, the effectiveness of data poisoning attacks often depends on the attacker’s knowledge of the ML system, including the model architecture, training algorithms, and data distribution. Attackers may use adversarial machine learning or data synthesis techniques to craft samples that are more likely to bypass detection and achieve their malicious objectives.

Zhou, Peng, Xintong Han, Vlad I. Morariu, and Larry S. Davis. 2018. “Learning Rich Features for Image Manipulation Detection.” In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1053–61. IEEE. https://doi.org/10.1109/cvpr.2018.00116.

Carta, Salvatore, Alessandro Sebastian Podda, Diego Reforgiato Recupero, and Roberto Saia. 2020. “A Local Feature Engineering Strategy to Improve Network Anomaly Detection.” Future Internet 12 (10): 177. https://doi.org/10.3390/fi12100177.

Figure 18.23: Garbage In – Garbage Out (Source: Information Matters)

Modifying training data labels: One of the most straightforward mechanisms of data poisoning is modifying the training data labels. In this approach, the attacker selectively changes the labels of a subset of the training samples to mislead the model’s learning process as shown in Figure fig-distribution-shift-example. For example, in a binary classification task, the attacker might flip the labels of some positive samples to negative, or vice versa. By introducing such label noise, the attacker aims to degrade the model’s performance or cause it to make incorrect predictions for specific target instances.
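
As a concrete illustration, label flipping takes only a few lines. The sketch below is a minimal, hypothetical example (the function name, seed, and flip fraction are illustrative, not from any attack toolkit): it flips a fixed fraction of binary labels chosen at random.

```python
import random

def flip_labels(labels, flip_fraction, seed=0):
    """Illustrative label-flipping attack: invert a fraction of binary labels.

    Returns the poisoned label list and the indices that were flipped.
    """
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(len(poisoned) * flip_fraction)
    flipped_idx = rng.sample(range(len(poisoned)), n_flip)
    for i in flipped_idx:
        poisoned[i] = 1 - poisoned[i]  # flip 0 <-> 1
    return poisoned, flipped_idx

clean = [0, 1] * 50                    # 100 binary labels
poisoned, idx = flip_labels(clean, 0.10)
print(len(idx))                        # 10: ten percent of the labels flipped
```

A real attacker would flip labels selectively (e.g., only near the decision boundary) rather than at random, which makes the manipulation both more effective and harder to spot.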


Altering feature values in training data: Another mechanism of data poisoning involves altering the feature values of the training samples without modifying the labels. The attacker carefully crafts the feature values to introduce specific biases or vulnerabilities into the model. For instance, in an image classification task, the attacker might add imperceptible perturbations to a subset of images, causing the model to learn a particular pattern or association. This type of poisoning can create backdoors or trojans in the trained model, which specific input patterns can trigger.


Injecting carefully crafted malicious samples: In this mechanism, the attacker creates malicious samples designed to poison the model. These samples are crafted to have a specific impact on the model’s behavior while blending in with the legitimate training data. The attacker might use techniques such as adversarial perturbations or data synthesis to generate poisoned samples that are difficult to detect. The attacker aims to manipulate the model’s decision boundaries by injecting these malicious samples into the training data or introducing targeted misclassifications.


Exploiting data collection and preprocessing vulnerabilities: Data poisoning attacks can also exploit vulnerabilities in the data collection and preprocessing pipeline. If the data collection process is not secure or there are weaknesses in the data preprocessing steps, an attacker can manipulate the data before it reaches the training phase. For example, if data is collected from untrusted sources or there are flaws in data cleaning or aggregation, an attacker can introduce poisoned samples or manipulate the data to their advantage.


Manipulating data at the source (e.g., sensor data): In some cases, attackers can manipulate the data at its source, such as sensor data or input devices. By tampering with the sensors or manipulating the environment in which data is collected, attackers can introduce poisoned samples or bias the data distribution. For instance, in a self-driving car scenario, an attacker might manipulate the sensors or the environment to feed misleading information into the training data, compromising the model’s ability to make safe and reliable decisions.

Figure 18.24: Data Poisoning Attack (Source: Sikandar)

Poisoning data in online learning scenarios: Data poisoning attacks can also target ML systems that employ online learning, where the model is continuously updated with new data in real time. In such scenarios, an attacker can gradually inject poisoned samples over time, slowly manipulating the model’s behavior. Online learning systems are particularly vulnerable to data poisoning because they adapt to new data without extensive validation, making it easier for attackers to introduce malicious samples, as shown in Figure fig-poisoning-attack-example.
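
The danger of online updates without validation can be shown with a toy model. In the sketch below (the class and variable names are illustrative assumptions, not a real system), a model tracks the running mean of a sensor reading; an attacker who drips in readings that are only slightly off shifts the estimate without ever submitting an obvious outlier.

```python
class OnlineMeanModel:
    """Toy online learner: tracks the running mean of a sensor reading and
    updates on every new sample without any validation step."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # incremental mean update

model = OnlineMeanModel()
for _ in range(1000):
    model.update(10.0)            # legitimate readings cluster at 10.0
before = model.mean

for _ in range(200):
    model.update(10.5)            # attacker drips in slightly-off samples
after = model.mean
print(before, after)              # the estimate drifts upward, yet no single
                                  # poisoned sample looks like an outlier
```

The same gradual-drift strategy applies to online gradient updates in real systems, which is why online learners typically need input validation and drift monitoring.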


Collaborating with insiders to manipulate data: Sometimes, data poisoning attacks can involve collaboration with insiders with access to the training data. Malicious insiders, such as employees or data providers, can manipulate the data before it is used to train the model. Insider threats are particularly challenging to detect and prevent, as the attackers have legitimate access to the data and can carefully craft the poisoning strategy to evade detection.


These are the key mechanisms of data poisoning in ML systems. Attackers often employ these mechanisms to make their attacks more effective and harder to detect. The risk of data poisoning attacks grows as ML systems become increasingly complex and rely on larger datasets from diverse sources. Defending against data poisoning requires a multifaceted approach. ML practitioners and system designers must be aware of the various mechanisms of data poisoning and adopt a comprehensive approach to data security and model resilience. This includes secure data collection, robust data validation, and continuous model performance monitoring. Implementing secure data collection and preprocessing practices is crucial to prevent data poisoning at the source. Data validation and anomaly detection techniques can also help identify and mitigate potential poisoning attempts. Monitoring model performance for signs of data poisoning is also essential to detect and respond to attacks promptly.


Impact on ML Systems


Data poisoning attacks can severely affect ML systems, compromising their performance, reliability, and trustworthiness. The impact of data poisoning can manifest in various ways, depending on the attacker’s objectives and the specific mechanism used. Let’s explore each of the potential impacts in detail.


Degradation of model performance: One of the primary impacts of data poisoning is the degradation of the model’s overall performance. By manipulating the training data, attackers can introduce noise, biases, or inconsistencies that hinder the model’s ability to learn accurate patterns and make reliable predictions. This can reduce accuracy, precision, recall, or other performance metrics. The degradation of model performance can have significant consequences, especially in critical applications such as healthcare, finance, or security, where the reliability of predictions is crucial.


Misclassification of specific targets: Data poisoning attacks can also be designed to cause the model to misclassify specific target instances. Attackers may introduce carefully crafted poisoned samples similar to the target instances, leading the model to learn incorrect associations. This can result in the model consistently misclassifying the targeted instances, even if it performs well on other inputs. Such targeted misclassification can have severe consequences, such as causing a malware detection system to overlook specific malicious files or leading to the wrong diagnosis in a medical imaging application.


Backdoors and trojans in trained models: Data poisoning can introduce backdoors or trojans into the trained model. Backdoors are hidden functionalities that allow attackers to trigger specific behaviors or bypass normal authentication mechanisms. Trojans, on the other hand, are malicious components embedded within the model that can be activated by specific input patterns. By poisoning the training data, attackers can create models that appear to perform normally but contain hidden vulnerabilities that can be exploited later. Backdoors and trojans can compromise the integrity and security of the ML system, allowing attackers to gain unauthorized access, manipulate predictions, or exfiltrate sensitive information.
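
A classic backdoor poisoning recipe stamps a small trigger pattern onto a fraction of training images and relabels them to the attacker's target class. The sketch below is a simplified, hypothetical illustration (the function name, patch size, and "target_class" label are all assumptions for the example), not a faithful reproduction of any published attack.

```python
def stamp_trigger(image, trigger_value=255, size=2):
    """Stamp a small bright patch (the backdoor 'trigger') into the
    bottom-right corner of an image, given as a list of pixel rows."""
    poisoned = [row[:] for row in image]       # copy so the clean image survives
    for r in range(len(poisoned) - size, len(poisoned)):
        for c in range(len(poisoned[r]) - size, len(poisoned[r])):
            poisoned[r][c] = trigger_value
    return poisoned

# The attacker pairs each stamped image with a chosen target label, so the
# model learns the association "trigger present -> target class" while
# behaving normally on clean inputs.
clean_image = [[0] * 4 for _ in range(4)]
poisoned_sample = (stamp_trigger(clean_image), "target_class")
print(poisoned_sample[0][3][3])   # 255: trigger pixel set
```

At inference time, the attacker adds the same patch to any input to activate the backdoor, which is why trigger patterns are kept small and visually inconspicuous.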


Biased or unfair model outcomes: Data poisoning attacks can introduce biases or unfairness into the model’s predictions. By manipulating the training data distribution or injecting samples with specific biases, attackers can cause the model to learn and perpetuate discriminatory patterns. This can lead to unfair treatment of certain groups or individuals based on sensitive attributes such as race, gender, or age. Biased models can have severe societal implications, reinforcing existing inequalities and discriminatory practices. Ensuring fairness and mitigating biases is crucial for building trustworthy and ethical ML systems.


Increased false positives or false negatives: Data poisoning can also impact the model’s ability to correctly identify positive or negative instances, leading to increased false positives or false negatives. False positives occur when the model incorrectly identifies a negative instance as positive, while false negatives happen when a positive instance is misclassified as negative. The consequences of increased false positives or false negatives can be significant depending on the application. For example, in a fraud detection system, high false positives can lead to unnecessary investigations and customer frustration, while high false negatives can allow fraudulent activities to go undetected.


Compromised system reliability and trustworthiness: Data poisoning attacks can undermine ML systems’ overall reliability and trustworthiness. When models are trained on poisoned data, their predictions become unreliable and untrustworthy. This can erode user confidence in the system and lead to a loss of trust in the decisions made by the model. In critical applications where ML systems are relied upon for decision-making, such as autonomous vehicles or medical diagnosis, compromised reliability can have severe consequences, putting lives and property at risk.


Addressing the impact of data poisoning requires a proactive approach to data security, model testing, and monitoring. Organizations must implement robust measures to ensure the integrity and quality of training data, employ techniques to detect and mitigate poisoning attempts, and continuously monitor the performance and behavior of deployed models. Collaboration between ML practitioners, security experts, and domain specialists is essential to develop comprehensive strategies for preventing and responding to data poisoning attacks.

Case Study 1

In 2017, researchers demonstrated a data poisoning attack against a popular toxicity classification model called Perspective (Hosseini et al. 2017). This ML model detects toxic comments online.

Hosseini, Hossein, Sreeram Kannan, Baosen Zhang, and Radha Poovendran. 2017. “Deceiving Google’s Perspective API Built for Detecting Toxic Comments.” ArXiv Preprint abs/1702.08138. https://arxiv.org/abs/1702.08138.

The researchers added synthetically generated toxic comments with slight misspellings and grammatical errors to the model’s training data. This slowly corrupted the model, causing it to misclassify increasing numbers of severely toxic inputs as non-toxic over time.


After retraining on the poisoned data, the model’s false negative rate increased from 1.4% to 27%, allowing extremely toxic comments to bypass detection. The researchers warned that this stealthy data poisoning could enable the spread of hate speech, harassment, and abuse if deployed against real moderation systems.


This case highlights how data poisoning can degrade model accuracy and reliability. For social media platforms, a poisoning attack that impairs toxicity detection could lead to the proliferation of harmful content and distrust of ML moderation systems. The example demonstrates why securing training data integrity and monitoring for poisoning is critical across application domains.

Case Study 2

Figure 18.25: Samples of dirty-label poison data regarding mismatched text/image pairs (Source: Shan)

Interestingly enough, data poisoning attacks are not always malicious (Shan et al. 2023). Nightshade, a tool developed by a team led by Professor Ben Zhao at the University of Chicago, utilizes data poisoning to help artists protect their art against scraping and copyright violations by generative AI models. Artists can use the tool to make subtle modifications to their images before uploading them online, as shown in Figure fig-dirty-label-example.


While these changes are indiscernible to the human eye, they can significantly disrupt the performance of generative AI models when incorporated into the training data. Generative models can be manipulated to generate hallucinations and weird images. For example, with only 300 poisoned images, the University of Chicago researchers could trick the latest Stable Diffusion model into generating images of dogs that look like cats or images of cows when prompted for cars.


As the number of poisoned images on the internet increases, the performance of the models that use scraped data will deteriorate exponentially. First, the poisoned data is hard to detect and requires manual elimination. Second, the “poison” spreads quickly to other labels because generative models rely on connections between words and concepts as they generate images. So a poisoned image of a “car” could spread into generated images associated with words like “truck,” “train,” “bus,” etc.


On the other hand, this tool can be used maliciously and can affect legitimate applications of the generative models. This shows the very challenging and novel nature of machine learning attacks.


Figure fig-poisoning demonstrates the effects of different levels of data poisoning (50 samples, 100 samples, and 300 samples of poisoned images) on generating images in different categories. Notice how the images start deforming and deviating from the desired category. For example, after 300 poison samples, a car prompt generates a cow.

Figure 18.26: Data poisoning (Source: Shan et al. (2023))

Shan, Shawn, Wenxin Ding, Josephine Passananti, Haitao Zheng, and Ben Y. Zhao. 2023. “Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models.” ArXiv Preprint abs/2310.13828. https://arxiv.org/abs/2310.13828.

Exercise 18.3 (Poisoning Attacks)  


Get ready to explore the dark side of AI security! In this Colab, you’ll learn about data poisoning – how bad data can trick AI models into making wrong decisions. We’ll focus on a real-world attack against a Support Vector Machine (SVM), observing how the AI’s behavior changes under attack. This hands-on exercise will highlight why protecting AI systems is crucial, especially as they become more integrated into our lives. Think like a hacker, understand the vulnerability, and brainstorm how to defend our AI systems!



18.4.3 Distribution Shifts


Definition and Characteristics


Distribution shift refers to the phenomenon where the data distribution encountered by an ML model during deployment (inference) differs from the distribution it was trained on, as shown in Figure fig-distribution-shift. This is not an attack in itself; rather, it is a natural phenomenon that causes the model’s robustness to vary over time. In other words, the data’s statistical properties, patterns, or underlying assumptions can change between the training and test phases.

Figure 18.27: The curly brackets enclose the distribution shift between the environments. Here, z stands for the spurious feature, and y stands for label class (Source: Xin)

The key characteristics of distribution shift include:


Domain mismatch: The input data during inference comes from a different domain or distribution than the training data. When the input data during inference comes from a domain or distribution different from the training data, it can significantly affect the model’s performance. This is because the model has learned patterns and relationships specific to the training domain, and when applied to a different domain, those learned patterns may not hold. For example, consider a sentiment analysis model trained on movie reviews. Suppose this model is applied to analyze sentiment in tweets. In that case, it may struggle to classify the sentiment accurately because the language, grammar, and context of tweets can differ from movie reviews. This domain mismatch can result in poor performance and unreliable predictions, limiting the model’s practical utility.


Temporal drift: The data distribution evolves, leading to a gradual or sudden shift in the input characteristics. Temporal drift is important because ML models are often deployed in dynamic environments where the data distribution can change over time. If the model is not updated or adapted to these changes, its performance can gradually degrade. For instance, the patterns and behaviors associated with fraudulent activities may evolve in a fraud detection system as fraudsters adapt their techniques. If the model is not retrained or updated to capture these new patterns, it may fail to detect new types of fraud effectively. Temporal drift can lead to a decline in the model’s accuracy and reliability over time, making monitoring and addressing this type of distribution shift crucial.


Contextual changes: The ML model’s context can vary, resulting in different data distributions based on factors such as location, user behavior, or environmental conditions. Contextual changes matter because ML models are often deployed in various contexts or environments that can have different data distributions. If the model cannot generalize well to these different contexts, its performance may degrade. For example, consider a computer vision model trained to recognize objects in a controlled lab environment. When deployed in a real-world setting, factors such as lighting conditions, camera angles, or background clutter can vary significantly, leading to a distribution shift. If the model is not robust to these contextual changes, it may fail to accurately recognize objects in the new environment, limiting its practical utility.


Unrepresentative training data: The training data may only partially capture the variability and diversity of the real-world data encountered during deployment. Unrepresentative training data can lead to biased or skewed models that perform poorly on real-world data. If the training data fails to capture the variability and diversity of the real-world data adequately, the model may learn patterns specific to the training set that do not generalize to new, unseen data. This can result in poor performance, biased predictions, and limited model applicability. For instance, if a facial recognition model is trained primarily on images of individuals from a specific demographic group, it may struggle to accurately recognize faces from other demographic groups when deployed in a real-world setting. Ensuring that the training data is representative and diverse is crucial for building models that can generalize well to real-world scenarios.

Figure 18.28: Concept drift refers to a change in data patterns and relationships over time (Source: Evidently AI)

Distribution shift can manifest in various forms, such as:


Covariate shift: The distribution of the input features (covariates) changes while the conditional distribution of the target variable given the input remains the same. Covariate shift matters because it can impact the model’s ability to make accurate predictions when the input features (covariates) differ between the training and test data. Even if the relationship between the input features and the target variable remains the same, a change in the distribution of the input features can affect the model’s performance. For example, consider a model trained to predict housing prices based on features like square footage, number of bedrooms, and location. Suppose the distribution of these features in the test data significantly differs from the training data (e.g., the test data contains houses with much larger square footage). In that case, the model’s predictions may become less accurate. Addressing covariate shifts is important to ensure the model’s robustness and reliability when applied to new data.
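
A first-pass covariate-shift check can be as simple as comparing per-feature summary statistics between training and live data. The sketch below is a minimal illustration under simplified assumptions (the function name and the informal "above ~3 is suspicious" rule are illustrative, not a standard test): it scores how far the live mean of one feature has drifted, in units of the training standard deviation.

```python
import statistics

def feature_shift_score(train_vals, live_vals):
    """Crude covariate-shift check for a single numeric feature: distance of
    the live mean from the training mean, in training standard deviations."""
    mu = statistics.mean(train_vals)
    sigma = statistics.stdev(train_vals)
    return abs(statistics.mean(live_vals) - mu) / sigma

train_sqft = [1000, 1200, 1400, 1600, 1800]   # square footage seen in training
live_sqft  = [3000, 3200, 3400, 3600, 3800]   # much larger houses at inference
print(feature_shift_score(train_sqft, live_sqft))  # large score -> distribution moved
```

In practice, one would run a proper two-sample test per feature and correct for multiple comparisons, but even this crude score catches gross shifts like the housing example above.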


Concept drift: The relationship between the input features and the target variable changes over time, altering the underlying concept the model is trying to learn, as shown in Figure fig-drift-over-time. Concept drift is important because it indicates changes in the fundamental relationship between the input features and the target variable over time. When the underlying concept that the model is trying to learn shifts, its performance can deteriorate if not adapted to the new concept. For instance, in a customer churn prediction model, the factors influencing customer churn may evolve due to market conditions, competitor offerings, or customer preferences. If the model is not updated to capture these changes, its predictions may become less accurate and irrelevant. Detecting and adapting to concept drift is crucial to maintaining the model’s effectiveness and alignment with evolving real-world concepts.
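
One common way to catch concept drift in production is to compare a sliding window of live accuracy against the accuracy measured at training time. The sketch below is a simplified illustration (the class name, window size, and tolerance are assumptions for the example, not a standard drift detector such as DDM or ADWIN).

```python
from collections import deque

class DriftMonitor:
    """Flags possible concept drift when recent live accuracy falls well
    below the accuracy measured on held-out data at training time."""

    def __init__(self, baseline_acc, window=100, tolerance=0.10):
        self.baseline = baseline_acc
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)   # 1 = correct, 0 = wrong

    def record(self, prediction, actual):
        self.recent.append(1 if prediction == actual else 0)

    def drift_detected(self):
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough evidence yet
        live_acc = sum(self.recent) / len(self.recent)
        return live_acc < self.baseline - self.tolerance

monitor = DriftMonitor(baseline_acc=0.95)
for _ in range(100):
    monitor.record(1, 1)                     # model still agrees with reality
print(monitor.drift_detected())              # False
for _ in range(100):
    monitor.record(1, 0)                     # the underlying concept has changed
print(monitor.drift_detected())              # True
```

The catch is that this approach needs delayed ground-truth labels (e.g., observed churn outcomes); when labels arrive late or never, drift must instead be inferred from the input distribution.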


Domain generalization: The model must generalize to unseen domains or distributions not present during training. Domain generalization is important because it enables ML models to be applied to new, unseen domains without requiring extensive retraining or adaptation. In real-world scenarios, training data that covers all possible domains or distributions that the model may encounter is often infeasible. Domain generalization techniques aim to learn domain-invariant features or models that can generalize well to new domains. For example, consider a model trained to classify images of animals. If the model can learn features invariant to different backgrounds, lighting conditions, or poses, it can generalize well to classify animals in new, unseen environments. Domain generalization is crucial for building models that can be deployed in diverse and evolving real-world settings.


The presence of a distribution shift can significantly impact the performance and reliability of ML models, as the models may fail to generalize well to the new data distribution. Detecting and adapting to distribution shifts is crucial to ensure ML systems’ robustness and practical utility in real-world scenarios.


Mechanisms of Distribution Shifts


The mechanisms of distribution shift, such as changes in data sources, temporal evolution, domain-specific variations, selection bias, feedback loops, and adversarial manipulations, are important to understand because they help identify the underlying causes of distribution shift. By understanding these mechanisms, practitioners can develop targeted strategies to mitigate their impact and improve the model’s robustness. Here are some common mechanisms:

Figure 18.29: Temporal evolution (Source: Białek)

Changes in data sources: Distribution shifts can occur when the data sources used for training and inference differ. For example, if a model is trained on data from one sensor but deployed on data from another sensor with different characteristics, it can lead to a distribution shift.


Temporal evolution: Over time, the underlying data distribution can evolve due to changes in user behavior, market dynamics, or other temporal factors. For instance, in a recommendation system, user preferences may shift over time, leading to a distribution shift in the input data, as shown in Figure fig-temporal-evoltion.


Domain-specific variations: Different domains or contexts can have distinct data distributions. A model trained on data from one domain may not generalize well to another domain without appropriate adaptation techniques. For example, an image classification model trained on indoor scenes may struggle when applied to outdoor scenes.


Selection bias: Distribution shift can arise from selection bias during data collection or sampling. If the training data does not represent the true population or certain subgroups are over- or underrepresented, this can lead to a mismatch between the training and test distributions.


Feedback loops: In some cases, the predictions or actions taken by an ML model can influence future data distribution. For example, in a dynamic pricing system, the prices set by the model can impact customer behavior, leading to a shift in the data distribution over time.


Adversarial manipulations: Adversaries can intentionally manipulate the input data to create a distribution shift and deceive the ML model. By introducing carefully crafted perturbations or generating out-of-distribution samples, attackers can exploit the model’s vulnerabilities and cause it to make incorrect predictions.


Understanding the mechanisms of distribution shift is important for developing effective strategies to detect and mitigate its impact on ML systems. By identifying the sources and characteristics of the shift, practitioners can design appropriate techniques, such as domain adaptation, transfer learning, or continual learning, to improve the model’s robustness and performance under distributional changes.


Impact on ML Systems


Distribution shifts can significantly negatively impact the performance and reliability of ML systems. Here are some key ways in which distribution shift can affect ML models:


Degraded predictive performance: When the data distribution encountered during inference differs from the training distribution, the model’s predictive accuracy can deteriorate. The model may struggle to generalize to the new data, leading to increased errors and suboptimal performance.


Reduced reliability and trustworthiness: Distribution shift can undermine the reliability and trustworthiness of ML models. If the model’s predictions become unreliable or inconsistent due to the shift, users may lose confidence in the system’s outputs, leading to potential misuse or disuse of the model.


Biased predictions: Distribution shift can introduce biases in the model’s predictions. If the training data does not represent the real-world distribution or certain subgroups are underrepresented, the model may make biased predictions that discriminate against certain groups or perpetuate societal biases.


Increased uncertainty and risk: Distribution shift introduces additional uncertainty and risk into the ML system. The model’s behavior and performance may become less predictable, making it challenging to assess its reliability and suitability for critical applications. This uncertainty can lead to increased operational risks and potential failures.


Adaptability challenges: ML models trained on a specific data distribution may struggle to adapt to changing environments or new domains. This lack of adaptability can limit the model’s usefulness and applicability in dynamic real-world scenarios where the data distribution evolves.


Maintenance and update difficulties: Distribution shift can complicate the maintenance and updating of ML models. As the data distribution changes, the model may require frequent retraining or fine-tuning to maintain its performance. This can be time-consuming and resource-intensive, especially if the shift occurs rapidly or continuously.


Vulnerability to adversarial attacks: Distribution shift can make ML models more vulnerable to adversarial attacks. Adversaries can exploit the model’s sensitivity to distributional changes by crafting adversarial examples outside the training distribution, causing the model to make incorrect predictions or behave unexpectedly.


To mitigate the impact of distribution shifts, it is crucial to develop robust ML systems that detect and adapt to distributional changes. Techniques such as domain adaptation, transfer learning, and continual learning can help improve the model’s generalization ability across different distributions. Continuous monitoring, testing, and updating of ML models are also necessary to maintain their performance and reliability in the presence of distribution shifts.


18.4.4 Detection and Mitigation


Adversarial Attacks


As you may recall from above, adversarial attacks pose a significant threat to the robustness and reliability of ML systems. These attacks involve crafting carefully designed inputs, known as adversarial examples, to deceive ML models and cause them to make incorrect predictions. To safeguard ML systems against adversarial attacks, developing effective techniques for detecting and mitigating these threats is crucial.

Adversarial Example Detection Techniques

Detecting adversarial examples is the first line of defense against adversarial attacks. Several techniques have been proposed to identify and flag suspicious inputs that may be adversarial.


Statistical methods aim to detect adversarial examples by analyzing the statistical properties of the input data. These methods often compare the input data distribution to a reference distribution, such as the training data distribution or a known benign distribution. Techniques like the Kolmogorov-Smirnov (Berger and Zhou 2014) test or the Anderson-Darling test can be used to measure the discrepancy between the distributions and flag inputs that deviate significantly from the expected distribution.

Berger, Vance W., and YanYan Zhou. 2014. “Kolmogorov–Smirnov Test: Overview.” Wiley StatsRef: Statistics Reference Online.
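To make this concrete, the two-sample Kolmogorov-Smirnov statistic can be computed directly with NumPy. This is a minimal sketch, not a production detector; the synthetic data, the `ks_statistic` helper, and any flagging threshold are illustrative assumptions.

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two empirical CDFs."""
    a = np.sort(np.asarray(sample_a, dtype=float))
    b = np.sort(np.asarray(sample_b, dtype=float))
    # Evaluate both empirical CDFs at every pooled sample point.
    pooled = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, pooled, side="right") / len(a)
    cdf_b = np.searchsorted(b, pooled, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
train_like = rng.normal(0.0, 1.0, 2000)   # stand-in for training inputs
benign = rng.normal(0.0, 1.0, 2000)       # drawn from the same distribution
shifted = rng.normal(3.0, 1.0, 2000)      # stand-in for suspicious inputs

d_benign = ks_statistic(train_like, benign)
d_shifted = ks_statistic(train_like, shifted)
# A large statistic, relative to a calibrated threshold, flags inputs
# that deviate significantly from the expected distribution.
```

In practice a p-value (or a threshold calibrated on held-out benign data) would decide when the statistic is large enough to flag.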

Kernel density estimation (KDE) is a non-parametric technique used to estimate the probability density function of a dataset. In the context of adversarial example detection, KDE can be used to estimate the density of benign examples in the input space. Adversarial examples often lie in low-density regions and can be detected by comparing their estimated density to a threshold. Inputs with an estimated density below the threshold are flagged as potential adversarial examples.
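A minimal NumPy sketch of KDE-based detection follows; the Gaussian kernel, bandwidth, and density threshold are illustrative assumptions rather than prescribed values.

```python
import numpy as np

def gaussian_kde_density(points, query, bandwidth=0.5):
    """Estimate the density at each query point using a Gaussian
    kernel centered on every reference (benign) point."""
    points = np.asarray(points, dtype=float)
    query = np.asarray(query, dtype=float)
    diff = query[:, None] - points[None, :]          # pairwise differences
    kernel = np.exp(-0.5 * (diff / bandwidth) ** 2)
    norm = len(points) * bandwidth * np.sqrt(2 * np.pi)
    return kernel.sum(axis=1) / norm

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 1.0, 1000)      # density estimated from benign examples
queries = np.array([0.1, 6.0])           # in-distribution vs. far-off input
density = gaussian_kde_density(benign, queries)

threshold = 0.01                          # illustrative; calibrate in practice
flags = density < threshold               # True -> potential adversarial example
```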


Another technique is feature squeezing (Panda, Chakraborty, and Roy 2019), which reduces the complexity of the input space by applying dimensionality reduction or discretization. The idea behind feature squeezing is that adversarial examples often rely on small, imperceptible perturbations that can be eliminated or reduced through these transformations. Inconsistencies can be detected by comparing the model’s predictions on the original input and the squeezed input, indicating the presence of adversarial examples.

Panda, Priyadarshini, Indranil Chakraborty, and Kaushik Roy. 2019. “Discretization Based Solutions for Secure Machine Learning Against Adversarial Attacks.” IEEE Access 7: 70157–68. https://doi.org/10.1109/access.2019.2919463.
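One common squeezing transform is bit-depth reduction. The sketch below (the toy image and parameters are assumptions) shows how quantization collapses small perturbations onto a coarse grid; in a real deployment, the model’s predictions on the original and squeezed inputs would then be compared for inconsistency.

```python
import numpy as np

def squeeze_bit_depth(x, bits=3):
    """Feature squeezing by bit-depth reduction: snap pixel values in
    [0, 1] onto 2**bits - 1 quantization levels, erasing tiny perturbations."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

rng = np.random.default_rng(2)
image = rng.random((8, 8))                                    # toy image
perturbed = np.clip(image + rng.normal(0, 0.01, (8, 8)), 0, 1)

gap_raw = np.abs(image - perturbed).mean()
# After squeezing, most perturbed pixels land on the same quantized level.
gap_squeezed = np.abs(
    squeeze_bit_depth(image) - squeeze_bit_depth(perturbed)
).mean()
```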

Model uncertainty estimation techniques aim to quantify the confidence or uncertainty associated with a model’s predictions. Adversarial examples often exploit regions of high uncertainty in the model’s decision boundary. By estimating the uncertainty using techniques like Bayesian neural networks, dropout-based uncertainty estimation, or ensemble methods, inputs with high uncertainty can be flagged as potential adversarial examples.
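As a rough sketch of the ensemble variant, disagreement across members can serve as the uncertainty signal; the probabilities and threshold below are invented for illustration.

```python
import numpy as np

def ensemble_uncertainty(prob_stack):
    """Predictive uncertainty as the standard deviation of class
    probabilities across ensemble members (rows: members, cols: inputs)."""
    return prob_stack.std(axis=0)

# Hypothetical positive-class probabilities from 5 members on 3 inputs.
probs = np.array([
    [0.95, 0.52, 0.10],
    [0.97, 0.31, 0.12],
    [0.94, 0.75, 0.09],
    [0.96, 0.48, 0.11],
    [0.93, 0.22, 0.10],
])
uncertainty = ensemble_uncertainty(probs)
flags = uncertainty > 0.1   # inputs with high disagreement are flagged
```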

Adversarial Defense Strategies

Once adversarial examples are detected, various defense strategies can be employed to mitigate their impact and improve the robustness of ML models.


Adversarial training is a technique that involves augmenting the training data with adversarial examples and retraining the model on this augmented dataset. Exposing the model to adversarial examples during training teaches it to classify them correctly and becomes more robust to adversarial attacks. Adversarial training can be performed using various attack methods, such as the Fast Gradient Sign Method (FGSM) or Projected Gradient Descent (PGD) (Madry et al. 2017).

Madry, Aleksander, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. “Towards Deep Learning Models Resistant to Adversarial Attacks.” arXiv Preprint arXiv:1706.06083.

Papernot, Nicolas, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. 2016. “Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks.” In 2016 IEEE Symposium on Security and Privacy (SP), 582–97. IEEE. https://doi.org/10.1109/sp.2016.41.
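To make FGSM concrete, here is a minimal sketch for a logistic-regression “model” in NumPy; the weights, example, and epsilon are illustrative. The perturbation follows the sign of the loss gradient with respect to the input:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, w, b):
    """Binary cross-entropy for a single example."""
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: step the input along the sign of
    the loss gradient to maximally increase the loss."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0]); b = 0.0            # toy "trained" model
x = np.array([1.0, 0.5]); y = 1.0             # correctly classified example
x_adv = fgsm(x, y, w, b, eps=0.3)
# Adversarial training would now add (x_adv, y) back into the training set.
```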

Defensive distillation (Papernot et al. 2016) is a technique that trains a second model (the student model) to mimic the behavior of the original model (the teacher model). The student model is trained on the soft labels produced by the teacher model, which are less sensitive to small perturbations. Using the student model for inference can reduce the impact of adversarial perturbations, as the student model learns to generalize better and is less sensitive to adversarial noise.
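The soft labels at the heart of defensive distillation come from a temperature-scaled softmax. A small sketch (the logits and temperature are illustrative):

```python
import numpy as np

def softmax_with_temperature(logits, T):
    """Teacher soft labels: softmax at temperature T. Higher T yields
    softer, less spiky distributions for the student to mimic."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # for numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    return float(-(p * np.log(p)).sum())

logits = np.array([8.0, 2.0, 1.0])            # hypothetical teacher logits
hard_ish = softmax_with_temperature(logits, T=1.0)
soft = softmax_with_temperature(logits, T=20.0)
# The student is trained on `soft`, which preserves the ranking of classes
# but is far less sensitive to small input perturbations.
```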


Input preprocessing and transformation techniques aim to remove or mitigate the effect of adversarial perturbations before feeding the input to the ML model. These techniques include image denoising, JPEG compression, random resizing, padding, or applying random transformations to the input data. By reducing the impact of adversarial perturbations, these preprocessing steps can help improve the model’s robustness to adversarial attacks.


Ensemble methods combine multiple models to make more robust predictions. The ensemble can reduce the impact of adversarial attacks by using a diverse set of models with different architectures, training data, or hyperparameters. Adversarial examples that fool one model may not fool others in the ensemble, leading to more reliable and robust predictions. Model diversification techniques, such as using different preprocessing techniques or feature representations for each model in the ensemble, can further enhance the robustness.
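A majority-vote combiner is the simplest form of such an ensemble. A sketch, with hypothetical per-model predictions:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-model class predictions (rows: models, cols: inputs)
    by majority vote."""
    predictions = np.asarray(predictions)
    n_classes = int(predictions.max()) + 1
    combined = []
    for col in predictions.T:                       # one column per input
        counts = np.bincount(col, minlength=n_classes)
        combined.append(int(counts.argmax()))
    return np.array(combined)

# Hypothetical predictions from three diverse models on four inputs.
# An adversarial example that fools one model is outvoted by the others.
model_preds = np.array([
    [0, 1, 2, 1],
    [0, 1, 1, 1],
    [1, 1, 2, 0],
])
final = majority_vote(model_preds)
```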

Robustness Evaluation and Testing

Thorough evaluation and testing are necessary to assess the effectiveness of adversarial defense techniques and to measure the robustness of ML models.


Adversarial robustness metrics quantify the model’s resilience to adversarial attacks. These metrics can include the model’s accuracy on adversarial examples, the average distortion required to fool the model, or the model’s performance under different attack strengths. By comparing these metrics across different models or defense techniques, practitioners can assess and compare their robustness levels.


Standardized adversarial attack benchmarks and datasets provide a common ground for evaluating and comparing the robustness of ML models. These benchmarks include datasets with pre-generated adversarial examples and tools and frameworks for generating adversarial attacks. Examples of popular adversarial attack benchmarks include the MNIST-C, CIFAR-10-C, and ImageNet-C (Hendrycks and Dietterich 2019) datasets, which contain corrupted or perturbed versions of the original datasets.

Hendrycks, Dan, and Thomas Dietterich. 2019. “Benchmarking Neural Network Robustness to Common Corruptions and Perturbations.” arXiv Preprint arXiv:1903.12261.

Practitioners can develop more robust and resilient ML systems by leveraging these adversarial example detection techniques, defense strategies, and robustness evaluation methods. However, it is important to note that adversarial robustness is an ongoing research area, and no single technique provides complete protection against all types of adversarial attacks. A comprehensive approach that combines multiple defense mechanisms and regular testing is essential to maintain the security and reliability of ML systems in the face of evolving adversarial threats.


Data Poisoning


Recall that data poisoning is an attack that targets the integrity of the training data used to build ML models. By manipulating or corrupting the training data, attackers can influence the model’s behavior and cause it to make incorrect predictions or perform unintended actions. Detecting and mitigating data poisoning attacks is crucial to ensure the trustworthiness and reliability of ML systems, as shown in Figure fig-adversarial-attack-injection.

Anomaly Detection Techniques for Identifying Poisoned Data

Figure 18.30: Malicious data injection (Source: Li)

Statistical outlier detection methods identify data points that deviate significantly from the majority of the data. These methods assume that poisoned data instances are likely to be statistical outliers. Techniques such as the Z-score method, Tukey’s method, or the Mahalanobis distance can be used to measure the deviation of each data point from the central tendency of the dataset. Data points that exceed a predefined threshold are flagged as potential outliers and considered suspicious for data poisoning.
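A minimal Z-score detector in NumPy; the synthetic feature values and the conventional threshold of 3 are illustrative.

```python
import numpy as np

def zscore_outliers(values, threshold=3.0):
    """Flag points whose z-score magnitude exceeds the threshold."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.abs(z) > threshold

rng = np.random.default_rng(3)
features = rng.normal(50.0, 5.0, 500)   # clean feature values
features[42] = 250.0                    # injected (poisoned) point
suspicious = zscore_outliers(features)  # boolean mask over the dataset
```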


Clustering-based methods group similar data points together based on their features or attributes. The assumption is that poisoned data instances may form distinct clusters or lie far away from the normal data clusters. By applying clustering algorithms like K-means, DBSCAN, or hierarchical clustering, anomalous clusters or data points that do not belong to any cluster can be identified. These anomalous instances are then treated as potentially poisoned data.

Figure 18.31: Autoencoder (Source: Dertat)

Autoencoders are neural networks trained to reconstruct the input data from a compressed representation, as shown in Figure fig-autoencoder. They can be used for anomaly detection by learning the normal patterns in the data and identifying instances that deviate from them. During training, the autoencoder is trained on clean, unpoisoned data. At inference time, the reconstruction error for each data point is computed. Data points with high reconstruction errors are considered abnormal and potentially poisoned, as they do not conform to the learned normal patterns.
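A linear autoencoder with a one-unit bottleneck is equivalent to PCA, which makes the reconstruction-error idea easy to sketch in NumPy. The synthetic manifold and test points below are assumptions for illustration, not a substitute for a trained nonlinear autoencoder.

```python
import numpy as np

rng = np.random.default_rng(4)
# Clean training data lies near a 1-D line embedded in 3-D space.
t = rng.normal(size=(500, 1))
clean = t @ np.array([[1.0, 2.0, -1.0]]) + rng.normal(0, 0.05, (500, 3))

# A linear autoencoder with a 1-unit bottleneck reduces to PCA:
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
encoder = vt[:1].T                       # 3-D -> 1-D bottleneck

def reconstruction_error(x):
    code = (x - mean) @ encoder          # encode
    recon = code @ encoder.T + mean      # decode
    return np.linalg.norm(x - recon, axis=-1)

normal_point = np.array([[2.0, 4.0, -2.0]])    # on the learned manifold
poisoned_point = np.array([[2.0, -4.0, 2.0]])  # off the manifold
err_normal = reconstruction_error(normal_point)[0]
err_poisoned = reconstruction_error(poisoned_point)[0]
# Points with reconstruction error above a calibrated threshold are flagged.
```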

Data Sanitization and Preprocessing Techniques

Data poisoning can be mitigated by data cleaning, which involves identifying and removing or correcting noisy, incomplete, or inconsistent data points. Techniques such as data deduplication, missing value imputation, and outlier removal can be applied to improve the quality of the training data. By eliminating or filtering out suspicious or anomalous data points, the impact of poisoned instances can be reduced.


Data validation involves verifying the integrity and consistency of the training data. This can include checking for data type consistency, range validation, and cross-field dependencies. By defining and enforcing data validation rules, anomalous or inconsistent data points indicative of data poisoning can be identified and flagged for further investigation.
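A small sketch of rule-based record validation; the schema, field names, and rules are invented for illustration.

```python
def validate_record(record, schema):
    """Check one training record against simple validation rules:
    type, allowed range, and a cross-field dependency."""
    errors = []
    for field, (ftype, lo, hi) in schema.items():
        value = record.get(field)
        if not isinstance(value, ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
        elif not (lo <= value <= hi):
            errors.append(f"{field}: {value} outside [{lo}, {hi}]")
    # Cross-field dependency: the end timestamp must not precede the start.
    if not errors and record["end_ts"] < record["start_ts"]:
        errors.append("end_ts precedes start_ts")
    return errors

schema = {
    "age": (int, 0, 120),
    "start_ts": (int, 0, 10**10),
    "end_ts": (int, 0, 10**10),
}
good = {"age": 34, "start_ts": 100, "end_ts": 200}
bad = {"age": 430, "start_ts": 100, "end_ts": 50}   # out-of-range age
```

Records that fail validation would be flagged for further investigation rather than silently dropped.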


Data provenance and lineage tracking involve maintaining a record of data’s origin, transformations, and movements throughout the ML pipeline. By documenting the data sources, preprocessing steps, and any modifications made to the data, practitioners can trace anomalies or suspicious patterns back to their origin. This helps identify potential points of data poisoning and facilitates the investigation and mitigation process.

Robust Training Techniques

Robust optimization techniques can be used to modify the training objective to minimize the impact of outliers or poisoned instances. This can be achieved by using robust loss functions less sensitive to extreme values, such as the Huber loss or the modified Huber loss. Regularization techniques, such as L1 or L2 regularization, can also help in reducing the model’s sensitivity to poisoned data by constraining the model’s complexity and preventing overfitting.


Robust loss functions are designed to be less sensitive to outliers or noisy data points. Examples include the modified Huber loss, the Tukey loss (Beaton and Tukey 1974), and the trimmed mean loss. These loss functions down-weight or ignore the contribution of abnormal instances during training, reducing their impact on the model’s learning process. Robust objective functions, such as the minimax or distributionally robust objective, aim to optimize the model’s performance under worst-case scenarios or in the presence of adversarial perturbations.

Beaton, Albert E., and John W. Tukey. 1974. “The Fitting of Power Series, Meaning Polynomials, Illustrated on Band-Spectroscopic Data.” Technometrics 16 (2): 147. https://doi.org/10.2307/1267936.
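The Huber loss itself is simple to write down; the sketch below (the delta and residual values are illustrative) shows how it caps the influence of a large, possibly poisoned, residual relative to the squared loss:

```python
import numpy as np

def huber_loss(residuals, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones,
    so a few extreme (possibly poisoned) points cannot dominate training."""
    r = np.abs(np.asarray(residuals, dtype=float))
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear)

residuals = np.array([0.1, 0.5, 8.0])   # the last one is an outlier
huber = huber_loss(residuals)
squared = 0.5 * residuals ** 2          # ordinary squared-error loss
```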

Data augmentation techniques involve generating additional training examples by applying random transformations or perturbations to the existing data, as shown in Figure fig-data-augmentation. This helps in increasing the diversity and robustness of the training dataset. By introducing controlled variations in the data, the model becomes less sensitive to specific patterns or artifacts that may be present in poisoned instances. Randomization techniques, such as random subsampling or bootstrap aggregating, can also help reduce the impact of poisoned data by training multiple models on different subsets of the data and combining their predictions.

Figure 18.32: An image of the number “3” in original form and with basic augmentations applied.

Secure and Trusted Data Sourcing

Implementing the best data collection and curation practices can help mitigate the risk of data poisoning. This includes establishing clear data collection protocols, verifying the authenticity and reliability of data sources, and conducting regular data quality assessments. Sourcing data from trusted and reputable providers and following secure data handling practices can reduce the likelihood of introducing poisoned data into the training pipeline.


Strong data governance and access control mechanisms are essential to prevent unauthorized modifications or tampering with the training data. This involves defining clear roles and responsibilities for data access, implementing access control policies based on the principle of least privilege, and monitoring and logging data access activities. By restricting access to the training data and maintaining an audit trail, potential data poisoning attempts can be detected and investigated.


Detecting and mitigating data poisoning attacks requires a multifaceted approach that combines anomaly detection, data sanitization, robust training techniques, and secure data sourcing practices. By implementing these measures, ML practitioners can enhance the resilience of their models against data poisoning and ensure the integrity and trustworthiness of the training data. However, it is important to note that data poisoning is an active area of research, and new attack vectors and defense mechanisms continue to emerge. Staying informed about the latest developments and adopting a proactive and adaptive approach to data security is crucial for maintaining the robustness of ML systems.


Distribution Shifts

Detecting and Mitigating Distribution Shifts

Recall that distribution shifts occur when the data distribution encountered by a machine learning (ML) model during deployment differs from the distribution it was trained on. These shifts can significantly impact the model’s performance and generalization ability, leading to suboptimal or incorrect predictions. Detecting and mitigating distribution shifts is crucial to ensure the robustness and reliability of ML systems in real-world scenarios.

Detection Techniques for Distribution Shifts

Statistical tests can be used to compare the distributions of the training and test data to identify significant differences. Techniques such as the Kolmogorov-Smirnov test or the Anderson-Darling test measure the discrepancy between two distributions and provide a quantitative assessment of the presence of distribution shift. By applying these tests to the input features or the model’s predictions, practitioners can detect if there is a statistically significant difference between the training and test distributions.


Divergence metrics quantify the dissimilarity between two probability distributions. Commonly used divergence metrics include the Kullback-Leibler (KL) divergence and the Jensen-Shannon (JS) divergence. By calculating the divergence between the training and test data distributions, practitioners can assess the extent of the distribution shift. High divergence values indicate a significant difference between the distributions, suggesting the presence of a distribution shift.
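A histogram-based KL divergence check can be sketched in a few lines of NumPy; the bin edges, sample sizes, and the implied alert threshold are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions, e.g. histograms of
    a feature in training vs. deployment data. eps avoids log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(5)
bins = np.linspace(-6, 6, 25)
train_hist, _ = np.histogram(rng.normal(0, 1, 5000), bins=bins)
same_hist, _ = np.histogram(rng.normal(0, 1, 5000), bins=bins)   # no shift
shift_hist, _ = np.histogram(rng.normal(2, 1, 5000), bins=bins)  # mean shift

kl_same = kl_divergence(train_hist, same_hist)
kl_shift = kl_divergence(train_hist, shift_hist)
# A divergence well above the no-shift baseline signals a distribution shift.
```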


Uncertainty quantification techniques, such as Bayesian neural networks or ensemble methods, can estimate the uncertainty associated with the model’s predictions. When a model is applied to data from a different distribution, its predictions may have higher uncertainty. By monitoring the uncertainty levels, practitioners can detect distribution shifts. If the uncertainty consistently exceeds a predetermined threshold for test samples, it suggests that the model is operating outside its trained distribution.


In addition, domain classifiers are trained to distinguish between different domains or distributions. Practitioners can detect distribution shifts by training a classifier to differentiate between the training and test domains. If the domain classifier achieves high accuracy in distinguishing between the two domains, it indicates a significant difference in the underlying distributions. The performance of the domain classifier serves as a measure of the distribution shift.

Mitigation Techniques for Distribution Shifts

Figure 18.33: Transfer learning (Source: Bhavsar)

Transfer learning leverages knowledge gained from one domain to improve performance in another, as shown in Figure fig-transfer-learning. By using pre-trained models or transferring learned features from a source domain to a target domain, transfer learning can help mitigate the impact of distribution shifts. The pre-trained model can be fine-tuned on a small amount of labeled data from the target domain, allowing it to adapt to the new distribution. Transfer learning is particularly effective when the source and target domains share similar characteristics or when labeled data in the target domain is scarce.


Continual learning, also known as lifelong learning, enables ML models to learn continuously from new data distributions while retaining knowledge from previous distributions. Techniques such as elastic weight consolidation (EWC) (Kirkpatrick et al. 2017) or gradient episodic memory (GEM) (Lopez-Paz and Ranzato 2017) allow models to adapt to evolving data distributions over time. These techniques aim to balance the plasticity of the model (ability to learn from new data) with the stability of the model (retaining previously learned knowledge). By incrementally updating the model with new data and mitigating catastrophic forgetting, continual learning helps models stay robust to distribution shifts.

Kirkpatrick, James, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, et al. 2017. “Overcoming Catastrophic Forgetting in Neural Networks.” Proc. Natl. Acad. Sci. 114 (13): 3521–26. https://doi.org/10.1073/pnas.1611835114.

Lopez-Paz, David, and Marc’Aurelio Ranzato. 2017. “Gradient Episodic Memory for Continual Learning.” Adv Neural Inf Process Syst 30.

Data augmentation techniques, such as those we have seen previously, involve applying transformations or perturbations to the existing training data to increase its diversity and improve the model’s robustness to distribution shifts. By introducing variations in the data, such as rotations, translations, scaling, or adding noise, data augmentation helps the model learn invariant features and generalize better to unseen distributions. Data augmentation can be performed during training and inference to enhance the model’s ability to handle distribution shifts.
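A rough sketch of such augmentation in NumPy; the flip probability and noise scale are arbitrary choices for illustration.

```python
import numpy as np

def augment(image, rng):
    """Random horizontal flip plus Gaussian noise: cheap augmentations
    that make a model less sensitive to small distributional changes."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                      # horizontal flip
    out = out + rng.normal(0, 0.02, out.shape)  # additive noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(6)
image = rng.random((16, 16))                    # toy image in [0, 1]
# Each pass through `augment` yields a distinct training example.
batch = np.stack([augment(image, rng) for _ in range(8)])
```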


Ensemble methods combine multiple models to make predictions more robust to distribution shifts. By training models on different subsets of the data, using different algorithms, or with different hyperparameters, ensemble methods can capture diverse aspects of the data distribution. When presented with a shifted distribution, the ensemble can leverage the strengths of individual models to make more accurate and stable predictions. Techniques like bagging, boosting, or stacking can create effective ensembles.


Regularly updating models with new data from the target distribution is crucial to mitigate the impact of distribution shifts. As the data distribution evolves, models should be retrained or fine-tuned on the latest available data to adapt to the changing patterns. Monitoring model performance and data characteristics can help detect when an update is necessary. By keeping the models up to date, practitioners can ensure they remain relevant and accurate in the face of distribution shifts.


Evaluating models using robust metrics less sensitive to distribution shifts can provide a more reliable assessment of model performance. Metrics such as the area under the precision-recall curve (AUPRC) or the F1 score are more robust to class imbalance and can better capture the model’s performance across different distributions. Additionally, using domain-specific evaluation metrics that align with the desired outcomes in the target domain can provide a more meaningful measure of the model’s effectiveness.
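For example, the F1 score can be computed from confusion-matrix counts; the sketch below, with a deliberately imbalanced toy batch, shows why accuracy alone can be misleading after a shift.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1: harmonic mean of precision and recall, which is far more
    informative than accuracy when the class balance shifts."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Imbalanced batch: a trivial all-negative model scores 90% accuracy
# yet has F1 = 0, exposing its uselessness on the minority class.
y_true = np.array([0] * 9 + [1])
trivial = np.zeros(10, dtype=int)
```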


Detecting and mitigating distribution shifts is an ongoing process that requires continuous monitoring, adaptation, and improvement. By employing a combination of detection techniques and mitigation strategies, ML practitioners can proactively identify and address distribution shifts, ensuring the robustness and reliability of their models in real-world deployments. It is important to note that distribution shifts can take various forms and may require domain-specific approaches depending on the nature of the data and the application. Staying informed about the latest research and best practices in handling distribution shifts is essential for building resilient ML systems.


18.5 Software Faults


Definition and Characteristics


Software faults refer to defects, errors, or bugs in the runtime software frameworks and components that support the execution and deployment of ML models (Myllyaho et al. 2022). These faults can arise from various sources, such as programming mistakes, design flaws, or compatibility issues (H. Zhang 2008), and can have significant implications for ML systems’ performance, reliability, and security. Software faults in ML frameworks exhibit several key characteristics:

Myllyaho, Lalli, Mikko Raatikainen, Tomi Männistö, Jukka K. Nurminen, and Tommi Mikkonen. 2022. “On Misbehaviour and Fault Tolerance in Machine Learning Systems.” J. Syst. Software 183 (January): 111096. https://doi.org/10.1016/j.jss.2021.111096.

Zhang, Hongyu. 2008. “On the Distribution of Software Faults.” IEEE Trans. Software Eng. 34 (2): 301–2. https://doi.org/10.1109/tse.2007.70771.
  • Diversity: Software faults can manifest in different forms, ranging from simple logic and syntax mistakes to more complex issues like memory leaks, race conditions, and integration problems. The variety of fault types adds to the challenge of detecting and mitigating them effectively.

  • Propagation: In ML systems, software faults can propagate through the various layers and components of the framework. A fault in one module can trigger a cascade of errors or unexpected behavior in other parts of the system, making it difficult to pinpoint the root cause and assess the full impact of the fault.

  • Intermittency: Some software faults may exhibit intermittent behavior, occurring sporadically or under specific conditions. These faults can be particularly challenging to reproduce and debug, as they may manifest inconsistently during testing or normal operation.

  • Interaction with ML models: Software faults in ML frameworks can interact with the trained models in subtle ways. For example, a fault in the data preprocessing pipeline may introduce noise or bias into the model’s inputs, leading to degraded performance or incorrect predictions. Similarly, faults in the model serving component may cause inconsistencies between the training and inference environments.

  • Impact on system properties: Software faults can compromise various desirable properties of ML systems, such as performance, scalability, reliability, and security. Faults may lead to slowdowns, crashes, incorrect outputs, or vulnerabilities that attackers can exploit.

  • Dependency on external factors: The occurrence and impact of software faults in ML frameworks often depend on external factors, such as the choice of hardware, operating system, libraries, and configurations. Compatibility issues and version mismatches can introduce faults that are difficult to anticipate and mitigate.


Understanding the characteristics of software faults in ML frameworks is crucial for developing effective fault prevention, detection, and mitigation strategies. By recognizing the diversity, propagation, intermittency, and impact of software faults, ML practitioners can design more robust and reliable systems resilient to these issues.


Mechanisms of Software Faults in ML Frameworks


Machine learning frameworks, such as TensorFlow, PyTorch, and scikit-learn, provide powerful tools and abstractions for building and deploying ML models. However, these frameworks are not immune to software faults that can impact ML systems’ performance, reliability, and correctness. Let’s explore some of the common software faults that can occur in ML frameworks:


Memory Leaks and Resource Management Issues: Improper memory management, such as failing to release memory or close file handles, can lead to memory leaks and resource exhaustion over time. This issue is compounded by inefficient memory usage, where creating unnecessary copies of large tensors or not leveraging memory-efficient data structures can cause excessive memory consumption and degrade system performance. Additionally, failing to manage GPU memory properly can result in out-of-memory errors or suboptimal utilization of GPU resources, further exacerbating the problem as shown in Figure fig-gpu-out-of-memory.

Figure 18.34: Example of GPU out-of-memory and suboptimal utilization issues

Synchronization and Concurrency Problems: Incorrect synchronization between threads or processes can lead to race conditions, deadlocks, or inconsistent behavior in multi-threaded or distributed ML systems. This issue is often tied to improper handling of asynchronous operations, such as non-blocking I/O or parallel data loading, which can cause synchronization issues and impact the correctness of the ML pipeline. Moreover, improper coordination and communication between distributed nodes in a cluster can result in inconsistent or stale data during training or inference, compromising the reliability of the ML system.


Compatibility Issues: Mismatches between the versions of ML frameworks, libraries, or dependencies can introduce compatibility problems and runtime errors. Upgrading or changing the versions of underlying libraries without thoroughly testing the impact on the ML system can lead to unexpected behavior or breakages. Furthermore, inconsistencies between the training and deployment environments, such as differences in hardware, operating systems, or package versions, can cause compatibility issues and affect the reproducibility of ML models, making it challenging to ensure consistent performance across different platforms.


Numerical Instability and Precision Errors: Inadequate handling of numerical instabilities, such as division by zero, underflow, or overflow, can lead to incorrect calculations or convergence issues during training. This problem is compounded by insufficient precision or rounding errors, which can accumulate over time and impact the accuracy of the ML models, especially in deep learning architectures with many layers. Moreover, improper scaling or normalization of input data can cause numerical instabilities and affect the convergence and performance of optimization algorithms, resulting in suboptimal or unreliable model performance.


Inadequate Error Handling and Exception Management: Inadequate error handling and exception management can cause ML systems to crash or behave unexpectedly when encountering exceptional conditions or invalid inputs. Failing to catch and handle specific exceptions or relying on generic exception handling can make it difficult to diagnose and recover from errors gracefully, leading to system instability and reduced reliability. Furthermore, incomplete or misleading error messages can hinder the ability to effectively debug and resolve software faults in ML frameworks, prolonging the time required to identify and fix issues.


Impact on ML Systems


Software faults in machine learning frameworks can have significant and far-reaching impacts on ML systems’ performance, reliability, and security. Let’s explore the various ways in which software faults can affect ML systems:


Performance Degradation and System Slowdowns: Memory leaks and inefficient resource management can lead to gradual performance degradation over time as the system becomes increasingly memory-constrained and spends more time on garbage collection or memory swapping (Maas et al. 2024). This issue is compounded by synchronization issues and concurrency bugs, which can cause delays, reduced throughput, and suboptimal utilization of computational resources, especially in multi-threaded or distributed ML systems. Furthermore, compatibility problems or inefficient code paths can introduce additional overhead and slowdowns, affecting the overall performance of the ML system.

Maas, Martin, David G. Andersen, Michael Isard, Mohammad Mahdi Javanmard, Kathryn S. McKinley, and Colin Raffel. 2024. “Combining Machine Learning and Lifetime-Based Resource Management for Memory Allocation and Beyond.” Commun. ACM 67 (4): 87–96. https://doi.org/10.1145/3611018.

Incorrect Predictions or Outputs: Software faults in data preprocessing, feature engineering, or model evaluation can introduce biases, noise, or errors propagating through the ML pipeline and resulting in incorrect predictions or outputs. Over time, numerical instabilities, precision errors, or rounding issues can accumulate and lead to degraded accuracy or convergence problems in the trained models. Moreover, faults in the model serving or inference components can cause inconsistencies between the expected and actual outputs, leading to incorrect or unreliable predictions in production.


Reliability and Stability Issues: Software faults can cause unhandled exceptions, crashes, or sudden terminations that compromise the reliability and stability of ML systems, especially in production environments. Intermittent or sporadic faults can be difficult to reproduce and diagnose, leading to unpredictable behavior and reduced confidence in the ML system’s outputs. Additionally, faults in checkpointing, model serialization, or state management can cause data loss or inconsistencies, affecting the reliability and recoverability of the ML system.


Security Vulnerabilities: Software faults, such as buffer overflows, injection vulnerabilities, or improper access control, can introduce security risks and expose the ML system to potential attacks or unauthorized access. Adversaries may exploit faults in the preprocessing or feature extraction stages to manipulate the input data and deceive the ML models, leading to incorrect or malicious behavior. Furthermore, inadequate protection of sensitive data, such as user information or confidential model parameters, can lead to data breaches or privacy violations (Q. Li et al. 2023).

Li, Qinbin, Zeyi Wen, Zhaomin Wu, Sixu Hu, Naibo Wang, Yuan Li, Xu Liu, and Bingsheng He. 2023. “A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection.” IEEE Trans. Knowl. Data Eng. 35 (4): 3347–66. https://doi.org/10.1109/tkde.2021.3124599.

Difficulty in Reproducing and Debugging: Software faults can make it challenging to reproduce and debug issues in ML systems, especially when the faults are intermittent or dependent on specific runtime conditions. Incomplete or ambiguous error messages, coupled with the complexity of ML frameworks and models, can prolong the debugging process and hinder the ability to identify and fix the underlying faults. Moreover, inconsistencies between development, testing, and production environments can make reproducing and diagnosing faults in specific contexts difficult.


Increased Development and Maintenance Costs: Software faults can lead to increased development and maintenance costs, as teams spend more time and resources debugging, fixing, and validating the ML system. The need for extensive testing, monitoring, and fault-tolerant mechanisms to mitigate the impact of software faults can add complexity and overhead to the ML development process. Frequent patches, updates, and bug fixes to address software faults can disrupt the development workflow and require additional effort to ensure the stability and compatibility of the ML system.


Understanding the potential impact of software faults on ML systems is crucial for prioritizing testing efforts, implementing fault-tolerant designs, and establishing effective monitoring and debugging practices. By proactively addressing software faults and their consequences, ML practitioners can build more robust, reliable, and secure ML systems that deliver accurate and trustworthy results.


Detection and Mitigation


Detecting and mitigating software faults in machine learning frameworks is essential to ensure ML systems’ reliability, performance, and security. Let’s explore various techniques and approaches that can be employed to identify and address software faults effectively:


Thorough Testing and Validation: Comprehensive unit testing of individual components and modules can verify their correctness and identify potential faults early in development. Integration testing validates the interaction and compatibility between different components of the ML framework, ensuring seamless integration. Systematic testing of edge cases, boundary conditions, and exceptional scenarios helps uncover hidden faults and vulnerabilities. Continuous testing and regression testing as shown in Figure fig-regression-testing detect faults introduced by code changes or updates to the ML framework.
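As a concrete sketch, edge-case unit tests for a hypothetical preprocessing helper might look like the following. The `normalize` function and its contract are illustrative assumptions, not from any particular framework; the point is that the boundary condition (constant input) is tested explicitly, since that is where naive implementations fail.

```python
# Hypothetical example: unit tests covering an edge case in a
# preprocessing step. `normalize` is an illustrative helper, not a real API.

def normalize(values):
    """Scale values to [0, 1]; a constant input maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:  # boundary condition: zero dynamic range
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def test_normalize_typical():
    assert normalize([0.0, 5.0, 10.0]) == [0.0, 0.5, 1.0]

def test_normalize_constant_input():
    # Edge case that naive implementations crash on (division by zero).
    assert normalize([3.0, 3.0, 3.0]) == [0.0, 0.0, 0.0]

if __name__ == "__main__":
    test_normalize_typical()
    test_normalize_constant_input()
```

In a real project, such tests would run automatically on every code change as part of the regression suite.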

Figure 18.35: Automated regression testing (Source: UTOR)

Static Code Analysis and Linting: Utilizing static code analysis tools automatically identifies potential coding issues, such as syntax errors, undefined variables, or security vulnerabilities. Enforcing coding standards and best practices through linting tools maintains code quality and reduces the likelihood of common programming mistakes. Conducting regular code reviews allows manual inspection of the codebase, identification of potential faults, and ensures adherence to coding guidelines and design principles.


Runtime Monitoring and Logging: Implementing comprehensive logging mechanisms captures relevant information during runtime, such as input data, model parameters, and system events. Monitoring key performance metrics, resource utilization, and error rates helps detect anomalies, performance bottlenecks, or unexpected behavior. Employing runtime assertion checks and invariants validates assumptions and detects violations of expected conditions during program execution. Utilizing profiling tools identifies performance bottlenecks, memory leaks, or inefficient code paths that may indicate the presence of software faults.
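A minimal sketch of such a runtime assertion check, under the assumption of a hypothetical `check_activations` helper that validates a layer's outputs and logs violations (the name and thresholds are illustrative):

```python
# Illustrative runtime assertion check: verify that a layer's activations
# stay finite and within an expected magnitude, logging any violation.
import logging
import math

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("ml_monitor")

def check_activations(name, values, max_abs=1e6):
    """Return True if all activations are finite and bounded; log otherwise."""
    for v in values:
        if not math.isfinite(v):
            log.warning("%s: non-finite activation %r detected", name, v)
            return False
        if abs(v) > max_abs:
            log.warning("%s: activation magnitude %g exceeds %g", name, v, max_abs)
            return False
    return True

check_activations("layer1", [0.1, -2.5, 3.0])   # passes silently
check_activations("layer2", [float("nan")])     # logs a warning
```

Such checks are cheap relative to model execution and can surface numerical faults long before they corrupt downstream predictions.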


Fault-Tolerant Design Patterns: Implementing error handling and exception management mechanisms enables graceful handling and recovery from exceptional conditions or runtime errors. Employing redundancy and failover mechanisms, such as backup systems or redundant computations, ensures the availability and reliability of the ML system in the presence of faults. Designing modular and loosely coupled architectures minimizes the propagation and impact of faults across different components of the ML system. Utilizing checkpointing and recovery mechanisms (Eisenman et al. 2022) allows the system to resume from a known stable state in case of failures or interruptions.
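The checkpoint-and-recover idea can be sketched in pure Python (illustrative only; real systems would serialize model and optimizer state with framework utilities rather than `pickle`):

```python
# Minimal checkpointing sketch: training state is serialized every few
# steps so that a crash can resume from the last stable state.
import os
import pickle
import tempfile

def save_checkpoint(path, step, weights):
    # Write to a temp file first, then atomically rename, so a crash
    # mid-write cannot corrupt the previous checkpoint.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump({"step": step, "weights": weights}, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    with open(path, "rb") as f:
        return pickle.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "model.ckpt")
weights = [0.0]
for step in range(1, 6):
    weights[0] += 0.1            # stand-in for one training step
    if step % 2 == 0:            # checkpoint every 2 steps
        save_checkpoint(ckpt, step, list(weights))

state = load_checkpoint(ckpt)    # after a "crash", resume from step 4
print(state["step"])             # -> 4
```

The atomic-rename detail matters in practice: it guarantees that a failure during checkpointing never leaves the system without a valid state to recover from.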

Eisenman, Assaf, Kiran Kumar Matam, Steven Ingram, Dheevatsa Mudigere, Raghuraman Krishnamoorthi, Krishnakumar Nair, Misha Smelyanskiy, and Murali Annavaram. 2022. “Check-n-Run: A Checkpointing System for Training Deep Learning Recommendation Models.” In 19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22), 929–43.

Regular Updates and Patches: Staying up to date with the latest versions and patches of the ML frameworks, libraries, and dependencies ensures access to bug fixes, security updates, and performance improvements. Monitoring release notes, security advisories, and community forums keeps practitioners informed about known issues, vulnerabilities, or compatibility problems in the ML framework. Establishing a systematic process for testing and validating updates and patches before applying them to production systems ensures stability and compatibility.


Containerization and Isolation: Leveraging containerization technologies, such as Docker or Kubernetes, encapsulates ML components and their dependencies in isolated environments. Utilizing containerization ensures consistent and reproducible runtime environments across development, testing, and production stages, reducing the likelihood of compatibility issues or environment-specific faults. Employing isolation techniques, such as virtual environments or sandboxing, prevents faults or vulnerabilities in one component from affecting other parts of the ML system.


Automated Testing and Continuous Integration/Continuous Deployment (CI/CD): Implementing automated testing frameworks and scripts executes comprehensive test suites and catches faults early in development. Integrating automated testing into the CI/CD pipeline, as shown in Figure fig-CI-CD-procedure, ensures that code changes are thoroughly tested before being merged or deployed to production. Utilizing continuous monitoring and automated alerting systems detects and notifies developers and operators about potential faults or anomalies in real time.

Figure 18.36: Continuous Integration/Continuous Deployment (CI/CD) procedure (Source: geeksforgeeks)

Adopting a proactive and systematic approach to fault detection and mitigation can significantly improve ML systems’ robustness, reliability, and maintainability. By investing in comprehensive testing, monitoring, and fault-tolerant design practices, organizations can minimize the impact of software faults and ensure their ML systems’ smooth operation in production environments.


Exercise 18.4 (Fault Tolerance)  


Get ready to become an AI fault-fighting superhero! Software glitches can derail machine learning systems, but in this Colab, you’ll learn how to make them resilient. We’ll simulate software faults to see how AI can break, then explore techniques to save your ML model’s progress, like checkpoints in a game. You’ll see how to train your AI to bounce back after a crash, ensuring it stays on track. This is crucial for building reliable, trustworthy AI, especially in critical applications. So gear up because this Colab directly connects with the Robust AI chapter – you’ll move from theory to hands-on troubleshooting and build AI systems that can handle the unexpected!



18.6 Tools and Frameworks


Given the importance of developing robust AI systems, researchers and practitioners have in recent years developed a wide range of tools and frameworks to understand how hardware faults manifest and propagate to impact ML systems. These tools and frameworks play a crucial role in evaluating the resilience of ML systems to hardware faults by simulating various fault scenarios and analyzing their impact on the system’s performance. This enables designers to identify potential vulnerabilities and develop effective mitigation strategies, ultimately creating more robust and reliable ML systems that can operate safely despite hardware faults. This section provides an overview of widely used fault models in the literature and the tools and frameworks developed to evaluate the impact of such faults on ML systems.


18.6.1 Fault Models and Error Models


As discussed previously, hardware faults can manifest in various ways, including transient, permanent, and intermittent faults. In addition to the type of fault under study, how the fault manifests is also important. For example, does the fault happen in a memory cell or during the computation of a functional unit? Is the impact on a single bit, or does it impact multiple bits? Does the fault propagate all the way and impact the application (causing an error), or does it get masked quickly and is considered benign? All these details impact what is known as the fault model, which plays a major role in simulating and measuring what happens to a system when a fault occurs.


To effectively study and understand the impact of hardware faults on ML systems, it is essential to understand the concepts of fault models and error models. A fault model describes how a hardware fault manifests itself in the system, while an error model represents how the fault propagates and affects the system’s behavior.


Fault models can be categorized based on various characteristics:

  • Duration: Transient faults occur briefly and then disappear, while permanent faults persist indefinitely. Intermittent faults occur sporadically and may be difficult to diagnose.
  • Location: Faults can occur in hardware parts, such as memory cells, functional units, or interconnects.
  • Granularity: Faults can affect a single bit (e.g., bitflip) or multiple bits (e.g., burst errors) within a hardware component.

On the other hand, error models describe how a fault propagates through the system and manifests as an error. An error may cause the system to deviate from its expected behavior, leading to incorrect results or even system failures. Error models can be defined at different levels of abstraction, from the hardware level (e.g., register-level bitflips) to the software level (e.g., corrupted weights or activations in an ML model).
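For instance, a software-level single-bit-flip error model can be sketched by flipping one bit of a value's IEEE-754 float32 representation, as an error model might do to a stored weight. This is a toy illustration, not tied to any specific tool; the `flip_bit` helper is an assumption for exposition.

```python
# Toy single-bit-flip error model: flip bit `bit` of a float32 value.
import struct

def flip_bit(value, bit):
    """Flip bit `bit` (0 = LSB of mantissa, 31 = sign) of a float32."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    corrupted = as_int ^ (1 << bit)
    (result,) = struct.unpack("<f", struct.pack("<I", corrupted))
    return result

print(flip_bit(1.0, 31))  # sign-bit flip: -> -1.0
print(flip_bit(1.0, 30))  # high exponent bit: -> inf (catastrophic)
```

Note how strongly the impact depends on which bit is hit: a sign-bit flip merely negates the value, while a high exponent-bit flip turns 1.0 into infinity. This granularity sensitivity is exactly what fault models must capture.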


The fault model (or error model, typically the more applicable terminology in understanding the robustness of an ML system) plays a major role in simulating and measuring what happens to a system when a fault occurs. The chosen model informs the assumptions made about the system being studied. For example, a system focusing on single-bit transient errors (Sangchoolie, Pattabiraman, and Karlsson 2017) would not be well-suited to understand the impact of permanent, multi-bit flip errors (Wilkening et al. 2014), as it is designed assuming a different model altogether.

Wilkening, Mark, Vilas Sridharan, Si Li, Fritz Previlon, Sudhanva Gurumurthi, and David R. Kaeli. 2014. “Calculating Architectural Vulnerability Factors for Spatial Multi-Bit Transient Faults.” In 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture, 293–305. IEEE. https://doi.org/10.1109/micro.2014.15.

Furthermore, implementing an error model is also an important consideration, particularly regarding where in the compute stack an error is said to occur. For instance, a single-bit flip at the architectural register level differs from a single-bit flip in a model weight at the PyTorch level. Although both target a similar error model, the former would usually be modeled in an architecturally accurate simulator (such as gem5 (Binkert et al. 2011)), which captures error propagation, whereas the latter focuses on value propagation through a model.


Recent research has shown that certain characteristics of error models may exhibit similar behaviors across different levels of abstraction (Sangchoolie, Pattabiraman, and Karlsson 2017) (Papadimitriou and Gizopoulos 2021). For example, single-bit errors are generally more problematic than multi-bit errors, regardless of whether they are modeled at the hardware or software level. However, other characteristics, such as error masking (Mohanram and Touba 2003) as shown in Figure fig-error-masking, may not always be accurately captured by software-level models, as they can hide underlying system effects.
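The masking effect can be illustrated with a toy example (an assumption made for exposition; real masking studies operate at the microarchitectural level, as the cited work does): a bit flip that leaves a pre-activation value negative is absorbed by a downstream ReLU, while a sign-bit flip on the same value changes the output.

```python
# Toy illustration of error masking: some bit flips never reach the output.
import struct

def flip_bit(value, bit):
    """Flip one bit of a float32 value's IEEE-754 representation."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (out,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return out

def relu(x):
    return x if x > 0.0 else 0.0

pre_act = -0.75
masked = flip_bit(pre_act, 0)      # low mantissa bit: value stays negative
assert relu(pre_act) == relu(masked) == 0.0   # fault is masked by the ReLU
visible = flip_bit(pre_act, 31)    # sign bit: -0.75 becomes +0.75
assert relu(visible) == 0.75       # fault propagates downstream
```

A purely software-level model that only observes outputs after the ReLU would never see the masked fault, which is one reason software models can understate (or misattribute) underlying hardware effects.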

Sangchoolie, Behrooz, Karthik Pattabiraman, and Johan Karlsson. 2017. “One Bit Is (Not) Enough: An Empirical Study of the Impact of Single and Multiple Bit-Flip Errors.” In 2017 47th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 97–108. IEEE. https://doi.org/10.1109/dsn.2017.30.

Papadimitriou, George, and Dimitris Gizopoulos. 2021. “Demystifying the System Vulnerability Stack: Transient Fault Effects Across the Layers.” In 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 902–15. IEEE. https://doi.org/10.1109/isca52012.2021.00075.

Mohanram, K., and N. A. Touba. 2003. “Partial Error Masking to Reduce Soft Error Failure Rate in Logic Circuits.” In Proceedings. 16th IEEE Symposium on Computer Arithmetic, 433–40. IEEE Comput. Soc. https://doi.org/10.1109/dftvs.2003.1250141.

Figure 18.37: Example of error masking in microarchitectural components (Ko 2021)

Ko, Yohan. 2021. “Characterizing System-Level Masking Effects Against Soft Errors.” Electronics 10 (18): 2286. https://doi.org/10.3390/electronics10182286.

Some tools, such as Fidelity (He, Balaprakash, and Li 2020), aim to bridge the gap between hardware-level and software-level error models by mapping patterns between the two levels of abstraction (Cheng et al. 2016). This allows for more accurate modeling of hardware faults in software-based tools, which is essential for developing robust and reliable ML systems. Lower-level tools typically capture error propagation more accurately but are slower at simulating large numbers of errors due to the complex nature of hardware system designs. On the other hand, higher-level tools, such as those implemented in ML frameworks like PyTorch or TensorFlow, which we discuss in later sections, are often faster and more efficient for evaluating the robustness of ML systems.

Cheng, Eric, Shahrzad Mirkhani, Lukasz G. Szafaryn, Chen-Yong Cher, Hyungmin Cho, Kevin Skadron, Mircea R. Stan, et al. 2016. “CLEAR: Cross-Layer Exploration for Architecting Resilience - Combining Hardware and Software Techniques to Tolerate Soft Errors in Processor Cores.” In Proceedings of the 53rd Annual Design Automation Conference, 1–6. ACM. https://doi.org/10.1145/2897937.2897996.

In the following subsections, we will discuss various hardware-based and software-based fault injection methods and tools, highlighting their capabilities, limitations, and the fault and error models they support.


18.6.2 Hardware-based Fault Injection


An error injection tool allows the user to implement a particular error model, such as a transient single-bit flip during inference, as shown in Figure fig-hardware-errors. Most error injection tools are software-based, as software-level tools are faster for ML robustness studies. However, hardware-based fault injection methods remain important for grounding the higher-level error models, as they are considered the most accurate way to study the impact of faults on ML systems by directly manipulating the hardware to introduce faults. These methods allow researchers to observe the system’s behavior under real-world fault conditions. Both software-based and hardware-based error injection tools are described in this section in more detail.

Figure 18.38: Hardware errors can occur due to a variety of reasons and at different times and/or locations in a system, which can be explored when studying the impact of hardware-based errors on systems (Ahmadilivani et al. 2024)

Ahmadilivani, Mohammad Hasan, Mahdi Taheri, Jaan Raik, Masoud Daneshtalab, and Maksim Jenihhin. 2024. “A Systematic Literature Review on Hardware Reliability Assessment Methods for Deep Neural Networks.” ACM Comput. Surv. 56 (6): 1–39. https://doi.org/10.1145/3638242.

Methods


Two of the most common hardware-based fault injection methods are FPGA-based fault injection and radiation or beam testing.


FPGA-based Fault Injection: Field-Programmable Gate Arrays (FPGAs) are reconfigurable integrated circuits that can be programmed to implement various hardware designs. In the context of fault injection, FPGAs offer high precision and accuracy, as researchers can target specific bits or sets of bits within the hardware. By modifying the FPGA configuration, faults can be introduced at specific locations and times during the execution of an ML model. FPGA-based fault injection allows for fine-grained control over the fault model, enabling researchers to study the impact of different types of faults, such as single-bit flips or multi-bit errors. This level of control makes FPGA-based fault injection a valuable tool for understanding the resilience of ML systems to hardware faults.


Radiation or Beam Testing: Radiation or beam testing (Velazco, Foucard, and Peronnard 2010) involves exposing the hardware running an ML model to high-energy particles, such as protons or neutrons, as illustrated in Figure fig-beam-testing. These particles can cause bitflips or other types of faults in the hardware, mimicking the effects of real-world radiation-induced faults. Beam testing is widely regarded as a highly accurate method for measuring the error rate induced by particle strikes on a running application. It provides a realistic representation of the faults in real-world environments, particularly in applications exposed to high radiation levels, such as space systems or particle physics experiments. However, unlike FPGA-based fault injection, beam testing is less precise in targeting specific bits or components within the hardware, as it is difficult to aim the particle beam at a particular bit. Despite being quite expensive from a research standpoint, beam testing is a well-regarded industry practice for reliability.

Velazco, Raoul, Gilles Foucard, and Paul Peronnard. 2010. “Combining Results of Accelerated Radiation Tests and Fault Injections to Predict the Error Rate of an Application Implemented in SRAM-Based FPGAs.” IEEE Trans. Nucl. Sci. 57 (6): 3500–3505. https://doi.org/10.1109/tns.2010.2087355.

Figure 18.39: Radiation test setup for semiconductor components (Lee et al. 2022) (Source: JD Instrument)

Lee, Minwoong, Namho Lee, Huijeong Gwon, Jongyeol Kim, Younggwan Hwang, and Seongik Cho. 2022. “Design of Radiation-Tolerant High-Speed Signal Processing Circuit for Detecting Prompt Gamma Rays by Nuclear Explosion.” Electronics 11 (18): 2970. https://doi.org/10.3390/electronics11182970.

Limitations


Despite their high accuracy, hardware-based fault injection methods have several limitations that can hinder their widespread adoption:


Cost: FPGA-based fault injection and beam testing require specialized hardware and facilities, which can be expensive to set up and maintain. The cost of these methods can be a significant barrier for researchers and organizations with limited resources.


Scalability: Hardware-based methods are generally slower and less scalable than software-based methods. Injecting faults and collecting data on hardware can take time, limiting the number of experiments performed within a given timeframe. This can be particularly challenging when studying the resilience of large-scale ML systems or conducting statistical analyses that require many fault injection experiments.


Flexibility: Hardware-based methods may not be as flexible as software-based methods in terms of the range of fault models and error models they can support. Modifying the hardware configuration or the experimental setup to accommodate different fault models can be more challenging and time-consuming than software-based methods.


Despite these limitations, hardware-based fault injection methods remain essential tools for validating the accuracy of software-based methods and for studying the impact of faults on ML systems in realistic settings. By combining hardware-based and software-based methods, researchers can gain a more comprehensive understanding of ML systems’ resilience to hardware faults and develop effective mitigation strategies.


18.6.3 Software-based Fault Injection Tools


With the rapid development of ML frameworks in recent years, software-based fault injection tools have gained popularity in studying the resilience of ML systems to hardware faults. These tools simulate the effects of hardware faults by modifying the software representation of the ML model or the underlying computational graph. The rise of ML frameworks such as TensorFlow, PyTorch, and Keras has facilitated the development of fault injection tools that are tightly integrated with these frameworks, making it easier for researchers to conduct fault injection experiments and analyze the results.

Advantages and Trade-offs

Software-based fault injection tools offer several advantages over hardware-based methods:


Speed: Software-based tools are generally faster than hardware-based methods, as they do not require the modification of physical hardware or the setup of specialized equipment. This allows researchers to conduct more fault injection experiments in a shorter time, enabling more comprehensive analyses of the resilience of ML systems.


Flexibility: Software-based tools are more flexible than hardware-based methods in terms of the range of fault and error models they can support. Researchers can easily modify the fault injection tool’s software implementation to accommodate different fault models or to target specific components of the ML system.


Accessibility: Software-based tools are more accessible than hardware-based methods, as they do not require specialized hardware or facilities. This makes it easier for researchers and practitioners to conduct fault injection experiments and study the resilience of ML systems, even with limited resources.

Limitations

Software-based fault injection tools also have some limitations compared to hardware-based methods:


Accuracy: Software-based tools may not always capture the full range of effects that hardware faults can have on the system. Because these tools operate at a higher level of abstraction, they may miss some of the low-level hardware interactions and error propagation mechanisms that can impact the behavior of the ML system.


Fidelity: Software-based tools may provide a lower level of fidelity than hardware-based methods in representing real-world fault conditions. The accuracy of the results obtained from software-based fault injection experiments depends on how closely the software model approximates the actual hardware behavior.

Figure 18.40: Comparison of techniques at layers of abstraction (Source: MAVFI)

Types of Fault Injection Tools

Software-based fault injection tools can be categorized based on their target frameworks or use cases. Here, we will discuss some of the most popular tools in each category:


Ares (Reagen et al. 2018), a fault injection tool initially developed for the Keras framework in 2018, emerged as one of the first tools to study the impact of hardware faults on deep neural networks (DNNs) in the context of the rising popularity of ML frameworks in the mid-to-late 2010s. The tool was validated against a DNN accelerator implemented in silicon, demonstrating its effectiveness in modeling hardware faults. Ares provides a comprehensive study on the impact of hardware faults in both weights and activation values, characterizing the effects of single-bit flips and bit-error rates (BER) on hardware structures. Later, the Ares framework was extended to support the PyTorch ecosystem, enabling researchers to investigate hardware faults in a more modern setting and further extending its utility in the field.
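A bit-error-rate (BER) style injection of the kind characterized in such studies can be sketched as follows. The `inject_ber` helper, its parameters, and the use of a seeded RNG are assumptions for illustration, not the Ares API: each bit of each float32 weight is flipped independently with probability `ber`.

```python
# Illustrative BER-style fault injection over a list of float32 "weights":
# each of the 32 bits of each value flips independently with probability ber.
import random
import struct

def inject_ber(weights, ber, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    corrupted = []
    for w in weights:
        (bits,) = struct.unpack("<I", struct.pack("<f", w))
        for b in range(32):
            if rng.random() < ber:
                bits ^= 1 << b
        (w2,) = struct.unpack("<f", struct.pack("<I", bits))
        corrupted.append(w2)
    return corrupted

clean = [0.5, -1.25, 2.0]
assert inject_ber(clean, 0.0) == clean   # BER of zero is a no-op
faulty = inject_ber(clean, 0.05)         # roughly 5% of bits flipped
```

Sweeping `ber` and measuring model accuracy at each setting is the basic experimental loop behind BER characterization studies.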

-
-Reagen, Brandon, Udit Gupta, Lillian Pentecost, Paul Whatmough, Sae Kyu Lee, Niamh Mulholland, David Brooks, and Gu-Yeon Wei. 2018. “Ares: A Framework for Quantifying the Resilience of Deep Neural Networks.” In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac.2018.8465834. -
-
-
- -
-
-Figure 18.41: Hardware bitflips in ML workloads can cause phantom objects and misclassifications, which can erroneously be used downstream by larger systems, such as in autonomous driving. Shown above is a correct and faulty version of the same image using the PyTorchFI injection framework. -
-
-
-

PyTorchFI (Mahmoud et al. 2020), a fault injection tool specifically designed for the PyTorch framework, was developed in 2020 in collaboration with Nvidia Research. It enables the injection of faults into the weights, activations, and gradients of PyTorch models, supporting a wide range of fault models. By leveraging the GPU acceleration capabilities of PyTorch, PyTorchFI provides a fast and efficient implementation for conducting fault injection experiments on large-scale ML systems, as shown in Figure fig-phantom-objects. The tool’s speed and ease of use have led to widespread adoption in the community, resulting in multiple developer-led projects, such as PyTorchALFI by Intel Labs, which focuses on safety in automotive environments. Follow-up PyTorch-centric tools for fault injection include Dr. DNA by Meta (Ma et al. 2024), which further facilitates the Pythonic programming model for ease of use, and the GoldenEye framework (Mahmoud et al. 2022), which incorporates novel numerical datatypes (such as AdaptivFloat (Tambe et al. 2020) and BlockFloat) in the context of hardware bit flips.

Mahmoud, Abdulrahman, Neeraj Aggarwal, Alex Nobbe, Jose Rodrigo Sanchez Vicarte, Sarita V. Adve, Christopher W. Fletcher, Iuri Frosio, and Siva Kumar Sastry Hari. 2020. “PyTorchFI: A Runtime Perturbation Tool for DNNs.” In 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W), 25–31. IEEE. https://doi.org/10.1109/dsn-w50199.2020.00014.

Ma, Dongning, Fred Lin, Alban Desmaison, Joel Coburn, Daniel Moore, Sriram Sankar, and Xun Jiao. 2024. “Dr. DNA: Combating Silent Data Corruptions in Deep Learning Using Distribution of Neuron Activations.” In Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 239–52. ACM. https://doi.org/10.1145/3620666.3651349.

Mahmoud, Abdulrahman, Thierry Tambe, Tarek Aloui, David Brooks, and Gu-Yeon Wei. 2022. “GoldenEye: A Platform for Evaluating Emerging Numerical Data Formats in DNN Accelerators.” In 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 206–14. IEEE. https://doi.org/10.1109/dsn53405.2022.00031.

Tambe, Thierry, En-Yu Yang, Zishen Wan, Yuntian Deng, Vijay Janapa Reddi, Alexander Rush, David Brooks, and Gu-Yeon Wei. 2020. “Algorithm-Hardware Co-Design of Adaptive Floating-Point Encodings for Resilient Deep Learning Inference.” In 2020 57th ACM/IEEE Design Automation Conference (DAC), 1–6. IEEE. https://doi.org/10.1109/dac18072.2020.9218516.

Chen, Zitao, Niranjhana Narayanan, Bo Fang, Guanpeng Li, Karthik Pattabiraman, and Nathan DeBardeleben. 2020. “TensorFI: A Flexible Fault Injection Framework for TensorFlow Applications.” In 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE), 426–35. IEEE. https://doi.org/10.1109/issre5003.2020.00047.

Chen, Zitao, Guanpeng Li, Karthik Pattabiraman, and Nathan DeBardeleben. 2019. “BinFI: An Efficient Fault Injector for Safety-Critical Machine Learning Systems.” In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. SC ’19. New York, NY, USA: ACM. https://doi.org/10.1145/3295500.3356177.

TensorFI (Chen et al. 2020), or the TensorFlow Fault Injector, is a fault injection tool developed specifically for the TensorFlow framework. Analogous to Ares and PyTorchFI, TensorFI is considered the state-of-the-art tool for ML robustness studies in the TensorFlow ecosystem. It allows researchers to inject faults into the computational graph of TensorFlow models and study their impact on the model’s performance, supporting a wide range of fault models. One of the key benefits of TensorFI is its ability to evaluate the resilience of various ML models, not just DNNs. Further advancements, such as BinFI (Chen et al. 2019), provide a mechanism to speed up error injection experiments by focusing on the "important" bits in the system, accelerating the process of ML robustness analysis and prioritizing the critical components of a model.


NVBitFI (T. Tsai et al. 2021), a general-purpose fault injection tool developed by Nvidia for their GPU platforms, operates at a lower level compared to framework-specific tools like Ares, PyTorchFI, and TensorFI. While those tools focus on various deep learning platforms to implement and perform robustness analysis, NVBitFI targets the underlying hardware assembly code for fault injection. This allows researchers to inject faults into any application running on Nvidia GPUs, making it a versatile tool for studying the resilience of ML systems and other GPU-accelerated applications. By enabling users to inject errors at the architectural level, NVBitFI provides a more general-purpose fault model that is not restricted to just ML models. As Nvidia’s GPU systems are commonly used in many ML-based systems, NVBitFI is a valuable tool for comprehensive fault injection analysis across various applications.

Tsai, Timothy, Siva Kumar Sastry Hari, Michael Sullivan, Oreste Villa, and Stephen W. Keckler. 2021. “NVBitFI: Dynamic Fault Injection for GPUs.” In 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 284–91. IEEE. https://doi.org/10.1109/dsn48987.2021.00041.
Domain-specific Examples

Domain-specific fault injection tools have been developed to address various ML application domains’ unique challenges and requirements, such as autonomous vehicles and robotics. This section highlights three domain-specific fault injection tools: DriveFI and PyTorchALFI for autonomous vehicles and MAVFI for uncrewed aerial vehicles (UAVs). These tools enable researchers to inject hardware faults into these complex systems’ perception, control, and other subsystems, allowing them to study the impact of faults on system performance and safety. The development of these software-based fault injection tools has greatly expanded the capabilities of the ML community to develop more robust and reliable systems that can operate safely and effectively in the presence of hardware faults.


DriveFI (Jha et al. 2019) is a fault injection tool designed for autonomous vehicles. It enables the injection of hardware faults into the perception and control pipelines of autonomous vehicle systems, allowing researchers to study the impact of these faults on the system’s performance and safety. DriveFI has been integrated with industry-standard autonomous driving platforms, such as Nvidia DriveAV and Baidu Apollo, making it a valuable tool for evaluating the resilience of autonomous vehicle systems.

Jha, Saurabh, Subho Banerjee, Timothy Tsai, Siva K. S. Hari, Michael B. Sullivan, Zbigniew T. Kalbarczyk, Stephen W. Keckler, and Ravishankar K. Iyer. 2019. “ML-Based Fault Injection for Autonomous Vehicles: A Case for Bayesian Fault Injection.” In 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 112–24. IEEE. https://doi.org/10.1109/dsn.2019.00025.

Gräfe, Ralf, Qutub Syed Sha, Florian Geissler, and Michael Paulitsch. 2023. “Large-Scale Application of Fault Injection into PyTorch Models - an Extension to PyTorchFI for Validation Efficiency.” In 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks - Supplemental Volume (DSN-S), 56–62. IEEE. https://doi.org/10.1109/dsn-s58398.2023.00025.

PyTorchALFI (Gräfe et al. 2023) is an extension of PyTorchFI developed by Intel Labs for the autonomous vehicle domain. It builds upon PyTorchFI’s fault injection capabilities and adds features specifically tailored for evaluating the resilience of autonomous vehicle systems, such as the ability to inject faults into camera and LiDAR sensor data.


MAVFI (Hsiao et al. 2023) is a fault injection tool designed for the robotics domain, specifically for uncrewed aerial vehicles (UAVs). MAVFI is built on top of the Robot Operating System (ROS) framework and allows researchers to inject faults into the various components of a UAV system, such as sensors, actuators, and control algorithms. By evaluating the impact of these faults on the UAV’s performance and stability, researchers can develop more resilient and fault-tolerant UAV systems.

Hsiao, Yu-Shun, Zishen Wan, Tianyu Jia, Radhika Ghosal, Abdulrahman Mahmoud, Arijit Raychowdhury, David Brooks, Gu-Yeon Wei, and Vijay Janapa Reddi. 2023. “MAVFI: An End-to-End Fault Analysis Framework with Anomaly Detection and Recovery for Micro Aerial Vehicles.” In 2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1–6. IEEE. https://doi.org/10.23919/date56975.2023.10137246.

The development of software-based fault injection tools has greatly expanded the capabilities of researchers and practitioners to study the resilience of ML systems to hardware faults. By leveraging the speed, flexibility, and accessibility of these tools, the ML community can develop more robust and reliable systems that can operate safely and effectively in the presence of hardware faults.
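The core mechanism shared by these software-based injectors can be illustrated with a minimal sketch: flip one randomly chosen bit in the IEEE-754 representation of a model weight and observe the corrupted value. This is a simplified illustration of the single-bit transient-fault model, not the actual implementation of any of the tools above; the function names are hypothetical.

```python
import random
import struct

import numpy as np

def flip_bit(value: np.float32, bit: int) -> np.float32:
    """Flip one bit in the IEEE-754 binary representation of a float32."""
    as_int = struct.unpack("<I", struct.pack("<f", value))[0]
    return np.float32(struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))[0])

def inject_weight_fault(weights: np.ndarray, rng: random.Random):
    """Corrupt one randomly chosen weight in place with a single random
    bit flip, mimicking the transient-fault model used by software injectors."""
    flat = weights.ravel()          # view into the same buffer
    idx = rng.randrange(flat.size)
    bit = rng.randrange(32)
    before = flat[idx]
    flat[idx] = flip_bit(before, bit)
    return idx, bit, before, flat[idx]

rng = random.Random(0)
w = np.full((4, 4), 0.5, dtype=np.float32)
idx, bit, before, after = inject_weight_fault(w, rng)
print(f"weight {idx}: bit {bit} flipped, {before} -> {after}")
```

Running the same experiment thousands of times while measuring model accuracy is, in essence, the fault injection campaign that tools like PyTorchFI automate.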


18.6.4 Bridging the Gap between Hardware and Software Error Models


While software-based fault injection tools offer many advantages in speed, flexibility, and accessibility, they may not always accurately capture the full range of effects that hardware faults can have on the system. This is because software-based tools operate at a higher level of abstraction than hardware-based methods and may miss some of the low-level hardware interactions and error propagation mechanisms that can impact the behavior of the ML system.


As Bolchini et al. (2023) illustrate in their work, hardware errors can manifest in complex spatial distribution patterns that are challenging to fully replicate with software-based fault injection alone. They identify four distinct patterns: (a) single point, where the fault corrupts a single value in a feature map; (b) same row, where the fault corrupts a partial or entire row in a single feature map; (c) bullet wake, where the fault corrupts the same location across multiple feature maps; and (d) shatter glass, which combines the effects of the same row and bullet wake patterns, as shown in Figure 18.42. These intricate error propagation mechanisms highlight the need for hardware-aware fault injection techniques to accurately assess the resilience of ML systems.

Figure 18.42: Hardware errors may manifest themselves in different ways at the software level, as classified by Bolchini et al. (Bolchini et al. 2023)

Bolchini, Cristiana, Luca Cassano, Antonio Miele, and Alessandro Toschi. 2023. “Fast and Accurate Error Simulation for CNNs Against Soft Errors.” IEEE Trans. Comput. 72 (4): 984–97. https://doi.org/10.1109/tc.2022.3184274.
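For intuition, the four spatial patterns can be reproduced on a synthetic activation tensor. The sketch below is an illustrative re-implementation assuming a (channels, height, width) layout; the function name and the use of `inf` as the corrupted value are our own choices, not code from Bolchini et al.

```python
import numpy as np

def inject_pattern(fmaps: np.ndarray, pattern: str, ch: int, row: int, col: int,
                   bad: float = np.inf) -> np.ndarray:
    """Corrupt a (channels, height, width) activation tensor following the
    spatial error patterns described by Bolchini et al. (2023)."""
    out = fmaps.copy()
    if pattern == "single_point":      # one value in one feature map
        out[ch, row, col] = bad
    elif pattern == "same_row":        # partial/entire row of one feature map
        out[ch, row, :] = bad
    elif pattern == "bullet_wake":     # same location across all feature maps
        out[:, row, col] = bad
    elif pattern == "shatter_glass":   # same_row + bullet_wake combined
        out[ch, row, :] = bad
        out[:, row, col] = bad
    else:
        raise ValueError(f"unknown pattern: {pattern}")
    return out

fmaps = np.zeros((8, 16, 16), dtype=np.float32)
corrupted = inject_pattern(fmaps, "bullet_wake", ch=0, row=3, col=5)
print("corrupted values:", np.isinf(corrupted).sum())
```

Counting the corrupted entries for each pattern makes the difference concrete: a single point touches one value, a same-row fault touches a whole row, and a bullet wake touches one value in every feature map.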

Researchers have developed tools to address this issue by bridging the gap between low-level hardware error models and higher-level software error models. One such tool is Fidelity, designed to map patterns between hardware-level faults and their software-level manifestations.


Fidelity: Bridging the Gap


Fidelity (He, Balaprakash, and Li 2020) is a tool for accurately modeling hardware faults in software-based fault injection experiments. It achieves this by carefully studying the relationship between hardware-level faults and their impact on the software representation of the ML system.

He, Yi, Prasanna Balaprakash, and Yanjing Li. 2020. “FIdelity: Efficient Resilience Analysis Framework for Deep Learning Accelerators.” In 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 270–81. IEEE. https://doi.org/10.1109/micro50266.2020.00033.

The key insights behind Fidelity are:

  • Fault Propagation: Fidelity models how faults propagate through the hardware and manifest as errors in the software-visible state of the system. By understanding these propagation patterns, Fidelity can more accurately simulate the effects of hardware faults in software-based experiments.

  • Fault Equivalence: Fidelity identifies equivalent classes of hardware faults that produce similar software-level errors. This allows researchers to design software-based fault models that are representative of the underlying hardware faults without the need to model every possible hardware fault individually.

  • Layered Approach: Fidelity employs a layered approach to fault modeling, where the effects of hardware faults are propagated through multiple levels of abstraction, from the hardware to the software level. This approach ensures that the software-based fault models are grounded in the actual behavior of the hardware.
By incorporating these insights, Fidelity enables software-based fault injection tools to capture the effects of hardware faults on ML systems accurately. This is particularly important for safety-critical applications, where the system’s resilience to hardware faults is paramount.


Importance of Capturing True Hardware Behavior


Capturing true hardware behavior in software-based fault injection tools is crucial for several reasons:

  • Accuracy: By accurately modeling the effects of hardware faults, software-based tools can provide more reliable insights into the resilience of ML systems. This is essential for designing and validating fault-tolerant systems that can operate safely and effectively in the presence of hardware faults.

  • Reproducibility: When software-based tools accurately capture hardware behavior, fault injection experiments become more reproducible across different platforms and environments. This is important for the scientific study of ML system resilience, as it allows researchers to compare and validate results across different studies and implementations.

  • Efficiency: Software-based tools that capture true hardware behavior can be more efficient in their fault injection experiments by focusing on the most representative and impactful fault models. This allows researchers to cover a wider range of fault scenarios and system configurations with limited computational resources.

  • Mitigation Strategies: Understanding how hardware faults manifest at the software level is crucial for developing effective mitigation strategies. By accurately capturing hardware behavior, software-based fault injection tools can help researchers identify the most vulnerable components of the ML system and design targeted hardening techniques to improve resilience.

Tools like Fidelity are vital in advancing the state-of-the-art in ML system resilience research. These tools enable researchers to conduct more accurate, reproducible, and efficient fault injection experiments by bridging the gap between hardware and software error models. As the complexity and criticality of ML systems continue to grow, the importance of capturing true hardware behavior in software-based fault injection tools will only become more apparent.


Ongoing research in this area aims to refine the mapping between hardware and software error models and develop new techniques for efficiently simulating hardware faults in software-based experiments. As these tools mature, they will provide the ML community with increasingly powerful and accessible means to study and improve the resilience of ML systems to hardware faults.


18.7 Conclusion


Developing robust and resilient AI is paramount as machine learning systems become increasingly integrated into safety-critical applications and real-world environments. This chapter has explored the key challenges to AI robustness arising from hardware faults, malicious attacks, distribution shifts, and software bugs.


Some of the key takeaways include the following:

  • Hardware Faults: Transient, permanent, and intermittent faults in hardware components can corrupt computations and degrade the performance of machine learning models if not properly detected and mitigated. Techniques such as redundancy, error correction, and fault-tolerant designs play a crucial role in building resilient ML systems that can withstand hardware faults.

  • Model Robustness: Malicious actors can exploit vulnerabilities in ML models through adversarial attacks and data poisoning, aiming to induce targeted misclassifications, skew the model’s learned behavior, or compromise the system’s integrity and reliability. Also, distribution shifts can occur when the data distribution encountered during deployment differs from those seen during training, leading to performance degradation. Implementing defensive measures, including adversarial training, anomaly detection, robust model architectures, and techniques such as domain adaptation, transfer learning, and continual learning, is essential to safeguard against these challenges and ensure the model’s reliability and generalization in dynamic environments.

  • Software Faults: Faults in ML frameworks, libraries, and software stacks can propagate errors, degrade performance, and introduce security vulnerabilities. Rigorous testing, runtime monitoring, and adopting fault-tolerant design patterns are essential for building robust software infrastructure supporting reliable ML systems.

As ML systems take on increasingly complex tasks with real-world consequences, prioritizing resilience becomes critical. The tools and frameworks discussed in this chapter, including fault injection techniques, error analysis methods, and robustness evaluation frameworks, provide practitioners with the means to thoroughly test and harden their ML systems against various failure modes and adversarial conditions.


Moving forward, resilience must be a central focus throughout the entire AI development lifecycle, from data collection and model training to deployment and monitoring. By proactively addressing the multifaceted challenges to robustness, we can develop trustworthy, reliable ML systems that can navigate the complexities and uncertainties of real-world environments.


Future research in robust ML should continue to advance techniques for detecting and mitigating faults, attacks, and distributional shifts. Additionally, exploring novel paradigms for developing inherently resilient AI architectures, such as self-healing systems or fail-safe mechanisms, will be crucial in pushing the boundaries of AI robustness. By prioritizing resilience and investing in developing robust AI systems, we can unlock the full potential of machine learning technologies while ensuring their safe, reliable, and responsible deployment in real-world applications. As AI continues to shape our future, building resilient systems that can withstand the challenges of the real world will be a defining factor in the success and societal impact of this transformative technology.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage both students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.


Coming soon.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.


16  Sustainable AI


Resources: Slides, Labs, Exercises

DALL·E 3 Prompt: 3D illustration on a light background of a sustainable AI network interconnected with a myriad of eco-friendly energy sources. The AI actively manages and optimizes its energy from sources like solar arrays, wind turbines, and hydro dams, emphasizing power efficiency and performance. Deep neural networks spread throughout, receiving energy from these sustainable resources.
Learning Objectives
  • Understand AI’s environmental impact, including energy consumption, carbon emissions, electronic waste, and biodiversity effects.

  • Learn about methods and best practices for developing sustainable AI systems.

  • Appreciate the importance of taking a lifecycle perspective when evaluating and addressing the sustainability of AI systems.

  • Recognize the roles various stakeholders, such as researchers, corporations, policymakers, and end users, play in furthering responsible and sustainable AI progress.

  • Learn about specific frameworks, metrics, and tools to enable greener AI development.

  • Appreciate real-world case studies, like Google’s 4M efficiency practices, that showcase how organizations are taking tangible steps to improve AI’s environmental record.

16.1 Introduction


The rapid advancements in artificial intelligence (AI) and machine learning (ML) have led to many beneficial applications and optimizations for performance efficiency. However, the remarkable growth of AI comes with a significant yet often overlooked cost: its environmental impact. The most recent report released by the IPCC, the international body leading scientific assessments of climate change and its impacts, emphasized the pressing importance of tackling climate change. Without immediate efforts to decrease global \(\textrm{CO}_2\) emissions by at least 43 percent before 2030, global warming will exceed 1.5 degrees Celsius (Winkler et al. 2022). This could initiate positive feedback loops, pushing temperatures even higher. Beyond environmental issues, the United Nations has recognized 17 Sustainable Development Goals (SDGs) in which AI can play an important role; in turn, sustainability considerations play an important role in the development of AI systems. As the field continues expanding, considering sustainability is crucial.

Winkler, Harald, Franck Lecocq, Hans Lofgren, Maria Virginia Vilariño, Sivan Kartha, and Joana Portugal-Pereira. 2022. “Examples of Shifting Development Pathways: Lessons on How to Enable Broader, Deeper, and Faster Climate Action.” Climate Action 1 (1). https://doi.org/10.1007/s44168-022-00026-1.

Maslej, Nestor, Loredana Fattorini, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, et al. 2023. “Artificial Intelligence Index Report 2023.” ArXiv Preprint abs/2310.03715. https://arxiv.org/abs/2310.03715.

AI systems, particularly large language models like GPT-3 and computer vision models like DALL-E 2, require massive amounts of computational resources for training. For example, GPT-3 was estimated to consume 1,300 megawatt-hours of electricity, equivalent to the monthly consumption of roughly 1,450 average US households (Maslej et al. 2023); put another way, it consumed enough energy to supply an average US household for 120 years! This immense energy demand stems primarily from power-hungry data centers whose servers run intense computations to train these complex neural networks for days or weeks.
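A back-of-the-envelope check ties these two comparisons together, assuming an average US household consumes roughly 900 kWh per month (our assumption, not a figure from the report):

```python
# Sanity-check the GPT-3 training energy comparisons above.
GPT3_TRAINING_MWH = 1_300
HOUSEHOLD_KWH_PER_MONTH = 900          # rough US average (assumption)

training_kwh = GPT3_TRAINING_MWH * 1_000
households_for_one_month = training_kwh / HOUSEHOLD_KWH_PER_MONTH
years_for_one_household = training_kwh / (HOUSEHOLD_KWH_PER_MONTH * 12)

print(f"~{households_for_one_month:,.0f} households for one month")
print(f"~{years_for_one_household:,.0f} years for one household")
```

Both headline figures fall out of the same assumption: about 1,450 households for a month, or one household for about 120 years.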


Current estimates indicate that the carbon emissions produced from developing a single, sophisticated AI model can equal the emissions over the lifetime of five standard gasoline-powered vehicles (Strubell, Ganesh, and McCallum 2019). A significant portion of the electricity presently consumed by data centers is generated from nonrenewable sources such as coal and natural gas, resulting in data centers contributing around 1% of total worldwide carbon emissions. This is comparable to the emissions from the entire airline sector. This immense carbon footprint demonstrates the pressing need to transition to renewable power sources such as solar and wind to operate AI development.

Prakash, Shvetank, Matthew Stewart, Colby Banbury, Mark Mazumder, Pete Warden, Brian Plancher, and Vijay Janapa Reddi. 2023. “Is TinyML Sustainable? Assessing the Environmental Impacts of Machine Learning on Microcontrollers.” ArXiv Preprint. https://arxiv.org/abs/2301.11899.

Additionally, even small-scale AI systems deployed to edge devices as part of TinyML have environmental impacts that should not be ignored (Prakash, Stewart, et al. 2023). The specialized hardware required for AI has an environmental toll from natural resource extraction and manufacturing. GPUs, CPUs, and chips like TPUs depend on rare earth metals whose mining and processing generate substantial pollution. The production of these components also has its energy demands. Furthermore, collecting, storing, and preprocessing data used to train both small- and large-scale models comes with environmental costs, further exacerbating the sustainability implications of ML systems.


Thus, while AI promises innovative breakthroughs in many fields, sustaining progress requires addressing sustainability challenges. AI can continue advancing responsibly by optimizing models’ efficiency, exploring alternative specialized hardware and renewable energy sources for data centers, and tracking its overall environmental impact.


16.2 Social and Ethical Responsibility


The environmental impact of AI is not just a technical issue but also an ethical and social one. As AI becomes more integrated into our lives and industries, its sustainability becomes increasingly critical.


16.2.1 Ethical Considerations


The scale of AI’s environmental footprint raises profound ethical questions about the responsibilities of AI developers and companies to minimize their carbon emissions and energy usage. As the creators of AI systems and technologies that can have sweeping global impacts, developers have an ethical obligation to consciously integrate environmental stewardship into their design process, even if sustainability comes at the cost of some efficiency gains.


There is a clear and present need for us to have open and honest conversations about AI’s environmental tradeoffs earlier in the development lifecycle. Researchers should feel empowered to voice concerns if organizational priorities do not align with ethical goals, as in the case of the open letter to pause giant AI experiments.


Additionally, there is an increasing need for AI companies to scrutinize their contributions to climate change and environmental harm. Large tech firms are responsible for the cloud infrastructure, data center energy demands, and resource extraction required to power today’s AI. Leadership should assess whether organizational values and policies promote sustainability, from hardware manufacturing through model training pipelines.


Furthermore, voluntary self-regulation alone may not be enough; governments may need to introduce new regulations aimed at sustainable AI standards and practices if we hope to curb the projected energy explosion of ever-larger models. Reported metrics like computing usage, carbon footprint, and efficiency benchmarks could hold organizations accountable.


Through ethical principles, company policies, and public rules, AI technologists and corporations have a profound duty to our planet to ensure the responsible and sustainable advancement of technology positioned to transform modern society radically. We owe it to future generations to get this right.


16.2.2 Long-term Sustainability


The massive projected expansion of AI raises urgent concerns about its long-term sustainability. As AI software and applications rapidly increase in complexity and usage across industries, demand for computing power and infrastructure will skyrocket exponentially in the coming years.


To put the scale of projected growth in perspective, the total computing capacity required for training AI models saw an astonishing 350,000x increase from 2012 to 2019 (R. Schwartz et al. 2020). Researchers forecast over an order of magnitude growth each year moving forward as personalized AI assistants, autonomous technology, precision medicine tools, and more are developed. Similar trends are estimated for embedded ML systems, with an estimated 2.5 billion AI-enabled edge devices deployed by 2030.


Managing this expansion level requires software- and hardware-focused breakthroughs in efficiency and renewable integration from AI engineers and scientists. On the software side, novel techniques in model optimization, distillation, pruning, low-precision numerics, knowledge sharing between systems, and other areas must become widespread best practices to curb energy needs. For example, realizing even a 50% reduction in computational demand per capability doubling would have a massive compounding effect on total energy use.


On the hardware infrastructure side, due to increasing costs of data transfer, storage, cooling, and space, continuing today’s centralized server farm model at data centers is likely infeasible long-term (Lannelongue, Grealey, and Inouye 2021). Exploring alternative decentralized computing options around “edge AI” on local devices or within telco networks can alleviate scaling pressures on power-hungry hyperscale data centers. Likewise, the shift towards carbon-neutral, hybrid renewable energy sources powering leading cloud provider data centers worldwide will be essential.

Lannelongue, Loı̈c, Jason Grealey, and Michael Inouye. 2021. “Green Algorithms: Quantifying the Carbon Footprint of Computation.” Adv. Sci. 8 (12): 2100707. https://doi.org/10.1002/advs.202100707.

16.2.3 AI for Environmental Good


While much focus goes on AI’s sustainability challenges, these powerful technologies provide unique solutions to combat climate change and drive environmental progress. For example, ML can continuously optimize smart power grids to improve renewable integration and electricity distribution efficiency across networks (Zhang, Han, and Deng 2018). Models can ingest the real-time status of a power grid and weather forecasts to allocate and shift sources responding to supply and demand.

Zhang, Dongxia, Xiaoqing Han, and Chunyu Deng. 2018. “Review on the Research and Practice of Deep Learning and Reinforcement Learning in Smart Grids.” CSEE Journal of Power and Energy Systems 4 (3): 362–70. https://doi.org/10.17775/cseejpes.2018.00520.

Lam, Remi, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Ferran Alet, Suman Ravuri, et al. 2023. “Learning Skillful Medium-Range Global Weather Forecasting.” Science 382 (6677): 1416–21. https://doi.org/10.1126/science.adi2336.

Kurth, Thorsten, Shashank Subramanian, Peter Harrington, Jaideep Pathak, Morteza Mardani, David Hall, Andrea Miele, Karthik Kashinath, and Anima Anandkumar. 2023. “FourCastNet: Accelerating Global High-Resolution Weather Forecasting Using Adaptive Fourier Neural Operators.” In Proceedings of the Platform for Advanced Scientific Computing Conference, 1–11. ACM. https://doi.org/10.1145/3592979.3593412.

Fine-tuned neural networks have also proven remarkably effective at next-generation weather forecasting (Lam et al. 2023) and climate modeling (Kurth et al. 2023). They can rapidly analyze massive volumes of climate data to boost extreme event preparation and resource planning for hurricanes, floods, droughts, and more. Climate researchers have achieved state-of-the-art storm path accuracy by combining AI simulations with traditional numerical models.


AI also enables better tracking of biodiversity (Silvestro et al. 2022), wildlife (D. Schwartz et al. 2021), ecosystems, and illegal deforestation using drones and satellite feeds. Computer vision algorithms can automate species population estimates and habitat health assessments over huge untracked regions. These capabilities provide conservationists with powerful tools for combating poaching (Bondi et al. 2018), reducing species extinction risks, and understanding ecological shifts.

Silvestro, Daniele, Stefano Goria, Thomas Sterner, and Alexandre Antonelli. 2022. “Improving Biodiversity Protection Through Artificial Intelligence.” Nature Sustainability 5 (5): 415–24. https://doi.org/10.1038/s41893-022-00851-6.

Schwartz, Daniel, Jonathan Michael Gomes Selman, Peter Wrege, and Andreas Paepcke. 2021. “Deployment of Embedded Edge-AI for Wildlife Monitoring in Remote Regions.” In 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), 1035–42. IEEE. https://doi.org/10.1109/icmla52953.2021.00170.

Bondi, Elizabeth, Ashish Kapoor, Debadeepta Dey, James Piavis, Shital Shah, Robert Hannaford, Arvind Iyer, Lucas Joppa, and Milind Tambe. 2018. “Near Real-Time Detection of Poachers from Drones in AirSim.” In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, edited by Jérôme Lang, 5814–16. International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2018/847.

Targeted investment in AI applications for environmental sustainability, cross-sector data sharing, and model accessibility can profoundly accelerate solutions to pressing ecological issues. Emphasizing AI for social good steers innovation in cleaner directions, guiding these world-shaping technologies towards ethical and responsible development.


16.2.4 Case Study


Google’s data centers are foundational to powering products like Search, Gmail, and YouTube, which are used by billions daily. However, keeping the vast server farms up and running requires substantial energy, particularly for vital cooling systems. Google continuously strives to enhance efficiency across operations. Yet progress was proving difficult through traditional methods alone, considering the complex, custom dynamics involved. This challenge prompted an ML breakthrough, yielding potential savings.


After over a decade of optimizing data center design, inventing energy-efficient computing hardware, and securing renewable energy sources, Google brought DeepMind scientists to unlock further advances. The AI experts faced intricate factors surrounding the functioning of industrial cooling apparatuses. Equipment like pumps and chillers interact nonlinearly, while external weather and internal architectural variables also change. Capturing this complexity confounded rigid engineering formulas and human intuition.


The DeepMind team leveraged Google’s extensive historical sensor data detailing temperatures, power draw, and other attributes as training inputs. They built a flexible system based on neural networks to model the relationships and predict optimal configurations, minimizing power usage effectiveness (PUE) (Barroso, Hölzle, and Ranganathan 2019). PUE, the standard measure of how efficiently a data center uses energy, is the ratio of total facility power consumed to the power used directly for computing operations. When tested live, the AI system delivered remarkable gains beyond prior innovations, lowering cooling energy by 40% for a 15% drop in total PUE, a new site record. The generalizable framework learned cooling dynamics rapidly across shifting conditions that static rules could not match. The breakthrough highlights AI’s rising role in transforming modern tech and enabling a sustainable future.

Barroso, Luiz André, Urs Hölzle, and Parthasarathy Ranganathan. 2019. The Datacenter as a Computer: Designing Warehouse-Scale Machines. Springer International Publishing. https://doi.org/10.1007/978-3-031-01761-2.
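The PUE metric itself is simple to compute. The sketch below uses illustrative numbers, not Google’s actual facility figures:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT (computing) power.
    An ideal data center approaches 1.0; cooling and other overhead push it up."""
    return total_facility_kw / it_equipment_kw

# Illustrative example: a site drawing 10 MW in total while its servers
# consume 8 MW has a PUE of 1.25.
before = pue(total_facility_kw=10_000, it_equipment_kw=8_000)
after = before * (1 - 0.15)   # applying the reported 15% drop in total PUE
print(f"PUE before: {before:.3f}, after: {after:.3f}")
```

Because IT power is in the denominator, any reduction in cooling or distribution overhead shows up directly as a lower PUE.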

16.3 Energy Consumption


16.3.1 Understanding Energy Needs


Understanding the energy needs for training and operating AI models is crucial in the rapidly evolving field of AI. With AI entering widespread use in many new fields (Bohr and Memarzadeh 2020; Sudhakar, Sze, and Karaman 2023), the demand for AI-enabled devices and data centers is expected to explode. This context also explains why AI, particularly deep learning, is often labeled energy-intensive.

Bohr, Adam, and Kaveh Memarzadeh. 2020. “The Rise of Artificial Intelligence in Healthcare Applications.” In Artificial Intelligence in Healthcare, 25–60. Elsevier. https://doi.org/10.1016/b978-0-12-818438-7.00002-2.

Energy Requirements for AI Training


The training of complex AI systems like large deep learning models can demand startlingly high levels of computing power, with profound energy implications. Consider OpenAI’s state-of-the-art language model GPT-3 as a prime example. This system pushes the frontiers of text generation through algorithms trained on massive datasets. Yet the energy GPT-3 consumed for a single training cycle could rival an entire small town’s monthly usage. In recent years, these generative AI models have gained increasing popularity, leading to more models being trained, and the number of parameters in these models continues to grow. Research shows that increasing the model size (number of parameters), dataset size, and compute used for training improves performance smoothly with no signs of saturation (Kaplan et al. 2020). Figure 16.1 shows how the test loss decreases as each of these three factors increases.

Figure 16.1: Performance improves with compute, dataset size, and model size. Credit: Kaplan et al. (2020).

Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. “Scaling Laws for Neural Language Models.” ArXiv Preprint abs/2001.08361. https://arxiv.org/abs/2001.08361.

What drives such immense requirements? During training, models like GPT-3 learn their capabilities by continuously processing huge volumes of data to adjust internal parameters. The processing capacity enabling AI’s rapid advances also contributes to surging energy usage, especially as datasets and models balloon. GPT-3 highlights a steady trajectory in the field where each leap in AI’s sophistication traces back to ever more substantial computational power and resources. Its predecessor, GPT-2, required roughly 10x less training compute for its 1.5 billion parameters, a difference now dwarfed by magnitudes as GPT-3 comprises 175 billion parameters. Sustaining this trajectory toward increasingly capable AI raises energy and infrastructure provision challenges ahead.


Operational Energy Use


Developing and training AI models requires immense data, computing power, and energy. However, the deployment and operation of those models also incur significant recurrent resource costs over time. AI systems are now integrated across various industries and applications and are entering the daily lives of an increasing demographic. Their cumulative operational energy and infrastructure impacts could eclipse the upfront model training.

This concept is reflected in the demand for training and inference hardware in data centers and on the edge. Inference refers to using a trained model to make predictions or decisions on real-world data. According to a recent McKinsey analysis, the need for advanced systems to train ever-larger models is rapidly growing. However, inference computations already make up a dominant and increasing portion of total AI workloads, as shown in Figure fig-mckinsey. Running real-time inference with trained models, whether for image classification, speech recognition, or predictive analytics, invariably demands computing hardware like servers and chips. Yet even a model handling thousands of facial recognition requests or natural language queries daily is dwarfed by massive platforms like Meta, where inference runs on millions of photos and videos shared on social media and the infrastructure energy requirements continue to scale.

Figure 16.2: Market size for inference and training hardware. Credit: McKinsey.

Algorithms powering AI-enabled smart assistants, automated warehouses, self-driving vehicles, tailored healthcare, and more have marginal individual energy footprints. However, the projected proliferation of these technologies could add hundreds of millions of endpoints running AI algorithms continually, causing the scale of their collective energy requirements to surge. Current efficiency gains are unlikely to counterbalance this sheer growth on their own.

AI is expected to see an annual growth rate of 37.3% between 2023 and 2030. Applying a comparable growth rate to operational computing could multiply annual AI energy needs up to 1,000 times by 2030. So, while model optimization tackles one facet, responsible innovation must also consider total lifecycle costs at global deployment scales that were unfathomable just years ago and that now pose serious infrastructure and sustainability challenges.


16.3.2 Data Centers and Their Impact


As the demand for AI services grows, the impact of data centers on the energy consumption of AI systems is becoming increasingly important. While these facilities are crucial for the advancement and deployment of AI, they contribute significantly to its energy footprint.


Scale


Data centers are the essential workhorses enabling the recent computational demands of advanced AI systems. For example, leading providers like Meta operate massive data centers spanning up to the size of multiple football fields, housing hundreds of thousands of high-capacity servers optimized for parallel processing and data throughput.


These massive facilities provide the infrastructure for training complex neural networks on vast datasets. For instance, based on leaked information, OpenAI’s language model GPT-4 was trained on Azure data centers packing over 25,000 Nvidia A100 GPUs, running continuously for 90 to 100 days.


Additionally, real-time inference for consumer AI applications at scale is only made possible by leveraging the server farms inside data centers. Services like Alexa, Siri, and Google Assistant process billions of voice requests per month from users globally by relying on data center computing for low-latency responses. In the future, expanding cutting-edge use cases like self-driving vehicles, precision medicine diagnostics, and accurate climate forecasting will require significant computational resources, obtained by tapping into vast on-demand cloud computing from data centers. Some emerging applications, like autonomous cars, have harsh latency and bandwidth constraints, making it necessary to locate data center-level computing power at the edge rather than in the cloud.


MIT research prototypes have shown trucks and cars with onboard hardware performing real-time AI processing of sensor data equivalent to small data centers (Sudhakar, Sze, and Karaman 2023). These innovative “data centers on wheels” demonstrate how vehicles like self-driving trucks may need embedded data center-scale compute on board to achieve millisecond system latency for navigation, though still likely supplemented by wireless 5G connectivity to more powerful cloud data centers.

Sudhakar, Soumya, Vivienne Sze, and Sertac Karaman. 2023. “Data Centers on Wheels: Emissions from Computing Onboard Autonomous Vehicles.” IEEE Micro 43 (1): 29–39. https://doi.org/10.1109/mm.2022.3219803.

The bandwidth, storage, and processing capacities required to enable this future technology at scale will depend heavily on advancements in data center infrastructure and AI algorithmic innovations.


Energy Demand


The energy demand of data centers can roughly be divided into four components: infrastructure, network, storage, and servers. In Figure fig-energydemand, we see that the data infrastructure (which includes cooling, lighting, and controls) and the servers use most of the total energy budget of data centers in the US (Shehabi et al. 2016). This section breaks down the energy demand for the servers and the infrastructure. For the latter, the focus is on cooling systems, as cooling is the dominant factor in the infrastructure’s energy consumption.

Shehabi, Arman, Sarah Smith, Dale Sartor, Richard Brown, Magnus Herrlin, Jonathan Koomey, Eric Masanet, Nathaniel Horner, Inês Azevedo, and William Lintner. 2016. “United States Data Center Energy Usage Report.”

Figure 16.3: Data center energy consumption in the US. Credit: International Energy Agency (IEA).
Servers

The increase in energy consumption of data centers stems mainly from exponentially growing AI computing requirements. NVIDIA DGX H100 machines that are optimized for deep learning can draw up to 10.2 kW at peak. Leading providers operate data centers with hundreds to thousands of these power-hungry DGX nodes networked to train the latest AI models. For example, the supercomputer developed for OpenAI is a single system with over 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server.


The intensive computations needed across an entire facility’s densely packed fleet and supporting hardware result in data centers drawing tens of megawatts around the clock. Overall, advancing AI algorithms continue to expand data center energy consumption as more DGX nodes get deployed to keep pace with projected growth in demand for AI compute resources over the coming years.
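For a rough sense of scale, the peak draw cited above can be turned into a facility-level estimate. Only the 10.2 kW DGX H100 figure comes from the text; the node count, average utilization, and PUE overhead factor below are hypothetical values chosen to illustrate the arithmetic.

```python
# Back-of-the-envelope cluster power estimate. Only the 10.2 kW DGX H100
# peak figure comes from the text; utilization and PUE are assumptions.

def cluster_power_mw(n_nodes, node_peak_kw=10.2, utilization=0.7, pue=1.5):
    """Total facility draw in MW, including cooling/overhead via a PUE factor."""
    it_power_kw = n_nodes * node_peak_kw * utilization
    return it_power_kw * pue / 1000.0

# A thousand DGX nodes already put a facility into the tens of megawatts:
print(f"{cluster_power_mw(1000):.1f} MW")
```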

Cooling Systems

Keeping servers running at peak capacity requires tremendous cooling capacity to counteract the heat produced by densely packed servers, networking equipment, and other hardware running computationally intensive workloads without pause. With large data centers packing thousands of server racks operating at full tilt, massive industrial-scale cooling towers and chillers are required, using energy amounting to 30-40% of the total data center electricity footprint (Dayarathna, Wen, and Fan 2016). Consequently, companies are looking for alternative cooling methods. For example, Microsoft’s data center in Ireland leverages a nearby fjord to exchange heat using over half a million gallons of seawater daily.


Recognizing the importance of energy-efficient cooling, there have been innovations aimed at reducing this energy demand. Techniques like free cooling, which uses outside air or water sources when conditions are favorable, and the use of AI to optimize cooling systems are examples of how the industry adapts. These innovations reduce energy consumption, lower operational costs, and lessen the environmental footprint. However, exponential increases in AI model complexity continue to demand more servers and acceleration hardware operating at higher utilization, translating to rising heat generation and ever greater energy used solely for cooling purposes.
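The overhead that cooling adds is commonly summarized by Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy that reaches IT equipment. The sketch below uses the 30-40% cooling share cited above; the absolute energy figures are hypothetical.

```python
# Power Usage Effectiveness (PUE) relates total facility energy to the
# energy delivered to IT equipment:
#     PUE = total_facility_energy / it_equipment_energy
# The split below reflects the ~30-40% cooling share cited in the text;
# the absolute MWh figures are hypothetical.

def pue(it_energy_mwh, cooling_energy_mwh, other_overhead_mwh):
    total = it_energy_mwh + cooling_energy_mwh + other_overhead_mwh
    return total / it_energy_mwh

# A facility where cooling is ~35% of total electricity:
value = pue(it_energy_mwh=600.0, cooling_energy_mwh=350.0, other_overhead_mwh=50.0)
print(f"PUE = {value:.2f}")
```

A PUE of 1.0 would mean every joule goes to computation; efficient cooling (free cooling, AI-optimized chillers) pushes real facilities closer to that ideal.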


The Environmental Impact


The environmental impact of data centers is not only caused by the direct energy consumption of the data center itself (Siddik, Shehabi, and Marston 2021). Data center operation involves the supply of treated water to the data center and the discharge of wastewater from the data center. Water and wastewater facilities are major electricity consumers.

Siddik, Md Abu Bakar, Arman Shehabi, and Landon Marston. 2021. “The Environmental Footprint of Data Centers in the United States.” Environ. Res. Lett. 16 (6): 064017. https://doi.org/10.1088/1748-9326/abfba1.

Davis, Jacqueline, Daniel Bizo, Andy Lawrence, Owen Rogers, and Max Smolaks. 2022. “Uptime Institute Global Data Center Survey 2022.” Uptime Institute.

Beyond electricity usage, there are many more aspects to the environmental impact of these data centers. Their water usage can lead to water scarcity issues, increased water treatment needs, and demands for proper wastewater discharge infrastructure. The raw materials required for construction and network transmission also considerably impact the environment, and components in data centers need to be upgraded and maintained. While almost 50 percent of servers used to be refreshed within 3 years of usage, refresh cycles have been shown to slow down (Davis et al. 2022). Still, this churn generates significant e-waste, which can be hard to recycle.


16.3.3 Energy Optimization


Ultimately, measuring and understanding the energy consumption of AI is what makes optimizing it possible.

One way to reduce the energy consumption of a given amount of computational work is to run it on more energy-efficient hardware. For instance, TPU chips can be more energy-efficient than CPUs when running large tensor computations for AI, as TPUs complete such computations much faster without drawing significantly more power. Another way is to build software systems that are aware of energy consumption and application characteristics. Good examples are systems works such as Zeus (You, Chung, and Chowdhury 2023) and Perseus (Chung et al. 2023), both of which characterize the tradeoff between computation time and energy consumption at various levels of an ML training system to achieve energy reduction without end-to-end slowdown. In practice, building both energy-efficient hardware and software and combining their benefits is promising, along with open-source frameworks (e.g., Zeus) that facilitate community efforts.
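The time-energy tradeoff that systems like Zeus characterize can be illustrated with a toy model (this is not the Zeus API): since energy is average power times time, lowering a GPU's power limit reduces draw but stretches runtime, and the minimum-energy configuration is not necessarily the fastest. All (power, time) pairs below are hypothetical.

```python
# Toy model of the time-energy tradeoff characterized by systems like Zeus.
# Energy = average power x time, so the minimum-energy power limit is found
# by searching over configurations, here subject to a slowdown budget.
# The (avg draw in W, epoch time in s) pairs per power limit are hypothetical.

configs = {
    300: (280.0, 100.0),
    250: (235.0, 108.0),
    200: (190.0, 125.0),
}

def energy_joules(avg_power_w, time_s):
    return avg_power_w * time_s

# Pick the power limit minimizing energy under a 15% slowdown budget
# relative to the fastest configuration:
fastest = min(t for _, t in configs.values())
feasible = {lim: (p, t) for lim, (p, t) in configs.items() if t <= 1.15 * fastest}
best = min(feasible, key=lambda lim: energy_joules(*feasible[lim]))
print(best, energy_joules(*feasible[best]))
```

Real systems measure these (power, time) profiles online per job; the search idea is the same.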

You, Jie, Jae-Won Chung, and Mosharaf Chowdhury. 2023. “Zeus: Understanding and Optimizing GPU Energy Consumption of DNN Training.” In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23), 119–39. Boston, MA: USENIX Association. https://www.usenix.org/conference/nsdi23/presentation/you.

Chung, Jae-Won, Yile Gu, Insu Jang, Luoxi Meng, Nikhil Bansal, and Mosharaf Chowdhury. 2023. “Perseus: Removing Energy Bloat from Large Model Training.” ArXiv Preprint abs/2312.06902. https://arxiv.org/abs/2312.06902.

16.4 Carbon Footprint

The massive electricity demands of data centers can lead to significant environmental externalities absent an adequate renewable power supply. Many facilities rely heavily on nonrenewable energy sources like coal and natural gas. For example, data centers are estimated to produce up to 2% of total global \(\textrm{CO}_2\) emissions, closing the gap with the airline industry. As mentioned in previous sections, the computational demands of AI are set to increase, and the resulting surge in emissions has three drivers. First, data centers are projected to increase in size (Liu et al. 2020). Secondly, emissions during training are set to increase significantly (Patterson et al. 2022). Thirdly, inference calls to these models are set to increase dramatically.

Liu, Yanan, Xiaoxia Wei, Jinyu Xiao, Zhijie Liu, Yang Xu, and Yun Tian. 2020. “Energy Consumption and Emission Mitigation Prediction Based on Data Center Traffic and PUE for Global Data Centers.” Global Energy Interconnection 3 (3): 272–82. https://doi.org/10.1016/j.gloei.2020.07.008.

Without action, this exponential demand growth risks ratcheting up the carbon footprint of data centers further to unsustainable levels. Major providers have pledged carbon neutrality and committed funds to secure clean energy, but progress remains incremental compared to overall industry expansion plans. More radical grid decarbonization policies and renewable energy investments may prove essential to counteracting the climate impact of the coming tide of new data centers aimed at supporting the next generation of AI.


16.4.1 Definition and Significance


The concept of a ‘carbon footprint’ has emerged as a key metric. This term refers to the total amount of greenhouse gasses, particularly carbon dioxide, emitted directly or indirectly by an individual, organization, event, or product. These emissions significantly contribute to the greenhouse effect, accelerating global warming and climate change. The carbon footprint is measured in terms of carbon dioxide equivalents (\(\textrm{CO}_2\)e), allowing for a comprehensive account that includes various greenhouse gasses and their relative environmental impact. Examples of this as applied to large-scale ML tasks are shown in Figure fig-carbonfootprint.

Figure 16.4: Carbon footprint of large-scale ML tasks. Credit: Wu et al. (2022).

Considering the carbon footprint is especially important given AI’s rapid advancement and integration into various sectors, which brings its environmental impact into sharp focus. AI systems, particularly those involving intensive computations like deep learning and large-scale data processing, are known for their substantial energy demands. This energy, often drawn from power grids, may still predominantly rely on fossil fuels, leading to significant greenhouse gas emissions.

Take, for example, training large AI models such as GPT-3 or complex neural networks. These processes require immense computational power, typically provided by data centers. The energy consumption associated with operating these centers, particularly for high-intensity tasks, results in notable greenhouse gas emissions. Studies have highlighted that training a single AI model can generate carbon emissions comparable to the lifetime emissions of multiple cars, shedding light on the environmental cost of developing advanced AI technologies (Dayarathna, Wen, and Fan 2016). Figure fig-carboncars shows a comparison from lowest to highest carbon footprint, starting with a roundtrip flight between NY and SF, followed by the average human life per year, the average American life per year, a US car including fuel over its lifetime, and finally a Transformer model with neural architecture search, which has the highest footprint.
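A common first-order way to estimate such training emissions multiplies accelerator hours by average power, a PUE overhead factor, and the grid's carbon intensity. Every input below is a hypothetical placeholder, not a measured GPT-3 figure.

```python
# First-order training-emissions estimate:
#     energy (kWh) = accelerator_hours x avg_power (kW) x PUE
#     emissions    = energy x grid carbon intensity
# All default values are hypothetical placeholders for illustration.

def training_emissions_tco2e(accel_hours, avg_power_w=300.0,
                             pue=1.5, grid_gco2_per_kwh=400.0):
    energy_kwh = accel_hours * (avg_power_w / 1000.0) * pue
    return energy_kwh * grid_gco2_per_kwh / 1e6  # grams -> metric tons

# e.g. a hypothetical run of 10,000 GPUs for 14 days:
hours = 10_000 * 14 * 24
print(f"{training_emissions_tco2e(hours):.0f} tCO2e")
```

The same run on a low-carbon grid (lower `grid_gco2_per_kwh`) would emit proportionally less, which is why siting and scheduling matter.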

Figure 16.5: Carbon footprint of NLP model in lbs of \(\textrm{CO}_2\) equivalent. Credit: Dayarathna, Wen, and Fan (2016).

Dayarathna, Miyuru, Yonggang Wen, and Rui Fan. 2016. “Data Center Energy Consumption Modeling: A Survey.” IEEE Communications Surveys & Tutorials 18 (1): 732–94. https://doi.org/10.1109/comst.2015.2481183.

Moreover, AI’s carbon footprint extends beyond the operational phase. The entire lifecycle of AI systems, including the manufacturing of computing hardware, the energy used in data centers for cooling and maintenance, and the disposal of electronic waste, contributes to their overall carbon footprint. We have discussed some of these aspects earlier, and we will discuss the waste aspects later in this chapter.


16.4.2 The Need for Awareness and Action


Understanding the carbon footprint of AI systems is crucial for several reasons. Primarily, it is a step towards mitigating the impacts of climate change. As AI continues to grow and permeate different aspects of our lives, its contribution to global carbon emissions becomes a significant concern. Awareness of these emissions can inform decisions made by developers, businesses, policymakers, and even ML engineers and scientists like us to ensure a balance between technological innovation and environmental responsibility.

Furthermore, this understanding stimulates the drive towards ‘Green AI’ (R. Schwartz et al. 2020). This approach focuses on developing AI technologies that are efficient, powerful, and environmentally sustainable. It encourages exploring energy-efficient algorithms, using renewable energy sources in data centers, and adopting practices that reduce AI’s overall environmental impact.


In essence, the carbon footprint is an essential consideration in developing and applying AI technologies. As AI evolves and its applications become more widespread, managing its carbon footprint is key to ensuring that this technological progress aligns with the broader environmental sustainability goals.


16.4.3 Estimating the AI Carbon Footprint


Estimating AI systems’ carbon footprint is critical in understanding their environmental impact. This involves analyzing the various elements contributing to emissions throughout AI technologies’ lifecycle and employing specific methodologies to quantify these emissions accurately. Many different methods for quantifying ML’s carbon emissions have been proposed.


The carbon footprint of AI encompasses several key elements, each contributing to the overall environmental impact. First, energy is consumed during the AI model training and operational phases. The source of this energy heavily influences the carbon emissions. Once trained, these models, depending on their application and scale, continue to consume electricity during operation. Next to energy considerations, the hardware used stresses the environment as well.


The carbon footprint varies significantly based on the energy sources used. The composition of the sources feeding the grid varies widely depending on the geographical region and even the time of day. For example, in the USA, roughly 60 percent of the total energy supply is still covered by fossil fuels, with nuclear and renewable sources covering the remaining 40 percent. These fractions are not constant throughout the day: because renewable energy production usually depends on environmental factors, such as solar radiation and pressure fields, it does not provide a constant energy source.
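Grid carbon intensity is simply the emissions-weighted average over this mix. The shares below follow the rough 60/40 US split mentioned above; the per-source intensities (gCO2e/kWh) are typical lifecycle figures and should be treated as approximate.

```python
# Grid carbon intensity as the emissions-weighted average over the mix.
# Shares follow the rough US split from the text (~60% fossil, ~40%
# nuclear/renewable); per-source intensities are approximate lifecycle values.

def grid_intensity(mix):
    """mix: {source: (share, gCO2e_per_kwh)}; shares must sum to 1."""
    assert abs(sum(share for share, _ in mix.values()) - 1.0) < 1e-9
    return sum(share * intensity for share, intensity in mix.values())

us_mix = {
    "fossil":    (0.60, 700.0),  # blended coal/gas, approximate
    "nuclear":   (0.20, 12.0),
    "renewable": (0.20, 30.0),   # blended wind/solar/hydro, approximate
}
print(f"{grid_intensity(us_mix):.0f} gCO2e/kWh")
```

Because shares shift hour by hour, the same workload run at midday (more solar) can carry a lower footprint than the identical workload run at night.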


The variability of renewable energy production has been an ongoing challenge in the widespread adoption of these sources. Figure fig-energyprod, which shows data for the European grid, illustrates how renewable output fluctuates over the day: while solar energy peaks in the middle of the day, wind energy shows two distinct peaks in the mornings and evenings. Currently, we rely on fossil- and coal-based generation to cover the shortfall during times when renewable energy does not meet requirements.


Innovation in energy storage solutions is required to enable constant use of renewable energy sources. The base energy load is currently met with nuclear energy, a constant source that does not directly emit carbon but cannot ramp quickly enough to accommodate the variability of renewable sources. Tech companies such as Microsoft have shown interest in nuclear energy to power their data centers. As data center demand is more constant than that of regular households, nuclear energy could serve them as a dominant energy source.

Figure 16.6: Energy sources and generation capabilities. Credit: Energy Charts.

Additionally, the manufacturing and disposal of AI hardware add to the carbon footprint. Producing specialized computing devices, such as GPUs and CPUs, is energy- and resource-intensive. This phase often relies on energy sources that contribute to greenhouse gas emissions. The electronics industry’s manufacturing process has been identified as one of the eight big supply chains responsible for more than 50 percent of global emissions (Challenge 2021). Furthermore, the end-of-life disposal of this hardware, which can lead to electronic waste, also has environmental implications. As mentioned, servers have a refresh cycle of roughly 3 to 5 years. Of this e-waste, currently only 17.4 percent is properly collected and recycled. The carbon emissions of this e-waste increased by more than 50 percent between 2014 and 2020 (Singh and Ogunseitan 2022).

Challenge, WEF Net-Zero. 2021. “The Supply Chain Opportunity.” In World Economic Forum: Geneva, Switzerland.

Singh, Narendra, and Oladele A. Ogunseitan. 2022. “Disentangling the Worldwide Web of e-Waste and Climate Change Co-Benefits.” Circular Economy 1 (2): 100011. https://doi.org/10.1016/j.cec.2022.100011.

As is clear from the above, a proper Life Cycle Analysis is necessary to portray all relevant aspects of the emissions caused by AI. Another method is carbon accounting, which quantifies the amount of carbon dioxide emissions directly and indirectly associated with AI operations. This measurement typically uses \(\textrm{CO}_2\) equivalents, allowing for a standardized way of reporting and assessing emissions.
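The CO2-equivalent bookkeeping used in carbon accounting weights each gas by its 100-year global warming potential (GWP100). The GWP values below approximate the IPCC AR5 figures; exact values differ between assessment reports.

```python
# CO2-equivalents put different greenhouse gases on one scale by weighting
# each with its 100-year global warming potential (GWP100). Values below
# approximate the IPCC AR5 figures and vary between assessment reports.

GWP100 = {"co2": 1.0, "ch4": 28.0, "n2o": 265.0}

def co2e_kg(emissions_kg):
    """emissions_kg: {gas: mass in kg} -> total kg CO2-equivalent."""
    return sum(mass * GWP100[gas] for gas, mass in emissions_kg.items())

# 10 kg of methane outweighs 1 tonne of CO2 in this accounting? Check:
print(co2e_kg({"co2": 1000.0, "ch4": 10.0, "n2o": 1.0}))
```

This standardization is what lets a data center's methane-leaking backup generators and its electricity use be reported in the same ledger.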


Exercise 16.1 (AI’s Carbon Footprint)  


Did you know that the cutting-edge AI models you might use have an environmental impact? This exercise will delve into an AI system’s “carbon footprint.” You’ll learn how data centers’ energy demands, large AI models’ training, and even hardware manufacturing contribute to greenhouse gas emissions. We’ll discuss why it’s crucial to be aware of this impact, and you’ll learn methods to estimate the carbon footprint of your own AI projects. Get ready to explore the intersection of AI and environmental sustainability!


16.5 Beyond Carbon Footprint


The current focus on reducing AI systems’ carbon emissions and energy consumption addresses one crucial aspect of sustainability. However, manufacturing the semiconductors and hardware that enable AI also carries severe environmental impacts that receive comparatively less public attention. Building and operating a leading-edge semiconductor fabrication plant, or “fab,” has substantial resource requirements and polluting byproducts beyond a large carbon footprint.

For example, a state-of-the-art fab producing advanced chips at nodes like 5 nm can require up to four million gallons of pure water each day. This water usage approaches what a city of half a million people would require for all its needs. Sourcing this consistently places immense strain on local water tables and reservoirs, especially in already water-stressed regions that host many high-tech manufacturing hubs.

Additionally, over 250 unique hazardous chemicals are utilized at various stages of semiconductor production within fabs (Mills and Le Hunte 1997). These include corrosive agents like sulfuric acid, nitric acid, and hydrogen fluoride, along with arsine, phosphine, and other highly toxic substances. Preventing the discharge of these chemicals requires extensive safety controls and wastewater treatment infrastructure to avoid soil contamination and risks to surrounding communities. Any improper chemical handling or unanticipated spill carries dire consequences.

Mills, Andrew, and Stephen Le Hunte. 1997. “An Overview of Semiconductor Photocatalysis.” J. Photochem. Photobiol., A 108 (1): 1–35. https://doi.org/10.1016/s1010-6030(97)00118-4.

Beyond water consumption and chemical risks, fab operations also depend on rare metals sourcing, generate tons of dangerous waste products, and can hamper local biodiversity. This section will analyze these critical but less discussed impacts. With vigilance and investment in safety, the harms from semiconductor manufacturing can be contained while still enabling technological progress. However, ignoring these externalized issues will exacerbate ecological damage and health risks over the long run.


16.5.1 Water Usage and Stress


Semiconductor fabrication is an incredibly water-intensive process. Based on an article from 2009, a typical 300mm silicon wafer requires 8,328 liters of water, of which 5,678 liters is ultrapure water (Cope 2009). Today, a typical fab can use up to four million gallons of pure water per day. TSMC’s latest fab in Arizona is projected to use 8.9 million gallons daily, or nearly 3 percent of the city’s current water production. To put things in perspective, Intel and Quantis found that over 97% of their direct water consumption is attributed to semiconductor manufacturing operations within their fabrication facilities (Cooper et al. 2011).

Cope, Gord. 2009. “Pure Water, Semiconductors and the Recession.” Global Water Intelligence 10 (10).

Cooper, Tom, Suzanne Fallender, Joyann Pafumi, Jon Dettling, Sebastien Humbert, and Lindsay Lessard. 2011. “A Semiconductor Company’s Examination of Its Water Footprint Approach.” In Proceedings of the 2011 IEEE International Symposium on Sustainable Systems and Technology, 1–6. IEEE. https://doi.org/10.1109/issst.2011.5936865.

This water is repeatedly used to flush away contaminants in cleaning steps and also acts as a coolant and carrier fluid in thermal oxidation, chemical deposition, and chemical mechanical planarization processes. During peak summer months, this approximates the daily water consumption of a city with a population of half a million people.
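A quick sanity check relates the per-wafer figure above to a fab's daily draw. The 8,328 liters per wafer and four-million-gallon daily figures come from the text; the implied wafer throughput is purely illustrative, since real fabs recycle much of their water.

```python
# Relating the per-wafer water figure from the text (8,328 L per 300 mm
# wafer) to a fab's daily draw of four million gallons. The resulting
# throughput is illustrative only; real fabs recycle a large share of water.

LITERS_PER_GALLON = 3.785

def wafers_per_day(daily_gallons, liters_per_wafer=8328.0):
    return daily_gallons * LITERS_PER_GALLON / liters_per_wafer

print(f"~{wafers_per_day(4_000_000):,.0f} wafer starts per day")
```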


Despite being located in regions with sufficient water, the intensive usage can severely depress local water tables and drainage basins. For example, the city of Hsinchu in Taiwan suffered sinking water tables and seawater intrusion into aquifers due to excessive pumping to satisfy water supply demands from the Taiwan Semiconductor Manufacturing Company (TSMC) fab. In water-scarce inland areas like Arizona, massive water inputs are needed to support fabs despite already strained reservoirs.


Water discharge from fabs risks environmental contamination besides depletion if not properly treated. While much discharge is recycled within the fab, the purification systems still filter out metals, acids, and other contaminants that can pollute rivers and lakes if not cautiously handled (Prakash, Callahan, et al. 2023). These factors make managing water usage essential when mitigating wider sustainability impacts.


16.5.2 Hazardous Chemicals Usage


Modern semiconductor fabrication involves working with many highly hazardous chemicals under extreme conditions of heat and pressure (Kim et al. 2018). Key chemicals utilized include:

Kim, Sunju, Chungsik Yoon, Seunghon Ham, Jihoon Park, Ohun Kwon, Donguk Park, Sangjun Choi, Seungwon Kim, Kwonchul Ha, and Won Kim. 2018. “Chemical Use in the Semiconductor Manufacturing Industry.” Int. J. Occup. Env. Heal. 24 (3-4): 109–18. https://doi.org/10.1080/10773525.2018.1519957.

  • Strong acids: Hydrofluoric, sulfuric, nitric, and hydrochloric acids rapidly eat through oxides and other surface contaminants but also pose toxicity dangers. Fabs can use thousands of metric tons of these acids annually, and accidental exposure can be fatal for workers.
  • Solvents: Key solvents like xylene, methanol, and methyl isobutyl ketone (MIBK) handle dissolving photoresists but have adverse health impacts like skin/eye irritation and narcotic effects if mishandled. They also create explosion and air pollution risks.
  • Toxic gases: Gas mixtures containing arsine (AsH3), phosphine (PH3), diborane (B2H6), germane (GeH4), etc., are some of the deadliest chemicals used in doping and vapor deposition steps. Minimal exposures can lead to poisoning, tissue damage, and even death without quick treatment.
  • Chlorinated compounds: Older chemical mechanical planarization formulations incorporated perchloroethylene, trichloroethylene, and other chlorinated solvents, which have since been banned due to their carcinogenic effects and impacts on the ozone layer. However, their prior release still threatens surrounding groundwater sources.

Strict handling protocols, protective equipment for workers, ventilation, filtration/scrubbing systems, secondary containment tanks, and specialized disposal mechanisms are vital wherever these chemicals are used to minimize health, explosion, air, and environmental spill dangers (Wald and Jones 1987). But human errors and equipment failures still occasionally occur, highlighting why reducing fab chemical intensities is an ongoing sustainability effort.

Wald, Peter H., and Jeffrey R. Jones. 1987. “Semiconductor Manufacturing: An Introduction to Processes and Hazards.” Am. J. Ind. Med. 11 (2): 203–21. https://doi.org/10.1002/ajim.4700110209.

16.5.3 Resource Depletion

While silicon forms the base material of chips, its supply is effectively unlimited: silicon is the second most plentiful element in the Earth’s crust, accounting for 27.7% of the crust’s total mass, exceeded only by oxygen. Silicon is therefore not a resource depletion concern. However, the various specialty metals and materials that enable the integrated circuit fabrication process and provide specific properties are far scarcer. Maintaining supplies of these resources is crucial yet threatened by finite availability and geopolitical influences (Nakano 2021).

Nakano, Jane. 2021. The Geopolitics of Critical Minerals Supply Chains. JSTOR.

Chen, H.-W. 2006. “Gallium, Indium, and Arsenic Pollution of Groundwater from a Semiconductor Manufacturing Area of Taiwan.” B. Environ. Contam. Tox. 77 (2): 289–96. https://doi.org/10.1007/s00128-006-1062-3.

Gallium, indium, and arsenic are vital ingredients in forming ultra-efficient compound semiconductors in the highest-speed chips suited for 5G and AI applications (Chen 2006). However, these rare elements have relatively scarce natural deposits that are being depleted. The United States Geological Survey has indium on its list of most critical at-risk commodities, estimated to have less than a 15-year viable global supply at current demand growth (Davies 2011).


Helium is required in huge volumes for next-gen fabs to enable precise wafer cooling during operation. But helium’s relative rarity and the fact that once it vents into the atmosphere, it quickly escapes Earth make maintaining helium supplies extremely challenging long-term (Davies 2011). According to the US National Academies, substantial price increases and supply shocks are already occurring in this thinly traded market.

Jha, A. R. 2014. Rare Earth Materials: Properties and Applications. CRC Press. https://doi.org/10.1201/b17045.

Other risks include China’s control over 90% of the rare earth elements critical to semiconductor material production (Jha 2014). Any supply chain issues or trade disputes can lead to catastrophic raw material shortages, given the lack of current alternatives. In conjunction with helium shortages, resolving the limited availability and geographic imbalance in accessing essential ingredients remains a sector priority for sustainability.


16.5.4 Hazardous Waste Generation


Semiconductor fabs generate tons of hazardous waste annually as byproducts from the various chemical processes (Grossman 2007). The key waste streams include:

Grossman, Elizabeth. 2007. High Tech Trash: Digital Devices, Hidden Toxics, and Human Health. Island Press.

  • Gaseous waste: Fab ventilation systems capture harmful gases like arsine, phosphine, and germane and filter them out to avoid worker exposure. However, this produces significant quantities of dangerous condensed gas that need specialized treatment.
  • VOCs: Volatile organic compounds like xylene, acetone, and methanol are used extensively as photoresist solvents and are evaporated as emissions during baking, etching, and stripping. VOCs pose toxicity issues and require scrubbing systems to prevent release.
  • Spent acids: Strong acids such as sulfuric acid, hydrofluoric acid, and nitric acid get depleted in cleaning and etching steps, transforming into a corrosive, toxic soup that can dangerously react, releasing heat and fumes if mixed.
  • Sludge: Water treatment of discharged effluent contains concentrated heavy metals, acid residues, and chemical contaminants. Filter press systems separate this hazardous sludge.
  • Filter cake: Gaseous filtration systems generate multi-ton sticky cakes of dangerous absorbed compounds requiring containment.

Without proper handling procedures, storage tanks, packaging materials, and secondary containment, improper disposal of any of these waste streams can lead to dangerous spills, explosions, and environmental releases. The massive volumes mean even well-run fabs produce tons of hazardous waste year after year, requiring extensive treatment.


16.5.5 Biodiversity Impacts


Habitat Disruption and Fragmentation


Semiconductor fabs require large, contiguous land areas to accommodate cleanrooms, support facilities, chemical storage, waste treatment, and ancillary infrastructure. Developing these vast built-up spaces inevitably dismantles existing habitats, damaging sensitive biomes that may have taken decades to develop. For example, constructing a new fabrication module may level local forest ecosystems that species, like spotted owls and elk, rely upon for survival. The outright removal of such habitats severely threatens wildlife populations dependent on those lands.


Furthermore, pipelines, water channels, air and waste exhaust systems, access roads, transmission towers, and other support infrastructure fragment the remaining undisturbed habitats. Animals moving daily for food, water, and spawning can find their migration patterns blocked by these physical human barriers that bisect previously natural corridors.


Aquatic Life Disturbances


With semiconductor fabs consuming millions of gallons of ultra-pure water daily, accessing and discharging such volumes risks altering the suitability of nearby aquatic environments housing fish, water plants, amphibians, and other species. If the fab is tapping groundwater tables as its primary supply source, overdrawing at unsustainable rates can deplete lakes or lead to stream drying as water levels drop (Davies 2011).

Davies, Emma. 2011. “Endangered Elements: Critical Thinking.” https://www.rsc.org/images/Endangered%20Elements%20-%20Critical%20Thinking_tcm18-196054.pdf.
LeRoy Poff, N., M. M. Brinson, and J. W. Day. 2002. “Aquatic Ecosystems & Global Climate Change.” Pew Center on Global Climate Change.
Till, Aaron, Andrew L. Rypel, Andrew Bray, and Samuel B. Fey. 2019. “Fish Die-Offs Are Concurrent with Thermal Extremes in North Temperate Lakes.” Nat. Clim. Change 9 (8): 637–41. https://doi.org/10.1038/s41558-019-0520-y.

Also, discharging wastewater at higher temperatures to cool fabrication equipment can shift downstream river conditions through thermal pollution. Temperature changes beyond thresholds that native species evolved for can disrupt reproductive cycles. Warmer water also holds less dissolved oxygen, critical to supporting aquatic plant and animal life (LeRoy Poff, Brinson, and Day 2002). Combined with traces of residual contaminants that escape filtration systems, the discharged water can cumulatively transform environments to be far less habitable for sensitive organisms (Till et al. 2019).


Air and Chemical Emissions


While modern semiconductor fabs aim to contain air and chemical discharges through extensive filtration systems, some levels of emissions often persist, raising risks for nearby flora and fauna. Air pollutants can carry downwind, including volatile organic compounds (VOCs), nitrogen oxide compounds (NOx), particulate matter from fab operational exhausts, and power plant fuel emissions.


As contaminants permeate local soils and water sources, wildlife that ingests affected food and water takes in toxic substances, which research shows can hamper cell function, reproduction rates, and longevity, slowly poisoning ecosystems (Hsu et al. 2016).

Hsu, Liang-Ching, Ching-Yi Huang, Yen-Hsun Chuang, Ho-Wen Chen, Ya-Ting Chan, Heng Yi Teah, Tsan-Yao Chen, Chiung-Fen Chang, Yu-Ting Liu, and Yu-Min Tzou. 2016. “Accumulation of Heavy Metals and Trace Elements in Fluvial Sediments Received Effluents from Traditional and Semiconductor Industries.” Scientific Reports 6 (1): 34250. https://doi.org/10.1038/srep34250.

Likewise, accidental chemical spills and improper waste handling, which release acids, BODs, and heavy metals into soils, can dramatically alter soil retention and leaching behavior. Flora, such as vulnerable native orchids adapted to nutrient-poor substrates, can experience die-offs when contacted by foreign runoff chemicals that alter soil pH and permeability. One analysis found that a single 500-gallon nitric acid spill led to the regional extinction of a rare moss species in the year after the acidic effluent reached nearby forest habitats. Such contamination events set off chain reactions across the interconnected web of life. Thus, strict protocols are essential to avoid hazardous discharge and runoff.


16.6 Life Cycle Analysis


Understanding the holistic environmental impact of AI systems requires a comprehensive approach that considers the entire life cycle of these technologies. Life Cycle Analysis (LCA) refers to a methodological framework used to quantify the environmental impacts across all stages in a product or system’s lifespan, from raw material extraction to end-of-life disposal. Applying LCA to AI systems can help identify priority areas to target for reducing overall environmental footprints.


16.6.1 Stages of an AI System’s Life Cycle


The life cycle of an AI system can be divided into four key phases:

  • Design Phase: This includes the energy and resources used in researching and developing AI technologies. It encompasses the computational resources used for algorithm development and testing, which contribute to carbon emissions.

  • Manufacture Phase: This stage involves producing hardware components such as graphics cards, processors, and other computing devices necessary for running AI algorithms. Manufacturing these components often involves significant energy for material extraction and processing, along with greenhouse gas emissions.

  • Use Phase: This stage covers the operational use of AI systems. It includes the electricity consumed in data centers for training and running neural networks and for powering end-user applications. This is arguably one of the most carbon-intensive stages.

  • Disposal Phase: This final stage covers the end-of-life aspects of AI systems, including the recycling and disposal of electronic waste generated from outdated or non-functional hardware past its usable lifespan.
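
As a first-order sketch, the per-phase estimates described above can be aggregated into a whole-life-cycle figure. The phase names follow the four phases listed here, but the numeric values below are illustrative placeholders, not measured data:

```python
from dataclasses import dataclass


@dataclass
class PhaseFootprint:
    """Estimated emissions for one life-cycle phase, in kg CO2-equivalent."""
    phase: str
    kg_co2e: float


def total_footprint(phases):
    """Sum per-phase estimates into a whole-life-cycle figure."""
    return sum(p.kg_co2e for p in phases)


def phase_shares(phases):
    """Fraction of the total attributable to each phase."""
    total = total_footprint(phases)
    return {p.phase: p.kg_co2e / total for p in phases}


# Illustrative (made-up) numbers for a small deployed model:
inventory = [
    PhaseFootprint("design", 120.0),
    PhaseFootprint("manufacture", 350.0),
    PhaseFootprint("use", 900.0),
    PhaseFootprint("disposal", 30.0),
]
print(total_footprint(inventory))      # 1400.0
print(phase_shares(inventory)["use"])  # ~0.643, i.e. use dominates
```

Even a toy inventory like this makes the point that follows: the use phase typically dominates, so it is the natural first target for reductions.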

16.6.2 Environmental Impact at Each Stage


Design and Manufacturing


The environmental impact during these beginning-of-life phases includes emissions from energy use and resource depletion from extracting materials for hardware production. At the heart of AI hardware are semiconductors, primarily silicon, used to make the integrated circuits in processors and memory chips. This hardware manufacturing relies on metals like copper for wiring, aluminum for casings, and various plastics and composites for other components. It also uses rare earth metals and specialized alloys, elements like neodymium, terbium, and yttrium, in small but vital quantities. For example, GPU production relies on copper and aluminum, while the chips themselves use rare earth metals whose mining can generate substantial carbon emissions and ecosystem damage.


Use Phase


The use phase accounts for the majority of emissions in the lifecycle due to continuous high power consumption, especially for training and running models. This includes direct and indirect emissions from electricity usage and nonrenewable grid energy generation. Studies estimate training complex models can have a carbon footprint comparable to the lifetime emissions of up to five cars.
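
The arithmetic behind such operational estimates reduces to a standard first-order formula: energy consumed, scaled by data-center overhead (PUE), times grid carbon intensity. A minimal sketch follows; the default PUE and grid-intensity values are assumptions for illustration, not measured figures:

```python
def operational_emissions_kg(power_kw, hours, pue=1.5, grid_kg_per_kwh=0.4):
    """First-order operational carbon estimate for a compute job.

    power_kw        -- average IT power draw of the job (kW)
    hours           -- wall-clock runtime (h)
    pue             -- data-center Power Usage Effectiveness (>= 1.0)
    grid_kg_per_kwh -- grid carbon intensity (kg CO2e per kWh)
    """
    # IT energy scaled up by facility overhead (cooling, power delivery)
    energy_kwh = power_kw * hours * pue
    return energy_kwh * grid_kg_per_kwh


# Example: 8 GPUs at ~0.3 kW each, training for one week
print(operational_emissions_kg(power_kw=8 * 0.3, hours=24 * 7))  # ~241.92 kg CO2e
```

Note how sensitive the result is to the grid intensity term, which is why siting and scheduling decisions matter as much as model efficiency.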


Disposal Phase


The disposal stage impacts include air and water pollution from toxic materials in devices, challenges associated with complex electronics recycling, and contamination when improperly handled. Harmful compounds from burned e-waste are released into the atmosphere. At the same time, landfill leakage of lead, mercury, and other materials poses risks of soil and groundwater contamination if not properly controlled. Implementing effective electronics recycling is crucial.


Exercise 16.2 (Tracking ML Emissions)  


In this exercise, you’ll delve into the environmental impact of training machine learning models. We’ll use CodeCarbon to track emissions, learn about Life Cycle Analysis (LCA) to understand AI’s carbon footprint, and explore strategies to make your ML model development more environmentally friendly. By the end, you’ll be equipped to track the carbon emissions of your models and start implementing greener practices in your projects.
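
To make concrete what a tool like CodeCarbon automates, here is a toy stand-in: elapsed time multiplied by an assumed power draw and grid intensity. Real trackers sample CPU/GPU power counters and regional carbon-intensity data; the fixed constants here are assumptions for illustration only:

```python
import time


class SimpleEmissionsTracker:
    """Toy emissions tracker: elapsed time x assumed power x grid intensity.

    Real tools such as CodeCarbon sample hardware power draw and regional
    carbon-intensity data; here both are fixed, assumed constants.
    """

    def __init__(self, power_watts=150.0, kg_co2_per_kwh=0.4):
        self.power_watts = power_watts
        self.kg_co2_per_kwh = kg_co2_per_kwh
        self._start = None

    def start(self):
        self._start = time.perf_counter()

    def stop(self):
        """Return estimated emissions in kg CO2e since start()."""
        elapsed_h = (time.perf_counter() - self._start) / 3600.0
        energy_kwh = (self.power_watts / 1000.0) * elapsed_h
        return energy_kwh * self.kg_co2_per_kwh


tracker = SimpleEmissionsTracker()
tracker.start()
_ = sum(i * i for i in range(1_000_000))  # stand-in for a training loop
emissions = tracker.stop()
print(f"{emissions:.2e} kg CO2e")
```

The start/stop pattern mirrors the `EmissionsTracker` workflow the exercise walks through, so the numbers you get from CodeCarbon should be interpretable in the same terms.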


16.7 Challenges in LCA


16.7.1 Lack of Consistency and Standards


One major challenge facing life cycle analysis (LCA) for AI systems is the lack of consistent methodological standards and frameworks. Unlike product categories like building materials, which have developed international standards for LCA through ISO 14040, there are no firmly established guidelines for analyzing the environmental footprint of complex information technology like AI.


This absence of uniformity means researchers make differing assumptions and varying methodological choices. For example, a 2019 study from the University of Massachusetts Amherst (Strubell, Ganesh, and McCallum 2019) analyzed the life cycle emissions of several natural language processing models but only considered computational resource usage for training and omitted hardware manufacturing impacts. A more comprehensive 2020 study from Stanford University researchers included emissions estimates from producing relevant servers, processors, and other components, following an ISO-aligned LCA standard for computer hardware. However, these diverging choices in system boundaries and accounting approaches reduce robustness and prevent apples-to-apples comparisons of results.


Standardized frameworks and protocols tailored to AI systems’ unique aspects and rapid update cycles would provide more coherence. This could equip researchers and developers to understand environmental hotspots, compare technology options, and accurately track progress on sustainability initiatives across the AI field. Industry groups and international standards bodies like the IEEE or ACM should prioritize addressing this methodological gap.


16.7.2 Data Gaps


Another key challenge for comprehensive life cycle assessment of AI systems is substantial data gaps, especially regarding upstream supply chain impacts and downstream electronic waste flows. Most existing studies focus narrowly on the training or usage phase emissions from computational power demands, which misses a significant portion of lifetime emissions (Gupta et al. 2022).


For example, little public data from companies exists quantifying energy use and emissions from manufacturing the specialized hardware components that enable AI, including high-end GPUs, ASIC chips, solid-state drives, and more. Researchers often rely on secondary sources or generic industry averages to approximate production impacts. Similarly, there is limited transparency into the downstream fate of AI systems once they are discarded after 4-5 years of usable lifespan.


While electronic waste generation levels can be estimated, specifics on hazardous material leakage, recycling rates, and disposal methods for the complex components are hugely uncertain without better corporate documentation or regulatory reporting requirements.


The lack of fine-grained data on computational resource consumption for training different model types makes reliable per-parameter or per-query emissions calculations difficult even for the usage phase. Attempts to create lifecycle inventories estimating average energy needs for key AI tasks exist (Henderson et al. 2020; Anthony, Kanding, and Selvan 2020), but variability across hardware setups, algorithms, and input data remains extremely high. Furthermore, real-time carbon intensity data, critical for accurately tracking operational carbon footprint, is unavailable in many geographic locations, rendering existing tools for operational carbon emissions mere approximations based on annual average carbon intensity values.

Henderson, Peter, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. “Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning.” The Journal of Machine Learning Research 21 (1): 10039–81.
Anthony, Lasse F. Wolff, Benjamin Kanding, and Raghavendra Selvan. 2020. ICML Workshop on Challenges in Deploying and Monitoring Machine Learning Systems.

Tools like CodeCarbon and ML \(\textrm{CO}_2\) Impact help estimate these emissions, but they are ad hoc approaches at best. Bridging the real data gaps with more rigorous corporate sustainability disclosures and mandated environmental impact reporting will be key for AI’s overall climatic impacts to be understood and managed.


16.7.3 Rapid Pace of Evolution


The extremely quick evolution of AI systems poses additional challenges in keeping life cycle assessments up-to-date and accounting for the latest hardware and software advancements. The core algorithms, specialized chips, frameworks, and technical infrastructure underpinning AI have all been advancing exceptionally fast, with new developments rapidly rendering prior systems obsolete.


For example, in deep learning, novel neural network architectures that achieve significantly better performance on key benchmarks, or newly optimized hardware like Google’s TPU chips, can redefine what an “average” model looks like in less than a year. These swift shifts quickly make one-off LCA studies outdated for accurately tracking emissions from designing, running, or disposing of the latest AI.


However, the resources and access required to update LCAs continuously are often lacking. Frequently redoing labor- and data-intensive life cycle inventories and impact modeling to stay current with AI’s state of the art is likely infeasible for many researchers and organizations. Yet without updated analyses, environmental hotspots may go unnoticed as algorithms and silicon chips continue to evolve rapidly.


This presents difficulty in balancing dynamic precision through continuous assessment with pragmatic constraints. Some researchers have proposed simplified proxy metrics like tracking hardware generations over time or using representative benchmarks as an oscillating set of goalposts for relative comparisons, though granularity may be sacrificed. Overall, the challenge of rapid change will require innovative methodological solutions to prevent underestimating AI’s evolving environmental burdens.


16.7.4 Supply Chain Complexity


Finally, the complex and often opaque supply chains associated with producing the wide array of specialized hardware components that enable AI pose challenges for comprehensive life cycle modeling. State-of-the-art AI relies on cutting-edge advancements in processing chips, graphics cards, data storage, networking equipment, and more. However, tracking emissions and resource use across the tiered networks of globalized suppliers for all these components is extremely difficult.


For example, NVIDIA graphics processing units dominate much of the AI computing hardware, but the company relies on several discrete suppliers across Asia and beyond to produce GPUs. Many firms at each supplier tier choose to keep private the facility-level environmental data that could fully enable robust LCAs. Gaining end-to-end transparency down multiple levels of suppliers across disparate geographies with varying disclosure protocols and regulations poses barriers despite being crucial for complete boundary setting. This becomes even more complex when attempting to model emerging hardware accelerators like tensor processing units (TPUs), whose production networks remain largely undisclosed.


Without tech giants’ willingness to require and consolidate environmental impact data disclosure from across their global electronics supply chains, considerable uncertainty will remain around quantifying the full lifecycle footprint of AI hardware enablement. More supply chain visibility coupled with standardized sustainability reporting frameworks specifically addressing AI’s complex inputs hold promise for enriching LCAs and prioritizing environmental impact reductions.


16.8 Sustainable Design and Development


16.8.1 Sustainability Principles


As the impact of AI on the environment becomes increasingly evident, the focus on sustainable design and development in AI is gaining prominence. This involves incorporating sustainability principles into AI design, developing energy-efficient models, and integrating these considerations throughout the AI development pipeline. There is a growing need to consider AI’s sustainability implications and to develop principles that guide responsible innovation. Below is a core set of such principles; they flow from conceptual foundation to practical execution to supporting implementation factors, providing a full-cycle perspective on embedding sustainability in AI design and development.


Lifecycle Thinking: Encouraging designers to consider the entire lifecycle of AI systems, from data collection and preprocessing to model development, training, deployment, and monitoring. The goal is to ensure sustainability is considered at each stage. This includes using energy-efficient hardware, prioritizing renewable energy sources, and planning to reuse or recycle retired models.


Future Proofing: Designing AI systems anticipating future needs and changes can enhance sustainability. This may involve making models adaptable via transfer learning and modular architectures. It also includes planning capacity for projected increases in operational scale and data volumes.


Efficiency and Minimalism: This principle focuses on creating AI models that achieve desired results with the least possible resource use. It involves simplifying models and algorithms to reduce computational requirements. Specific techniques include pruning redundant parameters, quantizing and compressing models, and designing efficient model architectures, such as those discussed in the Optimizations chapter.
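
Magnitude-based pruning, one of the techniques named above, can be sketched in a few lines. This is a simplified per-tensor variant for illustration, not any particular framework’s implementation:

```python
import numpy as np


def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)


rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
pw = magnitude_prune(w, sparsity=0.5)
print((pw == 0).mean())  # 0.5: half the weights zeroed
```

In practice, pruned networks are usually fine-tuned afterward to recover accuracy, and the zeros only save energy when the hardware or runtime exploits the sparsity.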


Lifecycle Assessment (LCA) Integration: Analyzing environmental impacts throughout the development and deployment lifecycle highlights unsustainable practices early on. Teams can then make adjustments instead of discovering issues late when they are more difficult to address. Integrating this analysis into the standard design flow avoids creating legacy sustainability problems.


Incentive Alignment: Economic and policy incentives should promote and reward sustainable AI development. These may include government grants, corporate initiatives, industry standards, and academic mandates for sustainability. Aligned incentives enable sustainability to become embedded in AI culture.


Sustainability Metrics and Goals: It is important to establish clearly defined metrics that measure sustainability factors like carbon usage and energy efficiency. Establishing clear targets for these metrics provides concrete guidelines for teams to develop responsible AI systems. Tracking performance on metrics over time shows progress towards set sustainability goals.


Fairness, Transparency, and Accountability: Sustainable AI systems should be fair, transparent, and accountable. Models should be unbiased, with transparent development processes and mechanisms for auditing and redressing issues. This builds public trust and enables the identification of unsustainable practices.


Cross-disciplinary Collaboration: AI researchers teaming up with environmental scientists and engineers can lead to innovative systems that are high-performing yet environmentally friendly. Combining expertise from different fields from the start of projects enables sustainable thinking to be incorporated into the AI design process.


Education and Awareness: Workshops, training programs, and course curricula that cover AI sustainability raise awareness among the next generation of practitioners. This equips students with the knowledge to develop AI that consciously minimizes negative societal and environmental impacts. Instilling these values from the start shapes tomorrow’s professionals and company cultures.


16.9 Green AI Infrastructure


Green AI represents a transformative approach to AI that incorporates environmental sustainability as a fundamental principle across the AI system design and lifecycle (R. Schwartz et al. 2020). This shift is driven by growing awareness of AI technologies’ significant carbon footprint and ecological impact, especially the compute-intensive process of training complex ML models.

Schwartz, Roy, Jesse Dodge, Noah A. Smith, and Oren Etzioni. 2020. “Green AI.” Commun. ACM 63 (12): 54–63. https://doi.org/10.1145/3381831.

The essence of Green AI lies in its commitment to align AI advancement with sustainability goals around energy efficiency, renewable energy usage, and waste reduction. The introduction of Green AI ideals reflects maturing responsibility across the tech industry towards environmental stewardship and ethical technology practices. It moves beyond technical optimizations toward holistic life cycle assessment on how AI systems affect sustainability metrics. Setting new bars for ecologically conscious AI paves the way for the harmonious coexistence of technological progress and planetary health.


16.9.1 Energy Efficient AI Systems


Energy efficiency in AI systems is a cornerstone of Green AI, aiming to reduce the energy demands traditionally associated with AI development and operations. This shift towards energy-conscious AI practices is vital in addressing the environmental concerns raised by the rapidly expanding field of AI. By focusing on energy efficiency, AI systems can become more sustainable, lessening their environmental impact and paving the way for more responsible AI use.


As we discussed earlier, the training and operation of AI models, especially large-scale ones, are known for their high energy consumption, which stems from compute-intensive model architecture and reliance on vast amounts of training data. For example, it is estimated that training a large state-of-the-art neural network model can have a carbon footprint of 284 tonnes—equivalent to the lifetime emissions of 5 cars (Strubell, Ganesh, and McCallum 2019).

Strubell, Emma, Ananya Ganesh, and Andrew McCallum. 2019. “Energy and Policy Considerations for Deep Learning in NLP.” In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–50. Florence, Italy: Association for Computational Linguistics. https://doi.org/10.18653/v1/p19-1355.

To tackle the massive energy demands, researchers and developers are actively exploring methods to optimize AI systems for better energy efficiency while maintaining model accuracy and performance. This includes techniques like the ones we have discussed in the model optimizations, efficient AI, and hardware acceleration chapters:

  • Knowledge distillation to transfer knowledge from large AI models to miniature versions

  • Quantization and pruning approaches that reduce computational and space complexities

  • Low-precision numerics, lowering mathematical precision without impacting model quality

  • Specialized hardware like TPUs and neuromorphic chips tuned explicitly for efficient AI processing

One example is Intel’s work on Q8BERT—quantizing the BERT language model with 8-bit integers, leading to a 4x reduction in model size with minimal accuracy loss (Zafrir et al. 2019). The push for energy-efficient AI is not just a technical endeavor–it has tangible real-world implications. More performant systems lower AI’s operational costs and carbon footprint, making it accessible for widespread deployment on mobile and edge devices. It also paves the path toward the democratization of AI and mitigates unfair biases that can emerge from uneven access to computing resources across regions and communities. Pursuing energy-efficient AI is thus crucial for creating an equitable and sustainable future with AI.
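
The 8-bit quantization behind results like Q8BERT can be illustrated with a simplified symmetric per-tensor scheme (a sketch, not Intel’s actual recipe): storing int8 instead of float32 yields the 4x size reduction directly:

```python
import numpy as np


def quantize_int8(x):
    """Symmetric per-tensor quantization of float32 values to int8."""
    scale = np.abs(x).max() / 127.0        # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale


def dequantize(q, scale):
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale


w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

print(w.nbytes // q.nbytes)  # 4: the 4x storage reduction
# Worst-case reconstruction error is half a quantization step (scale / 2)
print(float(np.abs(dequantize(q, scale) - w).max()))
```

Production schemes add refinements such as per-channel scales and quantization-aware training, but the storage and bandwidth savings come from exactly this dtype change.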

Zafrir, Ofir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. “Q8BERT: Quantized 8Bit BERT.” In 2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS), 36–39. IEEE. https://doi.org/10.1109/emc2-nips53020.2019.00016.

16.9.2 Sustainable AI Infrastructure


Sustainable AI infrastructure includes the physical and technological frameworks that support AI systems, focusing on environmental sustainability. This involves designing and operating AI infrastructure to minimize ecological impact, conserve resources, and reduce carbon emissions. The goal is to create a sustainable ecosystem for AI that aligns with broader environmental objectives.


Green data centers are central to sustainable AI infrastructure, optimized for energy efficiency, and often powered by renewable energy sources. These data centers employ advanced cooling technologies (Ebrahimi, Jones, and Fleischer 2014), energy-efficient server designs (Uddin and Rahman 2012), and smart management systems (Buyya, Beloglazov, and Abawajy 2010) to reduce power consumption. The shift towards green computing infrastructure also involves adopting energy-efficient hardware, like AI-optimized processors that deliver high performance with lower energy requirements, which we discussed in the AI Acceleration chapter. These efforts collectively reduce the carbon footprint of running large-scale AI operations.

Ebrahimi, Khosrow, Gerard F. Jones, and Amy S. Fleischer. 2014. “A Review of Data Center Cooling Technology, Operating Conditions and the Corresponding Low-Grade Waste Heat Recovery Opportunities.” Renewable Sustainable Energy Rev. 31 (March): 622–38. https://doi.org/10.1016/j.rser.2013.12.007.
Uddin, Mueen, and Azizah Abdul Rahman. 2012. “Energy Efficiency and Low Carbon Enabler Green IT Framework for Data Centers Considering Green Metrics.” Renewable Sustainable Energy Rev. 16 (6): 4078–94. https://doi.org/10.1016/j.rser.2012.03.014.
Buyya, Rajkumar, Anton Beloglazov, and Jemal Abawajy. 2010. “Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges.” https://arxiv.org/abs/1006.0308.
Chua, L. 1971. “Memristor: The Missing Circuit Element.” IEEE Transactions on Circuit Theory 18 (5): 507–19. https://doi.org/10.1109/tct.1971.1083337.

Integrating renewable energy sources, such as solar, wind, and hydroelectric power, into AI infrastructure is important for environmental sustainability (Chua 1971). Many tech companies and research institutions are investing in renewable energy projects to power their data centers. This not only helps in making AI operations carbon-neutral but also promotes the wider adoption of clean energy. Using renewable energy sources clearly shows commitment to environmental responsibility in the AI industry.


Sustainability in AI also extends to the materials and hardware used in creating AI systems. This involves choosing environmentally friendly materials, adopting recycling practices, and ensuring responsible electronic waste disposal. Efforts are underway to develop more sustainable hardware components, including energy-efficient chips designed for domain-specific tasks (such as AI accelerators) and environmentally friendly materials in device manufacturing (Cenci et al. 2021; Irimia-Vladu 2014). The lifecycle of these components is also a focus, with initiatives aimed at extending the lifespan of hardware and promoting recycling and reuse.

Cenci, Marcelo Pilotto, Tatiana Scarazzato, Daniel Dotto Munchen, Paula Cristina Dartora, Hugo Marcelo Veit, Andrea Moura Bernardes, and Pablo R. Dias. 2021. “Eco-Friendly Electronics – A Comprehensive Review.” Adv. Mater. Technol. 7 (2): 2001263. https://doi.org/10.1002/admt.202001263.
Irimia-Vladu, Mihai. 2014. “Green Electronics: Biodegradable and Biocompatible Materials and Devices for Sustainable Future.” Chem. Soc. Rev. 43 (2): 588–610. https://doi.org/10.1039/c3cs60235d.

While strides are being made in sustainable AI infrastructure, challenges remain, such as the high costs of green technology and the need for global standards in sustainable practices. Future directions include more widespread adoption of green energy, further innovations in energy-efficient hardware, and international collaboration on sustainable AI policies. Pursuing sustainable AI infrastructure is not just a technical endeavor but a holistic approach that encompasses environmental, economic, and social aspects, ensuring that AI advances harmoniously with our planet’s health.


16.9.3 Frameworks and Tools


Access to the right frameworks and tools is essential to effectively implementing green AI practices. These resources are designed to assist developers and researchers in creating more energy-efficient and environmentally friendly AI systems. They range from software libraries optimized for low-power consumption to platforms that facilitate the development of sustainable AI applications.


Several software libraries and development environments are specifically tailored for Green AI. These tools often include features for optimizing AI models to reduce their computational load and, consequently, their energy consumption. For example, libraries in PyTorch and TensorFlow that support model pruning, quantization, and efficient neural network architectures enable developers to build AI systems that require less processing power and energy. Additionally, open-source communities like the Green Software Foundation are creating a centralized carbon intensity metric and building software for carbon-aware computing.


Energy monitoring tools are crucial for Green AI, as they allow developers to measure and analyze the energy consumption of their AI systems. By providing detailed insights into where and how energy is being used, these tools enable developers to make informed decisions about optimizing their models for better energy efficiency. This can involve adjustments in algorithm design, hardware selection, cloud computing software selection, or operational parameters. Figure fig-azuredashboard is a screenshot of an energy consumption dashboard provided by Microsoft’s cloud services platform.

Figure 16.7: Microsoft Azure energy consumption dashboard. Credit: Will Buchanan.

With the increasing integration of renewable energy sources in AI operations, frameworks facilitating this process are becoming more important. These frameworks help manage the energy supply from renewable sources like solar or wind power, ensuring that AI systems can operate efficiently with fluctuating energy inputs.
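
A core mechanic of such frameworks is carbon-aware scheduling: shifting flexible workloads toward forecast windows of low grid carbon intensity. A minimal sketch, with hypothetical forecast values:

```python
def best_start_hour(forecast_kg_per_kwh, duration_h):
    """Pick the start hour minimizing total grid carbon intensity over a job.

    forecast_kg_per_kwh -- hourly carbon-intensity forecast (kg CO2e per kWh)
    duration_h          -- job length in whole hours
    """
    windows = range(len(forecast_kg_per_kwh) - duration_h + 1)
    return min(windows,
               key=lambda s: sum(forecast_kg_per_kwh[s:s + duration_h]))


# Hypothetical 12-hour forecast: solar output makes midday hours cleanest
forecast = [0.50, 0.48, 0.45, 0.30, 0.20, 0.15,
            0.18, 0.25, 0.40, 0.47, 0.52, 0.55]
print(best_start_hour(forecast, duration_h=3))  # 4: hours 4-6 sum to 0.53
```

Real systems (and live carbon-intensity APIs) add constraints such as deadlines and capacity, but the optimization at the core is this simple window search.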


Beyond energy efficiency, sustainability assessment tools help evaluate the broader environmental impact of AI systems. These tools can analyze factors like the carbon footprint of AI operations, the lifecycle impact of hardware components (Gupta et al. 2022), and the overall sustainability of AI projects (Prakash, Callahan, et al. 2023).

Gupta, Udit, Mariam Elgamal, Gage Hills, Gu-Yeon Wei, Hsien-Hsin S. Lee, David Brooks, and Carole-Jean Wu. 2022. “ACT: Designing Sustainable Computer Systems with an Architectural Carbon Modeling Tool.” In Proceedings of the 49th Annual International Symposium on Computer Architecture, 784–99. ACM. https://doi.org/10.1145/3470496.3527408.
Prakash, Shvetank, Tim Callahan, Joseph Bushagour, Colby Banbury, Alan V. Green, Pete Warden, Tim Ansell, and Vijay Janapa Reddi. 2023. “CFU Playground: Full-Stack Open-Source Framework for Tiny Machine Learning (TinyML) Acceleration on FPGAs.” In 2023 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE. https://doi.org/10.1109/ispass57527.2023.00024.

The availability and ongoing development of Green AI frameworks and tools are critical for advancing sustainable AI practices. By providing the necessary resources for developers and researchers, these tools facilitate the creation of more environmentally friendly AI systems and encourage a broader shift towards sustainability in the tech community. As Green AI continues to evolve, these frameworks and tools will play a vital role in shaping a more sustainable future for AI.


16.9.4 Benchmarks and Leaderboards


Benchmarks and leaderboards are important for driving progress in Green AI, as they provide standardized ways to measure and compare different methods. Well-designed benchmarks that capture relevant metrics around energy efficiency, carbon emissions, and other sustainability factors enable the community to track advancements fairly and meaningfully.


Extensive benchmarks exist for tracking AI model performance, such as those discussed in the Benchmarking chapter. Still, a clear and pressing need exists for additional standardized benchmarks focused on sustainability metrics like energy efficiency, carbon emissions, and overall ecological impact. Understanding of the environmental costs of AI is currently hampered by a lack of transparency and standardized measurement around these factors.


Emerging efforts such as the ML.ENERGY Leaderboard, which provides performance and energy consumption benchmarking results for text generation with large language models (LLMs), help improve understanding of the energy cost of deploying generative AI.


As with any benchmark, Green AI benchmarks must represent realistic usage scenarios and workloads. Benchmarks that focus narrowly on easily gamed metrics may lead to short-term gains but fail to reflect actual production environments where more holistic efficiency and sustainability measures are needed. The community should continue expanding benchmarks to cover diverse use cases.


Wider adoption of common benchmark suites by industry players will accelerate innovation in Green AI by allowing easier comparison of techniques across organizations. Shared benchmarks lower the barrier to demonstrating the sustainability benefits of new tools and best practices. However, when designing industry-wide benchmarks, care must be taken around issues like intellectual property, privacy, and commercial sensitivity. Initiatives to develop open reference datasets for Green AI evaluation may help drive broader participation.


As methods and infrastructure for Green AI continue maturing, the community must revisit benchmark design to ensure existing suites capture new techniques and scenarios well. Tracking the evolving landscape through regular benchmark updates and reviews will be important to maintain representative comparisons over time. Community efforts for benchmark curation can enable sustainable benchmark suites that stand the test of time. Comprehensive benchmark suites owned by research communities or neutral third parties like MLCommons may encourage wider participation and standardization.


16.10 Case Study: Google’s 4Ms


Over the past decade, AI has rapidly moved from academic research to large-scale production systems powering numerous Google products and services. As AI models and workloads have grown exponentially in size and computational demands, concerns have emerged about their energy consumption and carbon footprint. Some researchers predicted runaway growth in ML’s energy appetite that could outweigh efficiencies gained from improved algorithms and hardware (Thompson et al. 2021).

Thompson, Neil C., Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso. 2021. “Deep Learning’s Diminishing Returns: The Cost of Improvement Is Becoming Unsustainable.” IEEE Spectr. 58 (10): 50–55. https://doi.org/10.1109/mspec.2021.9563954.

However, Google’s production data reveals a different story—AI represents a steady 10-15% of total company energy usage from 2019 to 2021. This case study analyzes how Google applied a systematic approach leveraging four best practices—what they term the “4 Ms” of model efficiency, machine optimization, mechanization through cloud computing, and mapping to green locations—to bend the curve on emissions from AI workloads.


The scale of Google’s AI usage makes it an ideal case study. In 2021 alone, the company trained models like the 1.2 trillion-parameter GLaM model. Analyzing how the application of AI has been paired with rapid efficiency gains in this environment provides a logical blueprint for the broader AI field to follow.


By transparently publishing detailed energy usage statistics, adoption rates of carbon-free cloud regions, renewables purchases, and more alongside its technical innovations, Google has enabled outside researchers to measure progress accurately. Their study in IEEE Computer (Patterson et al. 2022) highlights how the company’s multipronged approach shows that runaway AI energy consumption predictions can be overcome by focusing engineering efforts on sustainable development patterns. The pace of improvements also suggests ML’s efficiency gains are just starting.

Patterson, David, Joseph Gonzalez, Urs Holzle, Quoc Le, Chen Liang, Lluis-Miquel Munguia, Daniel Rothchild, David R. So, Maud Texier, and Jeff Dean. 2022. “The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink.” Computer 55 (7): 18–28. https://doi.org/10.1109/mc.2022.3148714.

16.10.1 Google’s 4M Best Practices


To curb emissions from their rapidly expanding AI workloads, Google engineers systematically identified four best practice areas–termed the “4 Ms”–where optimizations could compound to reduce the carbon footprint of ML:

  • Model: Selecting efficient AI model architectures can reduce computation by 5-10X with no loss in model quality. Google has extensively researched sparse models and neural architecture search to create more efficient models like the Evolved Transformer and Primer.
  • Machine: Using hardware optimized for AI over general-purpose systems improves performance per watt by 2-5X. Google’s Tensor Processing Units (TPUs) achieved 5-13X better carbon efficiency versus GPUs not optimized for ML.
  • Mechanization: Leveraging cloud computing systems tailored for high utilization over conventional on-premise data centers reduces energy costs by 1.4-2X. Google cites its data centers’ power usage effectiveness as outpacing industry averages.
  • Map: Choosing data center locations with low-carbon electricity reduces gross emissions by another 5-10X. Google provides real-time maps highlighting the percentage of renewable energy used by its facilities.

Together, these practices created drastic compound efficiency gains. For example, optimizing the Transformer AI model on TPUs in a sustainable data center location cut energy use by a factor of 83 and lowered \(\textrm{CO}_2\) emissions by a factor of 747.
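The compounding at work here can be sanity-checked by multiplying the per-practice ranges quoted in the list above (5-10X model, 2-5X machine, 1.4-2X mechanization, 5-10X map). The script below is purely illustrative arithmetic, not Google's actual methodology; realized gains depend on workload and deployment.

```python
# Multiply the low and high ends of each 4M practice's reduction range
# to see how per-practice gains compound multiplicatively.
factors = {
    "model": (5, 10),           # efficient architectures
    "machine": (2, 5),          # ML-optimized hardware (e.g., TPUs)
    "mechanization": (1.4, 2),  # high-utilization cloud data centers
    "map": (5, 10),             # low-carbon data center locations
}

low, high = 1.0, 1.0
for lo, hi in factors.values():
    low *= lo
    high *= hi
print(f"compound reduction: {low:.0f}x to {high:.0f}x")
```

Even the low ends of the ranges compound into a substantial combined reduction, which is why no single practice dominates Google's strategy.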


16.10.2 Significant Results


Despite exponential growth in AI adoption across products and services, Google’s efforts to improve the carbon efficiency of ML have produced measurable gains, helping to restrain overall energy appetite. One key data point highlighting this progress is that AI workloads have remained a steady 10% to 15% of total company energy use from 2019 to 2021. As AI became integral to more Google offerings, overall compute cycles dedicated to AI grew substantially. However, efficiencies in algorithms, specialized hardware, data center design, and flexible geography allowed sustainability to keep pace—with AI representing just a fraction of total data center electricity over years of expansion.


Other case studies underscore how an engineering focus on sustainable AI development patterns enabled rapid quality improvements in lockstep with environmental gains. For example, the natural language processing model GPT-3 was viewed as state-of-the-art in mid-2020. Yet its successor GLaM improved accuracy while cutting training compute needs and using cleaner data center energy–cutting CO2 emissions by a factor of 14 in just 18 months of model evolution.


Similarly, Google found that past published speculation had missed the mark on ML’s energy appetite by factors of 100 to 100,000X due to a lack of real-world metrics. By transparently tracking optimization impact, Google hoped to motivate efficiency while preventing overestimated extrapolations about ML’s environmental toll.


These data-driven case studies show how companies like Google are steering AI advancements toward sustainable trajectories and improving efficiency to outpace adoption growth. With further efforts around lifecycle analysis, inference optimization, and renewable expansion, companies can aim to accelerate progress, giving evidence that ML’s clean potential is only just being unlocked by current gains.


16.10.3 Further Improvements


While Google has made measurable progress in restraining the carbon footprint of its AI operations, the company recognizes further efficiency gains will be vital for responsible innovation given the technology’s ongoing expansion.


One area of focus is showing how advances often incorrectly viewed as unsustainable increases in computing, like neural architecture search (NAS) to find optimized models, actually spur downstream savings that outweigh their upfront costs. Despite expending more energy on model discovery than hand-engineering would, NAS cuts lifetime emissions by producing efficient designs callable across countless applications.


Additionally, the analysis reveals that focusing sustainability efforts on data center and server-side optimization makes sense, given the dominant energy draw versus consumer devices. Though Google aims to shrink inference impacts across processors like mobile phones, priority rests on improving training cycles and data center renewables procurement for maximal effect.


To that end, Google’s progress in pooling computing in efficiently designed cloud facilities highlights the value of scale and centralization. As more workloads shift away from inefficient on-premise servers, internet giants’ prioritization of renewable energy—with Google and Facebook matched 100% by renewables since 2017 and 2020, respectively—unlocks compounding emissions cuts.


Together, these efforts emphasize that while no resting on laurels is possible, Google’s multipronged approach shows that AI efficiency improvements are only accelerating. Cross-domain initiatives around lifecycle assessment, carbon-conscious development patterns, transparency, and matching rising AI demand with clean electricity supply pave a path toward bending the curve further as adoption grows. The company’s results compel the broader field towards replicating these integrated sustainability pursuits.


16.11 Embedded AI - Internet of Trash


While much attention has focused on making the immense data centers powering AI more sustainable, an equally pressing concern is the movement of AI capabilities into smart edge devices and endpoints. Edge/embedded AI allows near real-time responsiveness without connectivity dependencies and reduces transmission bandwidth needs. However, the proliferation of these tiny devices introduces other risks.


Tiny computers, microcontrollers, and custom ASICs powering edge intelligence face size, cost, and power limitations that rule out high-end GPUs used in data centers. Instead, they require optimized algorithms and extremely compact, energy-efficient circuitry to run smoothly. However, engineering for these microscopic form factors opens up risks around planned obsolescence, disposability, and waste. Figure fig-iot-devices shows that the number of IoT devices is projected to reach 30 billion connected devices by 2030.

Figure 16.8: Number of Internet of Things (IoT) connected devices worldwide from 2019 to 2023. Credit: Statista.

End-of-life handling of internet-connected gadgets embedded with sensors and AI remains an often overlooked issue during design. However, these products permeate consumer goods, vehicles, public infrastructure, industrial equipment, and more.


E-waste


Electronic waste, or e-waste, refers to discarded electrical equipment and components that enter the waste stream. This includes any device that plugs in, contains a battery, or includes electrical circuitry. With the rising adoption of internet-connected smart devices and sensors, e-waste volumes are rapidly increasing each year. These proliferating gadgets contain toxic heavy metals like lead, mercury, and cadmium that become environmental and health hazards when improperly disposed of.


The amount of electronic waste being produced is growing at an alarming rate. Today, we already produce 50 million tons per year. By 2030, that figure is projected to jump to a staggering 75 million tons as consumer electronics consumption continues to accelerate. Global e-waste production will reach 120 million tonnes annually by 2050 (Un and Forum 2019). The soaring production and short lifecycles of our gadgets fuel this crisis, from smartphones and tablets to internet-connected devices and home appliances.


Developing nations are being hit the hardest, as they lack the infrastructure to process obsolete electronics safely. In 2019, formal e-waste recycling rates in poorer countries ranged from 13% to 23%. The remainder ends up illegally dumped, burned, or crudely dismantled, releasing toxic materials into the environment and harming workers and local communities. Clearly, more needs to be done to build global capacity for ethical and sustainable e-waste management, or we risk irreversible damage.


The danger is that crude handling of electronics to strip valuables exposes marginalized workers and communities to noxious burnt plastics/metals. Lead poisoning poses especially high risks to child development if ingested or inhaled. Overall, only about 20% of e-waste produced was collected using environmentally sound methods, according to UN estimates (Un and Forum 2019). So solutions for responsible lifecycle management are urgently required to contain the unsafe disposal as volume soars higher.

Un, and World Economic Forum. 2019. A New Circular Vision for Electronics: Time for a Global Reboot. PACE - Platform for Accelerating the Circular Economy. https://www3.weforum.org/docs/WEF_A_New_Circular_Vision_for_Electronics.pdf.

Disposable Electronics


The rapidly falling costs of microcontrollers, tiny rechargeable batteries, and compact communication hardware have enabled the embedding of intelligent sensor systems throughout everyday consumer goods. These internet-of-things (IoT) devices monitor product conditions, user interactions, and environmental factors to enable real-time responsiveness, personalization, and data-driven business decisions in the evolving connected marketplace.


However, these embedded electronics face little oversight or planning around sustainably handling their eventual disposal once the often plastic-encased products are discarded after brief lifetimes. IoT sensors now commonly reside in single-use items like water bottles, food packaging, prescription bottles, and cosmetic containers that overwhelmingly enter landfill waste streams after a few weeks to months of consumer use.


The problem accelerates as more manufacturers rush to integrate mobile chips, power sources, Bluetooth modules, and other modern silicon ICs, costing under US$1, into various merchandise without protocols for recycling, replacing batteries, or component reusability. Despite their small individual size, the volumes of these devices and lifetime waste burden loom large. Unlike regulating larger electronics, few policy constraints exist around materials requirements or toxicity in tiny disposable gadgets.


While offering convenience when working, the unsustainable combination of difficult retrievability and limited safe breakdown mechanisms causes disposable connected devices to contribute outsized shares of future e-waste volumes needing urgent attention.


Planned Obsolescence


Planned obsolescence refers to the intentional design strategy of manufacturing products with artificially limited lifetimes that quickly become non-functional or outdated. This spurs faster replacement purchase cycles as consumers find devices no longer meet their needs within a few years. However, electronics designed for premature obsolescence contribute to unsustainable e-waste volumes.


For example, gluing smartphone batteries and components together hinders repairability compared to modular, accessible assemblies. Rolling out software updates that deliberately slow system performance creates the perception that upgrading devices produced only a few years earlier is worthwhile.


Likewise, fashionable introductions of new product generations with minor but exclusive feature additions make prior versions rapidly seem dated. These tactics compel buying new gadgets (e.g., iPhones) long before operational endpoints. When multiplied across fast-paced electronics categories, billions of barely worn items are discarded annually.


Planned obsolescence thus intensifies resource utilization and waste creation in making products with no intention for long lifetimes. This contradicts sustainability principles around durability, reuse, and material conservation. While stimulating continuous sales and gains for manufacturers in the short term, the strategy externalizes environmental costs and toxins onto communities lacking proper e-waste processing infrastructure.


Policy and consumer action are crucial to counter gadget designs that are needlessly disposable by default. Companies should also invest in product stewardship programs supporting responsible reuse and reclamation.


Consider a real-world example. Apple has faced scrutiny over the years for allegedly engaging in planned obsolescence to encourage customers to buy new iPhone models. The company allegedly designed its phones so that performance degrades over time or existing features become incompatible with new operating systems, which critics argue is meant to spur more rapid upgrade cycles. In 2020, Apple paid a €25 million fine to settle a case in France where regulators found the company guilty of intentionally slowing down older iPhones via iOS updates without clearly informing customers.


By failing to be transparent about power management changes that reduced device performance, Apple engaged in deceptive practices that shortened product lifespan to drive sales. The company claimed the changes were made to “smooth out” power peaks that could cause older batteries to shut down suddenly. Still, this example highlights the legal risks of employing planned obsolescence without properly disclosing when functionality changes affect device usability over time. Even leading brands like Apple can run into trouble if perceived as intentionally shortening product life cycles.


16.12 Policy and Regulatory Considerations


16.12.1 Measurement and Reporting Mandates


One policy mechanism that is increasingly relevant for AI systems is measurement and reporting requirements regarding energy consumption and carbon emissions. Mandated metering, auditing, disclosures, and more rigorous methodologies aligned to sustainability metrics can help address information gaps hindering efficiency optimizations.


Simultaneously, national or regional policies could require companies above a certain size that utilize AI in their products or backend systems to report energy consumption or emissions associated with major AI workloads. Organizations like the Partnership on AI, IEEE, and NIST could help shape standardized methodologies. More complex proposals involve defining consistent ways to measure computational complexity, data center PUE, the carbon intensity of the energy supply, and efficiencies gained through AI-specific hardware.
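A standardized methodology of the kind these proposals envision might combine the quantities listed above into a single operational estimate, for example emissions = IT energy × PUE × grid carbon intensity. The function below is a hypothetical sketch of such a calculation, not any standard's actual formula, and all input values are made up.

```python
# Hypothetical operational-carbon estimate for an AI workload:
# scale IT energy by the data center's PUE to get total facility draw,
# then multiply by the grid's carbon intensity.

def operational_co2_kg(it_energy_kwh, pue, grid_kgco2_per_kwh):
    """Estimate operational CO2 (kg) for a workload, including facility overhead."""
    facility_energy_kwh = it_energy_kwh * pue  # PUE >= 1.0 accounts for cooling, power losses
    return facility_energy_kwh * grid_kgco2_per_kwh

# Hypothetical training run: 10,000 kWh of IT energy, a PUE of 1.1,
# and a grid intensity of 0.4 kg CO2 per kWh.
print(operational_co2_kg(10_000, 1.1, 0.4))
```

Agreeing on how each of the three inputs is measured is precisely where standardized methodologies from bodies like IEEE or NIST would matter most.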


Reporting obligations for public sector users procuring AI services—such as through proposed legislation in Europe—could also increase transparency. However, regulators must balance the additional measurement burden such mandates place on organizations against ongoing carbon reductions from ingraining sustainability-conscious development patterns.


To be most constructive, any measurement and reporting policies should focus on enabling continuous refinement rather than simplistic restrictions or caps. As AI advancements unfold rapidly, nimble governance guardrails that embed sustainability considerations into normal evaluation metrics can motivate positive change. However, overprescription risks constraining innovation if requirements grow outdated. AI efficiency policy aims to accelerate progress industry-wide by combining flexibility with appropriate transparency guardrails.


16.12.2 Restriction Mechanisms


In addition to reporting mandates, policymakers have several restriction mechanisms that could directly shape how AI systems are developed and deployed to curb emissions:


Caps on Computing Emissions: The European Commission’s proposed AI Act takes a horizontal approach that could allow setting economy-wide caps on the volume of computing power available for training AI models. Like emissions trading systems, caps aim to indirectly disincentivize extensive computing over sustainability. However, such caps would need pathways for procuring additional capacity so that improvements in model quality are not unduly constrained.


Conditioning Access to Public Resources: Some experts have proposed incentives like only allowing access to public datasets or computing power for developing fundamentally efficient models rather than extravagant architectures. For example, the MLCommons benchmarking consortium founded by major tech firms could formally integrate efficiency into its standardized leaderboard metrics—however, conditioned access risks limiting innovation.


Financial Mechanisms: Analogous to carbon taxes on polluting industries, fees applied per unit of AI-related compute consumption could discourage unnecessary model scaling while funding efficiency innovations. Tax credits could alternatively reward organizations pioneering more accurate but compact AI techniques. However, financial tools require careful calibration between revenue generation and fairness and not over-penalizing productive uses of AI.


Technology Bans: If measurement consistently pinned extreme emissions on specific applications of AI without paths for remediation, outright bans present a tool of last resort for policymakers. However, given AI’s dual use, defining harmful versus beneficial deployments proves complex, necessitating holistic impact assessment before concluding no redeeming value exists. Banning promising technologies risks unintended consequences and requires caution.


16.12.3 Government Incentives


It is a common practice for governments to provide tax or other incentives to consumers or businesses when contributing to more sustainable technological practices. Such incentives already exist in the US for adopting solar panels or energy-efficient buildings. To the best of our knowledge, no such tax incentives exist for AI-specific development practices yet.


Another potential incentive program that is beginning to be explored is using government grants to fund Green AI projects. For example, in Spain, 300 million euros have been allocated to specifically fund projects in AI and sustainability. Government incentives are a promising avenue to encourage sustainable business and consumer behavior practices, but careful thought is required to determine how those incentives will fit into market demands (Cohen, Lobel, and Perakis 2016).

Cohen, Maxime C., Ruben Lobel, and Georgia Perakis. 2016. “The Impact of Demand Uncertainty on Consumer Subsidies for Green Technology Adoption.” Manage. Sci. 62 (5): 1235–58. https://doi.org/10.1287/mnsc.2015.2173.
-

16.12.4 Self-Regulation


Complementary to potential government action, voluntary self-governance mechanisms allow the AI community to pursue sustainability ends without top-down intervention:


Renewables Commitments: Large AI practitioners like Google, Microsoft, Amazon, and Facebook have pledged to procure enough renewable electricity to match 100% of their energy demands. These commitments unlock compounding emissions cuts as compute scales up. Formalizing such programs incentivizes green data center regions. However, there are critiques on whether these pledges are enough (Monyei and Jenkins 2018).

Monyei, Chukwuka G., and Kirsten E. H. Jenkins. 2018. “Electrons Have No Identity: Setting Right Misrepresentations in Google and Apple’s Clean Energy Purchasing.” Energy Research & Social Science 46 (December): 48–51. https://doi.org/10.1016/j.erss.2018.06.015.

Internal Carbon Prices: Some organizations utilize shadow prices on carbon emissions to represent environmental costs in capital allocation decisions between AI projects. If modeled effectively, theoretical charges on development carbon footprints steer funding toward efficient innovations rather than solely accuracy gains.
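A minimal sketch of how such a shadow price could enter a funding comparison is shown below; the price, project names, budgets, and emissions figures are all hypothetical illustrations, not any organization's real numbers.

```python
# Internal (shadow) carbon pricing: inflate each project's cost by its
# estimated emissions times a per-tonne charge, so greener projects
# compare favorably in capital allocation decisions.

CARBON_PRICE_PER_TONNE = 100  # hypothetical internal shadow price, USD

def shadow_adjusted_cost(budget_usd, est_emissions_tonnes):
    """Project cost after charging for its estimated carbon footprint."""
    return budget_usd + est_emissions_tonnes * CARBON_PRICE_PER_TONNE

projects = {
    "large-model-scaleup": shadow_adjusted_cost(500_000, 300),
    "efficient-distilled-model": shadow_adjusted_cost(520_000, 20),
}
# The nominally cheaper scale-up loses once emissions are priced in.
print(min(projects, key=projects.get))
```

The design choice that matters is the price level: set too low, the charge never changes a funding decision; set realistically, it steers investment toward efficiency rather than accuracy gains alone.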


Efficiency Development Checklists: Groups like the AI Sustainability Coalition suggest voluntary checklist templates highlighting model design choices, hardware configurations, and other factors architects can tune per application to restrain emissions. Organizations can drive change by ingraining sustainability as a primary success metric alongside accuracy and cost.


Independent Auditing: Even absent public disclosure mandates, firms specializing in technology sustainability audits help AI developers identify waste, create efficiency roadmaps, and benchmark progress via impartial reviews. Structuring such audits into internal governance procedures or the procurement process expands accountability.


16.12.5 Global Considerations


While measurement, restrictions, incentives, and self-regulation represent potential policy mechanisms for furthering AI sustainability, fragmentation across national regimes risks unintended consequences. As with other technology policy domains, divergence between regions must be carefully managed.


For example, due to regional data privacy concerns, OpenAI barred European users from accessing its viral ChatGPT chatbot. This came after the EU’s proposed AI Act signaled a precautionary approach, allowing the EC to ban certain high-risk AI uses and enforcing transparency rules that create uncertainty for releasing brand-new models. However, regulators should be cautious, as heavy-handed action could inadvertently limit European innovation if regimes with lighter-touch regulation attract more private-sector AI research spending and talent. Finding common ground is key.


The OECD principles on AI and the United Nations frameworks underscore universally agreed-upon tenets all national policies should uphold: transparency, accountability, bias mitigation, and more. Constructively embedding sustainability as a core principle for responsible AI within international guidance can motivate unified action without sacrificing flexibility across divergent legal systems. Avoiding race-to-the-bottom dynamics hinges on enlightened multilateral cooperation.


16.13 Public Perception and Engagement


As societal attention and policy efforts aimed at environmental sustainability ramp up worldwide, there is growing enthusiasm for leveraging AI to help address ecological challenges. However, public understanding of and attitudes toward the role of AI systems in sustainability contexts remain unclear and clouded by misconceptions. On the one hand, people hope advanced algorithms can provide new solutions for green energy, responsible consumption, decarbonization pathways, and ecosystem preservation. On the other, fears regarding the risks of uncontrolled AI also seep into the environmental domain and undermine constructive discourse. Furthermore, a lack of public awareness of key issues, like transparency in developing sustainability-focused AI tools and potential biases in data or modeling, also threatens to limit inclusive participation and degrade public trust.


Tackling complex, interdisciplinary priorities like environmental sustainability requires informed, nuanced public engagement and responsible advances in AI innovation. The path forward demands careful, equitable collaborative efforts between experts in ML, climate science, environmental policy, social science, and communication. Mapping the landscape of public perceptions, identifying pitfalls, and charting strategies to cultivate understandable, accessible, and trustworthy AI systems targeting shared ecological priorities will prove essential to realizing sustainability goals. This complex terrain warrants a deep examination of the sociotechnical dynamics involved.


16.13.1 AI Awareness


In May 2022, the Pew Research Center polled 5,101 US adults, finding 60% had heard or read “a little” about AI while 27% had heard “a lot”, indicating decent broad recognition but likely limited comprehension of details or applications. Among those with some AI familiarity, concerns emerge regarding the risk of personal data being used beyond agreed terms. Still, 62% felt AI could ease modern life if applied responsibly. Yet specific understanding of AI in sustainability contexts remains limited.


Studies attempting to categorize online discourse sentiments find a nearly even split between optimism and caution regarding deploying AI for sustainability goals. Factors driving positivity include hopes around better forecasting of ecological shifts using ML models. Negativity arises from a lack of confidence in self-supervised algorithms avoiding unintended consequences due to unpredictable human impacts on complex natural systems during training.


The most prevalent public belief remains that while AI harbors potential for accelerating solutions on issues like emission reductions and wildlife protection, inadequate safeguards around data biases, ethical blindspots, and privacy pose underappreciated risks if AI is pursued carelessly, especially at scale. This leads to hesitancy around unconditional support without evidence of deliberate, democratically guided development.


16.13.2 Messaging


Optimistic messaging highlights AI’s sustainability promise, emphasizing the potential for advanced ML to radically accelerate decarbonization through smart grids, personalized carbon tracking apps, automated building efficiency optimizations, and predictive analytics guiding targeted conservation efforts. More comprehensive real-time modeling of complex climate and ecological shifts using self-improving algorithms offers hope for mitigating biodiversity losses and averting worst-case scenarios.


However, cautionary perspectives, such as the Asilomar AI Principles, question whether AI itself could exacerbate sustainability challenges if improperly constrained. The rising energy demands of large-scale computing systems and the increasingly massive neural network model training conflict with clean energy ambitions. Lack of diversity in data inputs or developers’ priorities may downplay urgent environmental justice considerations. Near-term skeptical public engagement likely hinges on a need for perceivable safeguards against uncontrolled AI systems running amok on core ecological processes.


In essence, polarized framings either promote AI as an indispensable tool for sustainability problem-solving–if compassionately directed toward people and the planet–or present AI as an amplifier of existing harms insidiously dominating hidden facets of natural systems central to all life. Overcoming such impasses demands balancing honest trade-off discussions with shared visions for equitable, democratically governed technological progress targeting restoration.


16.13.3 Equitable Participation


Ensuring equitable participation and access should form a cornerstone of any sustainability initiative with the potential for major societal impacts. This principle applies equally to AI systems targeting environmental goals. However, commonly excluded voices like frontline, rural, or indigenous communities and future generations not present to consent could suffer disproportionate consequences from technology transformations. For instance, the Partnership on AI has launched events expressly targeting input from marginalized communities on deploying AI responsibly.


Inclusive engagement in environmental AI also relies partly on the availability and understanding of fundamental computing resources. As the recent OECD report on National AI Compute Capacity highlights (OECD 2023), many countries currently lack data or strategic plans mapping the infrastructure needed to fuel AI systems. This policy blind spot could constrain economic goals and exacerbate barriers to entry for marginalized populations. The report's blueprint urges developing national AI compute capacity strategies along the dimensions of capacity, accessibility, innovation pipelines, and resilience. Uneven access to the underlying data storage, model development platforms, and specialized hardware could otherwise inadvertently concentrate AI progress in the hands of select groups. Planning for a balanced expansion of fundamental AI computing resources via policy initiatives therefore ties directly to hopes for democratized sustainability problem-solving using equitable and transparent ML tools.

OECD. 2023. "A Blueprint for Building National Compute Capacity for Artificial Intelligence." 350. Organisation for Economic Co-operation and Development (OECD). https://doi.org/10.1787/876367e3-en.

The key idea is that equitable participation in AI systems targeting environmental challenges depends in part on ensuring the underlying computing capacity and infrastructure are in place, which requires proactive policy planning at the national level.


16.13.4 Transparency


As public sector agencies and private companies alike rush towards adopting AI tools to help tackle pressing environmental challenges, calls for transparency around these systems’ development and functionality have begun to amplify. Explainable and interpretable ML features grow more crucial for building trust in emerging models aiming to guide consequential sustainability policies. Initiatives like the Montreal Carbon Pledge brought tech leaders together to commit to publishing impact assessments before launching environmental systems, as pledged below:


*“As institutional investors, we must act in the best long-term interests of our beneficiaries. In this fiduciary role, long-term investment risks are associated with greenhouse gas emissions, climate change, and carbon regulation.


Measuring our carbon footprint is integral to understanding better, quantifying, and managing the carbon and climate change-related impacts, risks, and opportunities in our investments. Therefore, as a first step, we commit to measuring and disclosing the carbon footprint of our investments annually to use this information to develop an engagement strategy and identify and set carbon footprint reduction targets.”*


We need a similar pledge for AI sustainability and responsibility. Widespread acceptance and impact of AI sustainability solutions will depend partly on deliberate communication of validation schemes, metrics, and the layers of human judgment applied before live deployment. The National Institute of Standards and Technology (NIST) has published an influential set of guidelines, the Principles for Explainable AI (Phillips et al. 2020), that can help foster transparency in AI systems. This framework articulates best practices for designing, evaluating, and deploying responsible AI systems with transparent and interpretable features that build critical user understanding and trust.

Phillips, P. Jonathon, Carina A. Hahn, Peter C. Fontana, David A. Broniatowski, and Mark A. Przybocki. 2020. "Four Principles of Explainable Artificial Intelligence." Gaithersburg, Maryland.

It delineates four core principles: Firstly, AI systems should provide contextually relevant explanations justifying the reasoning behind their outputs to appropriate stakeholders. Secondly, these AI explanations must communicate information meaningfully for their target audience’s appropriate comprehension level. Next is the accuracy principle, which dictates that explanations should faithfully reflect the actual process and logic informing an AI model’s internal mechanics for generating given outputs or recommendations based on inputs. Finally, a knowledge limits principle compels explanations to clarify an AI model’s boundaries in capturing the full breadth of real-world complexity, variance, and uncertainties within a problem space.


Altogether, these NIST principles offer AI practitioners and adopters guidance on key transparency considerations vital for developing accessible solutions prioritizing user autonomy and trust rather than simply maximizing predictive accuracy metrics alone. As AI rapidly advances across sensitive social contexts like healthcare, finance, employment, and beyond, such human-centered design guidelines will continue growing in importance for anchoring innovation to public interests.


This applies equally to the domain of environmental sustainability. Responsible and democratically guided AI innovation targeting shared ecological priorities depends on maintaining public vigilance, understanding, and oversight over otherwise opaque systems taking prominent roles in societal decisions. Prioritizing explainable algorithm designs and radical transparency practices per global standards can help sustain collective confidence that these tools improve rather than imperil hopes for a sustainable future.


16.14 Future Directions and Challenges


As we look towards the future, the role of AI in environmental sustainability is poised to grow even more significant. AI's potential to drive advancements in renewable energy, climate modeling, conservation efforts, and more is immense. However, it is a double-edged sword: we must overcome several challenges and direct our efforts toward sustainable and responsible AI development.


16.14.1 Future Directions


One key future direction is the development of more energy-efficient AI models and algorithms. This involves ongoing research and innovation in areas like model pruning, quantization, and low-precision numerics, as well as developing hardware that lets these innovations deliver their full benefit. Looking further out, researchers are exploring alternative computing paradigms that do not rely on von Neumann architectures; more on this topic can be found in the hardware acceleration chapter. The goal is to create AI systems that deliver high performance while minimizing energy consumption and carbon emissions.
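To make the mention of quantization concrete, here is a minimal NumPy sketch of symmetric post-training int8 quantization of a single weight tensor. The scale and clipping scheme shown is one common convention, not a specific framework's API:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: map the largest
    magnitude in the tensor to the int8 limit of 127."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from int8 values."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(q.nbytes / w.nbytes)               # 0.25: int8 is 4x smaller than float32
print(np.abs(w - w_hat).max() <= scale)  # True: error bounded by one quantization step
```

Storing weights as int8 cuts memory and bandwidth by 4x relative to float32, which is one of the levers behind the energy savings discussed above.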


Another important direction is the integration of renewable energy sources into AI infrastructure. As data centers continue to be major contributors to AI’s carbon footprint, transitioning to renewable energy sources like solar and wind is crucial. Developments in long-term, sustainable energy storage, such as Ambri, an MIT spinoff, could enable this transition. This requires significant investment and collaboration between tech companies, energy providers, and policymakers.


16.14.2 Challenges


Despite these promising directions, several challenges need to be addressed. A major one is the lack of consistent standards and methodologies for measuring and reporting the environmental impact of AI; such methods must capture the complexity of the life cycles of AI models and system hardware. Next, efficient and environmentally sustainable AI infrastructure and system hardware are needed. This effort has three components: maximizing the utilization of accelerator and system resources, prolonging the lifetime of AI infrastructure, and designing system hardware with environmental impact in mind.


On the software side, we must weigh the benefits of experimentation against its training cost. Techniques such as neural architecture search and hyperparameter optimization enable design-space exploration but are often very resource-intensive; making experimentation more efficient can significantly reduce this environmental overhead. Methods to reduce wasted training effort should also be explored.


To improve model quality, we often scale up the dataset. However, the additional system resources required for data storage and ingestion have a significant environmental impact (Wu et al. 2022). Understanding the rate at which data loses its predictive value, and devising data sampling strategies accordingly, is therefore important.

Wu, Carole-Jean, Ramya Raghavendra, Udit Gupta, Bilge Acun, Newsha Ardalani, Kiwan Maeng, Gloria Chang, et al. 2022. "Sustainable AI: Environmental Implications, Challenges and Opportunities." Proceedings of Machine Learning and Systems 4: 795–813.

Data gaps also pose a significant challenge. Without companies and governments openly sharing detailed and accurate data on energy consumption, carbon emissions, and other environmental impacts, it isn’t easy to develop effective strategies for sustainable AI.


Finally, the fast pace of AI development demands an agile approach to policymaking: policy should ensure sustainable development without constraining innovation. This requires experts across AI, environmental science, energy, and policy to work together toward a sustainable future.


16.15 Conclusion


We must address sustainability considerations as AI rapidly expands across industries and society. AI promises breakthrough innovations, yet its environmental footprint threatens its widespread growth. This chapter analyzes multiple facets, from energy and emissions to waste and biodiversity impacts, that AI/ML developers must weigh when creating responsible AI systems.


Fundamentally, we must elevate sustainability to a primary design priority rather than an afterthought. Techniques like energy-efficient models, renewable-powered data centers, and hardware recycling programs offer solutions, but a holistic commitment remains vital. We need standards around transparency, carbon accounting, and supply-chain disclosures to supplement technical gains. Still, examples like Google's 4M efficiency practices for containing ML energy use show that, with concerted effort, we can advance AI in lockstep with environmental objectives. We achieve this balance when researchers, corporations, regulators, and users collaborate across domains. The aim is not perfect solutions but continuous improvement as we integrate AI across new sectors.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer hands-on labs that allow students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

7  AI Training


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: An illustration for AI training, depicting a neural network with neurons that are being repaired and firing. The scene includes a vast network of neurons, each glowing and firing to represent activity and learning. Among these neurons, small figures resembling engineers and scientists are actively working, repairing and tweaking the neurons. These miniature workers symbolize the process of training the network, adjusting weights and biases to achieve convergence. The entire scene is a visual metaphor for the intricate and collaborative effort involved in AI training, with the workers representing the continuous optimization and learning within a neural network. The background is a complex array of interconnected neurons, creating a sense of depth and complexity.

Training is central to developing accurate and useful AI systems using machine learning techniques. At a high level, training involves feeding data into machine learning algorithms so they can learn patterns and make predictions. However, effectively training models requires tackling various challenges around data, algorithms, optimization of model parameters, and enabling generalization. This chapter will explore the nuances and considerations around training machine learning models.

Learning Objectives

  • Understand the fundamental mathematics of neural networks, including linear transformations, activation functions, loss functions, backpropagation, and optimization via gradient descent.
  • Learn how to effectively leverage data for model training through proper splitting into train, validation, and test sets to enable generalization.
  • Learn various optimization algorithms like stochastic gradient descent and adaptations like momentum and Adam that accelerate training.
  • Understand hyperparameter tuning and regularization techniques to improve model generalization by reducing overfitting.
  • Learn proper weight initialization strategies matched to model architectures and activation choices that accelerate convergence.
  • Identify the bottlenecks posed by key operations like matrix multiplication during training and deployment.
  • Learn how hardware improvements like GPUs, TPUs, and specialized accelerators speed up critical math operations to accelerate training.
  • Understand parallelization techniques, both data and model parallelism, to distribute training across multiple devices and accelerate system throughput.

7.1 Introduction


Training is critical for developing accurate and useful AI systems using machine learning. Training aims to create a machine learning model that can generalize to new, unseen data rather than memorizing the training examples. This is done by feeding training data into algorithms that learn patterns from these examples by adjusting internal parameters.


The algorithms minimize a loss function, which compares their predictions on the training data to the known labels or solutions, guiding the learning. Effective training often requires high-quality, representative data sets large enough to capture variability in real-world use cases.


It also requires choosing an algorithm suited to the task, whether a neural network for computer vision, a reinforcement learning algorithm for robotic control, or a tree-based method for categorical prediction. Careful tuning is needed for the model structure, such as neural network depth and width, and learning parameters like step size and regularization strength.


Techniques to prevent overfitting, like regularization penalties and validation with held-out data, are also important. Overfitting can occur when a model fits the training data too closely and fails to generalize to new data. This can happen if the model is too complex or trained too long.


To avoid overfitting, regularization techniques can help constrain the model. One regularization method is adding a penalty term to the loss function that discourages complexity, such as the squared L2 norm of the weights, which penalizes large parameter values. Another technique is dropout, where a percentage of neurons is randomly set to zero during training, reducing neuron co-adaptation.
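The two regularizers just described (an L2 weight penalty and dropout) can be sketched in a few lines of NumPy. This is a minimal illustration; real frameworks provide these as built-in layers and loss terms:

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_penalty(weights, lam=1e-4):
    """Penalty term added to the loss: lam * sum of squared weights.
    Discourages large parameter values."""
    return lam * sum(np.sum(W ** 2) for W in weights)

def dropout(a, p=0.5, training=True):
    """Inverted dropout: zero a fraction p of activations during training
    and scale survivors by 1/(1-p) so the expected activation is unchanged
    at inference time (when training=False, this is the identity)."""
    if not training:
        return a
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)

total_loss = 1.23 + l2_penalty([np.ones((3, 3))], lam=0.01)  # data loss + penalty
hidden = dropout(np.ones((4, 8)), p=0.5)  # roughly half zeros, rest scaled to 2.0
```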


Validation methods also help detect and avoid overfitting. Part of the training data is held out from the training loop as a validation set. The model is evaluated on this data. If validation error increases while training error decreases, overfitting occurs. The training can then be stopped early or regularized more strongly. Regularization and validation enable models to train to maximum capability without overfitting the training data.
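The validation-based early stopping described here can be sketched as a loop; `train_epoch` and `evaluate` are hypothetical callables standing in for one pass over the training set and a validation-loss computation:

```python
def train_with_early_stopping(model, train_data, val_data,
                              train_epoch, evaluate,
                              max_epochs=100, patience=5):
    """Stop training once validation loss has not improved for
    `patience` consecutive epochs, a standard sign of overfitting."""
    best_val, stale_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        train_epoch(model, train_data)        # one pass over the training set
        val_loss = evaluate(model, val_data)  # loss on held-out validation data
        if val_loss < best_val:
            best_val, stale_epochs = val_loss, 0
        else:
            stale_epochs += 1
            if stale_epochs >= patience:      # validation error keeps rising
                break
    return best_val
```

The `patience` window tolerates noisy validation curves; stopping on the first uptick would often halt training prematurely.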


Training takes significant computing resources, especially for deep neural networks used in computer vision, natural language processing, and other areas. These networks have millions of adjustable weights that must be tuned through extensive training. Hardware improvements and distributed training techniques have enabled training ever larger neural nets that can achieve human-level performance on some tasks.


In summary, some key points about training:

  • Data is crucial: Machine learning models learn from examples in training data. More high-quality, representative data leads to better model performance. Data needs to be processed and formatted for training.
  • Algorithms learn from data: Different algorithms (neural networks, decision trees, etc.) have different approaches to finding patterns in data. Choosing the right algorithm for the task is important.
  • Training refines model parameters: Model training adjusts internal parameters to find patterns in data. Advanced models like neural networks have many adjustable weights. Training iteratively adjusts weights to minimize a loss function.
  • Generalization is the goal: A model that overfits the training data will not generalize well. Regularization techniques (dropout, early stopping, etc.) reduce overfitting. Validation data is used to evaluate generalization.
  • Training takes compute resources: Training complex models requires significant processing power and time. Hardware improvements and distributed training across GPUs/TPUs have enabled advances.

We will walk you through these details in the rest of the sections. Understanding how to effectively leverage data, algorithms, parameter optimization, and generalization through thorough training is essential for developing capable, deployable AI systems that work robustly in the real world.


7.2 Mathematics of Neural Networks


Deep learning has revolutionized machine learning and artificial intelligence, enabling computers to learn complex patterns and make intelligent decisions. The neural network is at the heart of the deep learning revolution, and as discussed in section 3, “Deep Learning Primer,” it is a cornerstone in some of these advancements.


Neural networks are made up of simple functions layered on each other. Each layer takes in some data, performs some computation, and passes it to the next layer. These layers learn progressively high-level features useful for the tasks the network is trained to perform. For example, in a network trained for image recognition, the input layer may take in pixel values, while the next layers may detect simple shapes like edges. The layers after that may detect more complex shapes like noses, eyes, etc. The final output layer classifies the image as a whole.


The "network" in a neural network refers to how these layers are connected. Each layer's outputs feed into the neurons of the next layer, with every neuron connected to many neurons in the adjacent layers, forming a "network." How these neurons interact is determined by the weights between them, which model synaptic strengths similar to those of a brain's neurons. The neural network is trained by adjusting these weights: concretely, the weights are initially set randomly, input is fed in, the output is compared to the desired result, and the weights are tweaked to improve the network. This process is repeated until the network reliably minimizes the loss, indicating it has learned the patterns in the data.


How is this process defined mathematically? Formally, neural networks are mathematical models consisting of alternating linear and nonlinear operations, parameterized by a set of learnable weights trained to minimize a loss function. The loss function measures how well the model fits the training data, producing a numerical value when evaluated on the model against the training data. Training involves repeatedly evaluating the loss function on many data points to measure how good the model is, then continuously tweaking the weights using backpropagation so that the loss decreases, ultimately optimizing the model to fit the data.


7.2.1 Neural Network Notation


Diving into the details, the core of a neural network can be viewed as a sequence of alternating linear and nonlinear operations, as shown in Figure fig-neural-net-diagram:


\[
L_i = W_i A_{i-1}
\]


\[
A_i = F_i(L_{i})
\]


Why are the nonlinear operations necessary? If we only had linear layers, the entire network would be equivalent to a single linear layer consisting of the product of the linear operators. Hence, the nonlinear functions play a key role in the power of neural networks as they enhance the neural network’s ability to fit functions.
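The alternating linear and nonlinear operations above map directly to code. A minimal NumPy forward pass, with layer sizes chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)                # nonlinear activation F_i

# W_i has shape (d_i, d_{i-1}); here d_0 = 100, d_1 = 30, d_2 = 10
weights = [rng.standard_normal((30, 100)) * 0.1,
           rng.standard_normal((10, 30)) * 0.1]

def forward(x, weights):
    a = x                                    # A_0 = input vector
    for W in weights:
        l = W @ a                            # L_i = W_i A_{i-1}
        a = relu(l)                          # A_i = F_i(L_i)
    return a                                 # A_n = network output

x = rng.standard_normal(100)
print(forward(x, weights).shape)             # (10,)
```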


Convolutions are also linear operators and can be cast as a matrix multiplication.
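As a small illustration of this point, a 1-D "valid" convolution (in the cross-correlation sense used by ML frameworks) can be rewritten as a matrix multiply by stacking sliding windows of the input, an im2col-style construction. The function below is a toy sketch, not a library API:

```python
import numpy as np

def conv1d_as_matmul(kernel, signal):
    """Cast a 1-D 'valid' sliding-window correlation as a matrix multiply:
    stack each window of the signal as a row, then multiply by the kernel."""
    k, n = len(kernel), len(signal)
    patches = np.stack([signal[i:i + k] for i in range(n - k + 1)])
    return patches @ kernel

kernel = np.array([1.0, 0.0, -1.0])
signal = np.arange(6, dtype=float)
print(conv1d_as_matmul(kernel, signal))            # [-2. -2. -2. -2.]
print(np.correlate(signal, kernel, mode="valid"))  # identical result
```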

Figure 7.1: Neural network diagram. Credit: astroML.

Where \(A_{0}\) is a vector input to the neural network (i.e., an image that we want the neural network to classify or some other data that the neural network operates on), \(A_{n}\) (where \(n\) is the number of layers of the network) is the vector output of the neural network (i.e., a vector of size 10 in the case of classifying pictures of handwritten digits), \(W_i\)s are the weights of the neural network that are tweaked at training time to fit our data, and \(F_{i}\) is that layer’s nonlinear activation function (i.e., ReLU, softmax, etc.). As defined, the intermediate output of the neural network is a vector of real-valued numbers with dimensions:


\[
L_i, A_i \in \mathbb{R}^{d_{i}}
\]


Where \(d_{i}\) is the number of neurons at layer \(i\); in the case of the first layer \(i=0\), \(d_{i}\) is the dimension of the input data, and in the last layer \(i=n\), \(d_{n}\) is the dimension of the output label. Anything in between can be set arbitrarily and may be viewed as the architecture of the neural network (i.e., the dimensionality of the intermediate layers). The weights, which determine how each layer of the neural network interacts with the others, are matrices of real numbers with shape:


\[
W_i \in \mathbb{R}^{d_{i} \times d_{i-1}}
\]


Our neural network, as defined, performs a sequence of linear and nonlinear operations on the input data (\(A_{0}\)) to obtain predictions (\(A_{n}\)), which hopefully is a good answer to what we want the neural network to do on the input (i.e., classify whether the input image is a cat or not). Our neural network may then be represented succinctly as a function \(N\) which takes in an input \(x \in \mathbb{R}^{d_0}\) parameterized by \(W_1, ..., W_n\):


\[
N(x; W_1, ..., W_n) = \text{Let } A_0 = x, \text{ then output } A_n
\]


Next, we will see how to evaluate this neural network against training data by introducing a loss function.


7.2.2 Loss Function as a Measure of Goodness of Fit against Training Data


After defining our neural network, we are given some training data, which is a set of points \({(x_j, y_j)}\) for \(j=1..M\), and we want to evaluate how good our neural network is at fitting this data. To do this, we introduce a loss function, which is a function that takes the output of the neural network on a particular datapoint (\(N(x_j; W_1, ..., W_n)\)) and compares it against the “label” of that particular datapoint (the corresponding \(y_j\)), and outputs a single numerical scalar (i.e., one real number) that represents how “good” the neural network fit that particular data point; the final measure of how good the neural network is on the entire dataset is therefore just the average of the losses across all data points.


There are many different types of loss functions; for example, in the case of image classification, we might use the cross-entropy loss function, which tells us how well two vectors representing classification predictions compare (i.e., if our prediction predicts that an image is more likely a dog, but the label says it is a cat, it will return a high “loss,” indicating a bad fit).


Mathematically, this loss function is a function that takes in two real-valued vectors of the shape of the label and outputs a single numerical scalar:
\[
L: \mathbb{R}^{d_{n}} \times \mathbb{R}^{d_{n}} \longrightarrow \mathbb{R}
\]


The loss across the entire dataset can be written as the average loss across all data points in the training data.


Loss Function for Optimizing Neural Network Model on a Dataset
\[
L_{full} = \frac{1}{M} \sum_{j=1}^{M} L(N(x_j; W_1, ..., W_n), y_j)
\]
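As a sketch, the dataset-level loss is simply the mean of per-example losses; here with MSE as the per-example loss \(L\) and the network's prediction function treated as a black box:

```python
import numpy as np

def mse(pred, y):
    """Per-example loss L: mean squared error between prediction and label."""
    return np.mean((pred - y) ** 2)

def full_loss(predict, xs, ys):
    """L_full: average of L(N(x_j), y_j) over all M data points."""
    return np.mean([mse(predict(x), y) for x, y in zip(xs, ys)])

# toy check: a predictor that always outputs the label has zero loss
xs = np.zeros((5, 3))
ys = np.ones((5, 2))
print(full_loss(lambda x: np.ones(2), xs, ys))   # 0.0
```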


7.2.3 Training Neural Networks with Gradient Descent


Now that we can measure how well our network fits the training data, we can optimize the neural network weights to minimize this loss. At a high level, we tweak the parameters of the real-valued matrices \(W_i\)s to minimize the loss function \(L_{full}\). Overall, our mathematical objective is


Neural Network Training Objective
\[
\min_{W_1, ..., W_n} L_{full} = \min_{W_1, ..., W_n} \frac{1}{M} \sum_{j=1}^{M} L(N(x_j; W_1, ..., W_n), y_j)
\]


So, how do we optimize this objective? Recall from calculus that minimizing a function can be done by taking its derivative with respect to the input parameters and tweaking the parameters in the direction opposite the gradient. This technique is called gradient descent and concretely involves calculating the derivative of the loss function \(L_{full}\) with respect to \(W_1, ..., W_n\) to obtain a gradient for these parameters, then updating the parameters by taking a step against the gradient direction. Thus, we can train our neural network using gradient descent, which repeatedly applies the update rule:


Gradient Descent Update Rule
\[
W_i := W_i - \lambda \frac{\partial L_{full}}{\partial W_i} \mbox{ for } i=1..n
\]


In practice, the gradient is computed over a minibatch of data points to improve computational efficiency. This is called minibatch or stochastic gradient descent.


Where \(\lambda\) is the step size, or learning rate, of our tweaks. In training our neural network, we repeatedly perform the step above until convergence, i.e., when the loss no longer decreases. Figure fig-gradient-descent illustrates this process: we want to reach the minimum point, which is done by following the gradient (as illustrated with the blue arrows in the figure). This approach is known as full gradient descent, since we compute the derivative with respect to the entire training data before taking a single gradient step; a more efficient approach is to calculate the gradient with respect to just a random batch of data points and then take a step, a process known as minibatch or stochastic gradient descent (Robbins and Monro 1951), which is more efficient since we take many more steps per pass over the entire training data. Next, we will cover the mathematics behind computing the gradient of the loss function with respect to the \(W_i\)s, a process known as backpropagation.
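The update rule can be exercised end to end on a toy problem. Below is a minimal NumPy sketch of minibatch stochastic gradient descent on a two-parameter linear regression; the learning rate, batch size, and step count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# noise-free linear data: y = X @ w_true, so the loss minimum is exactly w_true
w_true = np.array([2.0, -3.0])
X = rng.standard_normal((256, 2))
y = X @ w_true

w = np.zeros(2)                 # initial weights
lr, batch = 0.1, 32             # learning rate (lambda) and minibatch size
for step in range(200):
    idx = rng.choice(len(X), size=batch, replace=False)  # sample a minibatch
    Xb, yb = X[idx], y[idx]
    grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch   # d(MSE)/dw on this batch
    w -= lr * grad                              # W := W - lambda * gradient

print(np.allclose(w, w_true, atol=1e-3))        # True: SGD recovers w_true
```

Each step uses only 32 of the 256 examples, so one pass over the data yields eight parameter updates rather than one, which is exactly the efficiency argument made above.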

Robbins, Herbert, and Sutton Monro. 1951. "A Stochastic Approximation Method." The Annals of Mathematical Statistics 22 (3): 400–407. https://doi.org/10.1214/aoms/1177729586.
Figure 7.2: Gradient descent. Credit: Towards Data Science.

7.2.4 Backpropagation


Training neural networks involves repeated applications of the gradient descent algorithm, which requires computing the derivative of the loss function with respect to the \(W_i\)s. How do we compute this derivative, given that the \(W_i\)s are nested functions of each other in a deep neural network? The trick is to leverage the chain rule: we can compute the derivative of the loss with respect to the \(W_i\)s by repeatedly applying the chain rule, a process known as backpropagation. Specifically, we calculate the gradients by computing the derivative of the loss with respect to the outputs of the last layer, then progressively use this to compute the derivative of the loss with respect to each prior layer back to the input layer. This process starts from the end of the network (the layer closest to the output) and progresses backwards, hence the name backpropagation.


Let’s break this down. We can compute the derivative of the loss with respect to the outputs of each layer of the neural network using repeated applications of the chain rule.


\[
\frac{\partial L_{full}}{\partial L_{n}} = \frac{\partial A_{n}}{\partial L_{n}} \frac{\partial L_{full}}{\partial A_{n}}
\]


\[
\frac{\partial L_{full}}{\partial L_{n-1}} = \frac{\partial A_{n-1}}{\partial L_{n-1}} \frac{\partial L_{n}}{\partial A_{n-1}} \frac{\partial A_{n}}{\partial L_{n}} \frac{\partial L_{full}}{\partial A_{n}}
\]


or more generally


\[
\frac{\partial L_{full}}{\partial L_{i}} = \frac{\partial A_{i}}{\partial L_{i}} \frac{\partial L_{i+1}}{\partial A_{i}} \cdots \frac{\partial A_{n}}{\partial L_{n}} \frac{\partial L_{full}}{\partial A_{n}}
\]


In what order should we perform this computation? From a computational perspective, it is preferable to perform the calculations from the end to the front (i.e., first compute \(\frac{\partial L_{full}}{\partial A_{n}}\), then the prior terms, rather than starting in the middle), since this avoids materializing and computing large Jacobians. This is because \(\frac{\partial L_{full}}{\partial A_{n}}\) is a vector; hence, any matrix operation that includes this term has an output that is squished down to a vector. Performing the computation from the end thus avoids large matrix-matrix multiplications by ensuring that the intermediate products are vectors.


In our notation, we assume the intermediate activations \(A_{i}\) are column vectors, rather than row vectors, hence the chain rule is \(\frac{\partial L}{\partial L_{i}} = \frac{\partial L_{i+1}}{\partial L_{i}} ... \frac{\partial L}{\partial L_{n}}\) rather than \(\frac{\partial L}{\partial L_{i}} = \frac{\partial L}{\partial L_{n}} ... \frac{\partial L_{i+1}}{\partial L_{i}}\)

After computing the derivative of the loss with respect to the output of each layer, we can easily obtain the derivative of the loss with respect to the parameters, again using the chain rule:

\[\frac{\partial L_{full}}{\partial W_{i}} = \frac{\partial L_{i}}{\partial W_{i}} \frac{\partial L_{full}}{\partial L_{i}}\]

And this is ultimately how the derivatives of the layers’ weights are computed using backpropagation! What does this concretely look like in practice? Below, we walk through a specific example of a simple 2-layer neural network on a regression task using an MSE loss function with 100-dimensional inputs and a 30-dimensional hidden layer:

Example of Backpropagation

Suppose we have a two-layer neural network
\[L_1 = W_1 A_{0}\]
\[A_1 = ReLU(L_1)\]
\[L_2 = W_2 A_{1}\]
\[A_2 = ReLU(L_2)\]
\[NN(x) = \mbox{Let } A_{0} = x \mbox{ then output } A_2\]
where \(W_1 \in \mathbb{R}^{30 \times 100}\) and \(W_2 \in \mathbb{R}^{1 \times 30}\). Furthermore, suppose we use the MSE loss function:
\[L(x, y) = (x-y)^2\]
We wish to compute
\[\frac{\partial L(NN(x), y)}{\partial W_i} \mbox{ for } i=1,2\]
Note the following:
\[\frac{\partial L(x, y)}{\partial x} = 2 \times (x-y)\]
\[\frac{\partial ReLU(x)}{\partial x} \delta = \left\{\begin{array}{lr} 0 & \text{for } x \leq 0 \\ 1 & \text{for } x > 0 \end{array}\right\} \odot \delta\]
\[\frac{\partial WA}{\partial A} \delta = W^T \delta\]
\[\frac{\partial WA}{\partial W} \delta = \delta A^T\]
Then we have
\[\frac{\partial L(NN(x), y)}{\partial W_2} = \frac{\partial L_2}{\partial W_2} \frac{\partial A_2}{\partial L_2} \frac{\partial L(NN(x), y)}{\partial A_2} = (2(NN(x) - y) \odot ReLU'(L_2)) A_1^T\]
and
\[\frac{\partial L(NN(x), y)}{\partial W_1} = \frac{\partial L_1}{\partial W_1} \frac{\partial A_1}{\partial L_1} \frac{\partial L_2}{\partial A_1} \frac{\partial A_2}{\partial L_2} \frac{\partial L(NN(x), y)}{\partial A_2} = [ReLU'(L_1) \odot (W_2^T [2(NN(x) - y) \odot ReLU'(L_2)])] A_0^T\]


Double-check your work by making sure that the shapes are correct!

  • All Hadamard products (\(\odot\)) should operate on tensors of the same shape
  • All matrix multiplications should operate on matrices that share a common dimension (i.e., m by n, n by k)
  • All gradients with respect to the weights should have the same shape as the weight matrices themselves

The entire backpropagation process can be complex, especially for very deep networks. Fortunately, machine learning frameworks like PyTorch support automatic differentiation, which performs backpropagation for us. In these frameworks, we simply need to specify the forward pass, and the derivatives will be automatically computed for us. Nevertheless, it is beneficial to understand the theoretical process that is happening under the hood in these machine-learning frameworks.
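To make this concrete, the worked example above can be implemented directly in NumPy. This is an illustrative sketch (random weights, a single input, and the same ReLU-plus-MSE setup as the example), not something you would write by hand when using a framework with automatic differentiation:

```python
# Manual backpropagation for the 2-layer ReLU network from the example:
# 100-d input, 30-d hidden layer, scalar output, MSE loss.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((30, 100)) * 0.1   # first-layer weights
W2 = rng.standard_normal((1, 30)) * 0.1     # second-layer weights
A0 = rng.standard_normal((100, 1))          # input column vector x
y = np.array([[1.0]])                       # regression target

# Forward pass: cache intermediate activations for reuse in the backward pass.
L1 = W1 @ A0
A1 = np.maximum(L1, 0.0)                    # ReLU
L2 = W2 @ A1
A2 = np.maximum(L2, 0.0)                    # ReLU
loss = float((A2 - y) ** 2)                 # MSE loss

# Backward pass: apply the chain rule from the output back toward the input.
dA2 = 2.0 * (A2 - y)                        # dLoss/dA2
dL2 = (L2 > 0).astype(float) * dA2          # ReLU'(L2) applied elementwise
dW2 = dL2 @ A1.T                            # same shape as W2: (1, 30)
dA1 = W2.T @ dL2                            # dLoss/dA1
dL1 = (L1 > 0).astype(float) * dA1          # ReLU'(L1) applied elementwise
dW1 = dL1 @ A0.T                            # same shape as W1: (30, 100)
```

Note how each weight gradient ends up with the same shape as the weight matrix itself, which is exactly the shape check recommended above.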


As seen above, intermediate activations \(A_i\) are reused in backpropagation. To improve performance, these activations are cached from the forward pass to avoid being recomputed. However, activations must be kept in memory between the forward and backward passes, leading to higher memory usage. If the network and batch size are large, this may lead to memory issues. Similarly, the derivatives with respect to each layer’s outputs are cached to avoid recomputation.


Exercise 7.1 (Neural Networks with Backpropagation and Gradient Descent)  


Unlock the math behind powerful neural networks! Deep learning might seem like magic, but it’s rooted in mathematical principles. In this chapter, you’ve broken down neural network notation, loss functions, and the powerful technique of backpropagation. Now, prepare to implement this theory with these Colab notebooks. Dive into the heart of how neural networks learn. You’ll see the math behind backpropagation and gradient descent, updating those weights step-by-step.



7.3 Differentiable Computation Graphs


In general, stochastic gradient descent using backpropagation can be performed on any computational graph that a user may define, provided that the operations of the computation are differentiable. As such, generic deep learning libraries like PyTorch and TensorFlow allow users to specify their computational process (i.e., neural networks) as a computational graph. Backpropagation is automatically performed via automatic differentiation when stochastic gradient descent is performed on these computational graphs. Framing AI training as an optimization problem on differentiable computation graphs is a general way to understand what is happening under the hood with deep learning systems.


The structure depicted in Figure fig-computational-graph showcases a segment of a differentiable computational graph. In this graph, the input ‘x’ is processed through a series of operations: it is first multiplied by a weight matrix ‘W’ (MatMul), then added to a bias ‘b’ (Add), and finally passed to an activation function, Rectified Linear Unit (ReLU). This sequence of operations gives us the output C. The graph’s differentiable nature means that each operation has a well-defined gradient. Automatic differentiation, as implemented in ML frameworks, leverages this property to efficiently compute the gradients of the loss with respect to each parameter in the network (e.g., ‘W’ and ‘b’).
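To illustrate the idea, here is a minimal, hypothetical reverse-mode autodiff engine for scalar operations, in the spirit of what frameworks like PyTorch do on full tensor graphs. The `Value` class and its methods are toy constructions for this sketch, not a real framework API:

```python
# A toy reverse-mode automatic differentiation engine. Each Value records its
# parents in the graph and the local derivative with respect to each parent.
class Value:
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents      # upstream nodes in the computation graph
        self._grad_fns = grad_fns    # local gradient w.r.t. each parent

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     (lambda g: g * other.data, lambda g: g * self.data))

    def __add__(self, other):
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def relu(self):
        return Value(max(self.data, 0.0), (self,),
                     (lambda g: g if self.data > 0 else 0.0,))

    def backward(self):
        # Reverse topological order ensures each node's output gradient is
        # complete before being propagated to its parents (the chain rule).
        topo, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v._parents:
                    visit(p)
                topo.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(topo):
            for p, fn in zip(v._parents, v._grad_fns):
                p.grad += fn(v.grad)

# c = ReLU(w * x + b), mirroring the MatMul -> Add -> ReLU chain with scalars
w, x, b = Value(2.0), Value(3.0), Value(-1.0)
c = (w * x + b).relu()
c.backward()   # now w.grad, x.grad, b.grad hold dc/dw, dc/dx, dc/db
```

Real frameworks do the same bookkeeping on tensors, fusing and scheduling the backward operations efficiently.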

Figure 7.3: Computational Graph. Credit: TensorFlow.

7.4 Training Data


To enable effective neural network training, the available data must be split into training, validation, and test sets. The training set is used to train the model parameters. The validation set evaluates the model during training to tune hyperparameters and prevent overfitting. The test set provides an unbiased final evaluation of the trained model’s performance.


Maintaining clear splits between train, validation, and test sets with representative data is crucial to properly training, tuning, and evaluating models to achieve the best real-world performance. To this end, we will learn about the common pitfalls or mistakes people make when creating these data splits.


Table tbl-training_splits compares the differences between training, validation, and test data splits:

Table 7.1: Comparing training, validation, and test data splits.

| Data Split     | Purpose                                                                        | Typical Size         |
|----------------|--------------------------------------------------------------------------------|----------------------|
| Training Set   | Train the model parameters                                                     | 60-80% of total data |
| Validation Set | Evaluate model during training to tune hyperparameters and prevent overfitting | ∼20% of total data   |
| Test Set       | Provide unbiased evaluation of final trained model                             | ∼20% of total data   |

7.4.1 Dataset Splits


Training Set


The training set is used to train the model. It is the largest subset, typically 60-80% of the total data. The model sees and learns from the training data to make predictions. A sufficiently large and representative training set is required for the model to learn the underlying patterns effectively.


Validation Set


The validation set evaluates the model during training, usually after each epoch. Typically, 20% of the data is allocated for the validation set. The model does not learn or update its parameters based on the validation data. It is used to tune hyperparameters and make other tweaks to improve training. Monitoring metrics like loss and accuracy on the validation set prevents overfitting on just the training data.


Test Set


The test set acts as a completely unseen dataset that the model did not see during training. It is used to provide an unbiased evaluation of the final trained model. Typically, 20% of the data is reserved for testing. Maintaining a hold-out test set is vital for obtaining an accurate estimate of how the trained model would perform on real-world unseen data. Data leakage from the test set must be avoided at all costs.


The relative proportions of the training, validation, and test sets can vary based on data size and application. However, following the general guidelines for a 60/20/20 split is a good starting point. Careful data splitting ensures models are properly trained, tuned, and evaluated to achieve the best performance.
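A 60/20/20 split can be sketched using only the Python standard library. The `split_dataset` helper below is illustrative, not from any particular library; note that the data is shuffled before splitting:

```python
# Sketch of a 60/20/20 train/validation/test split. Shuffling before
# splitting avoids leakage caused by ordering in the original dataset.
import random

def split_dataset(samples, train_frac=0.6, val_frac=0.2, seed=42):
    data = list(samples)
    random.Random(seed).shuffle(data)        # deterministic shuffle of a copy
    n = len(data)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]            # remainder (~20%)
    return train, val, test

train, val, test = split_dataset(range(1000))
```

In practice, library routines (e.g., Scikit-Learn's splitters) add conveniences such as stratification, which we discuss below.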


The video below explains how to properly split the dataset into training, validation, and testing sets, ensuring an optimal training process.


7.4.2 Common Pitfalls and Mistakes


Insufficient Training Data


Allocating too little data to the training set is a common mistake when splitting data that can severely impact model performance. If the training set is too small, the model will not have enough samples to effectively learn the true underlying patterns in the data. This leads to high variance and causes the model to fail to generalize well to new data.


For example, if you train an image classification model to recognize handwritten digits, providing only 10 or 20 images per digit class would be completely inadequate. The model would need more examples to capture the wide variances in writing styles, rotations, stroke widths, and other variations.


As a rule of thumb, the training set size should be at least hundreds or thousands of examples for most machine learning algorithms to work effectively. Due to the large number of parameters, the training set often needs to be in the tens or hundreds of thousands for deep neural networks, especially those using convolutional layers.


Insufficient training data typically manifests in symptoms like high error rates on validation/test sets, low model accuracy, high variance, and overfitting on small training set samples. Collecting more quality training data is the solution. Data augmentation techniques can also help virtually increase the size of training data for images, audio, etc.


Carefully factoring in model complexity and problem difficulty when allocating training samples is important to ensure sufficient data is available for the model to learn successfully. Following guidelines on minimum training set sizes for different algorithms is also recommended. Sufficient training data is essential to the overall success of any machine learning application.


Consider Figure fig-over-under-fitting where we try to classify/split datapoints into two categories (here, by color): On the left, overfitting is depicted by a model that has learned the nuances in the training data too well (either the dataset was too small or we ran the model for too long), causing it to follow the noise along with the signal, as indicated by the line’s excessive curves. The right side shows underfitting, where the model’s simplicity prevents it from capturing the dataset’s underlying structure, resulting in a line that does not fit the data well. The center graph represents an ideal fit, where the model balances well between generalization and fitting, capturing the main trend of the data without being swayed by outliers. Although the model is not a perfect fit (it misses some points), we care more about its ability to recognize general patterns rather than idiosyncratic outliers.

Figure 7.4: Data fitting: overfitting, right fit, and underfitting. Credit: MathWorks.

Figure fig-fitting-time illustrates the process of fitting the data over time. When training, we search for the “sweet spot” between underfitting and overfitting. At first, when the model hasn’t had enough time to learn the patterns in the data, we find ourselves in the underfitting zone, indicated by high error rates on the validation set (remember that the model is trained on the training set, and we test its generalizability on the validation set, i.e., data it hasn’t seen before). At some point, we achieve a global minimum for error rates, and ideally we want to stop the training there. If we continue training past that point, the model starts “memorizing” the training data so closely that the validation error rate climbs back up, since the model fails to generalize to data it hasn’t seen before.

Figure 7.5: Fitting the data over time. Credit: IBM.

The video below provides an overview of bias and variance and the relationship between the two concepts and model accuracy.


Data Leakage Between Sets


Data leakage refers to the unintentional transfer of information between the training, validation, and test sets. This violates the fundamental assumption that the splits are completely separated. Data leakage leads to seriously compromised evaluation results and inflated performance metrics.


A common way data leakage occurs is if some samples from the test set are inadvertently included in the training data. When evaluating on the test set, the model has already seen some of the data, which yields overly optimistic scores. For example, if 2% of the test data leaks into the training set of a binary classifier, it can result in an accuracy boost of up to 20%!


If the data splits are not done carefully, more subtle forms of leakage can happen. If the splits are not properly randomized and shuffled, samples close to each other in the dataset may end up across different splits. This creates information bleed through based on proximity in the dataset. Time series data is especially vulnerable unless special cross-validation techniques are used.


Preventing data leakage requires creating solid separation between splits—no sample should exist in more than one split. Shuffling and randomized splitting help create robust divisions. Cross-validation techniques can be used for more rigorous evaluation. Detecting leakage is difficult, but telltale signs include models doing way better on test vs. validation data.


Data leakage severely compromises the validity of the evaluation because the model has already partially seen the test data. No amount of tuning or complex architectures can substitute for clean data splits. It is better to be conservative and create complete separation between splits to avoid this fundamental mistake in machine learning pipelines.
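One cheap safeguard is to assert that the splits are disjoint before training begins. A sketch, assuming each sample has a unique identifier (the helper name and the IDs here are hypothetical):

```python
# Sanity check for sample-level leakage: no identifier may appear in more
# than one of the train/validation/test splits.
def check_no_leakage(train_ids, val_ids, test_ids):
    train_s, val_s, test_s = set(train_ids), set(val_ids), set(test_ids)
    assert not (train_s & val_s), "train/validation overlap"
    assert not (train_s & test_s), "train/test overlap"
    assert not (val_s & test_s), "validation/test overlap"

check_no_leakage([1, 2, 3], [4, 5], [6, 7])   # passes: splits are disjoint
```

A check like this catches only exact duplicates; subtler leakage (e.g., near-duplicate or temporally adjacent samples) requires awareness of the dataset's structure.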


Small or Unrepresentative Validation Set


The validation set evaluates models during training and guides hyperparameter tuning. If it is too small or not representative of the real data distribution, it cannot provide reliable and stable evaluations during training, making model selection and tuning more difficult.


For example, if the validation set only contains 100 samples, the metrics calculated will have a high variance. Due to noise, the accuracy may fluctuate up to 5-10% between epochs. This makes it difficult to know if a drop in validation accuracy is due to overfitting or natural variance. With a larger validation set, say 1000 samples, the metrics will be much more stable.


Additionally, if the validation set is not representative, perhaps missing certain subclasses, the estimated skill of the model may be inflated. This could lead to poor hyperparameter choices or premature training stops. Models selected based on such biased validation sets do not generalize well to real data.


A good rule of thumb is that the validation set size should be at least several hundred samples and up to 10-20% of the training set. The splits should also be stratified, especially if working with imbalanced datasets. A larger validation set representing the original data characteristics is essential for proper model selection and tuning.


The validation set should also not be so large that it leaves insufficient samples for training. Overall, the validation set is a critical piece of the data-splitting process, and care should be taken to avoid the pitfalls of small, unrepresentative samples that negatively impact model development.


Reusing the Test Set Multiple Times


The test set is designed to provide an unbiased evaluation of the fully trained model only once at the end of the model development process. Reusing the test set multiple times during development for model evaluation, hyperparameter tuning, model selection, etc., can result in overfitting on the test data.


If the test set is reused as part of the validation process, the model may start to see and learn from the test samples. This, coupled with intentionally or unintentionally optimizing model performance on the test set, can artificially inflate metrics like accuracy.


For example, suppose the test set is used repeatedly to select among 5 architectures. In that case, the model may achieve 99% test accuracy by memorizing the samples rather than learning generalizable patterns. However, when deployed in the real world, its accuracy on new data could drop by 60%.


The best practice is to interact with the test set only once at the end to report unbiased metrics on how the final tuned model would perform in the real world. While developing the model, the validation set should be used for all parameter tuning, model selection, early stopping, etc.


Maintaining the complete separation of training/validation from the test set is essential to obtain accurate estimates of model performance. Even minor deviations from a single use of the test set could positively bias results and metrics, providing an overly optimistic view of real-world efficacy.


Same Data Splits Across Experiments


When comparing different machine learning models or experimenting with various architectures and hyperparameters, using the same data splits for training, validation, and testing across the different experiments can introduce bias and invalidate the comparisons.


If the same splits are reused, the evaluation results may be biased and fail to accurately measure which model performs better. For example, a certain random data split may favor model A over model B irrespective of the algorithms. Reusing this split will then bias the comparison towards model A.


Instead, the data splits should be randomized or shuffled for each experimental iteration. This ensures that randomness in the sampling of the splits does not confer an unfair advantage to any model.


With different splits per experiment, the evaluation becomes more robust. Each model is tested on a wide range of test sets drawn randomly from the overall population, smoothing out variation and removing correlation between results.


Proper practice is to set a random seed before splitting the data for each experiment. Splitting should occur after shuffling/resampling as part of the experimental pipeline. Carrying out comparisons on the same splits violates the i.i.d (independent and identically distributed) assumption required for statistical validity.


Unique splits are essential for fair model comparisons. Though more compute-intensive, randomized allocation per experiment removes sampling bias and enables valid benchmarking. This highlights the true differences in model performance irrespective of a particular split’s characteristics.


Information Leakage Between Sets


Information leakage between the training, validation, and test sets occurs when information from one set inadvertently bleeds into another. This could happen due to flaws in the data-splitting process, which violates the assumption that the sets are mutually exclusive.


For example, consider a dataset sorted chronologically. If a simple random split is performed, samples close to each other in the dataset may end up in different splits. Models could then learn from ‘future’ data if test samples are leaked into the training set.


Similarly, distribution biases may persist across sets if the splits are not properly shuffled. The training set may contain more of certain outliers than the test set, compromising generalization. Issues like class imbalance may also get amplified if splitting is not stratified.


Another case is when datasets have linked, inherently connected samples, such as graphs, networks, or time series data. Naive splitting may isolate connected nodes or time steps into different sets. Models can make invalid assumptions based on partial information.


Preventing information leakage requires awareness of the dataset’s structure and relationships between samples. Shuffling, stratification, and grouped splitting of related samples can help mitigate leakage. Proper cross-validation procedures should be followed, mindful of temporal or sample proximity.


Subtle leakage of information between sets undermines model evaluation and training. It creates misleading results on model effectiveness. Data splitting procedures should account for sample relationships and distribution differences to ensure mutual exclusivity between sets.


Failing to Stratify Splits


When splitting data into training, validation, and test sets, failing to stratify the splits can result in an uneven representation of the target classes across the splits and introduce sampling bias. This is especially problematic for imbalanced datasets.


Stratified splitting involves sampling data points such that the proportion of output classes is approximately preserved in each split. For example, if performing a 70/30 train-test split on a dataset with 60% negative and 40% positive samples, stratification ensures ~60% negative and ~40% positive examples in both training and test sets.
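A stratified split can be sketched in pure Python by splitting each class separately with the same ratio. The `stratified_split` helper below is illustrative; in practice you would typically use a library routine such as Scikit-Learn's stratified splitters:

```python
# Minimal stratified split: group samples by label, then split each group
# with the same ratio, so class proportions are preserved in both splits.
import random
from collections import defaultdict

def stratified_split(samples, labels, test_frac=0.3, seed=0):
    by_class = defaultdict(list)
    for sample, label in zip(samples, labels):
        by_class[label].append(sample)
    rng = random.Random(seed)
    train, test = [], []
    for label, group in by_class.items():
        rng.shuffle(group)                       # shuffle within each class
        n_test = round(len(group) * test_frac)   # same ratio per class
        test.extend(group[:n_test])
        train.extend(group[n_test:])
    return train, test

# 60% negative (label 0) / 40% positive (label 1), as in the example above
samples = list(range(100))
labels = [0] * 60 + [1] * 40
train, test = stratified_split(samples, labels, test_frac=0.3)
```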


Without stratification, random chance could result in the training split having 70% positive samples while the test has 30% positive samples. The model trained on this skewed training distribution will not generalize well. Class imbalance also compromises model metrics like accuracy.


Stratification works best when done using labels, though proxies like clustering can be used for unsupervised learning. It becomes essential for highly skewed datasets with rare classes that could easily be omitted from splits.


Libraries like Scikit-Learn have stratified splitting methods built into them. Failing to use them could inadvertently introduce sampling bias and hurt model performance on minority groups. After performing the splits, the overall class balance should be examined to ensure even representation across the splits.


Stratification provides a balanced dataset for both model training and evaluation. Though simple random splitting is easier, being mindful of stratification needs, especially for real-world imbalanced data, results in more robust model development and evaluation.


Ignoring Time Series Dependencies


Time series data has an inherent temporal structure with observations depending on past context. Naively splitting time series data into train and test sets without accounting for this dependency leads to data leakage and lookahead bias.


For example, randomly splitting a time series into training and test sets will contaminate the training data with future data points. The model can use this information to “peek” ahead during training.


This results in an overly optimistic evaluation of the model’s performance. The model may appear to forecast the future accurately but has actually implicitly learned based on future data, which does not translate to real-world performance.


Proper time series cross-validation techniques, such as forward chaining, should be used to preserve order and dependency. The test set should only contain data points from a future time window that the model was not exposed to for training.
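Forward chaining can be sketched as index-based folds where each fold trains only on the past and tests on the block that immediately follows. This is a simplified, expanding-window illustration with a hypothetical helper:

```python
# Forward-chaining cross-validation for time series: each fold trains on an
# expanding window of past observations and tests on the next block, so no
# future data leaks into training.
def forward_chaining_folds(n_samples, n_folds):
    fold_size = n_samples // (n_folds + 1)
    folds = []
    for i in range(1, n_folds + 1):
        train_idx = list(range(0, i * fold_size))                   # past only
        test_idx = list(range(i * fold_size, (i + 1) * fold_size))  # next block
        folds.append((train_idx, test_idx))
    return folds

folds = forward_chaining_folds(n_samples=100, n_folds=4)
# In every fold, all training indices precede all test indices.
```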


Failing to account for temporal relationships leads to invalid causality assumptions. If the training data contains future points, the model never truly learns to extrapolate its forecasts into the future.


Maintaining the temporal flow of events and avoiding lookahead bias is key to properly training and testing time series models. This ensures they can truly predict future patterns and not just memorize past training data.


No Unseen Data for Final Evaluation


A common mistake when splitting data is failing to set aside some portion of the data just for the final evaluation of the completed model. All of the data is used for training, validation, and test sets during development.


This leaves no unseen data to get an unbiased estimate of how the final tuned model would perform in the real world. The metrics on the test set used during development may only partially reflect actual model skills.


For example, choices like early stopping and hyperparameter tuning are often optimized based on test set performance. This couples the model to the test data. An unseen dataset is needed to break this coupling and get true real-world metrics.


Best practice is to reserve a portion, such as 20-30% of the full dataset, solely for final model evaluation. This data should not be used for validation, tuning, or model selection during development.


Saving some unseen data allows for evaluating the completely trained model as a black box on real-world data. This provides reliable metrics to decide whether the model is ready for production deployment.


Failing to keep an unseen hold-out set for final validation risks over-optimistic results and overlooked failures before model release. Having some fresh data provides a final sanity check on real-world efficacy.


Overoptimizing on the Validation Set


The validation set is meant to guide the model training process, not serve as additional training data. Overoptimizing the validation set to maximize performance metrics treats it more like a secondary training set, leading to inflated metrics and poor generalization.


For example, techniques like extensively tuning hyperparameters or adding data augmentations targeted to boost validation accuracy can cause the model to fit too closely to the validation data. The model may achieve 99% validation accuracy but only 55% test accuracy.


Similarly, reusing the validation set for early stopping can also optimize the model specifically for that data. Stopping at the best validation performance overfits noise and fluctuations caused by the small validation size.


The validation set serves as a proxy to tune and select models. However, the goal remains maximizing real-world data performance, not the validation set. Minimizing the loss or error on validation data does not automatically translate to good generalization.


A good approach is to keep the use of the validation set minimal—hyperparameters can be tuned coarsely first on training data, for example. The validation set guides the training but should not influence or alter the model itself. It is a diagnostic, not an optimization tool.


When assessing performance on the validation set, care should be taken not to overfit. Tradeoffs are needed to build models that perform well on the overall population and are not overly tuned to the validation samples.


7.5 Optimization Algorithms


Stochastic gradient descent (SGD) is a simple yet powerful optimization algorithm for training machine learning models. It works by estimating the gradient of the loss function with respect to the model parameters using a single training example and then updating the parameters in the direction that reduces the loss.


While conceptually straightforward, SGD has a few shortcomings. First, choosing a proper learning rate can be difficult: too small, and progress is very slow; too large, and the parameters may oscillate and fail to converge. Second, SGD treats all parameters equally and independently, which may not be ideal in all cases. Finally, vanilla SGD uses only first-order gradient information, which results in slow progress on ill-conditioned problems.
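The learning-rate tradeoff is easy to see on a toy quadratic loss \(f(\theta) = \theta^2\), whose gradient is \(2\theta\). This is an illustrative sketch, not a recipe for choosing rates:

```python
# Vanilla SGD on f(theta) = theta^2 (gradient 2*theta), starting at theta=1:
# a tiny learning rate crawls, a reasonable rate converges, and a rate that
# is too large makes the iterates oscillate with growing magnitude.
def sgd(theta, lr, steps):
    for _ in range(steps):
        theta = theta - lr * 2 * theta   # theta <- theta - lr * gradient
    return theta

slow = sgd(1.0, lr=0.001, steps=100)     # barely moved toward 0
good = sgd(1.0, lr=0.1, steps=100)       # converges very close to 0
diverged = sgd(1.0, lr=1.1, steps=100)   # |theta| blows up
```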


7.5.1 Optimizations


Over the years, various optimizations have been proposed to accelerate and improve vanilla SGD. Ruder (2016) gives an excellent overview of the different optimizers. Briefly, several commonly used SGD optimization techniques include:

Ruder, Sebastian. 2016. “An Overview of Gradient Descent Optimization Algorithms.” ArXiv Preprint abs/1609.04747. https://arxiv.org/abs/1609.04747.

Momentum: Accumulates a velocity vector in directions of persistent gradient across iterations. This helps accelerate progress by dampening oscillations and maintains progress in consistent directions.
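The momentum update can be sketched on a toy quadratic loss \(f(\theta) = \theta^2\); the velocity term accumulates gradients across steps, accelerating movement in consistent directions (hyperparameter values here are illustrative):

```python
# SGD with momentum on f(theta) = theta^2 (gradient 2*theta). The velocity
# is a dampened running sum of past gradients.
def momentum_step(theta, grad, velocity, lr=0.01, beta=0.9):
    velocity = beta * velocity - lr * grad   # accumulate in gradient direction
    theta = theta + velocity
    return theta, velocity

theta, velocity = 5.0, 0.0
for _ in range(200):
    grad = 2 * theta
    theta, velocity = momentum_step(theta, grad, velocity)
# theta has converged close to the minimum at 0, after some damped oscillation
```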


Nesterov Accelerated Gradient (NAG): A variant of momentum that computes gradients at the “look ahead” rather than the current parameter position. This anticipatory update prevents overshooting while the momentum maintains the accelerated progress.


RMSProp: Divides the learning rate by an exponentially decaying average of squared gradients. This has a similar normalizing effect as Adagrad but does not accumulate the gradients over time, avoiding a rapid decay of learning rates (Hinton 2017).

Hinton, Geoffrey. 2017. “Overview of Minibatch Gradient Descent.” University of Toronto; University Lecture.
Duchi, John C., Elad Hazan, and Yoram Singer. 2010. “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” In COLT 2010 - the 23rd Conference on Learning Theory, Haifa, Israel, June 27-29, 2010, edited by Adam Tauman Kalai and Mehryar Mohri, 257–69. Omnipress. http://colt2010.haifa.il.ibm.com/papers/COLT2010proceedings.pdf#page=265.

Adagrad: An adaptive learning rate algorithm that maintains a per-parameter learning rate scaled down proportionate to each parameter’s historical sum of gradients. This helps eliminate the need to manually tune learning rates (Duchi, Hazan, and Singer 2010).


Adadelta: A modification to Adagrad that restricts the window of accumulated past gradients, thus reducing the aggressive decay of learning rates (Zeiler 2012).

Zeiler, Matthew D. 2012. “ADADELTA: An Adaptive Learning Rate Method.” ArXiv Preprint abs/1212.5701. https://arxiv.org/abs/1212.5701.
Kingma, Diederik P., and Jimmy Ba. 2015. “Adam: A Method for Stochastic Optimization.” In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, edited by Yoshua Bengio and Yann LeCun. http://arxiv.org/abs/1412.6980.

Adam: A combination of momentum and RMSProp, where the learning rate is modified based on the average of recent gradient magnitudes. Displays very fast initial progress and automatically tunes step sizes (Kingma and Ba 2015).


Of these methods, Adam is widely considered the go-to optimization algorithm for many deep-learning tasks. It consistently outperforms vanilla SGD in terms of training speed and performance. Other optimizers may be better suited in some cases, particularly for simpler models.
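The Adam update can be sketched for a single scalar parameter using the default hyperparameters from Kingma and Ba (2015). This is an illustrative implementation, not production code:

```python
# One Adam step: momentum-style first-moment and RMSProp-style second-moment
# estimates of the gradient, both corrected for initialization bias.
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad         # first moment (momentum-like)
    v = beta2 * v + (1 - beta2) * grad ** 2    # second moment (RMSProp-like)
    m_hat = m / (1 - beta1 ** t)               # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta^2 (gradient 2*theta) starting from theta = 1.0
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Note how the effective step size is roughly `lr` regardless of the gradient's raw magnitude, which is one reason Adam is robust to scaling.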


7.5.2 Tradeoffs


Here is a pros and cons table for some of the main optimization algorithms for neural network training:

| Algorithm | Pros | Cons |
|-----------|------|------|
| Momentum | Faster convergence due to acceleration along gradients; less oscillation than vanilla SGD | Requires tuning of momentum parameter |
| Nesterov Accelerated Gradient (NAG) | Faster than standard momentum in some cases; anticipatory updates prevent overshooting | More complex to understand intuitively |
| Adagrad | Eliminates need to tune learning rates manually; performs well on sparse gradients | Learning rate may decay too quickly on dense gradients |
| Adadelta | Less aggressive learning rate decay than Adagrad | Still sensitive to initial learning rate value |
| RMSProp | Automatically adjusts learning rates; works well in practice | No major downsides |
| Adam | Combination of momentum and adaptive learning rates; efficient and fast convergence | Slightly worse generalization performance in some cases |
| AMSGrad | Improvement to Adam addressing generalization issue | Not as extensively used/tested as Adam |

7.5.3 Benchmarking Algorithms


No single method is best for all problem types. This means we need comprehensive benchmarking to identify the most effective optimizer for specific datasets and models. The performance of algorithms like Adam, RMSProp, and Momentum varies due to batch size, learning rate schedules, model architecture, data distribution, and regularization. These variations underline the importance of evaluating each optimizer under diverse conditions.


Take Adam, for example, which often excels in computer vision tasks, unlike RMSProp, which may show better generalization in certain natural language processing tasks. Momentum’s strength lies in its acceleration in scenarios with consistent gradient directions, whereas Adagrad’s adaptive learning rates are more suited for sparse gradient problems.


This wide array of interactions among optimizers demonstrates the challenge of declaring a single, universally superior algorithm. Each optimizer has unique strengths, making it crucial to evaluate various methods to discover their optimal application conditions empirically.


A comprehensive benchmarking approach should assess the speed of convergence and factors like generalization error, stability, hyperparameter sensitivity, and computational efficiency, among others. This entails monitoring training and validation learning curves across multiple runs and comparing optimizers on various datasets and models to understand their strengths and weaknesses.
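As a toy illustration of such a comparison, the sketch below records training-loss curves for vanilla SGD and SGD with momentum on a small ill-conditioned quadratic; the problem, learning rate, and step count are made up for illustration and stand in for real training runs.

```python
def loss(w):
    """Toy ill-conditioned quadratic: f(w) = 0.5 * (10*w0^2 + w1^2)."""
    return 0.5 * (10 * w[0] ** 2 + w[1] ** 2)

def grad(w):
    return [10 * w[0], w[1]]

def run(momentum, steps=100, lr=0.02):
    """Run (momentum) gradient descent, returning the loss curve."""
    w, v, curve = [1.0, 1.0], [0.0, 0.0], []
    for _ in range(steps):
        g = grad(w)
        v = [momentum * vi + gi for vi, gi in zip(v, g)]   # velocity update
        w = [wi - lr * vi for wi, vi in zip(w, v)]
        curve.append(loss(w))
    return curve

sgd_curve = run(momentum=0.0)   # vanilla SGD
mom_curve = run(momentum=0.9)   # SGD with momentum
```

On this problem momentum reaches a lower final loss in the same budget, but on a different loss surface or learning rate the ranking could flip, which is exactly why systematic benchmarking matters.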


AlgoPerf, introduced by Dahl et al. (2021), addresses the need for a robust benchmarking system. This platform evaluates optimizer performance using criteria such as training loss curves, generalization error, sensitivity to hyperparameters, and computational efficiency. AlgoPerf tests various optimization methods, including Adam, LAMB, and Adafactor, across different model types like CNNs and RNNs/LSTMs on established datasets. It utilizes containerization and automatic metric collection to minimize inconsistencies and allows for controlled experiments across thousands of configurations, providing a reliable basis for comparing optimizers.

Dahl, George E., Frank Schneider, Zachary Nado, Naman Agarwal, Chandramouli Shama Sastry, Philipp Hennig, Sourabh Medapati, et al. 2021. “Benchmarking Neural Network Training Algorithms.” https://arxiv.org/abs/2306.07179.

The insights gained from AlgoPerf and similar benchmarks are invaluable for guiding the optimal choice or tuning of optimizers. By enabling reproducible evaluations, these benchmarks contribute to a deeper understanding of each optimizer’s performance, paving the way for future innovations and accelerated progress in the field.


7.6 Hyperparameter Tuning


Hyperparameters are important settings in machine learning models that greatly impact how well your models ultimately perform. Unlike other model parameters that are learned during training, hyperparameters are specified by the data scientists or machine learning engineers before training the model.


Choosing the right hyperparameter values enables your models to learn patterns from data effectively. Some examples of key hyperparameters across ML algorithms include:

  • Neural networks: Learning rate, batch size, number of hidden units, activation functions
  • Support vector machines: Regularization strength, kernel type and parameters
  • Random forests: Number of trees, tree depth
  • K-means: Number of clusters

The problem is that there are no reliable rules of thumb for choosing optimal hyperparameter configurations—you typically have to try out different values and evaluate performance. This process is called hyperparameter tuning.


In the early years of modern deep learning, researchers were still grappling with unstable and slow convergence issues. Common pain points included training losses fluctuating wildly, gradients exploding or vanishing, and extensive trial-and-error needed to train networks reliably. As a result, an early focal point was using hyperparameters to control model optimization. For instance, seminal techniques like batch normalization allowed faster model convergence by tuning aspects of internal covariate shift. Adaptive learning rate methods also mitigated the need for extensive manual schedules. These addressed optimization issues during training, such as uncontrolled gradient divergence. Carefully adapted learning rates are also the primary control factor for achieving rapid and stable convergence even today.


As computational capacity expanded exponentially in subsequent years, much larger models could be trained without falling prey to pure numerical optimization issues. The focus shifted towards generalization - though efficient convergence remained a core prerequisite. State-of-the-art techniques like Transformers brought parameter counts into the billions. At such sizes, hyperparameters around capacity, regularization, ensembling, etc., took center stage for tuning rather than only raw convergence metrics.


The lesson is that understanding the acceleration and stability of the optimization process itself constitutes the groundwork. Initialization schemes, batch sizes, weight decays, and other training hyperparameters remain indispensable today. Mastering fast and flawless convergence allows practitioners to expand their focus on emerging needs around tuning for metrics like accuracy, robustness, and efficiency at scale.


7.6.1 Search Algorithms


When it comes to the critical process of hyperparameter tuning, there are several sophisticated algorithms that machine learning practitioners rely on to search through the vast space of possible model configurations systematically. Some of the most prominent hyperparameter search algorithms include:

  • Grid Search: The most basic search method, where you manually define a grid of values to check for each hyperparameter. For example, checking learning rates = [0.01, 0.1, 1] and batch sizes = [32, 64, 128]. The key advantage is simplicity, but exploring all combinations leads to exponential search space explosion. Best for fine-tuning a few parameters.

  • Random Search: Instead of a grid, you define a random distribution per hyperparameter to sample values from during the search. This method is more efficient at searching a vast hyperparameter space. However, it is still somewhat arbitrary compared to more adaptive methods.

  • Bayesian Optimization: This is an advanced probabilistic approach for adaptive exploration based on a surrogate function to model performance over iterations. It is simple and efficient—it finds highly optimized hyperparameters in fewer evaluation steps. However, it requires more investment in setup (Snoek, Larochelle, and Adams 2012).

  • Evolutionary Algorithms: These algorithms mimic natural selection principles. They generate populations of hyperparameter combinations and evolve them over time based on performance. These algorithms offer robust search capabilities better suited for complex response surfaces. However, many iterations are required for reasonable convergence.

  • Neural Architecture Search: An approach to designing well-performing architectures for neural networks. Traditionally, NAS approaches use some form of reinforcement learning to propose neural network architectures, which are then repeatedly evaluated (Zoph and Le 2023).
Snoek, Jasper, Hugo Larochelle, and Ryan P. Adams. 2012. “Practical Bayesian Optimization of Machine Learning Algorithms.” In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a Meeting Held December 3-6, 2012, Lake Tahoe, Nevada, United States, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 2960–68. https://proceedings.neurips.cc/paper/2012/hash/05311655a15b75fab86956663e1819cd-Abstract.html.
Zoph, Barret, and Quoc V. Le. 2023. “Neural Architecture Search with Reinforcement Learning.” https://arxiv.org/abs/1611.01578.
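As a concrete contrast between the first two strategies, here is a sketch of grid search versus random search over a learning rate and batch size. A real setup would train and validate a model per trial; here a stand-in scoring function (peaking near lr = 0.1, batch size = 64) plays that role, and all names and values are illustrative.

```python
import itertools
import random

def score(lr, batch_size):
    """Stand-in for a real train-and-validate run; peaks at lr=0.1, batch=64."""
    return -((lr - 0.1) ** 2) - ((batch_size - 64) / 64) ** 2

# Grid search: try every combination of the listed values.
grid = {"lr": [0.01, 0.1, 1.0], "batch_size": [32, 64, 128]}
grid_trials = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
best_grid = max(grid_trials, key=lambda t: score(**t))

# Random search: sample each hyperparameter from its own distribution.
random.seed(0)
rand_trials = [{"lr": 10 ** random.uniform(-3, 0),               # log-uniform lr
                "batch_size": random.choice([16, 32, 64, 128, 256])}
               for _ in range(9)]                                # same 9-trial budget
best_rand = max(rand_trials, key=lambda t: score(**t))
```

With the same nine-trial budget, random search covers nine distinct learning rates while the grid revisits only three, which is the usual argument for preferring it in high-dimensional spaces.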

7.6.2 System Implications


Hyperparameter tuning can significantly impact time to convergence during model training, directly affecting overall runtime. The right values for key training hyperparameters are crucial for efficient model convergence. For example, the learning rate hyperparameter controls the step size during gradient descent optimization. Setting a properly tuned learning rate schedule ensures the optimization algorithm converges quickly towards a good minimum. Too small a learning rate leads to painfully slow convergence, while too large a value causes the losses to fluctuate wildly. Proper tuning ensures rapid movement towards optimal weights and biases.


Similarly, the batch size for stochastic gradient descent impacts convergence stability. The right batch size smooths out fluctuations in parameter updates to approach the minimum faster. Batch sizes that are too small produce noisy convergence, while overly large batch sizes can hurt generalization and slow down convergence due to less frequent parameter updates. Tuning hyperparameters for faster convergence and reduced training duration has direct implications on cost and resource requirements for scaling machine learning systems:

  • Lower computational costs: Shorter time to convergence means lower computational costs for training models. ML training often leverages large cloud computing instances like GPU and TPU clusters that incur heavy hourly charges. Minimizing training time directly reduces this resource rental cost, which tends to dominate ML budgets for organizations. Quicker iteration also lets data scientists experiment more freely within the same budget.

  • Reduced training time: Reduced training time unlocks opportunities to train more models using the same computational budget. Optimized hyperparameters stretch available resources further, allowing businesses to develop and experiment with more models under resource constraints to maximize performance.

  • Resource efficiency: Quicker training allows allocating smaller compute instances in the cloud since models require access to the resources for a shorter duration. For example, a one-hour training job allows using less powerful GPU instances compared to multi-hour training, which requires sustained compute access over longer intervals. This achieves cost savings, especially for large workloads.

There are other benefits as well. For instance, faster convergence reduces pressure on ML engineering teams regarding provisioning training resources. Simple model retraining routines can use lower-powered resources instead of requesting access to high-priority queues for constrained production-grade GPU clusters, freeing up deployment resources for other applications.


7.6.3 Auto Tuners


Given its importance, there is a wide array of commercial offerings to help with hyperparameter tuning. We will briefly touch on two examples: one focused on optimization for machine learning models targeting microcontrollers and another on cloud-scale ML.


BigML


Several commercial auto-tuning platforms are available to address this problem. One solution is Google’s Vertex AI Cloud, which has extensive integrated support for state-of-the-art tuning techniques.


One of the most salient capabilities of Google’s Vertex AI-managed machine learning platform is efficient, integrated hyperparameter tuning for model development. Successfully training performant ML models requires identifying optimal configurations for a set of external hyperparameters that dictate model behavior, posing a challenging high-dimensional search problem. Vertex AI aims to simplify this through Automated Machine Learning (AutoML) tooling.


Specifically, data scientists can leverage Vertex AI’s hyperparameter tuning engines by providing a labeled dataset and choosing a model type such as a Neural Network or Random Forest classifier. Vertex launches a Hyperparameter Search job transparently on the backend, fully handling resource provisioning, model training, metric tracking, and result analysis automatically using advanced optimization algorithms.


Under the hood, Vertex AutoML employs various search strategies to intelligently explore the most promising hyperparameter configurations based on previous evaluation results. Compared to standard Grid Search or Random Search methods, Bayesian Optimization offers superior sample efficiency, requiring fewer training iterations to arrive at optimized model quality. For more complex neural architecture search spaces, Vertex AutoML utilizes Population-Based Training approaches, which evolve candidate solutions over time analogous to natural selection principles.


Vertex AI aims to democratize state-of-the-art hyperparameter search techniques at the cloud scale for all ML developers, abstracting away the underlying orchestration and execution complexity. Users focus solely on their dataset, model requirements, and accuracy goals, while Vertex manages the tuning cycle, resource allocation, model training, accuracy tracking, and artifact storage under the hood. The result is getting deployment-ready, optimized ML models faster for the target problem.


TinyML


Edge Impulse’s Efficient On-device Neural Network Tuner (EON Tuner) is an automated hyperparameter optimization tool designed to develop microcontroller machine learning models. It streamlines the model development process by automatically finding the best neural network configuration for efficient and accurate deployment on resource-constrained devices.


The key functionality of the EON Tuner is as follows. First, developers define the model hyperparameters, such as number of layers, nodes per layer, activation functions, and learning rate annealing schedule. These parameters constitute the search space that will be optimized. Next, the target microcontroller platform is selected, providing embedded hardware constraints. The user can also specify optimization objectives, such as minimizing memory footprint, lowering latency, reducing power consumption, or maximizing accuracy.


With the defined search space and optimization goals, the EON Tuner leverages Bayesian hyperparameter optimization to explore possible configurations intelligently. Each prospective configuration is automatically implemented as a full model specification, trained, and evaluated for quality metrics. The continual process balances exploration and exploitation to arrive at optimized settings tailored to the developer’s chosen chip architecture and performance requirements.


The EON Tuner frees machine learning engineers from the demanding, iterative process of hand-tuning models by automatically tuning models for embedded deployment. The tool integrates seamlessly into the Edge Impulse workflow, taking models from concept to efficiently optimized implementations on microcontrollers. The expertise encapsulated in the EON Tuner regarding ML model optimization for microcontrollers ensures beginner and experienced developers alike can rapidly iterate to models fitting their project needs.


Exercise 7.2 (Hyperparameter Tuning)  


Get ready to unlock the secrets of hyperparameter tuning and take your PyTorch models to the next level! Hyperparameters are like the hidden dials and knobs that control your model’s learning superpowers. In this Colab notebook, you’ll team up with Ray Tune to find those perfect hyperparameter combinations. Learn how to define what values to search through, set up your training code for optimization, and let Ray Tune do the heavy lifting. By the end, you’ll be a hyperparameter tuning pro!


The video below explains the systematic organization of the hyperparameter tuning process.


7.7 Regularization


Regularization is a critical technique for improving the performance and generalizability of machine learning models in applied settings. It refers to mathematically constraining or penalizing model complexity to avoid overfitting the training data. Without regularization, complex ML models are prone to overfitting the dataset, memorizing peculiarities and noise in the training set rather than learning meaningful patterns. They may achieve high training accuracy but perform poorly when evaluated on new, unseen inputs.


Regularization helps address this problem by placing constraints that favor simpler, more generalizable models that don’t latch onto sampling errors. Techniques like L1/L2 regularization directly penalize large parameter values during training, forcing the model to use the smallest parameters that can adequately explain the signal. Early stopping rules halt training when validation set performance stops improving - before the model starts overfitting.


Appropriate regularization is crucial when deploying models to new user populations and environments where distribution shifts are likely. For example, an unregularized fraud detection model trained at a bank may work initially but accrue technical debt over time as new fraud patterns emerge.


Regularizing complex neural networks also offers computational advantages—smaller models require less data augmentation, compute power, and data storage. Regularization also allows for more efficient AI systems, where accuracy, robustness, and resource management are thoughtfully balanced against training set limitations.


Several powerful regularization techniques are commonly used to improve model generalization. Architecting the optimal strategy requires understanding how each method affects model learning and complexity.


7.7.1 L1 and L2


Two of the most widely used regularization forms are L1 and L2 regularization. Both penalize model complexity by adding an extra term to the cost function optimized during training. This term grows larger as model parameters increase.


L2 regularization, also known as ridge regression, adds the sum of squared magnitudes of all parameters multiplied by a coefficient α. This quadratic penalty curtails extreme parameter values more aggressively than L1 techniques. Implementation requires only changing the cost function and tuning α.


\[R_{L2}(\Theta) = \alpha \sum_{i=1}^{n}\theta_{i}^2\]


Where:

  • \(R_{L2}(\Theta)\) - The L2 regularization term that is added to the cost function
  • \(\alpha\) - The L2 regularization hyperparameter that controls the strength of regularization
  • \(\theta_{i}\) - The i-th model parameter
  • \(n\) - The number of parameters in the model
  • \(\theta_{i}^2\) - The square of each parameter

And the full L2 regularized cost function is:


\[J(\theta) = L(\theta) + R_{L2}(\Theta)\]


Where:

  • \(L(\theta)\) - The original unregularized cost function
  • \(J(\theta)\) - The new regularized cost function

Both L1 and L2 regularization penalize large weights in the neural network. However, the key difference is that L2 regularization penalizes the squares of the parameters rather than the absolute values. This difference has a considerable impact on the resulting regularized weights.

L1 regularization, or lasso regression, utilizes the absolute sum of magnitudes rather than the square multiplied by α. Penalizing the absolute value of weights induces sparsity, since the gradient of the penalty stays constant as the weight terms tend towards zero; this is unlike penalizing the squared value of the weights, where the penalty shrinks as the weights tend towards 0. By inducing sparsity in the parameter vector, L1 regularization automatically performs feature selection, setting the weights of irrelevant features to zero. Unlike L2 regularization, L1 regularization leads to true sparsity, as weights are set exactly to 0; in L2 regularization, weights approach values very close to 0 but generally never reach exact 0. Because it encourages sparsity, L1 regularization has been used in some works to train sparse networks that may be more hardware efficient (Hoefler et al. 2021).

Hoefler, Torsten, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, and Alexandra Peste. 2021. “Sparsity in Deep Learning: Pruning and Growth for Efficient Inference and Training in Neural Networks.” https://arxiv.org/abs/2102.00554.

\[R_{L1}(\Theta) = \alpha \sum_{i=1}^{n}||\theta_{i}||\]


Where:

  • \(R_{L1}(\Theta)\) - The L1 regularization term that is added to the cost function
  • \(\alpha\) - The L1 regularization hyperparameter that controls the strength of regularization
  • \(\theta_{i}\) - The i-th model parameter
  • \(n\) - The number of parameters in the model
  • \(||\theta_{i}||\) - The L1 norm, which takes the absolute value of each parameter

And the full L1 regularized cost function is:


\[J(\theta) = L(\theta) + R_{L1}(\Theta)\]


Where:

  • \(L(\theta)\) - The original unregularized cost function
  • \(J(\theta)\) - The new regularized cost function

The choice between L1 and L2 depends on the expected model complexity and whether intrinsic feature selection is needed. Both require iterative tuning across a validation set to select the optimal α hyperparameter.
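The penalty terms above, and the reason L1 drives weights exactly to zero while L2 only shrinks them, can be sketched in plain Python; the α, learning rate, and starting weight are illustrative, and a real implementation would add these penalties inside a framework's loss function.

```python
def l2_penalty(theta, alpha):
    """R_L2(theta) = alpha * sum(theta_i^2); gradient is 2*alpha*theta_i."""
    return alpha * sum(t ** 2 for t in theta)

def l1_penalty(theta, alpha):
    """R_L1(theta) = alpha * sum(|theta_i|); gradient is alpha*sign(theta_i)."""
    return alpha * sum(abs(t) for t in theta)

def shrink(theta, alpha, lr, steps, l1=False):
    """Gradient-descent shrinkage of one small positive weight under each penalty."""
    for _ in range(steps):
        g = alpha * (1 if theta > 0 else -1) if l1 else 2 * alpha * theta
        step = lr * g
        if l1 and abs(step) >= abs(theta):   # the constant L1 step overshoots zero:
            return 0.0                       # clamp to exactly 0 (sparsity)
        theta -= step
    return theta

w_l1 = shrink(0.05, alpha=0.1, lr=0.1, steps=100, l1=True)   # reaches exactly 0
w_l2 = shrink(0.05, alpha=0.1, lr=0.1, steps=100, l1=False)  # shrinks, never hits 0
```

The L1 gradient has constant magnitude α regardless of how small the weight is, so the weight is pushed all the way to zero; the L2 gradient is proportional to the weight, so each step removes only a fraction of what remains.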


The two videos below explain how regularization works and can help reduce model overfitting to improve performance.


7.7.2 Dropout


Another widely adopted regularization method is dropout (Srivastava et al. 2014). During training, dropout randomly sets a fraction \(1-p\) of node outputs, or hidden activations, to zero. This encourages greater information distribution across more nodes rather than reliance on a small number of nodes. Come prediction time, the full neural network is used, with intermediate activations scaled by \(p\) to maintain output magnitudes. GPU optimizations make implementing dropout efficient and straightforward via frameworks like PyTorch and TensorFlow.

Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” J. Mach. Learn. Res. http://jmlr.org/papers/v15/srivastava14a.html.

Let’s be more pedantic. During training with dropout, each node’s output \(a_i\) is passed through a dropout mask \(r_i\) before being used by the next layer:


\[ ã_i = r_i \odot a_i \]


Where:

  • \(a_i\) - output of node \(i\)
  • \(ã_i\) - output of node \(i\) after dropout
  • \(r_i\) - independent Bernoulli random variable with probability \(p\) of being 1
  • \(\odot\) - elementwise multiplication

This dropout mask \(r_i\) randomly sets a fraction \(1-p\) of activations to 0 during training, forcing the network to learn redundant representations.


At test time, the dropout mask is removed, and the activations are rescaled by \(p\) to maintain expected output magnitudes:


\[ a_i^{test} = p a_i\]


Where:

  • \(a_i^{test}\) - node output at test time
  • \(p\) - dropout probability hyperparameter

The key hyperparameter is the dropout rate \(1-p\), the fraction of nodes dropped, often set between 0.2 and 0.5. Larger networks tend to benefit from more dropout, while small networks risk underfitting if too many nodes are cut out. Trial and error combined with monitoring validation performance helps tune the dropout level.
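The masking and rescaling equations above can be sketched in plain Python, with \(p\) the keep probability as in the formulas; the layer size and activation values are illustrative.

```python
import random

def dropout_train(activations, p, rng):
    """Training: keep each unit with probability p (Bernoulli mask), zero the rest."""
    return [a if rng.random() < p else 0.0 for a in activations]

def dropout_test(activations, p):
    """Test time: keep every unit but scale outputs by p so that
    expected magnitudes match those seen during training."""
    return [p * a for a in activations]

rng = random.Random(0)
acts = [1.0] * 10000       # a layer of identical activations, for illustration
p = 0.5                    # keep probability
train_out = dropout_train(acts, p, rng)
test_out = dropout_test(acts, p)
kept = sum(1 for a in train_out if a != 0.0)
# Roughly a fraction p of units survive training-time masking, so the mean
# training-time output matches the deterministic test-time output p * a.
```

Frameworks often use the equivalent "inverted dropout" instead, dividing by \(p\) at training time so no rescaling is needed at inference.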


The following video discusses the intuition behind the dropout regularization technique and how it works.


7.7.3 Early Stopping


The intuition behind early stopping involves tracking model performance on a held-out validation set across training epochs. At first, increases in training set fitness accompany gains in validation accuracy as the model picks up generalizable patterns. After some point, however, the model starts overfitting - latching onto peculiarities and noise in the training data that don’t apply more broadly. The validation performance peaks and then degrades if training continues. Early stopping rules halt training at this peak to prevent overfitting. This technique demonstrates how ML pipelines must monitor system feedback, not just unquestioningly maximize performance on a static training set. The system’s state evolves, and the optimal endpoints change.


Therefore, formal early stopping methods require monitoring a metric like validation accuracy or loss after each epoch. Common curves exhibit rapid initial gains that taper off, eventually plateauing and decreasing slightly as overfitting occurs. The optimal stopping point is often between 5 and 15 epochs past the peak, depending on patience thresholds. Tracking multiple metrics can improve signal since variance exists between measures.


Simple, early-stopping rules stop immediately at the first post-peak degradation. More robust methods introduce a patience parameter—the number of degrading epochs permitted before stopping. This avoids prematurely halting training due to transient fluctuations. Typical patience windows range from 50 to 200 validation batches. Wider windows incur the risk of overfitting. Formal tuning strategies can determine optimal patience.
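A patience-based stopping rule like the one described can be sketched as follows; the validation-accuracy curve and patience value are made up for illustration.

```python
def early_stop_epoch(val_accuracies, patience=3):
    """Return the epoch to roll back to: halt once `patience` consecutive
    epochs fail to improve on the best validation accuracy so far."""
    best, best_epoch, bad = float("-inf"), 0, 0
    for epoch, acc in enumerate(val_accuracies):
        if acc > best:
            best, best_epoch, bad = acc, epoch, 0   # new peak: reset patience
        else:
            bad += 1
            if bad >= patience:
                return best_epoch                   # stop; restore peak weights
    return best_epoch

# Hypothetical curve: improves through epoch 4, then degrades (overfitting).
curve = [0.60, 0.70, 0.76, 0.80, 0.82, 0.81, 0.79, 0.78, 0.77]
stop = early_stop_epoch(curve, patience=3)          # stops at the epoch-4 peak
```

In practice the training loop would also checkpoint the model weights at each new peak so the returned epoch's parameters can actually be restored.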


Exercise 7.3 (Regularization)  


Battling Overfitting: Unlock the Secrets of Regularization! Overfitting is like your model memorizing the answers to a practice test, then failing the real exam. Regularization techniques are the study guides that help your model generalize and ace new challenges. In this Colab notebook, you’ll learn how to tune regularization parameters for optimal results using L1 & L2 regularization, dropout, and early stopping.


The following video covers a few other regularization methods that can reduce model overfitting.


7.8 Weight Initialization


Properly initializing the weights in a neural network before training is a vital step that directly impacts model performance. Randomly initializing weights to very large or small values can lead to problems like vanishing/exploding gradients, slow convergence of training, or getting trapped in poor local minima. Proper weight initialization accelerates model convergence during training and carries implications for system performance at inference time in production environments. Some key aspects include:

  • Faster Time-to-Accuracy: Carefully tuned initialization leads to faster convergence, which results in models reaching target accuracy milestones earlier in the training cycle. For instance, Xavier init could reduce time-to-accuracy by 20% versus bad random init. As training is typically the most time- and compute-intensive phase, this directly enhances ML system velocity and productivity.

  • Model Iteration Cycle Efficiency: If models train faster, the overall turnaround time for experimentation, evaluation, and model design iterations decreases significantly. Systems have more flexibility to explore architectures, data pipelines, etc., within given timeframes.

  • Impact on Necessary Training Epochs: The training process runs for multiple epochs - with each full pass through the data being an epoch. Good initialization can reduce the epochs required to converge the loss and accuracy curves on the training set by 10-30%. This means tangible resource and infrastructure cost savings.

  • Effect on Training Hyperparameters: Weight initialization parameters interact strongly with certain regularization hyperparameters that govern the training dynamics, like learning rate schedules and dropout probabilities. Finding the right combination of settings is non-trivial. Appropriate initialization smooths this search.

Weight initialization has cascading benefits for machine learning engineering efficiency and minimized system resource overhead. It is an easily overlooked tactic that every practitioner should master. The choice of which weight initialization technique to use depends on factors like model architecture (number of layers, connectivity pattern, etc.), activation functions, and the specific problem being solved. Over the years, researchers have developed and empirically verified different initialization strategies targeted to common neural network architectures, which we will discuss here.


7.8.1 Uniform and Normal Initialization


When randomly initializing weights, two standard probability distributions are commonly used - uniform and Gaussian (normal). The uniform distribution sets an equal probability of the initial weight parameters falling anywhere within set minimum and maximum bounds. For example, the bounds could be -1 and 1, leading to a uniform spread of weights between these limits. The Gaussian distribution, on the other hand, concentrates probability around a mean value, following the shape of a bell curve. Most weight values will cluster in the region of the specified mean, with fewer samples towards the extreme ends. The standard deviation (std dev) parameter controls the spread around the mean.


The choice between uniform or normal initialization depends on the network architecture and activation functions. For shallow networks, a normal distribution with a relatively small std dev (e.g., 0.01) is recommended. The bell curve prevents large weight values that could trigger training instability in small networks. For deeper networks, a normal distribution with higher std dev (say 0.5 or above) or uniform distribution may be preferred to account for vanishing gradient issues over many layers. The larger spread drives greater differentiation between neuron behaviors. Fine-tuning the initialization distribution parameters is crucial for stable and speedy model convergence. Monitoring training loss trends can diagnose issues for tweaking the parameters iteratively.


7.8.2 Xavier/Glorot Initialization


Proposed by Glorot and Bengio (2010), this initialization technique is specially designed for sigmoid and tanh activation functions. These saturated activations can cause vanishing or exploding gradients during backpropagation over many layers.

Glorot, Xavier, and Yoshua Bengio. 2010. “Understanding the Difficulty of Training Deep Feedforward Neural Networks.” In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. https://proceedings.mlr.press/v9/glorot10a.html.

The Xavier method cleverly sets the variance of the weight distribution based on the number of inputs and outputs to each layer. The intuition is that this balances the flow of information and gradients throughout the network. For example, consider a layer with 300 input units and 100 output units. Plugging this into the formula variance = 2/(#inputs + #outputs) gives a variance of 2/(300+100) = 0.005.


Sampling the initial weights from a uniform or normal distribution centered at 0 with this variance provides much smoother training convergence for deep sigmoid/tanh networks. The gradients are well-conditioned, preventing exponential vanishing or growth.


7.8.3 He Initialization


As proposed by He et al. (2015), this initialization is tailored to ReLU (Rectified Linear Unit) activation functions. ReLUs introduce the dying neuron problem, where units get stuck outputting all 0s if they receive strong negative inputs initially. This slows and hinders training.

He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–34. IEEE. https://doi.org/10.1109/iccv.2015.123.

He initialization overcomes this by sampling weights from a distribution whose variance is set based only on the number of inputs per layer, disregarding the outputs. This keeps the incoming signals small enough to activate the ReLUs into their linear regime from the beginning, avoiding dead units. For a layer with 1024 inputs, the formula variance = 2/1024 ≈ 0.002 keeps most weights concentrated closely around 0.


This specialized initialization allows ReLU networks to converge efficiently right from the start. The choice between Xavier and He must match the intended network activation function.
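Analogously to the Xavier case, a minimal NumPy sketch of He initialization (our own helper name and layer sizes, chosen to mirror the 1024-input example):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def he_normal(fan_in, fan_out):
    """He init for ReLU layers: variance = 2 / fan_in (outputs ignored)."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

w = he_normal(1024, 256)
print(w.var())  # ≈ 2 / 1024 ≈ 0.002
```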


Exercise 7.4 (Weight Initialization)  


Get your neural network off to a strong start with weight initialization! How you set those initial weights can make or break your model’s training. Think of it like tuning the instruments in an orchestra before the concert. In this Colab notebook, you’ll learn that the right initialization strategy can save time, improve model performance, and make your deep-learning journey much smoother.


The video below emphasizes the importance of deliberately selecting initial weight values over random choices.


7.9 Activation Functions


Activation functions play a crucial role in neural networks. They introduce nonlinear behaviors that allow neural nets to model complex patterns. Element-wise activation functions are applied to the weighted sums coming into each neuron in the network. Without activation functions, neural nets would be reduced to linear regression models.


Ideally, activation functions possess certain desirable qualities:

  • Nonlinear: They enable modeling complex relationships through nonlinear transformations of the input sum.
  • Differentiable: They must have well-defined first derivatives to enable backpropagation and gradient-based optimization during training.
  • Range-bounding: They constrain the output signal, preventing an explosion. For example, sigmoid squashes inputs to (0,1).

Additionally, properties like computational efficiency, monotonicity, and smoothness make some activations better suited than others, depending on network architecture and problem complexity.


We will briefly survey some of the most widely adopted activation functions and their strengths and limitations. We will also provide guidelines for selecting appropriate functions matched to ML system constraints and use case needs.


7.9.1 Sigmoid


The sigmoid activation applies a squashing S-shaped curve tightly binding the output between 0 and 1. It has the mathematical form:


\[ sigmoid(x) = \frac{1}{1+e^{-x}} \]


The exponentiation transform allows the function to smoothly transition from near 0 towards near 1 as the input moves from very negative to very positive. The monotonic rise covers the full (0,1) range.


Pros:

  • A smooth gradient is always available for backpropagation
  • Output bounded in (0,1), preventing “exploding” activations
  • Simple formula

Cons:

  • Tendency to saturate at extremes, killing gradients (“vanishing”)
  • Not zero-centered: outputs are not symmetrically distributed


7.9.2 Tanh


Tanh or hyperbolic tangent also assumes an S-shape but is zero-centered, meaning the average output value is 0.


\[ tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \]


The numerator/denominator transform shifts the range from (0,1) in Sigmoid to (-1, 1) in tanh.


Most pros/cons are shared with Sigmoid, but Tanh avoids some output saturation issues by being centered. However, it still suffers from vanishing gradients with many layers.


7.9.3 ReLU


The Rectified Linear Unit (ReLU) introduces a simple thresholding behavior with its mathematical form:


\[ ReLU(x) = max(0, x) \]


It leaves all positive inputs unchanged while clipping all negative values to 0. This sparse activation and cheap computation make ReLU widely favored over sigmoid/tanh.
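As a quick numeric check (a minimal NumPy sketch, not taken from the text), the three nonlinearities behave exactly as described:

```python
import numpy as np

def sigmoid(x):
    # Squashes inputs into (0, 1); sigmoid(0) = 0.5.
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Zero-centered S-curve into (-1, 1).
    return np.tanh(x)

def relu(x):
    # Positive inputs pass through; negatives clip to 0.
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x))
print(tanh(x))
print(relu(x))  # → [0. 0. 2.]
```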


Figure fig-activation-functions compares the three activation functions discussed above (Tanh, ReLU, and Sigmoid), along with the linear case.

Figure 7.6: Common activation functions. Credit: AI Wiki.

7.9.4 Softmax


The softmax activation function is generally used as the last layer for classification tasks to normalize the activation value vector so that its elements sum to 1. This is useful for classification tasks where we want to learn to predict class-specific probabilities of a particular input, in which case the cumulative probability across classes is equal to 1. The softmax activation function is defined as


\[\sigma(z_i) = \frac{e^{z_{i}}}{\sum_{j=1}^K e^{z_{j}}} \quad \text{for } i=1,2,\dots,K\]
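A direct NumPy translation of this formula (the max-subtraction trick is a standard numerical-stability addition, not part of the formula itself):

```python
import numpy as np

def softmax(z):
    # Subtracting the max leaves the result unchanged mathematically
    # but prevents overflow in exp() for large logits.
    e = np.exp(z - np.max(z))
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())  # elements are positive and sum to 1
```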


7.9.5 Pros and Cons


Here is a summary of the pros and cons of these standard activation functions:

Activation Function | Pros | Cons
Sigmoid | Smooth gradient for backprop; output bounded between 0 and 1 | Saturation kills gradients; not zero-centered
Tanh | Smoother gradient than sigmoid; zero-centered output [-1, 1] | Still suffers vanishing gradient issue
ReLU | Computationally efficient; introduces sparsity; avoids vanishing gradients | “Dying ReLU” units; not bounded
Softmax | Used for the last layer to normalize vector outputs into a probability distribution; typically used for classification tasks | -

Exercise 7.5 (Activation Functions)  


Unlock the power of activation functions! These little mathematical workhorses are what make neural networks so incredibly flexible. In this Colab notebook, you’ll go hands-on with functions like the Sigmoid, tanh, and the superstar ReLU. See how they transform inputs and learn which works best in different situations. It’s the key to building neural networks that can tackle complex problems!


7.10 System Bottlenecks


As introduced earlier, neural networks comprise linear operations (matrix multiplications) interleaved with element-wise nonlinear activation functions. The most computationally expensive portion of neural networks is the linear transformations, specifically the matrix multiplications between each layer. These linear layers map the activations from the previous layer to a higher dimensional space that serves as inputs to the next layer’s activation function.


7.10.1 Runtime Complexity of Matrix Multiplication


Layer Multiplications vs. Activations


The bulk of computation in neural networks arises from the matrix multiplications between layers. Consider a neural network layer with an input dimension of \(M\) = 500 and output dimension of \(N\) = 1000; the matrix multiplication requires \(O(N \cdot M) = O(1000 \cdot 500) = 500,000\) multiply-accumulate (MAC) operations between those layers.


Contrast this with the preceding layer, which had \(M\) = 300 inputs, requiring \(O(500 \cdot 300) = 150,000\) ops. We can see how the computations scale with the product of adjacent layer widths, with the total computations across \(L\) layers being \(\sum_{l=1}^{L-1} O\big(N^{(l)} \cdot M^{(l-1)}\big)\).


Now, comparing the matrix multiplication to the activation function, which requires only \(O(N) = 1000\) element-wise nonlinearities for \(N = 1000\) outputs, we can see the linear transformations dominating the activations computationally.


These large matrix multiplications impact hardware choices, inference latency, and power constraints for real-world neural network applications. For example, a typical DNN layer may require 500,000 multiply-accumulates vs. only 1000 nonlinear activations, demonstrating a 500x increase in mathematical operations.
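The counting argument above is simple enough to sketch directly (hypothetical helper names; the numbers reproduce the 500,000-vs-1,000 example from the text):

```python
def layer_macs(m_inputs, n_outputs):
    """Multiply-accumulate count for a dense layer's matrix multiply."""
    return m_inputs * n_outputs

def layer_activations(n_outputs):
    """Element-wise nonlinearities applied to the layer's outputs."""
    return n_outputs

macs = layer_macs(500, 1000)    # 500,000 MACs for the matmul
acts = layer_activations(1000)  # only 1,000 nonlinearities
print(macs, acts, macs // acts)  # → 500000 1000 500
```

The 500x ratio is why the linear layers, not the activations, dominate compute.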


When training neural networks, we typically use mini-batch gradient descent, operating on small batches of data simultaneously. Considering a batch size of \(B\) training examples, the input to the matrix multiplication becomes a \(M \times B\) matrix, while the output is an \(N \times B\) matrix.


Mini-batch


In training neural networks, we need to repeatedly estimate the gradient of the loss function with respect to the network parameters (i.e., weights and biases). This gradient indicates the direction in which the parameters should be updated to minimize the loss. As introduced previously, we perform updates over a batch of data points at each step, also known as stochastic gradient descent or mini-batch gradient descent.


The most straightforward approach is to estimate the gradient based on a single training example, compute the parameter update, then lather, rinse, and repeat for the next example. However, this involves very small and frequent parameter updates that can be computationally inefficient, and convergence may be less stable due to the stochasticity of using just a single data point for a model update.


Instead, mini-batch gradient descent balances convergence stability and computational efficiency. Rather than computing the gradient on single examples, we estimate the gradient based on small “mini-batches” of data—usually between 8 and 256 examples in practice.


This provides a noisy but consistent gradient estimate that leads to more stable convergence. Additionally, the parameter update must only be performed once per mini-batch rather than once per example, reducing computational overhead.


By tuning the mini-batch size, we can control the tradeoff between the smoothness of the estimate (larger batches are generally better) and the frequency of updates (smaller batches allow more frequent updates). Mini-batch sizes are usually powers of 2, so they can efficiently leverage parallelism across GPU cores.
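To make the loop concrete, here is a minimal mini-batch gradient descent sketch on a toy one-parameter regression problem (the data, learning rate, and batch size are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy data: y = 3x + small noise (illustrative assumption).
X = rng.normal(size=(512, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=512)

w, lr, batch_size = 0.0, 0.1, 32
for epoch in range(20):
    perm = rng.permutation(len(X))  # shuffle each epoch
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        xb, yb = X[idx, 0], y[idx]
        grad = 2.0 * np.mean((w * xb - yb) * xb)  # dL/dw for MSE loss
        w -= lr * grad  # one update per mini-batch, not per example
print(w)  # converges close to 3.0
```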


So, the total computation performs an \(N \times M\) by \(M \times B\) matrix multiplication, yielding \(O(N \cdot M \cdot B)\) floating point operations. As a numerical example, \(N=1000\) hidden units, \(M=500\) input units, and a batch size \(B=64\) equates to 1000 x 500 x 64 = 32 million multiply-accumulates per training iteration!


In contrast, the activation functions are applied element-wise to the \(N \times B\) output matrix, requiring only \(O(N \cdot B)\) computations. For \(N=1000\) and \(B=64\), that is just 64,000 nonlinearities - 500X less work than the matrix multiplication.


As we increase the batch size to fully leverage parallel hardware like GPUs, the discrepancy between matrix multiplication and activation function cost grows even larger. This reveals how optimizing the linear algebra operations offers tremendous efficiency gains.


Therefore, matrix multiplication is central in analyzing where and how neural networks spend computation. For example, matrix multiplications often account for over 90% of inference latency and training time in common convolutional and recurrent neural networks.


Optimizing Matrix Multiplication


Several techniques enhance the efficiency of general dense/sparse matrix-matrix and matrix-vector operations to improve overall efficiency. Some key methods include:

  • Leveraging optimized math libraries like cuBLAS for GPU acceleration
  • Enabling lower precision formats like FP16 or INT8 where accuracy permits
  • Employing Tensor Processing Units with hardware matrix multiplication
  • Sparsity-aware computations and data storage formats to exploit zero parameters
  • Approximating matrix multiplications with algorithms like Fast Fourier Transforms
  • Model architecture design to reduce layer widths and activations
  • Quantization, pruning, distillation, and other compression techniques
  • Parallelization of computation across available hardware
  • Caching/pre-computing results where possible to reduce redundant operations

The potential optimization techniques are vast, given the outsized portion of time models spend in matrix and vector math. Even incremental improvements speed up runtimes and lower energy usage. Finding new ways to enhance these linear algebra primitives remains an active area of research aligned with the future demands of machine learning. We will discuss these in detail in the Optimizations and AI Acceleration chapters.


7.10.2 Compute vs. Memory Bottleneck


As established, matrix-matrix multiplication is the core mathematical operation underpinning neural networks. Both training and inference heavily utilize these matrix multiply operations. Analysis shows that over 90% of computational requirements in state-of-the-art neural networks arise from matrix multiplications. Consequently, the performance of matrix multiplication has an enormous influence on overall model training or inference time.


Training versus Inference


While training and inference rely heavily on matrix multiplication performance, their precise computational profiles differ. Specifically, neural network inference tends to be more compute-bound than training for an equivalent batch size. The key difference lies in the backpropagation pass, which is only required during training. Backpropagation involves a sequence of matrix multiply operations to calculate gradients with respect to activations across each network layer. Critically, though, no additional memory bandwidth is needed here—the inputs, outputs, and gradients are read/written from cache or registers.


As a result, training exhibits lower arithmetic intensities, with gradient calculations bounded by memory access instead of FLOPs. In contrast, neural network inference is dominated by forward propagation, which corresponds to a series of matrix-matrix multiplies. Without the memory-intensive gradient computations of the backward pass, larger batch sizes readily push inference into being extremely compute-bound, as the high measured arithmetic intensities show. However, response times are critical for some inference applications, forcing the application provider to use a smaller batch size to meet response-time requirements; this reduces hardware efficiency, so such inference workloads may see lower hardware utilization.


The implications are that hardware provisioning and bandwidth vs. FLOP tradeoffs differ depending on whether a system targets training or inference. High-throughput, low-latency servers for inference should emphasize computational power instead of memory, while training clusters require a more balanced architecture.


However, matrix multiplication exhibits an interesting tension: it can be bound either by the underlying hardware's memory bandwidth or by its arithmetic throughput. Which one dominates is determined by the system's ability to fetch and supply matrix data versus its ability to perform computational operations on that data.


This phenomenon has profound impacts; hardware must be designed judiciously, and software optimizations must be considered. Optimizing and balancing compute versus memory to alleviate this underlying matrix multiplication bottleneck is crucial for efficient model training and deployment.


Finally, batch size may impact convergence rates during neural network training, another important consideration. There are generally diminishing returns to convergence from extremely large batch sizes (i.e., > 16,384). So, while extremely large batches are increasingly attractive from a hardware/arithmetic intensity perspective, they may not translate to faster convergence in wall-clock time because of these diminishing convergence benefits. Such tradeoffs are core design decisions in machine learning systems research.


Batch Size


The batch size used during neural network training and inference significantly impacts whether matrix multiplication poses more of a computational or memory bottleneck. Concretely, the batch size refers to the number of samples propagated through the network together in one forward/backward pass. Larger batch sizes translate directly into larger matrix sizes in the multiplication.


Specifically, let’s look at the arithmetic intensity of matrix multiplication during neural network training. This measures the ratio between computational operations and memory transfers. The matrix multiply of two matrices of size \(N \times M\) and \(M \times B\) requires \(N \times M \times B\) multiply-accumulate operations, but only transfers of \(N \times M + M \times B\) matrix elements.


As we increase the batch size \(B\), the number of arithmetic operations grows faster than the memory transfers. With a batch size of 1, we need \(N \times M\) operations and \(N \times M + M\) transfers, giving an arithmetic intensity ratio of around \(\frac{N \times M}{N \times M + M} \approx 1\), since the weight matrix itself dominates the transfers. But with a large batch size of 128, the intensity ratio becomes \(\frac{128 \times N \times M}{N \times M + M \times 128} = \frac{128N}{N + 128} \approx 128\) when \(N \gg 128\). Using a larger batch size shifts the overall computation from memory-bound to compute-bound. AI training uses large batch sizes and is generally limited by peak arithmetic computational performance, i.e., Application 3 in Figure fig-roofline.
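This ratio is easy to evaluate numerically (a small sketch using the transfer count from the text, which ignores output writes and cache effects):

```python
def arithmetic_intensity(n, m, b):
    """MACs per matrix element moved for an (n x m) @ (m x b) multiply."""
    macs = n * m * b
    transfers = n * m + m * b  # weight and input elements, per the text
    return macs / transfers

n, m = 1000, 500
print(arithmetic_intensity(n, m, 1))    # ≈ 1: memory-bound
print(arithmetic_intensity(n, m, 128))  # ≈ 113: compute-bound
```

With \(N = 1000\), the batch-128 intensity is \(128N/(N+128) \approx 113\), approaching 128 as \(N\) grows.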


Therefore, batched matrix multiplication is compute-bound rather than memory-bound. This has implications for hardware design and software optimizations, which we will cover next. The key insight is that we can significantly alter the computational profile and bottlenecks posed by neural network training and inference by tuning the batch size.

Figure 7.7: AI training roofline model.

Hardware Characteristics


Modern hardware like CPUs and GPUs is highly optimized for computational throughput rather than memory bandwidth. For example, high-end H100 Tensor Core GPUs can deliver over 60 TFLOPS of double-precision performance but only provide up to 3 TB/s of memory bandwidth. This is roughly a 20x imbalance between arithmetic throughput and memory access; consequently, for hardware like GPU accelerators, neural network training workloads must be made as computationally intensive as possible to fully utilize the available resources.


This further motivates the need for using large batch sizes during training. When using a small batch, the matrix multiplication is bounded by memory bandwidth, underutilizing the abundant compute resources. However, we can shift the bottleneck towards computation and attain much higher arithmetic intensity with sufficiently large batches. For instance, batches of 256 or 512 samples may be needed to saturate a high-end GPU. The downside is that larger batches provide less frequent parameter updates, which can impact convergence. Still, the batch size serves as an important tuning knob to balance memory vs. compute limitations.


Therefore, given the imbalanced compute-memory architectures of modern hardware, employing large batch sizes is essential to alleviate bottlenecks and maximize throughput. As mentioned, the subsequent software and algorithms also need to accommodate such batch sizes since larger batch sizes may have diminishing returns toward the network’s convergence. Using very small batch sizes may lead to suboptimal hardware utilization, ultimately limiting training efficiency. Scaling up to large batch sizes is a research topic explored in various works that aim to do large-scale training (You et al. 2018).

You, Yang, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. 2018. “ImageNet Training in Minutes.” https://arxiv.org/abs/1709.05011.

Model Architectures


The underlying neural network architecture also affects whether matrix multiplication poses more of a computational or memory bottleneck during execution. Transformers and MLPs are much more compute-bound than convolutional neural networks (CNNs). This stems from the types of matrix multiplication operations involved in each model. Transformers rely on self-attention, multiplying large activation matrices by massive parameter matrices to relate elements. MLPs stack fully connected layers, also requiring large matrix multiplies.


In contrast, the convolutional layers in CNNs have a sliding window that reuses activations and parameters across the input, which means fewer unique matrix operations are needed. However, the convolutions require repeatedly accessing small input parts and moving partial sums to populate each window. Even though the arithmetic operations in convolutions are intense, this data movement and buffer manipulation impose huge memory access overheads. CNNs comprise several layered stages, so intermediate outputs must frequently materialize in memory.


As a result, CNN training tends to be more memory-bandwidth-bound relative to arithmetic-bound compared to Transformers and MLPs. Therefore, the matrix multiplication profile, and in turn the bottleneck posed, varies significantly based on model choice. Hardware and systems need to be designed with the appropriate compute-memory bandwidth balance depending on the target model deployment. Models relying more on attention and MLP layers require higher arithmetic throughput than CNNs, which instead necessitate higher memory bandwidth.


7.11 Training Parallelization


Training neural networks entails intensive computational and memory demands. The backpropagation algorithm for calculating gradients and updating weights consists of repeated matrix multiplications and arithmetic operations over the entire dataset. For example, one pass of backpropagation scales in time complexity with \(O(num\_parameters \times batch\_size \times sequence\_length)\).


The computational requirements grow rapidly as model size increases in parameters and layers. Moreover, the algorithm requires storing activation outputs and model parameters for the backward pass, which grows with model size.


Larger models cannot fit and train on a single accelerator device like a GPU, and the memory footprint becomes prohibitive. Therefore, we need to parallelize model training across multiple devices to provide sufficient compute and memory to train state-of-the-art neural networks.


As shown in Figure fig-training-parallelism, the two main approaches are data parallelism, which replicates the model across devices while splitting the input data batch-wise, and model parallelism, which partitions the model architecture itself across different devices. By training in parallel, we can leverage greater aggregate compute and memory resources to overcome system limitations and accelerate deep learning workloads.

Figure 7.8: Data parallelism versus model parallelism.

7.11.1 Data Parallel


Data parallelization is a common approach to parallelize machine learning training across multiple processing units, such as GPUs or distributed computing resources. The training dataset is divided into batches in data parallelism, and a separate processing unit processes each batch. The model parameters are then updated based on the gradients computed from the processing of each batch. Here’s a step-by-step description of data parallel parallelization for ML training:

  1. Dividing the Dataset: The training dataset is divided into smaller batches, each containing a subset of the training examples.
  2. Replicating the Model: The neural network model is replicated across all processing units, and each processing unit has its own copy of the model.
  3. Parallel Computation: Each processing unit takes a different batch and independently computes the forward and backward passes. During the forward pass, the model makes predictions on the input data. The loss function calculates gradients for the model parameters during the backward pass.
  4. Gradient Aggregation: After processing their respective batches, the gradients from each processing unit are aggregated. Common aggregation methods include summation or averaging of the gradients.
  5. Parameter Update: The aggregated gradients update the model parameters. The update can be performed using optimization algorithms like SGD or variants like Adam.
  6. Synchronization: After the update, all processing units synchronize their model parameters, ensuring that each has the latest version of the model.

The prior steps are repeated for several iterations or until convergence.


Let’s take a specific example. With a global batch size of 256 and 8 GPUs, each GPU gets a micro-batch of 32 samples. Their forward and backward passes compute losses and gradients based only on their local 32 samples. The gradients are then aggregated across devices with a parameter server or a collective communications library to obtain the effective gradient for the global batch. Weight updates happen independently on each GPU according to these gradients. After a configured number of iterations, updated weights synchronize and equalize across devices before continuing to the next iterations.


Data parallelism is effective when the model is large, and the dataset is substantial, as it allows for parallel processing of different parts of the data. It is widely used in deep learning frameworks and libraries that support distributed training, such as TensorFlow and PyTorch. However, to ensure efficient parallelization, care must be taken to handle issues like communication overhead, load balancing, and synchronization.
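To illustrate the mechanics without a GPU cluster, the following NumPy sketch simulates the 8-worker example: each "device" computes a gradient on its 32-sample shard, and the gradients are averaged (standing in for an all-reduce) before a shared update. Real setups would use libraries such as PyTorch Distributed or Horovod; the toy linear model and hyperparameters here are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

global_batch, num_gpus = 256, 8
micro = global_batch // num_gpus  # 32 samples per simulated device

# Toy linear regression data: y = X @ w_true (illustrative assumption).
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(global_batch, 2))
y = X @ w_true
w = np.zeros(2)  # the replicated model parameters

for step in range(200):
    shards = np.split(rng.permutation(global_batch), num_gpus)
    # Each "device" computes a gradient on its local micro-batch...
    local_grads = [
        2.0 * X[s].T @ (X[s] @ w - y[s]) / micro for s in shards
    ]
    # ...then gradients are averaged (all-reduce) before the update.
    grad = np.mean(local_grads, axis=0)
    w -= 0.05 * grad
print(w)  # converges to ≈ [2.0, -1.0]
```

Because the shards partition the global batch, the averaged gradient here equals the full-batch gradient, which is exactly the equivalence data parallelism exploits.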


7.11.2 Model Parallel


Model parallelism refers to distributing the neural network model across multiple devices rather than replicating the full model as in data parallelism. This is particularly useful when a model is too large to fit into the memory of a single GPU or accelerator device. While this might not be specifically applicable for embedded or TinyML use cases, as most such models are relatively small, it is still useful to know.


In model parallel training, different parts or layers of the model are assigned to separate devices. The input activations and intermediate outputs get partitioned and passed between these devices during the forward and backward passes to coordinate gradient computations across model partitions.


The memory footprint and computational operations are distributed by splitting the model architecture across multiple devices instead of concentrating on one. This enables training very large models with billions of parameters that otherwise exceed the capacity of a single device. There are several main ways in which we can do partitioning:

  • Layer-wise parallelism: Consecutive layers are distributed onto different devices. For example, device 1 contains layers 1-3, and device 2 contains layers 4-6. The output activations from layer 3 are transferred to device 2 to continue the forward pass through the next layers.
  • Filter-wise parallelism: In convolutional layers, output filters can be split among devices. Each device computes activation outputs for a subset of filters, which get concatenated before propagating further.
  • Spatial parallelism: The input images get divided spatially, so each device processes a certain region, like the top-left quarter of the images. The output regions then combine to form the full output.

Additionally, hybrid combinations can split the model layer-wise and data batch-wise. The appropriate type of model parallelism depends on the specific neural architecture constraints and hardware setup. Optimizing the partitioning and communication for the model topology is key to minimizing overhead.


However, as the model parts run on physically separate devices, they must communicate and synchronize their parameters during each training step. The backward pass must ensure gradient updates propagate accurately across the model partitions. Hence, coordination and high-speed interconnects between devices are crucial for optimizing the performance of model parallel training. Careful partitioning and communication protocols are required to minimize transfer overhead.
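A single-process NumPy sketch of layer-wise partitioning makes the data flow concrete: two simulated "devices" each hold a slice of a small MLP, and the boundary activations are handed from one to the other. The layer sizes and the ReLU-everywhere design are illustrative assumptions only; in practice the transfer would cross a device interconnect.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Two "devices", each holding one partition of a 4-layer MLP.
device1 = [rng.normal(size=(8, 16)), rng.normal(size=(16, 16))]  # layers 1-2
device2 = [rng.normal(size=(16, 16)), rng.normal(size=(16, 4))]  # layers 3-4

def forward(partition, x):
    """Run the forward pass through one device's layers (dense + ReLU)."""
    for w in partition:
        x = np.maximum(0.0, x @ w)
    return x

x = rng.normal(size=(32, 8))   # batch of 32 inputs
acts = forward(device1, x)     # computed on device 1
# Boundary activations would be transferred across the interconnect here.
out = forward(device2, acts)   # computed on device 2
print(out.shape)  # → (32, 4)
```

Neither device ever needs to hold the other's weights, which is the memory saving model parallelism provides.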


7.11.3 Comparison


To summarize, Table tbl-parallelism demonstrates some of the key characteristics for comparing data parallelism and model parallelism:

Table 7.2: Comparing data parallelism and model parallelism.
Characteristic | Data Parallelism | Model Parallelism
Definition | Distribute data across devices with model replicas | Distribute model across devices
Objective | Accelerate training through compute scaling | Enable larger model training
Scaling Method | Scale devices/workers | Scale model size
Main Constraint | Model size per device | Device coordination overhead
Hardware Requirements | Multiple GPU/TPUs | Often specialized interconnect
Primary Challenge | Parameter synchronization | Complex partitioning + communication
Types | N/A | Layer-wise, filter-wise, spatial
Code Complexity | Minimal changes | More significant model surgery
Popular Libraries | Horovod, PyTorch Distributed | Mesh TensorFlow

7.12 Conclusion


In this chapter, we have covered the core foundations that enable effective training of artificial intelligence models. We explored the mathematical concepts like loss functions, backpropagation, and gradient descent that make neural network optimization possible. We also discussed practical techniques around leveraging training data, regularization, hyperparameter tuning, weight initialization, and distributed parallelization strategies that improve convergence, generalization, and scalability.


These methodologies form the bedrock upon which the success of deep learning has been built over the past decade. Mastering these fundamentals equips practitioners to architect systems and refine models tailored to their problem context. However, as models and datasets grow exponentially, training systems must optimize across metrics like time, cost, and carbon footprint. Hardware scaling to warehouse scale enables massive computational throughput, but optimizations around efficiency and specialization will be key. Software techniques like compression and sparsity exploitation can augment hardware gains. We will discuss several of these in the coming chapters.


Overall, the fundamentals covered in this chapter equip practitioners to build, refine, and deploy models. However, interdisciplinary skills spanning theory, systems, and hardware will distinguish the experts who can lift AI to the next level with the sustainability and responsibility that society requires. Understanding efficiency alongside accuracy constitutes the balanced engineering approach needed to train intelligent systems that integrate smoothly across many real-world contexts.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will be adding new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

-
-
-
- - - -
- -
- - -
- - - - - - \ No newline at end of file diff --git a/contents/workflow/workflow.html b/contents/workflow/workflow.html deleted file mode 100644 index da4e1d45..00000000 --- a/contents/workflow/workflow.html +++ /dev/null @@ -1,1277 +0,0 @@ - - - - - - - - - -Machine Learning Systems - 4  AI Workflow - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

4  AI Workflow


Resources: Slides, Labs, Exercises


DALL·E 3 Prompt: Create a rectangular illustration of a stylized flowchart representing the AI workflow/pipeline. From left to right, depict the stages as follows: ‘Data Collection’ with a database icon, ‘Data Preprocessing’ with a filter icon, ‘Model Design’ with a brain icon, ‘Training’ with a weight icon, ‘Evaluation’ with a checkmark, and ‘Deployment’ with a rocket. Connect each stage with arrows to guide the viewer horizontally through the AI processes, emphasizing these steps’ sequential and interconnected nature.

In this chapter, we’ll explore the machine learning (ML) workflow, setting the stage for subsequent chapters that delve into the specifics. To ensure we see the bigger picture, this chapter offers a high-level overview of the steps involved in the ML workflow.


The ML workflow is a structured approach that guides professionals and researchers through developing, deploying, and maintaining ML models. This workflow is generally divided into several crucial stages, each contributing to the effective development of intelligent systems.

Learning Objectives
  • Understand the ML workflow and gain insight into the structured approach and stages of developing, deploying, and maintaining machine learning models.

  • Learn about the unique challenges and distinctions between workflows for traditional machine learning and embedded AI.

  • Appreciate the roles in ML projects and understand their responsibilities and significance.

  • Understand the importance, applications, and considerations for implementing ML models in resource-constrained environments.

  • Gain awareness of the ethical and legal aspects that must be considered and adhered to in ML and embedded AI projects.

  • Establish a basic understanding of ML workflows and roles to be well prepared for deeper exploration in the following chapters.

4.1 Overview

Figure 4.1: Multi-step design methodology for the development of a machine learning model, commonly referred to as the machine learning lifecycle.

Developing a successful machine learning model requires a systematic workflow. This end-to-end process enables you to build, deploy, and maintain models effectively. As shown in Figure 4.1, it typically involves the following key steps:

  1. Problem Definition: Start by clearly articulating the specific problem you want to solve. This focuses your efforts during data collection and model building.
  2. Data Collection and Preparation: Gather relevant, high-quality training data that captures all aspects of the problem. Clean and preprocess the data to prepare it for modeling.
  3. Model Selection and Training: Choose a machine learning algorithm suited to your problem type and data. Consider the pros and cons of different approaches. Feed the prepared data into the model to train it. Training time varies based on data size and model complexity.
  4. Model Evaluation: Test the trained model on new, unseen data to measure its predictive accuracy. Identify any limitations.
  5. Model Deployment: Integrate the validated model into applications or systems to begin operationalizing it.
  6. Monitoring and Maintenance: Track model performance in production. Retrain the model periodically on new data to keep it current.

Following this structured ML workflow guides you through the key phases of development. It ensures you build effective and robust models ready for real-world deployment, resulting in higher-quality models that meet your business needs.
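The steps above can be sketched end to end in a few lines. The toy example below is purely illustrative (the data is synthetic and a closed-form least-squares fit stands in for model selection and training), but it walks through preparation, training, evaluation, and a minimal "deployment" of the model as a plain function:

```python
# Toy end-to-end ML workflow (illustrative only).
# Problem definition: predict y from x with a line y = a*x + b.

def train_linear(xs, ys):
    """Model selection and training: closed-form least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # slope, intercept

def mse(model, xs, ys):
    """Model evaluation: mean squared error on held-out data."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Data collection and preparation (synthetic stand-in data).
train_x, train_y = [0, 1, 2, 3], [0.1, 1.9, 4.1, 5.9]
test_x, test_y = [4, 5], [8.0, 10.1]

model = train_linear(train_x, train_y)
test_error = mse(model, test_x, test_y)

# Deployment: expose the validated model as a prediction function.
def predict(x):
    a, b = model
    return a * x + b
```

Real projects replace each of these stand-ins with far heavier machinery, but the shape of the loop, from data through training and evaluation to a deployed predictor, stays the same.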


The ML workflow is iterative, requiring ongoing monitoring and potential adjustments. Additional considerations include:

-
  • Version Control: Track code and data changes to reproduce results and revert to earlier versions if needed.
  • Documentation: Maintain detailed documentation for workflow understanding and reproduction.
  • Testing: Rigorously test the workflow to ensure its functionality.
  • Security: Safeguard your workflow and data when deploying models in production settings.
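Version control applies to data as well as code. One lightweight approach is to record a content hash of the training set alongside each trained model, so any result can be traced back to the exact data that produced it. A minimal sketch (the function name and record format here are illustrative, not a standard API):

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Deterministic short hash of a dataset for experiment tracking."""
    # sort_keys makes the serialization canonical, so the same data
    # always produces the same fingerprint.
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:12]

v1 = dataset_fingerprint([{"x": 1, "y": 0.9}, {"x": 2, "y": 2.1}])
v2 = dataset_fingerprint([{"x": 1, "y": 0.9}, {"x": 2, "y": 2.2}])
# Any change to the data yields a different fingerprint.
```

Dedicated tools such as DVC or MLflow provide this kind of tracking at scale, but the underlying idea is the same: bind each model artifact to an immutable identifier for its training data.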

4.2 Traditional vs. Embedded AI


The ML workflow is a universal guide applicable across various platforms, including cloud-based solutions, edge computing, and TinyML. However, the workflow for embedded AI introduces unique complexities and challenges, making it a captivating domain and paving the way for remarkable innovations.


4.2.1 Resource Optimization

  • Traditional ML Workflow: This workflow prioritizes model accuracy and performance, often leveraging abundant computational resources in cloud or data center environments.
  • Embedded AI Workflow: Given embedded systems’ resource constraints, this workflow requires careful planning to optimize model size and computational demands. Techniques like model quantization and pruning are crucial.
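To make quantization concrete, the sketch below applies 8-bit affine quantization to a list of float weights, cutting storage from 32 bits to 8 bits per weight at the cost of a small rounding error. This is a from-scratch illustration of the idea, not a production API; frameworks such as TensorFlow Lite and PyTorch ship their own quantization tooling:

```python
def quantize_uint8(weights):
    """Map float weights onto 0..255 with an affine scale and zero point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255 or 1.0  # avoid zero scale for constant weights
    zero_point = round(-w_min / scale)
    q = [min(255, max(0, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized values."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
q, scale, zp = quantize_uint8(weights)
restored = dequantize(q, scale, zp)
# Each restored weight is within one quantization step (scale) of the original.
```

Pruning is complementary: rather than shrinking each weight's representation, it removes weights (or whole channels) whose magnitude is small, and the two techniques are often applied together on embedded targets.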

4.2.2 Real-time Processing

  • Traditional ML Workflow: Less emphasis on real-time processing, often relying on batch data processing.
  • Embedded AI Workflow: Prioritizes real-time data processing, making low latency and quick execution essential, especially in applications like autonomous vehicles and industrial automation.
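Real-time constraints are usually expressed as a latency budget that every inference must meet. A simple harness for checking worst-case latency against such a budget might look like the following (the 10 ms default and the function names are illustrative):

```python
import time

def worst_case_latency_ms(fn, inputs):
    """Measure the slowest call to fn over a set of representative inputs."""
    worst = 0.0
    for x in inputs:
        start = time.perf_counter()
        fn(x)
        worst = max(worst, (time.perf_counter() - start) * 1000.0)
    return worst

def meets_budget(fn, inputs, budget_ms=10.0):
    """A deployment gate: reject models whose worst case blows the budget."""
    return worst_case_latency_ms(fn, inputs) <= budget_ms
```

In a real embedded deployment this check would run on the target hardware, since latency measured on a development machine says little about performance on a microcontroller.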

4.2.3 Data Management and Privacy

  • Traditional ML Workflow: Processes data in centralized locations, often necessitating extensive data transfer and focusing on data security during transit and storage.
  • Embedded AI Workflow: This workflow leverages edge computing to process data closer to its source, reducing data transmission and enhancing privacy through data localization.

4.2.4 Hardware-Software Integration

  • Traditional ML Workflow: Typically operates on general-purpose hardware, with software development occurring independently.
  • Embedded AI Workflow: This workflow involves a more integrated approach to hardware and software development, often incorporating custom chips or hardware accelerators to achieve optimal performance.

4.3 Roles & Responsibilities


Creating an ML solution, especially for embedded AI, is a multidisciplinary effort involving various specialists.


Table 4.1 shows a rundown of the typical roles involved:

Table 4.1: Roles and responsibilities of people involved in MLOps.

| Role | Responsibilities |
|------|------------------|
| Project Manager | Oversees the project, ensuring timelines and milestones are met. |
| Domain Experts | Offer domain-specific insights to define project requirements. |
| Data Scientists | Specialize in data analysis and model development. |
| Machine Learning Engineers | Focus on model development and deployment. |
| Data Engineers | Manage data pipelines. |
| Embedded Systems Engineers | Integrate ML models into embedded systems. |
| Software Developers | Develop software components for AI system integration. |
| Hardware Engineers | Design and optimize hardware for the embedded AI system. |
| UI/UX Designers | Focus on user-centric design. |
| QA Engineers | Ensure the system meets quality standards. |
| Ethicists and Legal Advisors | Consult on ethical and legal compliance. |
| Operations and Maintenance Personnel | Monitor and maintain the deployed system. |
| Security Specialists | Ensure system security. |

Understanding these roles is crucial for completing an ML project. As we proceed through the upcoming chapters, we’ll delve into each role’s essence and expertise, fostering a comprehensive understanding of the complexities involved in embedded AI projects. This holistic view facilitates seamless collaboration and nurtures an environment ripe for innovation and breakthroughs.


Resources


Here is a curated list of resources to support students and instructors in their learning and teaching journeys. We are continuously working on expanding this collection and will add new exercises soon.

Slides

These slides are a valuable tool for instructors to deliver lectures and for students to review the material at their own pace. We encourage students and instructors to leverage these slides to enhance their understanding and facilitate effective knowledge transfer.

Exercises

To reinforce the concepts covered in this chapter, we have curated a set of exercises that challenge students to apply their knowledge and deepen their understanding.


Coming soon.

Labs

In addition to exercises, we offer a series of hands-on labs allowing students to gain practical experience with embedded AI technologies. These labs provide step-by-step guidance, enabling students to develop their skills in a structured and supportive environment. We are excited to announce that new labs will be available soon, further enriching the learning experience.


Coming soon.

diff --git a/references.html b/references.html
deleted file mode 100644
index 6a50bc74..00000000
--- a/references.html
+++ /dev/null
@@ -1,1038 +0,0 @@

References
