diff --git a/README.md b/README.md index cc892f964884f..a8600192a9973 100644 --- a/README.md +++ b/README.md @@ -23,7 +23,6 @@ Apache Ignite is a distributed database for high-performance computing with in-m * [JavaDoc](https://ignite.apache.org/releases/latest/javadoc/) * [C#/.NET APIs](https://ignite.apache.org/releases/latest/dotnetdoc/api/) * [C++ APIs](https://ignite.apache.org/releases/latest/cppdoc/) -* [Scala APIs](https://ignite.apache.org/releases/latest/scaladoc/scalar/index.html) ## Multi-Tier Storage diff --git a/assembly/dependencies-apache-ignite-slim.xml b/assembly/dependencies-apache-ignite-slim.xml index db794ba0c8305..08c33072cfcd3 100644 --- a/assembly/dependencies-apache-ignite-slim.xml +++ b/assembly/dependencies-apache-ignite-slim.xml @@ -158,8 +158,6 @@ ${project.groupId}:ignite-osgi ${project.groupId}:ignite-osgi-karaf ${project.groupId}:ignite-osgi-paxlogging - ${project.groupId}:ignite-scalar - ${project.groupId}:ignite-scalar_2.10 ${project.groupId}:ignite-spark ${project.groupId}:ignite-spark-2.4 ${project.groupId}:ignite-ssh diff --git a/assembly/libs/README.txt b/assembly/libs/README.txt index 4299534640fe9..657045a00d55d 100644 --- a/assembly/libs/README.txt +++ b/assembly/libs/README.txt @@ -86,8 +86,6 @@ The following modules are available: - ignite-osgi-karaf (to seemlessly intall ignite into Apache Karaf container) - ignite-osgi-paxlogging (to expose PAX Logging API to Log4j if needed) - ignite-rest-http (for HTTP REST messages) -- ignite-scalar (for ignite Scala API) -- ignite-scalar_2.10 (for Ignite Scala 2.10 API) - ignite-schedule (for Cron-based task scheduling) - ignite-sl4j (for SL4J logging) - ignite-spark (for shared in-memory RDDs and faster SQL for Apache Spark) diff --git a/assembly/release-apache-ignite-base.xml b/assembly/release-apache-ignite-base.xml index e3b69e821274f..ba093d7de8e11 100644 --- a/assembly/release-apache-ignite-base.xml +++ b/assembly/release-apache-ignite-base.xml @@ -205,11 +205,6 @@ /docs/javadoc - - modules/scalar/target/site/scaladocs - /docs/scaladoc/scalar - - examples /examples diff --git a/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc index a214970ba475b..cd1d972dee0ba 100644 --- a/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc +++ b/docs/_docs/extensions-and-integrations/ignite-for-spark/ignitecontext-and-rdd.adoc @@ -102,5 +102,4 @@ val result = cacheRdd.sql("select _val from Integer where val > ? and val < ?", There are​ a couple of examples available on GitHub that demonstrate the usage of `IgniteRDD`: -* link:{githubUrl}/examples/src/main/scala/org/apache/ignite/scalar/examples/spark/ScalarSharedRDDExample.scala[Scala Example^] * link:{githubUrl}/examples/src/main/spark/org/apache/ignite/examples/spark/SharedRDDExample.java[Java Example^] diff --git a/docs/_docs/index.adoc b/docs/_docs/index.adoc index 0cf40ee8aead7..c8561b0692a27 100644 --- a/docs/_docs/index.adoc +++ b/docs/_docs/index.adoc @@ -44,7 +44,6 @@ API reference for various programming languages. 
* link:/releases/latest/javadoc/[JavaDoc] * link:/releases/latest/dotnetdoc/api/[C#/.NET] * link:/releases/latest/cppdoc/[C++] -* link:/releases/latest/scaladoc/scalar/index.html[Scala] *Older Versions* diff --git a/docs/_docs/setup.adoc b/docs/_docs/setup.adoc index 00dbfdb6e2735..a7c5c7b3fc548 100644 --- a/docs/_docs/setup.adoc +++ b/docs/_docs/setup.adoc @@ -244,10 +244,6 @@ by the Pax Logging API - the logging framework used by Apache Karaf. |ignite-rest-http | Ignite REST-HTTP starts a Jetty-based server within a node that can be used to execute tasks and/or cache commands in grid using HTTP-based link:restapi[RESTful APIs]. -|ignite-scalar | The Ignite Scalar module provides Scala-based DSL with extensions and shortcuts for Ignite API. - -|ignite-scalar_2.10 | Ignite Scalar module that supports Scala 2.10 - |ignite-schedule | This module provides functionality for scheduling jobs locally using UNIX cron-based syntax. |ignite-slf4j | Support for link:logging#using-slf4j[SLF4J logging framework]. diff --git a/examples/README-LGPL.txt b/examples/README-LGPL.txt index ae8b347db2b40..8a5763a1673d3 100644 --- a/examples/README-LGPL.txt +++ b/examples/README-LGPL.txt @@ -13,7 +13,7 @@ The examples folder contains he following subfolders: - `rest` - contains PHP script demonstrating how Ignite Cache can be accessed via HTTP API. - `sql` - contains sample SQL scripts and data sets. - `src/main/java` - contains Java examples for different Ignite modules and features. -- `src/main/scala` - contains examples demonstrating usage of API provided by Scalar. +- `src/main/scala` - contains examples demonstrating usage of API provided by Spark. - `src/main/java-lgpl` - contains lgpl-based examples for different Ignite modules and features. diff --git a/examples/README-slim.txt b/examples/README-slim.txt index 1f45e1bbd349a..2d2748174eb15 100644 --- a/examples/README-slim.txt +++ b/examples/README-slim.txt @@ -13,7 +13,7 @@ The examples folder contains he following subfolders: - `rest` - contains PHP script demonstrating how Ignite Cache can be accessed via HTTP API. - `sql` - contains sample SQL scripts and data sets. - `src/main/java` - contains Java examples for different Ignite modules and features. -- `src/main/scala` - contains examples demonstrating usage of API provided by Scalar. +- `src/main/scala` - contains examples demonstrating usage of API provided by Spark. - `src/main/java-lgpl` - contains lgpl-based examples for different Ignite modules and features. diff --git a/examples/README.txt b/examples/README.txt index 15e887d4a9bcd..f1e8caeb7404b 100644 --- a/examples/README.txt +++ b/examples/README.txt @@ -13,7 +13,7 @@ The examples folder contains he following subfolders: - `rest` - contains PHP script demonstrating how Ignite Cache can be accessed via HTTP API. - `sql` - contains sample SQL scripts and data sets. - `src/main/java` - contains Java examples for different Ignite modules and features. -- `src/main/scala` - contains examples demonstrating usage of API provided by Scalar. +- `src/main/scala` - contains examples demonstrating usage of API provided by Spark. - `src/main/java-lgpl` - contains lgpl-based examples for different Ignite modules and features. 
Starting Remote Nodes diff --git a/examples/pom-standalone-lgpl.xml b/examples/pom-standalone-lgpl.xml index 42ea1a16f777c..ab919b8f43f59 100644 --- a/examples/pom-standalone-lgpl.xml +++ b/examples/pom-standalone-lgpl.xml @@ -162,12 +162,6 @@ - - ${project.groupId} - ignite-scalar - to_be_replaced_by_ignite_version - - ${project.groupId} ignite-spark @@ -249,12 +243,6 @@ - - ${project.groupId} - ignite-scalar - to_be_replaced_by_ignite_version - - ${project.groupId} ignite-spark-2.4 diff --git a/examples/pom-standalone.xml b/examples/pom-standalone.xml index 9e661ac2a76e3..d9c7dfe12ae83 100644 --- a/examples/pom-standalone.xml +++ b/examples/pom-standalone.xml @@ -162,12 +162,6 @@ - - ${project.groupId} - ignite-scalar - to_be_replaced_by_ignite_version - - ${project.groupId} ignite-spark @@ -250,12 +244,6 @@ - - ${project.groupId} - ignite-scalar - to_be_replaced_by_ignite_version - - ${project.groupId} ignite-spark-2.4 diff --git a/examples/pom.xml b/examples/pom.xml index fcc47a126f097..bf8100a41058d 100644 --- a/examples/pom.xml +++ b/examples/pom.xml @@ -217,11 +217,6 @@ - - ${project.groupId} - ignite-scalar - - org.scalatest scalatest_2.11 @@ -311,11 +306,6 @@ - - ${project.groupId} - ignite-scalar - - org.scalatest scalatest_2.11 diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheAffinityExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheAffinityExample.scala deleted file mode 100644 index fc06fbbf08a04..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheAffinityExample.scala +++ /dev/null @@ -1,115 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import org.apache.ignite.IgniteCache -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import scala.collection.JavaConversions._ - -/** - * This example demonstrates the simplest code that populates the distributed cache - * and co-locates simple closure execution with each key. The goal of this particular - * example is to provide the simplest code example of this logic. - *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`.
- *

- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarCacheAffinityExample extends App { - /** Configuration file name. */ - private val CONFIG = "examples/config/example-ignite.xml" - - /** Name of cache. */ - private val NAME = ScalarCacheAffinityExample.getClass.getSimpleName - - /** Number of keys. */ - private val KEY_CNT = 20 - - /** Type alias. */ - type Cache = IgniteCache[Int, String] - - /* - * Note that in case of `LOCAL` configuration, - * since there is no distribution, values may come back as `nulls`. - */ - scalar(CONFIG) { - val cache = createCache$[Int, String](NAME) - - try { - populate (cache) - - visitUsingAffinityRun(cache) - - visitUsingMapKeysToNodes(cache) - } - finally { - cache.destroy() - } - } - - /** - * Visits every in-memory data ignite entry on the remote node it resides by co-locating visiting - * closure with the cache key. - * - * @param c Cache to use. - */ - private def visitUsingAffinityRun(c: IgniteCache[Int, String]) { - (0 until KEY_CNT).foreach (i => - ignite$.compute ().affinityRun (NAME, i, - () => println ("Co-located using affinityRun [key= " + i + ", value=" + c.localPeek (i) + ']') ) - ) - } - - /** - * Collocates jobs with keys they need to work. - * - * @param c Cache to use. - */ - private def visitUsingMapKeysToNodes(c: IgniteCache[Int, String]) { - val keys = (0 until KEY_CNT).toSeq - - // Map all keys to nodes. - val mappings = ignite$.affinity(NAME).mapKeysToNodes(keys) - - mappings.foreach(mapping => { - val node = mapping._1 - val mappedKeys = mapping._2 - - if (node != null) { - ignite$.cluster().forNode(node) *< (() => { - // Check cache without loading the value. - mappedKeys.foreach(key => println("Co-located using mapKeysToNodes [key= " + key + - ", value=" + c.localPeek(key) + ']')) - }, null) - } - }) - } - - /** - * Populates given cache. - * - * @param c Cache to populate. - */ - private def populate(c: Cache) { - (0 until KEY_CNT).foreach(i => c += (i -> i.toString)) - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheEntryProcessorExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheEntryProcessorExample.scala deleted file mode 100644 index ffcbbfdd94707..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheEntryProcessorExample.scala +++ /dev/null @@ -1,125 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.examples - -import javax.cache.processor.{EntryProcessor, MutableEntry} - -import org.apache.ignite.IgniteCache -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -/** - * This example demonstrates the simplest code that populates the distributed cache - * and co-locates simple closure execution with each key. The goal of this particular - * example is to provide the simplest code example of this logic using EntryProcessor. - *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: {@code 'ignite.{sh|bat} examples/config/example-ignite.xml'}.
- *

- * Alternatively you can run {@link ExampleNodeStartup} in another JVM which will - * start node with {@code examples/config/example-ignite.xml} configuration. - */ -object ScalarCacheEntryProcessorExample extends App { - /** Configuration file name. */ - private val CONFIG = "examples/config/example-ignite.xml" - - /** Name of cache. */ - private val CACHE_NAME = ScalarCacheEntryProcessorExample.getClass.getSimpleName - - /** Number of keys. */ - private val KEY_CNT = 20 - - /** Type alias. */ - type Cache = IgniteCache[String, Int] - - /* - * Note that in case of `LOCAL` configuration, - * since there is no distribution, values may come back as `nulls`. - */ - scalar(CONFIG) { - println() - println(">>> Entry processor example started.") - - val cache = createCache$[String, Int](CACHE_NAME) - - try { - populateEntriesWithInvoke(cache) - - checkEntriesInCache(cache) - - incrementEntriesWithInvoke(cache) - - checkEntriesInCache(cache) - } - finally { - cache.destroy() - } - } - - private def checkEntriesInCache(cache: Cache) { - println() - println(">>> Entries in the cache.") - - (0 until KEY_CNT).foreach(i => - println("Entry: " + cache.get(i.toString))) - } - - /** - * Runs jobs on primary nodes with {@link IgniteCache#invoke(Object, CacheEntryProcessor, Object...)} to create - * entries when they don't exist. - * - * @param cache Cache to populate. - */ - private def populateEntriesWithInvoke(cache: Cache) { - (0 until KEY_CNT).foreach(i => - cache.invoke(i.toString, - new EntryProcessor[String, Int, Object]() { - override def process(e: MutableEntry[String, Int], args: AnyRef*): Object = { - if (e.getValue == null) - e.setValue(i) - - null - } - } - ) - ) - } - - /** - * Runs jobs on primary nodes with {@link IgniteCache#invoke(Object, CacheEntryProcessor, Object...)} to increment - * entries values. - * - * @param cache Cache to populate. - */ - private def incrementEntriesWithInvoke(cache: Cache) { - println() - println(">>> Incrementing values.") - - (0 until KEY_CNT).foreach(i => - cache.invoke(i.toString, - new EntryProcessor[String, Int, Object]() { - override def process(e: MutableEntry[String, Int], args: AnyRef*): Object = { - Option(e.getValue) foreach (v => e.setValue(v + 1)) - - null - } - } - ) - ) - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheExample.scala deleted file mode 100644 index 32afab228a312..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheExample.scala +++ /dev/null @@ -1,128 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.examples - -import org.apache.ignite.events.Event -import org.apache.ignite.events.EventType._ -import org.apache.ignite.lang.IgnitePredicate -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import scala.collection.JavaConversions._ - -/** - * Demonstrates basic In-Memory Data Ignite Cluster operations with Scalar. - *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`.
- *

- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarCacheExample extends App { - /** Configuration file name. */ - private val CONFIG = "examples/config/example-ignite.xml" - - /** Name of cache specified in spring configuration. */ - private val NAME = ScalarCacheExample.getClass.getSimpleName - - scalar(CONFIG) { - val cache = createCache$[String, Int](NAME) - - try { - registerListener() - - basicOperations() - } - catch { - case e: Throwable => - e.printStackTrace(); - } - finally { - cache.destroy() - } - } - - /** - * Demos basic cache operations. - */ - def basicOperations() { - val c = cache$[String, Int](NAME).get - - // Add few values. - c += (1.toString -> 1) - c += (2.toString -> 2) - - // Update values. - c += (1.toString -> 11) - c += (2.toString -> 22) - - c += (1.toString -> 31) - c += (2.toString -> 32) - c += ((2.toString, 32)) - - // Remove couple of keys (if any). - c -= (11.toString, 22.toString) - - // Put one more value. - c += (3.toString -> 11) - - try { - c.opt(44.toString) match { - case Some(v) => sys.error("Should never happen.") - case _ => println("Correct") - } - } - catch { - case e: Throwable => - e.printStackTrace() - } - - - // Print all values. - println("Print all values.") - c.iterator() foreach println - } - - /** - * This method will register listener for cache events on all nodes, - * so we can actually see what happens underneath locally and remotely. - */ - def registerListener() { - val g = ignite$ - - g *< (() => { - val lsnr = new IgnitePredicate[Event] { - override def apply(e: Event): Boolean = { - println(e.shortDisplay) - - true - } - } - - if (g.cluster().nodeLocalMap[String, AnyRef].putIfAbsent("lsnr", lsnr) == null) { - g.events().localListen(lsnr, - EVT_CACHE_OBJECT_PUT, - EVT_CACHE_OBJECT_READ, - EVT_CACHE_OBJECT_REMOVED) - - println("Listener is registered.") - } - }, null) - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCachePopularNumbersExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCachePopularNumbersExample.scala deleted file mode 100644 index d113297ac4d54..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCachePopularNumbersExample.scala +++ /dev/null @@ -1,151 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.examples - -import java.lang.{Integer => JavaInt, Long => JavaLong} -import java.util -import java.util.Map.Entry -import java.util.Timer -import javax.cache.processor.{EntryProcessor, MutableEntry} - -import org.apache.ignite.cache.query.SqlFieldsQuery -import org.apache.ignite.internal.util.scala.impl -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ -import org.apache.ignite.stream.StreamReceiver -import org.apache.ignite.{IgniteCache, IgniteException} - -import scala.collection.JavaConversions._ -import scala.util.Random - -/** - * Real time popular number counter. - *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`.
- *

- * Alternatively you can run `ExampleNodeStartup` in another JVM which will
- * start node with `examples/config/example-ignite.xml` configuration.
- *

- * The counts are kept in cache on all remote nodes. Top `10` counts from each node are then grabbed to produce - * an overall top `10` list within the ignite. - */ -object ScalarCachePopularNumbersExample extends App { - /** Configuration file name. */ - private val CONFIG = "examples/config/example-ignite.xml" - - /** Cache name. */ - private final val NAME = ScalarCachePopularNumbersExample.getClass.getSimpleName - - /** Count of most popular numbers to retrieve from cluster. */ - private final val POPULAR_NUMBERS_CNT = 10 - - /** Random number generator. */ - private final val RAND = new Random() - - /** Range within which to generate numbers. */ - private final val RANGE = 1000 - - /** Count of total numbers to generate. */ - private final val CNT = 1000000 - - scalar(CONFIG) { - val cache = createCache$[JavaInt, JavaLong](NAME, indexedTypes = Seq(classOf[JavaInt], classOf[JavaLong])) - - println() - println(">>> Cache popular numbers example started.") - - try { - val prj = ignite$.cluster().forCacheNodes(NAME) - - if (prj.nodes().isEmpty) - println("Ignite does not have cache configured: " + NAME) - else { - val popularNumbersQryTimer = new Timer("numbers-query-worker") - - try { - // Schedule queries to run every 3 seconds during populates cache phase. - popularNumbersQryTimer.schedule(timerTask(query(POPULAR_NUMBERS_CNT)), 3000, 3000) - - streamData() - - // Force one more run to get final counts. - query(POPULAR_NUMBERS_CNT) - } - finally { - popularNumbersQryTimer.cancel() - } - } - } - finally { - cache.destroy() - } - } - - /** - * Populates cache in real time with numbers and keeps count for every number. - * @throws IgniteException If failed. - */ - @throws[IgniteException] - def streamData() { - // Set larger per-node buffer size since our state is relatively small. - // Reduce parallel operations since we running the whole ignite cluster locally under heavy load. - val smtr = dataStreamer$[JavaInt, JavaLong](NAME, 2048) - - smtr.receiver(new IncrementingUpdater()) - - (0 until CNT) foreach (_ => smtr.addData(RAND.nextInt(RANGE), 1L)) - - smtr.close(false) - } - - /** - * Queries a subset of most popular numbers from in-memory data ignite cluster. - * - * @param cnt Number of most popular numbers to return. - */ - def query(cnt: Int) { - val results = cache$[JavaInt, JavaLong](NAME).get - .query(new SqlFieldsQuery("select _key, _val from Long order by _val desc, _key limit " + cnt)) - .getAll - - results.foreach(res => println(res.get(0) + "=" + res.get(1))) - - println("------------------") - } - - /** - * Increments value for key. - */ - private class IncrementingUpdater extends StreamReceiver[JavaInt, JavaLong] { - private[this] final val INC = new EntryProcessor[JavaInt, JavaLong, Object]() { - /** Process entries to increase value by entry key. 
*/ - override def process(e: MutableEntry[JavaInt, JavaLong], args: AnyRef*): Object = { - e.setValue(Option(e.getValue) - .map(l => JavaLong.valueOf(l + 1)) - .getOrElse(JavaLong.valueOf(1L))) - - null - } - } - - @impl def receive(cache: IgniteCache[JavaInt, JavaLong], entries: util.Collection[Entry[JavaInt, JavaLong]]) { - entries.foreach(entry => cache.invoke(entry.getKey, INC)) - } - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheQueryExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheQueryExample.scala deleted file mode 100644 index 6d6c8c34af96c..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCacheQueryExample.scala +++ /dev/null @@ -1,152 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import java.lang.{Long => JLong} -import java.util._ - -import org.apache.ignite.cache.CacheMode._ -import org.apache.ignite.cache.affinity.AffinityKey -import org.apache.ignite.configuration.CacheConfiguration -import org.apache.ignite.examples.model.{Person, Organization} -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ -import org.apache.ignite.{Ignite, IgniteCache} - -import scala.collection.JavaConversions._ - -/** - * Demonstrates cache ad-hoc queries with Scalar. - *

- * Remote nodes should be started using `ExampleNodeStartup` which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarCacheQueryExample { - /** Configuration file name. */ - private val CONFIG = "examples/config/example-ignite.xml" - - /** Cache name. */ - private val NAME = ScalarCacheQueryExample.getClass.getSimpleName - - /** - * Example entry point. No arguments required. - * - * @param args Command line arguments. None required. - */ - def main(args: Array[String]) { - scalar(CONFIG) { - val cache = createCache$(NAME, indexedTypes = Seq(classOf[JLong], classOf[Organization], - classOf[AffinityKey[_]], classOf[Person])) - - try { - example(ignite$) - } - finally { - cache.destroy() - } - } - } - - /** - * Runs the example. - * - * @param ignite Ignite instance to use. - */ - private def example(ignite: Ignite) { - // Populate cache. - initialize() - - // Cache instance shortcut. - val cache = mkCache[AffinityKey[JLong], Person] - - // Using distributed queries for partitioned cache and local queries for replicated cache. - // Since in replicated caches data is available on all nodes, including local one, - // it is enough to just query the local node. - val prj = if (cache.getConfiguration(classOf[CacheConfiguration[AffinityKey[JLong], Person]]).getCacheMode == PARTITIONED) - ignite.cluster().forRemotes() - else - ignite.cluster().forLocal() - - // Example for SQL-based querying employees based on salary ranges. - // Gets all persons with 'salary > 1000'. - print("People with salary more than 1000: ", cache.sql("salary > 1000").getAll.map(e => e.getValue)) - - // Example for TEXT-based querying for a given string in people resumes. - // Gets all persons with 'Bachelor' degree. - print("People with Bachelor degree: ", cache.text("Bachelor").getAll.map(e => e.getValue)) - } - - /** - * Gets instance of typed cache view to use. - * - * @return Cache to use. - */ - private def mkCache[K, V]: IgniteCache[K, V] = cache$[K, V](NAME).get - - /** - * Populates cache with test data. - */ - private def initialize() { - // Clean up caches on all nodes before run. - cache$(NAME).get.clear() - - // Organization cache projection. - val orgCache = mkCache[JLong, Organization] - - // Organizations. - val org1 = new Organization("Ignite") - val org2 = new Organization("Other") - - orgCache += (org1.id -> org1) - orgCache += (org2.id -> org2) - - // Person cache projection. - val prnCache = mkCache[AffinityKey[JLong], Person] - - // People. - val p1 = new Person(org1, "John", "Doe", 2000, "John Doe has Master Degree.") - val p2 = new Person(org1, "Jane", "Doe", 1000, "Jane Doe has Bachelor Degree.") - val p3 = new Person(org2, "John", "Smith", 1500, "John Smith has Bachelor Degree.") - val p4 = new Person(org2, "Jane", "Smith", 2500, "Jane Smith has Master Degree.") - - // Note that in this example we use custom affinity key for Person objects - // to ensure that all persons are collocated with their organizations. - prnCache += (p1.key -> p1) - prnCache += (p2.key -> p2) - prnCache += (p3.key -> p3) - prnCache += (p4.key -> p4) - } - - /** - * Prints object or collection of objects to standard out. - * - * @param msg Message to print before object is printed. - * @param o Object to print, can be `Iterable`. 
- */ - private def print(msg: String, o: Any) { - assert(msg != null) - assert(o != null) - - println(">>> " + msg) - - o match { - case it: Iterable[Any] => it.foreach(e => println(">>> " + e.toString)) - case _ => println(">>> " + o.toString) - } - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarClosureExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarClosureExample.scala deleted file mode 100644 index 719f216c62d67..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarClosureExample.scala +++ /dev/null @@ -1,100 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import org.apache.ignite.cluster.ClusterNode -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -/** - * Demonstrates various closure executions on the cloud using Scalar. - *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`.
- *

- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarClosureExample extends App { - scalar("examples/config/example-ignite.xml") { - topology() - helloWorld() - helloWorld2() - broadcast() - greetRemotes() - greetRemotesAgain() - } - - /** - * Prints ignite topology. - */ - def topology() { - ignite$ foreach (n => println("Node: " + nid8$(n))) - } - - /** - * Obligatory example (2) - cloud enabled Hello World! - */ - def helloWorld2() { - // Notice the example usage of Java-side closure 'F.println(...)' and method 'scala' - // that explicitly converts Java side object to a proper Scala counterpart. - // This method is required since implicit conversion won't be applied here. - ignite$.run$(for (w <- "Hello World!".split(" ")) yield () => println(w), null) - } - - /** - * Obligatory example - cloud enabled Hello World! - */ - def helloWorld() { - ignite$.run$("HELLO WORLD!".split(" ") map (w => () => println(w)), null) - } - - /** - * One way to execute closures on the ignite cluster. - */ - def broadcast() { - ignite$.bcastRun(() => println("Broadcasting!!!"), null) - } - - /** - * Greats all remote nodes only. - */ - def greetRemotes() { - val me = ignite$.cluster().localNode.id - - // Note that usage Java-based closure. - ignite$.cluster().forRemotes() match { - case p if p.isEmpty => println("No remote nodes!") - case p => p.bcastRun(() => println("Greetings from: " + me), null) - } - } - - /** - * Same as previous greetings for all remote nodes but remote cluster group is filtered manually. - */ - def greetRemotesAgain() { - val me = ignite$.cluster().localNode.id - - // Just show that we can create any groups we like... - // Note that usage of Java-based closure via 'F' typedef. - ignite$.cluster().forPredicate((n: ClusterNode) => n.id != me) match { - case p if p.isEmpty => println("No remote nodes!") - case p => p.bcastRun(() => println("Greetings again from: " + me), null) - } - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarContinuationExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarContinuationExample.scala deleted file mode 100644 index 62b3a13913757..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarContinuationExample.scala +++ /dev/null @@ -1,171 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.examples - -import org.apache.ignite.compute.ComputeJobContext -import org.apache.ignite.lang.{IgniteClosure, IgniteFuture} -import org.apache.ignite.resources.JobContextResource -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ -import org.jetbrains.annotations.Nullable - -import java.math._ -import java.util - -/** - * This example recursively calculates `Fibonacci` numbers on the ignite cluster. This is - * a powerful design pattern which allows for creation of fully distributively recursive - * (a.k.a. nested) tasks or closures with continuations. This example also shows - * usage of `continuations`, which allows us to wait for results from remote nodes - * without blocking threads. - *

- * Note that because this example utilizes local node storage via `NodeLocal`,
- * it gets faster if you execute it multiple times, as the more you execute it,
- * the more values it will be cached on remote nodes.
- *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`.
- *

- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarContinuationExample { - def main(args: Array[String]) { - scalar("examples/config/example-ignite.xml") { - // Calculate fibonacci for N. - val N: Long = 100 - - val thisNode = ignite$.cluster().localNode - - val start = System.currentTimeMillis - - // Group that excludes this node if others exists. - val prj = if (ignite$.cluster().nodes().size() > 1) ignite$.cluster().forOthers(thisNode) else ignite$.cluster().forNode(thisNode) - - val fib = ignite$.compute(prj).apply(new FibonacciClosure(thisNode.id()), N) - - val duration = System.currentTimeMillis - start - - println(">>>") - println(">>> Finished executing Fibonacci for '" + N + "' in " + duration + " ms.") - println(">>> Fibonacci sequence for input number '" + N + "' is '" + fib + "'.") - println(">>> You should see prints out every recursive Fibonacci execution on cluster nodes.") - println(">>> Check remote nodes for output.") - println(">>>") - } - } -} - -/** - * Closure to execute. - * - * @param excludeNodeId Node to exclude from execution if there are more then 1 node in cluster. - */ -class FibonacciClosure ( - private[this] val excludeNodeId: util.UUID -) extends IgniteClosure[Long, BigInteger] { - // These fields must be *transient* so they do not get - // serialized and sent to remote nodes. - // However, these fields will be preserved locally while - // this closure is being "held", i.e. while it is suspended - // and is waiting to be continued. - @transient private var fut1, fut2: IgniteFuture[BigInteger] = null - - // Auto-inject job context. - @JobContextResource - private val jobCtx: ComputeJobContext = null - - @Nullable override def apply(num: Long): BigInteger = { - if (fut1 == null || fut2 == null) { - println(">>> Starting fibonacci execution for number: " + num) - - // Make sure n is not negative. - val n = math.abs(num) - - val g = ignite$ - - if (n <= 2) - return if (n == 0) - BigInteger.ZERO - else - BigInteger.ONE - - // Get properly typed node-local storage. - val store = g.cluster().nodeLocalMap[Long, IgniteFuture[BigInteger]]() - - // Check if value is cached in node-local store first. - fut1 = store.get(n - 1) - fut2 = store.get(n - 2) - - val excludeNode = ignite$.cluster().node(excludeNodeId) - - // Group that excludes node with id passed in constructor if others exists. - val prj = if (ignite$.cluster().nodes().size() > 1) ignite$.cluster().forOthers(excludeNode) else ignite$.cluster().forNode(excludeNode) - - val comp = ignite$.compute(prj) - - // If future is not cached in node-local store, cache it. - // Note recursive execution! - if (fut1 == null) { - val futVal = comp.applyAsync(new FibonacciClosure(excludeNodeId), n - 1) - - fut1 = store.putIfAbsent(n - 1, futVal) - - if (fut1 == null) - fut1 = futVal - } - - // If future is not cached in node-local store, cache it. - if (fut2 == null) { - val futVal = comp.applyAsync(new FibonacciClosure(excludeNodeId), n - 2) - - fut2 = store.putIfAbsent(n - 2, futVal) - - if (fut2 == null) - fut2 = futVal - } - - // If futures are not done, then wait asynchronously for the result - if (!fut1.isDone || !fut2.isDone) { - val lsnr = (fut: IgniteFuture[BigInteger]) => { - // This method will be called twice, once for each future. - // On the second call - we have to have both futures to be done - // - therefore we can call the continuation. 
- if (fut1.isDone && fut2.isDone) - jobCtx.callcc() // Resume job execution. - } - - // Hold (suspend) job execution. - // It will be resumed in listener above via 'callcc()' call - // once both futures are done. - jobCtx.holdcc() - - // Attach the same listener to both futures. - fut1.listen(lsnr) - fut2.listen(lsnr) - - return null - } - } - - assert(fut1.isDone && fut2.isDone) - - // Return cached results. - fut1.get.add(fut2.get) - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCreditRiskExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCreditRiskExample.scala deleted file mode 100644 index e3ba0014ff332..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarCreditRiskExample.scala +++ /dev/null @@ -1,249 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import scala.util.Random -import scala.util.control.Breaks._ - -/** - * Scalar-based Monte-Carlo example. - *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`.
- *

- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarCreditRiskExample { - def main(args: Array[String]) { - scalar("examples/config/example-ignite.xml") { - // Create portfolio. - var portfolio = Seq.empty[Credit] - - val rnd = new Random - - // Generate some test portfolio items. - (0 until 5000).foreach(i => - portfolio +:= Credit( - 50000 * rnd.nextDouble, - rnd.nextInt(1000), - rnd.nextDouble / 10, - rnd.nextDouble / 20 + 0.02 - ) - ) - - // Forecast horizon in days. - val horizon = 365 - - // Number of Monte-Carlo iterations. - val iter = 10000 - - // Percentile. - val percentile = 0.95 - - // Mark the stopwatch. - val start = System.currentTimeMillis - - // Calculate credit risk and print it out. - // As you can see the ignite cluster enabling is completely hidden from the caller - // and it is fully transparent to him. In fact, the caller is never directly - // aware if method was executed just locally or on the 100s of cluster nodes. - // Credit risk crdRisk is the minimal amount that creditor has to have - // available to cover possible defaults. - val crdRisk = ignite$ @< (closures(ignite$.cluster().nodes().size(), portfolio.toArray, horizon, iter, percentile), - (s: Seq[Double]) => s.sum / s.size, null) - - println("Credit risk [crdRisk=" + crdRisk + ", duration=" + - (System.currentTimeMillis - start) + "ms]") - } - } - - /** - * Creates closures for calculating credit risks. - * - * @param clusterSize Size of the cluster. - * @param portfolio Portfolio. - * @param horizon Forecast horizon in days. - * @param iter Number of Monte-Carlo iterations. - * @param percentile Percentile. - * @return Collection of closures. - */ - private def closures(clusterSize: Int, portfolio: Array[Credit], horizon: Int, iter: Int, - percentile: Double): Seq[() => Double] = { - val iterPerNode: Int = math.round(iter / clusterSize.asInstanceOf[Float]) - val lastNodeIter: Int = iter - (clusterSize - 1) * iterPerNode - - var cls = Seq.empty[() => Double] - - (0 until clusterSize).foreach(i => { - val nodeIter = if (i == clusterSize - 1) lastNodeIter else iterPerNode - - cls +:= (() => new CreditRiskManager().calculateCreditRiskMonteCarlo( - portfolio, horizon, nodeIter, percentile)) - }) - - cls - } -} - -/** - * This class provides a simple model for a credit contract (or a loan). It is basically - * defines as remaining crediting amount to date, credit remaining term, APR and annual - * probability on default. Although this model is simplified for the purpose - * of this example, it is close enough to emulate the real-life credit - * risk assessment application. - */ -private case class Credit( - remAmnt: Double, // Remaining crediting amount. - remTerm: Int, // Remaining crediting remTerm. - apr: Double, // Annual percentage rate (APR). - edf: Double // Expected annual probability of default (EaDF). -) { - /** - * Gets either credit probability of default for the given period of time - * if remaining term is less than crediting time or probability of default - * for whole remained crediting time. - * - * @param term Default term. - * @return Credit probability of default in relative percents - * (percentage / 100). - */ - def getDefaultProbability(term: Int): Double = { - (1 - math.exp(math.log(1 - edf) * math.min(remTerm, term) / 365.0)) - } -} - -/** - * This class abstracts out the calculation of risk for a credit portfolio. 
- */ -private class CreditRiskManager { - /** - * Default randomizer with normal distribution. - * Note that since every JVM on the ignite cluster will have its own random - * generator (independently initialized) the Monte-Carlo simulation - * will be slightly skewed when performed on the ignite cluster due to skewed - * normal distribution of the sub-jobs comparing to execution on the - * local node only with single random generator. Real-life applications - * may want to provide its own implementation of distributed random - * generator. - */ - private val rndGen = new Random - - /** - * Calculates credit risk for a given credit portfolio. This calculation uses - * Monte-Carlo Simulation to produce risk value. - * - * @param portfolio Credit portfolio. - * @param horizon Forecast horizon (in days). - * @param num Number of Monte-Carlo iterations. - * @param percentile Cutoff level. - * @return Credit risk value, i.e. the minimal amount that creditor has to - * have available to cover possible defaults. - */ - def calculateCreditRiskMonteCarlo(portfolio: Seq[Credit], horizon: Int, num: - Int, percentile: Double): Double = { - println(">>> Calculating credit risk for portfolio [size=" + portfolio.length + ", horizon=" + - horizon + ", percentile=" + percentile + ", iterations=" + num + "] <<<") - - val start = System.currentTimeMillis - - val losses = calculateLosses(portfolio, horizon, num).sorted - val lossProbs = new Array[Double](losses.size) - - (0 until losses.size).foreach(i => { - if (i == 0) - lossProbs(i) = getLossProbability(losses, 0) - else if (losses(i) != losses(i - 1)) - lossProbs(i) = getLossProbability(losses, i) + lossProbs(i - 1) - else - lossProbs(i) = lossProbs(i - 1) - }) - - var crdRisk = 0.0 - - breakable { - (0 until lossProbs.size).foreach(i => { - if (lossProbs(i) > percentile) { - crdRisk = losses(i - 1) - - break() - } - }) - } - - println(">>> Finished calculating portfolio risk [risk=" + crdRisk + - ", time=" + (System.currentTimeMillis - start) + "ms]") - - crdRisk - } - - /** - * Calculates losses for the given credit portfolio using Monte-Carlo Simulation. - * Simulates probability of default only. - * - * @param portfolio Credit portfolio. - * @param horizon Forecast horizon. - * @param num Number of Monte-Carlo iterations. - * @return Losses array simulated by Monte Carlo method. - */ - private def calculateLosses(portfolio: Seq[Credit], horizon: Int, num: Int): Array[Double] = { - val losses = new Array[Double](num) - - // Count losses using Monte-Carlo method. We generate random probability of default, - // if it exceeds certain credit default value we count losses - otherwise count income. - (0 until num).foreach(i => { - portfolio.foreach(crd => { - val remDays = math.min(crd.remTerm, horizon) - - if (rndGen.nextDouble >= 1 - crd.getDefaultProbability(remDays)) - // (1 + 'r' * min(H, W) / 365) * S. - // Where W is a horizon, H is a remaining crediting term, 'r' is an annual credit rate, - // S is a remaining credit amount. - losses(i) += (1 + crd.apr * math.min(horizon, crd.remTerm) / 365) * crd.remAmnt - else - // - 'r' * min(H,W) / 365 * S - // Where W is a horizon, H is a remaining crediting term, 'r' is a annual credit rate, - // S is a remaining credit amount. - losses(i) -= crd.apr * math.min(horizon, crd.remTerm) / 365 * crd.remAmnt - }) - }) - - losses - } - - /** - * Calculates probability of certain loss in array of losses. - * - * @param losses Array of losses. - * @param i Index of certain loss in array. 
- * @return Probability of loss with given index. - */ - private def getLossProbability(losses: Array[Double], i: Int): Double = { - var count = 0.0 - - losses.foreach(tmp => { - if (tmp == losses(i)) - count += 1 - }) - - count / losses.size - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarJvmCloudExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarJvmCloudExample.scala deleted file mode 100644 index 814bb2e99611d..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarJvmCloudExample.scala +++ /dev/null @@ -1,95 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import java.util.concurrent.Executors -import java.util.concurrent.TimeUnit._ -import javax.swing.{JComponent, JLabel, JOptionPane} - -import org.apache.ignite.configuration.IgniteConfiguration -import org.apache.ignite.internal.util.scala.impl -import org.apache.ignite.scalar.scalar -import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi -import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder - -/** - * This example demonstrates how you can easily startup multiple nodes - * in the same JVM with Scala. All started nodes use default configuration - * with only difference of the ignite cluster name which has to be different for - * every node so they can be differentiated within JVM. - *

- * Starting multiple nodes in the same JVM is especially useful during - * testing and debugging as it allows you to create a full ignite cluster within - * a test case, simulate various scenarios, and watch how jobs and data - * behave within a ignite cluster. - */ -object ScalarJvmCloudExample { - /** Names of nodes to start. */ - val NODES = List("scalar-node-0", "scalar-node-1", "scalar-node-2", "scalar-node-3", "scalar-node-4") - - def main(args: Array[String]) { - try { - // Shared IP finder for in-VM node discovery. - val ipFinder = new TcpDiscoveryVmIpFinder(true) - - val pool = Executors.newFixedThreadPool(NODES.size) - - // Concurrently startup all nodes. - NODES.foreach(name => pool.execute(new Runnable { - @impl def run() { - // All defaults. - val cfg = new IgniteConfiguration - - cfg.setGridName(name) - - // Configure in-VM TCP discovery so we don't - // interfere with other ignites running on the same network. - val discoSpi = new TcpDiscoverySpi - - discoSpi.setIpFinder(ipFinder) - - cfg.setDiscoverySpi(discoSpi) - - // Start node - scalar.start(cfg) - - () - } - })) - - pool.shutdown() - - pool.awaitTermination(Long.MaxValue, MILLISECONDS) - - // Wait until Ok is pressed. - JOptionPane.showMessageDialog( - null, - Array[JComponent]( - new JLabel("Ignite JVM cloud started."), - new JLabel("Number of nodes in the cluster: " + scalar.ignite$(NODES(1)).get.cluster().nodes().size()), - new JLabel("Click OK to stop.") - ), - "Ignite", - JOptionPane.INFORMATION_MESSAGE) - - } - // Stop all nodes - finally - NODES.foreach(node => scalar.stop(node, true)) - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarPingPongExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarPingPongExample.scala deleted file mode 100644 index 75784cfb562b4..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarPingPongExample.scala +++ /dev/null @@ -1,160 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import java.util.UUID -import java.util.concurrent.CountDownLatch - -import org.apache.ignite.messaging.MessagingListenActor -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -/** - * Demonstrates simple protocol-based exchange in playing a ping-pong between - * two nodes. It is analogous to `MessagingPingPongExample` on Java side. - *

- * Remote nodes should always be started with special configuration file which
- * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`.
- *

- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarPingPongExample extends App { - scalar("examples/config/example-ignite.xml") { - pingPong() - //pingPong2() - } - - /** - * Implements Ping Pong example between local and remote node. - */ - def pingPong() { - val g = ignite$ - - if (g.cluster().nodes().size < 2) { - println(">>>") - println(">>> I need a partner to play a ping pong!") - println(">>>") - - return - } - else { - // Pick first remote node as a partner. - val nodeB = g.cluster().forNode(g.remoteNodes$().head) - - // Set up remote player: configure remote node 'rmt' to listen - // for messages from local node 'loc'. - g.message(nodeB).remoteListen(null, new MessagingListenActor[String]() { - def receive(nodeId: UUID, msg: String) { - println(msg) - - msg match { - case "PING" => respond("PONG") - case "STOP" => stop() - } - } - }) - - val latch = new CountDownLatch(10) - - // Set up local player: configure local node 'loc' - // to listen for messages from remote node 'rmt'. - ignite$.message().localListen(null, new MessagingListenActor[String]() { - def receive(nodeId: UUID, msg: String) { - println(msg) - - if (latch.getCount == 1) - stop("STOP") - else // We know it's 'PONG'. - respond("PING") - - latch.countDown() - } - }) - - // Serve! - nodeB.send$("PING", null) - - // Wait til the match is over. - latch.await() - } - } - - /** - * Implements Ping Pong example between two remote nodes. - */ - def pingPong2() { - val g = ignite$ - - if (g.cluster().forRemotes().nodes().size() < 2) { - println(">>>") - println(">>> I need at least two remote nodes!") - println(">>>") - } - else { - // Pick two remote nodes. - val n1 = g.cluster().forRemotes().head - val n2 = g.cluster().forRemotes().tail.head - - val n1p = g.cluster().forNode(n1) - val n2p = g.cluster().forNode(n2) - - // Configure remote node 'n1' to receive messages from 'n2'. - g.message(n1p).remoteListen(null, new MessagingListenActor[String] { - def receive(nid: UUID, msg: String) { - println(msg) - - msg match { - case "PING" => respond("PONG") - case "STOP" => stop() - } - } - }) - - // Configure remote node 'n2' to receive messages from 'n1'. - g.message(n2p).remoteListen(null, new MessagingListenActor[String] { - // Get local count down latch. - private lazy val latch: CountDownLatch = g.cluster().nodeLocalMap().get("latch") - - def receive(nid: UUID, msg: String) { - println(msg) - - latch.getCount match { - case 1 => stop("STOP") - case _ => respond("PING") - } - - latch.countDown() - } - }) - - // 1. Sets latch into node local storage so that local actor could use it. - // 2. Sends first 'PING' to 'n1'. - // 3. Waits until all messages are exchanged between two remote nodes. - n2p.run$(() => { - val latch = new CountDownLatch(10) - - g.cluster().nodeLocalMap[String, CountDownLatch].put("latch", latch) - - n1p.send$("PING", null) - - latch.await() - }, null) - } - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarPrimeExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarPrimeExample.scala deleted file mode 100644 index 867783b294298..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarPrimeExample.scala +++ /dev/null @@ -1,134 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import java.util - -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import scala.util.control.Breaks._ - -/** - * Prime Number calculation example based on Scalar. - * - * ==Starting Remote Nodes== - * To try this example you should (but don't have to) start remote ignite instances. - * You can start as many as you like by executing the following script: - * `{IGNITE_HOME}/bin/ignite.{bat|sh} examples/config/example-ignite.xml` - *
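The `ExampleNodeStartup` class referenced by these examples is, roughly, a one-line programmatic equivalent of that script; a minimal sketch (the class name here is illustrative):

```java
// Starting an extra node programmatically; the call blocks until the node is stopped.
import org.apache.ignite.Ignition;

public class NodeStartupSketch {
    public static void main(String[] args) {
        Ignition.start("examples/config/example-ignite.xml");
    }
}
```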
- * Once remote instances are started, you can execute this example from - * Eclipse, IntelliJ IDEA, or NetBeans (and any other Java IDE) by simply hitting run - * button. You will see that all nodes discover each other and - * all of the nodes will participate in task execution (check node - * output). - *
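The primality check that follows splits the divisor range into closures, runs them across the cluster and reduces the partial results. The same shape can be expressed with the Java compute API; a minimal sketch, with all class and variable names being illustrative:

```java
// Sketch: split the divisor range into serializable closures, run them across
// the cluster, and look for any divisor that was found.
import java.util.ArrayList;
import java.util.Collection;
import java.util.Objects;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class PrimeCheckSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            long checkVal = 32452843L;

            int parts = Math.max(ignite.cluster().nodes().size(), 1);
            long perTask = Math.max(checkVal / parts, 10);

            Collection<IgniteCallable<Long>> jobs = new ArrayList<>();

            for (long min = 2; min <= checkVal; min += perTask) {
                long from = min;
                long to = Math.min(min + perTask - 1, checkVal);

                // Each job scans its own divisor range and returns a divisor or null.
                jobs.add(() -> {
                    for (long d = from; d <= to; d++)
                        if (d != 1 && d != checkVal && checkVal % d == 0)
                            return d;

                    return null;
                });
            }

            Long divisor = ignite.compute().call(jobs).stream()
                .filter(Objects::nonNull).findFirst().orElse(null);

            System.out.println(divisor == null
                ? ">>> Value '" + checkVal + "' is a prime number"
                : ">>> Value '" + checkVal + "' is divisible by '" + divisor + "'");
        }
    }
}
```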
- * Note that when running this example on a multi-core box, simply - * starting additional cluster node on the same box will speed up - * prime number calculation by a factor of 2. - */ -object ScalarPrimeExample { - /** - * Main entry point to application. No arguments required. - * - * @param args Command like argument (not used). - */ - def main(args: Array[String]){ - scalar("examples/config/example-ignite.xml") { - val start = System.currentTimeMillis - - // Values we want to check for prime. - val checkVals = Array(32452841L, 32452843L, 32452847L, 32452849L, 236887699L, 217645199L) - - println(">>>") - println(">>> Starting to check the following numbers for primes: " + util.Arrays.toString(checkVals)) - - val g = ignite$ - - checkVals.foreach(checkVal => { - val divisor = g.reduce$[Option[Long], Option[Option[Long]]]( - closures(g.cluster().nodes().size(), checkVal), _.find(_.isDefined), null) - - if (!divisor.isDefined) - println(">>> Value '" + checkVal + "' is a prime number") - else - println(">>> Value '" + checkVal + "' is divisible by '" + divisor.get.get + '\'') - }) - - val totalTime = System.currentTimeMillis - start - - println(">>> Total time to calculate all primes (milliseconds): " + totalTime) - println(">>>") - } - } - - /** - * Creates closures for checking passed in value for prime. - * - * Every closure gets a range of divisors to check. The lower and - * upper boundaries of this range are passed into closure. - * Closures checks if the value passed in is divisible by any of - * the divisors in the range. - * - * @param clusterSize Size of the cluster. - * @param checkVal Value to check. - * @return Collection of closures. - */ - private def closures(clusterSize: Int, checkVal: Long): Seq[() => Option[Long]] = { - var cls = Seq.empty[() => Option[Long]] - - val taskMinRange = 2L - val numbersPerTask = if (checkVal / clusterSize < 10) 10L else checkVal / clusterSize - - var minRange = 0L - var maxRange = 0L - - var i = 0 - - while (maxRange < checkVal) { - minRange = i * numbersPerTask + taskMinRange - maxRange = (i + 1) * numbersPerTask + taskMinRange - 1 - - if (maxRange > checkVal) - maxRange = checkVal - - val min = minRange - val max = maxRange - - cls +:= (() => { - var divisor: Option[Long] = None - - breakable { - (min to max).foreach(d => { - if (d != 1 && d != checkVal && checkVal % d == 0) { - divisor = Some(d) - - break() - } - }) - } - - divisor - }) - - i += 1 - } - - cls - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarSnowflakeSchemaExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarSnowflakeSchemaExample.scala deleted file mode 100644 index b88cfa5095e45..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarSnowflakeSchemaExample.scala +++ /dev/null @@ -1,319 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import java.lang.{Integer => JavaInt} -import java.util.ConcurrentModificationException -import java.util.concurrent.ThreadLocalRandom -import javax.cache.Cache - -import org.apache.ignite.IgniteCache -import org.apache.ignite.cache.CacheMode -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import scala.collection.JavaConversions._ - -/** - * Snowflake Schema is a logical - * arrangement of data in which data is split into `dimensions` and `facts` - * Dimensions can be referenced or joined by other dimensions or facts, - * however, facts are generally not referenced by other facts. You can view dimensions - * as your master or reference data, while facts are usually large data sets of events or - * other objects that continuously come into the system and may change frequently. In Ignite - * such architecture is supported via cross-cache queries. By storing dimensions in - * `CacheMode#REPLICATED REPLICATED` caches and facts in much larger - * `CacheMode#PARTITIONED PARTITIONED` caches you can freely execute distributed joins across - * your whole in-memory data ignite cluster, thus querying your in memory data without any limitations. - *
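The dimensions-in-`REPLICATED` / facts-in-`PARTITIONED` layout described here does not depend on the Scala DSL; a minimal Java configuration sketch (cache names and sample entries are illustrative):

```java
// Sketch: reference data in a REPLICATED cache, fact data in a PARTITIONED cache,
// using the Java cache API.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class SnowflakeCachesSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            CacheConfiguration<Integer, Object> dimCfg = new CacheConfiguration<>("dimensions");
            dimCfg.setCacheMode(CacheMode.REPLICATED);   // Small reference data, fully copied to every node.

            CacheConfiguration<Integer, Object> factCfg = new CacheConfiguration<>("facts");
            factCfg.setCacheMode(CacheMode.PARTITIONED); // Large event data, spread across the cluster.

            IgniteCache<Integer, Object> dims = ignite.getOrCreateCache(dimCfg);
            IgniteCache<Integer, Object> facts = ignite.getOrCreateCache(factCfg);

            dims.put(1, "Store1");
            facts.put(100, "purchase #100 at store 1");
        }
    }
}
```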
- * In this example we have two dimensions, `DimProduct` and `DimStore` and - * one fact - `FactPurchase`. Queries are executed by joining dimensions and facts - * in various ways. - *
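Cross-cache joins of this kind are plain SQL over the Java cache API; a hedged sketch, assuming the dimension and fact classes are registered as indexed types with `@QuerySqlField`-annotated columns (cache and column names are illustrative):

```java
// Sketch of a cross-cache join over the Java SQL API. Assumes DimStore/FactPurchase
// value classes are registered via CacheConfiguration.setIndexedTypes(...) with
// @QuerySqlField-annotated columns; cache names "dimensions"/"facts" are illustrative.
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class CrossCacheJoinSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "select f.purchasePrice " +
                "from \"dimensions\".DimStore s join \"facts\".FactPurchase f " +
                "  on s.id = f.storeId " +
                "where s.name = ?").setArgs("Store1");

            // A fields query can be issued from any cache instance.
            for (List<?> row : ignite.cache("facts").query(qry).getAll())
                System.out.println("Purchase price: " + row.get(0));
        }
    }
}
```

The join works without distributed-join support because the replicated dimension data is present on every node that holds a partition of the fact data.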
- * Remote nodes should be started using `ExampleNodeStartup` which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarSnowflakeSchemaExample { - /** Configuration file name. */ - private val CONFIG = "examples/config/example-ignite.xml" - - /** Name of partitioned cache specified in spring configuration. */ - private val PARTITIONED_CACHE_NAME = "ScalarSnowflakeSchemaExamplePartitioned" - - /** Name of replicated cache specified in spring configuration. */ - private val REPLICATED_CACHE_NAME = "ScalarSnowflakeSchemaExampleReplicated" - - /** ID generator. */ - private[this] val idGen = Stream.from(0).iterator - - /** DimStore data. */ - private[this] val dataStore = scala.collection.mutable.Map[JavaInt, DimStore]() - - /** DimProduct data. */ - private[this] val dataProduct = scala.collection.mutable.Map[JavaInt, DimProduct]() - - /** - * Example entry point. No arguments required. - */ - def main(args: Array[String]) { - scalar(CONFIG) { - println - println(">>> Cache star schema example started.") - - // Destroy caches to clean up the data if any left from previous runs. - destroyCache$(PARTITIONED_CACHE_NAME) - destroyCache$(REPLICATED_CACHE_NAME) - - val dimCache = createCache$[JavaInt, AnyRef](REPLICATED_CACHE_NAME, CacheMode.REPLICATED, Seq(classOf[JavaInt], classOf[DimStore], - classOf[JavaInt], classOf[DimProduct])) - - try { - val factCache = createCache$[JavaInt, FactPurchase](PARTITIONED_CACHE_NAME, indexedTypes = Seq(classOf[JavaInt], classOf[FactPurchase])) - - try { - populateDimensions(dimCache) - populateFacts(factCache) - - queryStorePurchases() - queryProductPurchases() - } - finally { - factCache.destroy() - } - } - finally { - dimCache.destroy() - } - } - } - - /** - * Populate cache with `dimensions` which in our case are - * `DimStore` and `DimProduct` instances. - */ - def populateDimensions(dimCache: IgniteCache[JavaInt, AnyRef]) { - val store1 = new DimStore(idGen.next(), "Store1", "12345", "321 Chilly Dr, NY") - val store2 = new DimStore(idGen.next(), "Store2", "54321", "123 Windy Dr, San Francisco") - - // Populate stores. - dimCache.put(store1.id, store1) - dimCache.put(store2.id, store2) - - dataStore.put(store1.id, store1) - dataStore.put(store2.id, store2) - - for (i <- 1 to 20) { - val product = new DimProduct(idGen.next(), "Product" + i, i + 1, (i + 1) * 10) - - dimCache.put(product.id, product) - - dataProduct.put(product.id, product) - } - } - - /** - * Populate cache with `facts`, which in our case are `FactPurchase` objects. - */ - def populateFacts(factCache: IgniteCache[JavaInt, FactPurchase]) { - for (i <- 1 to 100) { - val store: DimStore = rand(dataStore.values) - val prod: DimProduct = rand(dataProduct.values) - val purchase: FactPurchase = new FactPurchase(idGen.next(), prod.id, store.id, i + 1) - - factCache.put(purchase.id, purchase) - } - } - - /** - * Query all purchases made at a specific store. This query uses cross-cache joins - * between `DimStore` objects stored in `replicated` cache and - * `FactPurchase` objects stored in `partitioned` cache. 
- */ - def queryStorePurchases() { - val factCache = ignite$.cache[JavaInt, FactPurchase](PARTITIONED_CACHE_NAME) - - val storePurchases = factCache.sql( - "from \"" + REPLICATED_CACHE_NAME + "\".DimStore, \"" + PARTITIONED_CACHE_NAME + "\".FactPurchase " + - "where DimStore.id=FactPurchase.storeId and DimStore.name=?", "Store1") - - printQueryResults("All purchases made at store1:", storePurchases.getAll) - } - - /** - * Query all purchases made at a specific store for 3 specific products. - * This query uses cross-cache joins between `DimStore`, `DimProduct` - * objects stored in `replicated` cache and `FactPurchase` objects - * stored in `partitioned` cache. - */ - private def queryProductPurchases() { - val factCache = ignite$.cache[JavaInt, FactPurchase](PARTITIONED_CACHE_NAME) - - // All purchases for certain product made at store2. - // ================================================= - val p1: DimProduct = rand(dataProduct.values) - val p2: DimProduct = rand(dataProduct.values) - val p3: DimProduct = rand(dataProduct.values) - - println("IDs of products [p1=" + p1.id + ", p2=" + p2.id + ", p3=" + p3.id + ']') - - val prodPurchases = factCache.sql( - "from \"" + REPLICATED_CACHE_NAME + "\".DimStore, \"" + REPLICATED_CACHE_NAME + "\".DimProduct, \"" + - PARTITIONED_CACHE_NAME + "\".FactPurchase " + - "where DimStore.id=FactPurchase.storeId and " + - "DimProduct.id=FactPurchase.productId and " + - "DimStore.name=? and DimProduct.id in(?, ?, ?)", - "Store2", p1.id, p2.id, p3.id) - - printQueryResults("All purchases made at store2 for 3 specific products:", prodPurchases.getAll) - } - - /** - * Print query results. - * - * @param msg Initial message. - * @param res Results to print. - */ - private def printQueryResults[V](msg: String, res: Iterable[Cache.Entry[JavaInt, V]]) { - println(msg) - - for (e <- res) - println(" " + e.getValue.toString) - } - - /** - * Gets random value from given collection. - * - * @param c Input collection (no `null` and not emtpy). - * @return Random value from the input collection. - */ - def rand[T](c: Iterable[_ <: T]): T = { - val n: Int = ThreadLocalRandom.current.nextInt(c.size) - - var i: Int = 0 - - for (t <- c) { - if (i < n) - i += 1 - else - return t - } - - throw new ConcurrentModificationException - } -} - -/** - * Represents a physical store location. In our `snowflake` schema a `store` - * is a `dimension` and will be cached in `CacheMode#REPLICATED` cache. - * - * @param id Primary key. - * @param name Store name. - * @param zip Zip code. - * @param addr Address. - */ -class DimStore( - @ScalarCacheQuerySqlField - val id: Int, - @ScalarCacheQuerySqlField - val name: String, - val zip: String, - val addr: String) { - /** - * `toString` implementation. - */ - override def toString: String = { - val sb: StringBuilder = new StringBuilder - - sb.append("DimStore ") - sb.append("[id=").append(id) - sb.append(", name=").append(name) - sb.append(", zip=").append(zip) - sb.append(", addr=").append(addr) - sb.append(']') - - sb.toString() - } -} - -/** - * Represents a product available for purchase. In our `snowflake` schema a `product` - * is a `dimension` and will be cached in `CacheMode#REPLICATED` cache. - * - * @param id Product ID. - * @param name Product name. - * @param price Product list price. - * @param qty Available product quantity. - */ -class DimProduct( - @ScalarCacheQuerySqlField - val id: Int, - val name: String, - @ScalarCacheQuerySqlField - val price: Float, - val qty: Int) { - /** - * `toString` implementation. 
- */ - override def toString: String = { - val sb: StringBuilder = new StringBuilder - - sb.append("DimProduct ") - sb.append("[id=").append(id) - sb.append(", name=").append(name) - sb.append(", price=").append(price) - sb.append(", qty=").append(qty) - sb.append(']') - - sb.toString() - } -} - -/** - * Represents a purchase record. In our `snowflake` schema purchase - * is a `fact` and will be cached in larger `CacheMode#PARTITIONED` cache. - * - * @param id Purchase ID. - * @param productId Purchased product ID. - * @param storeId Store ID. - * @param purchasePrice Purchase price. - */ -class FactPurchase( - @ScalarCacheQuerySqlField - val id: Int, - @ScalarCacheQuerySqlField - val productId: Int, - @ScalarCacheQuerySqlField - val storeId: Int, - @ScalarCacheQuerySqlField - val purchasePrice: Float) { - /** - * `toString` implementation. - */ - override def toString: String = { - val sb: StringBuilder = new StringBuilder - - sb.append("FactPurchase ") - sb.append("[id=").append(id) - sb.append(", productId=").append(productId) - sb.append(", storeId=").append(storeId) - sb.append(", purchasePrice=").append(purchasePrice) - sb.append(']') - - sb.toString() - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarTaskExample.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarTaskExample.scala deleted file mode 100644 index 21073e5e7c98a..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarTaskExample.scala +++ /dev/null @@ -1,55 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import java.util - -import org.apache.ignite.compute.{ComputeJob, ComputeJobResult, ComputeTaskSplitAdapter} -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import scala.collection.JavaConversions._ - -/** - * Demonstrates use of full ignite task API using Scalar. Note that using task-based - * ignite enabling gives you all the advanced features of Ignite such as custom topology - * and collision resolution, custom failover, mapping, reduction, load balancing, etc. - * As a trade off in such cases the more code needs to be written vs. simple closure execution. - *
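The "full ignite task API" referred to here is the Java `ComputeTask` family; a sketch of a split/reduce task equivalent to the `IgniteHelloWorld` task defined below (class names are illustrative):

```java
// Sketch of the same split/reduce shape with the Java ComputeTaskSplitAdapter API.
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeTaskSplitAdapter;

public class HelloWorldTaskSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            ignite.compute().execute(HelloWorldTask.class, "Hello Cloud World!");
        }
    }

    /** Splits the phrase into one job per word; each job prints its word somewhere in the cluster. */
    private static class HelloWorldTask extends ComputeTaskSplitAdapter<String, Void> {
        @Override protected Collection<? extends ComputeJob> split(int gridSize, String phrase) {
            Collection<ComputeJob> jobs = new ArrayList<>();

            for (String word : phrase.split(" ")) {
                jobs.add(new ComputeJobAdapter() {
                    @Override public Object execute() {
                        System.out.println(word);

                        return null;
                    }
                });
            }

            return jobs;
        }

        @Override public Void reduce(List<ComputeJobResult> results) {
            return null; // Nothing to aggregate for a print-only task.
        }
    }
}
```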
- * Remote nodes should always be started with special configuration file which - * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`. - *
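The special configuration file in question simply enables peer class loading; the programmatic equivalent is a single flag on `IgniteConfiguration`, sketched below (the class name is illustrative):

```java
// Sketch: the programmatic equivalent of starting a node from a configuration file
// that has peer class loading turned on.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Lets remote nodes load closure/task classes from the node that submitted them.
        cfg.setPeerClassLoadingEnabled(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Started node " + ignite.cluster().localNode().id());
        }
    }
}
```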
- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarTaskExample extends App { - scalar("examples/config/example-ignite.xml") { - ignite$.compute().execute(classOf[IgniteHelloWorld], "Hello Cloud World!") - } - - /** - * This task encapsulates the logic of MapReduce. - */ - class IgniteHelloWorld extends ComputeTaskSplitAdapter[String, Void] { - def split(clusterSize: Int, arg: String): java.util.Collection[_ <: ComputeJob] = { - (for (w <- arg.split(" ")) yield toJob(() => println(w))).toSeq - } - - def reduce(results: util.List[ComputeJobResult]) = null - } -} diff --git a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarWorldShortestMapReduce.scala b/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarWorldShortestMapReduce.scala deleted file mode 100644 index 723cdae67d8bd..0000000000000 --- a/examples/src/main/scala/org/apache/ignite/scalar/examples/ScalarWorldShortestMapReduce.scala +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.examples - -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -/** - * Shows the world's shortest MapReduce application that calculates non-space - * length of the input string. This example works equally on one computer or - * on thousands requiring no special configuration or deployment. - *
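The same non-space character count can be written against the Java closure API with an explicit reducer; a sketch (the class name is illustrative):

```java
// Sketch: one callable per word, an IgniteReducer summing the word lengths.
import java.util.ArrayList;
import java.util.Collection;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;
import org.apache.ignite.lang.IgniteReducer;

public class ShortestMapReduceSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("examples/config/example-ignite.xml")) {
            String input = "World shortest mapreduce application";

            Collection<IgniteCallable<Integer>> jobs = new ArrayList<>();

            // Map: one job per word, each returning the word's length.
            for (String word : input.split(" "))
                jobs.add(word::length);

            // Reduce: sum the per-word lengths as results arrive.
            Integer total = ignite.compute().call(jobs, new IgniteReducer<Integer, Integer>() {
                private int sum;

                @Override public boolean collect(Integer len) {
                    sum += len;

                    return true; // Keep collecting.
                }

                @Override public Integer reduce() {
                    return sum;
                }
            });

            System.out.println("Non-space characters count: " + total);
        }
    }
}
```

The reducer overload of `IgniteCompute.call` feeds results into the reducer as they arrive, so the caller does not have to collect all intermediate results first.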
- * Remote nodes should always be started with special configuration file which - * enables P2P class loading: `'ignite.{sh|bat} examples/config/example-ignite.xml'`. - *
- * Alternatively you can run `ExampleNodeStartup` in another JVM which will - * start node with `examples/config/example-ignite.xml` configuration. - */ -object ScalarWorldShortestMapReduce extends App { - scalar("examples/config/example-ignite.xml") { - val input = "World shortest mapreduce application" - - println("Non-space characters count: " + - ignite$.reduce$[Int, Int](for (w <- input.split(" ")) yield () => w.length, _.sum, null) - ) - } -} diff --git a/examples/src/test/scala/org/apache/ignite/scalar/tests/examples/ScalarExamplesMultiNodeSelfTest.scala b/examples/src/test/scala/org/apache/ignite/scalar/tests/examples/ScalarExamplesMultiNodeSelfTest.scala deleted file mode 100644 index 57efe975d9280..0000000000000 --- a/examples/src/test/scala/org/apache/ignite/scalar/tests/examples/ScalarExamplesMultiNodeSelfTest.scala +++ /dev/null @@ -1,33 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.tests.examples - -/** - * Scalar examples multi-node self test. - */ -class ScalarExamplesMultiNodeSelfTest extends ScalarExamplesSelfTest { - /** */ - protected override def beforeTest() { - startRemoteNodes() - } - - /** */ - protected override def getTestTimeout: Long = { - 10 * 60 * 1000 - } -} diff --git a/examples/src/test/scala/org/apache/ignite/scalar/tests/examples/ScalarExamplesSelfTest.scala b/examples/src/test/scala/org/apache/ignite/scalar/tests/examples/ScalarExamplesSelfTest.scala deleted file mode 100644 index a76da9f42a4a9..0000000000000 --- a/examples/src/test/scala/org/apache/ignite/scalar/tests/examples/ScalarExamplesSelfTest.scala +++ /dev/null @@ -1,119 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.tests.examples - -import org.apache.ignite.scalar.examples._ -import org.apache.ignite.scalar.examples.spark._ -import org.apache.ignite.scalar.scalar -import org.apache.ignite.testframework.junits.common.GridAbstractExamplesTest -import org.junit.Test -import org.scalatest.Suite - -/** - * Scalar examples self test. - */ -class ScalarExamplesSelfTest extends GridAbstractExamplesTest with Suite { - /** */ - private def EMPTY_ARGS = Array.empty[String] - - /** */ - @Test - def testScalarCacheAffinitySimpleExample() { - ScalarCacheAffinityExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarCacheEntryProcessorExample() { - ScalarCacheEntryProcessorExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarCacheExample() { - ScalarCacheExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarCacheQueryExample() { - ScalarCacheQueryExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarClosureExample() { - ScalarClosureExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarContinuationExample() { - ScalarContinuationExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarCreditRiskExample() { - ScalarCreditRiskExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarPingPongExample() { - scalar("modules/scalar/src/test/resources/spring-ping-pong-partner.xml") { - ScalarPingPongExample.main(EMPTY_ARGS) - } - } - - /** */ - @Test - def testScalarPopularNumbersRealTimeExample() { - ScalarCachePopularNumbersExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarPrimeExample() { - ScalarPrimeExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarTaskExample() { - ScalarTaskExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarWorldShortestMapReduceExample() { - ScalarWorldShortestMapReduce.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarSnowflakeSchemaExample() { - ScalarSnowflakeSchemaExample.main(EMPTY_ARGS) - } - - /** */ - @Test - def testScalarSharedRDDExample() { - ScalarSharedRDDExample.main(EMPTY_ARGS) - } -} diff --git a/examples/src/test/scala/org/apache/ignite/scalar/testsuites/ScalarExamplesSelfTestSuite.scala b/examples/src/test/scala/org/apache/ignite/scalar/testsuites/ScalarExamplesSelfTestSuite.scala deleted file mode 100644 index e28b2ba1e8153..0000000000000 --- a/examples/src/test/scala/org/apache/ignite/scalar/testsuites/ScalarExamplesSelfTestSuite.scala +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.testsuites - -import org.apache.ignite.IgniteSystemProperties._ -import org.apache.ignite.scalar.tests.examples.{ScalarExamplesMultiNodeSelfTest, ScalarExamplesSelfTest} -import org.apache.ignite.testframework.GridTestUtils -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -/** - * - */ -@RunWith(classOf[JUnitRunner]) -class ScalarExamplesSelfTestSuite extends Suites( - new ScalarExamplesSelfTest, - new ScalarExamplesMultiNodeSelfTest -) { - System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, - GridTestUtils.getNextMulticastGroup(classOf[ScalarExamplesSelfTest])) -} diff --git a/modules/aop/src/test/config/aop/aspectj/META-INF/aop.xml b/modules/aop/src/test/config/aop/aspectj/META-INF/aop.xml index 8741bd18cd57a..f80c4a5164be7 100644 --- a/modules/aop/src/test/config/aop/aspectj/META-INF/aop.xml +++ b/modules/aop/src/test/config/aop/aspectj/META-INF/aop.xml @@ -93,7 +93,6 @@ - @@ -239,7 +238,6 @@ - diff --git a/modules/bom/pom.xml b/modules/bom/pom.xml index 83d6855e2c8ce..e591f017fea56 100644 --- a/modules/bom/pom.xml +++ b/modules/bom/pom.xml @@ -136,11 +136,6 @@ ignite-ssh ${revision} - - ${project.groupId} - ignite-scalar - ${revision} - ${project.groupId} ignite-spring diff --git a/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java index 6588b911c79e8..1dfb16975f670 100755 --- a/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java +++ b/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java @@ -6778,7 +6778,6 @@ public static boolean isGrid(Class cls) { * Replaces all occurrences of {@code org.apache.ignite.} with {@code o.a.i.}, * {@code org.apache.ignite.internal.} with {@code o.a.i.i.}, * {@code org.apache.ignite.internal.visor.} with {@code o.a.i.i.v.} and - * {@code org.apache.ignite.scalar.} with {@code o.a.i.s.}. * * @param s String to replace in. * @return Replaces string. @@ -6786,7 +6785,6 @@ public static boolean isGrid(Class cls) { public static String compact(String s) { return s.replace("org.apache.ignite.internal.visor.", "o.a.i.i.v."). replace("org.apache.ignite.internal.", "o.a.i.i."). - replace("org.apache.ignite.scalar.", "o.a.i.s."). replace("org.apache.ignite.", "o.a.i."); } diff --git a/modules/core/src/test/config/examples.properties b/modules/core/src/test/config/examples.properties index 2144533b03700..cca6862d10db0 100644 --- a/modules/core/src/test/config/examples.properties +++ b/modules/core/src/test/config/examples.properties @@ -15,11 +15,4 @@ # limitations under the License. 
# -ScalarCacheAffinityExample1=examples/config/example-ignite.xml -ScalarCacheAffinityExample2=examples/config/example-ignite.xml -ScalarCacheAffinitySimpleExample=examples/config/example-ignite.xml -ScalarCacheExample=examples/config/example-ignite.xml -ScalarCacheQueryExample=examples/config/example-ignite.xml -ScalarCountGraphTrianglesExample=examples/config/example-ignite.xml -ScalarPopularNumbersRealTimeExample=examples/config/example-ignite.xml DataRegionExample=examples/config/example-data-regions.xml \ No newline at end of file diff --git a/modules/osgi-karaf/src/main/resources/features.xml b/modules/osgi-karaf/src/main/resources/features.xml index 150ffdd0787f1..4251bc69b7216 100644 --- a/modules/osgi-karaf/src/main/resources/features.xml +++ b/modules/osgi-karaf/src/main/resources/features.xml @@ -42,7 +42,6 @@ ignite-rest-http - ignite-scalar-2.11 ignite-schedule ignite-slf4j ignite-spring @@ -202,22 +201,6 @@ mvn:org.apache.ignite/ignite-rest-http/${project.version} - -

- -
- mvn:org.scala-lang/scala-library/${scala210.library.version} - mvn:org.apache.ignite/ignite-scalar_2.10/${project.version} - - - -
- -
- mvn:org.scala-lang/scala-library/${scala.library.version} - mvn:org.apache.ignite/ignite-scalar/${project.version} -
-
diff --git a/modules/osgi/src/test/java/org/apache/ignite/osgi/AbstractIgniteKarafTest.java b/modules/osgi/src/test/java/org/apache/ignite/osgi/AbstractIgniteKarafTest.java index 34d0b9b3faa7b..5f019e76b3306 100644 --- a/modules/osgi/src/test/java/org/apache/ignite/osgi/AbstractIgniteKarafTest.java +++ b/modules/osgi/src/test/java/org/apache/ignite/osgi/AbstractIgniteKarafTest.java @@ -19,6 +19,7 @@ import java.io.File; import java.util.Arrays; +import java.util.Collections; import java.util.HashSet; import java.util.List; import java.util.Set; @@ -49,8 +50,7 @@ @ExamReactorStrategy(PerMethod.class) public abstract class AbstractIgniteKarafTest { /** Features we do not expect to be installed. */ - protected static final Set IGNORED_FEATURES = new HashSet<>( - Arrays.asList("ignite-log4j", "ignite-scalar-2.10")); + protected static final Set IGNORED_FEATURES = new HashSet<>(Collections.singletonList("ignite-log4j")); /** Regex matching ignite features. */ protected static final String IGNITE_FEATURES_NAME_REGEX = "ignite.*"; diff --git a/modules/scalar-2.10/README.txt b/modules/scalar-2.10/README.txt deleted file mode 100644 index 535a19313a4cc..0000000000000 --- a/modules/scalar-2.10/README.txt +++ /dev/null @@ -1,4 +0,0 @@ -Apache Ignite Scalar Module ---------------------------- - -Apache Ignite Scalar module to be build with Scala 2.10. diff --git a/modules/scalar-2.10/licenses/apache-2.0.txt b/modules/scalar-2.10/licenses/apache-2.0.txt deleted file mode 100644 index d645695673349..0000000000000 --- a/modules/scalar-2.10/licenses/apache-2.0.txt +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. 
For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. 
You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. 
In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/modules/scalar-2.10/pom.xml b/modules/scalar-2.10/pom.xml deleted file mode 100644 index 51e89b08e0bc8..0000000000000 --- a/modules/scalar-2.10/pom.xml +++ /dev/null @@ -1,214 +0,0 @@ - - - - - - - 4.0.0 - - - org.apache.ignite - ignite-parent-internal - ${revision} - ../../parent-internal/pom.xml - - - ignite-scalar_2.10 - ${revision} - http://ignite.apache.org - - - - ${project.groupId} - ignite-core - - - - org.scala-lang - scala-library - ${scala210.library.version} - - - - ${project.groupId} - ignite-core - test-jar - test - - - - ${project.groupId} - ignite-tools - test - - - - ${project.groupId} - ignite-spring - test - - - - ${project.groupId} - ignite-indexing - test - - - - org.scalatest - scalatest_2.10 - 2.2.2 - test - - - org.scala-lang - scala-library - - - - - - - ../scalar/src/main/scala - - - - ../scalar/src/main/scala - - **/*.scala - - - - - - - ../scalar/src/test/scala - - **/*.scala - - - - - - - net.alchim31.maven - scala-maven-plugin - - - -nobootcp - - - - - - - org.apache.felix - maven-bundle-plugin - - - - org.apache.maven.plugins - maven-deploy-plugin - 2.8.2 - - false - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/modules/scalar/README.txt b/modules/scalar/README.txt deleted file mode 100644 index 38c5879cba3e7..0000000000000 --- a/modules/scalar/README.txt +++ /dev/null @@ -1,32 +0,0 @@ -Apache Ignite Scalar Module ---------------------------- - -Apache Ignite Scalar module provides Scala-based DSL with extensions and shortcuts for Apache Ignite API. - -To enable Scalar module when starting a standalone node, move 'optional/ignite-scalar' folder to -'libs' folder before running 'ignite.{sh|bat}' script. The content of the module folder will -be added to classpath in this case. - -Importing Scalar Module In Maven Project ----------------------------------------- - -If you are using Maven to manage dependencies of your project, you can add Scalar module -dependency like this (replace '${ignite.version}' with actual Ignite version you are -interested in): - - - ... - - ... - - org.apache.ignite - ignite-scalar - ${ignite.version} - - ... - - ... - diff --git a/modules/scalar/licenses/apache-2.0.txt b/modules/scalar/licenses/apache-2.0.txt deleted file mode 100644 index d645695673349..0000000000000 --- a/modules/scalar/licenses/apache-2.0.txt +++ /dev/null @@ -1,202 +0,0 @@ - - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. 
- - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. 
If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. 
Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. - You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. 
diff --git a/modules/scalar/pom.xml b/modules/scalar/pom.xml deleted file mode 100644 index 8c4a5a7473bca..0000000000000 --- a/modules/scalar/pom.xml +++ /dev/null @@ -1,209 +0,0 @@ - - - - - - - 4.0.0 - - - org.apache.ignite - ignite-parent-internal - ${revision} - ../../parent-internal/pom.xml - - - ignite-scalar - - http://ignite.apache.org - - - - ${project.groupId} - ignite-core - - - - org.scala-lang - scala-library - ${scala.library.version} - - - - ${project.groupId} - ignite-core - test-jar - test - - - - ${project.groupId} - ignite-tools - test - - - - ${project.groupId} - ignite-spring - test - - - - ${project.groupId} - ignite-indexing - test - - - - org.scalatest - scalatest_2.11 - ${scala.test.version} - test - - - org.scala-lang - scala-library - - - - - - - - - - org.apache.felix - maven-bundle-plugin - - - - org.apache.maven.plugins - maven-deploy-plugin - 2.8.2 - - false - - - - - net.alchim31.maven - scala-maven-plugin - - - scaladoc - prepare-package - - doc - doc-jar - - - Ignite Scalar - Ignite Scalar - ${maven.javadoc.skip} - - - - - - - - - - javadoc - - - - org.apache.maven.plugins - maven-antrun-plugin - - - scaladoc-postprocessing - - run - - initialize - - - - - - - - - - - - - - - - - - - - Ignite™ - Scalar DSL, ver. ${project.version} -
- 2022 Copyright © Apache Software Foundation - - - - ]]> -
- - - - - - - src="package.html" - src=org/apache/ignite/scalar/scalar$.html - - - - - location.replace("package.html") - location.replace("org/apache/ignite/scalar/scalar$.html") - - - - - docs.scala-lang.org/overviews/scaladoc/usage.html#members - docs.scala-lang.org/overviews/scaladoc/interface.html - - - - - - - - - - -
-
-
-
-
-
-
-
-
-
diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/ScalarConversions.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/ScalarConversions.scala deleted file mode 100644 index 32e67587d6edb..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/ScalarConversions.scala +++ /dev/null @@ -1,1217 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar - -import org.apache.ignite.{IgniteCache, Ignite} -import org.apache.ignite.cluster.ClusterGroup -import org.apache.ignite.compute.ComputeJob -import org.apache.ignite.internal.util.lang._ -import org.apache.ignite.lang._ -import org.apache.ignite.scalar.lang._ -import org.apache.ignite.scalar.pimps._ -import org.jetbrains.annotations._ - -import java.util.TimerTask - -import scala.collection._ -import scala.util.control.Breaks._ - -/** - * ==Overview== - * Mixin for `scalar` object providing `implicit` and `explicit` conversions between - * Java and Scala Ignite components. - * - * It is very important to review this class as it defines what `implicit` conversions - * will take place when using Scalar. Note that object `scalar` mixes in this - * trait and therefore brings with it all implicits into the scope. - */ -trait ScalarConversions { - /** - * Helper transformer from Java collection to Scala sequence. - * - * @param c Java collection to transform. - * @param f Transforming function. - */ - def toScalaSeq[A, B](@Nullable c: java.util.Collection[A], f: A => B): Seq[B] = { - assert(f != null) - - if (c == null) - return null - - val iter = c.iterator - - val lst = new mutable.ListBuffer[B] - - while (iter.hasNext) lst += f(iter.next) - - lst.toSeq - } - - /** - * Helper transformer from Java iterator to Scala sequence. - * - * @param i Java iterator to transform. - * @param f Transforming function. - */ - def toScalaSeq[A, B](@Nullable i: java.util.Iterator[A], f: A => B): Seq[B] = { - assert(f != null) - - if (i == null) - return null - - val lst = new mutable.ListBuffer[B] - - while (i.hasNext) lst += f(i.next) - - lst.toSeq - } - - /** - * Helper converter from Java iterator to Scala sequence. - * - * @param i Java iterator to convert. - */ - def toScalaSeq[A](@Nullable i: java.util.Iterator[A]): Seq[A] = - toScalaSeq(i, (e: A) => e) - - /** - * Helper transformer from Java iterable to Scala sequence. - * - * @param i Java iterable to transform. - * @param f Transforming function. - */ - def toScalaSeq[A, B](@Nullable i: java.lang.Iterable[A], f: A => B): Seq[B] = { - assert(f != null) - - if (i == null) return null - - toScalaSeq(i.iterator, f) - } - - /** - * Helper converter from Java iterable to Scala sequence. - * - * @param i Java iterable to convert. 
- */ - def toScalaSeq[A](@Nullable i: java.lang.Iterable[A]): Seq[A] = - toScalaSeq(i, (e: A) => e) - -// /** -// * Helper converter from Java collection to Scala sequence. -// * -// * @param c Java collection to convert. -// */ -// def toScalaSeq[A](@Nullable c: java.util.Collection[A]): Seq[A] = -// toScalaSeq(c, (e: A) => e) - - /** - * Helper converter from Java entry collection to Scala iterable of pair. - * - * @param c Java collection to convert. - */ - def toScalaItr[K, V](@Nullable c: java.util.Collection[java.util.Map.Entry[K, V]]): Iterable[(K, V)] = { - val lst = new mutable.ListBuffer[(K, V)] - - c.toArray().foreach { - case f: java.util.Map.Entry[K, V] => lst += Tuple2(f.getKey(), f.getValue()) - } - - lst - } - - /** - * Helper transformer from Scala sequence to Java collection. - * - * @param s Scala sequence to transform. - * @param f Transforming function. - */ - def toJavaCollection[A, B](@Nullable s: Seq[A], f: A => B): java.util.Collection[B] = { - assert(f != null) - - if (s == null) return null - - val lst = new java.util.ArrayList[B](s.length) - - s.foreach(a => lst.add(f(a))) - - lst - } - - /** - * Helper converter from Scala sequence to Java collection. - * - * @param s Scala sequence to convert. - */ - def toJavaCollection[A](@Nullable s: Seq[A]): java.util.Collection[A] = - toJavaCollection(s, (e: A) => e) - - /** - * Helper transformer from Scala iterator to Java collection. - * - * @param i Scala iterator to transform. - * @param f Transforming function. - */ - def toJavaCollection[A, B](@Nullable i: Iterator[A], f: A => B): java.util.Collection[B] = { - assert(f != null) - - if (i == null) return null - - val lst = new java.util.ArrayList[B] - - i.foreach(a => lst.add(f(a))) - - lst - } - - /** - * Converts from `Symbol` to `String`. - * - * @param s Symbol to convert. - */ - implicit def fromSymbol(s: Symbol): String = - if (s == null) - null - else - s.toString().substring(1) - - /** - * Introduction of `^^` operator for `Any` type that will call `break`. - * - * @param v `Any` value. - */ - implicit def toReturnable(v: Any) = new { - // Ignore the warning below. - def ^^ { - break() - } - } - - - /** - * Explicit converter for `TimerTask`. Note that since `TimerTask` implements `Runnable` - * we can't use the implicit conversion. - * - * @param f Closure to convert. - * @return Time task instance. - */ - def timerTask(f: => Unit): TimerTask = new TimerTask { - def run() { - f - } - } - - /** - * Extension for `Tuple2`. - * - * @param t Tuple to improve. - */ - implicit def toTuple2x[T1, T2](t: (T1, T2)) = new { - def isSome: Boolean = - t._1 != null || t._2 != null - - def isNone: Boolean = - !isSome - - def isAll: Boolean = - t._1 != null && t._2 != null - - def opt1: Option[T1] = - Option(t._1) - - def opt2: Option[T2] = - Option(t._2) - } - - /** - * Extension for `Tuple3`. - * - * @param t Tuple to improve. - */ - implicit def toTuple3x[T1, T2, T3](t: (T1, T2, T3)) = new { - def isSome: Boolean = - t._1 != null || t._2 != null || t._3 != null - - def isNone: Boolean = - !isSome - - def isAll: Boolean = - t._1 != null && t._2 != null && t._3 != null - - def opt1: Option[T1] = - Option(t._1) - - def opt2: Option[T2] = - Option(t._2) - - def opt3: Option[T3] = - Option(t._3) - } - -// /** -// * Implicit converter from cache KV-pair predicate to cache entry predicate. Note that predicate -// * will use peek() -// * -// * @param p Cache KV-pair predicate to convert. 
-// */ -// implicit def toEntryPred[K, V](p: (K, V) => Boolean): (_ >: Cache.Entry[K, V]) => Boolean = -// (e: Cache.Entry[K, V]) => p(e.getKey, e.getValue) - - /** - * Implicit converter from vararg of one-argument Scala functions to Java `GridPredicate`s. - * - * @param s Sequence of one-argument Scala functions to convert. - */ - implicit def toVarArgs[T](s: Seq[T => Boolean]): Seq[IgnitePredicate[_ >: T]] = - s.map((f: T => Boolean) => toPredicate(f)) - - /** - * Implicit converter from vararg of two-argument Scala functions to Java `GridPredicate2`s. - * - * @param s Sequence of two-argument Scala functions to convert. - */ - implicit def toVarArgs2[T1, T2](s: Seq[(T1, T2) => Boolean]): Seq[IgniteBiPredicate[_ >: T1, _ >: T2]] = - s.map((f: (T1, T2) => Boolean) => toPredicate2(f)) - - /** - * Implicit converter from vararg of three-argument Scala functions to Java `GridPredicate3`s. - * - * @param s Sequence of three-argument Scala functions to convert. - */ - implicit def toVarArgs3[T1, T2, T3](s: Seq[(T1, T2, T3) => Boolean]): - Seq[GridPredicate3[_ >: T1, _ >: T2, _ >: T3]] = - s.map((f: (T1, T2, T3) => Boolean) => toPredicate3(f)) - - /** - * Implicit converter from Scala function and Java `GridReducer`. - * - * @param r Scala function to convert. - */ - implicit def toReducer[E, R](r: Seq[E] => R): IgniteReducer[E, R] = - new ScalarReducer(r) - - /** - * Implicit converter from Java `GridReducer` to Scala function. - * - * @param r Java `GridReducer` to convert. - */ - implicit def fromReducer[E, R](r: IgniteReducer[E, R]): Seq[E] => R = - new ScalarReducerFunction[E, R](r) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param r Java-side reducer to pimp. - */ - implicit def reducerDotScala[E, R](r: IgniteReducer[E, R]) = new { - def scala: Seq[E] => R = - fromReducer(r) - } - - /** - * Implicit converter from Scala function and Java `GridReducer2`. - * - * @param r Scala function to convert. - */ - implicit def toReducer2[E1, E2, R](r: (Seq[E1], Seq[E2]) => R): IgniteReducer2[E1, E2, R] = - new ScalarReducer2(r) - - /** - * Implicit converter from Java `GridReducer2` to Scala function. - * - * @param r Java `GridReducer2` to convert. - */ - implicit def fromReducer2[E1, E2, R](r: IgniteReducer2[E1, E2, R]): (Seq[E1], Seq[E2]) => R = - new ScalarReducer2Function[E1, E2, R](r) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param r Java-side reducer to pimp. - */ - implicit def reducer2DotScala[E1, E2, R](r: IgniteReducer2[E1, E2, R]) = new { - def scala: (Seq[E1], Seq[E2]) => R = - fromReducer2(r) - } - - /** - * Implicit converter from Scala function and Java `GridReducer3`. - * - * @param r Scala function to convert. - */ - implicit def toReducer3[E1, E2, E3, R](r: (Seq[E1], Seq[E2], Seq[E3]) => R): IgniteReducer3[E1, E2, E3, R] = - new ScalarReducer3(r) - - /** - * Implicit converter from Java `GridReducer3` to Scala function. - * - * @param r Java `GridReducer3` to convert. - */ - implicit def fromReducer3[E1, E2, E3, R](r: IgniteReducer3[E1, E2, E3, R]): (Seq[E1], Seq[E2], Seq[E3]) => R = - new ScalarReducer3Function[E1, E2, E3, R](r) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param r Java-side reducer to pimp. - */ - implicit def reducer3DotScala[E1, E2, E3, R](r: IgniteReducer3[E1, E2, E3, R]) = new { - def scala: (Seq[E1], Seq[E2], Seq[E3]) => R = - fromReducer3(r) - } - - /** - * Implicit converter from `Grid` to `ScalarGridPimp` "pimp". - * - * @param impl Grid to convert. 
- */ - implicit def toScalarGrid(impl: Ignite): ScalarGridPimp = - ScalarGridPimp(impl) - - /** - * Implicit converter from `GridProjection` to `ScalarProjectionPimp` "pimp". - * - * @param impl Grid projection to convert. - */ - implicit def toScalarProjection(impl: ClusterGroup): ScalarProjectionPimp[ClusterGroup] = - ScalarProjectionPimp(impl) - - /** - * Implicit converter from `Cache` to `ScalarCachePimp` "pimp". - * - * @param impl Grid cache to convert. - */ - implicit def toScalarCache[K, V](impl: IgniteCache[K, V]): ScalarCachePimp[K, V] = - ScalarCachePimp[K, V](impl) - - /** - * Implicit converter from Scala function to `ComputeJob`. - * - * @param f Scala function to convert. - */ - implicit def toJob(f: () => Any): ComputeJob = - new ScalarJob(f) - - /** - * Implicit converter from Scala tuple to `GridTuple2`. - * - * @param t Scala tuple to convert. - */ - implicit def toTuple2[A, B](t: (A, B)): IgniteBiTuple[A, B] = - new IgniteBiTuple[A, B](t._1, t._2) - - /** - * Implicit converter from `GridTuple2` to Scala tuple. - * - * @param t `GridTuple2` to convert. - */ - implicit def fromTuple2[A, B](t: IgniteBiTuple[A, B]): (A, B) = - (t.get1, t.get2) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param t Java-side tuple to pimp. - */ - implicit def tuple2DotScala[A, B](t: IgniteBiTuple[A, B]) = new { - def scala: (A, B) = - fromTuple2(t) - } - - /** - * Implicit converter from Scala tuple to `GridTuple3`. - * - * @param t Scala tuple to convert. - */ - implicit def toTuple3[A, B, C](t: (A, B, C)): GridTuple3[A, B, C] = - new GridTuple3[A, B, C](t._1, t._2, t._3) - - /** - * Implicit converter from `GridTuple3` to Scala tuple. - * - * @param t `GridTuple3` to convert. - */ - implicit def fromTuple3[A, B, C](t: GridTuple3[A, B, C]): (A, B, C) = - (t.get1, t.get2, t.get3) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param t Java-side tuple to pimp. - */ - implicit def tuple3DotScala[A, B, C](t: GridTuple3[A, B, C]) = new { - def scala: (A, B, C) = - fromTuple3(t) - } - - /** - * Implicit converter from Scala tuple to `GridTuple4`. - * - * @param t Scala tuple to convert. - */ - implicit def toTuple4[A, B, C, D](t: (A, B, C, D)): GridTuple4[A, B, C, D] = - new GridTuple4[A, B, C, D](t._1, t._2, t._3, t._4) - - /** - * Implicit converter from `GridTuple4` to Scala tuple. - * - * @param t `GridTuple4` to convert. - */ - implicit def fromTuple4[A, B, C, D](t: GridTuple4[A, B, C, D]): (A, B, C, D) = - (t.get1, t.get2, t.get3, t.get4) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param t Java-side tuple to pimp. - */ - implicit def tuple4DotScala[A, B, C, D](t: GridTuple4[A, B, C, D]) = new { - def scala: (A, B, C, D) = - fromTuple4(t) - } - - /** - * Implicit converter from Scala tuple to `GridTuple5`. - * - * @param t Scala tuple to convert. - */ - implicit def toTuple5[A, B, C, D, E](t: (A, B, C, D, E)): GridTuple5[A, B, C, D, E] = - new GridTuple5[A, B, C, D, E](t._1, t._2, t._3, t._4, t._5) - - /** - * Implicit converter from `GridTuple5` to Scala tuple. - * - * @param t `GridTuple5` to convert. - */ - implicit def fromTuple5[A, B, C, D, E](t: GridTuple5[A, B, C, D, E]): (A, B, C, D, E) = - (t.get1, t.get2, t.get3, t.get4, t.get5) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param t Java-side tuple to pimp. 
- */ - implicit def tuple5DotScala[A, B, C, D, E](t: GridTuple5[A, B, C, D, E]) = new { - def scala: (A, B, C, D, E) = - fromTuple5(t) - } - - /** - * Implicit converter from Scala function to `GridInClosure`. - * - * @param f Scala function to convert. - */ - implicit def toInClosure[T](f: T => Unit): IgniteInClosure[T] = - f match { - case (p: ScalarInClosureFunction[T]) => p.inner - case _ => new ScalarInClosure[T](f) - } - - /** - * Implicit converter from Scala function to `GridInClosureX`. - * - * @param f Scala function to convert. - */ - def toInClosureX[T](f: T => Unit): IgniteInClosureX[T] = - f match { - case (p: ScalarInClosureXFunction[T]) => p.inner - case _ => new ScalarInClosureX[T](f) - } - - /** - * Implicit converter from `GridInClosure` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromInClosure[T](f: IgniteInClosure[T]): T => Unit = - new ScalarInClosureFunction[T](f) - - /** - * Implicit converter from `GridInClosureX` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromInClosureX[T](f: IgniteInClosureX[T]): T => Unit = - new ScalarInClosureXFunction[T](f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def inClosureDotScala[T](f: IgniteInClosure[T]) = new { - def scala: T => Unit = - fromInClosure(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def inClosureXDotScala[T](f: IgniteInClosureX[T]) = new { - def scala: T => Unit = - fromInClosureX(f) - } - - /** - * Implicit converter from Scala function to `GridInClosure2`. - * - * @param f Scala function to convert. - */ - implicit def toInClosure2[T1, T2](f: (T1, T2) => Unit): IgniteBiInClosure[T1, T2] = - f match { - case (p: ScalarInClosure2Function[T1, T2]) => p.inner - case _ => new ScalarInClosure2[T1, T2](f) - } - - /** - * Implicit converter from Scala function to `GridInClosure2X`. - * - * @param f Scala function to convert. - */ - implicit def toInClosure2X[T1, T2](f: (T1, T2) => Unit): IgniteInClosure2X[T1, T2] = - f match { - case (p: ScalarInClosure2XFunction[T1, T2]) => p.inner - case _ => new ScalarInClosure2X[T1, T2](f) - } - - /** - * Implicit converter from `GridInClosure2` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromInClosure2[T1, T2](f: IgniteBiInClosure[T1, T2]): (T1, T2) => Unit = - new ScalarInClosure2Function(f) - - /** - * Implicit converter from `GridInClosure2X` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromInClosure2X[T1, T2](f: IgniteInClosure2X[T1, T2]): (T1, T2) => Unit = - new ScalarInClosure2XFunction(f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def inClosure2DotScala[T1, T2](f: IgniteBiInClosure[T1, T2]) = new { - def scala: (T1, T2) => Unit = - fromInClosure2(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def inClosure2XDotScala[T1, T2](f: IgniteInClosure2X[T1, T2]) = new { - def scala: (T1, T2) => Unit = - fromInClosure2X(f) - } - - /** - * Implicit converter from Scala function to `GridInClosure3`. - * - * @param f Scala function to convert. 
- */ - implicit def toInClosure3[T1, T2, T3](f: (T1, T2, T3) => Unit): GridInClosure3[T1, T2, T3] = - f match { - case (p: ScalarInClosure3Function[T1, T2, T3]) => p.inner - case _ => new ScalarInClosure3[T1, T2, T3](f) - } - - /** - * Implicit converter from Scala function to `GridInClosure3X`. - * - * @param f Scala function to convert. - */ - def toInClosure3X[T1, T2, T3](f: (T1, T2, T3) => Unit): GridInClosure3X[T1, T2, T3] = - f match { - case (p: ScalarInClosure3XFunction[T1, T2, T3]) => p.inner - case _ => new ScalarInClosure3X[T1, T2, T3](f) - } - - /** - * Implicit converter from `GridInClosure3` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromInClosure3[T1, T2, T3](f: GridInClosure3[T1, T2, T3]): (T1, T2, T3) => Unit = - new ScalarInClosure3Function(f) - - /** - * Implicit converter from `GridInClosure3X` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromInClosure3X[T1, T2, T3](f: GridInClosure3X[T1, T2, T3]): (T1, T2, T3) => Unit = - new ScalarInClosure3XFunction(f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def inClosure3DotScala[T1, T2, T3](f: GridInClosure3[T1, T2, T3]) = new { - def scala: (T1, T2, T3) => Unit = - fromInClosure3(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def inClosure3XDotScala[T1, T2, T3](f: GridInClosure3X[T1, T2, T3]) = new { - def scala: (T1, T2, T3) => Unit = - fromInClosure3X(f) - } - - /** - * Implicit converter from Scala function to `GridOutClosure`. - * - * @param f Scala function to convert. - */ - implicit def toCallable[R](f: () => R): IgniteCallable[R] = - f match { - case p: ScalarOutClosureFunction[R] => p.inner - case _ => new ScalarOutClosure[R](f) - } - - /** - * Implicit converter from Scala function to `GridOutClosureX`. - * - * @param f Scala function to convert. - */ - def toOutClosureX[R](f: () => R): IgniteOutClosureX[R] = - f match { - case (p: ScalarOutClosureXFunction[R]) => p.inner - case _ => new ScalarOutClosureX[R](f) - } - - /** - * Implicit converter from `GridOutClosure` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromOutClosure[R](f: IgniteCallable[R]): () => R = - new ScalarOutClosureFunction[R](f) - - /** - * Implicit converter from `GridOutClosureX` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromOutClosureX[R](f: IgniteOutClosureX[R]): () => R = - new ScalarOutClosureXFunction[R](f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def outClosureDotScala[R](f: IgniteCallable[R]) = new { - def scala: () => R = - fromOutClosure(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def outClosureXDotScala[R](f: IgniteOutClosureX[R]) = new { - def scala: () => R = - fromOutClosureX(f) - } - - /** - * Implicit converter from Scala function to `GridAbsClosure`. - * - * @param f Scala function to convert. - */ - implicit def toRunnable(f: () => Unit): IgniteRunnable = - f match { - case (f: ScalarAbsClosureFunction) => f.inner - case _ => new ScalarAbsClosure(f) - } - - /** - * Implicit converter from Scala function to `GridAbsClosureX`. - * - * @param f Scala function to convert. 
- */ - def toAbsClosureX(f: () => Unit): GridAbsClosureX = - f match { - case (f: ScalarAbsClosureXFunction) => f.inner - case _ => new ScalarAbsClosureX(f) - } - - /** - * Implicit converter from `GridAbsClosure` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromAbsClosure(f: GridAbsClosure): () => Unit = - new ScalarAbsClosureFunction(f) - - /** - * Implicit converter from `GridAbsClosureX` to Scala wrapping function. - * - * @param f Grid closure to convert. - */ - implicit def fromAbsClosureX(f: GridAbsClosureX): () => Unit = - new ScalarAbsClosureXFunction(f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side absolute closure to pimp. - */ - implicit def absClosureDotScala(f: GridAbsClosure) = new { - def scala: () => Unit = - fromAbsClosure(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side absolute closure to pimp. - */ - implicit def absClosureXDotScala(f: GridAbsClosureX) = new { - def scala: () => Unit = - fromAbsClosureX(f) - } - - /** - * Implicit converter from Scala predicate to `GridAbsPredicate`. - * - * @param f Scala predicate to convert. - */ - implicit def toAbsPredicate(f: () => Boolean): GridAbsPredicate = - f match { - case (p: ScalarAbsPredicateFunction) => p.inner - case _ => new ScalarAbsPredicate(f) - } - - /** - * Implicit converter from Scala predicate to `GridAbsPredicateX`. - * - * @param f Scala predicate to convert. - */ - implicit def toAbsPredicateX(f: () => Boolean): GridAbsPredicateX = - f match { - case (p: ScalarAbsPredicateXFunction) => p.inner - case _ => new ScalarAbsPredicateX(f) - } - - /** - * Implicit converter from `GridAbsPredicate` to Scala wrapping predicate. - * - * @param p Grid predicate to convert. - */ - implicit def fromAbsPredicate(p: GridAbsPredicate): () => Boolean = - new ScalarAbsPredicateFunction(p) - - /** - * Implicit converter from `GridAbsPredicateX` to Scala wrapping predicate. - * - * @param p Grid predicate to convert. - */ - implicit def fromAbsPredicateX(p: GridAbsPredicateX): () => Boolean = - new ScalarAbsPredicateXFunction(p) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def absPredicateDotScala(p: GridAbsPredicate) = new { - def scala: () => Boolean = - fromAbsPredicate(p) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def absPredicateXDotScala(p: GridAbsPredicateX) = new { - def scala: () => Boolean = - fromAbsPredicateX(p) - } - - /** - * Implicit converter from `java.lang.Runnable` to `GridAbsClosure`. - * - * @param r Java runnable to convert. - */ - implicit def toAbsClosure2(r: java.lang.Runnable): GridAbsClosure = - GridFunc.as(r) - - /** - * Implicit converter from Scala predicate to Scala wrapping predicate. - * - * @param f Scala predicate to convert. - */ - implicit def toPredicate[T](f: T => Boolean) = - f match { - case null => null - case (p: ScalarPredicateFunction[T]) => p.inner - case _ => new ScalarPredicate[T](f) - } - - /** - * Implicit converter from Scala predicate to Scala wrapping predicate. - * - * @param f Scala predicate to convert. - */ - def toPredicateX[T](f: T => Boolean) = - f match { - case (p: ScalarPredicateXFunction[T]) => p.inner - case _ => new ScalarPredicateX[T](f) - } - - /** - * Implicit converter from `GridPredicate` to Scala wrapping predicate. 
- * - * @param p Grid predicate to convert. - */ - implicit def fromPredicate[T](p: IgnitePredicate[T]): T => Boolean = - new ScalarPredicateFunction[T](p) - - /** - * Implicit converter from `GridPredicate` to Scala wrapping predicate. - * - * @param p Grid predicate to convert. - */ - implicit def fromPredicateX[T](p: IgnitePredicateX[T]): T => Boolean = - new ScalarPredicateXFunction[T](p) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def predicateDotScala[T](p: IgnitePredicate[T]) = new { - def scala: T => Boolean = - fromPredicate(p) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def predicateXDotScala[T](p: IgnitePredicateX[T]) = new { - def scala: T => Boolean = - fromPredicateX(p) - } - - /** - * Implicit converter from Scala predicate to Scala wrapping predicate. - * - * @param f Scala predicate to convert. - */ - implicit def toPredicate2[T1, T2](f: (T1, T2) => Boolean) = - f match { - case (p: ScalarPredicate2Function[T1, T2]) => p.inner - case _ => new ScalarPredicate2[T1, T2](f) - } - - /** - * Implicit converter from Scala predicate to Scala wrapping predicate. - * - * @param f Scala predicate to convert. - */ - def toPredicate2X[T1, T2](f: (T1, T2) => Boolean) = - f match { - case (p: ScalarPredicate2XFunction[T1, T2]) => p.inner - case _ => new ScalarPredicate2X[T1, T2](f) - } - - /** - * Implicit converter from `GridPredicate2X` to Scala wrapping predicate. - * - * @param p Grid predicate to convert. - */ - implicit def fromPredicate2[T1, T2](p: IgniteBiPredicate[T1, T2]): (T1, T2) => Boolean = - new ScalarPredicate2Function[T1, T2](p) - - /** - * Implicit converter from `GridPredicate2X` to Scala wrapping predicate. - * - * @param p Grid predicate to convert. - */ - implicit def fromPredicate2X[T1, T2](p: IgnitePredicate2X[T1, T2]): (T1, T2) => Boolean = - new ScalarPredicate2XFunction[T1, T2](p) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def predicate2DotScala[T1, T2](p: IgniteBiPredicate[T1, T2]) = new { - def scala: (T1, T2) => Boolean = - fromPredicate2(p) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def predicate2XDotScala[T1, T2](p: IgnitePredicate2X[T1, T2]) = new { - def scala: (T1, T2) => Boolean = - fromPredicate2X(p) - } - - /** - * Implicit converter from Scala predicate to Scala wrapping predicate. - * - * @param f Scala predicate to convert. - */ - implicit def toPredicate3[T1, T2, T3](f: (T1, T2, T3) => Boolean) = - f match { - case (p: ScalarPredicate3Function[T1, T2, T3]) => p.inner - case _ => new ScalarPredicate3[T1, T2, T3](f) - } - - /** - * Implicit converter from Scala predicate to Scala wrapping predicate. - * - * @param f Scala predicate to convert. - */ - def toPredicate32[T1, T2, T3](f: (T1, T2, T3) => Boolean) = - f match { - case (p: ScalarPredicate3XFunction[T1, T2, T3]) => p.inner - case _ => new ScalarPredicate3X[T1, T2, T3](f) - } - - /** - * Implicit converter from `GridPredicate3X` to Scala wrapping predicate. - * - * @param p Grid predicate to convert. - */ - implicit def fromPredicate3[T1, T2, T3](p: GridPredicate3[T1, T2, T3]): (T1, T2, T3) => Boolean = - new ScalarPredicate3Function[T1, T2, T3](p) - - /** - * Implicit converter from `GridPredicate3X` to Scala wrapping predicate. 
- * - * @param p Grid predicate to convert. - */ - implicit def fromPredicate3X[T1, T2, T3](p: GridPredicate3X[T1, T2, T3]): (T1, T2, T3) => Boolean = - new ScalarPredicate3XFunction[T1, T2, T3](p) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def predicate3DotScala[T1, T2, T3](p: GridPredicate3[T1, T2, T3]) = new { - def scala: (T1, T2, T3) => Boolean = - fromPredicate3(p) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param p Java-side predicate to pimp. - */ - implicit def predicate3XDotScala[T1, T2, T3](p: GridPredicate3X[T1, T2, T3]) = new { - def scala: (T1, T2, T3) => Boolean = - fromPredicate3X(p) - } - - /** - * Implicit converter from Scala closure to `GridClosure`. - * - * @param f Scala closure to convert. - */ - implicit def toClosure[A, R](f: A => R): IgniteClosure[A, R] = - f match { - case (c: ScalarClosureFunction[A, R]) => c.inner - case _ => new ScalarClosure[A, R](f) - } - - /** - * Implicit converter from Scala closure to `GridClosureX`. - * - * @param f Scala closure to convert. - */ - def toClosureX[A, R](f: A => R): IgniteClosureX[A, R] = - f match { - case (c: ScalarClosureXFunction[A, R]) => c.inner - case _ => new ScalarClosureX[A, R](f) - } - - /** - * Implicit converter from `GridClosure` to Scala wrapping closure. - * - * @param f Grid closure to convert. - */ - implicit def fromClosure[A, R](f: IgniteClosure[A, R]): A => R = - new ScalarClosureFunction[A, R](f) - - /** - * Implicit converter from `GridClosureX` to Scala wrapping closure. - * - * @param f Grid closure to convert. - */ - implicit def fromClosureX[A, R](f: IgniteClosureX[A, R]): A => R = - new ScalarClosureXFunction[A, R](f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def closureDotScala[A, R](f: IgniteClosure[A, R]) = new { - def scala: A => R = - fromClosure(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def closureXDotScala[A, R](f: IgniteClosureX[A, R]) = new { - def scala: A => R = - fromClosureX(f) - } - - /** - * Implicit converter from Scala closure to `GridClosure2`. - * - * @param f Scala closure to convert. - */ - implicit def toClosure2[A1, A2, R](f: (A1, A2) => R): IgniteBiClosure[A1, A2, R] = - f match { - case (p: ScalarClosure2Function[A1, A2, R]) => p.inner - case _ => new ScalarClosure2[A1, A2, R](f) - } - - /** - * Implicit converter from Scala closure to `GridClosure2X`. - * - * @param f Scala closure to convert. - */ - def toClosure2X[A1, A2, R](f: (A1, A2) => R): IgniteClosure2X[A1, A2, R] = - f match { - case (p: ScalarClosure2XFunction[A1, A2, R]) => p.inner - case _ => new ScalarClosure2X[A1, A2, R](f) - } - - /** - * Implicit converter from `GridClosure2X` to Scala wrapping closure. - * - * @param f Grid closure to convert. - */ - implicit def fromClosure2[A1, A2, R](f: IgniteBiClosure[A1, A2, R]): (A1, A2) => R = - new ScalarClosure2Function[A1, A2, R](f) - - /** - * Implicit converter from `GridClosure2X` to Scala wrapping closure. - * - * @param f Grid closure to convert. - */ - implicit def fromClosure2X[A1, A2, R](f: IgniteClosure2X[A1, A2, R]): (A1, A2) => R = - new ScalarClosure2XFunction[A1, A2, R](f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. 
- */ - implicit def closure2DotScala[A1, A2, R](f: IgniteBiClosure[A1, A2, R]) = new { - def scala: (A1, A2) => R = - fromClosure2(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def closure2XDotScala[A1, A2, R](f: IgniteClosure2X[A1, A2, R]) = new { - def scala: (A1, A2) => R = - fromClosure2X(f) - } - - /** - * Implicit converter from Scala closure to `GridClosure3X`. - * - * @param f Scala closure to convert. - */ - implicit def toClosure3[A1, A2, A3, R](f: (A1, A2, A3) => R): GridClosure3[A1, A2, A3, R] = - f match { - case (p: ScalarClosure3Function[A1, A2, A3, R]) => p.inner - case _ => new ScalarClosure3[A1, A2, A3, R](f) - } - - /** - * Implicit converter from Scala closure to `GridClosure3X`. - * - * @param f Scala closure to convert. - */ - def toClosure3X[A1, A2, A3, R](f: (A1, A2, A3) => R): GridClosure3X[A1, A2, A3, R] = - f match { - case (p: ScalarClosure3XFunction[A1, A2, A3, R]) => p.inner - case _ => new ScalarClosure3X[A1, A2, A3, R](f) - } - - /** - * Implicit converter from `GridClosure3` to Scala wrapping closure. - * - * @param f Grid closure to convert. - */ - implicit def fromClosure3[A1, A2, A3, R](f: GridClosure3[A1, A2, A3, R]): (A1, A2, A3) => R = - new ScalarClosure3Function[A1, A2, A3, R](f) - - /** - * Implicit converter from `GridClosure3X` to Scala wrapping closure. - * - * @param f Grid closure to convert. - */ - implicit def fromClosure3X[A1, A2, A3, R](f: GridClosure3X[A1, A2, A3, R]): (A1, A2, A3) => R = - new ScalarClosure3XFunction[A1, A2, A3, R](f) - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def closure3DotScala[A1, A2, A3, R](f: GridClosure3[A1, A2, A3, R]) = new { - def scala: (A1, A2, A3) => R = - fromClosure3(f) - } - - /** - * Pimp for adding explicit conversion method `scala`. - * - * @param f Java-side closure to pimp. - */ - implicit def closure3XDotScala[A1, A2, A3, R](f: GridClosure3X[A1, A2, A3, R]) = new { - def scala: (A1, A2, A3) => R = - fromClosure3X(f) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/Packet.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/Packet.scala deleted file mode 100644 index 19dbe2dc50034..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/Packet.scala +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar - -/** - * Contains Scala side adapters for implicits conversion. 
- */ -package object lang diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosure.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosure.scala deleted file mode 100644 index 9d1832d2e4441..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosure.scala +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridPeerDeployAwareAdapter -import org.apache.ignite.lang.IgniteRunnable - -/** - * Peer deploy aware adapter for Java's `GridRunnable`. - */ -class ScalarAbsClosure(private val f: () => Unit) extends GridPeerDeployAwareAdapter with IgniteRunnable { - assert(f != null) - - peerDeployLike(f) - - /** - * Delegates to passed in function. - */ - def run() { - f() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureFunction.scala deleted file mode 100644 index fe0d0f41d13bb..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureFunction.scala +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -/** - * Wrapping Scala function for `GridAbsClosure`. - */ -class ScalarAbsClosureFunction(val inner: Runnable) extends (() => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. 
- */ - def apply() { - inner.run() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureX.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureX.scala deleted file mode 100644 index cc4444ab707ea..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureX.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.GridAbsClosureX - -/** - * Peer deploy aware adapter for Java's `GridAbsClosureX`. - */ -class ScalarAbsClosureX(private val f: () => Unit) extends GridAbsClosureX { - assert(f != null) - - /** - * Delegates to passed in function. - */ - @throws(classOf[IgniteCheckedException]) - def applyx() { - f() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureXFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureXFunction.scala deleted file mode 100644 index f9b3d9ddf53ff..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsClosureXFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridAbsClosureX - -/** - * Wrapping Scala function for `GridAbsClosureX`. - */ -class ScalarAbsClosureXFunction(val inner: GridAbsClosureX) extends (() => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. 
- */ - def apply() { - inner.applyx() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicate.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicate.scala deleted file mode 100644 index f9f0b5aeff784..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicate.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridAbsPredicate - -/** - * Peer deploy aware adapter for Java's `GridAbsPredicate`. - */ -class ScalarAbsPredicate(private val f: () => Boolean) extends GridAbsPredicate { - assert(f != null) - - /** - * Delegates to passed in function. - */ - def apply(): Boolean = { - f() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateFunction.scala deleted file mode 100644 index 4e4098c33e468..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridAbsPredicate - -/** - * Wrapping Scala function for `GridAbsPredicate`. - */ -class ScalarAbsPredicateFunction(val inner: GridAbsPredicate) extends (() => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. 
- */ - def apply(): Boolean = { - inner.apply - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateX.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateX.scala deleted file mode 100644 index 91ae29a8e9cc6..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateX.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.GridAbsPredicateX - -/** - * Peer deploy aware adapter for Java's `GridAbsPredicateX`. - */ -class ScalarAbsPredicateX(private val f: () => Boolean) extends GridAbsPredicateX { - assert(f != null) - - /** - * Delegates to passed in function. - */ - @throws(classOf[IgniteCheckedException]) - def applyx(): Boolean = { - f() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateXFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateXFunction.scala deleted file mode 100644 index 66cf155c25d3d..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarAbsPredicateXFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridAbsPredicateX - -/** - * Wrapping Scala function for `GridAbsPredicateX`. - */ -class ScalarAbsPredicateXFunction(val inner: GridAbsPredicateX) extends (() => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. 
- */ - def apply(): Boolean = { - inner.applyx - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure.scala deleted file mode 100644 index 6a9d7ca542d60..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteClosure - -/** - * Peer deploy aware adapter for Java's `GridClosure`. - */ -class ScalarClosure[E, R](private val f: E => R) extends IgniteClosure[E, R] { - assert(f != null) - - /** - * Delegates to passed in function. - */ - def apply(e: E): R = { - f(e) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2.scala deleted file mode 100644 index 1362290420adf..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteBiClosure - -/** - * Peer deploy aware adapter for Java's `GridClosure2`. - */ -class ScalarClosure2[E1, E2, R](private val f: (E1, E2) => R) extends IgniteBiClosure[E1, E2, R] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - def apply(e1: E1, e2: E2): R = { - f(e1, e2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2Function.scala deleted file mode 100644 index 2f4c89b14c32c..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2Function.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteBiClosure - -/** - * Wrapping Scala function for `GridClosure2`. - */ -class ScalarClosure2Function[T1, T2, R](val inner: IgniteBiClosure[T1, T2, R]) extends ((T1, T2) => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2): R = { - inner.apply(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2X.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2X.scala deleted file mode 100644 index 23c42b82c6af6..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2X.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.IgniteClosure2X - -/** - * Peer deploy aware adapter for Java's `GridClosure2X`. - */ -class ScalarClosure2X[E1, E2, R](private val f: (E1, E2) => R) extends IgniteClosure2X[E1, E2, R] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - @throws(classOf[IgniteCheckedException]) - def applyx(e1: E1, e2: E2): R = { - f(e1, e2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2XFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2XFunction.scala deleted file mode 100644 index 0317a6a254da4..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure2XFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteClosure2X - -/** - * Wrapping Scala function for `GridClosure2X`. - */ -class ScalarClosure2XFunction[T1, T2, R](val inner: IgniteClosure2X[T1, T2, R]) extends ((T1, T2) => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2): R = { - inner.applyx(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3.scala deleted file mode 100644 index 1d890282cf0af..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridClosure3 - -/** - * Peer deploy aware adapter for Java's `GridClosure3`. - */ -class ScalarClosure3[E1, E2, E3, R](private val f: (E1, E2, E3) => R) extends GridClosure3[E1, E2, E3, R] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - def apply(e1: E1, e2: E2, e3: E3): R = { - f(e1, e2, e3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3Function.scala deleted file mode 100644 index 18e9f786b0fa2..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3Function.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridClosure3 - -/** - * Wrapping Scala function for `GridClosure3`. - */ -class ScalarClosure3Function[T1, T2, T3, R](val inner: GridClosure3[T1, T2, T3, R]) extends ((T1, T2, T3) => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2, t3: T3): R = { - inner.apply(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3X.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3X.scala deleted file mode 100644 index 6ba5dece0b81c..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3X.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.GridClosure3X - -/** - * Peer deploy aware adapter for Java's `GridClosure3X`. - */ -class ScalarClosure3X[E1, E2, E3, R](private val f: (E1, E2, E3) => R) extends GridClosure3X[E1, E2, E3, R] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - @throws(classOf[IgniteCheckedException]) - def applyx(e1: E1, e2: E2, e3: E3): R = { - f(e1, e2, e3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3XFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3XFunction.scala deleted file mode 100644 index 3b4a6fea75196..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosure3XFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridClosure3X - -/** - * Wrapping Scala function for `GridClosure3X`. - */ -class ScalarClosure3XFunction[T1, T2, T3, R](val inner: GridClosure3X[T1, T2, T3, R]) extends ((T1, T2, T3) => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2, t3: T3): R = { - inner.applyx(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureFunction.scala deleted file mode 100644 index 67e660349fc01..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteClosure - -/** - * Wrapping Scala function for `GridClosure`. - */ -class ScalarClosureFunction[T, R](val inner: IgniteClosure[T, R]) extends (T => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. 
- */ - def apply(t: T): R = { - inner.apply(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureX.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureX.scala deleted file mode 100644 index 82307337a66a3..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureX.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.IgniteClosureX - -/** - * Peer deploy aware adapter for Java's `GridClosureX`. - */ -class ScalarClosureX[E, R](private val f: E => R) extends IgniteClosureX[E, R] { - assert(f != null) - - /** - * Delegates to passed in function. - */ - @throws(classOf[IgniteCheckedException]) - def applyx(e: E): R = { - f(e) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureXFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureXFunction.scala deleted file mode 100644 index 3dfbaae4eb7d7..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarClosureXFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteClosureX - -/** - * Wrapping Scala function for `GridClosureX`. - */ -class ScalarClosureXFunction[T, R](val inner: IgniteClosureX[T, R]) extends (T => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. 
- */ - def apply(t: T): R = { - inner.applyx(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure.scala deleted file mode 100644 index fe3ff0fe54284..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteInClosure - -/** - * Peer deploy aware adapter for Java's `GridInClosure`. - */ -class ScalarInClosure[T](private val f: T => Unit) extends IgniteInClosure[T] { - assert(f != null) - - /** - * Delegates to passed in function. - */ - def apply(t: T) { - f(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2.scala deleted file mode 100644 index b27cf04867562..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteBiInClosure - -/** - * Peer deploy aware adapter for Java's `GridInClosure2`. - */ -class ScalarInClosure2[T1, T2](private val f: (T1, T2) => Unit) extends IgniteBiInClosure[T1, T2] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - def apply(t1: T1, t2: T2) { - f(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2Function.scala deleted file mode 100644 index 1f31adb41a572..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2Function.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteBiInClosure - -/** - * Wrapping Scala function for `GridInClosure2`. - */ -class ScalarInClosure2Function[T1, T2](val inner: IgniteBiInClosure[T1, T2]) extends ((T1, T2) => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2) { - inner.apply(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2X.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2X.scala deleted file mode 100644 index 5064a456ab1f3..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2X.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.IgniteInClosure2X - -/** - * Peer deploy aware adapter for Java's `GridInClosure2X`. - */ -class ScalarInClosure2X[T1, T2](private val f: (T1, T2) => Unit) extends IgniteInClosure2X[T1, T2] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - @throws(classOf[IgniteCheckedException]) - def applyx(t1: T1, t2: T2) { - f(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2XFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2XFunction.scala deleted file mode 100644 index 9602304916980..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure2XFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteInClosure2X - -/** - * Wrapping Scala function for `GridInClosure2X`. - */ -class ScalarInClosure2XFunction[T1, T2](val inner: IgniteInClosure2X[T1, T2]) extends ((T1, T2) => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2) { - inner.applyx(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3.scala deleted file mode 100644 index 766d538ebd8e8..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridInClosure3 - -/** - * Peer deploy aware adapter for Java's `GridInClosure3`. - */ -class ScalarInClosure3[T1, T2, T3](private val f: (T1, T2, T3) => Unit) extends GridInClosure3[T1, T2, T3] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - def apply(t1: T1, t2: T2, t3: T3) { - f(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3Function.scala deleted file mode 100644 index 73b11bc49fb1f..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3Function.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridInClosure3 - -/** - * Wrapping Scala function for `GridInClosure3`. - */ -class ScalarInClosure3Function[T1, T2, T3](val inner: GridInClosure3[T1, T2, T3]) extends ((T1, T2, T3) => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2, t3: T3) { - inner.apply(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3X.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3X.scala deleted file mode 100644 index ac1c0e6110344..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3X.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.GridInClosure3X - -/** - * Peer deploy aware adapter for Java's `GridInClosure3X`. - */ -class ScalarInClosure3X[T1, T2, T3](private val f: (T1, T2, T3) => Unit) extends GridInClosure3X[T1, T2, T3] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - @throws(classOf[IgniteCheckedException]) - def applyx(t1: T1, t2: T2, t3: T3) { - f(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3XFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3XFunction.scala deleted file mode 100644 index b21ef52dbf685..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosure3XFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridInClosure3X - -/** - * Wrapping Scala function for `GridInClosure3X`. - */ -class ScalarInClosure3XFunction[T1, T2, T3](val inner: GridInClosure3X[T1, T2, T3]) extends ((T1, T2, T3) => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(t1: T1, t2: T2, t3: T3) { - inner.applyx(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureFunction.scala deleted file mode 100644 index 4f76660f4a426..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteInClosure - -/** - * Wrapping Scala function for `GridInClosure`. - */ -class ScalarInClosureFunction[T](val inner: IgniteInClosure[T]) extends (T => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. 
- */ - def apply(t: T) { - inner.apply(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureX.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureX.scala deleted file mode 100644 index f3108fc547c1d..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureX.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.IgniteInClosureX - -/** - * Peer deploy aware adapter for Java's `GridInClosureX`. - */ -class ScalarInClosureX[T](private val f: T => Unit) extends IgniteInClosureX[T] { - assert(f != null) - - /** - * Delegates to passed in function. - */ - @throws(classOf[IgniteCheckedException]) - def applyx(t: T) { - f(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureXFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureXFunction.scala deleted file mode 100644 index cb4facf2b09b0..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarInClosureXFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteInClosureX - -/** - * Wrapping Scala function for `GridInClosureX`. - */ -class ScalarInClosureXFunction[T](val inner: IgniteInClosureX[T]) extends (T => Unit) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. 
- */ - def apply(t: T) { - inner.applyx(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarJob.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarJob.scala deleted file mode 100644 index cfc10c2a39142..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarJob.scala +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.compute.ComputeJobAdapter - -/** - * Peer deploy aware adapter for Java's `ComputeJob`. - */ -class ScalarJob(private val inner: () => Any) extends ComputeJobAdapter { - assert(inner != null) - - /** - * Delegates to passed in function. - */ - def execute = inner().asInstanceOf[AnyRef] -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosure.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosure.scala deleted file mode 100644 index 6c816d9e282cd..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosure.scala +++ /dev/null @@ -1,47 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridPeerDeployAwareAdapter -import org.apache.ignite.lang.{IgniteCallable, IgniteOutClosure} - -import java.util.concurrent.Callable - -/** - * Peer deploy aware adapter for Java's `GridOutClosure`. - */ -class ScalarOutClosure[R](private val f: () => R) extends GridPeerDeployAwareAdapter - with IgniteOutClosure[R] with IgniteCallable[R] { - assert(f != null) - - peerDeployLike(f) - - /** - * Delegates to passed in function. - */ - def apply: R = { - f() - } - - /** - * Delegates to passed in function. 
- */ - def call: R = { - f() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureFunction.scala deleted file mode 100644 index 5795bf3a8d35f..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureFunction.scala +++ /dev/null @@ -1,35 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import java.util.concurrent.Callable - -import org.apache.ignite.lang.IgniteCallable - -/** - * Wrapping Scala function for `Callable` and specifically for `GridOutClosure`. - */ -class ScalarOutClosureFunction[R](val inner: IgniteCallable[R]) extends (() => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(): R = - inner.call() -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureX.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureX.scala deleted file mode 100644 index 949e7200a6db7..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureX.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.IgniteOutClosureX - -/** - * Peer deploy aware adapter for Java's `GridOutClosureX`. - */ -class ScalarOutClosureX[R](private val f: () => R) extends IgniteOutClosureX[R] { - assert(f != null) - - /** - * Delegates to passed in function. 
- */ - @throws(classOf[IgniteCheckedException]) - def applyx(): R = { - f() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureXFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureXFunction.scala deleted file mode 100644 index 7f63a7b56d4a2..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarOutClosureXFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteOutClosureX - -/** - * Wrapping Scala function for `GridOutClosureX`. - */ -class ScalarOutClosureXFunction[R](val inner: IgniteOutClosureX[R]) extends (() => R) { - assert(inner != null) - - /** - * Delegates to passed in grid closure. - */ - def apply(): R = { - inner.applyx() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate.scala deleted file mode 100644 index 82e007a7bdabb..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate.scala +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgnitePredicate - -/** - * Peer deploy aware adapter for Java's `GridPredicate`. - */ -class ScalarPredicate[T](private val p: T => Boolean) extends IgnitePredicate[T] { - assert(p != null) - - /** - * Delegates to passed in function. 
- */ - def apply(e: T) = p(e) -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2.scala deleted file mode 100644 index 866e6e182ce3a..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2.scala +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteBiPredicate - -/** - * Peer deploy aware adapter for Java's `GridPredicate2`. - */ -class ScalarPredicate2[T1, T2](private val p: (T1, T2) => Boolean) extends IgniteBiPredicate[T1, T2] { - assert(p != null) - - /** - * Delegates to passed in function. - */ - def apply(e1: T1, e2: T2) = p(e1, e2) -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2Function.scala deleted file mode 100644 index 413ca74f1991e..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2Function.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteBiPredicate - -/** - * Wrapping Scala function for `GridPredicate2`. - */ -class ScalarPredicate2Function[T1, T2](val inner: IgniteBiPredicate[T1, T2]) extends ((T1, T2) => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. 
- */ - def apply(t1: T1, t2: T2): Boolean = { - inner.apply(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2X.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2X.scala deleted file mode 100644 index b737192b1b036..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2X.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.IgnitePredicate2X - -/** - * Peer deploy aware adapter for Java's `GridPredicate2X`. - */ -class ScalarPredicate2X[T1, T2](private val p: (T1, T2) => Boolean) extends IgnitePredicate2X[T1, T2] { - assert(p != null) - - /** - * Delegates to passed in function. - */ - @throws(classOf[IgniteCheckedException]) - def applyx(e1: T1, e2: T2): Boolean = { - p(e1, e2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2XFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2XFunction.scala deleted file mode 100644 index fb326e7b24b36..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate2XFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgnitePredicate2X - -/** - * Wrapping Scala function for `GridPredicate2X`. - */ -class ScalarPredicate2XFunction[T1, T2](val inner: IgnitePredicate2X[T1, T2]) extends ((T1, T2) => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. 
- */ - def apply(t1: T1, t2: T2): Boolean = { - inner.applyx(t1, t2) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3.scala deleted file mode 100644 index 1890f655212d4..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3.scala +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridPredicate3 - -/** - * Peer deploy aware adapter for Java's `GridPredicate3`. - */ -class ScalarPredicate3[T1, T2, T3](private val p: (T1, T2, T3) => Boolean) extends GridPredicate3[T1, T2, T3] { - assert(p != null) - - /** - * Delegates to passed in function. - */ - def apply(e1: T1, e2: T2, e3: T3) = p(e1, e2, e3) -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3Function.scala deleted file mode 100644 index daddee81ee87e..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3Function.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridPredicate3 - -/** - * Wrapping Scala function for `GridPredicate3`. - */ -class ScalarPredicate3Function[T1, T2, T3](val inner: GridPredicate3[T1, T2, T3]) extends ((T1, T2, T3) => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. 
- */ - def apply(t1: T1, t2: T2, t3: T3): Boolean = { - inner.apply(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3X.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3X.scala deleted file mode 100644 index bdbce23fd9bd3..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3X.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.GridPredicate3X - -/** - * Peer deploy aware adapter for Java's `GridPredicate3X`. - */ -class ScalarPredicate3X[T1, T2, T3](private val p: (T1, T2, T3) => Boolean) extends GridPredicate3X[T1, T2, T3] { - assert(p != null) - - /** - * Delegates to passed in function. - */ - @throws(classOf[IgniteCheckedException]) - def applyx(e1: T1, e2: T2, e3: T3): Boolean = { - p(e1, e2, e3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3XFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3XFunction.scala deleted file mode 100644 index b8a218f3c9e0c..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicate3XFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.GridPredicate3X - -/** - * Wrapping Scala function for `GridPredicate3X`. - */ -class ScalarPredicate3XFunction[T1, T2, T3](val inner: GridPredicate3X[T1, T2, T3]) extends ((T1, T2, T3) => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. 
- */ - def apply(t1: T1, t2: T2, t3: T3): Boolean = { - inner.applyx(t1, t2, t3) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateFunction.scala deleted file mode 100644 index 495efbc140a20..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgnitePredicate - -/** - * Wrapping Scala function for `GridPredicate`. - */ -class ScalarPredicateFunction[T](val inner: IgnitePredicate[T]) extends (T => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. - */ - def apply(t: T): Boolean = { - inner.apply(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateX.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateX.scala deleted file mode 100644 index c8676f31be783..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateX.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite._ -import org.apache.ignite.internal.util.lang.IgnitePredicateX - -/** - * Peer deploy aware adapter for Java's `GridPredicateX`. - */ -class ScalarPredicateX[T](private val p: T => Boolean) extends IgnitePredicateX[T] { - assert(p != null) - - /** - * Delegates to passed in function. 
- */ - @throws(classOf[IgniteCheckedException]) - def applyx(e: T): Boolean = { - p(e) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateXFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateXFunction.scala deleted file mode 100644 index 831b53e18bf8b..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarPredicateXFunction.scala +++ /dev/null @@ -1,34 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgnitePredicateX - -/** - * Wrapping Scala function for `GridPredicateX`. - */ -class ScalarPredicateXFunction[T](val inner: IgnitePredicateX[T]) extends (T => Boolean) { - assert(inner != null) - - /** - * Delegates to passed in grid predicate. - */ - def apply(t: T) = { - inner.applyx(t) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer.scala deleted file mode 100644 index d855e3db51d2d..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer.scala +++ /dev/null @@ -1,47 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteReducer - -import scala.collection._ - -/** - * Peer deploy aware adapter for Java's `GridReducer`. - */ -class ScalarReducer[E, R](private val r: Seq[E] => R) extends IgniteReducer[E, R] { - assert(r != null) - - private val buf = new mutable.ListBuffer[E] - - /** - * Delegates to passed in function. - */ - def reduce = r(buf.toSeq) - - /** - * Collects given value. - * - * @param e Value to collect for later reduction. 
- */ - def collect(e: E) = { - buf += e - - true - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer2.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer2.scala deleted file mode 100644 index 8c96498182315..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer2.scala +++ /dev/null @@ -1,50 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteReducer2 - -import scala.collection._ - -/** - * Peer deploy aware adapter for Java's `GridReducer2`. - */ -class ScalarReducer2[E1, E2, R](private val r: (Seq[E1], Seq[E2]) => R) extends IgniteReducer2[E1, E2, R] { - assert(r != null) - - private val buf1 = new mutable.ListBuffer[E1] - private val buf2 = new mutable.ListBuffer[E2] - - /** - * Delegates to passed in function. - */ - def apply = r(buf1.toSeq, buf2.toSeq) - - /** - * Collects given values. - * - * @param e1 Value to collect for later reduction. - * @param e2 Value to collect for later reduction. - */ - def collect(e1: E1, e2: E2) = { - buf1 += e1 - buf2 += e2 - - true - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer2Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer2Function.scala deleted file mode 100644 index 9b1a1c6b5aba6..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer2Function.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteReducer2 - -/** - * Wrapping Scala function for `GridReducer2`. - */ -class ScalarReducer2Function[E1, E2, R](val inner: IgniteReducer2[E1, E2, R]) extends ((Seq[E1], Seq[E2]) => R) { - assert(inner != null) - - /** - * Delegates to passed in grid reducer. 
- */ - def apply(s1: Seq[E1], s2: Seq[E2]) = { - for (e1 <- s1; e2 <- s2) inner.collect(e1, e2) - - inner.apply() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer3.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer3.scala deleted file mode 100644 index 448e5ae061b66..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer3.scala +++ /dev/null @@ -1,54 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteReducer3 - -import scala.collection._ - -/** - * Peer deploy aware adapter for Java's `GridReducer3`. - */ -class ScalarReducer3[E1, E2, E3, R](private val r: (Seq[E1], Seq[E2], Seq[E3]) => R) - extends IgniteReducer3[E1, E2, E3, R] { - assert(r != null) - - private val buf1 = new mutable.ListBuffer[E1] - private val buf2 = new mutable.ListBuffer[E2] - private val buf3 = new mutable.ListBuffer[E3] - - /** - * Delegates to passed in function. - */ - def apply = r(buf1.toSeq, buf2.toSeq, buf3.toSeq) - - /** - * Collects given values. - * - * @param e1 Value to collect for later reduction. - * @param e2 Value to collect for later reduction. - * @param e3 Value to collect for later reduction. - */ - def collect(e1: E1, e2: E2, e3: E3) = { - buf1 += e1 - buf2 += e2 - buf3 += e3 - - true - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer3Function.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer3Function.scala deleted file mode 100644 index d1d8255aca6d3..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducer3Function.scala +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.internal.util.lang.IgniteReducer3 - -/** - * Wrapping Scala function for `GridReducer3`. 
- */ -class ScalarReducer3Function[E1, E2, E3, R](val inner: IgniteReducer3[E1, E2, E3, R]) extends - ((Seq[E1], Seq[E2], Seq[E3]) => R) { - assert(inner != null) - - /** - * Delegates to passed in grid reducer. - */ - def apply(s1: Seq[E1], s2: Seq[E2], s3: Seq[E3]) = { - for (e1 <- s1; e2 <- s2; e3 <- s3) inner.collect(e1, e2, e3) - - inner.apply() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducerFunction.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducerFunction.scala deleted file mode 100644 index 5c864276d83a4..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/lang/ScalarReducerFunction.scala +++ /dev/null @@ -1,36 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.lang - -import org.apache.ignite.lang.IgniteReducer - -/** - * Wrapping Scala function for `GridReducer`. - */ -class ScalarReducerFunction[E1, R](val inner: IgniteReducer[E1, R]) extends (Seq[E1] => R) { - assert(inner != null) - - /** - * Delegates to passed in grid reducer. - */ - def apply(s: Seq[E1]) = { - s foreach inner.collect _ - - inner.reduce() - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/Packet.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/Packet.scala deleted file mode 100644 index 0dc99b623b52f..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/Packet.scala +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar - -/** - * Contains Scala "Pimp" implementations for main Ignite entities. 
- */ -package object pimps diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/PimpedType.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/PimpedType.scala deleted file mode 100644 index 61ddd0d514e61..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/PimpedType.scala +++ /dev/null @@ -1,32 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.pimps - -/** - * Sub class to create a wrapper type for `X` as documentation that the sub class follows the - * 'pimp my library' pattern. http://www.artima.com/weblogs/viewpost.jsp?thread=179766 - *
- * The companion object provides an implicit conversion to unwrap `value`. - */ -trait PimpedType[X] { - val value: X -} - -object PimpedType { - implicit def UnwrapPimpedType[X](p: PimpedType[X]): X = p.value -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarCachePimp.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarCachePimp.scala deleted file mode 100644 index 95916dd949d7b..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarCachePimp.scala +++ /dev/null @@ -1,657 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.pimps - -import org.apache.ignite.cache.query._ - -import javax.cache.Cache - -import org.apache.ignite._ -import org.apache.ignite.lang.{IgnitePredicate, IgniteReducer} -import org.apache.ignite.scalar.scalar._ -import org.jetbrains.annotations.Nullable - -import java.util.{List => JavaList, Set => JavaSet} - -import scala.collection._ -import scala.collection.JavaConversions._ - -/** - * Companion object. - */ -object ScalarCachePimp { - /** - * Creates new Scalar cache projection pimp with given Java-side implementation. - * - * @param impl Java-side implementation. - */ - def apply[K, V](impl: IgniteCache[K, V]) = { - if (impl == null) - throw new NullPointerException("impl") - - val pimp = new ScalarCachePimp[K, V] - - pimp.impl = impl - - pimp - } -} - -/** - * ==Overview== - * Defines Scalar "pimp" for `IgniteCache` on Java side. - * - * Essentially this class extends Java `IgniteCache` interface with Scala specific - * API adapters using primarily implicit conversions defined in `ScalarConversions` object. What - * it means is that you can use functions defined in this class on object - * of Java `IgniteCache` type. Scala will automatically (implicitly) convert it into - * Scalar's pimp and replace the original call with a call on that pimp. - * - * Note that Scalar provide extensive library of implicit conversion between Java and - * Scala Ignite counterparts in `ScalarConversions` object - * - * ==Suffix '$' In Names== - * Symbol `$` is used in names when they conflict with the names in the base Java class - * that Scala pimp is shadowing or with Java package name that your Scala code is importing. - * Instead of giving two different names to the same function we've decided to simply mark - * Scala's side method with `$` suffix. - */ -class ScalarCachePimp[@specialized K, @specialized V] extends PimpedType[IgniteCache[K, V]] -with Iterable[Cache.Entry[K, V]] with Ordered[IgniteCache[K, V]] { - /** */ - lazy val value: IgniteCache[K, V] = impl - - /** */ - protected var impl: IgniteCache[K, V] = _ - - /** Type alias. 
*/ - protected type EntryPred = (Cache.Entry[K, V]) => Boolean - - /** Type alias. */ - protected type KvPred = (K, V) => Boolean - - protected def toJavaSet[T](it: Iterable[T]): JavaSet[T] = new java.util.HashSet[T](asJavaCollection(it)) - - /** - * Compares this cache name to the given cache name. - * - * @param that Another cache instance to compare names with. - */ - def compare(that: IgniteCache[K, V]): Int = that.getName.compareTo(value.getName) - - /** - * Gets iterator for cache entries. - */ - def iterator = toScalaSeq(value.iterator).iterator - - /** - * Unwraps sequence of functions to sequence of Ignite predicates. - */ - private def unwrap(@Nullable p: Seq[EntryPred]): Seq[IgnitePredicate[Cache.Entry[K, V]]] = - if (p == null) - null - else - p map ((f: EntryPred) => toPredicate(f)) - - /** - * Converts reduce function to Grid Reducer that takes map entries. - * - * @param rdc Reduce function. - * @return Entry reducer. - */ - private def toEntryReducer[R](rdc: Iterable[(K, V)] => R): IgniteReducer[java.util.Map.Entry[K, V], R] = { - new IgniteReducer[java.util.Map.Entry[K, V], R] { - private var seq = Seq.empty[(K, V)] - - def collect(e: java.util.Map.Entry[K, V]): Boolean = { - seq +:= (e.getKey, e.getValue) - - true - } - - def reduce(): R = { - rdc(seq) - } - } - } - - /** - * Retrieves value mapped to the specified key from cache. The return value of `null` - * means entry did not pass the provided filter or cache has no mapping for the key. - * - * @param k Key to retrieve the value for. - * @return Value for the given key. - */ - def apply(k: K): V = - value.get(k) - - /** - * Returns the value associated with a key, or a default value if the key is not contained in the map. - * - * @param k The key. - * @param default A computation that yields a default value in case key is not in cache. - * @return The cache value associated with `key` if it exists, otherwise the result - * of the `default` computation. - */ - def getOrElse(k: K, default: => V) = { - opt(k) match { - case Some(v) => v - case None => default - } - } - - /** - * Retrieves value mapped to the specified key from cache as an option. The return value - * of `null` means entry did not pass the provided filter or cache has no mapping for the key. - * - * @param k Key to retrieve the value for. - * @return Value for the given key. - * @see `IgniteCache.get(...)` - */ - def opt(k: K): Option[V] = - Option(value.get(k)) - - /** - * Converts given type of corresponding Java type, if Scala does - * auto-conversion for a given type. Only primitive types and Strings - * are supported. - * - * @param c Type to convert. - */ - private def toJavaType(c: Class[_]) = { - assert(c != null) - - // Hopefully if-else is faster here than a normal matching. - if (c == classOf[Int]) - classOf[java.lang.Integer] - else if (c == classOf[Boolean]) - classOf[java.lang.Boolean] - else if (c == classOf[String]) - classOf[java.lang.String] - else if (c == classOf[Char]) - classOf[java.lang.Character] - else if (c == classOf[Long]) - classOf[java.lang.Long] - else if (c == classOf[Double]) - classOf[java.lang.Double] - else if (c == classOf[Float]) - classOf[java.lang.Float] - else if (c == classOf[Short]) - classOf[java.lang.Short] - else if (c == classOf[Byte]) - classOf[java.lang.Byte] - else if (c == classOf[Symbol]) - throw new IgniteCheckedException("Cache type projeciton on 'scala.Symbol' are not supported.") - else - c - } - - /** - * Stores given key-value pair in cache. 
If filters are provided, then entries will - * be stored in cache only if they pass the filter. Note that filter check is atomic, - * so value stored in cache is guaranteed to be consistent with the filters. - *
- * If write-through is enabled, the stored value will be persisted to `GridCacheStore` - * via `GridCacheStore#put(String, GridCacheTx, Object, Object)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param kv Key-Value pair to store in cache. - * @return `True` if value was stored in cache, `false` otherwise. - * @see `IgniteCache#putx(...)` - */ - def putx$(kv: (K, V)): Boolean = value.putIfAbsent(kv._1, kv._2) - - /** - * Stores given key-value pair in cache. If filters are provided, then entries will - * be stored in cache only if they pass the filter. Note that filter check is atomic, - * so value stored in cache is guaranteed to be consistent with the filters. - *
- * If write-through is enabled, the stored value will be persisted to `GridCacheStore` - * via `GridCacheStore#put(String, GridCacheTx, Object, Object)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param kv Key-Value pair to store in cache. - * @return Previous value associated with specified key, or `null` - * if entry did not pass the filter, or if there was no mapping for the key in swap - * or in persistent storage. - * @see `IgniteCache#put(...)` - */ - def put$(kv: (K, V)): V = value.getAndReplace(kv._1, kv._2) - - /** - * Stores given key-value pair in cache. If filters are provided, then entries will - * be stored in cache only if they pass the filter. Note that filter check is atomic, - * so value stored in cache is guaranteed to be consistent with the filters. - *
- * If write-through is enabled, the stored value will be persisted to `GridCacheStore` - * via `GridCacheStore#put(String, GridCacheTx, Object, Object)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param kv Key-Value pair to store in cache. - * @return Previous value associated with specified key as an option. - * @see `IgniteCache#put(...)` - */ - def putOpt$(kv: (K, V)): Option[V] = Option(value.getAndReplace(kv._1, kv._2)) - - /** - * Operator alias for the same function `putx$`. - * - * @param kv Key-Value pair to store in cache. - * @return `True` if value was stored in cache, `false` otherwise. - * @see `IgniteCache#putx(...)` - */ - def +=(kv: (K, V)): Boolean = - putx$(kv) - - /** - * Stores given key-value pairs in cache. - * - * If write-through is enabled, the stored values will be persisted to `GridCacheStore` - * via `GridCacheStore#putAll(String, GridCacheTx, Map)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param kv1 Key-value pair to store in cache. - * @param kv2 Key-value pair to store in cache. - * @param kvs Optional key-value pairs to store in cache. - * @see `IgniteCache#putAll(...)` - */ - def putAll$(kv1: (K, V), kv2: (K, V), @Nullable kvs: (K, V)*) { - var m = mutable.Map.empty[K, V] - - m += (kv1, kv2) - - if (kvs != null) - kvs foreach (m += _) - - value.putAll(m) - } - - /** - * Stores given key-value pairs from the sequence in cache. - * - * If write-through is enabled, the stored values will be persisted to `GridCacheStore` - * via `GridCacheStore#putAll(String, GridCacheTx, Map)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param kvs Key-value pairs to store in cache. If `null` this function is no-op. - * @see `IgniteCache#putAll(...)` - */ - def putAll$(@Nullable kvs: Seq[(K, V)]) { - if (kvs != null) - value.putAll(mutable.Map(kvs: _*)) - } - - /** - * Removes given key mappings from cache. - * - * If write-through is enabled, the values will be removed from `GridCacheStore` - * via `GridCacheStore#removeAll(String, GridCacheTx, Collection)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param ks Sequence of additional keys to remove. If `null` - this function is no-op. - * @see `IgniteCache#removeAll(...)` - */ - def removeAll$(@Nullable ks: Seq[K]) { - if (ks != null) - value.removeAll(toJavaSet(ks)) - } - - /** - * Operator alias for the same function `putAll$`. - * - * @param kv1 Key-value pair to store in cache. - * @param kv2 Key-value pair to store in cache. - * @param kvs Optional key-value pairs to store in cache. - * @see `IgniteCache#putAll(...)` - */ - def +=(kv1: (K, V), kv2: (K, V), @Nullable kvs: (K, V)*) { - putAll$(kv1, kv2, kvs: _*) - } - - /** - * Removes given key mapping from cache. If cache previously contained value for the given key, - * then this value is returned. Otherwise, in case of `CacheMode#REPLICATED` caches, - * the value will be loaded from swap and, if it's not there, and read-through is allowed, - * from the underlying `GridCacheStore` storage. 
In case of `CacheMode#PARTITIONED` - * caches, the value will be loaded from the primary node, which in its turn may load the value - * from the swap storage, and consecutively, if it's not in swap and read-through is allowed, - * from the underlying persistent storage. If value has to be loaded from persistent - * storage, `GridCacheStore#load(String, GridCacheTx, Object)` method will be used. - * - * If the returned value is not needed, method `removex$(...)` should - * always be used instead of this one to avoid the overhead associated with returning of the - * previous value. - * - * If write-through is enabled, the value will be removed from 'GridCacheStore' - * via `GridCacheStore#remove(String, GridCacheTx, Object)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param k Key whose mapping is to be removed from cache. - * @return Previous value associated with specified key, or `null` - * if there was no value for this key. - * @see `IgniteCache#remove(...)` - */ - def remove$(k: K): V = value.getAndRemove(k) - - /** - * Removes given key mapping from cache. If cache previously contained value for the given key, - * then this value is returned. Otherwise, in case of `CacheMode#REPLICATED` caches, - * the value will be loaded from swap and, if it's not there, and read-through is allowed, - * from the underlying `GridCacheStore` storage. In case of `CacheMode#PARTITIONED` - * caches, the value will be loaded from the primary node, which in its turn may load the value - * from the swap storage, and consecutively, if it's not in swap and read-through is allowed, - * from the underlying persistent storage. If value has to be loaded from persistent - * storage, `GridCacheStore#load(String, GridCacheTx, Object)` method will be used. - * - * If the returned value is not needed, method `removex$(...)` should - * always be used instead of this one to avoid the overhead associated with returning of the - * previous value. - * - * If write-through is enabled, the value will be removed from 'GridCacheStore' - * via `GridCacheStore#remove(String, GridCacheTx, Object)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param k Key whose mapping is to be removed from cache. - * @return Previous value associated with specified key as an option. - * @see `IgniteCache#remove(...)` - */ - def removeOpt$(k: K): Option[V] = - Option(value.getAndRemove(k)) - - /** - * Operator alias for the same function `remove$`. - * - * @param k Key whose mapping is to be removed from cache. - * @return Previous value associated with specified key, or `null` - * if there was no value for this key. - * @see `IgniteCache#remove(...)` - */ - def -=(k: K): V = remove$(k) - - /** - * Removes given key mappings from cache. - * - * If write-through is enabled, the values will be removed from `GridCacheStore` - * via `GridCacheStore#removeAll(String, GridCacheTx, Collection)` method. - * - * ===Transactions=== - * This method is transactional and will enlist the entry into ongoing transaction - * if there is one. - * - * @param k1 1st key to remove. - * @param k2 2nd key to remove. - * @param ks Optional sequence of additional keys to remove. 
- * @see `IgniteCache#removeAll(...)` - */ - def removeAll$(k1: K, k2: K, @Nullable ks: K*) { - val s = new mutable.ArrayBuffer[K](2 + (if (ks == null) 0 else ks.length)) - - s += k1 - s += k2 - - if (ks != null) - ks foreach (s += _) - - value.removeAll(toJavaSet(s)) - } - - /** - * Operator alias for the same function `remove$`. - * - * @param k1 1st key to remove. - * @param k2 2nd key to remove. - * @param ks Optional sequence of additional keys to remove. - * @see `IgniteCache#removeAll(...)` - */ - def -=(k1: K, k2: K, @Nullable ks: K*) { - removeAll$(k1, k2, ks: _*) - } - - /** - * Creates and executes ad-hoc `SCAN` query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * @param cls Query values class. Since cache can, in general, contain values of any subtype of `V` - * query needs to know the exact type it should operate on. - * @param kvp Filter to be used prior to returning key-value pairs to user. See `CacheQuery` for more details. - * @return Collection of cache key-value pairs. - */ - def scan(cls: Class[_ <: V], kvp: KvPred): QueryCursor[Cache.Entry[K, V]] = { - assert(cls != null) - assert(kvp != null) - - value.query(new ScanQuery(kvp)) - } - - /** - * Creates and executes ad-hoc `SCAN` query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * Note that query value class will be taken implicitly as exact type `V` of this - * cache projection. - * - * @param kvp Filter to be used prior to returning key-value pairs to user. See `CacheQuery` for more details. - * @return Collection of cache key-value pairs. - */ - def scan(kvp: KvPred)(implicit m: Manifest[V]): QueryCursor[Cache.Entry[K, V]] = { - assert(kvp != null) - - scan(m.erasure.asInstanceOf[Class[V]], kvp) - } - - /** - * Creates and executes ad-hoc `SQL` query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * @param cls Query values class. Since cache can, in general, contain values of any subtype of `V` - * query needs to know the exact type it should operate on. - * @param clause Query SQL clause. See `CacheQuery` for more details. - * @param args Optional list of query arguments. - * @return Collection of cache key-value pairs. 
- */ - def sql(cls: Class[_ <: V], clause: String, args: Any*): QueryCursor[Cache.Entry[K, V]] = { - assert(cls != null) - assert(clause != null) - assert(args != null) - - val query = new SqlQuery[K, V](cls, clause) - - if (args != null && args.size > 0) - query.setArgs(args.map(_.asInstanceOf[AnyRef]) : _*) - - value.query(query) - } - - /** - * Creates and executes ad-hoc `SQL` query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * @param cls Query values class. Since cache can, in general, contain values of any subtype of `V` - * query needs to know the exact type it should operate on. - * @param clause Query SQL clause. See `CacheQuery` for more details. - * @return Collection of cache key-value pairs. - */ - def sql(cls: Class[_ <: V], clause: String): QueryCursor[Cache.Entry[K, V]] = { - assert(cls != null) - assert(clause != null) - - sql(cls, clause, Nil:_*) - } - - /** - * Creates and executes ad-hoc `SQL` query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * Note that query value class will be taken implicitly as exact type `V` of this - * cache projection. - * - * @param clause Query SQL clause. See `CacheQuery` for more details. - * @param args Optional list of query arguments. - * @return Collection of cache key-value pairs. - */ - def sql(clause: String, args: Any*) - (implicit m: Manifest[V]): QueryCursor[Cache.Entry[K, V]] = { - assert(clause != null) - assert(args != null) - - sql(m.erasure.asInstanceOf[Class[V]], clause, args:_*) - } - - /** - * Creates and executes ad-hoc `TEXT` query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * @param cls Query values class. Since cache can, in general, contain values of any subtype of `V` - * query needs to know the exact type it should operate on. - * @param clause Query text clause. See `CacheQuery` for more details. - * @return Collection of cache key-value pairs. - */ - def text(cls: Class[_ <: V], clause: String): QueryCursor[Cache.Entry[K, V]] = { - assert(cls != null) - assert(clause != null) - - value.query(new TextQuery(cls, clause)) - } - - /** - * Creates and executes ad-hoc `TEXT` query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. 
The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * Note that query value class will be taken implicitly as exact type `V` of this - * cache projection. - * - * @param clause Query text clause. See `CacheQuery` for more details. - * @return Collection of cache key-value pairs. - */ - def text(clause: String)(implicit m: Manifest[V]): QueryCursor[Cache.Entry[K, V]] = { - assert(clause != null) - - text(m.erasure.asInstanceOf[Class[V]], clause) - } - - /** - * Creates and executes ad-hoc `SQL` fields query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * @param clause Query SQL clause. See `CacheQuery` for more details. - * @param args Optional list of query arguments. - * @return Sequence of sequences of field values. - */ - def sqlFields(clause: String, args: Any*): QueryCursor[JavaList[_]] = { - assert(clause != null) - assert(args != null) - - val query = new SqlFieldsQuery(clause) - - if (args != null && args.nonEmpty) - query.setArgs(args.map(_.asInstanceOf[AnyRef]) : _*) - - value.query(query) - } - - /** - * Creates and executes ad-hoc `SQL` no-arg fields query returning its result. - * - * Note that if query is executed more than once (potentially with different - * arguments) it is more performant to create query via standard mechanism - * and execute it multiple times with different arguments. The analogy is - * similar to JDBC `PreparedStatement`. Note also that this function will return - * all results at once without pagination and therefore memory limits should be - * taken into account. - * - * @param clause Query SQL clause. See `CacheQuery` for more details. - * @return Sequence of sequences of field values. - */ - def sqlFields(clause: String): QueryCursor[JavaList[_]] = { - assert(clause != null) - - sqlFields(clause, Nil:_*) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarGridPimp.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarGridPimp.scala deleted file mode 100644 index 0f1dfaf27dd51..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarGridPimp.scala +++ /dev/null @@ -1,92 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.pimps - -import org.apache.ignite.scheduler.SchedulerFuture -import org.apache.ignite.{Ignite, IgniteCluster} -import org.jetbrains.annotations.Nullable - -/** - * Companion object. - */ -object ScalarGridPimp { - /** - * Creates new Scalar grid pimp with given Java-side implementation. - * - * @param impl Java-side implementation. - */ - def apply(impl: Ignite) = { - if (impl == null) - throw new NullPointerException("impl") - - val pimp = new ScalarGridPimp - - pimp.impl = impl.cluster() - - pimp - } -} - -/** - * ==Overview== - * Defines Scalar "pimp" for `Grid` on Java side. - * - * Essentially this class extends Java `GridProjection` interface with Scala specific - * API adapters using primarily implicit conversions defined in `ScalarConversions` object. What - * it means is that you can use functions defined in this class on object - * of Java `GridProjection` type. Scala will automatically (implicitly) convert it into - * Scalar's pimp and replace the original call with a call on that pimp. - * - * Note that Scalar provide extensive library of implicit conversion between Java and - * Scala Ignite counterparts in `ScalarConversions` object - * - * ==Suffix '$' In Names== - * Symbol `$` is used in names when they conflict with the names in the base Java class - * that Scala pimp is shadowing or with Java package name that your Scala code is importing. - * Instead of giving two different names to the same function we've decided to simply mark - * Scala's side method with `$` suffix. - */ -class ScalarGridPimp extends ScalarProjectionPimp[IgniteCluster] with ScalarTaskThreadContext[IgniteCluster] { - /** - * Schedules closure for execution using local cron-based scheduling. - * - * @param s Closure to schedule to run as a background cron-based job. - * @param ptrn Scheduling pattern in UNIX cron format with optional prefix `{n1, n2}` - * where `n1` is delay of scheduling in seconds and `n2` is the number of execution. Both - * parameters are optional. - */ - def scheduleLocalCall[R](@Nullable s: Call[R], ptrn: String): SchedulerFuture[R] = { - assert(ptrn != null) - - value.ignite().scheduler().scheduleLocal(toCallable(s), ptrn) - } - - /** - * Schedules closure for execution using local cron-based scheduling. - * - * @param s Closure to schedule to run as a background cron-based job. - * @param ptrn Scheduling pattern in UNIX cron format with optional prefix `{n1, n2}` - * where `n1` is delay of scheduling in seconds and `n2` is the number of execution. Both - * parameters are optional. - */ - def scheduleLocalRun(@Nullable s: Run, ptrn: String): SchedulerFuture[_] = { - assert(ptrn != null) - - value.ignite().scheduler().scheduleLocal(toRunnable(s), ptrn) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarProjectionPimp.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarProjectionPimp.scala deleted file mode 100644 index b1a6b4f739bb0..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarProjectionPimp.scala +++ /dev/null @@ -1,649 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.pimps - -import org.apache.ignite.cluster.{ClusterGroupEmptyException, ClusterGroup, ClusterNode} -import org.apache.ignite.lang.{IgniteFuture, IgnitePredicate} -import org.jetbrains.annotations._ - -/** - * Companion object. - */ -object ScalarProjectionPimp { - /** - * Creates new Scalar projection pimp with given Java-side implementation. - * - * @param impl Java-side implementation. - */ - def apply(impl: ClusterGroup) = { - if (impl == null) - throw new NullPointerException("impl") - - val pimp = new ScalarProjectionPimp[ClusterGroup] - - pimp.impl = impl - - pimp - } -} - -/** - * ==Overview== - * Defines Scalar "pimp" for `GridProjection` on Java side. - * - * Essentially this class extends Java `GridProjection` interface with Scala specific - * API adapters using primarily implicit conversions defined in `ScalarConversions` object. What - * it means is that you can use functions defined in this class on object - * of Java `GridProjection` type. Scala will automatically (implicitly) convert it into - * Scalar's pimp and replace the original call with a call on that pimp. - * - * Note that Scalar provide extensive library of implicit conversion between Java and - * Scala Ignite counterparts in `ScalarConversions` object - * - * ==Suffix '$' In Names== - * Symbol `$` is used in names when they conflict with the names in the base Java class - * that Scala pimp is shadowing or with Java package name that your Scala code is importing. - * Instead of giving two different names to the same function we've decided to simply mark - * Scala's side method with `$` suffix. - */ -class ScalarProjectionPimp[A <: ClusterGroup] extends PimpedType[A] with Iterable[ClusterNode] - with ScalarTaskThreadContext[A] { - /** */ - lazy val value: A = impl - - /** */ - protected var impl: A = _ - - /** Type alias for '() => Unit'. */ - protected type Run = () => Unit - - /** Type alias for '() => R'. */ - protected type Call[R] = () => R - - /** Type alias for '(E1) => R'. */ - protected type Call1[E1, R] = (E1) => R - - /** Type alias for '(E1, E2) => R'. */ - protected type Call2[E1, E2, R] = (E1, E2) => R - - /** Type alias for '(E1, E2, E3) => R'. */ - protected type Call3[E1, E2, E3, R] = (E1, E2, E3) => R - - /** Type alias for '() => Boolean'. */ - protected type Pred = () => Boolean - - /** Type alias for '(E1) => Boolean'. */ - protected type Pred1[E1] = (E1) => Boolean - - /** Type alias for '(E1, E2) => Boolean'. */ - protected type Pred2[E1, E2] = (E1, E2) => Boolean - - /** Type alias for '(E1, E2, E3) => Boolean'. */ - protected type Pred3[E1, E2, E3] = (E1, E2, E3) => Boolean - - /** Type alias for node filter predicate. */ - protected type NF = IgnitePredicate[ClusterNode] - - /** - * Gets iterator for this projection's nodes. - */ - def iterator = nodes$(null).iterator - - /** - * Utility function to workaround issue that `GridProjection` does not permit `null` predicates. - * - * @param p Optional predicate. - * @return If `p` not `null` return projection for this predicate otherwise return pimped projection. 
- */ - private def forPredicate(@Nullable p: NF): ClusterGroup = - if (p != null) value.forPredicate(p) else value - - /** - * Gets sequence of all nodes in this projection for given predicate. - * - * @param p Optional node filter predicates. It `null` provided - all nodes will be returned. - * @see `org.apache.ignite.cluster.ClusterGroup.nodes(...)` - */ - def nodes$(@Nullable p: NF): Seq[ClusterNode] = - toScalaSeq(forPredicate(p).nodes()) - - /** - * Gets sequence of all remote nodes in this projection for given predicate. - * - * @param p Optional node filter predicate. It `null` provided - all remote nodes will be returned. - * @see `org.apache.ignite.cluster.ClusterGroup.remoteNodes(...)` - */ - def remoteNodes$(@Nullable p: NF = null): Seq[ClusterNode] = - toScalaSeq(forPredicate(p).forRemotes().nodes()) - - /** - * Alias for method `send$(...)`. - * - * @param obj Optional object to send. If `null` - this method is no-op. - * @param p Optional node filter predicates. If none provided or `null` - - * all nodes in the projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.send(...)` - */ - def !<(@Nullable obj: AnyRef, @Nullable p: NF) { - value.ignite().message(forPredicate(p)).send(null, obj) - } - - /** - * Alias for method `send$(...)`. - * - * @param seq Optional sequence of objects to send. If empty or `null` - this - * method is no-op. - * @param p Optional node filter predicate. If none provided or `null` - - * all nodes in the projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.send(...)` - */ - def !<(@Nullable seq: Seq[AnyRef], @Nullable p: NF) { - value.ignite().message(forPredicate(p)).send(null, seq) - } - - /** - * Sends given object to the nodes in this projection. - * - * @param obj Optional object to send. If `null` - this method is no-op. - * @param p Optional node filter predicate. If none provided or `null` - - * all nodes in the projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.send(...)` - */ - def send$(@Nullable obj: AnyRef, @Nullable p: NF) { - value.ignite().message(forPredicate(p)).send(null, obj) - } - - /** - * Sends given object to the nodes in this projection. - * - * @param seq Optional sequence of objects to send. If empty or `null` - this - * method is no-op. - * @param p Optional node filter predicate. If `null` provided - all nodes in the projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.send(...)` - */ - def send$(@Nullable seq: Seq[AnyRef], @Nullable p: NF) { - value.ignite().message(forPredicate(p)).send(null, seq) - } - - /** - * Synchronous closures call on this projection with return value. - * This call will block until all results are received and ready. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method is no-op and returns `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Sequence of result values from all nodes where given closures were executed or `null` (see above). - */ - def call$[R](@Nullable s: Seq[Call[R]], @Nullable p: NF): Seq[R] = - toScalaSeq(callAsync$(s, p).get) - - /** - * Synchronous closures call on this projection with return value. - * This call will block until all results are received and ready. If this projection - * is empty than `dflt` closure will be executed and its result returned. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method is no-op and returns `null`. 
- * @param dflt Closure to execute if projection is empty. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Sequence of result values from all nodes where given closures were executed or `null` (see above). - */ - def callSafe[R](@Nullable s: Seq[Call[R]], dflt: () => Seq[R], @Nullable p: NF): Seq[R] = { - assert(dflt != null) - - try - call$(s, p) - catch { - case _: ClusterGroupEmptyException => dflt() - } - } - - /** - * Alias for the same function `call$`. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method is no-op and returns `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Sequence of result values from all nodes where given closures were executed or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def #<[R](@Nullable s: Seq[Call[R]], @Nullable p: NF): Seq[R] = - call$(s, p) - - /** - * Synchronous closure call on this projection with return value. - * This call will block until all results are received and ready. - * - * @param s Optional closure to call. If `null` - this method is no-op and returns `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Sequence of result values from all nodes where given closures were executed or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def call$[R](@Nullable s: Call[R], @Nullable p: NF): Seq[R] = - call$(Seq(s), p) - - /** - * Synchronous closure call on this projection with return value. - * This call will block until all results are received and ready. If this projection - * is empty than `dflt` closure will be executed and its result returned. - * - * @param s Optional closure to call. If `null` - this method is no-op and returns `null`. - * @param dflt Closure to execute if projection is empty. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Sequence of result values from all nodes where given closures were executed or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def callSafe[R](@Nullable s: Call[R], dflt: () => Seq[R], @Nullable p: NF): Seq[R] = { - assert(dflt != null) - - try - call$(Seq(s), p) - catch { - case _: ClusterGroupEmptyException => dflt() - } - } - - /** - * Alias for the same function `call$`. - * - * @param s Optional closure to call. If `null` - this method is no-op and returns `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Sequence of result values from all nodes where given closures were executed - * or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def #<[R](@Nullable s: Call[R], @Nullable p: NF): Seq[R] = - call$(s, p) - - /** - * Synchronous closures call on this projection without return value. - * This call will block until all executions are complete. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method is no-op. - * @param p Optional node filter predicate. If `null` provided- all nodes in projection will be used. 
- * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def run$(@Nullable s: Seq[Run], @Nullable p: NF) { - runAsync$(s, p).get - } - - /** - * Synchronous broadcast closure call on this projection without return value. - * - * @param r Closure to run all nodes in projection. - * @param p Optional node filter predicate. If `null` provided- all nodes in projection will be used. - */ - def bcastRun(@Nullable r: Run, @Nullable p: NF) { - value.ignite().compute(forPredicate(p)).broadcast(toRunnable(r)) - } - - /** - * Synchronous closures call on this projection without return value. - * This call will block until all executions are complete. If this projection - * is empty than `dflt` closure will be executed. - * - * @param s Optional sequence of closures to call. If empty or `null` - this - * method is no-op. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @param dflt Closure to execute if projection is empty. - * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def runSafe(@Nullable s: Seq[Run], @Nullable dflt: Run, @Nullable p: NF) { - try { - run$(s, p) - } - catch { - case _: ClusterGroupEmptyException => if (dflt != null) dflt() else () - } - } - - /** - * Alias alias for the same function `run$`. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method is no-op. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def *<(@Nullable s: Seq[Run], @Nullable p: NF) { - run$(s, p) - } - - /** - * Synchronous closure call on this projection without return value. - * This call will block until all executions are complete. - * - * @param s Optional closure to call. If empty or `null` - this method is no-op. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def run$(@Nullable s: Run, @Nullable p: NF) { - run$(Seq(s), p) - } - - /** - * Synchronous closure call on this projection without return value. - * This call will block until all executions are complete. If this projection - * is empty than `dflt` closure will be executed. - * - * @param s Optional closure to call. If empty or `null` - this method is no-op. - * @param dflt Closure to execute if projection is empty. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def runSafe(@Nullable s: Run, @Nullable dflt: Run, @Nullable p: NF) { - try { - run$(s, p) - } - catch { - case _: ClusterGroupEmptyException => if (dflt != null) dflt() else () - } - } - - /** - * Alias for the same function `run$`. - * - * @param s Optional closure to call. If empty or `null` - this method is no-op. - * @param p Optional node filter predicate. If none provided or `null` - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def *<(@Nullable s: Run, @Nullable p: NF) { - run$(s, p) - } - - /** - * Asynchronous closures call on this projection with return value. This call will - * return immediately with the future that can be used to wait asynchronously for the results. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method - * is no-op and finished future over `null` is returned. 
- * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Future of Java collection containing result values from all nodes where given - * closures were executed or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def callAsync$[R](@Nullable s: Seq[Call[R]], @Nullable p: NF): - IgniteFuture[java.util.Collection[R]] = { - val comp = value.ignite().compute(forPredicate(p)) - - comp.callAsync[R](toJavaCollection(s, (f: Call[R]) => toCallable(f))) - } - - /** - * Alias for the same function `callAsync$`. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method - * is no-op and finished future over `null` is returned. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Future of Java collection containing result values from all nodes where given - * closures were executed or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def #?[R](@Nullable s: Seq[Call[R]], @Nullable p: NF): IgniteFuture[java.util.Collection[R]] = { - callAsync$(s, p) - } - - /** - * Asynchronous closure call on this projection with return value. This call will - * return immediately with the future that can be used to wait asynchronously for the results. - * - * @param s Optional closure to call. If `null` - this method is no-op and finished - * future over `null` is returned. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Future of Java collection containing result values from all nodes where given - * closures were executed or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def callAsync$[R](@Nullable s: Call[R], @Nullable p: NF): IgniteFuture[java.util.Collection[R]] = { - callAsync$(Seq(s), p) - } - - /** - * Alias for the same function `callAsync$`. - * - * @param s Optional closure to call. If `null` - this method is no-op and finished - * future over `null` is returned. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Future of Java collection containing result values from all nodes where given - * closures were executed or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def #?[R](@Nullable s: Call[R], @Nullable p: NF): IgniteFuture[java.util.Collection[R]] = { - callAsync$(s, p) - } - - /** - * Asynchronous closures call on this projection without return value. This call will - * return immediately with the future that can be used to wait asynchronously for the results. - * - * @param s Optional sequence of absolute closures to call. If empty or `null` - this method - * is no-op and finished future over `null` will be returned. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def runAsync$(@Nullable s: Seq[Run], @Nullable p: NF): IgniteFuture[_] = { - val comp = value.ignite().compute(forPredicate(p)) - - comp.runAsync(toJavaCollection(s, (f: Run) => toRunnable(f))) - } - - /** - * Alias for the same function `runAsync$`. - * - * @param s Optional sequence of absolute closures to call. If empty or `null` - this method - * is no-op and finished future over `null` will be returned. - * @param p Optional node filter predicate. 
If `null` provided - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.call(...)` - */ - def *?(@Nullable s: Seq[Run], @Nullable p: NF): IgniteFuture[_] = { - runAsync$(s, p) - } - - /** - * Asynchronous closure call on this projection without return value. This call will - * return immediately with the future that can be used to wait asynchronously for the results. - * - * @param s Optional absolute closure to call. If `null` - this method - * is no-op and finished future over `null` will be returned. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def runAsync$(@Nullable s: Run, @Nullable p: NF): IgniteFuture[_] = { - runAsync$(Seq(s), p) - } - - /** - * Alias for the same function `runAsync$`. - * - * @param s Optional absolute closure to call. If `null` - this method - * is no-op and finished future over `null` will be returned. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @see `org.apache.ignite.cluster.ClusterGroup.run(...)` - */ - def *?(@Nullable s: Run, @Nullable p: NF): IgniteFuture[_] = { - runAsync$(s, p) - } - - /** - * Asynchronous closures execution on this projection with reduction. This call will - * return immediately with the future that can be used to wait asynchronously for the results. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method - * is no-op and will return finished future over `null`. - * @param r Optional reduction function. If `null` - this method - * is no-op and will return finished future over `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Future over the reduced result or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.reduce(...)` - */ - def reduceAsync$[R1, R2](s: Seq[Call[R1]], r: Seq[R1] => R2, @Nullable p: NF): IgniteFuture[R2] = { - assert(s != null && r != null) - - val comp = value.ignite().compute(forPredicate(p)) - - comp.callAsync(toJavaCollection(s, (f: Call[R1]) => toCallable(f)), r) - } - - /** - * Alias for the same function `reduceAsync$`. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method - * is no-op and will return finished future over `null`. - * @param r Optional reduction function. If `null` - this method - * is no-op and will return finished future over `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Future over the reduced result or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.reduce(...)` - */ - def @?[R1, R2](s: Seq[Call[R1]], r: Seq[R1] => R2, @Nullable p: NF): IgniteFuture[R2] = { - reduceAsync$(s, r, p) - } - - /** - * Synchronous closures execution on this projection with reduction. - * This call will block until all results are reduced. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method - * is no-op and will return `null`. - * @param r Optional reduction function. If `null` - this method - * is no-op and will return `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Reduced result or `null` (see above). 
- * @see `org.apache.ignite.cluster.ClusterGroup.reduce(...)` - */ - def reduce$[R1, R2](@Nullable s: Seq[Call[R1]], @Nullable r: Seq[R1] => R2, @Nullable p: NF): R2 = - reduceAsync$(s, r, p).get - - /** - * Synchronous closures execution on this projection with reduction. - * This call will block until all results are reduced. If this projection - * is empty than `dflt` closure will be executed and its result returned. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method - * is no-op and will return `null`. - * @param r Optional reduction function. If `null` - this method - * is no-op and will return `null`. - * @param dflt Closure to execute if projection is empty. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Reduced result or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.reduce(...)` - */ - def reduceSafe[R1, R2](@Nullable s: Seq[Call[R1]], @Nullable r: Seq[R1] => R2, - dflt: () => R2, @Nullable p: NF): R2 = { - assert(dflt != null) - - try - reduceAsync$(s, r, p).get - catch { - case _: ClusterGroupEmptyException => dflt() - } - } - - /** - * Alias for the same function `reduce$`. - * - * @param s Optional sequence of closures to call. If empty or `null` - this method is no-op and will return `null`. - * @param r Optional reduction function. If `null` - this method is no-op and will return `null`. - * @param p Optional node filter predicate. If `null` provided - all nodes in projection will be used. - * @return Reduced result or `null` (see above). - * @see `org.apache.ignite.cluster.ClusterGroup.reduce(...)` - */ - def @<[R1, R2](@Nullable s: Seq[Call[R1]], @Nullable r: Seq[R1] => R2, @Nullable p: NF): R2 = - reduceAsync$(s, r, p).get - - /** - * Executes given closure on the nodes where data for provided affinity key is located. This - * is known as affinity co-location between compute grid (a closure) and in-memory data grid - * (value with affinity key). Note that implementation of multiple executions of the same closure will - * be wrapped as a single task that splits into multiple `job`s that will be mapped to nodes - * with provided affinity keys. - * - * This method will block until its execution is complete or an exception is thrown. - * All default SPI implementations configured for this grid instance will be - * used (i.e. failover, load balancing, collision resolution, etc.). - * Note that if you need greater control on any aspects of Java code execution on the grid - * you should implement `ComputeTask` which will provide you with full control over the execution. - * - * Notice that `Runnable` and `Callable` implementations must support serialization as required - * by the configured marshaller. For example, JDK marshaller will require that implementations would - * be serializable. Other marshallers, e.g. JBoss marshaller, may not have this limitation. Please consult - * with specific marshaller implementation for the details. Note that all closures and predicates in - * `org.apache.ignite.lang` package are serializable and can be freely used in the distributed - * context with all marshallers currently shipped with Ignite. - * - * @param cacheName Name of the cache to use for affinity co-location. - * @param affKey Affinity key. - * @param r Closure to affinity co-located on the node with given affinity key and execute. - * If `null` - this method is no-op. - * @param p Optional filtering predicate. 
If `null` provided - all nodes in this projection will be used for topology. - * @throws IgniteCheckedException Thrown in case of any error. - * @throws ClusterGroupEmptyException Thrown in case when this projection is empty. - * Note that in case of dynamic projection this method will take a snapshot of all the - * nodes at the time of this call, apply all filtering predicates, if any, and if the - * resulting collection of nodes is empty - the exception will be thrown. - * @throws IgniteInterruptedException Subclass of `IgniteException` thrown if the wait was interrupted. - * @throws IgniteFutureCancelledException Subclass of `IgniteException` thrown if computation was cancelled. - */ - def affinityRun$(cacheName: String, @Nullable affKey: Any, @Nullable r: Run, @Nullable p: NF) { - affinityRunAsync$(cacheName, affKey, r, p).get - } - - /** - * Executes given closure on the nodes where data for provided affinity key is located. This - * is known as affinity co-location between compute grid (a closure) and in-memory data grid - * (value with affinity key). Note that implementation of multiple executions of the same closure will - * be wrapped as a single task that splits into multiple `job`s that will be mapped to nodes - * with provided affinity keys. - * - * Unlike its sibling method `affinityRun(String, Collection, Runnable, GridPredicate[])` this method does - * not block and returns immediately with future. All default SPI implementations - * configured for this grid instance will be used (i.e. failover, load balancing, collision resolution, etc.). - * Note that if you need greater control on any aspects of Java code execution on the grid - * you should implement `ComputeTask` which will provide you with full control over the execution. - * - * Note that class `GridAbsClosure` implements `Runnable` and class `GridOutClosure` - * implements `Callable` interface. Note also that class `GridFunc` and typedefs provide rich - * APIs and functionality for closures and predicates based processing in Ignite. While Java interfaces - * `Runnable` and `Callable` allow for lowest common denominator for APIs - it is advisable - * to use richer Functional Programming support provided by Ignite available in `org.apache.ignite.lang` - * package. - * - * Notice that `Runnable` and `Callable` implementations must support serialization as required - * by the configured marshaller. For example, JDK marshaller will require that implementations would - * be serializable. Other marshallers, e.g. JBoss marshaller, may not have this limitation. Please consult - * with specific marshaller implementation for the details. Note that all closures and predicates in - * `org.apache.ignite.lang` package are serializable and can be freely used in the distributed - * context with all marshallers currently shipped with Ignite. - * - * @param cacheName Name of the cache to use for affinity co-location. - * @param affKey Affinity key. - * @param r Closure to affinity co-located on the node with given affinity key and execute. - * If `null` - this method is no-op. - * @param p Optional filtering predicate. If `null` provided - all nodes in this projection will be used for topology. - * @throws IgniteCheckedException Thrown in case of any error. - * @throws ClusterGroupEmptyCheckedException Thrown in case when this projection is empty. 
- * Note that in case of dynamic projection this method will take a snapshot of all the - * nodes at the time of this call, apply all filtering predicates, if any, and if the - * resulting collection of nodes is empty - the exception will be thrown. - * @return Non-cancellable future of this execution. - * @throws IgniteInterruptedException Subclass of `IgniteException` thrown if the wait was interrupted. - * @throws IgniteFutureCancelledException Subclass of `IgniteException` thrown if computation was cancelled. - */ - def affinityRunAsync$(cacheName: String, @Nullable affKey: Any, @Nullable r: Run, - @Nullable p: NF): IgniteFuture[_] = { - val comp = value.ignite().compute(forPredicate(p)) - - comp.affinityRunAsync(cacheName, affKey, toRunnable(r)) - } -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarTaskThreadContext.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarTaskThreadContext.scala deleted file mode 100644 index 544ed402925a7..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/pimps/ScalarTaskThreadContext.scala +++ /dev/null @@ -1,45 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.pimps - -import org.apache.ignite.cluster.ClusterGroup -import org.apache.ignite.scalar.ScalarConversions -import org.jetbrains.annotations._ - -/** - * This trait provide mixin for properly typed version of `GridProjection#with...()` methods. - * - * Method on `GridProjection` always returns an instance of type `GridProjection` even when - * called on a sub-class. This trait's methods return the instance of the same type - * it was called on. - */ -trait ScalarTaskThreadContext[T <: ClusterGroup] extends ScalarConversions { this: PimpedType[T] => - /** - * Properly typed version of `Compute#withName(...)` method. - * - * @param taskName Name of the task. - */ - def withName$(@Nullable taskName: String): T = - value.ignite().compute(value).withName(taskName).asInstanceOf[T] - - /** - * Properly typed version of `Compute#withNoFailover()` method. - */ - def withNoFailover$(): T = - value.ignite().compute(value).withNoFailover().asInstanceOf[T] -} diff --git a/modules/scalar/src/main/scala/org/apache/ignite/scalar/scalar.scala b/modules/scalar/src/main/scala/org/apache/ignite/scalar/scalar.scala deleted file mode 100644 index 35c95fc23ab15..0000000000000 --- a/modules/scalar/src/main/scala/org/apache/ignite/scalar/scalar.scala +++ /dev/null @@ -1,472 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. 
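For reference, the reduction and async-run extensions defined above were driven in the deleted ScalarProjectionSpec roughly as in the sketch below. This is only an illustration of the removed DSL against the default grid; the closure values and the trailing `null` (meaning "no node filter") are arbitrary choices here.

    import org.apache.ignite.scalar.scalar
    import org.apache.ignite.scalar.scalar._

    scalar {
        val call1: () => Int = () => 15
        val call2: () => Int = () => 82

        // Blocking reduce: run both closures on the projection and sum the results
        // (the `@<` operator above is an alias for the same call).
        val sum = ignite$.reduce$(Seq(call1, call2), (n: Seq[Int]) => n.sum, null)

        // Same reduction via the asynchronous variant; `get` blocks on the returned future.
        val asyncSum = ignite$.reduceAsync$(Seq(call1, call2), (n: Seq[Int]) => n.sum, null).get

        println("sum = " + sum + ", asyncSum = " + asyncSum)
    }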
- * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar - -import org.apache.ignite._ -import org.apache.ignite.cache.CacheMode -import org.apache.ignite.cache.query.annotations.{QuerySqlField, QueryTextField} -import org.apache.ignite.cluster.ClusterNode -import org.apache.ignite.configuration.{CacheConfiguration, IgniteConfiguration} -import org.apache.ignite.internal.IgniteVersionUtils._ -import org.jetbrains.annotations.Nullable - -import java.net.URL -import java.util.UUID - -import scala.annotation.meta.field - -/** - * {{{ - * ________ ______ ______ _______ - * __ ___/_____________ ____ /______ _________ __/__ \ __ __ \ - * _____ \ _ ___/_ __ `/__ / _ __ `/__ ___/ ____/ / _ / / / - * ____/ / / /__ / /_/ / _ / / /_/ / _ / _ __/___/ /_/ / - * /____/ \___/ \__,_/ /_/ \__,_/ /_/ /____/_(_)____/ - * - * }}} - * - * ==Overview== - * `scalar` is the main object that encapsulates Scalar DSL. It includes global functions - * on "scalar" keyword, helper converters as well as necessary implicit conversions. `scalar` also - * mimics many methods in `Ignite` class from Java side. - * - * The idea behind Scalar DSL - '''zero additional logic and only conversions''' implemented - * using Scala "Pimp" pattern. Note that most of the Scalar DSL development happened on Java - * side of Ignite 3.0 product line - Java APIs had to be adjusted quite significantly to - * support natural adaptation of functional APIs. That basically means that all functional - * logic must be available on Java side and Scalar only provides conversions from Scala - * language constructs to Java constructs. Note that currently Ignite supports Scala 2.8 - * and up only. - * - * This design approach ensures that Java side does not starve and usage paradigm - * is mostly the same between Java and Scala - yet with full power of Scala behind. - * In other words, Scalar only adds Scala specifics, but not greatly altering semantics - * of how Ignite APIs work. Most of the time the code in Scalar can be written in - * Java in almost the same number of lines. - * - * ==Suffix '$' In Names== - * Symbol `$` is used in names when they conflict with the names in the base Java class - * that Scala pimp is shadowing or with Java package name that your Scala code is importing. - * Instead of giving two different names to the same function we've decided to simply mark - * Scala's side method with `$` suffix. - * - * ==Importing== - * Scalar needs to be imported in a proper way so that necessary objects and implicit - * conversions got available in the scope: - *
- * import org.apache.ignite.scalar._
- * import scalar._
- * 
- * This way you import object `scalar` as well as all methods declared or inherited in that - * object as well. - * - * ==Examples== - * Here are few short examples of how Scalar can be used to program routine distributed - * task. All examples below use default Ignite configuration and default grid. All these - * examples take an implicit advantage of auto-discovery and failover, load balancing and - * collision resolution, zero deployment and many other underlying technologies in the - * Ignite - while remaining absolutely distilled to the core domain logic. - * - * This code snippet prints out full topology: - *
- * scalar {
- *     grid$ foreach (n => println("Node: " + n.id8))
- * }
- * 
- * The obligatory example - cloud enabled `Hello World!`. It splits the phrase - * into multiple words and prints each word on a separate grid node: - *
- * scalar {
- *     grid$ *< (SPREAD, (for (w <- "Hello World!".split(" ")) yield () => println(w)))
- * }
- * 
- * This example broadcasts message to all nodes: - *
- * scalar {
- *     grid$ *< (BROADCAST, () => println("Broadcasting!!!"))
- * }
- * 
- * This example "greets" remote nodes only (note usage of Java-side closure): - *
- * scalar {
- *     val me = grid$.localNode.id
- *     grid$.remoteProjection() *< (BROADCAST, F.println("Greetings from: " + me))
- * }
- * 
- * - * Next example creates a function that calculates lengths of the string - * using MapReduce type of processing by splitting the input string into - * multiple substrings, calculating each substring length on the remote - * node and aggregating results for the final length of the original string: - *
- * def count(msg: String) =
- *     grid$ @< (SPREAD, for (w <- msg.split(" ")) yield () => w.length, (s: Seq[Int]) => s.sum)
- * 
- * This example shows a simple example of how Scalar can be used to work with in-memory data grid: - *
- * scalar {
- *     val t = cache$[Symbol, Double]("partitioned")
- *     t += ('symbol -> 2.0)
- *     t -= ('symbol)
- * }
- * 
- */ -object scalar extends ScalarConversions { - /** Type alias for `QuerySqlField`. */ - type ScalarCacheQuerySqlField = QuerySqlField @field - - /** Type alias for `QueryTextField`. */ - type ScalarCacheQueryTextField = QueryTextField @field - - /** - * Prints Scalar ASCII-logo. - */ - def logo() { - val NL = System getProperty "line.separator" - - val s = - " ________ ______ " + NL + - " __ ___/_____________ ____ /______ _________ " + NL + - " _____ \\ _ ___/_ __ `/__ / _ __ `/__ ___/ " + NL + - " ____/ / / /__ / /_/ / _ / / /_/ / _ / " + NL + - " /____/ \\___/ \\__,_/ /_/ \\__,_/ /_/ " + NL + NL + - " IGNITE SCALAR" + - " " + COPYRIGHT + NL - - println(s) - } - - /** - * Note that grid instance will be stopped with cancel flat set to `true`. - * - * @param g Grid instance. - * @param body Closure with grid instance as body's parameter. - */ - private def init[T](g: Ignite, body: Ignite => T): T = { - assert(g != null, body != null) - - try { - body(g) - } - finally { - Ignition.stop(g.name, true) - } - } - - /** - * Note that grid instance will be stopped with cancel flat set to `true`. - * - * @param g Grid instance. - * @param body Passed by name body. - */ - private def init0[T](g: Ignite, body: => T): T = { - assert(g != null) - - try { - body - } - finally { - Ignition.stop(g.name, true) - } - } - - /** - * Executes given closure within automatically managed default grid instance. - * If default grid is already started the passed in closure will simply - * execute. - * - * @param body Closure to execute within automatically managed default grid instance. - */ - def apply(body: Ignite => Unit) { - if (!isStarted) init(Ignition.start, body) else body(ignite$) - } - - /** - * Executes given closure within automatically managed default grid instance. - * If default grid is already started the passed in closure will simply - * execute. - * - * @param body Closure to execute within automatically managed default grid instance. - */ - def apply[T](body: Ignite => T): T = - if (!isStarted) init(Ignition.start, body) else body(ignite$) - - /** - * Executes given closure within automatically managed default grid instance. - * If default grid is already started the passed in closure will simply - * execute. - * - * @param body Closure to execute within automatically managed default grid instance. - */ - def apply[T](body: => T): T = - if (!isStarted) init0(Ignition.start, body) else body - - /** - * Executes given closure within automatically managed default grid instance. - * If default grid is already started the passed in closure will simply - * execute. - * - * @param body Closure to execute within automatically managed grid instance. - */ - def apply(body: => Unit) { - if (!isStarted) init0(Ignition.start, body) else body - } - - /** - * Executes given closure within automatically managed grid instance. - * - * @param springCfgPath Spring XML configuration file path or URL. - * @param body Closure to execute within automatically managed grid instance. - */ - def apply(springCfgPath: String)(body: => Unit) { - init0(Ignition.start(springCfgPath), body) - } - - /** - * Executes given closure within automatically managed grid instance. - * - * @param cfg Grid configuration instance. - * @param body Closure to execute within automatically managed grid instance. - */ - def apply(cfg: IgniteConfiguration)(body: => Unit) { - init0(Ignition.start(cfg), body) - } - - /** - * Executes given closure within automatically managed grid instance. 
- * - * @param springCfgUrl Spring XML configuration file URL. - * @param body Closure to execute within automatically managed grid instance. - */ - def apply(springCfgUrl: URL)(body: => Unit) { - init0(Ignition.start(springCfgUrl), body) - } - - /** - * Gets named cache from default grid. - * - * @param cacheName Name of the cache to get. - */ - @inline def cache$[K, V](cacheName: String): Option[IgniteCache[K, V]] = - Option(Ignition.ignite.cache(cacheName)) - - /** - * Creates cache with specified parameters in default grid. - * - * @param cacheName Name of the cache to get. - */ - @inline def createCache$[K, V](cacheName: String, cacheMode: CacheMode = CacheMode.PARTITIONED, - indexedTypes: Seq[Class[_]] = Seq.empty): IgniteCache[K, V] = { - val cfg = new CacheConfiguration[K, V]() - - cfg.setName(cacheName) - cfg.setCacheMode(cacheMode) - cfg.setIndexedTypes(indexedTypes:_*) - - Ignition.ignite.createCache(cfg) - } - - /** - * Destroy cache with specified name. - * - * @param cacheName Name of the cache to destroy. - */ - @inline def destroyCache$(cacheName: String) = { - Ignition.ignite.destroyCache(cacheName) - } - - /** - * Gets named cache from specified grid. - * - * @param igniteInstanceName Name of the Ignite instance. - * @param cacheName Name of the cache to get. - */ - @inline def cache$[K, V](@Nullable igniteInstanceName: String, - cacheName: String): Option[IgniteCache[K, V]] = - ignite$(igniteInstanceName) match { - case Some(g) => Option(g.cache(cacheName)) - case None => None - } - - /** - * Gets a new instance of data streamer associated with given cache name. - * - * @param cacheName Cache name (`null` for default cache). - * @param bufSize Per node buffer size. - * @return New instance of data streamer. - */ - @inline def dataStreamer$[K, V]( - cacheName: String, - bufSize: Int): IgniteDataStreamer[K, V] = { - val dl = ignite$.dataStreamer[K, V](cacheName) - - dl.perNodeBufferSize(bufSize) - - dl - } - - /** - * Gets default grid instance. - */ - @inline def ignite$: Ignite = Ignition.ignite - - /** - * Gets node ID as ID8 string. - */ - def nid8$(node: ClusterNode) = node.id().toString.take(8).toUpperCase - - /** - * Gets named Ignite instance. - * - * @param name Ignite instance name. - */ - @inline def ignite$(@Nullable name: String): Option[Ignite] = - try { - Option(Ignition.ignite(name)) - } - catch { - case _: IllegalStateException => None - } - - /** - * Gets grid for given node ID. - * - * @param locNodeId Local node ID for which to get grid instance option. - */ - @inline def grid$(locNodeId: UUID): Option[Ignite] = { - assert(locNodeId != null) - - try { - Option(Ignition.ignite(locNodeId)) - } - catch { - case _: IllegalStateException => None - } - } - - /** - * Tests if specified grid is started. - * - * @param name Gird name. - */ - def isStarted(@Nullable name: String) = - Ignition.state(name) == IgniteState.STARTED - - /** - * Tests if specified grid is stopped. - * - * @param name Gird name. - */ - def isStopped(@Nullable name: String) = - Ignition.state(name) == IgniteState.STOPPED - - /** - * Tests if default grid is started. - */ - def isStarted = - Ignition.state == IgniteState.STARTED - - /** - * Tests if default grid is stopped. - */ - def isStopped = - Ignition.state == IgniteState.STOPPED - - /** - * Stops given Ignite instance and specified cancel flag. - * If specified Ignite instance is already stopped - it's no-op. - * - * @param name Ignite instance name to cancel. - * @param cancel Whether or not to cancel all currently running jobs. 
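The lifecycle helpers defined in this region (`start`, `isStarted`, `stop`, `logo`) compose into the trivial flow exercised by the deleted ScalarSpec; a compact sketch, using only the default grid:

    import org.apache.ignite.scalar.scalar

    scalar.start()   // no-op if the default grid is already running
    scalar.logo()    // prints the Scalar ASCII logo
    scalar.stop()    // stops the default grid with the cancel flag set to true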
- */ - def stop(@Nullable name: String, cancel: Boolean) = - if (isStarted(name)) - Ignition.stop(name, cancel) - - /** - * Stops default grid with given cancel flag. - * If default grid is already stopped - it's no-op. - * - * @param cancel Whether or not to cancel all currently running jobs. - */ - def stop(cancel: Boolean) = - if (isStarted) Ignition.stop(cancel) - - /** - * Stops default grid with cancel flag set to `true`. - * If default grid is already stopped - it's no-op. - */ - def stop() = - if (isStarted) Ignition.stop(true) - - /** - * Sets daemon flag to grid factory. Note that this method should be called - * before grid instance starts. - * - * @param f Daemon flag to set. - */ - def daemon(f: Boolean) { - Ignition.setDaemon(f) - } - - /** - * Gets daemon flag set in the grid factory. - */ - def isDaemon = - Ignition.isDaemon - - /** - * Starts default grid. It's no-op if default grid is already started. - * - * @return Started grid. - */ - def start(): Ignite = { - if (!isStarted) Ignition.start else ignite$ - } - - /** - * Starts grid with given parameter(s). - * - * @param springCfgPath Spring XML configuration file path or URL. - * @return Started grid. If Spring configuration contains multiple grid instances, - * then the 1st found instance is returned. - */ - def start(@Nullable springCfgPath: String): Ignite = { - Ignition.start(springCfgPath) - } - - /** - * Starts grid with given parameter(s). - * - * @param cfg Grid configuration. This cannot be `null`. - * @return Started grid. - */ - def start(cfg: IgniteConfiguration): Ignite = { - Ignition.start(cfg) - } - - /** - * Starts grid with given parameter(s). - * - * @param springCfgUrl Spring XML configuration file URL. - * @return Started grid. - */ - def start(springCfgUrl: URL): Ignite = { - Ignition.start(springCfgUrl) - } -} diff --git a/modules/scalar/src/test/resources/spring-cache.xml b/modules/scalar/src/test/resources/spring-cache.xml deleted file mode 100644 index fab6d55f68259..0000000000000 --- a/modules/scalar/src/test/resources/spring-cache.xml +++ /dev/null @@ -1,88 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - java.lang.Integer - org.apache.ignite.scalar.tests.ObjectValue - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 127.0.0.1:47500..47509 - - - - - - - - - - - - - - - - - diff --git a/modules/scalar/src/test/resources/spring-ping-pong-partner.xml b/modules/scalar/src/test/resources/spring-ping-pong-partner.xml deleted file mode 100644 index 766d9fb94d405..0000000000000 --- a/modules/scalar/src/test/resources/spring-ping-pong-partner.xml +++ /dev/null @@ -1,85 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 127.0.0.1:47500..47509 - - - - - - - - - diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarAffinityRoutingSpec.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarAffinityRoutingSpec.scala deleted file mode 100644 index 5f9c531bab7ee..0000000000000 --- a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarAffinityRoutingSpec.scala +++ /dev/null @@ -1,68 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. 
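Similarly, the cache helpers removed with the `scalar` object (`createCache$`, `cache$`, `destroyCache$`) were thin wrappers over `IgniteCache`; the sketch below shows how they fit together. The cache name and the key and value types are illustrative, and `createCache$` falls back to its defaults (PARTITIONED mode, no indexed types) when only a name is given.

    import org.apache.ignite.scalar.scalar
    import org.apache.ignite.scalar.scalar._

    scalar {
        // Create a cache in the default grid with default settings.
        val prices = createCache$[String, Double]("prices")

        prices.put("IGNT", 42.0)

        // cache$ returns Option[IgniteCache], so a missing cache can be handled explicitly.
        cache$[String, Double]("prices") match {
            case Some(c) => println("IGNT -> " + c.get("IGNT"))
            case None    => println("cache not found")
        }

        destroyCache$("prices")
    }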
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.tests - -import org.apache.ignite.Ignition -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -/** - * Tests for `affinityRun..` and `affinityCall..` methods. - */ -@RunWith(classOf[JUnitRunner]) -class ScalarAffinityRoutingSpec extends FlatSpec with ShouldMatchers with BeforeAndAfterAll { - private val CFG = "modules/scalar/src/test/resources/spring-cache.xml" - - /** Cache name. */ - private val CACHE_NAME = "partitioned_tx" - - "affinityRun$ method" should "run correctly" in scalar(CFG) { - val c = cache$[Int, Int](CACHE_NAME).get - -// c += (0 -> 0) -// c += (1 -> 1) -// c += (2 -> 2) - - val cnt = Ignition.ignite.atomicLong("affinityRun", 0, true) - - ignite$.affinityRun$(CACHE_NAME, 0, () => { cnt.incrementAndGet() }, null) - ignite$.affinityRun$(CACHE_NAME, 1, () => { cnt.incrementAndGet() }, null) - ignite$.affinityRun$(CACHE_NAME, 2, () => { cnt.incrementAndGet() }, null) - - assert(cnt.get === 3) - } - - "affinityRunAsync$ method" should "run correctly" in scalar(CFG) { - val c = cache$[Int, Int](CACHE_NAME).get - -// c += (0 -> 0) -// c += (1 -> 1) -// c += (2 -> 2) - - val cnt = Ignition.ignite.atomicLong("affinityRunAsync", 0, true) - - ignite$.affinityRunAsync$(CACHE_NAME, 0, () => { cnt.incrementAndGet() }, null).get - ignite$.affinityRunAsync$(CACHE_NAME, 1, () => { cnt.incrementAndGet() }, null).get - ignite$.affinityRunAsync$(CACHE_NAME, 2, () => { cnt.incrementAndGet() }, null).get - - assert(cnt.get === 3) - } -} diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarCacheQueriesSpec.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarCacheQueriesSpec.scala deleted file mode 100644 index 52ddf233b8f8a..0000000000000 --- a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarCacheQueriesSpec.scala +++ /dev/null @@ -1,224 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
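The affinity-routing spec above reduces to the following pattern. It is a sketch of the removed `affinityRun$`/`affinityRunAsync$` extensions; the Spring configuration path and the `partitioned_tx` cache name are taken from the deleted test, while the atomic-long name is arbitrary and only serves as an observable distributed side effect.

    import org.apache.ignite.scalar.scalar
    import org.apache.ignite.scalar.scalar._

    scalar("modules/scalar/src/test/resources/spring-cache.xml") {
        val cnt = ignite$.atomicLong("affinityCnt", 0, true)

        // Blocking form: run the closure on the node owning key 0 in the given cache.
        ignite$.affinityRun$("partitioned_tx", 0, () => { cnt.incrementAndGet() }, null)

        // Non-blocking form: returns a future, so completion must be awaited explicitly.
        ignite$.affinityRunAsync$("partitioned_tx", 1, () => { cnt.incrementAndGet() }, null).get

        println("executions: " + cnt.get)
    }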
- */ - -package org.apache.ignite.scalar.tests - -import org.apache.ignite.IgniteCache -import org.apache.ignite.cache.CachePeekMode -import org.apache.ignite.cluster.ClusterNode -import org.apache.ignite.scalar.scalar._ -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -import scala.collection.JavaConversions._ - -/** - * Tests for Scalar cache queries API. - */ -@RunWith(classOf[JUnitRunner]) -class ScalarCacheQueriesSpec extends FunSpec with ShouldMatchers with BeforeAndAfterAll { - /** Entries count. */ - private val ENTRY_CNT = 10 - - /** Words. */ - private val WORDS = List("", "one", "two", "three", "four", "five", - "six", "seven", "eight", "nine", "ten") - - /** Node. */ - private var n: ClusterNode = null - - /** Cache. */ - private var c: IgniteCache[Int, ObjectValue] = null - - /** - * Start node and put data to cache. - */ - override def beforeAll() { - n = start("modules/scalar/src/test/resources/spring-cache.xml").cluster().localNode - - c = cache$[Int, ObjectValue]("default").get - - (1 to ENTRY_CNT).foreach(i => c.put(i, ObjectValue(i, "str " + WORDS(i)))) - - assert(c.size(Array.empty[CachePeekMode]:_*) == ENTRY_CNT) - - c.foreach(e => println(e.getKey + " -> " + e.getValue)) - } - - /** - * Stop node. - */ - override def afterAll() { - stop() - } - - describe("Scalar cache queries API") { - it("should correctly execute SCAN queries") { - var res = c.scan(classOf[ObjectValue], (k: Int, v: ObjectValue) => k > 5 && v.intVal < 8).getAll - - assert(res.size == 2) - - res.foreach(t => assert(t.getKey > 5 && t.getKey < 8 && t.getKey == t.getValue.intVal)) - - res = c.scan((k: Int, v: ObjectValue) => k > 5 && v.intVal < 8).getAll - - assert(res.size == 2) - - res.foreach(t => assert(t.getKey > 5 && t.getKey < 8 && t.getKey == t.getValue.intVal)) - - res = c.scan(classOf[ObjectValue], (k: Int, v: ObjectValue) => k > 5 && v.intVal < 8).getAll - - assert(res.size == 2) - - res.foreach(t => assert(t.getKey > 5 && t.getKey < 8 && t.getKey == t.getValue.intVal)) - - res = c.scan((k: Int, v: ObjectValue) => k > 5 && v.intVal < 8).getAll - - assert(res.size == 2) - - res.foreach(t => assert(t.getKey > 5 && t.getKey < 8 && t.getKey == t.getValue.intVal)) - } - - it("should correctly execute SQL queries") { - var res = c.sql(classOf[ObjectValue], "intVal > 5").getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res = c.sql(classOf[ObjectValue], "intVal > ?", 5).getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res = c.sql("intVal > 5").getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res = c.sql("intVal > ?", 5).getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res = c.sql(classOf[ObjectValue], "intVal > 5").getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res = c.sql(classOf[ObjectValue], "intVal > ?", 5).getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res = c.sql("intVal > 5").getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - - res = c.sql("intVal > 
?", 5).getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.getKey > 5 && t.getKey == t.getValue.intVal)) - } - - it("should correctly execute TEXT queries") { - var res = c.text(classOf[ObjectValue], "str").getAll - - assert(res.size == ENTRY_CNT) - - res = c.text(classOf[ObjectValue], "five").getAll - - assert(res.size == 1) - assert(res.head.getKey == 5) - - res = c.text("str").getAll - - assert(res.size == ENTRY_CNT) - - res = c.text("five").getAll - - assert(res.size == 1) - assert(res.head.getKey == 5) - - res = c.text(classOf[ObjectValue], "str").getAll - - assert(res.size == ENTRY_CNT) - - res = c.text(classOf[ObjectValue], "five").getAll - - assert(res.size == 1) - assert(res.head.getKey == 5) - - res = c.text("str").getAll - - assert(res.size == ENTRY_CNT) - - res = c.text("five").getAll - - assert(res.size == 1) - assert(res.head.getKey == 5) - } - - it("should correctly execute fields queries") { - var res = c.sqlFields("select intVal from ObjectValue where intVal > 5").getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.size == 1 && t.head.asInstanceOf[Int] > 5)) - - res = c.sqlFields("select intVal from ObjectValue where intVal > ?", 5).getAll - - assert(res.size == ENTRY_CNT - 5) - - res.foreach(t => assert(t.size == 1 && t.head.asInstanceOf[Int] > 5)) - } - - it("should correctly execute queries with multiple arguments") { - val res = c.sql("from ObjectValue where intVal in (?, ?, ?)", 1, 2, 3).getAll - - assert(res.size == 3) - } - } -} - -/** - * Object for queries. - */ -private case class ObjectValue( - /** Integer value. */ - @ScalarCacheQuerySqlField - intVal: Int, - - /** String value. */ - @ScalarCacheQueryTextField - strVal: String -) { - override def toString: String = { - "ObjectValue [" + intVal + ", " + strVal + "]" - } -} diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarCacheSpec.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarCacheSpec.scala deleted file mode 100644 index 853cc16d1ff2f..0000000000000 --- a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarCacheSpec.scala +++ /dev/null @@ -1,83 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.tests - -import org.apache.ignite.events.Event -import org.apache.ignite.events.EventType._ -import org.apache.ignite.lang.IgnitePredicate -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -import scala.collection.JavaConversions._ - -/** - * Scalar cache test. 
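The cache-query extensions exercised by ScalarCacheQueriesSpec above (scan, SQL, text and fields queries) can be summarized in one condensed sketch. The value class below mirrors the spec's ObjectValue, while the cache name and entry count are arbitrary; the class has to be registered as an indexed type for the SQL and text queries to see it, which is done here through `createCache$`.

    import org.apache.ignite.scalar.scalar
    import org.apache.ignite.scalar.scalar._

    // Value type with one SQL-indexed and one text-indexed field, as in the deleted spec.
    case class ObjectValue(@ScalarCacheQuerySqlField intVal: Int, @ScalarCacheQueryTextField strVal: String)

    scalar {
        val c = createCache$[Int, ObjectValue]("queries",
            indexedTypes = Seq(classOf[java.lang.Integer], classOf[ObjectValue]))

        (1 to 10).foreach(i => c.put(i, ObjectValue(i, "str " + i)))

        // Predicate-based scan query.
        println("scan:   " + c.scan((k: Int, v: ObjectValue) => k > 5 && v.intVal < 8).getAll.size())

        // SQL query over the indexed intVal field with a positional argument.
        println("sql:    " + c.sql(classOf[ObjectValue], "intVal > ?", 5).getAll.size())

        // Full-text query over the text-indexed strVal field.
        println("text:   " + c.text(classOf[ObjectValue], "str").getAll.size())

        // Fields query that returns only the projected column.
        println("fields: " + c.sqlFields("select intVal from ObjectValue where intVal > ?", 5).getAll.size())
    }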
- */ -@RunWith(classOf[JUnitRunner]) -class ScalarCacheSpec extends FunSpec with ShouldMatchers { - private val CFG = "modules/scalar/src/test/resources/spring-cache.xml" - - describe("Scalar cache") { - - it("should work properly via Java APIs") { - scalar(CFG) { - registerListener() - - val c = cache$[Int, Int]("partitioned").get - - c.put(1, 1) - c.put(2, 2) - - c.iterator() foreach println - - println("Size is: " + c.size()) - } - } - } - - /** - * This method will register listener for cache events on all nodes, - * so we can actually see what happens underneath locally and remotely. - */ - def registerListener() { - val g = ignite$ - - g *< (() => { - val lsnr = new IgnitePredicate[Event]() { - override def apply(e: Event): Boolean = { - println(e.shortDisplay) - - true - } - } - - if (g.cluster().nodeLocalMap[String, AnyRef].putIfAbsent("lsnr", lsnr) == null) { - g.events.localListen(lsnr, - EVT_CACHE_OBJECT_PUT, - EVT_CACHE_OBJECT_READ, - EVT_CACHE_OBJECT_REMOVED) - - println("Listener is registered.") - } - }, null) - } -} diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarConversionsSpec.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarConversionsSpec.scala deleted file mode 100644 index c7664226e3a52..0000000000000 --- a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarConversionsSpec.scala +++ /dev/null @@ -1,255 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.tests - -import org.apache.ignite.internal.util.lang._ -import org.apache.ignite.lang._ -import org.apache.ignite.scalar.scalar._ -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner -import org.scalatest.matchers.ShouldMatchers - -import java.util.concurrent.atomic._ - -/** - * - */ -@RunWith(classOf[JUnitRunner]) -class ScalarConversionsSpec extends FunSpec with ShouldMatchers { - describe("Scalar mixin") { - - it("should convert reducer") { - val r = new IgniteReducer[Int, Int] { - var sum = 0 - - override def collect(e: Int): Boolean = { - sum += e - - true - } - - override def reduce(): Int = { - sum - } - } - - assert(r.scala.apply(Seq(1, 2, 3)) == 6) - } - - it("should convert reducer 2") { - val r = new IgniteReducer2[Int, Int, Int] { - var sum = 0 - - override def collect(e1: Int, e2: Int): Boolean = { - sum += e1 * e2 - - true - } - - override def apply(): Int = { - sum - } - } - - assert(r.scala.apply(Seq(1, 2), Seq(3, 4)) == 21) - } - - it("should convert reducer 3") { - val r = new IgniteReducer3[Int, Int, Int, Int] { - var sum = 0 - - override def collect(e1: Int, e2: Int, e3: Int): Boolean = { - sum += e1 * e2 * e3 - - true - } - - override def apply(): Int = { - sum - } - } - - assert(r.scala.apply(Seq(1, 2), Seq(1, 2), Seq(1, 2)) == 27) - } - - it("should convert tuple 2") { - val t = new IgniteBiTuple[Int, Int](1, 2) - - assert(t.scala._1 == 1) - assert(t.scala._2 == 2) - } - - it("should convert tuple 3") { - val t = new GridTuple3[Int, Int, Int](1, 2, 3) - - assert(t.scala._1 == 1) - assert(t.scala._2 == 2) - assert(t.scala._3 == 3) - } - - it("should convert tuple 4") { - val t = new GridTuple4[Int, Int, Int, Int](1, 2, 3, 4) - - assert(t.scala._1 == 1) - assert(t.scala._2 == 2) - assert(t.scala._3 == 3) - assert(t.scala._4 == 4) - } - - it("should convert tuple 5") { - val t = new GridTuple5[Int, Int, Int, Int, Int](1, 2, 3, 4, 5) - - assert(t.scala._1 == 1) - assert(t.scala._2 == 2) - assert(t.scala._3 == 3) - assert(t.scala._4 == 4) - assert(t.scala._5 == 5) - } - - it("should convert in closure") { - val i = new AtomicInteger() - - val f = new IgniteInClosure[Int] { - override def apply(e: Int) { - i.set(e * 3) - } - } - - f.scala.apply(3) - - assert(i.get == 9) - } - - it("should convert in closure 2") { - val i = new AtomicInteger() - - val f = new IgniteBiInClosure[Int, Int] { - override def apply(e1: Int, e2: Int) { - i.set(e1 + e2) - } - } - - f.scala.apply(3, 3) - - assert(i.get == 6) - } - - it("should convert in closure 3") { - val i = new AtomicInteger() - - val f = new GridInClosure3[Int, Int, Int] { - override def apply(e1: Int, e2: Int, e3: Int) { - i.set(e1 + e2 + e3) - } - } - - f.scala.apply(3, 3, 3) - - assert(i.get == 9) - } - - it("should convert absolute closure") { - val i = new AtomicInteger() - - val f = new GridAbsClosure { - override def apply() { - i.set(3) - } - } - - f.scala.apply() - - assert(i.get == 3) - } - - it("should convert absolute predicate") { - val i = new AtomicInteger() - - val p = new GridAbsPredicate { - override def apply(): Boolean = - i.get > 5 - } - - i.set(5) - - assert(!p.scala.apply()) - - i.set(6) - - assert(p.scala.apply()) - } - - it("should convert predicate") { - val p = new IgnitePredicate[Int] { - override def apply(e: Int): Boolean = - e > 5 - } - - assert(!p.scala.apply(5)) - assert(p.scala.apply(6)) - } - - it("should convert predicate 2") { - val p = new IgniteBiPredicate[Int, Int] { - override def apply(e1: Int, e2: 
Int): Boolean = - e1 + e2 > 5 - } - - assert(!p.scala.apply(2, 3)) - assert(p.scala.apply(3, 3)) - } - - it("should convert predicate 3") { - val p = new GridPredicate3[Int, Int, Int] { - override def apply(e1: Int, e2: Int, e3: Int): Boolean = - e1 + e2 + e3 > 5 - } - - assert(!p.scala.apply(1, 2, 2)) - assert(p.scala.apply(2, 2, 2)) - } - - it("should convert closure") { - val f = new IgniteClosure[Int, Int] { - override def apply(e: Int): Int = - e * 3 - } - - assert(f.scala.apply(3) == 9) - } - - it("should convert closure 2") { - val f = new IgniteBiClosure[Int, Int, Int] { - override def apply(e1: Int, e2: Int): Int = - e1 + e2 - } - - assert(f.scala.apply(3, 3) == 6) - } - - it("should convert closure 3") { - val f = new GridClosure3[Int, Int, Int, Int] { - override def apply(e1: Int, e2: Int, e3: Int): Int = - e1 + e2 + e3 - } - - assert(f.scala.apply(3, 3, 3) == 9) - } - } -} diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarProjectionSpec.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarProjectionSpec.scala deleted file mode 100644 index 479357cda3857..0000000000000 --- a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarProjectionSpec.scala +++ /dev/null @@ -1,163 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.tests - -import org.apache.ignite.Ignition -import org.apache.ignite.cluster.ClusterNode -import org.apache.ignite.configuration.IgniteConfiguration -import org.apache.ignite.messaging.MessagingListenActor -import org.apache.ignite.scalar.scalar -import org.apache.ignite.scalar.scalar._ - -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -import java.util.UUID - -import scala.collection.JavaConversions._ - -/** - * Scalar cache test. - */ -@RunWith(classOf[JUnitRunner]) -class ScalarProjectionSpec extends FunSpec with ShouldMatchers with BeforeAndAfterAll { - /** - * - */ - override def beforeAll() { - Ignition.start(gridConfig("node-1", false)) - Ignition.start(gridConfig("node-2", true)) - } - - /** - * - */ - override def afterAll() { - Ignition.stop("node-1", true) - Ignition.stop("node-2", true) - } - - /** - * - * @param name Ignite instance name. - * @param shown Shown flag. 
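The conversion spec above relies on the `.scala` extension that turns Ignite's Java-style functional interfaces into ordinary Scala functions; a minimal round trip looks like this (the predicate and closure bodies are the ones used in the deleted spec):

    import org.apache.ignite.lang.{IgniteClosure, IgnitePredicate}
    import org.apache.ignite.scalar.scalar._

    val p = new IgnitePredicate[Int] {
        override def apply(e: Int): Boolean = e > 5
    }

    val f = new IgniteClosure[Int, Int] {
        override def apply(e: Int): Int = e * 3
    }

    // `.scala` yields plain Scala functions backed by the Ignite interfaces.
    assert(!p.scala.apply(5) && p.scala.apply(6))
    assert(f.scala.apply(3) == 9)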
- */ - private def gridConfig(name: String, shown: Boolean): IgniteConfiguration = { - val attrs: java.util.Map[String, Boolean] = Map[String, Boolean]("shown" -> shown) - - val cfg = new IgniteConfiguration - - cfg.setIgniteInstanceName(name) - cfg.setUserAttributes(attrs) - - cfg - } - - describe("ScalarProjectionPimp class") { - it("should return all nodes") { - scalar(gridConfig("node-scalar", true)) { - assertResult(3)(ignite$("node-scalar").get.cluster().nodes().size) - } - } - - it("should return shown nodes") { - scalar(gridConfig("node-scalar", true)) { - assert(ignite$("node-scalar").get.nodes$( - (node: ClusterNode) => node.attribute[Boolean]("shown")).size == 2) - } - } - - it("should return all remote nodes") { - scalar(gridConfig("node-scalar", true)) { - assertResult(2)(ignite$("node-scalar").get.remoteNodes$().size) - } - } - - it("should return shown remote nodes") { - scalar(gridConfig("node-scalar", true)) { - assert(ignite$("node-scalar").get.remoteNodes$((node: ClusterNode) => - node.attribute[Boolean]("shown")).size == 1) - } - } - - it("should correctly send messages") { - scalar(gridConfig("node-scalar", true)) { - ignite$("node-1").get.message().remoteListen(null, new MessagingListenActor[Any]() { - def receive(nodeId: UUID, msg: Any) { - println("node-1 received " + msg) - } - }) - - ignite$("node-2").get.message().remoteListen(null, new MessagingListenActor[Any]() { - def receive(nodeId: UUID, msg: Any) { - println("node-2 received " + msg) - } - }) - - ignite$("node-scalar").get !<("Message", null) - ignite$("node-scalar").get !<(Seq("Message1", "Message2"), null) - } - } - - it("should correctly make calls") { - scalar(gridConfig("node-scalar", true)) { - println("CALL RESULT: " + ignite$("node-scalar").get #<(() => "Message", null)) - - println("ASYNC CALL RESULT: " + ignite$("node-scalar").get.callAsync$[String](() => "Message", null).get) - - val call1: () => String = () => "Message1" - val call2: () => String = () => "Message2" - - println("MULTIPLE CALL RESULT: " + ignite$("node-scalar").get #<(Seq(call1, call2), null)) - - println("MULTIPLE ASYNC CALL RESULT: " + - (ignite$("node-scalar").get #?(Seq(call1, call2), null)).get) - } - } - - it("should correctly make runs") { - scalar(gridConfig("node-scalar", true)) { - ignite$("node-scalar").get *<(() => println("RUN RESULT: Message"), null) - - (ignite$("node-scalar").get *?(() => println("ASYNC RUN RESULT: Message"), null)).get - - val run1: () => Unit = () => println("RUN 1 RESULT: Message1") - val run2: () => Unit = () => println("RUN 2 RESULT: Message2") - - ignite$("node-scalar").get *<(Seq(run1, run2), null) - - val runAsync1: () => Unit = () => println("ASYNC RUN 1 RESULT: Message1") - val runAsync2: () => Unit = () => println("ASYNC RUN 2 RESULT: Message2") - - (ignite$("node-scalar").get *?(Seq(runAsync1, runAsync2), null)).get - } - } - - it("should correctly reduce") { - scalar(gridConfig("node-scalar", true)) { - val call1: () => Int = () => 15 - val call2: () => Int = () => 82 - - assert(ignite$("node-scalar").get @<(Seq(call1, call2), (n: Seq[Int]) => n.sum, null) == 97) - assert(ignite$("node-scalar").get.reduceAsync$(Seq(call1, call2), ( - n: Seq[Int]) => n.sum, null).get == 97) - } - } - } -} diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarReturnableSpec.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarReturnableSpec.scala deleted file mode 100644 index 2927dd737322f..0000000000000 --- 
a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarReturnableSpec.scala +++ /dev/null @@ -1,60 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.tests - -import org.apache.ignite.scalar.scalar._ - -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -import scala.util.control.Breaks._ - -/** - * - */ -@RunWith(classOf[JUnitRunner]) -class ScalarReturnableSpec extends FunSpec with ShouldMatchers { - describe("Scalar '^^'") { - it("should work") { - var i = 0 - - breakable { - while (true) { - if (i == 0) - println("Only once!") ^^ - - i += 1 - } - } - - assert(i == 0) - } - - // Ignore exception below. - def test() = breakable { - while (true) { - println("Only once!") ^^ - } - } - - it("should also work") { - test() - } - } -} diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarSpec.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarSpec.scala deleted file mode 100644 index b6fc014a0db46..0000000000000 --- a/modules/scalar/src/test/scala/org/apache/ignite/scalar/tests/ScalarSpec.scala +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.ignite.scalar.tests - -import org.apache.ignite.scalar.scalar -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -/** - * - */ -@RunWith(classOf[JUnitRunner]) -class ScalarSpec extends FunSpec { - describe("Scalar") { - it("should start and stop") { - scalar start() - scalar.logo() - scalar stop() - } - } -} diff --git a/modules/scalar/src/test/scala/org/apache/ignite/scalar/testsuites/ScalarSelfTestSuite.scala b/modules/scalar/src/test/scala/org/apache/ignite/scalar/testsuites/ScalarSelfTestSuite.scala deleted file mode 100644 index a9ee6c44c82ec..0000000000000 --- a/modules/scalar/src/test/scala/org/apache/ignite/scalar/testsuites/ScalarSelfTestSuite.scala +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.ignite.scalar.testsuites - -import org.apache.ignite.IgniteSystemProperties._ -import org.apache.ignite.scalar.tests._ -import org.apache.ignite.testframework.GridTestUtils -import org.junit.runner.RunWith -import org.scalatest._ -import org.scalatest.junit.JUnitRunner - -/** - * - */ -@RunWith(classOf[JUnitRunner]) -class ScalarSelfTestSuite extends Suites( - new ScalarAffinityRoutingSpec, - new ScalarCacheQueriesSpec, - new ScalarCacheSpec, - new ScalarConversionsSpec, - new ScalarProjectionSpec, - new ScalarReturnableSpec, - new ScalarSpec -) { - System.setProperty(IGNITE_OVERRIDE_MCAST_GRP, - GridTestUtils.getNextMulticastGroup(classOf[ScalarSelfTestSuite])) -} diff --git a/pom.xml b/pom.xml index 5bc22617ecfc9..b2a2ae944e125 100644 --- a/pom.xml +++ b/pom.xml @@ -104,8 +104,6 @@ all-scala - modules/scalar-2.10 - modules/scalar modules/spark modules/spark-2.4 modules/visor-console-2.10 @@ -561,8 +559,6 @@ - modules/scalar - modules/spark modules/visor-console modules/visor-plugins @@ -577,7 +573,6 @@ modules/spark-2.4 - modules/scalar @@ -589,7 +584,6 @@ - modules/scalar-2.10 modules/visor-console-2.10 modules/visor-plugins
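With the Scalar modules dropped from the build above, the same operations stay available to Scala applications through the core Ignite API, which can be called directly; a rough sketch under that assumption, with the cache name and printed messages chosen only for illustration:

    import org.apache.ignite.Ignition
    import org.apache.ignite.lang.IgniteRunnable

    val ignite = Ignition.start()

    // Roughly what createCache$ / cache$ wrapped.
    val cache = ignite.getOrCreateCache[Int, String]("sketch")
    cache.put(1, "one")

    // Roughly what the *< broadcast shorthand wrapped.
    ignite.compute().broadcast(new IgniteRunnable {
        override def run(): Unit = println("Broadcasting!!!")
    })

    // Roughly what affinityRun$ wrapped.
    ignite.compute().affinityRun("sketch", 1, new IgniteRunnable {
        override def run(): Unit = println("co-located with key 1")
    })

    Ignition.stop(true)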