Basic Statistics - RDD-based API
Summary statistics
We provide column summary statistics for RDD[Vector] through the function colStats available in Statistics.
colStats() returns an instance of MultivariateStatisticalSummary, which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count.
Refer to the MultivariateStatisticalSummary Python docs for more details on the API.
import numpy as np
from pyspark.mllib.stat import Statistics
mat = sc.parallelize(
[np.array([1.0, 10.0, 100.0]), np.array([2.0, 20.0, 200.0]), np.array([3.0, 30.0, 300.0])]
) # an RDD of Vectors
# Compute column summary statistics.
summary = Statistics.colStats(mat)
print(summary.mean()) # a dense vector containing the mean value for each column
print(summary.variance()) # column-wise variance
print(summary.numNonzeros()) # number of nonzeros in each column
colStats() returns an instance of MultivariateStatisticalSummary, which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count.
Refer to the MultivariateStatisticalSummary Scala docs for more details on the API.
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.stat.{MultivariateStatisticalSummary, Statistics}
val observations = sc.parallelize(
Seq(
Vectors.dense(1.0, 10.0, 100.0),
Vectors.dense(2.0, 20.0, 200.0),
Vectors.dense(3.0, 30.0, 300.0)
)
)
// Compute column summary statistics.
val summary: MultivariateStatisticalSummary = Statistics.colStats(observations)
println(summary.mean) // a dense vector containing the mean value for each column
println(summary.variance) // column-wise variance
println(summary.numNonzeros) // number of nonzeros in each column
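The summary also exposes the column-wise maxima and minima and the total count mentioned above; a minimal sketch that continues the Scala example (reusing the summary value from above):
// Additional statistics available on MultivariateStatisticalSummary
println(summary.max) // a dense vector containing the maximum value for each column
println(summary.min) // a dense vector containing the minimum value for each column
println(summary.count) // the total number of observations (rows)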
colStats() returns an instance of MultivariateStatisticalSummary, which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count.
Refer to the MultivariateStatisticalSummary Java docs for more details on the API.
import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.stat.MultivariateStatisticalSummary;
import org.apache.spark.mllib.stat.Statistics;
JavaRDD<Vector> mat = jsc.parallelize(
Arrays.asList(
Vectors.dense(1.0, 10.0, 100.0),
Vectors.dense(2.0, 20.0, 200.0),
Vectors.dense(3.0, 30.0, 300.0)
)
); // an RDD of Vectors
// Compute column summary statistics.
MultivariateStatisticalSummary summary = Statistics.colStats(mat.rdd());
System.out.println(summary.mean()); // a dense vector containing the mean value for each column
System.out.println(summary.variance()); // column-wise variance
System.out.println(summary.numNonzeros()); // number of nonzeros in each column
Correlations
Calculating the correlation between two series of data is a common operation in statistics. In spark.mllib we provide the flexibility to calculate pairwise correlations among many series. The supported correlation methods are currently Pearson's and Spearman's correlation.
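As a reminder (standard definitions, not specific to spark.mllib): Pearson correlation measures the linear relationship between two series, and Spearman correlation is Pearson correlation computed on the ranks of the values. For series $x$ and $y$ of length $n$, \[ \rho_{x,y} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}. \]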
Statistics provides methods to calculate correlations between series. Depending on the type of input, two RDD[Double]s or an RDD[Vector], the output will be a Double or the correlation Matrix respectively.
Refer to the Statistics Python docs for more details on the API.
import numpy as np
from pyspark.mllib.stat import Statistics
seriesX = sc.parallelize([1.0, 2.0, 3.0, 3.0, 5.0]) # a series
# seriesY must have the same number of partitions and cardinality as seriesX
seriesY = sc.parallelize([11.0, 22.0, 33.0, 33.0, 555.0])
# Compute the correlation using Pearson's method. Enter "spearman" for Spearman's method.
# If a method is not specified, Pearson's method will be used by default.
print("Correlation is: " + str(Statistics.corr(seriesX, seriesY, method="pearson")))
data = sc.parallelize(
[np.array([1.0, 10.0, 100.0]), np.array([2.0, 20.0, 200.0]), np.array([5.0, 33.0, 366.0])]
) # an RDD of Vectors
# calculate the correlation matrix using Pearson's method. Use "spearman" for Spearman's method.
# If a method is not specified, Pearson's method will be used by default.
print(Statistics.corr(data, method="pearson"))
Statistics provides methods to calculate correlations between series. Depending on the type of input, two RDD[Double]s or an RDD[Vector], the output will be a Double or the correlation Matrix respectively.
Refer to the Statistics Scala docs for details on the API.
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD
val seriesX: RDD[Double] = sc.parallelize(Array(1, 2, 3, 3, 5)) // a series
// must have the same number of partitions and cardinality as seriesX
val seriesY: RDD[Double] = sc.parallelize(Array(11, 22, 33, 33, 555))
// compute the correlation using Pearson's method. Enter "spearman" for Spearman's method. If a
// method is not specified, Pearson's method will be used by default.
val correlation: Double = Statistics.corr(seriesX, seriesY, "pearson")
println(s"Correlation is: $correlation")
val data: RDD[Vector] = sc.parallelize(
Seq(
Vectors.dense(1.0, 10.0, 100.0),
Vectors.dense(2.0, 20.0, 200.0),
Vectors.dense(5.0, 33.0, 366.0))
) // note that each Vector is a row and not a column
// calculate the correlation matrix using Pearson's method. Use "spearman" for Spearman's method
// If a method is not specified, Pearson's method will be used by default.
val correlMatrix: Matrix = Statistics.corr(data, "pearson")
println(correlMatrix.toString)
Statistics provides methods to calculate correlations between series. Depending on the type of input, two JavaDoubleRDDs or a JavaRDD<Vector>, the output will be a Double or the correlation Matrix respectively.
Refer to the Statistics Java docs for details on the API.
import java.util.Arrays;
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.linalg.Matrix;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.stat.Statistics;
JavaDoubleRDD seriesX = jsc.parallelizeDoubles(
Arrays.asList(1.0, 2.0, 3.0, 3.0, 5.0)); // a series
// must have the same number of partitions and cardinality as seriesX
JavaDoubleRDD seriesY = jsc.parallelizeDoubles(
Arrays.asList(11.0, 22.0, 33.0, 33.0, 555.0));
// compute the correlation using Pearson's method. Enter "spearman" for Spearman's method.
// If a method is not specified, Pearson's method will be used by default.
double correlation = Statistics.corr(seriesX.srdd(), seriesY.srdd(), "pearson");
System.out.println("Correlation is: " + correlation);
// note that each Vector is a row and not a column
JavaRDD<Vector> data = jsc.parallelize(
Arrays.asList(
Vectors.dense(1.0, 10.0, 100.0),
Vectors.dense(2.0, 20.0, 200.0),
Vectors.dense(5.0, 33.0, 366.0)
)
);
// calculate the correlation matrix using Pearson's method.
// Use "spearman" for Spearman's method.
// If a method is not specified, Pearson's method will be used by default.
Matrix correlMatrix = Statistics.corr(data.rdd(), "pearson");
System.out.println(correlMatrix.toString());
Stratified sampling
Unlike the other statistics functions, which reside in spark.mllib, the stratified sampling methods sampleByKey and sampleByKeyExact can be performed on RDDs of key-value pairs. For stratified sampling, the keys can be thought of as a label and the value as a specific attribute. For example, the key can be man or woman, or document ids, and the respective values can be the list of ages of the people in the population or the list of words in the documents. The sampleByKey method will flip a coin to decide whether an observation will be sampled or not, and therefore requires one pass over the data and provides an expected sample size. sampleByKeyExact requires significantly more resources than the per-stratum simple random sampling used in sampleByKey, but will provide the exact sampling size with 99.99% confidence. sampleByKeyExact is currently not supported in Python.
sampleByKey() allows users to sample approximately $\lceil f_k \cdot n_k \rceil \, \forall k \in K$ items, where $f_k$ is the desired fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of keys.
Note: sampleByKeyExact() is currently not supported in Python.
# an RDD of any key value pairs
data = sc.parallelize([(1, 'a'), (1, 'b'), (2, 'c'), (2, 'd'), (2, 'e'), (3, 'f')])
# specify the exact fraction desired from each key as a dictionary
fractions = {1: 0.1, 2: 0.6, 3: 0.3}
approxSample = data.sampleByKey(False, fractions)
sampleByKeyExact() allows users to sample exactly $\lceil f_k \cdot n_k \rceil \, \forall k \in K$ items, where $f_k$ is the desired fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of keys. Sampling without replacement requires one additional pass over the RDD to guarantee sample size, whereas sampling with replacement requires two additional passes.
// an RDD[(K, V)] of any key value pairs
val data = sc.parallelize(
Seq((1, 'a'), (1, 'b'), (2, 'c'), (2, 'd'), (2, 'e'), (3, 'f')))
// specify the exact fraction desired from each key
val fractions = Map(1 -> 0.1, 2 -> 0.6, 3 -> 0.3)
// Get an approximate sample from each stratum
val approxSample = data.sampleByKey(withReplacement = false, fractions = fractions)
// Get an exact sample from each stratum
val exactSample = data.sampleByKeyExact(withReplacement = false, fractions = fractions)
sampleByKeyExact() allows users to sample exactly $\lceil f_k \cdot n_k \rceil \, \forall k \in K$ items, where $f_k$ is the desired fraction for key $k$, $n_k$ is the number of key-value pairs for key $k$, and $K$ is the set of keys. Sampling without replacement requires one additional pass over the RDD to guarantee sample size, whereas sampling with replacement requires two additional passes.
import java.util.*;
import scala.Tuple2;
import com.google.common.collect.ImmutableMap;
import org.apache.spark.api.java.JavaPairRDD;
List<Tuple2<Integer, Character>> list = Arrays.asList(
new Tuple2<>(1, 'a'),
new Tuple2<>(1, 'b'),
new Tuple2<>(2, 'c'),
new Tuple2<>(2, 'd'),
new Tuple2<>(2, 'e'),
new Tuple2<>(3, 'f')
);
JavaPairRDD<Integer, Character> data = jsc.parallelizePairs(list);
// specify the exact fraction desired from each key Map<K, Double>
ImmutableMap<Integer, Double> fractions = ImmutableMap.of(1, 0.1, 2, 0.6, 3, 0.3);
// Get an approximate sample from each stratum
JavaPairRDD<Integer, Character> approxSample = data.sampleByKey(false, fractions);
// Get an exact sample from each stratum
JavaPairRDD<Integer, Character> exactSample = data.sampleByKeyExact(false, fractions);
Hypothesis testing
Hypothesis testing is a powerful tool in statistics to determine whether a result is statistically significant, that is, whether it occurred by chance or not. spark.mllib currently supports Pearson's chi-squared ($\chi^2$) tests for goodness of fit and independence. The input data types determine whether the goodness-of-fit test or the independence test is conducted. The goodness-of-fit test requires an input type of Vector, whereas the independence test requires a Matrix as input.
spark.mllib also supports the input type RDD[LabeledPoint] to enable feature selection via chi-squared independence tests.
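For reference (a standard definition, not specific to spark.mllib), both variants compute the Pearson chi-squared statistic from observed counts $O_i$ and expected counts $E_i$: \[ \chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}, \] where for goodness of fit the expected counts come from the supplied (or uniform) distribution, and for independence they come from the row and column totals of the contingency matrix.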
Statistics provides methods to run Pearson's chi-squared tests. The following example demonstrates how to run and interpret the hypothesis tests.
Refer to the Statistics Python docs for more details on the API.
from pyspark.mllib.linalg import Matrices, Vectors
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.stat import Statistics
vec = Vectors.dense(0.1, 0.15, 0.2, 0.3, 0.25) # a vector composed of the frequencies of events
# compute the goodness of fit. If a second vector to test against
# is not supplied as a parameter, the test runs against a uniform distribution.
goodnessOfFitTestResult = Statistics.chiSqTest(vec)
# summary of the test including the p-value, degrees of freedom,
# test statistic, the method used, and the null hypothesis.
print("%s\n" % goodnessOfFitTestResult)
mat = Matrices.dense(3, 2, [1.0, 3.0, 5.0, 2.0, 4.0, 6.0]) # a contingency matrix
# conduct Pearson's independence test on the input contingency matrix
independenceTestResult = Statistics.chiSqTest(mat)
# summary of the test including the p-value, degrees of freedom,
# test statistic, the method used, and the null hypothesis.
print("%s\n" % independenceTestResult)
obs = sc.parallelize(
[LabeledPoint(1.0, [1.0, 0.0, 3.0]),
LabeledPoint(1.0, [1.0, 2.0, 0.0]),
LabeledPoint(1.0, [-1.0, 0.0, -0.5])]
) # LabeledPoint(label, feature)
# The contingency table is constructed from an RDD of LabeledPoint and used to conduct
# the independence test. Returns an array containing the ChiSquaredTestResult for every feature
# against the label.
featureTestResults = Statistics.chiSqTest(obs)
for i, result in enumerate(featureTestResults):
    print("Column %d:\n%s" % (i + 1, result))
Statistics provides methods to run Pearson's chi-squared tests. The following example demonstrates how to run and interpret the hypothesis tests.
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.mllib.stat.test.ChiSqTestResult
import org.apache.spark.rdd.RDD
// a vector composed of the frequencies of events
val vec: Vector = Vectors.dense(0.1, 0.15, 0.2, 0.3, 0.25)
// compute the goodness of fit. If a second vector to test against is not supplied
// as a parameter, the test runs against a uniform distribution.
val goodnessOfFitTestResult = Statistics.chiSqTest(vec)
// summary of the test including the p-value, degrees of freedom, test statistic, the method
// used, and the null hypothesis.
println(s"$goodnessOfFitTestResult\n")
// a contingency matrix. Create a dense matrix ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0))
val mat: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
// conduct Pearson's independence test on the input contingency matrix
val independenceTestResult = Statistics.chiSqTest(mat)
// summary of the test including the p-value, degrees of freedom
println(s"$independenceTestResult\n")
val obs: RDD[LabeledPoint] =
sc.parallelize(
Seq(
LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0)),
LabeledPoint(1.0, Vectors.dense(1.0, 2.0, 0.0)),
LabeledPoint(-1.0, Vectors.dense(-1.0, 0.0, -0.5))
)
) // (label, feature) pairs.
// The contingency table is constructed from the raw (label, feature) pairs and used to conduct
// the independence test. Returns an array containing the ChiSquaredTestResult for every feature
// against the label.
val featureTestResults: Array[ChiSqTestResult] = Statistics.chiSqTest(obs)
featureTestResults.zipWithIndex.foreach { case (k, v) =>
println(s"Column ${(v + 1)} :")
println(k)
} // summary of the test
Statistics provides methods to run Pearson's chi-squared tests. The following example demonstrates how to run and interpret the hypothesis tests.
Refer to the ChiSqTestResult Java docs for details on the API.
import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.linalg.Matrices;
import org.apache.spark.mllib.linalg.Matrix;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.stat.Statistics;
import org.apache.spark.mllib.stat.test.ChiSqTestResult;
// a vector composed of the frequencies of events
Vector vec = Vectors.dense(0.1, 0.15, 0.2, 0.3, 0.25);
// compute the goodness of fit. If a second vector to test against is not supplied
// as a parameter, the test runs against a uniform distribution.
ChiSqTestResult goodnessOfFitTestResult = Statistics.chiSqTest(vec);
// summary of the test including the p-value, degrees of freedom, test statistic,
// the method used, and the null hypothesis.
System.out.println(goodnessOfFitTestResult + "\n");
// Create a contingency matrix ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0))
Matrix mat = Matrices.dense(3, 2, new double[]{1.0, 3.0, 5.0, 2.0, 4.0, 6.0});
// conduct Pearson's independence test on the input contingency matrix
ChiSqTestResult independenceTestResult = Statistics.chiSqTest(mat);
// summary of the test including the p-value, degrees of freedom...
System.out.println(independenceTestResult + "\n");
// an RDD of labeled points
JavaRDD<LabeledPoint> obs = jsc.parallelize(
Arrays.asList(
new LabeledPoint(1.0, Vectors.dense(1.0, 0.0, 3.0)),
new LabeledPoint(1.0, Vectors.dense(1.0, 2.0, 0.0)),
new LabeledPoint(-1.0, Vectors.dense(-1.0, 0.0, -0.5))
)
);
// The contingency table is constructed from the raw (label, feature) pairs and used to conduct
// the independence test. Returns an array containing the ChiSquaredTestResult for every feature
// against the label.
ChiSqTestResult[] featureTestResults = Statistics.chiSqTest(obs.rdd());
int i = 1;
for (ChiSqTestResult result : featureTestResults) {
System.out.println("Column " + i + ":");
System.out.println(result + "\n"); // summary of the test
i++;
}
Additionally, spark.mllib provides a 1-sample, 2-sided implementation of the Kolmogorov-Smirnov (KS) test for equality of probability distributions. By providing the name of a theoretical distribution (currently solely supported for the normal distribution) and its parameters, or a function to calculate the cumulative distribution according to a given theoretical distribution, the user can test the null hypothesis that their sample is drawn from that distribution. In the case that the user tests against the normal distribution (distName="norm") but does not provide distribution parameters, the test initializes to the standard normal distribution and logs an appropriate message.
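As a reminder of the underlying statistic (a standard definition, not specific to spark.mllib), the one-sample KS test compares the empirical CDF $F_n$ of the sample with the theoretical CDF $F$: \[ D_n = \sup_x \left| F_n(x) - F(x) \right|, \] and a small p-value for $D_n$ leads to rejecting the null hypothesis that the sample is drawn from $F$.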
Statistics provides methods to run a 1-sample, 2-sided Kolmogorov-Smirnov test. The following example demonstrates how to run and interpret the hypothesis tests.
Refer to the Statistics Python docs for more details on the API.
from pyspark.mllib.stat import Statistics
parallelData = sc.parallelize([0.1, 0.15, 0.2, 0.3, 0.25])
# run a KS test for the sample versus a standard normal distribution
testResult = Statistics.kolmogorovSmirnovTest(parallelData, "norm", 0, 1)
# summary of the test including the p-value, test statistic, and null hypothesis
# if our p-value indicates significance, we can reject the null hypothesis
# Note that the Scala functionality of calling Statistics.kolmogorovSmirnovTest with
# a lambda to calculate the CDF is not made available in the Python API
print(testResult)
Statistics provides methods to run a 1-sample, 2-sided Kolmogorov-Smirnov test. The following example demonstrates how to run and interpret the hypothesis tests.
Refer to the Statistics Scala docs for details on the API.
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD
val data: RDD[Double] = sc.parallelize(Seq(0.1, 0.15, 0.2, 0.3, 0.25)) // an RDD of sample data
// run a KS test for the sample versus a standard normal distribution
val testResult = Statistics.kolmogorovSmirnovTest(data, "norm", 0, 1)
// summary of the test including the p-value, test statistic, and null hypothesis if our p-value
// indicates significance, we can reject the null hypothesis.
println(testResult)
println()
// perform a KS test using a cumulative distribution function of our making
val myCDF = Map(0.1 -> 0.2, 0.15 -> 0.6, 0.2 -> 0.05, 0.3 -> 0.05, 0.25 -> 0.1)
val testResult2 = Statistics.kolmogorovSmirnovTest(data, myCDF)
println(testResult2)
Statistics provides methods to run a 1-sample, 2-sided Kolmogorov-Smirnov test. The following example demonstrates how to run and interpret the hypothesis tests.
Refer to the Statistics Java docs for details on the API.
import java.util.Arrays;
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.mllib.stat.Statistics;
import org.apache.spark.mllib.stat.test.KolmogorovSmirnovTestResult;
JavaDoubleRDD data = jsc.parallelizeDoubles(Arrays.asList(0.1, 0.15, 0.2, 0.3, 0.25));
KolmogorovSmirnovTestResult testResult =
Statistics.kolmogorovSmirnovTest(data, "norm", 0.0, 1.0);
// summary of the test including the p-value, test statistic, and null hypothesis
// if our p-value indicates significance, we can reject the null hypothesis
System.out.println(testResult);
Streaming Significance Testing
spark.mllib provides online implementations of some tests to support use cases like A/B testing. These tests may be performed on a Spark Streaming DStream[(Boolean, Double)] where the first element of each pair indicates the control group (false) or treatment group (true) and the second element is the value of an observation.
Streaming significance testing supports the following parameters:
peacePeriod - The number of initial data points from the stream to ignore, used to mitigate novelty effects.
windowSize - The number of past batches to perform hypothesis testing over. Setting to 0 will perform cumulative processing using all prior batches.
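The examples below set the test method to "welch", i.e. Welch's t-test for two groups with possibly unequal variances; as a reminder (a standard formula, not specific to spark.mllib), its statistic for group means $\bar{X}_1, \bar{X}_2$, sample variances $s_1^2, s_2^2$, and group sizes $n_1, n_2$ is \[ t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_1^2 / n_1 + s_2^2 / n_2}}. \]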
StreamingTest provides streaming hypothesis testing.
val data = ssc.textFileStream(dataDir).map(line => line.split(",") match {
case Array(label, value) => BinarySample(label.toBoolean, value.toDouble)
})
val streamingTest = new StreamingTest()
.setPeacePeriod(0)
.setWindowSize(0)
.setTestMethod("welch")
val out = streamingTest.registerStream(data)
out.print()
StreamingTest provides streaming hypothesis testing.
import org.apache.spark.mllib.stat.test.BinarySample;
import org.apache.spark.mllib.stat.test.StreamingTest;
import org.apache.spark.mllib.stat.test.StreamingTestResult;
import org.apache.spark.streaming.api.java.JavaDStream;
JavaDStream<BinarySample> data = ssc.textFileStream(dataDir).map(line -> {
String[] ts = line.split(",");
boolean label = Boolean.parseBoolean(ts[0]);
double value = Double.parseDouble(ts[1]);
return new BinarySample(label, value);
});
StreamingTest streamingTest = new StreamingTest()
.setPeacePeriod(0)
.setWindowSize(0)
.setTestMethod("welch");
JavaDStream<StreamingTestResult> out = streamingTest.registerStream(data);
out.print();
Random data generation
Random data generation is useful for randomized algorithms, prototyping, and performance testing. spark.mllib supports generating random RDDs with i.i.d. values drawn from a given distribution: uniform, standard normal, or Poisson.
RandomRDDs provides factory methods to generate random double RDDs or vector RDDs. The following example generates a random double RDD whose values follow the standard normal distribution N(0, 1), and then maps it to N(1, 4).
Refer to the RandomRDDs Python docs for more details on the API.
from pyspark.mllib.random import RandomRDDs
sc = ... # SparkContext
# Generate a random double RDD that contains 1 million i.i.d. values drawn from the
# standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions.
u = RandomRDDs.normalRDD(sc, 1000000, 10)
# Apply a transform to get a random double RDD following `N(1, 4)`.
v = u.map(lambda x: 1.0 + 2.0 * x)
RandomRDDs provides factory methods to generate random double RDDs or vector RDDs. The following example generates a random double RDD whose values follow the standard normal distribution N(0, 1), and then maps it to N(1, 4).
Refer to the RandomRDDs Scala docs for details on the API.
import org.apache.spark.SparkContext
import org.apache.spark.mllib.random.RandomRDDs._
val sc: SparkContext = ...
// Generate a random double RDD that contains 1 million i.i.d. values drawn from the
// standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions.
val u = normalRDD(sc, 1000000L, 10)
// Apply a transform to get a random double RDD following `N(1, 4)`.
val v = u.map(x => 1.0 + 2.0 * x)
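The introduction above also mentions the uniform and Poisson distributions; a minimal Scala sketch, assuming the same sc is in scope, using the uniformRDD and poissonRDD factory methods from RandomRDDs:
import org.apache.spark.mllib.random.RandomRDDs._
// 1 million i.i.d. values drawn from the uniform distribution U(0, 1), in 10 partitions
val uni = uniformRDD(sc, 1000000L, 10)
// 1 million i.i.d. values drawn from the Poisson distribution with mean 2.0, in 10 partitions
val poi = poissonRDD(sc, 2.0, 1000000L, 10)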
RandomRDDs provides factory methods to generate random double RDDs or vector RDDs. The following example generates a random double RDD whose values follow the standard normal distribution N(0, 1), and then maps it to N(1, 4).
Refer to the RandomRDDs Java docs for details on the API.
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaSparkContext;
import static org.apache.spark.mllib.random.RandomRDDs.*;
JavaSparkContext jsc = ...
// Generate a random double RDD that contains 1 million i.i.d. values drawn from the
// standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions.
JavaDoubleRDD u = normalJavaRDD(jsc, 1000000L, 10);
// Apply a transform to get a random double RDD following `N(1, 4)`.
JavaDoubleRDD v = u.mapToDouble(x -> 1.0 + 2.0 * x);
Kernel density estimation
Kernel density estimation is a technique useful for visualizing empirical probability distributions without requiring assumptions about the particular distribution that the observed samples are drawn from. It computes an estimate of the probability density function of a random variable, evaluated at a given set of points. It achieves this estimate by expressing the PDF of the empirical distribution at a particular point as the mean of PDFs of normal distributions centered around each of the samples.
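Concretely (a standard formula, not specific to spark.mllib), with Gaussian kernels and bandwidth $h$ (the standard deviation set via setBandwidth), the density estimate at a point $x$ from samples $x_1, \ldots, x_n$ is \[ \hat{f}_h(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{\sqrt{2\pi}\, h} \exp\left( -\frac{(x - x_i)^2}{2h^2} \right). \]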
KernelDensity provides methods to compute kernel density estimates from an RDD of samples. The following example demonstrates how to do so.
Refer to the KernelDensity Python docs for more details on the API.
from pyspark.mllib.stat import KernelDensity
# an RDD of sample data
data = sc.parallelize([1.0, 1.0, 1.0, 2.0, 3.0, 4.0, 5.0, 5.0, 6.0, 7.0, 8.0, 9.0, 9.0])
# Construct the density estimator with the sample data and a standard deviation for the Gaussian
# kernels
kd = KernelDensity()
kd.setSample(data)
kd.setBandwidth(3.0)
# Find density estimates for the given values
densities = kd.estimate([-1.0, 2.0, 5.0])
KernelDensity provides methods to compute kernel density estimates from an RDD of samples. The following example demonstrates how to do so.
Refer to the KernelDensity Scala docs for details on the API.
import org.apache.spark.mllib.stat.KernelDensity
import org.apache.spark.rdd.RDD
// an RDD of sample data
val data: RDD[Double] = sc.parallelize(Seq(1, 1, 1, 2, 3, 4, 5, 5, 6, 7, 8, 9, 9))
// Construct the density estimator with the sample data and a standard deviation
// for the Gaussian kernels
val kd = new KernelDensity()
.setSample(data)
.setBandwidth(3.0)
// Find density estimates for the given values
val densities = kd.estimate(Array(-1.0, 2.0, 5.0))
KernelDensity provides methods to compute kernel density estimates from an RDD of samples. The following example demonstrates how to do so.
Refer to the KernelDensity Java docs for details on the API.
import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.stat.KernelDensity;
// an RDD of sample data
JavaRDD<Double> data = jsc.parallelize(
Arrays.asList(1.0, 1.0, 1.0, 2.0, 3.0, 4.0, 5.0, 5.0, 6.0, 7.0, 8.0, 9.0, 9.0));
// Construct the density estimator with the sample data
// and a standard deviation for the Gaussian kernels
KernelDensity kd = new KernelDensity().setSample(data).setBandwidth(3.0);
// Find density estimates for the given values
double[] densities = kd.estimate(new double[]{-1.0, 2.0, 5.0});
System.out.println(Arrays.toString(densities));