public class SparkSession
extends java.lang.Object
implements scala.Serializable
To create a SparkSession, use the following builder pattern:
SparkSession.builder()
  .master("local")
  .appName("Word Count")
  .config("spark.some.config.option", "some-value")
  .getOrCreate()
Modifier and Type | Class and Description
---|---
static class | SparkSession.Builder Builder for SparkSession.
class | SparkSession.implicits$ :: Experimental :: (Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
Modifier and Type | Method and Description
---|---
protected Dataset<Row> | applySchemaToPythonRDD(RDD<java.lang.Object[]> rdd, java.lang.String schemaString) Apply a schema defined by the schemaString to an RDD.
protected Dataset<Row> | applySchemaToPythonRDD(RDD<java.lang.Object[]> rdd, StructType schema) Apply a schema defined by the schema to an RDD.
Dataset<Row> | baseRelationToDataFrame(BaseRelation baseRelation) Convert a BaseRelation created for external data sources into a DataFrame.
static SparkSession.Builder | builder() Creates a SparkSession.Builder for constructing a SparkSession.
protected org.apache.spark.sql.execution.CacheManager | cacheManager()
Catalog | catalog() Interface through which the user may create, drop, alter or query underlying databases, tables, functions, etc.
RuntimeConfig | conf() Runtime configuration interface for Spark.
Dataset<Row> | createDataFrame(JavaRDD<?> rdd, java.lang.Class<?> beanClass) Applies a schema to an RDD of Java Beans.
Dataset<Row> | createDataFrame(JavaRDD<Row> rowRDD, StructType schema)
Dataset<Row> | createDataFrame(java.util.List<?> data, java.lang.Class<?> beanClass) Applies a schema to a List of Java Beans.
Dataset<Row> | createDataFrame(java.util.List<Row> rows, StructType schema)
Dataset<Row> | createDataFrame(RDD<?> rdd, java.lang.Class<?> beanClass) Applies a schema to an RDD of Java Beans.
<A extends scala.Product> Dataset<Row> | createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$1) :: Experimental :: Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
Dataset<Row> | createDataFrame(RDD<Row> rowRDD, StructType schema)
protected Dataset<Row> | createDataFrame(RDD<Row> rowRDD, StructType schema, boolean needsConversion) Creates a DataFrame from an RDD[Row].
<A extends scala.Product> Dataset<Row> | createDataFrame(scala.collection.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$2) :: Experimental :: Creates a DataFrame from a local Seq of Product.
<T> Dataset<T> | createDataset(java.util.List<T> data, Encoder<T> evidence$5)
<T> Dataset<T> | createDataset(RDD<T> data, Encoder<T> evidence$4)
<T> Dataset<T> | createDataset(scala.collection.Seq<T> data, Encoder<T> evidence$3)
protected void | createTempView(java.lang.String viewName, Dataset<Row> df, boolean replaceIfExists) Creates a temporary view with a DataFrame.
Dataset<Row> | emptyDataFrame() :: Experimental :: Returns a DataFrame with no rows or columns.
protected org.apache.spark.sql.execution.QueryExecution | executePlan(org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan)
protected org.apache.spark.sql.execution.QueryExecution | executeSql(java.lang.String sql)
ExperimentalMethods | experimental() :: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
protected org.apache.spark.sql.catalyst.catalog.ExternalCatalog | externalCatalog()
SparkSession.implicits$ | implicits() Accessor for nested Scala object.
protected static void | initializeLogIfNecessary(boolean isInterpreter)
protected Dataset<Row> | internalCreateDataFrame(RDD<org.apache.spark.sql.catalyst.InternalRow> catalystRows, StructType schema) Creates a DataFrame from an RDD[Row].
protected static boolean | isTraceEnabled()
protected org.apache.spark.sql.execution.ui.SQLListener | listener()
ExecutionListenerManager | listenerManager() :: Experimental :: An interface to register custom QueryExecutionListeners that listen for execution metrics.
protected static org.slf4j.Logger | log()
protected static void | logDebug(scala.Function0<java.lang.String> msg)
protected static void | logDebug(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void | logError(scala.Function0<java.lang.String> msg)
protected static void | logError(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void | logInfo(scala.Function0<java.lang.String> msg)
protected static void | logInfo(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static java.lang.String | logName()
protected static void | logTrace(scala.Function0<java.lang.String> msg)
protected static void | logTrace(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void | logWarning(scala.Function0<java.lang.String> msg)
protected static void | logWarning(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
SparkSession | newSession() Start a new session with isolated SQL configurations and temporary tables; registered functions are also isolated, but the underlying SparkContext and cached data are shared.
protected DataType | parseDataType(java.lang.String dataTypeString) Parses the data type in our internal string representation.
protected org.apache.spark.sql.catalyst.plans.logical.LogicalPlan | parseSql(java.lang.String sql)
Dataset<java.lang.Long> | range(long end) :: Experimental :: Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
Dataset<java.lang.Long> | range(long start, long end) :: Experimental :: Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
Dataset<java.lang.Long> | range(long start, long end, long step) :: Experimental :: Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with the given step value.
Dataset<java.lang.Long> | range(long start, long end, long step, int numPartitions) :: Experimental :: Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with the given step value and the specified number of partitions.
DataFrameReader | read() :: Experimental :: Returns a DataFrameReader that can be used to read data and streams in as a DataFrame.
protected org.apache.spark.sql.internal.SessionState | sessionState() State isolated across sessions, including SQL configurations, temporary tables, registered functions, and everything else that accepts a SQLConf.
protected void | setWrappedContext(SQLContext sqlContext)
protected org.apache.spark.sql.internal.SharedState | sharedState() State shared across sessions, including the SparkContext, cached data, listener, and a catalog that interacts with external systems.
SparkContext | sparkContext()
Dataset<Row> | sql(java.lang.String sqlText) Executes a SQL query using Spark, returning the result as a DataFrame.
void | stop() Stop the underlying SparkContext.
ContinuousQueryManager | streams()
Dataset<Row> | table(java.lang.String tableName) Returns the specified table as a DataFrame.
protected Dataset<Row> | table(org.apache.spark.sql.catalyst.TableIdentifier tableIdent)
UDFRegistration | udf() A collection of methods for registering user-defined functions (UDF).
protected SQLContext | wrapped()
public static SparkSession.Builder builder()
Creates a SparkSession.Builder for constructing a SparkSession.

protected static java.lang.String logName()
protected static org.slf4j.Logger log()
protected static void logInfo(scala.Function0<java.lang.String> msg)
protected static void logDebug(scala.Function0<java.lang.String> msg)
protected static void logTrace(scala.Function0<java.lang.String> msg)
protected static void logWarning(scala.Function0<java.lang.String> msg)
protected static void logError(scala.Function0<java.lang.String> msg)
protected static void logInfo(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logDebug(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logTrace(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logWarning(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static void logError(scala.Function0<java.lang.String> msg, java.lang.Throwable throwable)
protected static boolean isTraceEnabled()
protected static void initializeLogIfNecessary(boolean isInterpreter)
public SparkContext sparkContext()
protected org.apache.spark.sql.internal.SharedState sharedState()
State shared across sessions, including the SparkContext, cached data, listener, and a catalog that interacts with external systems.

protected org.apache.spark.sql.internal.SessionState sessionState()
State isolated across sessions, including SQL configurations, temporary tables, registered functions, and everything else that accepts a SQLConf.

protected SQLContext wrapped()
protected void setWrappedContext(SQLContext sqlContext)
protected org.apache.spark.sql.execution.CacheManager cacheManager()
protected org.apache.spark.sql.execution.ui.SQLListener listener()
protected org.apache.spark.sql.catalyst.catalog.ExternalCatalog externalCatalog()
public RuntimeConfig conf()
Runtime configuration interface for Spark.

This is the interface through which the user can get and set all Spark and Hadoop configurations that are relevant to Spark SQL. When getting the value of a config, this defaults to the value set in the underlying SparkContext, if any.
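A minimal sketch of getting and setting a runtime option (assumes an existing SparkSession named sparkSession; the config key shown is a standard Spark SQL setting):

sparkSession.conf.set("spark.sql.shuffle.partitions", "200")
val numShufflePartitions = sparkSession.conf.get("spark.sql.shuffle.partitions")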
public ExecutionListenerManager listenerManager()
:: Experimental ::
An interface to register custom QueryExecutionListeners that listen for execution metrics.
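A minimal sketch of registering a listener (assumes an existing SparkSession named sparkSession; QueryExecutionListener lives in org.apache.spark.sql.util):

import org.apache.spark.sql.execution.QueryExecution
import org.apache.spark.sql.util.QueryExecutionListener

sparkSession.listenerManager.register(new QueryExecutionListener {
  // Called when a query completes successfully; durationNs is the execution time in nanoseconds.
  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit =
    println(s"$funcName finished in ${durationNs / 1e6} ms")
  // Called when a query fails with an exception.
  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit =
    println(s"$funcName failed: ${exception.getMessage}")
})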
public ExperimentalMethods experimental()
public UDFRegistration udf()
A collection of methods for registering user-defined functions (UDF).

The following example registers a Scala closure as UDF:
sparkSession.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)

The following example registers a UDF in Java:
sparkSession.udf().register("myUDF",
    new UDF2<Integer, String, String>() {
      @Override
      public String call(Integer arg1, String arg2) {
        return arg2 + arg1;
      }
    }, DataTypes.StringType);

Or, to use Java 8 lambda syntax:
sparkSession.udf().register("myUDF",
    (Integer arg1, String arg2) -> arg2 + arg1,
    DataTypes.StringType);
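Once registered, the function can be referenced by name from SQL (a minimal sketch; the table name t and columns a and b are illustrative):

sparkSession.sql("SELECT myUDF(a, b) FROM t")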
public ContinuousQueryManager streams()
public SparkSession newSession()
Start a new session with isolated SQL configurations and temporary tables; registered functions are also isolated, but the underlying SparkContext and cached data are shared.

Note: Other than the SparkContext, all shared state is initialized lazily. This method will force the initialization of the shared state to ensure that parent and child sessions are set up with the same shared state. If the underlying catalog implementation is Hive, this will initialize the metastore, which may take some time.
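A minimal sketch of the isolation semantics (assumes an existing SparkSession named sparkSession):

val child = sparkSession.newSession()
child.conf.set("spark.sql.shuffle.partitions", "50")    // not visible to the parent session
assert(child.sparkContext eq sparkSession.sparkContext) // the same underlying SparkContext is shared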
public Dataset<Row> emptyDataFrame()
:: Experimental ::
Returns a DataFrame with no rows or columns.
public <A extends scala.Product> Dataset<Row> createDataFrame(RDD<A> rdd, scala.reflect.api.TypeTags.TypeTag<A> evidence$1)
:: Experimental ::
Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
Parameters:
rdd - (undocumented)
evidence$1 - (undocumented)

public <A extends scala.Product> Dataset<Row> createDataFrame(scala.collection.Seq<A> data, scala.reflect.api.TypeTags.TypeTag<A> evidence$2)
:: Experimental ::
Creates a DataFrame from a local Seq of Product.
Parameters:
data - (undocumented)
evidence$2 - (undocumented)
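A minimal Scala sketch of the Product-based variants (the Person case class and sample data are illustrative; sparkSession is assumed to be an existing SparkSession):

case class Person(name: String, age: Int)
val people = Seq(Person("Alice", 29), Person("Bob", 31))

// From a local Seq of Product
val df1 = sparkSession.createDataFrame(people)
// From an RDD of Product
val df2 = sparkSession.createDataFrame(sparkSession.sparkContext.parallelize(people))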
public Dataset<Row> createDataFrame(RDD<Row> rowRDD, StructType schema)
Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema; otherwise, a runtime exception will be thrown.
Example:
import org.apache.spark.sql._
import org.apache.spark.sql.types._
val sparkSession = SparkSession.builder().getOrCreate()
val schema =
  StructType(
    StructField("name", StringType, false) ::
    StructField("age", IntegerType, true) :: Nil)
val people =
  sparkSession.sparkContext.textFile("examples/src/main/resources/people.txt").map(
    _.split(",")).map(p => Row(p(0), p(1).trim.toInt))
val dataFrame = sparkSession.createDataFrame(people, schema)
dataFrame.printSchema
// root
// |-- name: string (nullable = false)
// |-- age: integer (nullable = true)
dataFrame.createOrReplaceTempView("people")
sparkSession.sql("select name from people").collect.foreach(println)
Parameters:
rowRDD - (undocumented)
schema - (undocumented)

public Dataset<Row> createDataFrame(JavaRDD<Row> rowRDD, StructType schema)
Creates a DataFrame from a JavaRDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema; otherwise, a runtime exception will be thrown.
Parameters:
rowRDD - (undocumented)
schema - (undocumented)

public Dataset<Row> createDataFrame(java.util.List<Row> rows, StructType schema)
Creates a DataFrame from a java.util.List containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided List matches the provided schema; otherwise, a runtime exception will be thrown.
Parameters:
rows - (undocumented)
schema - (undocumented)

public Dataset<Row> createDataFrame(RDD<?> rdd, java.lang.Class<?> beanClass)
Applies a schema to an RDD of Java Beans.

WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
Parameters:
rdd - (undocumented)
beanClass - (undocumented)

public Dataset<Row> createDataFrame(JavaRDD<?> rdd, java.lang.Class<?> beanClass)
Applies a schema to an RDD of Java Beans.

WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
Parameters:
rdd - (undocumented)
beanClass - (undocumented)

public Dataset<Row> createDataFrame(java.util.List<?> data, java.lang.Class<?> beanClass)
Applies a schema to a List of Java Beans.

WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
Parameters:
data - (undocumented)
beanClass - (undocumented)

public Dataset<Row> baseRelationToDataFrame(BaseRelation baseRelation)
Convert a BaseRelation created for external data sources into a DataFrame.
Parameters:
baseRelation - (undocumented)

public <T> Dataset<T> createDataset(scala.collection.Seq<T> data, Encoder<T> evidence$3)
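A minimal sketch of createDataset usage (assumes an existing SparkSession named sparkSession; importing the session's implicits supplies the required Encoder):

import sparkSession.implicits._
val ds = sparkSession.createDataset(Seq(1, 2, 3)) // Dataset[Int] with a single "value" column
ds.map(_ + 1).show()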
public Dataset<java.lang.Long> range(long end)
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
Parameters:
end - (undocumented)

public Dataset<java.lang.Long> range(long start, long end)
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
Parameters:
start - (undocumented)
end - (undocumented)

public Dataset<java.lang.Long> range(long start, long end, long step)
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with the given step value.
Parameters:
start - (undocumented)
end - (undocumented)
step - (undocumented)

public Dataset<java.lang.Long> range(long start, long end, long step, int numPartitions)
:: Experimental ::
Creates a Dataset with a single LongType column named id, containing elements in a range from start to end (exclusive) with the given step value and the specified number of partitions.
Parameters:
start - (undocumented)
end - (undocumented)
step - (undocumented)
numPartitions - (undocumented)
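A minimal sketch of the range variants (assumes an existing SparkSession named sparkSession):

sparkSession.range(5).show()            // id: 0, 1, 2, 3, 4
sparkSession.range(0, 10, 2).show()     // id: 0, 2, 4, 6, 8
sparkSession.range(0, 10, 2, 4).count() // same elements, distributed over 4 partitions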
protected Dataset<Row> internalCreateDataFrame(RDD<org.apache.spark.sql.catalyst.InternalRow> catalystRows, StructType schema)
Creates a DataFrame from an RDD[Row]. The user can specify whether the input rows should be converted to Catalyst rows.
Parameters:
catalystRows - (undocumented)
schema - (undocumented)

protected Dataset<Row> createDataFrame(RDD<Row> rowRDD, StructType schema, boolean needsConversion)
Creates a DataFrame from an RDD[Row]. The user can specify whether the input rows should be converted to Catalyst rows.
Parameters:
rowRDD - (undocumented)
schema - (undocumented)
needsConversion - (undocumented)

public Catalog catalog()
Interface through which the user may create, drop, alter or query underlying databases, tables, functions, etc.
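A minimal sketch of querying the catalog (assumes an existing SparkSession named sparkSession):

sparkSession.catalog.listDatabases().show()
sparkSession.catalog.listTables().show() // tables and temporary views in the current database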
public Dataset<Row> table(java.lang.String tableName)
Returns the specified table as a DataFrame.
Parameters:
tableName - (undocumented)

protected void createTempView(java.lang.String viewName, Dataset<Row> df, boolean replaceIfExists)
Creates a temporary view with a DataFrame. The temporary view is tied to this SparkSession.
Parameters:
viewName - (undocumented)
df - (undocumented)
replaceIfExists - (undocumented)

public Dataset<Row> sql(java.lang.String sqlText)
Executes a SQL query using Spark, returning the result as a DataFrame. The dialect that is used for SQL parsing can be configured with 'spark.sql.dialect'.
Parameters:
sqlText - (undocumented)

public DataFrameReader read()
:: Experimental ::
Returns a DataFrameReader that can be used to read data and streams in as a DataFrame.
sparkSession.read.parquet("/path/to/file.parquet")
sparkSession.read.schema(schema).json("/path/to/file.json")
public SparkSession.implicits$ implicits()
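A minimal sketch of the implicit conversions (assumes an existing SparkSession named sparkSession; toDF and toDS become available after the import):

import sparkSession.implicits._
val df = Seq(("Alice", 29), ("Bob", 31)).toDF("name", "age")
val ds = Seq(1, 2, 3).toDS()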
public void stop()
Stop the underlying SparkContext.
protected org.apache.spark.sql.catalyst.plans.logical.LogicalPlan parseSql(java.lang.String sql)
protected org.apache.spark.sql.execution.QueryExecution executeSql(java.lang.String sql)
protected org.apache.spark.sql.execution.QueryExecution executePlan(org.apache.spark.sql.catalyst.plans.logical.LogicalPlan plan)
protected DataType parseDataType(java.lang.String dataTypeString)
Parses the data type in our internal string representation. The data type string should have the same format as the one generated by toString in Scala. It is only used by PySpark.
Parameters:
dataTypeString - (undocumented)

protected Dataset<Row> applySchemaToPythonRDD(RDD<java.lang.Object[]> rdd, java.lang.String schemaString)
Apply a schema defined by the schemaString to an RDD.
Parameters:
rdd - (undocumented)
schemaString - (undocumented)

protected Dataset<Row> applySchemaToPythonRDD(RDD<java.lang.Object[]> rdd, StructType schema)
Apply a schema defined by the schema to an RDD.
Parameters:
rdd - (undocumented)
schema - (undocumented)