# Using ScalaPB with Spark
## Introduction

By default, Spark uses reflection to derive schemas and encoders from case classes. This doesn't work well when there are messages that contain types that Spark does not understand, such as enums, `ByteString`s and `oneof`s. To get around this, sparksql-scalapb provides its own `Encoder`s for protocol buffers.

However, there is another obstacle: Spark does not provide any mechanism to compose user-provided encoders with its own reflection-derived encoders. Therefore, merely providing an `Encoder` for protocol buffers is insufficient to derive an encoder for regular case classes that contain a protobuf as a field. To solve this problem, ScalaPB uses frameless, which relies on implicit search to derive encoders. This approach makes it possible to combine ScalaPB's encoders with frameless encoders that take care of all non-protobuf types.
## Setting up your project

We are going to use sbt-assembly to deploy a fat JAR containing ScalaPB and your compiled protos. Make sure `project/plugins.sbt` has a line that adds sbt-assembly:
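A minimal sketch of that line follows; the plugin version shown is illustrative, so use a current sbt-assembly release:

```scala
// project/plugins.sbt — adds the sbt-assembly plugin used to build the fat JAR
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "2.1.5")
```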
To add sparksql-scalapb to your project, add the dependency line that matches both the version of ScalaPB and the version of Spark you use:
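For example, such a line might look like the following; the artifact name and version here are illustrative, so check the compatibility table in the ScalaPB documentation for the coordinates matching your Spark and ScalaPB versions:

```scala
// build.sbt — sparksql-scalapb dependency (coordinates vary by Spark/ScalaPB version)
libraryDependencies += "com.thesamet.scalapb" %% "sparksql-scalapb" % "0.11.0"
```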
Known issue: Spark 3.2.1 is binary incompatible with Spark 3.2.0 in some of the internal APIs sparksql-scalapb uses. If you use Spark 3.2.0, please stick to sparksql-scalapb 1.0.0-M1.
Spark ships with an old version of Google's Protocol Buffers runtime that is not compatible with the current version. In addition, it comes with incompatible versions of scala-collection-compat and shapeless. Therefore, we need to shade these libraries. Add the following to your build.sbt:
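The shading rules typically look like the following sketch; the renamed package prefixes (such as `shadeproto`) are arbitrary, and any prefix not used elsewhere in your build works:

```scala
// build.sbt — shade Spark's conflicting copies of protobuf-java,
// scala-collection-compat and shapeless inside the assembled JAR.
assembly / assemblyShadeRules := Seq(
  ShadeRule.rename("com.google.protobuf.**" -> "shadeproto.@1").inAll,
  ShadeRule.rename("scala.collection.compat.**" -> "shadecompat.@1").inAll,
  ShadeRule.rename("shapeless.**" -> "shadeshapeless.@1").inAll
)
```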
See a complete example of `build.sbt`.
## Using sparksql-scalapb

We assume you have a `SparkSession` assigned to the variable `spark`. In a standalone Scala program, it can be created with:
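A minimal sketch; the application name and master are placeholders:

```scala
import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession
  .builder()
  .appName("ScalaPB with Spark example") // placeholder app name
  .master("local[2]")                    // placeholder; use your cluster's master
  .getOrCreate()
```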
IMPORTANT: Ensure you do not import `spark.implicits._`, to avoid ambiguity between the encoders provided by ScalaPB and Spark's default encoders. You may want to import `StringToColumn` to convert `$"col name"` into a `Column`. Import `scalapb.spark.Implicits` to add ScalaPB's encoders for protocol buffers to the implicit search scope:
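For example, following the notes above:

```scala
// ScalaPB's encoders for generated messages, instead of spark.implicits._
import scalapb.spark.Implicits._
// Only the $"..." column syntax from Spark's implicits, to avoid ambiguous encoders.
import spark.implicits.StringToColumn
```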
The code snippets below use the `Person` message.
We start by creating some test data:
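A sketch, assuming the `Person` message has optional `name` and `age` fields; adjust the field names to your actual `.proto`:

```scala
// Test data: a handful of Person messages (field names are assumptions).
val testData = Seq(
  Person().withName("John").withAge(32),
  Person().withName("Paul").withAge(29),
  Person().withName("Ringo").withAge(44)
)
```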
We can create a `DataFrame` from the test data:
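One way is via `ProtoSQL.createDataFrame`; a sketch using the `testData` defined above:

```scala
import scalapb.spark.ProtoSQL

// Builds a DataFrame whose schema is derived from the Person message.
val personsDF = ProtoSQL.createDataFrame(spark, testData)
personsDF.printSchema()
personsDF.show()
```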
and then process it like any other DataFrame in Spark:
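For example, with the column names following the assumed `Person` fields:

```scala
// Regular DataFrame operations work on the proto-derived columns.
personsDF
  .filter($"age" > 30)
  .select($"name")
  .show()
```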
Using the Datasets API, it is possible to bring the data back to ScalaPB case classes:
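Continuing the sketch above:

```scala
// .as[Person] uses the ScalaPB-provided Encoder to map rows back to case classes.
val personsDS = personsDF.as[Person]
personsDS.collect().foreach(println)
```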
You can create a Dataset directly using Spark APIs:
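For example:

```scala
// createDataset picks up the ScalaPB-provided Encoder[Person] from implicit scope.
val directDS = spark.createDataset(testData)
directDS.show()
```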
## From Binary to protos and back

In some situations, you may need to deal with datasets that contain serialized protocol buffers. This can be handled by mapping the datasets through ScalaPB's `parseFrom` and `toByteArray` functions.
Let's start by preparing a dataset with test binary data by mapping our `testData`:
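A sketch, serializing the `testData` defined earlier:

```scala
import org.apache.spark.sql.Dataset

// Each Person is serialized to its protobuf wire format.
val binaryDS: Dataset[Array[Byte]] = spark.createDataset(testData.map(_.toByteArray))
binaryDS.show()
```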
To turn this dataset into a `Dataset[Person]`, we map it through `parseFrom`:
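Continuing from the `binaryDS` sketch above:

```scala
// Deserialize each byte array back into a Person message.
val protosDS: Dataset[Person] = binaryDS.map(bytes => Person.parseFrom(bytes))
protosDS.show()
```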
To turn a dataset of protos back into a `Dataset[Array[Byte]]`, we map it through `toByteArray`:
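And a matching sketch for the reverse direction:

```scala
// Serialize each Person back to bytes.
val bytesDS: Dataset[Array[Byte]] = protosDS.map(_.toByteArray)
```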
## On enums

In SparkSQL-ScalaPB, enums are represented as strings. Unrecognized enum values are represented as strings containing the numeric value.
## Dataframes and Datasets from RDDs
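With ScalaPB's encoders in implicit scope, an RDD of protos can be converted with the usual Spark APIs. A minimal sketch; the RDD below is built from the earlier `testData` purely for illustration:

```scala
import org.apache.spark.rdd.RDD

// An RDD of protos, e.g. produced by an existing pipeline (built here for illustration).
val personsRDD: RDD[Person] = spark.sparkContext.parallelize(testData)

// The ScalaPB Encoder[Person] in implicit scope lets createDataset consume the RDD...
val personsFromRDD = spark.createDataset(personsRDD)

// ...and a DataFrame is one step further.
val personsDFFromRDD = personsFromRDD.toDF()
```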
## UDFs

If you need to write a UDF that returns a message, it would not pick up our encoder, and you may get a runtime failure. To work around this, sparksql-scalapb provides `ProtoSQL.udf` to create UDFs. For example, if you need to parse a binary column into a proto:
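A sketch, assuming a DataFrame `binaryDF` with a binary column named `data` that holds serialized `Person`s; both names are assumptions for illustration:

```scala
import scalapb.spark.ProtoSQL

// ProtoSQL.udf derives the UDF's return schema from the Person message.
val parsePerson = ProtoSQL.udf { bytes: Array[Byte] => Person.parseFrom(bytes) }

// binaryDF and its `data` column are assumed for illustration.
val parsedDF = binaryDF.withColumn("person", parsePerson($"data"))
```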
## Primitive wrappers

In ProtoSQL 0.9.x and 0.10.x, primitive wrappers are represented in Spark as structs with a single field named `value`. A better representation in Spark would be a nullable field of the primitive type. The better representation will be the default in 0.11.x. To enable this representation today, replace the usages of `scalapb.spark.ProtoSQL` with `scalapb.spark.ProtoSQL.withPrimitiveWrappers`.
Instead of importing `scalapb.spark.Implicits._`, import `scalapb.spark.ProtoSQL.implicits._`.
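A sketch of the substitution, reusing the `testData` from earlier; see WrappersSpec below for a complete, authoritative example:

```scala
import scalapb.spark.ProtoSQL

// The wrapper-aware variant represents wrapper fields as nullable primitives.
val wrappersDF = ProtoSQL.withPrimitiveWrappers.createDataFrame(spark, testData)
wrappersDF.printSchema()
```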
See the example in WrappersSpec.
## Datasets and `<none> is not a term`

You will see this error if for some reason Spark's `Encoder`s are being picked up instead of the ones provided by sparksql-scalapb. Please ensure you are not importing `spark.implicits._`. See the instructions above for imports.
## Example

Check out a complete example here.