By default, Spark uses reflection to derive schemas and encoders from case classes. This does not work well for protocol buffer messages that contain types Spark does not understand, such as enums and oneofs. To get around this, sparksql-scalapb provides its own Encoders for protocol buffers.
However, there is another obstacle: Spark does not provide any mechanism to compose user-provided encoders with its own reflection-derived encoders. Therefore, merely providing an Encoder for protocol buffers is insufficient to derive an encoder for a regular case class that contains a protobuf as a field. To solve this problem, ScalaPB uses frameless, which relies on implicit search to derive encoders. This approach makes it possible to combine ScalaPB's encoders with frameless encoders that take care of all non-protobuf types.
The version of sparksql-scalapb needs to match the versions of Spark and ScalaPB you are using:
We are going to use sbt-assembly to deploy a fat JAR containing ScalaPB, and your compiled protos. Make sure in project/plugins.sbt you have a line that adds sbt-assembly:
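For example, a typical project/plugins.sbt entry looks like this (the version shown is only a placeholder; use a recent sbt-assembly release):

```scala
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "2.1.1") // or any recent version
```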
In build.sbt, add a dependency on sparksql-scalapb:
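A minimal sketch of the dependency; the version below is a placeholder, so pick the release that matches your Spark and ScalaPB versions:

```scala
// build.sbt -- placeholder version; use the release matching your Spark/ScalaPB setup
libraryDependencies += "com.thesamet.scalapb" %% "sparksql-scalapb" % "0.10.0"
```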
Spark ships with an old version of Google's Protocol Buffers runtime that is not compatible with the current version. Therefore, we need to shade our copy of the Protocol Buffer runtime. Spark 3 also ships with an incompatible version of scala-collection-compat. Add the following to your build.sbt:
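A sketch of the shading configuration, assuming sbt-assembly's slash syntax; the shadeproto and scalacompat prefixes are arbitrary names:

```scala
// build.sbt -- shade the bundled protobuf runtime and scala-collection-compat
// so they do not clash with the older copies that ship with Spark
assembly / assemblyShadeRules := Seq(
  ShadeRule.rename("com.google.protobuf.**" -> "shadeproto.@1").inAll,
  ShadeRule.rename("scala.collection.compat.**" -> "scalacompat.@1").inAll
)
```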
We assume you have a
SparkSession assigned to the variable
spark. In a standalone Scala program, this can be created with:
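For example (the application name and master are only illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession
  .builder()
  .appName("ScalaPB with Spark") // illustrative name
  .master("local[2]")            // for local testing; omit when submitting to a cluster
  .getOrCreate()
```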
IMPORTANT: Ensure you do not import spark.implicits._, to avoid ambiguity between the encoders provided by ScalaPB and Spark's default encoders. You may want to import StringToColumn to convert $"col name" into a Column. Import scalapb.spark.Implicits to bring ScalaPB's encoders for protocol buffers into the implicit search scope:
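A typical set of imports therefore looks like this:

```scala
// Note: no `import spark.implicits._` here
import spark.implicits.StringToColumn // enables the $"column name" syntax
import scalapb.spark.Implicits._      // ScalaPB encoders for protocol buffers
```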
The code snippets below use a ScalaPB-generated Person message.
We start by creating some test data:
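For example, assuming the generated Person message has name and age fields (adjust to whatever your .proto actually defines):

```scala
// Hypothetical fields -- the exact field names come from your .proto definition
val testData = Seq(
  Person().update(_.name := "John", _.age := 32),
  Person().update(_.name := "Teresa", _.age := 27)
)
```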
We can create a
DataFrame from the test data:
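One way to do this is with ProtoSQL.createDataFrame:

```scala
import scalapb.spark.ProtoSQL

val df = ProtoSQL.createDataFrame(spark, testData)
df.printSchema()
```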
and then process it as any other DataFrame in Spark:
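For example, using the name and age fields assumed above:

```scala
df.select($"name", $"age")
  .where($"age" > 30)
  .show()
```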
Using the datasets API it is possible to bring the data back to ScalaPB case classes:
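A sketch, relying on the Person encoder provided by scalapb.spark.Implicits._:

```scala
val personsDS = df.as[Person]
personsDS.collect().foreach(println)
```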
You can create a Dataset directly using Spark APIs:
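For example:

```scala
val ds = spark.createDataset(testData) // picks up ScalaPB's Encoder[Person]
ds.show()
```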
In some situations, you may need to deal with datasets that contain serialized protocol buffers. This can be handled by mapping the datasets through ScalaPB's generated parseFrom and toByteArray methods.
Let's start by preparing a dataset with test binary data by mapping our test data through toByteArray:
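For example, using the testData sequence defined earlier:

```scala
import org.apache.spark.sql.Dataset

val binaryDS: Dataset[Array[Byte]] = spark.createDataset(testData.map(_.toByteArray))
```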
To turn this dataset into a Dataset[Person], we map it through Person.parseFrom:
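Continuing from the binaryDS sketch above:

```scala
val protosDS: Dataset[Person] = binaryDS.map(Person.parseFrom(_))
```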
Conversely, to turn a dataset of protos into a dataset of serialized byte arrays, map it through toByteArray:
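For example:

```scala
val protosBinary: Dataset[Array[Byte]] = protosDS.map(_.toByteArray)
```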
In SparkSQL-ScalaPB, enums are represented as strings. Unrecognized enum values are represented as strings containing the numeric value.
If you need to write a UDF that returns a message, Spark's standard udf function will not pick up our encoders and you may get a runtime failure. To work around this, sparksql-scalapb provides ProtoSQL.udf for creating UDFs. For example, if you need to parse a binary column into a proto:
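A sketch, assuming a DataFrame binaryDF with a binary column named data holding serialized Person messages (both names are hypothetical):

```scala
// Build a UDF that deserializes bytes into Person using ProtoSQL.udf
val parsePersonFromBytes = ProtoSQL.udf { bytes: Array[Byte] => Person.parseFrom(bytes) }

// Apply it to the hypothetical "data" column
val parsedDF = binaryDF.withColumn("person", parsePersonFromBytes($"data"))
```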
In ProtoSQL 0.9.x and 0.10.x, primitive wrappers are represented in Spark as structs with a single field named value. A better representation in Spark would be a nullable field of the primitive type, and it will become the default in 0.11.x. To enable this representation today, replace the usages of ProtoSQL with ProtoSQL.withPrimitiveWrappers, and import its implicits instead of scalapb.spark.Implicits.
See example in WrappersSpec.
<none> is not a term
You will see this error if for some reason Spark's
Encoders are being picked up
instead of the ones provided by sparksql-scalapb. Please ensure you are not importing
spark.implicits._. See instructions above for imports.
Check out a complete example here.