I'm trying to follow the guide, but I cannot execute the notebook in a pipeline.
I created a notebook with these statements:
%%spark
import org.apache.spark.sql.DataFrameReader
spark.sql("CREATE DATABASE IF NOT EXISTS nyctaxi")
val df = spark.read.sqlanalytics("devoteamnlwsSQL.dbo.Trip")
df.write.mode("overwrite").saveAsTable("nyctaxi.trip")
The notebook works fine on its own, but if I try to debug or trigger a pipeline with a Synapse Notebook activity that calls the notebook, I get the following error:
error: value sqlanalytics is not a member of org.apache.spark.sql.DataFrameReader
Thanks
@zincob - Thanks for posting the issue. We are looking into it and will get back to you shortly.
@zincob - As per this document, when running the sqlanalytics command from a pipeline, the module import step needs to be included in the notebook, as shown below.
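A minimal sketch of the notebook cell with the imports included, assuming the package names documented for the Synapse Spark-to-SQL (sqlanalytics) connector; verify them against the linked document for your Spark pool version:

%%spark
// Import the connector explicitly so the implicit sqlanalytics extension
// on DataFrameReader is available when the notebook runs from a pipeline
// (assumed package names from the Synapse Spark-to-SQL connector docs).
import com.microsoft.spark.sqlanalytics.utils.Constants
import org.apache.spark.sql.SqlAnalyticsConnector._

spark.sql("CREATE DATABASE IF NOT EXISTS nyctaxi")

// Read from the dedicated SQL pool table and persist it as a Spark table.
val df = spark.read.sqlanalytics("devoteamnlwsSQL.dbo.Trip")
df.write.mode("overwrite").saveAsTable("nyctaxi.trip")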

Thanks for the feedback, the pipeline is working now!
I strongly suggest including the document you shared in the tutorial.
IMHO, the tutorial doesn't contain all the information required to follow it successfully.
@zincob - Thanks for your contribution. I've addressed this issue in PR MicrosoftDocs/azure-docs-pr#134761. The changes should be live by EOD or tomorrow.