Commit 4405080

Renaming file to match the 3-11 date.
Changing the class name in the sample code to match the new open namespaces.
1 parent a964f3c commit 4405080

1 file changed: +3 −3 lines
_posts/2021-03-11-introducing-sql-delta-import.md

Lines changed: 3 additions & 3 deletions
@@ -26,7 +26,7 @@ Importing data into a Delta Lake table is as easy as
 
 ```shell script
 spark-submit /
---class "com.scribd.importer.spark.ImportRunner" sql-delta-import_2.12-0.2.1-SNAPSHOT.jar /
+--class "io.delta.connectors.spark.JDBC.ImportRunner" sql-delta-import_2.12-0.2.1-SNAPSHOT.jar /
 --jdbc-url jdbc:mysql://hostName:port/database /
 --source source.table
 --destination destination.table
@@ -49,7 +49,7 @@ optimize data storage for best performance on reads by just adding a couple of c
 spark-submit /
 --conf spark.databricks.delta.optimizeWrite.enabled=true /
 --conf spark.databricks.delta.autoCompact.enabled=true /
---class "com.scribd.importer.spark.ImportRunner" sql-delta-import_2.12-0.2.1-SNAPSHOT.jar /
+--class "io.delta.connectors.spark.JDBC.ImportRunner" sql-delta-import_2.12-0.2.1-SNAPSHOT.jar /
 --jdbc-url jdbc:mysql://hostName:port/database /
 --source source.table
 --destination destination.table
@@ -72,7 +72,7 @@ concurrency thus allowing you to tune those parameters independently
 spark-submit --num-executors 15 --executor-cores 4 /
 --conf spark.databricks.delta.optimizeWrite.enabled=true /
 --conf spark.databricks.delta.autoCompact.enabled=true /
---class "com.scribd.importer.spark.ImportRunner" sql-delta-import_2.12-0.2.1-SNAPSHOT.jar /
+--class "io.delta.connectors.spark.JDBC.ImportRunner" sql-delta-import_2.12-0.2.1-SNAPSHOT.jar /
 --jdbc-url jdbc:mysql://hostName:port/database /
 --source source.table
 --destination destination.table
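
For reference, a minimal sketch of what the first example in the post reads like after this change; only the `--class` value differs from before. The host, port, database, and table names are the placeholders used in the post, and the forward-slash line continuations from the original are written here as backslashes so the command can be pasted into a POSIX shell.

```shell script
# Sketch assembled from the hunks above; adjust the JDBC URL, table names,
# and jar path for your environment.
spark-submit \
  --class "io.delta.connectors.spark.JDBC.ImportRunner" sql-delta-import_2.12-0.2.1-SNAPSHOT.jar \
  --jdbc-url jdbc:mysql://hostName:port/database \
  --source source.table \
  --destination destination.table
```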
