Commit 57ffefb

MarkDaoust authored and copybara-github committed
Add validation_steps to fix Colab.
PiperOrigin-RevId: 324701287
1 parent c0c7ef6 commit 57ffefb

File tree

1 file changed

+6
-3
lines changed


site/en/guide/tpu.ipynb

+6 -3
@@ -354,14 +354,16 @@
 "\n",
 "batch_size = 200\n",
 "steps_per_epoch = 60000 // batch_size\n",
+"validation_steps = 10000 // batch_size\n",
 "\n",
 "train_dataset = get_dataset(batch_size, is_training=True)\n",
 "test_dataset = get_dataset(batch_size, is_training=False)\n",
 "\n",
 "model.fit(train_dataset,\n",
 " epochs=5,\n",
 " steps_per_epoch=steps_per_epoch,\n",
-" validation_data=test_dataset)"
+" validation_data=test_dataset, \n",
+" validation_steps=validation_steps)"
 ]
 },
 {
@@ -371,7 +373,7 @@
 "id": "8hSGBIYtUugJ"
 },
 "source": [
-"To reduce python overhead, and maximize the performance of your TPU, try out the **experimental** `experimental_steps_per_execution` argument to `Model.compile`. Here it approximately **doubles** the throughput:"
+"To reduce python overhead, and maximize the performance of your TPU, try out the **experimental** `experimental_steps_per_execution` argument to `Model.compile`. Here it increases throughput by about 50%:"
 ]
 },
 {
@@ -395,7 +397,8 @@
 "model.fit(train_dataset,\n",
 " epochs=5,\n",
 " steps_per_epoch=steps_per_epoch,\n",
-" validation_data=test_dataset)"
+" validation_data=test_dataset,\n",
+" validation_steps=validation_steps)"
 ]
 },
 {
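The added `validation_steps` line matters because Keras needs an explicit step count whenever it cannot infer the length of the validation dataset; one validation pass is simply the dataset size divided by the batch size. A minimal sketch of that arithmetic, assuming the MNIST-style sizes used in the notebook (60,000 training and 10,000 test examples):

```python
# Sketch of the arithmetic behind the added `validation_steps` line.
# Keras needs an explicit step count when it cannot infer the length of
# the validation dataset; one full pass is dataset_size // batch_size.
# The 60000/10000 sizes are assumed from the notebook's code.
batch_size = 200

steps_per_epoch = 60000 // batch_size    # training steps per epoch
validation_steps = 10000 // batch_size   # steps in one validation pass

print(steps_per_epoch, validation_steps)  # prints "300 50"
```

These values are then passed through to `model.fit(..., steps_per_epoch=steps_per_epoch, validation_steps=validation_steps)`, matching the `+` lines in the diff above.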
