@@ -51,18 +51,19 @@ dataset for Oxford-IIIT Pets lives
 [here](http://www.robots.ox.ac.uk/~vgg/data/pets/). You will need to download
 both the image dataset [`images.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz)
 and the groundtruth data [`annotations.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz)
-to the `tensorflow/models` directory and unzip them. This may take some time.
+to the `tensorflow/models/research/` directory and unzip them. This may take
+some time.
 
 ```bash
-# From tensorflow/models/
+# From tensorflow/models/research/
 wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
 wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
 tar -xvf images.tar.gz
 tar -xvf annotations.tar.gz
 ```
 
-After downloading the tarballs, your `tensorflow/models` directory should appear
-as follows:
+After downloading the tarballs, your `tensorflow/models/research/` directory
+should appear as follows:
 
 ```lang-none
 - images.tar.gz
@@ -76,10 +77,10 @@ as follows:
 The Tensorflow Object Detection API expects data to be in the TFRecord format,
 so we'll now run the `create_pet_tf_record` script to convert from the raw
 Oxford-IIIT Pet dataset into TFRecords. Run the following commands from the
-`tensorflow/models` directory:
+`tensorflow/models/research/` directory:
 
 ```bash
-# From tensorflow/models/
+# From tensorflow/models/research/
 python object_detection/create_pet_tf_record.py \
     --label_map_path=object_detection/data/pet_label_map.pbtxt \
     --data_dir=`pwd` \
@@ -90,14 +91,14 @@ Note: It is normal to see some warnings when running this script. You may ignore
 them.
 
 Two TFRecord files named `pet_train.record` and `pet_val.record` should be
-generated in the `tensorflow/models` directory.
+generated in the `tensorflow/models/research/` directory.
 
 Now that the data has been generated, we'll need to upload it to Google Cloud
 Storage so the data can be accessed by ML Engine. Run the following command to
 copy the files into your GCS bucket (substituting `${YOUR_GCS_BUCKET}`):
 
 ```bash
-# From tensorflow/models/
+# From tensorflow/models/research/
 gsutil cp pet_train.record gs://${YOUR_GCS_BUCKET}/data/pet_train.record
 gsutil cp pet_val.record gs://${YOUR_GCS_BUCKET}/data/pet_val.record
 gsutil cp object_detection/data/pet_label_map.pbtxt gs://${YOUR_GCS_BUCKET}/data/pet_label_map.pbtxt
@@ -145,7 +146,7 @@ upload your edited file onto GCS, making note of the path it was uploaded to
 (we'll need it when starting the training/eval jobs).
 
 ```bash
-# From tensorflow/models/
+# From tensorflow/models/research/
 
 # Edit the faster_rcnn_resnet101_pets.config template. Please note that there
 # are multiple places where PATH_TO_BE_CONFIGURED needs to be set.
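For reference, one way to fill in the `PATH_TO_BE_CONFIGURED` placeholders mentioned above is a `sed` substitution; the GCS data path below is an illustrative assumption and is not part of this change:

```bash
# Illustrative sketch only: point PATH_TO_BE_CONFIGURED at your GCS data directory.
# The bucket path and config filename here are assumptions for the sketch.
sed -i "s|PATH_TO_BE_CONFIGURED|gs://${YOUR_GCS_BUCKET}/data|g" \
    object_detection/samples/configs/faster_rcnn_resnet101_pets.config
```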
@@ -187,10 +188,10 @@ Before we can start a job on Google Cloud ML Engine, we must:
 2. Write a cluster configuration for our Google Cloud ML job.
 
 To package the Tensorflow Object Detection code, run the following commands from
-the `tensorflow/models/` directory:
+the `tensorflow/models/research/` directory:
 
 ```bash
-# From tensorflow/models/
+# From tensorflow/models/research/
 python setup.py sdist
 (cd slim && python setup.py sdist)
 ```
@@ -202,11 +203,11 @@ For running the training Cloud ML job, we'll configure the cluster to use 10
 training jobs (1 master + 9 workers) and three parameter servers. The
 configuration file can be found at `object_detection/samples/cloud/cloud.yml`.
 
-To start training, execute the following command from the `tensorflow/models/`
-directory:
+To start training, execute the following command from the
+`tensorflow/models/research/` directory:
 
 ```bash
-# From tensorflow/models/
+# From tensorflow/models/research/
 gcloud ml-engine jobs submit training `whoami`_object_detection_`date +%s` \
     --job-dir=gs://${YOUR_GCS_BUCKET}/train \
     --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
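The contents of `cloud.yml` are not shown in this change; as a rough sketch of a CUSTOM-tier cluster matching the 1 master + 9 workers + 3 parameter servers description above, it might look roughly like the following (machine types and runtime version are assumptions, so compare against the sample file in the repository before using):

```bash
# Sketch only: write a cluster config matching the description above.
# Machine types and runtimeVersion are assumptions, not taken from this change;
# check object_detection/samples/cloud/cloud.yml for the actual values.
cat > cloud.yml <<'EOF'
trainingInput:
  runtimeVersion: "1.0"
  scaleTier: CUSTOM
  masterType: standard_gpu
  workerCount: 9
  workerType: standard_gpu
  parameterServerCount: 3
  parameterServerType: standard
EOF
```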
@@ -221,7 +222,7 @@ gcloud ml-engine jobs submit training `whoami`_object_detection_`date +%s` \
 Once training has started, we can run an evaluation concurrently:
 
 ```bash
-# From tensorflow/models/
+# From tensorflow/models/research/
 gcloud ml-engine jobs submit training `whoami`_object_detection_eval_`date +%s` \
     --job-dir=gs://${YOUR_GCS_BUCKET}/train \
     --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
@@ -288,10 +289,10 @@ three files:
 * `model.ckpt-${CHECKPOINT_NUMBER}.meta`
 
 After you've identified a candidate checkpoint to export, run the following
-command from `tensorflow/models`:
+command from `tensorflow/models/research/`:
 
 ```bash
-# From tensorflow/models
+# From tensorflow/models/research/
 gsutil cp gs://${YOUR_GCS_BUCKET}/train/model.ckpt-${CHECKPOINT_NUMBER}.* .
 python object_detection/export_inference_graph.py \
     --input_type image_tensor \