tutorials/03-advanced/image_captioning/README.md
2 additions & 2 deletions
@@ -2,8 +2,8 @@
 This is a Pytorch implementation of the OBJ2TEXT-YOLO + CNN-RNN image captioning model
 proposed in the paper [OBJ2TEXT: Generating Visually Descriptive Language from Object Layouts
 ](https://arxiv.org/abs/1707.07102). The Torch implementation can be found [here](https://github.com/uvavision/obj2text-neuraltalk2).
-Note that I have changed the default image transformation operations from `transforms.Compose([transforms.RandomCrop(args.crop_size), transforms.RandomHorizontalFlip(), ...` to
-`transforms.Compose([transforms.Scale(args.crop_size), ...`. For more information please visit [the project page](http://www.cs.virginia.edu/~xy4cm/obj2text/).
+Note that I have changed the default image transformation operations from `[RandomCrop(args.crop_size), RandomHorizontalFlip(), ...]` to
+`[Scale(args.crop_size), ...]`. For more information please visit [the project page](http://www.cs.virginia.edu/~xy4cm/obj2text/).