Commit ae72623: Create README.md (parent c39eea2)

1 file changed: 42 additions, 0 deletions

## Usage

#### 1. Clone the repositories
```bash
$ git clone https://github.com/pdollar/coco.git
$ git clone https://github.com/yunjey/pytorch-tutorial.git
$ cd "pytorch-tutorial/tutorials/09 - Image Captioning"
```

#### 2. Download the dataset

```bash
$ pip install -r requirements.txt
$ chmod +x download.sh
$ ./download.sh
```

#### 3. Preprocessing

```bash
$ python vocab.py
```
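`vocab.py` builds a word-to-index vocabulary from the training captions before the model is trained. A minimal pure-Python sketch of the idea (the class name, special tokens, and `threshold` parameter follow common convention and are illustrative, not the tutorial's exact code):

```python
from collections import Counter

class Vocabulary:
    """Maps words to integer indices, reserving special tokens."""
    def __init__(self):
        self.word2idx = {}
        self.idx2word = {}
        for token in ('<pad>', '<start>', '<end>', '<unk>'):
            self.add_word(token)

    def add_word(self, word):
        if word not in self.word2idx:
            idx = len(self.word2idx)
            self.word2idx[word] = idx
            self.idx2word[idx] = word

    def __call__(self, word):
        # Words never seen (or too rare) map to the <unk> index.
        return self.word2idx.get(word, self.word2idx['<unk>'])

    def __len__(self):
        return len(self.word2idx)

def build_vocab(captions, threshold=2):
    """Keep only words that appear at least `threshold` times."""
    counter = Counter(w for caption in captions for w in caption.lower().split())
    vocab = Vocabulary()
    for word, count in counter.items():
        if count >= threshold:
            vocab.add_word(word)
    return vocab
```

Discarding rare words keeps the decoder's softmax small; anything below the threshold is collapsed into `<unk>` at caption-encoding time.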

#### 4. Train the model

```bash
$ python train.py
```
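`train.py` trains a CNN encoder plus LSTM decoder with teacher forcing: at each step the decoder is fed the ground-truth previous word and is trained to predict the next one. A small pure-Python sketch of how a tokenized caption becomes decoder inputs and targets (the `<start>`/`<end>` tokens follow the usual convention; this is an illustration, not the tutorial's code):

```python
def make_teacher_forcing_pairs(tokens):
    """Build (input, target) sequences for next-word prediction.

    The decoder sees <start> followed by the ground-truth words,
    and must predict the caption shifted one position left,
    ending in <end>.
    """
    inputs = ['<start>'] + tokens
    targets = tokens + ['<end>']
    return inputs, targets
```

The cross-entropy loss is then computed between the decoder's output at each position and the corresponding target word.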

#### 5. Generate captions

To generate captions for the MSCOCO validation dataset, see [evaluate_model.ipynb](https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/09%20-%20Image%20Captioning/evaluate_model.ipynb). To generate a caption for a custom image file, run the command below.

```bash
$ python sample.py --image=sample_image.jpg
```
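`sample.py` encodes the image with the CNN and then decodes a caption greedily: at each step it takes the highest-scoring next word and stops at `<end>`. A minimal sketch of greedy decoding with a pluggable `step_fn` standing in for the real LSTM step (the function names here are illustrative, not the tutorial's code):

```python
def greedy_decode(step_fn, max_len=20):
    """Decode a caption by repeatedly taking the argmax next word.

    `step_fn(prev_word)` returns a dict mapping candidate words to
    scores; decoding stops at '<end>' or after `max_len` steps.
    """
    caption = []
    word = '<start>'
    for _ in range(max_len):
        scores = step_fn(word)
        word = max(scores, key=scores.get)
        if word == '<end>':
            break
        caption.append(word)
    return caption
```

Greedy search is the simplest decoding strategy; beam search, which keeps the top-k partial captions at each step, usually yields better captions at the cost of more computation.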

<br>

## Pretrained model

If you do not want to train the model yourself, you can use a pretrained model. I have provided the pretrained model as a zip file; you can download it [here](https://www.dropbox.com/s/cngzozkk73imjdh/trained_model.zip?dl=0) and extract it into the `model` directory.
