
Commit cc55c5b

updated readme with GPU info
1 parent 6a61ffc commit cc55c5b

File tree

1 file changed: +26 -17 lines changed


README.md

Lines changed: 26 additions & 17 deletions
@@ -11,10 +11,10 @@ You could potentially do steps 1-2 in openframeworks as well, but it seems a lot
*(TBH Since training deep learning models takes so long and is very non-realtime, I don't think it makes too much sense to put yourself through the pain of implementing models in a syntactically tortuous language like C++, when C-backed, highly optimized, often GPU-based python front ends are available for building and training models. However, once a model is trained, linking it to all kinds of other bits in realtime in an openframeworks-like environment is where the fun's at!)*

-**Note**: The pre-compiled library I provide is for **Linux only**, though building for OSX should be very simple (I just don't have a Mac right now). Windows might be a bit more of a pain since Bazel (the build platform) is *nix only - and would involve either porting Bazel, or rebuilding make/cmake files.
-The project files for the examples are for **QTCreator**, so should work on all platforms out of the box? But anyways it's just one library and 3 header include files, so setting up other IDEs should be very simple.
+**Note**: The pre-compiled library I provide is for **Linux only**, and I've provided libraries for both **CPU** and **GPU**. Building for OSX should be very simple; I just don't have a Mac right now. (Tensorflow currently supports GPU only on Linux.) Windows might be a bit more of a pain since Bazel (the build platform) is *nix-only, and would involve either porting Bazel or rebuilding make/cmake files.
+The project files for the examples are for **QTCreator**, so they should work on all platforms out of the box. But anyway, it's just one library and a few header include files, so setting up other IDEs should be very simple.

-Since this is such an early version of the addon, I'll probably break backwards compatibility with new updates. Sorry about that!
+Since this is such an early version of the addon, I'll probably break backwards compatibility with new updates. Sorry about that!

And there are a number of issues which I'll mention at the end.
@@ -26,15 +26,15 @@ I have a bunch more half-finished examples which I need to tidy up. In the meant
The hello world (no not MNIST, that comes next). Build a graph in python that multiplies two numbers. Load the graph in openframeworks and hey presto. 100s of lines of code, just to build a simple multiplication function.
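*(For illustration only, a minimal sketch of what such a graph-building script could look like with the 0.6-era Python API. The tensor names and output path are made up for the example, and the exact export call may differ slightly between TensorFlow versions.)*

    # Sketch: build a graph that multiplies two numbers and write it to disk
    # so the serialized GraphDef can be loaded from the openframeworks/C++ side.
    import tensorflow as tf

    a = tf.placeholder(tf.float32, name='a')   # first input
    b = tf.placeholder(tf.float32, name='b')   # second input
    c = tf.mul(a, b, name='c')                 # tf.mul in the 0.6-era API (later renamed tf.multiply)

    with tf.Session() as sess:
        # write the graph as a binary protobuf (.pb) that the C++ API can read
        tf.train.write_graph(sess.graph_def, 'models', 'multiply.pb', as_text=False)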

## example-mnist
-MNIST clasffication with two different models - shallow and deep. Both models are built and trained in python (in bin/py folder). Loaded, manipulated and interacted with in openframeworks.
+MNIST classification with two different models - shallow and deep. Both models are built and trained in python (in the bin/py folder), then loaded, manipulated and interacted with in openframeworks. Comment/uncomment the #define GO_DEEP line at the top of the .cpp to switch between the two.
![](https://cloud.githubusercontent.com/assets/144230/12665280/8fa4612a-c62e-11e5-950e-eaec14d4211d.png)

#### Single layer softmax regression:
-Very simple, quick'n'easy but not very good. Trains in seconds. Accuracy on validation ~90%.
+Very simple multinomial logistic regression. Quick'n'easy but not very good. Trains in seconds. Accuracy on test set ~90%.
Implementation of https://www.tensorflow.org/versions/0.6.0/tutorials/mnist/beginners/index.html
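*(As a rough sketch of what the linked tutorial describes, the core of that shallow model is only a few lines of the 0.6-era Python API. Variable names follow the tutorial; the training loop is omitted.)*

    # Sketch of the single-layer softmax regression from the MNIST beginners tutorial
    import tensorflow as tf

    x  = tf.placeholder(tf.float32, [None, 784])    # flattened 28x28 input images
    W  = tf.Variable(tf.zeros([784, 10]))           # weights
    b  = tf.Variable(tf.zeros([10]))                # biases
    y  = tf.nn.softmax(tf.matmul(x, W) + b)         # predicted class probabilities
    y_ = tf.placeholder(tf.float32, [None, 10])     # one-hot ground truth labels

    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))  # loss
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)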

#### Deep(ish) Convolutional Neural Network:
-Conv layers, maxpools, RELU's etc. Slower and heavier than above, but much better. Trains in a few minutes (on CPU). Accuracy 99.2%
+Basic convolutional neural network, very similar to LeNet. Conv layers, maxpools, ReLUs etc. Slower and heavier than the above, but much better. Trains in a few minutes (on CPU). Accuracy 99.2%.
Implementation of https://www.tensorflow.org/versions/0.6.0/tutorials/mnist/pros/index.html#build-a-multilayer-convolutional-network
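*(Again only a sketch of the linked tutorial, not the exact bin/py script: one convolution + max-pool block of that deeper model looks roughly like this. Names and initialisation values follow the tutorial; the remaining layers and training loop are omitted.)*

    # Sketch: first conv + max-pool block from the MNIST "pros" tutorial
    import tensorflow as tf

    x_image = tf.placeholder(tf.float32, [None, 28, 28, 1])                 # input images
    W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))   # 5x5 kernels, 32 filters
    b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]))

    h_conv1 = tf.nn.relu(tf.nn.conv2d(x_image, W_conv1,
                                      strides=[1, 1, 1, 1], padding='SAME') + b_conv1)
    h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1],
                             strides=[1, 2, 2, 1], padding='SAME')           # 28x28 -> 14x14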

@@ -62,7 +62,7 @@ Get openframeworks for linux (download or clone repo) http://openframeworks.cc/d
Follow instructions on setting it up, dependencies, compiling etc. http://openframeworks.cc/setup/linux-install/

## Get QT Creator IDE
-I've supplied project files for QT Creator IDE. So quickest way to get up and running is to use that (I'd never used it before, but so far it looks pretty decent). Download and install the QT Creator IDE http://openframeworks.cc/setup/qtcreator/
+I've supplied project files for the QT Creator IDE, so the quickest way to get up and running is to use that. I'd never used it before, but I'm really liking it so far. (Note: you don't need the Qt SDK, just the IDE.) Download and install the QT Creator IDE: http://openframeworks.cc/setup/qtcreator/
It shouldn't be too hard to set up new projects for other IDEs. More on this below.

@@ -71,14 +71,10 @@ Download or clone the repo ofxMSATensorFlow into your openframeworks/addons fold
https://github.com/memo/ofxMSATensorFlow

## Download binaries
-**Important**: You need the precompiled library, and data for the examples. I don't include these in the repo as they're huge. You can find them zipped up in the Releases section. The 'exdata' contains data for the examples. Copy the files to their corresponding folders. (e.g. from *downloaded/examples-mnist/bin/data/model-deep/* to *ofxMSATensorFlow/example-mnist/bin/data/model-deep/*).
-And make sure to download the lib for your platform (currently only linux64).
+**Important**: You need the precompiled library and the data for the examples. I don't include these in the repo as they're huge. You can find them zipped up in the Releases section of this repo. Copy the files to their corresponding folders, e.g. from example-mnist-data.tar.gz/data to ofxMSATensorFlow/example-mnist/data, etc. And make sure to download the lib for your platform (currently only linux64), e.g. to ofxMSATensorFlow/libs/tensorflow/lib/linux64/libtensorflow_cc.so (GPU instructions below).

https://github.com/memo/ofxMSATensorFlow/releases

## Set your library folder
I made the library a shared library (.so) instead of static (.a) because it's huge! (340MB for debug.)
It was easier this way; can think about alternatives for the future.
@@ -95,6 +91,15 @@ Save and close. Then in the terminal again type
    sudo ldconfig

+## GPU
+
+If you want to use your GPU (currently Linux only) you need to:
+
+1. Install CUDA and cuDNN: https://www.tensorflow.org/versions/0.6.0/get_started/os_setup.html#install_cuda
+2. Use the pre-compiled GPU library I provide instead of the CPU one. I.e. from the releases tab of this repo, download the zip ofxMSATensorFlow_lib_linux64_GPU and copy its contents to ofxMSATensorFlow/libs/tensorflow/lib/linux64/libtensorflow_cc.so. (Note that the library has the same name, but is much larger: 136MB vs 42MB for release.)
+
+The above will automatically switch to using GPU implementations of all operations where possible. For more intricate control (e.g. on multi-GPU systems), see https://www.tensorflow.org/versions/0.6.0/how_tos/using_gpu/index.html
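*(The "more intricate control" that page describes is done with explicit device placement when building the graph in Python; a minimal sketch, with made-up constants, might look like this.)*

    # Sketch: pin ops to a device and log where they actually run.
    # Device strings such as '/gpu:0' follow the TensorFlow "using GPUs" how-to.
    import tensorflow as tf

    with tf.device('/gpu:0'):                   # request the first GPU for these ops
        a = tf.constant([1.0, 2.0], name='a')
        b = tf.constant([3.0, 4.0], name='b')
        c = a * b

    config = tf.ConfigProto(
        allow_soft_placement=True,              # fall back to CPU if an op has no GPU kernel
        log_device_placement=True)              # print the device chosen for each op

    with tf.Session(config=config) as sess:
        print(sess.run(c))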

## THAT'S IT!!! (See my notes section at the end for caveats)
You can open projects from QT Creator by selecting the .qbs file, edit, run etc.
@@ -164,7 +169,7 @@ On the tensorflow website 'build from sources section' there are instructions to
*(Note, due to various reasons, I had to do sudo pip install for the last command)*

-**This is the important step to rebuld the c++ lib. Go to the root of your tensorflow folder and type**
+**This is the important step to rebuild the c++ lib. Go to the root of your tensorflow folder and type**

    # for optimized (release) lib (~42MB)
    bazel build -c opt //tensorflow:libtensorflow_cc.so
@@ -183,9 +188,9 @@ Also note that some of the headers needed are generated by the build processes m

### Protobuf
-Is a PITA. Protobuf has caused endless pain for me on various projects. Usually due to version issues. Tensorflow requires >v3. Public release is v2.6.1. So if you have that installed somewhere on your system it might break things. The instructions above should install v3+ (v3.0.0.a3 at time of writing) from source. But if you run into problems mentioning protobuf, it's probably a version conflict.
+Is a PITA. Protobuf has caused endless pain for me on various projects. As far as I understand, protobuf is a library for serialization, and it generates headers based on the message structure defined in .proto files. It's a nice idea, but it is incredibly sensitive to versions: it doesn't seem to be backwards compatible, so if you're using a library compiled with one version of protobuf together with headers generated by an ever-so-slightly different version, it'll fail. Tensorflow requires v3+, while the public release is v2.6.1, so if you have that installed somewhere on your system it might break things. The instructions above should install v3+ (v3.0.0.a3 at time of writing) from source. But if you run into problems mentioning protobuf, it's probably a version conflict. The fixes below should sort it out, BUT they might break other software which requires older versions of protobuf. Running isolated environments would be your best bet in that case. JOY.
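*(When chasing a suspected protobuf version conflict, a quick sanity check is to see which protobuf your Python environment is actually importing; a minimal sketch:)*

    # Sketch: print the protobuf version and install location Python picks up
    import google.protobuf
    print(google.protobuf.__version__)   # want 3.x for tensorflow, not 2.6.1
    print(google.protobuf.__file__)      # shows which installation is being used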

-#### Python
+#### Protobuf problems in Python

If you have problems in python regarding protobuf (e.g. when importing) try the below:

@@ -199,14 +204,18 @@ Note. The name of the tensorflow-xxxx.whl might be different on your system. Loo
    ls -l /tmp/tensorflow_pkg

-#### C++
+#### Protobuf problems in C++
In some cases, you might have problems in C++. If you have remnants of old protobuf headers somewhere in your header search paths, the compiler might pick those up instead of the v3+ ones and cry about it. In this case I've had to install protobuf from source (not the python pip wheel, but the actual library on the system).

First remove all traces of protobuf installed via apt

    sudo apt-get purge libprotobuf-dev

-Clone the protobuf repo to a *new* folder (not the one inside tensorflow, as it can mess up things). Go into the folder and type
+Clone the protobuf repo to a *new* folder (don't build from the protobuf folder inside tensorflow, as it can mess things up). Somewhere other than your tensorflow folder:
+
+    git clone https://github.com/google/protobuf
+
+Go into the folder and type

    sudo apt-get install autoconf automake libtool curl # get dependencies
    ./autogen.sh
