
Gele's Sunday Tinkering Programming

A record of a game lover building the apps they want to build.

A post about nothing more than getting TensorFlow running: Eorzea Translation

Gele here.

Previously: the TensorFlow tutorial wouldn't run properly.

 

gelehrtecrest.hatenablog.com

 

 

So I got it running.

 

eorzea_translation $ source bin/activate
(eorzea_translation) eorzea_translation $ python tensorflow/tensorflow/examples/tutorials/mnist/fully_connected_feed.py
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting /tmp/tensorflow/mnist/input_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting /tmp/tensorflow/mnist/input_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting /tmp/tensorflow/mnist/input_data/t10k-labels-idx1-ubyte.gz
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
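Those `W ... cpu_feature_guard` lines are only warnings: the prebuilt TensorFlow binary wasn't compiled with SSE4/AVX/FMA support, so it runs a bit slower on this CPU, but the results are unaffected. If the noise bothers you, one common way to quiet it is the `TF_CPP_MIN_LOG_LEVEL` environment variable, set before TensorFlow is imported. A minimal sketch (the actual import is commented out so the snippet stands alone):

```python
import os

# Raise TensorFlow's C++ log threshold BEFORE importing tensorflow.
# "0" = all logs, "1" = filter INFO, "2" = also filter WARNING, "3" = also filter ERROR.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow as tf  # would now start without the SSE/AVX/FMA warnings

print(os.environ["TF_CPP_MIN_LOG_LEVEL"])  # -> 2
```

You can also export the variable in the shell (`export TF_CPP_MIN_LOG_LEVEL=2`) before running the script, which amounts to the same thing.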
Step 0: loss = 2.33 (0.012 sec)
Step 100: loss = 2.18 (0.003 sec)
Step 200: loss = 1.95 (0.003 sec)
Step 300: loss = 1.66 (0.002 sec)
Step 400: loss = 1.37 (0.003 sec)
Step 500: loss = 0.97 (0.003 sec)
Step 600: loss = 0.81 (0.003 sec)
Step 700: loss = 0.61 (0.003 sec)
Step 800: loss = 0.65 (0.003 sec)
Step 900: loss = 0.45 (0.003 sec)
Training Data Eval:
  Num examples: 55000  Num correct: 47594  Precision @ 1: 0.8653
Validation Data Eval:
  Num examples: 5000  Num correct: 4358  Precision @ 1: 0.8716
Test Data Eval:
  Num examples: 10000  Num correct: 8713  Precision @ 1: 0.8713
Step 1000: loss = 0.50 (0.010 sec)
Step 1100: loss = 0.41 (0.096 sec)
Step 1200: loss = 0.39 (0.002 sec)
Step 1300: loss = 0.32 (0.003 sec)
Step 1400: loss = 0.58 (0.003 sec)
Step 1500: loss = 0.37 (0.003 sec)
Step 1600: loss = 0.44 (0.003 sec)
Step 1700: loss = 0.33 (0.003 sec)
Step 1800: loss = 0.35 (0.003 sec)
Step 1900: loss = 0.53 (0.003 sec)
Training Data Eval:
  Num examples: 55000  Num correct: 49434  Precision @ 1: 0.8988
Validation Data Eval:
  Num examples: 5000  Num correct: 4524  Precision @ 1: 0.9048
Test Data Eval:
  Num examples: 10000  Num correct: 9032  Precision @ 1: 0.9032
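The "Precision @ 1" the script prints is simply the fraction of examples whose top-1 predicted class matched the label: num correct divided by num examples. A minimal sketch of that bookkeeping (`precision_at_1` is my own helper name, not something from the tutorial script):

```python
def precision_at_1(num_correct: int, num_examples: int) -> float:
    """Fraction of examples whose top-1 prediction was correct."""
    return num_correct / num_examples

# Reproduce the final test-set figure from the log above:
# 9032 correct out of 10000 examples.
print(round(precision_at_1(9032, 10000), 4))  # -> 0.9032
```

So between step 900 and step 1900 the test precision climbed from 0.8713 to 0.9032, which is exactly what you'd hope to see as the loss keeps falling.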

 

Yep, it's running. That's a relief for now.

I'd like to keep going with this project, but I'm taking a short breather here; the next post will be about something else.

 

Oh, and I've been reading this book.

 

I'm only about halfway through, but it explains TensorFlow's basic concepts alongside source code, so it looks promising. I'll write a review once I finish it.

That's all for today.