      How to monitor your TF training with WeChat?

      By: AI研习社, 2017-11-15 17:42
      Summary: it really is quite simple...

      Editor's note from 雷锋网 (Leiphone): this article was written by Coldwings and is adapted from the author's Zhihu post 《利用微信监管你的TF训练》 (Using WeChat to supervise your TF training); 雷锋网 publishes it with the author's authorization.

      A while back, when answering the question "During machine learning model training, which takes anywhere from tens of minutes to several hours, what do you all do while waiting for the experiment?", I mentioned that you can let WeChat supervise the training so you don't have to sit and watch it yourself. I didn't expect the idea to be this popular...

      My answer under the original question was as follows:

      I don't know how many of you are hacking away in Python with TF/keras/chainer/mxnet or similar frameworks...

      This is Python we're talking about... Bring in itchat, set up a WeChat account and add yourself as a friend (or just message yourself), and have the training send progress messages to you as it goes; if you've built any visualization, send the plots along as well.

      Then you can go to sleep, go shopping, go on a date, or write answers without worrying.

      Honestly, even simple parameter tuning can be done from your phone this way... A minimal sketch of the idea is shown below.
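
      To make that concrete, here is a minimal sketch of the idea (my own illustration, not from the original post): log in once with itchat, then push a text update, and optionally a saved plot, to WeChat's built-in file helper. The loop and the loss values are placeholders standing in for a real training loop.

      import itchat

      # Log in by scanning the QR code once; hotReload caches the login session
      itchat.auto_login(hotReload=True)

      for epoch in range(3):                # stand-in for a real training loop
          fake_loss = 1.0 / (epoch + 1)     # placeholder number, illustration only
          # 'filehelper' is WeChat's built-in file-transfer account, i.e. yourself
          itchat.send('epoch %d done, loss=%.4f' % (epoch, fake_loss),
                      toUserName='filehelper')

      # If a plot was saved (e.g. plt.savefig('loss.png') with matplotlib),
      # the image can be pushed the same way:
      # itchat.send_image('loss.png', toUserName='filehelper')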

      The overall effect looks roughly like this:

      [Screenshots: WeChat conversation showing the training-progress messages pushed by the script]

      Of course, this can be made more complete. The most robust approach is simply to stand up an HTTP service or an RPC endpoint, but that tends to be more trouble than it's worth. In the spirit of simplicity and efficiency, a few lines of code that get the job done are ideal, so hooking into WeChat or a small web page is a good choice. If all you want to do is look at progress, TensorBoard is already excellent; but if you want to add custom operations, you still have to build something yourself. A web front end with echat.js, or a WeChat service with itchat, are both decent options.

      The main part follows.

      Let's work through an example: take the CNN-on-MNIST program from TensorFlow's examples and make a few small modifications.

      First, here is the finished code:

      #!/usr/bin/env python
      # coding: utf-8

      '''
      A Convolutional Network implementation example using TensorFlow library.
      This example is using the MNIST database of handwritten digits
      (http://yann.lecun.com/exdb/mnist/)
      Author: Aymeric Damien
      Project: https://github.com/aymericdamien/TensorFlow-Examples/


      Add an itchat controller running in a separate thread
      '''

      from __future__ import print_function

      import tensorflow as tf

      # Import MNIST data
      from tensorflow.examples.tutorials.mnist import input_data

      # Import itchat & threading
      import itchat
      import threading

      # Create a running status flag
      lock = threading.Lock()
      running = False

      # Parameters
      learning_rate = 0.001
      training_iters = 200000
      batch_size = 128
      display_step = 10

      def nn_train(wechat_name, param):
         global lock, running
         # Lock
         with lock:
             running = True

         # mnist data reading
         mnist = input_data.read_data_sets("data/", one_hot=True)

         # Parameters
         # learning_rate = 0.001
         # training_iters = 200000
         # batch_size = 128
         # display_step = 10
         learning_rate, training_iters, batch_size, display_step = param

         # Network Parameters
         n_input = 784 # MNIST data input (img shape: 28*28)
         n_classes = 10 # MNIST total classes (0-9 digits)
         dropout = 0.75 # Dropout, probability to keep units

         # tf Graph input
         x = tf.placeholder(tf.float32, [None, n_input])
         y = tf.placeholder(tf.float32, [None, n_classes])
         keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)


         # Create some wrappers for simplicity
         def conv2d(x, W, b, strides=1):
             # Conv2D wrapper, with bias and relu activation
             x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
             x = tf.nn.bias_add(x, b)
             return tf.nn.relu(x)


         def maxpool2d(x, k=2):
             # MaxPool2D wrapper
             return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                                 padding='SAME')


         # Create model
         def conv_net(x, weights, biases, dropout):
             # Reshape input picture
             x = tf.reshape(x, shape=[-1, 28, 28, 1])

             # Convolution Layer
             conv1 = conv2d(x, weights['wc1'], biases['bc1'])
             # Max Pooling (down-sampling)
             conv1 = maxpool2d(conv1, k=2)

             # Convolution Layer
             conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
             # Max Pooling (down-sampling)
             conv2 = maxpool2d(conv2, k=2)

             # Fully connected layer
             # Reshape conv2 output to fit fully connected layer input
             fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
             fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
             fc1 = tf.nn.relu(fc1)
             # Apply Dropout
             fc1 = tf.nn.dropout(fc1, dropout)

             # Output, class prediction
             out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
             return out

         # Store layers weight & bias
         weights = {
             # 5x5 conv, 1 input, 32 outputs
             'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
             # 5x5 conv, 32 inputs, 64 outputs
             'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
             # fully connected, 7*7*64 inputs, 1024 outputs
             'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
             # 1024 inputs, 10 outputs (class prediction)
             'out': tf.Variable(tf.random_normal([1024, n_classes]))
         }

         biases = {
             'bc1': tf.Variable(tf.random_normal([32])),
             'bc2': tf.Variable(tf.random_normal([64])),
             'bd1': tf.Variable(tf.random_normal([1024])),
             'out': tf.Variable(tf.random_normal([n_classes]))
         }

         # Construct model
         pred = conv_net(x, weights, biases, keep_prob)

         # Define loss and optimizer
         cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
         optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

         # Evaluate model
         correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
         accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))


         # Initializing the variables
         init = tf.global_variables_initializer()

         # Launch the graph
         with tf.Session() as sess:
             sess.run(init)
             step = 1
             # Keep training until reach max iterations
             print('Wait for lock')
             with lock:
                 run_state = running
             print('Start')
             while step * batch_size < training_iters and run_state:
                 batch_x, batch_y = mnist.train.next_batch(batch_size)
                 # Run optimization op (backprop)
                 sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                             keep_prob: dropout})
                 if step % display_step == 0:
                     # Calculate batch loss and accuracy
                     loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                                     y: batch_y,
                                                                     keep_prob: 1.})
                     print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                         "{:.6f}".format(loss) + ", Training Accuracy= " + \
                         "{:.5f}".format(acc))
                     itchat.send("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                         "{:.6f}".format(loss) + ", Training Accuracy= " + \
                                 "{:.5f}".format(acc), wechat_name)
                 step += 1
                 with lock:
                     run_state = running
             print("Optimization Finished!")
             itchat.send("Optimization Finished!", wechat_name)

             # Calculate accuracy for 256 mnist test images
             print("Testing Accuracy:", \
                 sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                             keep_prob: 1.}))
             itchat.send("Testing Accuracy: %s" %
                 sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                               keep_prob: 1.}), wechat_name)

         with lock:
             running = False

      @itchat.msg_register([itchat.content.TEXT])
      def chat_trigger(msg):
         global lock, running, learning_rate, training_iters, batch_size, display_step
         if msg['Text'] == u'开始':  # "start": launch training
             print('Starting')
             with lock:
                 run_state = running
             if not run_state:
                 try:
                     threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
                 except:
                     msg.reply('Running')
         elif msg['Text'] == u'停止':  # "stop": switch the running flag off
             print('Stopping')
             with lock:
                 running = False
         elif msg['Text'] == u'参数':  # "parameters": report current hyperparameters
             itchat.send('lr=%f, ti=%d, bs=%d, ds=%d'%(learning_rate, training_iters, batch_size, display_step),msg['FromUserName'])
         else:
             try:
                 param = msg['Text'].split()
                 key, value = param
                 print(key, value)
                 if key == 'lr':
                     learning_rate = float(value)
                 elif key == 'ti':
                     training_iters = int(value)
                 elif key == 'bs':
                     batch_size = int(value)
                 elif key == 'ds':
                     display_step = int(value)
             except:
                 pass


      if __name__ == '__main__':
         itchat.auto_login(hotReload=True)
         itchat.run()

      The main changes I made in this code are:

      0. Imported itchat and threading.

      1. Moved the network construction and training code from the original script into a function nn_train:

      def nn_train(wechat_name, param):
         global lock, running
         # Lock
         with lock:
             running = True

         # mnist data reading
         mnist = input_data.read_data_sets("data/", one_hot=True)

         # Parameters
         # learning_rate = 0.001
         # training_iters = 200000
         # batch_size = 128
         # display_step = 10
         learning_rate, training_iters, batch_size, display_step = param

         # Network Parameters
         n_input = 784 # MNIST data input (img shape: 28*28)
         n_classes = 10 # MNIST total classes (0-9 digits)
         dropout = 0.75 # Dropout, probability to keep units

         # tf Graph input
         x = tf.placeholder(tf.float32, [None, n_input])
         y = tf.placeholder(tf.float32, [None, n_classes])
         keep_prob = tf.placeholder(tf.float32) #dropout (keep probability)


         # Create some wrappers for simplicity
         def conv2d(x, W, b, strides=1):
             # Conv2D wrapper, with bias and relu activation
             x = tf.nn.conv2d(x, W, strides=[1, strides, strides, 1], padding='SAME')
             x = tf.nn.bias_add(x, b)
             return tf.nn.relu(x)


         def maxpool2d(x, k=2):
             # MaxPool2D wrapper
             return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1],
                                 padding='SAME')


         # Create model
         def conv_net(x, weights, biases, dropout):
             # Reshape input picture
             x = tf.reshape(x, shape=[-1, 28, 28, 1])

             # Convolution Layer
             conv1 = conv2d(x, weights['wc1'], biases['bc1'])
             # Max Pooling (down-sampling)
             conv1 = maxpool2d(conv1, k=2)

             # Convolution Layer
             conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
             # Max Pooling (down-sampling)
             conv2 = maxpool2d(conv2, k=2)

             # Fully connected layer
             # Reshape conv2 output to fit fully connected layer input
             fc1 = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
             fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
             fc1 = tf.nn.relu(fc1)
             # Apply Dropout
             fc1 = tf.nn.dropout(fc1, dropout)

             # Output, class prediction
             out = tf.add(tf.matmul(fc1, weights['out']), biases['out'])
             return out

         # Store layers weight & bias
         weights = {
             # 5x5 conv, 1 input, 32 outputs
             'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
             # 5x5 conv, 32 inputs, 64 outputs
             'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
             # fully connected, 7*7*64 inputs, 1024 outputs
             'wd1': tf.Variable(tf.random_normal([7*7*64, 1024])),
             # 1024 inputs, 10 outputs (class prediction)
             'out': tf.Variable(tf.random_normal([1024, n_classes]))
         }

         biases = {
             'bc1': tf.Variable(tf.random_normal([32])),
             'bc2': tf.Variable(tf.random_normal([64])),
             'bd1': tf.Variable(tf.random_normal([1024])),
             'out': tf.Variable(tf.random_normal([n_classes]))
         }

         # Construct model
         pred = conv_net(x, weights, biases, keep_prob)

         # Define loss and optimizer
         cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
         optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

         # Evaluate model
         correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
         accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))


         # Initializing the variables
         init = tf.global_variables_initializer()

         # Launch the graph
         with tf.Session() as sess:
             sess.run(init)
             step = 1
             # Keep training until reach max iterations
             print('Wait for lock')
             with lock:
                 run_state = running
             print('Start')
             while step * batch_size < training_iters and run_state:
                 batch_x, batch_y = mnist.train.next_batch(batch_size)
                 # Run optimization op (backprop)
                 sess.run(optimizer, feed_dict={x: batch_x, y: batch_y,
                                             keep_prob: dropout})
                 if step % display_step == 0:
                     # Calculate batch loss and accuracy
                     loss, acc = sess.run([cost, accuracy], feed_dict={x: batch_x,
                                                                     y: batch_y,
                                                                     keep_prob: 1.})
                     print("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                         "{:.6f}".format(loss) + ", Training Accuracy= " + \
                         "{:.5f}".format(acc))
                     itchat.send("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                         "{:.6f}".format(loss) + ", Training Accuracy= " + \
                                 "{:.5f}".format(acc), wechat_name)
                 step += 1
                 with lock:
                     run_state = running
             print("Optimization Finished!")
             itchat.send("Optimization Finished!", wechat_name)

             # Calculate accuracy for 256 mnist test images
             print("Testing Accuracy:", \
                 sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                             keep_prob: 1.}))
             itchat.send("Testing Accuracy: %s" %
                 sess.run(accuracy, feed_dict={x: mnist.test.images[:256],
                                             y: mnist.test.labels[:256],
                                               keep_prob: 1.}), wechat_name)

         with lock:
             running = False

      Most of this is identical to the original code. The differences: every place with a print now also has an itchat.send call to push the log line to WeChat, a lock-protected status flag running was added to act as a run switch, and some of the parameters are now passed in as function arguments.
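
      As a small aside, the duplicated print/itchat.send pairs could be folded into one helper. This is only a possible refactor, not part of the original code, and the name notify is made up here:

      def notify(text, wechat_name):
          # Print locally and push the same line to WeChat in one call
          print(text)
          itchat.send(text, wechat_name)

      # Inside the training loop, the paired calls would then collapse to:
      # notify("Iter %d, Minibatch Loss= %.6f, Training Accuracy= %.5f"
      #        % (step * batch_size, loss, acc), wechat_name)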

      Then I wrote an itchat handler:

      @itchat.msg_register([itchat.content.TEXT])
      def chat_trigger(msg):
         global lock, running, learning_rate, training_iters, batch_size, display_step
         if msg['Text'] == u'开始':  # "start": launch training
             print('Starting')
             with lock:
                 run_state = running
             if not run_state:
                 try:
                     threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
                 except:
                     msg.reply('Running')

      What it does: when a WeChat message arrives whose content is '开始' (start), it runs the training function. The training is started in another thread, of course, so that itchat's message loop isn't blocked and later messages (such as '停止') can still be handled.

      Finally, in the script's main flow, log in to WeChat with itchat and start the itchat service. That already gives us basic control.

      if __name__ == '__main__':
         itchat.auto_login(hotReload=True)
         itchat.run()

      But we won't stop there. I also want some control over the flow and the ability to tweak the parameters, hence:

      @itchat.msg_register([itchat.content.TEXT])
      def chat_trigger(msg):
         global lock, running, learning_rate, training_iters, batch_size, display_step
         if msg['Text'] == u'开始':  # "start": launch training
             print('Starting')
             with lock:
                 run_state = running
             if not run_state:
                 try:
                     threading.Thread(target=nn_train, args=(msg['FromUserName'], (learning_rate, training_iters, batch_size, display_step))).start()
                 except:
                     msg.reply('Running')
         elif msg['Text'] == u'停止':  # "stop": switch the running flag off
             print('Stopping')
             with lock:
                 running = False
         elif msg['Text'] == u'参数':  # "parameters": report current hyperparameters
             itchat.send('lr=%f, ti=%d, bs=%d, ds=%d'%(learning_rate, training_iters, batch_size, display_step),msg['FromUserName'])
         else:
             try:
                 param = msg['Text'].split()
                 key, value = param
                 print(key, value)
                 if key == 'lr':
                     learning_rate = float(value)
                 elif key == 'ti':
                     training_iters = int(value)
                 elif key == 'bs':
                     batch_size = int(value)
                 elif key == 'ds':
                     display_step = int(value)
             except:
                 pass

      With this, we can stop in the middle of training (because nn_train checks the running flag to decide whether to stop), and we can also adjust learning_rate and a few other parameters before training starts.
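
      For example, a control session from the phone might look like the exchange below. The replies follow the message formats in the code above; the numbers and the <loss>/<acc> values are placeholders, not real output:

      You:    参数
      Script: lr=0.001000, ti=200000, bs=128, ds=10
      You:    lr 0.0005
      You:    开始
      Script: Iter 1280, Minibatch Loss= <loss>, Training Accuracy= <acc>
      Script: ...
      You:    停止
      Script: Optimization Finished!
      Script: Testing Accuracy: <acc>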

      It really is that simple...

      This is a copyrighted 雷锋网 article; reproduction without authorization is prohibited.
