    Understand artificial intelligence chips the easy way!

     

    "In the field of deep learning, the most important are data and operation. Whoever has more data and faster operation will have an advantage. Therefore, in terms of processor selection, GPU, which can be used for general basic computing and has faster operation speed, has quickly become the mainstream chip of Artificial Intelligence Computing. It can be said that in the past few years, especially since 2015, the outbreak of artificial intelligence is due to the wide application of NVIDIA's GPU 1、 Artificial intelligence and deep learning In 2016, the go duel between alphago and Li Shishi's Jiuduan undoubtedly aroused a new round of attention in the field of artificial intelligence all over the world. Five months before the battle with Li Shishi, alphago's go grade score rose to 3168 points due to defeating the second stage of European go champion fan Hui, while Li Shishi, who ranked second in the world at that time, had 3532 points. According to this level score, alphago has only about 11% chance of winning each set, and the result is that it won 4-1 against Li Shishi three months later. Alphago's learning ability is frightening. 1. Artificial intelligence: let machines think like people Since alphago, "artificial intelligence" has become a hot word in 2016, but as early as 1956, several computer scientists first proposed this concept at the Dartmouth conference. They dreamed of using computers that had just appeared at that time to construct complex machines with the same essential characteristics as human intelligence, that is, what we call "strong artificial intelligence" today. This omnipotent machine has all our senses, all our rationality, and can even think like us. People always see such machines in movies: friendly, like C-3PO in Star Wars; Evil, such as the terminator. At present, strong artificial intelligence only exists in movies and science fiction. The reason is not difficult to understand. We can't realize them, at least not yet. What we can achieve at present is generally called "weak artificial intelligence". Weak artificial intelligence is a technology that can perform specific tasks like or even better than people. For example, image classification on pinterest, or face recognition on Facebook. The realization method of these artificial intelligence technologies is "machine learning". 2. Machine learning: making artificial intelligence real The core of artificial intelligence is to make yourself more intelligent through continuous machine learning. The most basic approach of machine learning is to use algorithms to analyze data, learn from it, and then make decisions and predict events in the real world. Different from the traditional hard coded software programs to solve specific tasks, machine learning uses a large amount of data to "train" and learn how to complete tasks from the data through various algorithms. The most successful application of machine learning is computer vision, although it still needs a lot of manual coding to complete the work. Take the identification stop sign as an example: people need to manually write a shape detection program to judge whether the detection object has eight edges; Write a classifier to recognize the letter "s-t-o-p". Using these manually written classifiers and edge detection filters, people can finally develop algorithms to identify where the sign starts and ends, so as to perceive the image and judge whether the image is a stop sign. The result is good, but it's not the kind of success that can cheer people up. 
In hazy weather especially, when the sign is less visible, or when it is partly hidden by a tree, such an algorithm struggles. This is why, for a long time, computer vision could not come close to human ability: it was too rigid and too easily disturbed by environmental conditions.

3. Artificial neural networks: giving machine learning depth

The artificial neural network is an important algorithm from the early days of machine learning, and it has gone through decades of ups and downs. Its principle is inspired by the physiology of our brain: interconnected neurons. Unlike the brain, however, where a neuron may connect to any other neuron within a certain distance, an artificial neural network has discrete layers, and each layer connects only to the next layer along the direction in which data flows. For example, we can cut an image into small blocks and feed them to the first layer of the network. Each neuron in the first layer passes its data on to the second layer; the neurons in the second layer do similar work and pass data to the third layer, and so on, until the last layer produces the result.

Each neuron assigns a weight to each of its inputs, and how right those weights are bears directly on how well the network performs its task; the final output is determined by the sum of these weighted inputs. Take the stop sign again: an image of a stop sign is broken into pieces and "examined" by the neurons, checking for the octagonal shape, the fire-engine red color, the bright and prominent letters, the typical size of a traffic sign, its stationary character, and so on. The network's task is to conclude whether or not this is a stop sign. Based on all of its weights, it produces a carefully considered guess in the form of a "probability vector." In this example, the system might report: 86% probability it is a stop sign, 7% probability it is a speed-limit sign, 5% probability it is a kite caught in a tree, and so on. The network is then told whether its conclusion was correct. (A toy version of this computation is sketched at the end of this section.)

Even this example is ahead of its time: until quite recently, neural networks were all but forgotten by the AI community. They existed in the earliest days of artificial intelligence, yet contributed little to "intelligence." The main problem was that even the most basic neural network demands a huge amount of computation, and that demand was hard to satisfy.

4. Deep learning: driving down the neural network's errors

Deep learning grew out of the artificial neural network. It uses large networks with many hidden layers that must be trained, each layer acting like its own machine-learning stage that solves a different aspect of the problem. With this deep, nonlinear structure, deep learning can approximate complex functions and form distributed representations of the input data. It then shows a strong ability to learn the essential characteristics of a data set from a relatively small number of samples, and it makes the probability vector far more sharply concentrated. In short, the way a deep learning network processes data and learns is closer to the behavior of neurons in the human brain, and more accurate than a traditional neural network.

Look back at the stop-sign example: a deep learning network extracts characteristic features from hundreds, or even millions, of stop-sign images and, through repeated training, tunes the weights on its neurons' inputs more and more precisely.
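
The following is a minimal sketch, in plain NumPy, of the forward pass just described: a few made-up "features", one hidden layer, and a softmax that turns weighted sums into a probability vector over three classes. Every weight and feature value here is invented for illustration; nothing comes from a trained model.

```python
# A toy version of the forward pass described above. All numbers are
# invented for illustration, not learned values.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

# input features: [octagon-ness, fire-engine redness, "STOP" letters seen]
x = np.array([0.9, 0.8, 0.95])

W1 = np.array([[ 0.7, 0.2, 0.4],   # each row: one hidden neuron's weights
               [ 0.1, 0.9, 0.3],
               [ 0.5, 0.4, 0.8],
               [-0.2, 0.3, 0.1]])
h = np.maximum(W1 @ x, 0.0)        # weighted sums, then a simple nonlinearity

W2 = np.array([[ 1.2, 0.8, 1.0, 0.1],   # each row: one output class
               [ 0.2, 0.1, 0.3, 0.9],
               [-0.5, 0.2, 0.1, 0.4]])
p = softmax(W2 @ h)                # the network's "probability vector"

for label, prob in zip(["stop sign", "speed limit sign", "something else"], p):
    print(f"{label}: {prob:.0%}")
```

Training, which the article turns to next, is the process of nudging weights like W1 and W2 until the probability vector puts most of its mass on the right answer.
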
Trained in this way, the network gets the right answer every time, whether the day is foggy, sunny, or rainy. Only then can we say the neural network has truly learned what a stop sign is. In the same spirit, Google's AlphaGo first learned how humans play Go and then trained its neural network by playing against itself, and that training let it defeat the higher-rated Lee Sedol three months later.

Part 2. Realizing deep learning

Deep learning sits like the diamond at the tip of machine learning and gives AI a brighter future. It has accomplished tasks we never thought possible and is making almost every kind of machine assistance feasible: better film recommendations, smart wearables, even driverless cars and preventive medicine are here or nearly here. Artificial intelligence is the present, and it is tomorrow; with it we may yet get our own C-3PO, and keep the Terminator at bay.

Yet, as noted earlier, the artificial neural network, the predecessor of deep learning, has existed for nearly thirty years and only rose again in the last five to ten. Why?

1. Learning algorithms that broke through the limits

In the 1990s, a series of shallow machine-learning algorithms were proposed one after another, including the support vector machine (SVM) and the maximum-entropy method (logistic regression, LR), and the artificial neural network based on the back-propagation (BP) algorithm gradually faded from view because of shortcomings it could not overcome. Then, in 2006, Geoffrey Hinton, a professor at the University of Toronto and a leading figure in machine learning, published a paper in Science with his students that addressed the overfitting and training difficulties of back-propagation, starting the wave of deep learning in both academia and industry.

The essence of deep learning is to build machine-learning models with many hidden layers, feed them massive training data, and thereby learn more useful features, which in the end improves the accuracy of classification or prediction. The "deep model" is the means; "feature learning" is the goal. Deep learning differs from traditional shallow learning in two ways:

·It emphasizes the depth of the model structure, usually with 5, 6, or even 10 or more hidden layers (a toy sketch of such a stack appears at the end of this part).

·It explicitly highlights the importance of feature learning: through layer-by-layer feature transformation, the representation of the samples in the original space is mapped into a new feature space, which makes classification or prediction easier.

These differences raised the demand for both training data and parallel computing power. Mobile devices were not yet widespread at the time, which made collecting unstructured data far from easy.

2. The sudden flood of data

A deep learning model needs enormous amounts of training data to reach a useful level of performance. In speech recognition, for example, the acoustic-modeling stage alone confronts the algorithm with training sample counts running into the billions. With training samples that scarce, deep learning could not become the mainstream algorithm for AI applications even after the algorithmic breakthrough. It was not until around 2012 that the interconnected devices, machines, and systems spread across the world drove enormous growth in the amount of unstructured data, which finally made a qualitative leap in reliability as well, and the era of big data arrived.
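
As a concrete, if toy, picture of the "many hidden layers" described above, the sketch below stacks five hidden layers in PyTorch. The layer sizes, the 32x32 RGB input, and the three output classes are illustrative assumptions, not values taken from the article.

```python
# A toy picture of a multi-hidden-layer model, using PyTorch.
# Layer sizes, input shape, and the three classes are assumptions.
import torch
import torch.nn as nn

deep_classifier = nn.Sequential(
    nn.Flatten(),                              # image -> flat feature vector
    nn.Linear(3 * 32 * 32, 512), nn.ReLU(),    # hidden layer 1
    nn.Linear(512, 256), nn.ReLU(),            # hidden layer 2
    nn.Linear(256, 128), nn.ReLU(),            # hidden layer 3
    nn.Linear(128, 64), nn.ReLU(),             # hidden layer 4
    nn.Linear(64, 32), nn.ReLU(),              # hidden layer 5
    nn.Linear(32, 3),                          # scores for 3 classes
)

# Each Linear+ReLU pair performs one "layer-by-layer feature
# transformation", mapping the previous representation into a new
# feature space from which the final decision is easier.
dummy_batch = torch.randn(8, 3, 32, 32)        # 8 fake 32x32 RGB images
probabilities = torch.softmax(deep_classifier(dummy_batch), dim=1)
print(probabilities.shape)                     # torch.Size([8, 3])
```
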
How big is big data? In a single day, the content generated on the Internet could fill 168 million DVDs; 294 billion e-mails are sent, equivalent to two years of paper mail in the United States; 2 million forum posts are published, equivalent to 770 years of TIME magazine; and 378,000 mobile phones are sold, more than the roughly 371,000 babies born worldwide each day. Yet even all the information people deliberately create every day, including voice calls, e-mail, and messages, together with every picture, video, and piece of music uploaded, still cannot match the amount of digital information generated each day about people's own activities. And we are only at the beginning of the so-called "Internet of Things." As the technology matures, our communication devices, vehicles, and wearables will connect and talk to one another, and the growth of information will continue geometrically.

3. Hardware that struggled to keep up

The sudden flood of data satisfied the deep learning algorithms' appetite for training data, but running those algorithms also requires processors fast enough for the job. The traditional CPU architectures popular today, including x86 and ARM, often need hundreds or even thousands of instructions to finish processing a single neuron, and that architecture is clumsy for deep learning workloads, which consist of massive data computation with relatively few and simple program instructions. Worse, under today's power-consumption limits it is no longer possible to speed up instruction execution simply by raising the CPU clock frequency. The contradiction grew harder and harder to reconcile, and deep learning researchers urgently needed alternative hardware that could meet the computing demands of massive data.

Perhaps one day a new processor architecture designed specifically for artificial intelligence will be born, but in the decades before that, AI still has to move forward, so the only option is to improve existing processors and turn them, as far as possible, into computing architectures suited to high-throughput workloads. At present there are two main approaches:

·GPU generalization: use the graphics processor as a vector processor. This approach exploits the GPU's strength in floating-point computation to turn it into a general-purpose parallel computing chip, the GPGPU. NVIDIA has been shipping the relevant hardware and software development tools since the second half of 2006, and it is currently the leader in the artificial-intelligence hardware market. (A small timing sketch follows this list.)

·Heterogeneous multicore processors: integrate other processor cores, such as a GPU or an FPGA, alongside the CPU. In this architecture, the floating-point computation and signal processing that the CPU cores handle poorly are executed by other programmable cores integrated on the same chip, and both GPUs and FPGAs are known for their floating-point strength. AMD and Intel are pursuing heterogeneous processors based on the GPU and the FPGA respectively, each hoping to break into the artificial-intelligence market.
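
The appeal of GPU generalization is easy to demonstrate. The sketch below (assuming PyTorch is installed) times one large matrix multiplication, the core operation of neural-network training, on the CPU and, when a CUDA-capable NVIDIA GPU is present, on the GPU; the measured speedup depends entirely on the hardware at hand.

```python
# A minimal sketch of GPGPU-style acceleration: time a large matrix
# multiplication on the CPU and, if available, on an NVIDIA GPU.
import time
import torch

def time_matmul(device, n=4096):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()       # make sure setup work has finished
    start = time.time()
    c = a @ b                          # one big, highly parallel operation
    if device == "cuda":
        torch.cuda.synchronize()       # wait for the GPU kernel to complete
    return time.time() - start

if __name__ == "__main__":
    print(f"CPU: {time_matmul('cpu'):.3f} s")
    if torch.cuda.is_available():
        print(f"GPU: {time_matmul('cuda'):.3f} s")
```
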
Part 3. The existing market: the general-purpose GPU

In deep learning, what matters most are data and compute: whoever has more data and faster computation has the advantage. Therefore, when it came to choosing a processor, the GPU, which can handle general-purpose computation at much higher speed, quickly became the mainstream chip of artificial-intelligence computing. It is fair to say that the AI boom of the past few years, and especially since 2015, owes much to the wide adoption of NVIDIA's GPUs, which made parallel computing faster, cheaper, and more effective.

1. What is a GPU?

The GPU began life as a microprocessor for running graphics workloads on personal computers, workstations, game consoles, and some mobile devices; it can process every pixel of an image very quickly. Later, researchers found that its capacity for massively parallel computation matched the needs of deep learning exactly, and so it was the first hardware pressed into deep learning service. In 2011, Professor Andrew Ng led the way by applying GPUs inside the Google Brain project, with striking results: 12 NVIDIA GPUs delivered deep learning performance roughly equivalent to 2,000 CPUs. Researchers at New York University, the University of Toronto, and the Swiss AI lab soon accelerated their own deep neural networks on GPUs.

2. Design differences between GPU and CPU

So where does the GPU's speed come from? The answer goes back to the original design goals of each chip. The CPU needs strong general computing ability to handle many different types of data, plus the logic to handle branches and jumps, which makes its internal structure extremely complex. The GPU, by contrast, was built for data that is highly uniform in type, independent from element to element, and processed in a pure computing environment that rarely needs to be interrupted, so it needs raw arithmetic speed far more than elaborate logic. These different target workloads led to different designs.

The CPU is designed for low latency:

·Large caches, so that frequently used data can be fetched quickly. The CPU keeps heavily accessed data in the cache; when that data is needed again, it is read directly from the cache instead of from the much larger, slower main memory.

·Powerful arithmetic logic units (ALUs) that finish a calculation in very few clock cycles. Today's CPUs support 64-bit double precision: a double-precision floating-point addition or multiplication takes only 1 to 3 clock cycles, at clock frequencies of roughly 1.5 to 3 gigahertz.

·Complex control logic: when a program contains many branches, the CPU reduces delay by providing branch prediction.

·Supporting units such as comparison circuits and data-forwarding circuits.
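
That latency-oriented design favors code full of data-dependent branches, while throughput-oriented hardware such as the GPU favors uniform, branch-free bulk arithmetic. The sketch below contrasts the two styles for the same ReLU operation in plain Python/NumPy; it runs on a CPU only and is meant purely to show the difference in shape between the two kinds of work, not to benchmark a GPU.

```python
# Two styles of the same computation: branchy element-at-a-time code
# versus one uniform, data-parallel operation over a block of data.
import numpy as np

def relu_scalar(xs):
    # Branch-heavy style: leans on the CPU's branch prediction and
    # low-latency caches described above.
    out = []
    for x in xs:
        if x > 0.0:                 # a data-dependent branch per element
            out.append(x)
        else:
            out.append(0.0)
    return out

def relu_vectorized(xs):
    # Branch-free, data-parallel style: the pattern that throughput
    # hardware is built to accelerate.
    return np.maximum(xs, 0.0)

data = np.random.randn(1_000_000).astype(np.float32)
assert np.allclose(relu_scalar(data), relu_vectorized(data))
```
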