Componentschip.com is an Electronic Components Distributor.

Fujitsu Wants To Join The Global Artificial Intelligence War

2017-07-17 15:28:18 | Diary

Fujitsu has also joined the global race to develop artificial intelligence (AI) technology and is currently building an AI-specific microprocessor called the "Deep Learning Unit" (DLU), which it claims delivers ten times the performance per watt of rival products. The first DLU microprocessor is expected to launch in Fujitsu's 2018 fiscal year; whether it will put competitive pressure on market leader NVIDIA is worth watching.

According to a report on the Top 500 website, Fujitsu has been working on the DLU chip since 2015 but disclosed few details of its design afterwards. It was not until the ISC 2017 conference in June 2017 that Takumi Maruyama, senior director of Fujitsu's AI Platform Division, described the company's commitment to AI and high-performance computing (HPC) and introduced the details of the DLU microprocessor for the first time. Maruyama is currently engaged in the DLU chip development project.

Maruyama pointed out that, like other processors built for deep learning (DL), the DLU relies heavily on low-precision computation to optimize the performance and energy efficiency of neural-network processing; notably, the DLU supports the FP32, FP16, INT16, and INT8 data types. At the top level, the DLU is composed of a number of "Deep Learning Processing Units" (DPUs), which can be regarded as deep-learning cores and are linked to one another through a high-performance fabric. A separate master core manages the DPUs and is responsible for coordinating memory-related tasks between the DPUs and the chip's built-in memory controller.
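
To illustrate why low-precision support matters for this kind of processor, below is a minimal sketch (in Python with NumPy, not Fujitsu code) of symmetric INT8 quantization: an FP32 matrix is mapped onto 8-bit integers, the multiplication is accumulated in INT32, and the result is rescaled back to FP32. The scaling scheme, matrix sizes, and function names are illustrative assumptions, not part of the DLU design.

    import numpy as np

    def quantize_int8(x_fp32):
        # Map an FP32 tensor onto the symmetric INT8 range [-127, 127].
        scale = np.abs(x_fp32).max() / 127.0
        x_int8 = np.clip(np.round(x_fp32 / scale), -127, 127).astype(np.int8)
        return x_int8, scale

    def int8_matmul(a_int8, a_scale, b_int8, b_scale):
        # Multiply in integer precision (accumulate in INT32), then rescale to FP32.
        acc = a_int8.astype(np.int32) @ b_int8.astype(np.int32)
        return acc.astype(np.float32) * (a_scale * b_scale)

    # Example: quantize a small layer's weights and activations and compare
    # the low-precision result against the full FP32 product.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64)).astype(np.float32)
    x = rng.standard_normal((64, 64)).astype(np.float32)
    w8, ws = quantize_int8(w)
    x8, xs = quantize_int8(x)
    approx = int8_matmul(w8, ws, x8, xs)
    print(np.abs(approx - w @ x).max())  # small error, far cheaper arithmetic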

Each DPU, in turn, consists of 16 deep learning processing elements (DPEs), which is where the actual numerical work is done; each DPE comprises 8 SIMD execution units together with a very large register file (RF) that is managed entirely by software. In addition, the DLU package will contain a number of second-generation High Bandwidth Memory (HBM2) stacks to feed the processor with data at high speed, along with hardware for communicating with other DLU microprocessors via Tofu interconnect technology. Fujitsu expects the DLU to launch in 2018 in the form of a coprocessor driven by a host central processing unit (CPU). You can also buy it from an electronic component distributor.
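
The reported organization can be summarized as a small data-structure sketch, shown below in Python. This is an assumption-laden illustration rather than an official specification: the number of DPUs per chip, the register-file size, and the HBM2 stack count have not been disclosed by Fujitsu, so the values used for them here are placeholders.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DPE:                       # Deep Learning Processing Element
        simd_units: int = 8          # reported: 8 SIMD execution units per DPE
        register_file_kib: int = 64  # placeholder; actual RF size undisclosed

    @dataclass
    class DPU:                       # Deep Learning Processing Unit ("DL core")
        dpes: List[DPE] = field(default_factory=lambda: [DPE() for _ in range(16)])

    @dataclass
    class DLU:                       # Deep Learning Unit (the coprocessor)
        dpus: List[DPU]              # linked by an on-chip high-performance fabric
        hbm2_stacks: int             # on-package HBM2 feeding the DPUs with data
        interconnect: str = "Tofu"   # chip-to-chip links to other DLUs

    # Placeholder counts: 4 DPUs and 4 HBM2 stacks are guesses for illustration.
    dlu = DLU(dpus=[DPU() for _ in range(4)], hbm2_stacks=4)
    total_simd = sum(dpe.simd_units for dpu in dlu.dpus for dpe in dpu.dpes)
    print(f"{len(dlu.dpus)} DPUs x 16 DPEs x 8 SIMD units = {total_simd} lanes")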

Starting with the next generation of DLU technology, Fujitsu plans to embed the DLU into a CPU in some form, although it has not yet revealed when that generation will be introduced. With the off-chip network design described above, Fujitsu envisions eventually building very large systems out of DLU microprocessors, aiming to create a scalable platform for handling the largest and most complex deep learning problems. Fujitsu's ultimate goal is to build a DLU product line alongside the SPARC processor line it offers for the general market.

Fujitsu understands that AI and machine learning (ML) are expected to dominate global technology applications in the near future, and that companies that fail to keep up risk being marginalized. Fujitsu is currently at the forefront of this market, but Intel, AMD, the UK AI chip design startup Graphcore, and other manufacturers are all actively developing their own AI chip technology and may introduce new product lines within the next 6 to 12 months, becoming new competitors to Fujitsu's DLU.

NVIDIA's advantage in this area is the deep learning software support behind its graphics chips (GPUs), which gives it a sizable lead in the AI chip market. The number of neural-network software frameworks it supports, including Microsoft's CNTK, Theano, MXNet, Torch, TensorFlow, and Caffe, is not only large but still growing, and NVIDIA provides full support for them, whereas other chip makers can typically support only the main frameworks.

Even so, for well-capitalized companies such as Fujitsu, the situation is not hopeless: although a great deal of deep learning software has already been written, it is still small compared with the amount likely to be developed over the next few years. This means the field will still have plenty of room for new competitors in the coming years, giving Fujitsu and other newcomers an opportunity to grab a share of this market.

Reference: 74hc595 and ds18b20


