Apache MXNet (incubating)

Tuesday July 23, 2019

Apache MXNet (incubating) 1.5.0 release is now available

Today the Apache MXNet community is pleased to announce the 1.5.0 release of Apache MXNet deep learning framework. We would like to thank the Apache MXNet community for all their valuable contributions towards the MXNet 1.5.0 release.

With this release, we bring the following new features to our users. For a comprehensive list of major features and bug fixes, please check out the full Apache MXNet 1.5.0 release notes.

Automatic Mixed Precision (experimental)

Training deep learning networks is a very computationally intensive task. Novel model architectures tend to have ever more layers and parameters, which slows down training. Fortunately, software optimizations and new generations of training hardware keep training feasible.

However, most of the hardware and software optimization opportunities exist in exploiting lower precision (e.g. FP16) to, for example, utilize Tensor Cores available on new Volta and Turing GPUs. While training in FP16 showed great success in image classification tasks, other more complicated neural networks typically stayed in FP32 due to difficulties in applying the FP16 training guidelines.

That is where AMP (Automatic Mixed Precision) comes into play. It automatically applies the guidelines of FP16 training, using FP16 precision where it provides the most benefit, while conservatively keeping operations that are unsafe in FP16 in full FP32 precision. To learn more about AMP, check out this tutorial.

MKL-DNN: Reduced precision inference and RNN API support

Two advanced features, fused computation and reduced-precision kernels, have been introduced in recent MKL-DNN versions. These features can significantly speed up inference performance on CPU for a broad range of deep learning topologies. The MXNet MKL-DNN backend provides optimized implementations for various operators covering a broad range of applications, including image classification, object detection, and natural language processing. Refer to the MKL-DNN operator documentation for more information.

Dynamic Shape (experimental)

MXNet now supports dynamic shapes in both imperative and symbolic modes. Previously, MXNet required operators to statically infer their output shapes from the input shapes. However, for some operators the output shape depends on the input data or arguments. MXNet now supports such dynamic-shape operators, including contrib.while_loop, contrib.cond, and mxnet.ndarray.contrib.boolean_mask.

Large Tensor Support

Before MXNet 1.5.0, MXNet supported a maximum tensor size of around 4 billion elements (2^32), because uint32_t was used as the default data type for tensor size as well as for variable indexing. You can now enable large tensor support by setting the build flag USE_INT64_TENSOR_SIZE = 1 (note that it is set to 0 by default). This enables large-scale training, for example training on large graph networks using the Deep Graph Library.
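For a Makefile-based source build, the flag can be passed directly on the make command line (a sketch; the full set of build options depends on your platform and desired backends):

```shell
# Build MXNet from source with 64-bit tensor sizes enabled.
# USE_INT64_TENSOR_SIZE defaults to 0; setting it to 1 switches the
# tensor-size and indexing type from uint32_t to int64_t.
make -j"$(nproc)" USE_INT64_TENSOR_SIZE=1
```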

Dependency Update

MXNet has added support for CUDA 10, CUDA 10.1, cuDNN 7.5, NCCL 2.4.2, and NumPy 1.16.0.

These updates are available through the PyPI packages and when building from source; refer to the installation guide for more details.

Gluon Fit API (experimental)

Training a model in Gluon requires users to write the training loop. This is useful because of its imperative nature; however, repeating the same boilerplate code across multiple models can become tedious. We have introduced an Estimator and Fit API to simplify writing the training loop.

Additional feature improvements and bug fixes

In addition to the new features, this release includes the following improvements to existing features, performance, and documentation, as well as bug fixes:

  1. Improved experience with front-end APIs: Python, Java, Scala, C++, Clojure, Julia, Perl, and R

  2. Several important performance improvements on both CPU and GPU

  3. Fixed 200+ bugs to improve usability

  4. Added 10+ examples/tutorials and 50+ documentation updates

  5. 100+ updates on build, test, and continuous integration system to improve MXNet contributor experience

Getting Started with MXNet:

To get started with MXNet, visit the install page. To learn more about the MXNet Gluon interface and deep learning, you can follow our comprehensive book, Dive into Deep Learning. It covers everything from an introduction to deep learning to how to implement cutting-edge neural network models. You can also check out other MXNet tutorials, MXNet blog posts, the MXNet Twitter account, and the MXNet YouTube channel.

Have fun with MXNet 1.5.0!

Acknowledgements:

We would like to thank everyone from the Apache MXNet community for their contributions to the 1.5.0 release.
