Conversation
src/operator/mshadow_op.h
Outdated
@@ -126,6 +126,13 @@ MXNET_UNARY_MATH_OP_NC(relu, a > DType(0) ? a : DType(0));

MXNET_UNARY_MATH_OP_NC(relu_grad, a > DType(0) ? DType(1) : DType(0));

MXNET_UNARY_MATH_OP_NC(selu, DType(1.0507009873554804934193349852946f) * (a > DType(0) ? a :
This constant appears twice in your code. Use a const variable here.
src/operator/mshadow_op.h
Outdated
MXNET_UNARY_MATH_OP_NC(selu_grad,
                       DType(1.0507009873554804934193349852946f) *
                       (a > DType(0) ? DType(1) : DType(1.6732632423543772848170429916717f + a)));
Same here. Use const variable SELU_ALPHA
check_symbolic_forward(y, [xa], [ya], rtol=rtol, atol=atol, dtype=dtype)
check_symbolic_backward(y, [xa], [np.ones(shape)], [ga], rtol=rtol, atol=atol, dtype=dtype)
nit: remove extra blank line
Please check out PEP8 style guide: https://www.python.org/dev/peps/pep-0008/
xa[abs(xa) < eps] = 0.1
ya = fselu(xa)
ga = fselu_grad(np.ones(shape).astype(dtype), xa, ya)
# Skip numeric check for float16 type to get rid of flaky behavior
Do we have an issue to address the flakiness for float16 here?
It's related to the low precision of fp16; we can also increase rtol and atol to address it.
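A minimal sketch of that idea (illustrative helper, not the PR's actual test code): pick looser tolerances when the test dtype is float16 so the numeric check is less likely to be flaky at low precision.

```python
import numpy as np

# Hypothetical helper: return (rtol, atol) pairs, looser for float16.
def tolerances_for(dtype, strict=(1e-4, 1e-5), loose=(1e-2, 1e-3)):
    return loose if np.dtype(dtype) == np.float16 else strict

rtol, atol = tolerances_for(np.float16)   # -> (0.01, 0.001)
```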
def fselu(x):
    neg_indices = x < 0
    out = x.copy()
    out[neg_indices] = 1.6732632423543772848170429916717 * np.expm1(out[neg_indices])
Same here: use constant variable
    neg_indices = x < 0
    out = np.ones(x.shape).astype(x.dtype)
    out[neg_indices] = y[neg_indices] + 1.6732632423543772848170429916717
    return out * 1.0507009873554804934193349852946
Same here: use constant variable
There's no C++-like constant in Python; maybe you're suggesting declaring some variables to hold those values first?
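A sketch of that suggestion, mirroring the NumPy reference functions from the diff above with the repeated literals replaced by module-level names (SELU_ALPHA / SELU_LAMBDA are illustrative names; the merged code may differ):

```python
import numpy as np

# Upper-case module-level names serve as Python's "constants".
SELU_ALPHA = 1.6732632423543772848170429916717
SELU_LAMBDA = 1.0507009873554804934193349852946

def fselu(x):
    # NumPy reference for the selu forward pass.
    neg_indices = x < 0
    out = x.copy()
    out[neg_indices] = SELU_ALPHA * np.expm1(out[neg_indices])
    return out * SELU_LAMBDA

def fselu_grad(grad, x, y):
    # NumPy reference for the backward pass, written (as in the diff)
    # in terms of the forward output y.
    neg_indices = x < 0
    out = np.ones(x.shape).astype(x.dtype)
    out[neg_indices] = y[neg_indices] + SELU_ALPHA
    return out * SELU_LAMBDA
```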
Please also update the operator documentation section in leaky_relu.cc
@apeforest All comments have been addressed.
LGTM
Description
Address #11496 and add selu activation function
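For context, selu is the scaled ELU from Klambauer et al. (2017), using the fixed constants that appear in the diff:

$$\operatorname{selu}(x) = \lambda \cdot \begin{cases} x & x > 0 \\ \alpha \,(e^{x} - 1) & x \le 0 \end{cases}, \qquad \lambda \approx 1.0507009873554805,\ \alpha \approx 1.6732632423543772$$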
Checklist
Essentials
Changes
selu option in leaky_relu
Comments
Test passed 10000 times on both CPU & GPU:
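Usage sketch, assuming the new activation is reached through the existing LeakyReLU operator's act_type parameter as listed under Changes (illustrative, not copied from the PR):

```python
import mxnet as mx

x = mx.nd.array([-2.0, -0.5, 0.0, 1.0])
y = mx.nd.LeakyReLU(x, act_type='selu')   # 'selu' added as a LeakyReLU act_type
print(y.asnumpy())
```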