Weekly Machine Learning Opensource Roundup – Feb. 15, 2018

Examples

3D Machine Learning
A collection of papers, code, and datasets for 3D machine learning

Machine Learning Links and Lessons Learned
A list of lessons learned, best practices, and study links for machine learning

Deep Reinforcement Learning
The learner is not told which actions to take, but instead must discover which actions yield the most reward by trial and error.
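To make that trial-and-error loop concrete, here is a minimal illustrative sketch (not code from the repository): an epsilon-greedy agent on a toy multi-armed bandit that learns action values only from observed rewards.

```python
import numpy as np

# Toy illustration of reward-driven trial and error: an epsilon-greedy
# agent estimates the value of each action purely from observed rewards.
rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hidden reward probabilities
q = np.zeros(3)                          # running value estimates
counts = np.zeros(3)
epsilon = 0.1

for step in range(1000):
    # Explore with probability epsilon, otherwise exploit the current best estimate.
    a = rng.integers(3) if rng.random() < epsilon else int(np.argmax(q))
    reward = float(rng.random() < true_means[a])   # stochastic reward signal
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]            # incremental mean update

print("estimated action values:", q.round(2))      # approaches true_means over time
```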

DeepTraffic3D
A 3D view of the DeepTraffic project, part of MIT's Deep Learning for Self-Driving Cars course

Convolution Visualizer
This interactive visualization demonstrates how various convolution parameters affect shapes and data dependencies between the input, weight and output matrices.
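The shape arithmetic the visualizer demonstrates follows the standard convolution output-size formula; the small sketch below (function name and example values are illustrative) reproduces it.

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0, dilation=1):
    """Spatial size of a convolution's output for one dimension."""
    effective_kernel = dilation * (kernel_size - 1) + 1
    return (input_size + 2 * padding - effective_kernel) // stride + 1

# Example: a 32x32 input, 3x3 kernel, stride 2, padding 1 -> 16x16 output.
print(conv_output_size(32, 3, stride=2, padding=1))  # 16
```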

CapsNet Visualization
A visualization of the CapsNet layers to better understand how the network works

Toolsets

Tensor Comprehensions
Tensor Comprehensions (TC) is a domain-specific language to express machine learning workloads. It is a fully functional C++ library that automatically synthesizes high-performance machine learning kernels using Halide, ISL, and NVRTC or LLVM. TC additionally provides basic integration with Caffe2 and pybind11 bindings for use with Python.
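As a rough sketch of what a tensor comprehension looks like (based on the project's README; the exact Python binding API may differ from this assumed `tc.define` call), a matrix multiply can be written in Einstein-notation style and compiled for CUDA tensors:

```python
import tensor_comprehensions as tc
import torch

# TC DSL: "+=!" initializes the accumulator and reduces over the index kk,
# which appears only on the right-hand side.
lang = """
def matmul(float(M, K) A, float(K, N) B) -> (C) {
    C(m, n) +=! A(m, kk) * B(kk, n)
}
"""

matmul = tc.define(lang, name="matmul")           # assumed API from the initial release
A, B = torch.randn(32, 64).cuda(), torch.randn(64, 16).cuda()
C = matmul(A, B)                                  # synthesized CUDA kernel
print(C.shape)                                    # torch.Size([32, 16])
```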

ARM Systolic CNN AcceLErator Simulator (SCALE Sim)
A CNN accelerator simulator that provides cycle-accurate timing, power/energy, memory bandwidth, and trace results for a specified accelerator configuration and neural network architecture

SHAP (SHapley Additive exPlanations)
A unified approach to explain the output of any machine learning model
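A minimal usage sketch (assuming a tree-based scikit-learn model; the dataset and parameters here are illustrative):

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Fit any model; SHAP then attributes each prediction to the input features.
X = np.random.rand(200, 4)
y = X[:, 0] * 3 + X[:, 1] ** 2 + np.random.normal(scale=0.1, size=200)
model = RandomForestRegressor(n_estimators=50).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row of SHAP values plus the expected value recovers the model's prediction.
print(shap_values.shape)   # (200, 4)
print(shap_values[0].sum() + explainer.expected_value, model.predict(X[:1])[0])
```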

Marija
Data exploration and visualisation for Elasticsearch.

Models

DeepType
OpenAI’s latest approach to learning symbolic structures from data: it discovers a set of task-specific constraints on a neural network in the form of a type system, uses them to guide the network's understanding of documents, and achieves state-of-the-art accuracy at recognizing entities in natural language.

Attention-Based Guided Structured Sparsity of Deep Neural Networks
An attention mechanism that simultaneously controls sparsity intensity and supervises network pruning by keeping the network's important information bottlenecks active.

Weakly Supervised Segmentation with TensorFlow
The idea behind weakly supervised segmentation is to train a model using cheap-to-generate label approximations (e.g., bounding boxes) as substitute/guiding labels for computer vision tasks, such as segmentation, that usually require very detailed labels.
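To make the idea concrete, here is a small sketch (not from the repository) of turning a bounding box annotation into an approximate segmentation mask that can stand in for a pixel-accurate label:

```python
import numpy as np

def box_to_weak_mask(height, width, box):
    """Approximate a segmentation label from a bounding box (x1, y1, x2, y2):
    every pixel inside the box is treated as foreground."""
    x1, y1, x2, y2 = box
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[y1:y2, x1:x2] = 1
    return mask

# A cheap weak label for a 100x100 image with one annotated object.
weak_label = box_to_weak_mask(100, 100, (20, 30, 60, 80))
print(weak_label.sum())   # 40 * 50 = 2000 foreground pixels
```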

Prototypical Networks for Few shot Learning in PyTorch
A simple alternative implementation of Prototypical Networks for Few-Shot Learning in PyTorch
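For readers new to the method, a minimal NumPy sketch of the core idea (illustrative only, not the repository's code): class prototypes are the mean embeddings of each class's support examples, and queries are classified by their nearest prototype.

```python
import numpy as np

def classify_by_prototype(support_embeddings, support_labels, query_embeddings):
    """Prototypical-network classification rule on precomputed embeddings."""
    classes = np.unique(support_labels)
    # Prototype = mean embedding of each class's support set.
    prototypes = np.stack([
        support_embeddings[support_labels == c].mean(axis=0) for c in classes
    ])
    # Assign each query to the class of its nearest (Euclidean) prototype.
    dists = np.linalg.norm(query_embeddings[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[np.argmin(dists, axis=1)]

# Tiny example: 2 classes, 3 support points each, 2 queries in a 2-D embedding space.
support = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
labels = np.array([0, 0, 0, 1, 1, 1])
queries = np.array([[0.5, 0.5], [5.5, 5.5]])
print(classify_by_prototype(support, labels, queries))   # [0 1]
```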


Like to add your project? Tweet @stkim1!
