Deep learning is a machine learning technique that models high-level abstractions in data through distributed representations. A deep model has multiple layers, each performing a transformation on the input it receives from the previous layer, trained with supervised or unsupervised learning algorithms. The number of layers and the number of units in each layer produce different levels of abstraction, so that more abstract concepts are learned from the lower levels of abstraction in the network. The first working learning algorithm for supervised, deep, feedforward multilayer perceptrons was published by Ivakhnenko and Lapa in 1965, but since around 2008 it has been raining researchers, with many working on architectures based on deep learning. Ever since, deep learning has been a fast-growing field, and new variants and algorithms appear every few weeks, mostly in the form of new deep neural network architectures.
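To make the idea of stacked layer-wise transformations concrete, here is a minimal sketch in plain NumPy. The layer sizes and the tanh nonlinearity are arbitrary choices of mine, and the weights are random rather than learned; the point is only how each layer re-represents the output of the previous one:

```python
import numpy as np

def layer(x, W, b):
    # One layer: an affine transformation followed by a nonlinearity
    return np.tanh(x.dot(W) + b)

rng = np.random.RandomState(0)
x = rng.randn(1, 100)                 # input vector (e.g. raw features)

# Each successive layer produces a higher level of abstraction
sizes = [100, 64, 32, 10]             # made-up layer widths
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    W = rng.randn(n_in, n_out) * 0.1  # weights (learned during training in practice)
    b = np.zeros(n_out)
    x = layer(x, W, b)

print(x.shape)                        # (1, 10): the most abstract representation
```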

I like to visualize it as an architecture that has revamped artificial neural networks in particular, along with various other machine learning algorithms. I used the word "revamped" mainly in relation to artificial neural networks, even though deep learning was not the only reason for the revival. The worldwide availability of data thanks to the internet (which was not the case in the early days of ANNs) shares the credit equally, or perhaps deserves even more.

I prefer to work in Python for my research, but if you want to work on deep learning, every language has options, with various packages being made available. Here is the list of packages I am aware of:

Python

- Theano: Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently (see the first sketch after this list).
- TensorFlow: TensorFlow isn't a rigid neural networks library. If you can express your computation as a data flow graph, you can use TensorFlow.
- Keras: Keras is a high-level neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation, since being able to go from idea to result with the least possible delay is key to doing good research (see the second sketch after this list).
- Caffe: Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license.
- Deepy: Deepy is a deep learning framework for designing models with complex architectures. Many important components such as LSTM and Batch Normalization are implemented inside. Although highly flexible, deepy maintains a clean high-level interface.
- Pylearn2: Pylearn2 is a machine learning library. Most of its functionality is built on top of Theano. This means you can write Pylearn2 plugins (new models, algorithms, etc.) using mathematical expressions, and Theano will optimize and stabilize those expressions for you and compile them to a backend of your choice (CPU or GPU).
- nolearn: nolearn contains a number of wrappers and abstractions around existing neural network libraries, most notably Lasagne, along with a few machine learning utility modules. All code is written to be compatible with scikit-learn.
- Hebel: Hebel is a library for deep learning with neural networks in Python, using GPU acceleration with CUDA through PyCUDA. It implements the most important types of neural network models and offers a variety of activation functions and training methods such as momentum, Nesterov momentum, dropout, and early stopping.
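First, a taste of what Theano means by defining, optimizing, and evaluating a mathematical expression. This minimal sketch (the variable names are my own) builds a symbolic graph and compiles it into a callable function:

```python
import theano
import theano.tensor as T

# Declare symbolic variables (no values yet, just a computation graph)
x = T.dmatrix('x')
y = T.dmatrix('y')

# Define an expression over them; Theano optimizes the graph before compiling
z = T.tanh(T.dot(x, y) + 1)

# Compile the graph into a callable function (targeting CPU or GPU)
f = theano.function([x, y], z)

print(f([[0., 1.]], [[2.], [3.]]))  # evaluates the expression numerically
```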
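Second, a glimpse of the Keras "idea to result" workflow: a minimal sketch that declares and trains a tiny classifier. The data is random and purely for illustration, the layer sizes, optimizer, and epoch count are arbitrary, and `nb_epoch` is the Keras 1.x spelling from the era when Keras ran on Theano or TensorFlow:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Random stand-in data: 100 samples, 20 features, binary labels
X = np.random.randn(100, 20)
y = (np.random.rand(100, 1) > 0.5).astype(int)

# A two-layer network, declared layer by layer
model = Sequential()
model.add(Dense(16, input_dim=20, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

# Keras compiles this down to Theano or TensorFlow under the hood
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(X, y, nb_epoch=5, batch_size=16)
```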
Java

- Deeplearning4j: Deeplearning4j is a domain-specific language to configure deep neural networks, which are made of multiple layers. Everything starts with a MultiLayerConfiguration, which organizes those layers and their hyperparameters.
- ND4J: ND4J and ND4S are scientific computing libraries for the JVM. They are meant to be used in production environments, which means routines are designed to run fast with minimum RAM requirements.
C++

- Intel® MKL-DNN: Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) is an open source performance library for deep learning (DL) applications intended to accelerate DL frameworks on Intel® architecture. It includes highly vectorized and threaded building blocks for implementing convolutional neural networks (CNNs) with C and C++ interfaces. Intel created the project to help the DL community innovate on Intel® processors.
- NVIDIA DIGITS: The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning into the hands of engineers and data scientists. DIGITS can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation, and object detection tasks.
- Singa: Singa is an Apache Incubating project for developing an open source deep learning library. It provides a flexible architecture for scalable distributed training, is extensible to run over a wide range of hardware, and has a focus on health-care applications.
MATLAB

- ConvNet and CUDA-convnet: Convolutional neural networks (CNNs or ConvNets) are essential tools for deep learning, and are especially suited for image recognition. You can construct a CNN architecture, train a network, and use the trained network to predict class labels. You can also extract features from a pre-trained network and use them to train a linear classifier. Neural Network Toolbox also enables you to perform transfer learning; that is, retrain the last fully connected layer of an existing CNN on new data.
- MatConvNet: MatConvNet is a MATLAB toolbox implementing convolutional neural networks (CNNs) for computer vision applications. It is simple, efficient, and can run and learn state-of-the-art CNNs. Many pre-trained CNNs for image classification, segmentation, face recognition, and text detection are available.
R

- deepnet: A deep learning toolkit in R that implements some deep learning architectures and neural network algorithms, including BP, RBM, DBN, deep autoencoders, and so on.
- darch: The creation of this package is motivated by the papers of G. Hinton et al. from 2006 (see references for details) and by the MATLAB source code developed in that context. The package makes it possible to generate deep architecture networks (darch) such as the deep belief networks of Hinton et al. These deep architectures can then be pre-trained with the contrastive divergence method and afterwards fine-tuned with several learning methods such as backpropagation, resilient backpropagation, and conjugate gradients.
Haskell

- DNNGraph: DNNGraph is a DSL for specifying deep neural network models. It uses the lens library for elegant, composable constructions and the fgl graph library for specifying the network layout.
Lisp

- Lush: Lush (Lisp Universal Shell) is an object-oriented programming language designed for researchers, experimenters, and engineers interested in large-scale numerical and graphics applications. It comes with a rich set of deep learning libraries as part of its machine learning libraries.
Lua

- Torch: Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
JavaScript

- ConvNetJS: ConvNetJS is a JavaScript library for training deep learning models (neural networks) entirely in your browser. Open a tab and you're training. No software requirements, no compilers, no installations, no GPUs, no sweat.