Transformers: Product Documentation
Transformers are based on the idea of mimicking the human cognitive process of selectively focusing on the most relevant parts of an information source, rather than attending to all of it equally, to solve a problem. From machine translation to other natural-language processing tasks and beyond, Transformer models have redefined the state of the art across several machine-learning research domains and continue to enable spectacular breakthroughs within the broader artificial-intelligence community. This project explains what the Transformer model is and how it works, covers the key concepts needed to build a useful intuition, and walks through a technical implementation that solves a specific real-world task.
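The selective-focus idea described above is realised in the Transformer as scaled dot-product attention: each query scores every key, the scores are normalised with a softmax, and the values are mixed according to those weights. A minimal sketch in plain Python (the function names and the toy vectors are illustrative, not from the project itself):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors.

    Q, K, V are lists of equal-length vectors; each query attends
    over all keys and returns a weighted average of the values.
    """
    d = len(K[0])  # key dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)  # weights sum to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Toy example: one query that is closer to the first key, so the
# output leans toward the first value.
result = attention([[1.0, 0.0]],
                   [[1.0, 0.0], [0.0, 1.0]],
                   [[1.0], [0.0]])
```

Because the output is a convex combination of the values, each output component always lies inside the range spanned by the corresponding value components.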
A Deeper Analysis Of Instagram’s New Changes. Here’s How Your Brand Can Effectively Leverage Them!
Ever wondered how you can leverage the power of Instagram marketing to bolster your brand? Well, you’ve come to the right place! This article breaks down the secrets of Instagram and provides plenty of key insights and suggestions on how you can use different Instagram tips, tricks, and proven strategies, tailor them to your business model, and ultimately grow your brand, user engagement, and business profit. Read on!
DeepCancer: Detecting Cancer via Deep Generative Learning Through Gene Expressions
DeepCancer is a deep generative machine-learning model that learns features from unlabelled microarray data. The learned features are used in conjunction with traditional classifiers to label tissue samples as either cancerous or non-cancerous.
DeepBipolar: Identifying genomic mutations for bipolar disorder via deep learning
We design an end-to-end deep learning architecture (called DeepBipolar) to predict bipolar disorder from limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture, which automatically extracts features from genotype information to predict the bipolar phenotype.
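The core operation behind the automatic feature extraction described above is a convolution slid along the genotype sequence. A minimal sketch in plain Python, assuming the common 0/1/2 allele-count encoding of genotypes (the encoding, kernel values, and function names are illustrative assumptions, not taken from the DeepBipolar paper):

```python
def conv1d(seq, kernel, stride=1):
    """Valid 1-D convolution of a numeric sequence with a kernel.

    Slides the kernel along the sequence and returns the dot product
    at each position; this is the feature-extraction primitive a DCNN
    applies (with learned kernels) to encoded genotype data.
    """
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(seq) - k + 1, stride)]

def relu(xs):
    """Rectified linear activation, applied element-wise."""
    return [max(0.0, x) for x in xs]

# Toy genotype window (allele counts) and a hand-picked edge-detecting
# kernel; a real network would learn many such kernels from data.
genotypes = [0, 1, 2, 1, 0]
features = relu(conv1d(genotypes, [1, 0, -1]))
```

A real DCNN stacks many such convolution-plus-activation layers and learns the kernel weights end to end; this sketch only shows the shape of a single feature map.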
Deep Convolutional Neural Network for Large-Scale Scene Classification
We implement a Caffe-based convolutional neural network on the Places2 dataset for a large-scale visual recognition application. We build on prior research by using very small 3x3 convolution filters in our architecture. By varying the depth of the weight layers, we obtain a suitable parameterisation level for training a model with improved recognition ability compared to configurations used in prior art.
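The rationale for very small 3x3 filters can be checked with simple arithmetic: stacking two 3x3 convolutions covers the same 5x5 receptive field as a single 5x5 convolution but with fewer weights. A back-of-the-envelope sketch (the channel count `C` is an illustrative assumption, not a figure from the project):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution layer (biases ignored)."""
    return k * k * c_in * c_out

def receptive_field(n_layers, k):
    """Receptive field of n stacked k x k convolutions at stride 1."""
    return n_layers * (k - 1) + 1

C = 64  # illustrative channel count
stacked = 2 * conv_params(3, C, C)  # two stacked 3x3 layers: 2 * 9C^2
single = conv_params(5, C, C)       # one 5x5 layer: 25C^2

# Same 5x5 receptive field, fewer parameters for the 3x3 stack.
print(receptive_field(2, 3), receptive_field(1, 5))  # 5 5
print(stacked, single)  # 73728 102400
```

The extra non-linearity between the two small layers is a further benefit of the stacked design: depth can be varied by adding 3x3 layers without the parameter blow-up of larger filters.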