OpenAI – GPT-3 is magical

Recently, OpenAI released GPT-3, a natural language processing deep-learning model with 175 billion parameters (100x more than the previous version, GPT-2). GPT-3 is an autoregressive language model; it comes in 8 sizes, ranging from 125M to 175B parameters. Let’s evaluate what we can do with giant language models.

What is magical?

GPT-3 is a great milestone in the artificial intelligence community. It was trained on a scrape of almost all the text data on the internet. Its “one API to rule many” approach is path-breaking, and it’s surprising how many things you can do with that one API. A few of them are below:

Text to SQL conversion: yes, it generates SQL code for you
Text summarization: quick summarization of large text data
Text to AI code: yes, it generates code for you in Keras and Python
Question answering: on internet content
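
To get a taste of that one API, here is a minimal sketch of a text-to-SQL request using OpenAI’s Python client. The engine name, prompt, and response handling are illustrative, and the beta API may differ in detail:

```python
# Minimal sketch: ask GPT-3 to turn plain English into SQL.
# Assumes `pip install openai` and an API key from https://beta.openai.com/.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",  # illustrative engine name
    prompt="Translate to SQL: list all customers from India.\nSQL:",
    max_tokens=64,
    temperature=0,
)
print(response.choices[0].text)
```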

Isn’t it magical? Let’s explore more

Background

We may observe that there has been great progress in NLP since 2012. The introduction of Word2vec brought a self-supervised way of learning word representations from their context, which technically is transfer learning.
Later, Transformers came to lead this space: they can learn the dependencies between any tokens in the input through a clever routing system (self-attention) that is learned as part of the training phase, rather than the sequential processing of the original recurrent neural networks.

(Image source: Analytics Vidhya)

Technical Details

Transfer learning is the improvement of learning in a new task through the transfer of knowledge from a related task that has already been learned.

Transformers: The Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.

There are two architectural patterns that we see with these Transformer architectures.

The first is the autoregressive pattern, where the model is just predicting the next word, again and again, and the answer from the previous prediction gets fed into the model the next time around.

The other pattern is the denoising autoencoder: you feed in an input sentence, typically after adding some noise to it, and train the model to output what you expect (the clean sentence).
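
To make the autoregressive pattern concrete, here is a toy sketch in plain Python: `next_word` is a stand-in for a trained model’s prediction, and each output is fed straight back in as the next input:

```python
# Toy autoregressive loop: the previous output becomes the next input.
# `next_word` is a stand-in for a trained language model's prediction.
bigrams = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def next_word(word):
    return bigrams.get(word, "<end>")

tokens = ["the"]
for _ in range(6):
    tokens.append(next_word(tokens[-1]))
print(" ".join(tokens))  # prints: the cat sat on the cat sat
```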

GPT-3 was trained on V100 GPUs as part of a high-bandwidth cluster provided by Microsoft. Evaluation of GPT-3 is done under three conditions:
Few-shot learning
One-shot learning
Zero-shot learning


The definitions below are based on Wikipedia:

Zero-shot learning aims to handle instances that may not have been seen at all during training; it is now the machine’s turn to do the job unaided.
One-shot learning is just like the previous one, but the difference is the number of seen instances, which here is one.
Few-shot learning refers to the practice of feeding a learning model with a very small amount of training data, contrary to the normal practice of using a large amount of data.
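
In practice, these three settings differ only in how many solved examples you pack into the prompt. Here is a sketch of a hypothetical few-shot prompt; the translation task and example pairs are illustrative:

```python
# Few-shot: show the model a handful of input/output pairs, then the new input.
few_shot_prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"      # example 1
    "cheese => fromage\n"               # example 2
    "peppermint => menthe poivrée\n"    # example 3
    "plush giraffe =>"                  # the model completes this line
)
# Zero-shot would keep only the instruction line; one-shot keeps one example.
print(few_shot_prompt)
```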

Summary :

In spite of some drawbacks, the GPT-3 platform is one of the best models for NLP tasks. Every new piece of technology will help society, and this model is one more step in that direction.

Excited to learn more? The links below will help you further:

https://beta.openai.com/

https://openai.com/blog/openai-api/

Serverless and Kubernetes (PoV)

Both technologies may have the same ultimate goal, but they are at very different stages of their life cycle. Kubernetes packages everything neatly into self-sufficient containers that can be run anywhere.
The key to successful deployment right now is to know how to choose between Kubernetes and serverless.

Serverless

  • Scales up and down automatically based on demand
  • Pay as you use

Ex: new apps with medium scale and auto-scaling needs, lightweight mobile apps
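
For a feel of the serverless model, here is a minimal AWS Lambda handler sketch in Python; the event fields are illustrative, and the provider runs and scales it per request:

```python
# Minimal AWS Lambda handler: the platform invokes this once per request and
# scales instances up and down automatically; you pay per invocation.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")  # illustrative event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```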
 
Containers

  • Have their own benefits apart from the obvious portability
  • Help avoid vendor lock-in, which is a major unique selling proposition
  • Give you all the control you need over your environment and infrastructure

Ex: complex apps with dynamic scaling needs, cloud-agnostic apps, infrastructure-intensive apps

My point of view: the point of intersection we’re looking for is probably in the future, where Kubernetes gets to a point where all the configuration, complexity, and cluster management is abstracted away, or AWS Fargate gets to a point where it offers “Kubernetes-level” control over our environment.

We’re looking for “powerful” tools that make us champions over our environment to create more business value.

Intro to Kubernetes

Kubernetes is a popular open-source container orchestration platform that allows us to deploy and manage multi-container applications at scale. Businesses are rapidly adopting this revolutionary technology to modernize their applications. Cloud service providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), are playing a key role in providing advanced features to businesses to manage their Kubernetes architecture.

Kubernetes architecture (image courtesy: Google)

• A cluster consists of a set of nodes; a node is a VM or physical machine. Having multiple nodes makes the cluster fail-safe.
• The master node manages, plans, schedules, and monitors all nodes, and performs orchestration of containers.

Master node components

API server: the frontend to the cluster
etcd: a distributed key-value store that keeps the cluster state and change logs
Scheduler: distributes work across nodes
Controller: the brain behind orchestration; it makes decisions to manage containers

Worker node components

Container runtime engine
Kubelet: an agent that sits on each node
Kube-proxy: rules that allow worker nodes to communicate with each other
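
Since everything flows through the API server, a small sketch with the official Kubernetes Python client (assuming `pip install kubernetes` and a configured kubeconfig) can list the nodes the master manages:

```python
# Sketch: talk to the API server (the cluster frontend) to list nodes.
# Assumes a valid kubeconfig at ~/.kube/config.
from kubernetes import client, config

config.load_kube_config()   # read local kubeconfig credentials
v1 = client.CoreV1Api()     # typed wrapper over the API server

for node in v1.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```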

Containers

Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment.

Benefits :

  • Container orchestration
  • Multiple hosts / clustering
  • Easy to deploy multiple instances
  • Automatic scale-up/down
  • Load balancing
  • Configuration management
  • Security

Available technologies in the market

Docker

Docker is a bit like a virtual machine. But unlike a virtual machine, rather than creating a whole virtual operating system, Docker allows applications to use the same Linux kernel as the system they’re running on, and only requires applications to be shipped with things not already running on the host computer. This gives a significant performance boost and reduces the size of the application.
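
To see that shared-kernel model in action, here is a small sketch using the Docker SDK for Python (assuming `pip install docker` and a running Docker daemon):

```python
# Sketch: run a throwaway container that shares the host's Linux kernel.
# Requires the Docker daemon to be running locally.
import docker

client = docker.from_env()  # connect to the local daemon
output = client.containers.run("alpine", "uname -r", remove=True)
print(output.decode().strip())  # prints the *host* kernel version
```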

Kubernetes

Kubernetes is a popular open-source container orchestration platform that allows us to deploy and manage multi-container applications at scale. Businesses are rapidly adopting this revolutionary technology to modernize their applications.

Mesos

Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments.

Windows

Docker and Microsoft have a joint engineering relationship to deliver a consistent Docker experience for developers and operators. All Windows Server 2016 and later versions come with Docker Engine – Enterprise. Additionally, developers can leverage Docker natively on Windows 10 via Docker Desktop. Docker Windows containers work the same way as they do on Linux: same Docker CLI, API, image format, and content distribution services.

Rocket (rkt)

Rocket is a container system developed by CoreOS as a lightweight and secure alternative to Docker. It is built on an open container standard known as the “App Container” or “appc” specification. This allows rkt images to be portable across many container systems (such as Kurma and JetPack) that follow the “appc” open format.

Container orchestration leaders

Kubernetes

  • Somewhat complex setup
  • Lots of options
  • Customization support
  • Supports GCP, Azure, and AWS
  • Supports native security

Docker Swarm

  • Easy setup
  • Lacks autoscaling
  • Good for small and medium-grade apps
  • Unable to manage complex production-grade apps

Mesos

  • Complex setup
  • Has options

Machine Learning – Part 3

Top Machine learning Algorithm types

Selecting the right algorithm is a key part of any machine learning project, and because there are dozens to choose from, understanding their strengths and weaknesses in various business applications is essential. In machine learning, one of the most common goals or use cases is either prediction or clustering.

The most common prediction problems are divided into two subcategories:

Regression problems, where the variable to predict is numerical (e.g., the price of a house)

Classification problems, where the variable to predict is one of some number of pre-defined categories, which can be as simple as “yes” or “no” (for example, predicting whether a certain piece of equipment will experience a mechanical failure)

Algorithm types

Let’s introduce the most prominent and common algorithms used in machine learning.

These algorithms come in three groups:

  • Linear models
  • Tree-based models
  • Neural networks

Metrics of evaluation :

There are several metrics for evaluating machine learning models, depending on whether you are working with a regression model or a classification model.

For regression models, you want to look at mean squared error (MSE) and R2.

Mean squared error is calculated by computing the square of all errors and averaging them over all observations. The lower this number is, the more accurate your predictions were.

R2 (pronounced R-squared) is the percentage of the observed variance from the mean that is explained (that is, predicted) by your model. R2 typically falls between 0 and 1, and a higher number is better.
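
Here is a minimal scikit-learn sketch of both regression metrics (assuming scikit-learn is installed; the numbers are made up for illustration):

```python
# Compute MSE and R2 for a toy set of predictions.
from sklearn.metrics import mean_squared_error, r2_score

y_true = [3.0, 2.5, 4.0, 7.1]   # observed values (illustrative)
y_pred = [2.8, 2.9, 4.2, 6.8]   # model predictions

print("MSE:", mean_squared_error(y_true, y_pred))  # lower is better
print("R2 :", r2_score(y_true, y_pred))            # closer to 1 is better
```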

For classification models, the simplest metric for evaluating a model is accuracy.

Accuracy is a common word, but in this case we have a very specific way of calculating it: accuracy is the percentage of observations that were correctly predicted by the model. Accuracy is simple to understand, but it should be interpreted with caution, in particular when the classes to predict are unbalanced.

Another useful metric is ROC AUC, which is a measure of accuracy and stability. AUC stands for “area under the curve”. A higher ROC AUC generally means you have a better model.
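
And the classification counterparts, again as a scikit-learn sketch with made-up values; note that ROC AUC is computed from predicted scores rather than hard labels:

```python
# Accuracy uses hard labels; ROC AUC uses predicted probabilities.
from sklearn.metrics import accuracy_score, roc_auc_score

y_true   = [0, 0, 1, 1, 1, 0]
y_pred   = [0, 1, 1, 1, 0, 0]               # hard class predictions
y_scores = [0.2, 0.6, 0.9, 0.7, 0.4, 0.1]   # predicted probability of class 1

print("Accuracy:", accuracy_score(y_true, y_pred))
print("ROC AUC :", roc_auc_score(y_true, y_scores))
```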

Machine learning – Part 2

The types of Machine learning


Supervised Learning:

Used when we know the correct answers from past data, but need to predict future outcomes. 

Effectively, using a trial-and-error-based statistical improvement process, the machine gradually improves its accuracy by testing results against a set of values provided by a supervisor.

Unsupervised Learning:

Where there is no distinct correct answer, but we want to discover something new from the data. Most often used to classify or group data, for example, to classify music on Spotify, to help recommend which albums you might listen to. 

A related technique, reinforcement learning, doesn’t need a domain expert but involves constant improvement towards a predefined goal. It often deploys neural networks; for example, DeepMind’s AlphaGo played millions of games of Go against itself to eventually beat the human world champion.
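
As a concrete unsupervised example, here is a short scikit-learn sketch where k-means discovers two groups in unlabeled points, with no correct answers supplied:

```python
# Unsupervised learning: cluster unlabeled points into 2 groups.
from sklearn.cluster import KMeans

X = [[1, 2], [1, 4], [1, 0],      # one blob near x=1
     [10, 2], [10, 4], [10, 0]]   # another blob near x=10

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [1 1 1 0 0 0]: the discovered groups
```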

The Machine learning Process

Define the Problem: start with a clearly defined problem and objective in mind.

Collect the data: The greater the volume and variety of appropriate data, the more accurate the machine learning model will become. This can come from spreadsheets, text files, and databases in addition to commercially available data sources.

Prepare the data: this involves analyzing, cleaning, and understanding the data, and removing or correcting outliers (wildly wrong values); it often takes upwards of 60% of the overall time and effort. The data is then separated into two distinct parts: training and test data.

Train the model: the model is trained against the set of training data, which is used to identify the patterns or correlations in the data or make predictions, while gradually improving accuracy using a repeating trial-and-error improvement method.

Evaluate the model: compare the accuracy of the results against the set of test data. It’s important not to evaluate the model against the data used to train the system, to ensure an unbiased and independent test.

Deploy and Improve: this can involve trying a completely different algorithm or gathering a greater variety or volume of data. You could, for example, improve house-price prediction by estimating the value of subsequent home improvements using data provided by homeowners.
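
The prepare/train/evaluate steps above fit in a few lines of code; here is a toy scikit-learn sketch (assuming scikit-learn is installed, with a built-in sample dataset standing in for real data):

```python
# Prepare: split into training and test data; Train: fit a model;
# Evaluate: score against the *held-out* test set, never the training set.
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LinearRegression().fit(X_train, y_train)
print("R2 on unseen test data:", model.score(X_test, y_test))
```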

Machine Learning — Part 1

Background :
Analytics is a scientific process of transforming data into insights for the purpose of making better decisions. Analytics is always an action-driven approach. Increasingly, “analytics” is used to describe statistical and mathematical data analysis that clusters, segments, scores and predicts what scenarios are most likely to happen.

Analytics is the main technique, and this includes:
Descriptive Analytics
Identifies what happened.

This typically involves reports that help describe what has happened; for example, comparing this month’s sales to the same time last year.

Diagnostic Analytics
Attempts to explain why it happened.

This typically involves using dashboards with OLAP capability to drill into and investigate the data, along with data mining techniques to find correlations.

Predictive Analytics
Attempts to estimate what might happen.

Predictive analytics encompasses a variety of statistical techniques that analyze current and historical facts to make predictions about future or unknown events.

Machine Learning is a subset of Artificial Intelligence whereby a machine learns from past experience, i.e., data. Machine Learning (ML) fits into the predictive analytics space.

So machine learning is, more or less, a way for computers to learn things without being specifically programmed.

A Machine Learning algorithm doesn’t literally write code, but it builds up a computer model of the world, which it then modifies based upon how it’s trained.

How does that actually happen?

Algorithms!!
Machine learning algorithms use computational methods to “learn” information directly from data without relying on a predetermined equation as a model.

Artificial intelligence, machine learning, and deep learning are strongly associated. Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.

Comparing Amazon ML and Microsoft Azure ML

Microsoft Azure ML

Microsoft Azure ML was made generally available in February 2015, so like Amazon ML it’s a relatively young offering, but it’s a feature-rich one! There’s something for everyone, from beginners to advanced users. Users who are just starting out have a workflow that helps them get started quickly, and for intermediate/advanced users there is support for R and IPython notebooks. More about Azure ML:

  • Azure ML has a workflow and a visual editor that beginners can easily follow to build their first ML project with Azure ML!
  • Azure ML supports the following data sources: CSV, SQL database tables, and RData, among others. You can check out the list here: https://azure.microsoft.com/en-us/documentation/articles/machine-learning-data-science-import-data/ . You can also automate data movement using Azure Data Factory, another service that is part of the Azure cloud offering.
  • Azure ML has common data cleaning and transformation tasks that you can use, or you can build the data pipeline using R code with Azure ML.
  • Azure ML supports the following problems: binary and multiclass classification, regression, clustering, recommendations, and anomaly detection.
  • For each problem, Azure ML gives you the option to try multiple algorithms; you can also bring other algorithms supported in R or Anaconda Python.
  • Azure ML also helps you tune the parameters for each algorithm; in fact, it has a “sweep parameters” task that iterates over multiple input options for each algorithm parameter and identifies the optimal parameter setting for your problem.
  • Azure ML also makes it easy to compare the performance of different algorithms and helps you select the best one for the problem at hand!
  • It also supports R and Anaconda Python notebooks, so you can port your existing R/Python code and use the Azure platform to operationalize your machine learning project.

Amazon ML

Amazon ML was announced in April 2015; it’s a relatively young offering, so it’s understandable that it’s limited in the capabilities/algorithms offered. It seems that Amazon launched a version 1 of their ML product to help their existing AWS customers get started, and if there is more demand from customers, the service should evolve over time. Here are a few more things you should know about Amazon ML:

  • Amazon ML has a wizard that walks you through each step, so it enables developers without ML know-how to get started.
  • Amazon ML supports data sources available on the AWS platform, like Redshift and S3, so you will have to move your data to AWS before you can use Amazon ML; but it’s great if you are an existing customer!
  • Amazon ML supports basic data cleaning and transformation tasks, but you will have to do the heavy lifting of cleaning/transforming data somewhere else for intermediate to complex needs.
  • Amazon ML currently supports the following ML problems: regression, and binary and multiclass classification.
  • Amazon ML does not let the developer select the algorithm for the problem at hand. For instance, if you have a binary classification problem, it automatically uses a logistic regression algorithm for you; it doesn’t let you change the algorithm to something like a two-class SVM or a two-class decision forest.
  • For each algorithm, you can set some training and evaluation parameters, so it’s limiting for advanced users.
  • Amazon ML does give you common performance metrics to evaluate your model’s performance; for example, if you are building a binary classification model, it gives you the binary AUC.
With that comparison in hand, let’s zoom out and look at machine learning as a service more broadly.

Machine learning as a service

ML-as-a-service platforms cover most infrastructure issues as far as data pre-processing, model training, and model evaluation, with further prediction performed in the cloud. Prediction results can be bridged with your internal IT infrastructure through REST APIs. Amazon Machine Learning, Azure Machine Learning, and Google Prediction API are three leading cloud services that allow for fast model training and deployment with little to no data science expertise. These should be considered first if you assemble a homegrown data science team out of available software engineers.

This post isn’t intended to provide exhaustive instructions on when and how to use these platforms, but rather what to look for before you start reading through their documentation.

Amazon Machine Learning

Amazon Machine Learning is one of the most automated solutions on the market and the best fit for deadline-sensitive operations. The service can load data from multiple sources, including Amazon RDS, Amazon Redshift, CSV files, etc. All data preprocessing operations are performed automatically: the service identifies which fields are categorical and which are numerical, and it doesn’t ask the user to choose the methods of further data preprocessing (dimensionality reduction and whitening).

Prediction capacities of Amazon ML are limited to three options: binary classification, multiclass classification, and regression. That said, Amazon doesn’t support any unsupervised learning methods, and a user must select a target variable to label in a training set. Also, a user isn’t required to know any machine learning methods, because Amazon chooses them automatically after looking at the provided data.

This high automation level acts as both an advantage and a disadvantage for Amazon ML use. If you need a fully automated yet limited solution, the service can match your expectations. However, it doesn’t contribute much to understanding machine learning specifics and can’t be used as a launch pad to train in-house developers in data science.

Microsoft Azure Machine Learning

Unlike the Amazon ML product, Azure Machine Learning is aimed at setting up a powerful playground both for newcomers and experienced data scientists. Almost all operations in Azure ML must be completed manually. This includes data exploration, preprocessing, choosing methods, and validating modeling results.

Approaching machine learning with Azure entails quite a steep learning curve. But it eventually leads to a deeper understanding of all major techniques in the field. On the other hand, Azure ML supports a graphical interface to visualize each step within the workflow. Perhaps the main benefit of using Azure is the variety of algorithms available to play with. The Studio supports around 100 methods that address classification (binary and multiclass), anomaly detection, regression, recommendation, and text analysis. It’s worth mentioning that the platform has one clustering algorithm (K-means).

Happy machine learning!!

Microsoft Bots

A bot is a web service that interacts with users in a conversational format. Users start conversations with your bot from any channel that you’ve configured your bot to work on.

Microsoft Bot Framework is a comprehensive offering to build and deploy high-quality bots for your users to enjoy in their favorite conversation experiences.

Bot Framework helps you build and connect intelligent bots that interact with your users naturally wherever they are, from your website to Skype, Office 365 mail, Teams, and other popular services.

Language Understanding Intelligent Service (LUIS) offers a fast and effective way of adding language understanding to applications. To give your bot more human-like senses, you can incorporate LUIS.
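
Incorporating LUIS is essentially one HTTPS call per utterance. Here is a rough sketch with the `requests` library; the app ID, key, and region are placeholders, and the v2.0 endpoint shape shown is illustrative of the documented format:

```python
# Sketch: send an utterance to LUIS and read back the top-scoring intent.
# APP_ID, KEY, and the region are placeholders; endpoint shape is illustrative.
import requests

APP_ID, KEY = "YOUR_APP_ID", "YOUR_SUBSCRIPTION_KEY"
url = f"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{APP_ID}"

resp = requests.get(
    url, params={"subscription-key": KEY, "q": "book a flight to Paris"}
)
print(resp.json().get("topScoringIntent"))  # e.g. {'intent': 'BookFlight', ...}
```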

Microsoft Bot Builder

  • Supports .NET, Node, and C#
  • Send/receive messages, conversation system
  • Unifies platform events
  • Connects to many platforms
  • Uses a “write once, run anywhere” style
  • Best for platform-agnostic products
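
For a feel of the SDK, a Python flavor of Bot Builder also exists (botbuilder-core). Assuming it is installed, a minimal echo-bot sketch looks like this (the web-server wiring is omitted):

```python
# Minimal Bot Builder bot: echo every incoming message back to the user.
# Assumes `pip install botbuilder-core`; hosting/adapter wiring is omitted.
from botbuilder.core import ActivityHandler, TurnContext

class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        await turn_context.send_activity(
            f"You said: {turn_context.activity.text}"
        )
```
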
A quick clarification, per the docs:

The Microsoft Bot Framework provides just what you need to build and connect intelligent bots that interact naturally wherever your users are talking, from text/SMS to Skype, Slack, Office 365 mail, and other popular services.
MS Bot Builder is the SDK you use to create bots which connect to MS Bot Framework.

A little confusing, I know.

Bot Framework is itself a platform that connects your bot to many different platforms. Using Bot Builder entails registering with MS Bot Framework and connecting your bot to it.

MS Bot Builder supports many programming languages and platforms. It really excels at building an interaction once and having it run on the maximum number of platforms (e.g., FB Messenger, Skype, Alexa, etc.).

If your bot will rely heavily on NLP, Microsoft Bot Framework makes it easy to integrate with their LUIS NLP service.

Bot Framework has support for Azure and scaling right out of the box. It focuses on a connector-based strategy, so you must first register your bot with Microsoft, and then register with individual channels and link them to the bot.

Support for languages like .NET and C# makes it the default choice for many enterprises and developers working outside Node.js. You must register your bot with Microsoft Bot Framework first and create connections through their website.

The write-once-run-anywhere approach is useful if you’re trying to approximate a similar experience across a wide variety of touch points, but it lacks the ability to tailor content to the platform it will be consumed through.

If you’re building a product that relies heavily on NLP and on being omnipresent across many different channels (including voice and Skype), then Bot Framework is likely your best choice.

Security – Privileged account management

What is it?

Privileged account management (PAM) is the security and business discipline that enables the right individuals to access the right resources at the right times and for the right reasons.

It is frequently used as an information security and governance tool to help companies meet compliance regulations and to prevent internal data breaches through the use of privileged accounts.

A shared framework controls the access of authorized users and other identities to elevated privileges across the multiple systems deployed in an organization.

Tools

A PAM tool is an enterprise privileged account management platform that provides secure storage, monitoring, and control of credentials used in an organization.

Various tools are available in the market; two market leaders are CyberArk and Secret Server. Typical capabilities include:

  • Store credentials of various types (SQL, service accounts, AD accounts)
  • SQL access without the user knowing the credentials used for access
  • Remote access to servers without the user knowing the credentials used for access
  • Easy approval workflow
  • Audit trail
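
Most PAM tools expose this through a REST API. The endpoint and fields below are purely hypothetical (products like CyberArk and Secret Server each have their own schemes), but the check-out flow looks roughly like this:

```python
# Hypothetical PAM check-out flow: the app fetches a short-lived credential
# at run time instead of storing it; every request is audit-logged server-side.
import requests

PAM_URL = "https://pam.example.com/api/v1/accounts/sql-prod/checkout"  # hypothetical

resp = requests.post(PAM_URL, headers={"Authorization": "Bearer YOUR_TOKEN"})
secret = resp.json()["password"]  # hypothetical response field
# ...connect to SQL with `secret`, which the end user never sees...
```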

Happy learning!