Hey Guys!
As we are all aware, Artificial Intelligence is all the buzz nowadays and can be considered one of the most discussed technologies of present times. In this post, I'm going to give you an insight into the wonderful world of AI in layman's terms.
Let's start with some history:
->It all started when John McCarthy, a computer scientist, coined the term 'AI' at the Dartmouth conference back in 1956.
->According to him, AI is “The science and engineering of making intelligent machines, especially intelligent computer programs”.
The Evolution
The main reason AI seems like such a big thing now is the advances in big data technology and high computing power. AI software typically learns from patterns. Not only is AI a popular topic in mainstream news, but it has also been implemented in our day-to-day lives: digital assistants such as Alexa or Siri, social media and email marketing algorithms, Amazon or Netflix recommendation algorithms, and even bigger applications such as self-driving cars. There is enormous scope for this field, both now and in the future, with greater availability of big data and affordable cloud computing. This is a field that will attract more resources, investments and tech geeks alike in the near future.
Know The Difference:
There is a lot of confusion regarding many of the 'terms' that are just thrown around together. Let us address that first.
The following is a list of terms you need to know to get started-
1.AI (Artificial Intelligence)-It is a technological discipline that involves creating smarter machines.
2.ML (Machine Learning)-It is a subset of AI that refers to systems that can learn by themselves.
3.DL (Deep Learning)-It is a subset of ML that refers to systems that learn from experience on large data sets.
One of the approaches to ML is Artificial Neural Networks (ANNs), which are models inspired by human neural networks and designed to help computers learn. An example is Amazon's recommendation engine, which uses a user's browsing data to assimilate information and generate recommendations, showing you that "customers who bought this item also bought these items". ANNs are a part of DL and will be discussed in detail later on.
AI can also be used to convert audio to text. The underlying technologies involved are Natural Language Processing (NLP), the processing of text to understand meaning, and Automated Speech Recognition (ASR), the processing of speech into text.
In summary, DL is a subset of ML, and both are subsets of AI. ASR & NLP fall under AI and overlap with ML & DL.
Now let us explore ML and DL in more detail...
ML and its models:
In ML, the models are commonly divided into three different categories:-
1)Supervised Learning-
In this, an output label is associated with every input in the dataset. Your data has known labels as output (discrete or real-valued). It involves a 'supervisor' that is more knowledgeable than the system itself. The supervisor guides the system by tagging the output.
Examples include,
->An ML system that can learn to mark e-mails as 'spam' or 'not-spam' based on hundreds of available examples (emails) that help the system distinguish the characteristics of mails that are 'spam' from those that are 'not spam'.
Classification: Where you need to categorize a certain observation into a group.
Examples include,
->classifying a dot based on its colour
->predicting whether or not it will rain
->detecting fraud or evaluating fraud risk, etc.
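To make classification concrete, here is a minimal pure-Python sketch of the dot-colour example above, using a simple nearest-neighbour rule. The data points and labels are made up for illustration; a real system would use a library classifier trained on far more data.

```python
import math

# Toy labelled data: (x, y) position of a dot -> its colour label.
training = [
    ((1.0, 1.0), "red"),
    ((1.5, 2.0), "red"),
    ((8.0, 8.0), "blue"),
    ((9.0, 7.5), "blue"),
]

def classify(point):
    """1-nearest-neighbour: return the label of the closest training dot."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    nearest = min(training, key=lambda pair: dist(pair[0], point))
    return nearest[1]

print(classify((2.0, 1.5)))  # a dot near the red cluster -> "red"
print(classify((8.5, 8.0)))  # a dot near the blue cluster -> "blue"
```

The 'supervision' here is the labelled training list: the system only knows what 'red' and 'blue' mean because we tagged the examples.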
Regression: This is a type of problem where we need to predict and forecast continuous-response values. You have an existing data set with outputs (supervised learning), and your algorithm predicts the outcome based on a fitting function.
Examples include,
->Predicting financial results, stock prices
->predicting the total runs that will be on the board in a cricket game
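The cricket example above can be sketched with the simplest possible fitting function: an ordinary least-squares line. The overs-vs-runs numbers below are invented purely for illustration.

```python
# Toy data: overs bowled -> total runs (made-up numbers for illustration).
xs = [5, 10, 15, 20]
ys = [40, 82, 118, 160]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least squares for a line y = slope * x + intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    """Forecast a continuous value from the fitted line."""
    return slope * x + intercept

print(round(predict(25)))  # forecast of total runs after 25 overs
```

Because the output is a continuous number (runs) rather than a category, this is regression rather than classification.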
2)Unsupervised Learning-
This is an unaided type of learning where there are no output labels or feedback loops for your data. It can be used when there is no known example data set and you are looking for a hidden pattern in the data. The system understands or learns based purely on the data provided. Since this is harder to implement, supervised learning is often preferred when labels are available.
Clustering:This is a type of unsupervised learning problem where we group similar things together.
Examples include,
->Clustering news articles into different themes.
->Clustering tweets based on their content.
->It also has applications in politics, shopping, health care and real estate.
fig:Clustering in ML
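To make clustering concrete, here is a minimal pure-Python sketch of the classic k-means algorithm on made-up 2-D points. The starting centroid positions are arbitrary; real systems would use a library implementation with smarter initialization.

```python
# Toy 2-D points forming two obvious groups.
points = [(1, 1), (1.5, 2), (1, 0.5), (8, 8), (9, 9), (8.5, 7.5)]

def kmeans(points, centroids, steps=10):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(steps):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Start the two centroids at arbitrary spots and let the loop refine them.
centroids, clusters = kmeans(points, [(0, 0), (10, 10)])
print(centroids)
```

Note that no labels appear anywhere: the algorithm discovers the two groups from the data alone, which is exactly what makes it unsupervised.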
Association: In association we discover rules that describe large portions of our data (ex: people who buy X also tend to buy Y). These rules are usually encountered when we purchase items based upon previous purchases.
A common example is using association algorithms in market basket analysis of our shopping data, i.e., given many baskets, how one item in a basket can predict other items in the same basket.
fig:market basket analysis
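A very small sketch of the market-basket idea: count how often pairs of items are bought together and surface the most frequent pair as a "people who buy X also buy Y" rule. The baskets below are invented; real association-rule miners (e.g. Apriori) also compute support and confidence thresholds.

```python
from collections import Counter
from itertools import combinations

# Toy shopping baskets (hypothetical transactions).
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"bread", "butter", "jam"},
]

# Count how often each pair of items appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pair gives the simplest association rule.
best_pair, count = pair_counts.most_common(1)[0]
print(best_pair, count)
```

Here the rule would read "people who buy bread also tend to buy butter", because that pair co-occurs in the most baskets.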
3)Reinforcement Learning(RL)-
In this type of learning, systems are trained by receiving 'rewards' or 'punishments'. This strategy is built on observation and trial and error, wherein the agent makes a decision based on the environment. If the observation turns out to be negative, the algorithm adjusts its weights to make a different decision the next time around. When combined with deep neural networks, this becomes deep reinforcement learning.
fig:reinforcement learning
Reinforcement learning algorithms try to find the best ways to earn the greatest reward. Rewards may be as simple as winning a game or earning money, or as complex as beating other opponents.
They produce state-of-the-art results on very human tasks, such as a machine beating a human at old-school video games. The software can also learn moves from many human-to-human games.
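To show the reward-driven trial-and-error loop in miniature, here is a pure-Python sketch of an epsilon-greedy agent choosing between two "slot machines" with hidden win probabilities (the probabilities and the seed are arbitrary choices for illustration):

```python
import random

random.seed(0)

# Two actions with different hidden win probabilities (unknown to the agent).
true_win_prob = [0.3, 0.7]
estimates = [0.0, 0.0]   # the agent's running reward estimate per action
counts = [0, 0]

def choose(epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-looking action,
    occasionally explore a random one (trial and error)."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))
    return max(range(len(estimates)), key=lambda a: estimates[a])

for _ in range(2000):
    action = choose()
    # Reward of 1 for a win, 0 otherwise (the 'punishment').
    reward = 1.0 if random.random() < true_win_prob[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(counts.index(max(counts)))  # the agent settles on the better action
```

After enough trials the agent's reward estimates converge, and it mostly plays the action with the higher hidden win rate.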
Deep Learning:
“Deep Learning” is also a method of statistical learning that extracts features/attributes from raw data sets.
DL does this by utilizing multi-layer artificial neural networks with many hidden layers stacked one after the other.
What differentiates DL from the rest of ML is that it utilizes more sophisticated algorithms and requires more powerful computational resources.
These are typically specially designed computers with high-performance CPUs or GPUs.
Artificial Neural Networks(ANNs):
->Our brains are very complex networks of billions of neurons, each connected to thousands of other neurons.
->Each of these neurons receives electro-chemical signals and passes these messages to other neurons.
->Inspired by the functionality of biological brain cells,Artificial Neural Networks(ANNs) were created.
->An ANN is modeled using layers of artificial neurons that receive input and apply an activation function along with a threshold set by a human.
Although it all sounds very complex, in reality DL has already found a place in our daily lives through efficient human-level image classification, handwriting/speech recognition and even autonomous driving. It has also been optimized for complex ad targeting in our news feeds.
An ANN can be divided into 5 basic components:
i)Input nodes: Each input node is associated with a numerical value, which can be any real number.
ex: one pixel value of an image
ii)Connections: Each connection that departs from an input node has a weight (w) associated with it, which can be any real number. The ANN is run and its errors propagated millions of times during training to optimize these 'w' values.
iii)Weighted Sum: All the values of the input nodes and the weights of the connections are brought together and used as inputs for a weighted sum.
iv)Transfer or Activation function: The artificial neuron fires only when the weighted sum of its inputs exceeds a threshold. These are parameters set by us.
v)Output Node: Holds the result of applying the activation function to the weighted sum of the input nodes.
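The five components above can be sketched in a few lines of Python for a single artificial neuron. The input values, weights and threshold below are all made up; in a trained network the weights would have been learned.

```python
# One artificial neuron: weighted sum of inputs, then a threshold activation.
inputs = [0.8, 0.2, 0.5]     # i) input node values (e.g. three pixel values)
weights = [0.9, -0.4, 0.6]   # ii) connection weights, normally learned
threshold = 0.5              # iv) a threshold we set by hand

# iii) Weighted sum of inputs and weights.
weighted_sum = sum(x * w for x, w in zip(inputs, weights))

# iv) Step activation: the neuron "fires" only above the threshold.
output = 1 if weighted_sum > threshold else 0   # v) the output node's value

print(weighted_sum, output)
```

Training an ANN amounts to repeatedly adjusting the `weights` list so that the outputs match the desired labels.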
More on Deep Learning...
->Deep-learning networks are distinguished from the more general single-hidden-layer neural networks by their depth.
->Depth is the number of node layers; having more than one hidden layer means more computation power is needed for forward/backward optimization while training, testing and eventually running these ANNs.
Among the layers, you can find an input layer, hidden layers and an output layer.
The layers act like the biological neurons that you have read about above.
The outputs of one layer serve as the inputs for the next layer.
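The "outputs of one layer become inputs of the next" idea can be sketched as a forward pass through a tiny 2-input, 3-hidden-neuron, 1-output network. All the weights and biases below are invented for illustration; in practice they would be learned by training.

```python
import math

def sigmoid(z):
    """A common smooth activation function, squashing any real number into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per neuron, then sigmoid."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# Made-up parameters for a 2 -> 3 -> 1 network (normally learned by training).
hidden_w = [[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[1.2, -0.7, 0.4]]
output_b = [0.05]

x = [0.9, 0.3]
hidden = layer(x, hidden_w, hidden_b)       # hidden layer's outputs...
output = layer(hidden, output_w, output_b)  # ...become the output layer's inputs
print(output)
```

A "deep" network simply stacks more such `layer` calls between the input and the output, which is where the extra computation cost comes from.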
Now that we have discussed the main aspects of ML,DL and AI,let us move on to the applications of ML and the best ML frameworks of the present day...
Applications of ML:
Machine learning is actively being used today, perhaps in many more places than one would expect. We probably use a learning algorithm dozens of times a day without even knowing it. Applications of Machine Learning include:
->Spam Detector: Mail agents like Gmail or Hotmail do a lot of hard work for us, classifying mails and moving spam to the spam folder. This is achieved by a spam classifier running in the back end of the mail application.
->Photo Tagging Applications: Be it Facebook or any other photo tagging application, the ability to tag friends makes it even more engaging. This is all possible because of a face recognition algorithm that runs behind the application.
->Web Search Engines: One of the reasons search engines like Google and Bing work so well is that the system has learnt how to rank pages through a complex learning algorithm.
-> Virtual Personal Assistants:
Siri, Alexa, Google Now are some of the popular examples of virtual personal assistants.
They assist in finding information, when asked over voice. All you need to do is activate them and ask.
For an answer, your personal assistant looks up the information, recalls your related queries, or sends a command to other resources (like phone apps) to collect info.
Machine learning is an important part of these personal assistants as they collect and refine the information on the basis of your previous involvement with them.
-> Predictions while Commuting:
i)Traffic Predictions: We all have been using GPS navigation services. While we do that, our current locations and velocities are being saved at a central server for managing traffic. This data is then used to build a map of current traffic. Machine learning in such scenarios helps to estimate the regions where congestion can be found on the basis of daily experiences.
ii)Online Transportation Networks: When booking a cab, the app estimates the price of the ride. Jeff Schneider, the engineering lead at Uber ATC, revealed in an interview that they use ML to define price-surge hours by predicting rider demand. Throughout the entire service cycle, ML plays a major role.
-> Social Media Services:
From personalizing your news feed to better ad targeting, social media platforms are utilizing machine learning for their own and their users' benefit. Ex: face recognition and 'people you may know'.
->Online Customer Support:
A number of websites nowadays offer the option to chat with a customer support representative while you are navigating the site.
In most cases, you talk to a chatbot. These bots extract information from the website and present it to customers. Meanwhile, the chatbots advance with time.
They come to understand user queries better and serve better answers, which is possible thanks to their machine learning algorithms.
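Coming back to the spam-detector application above: one classic way to build such a classifier is naive Bayes over word counts. Here is a minimal pure-Python sketch with a tiny hand-made training set; real mail providers use far larger data sets and more sophisticated models.

```python
import math
from collections import Counter

# Tiny hand-made training set (hypothetical emails).
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "project status update", "lunch at noon"]

def word_counts(docs):
    return Counter(word for doc in docs for word in doc.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(text, counts, n_docs, total_docs=6):
    """Naive Bayes log-probability with add-one (Laplace) smoothing."""
    total_words = sum(counts.values())
    score = math.log(n_docs / total_docs)  # class prior
    for word in text.split():
        score += math.log((counts[word] + 1) / (total_words + len(vocab)))
    return score

def classify(text):
    spam_s = log_score(text, spam_counts, 3)
    ham_s = log_score(text, ham_counts, 3)
    return "spam" if spam_s > ham_s else "not-spam"

print(classify("free money"))      # leans towards the spam vocabulary
print(classify("status meeting"))  # leans towards the normal-mail vocabulary
```

The classifier simply asks which training class makes the message's words most probable, which is exactly the 'spam'/'not-spam' supervised-learning setup described earlier.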
The best ML frameworks:
The best Deep Learning frameworks are interfaces or tools that help developers construct Deep Learning models with ease. Additionally, they eliminate the need to understand the details of the underlying ML/DL algorithms.
Your choice of machine learning frameworks depends entirely on the algorithm requirements, your expertise, and the client’s budget.
Some of them are-
1)TensorFlow
2)Google Cloud ML Engine
3)Apache Mahout
4)Shogun
5)Sci-Kit Learn
6)PyTorch (Torch)
7)H2O
8)Microsoft Cognitive Toolkit (CNTK)
9)Apache MXNet
10)Apple’s Core ML
Ethical Dilemmas:
As Microsoft CEO Satya Nadella put it, “We need to take accountability for the AI we create...”
While the core objective of AI is to augment humans, there is a lot of discussion around ethics of AI as well.
Some include-
1)AI and unemployment: There is a huge debate surrounding this issue. In the beginning, AI will eliminate some human tasks, but if we can find ways to adapt and re-skill ourselves, it has the potential to create more jobs than it eliminates.
2)Biased Robots?: We should ensure our training sets, algorithms and parameters are not 'biased' in ways that undermine the goals of critical applications.
3)Security and Privacy: Better regulations and policies can help address this concern.
4)Inequality of AI capabilities: AI education must be normalized and democratized to reduce the digital divide amongst countries. This can be done through open-source libraries and free access to online information.
5)Human interactions & cognitive skills: The more we leverage robots, the more human interactions will go down and our dependence on AI will increase. But we are still far, far away from a time when robots could overtake humans!
I would like to conclude my blog post by saying that the scope of AI is infinite. We can also expect to see a lot more implementation of AI in the near future, with numerous chances for further advancement. The best we can do is learn more about it, apply it and try to create better solutions to help mankind!!
Thank you for reading my blog! I hope it helped to give you an insight into the vast and wonderful world of AI!!
(Name: Sri Varsha, E-mail: srivarsha.11@gmail.com)