A few months back at Amdocs, I was working on a project in the telecommunications industry, where I was first introduced to AppDynamics. Since then, I have been obsessed with this tool’s power. As a beginner in the world of large-scale projects, I was surprised that one tool could monitor application infrastructure, give you code-level visibility, and provide deep analysis, optimization, and bottleneck prediction in complex information systems.
In this blog, I will explain AppDynamics in brief for all the Performance Engineers out there. Let’s get started.
AppDynamics is a tool used to measure and monitor application performance intelligently, providing simple but effective data about the application’s behavior. We know large-scale applications are complex to understand, and we may need help from external teams when an issue occurs. It is also very difficult to understand how an application behaves in a cloud environment, where it takes input from different servers. All of these problems can be taken care of using AppDynamics.
The AppDynamics Controller is the brain where all data is processed. AppDynamics agents collect data and send it to the Controller, which stores the data and helps to analyze it. Agents capture performance activity across application code, servers, and network nodes. The Controller updates in real time, even in very complex applications with hundreds of agents, and helps you monitor, troubleshoot, and analyze your entire application, from the backend infrastructure to the client.
Agent:
AppDynamics agents are responsible for collecting data from each server and sending it to the Controller. Once deployed, agents immediately monitor every line of code. Unique tags are assigned to every method call and every request header, which allows AppDynamics to trace every transaction from start to finish, even in modern, distributed applications.
AppDynamics Agents: There are three types of AppDynamics agents.
App Agent: This agent collects application-related data such as server load, response time, and errors. There are different types of app agents (Java agent, .NET agent, PHP agent, etc.) depending on the application technology.
Machine Agent: Machine agents collect infrastructure data such as CPU utilization, memory utilization, and disk usage.
DB Agent: Database agents collect performance data about your database instances and database servers, tracking your queries, stored procedures, and more.
Merits of AppDynamics
The entire infrastructure and structure of the application can be viewed with the help of AppDynamics. This makes it easier to check how the application is working and to maintain it, all in less time and with fewer resources involved.
A good thing about the tool is its integration with the LoadRunner tool for performance testing. We can get the exact method and class names from the code, which helps in finding bottlenecks.
AppDynamics supports many platforms, and its setup is relatively easy. It also supports customized monitoring and allows you to build what you need.
Receive alerts based on custom or built-in health rules, including rules against dynamic performance baselines that alert you to issues in the context of business transactions.
Analyze your applications at the code execution level using snapshots.
AppDynamics supports all major technologies like Java, .NET, PHP, Node.js, etc.
Demerits of AppDynamics
I think the dashboard can be improved, as it lacks features like searching for data within the dashboard.
The price of AppDynamics is high. It has many valuable features, but it is not for small businesses.
It needs huge storage if you keep the transaction history, and most of the analytics features rely on that history.
It has two versions: the full version and Lite. The Lite version only shows data from the last 2 hours, which is certainly not enough for monitoring.
Conclusion
Despite the high cost and a few limitations, AppDynamics is worth using. It is one of the best tools in the market, with a large community and plenty of jobs available.
Statistical analysis and data analytics are getting more popular day by day. The R programming language has gained a lot of popularity over the years because of its simple, easy-to-use approach, second only to Python. R was created and developed by Ross Ihaka and Robert Gentleman. The name “R” was partly derived from the first letters of the authors’ names and also as a play on the name of the S programming language. Unlike Python, R is very domain-specific.
Why is R so important?
R is taught in universities all over the world and used in many companies for vital business operations. In various data science and statistics applications, we have to deal with many types of data. R can be used to perform tasks such as data cleaning, feature selection, feature engineering, and so on. It also connects easily with big data frameworks like Spark and Hadoop. R provides excellent features for data exploration and investigation. Apart from that, R gives you the ability to build attractive web applications: using the R Shiny package, you can develop interactive dashboards straight from the console of your R IDE.
Differences between R and Python:
The main distinction between the two languages is in their approach to data science. Both open source programming languages are supported by large communities, continuously extending their libraries and tools. But while R is mainly used for statistical analysis, Python provides a more general approach to data wrangling.
R provides various packages for the graphical interpretation of data. Python also has libraries for visualization, but they are a bit more complex than R’s. R has well-known plotting libraries that help in building publication-quality graphs.
To use R, developers and analysts typically start with RStudio. In the case of Python, Anaconda is commonly used.
In the end, whether to use R or Python is decided by your needs, the demands of the project you are going to work on, and the problem you are trying to solve.
Getting started with R is very simple. One needs to have basic math, statistics and programming knowledge.
Some Advantages of R:
R is platform independent. Basically, it can run without any issues on Windows, Mac or Linux.
R has powerful tools for statistics. It has a consistent and integrated set of tools which can be used for various tasks. The notation of vectors in R is a very powerful feature.
As I said earlier, R is open source, so we don’t need to pay money or buy a license to use it. Anyone can use R, without any limitations.
The R community is constantly growing. Many new packages are getting created in R.
Let’s understand some basic concepts of R language
Before proceeding with this section, you should have a basic understanding of coding. A basic understanding of any of the programming languages will help you in understanding the R programming concepts.
R Language data types
1. Data Types:
In all programming languages, we store data in various variables, and each variable has a data type. Some space is reserved in memory for storing the data. Let us have a look at the various data types in R.
Logical Data Type
We all know that logical data type is basically either true or false. Let us implement it using code.
var_1<- FALSE
cat(var_1,"\n")
cat("The data type is: ",class(var_1),"\n\n")
Output
Numeric Data Type
Float/decimal values in R are known as the numeric data type. It is the default computational type for data.
var_2<- 234.56
cat(var_2,"\n")
cat("The data type is: ",class(var_2),"\n\n")
Output
Integer Data Type
Non-decimal (whole) numbers are stored as integers. The only difference between the implementation of the numeric and integer data types is the “L” suffix, which tells R to store the value as an integer. The integer data type is available in all programming languages, and the same is the case for R.
var_3<- 45L
cat(var_3,"\n")
cat("The data type is: ",class(var_3),"\n\n")
Output
Complex Data Type
Complex data types are also available in R. Implementation is very easy and simple. Let us have a look.
var_4<- 34+ 3i
cat(var_4,"\n")
cat("The data type is: ",class(var_4),"\n\n")
Output
Character Data Type
It is used to store strings and characters in R. Use and implementation is very simple and easy.
var_5<- "R Programming"
cat(var_5,"\n")
cat("The data type is: ",class(var_5),"\n\n")
Output
2. Variables:
A variable is nothing but a memory location, which is used to store values in a program. Variables in R language can be used to store numbers (real and complex), words, matrices, and even tables.
# Variable example using equal operator.
variable.1 = 6
# Variable example using leftward operator.
variable.2 <- "Capable Machine"
# Variable example using rightward operator.
13L -> variable.3
print(variable.1)
cat ("variable.1 is ", variable.1 ,"\n")
cat ("variable.2 is ", variable.2 ,"\n")
cat ("variable.3 is ", variable.3 ,"\n")
Output
Decision making:
Decision making is one of the most familiar concepts in coding, i.e. the if-else statement. A decision-making statement executes a block of code if a specified condition is true; if the condition is false, another block of code is executed.
# Create the variable quantity
quantity <- 10000
# Set up the if-else statement
if (quantity > 7500) {
  print('Popular Blog on CapableMachine')
} else {
  print('Not Popular')
}
Output
3. Loops:
A loop statement allows us to execute a statement or group of statements multiple times. There are three loops in the R programming language.
For Loop
This Loop is used for repeating a specific section of code a known number of times.
for (value in sequence)
{
  # statements inside the body of the loop
}
Repeat Loop
Like other loops, the repeat loop is used to iterate over a section of code. But it is a special kind of loop, with no built-in condition to exit. To exit, we include a break statement with a user-defined condition.
repeat {
commands
if(condition) {
break
}
}
While Loop
This loop is used to repeat a specific section of code an unknown number of times, for as long as a condition holds.
while (test_expression)
{
statement
}
4. Functions
A function is a section of code that performs a specific task. A function can be called and reused multiple times in the code. You can pass information to a function, and it can send information back. The R programming language has built-in functions that you can access, but you can also create your own.
An R function is created by using the keyword function. The syntax of an R function is as follows:
func_name <- function(arg_1, arg_2, ...)
{
Function body
}
Function Components
The different parts of a function are −
Function Name − The name of the function, stored in the R environment as an object with this name.
Arguments − The values that are passed to a function when it is called.
Function Body − The function body contains a logic part that defines what the function does.
Return Value − The return value of a function is the last expression in the function body to be evaluated.
This was a brief overview of the R programming language. If you want to learn R in detail, I would suggest taking tutorials on YouTube or other online platforms.
Conclusion –
The R community keeps growing and has been part of the rapid expansion of the data science field. Within the next several years, we can expect many new machine learning start-ups aiming for robust connectivity with R and other open-source analytical and Big Data tools. This is an exciting area, and hopefully the coming years will shape and strengthen the position of the R language in this field. For more information on R, please visit: https://www.r-project.org/
In the world of technology, every small business is trying to grow through the internet, which puts web applications in high demand. As a developer, I can say the first thing that comes to mind is which JavaScript library we should use. In this blog, I am going to discuss React.js.
I learned React during my project days. Believe me, it is a very interesting topic to learn and work with. I have developed many React applications, which turned out to be surprisingly fast and efficient, and that made me write this blog.
So, let’s dig into it –
React is a JavaScript library for building fast and interactive user interfaces. React was created by Jordan Walke, a software engineer at Facebook, who released an early prototype of React called “FaxJS”. He was influenced by XHP, an HTML component library for PHP. It was first deployed on Facebook’s News Feed in 2011 and later on Instagram in 2012. It was open-sourced at JSConf US in May 2013.
Fun Fact – If you want to expand your job opportunities as a developer, you should have React on your resume.
So, let’s understand what React is and why it is better than other libraries on the market.
The heart of all React applications is components. A component is usually a piece of the user interface.
How does React work?
A React web application is built as a bunch of independent, isolated, and reusable components, composed together to build complex interfaces easily. Every React application has at least one component, referred to as the root component, which contains the other child components. Basically, every React application is a tree of components.
Let’s understand this by an example –
Let’s imagine you are building an application like Facebook. Keep in mind that every web page has a navigation menu, a footer, and some information. So, let’s split the web page into components like navbar, profile, intro, follow, and feed; we can then build them separately and combine them together to create a single page, as sketched below.
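Here is a minimal sketch of that idea in React (all component names are made up for illustration): small child components, each owning one piece of the page, composed by a single root component.

import React from 'react';

// Hypothetical child components, each responsible for one piece of the page.
const Navbar = () => <nav>Navbar</nav>;
const Profile = () => <aside>Profile</aside>;
const Intro = () => <section>Intro</section>;
const Follow = () => <section>Follow</section>;
const Feed = () => <main>Feed</main>;

// The root component composes the children into a single page.
function App() {
  return (
    <div>
      <Navbar />
      <Profile />
      <Intro />
      <Follow />
      <Feed />
    </div>
  );
}

export default App;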
Before getting into the Components, let’s understand state and props –
State
State is at the heart of React: a built-in object used to contain data or information about a component. A component’s state can change over time; whenever it changes, the component re-renders. The change in state can happen as a response to user actions or system-generated events, and these changes determine the behavior of the component.
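To make this concrete, here is a minimal sketch (the component and handler names are made up for illustration) of a class-based counter: every call to setState updates the state object, and React re-renders the component with the new value.

import React from 'react';

class Counter extends React.Component {
  // The built-in state object holding this component's data.
  state = { count: 0 };

  handleClick = () => {
    // setState updates the state and triggers a re-render.
    this.setState({ count: this.state.count + 1 });
  };

  render() {
    // render() runs again after every state change.
    return (
      <button onClick={this.handleClick}>
        Clicked {this.state.count} times
      </button>
    );
  }
}

export default Counter;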
Props
Props are how information is passed into a component from the outside. We can pass props to any component just as we declare attributes for any HTML tag. Have a look at the code snippet below:
<DemoComponent name="capablemachine" />
How to define components?
There are two ways to define components: class-based components and functional components.
Class-Based Components –
A class-based component extends the React Component class and requires us to implement the render method, which renders the component’s content to the page. These components are simple classes (made up of multiple functions that add functionality to the application). All class-based components are child classes of the Component class of ReactJS.
class Welcome extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}</h1>;
  }
}
Functional Components
Functional components are plain JavaScript functions that accept props as their argument and return some JSX. JSX (JavaScript XML) is a syntax that allows us to write HTML-like markup and embed JavaScript inside it. These components have traditionally been called stateless components, as they are not required to manage state. Below is the syntax of a functional component.
function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}
There is one more concept, called the virtual DOM, which makes React so fast.
What is DOM?
DOM stands for Document Object Model; it defines the structure of a document. The browser converts your web page or web document into a DOM, which is an object representation of the document. So what makes React so fast? React creates a lightweight virtual representation of the real DOM, called the virtual DOM, and keeps two copies of it.
Whenever anything updates in your application, the change is rendered into one copy of the virtual DOM. React then compares it with the other copy, and if it detects a difference, it updates only that part of the real DOM. React never reads from the real DOM; it interacts with the virtual DOM and, when it sees a change, updates the real DOM very efficiently. This is what makes React fast, and that is what makes it so popular.
Model View Controller
MVC has played the biggest role in making complex web applications much easier to build. MVC stands for Model View Controller; the goal of this pattern is to divide a large application into specific sections that each have their own purpose. To understand each section, let’s take a look at the flow shown below.
[Diagram: MVC request flow – capablemachine.com]
First, the user requests a specific page from the server, and the server passes that request to a specific controller. The controller is responsible for handling the request from the client and tells the rest of the application what to do with it.
The controller acts as a middleman between two sections: the model and the view. The first thing that happens when the controller receives a request is that it queries the model based on the request. The model is responsible for handling all the data logic of the request: it interacts with the database and handles validation and the saving, updating, and deleting of data.
The controller should never interact with the data logic directly; it always uses the model for that interaction. This means the controller never has to worry about how to handle the data: its only job is to tell the model what to do and respond to what the model returns. In turn, the model doesn’t have to worry about failure or success; that is all handled by the controller.
After the model sends its response back to the controller, the controller interacts with the view to render the data for the user. The view is only concerned with how to present the information the controller sends: it dynamically renders HTML based on that data and sends the final presentation back to the controller, which sends it as the response back to the user. The important thing to note about this design is that the model and the view never interact with each other directly.
Any interaction between the model and the view is done through the controller. Having the controller between the two means the data presentation and the data logic are completely separate, which makes complex applications much easier to build.
Let’s take a real-world example of how MVC handles a request and response.
[Diagram: MVC example handling a request for cats – capablemachine.com]
Imagine a user sends a request for cats to the server. The server passes this request to the controller that handles cats, and the controller asks the model to return a list of all cats. Next, the model queries the database, as shown in the diagram, and returns the list of cats to the controller. Only if the response from the model was successful will the controller ask the view to return a presentation of that list. The view takes the list of cats from the controller and renders it into an HTML page that can be used by the browser. The controller then takes that presentation and returns it to the user. If the model returned an error instead of cats, the controller handles that error and asks the view to return an error-message presentation, which is returned to the user instead of the list of cats. As you can see from this example, the model handles all the data, the view handles all the presentation, and the controller just tells the model and view what to do.
The model handles all the data, the view handles all the presentation, and the controller just tells the model and view what to do.
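To make this flow concrete, here is a hedged sketch of the cats example in Node.js using the Express framework. The route handler plays the controller, a stub catModel stands in for the model, and a small template function stands in for the view; everything except the Express API itself is an assumption made for illustration.

const express = require('express');
const app = express();

// "Model": in a real app this would query a database; here it is a stub.
const catModel = {
  findAll: async () => [{ name: 'Tom' }, { name: 'Whiskers' }],
};

// "View": turns data into an HTML presentation.
const catView = (cats) =>
  `<ul>${cats.map((cat) => `<li>${cat.name}</li>`).join('')}</ul>`;

// "Controller": receives the request, asks the model, hands the data to the view.
app.get('/cats', async (req, res) => {
  try {
    const cats = await catModel.findAll(); // controller asks the model
    res.send(catView(cats));               // view's output goes back to the user
  } catch (err) {
    res.status(500).send('<p>Could not load cats</p>'); // error presentation
  }
});

app.listen(3000, () => console.log('Listening on port 3000'));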
My suggestion –
If you want to become a front-end developer, start learning React, as it is used by many MNCs like Facebook, Netflix, Skype, Instagram, Airbnb, Tesla, and many more. It also has a large community, which gives you access to lots of resources for free.
If you have any queries related to React and front end development, comment below.
These days most people are trying to learn new technologies, and very few want to stick to the foundations. Yes! I am talking about Data Structures and Algorithms (DSA), the most basic skill for learning any software technology. It is a misconception that coding is all about learning new languages, new libraries, frameworks, and even new tools. I suggest you do not fall into this trap. Coding is about building efficient logic, which can be achieved with the help of data structures and algorithms.
“Success lies in a masterful consistency around the fundamentals.”
Robin Sharma
I can still remember when it seemed that everything in my life was falling apart and that I could not do any good with the career I chose. I would clear the coding rounds of every company I was eligible for, but would fail miserably in the interviews that followed. This went on for a while and I started questioning myself. I introspected and asked myself: am I really bad at what I’m trying to do, or do I lack in some area I don’t even know about…
After further introspection and talking with seniors and colleagues, I finally got my answer. It was simple: I was barely acquainted with data structures. That, in turn, resulted in my lack of confidence in interviews. I soon corrected my mistake, learnt about data structures as best I could, and an interview no longer seemed a daunting task.
So, here are some tips to boost your Data Structure and Algorithm (DSA) skills –
1. Spend time on theory
The first mistake people make is not focusing on theory: they know the steps of the code but don’t know what is happening and how it works in the background.
As a beginner, you must not be afraid of spending too much time reading. What you must keep in mind is to understand every line. Because “DSA is not a language, it is a concept”.
The best approach to improve your theory knowledge is to take paper and try to draw a flow chart while reading. This will help you remember both the steps and the concept.
2. Divide & Conquer
A lot of you cannot write data-structure code simply because you get nervous just seeing the size of the problem statement.
A divide and conquer rule is a strategy of solving a large problem by
breaking the problem into smaller sub-problems
solving the sub-problems, and
combining them to get the desired output.
For example, you have to write code for a doctor’s application where, among all the patients, only the first 4 are shown, and the doctor presses a button to see the next 4 patients. You probably won’t do it, because it sounds complex. Now, if I say write a program which returns the first 4 numbers, then the next 4 numbers, and so on, you might do it, because it looks simpler. This is what I am trying to say: if you just divide your problem into subproblems and try to solve those, you will be able to write the code. A tiny sketch of that smaller subproblem is shown below.
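Here is that smaller subproblem in plain JavaScript: a hypothetical chunk helper that splits a list of patients into pages of 4.

// Split an array into pages of a fixed size (4 patients per page here).
function chunk(items, size) {
  const pages = [];
  for (let i = 0; i < items.length; i += size) {
    pages.push(items.slice(i, i + size));
  }
  return pages;
}

const patients = ['P1', 'P2', 'P3', 'P4', 'P5', 'P6', 'P7', 'P8', 'P9'];
const pages = chunk(patients, 4);

console.log(pages[0]); // first 4 patients: ['P1', 'P2', 'P3', 'P4']
console.log(pages[1]); // next 4 patients:  ['P5', 'P6', 'P7', 'P8']
console.log(pages[2]); // remaining:        ['P9']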
3. Choose one
Get into one boat and stay in it. Don’t step in multiple boats at once. It will only result in a fall. Explore a few sources, choose one and stick to it.
You can consider taking one of the innumerable courses available on the internet. I will recommend you to learn from YouTube.
Here is the list of best YouTube channel to learn DSA –
Mycodeschool
Abdul Bari
Harvard University
Neso Academy
Apni Kaksha
MIT OpenCourseWare
4. Time Management
Try to devote at least two hours daily, divided into two slots.
Use the first 1 hour 15 minutes to learn a new DSA concept every day. But remember, the source should be the same every day; do not try to learn from different sources, otherwise you may get confused.
Use the next 45 minutes to solve problems based on the concept learned earlier that day. After submitting your code, look at others’ solutions to learn how to improve your code and optimize time and memory complexity where possible.
Note – Spend more time on reading and understanding to get clarity.
5. Consistency
The last step is very important: consistency. Consistency is what increases your capacity. It’s not just about doing it; it’s about how regularly you do it.
That’s why, even if you are good at DSA, it is necessary to practice it daily. The best way to keep in touch with DSA is competitive programming. This way you will keep getting better at it, which will help you crack interviews.
Last month my team and I developed an e-commerce website for electronics shopping, and the first challenge we faced was: which technology should we choose, Spring Boot or Node.js? It was really tough to choose one because both have their own pros and cons.
First let’s learn some fundamentals of both the technologies.
Spring Boot
Spring Boot is a platform that makes it easy to create stand-alone, production-grade Spring based Applications that you can “just run”.
Key Features :
It comes with a lot of defaults which help you create Spring applications faster.
Comes with embedded HTTP servers like Jetty and Tomcat to test web applications.
Helps to avoid all the manual work of writing boilerplate code, annotations, and complex XML configurations.
It increases productivity, as you can create Spring applications quickly.
Allows for easily connecting with databases and queue services like Oracle, PostgreSQL, MySQL, and MongoDB.
Node.js
Node.js is an open-source, cross-platform, back-end JavaScript runtime environment that runs on the V8 engine and executes JavaScript code outside a web browser.
Key Features :
Helps to build fast real-time, high-traffic apps.
It makes it possible to code in JavaScript for both the client and server side.
Node.js increases the efficiency of the development process as it fills the gap between frontend and backend developers.
The ever-growing NPM (Node Package Manager) gives developers multiple tools and modules to use, thus further boosting their productivity.
Easy knowledge sharing within a team.
A huge number of free tools.
Which is better?
This was the question my team had to tackle. It was tough 😦
Let’s end the suspense. We chose Node.js over Spring Boot.
We came to realize that, with the number of web apps increasing by the day, complexity has also increased, and that if we had to build a high-performance website quickly and without complications, we had to go with Node.js.
Let’s understand this in detail.
There are several points I would like to discuss so you can understand why we chose Node.js.
1. Input / Output Model
The I/O model describes how a program handles input and output. It is of two types: blocking and non-blocking. Blocking means a thread can’t do anything else until the entire I/O operation has completed, while non-blocking means a thread does not wait for an operation to finish; it carries on and handles the result when it is ready. Node.js uses a non-blocking model, which helps with memory utilization and allows it to handle several requests at the same time.
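As a small illustration, here is a hedged sketch using Node’s built-in fs module (the file name is just a placeholder): the read is scheduled, the program keeps doing other work, and the callback runs when the data arrives.

const fs = require('fs');

// Non-blocking read: Node schedules the I/O and immediately moves on.
fs.readFile('orders.json', 'utf8', (err, data) => {
  if (err) {
    console.error('Read failed:', err.message);
    return;
  }
  console.log('File contents arrived:', data.length, 'characters');
});

// This line runs before the file contents are available.
console.log('Doing other work while the file is being read...');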
2. Concurrency
For enterprise web applications, high concurrency is required. Since Java Spring is multi-threaded, it requires a thread for each request, which becomes expensive when many threads are needed to achieve full concurrency. Node.js, on the other hand, is single-threaded and event-driven: it keeps the CPU busy under full load and keeps serving requests without needing a thread per request.
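As a rough sketch of that single-threaded, event-driven style (the port and the simulated delay are arbitrary), here is a tiny HTTP server where each request schedules non-blocking work instead of occupying a thread, so many requests can be in flight at once.

const http = require('http');

// One thread handles all requests; slow work is scheduled asynchronously.
const server = http.createServer((req, res) => {
  setTimeout(() => {          // stands in for a slow I/O call (DB, API, disk)
    res.end('done\n');
  }, 100);
});

server.listen(3000, () => console.log('Listening on port 3000'));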
3. Popularity
Did you know? The most popular company using Node.js is Netflix. Why does Netflix use it? Netflix is so popular that over a million users watch movies and series per hour, which means a lot of load on the servers. So they needed a technology that is simple and flexible to work with, so that if the system somehow fails, engineers can handle errors fast.
Other companies using Node.js for similar reasons include Uber, LinkedIn, Medium, PayPal, NASA, and eBay.
4. Community
Building on the last point, we can see that JavaScript has a very large community on Stack Overflow and other related websites, which makes it easier to tackle complications.
Keeping all these points in mind, we chose Node.js. But that does not mean I dislike Spring Boot, or that Spring Boot cannot be the better choice. It totally depends on the software requirements and your team’s mindset. Let me know your thoughts in the comment section below.
Any web development project is incomplete without a database. A database is a collection of information that is organized so that it can be easily accessed, managed, and updated. Today, there are countless databases available in the market, of which MySQL is the most popular open-source database software, backed by Oracle. MySQL is not perfect, but it is flexible enough to work well in very demanding environments.
So, What is MySQL?
MySQL is an open-source Relational Database Management System based on the SQL language, created by the Swedish company MySQL AB, founded by David Axmark, Allan Larsson, and Michael “Monty” Widenius. The first version of MySQL appeared on 23 May 1995. A relational database is a set of tables (datasets with rows and columns) that contain information relating to other tables in the database. MySQL is written in C and C++. Its SQL parser is written in yacc, but it uses a home-brewed lexical analyzer.
How Does MySQL Work?
MySQL has a client-server architecture and can be used in any networked environment. A client makes a request to the server over the network; the server processes the request and returns the results to the client. The client does not need to be on the same system as the server: it can send requests to a remote server over an internet connection, as long as the server is running at that time.
The MySQL server is multi-threaded and makes use of all available CPU cores. It is also multi-user, scalable, and robustly designed for mission-critical, heavy-load production systems. It provides both transactional and non-transactional storage engines and supports the addition of other storage engines.
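To see the client-server model in action, here is a minimal sketch of a client connecting from Node.js, assuming the third-party mysql2 package; the host, credentials, and database name are placeholders.

const mysql = require('mysql2');

// The client opens a network connection to the MySQL server.
const connection = mysql.createConnection({
  host: 'localhost',
  user: 'shop_user',
  password: 'secret',
  database: 'shop',
});

// The client sends a request (a query); the server processes it
// and returns the results over the same connection.
connection.query('SELECT 1 + 1 AS solution', (err, results) => {
  if (err) {
    console.error('Query failed:', err.message);
  } else {
    console.log('Server replied:', results[0].solution); // 2
  }
  connection.end(); // close the client-server connection
});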
Key Features
MySQL is a very fast, reliable, and flexible database management system. MySQL is open source, i.e. anyone can use it for free, and anyone can modify the code. It supports all major platforms like Windows, Linux, Solaris, macOS, and FreeBSD.
The MySQL developer community is very active, which is why MySQL gets frequent software updates. The current stable version of MySQL is 8.0, and its developers claim it provides up to 2 times faster performance than the previous version. Isn’t that great?
It is much harder to deal with bad data once it is stored in the database than it is to keep bad data out in the first place. MySQL has long had ways to ensure that you only put the right data type, within a specified range, into a column. But for MySQL’s native JSON data type, there was until recently no way to make sure that certain keys/values were present, of the right data type, and in a proper range.
To connect and execute MySQL statements from another language or environment, standards-based MySQL connectors and APIs are available. MySQL provides APIs for C, C++, Eiffel, Java, Perl, PHP, and Python. In addition, OLE DB and ODBC providers exist for MySQL data connections in the Microsoft environment.
There is also a MySQL .NET native provider, which allows native MySQL-to-.NET access without the need for OLE DB.
MySQL supports transaction rollback, commit, and crash recovery. Most importantly, it has very few memory-leak problems, which makes it more efficient, and that is one reason people prefer it.
MySQL gives developers high productivity through triggers and stored procedures.
MySQL provides a unified visual graphical user interface tool named “MySQL Workbench” for database architects, developers, and database administrators.
MySQL version 8.0 provides support for dual passwords: one is the current password, and another is a secondary password, which allows us to transition to the new password.
GUI
MySQL is well known for its ease of use, but an interface is definitely needed. The open nature of MySQL has spawned quite a number of third-party front-ends in addition to the official one.
MySQL Workbench – the official integrated environment for MySQL. Developed by MySQL AB, it allows users to manage MySQL databases and design database schemas using visual graphical tools.
DBEdit – a cross-platform database editor supporting Oracle, DB2, MySQL, and any database that provides a JDBC driver. DBEdit is free, open source, and hosted on SourceForge under the GNU licence.
Final Thoughts
The points we have discussed above show that MySQL has many merits over other available databases. I would recommend everyone start their software journey by learning the MySQL database.
According to a recent report by Transparency Market Research (TMR), the edge analytics market is expected to show strong growth, with a notable CAGR (Compound Annual Growth Rate) of 27.6% within the forecast period.
What is Edge Analytics?
Edge analytics is an advanced data analysis method that gives users real-time processing and extraction of the unstructured data captured and stored on devices at the edge of the network. Edge analytics performs automatic analytical computation on the generated data in real time, without sending the data back to a centralized data store or server.
In this technique, data is collected, processed, and analyzed at the sensor, device, or touchpoint itself.
Benefits of Edge Analytics:
Reduce the latency of data analytics:
If we are performing predictive maintenance, it is beneficial to analyze the data at that particular sensor and act immediately, for example shutting down the faulty equipment, instead of waiting for the data to reach a central server.
Scalability of data analytics:
As sensors and devices grow in number, the data they collect also grows exponentially, so edge analytics enables organizations to scale their processing and analytics capabilities by decentralizing them to the sites where the data is actually collected.
Edge analytics helps get around the problem of low bandwidth environments:
Edge analytics alleviates this problem by delivering analytics capabilities in these remote locations.
Edge analytics will probably reduce overall expenses by minimizing bandwidth, scaling of the operations, and reducing the latency of critical decisions.
Increased security due to decentralization:
Decentralization gives more control over the data being transmitted, and it is harder to bring down an entire network of distributed devices with a single DDoS attack than it is to bring down a centralized server.
Use Cases of Edge Analytics:
Retail customer behavior analysis: Retailers can leverage data from a range of sensors, including parking lot sensors, shopping cart tags, and store cameras. By applying analytics to the data collected from these devices, retailers can offer personalized solutions for everyone with the help of behavioral targeting.
Remote monitoring and maintenance for various industries: Industries such as energy and manufacturing may require instant response when any machine fails to work or needs maintenance. Without the need for centralized data analytics, organizations can identify signs of failure faster and take action before any bottleneck can arise within the system.
Smart Surveillance: Businesses can use the benefit of real-time intruder detection edge services for their security. By using raw images from security cameras, edge analytics can detect and track any suspicious activity.
Tools for Edge Analytics:
AWS IoT Greengrass
Cisco SmartAdvisor
Dell Statistica
HPE Edgeline
IBM Watson IoT Edge Analytics
Intel IoT Developer Kit
Microsoft Azure IoT Edge
Oracle Edge Analytics (OEA)
PTC ThingWorx Analytics
Streaming Lite by SAP HANA
Challenges for Edge Analytics:
Security: Cloud environments are designed with security in mind because breaches on the cloud are quite costly for the business. However, edge security is also important because some edge devices make decisions about the real-world behavior of machines. Breaches can result in the sabotage of equipment, other costly machine errors, or at least misinformation.
Maintenance: Some edge analytics systems share only their output with the cloud, due to bandwidth or storage constraints. Businesses then have no chance to review the raw inputs that led to the analyses shared with the cloud systems. Therefore, they need to make sure that inputs are processed with the latest analytics software; relying on outdated models can lead businesses to make decisions based on wrong information.
The greatest value of a picture is when it forces us to notice what we never expected to see.
Data analytics is the method of exploring raw data sets in order to find trends and draw conclusions about the information they contain. Across the globe, companies are considering various analytics solutions to discover what will allow them to get the most out of their information. Let’s look at the two main types of data analytics methods: descriptive and predictive analytics.
Descriptive Analytics
As the name suggests, descriptive analytics takes raw data and describes that data in human-understandable terms. Descriptive analytics is the simplest form of analytics: it mainly uses simple descriptive statistics, data visualization techniques, and business-related queries to understand past data. One of its primary objectives is effective data summarization.
The most common example of descriptive analytics is business reports that simply provide a historical review of an organization’s operations, sales, financials, customers, and stakeholders.
Let’s see this with an example from day-to-day life –
Consider a well-known visualization of relationship break-ups reported on Facebook.
It showed that spikes in breakups occurred during spring break and in December before Christmas. HAHAHA!!!!! There could be many reasons for the increase in breakups during December.
Many believe that since December is the holiday season in many countries, couples get a lot of time to talk to each other, and probably that is where the problem starts.
However, descriptive analytics is not about why a pattern exists, but about what the pattern means for a business and how it can help a business grow.
The observable increase in breakups during December can be analyzed from the following data:
1. Data from online dating sites.
2. Data from relationship counsellors and lawyers.
3. Data on the brands of alcohol individuals drink.
4. Data from cafés.
.
.
These types of data can be combined and visualized to connect the dots.
The more in tune a business is with its historical data, the more effectively it can adapt its reporting and future strategies for data optimization.
So, as I said earlier, descriptive analytics uses visualization to identify trends in the data and connect the dots to gain insights about the associated business. In addition to visualization, descriptive analytics uses descriptive statistics and queries to gain insights from the data.
Predictive Analytics
In the analytics capability maturity model (ACMM), predictive analytics comes after descriptive analytics and is the most important analytics capability. It is about predicting future events such as demand for products/services, customer churn, employee attrition, loan defaults, fraudulent transactions, insurance claims, and stock market fluctuations.
While descriptive analytics is used for finding what has happened in the past, predictive analytics is used for predicting what is likely to happen in the future.
Anecdotal evidence suggests that predictive analytics is the most frequently used type of analytics across several industries. The reason for this is that almost every organization would like to forecast the demand for the products that they sell, prices of the materials used by them, and so on.
Irrespective of the type of business, organizations would like to forecast the demand for their products or services and understand the causes of demand fluctuations. The use of predictive analytics can reveal relationships that were previously unknown and are not intuitive.
Let’s look at predictive analytics through some real-world examples.
Netflix – Predicts which movie a customer is likely to watch next. 75% of what customers watch on Netflix comes from product recommendations, which can be based on watch history or on user ratings.
Amazon – Uses predictive analytics to recommend products to its customers. It is reported that 35% of Amazon’s sales are achieved through its recommender system.
Moneyball – As told in “Moneyball,” the Oakland Athletics baseball team used analytics and evidence-based data to assemble a competitive team.
The examples shown above represent a tiny fraction of the predictive analytics applications used in industry.
Companies such as Procter & Gamble use analytics as a competitive strategy; every critical management decision is made using analytics. If one were to search for the reasons behind highly successful companies, one would usually find analytics being deployed as the competitive strategy.
Google also developed accurate prediction models that could predict events such as the outcome of political elections, the launch date of a product, or action(s) taken by competitors.
Did you know?
Across the globe, the most widely used predictive modeling techniques are decision trees, regression, and neural networks.
What do you need to get started using predictive analytics in your project?
The first step in using predictive analytics is to find a problem to solve. What do you want to understand and predict?
The second step is to find the appropriate data. Without data you cannot predict the future. Data selection is considered one of the most time-consuming aspects of the analysis process, so be prepared for that.
The third step is pre-processing the data. In order to get accurate results, your data must be clean and should contain a minimum of outliers. Raw data can have many irrelevant and missing parts; data cleaning handles this, including dealing with missing data, noisy data, etc.
The fourth step is model building. That means developing the models to work on your chosen data – and that’s where you get your results. These days you rarely need to build a model from scratch in machine learning; plenty of models are already available. You just have to import them and tune the parameters. A toy end-to-end sketch of these steps is shown below.
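To tie the four steps together, here is a toy sketch in plain JavaScript with made-up monthly sales figures: it checks the data, fits a simple least-squares line, and predicts the next value. A real project would use a proper library and far more data, so treat this only as an illustration of the workflow.

// Step 1: problem - predict next month's sales from the month number.
// Step 2: data - a small, hypothetical data set.
const months = [1, 2, 3, 4, 5, 6];
const sales = [110, 125, 123, 140, 152, 160];

// Step 3: pre-processing - here we only check for missing or invalid values.
const clean = months.every((m, i) => Number.isFinite(m) && Number.isFinite(sales[i]));
if (!clean) throw new Error('Data contains missing or invalid values');

// Step 4: model building - fit a least-squares line y = a + b * x.
const n = months.length;
const meanX = months.reduce((sum, x) => sum + x, 0) / n;
const meanY = sales.reduce((sum, y) => sum + y, 0) / n;
const b = months.reduce((sum, x, i) => sum + (x - meanX) * (sales[i] - meanY), 0) /
          months.reduce((sum, x) => sum + (x - meanX) ** 2, 0);
const a = meanY - b * meanX;

// Use the model: predict sales for month 7.
console.log(`Predicted sales for month 7: ${(a + b * 7).toFixed(1)}`); // 169.8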
Advancements in predictive analytics will surely pave the way for further development. I hope this article gave you a better understanding of the analytics spectrum.
Moneyball is the story of Billy Beane, general manager of the Oakland A’s baseball team, and his attempt to create a competitive baseball team on a low budget by employing data analysis to acquire new players.
Plot
In 2001, Billy Beane’s Oakland A’s lose to the Yankees in the playoffs and then lose three of their best players to other teams, because of the lack of money given by the team’s owner to manage and purchase players. The main issue is that the Oakland A’s top players get bought by rich teams, and what they are left with is only average players.
So, how can Billy form a competitive team without paying top-player salaries?
Billy Beane
One day, at a meeting with the Cleveland Indians’ front office, Billy meets Peter Brand (an analyst there), an economist from Yale University. After continuous effort, Billy convinces Peter to work for him.
Using data analytics, Peter hires the best players he can with an extremely limited budget for payroll. With approximately $41 million in salary, the Oakland A’s ultimately manage to compete with larger-market teams such as the Yankees, who spent over $125 million for the same.
So, How did he do it?
Peter Brand
Peter did not like the conventional methods used by baseball scouts and hiring authorities to evaluate and select players. He said:
Your goal shouldn’t be to buy players. Your goal should be to buy wins. In order to buy wins you need to buy runs.
Peter built a computer program to do a yearly analysis of each player and maintain a score for each one. He would then visualize player performance and discuss with Billy and the players how their performance could be improved, and which line-up of players could lead to winning matches.
Peter’s player-purchasing strategy was simple: he performed data mining on every player in baseball and identified the pattern of the market. He observed that everyone was buying home-run hitters with high batting averages.
But he thought: why not purchase players with a high on-base percentage (OBP)? OBP measures how frequently a batter reaches base, counting hits, walks, and times hit by a pitch.
After analyzing everything he said :
“Players with a high OBP will be more efficient and useful than those with low OBP, even when those with the lower percentage ultimately hit more home runs and were faster and even stronger.”
Unfortunately, this strategy failed in the first few matches and everyone blamed Billy for it. But Billy was very confident in his analyst and economist, Mr. Peter Brand. Eventually the strategy started showing results, once they took other factors into account as well.
There is an incident I would like to share:
Peter noticed a player named Chad Bradford, a relief pitcher, and observed that Chad was the most under-rated player in baseball, simply because he used to throw funny. Peter went to Billy and said this could be the best player for our team: one of the most effective relief pitchers in all of baseball. This guy’s real value would be $3 million a year, but we can get him for $237,000 (about $0.2 million).
In this way, Peter and Billy scouted every under-rated player and managed to bring them together at a very low cost.
Moneyball theory success
The Oakland A’s became the first team in American League history to win 20 consecutive games. They also captured four American League West titles and made five playoff appearances. The Red Sox tried to hire Billy for $12.5 million, but Billy Beane refused that offer and continued with the Oakland A’s.
Now, not only baseball but other sports like football and cricket use the Moneyball theory; to be precise, they use data analytics.
Billy Beane, in his 20th season at the helm of the Oakland A’s, has been the subject of a best-selling book and an Academy Award-nominated movie, and is recognized beyond the world of sports for his innovative mind.
Final Thoughts
Data science and artificial intelligence are already making a great impact across diverse industries, and sports is no exception. Data science can help in various ways: helping captains make the right decisions, predicting final scores, and enabling deeper analysis of match performances and patterns.
Data analytics was not new in baseball; data had been available since the 1960s and everyone used to do analysis. But no one ever believed in its results.
Moneyball succeeded for the Oakland A’s not only because of data analytics, but also because of Beane’s out-of-the-box thinking as a leader who understood and believed in his players, and because of the potential of Peter Brand, the economist.
So, “You must have belief in what you do and must be consistent with your work.”
The world of medicine is changing due to the implementation of Artificial Intelligence. Rapid improvements in computer science and the availability of huge amounts of data in the field of medicine allow machine learning systems to tackle increasingly complex learning tasks, often with unbelievable success. The increasing focus on AI in medicine has led some experts to suggest that someday AI may even replace doctors.
The aim of this blog is to provide you with context, to help you interpret study results, and to attempt to make sense of digital health.
If medical professionals want to get ahead of the curve, they should get familiarized with the basics of Machine Learning and have an idea of what medical problems they aim to solve.
Applied Machine Learning in Healthcare
1. Google Computers Trained to Detect Breast Cancer
Google is using the power of computer-based neural networks to detect breast cancer (which mainly occurs in women and rarely in men) by training the tool to look for cell patterns in slides of tissue.
Pathologists always face the problem of having a huge amount of data to review before reaching a final conclusion. The data consists of slides containing cells from tissue biopsies, thinly sliced and stained, and it must be scanned in search of any abnormal cells.
There can be many slides per patient, and each slide contains more than 10 gigapixels.
Even well-trained doctors make mistakes and may arrive at different conclusions. But Google introduced a well-trained neural network system to look for specific patterns in slides containing cells.
The Google team found that the system can autonomously learn what pathology looks like. The computer was educated by studying billions of images donated from Radboud University Medical Center in the Netherlands.
Fun fact – This system has achieved 89 percent accuracy, beyond the 73 percent score of a human pathologist.
2. Neural Network for Detection of Diabetic Retinopathy in Retinal Fundus Photographs
Over the last few years, the number of diabetes patients has increased exponentially. As a result, diabetic retinopathy (DR) has also become a big challenge.
According to studies, more than 30% of diabetic patients face an eye issue. Diabetic retinopathy (DR) is an eye ailment caused by damage to the blood vessels of the light-sensitive tissue at the back of the eye (the retina), and it affects eighty to eighty-five percent of patients who have had diabetes for a long time.
The retinal fundus images are commonly used for detection and analysis of diabetic retinopathy disease.
It has been found that deep neural networks help to detect retinopathic eyes by identifying abnormalities in the retina with very high accuracy.
I suggest you try this project yourself. You can get data from Kaggle, which provides a large set of high-resolution fundus images taken under a variety of imaging conditions.
3. Drug Discovery and Manufacturing
Artificial intelligence is playing a crucial role in the pharma industry. Only about one percent of candidate drugs reach the market, because many medicines are rejected due to their ineffectiveness or danger to human health, which has reduced the net worth of many pharmaceutical companies.
As a result, more than 200 start-up companies are implementing machine learning in their business models.
What are pharmaceutical startups doing with the help of machine learning?
Finding correlations and associations between different diseases, targets, and drugs, which helps them re-purpose drugs for new indications.
Processing raw images, drug data, and genomic data sets (genomics is an interdisciplinary field of biology focusing on the structure, function, evolution, and mapping of genomes), which allows researchers to integrate rapid analytics and machine learning capabilities into existing business processes to improve care and enhance discoveries.
Generating organized big data from the analysis of published scientific research papers, which helps to extract structured biological information to enhance drug discovery applications.
Analyzing and visualizing applicable results from multiple biomedical data sources. Allows researchers to gain a deeper understanding of a topic and avoid missing key information.
4. Clinical Studies
Machine learning has several potential applications in the field of clinical studies and research. Clinical studies mean doing research with the help of human volunteers, and from what I have read, clinical research costs a lot of time and money.
I want to be frank with my audience: I have very little knowledge about clinical research. But I will try to learn more about this in the near future and will update this blog post.
Until then, If you know anything about this topic, please share your views and knowledge in the comment section below.