Closer’s Data Science Approach

Data Science Figures

Data Science Projects make sense of large amounts of data with the help of artificial intelligence, machine learning, statistics, scientific methods, exploratory data analysis, and programming skills. However, the knowledge generated by Data Science Projects delivers its value not only in the hands of people who explore data all day. It is just as useful to those who don't have that role in the organisation but need fast access to what the data reveals.

More than knowing what happened (Descriptive Analytics), high-performing companies need to know what will happen (Predictive Analytics) and what actions they should take to make it happen (Prescriptive Analytics). That is why businesses from all over the world are developing Data Science Projects to succeed.

Those companies are using Advanced Analytics techniques such as Machine Learning, Optimization, Sentiment Analysis, Cluster Analysis or Predictive Modelling to collect, analyse and process data, generating insights that support better business decisions: improving operations, creating smarter products and developing personalized marketing campaigns.

At Closer, we develop algorithms and customized mathematical models based on our Data Science, Data Analytics and Data Engineering experience, enabling companies to use the data at their disposal and convert it into competitive advantage.



The world is full of data: pieces of information formatted and stored according to specific purposes. Data can be words and numbers written on paper, bits and bytes stored in a computer’s memory, or facts inside a person’s mind. Most of this data is unstructured.


Information is data that has been structured, processed, systematised, and presented with an assigned meaning. This processing improves the reliability of the data acquired, excluding uncertainty and irrelevance. Information becomes critical once data has passed through the processes of interpretation, prediction, or explanation.


Knowledge is a set of signs representing the meaning or content of thoughts that an individual justifiably believes to be true. It is the pattern that connects, and it generally provides a high level of predictability about what is described or what will happen next.


We can look at the insight step as an observation method: a different way of viewing data that leads to new perspectives or discoveries. Insights can be drawn from data or information, leading to contextualised observation about the subject of study.


Wisdom happens when understanding of fundamental principles is embodied. According to the Cambridge Dictionary, it is “the ability to use your knowledge and experience to make good decisions and judgments”. Wisdom is essentially systemic.


Closer Analytics

What are your business challenges?

Some examples that we address:

Customer acquisition
  • Should I approve this credit line?
  • Who is the best target for my campaign?
Customer usage and growth
  • Which campaign is the most promising one for which customer?
  • Which clients will buy a car and when? 
  • Whom do we approach with a differentiated offer?
Risk and profitability
  • How can we optimize our pricing model?
  • How can we optimize our scoring models further?
  • Apply PufferFish portfolio optimization
  • Who will leave us and when? What should I do to avoid it?
Streaming Analytics

Streaming analytics processes large quantities of data as they arrive. The technique runs continuous queries against event streams triggered by specific actions - think of measurable activities such as financial transactions, website clicks, equipment failures, or customer behaviour. All the data these actions originate can be turned into business value. Streaming analytics takes fast-moving live data from different sources and raises automated, real-time actions or alerts. This way, it’s possible to extract immediate insights from fast and ever-growing volumes of data.
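The continuous-query idea can be sketched in a few lines: a sliding window over incoming values that raises an alert when an aggregate crosses a threshold. This is a minimal illustration, not a real streaming engine; the event values, window size, and threshold are assumptions made for the example.

```python
# Minimal streaming-analytics sketch: a continuous query over an event
# stream, alerting when a sliding-window aggregate crosses a threshold.
from collections import deque

def monitor(events, window=3, threshold=100.0):
    """Yield an alert whenever the moving sum of the last `window`
    amounts exceeds `threshold`."""
    recent = deque(maxlen=window)  # the sliding window
    for amount in events:
        recent.append(amount)
        total = sum(recent)
        if total > threshold:
            yield f"ALERT: {total:.2f} in last {len(recent)} events"

# Illustrative live feed, e.g. payment amounts arriving in real time.
stream = [20.0, 30.0, 60.0, 5.0, 90.0]
alerts = list(monitor(stream))
```

Because the window is bounded, each event is processed in constant memory, which is what lets a query like this run continuously over an unbounded feed.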

Machine Learning

Machine learning is a technique of Artificial Intelligence (AI) that gives computers the ability to learn by themselves. In the last few years, it has become an essential skill for data scientists and data analysts – in fact, for everyone who wants to transform massive amounts of data into trends and predictions. At Closer, we’re no strangers to concepts like algorithms, regression, statistics, algebra, and probability. Machine learning models use previous experience to learn and improve, detect patterns, and predict behaviours.
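As a minimal illustration of learning from previous experience, the sketch below fits a one-variable linear model by least squares, using only the standard library. The spend and sales figures are made up for the example; real projects would use a proper library and far richer models.

```python
# Fit y = slope * x + intercept by ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n  # means of inputs and targets
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx  # slope, intercept

# "Previous experience": illustrative ad spend vs. sales figures.
spend = [1.0, 2.0, 3.0, 4.0]
sales = [2.1, 3.9, 6.1, 8.0]
slope, intercept = fit_line(spend, sales)

# Use the learned pattern to predict sales for an unseen spend level.
predicted = slope * 5.0 + intercept
```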

Big Data Modelling & Processing

Data modelling is the process of analysing, discovering, and documenting the available data resources. By creating conceptual data models, a company develops its own lexicon and the relationships between topics of interest. A data model is built from components that act as abstractions of real-world entities.

Big Data processing, on the other hand, must contend with the 5 V's of big data: velocity, volume, value, variety and veracity. Because it deals with high-volume, non-static, frequently updated data arriving in a large variety of formats, validation is essential to achieve reliable results.


The volume, variety, and speed of today’s information make it difficult to monitor without the right tools. When data originates from different sources, it can reveal inaccuracies, inconsistencies, redundancies, and other conflicts. Optimization is the key to organizing data: the optimization process uses sophisticated tools to access, cleanse and order data. Every business aspect – from production to sales, from samples to marketing – can be optimized, bringing important benefits.
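The access-cleanse-order idea can be sketched on hypothetical customer records (the `name` and `email` fields are assumptions for the example): normalise values arriving from different sources, then drop incomplete or redundant entries.

```python
# Cleanse records from mixed sources: normalise fields, drop duplicates
# and incomplete rows. Field names are illustrative.
def cleanse(records):
    seen, cleaned = set(), []
    for rec in records:
        email = rec.get("email", "").strip().lower()  # normalise the key field
        if not email or email in seen:
            continue  # skip empty or redundant entries
        seen.add(email)
        cleaned.append({"name": rec.get("name", "").strip().title(),
                        "email": email})
    return cleaned

raw = [{"name": " ana silva ", "email": "Ana@x.pt"},
       {"name": "Ana Silva", "email": "ana@x.pt "},  # duplicate after cleansing
       {"name": "Bruno", "email": ""}]               # incomplete entry
clean = cleanse(raw)
```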

Sentiment Analysis

Closer helps companies analyse people’s feelings and opinions through sentiment analysis, one of the most complex data science techniques. Sentiment analysis is a natural language processing (NLP) technique that determines whether the available data is positive, negative, or neutral.

Data science projects use specific algorithms and software to perform sentiment analysis. The context of the messages, the feedback, and the channels of communication (social media, live chat, e-mail, and others) are the variables considered to detect the emotional tone behind online conversations. Sentiment analysis systems fall into three groups:

  • Rule-based: Systems that automatically perform sentiment analysis based on a set of manually crafted rules.
  • Automatic: Systems that rely on machine learning techniques to learn from data.
  • Hybrid: Systems combining both rule-based and automatic approaches.
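A rule-based system of the first kind can be sketched with a hand-crafted lexicon. The word lists below are tiny illustrative assumptions; real systems use far richer lexicons and also handle negation, emojis, and context.

```python
# Minimal rule-based sentiment analysis: score text against small
# hand-crafted positive/negative word lists (illustrative only).
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def sentiment(text):
    words = text.lower().split()
    # One point per positive word, minus one per negative word.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A hybrid system would use scores like these as one signal alongside a machine-learned classifier.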
Cluster Analysis

Cluster analysis is a statistical method for processing unlabelled data, classed as unsupervised learning. It works by organizing items into groups based on how closely associated they are. By identifying the features shared within each cluster, it is possible to segment targets, and it also supports data cleaning and data reduction.

Clustering can also flag unwanted items, such as outliers and noise. The similarities within groups of subjects allow data scientists to measure a whole set of characteristics and outcomes. Clustering is currently used in data science projects for market research and audience segmentation.
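The grouping-by-association idea can be sketched as a one-dimensional k-means written with the standard library only. The spend values and the choice of two clusters are illustrative assumptions, not a production segmentation.

```python
# One-dimensional k-means: group unlabelled values by proximity.
def kmeans_1d(values, k=2, iters=20):
    # Simple initialisation (adequate for k <= 2): extreme values as seeds.
    centroids = [min(values), max(values)][:k]
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            # Assign each value to its nearest centroid.
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

# Illustrative customer spend: two obvious segments.
spend = [1.0, 1.2, 0.9, 10.0, 11.5, 9.8]
groups = kmeans_1d(spend, k=2)
```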

Predictive Modelling

Who doesn't want to know the future? Companies certainly do, and they spare no effort to predict future behaviour in the markets where they operate. At Closer, we use predictive modelling and analytics tools in our Data Science Projects. We look for current and historical data patterns to determine whether they are likely to emerge again. We collect data, formulate statistical models, and make predictions based on models that are reviewed whenever additional data becomes available.

Predictive modelling considers past performance to assess how markets or consumers are likely to behave in the future. It creates models that combine complex information, detect patterns, and evaluate transactions. This allows businesses to decide how to use their resources, reduce risks, and prepare for possible future events – taking advantage of the knowledge collected.
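The review-as-new-data-arrives loop can be sketched as a forecast that is updated with each observation, here an exponentially weighted moving average. The smoothing factor and demand figures are illustrative choices, not Closer’s actual models.

```python
# Update a forecast each time a new observation arrives.
def update_forecast(forecast, observation, alpha=0.5):
    # alpha controls how strongly the newest data reweights the model.
    return alpha * observation + (1 - alpha) * forecast

forecast = 100.0  # initial model of, say, monthly demand
for observed in [110.0, 120.0, 118.0]:  # new data arriving over time
    forecast = update_forecast(forecast, observed)
```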

How to build Data Science Projects

Data Science Projects are all around us – from Google Maps to Facebook friend suggestions to fake-news detection. Data scientists may know everything about analytics, machine learning models, and Artificial Intelligence (AI). But can you? For someone without specific technical skills or programming training, the field can be difficult to understand. It turns out that it takes more than an analytic mind to set up a valid model: data science projects require a strong dose of creativity, a problem-solving approach, a structured way of thinking, and good storytelling skills.

Here is how Closer outlines a data science project.

  1. Find a Perspective

A hypothesis-driven approach is a good way to start any project; otherwise, we could be facing thousands of variables to analyse. The goal is to solve a specific problem and answer a question, so we verify the available, credible data to select the variables we'll need. If they're not available, we turn to feature engineering to find new ways to derive them.

  2. Break Down Problems

Closer uses state-of-the-art tools and techniques to address any data-related question. But we also have a structured thinking approach to understand businesses. We analyse each aspect and break it down into a data problem.

  3. Data Cleaning

We take time to clean the data, including missing-value imputation, outlier treatment, and encoding of categorical features. All data science projects must include removing duplicated, outdated, and unnecessary data.
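The three steps just named can be sketched with the standard library; the column values, capping threshold, and plan names are illustrative assumptions.

```python
import statistics

# 1. Missing-value imputation: fill gaps with the median of observed values.
ages = [25, None, 31, 29, 300, None]  # raw column with gaps and an outlier
median = statistics.median(a for a in ages if a is not None)
ages = [median if a is None else a for a in ages]

# 2. Outlier treatment: cap values at a plausible maximum.
ages = [min(a, 100) for a in ages]

# 3. Categorical encoding: map category labels to integer codes.
plans = ["basic", "pro", "basic", "pro", "pro", "basic"]
codes = {c: i for i, c in enumerate(sorted(set(plans)))}
plan_ids = [codes[p] for p in plans]
```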

  4. Data Exploration

After mining the data, we explore it deeply to obtain valid insights. This helps us understand the dataset, unfold patterns and visualize the whole picture, yielding precious pieces of information.
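A first exploration pass might compute summary statistics and a frequency table, as in this standard-library sketch over an illustrative column of order sizes.

```python
import statistics
from collections import Counter

orders = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]  # illustrative order sizes

# Summary statistics give a quick shape of the distribution.
summary = {
    "count": len(orders),
    "mean": statistics.mean(orders),
    "stdev": statistics.stdev(orders),
    "min": min(orders),
    "max": max(orders),
}

# A frequency table unfolds which values recur most often.
top_values = Counter(orders).most_common(2)
```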

  5. Data Modelling

Software engineering and computer science are essential to model deployment. Even the most accurate model must be delivered to clients with neat coding, so they can understand and use it. We know that most organizations won’t have the computational power to support complex models.

  6. Benchmark Modelling

Closer uses benchmark models to evaluate the proposed methods and to compare different metamodel types. It’s important to understand the problem statement, the type of data and the variables involved.

Decision Trees

In decision analysis, a decision tree visually represents the decision-making process. It’s an accurate, easily interpretable Machine Learning model that explains information, interprets results, and predicts behaviours.

Decision Trees clearly formulate, and easily show, a hierarchical structure of the knowledge learned in Data Science Projects, and they handle both classification and regression tasks. Like artificial neural networks, decision trees imitate aspects of human reasoning and help people understand data.

This is how Closer looks at a Decision Tree model:

  • Root: The base of the decision tree.
  • Splitting: The process of dividing a node into multiple sub-nodes.
  • Decision node: When a sub-node is further split into additional sub-nodes.
  • Leaf: When a sub-node does not further split into additional sub-nodes; represents possible outcomes.
  • Pruning: The process of removing sub-nodes of a decision tree.
  • Branch: A subsection of the decision tree consisting of multiple nodes.
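The vocabulary above can be made concrete with a tiny hand-built tree: the root and decision nodes split on a feature threshold, and the leaves hold outcomes. The credit-style features and thresholds are illustrative assumptions, not a trained model.

```python
# Walk a decision tree from the root to a leaf for one sample.
def predict(node, sample):
    while "leaf" not in node:  # root and decision nodes split further
        branch = "yes" if sample[node["feature"]] > node["threshold"] else "no"
        node = node[branch]    # follow the branch chosen by the split
    return node["leaf"]        # leaves hold the possible outcomes

tree = {                                        # root node
    "feature": "income", "threshold": 50_000,
    "yes": {"leaf": "approve"},                 # leaf
    "no": {                                     # decision node: splits again
        "feature": "debt", "threshold": 10_000,
        "yes": {"leaf": "reject"},
        "no": {"leaf": "approve"},
    },
}

decision = predict(tree, {"income": 40_000, "debt": 5_000})
```

Pruning, in these terms, would mean replacing a decision node and its sub-nodes with a single leaf.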


When it comes to Data Science Projects, access to information is the key. Stay up to date and keep yourself informed about Data Science trends, best practices and much more. We hope you find these scientific insights relevant for your business.

Advanced Analytics


What is it and why does it matter to me?

Business Intelligence


Paving the road for Augmented Analytics

Big Data


Big Data is not about Big Data

Quantitative Tragedy

Artificial Intelligence

The Black Box Fallacy