Frequently Asked Questions

What is artificial intelligence?

It has many definitions, but we could describe it as an area of computer science that aims to make computers capable of carrying out tasks that previously only humans could do.

We can also define it as something that gives machines the ability to learn how to adapt and make good decisions as a person would. This technology brings machines closer to models of thought, perception, and action that make them appear intelligent.

Moreover, it is not only technology that “does” things, but it also learns how to do them.

How do we classify artificial intelligence?

Scientific literature has defined three types of artificial intelligence: narrow artificial intelligence, general artificial intelligence, and superintelligence (also called the singularity).

Narrow artificial intelligence performs a specific task, no matter how complex it may be. For example, real-time recommendation systems, adaptive systems for the automatic control of ships, virtual assistants, etc.

General artificial intelligence can carry out a wide range of human activities. Among them, inferring new knowledge or solving problems to which it has not previously been exposed.

Superintelligence is artificial intelligence that additionally acquires consciousness and surpasses humans in cognitive tasks.

What is the relationship between people and artificial intelligence?

The first relationship between people and this technology arises from the collective imagination that we have created since childhood thanks to novels, films, and videogames.

Thanks to these human creations, we all have a preconceived idea of what artificial intelligence is. Because of the necessary conflict in works of fiction, when we hear these words we imagine Skynet accompanied by Terminator robots killing off humanity.

That image is far from the current reality. It is better to think of artificial intelligence as something similar to an advanced Excel spreadsheet that helps us with specific tasks, and to remember that the combination of human and artificial intelligence solves certain problems better than either alone.

From an information-processing point of view, we humans adapt easily to analyzing unstructured information and to recognizing unusual circumstances and their consequences. Machines, on the other hand, are suited to analyzing large amounts of data and extracting patterns, and they do so with greater accuracy than humans.

In short, as has happened in all technological revolutions, the use of artificial intelligence is replacing some jobs that are susceptible to being automated and is creating others, but it obviously is not going to put an end to our way of producing, much less to our species.

How does artificial intelligence impact today's society?

Robotic process automation is already replacing people with machines, just as virtual assistants are doing in call centers.

On the other hand, there is an unmet demand for professionals who know how to create and maintain systems related to data and its treatment, such as mathematicians or engineers who know how to transfer specific knowledge from any discipline to machines.

As in any revolution, there will be low-skilled jobs that will disappear and others that will be created. At the end of the 19th century, 40% of the population of the United States worked in agriculture. A century later, it was only 2%. The question is: what should we do so that the “farmers” of the 21st century are not left out of the labor market?

Hopefully, governments will consider it and take the necessary steps. For example, in the late 19th century, the United States banned children from working and put them in school, creating the workforce of the future. Today governments like Finland are training the entire population in artificial intelligence with free courses.

How is artificial intelligence transforming drug development?

Nowadays, one of the main advances that artificial intelligence is bringing to the pharmaceutical industry is the discovery and development of new drugs through different processes:

  • Helping researchers discover drugs.
  • Developing more affordable drugs through the creation of polypharmacological profiles.
  • Finding faster ways to treat diseases.
  • Helping scientists improve outcomes in the discovery of rare diseases.

The applications of this technology are countless and invaluable, for example, to track and predict epidemiological outbreaks using all the information available from multiple sources — official data, satellite images, or information from social media.

However, artificial intelligence still has a promising future ahead of it in this industry, and much of that future lies in business processes.

Advances in natural language processing — a discipline of artificial intelligence that investigates how to get machines and people to communicate using natural languages — make it possible for machines to turn text into knowledge.

This can be applied to an enormous number of processes: from the analysis of commercial documents that must comply with legal requirements to the monitoring of pharmacovigilance, including listening to and caring for patients.

Why is now the time to implement Artificial Intelligence?

There are basically four reasons.

The first is related to the possibility of storing and accessing the necessary data to implement projects based on artificial intelligence. At this point, it is essential to stress that it is more important to identify what information is necessary to solve a specific problem than the amount of data stored, which implies having a clear strategy with the data.

The second reason is the easy access to computing resources that allow the training of practically any model. The third is, precisely, the access to pre-trained models, such as the Transformers in natural language processing, which allow their use for particular purposes within each company.

Finally, nowadays, artificial intelligence is built using different components, which facilitates its use and shortens the time to produce results.

What is Machine Learning (ML)?

It is a discipline of artificial intelligence based on the identification of patterns in large amounts of data. Basically, these are methods and systems that allow us to predict, extract, summarize, optimize, and adapt information. Moreover, these systems must be capable of improving themselves through use or training.

Machine Learning solves input-output problems by automatically creating mathematical functions or algorithms. Formulating the problem properly is therefore critical: start by defining exactly what we want to solve, and whether Machine Learning is the right tool for it.

To simplify, we could say that there are two types of Machine Learning: supervised and unsupervised. In supervised learning, you first need labeled examples from which the system learns; in unsupervised learning, the system finds patterns in the data without that prior labeling step.
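As a minimal sketch of the contrast (all data and function names here are illustrative, not a real library API): the supervised classifier relies on labels we supplied in advance, while the clustering routine groups points on its own.

```python
# Supervised vs. unsupervised learning on toy 2-D points.

def nearest_neighbor(train, query):
    """Supervised: each training point carries a label we provided in advance."""
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    point, label = min(train, key=lambda pl: dist(pl[0], query))
    return label

def two_means(points, iters=10):
    """Unsupervised: the algorithm groups points without any labels."""
    c1, c2 = points[0], points[-1]          # start centers at two extremes
    for _ in range(iters):
        g1 = [p for p in points if abs(p[0] - c1[0]) + abs(p[1] - c1[1])
                                 <= abs(p[0] - c2[0]) + abs(p[1] - c2[1])]
        g2 = [p for p in points if p not in g1]
        c1 = (sum(p[0] for p in g1) / len(g1), sum(p[1] for p in g1) / len(g1))
        c2 = (sum(p[0] for p in g2) / len(g2), sum(p[1] for p in g2) / len(g2))
    return g1, g2

labeled = [((0, 0), "low"), ((1, 0), "low"), ((9, 9), "high")]
print(nearest_neighbor(labeled, (8, 8)))    # classified using the given labels

clusters = two_means([(0, 0), (1, 1), (9, 9), (10, 10)])
print([len(g) for g in clusters])           # groups found with no labels at all
```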

What is Natural Language Processing (NLP)?

Natural Language Processing is a branch of artificial intelligence that helps computers understand, interpret, and manage human language.

The development of Natural Language Processing is not only based on computer technology, as it works closely with linguistics. The goal is to enable communication between humans and machines in the same way that we communicate with each other. The challenge is for the machine to be able to interpret human language, which is extremely complex and diverse and that we humans express in infinite ways, both verbally and in writing.

Natural Language Processing is important because it helps solve the ambiguity of language by converting words into numbers (vectors) so that all kinds of mathematical functions and hypotheses can be applied. On their own, these vectors lack what we understand by meaning, which makes them a kind of empty shell.
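A minimal sketch of this word-to-number conversion, using invented sentences and the simplest possible scheme (one-hot and bag-of-words vectors):

```python
# Turning words into vectors so mathematical functions can be applied to them.

def build_vocab(sentences):
    words = sorted({w for s in sentences for w in s.lower().split()})
    return {w: i for i, w in enumerate(words)}

def one_hot(word, vocab):
    """A vector of zeros with a single 1 at the word's position."""
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec

def sentence_vector(sentence, vocab):
    """Bag of words: sum the one-hot vectors of every word in the sentence."""
    vec = [0] * len(vocab)
    for w in sentence.lower().split():
        vec[vocab[w]] += 1
    return vec

corpus = ["the drug treats the disease", "the patient takes the drug"]
vocab = build_vocab(corpus)
print(sentence_vector(corpus[0], vocab))
```

Note that these vectors carry no meaning by themselves, which is exactly the "empty shell" problem mentioned above.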

Natural Language Processing uses Machine Learning and Deep Learning methods with supervised and unsupervised learning. Currently, the most powerful algorithms in Natural Language Processing are the so-called Transformers. It is also necessary to provide machines with syntactic rules and, finally, with semantic understanding, as well as with information about the specific knowledge domains to be treated.

What is Deep Learning (DL)?

Like Machine Learning, Deep Learning is a type of algorithm that solves input-output problems. These two types of algorithms are differentiated by the number of levels they use. At the initial level, the least abstract, the network learns something simple and then sends this information to the next level. The next level takes this simple information, combines it, composes somewhat more complex information, and passes it to the third level; and so on.

Therefore, they are like a waterfall with processing units that extract and transform variables. As in Machine Learning, the algorithms can use supervised learning or unsupervised learning, and applications include data modeling and pattern recognition.

They are widely used in Natural Language Processing, where they basically consist of a recurrent process of encoding and decoding words that have previously been turned into vectors, so that mathematical methods can be applied to them.
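The layered "waterfall" described above can be sketched as a tiny two-level network. The weights here are fixed toy values chosen for illustration; a real network learns them from data.

```python
import math

# Each layer transforms its input and passes the result to the next level.

def layer(inputs, weights, biases):
    """One processing level: weighted sums followed by a non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x):
    h1 = layer(x,  [[0.5, -0.2], [0.3, 0.8]], [0.1, -0.1])  # level 1: simple features
    h2 = layer(h1, [[1.0, 1.0]],              [0.0])        # level 2: combines them
    return h2[0]

print(round(forward([1.0, 2.0]), 3))
```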

What is Natural Language Generation (NLG)?

Natural Language Generation is a complementary discipline to Natural Language Processing that focuses on transforming data into a written or spoken narrative, as if it were written by a human, seeking to communicate information in the most intelligible way possible for people. It combines analytical knowledge with synthesized text to create contextualized narratives.

What is Robotic Process Automation (RPA)?

When companies carry out a digital transformation project, the aim is to eliminate repetitive tasks performed by humans and replace them with machines.

Robotic Process Automation is software that simulates the work of a person who must access multiple information systems to extract data and load it into other systems.

These software robots are added as layers on top of the systems already in place; they simulate repetitive work, avoid possible errors, and speed up the process.
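A hypothetical sketch of such a robot, with invented field names and data: it extracts records exported by one system (CSV), applies the same transformation rule every time, and loads the result in the format another system expects (JSON).

```python
import csv, io, json

# A software robot bridging two systems that do not talk to each other.
incoming = io.StringIO("id,customer,amount\n1,Acme,120\n2,Beta,80\n")

def extract(source):
    """Read the rows exported by the first system."""
    return list(csv.DictReader(source))

def transform(rows):
    """Apply the same rule every time, avoiding manual errors."""
    return [{"ref": int(r["id"]), "client": r["customer"],
             "total": float(r["amount"])} for r in rows]

payload = json.dumps(transform(extract(incoming)))
print(payload)  # ready to load into the second system
```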

What are Axons?

Axons are a semantic methodology that connects concepts based on their meaning. They structure the knowledge of a specific domain linking the concepts with semantic content.

We can use them to create ontologies automatically by obtaining a knowledge graph, from which we can develop metrics that allow us to structure the semantic analogy.

They also help us to categorize and classify concepts that can be used to create recommendation systems.

What is an ontology?

An ontology defines the terms used to describe and represent a specific area of knowledge or domain.

The name ontology comes from the Greek terms "ontos" (being) and "logos" (reasoning), as "the part of metaphysics that deals with being in general and its properties." It seeks to explain what exists beyond the physical.

In the field of technology, an ontology is the unambiguous semantic structure of a domain, one that allows communication between people, between a person and a computer, and between computers.

The aim of an ontology is to interpret a domain's information without ambiguity in order to manage and generate knowledge.

What is a Knowledge Graph?

A graph is a set of objects (nodes) that relate to each other through connections (edges). Through this visual representation, graphs allow us to study the relationships that exist between units that interact with each other.

The nodes contain data, and their labels or metadata are related to each other through the edges. This allows you to maintain multiple and varied data schemes over time without the need for redesign. A Knowledge Graph unifies heterogeneous and distributed information and makes it queryable by both machines and people; it can also be visualized, making it understandable in its context.

They are also capable of storing structured data, including metadata that implicitly provides structure and context to the information.

Due to these characteristics, they are a useful solution for storing data extracted from documents that, together with Machine Learning algorithms, allow us to visualize all knowledge in a unified way, carry out advanced analysis of relationships and search for patterns.

In short, Knowledge Graphs store data and give it structure and context, while offering query, information-retrieval, knowledge-discovery, and analysis capabilities that make them essential in fields such as Natural Language Processing.
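A minimal sketch of a knowledge graph as labeled triples (node, edge label, node), with simplified medical facts invented for illustration, plus the kind of pattern query such a structure supports:

```python
# A knowledge graph stored as (subject, relation, object) triples.

triples = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "is_a", "drug"),
    ("headache", "is_a", "symptom"),
]

def query(triples, subject=None, relation=None, obj=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [(s, r, o) for s, r, o in triples
            if subject in (None, s) and relation in (None, r) and obj in (None, o)]

# "What does aspirin treat?"
print(query(triples, subject="aspirin", relation="treats"))
# "Which nodes are drugs?"
print(query(triples, relation="is_a", obj="drug"))
```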

What is a Transformer?

A Transformer is a Deep Learning model used mainly in natural language processing.

In addition to vectorizing words, it adds an attention mechanism to analyze the overall behavior of the sentence, both in encoding and decoding.

This mechanism helps to understand the context, just as we humans do, capturing the essence of each sentence rather than the meaning of each word.
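A toy sketch of this attention mechanism over three invented word vectors (a real Transformer learns both the vectors and the projections; this only shows the weighting step): similar words receive higher weights, so the output leans toward them.

```python
import math

# Scaled dot-product attention over toy word vectors.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)               # how much each word matters here
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

words = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]   # two similar words, one different
out = attention(words[0], words, words)
print([round(x, 2) for x in out])
```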

What is a kernel?

It is a mathematical method that helps solve complex classification problems by letting linear algorithms operate in a richer feature space, effectively turning them into non-linear algorithms.
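A minimal sketch of the idea, with illustrative data points: a polynomial kernel returns the same value as a dot product in an explicit non-linear feature space, but without ever building that space.

```python
# The "kernel trick": compute a dot product in a richer feature space cheaply.

def phi(p):
    """Explicit non-linear feature map: (x, y) -> (x^2, y^2, sqrt(2)*x*y)."""
    x, y = p
    return (x * x, y * y, (2 ** 0.5) * x * y)

def poly_kernel(p, q):
    """Same result as dotting phi(p) with phi(q), without computing phi."""
    return (p[0] * q[0] + p[1] * q[1]) ** 2

a, b = (1.0, 2.0), (3.0, 1.0)
explicit = sum(u * v for u, v in zip(phi(a), phi(b)))
print(explicit, poly_kernel(a, b))   # both give the same value
```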

What is dezzai capable of?

With enough medical documents and dictionaries, dezzai is able to create a medical ontology.

In the same way, an ontology created from the documentation and knowledge accumulated by a banking company can help both human agents and bots in the management of incidents.

These are just two examples of the power of dezzai.

What makes axon technology different?

The technology of dezzai goes far beyond gathering knowledge and creating summaries or texts.

Axons represent a new computational approach to semantics, giving machines an associative capacity that starts from the very meaning of words.

This allows concepts to be related, synonyms to be identified, knowledge to be structured automatically, disambiguation to be performed, and, in general, models to be built that improve the artificial "understanding" of a text.

Why do we use graphs?

We store graphs showing the relationships between concepts in our database, structured by their meanings. Then, our Machine Learning algorithms let us visualize all the knowledge in a unified way, aiding advanced analysis of relationships and the search for patterns.
