Frequently Asked Questions

What is artificial intelligence?

Artificial intelligence has many definitions, but we could describe it as an area of computer science that aims to train computers to carry out tasks that previously only humans could perform.

We can also define it as something that gives machines the ability to learn how to adapt and make effective decisions as a person would. This technology brings machines closer to models of thought, perception, and action that make them appear intelligent.

Moreover, it is not only a technology that “does” things; it also learns how to do them.

How do we classify artificial intelligence?

The scientific literature defines three types of artificial intelligence: narrow artificial intelligence, general artificial intelligence, and superintelligence (also known as the singularity).

Narrow artificial intelligence performs a specific task, no matter how complex it may be. Examples include real-time recommendation systems, adaptive systems for automatic ship control, and virtual assistants.

General artificial intelligence can carry out a wide range of human activities, such as inferring new knowledge or solving problems to which it has not previously been exposed.

Superintelligence additionally acquires consciousness and surpasses humans in cognitive tasks.

What is the relationship between people and artificial intelligence?

Our first relationship with this technology arises from the collective imagination that we form as children through novels, films, and video games.

As a result of these human creations, we all have a preconceived idea of what artificial intelligence is. Because of the necessary conflict in works of fiction, when we hear these words we imagine Skynet accompanied by Terminator robots killing off humanity.

This is far from the current reality; we should instead view artificial intelligence as something similar to an advanced Excel spreadsheet that helps us with specific tasks. The combination of human and artificial intelligence solves certain problems more effectively.

From an information-processing point of view, we humans easily adapt to analyzing unstructured information and recognizing unusual circumstances and their consequences. Machines, on the other hand, are better suited to analyzing large amounts of data and extracting patterns, and they do so with a higher degree of accuracy than humans.

In short, as has happened in every technological revolution, artificial intelligence is replacing some jobs that are susceptible to automation and creating others. But it is of course not going to put an end to our way of producing, much less to our species.

How does artificial intelligence impact today's society?

Robotic process automation is already replacing people with machines, as are virtual assistants in call centers.

However, there is an unmet demand for professionals who know how to create and maintain systems related to data and its processing, such as mathematicians and engineers. These experts know how to transfer specific knowledge from any discipline to machines.

As with any revolution, some low-skilled jobs will disappear and others will be created. At the end of the 19th century, 40% of the US population worked in agriculture. A century later, it was only 2%. The question is: what should we do so that the “farmers” of the 21st century are not left out of the labor market?

Hopefully, governments will consider this and take the necessary steps. For example, in the late 19th century, the US banned child labor and put children in school, creating the workforce of the future. Today, governments such as Finland's are offering free artificial intelligence courses to the entire population.

How far has artificial intelligence evolved?

According to our 2020 state-of-the-art survey on artificial intelligence, professionals from various fields scored the current degree of artificial intelligence development at 4.9 out of 10.

Amara’s Law tells us that we tend to overestimate the effect of technology in the short term and underestimate it in the long term. The reality is that narrow AI is advancing rapidly, but we are still far from simulating the human brain.

If we consider artificial intelligence as a mixture of mathematics, computation, and knowledge, it seems logical to think that a change in any of these aspects could accelerate progress. Topics such as quantum computing or the evolution of natural language processing and natural language generation could shift perspectives and drive mass adoption.

Regarding emulators of the human brain, reputable authors such as Robin Hanson believe that we are still two to four centuries away from machines developing something similar to our human capabilities.

How is artificial intelligence transforming drug development?

Nowadays, one of the main advances that artificial intelligence is bringing to the pharmaceutical industry is the discovery and development of new drugs through different processes:

  • Helping researchers discover drugs.
  • Developing more affordable drugs through the creation of polypharmacological profiles.
  • Finding faster ways to treat diseases.
  • Helping scientists improve outcomes in drug discovery for rare diseases.

The applications of this technology are countless and invaluable. For example, it can track and predict epidemiological outbreaks using all the information available from multiple sources: official data, satellite images, or information from social media.

However, artificial intelligence still has a promising future ahead in this industry, and much of that potential lies in business processes.

Advances in natural language processing, the discipline of artificial intelligence that investigates how to get machines and people to communicate using natural languages, make it possible for machines to turn text into knowledge.

This can be applied to an enormous number of processes: from the analysis of commercial documents that must comply with legal requirements to the monitoring of pharmacovigilance, including listening to and caring for patients.

Why is now the time to implement Artificial Intelligence?

There are four main reasons.

The first is related to the possibility of storing and accessing the data necessary to implement projects based on artificial intelligence. Here, it is essential to stress that identifying what information is needed to solve a specific problem matters more than the amount of data stored, which implies having a clear data strategy.

The second reason is easy access to the computing resources needed to train practically any model. The third is, precisely, access to pre-trained models, such as the Transformers used in natural language processing, which can be adapted for particular purposes within each company.

The final reason is that artificial intelligence is now built from different components, which facilitates its use and shortens the time needed to produce results.

What is Machine Learning (ML)?

Machine Learning is a discipline of artificial intelligence based on identifying patterns in large amounts of data. Essentially, it comprises methods and systems that allow us to predict, extract, summarize, optimize, and adapt information. These systems must also be capable of improving themselves through use or training.

Machine Learning solves input-output problems by creating mathematical functions or algorithms automatically. So formulating problems properly is critical, starting by defining exactly what we want to solve and whether Machine Learning is the right tool for it.

To simplify, we could say that there are two types of Machine Learning: supervised and unsupervised. In supervised learning, the data must first be labeled with the correct answers so that the system can learn from examples, while unsupervised learning finds patterns without this prior step.
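As a hedged illustration (a toy sketch using the open-source scikit-learn library and its bundled Iris dataset, not any specific dezzai component), the following contrasts the two approaches:

```python
# A minimal sketch contrasting supervised and unsupervised learning,
# using scikit-learn and its bundled Iris dataset.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised: the model learns from labeled examples (y_train).
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the model groups the data without seeing any labels.
kmeans = KMeans(n_clusters=3, n_init=10).fit(X_train)
print("Cluster assignments:", kmeans.predict(X_test[:5]))
```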

What is Natural Language Processing (NLP)?

Natural Language Processing is a branch of artificial intelligence that helps computers understand, interpret, and manage human language.

The development of Natural Language Processing is not only based on computer technology; it also works closely with linguistics. The goal is to enable communication between humans and machines in the same way that we communicate with each other. The challenge is for the machine to interpret human language, which is extremely complex and diverse and which we humans express in countless ways, both verbally and in writing.

Natural Language Processing is important because it helps resolve the ambiguity of language by converting words into numbers (vectors) so that all kinds of mathematical functions and hypotheses can be applied. On their own, these numerical representations lack what we understand by meaning, which makes them a kind of empty shell.

Natural Language Processing uses Machine Learning and Deep Learning methods with supervised and unsupervised learning. Currently, the most powerful algorithms in Natural Language Processing are the so-called Transformers. It is also necessary to provide machines with syntactic rules and semantic understanding, as well as with information about the specific knowledge domains to be treated.
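To make the idea of words as vectors concrete, here is a minimal, hedged sketch using scikit-learn's TF-IDF vectorizer; production NLP systems typically use learned embeddings instead:

```python
# A minimal sketch of turning text into vectors so that mathematical
# operations (here, cosine similarity) can be applied.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The patient was prescribed a new drug.",
    "A new medicine was given to the patient.",
    "The stock market closed higher today.",
]
vectors = TfidfVectorizer().fit_transform(docs)

# Sentences about the same topic end up closer in vector space.
print(cosine_similarity(vectors[0], vectors[1]))  # relatively high
print(cosine_similarity(vectors[0], vectors[2]))  # relatively low
```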

What is Deep Learning (DL)?

Just as with Machine Learning, Deep Learning is a type of algorithm that solves input-output problems. The two are differentiated by the number of layers they use. At the first, least abstract layer, the network learns something simple and sends this information to the next layer. That layer combines the simple information into something more complex and passes it to the third layer, and so on.

Therefore, a deep network is like a waterfall of processing units that extract and transform variables. As in Machine Learning, the algorithms can use supervised or unsupervised learning, and applications include data modeling and pattern recognition.

Deep Learning is widely used in Natural Language Processing, where it consists of a recurrent process of encoding and decoding words that have previously been converted into vectors so that mathematical operations can be applied to them.
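As a toy illustration of this layered composition (plain NumPy with random weights and no training, purely to show how each layer transforms its input and passes the result onward):

```python
# A toy two-layer neural network in NumPy, illustrating how each layer
# transforms its input and passes the result to the next, more abstract layer.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))          # input: 4 raw features

W1 = rng.normal(size=(4, 8))         # first layer: simple patterns
W2 = rng.normal(size=(8, 2))         # second layer: combinations of patterns

hidden = np.maximum(0, x @ W1)       # ReLU activation after layer 1
output = hidden @ W2                 # layer 2 combines layer-1 features
print(output)
```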

What is Natural Language Generation (NLG)?

Natural Language Generation is a complementary discipline to Natural Language Processing that focuses on transforming data into a written or spoken narrative as if it were produced by a human. In this way, it seeks to communicate information in the most intelligible way possible for people. It combines analytical knowledge with synthesized text to create contextualized narratives.
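At its simplest, the data-to-text idea can be sketched with a template; modern NLG systems generate text with neural models, so this is only an illustration (the function and figures are hypothetical):

```python
# A minimal, template-based sketch of turning structured data into a
# narrative. Modern NLG systems use neural models; this only illustrates
# the data-to-text idea. Names and figures are hypothetical.
def describe_sales(region: str, sales: float, change: float) -> str:
    trend = "grew" if change > 0 else "fell"
    return (f"Sales in {region} reached {sales:,.0f} euros, "
            f"and {trend} by {abs(change):.1f}% compared to last quarter.")

print(describe_sales("Madrid", 125000, 4.2))
# Sales in Madrid reached 125,000 euros, and grew by 4.2% compared to last quarter.
```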

What is Robotic Process Automation (RPA)?

When companies carry out a digital transformation project, the aim is to eliminate repetitive tasks performed by humans and replace them with machines.

Robotic Process Automation is software that simulates the work of a person who must access multiple information systems to extract data and load it into other systems.

These software robots are added as a layer on top of the systems already in place, simulating repetitive work while avoiding possible errors and speeding up the process.
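A hedged sketch of this pattern in Python, where the file name and API endpoint are hypothetical placeholders rather than real systems:

```python
# A sketch of an RPA-style bot: read records exported from one system
# and load them into another. The CSV file and endpoint are hypothetical.
import csv
import json
from urllib import request

def transfer_records(csv_path: str, target_url: str) -> None:
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            body = json.dumps(row).encode("utf-8")
            req = request.Request(target_url, data=body,
                                  headers={"Content-Type": "application/json"})
            request.urlopen(req)  # load the record into the target system

# Example (hypothetical source file and target API):
# transfer_records("invoices_export.csv", "https://example.com/api/invoices")
```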

What are Axons?

Axons are a semantic methodology that connects concepts based on their meaning. They structure the knowledge of a specific domain by linking concepts with semantic content.

We can use them to create ontologies automatically by obtaining a knowledge graph, from which we can derive metrics that allow us to structure semantic analogies.

They also help us to categorize and classify concepts that can be used to create recommendation systems.

What is an ontology?

An ontology defines the terms used to describe and represent a specific area of knowledge or domain.

The name ontology comes from the philosophical terms «ontos» (being) and «logos» (reasoning), defined as «the part of metaphysics that deals with being in general and its properties». It seeks to explain what exists beyond the physical.

In the field of technology, an ontology is an unambiguous semantic structure of a domain that allows communication between people, between a person and a computer, and between computers.

The aim of having an ontology is to be able to interpret the information of a domain, without ambiguity, to manage and generate knowledge.
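As a minimal illustration (using the open-source rdflib library; the medical terms and namespace are hypothetical examples), an ontology can be expressed as typed terms and relationships:

```python
# A minimal ontology sketch using the open-source rdflib library.
# The medical terms and namespace are hypothetical examples.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/medical#")
g = Graph()

# Define terms of the domain and their relationships, without ambiguity.
g.add((EX.Drug, RDF.type, RDFS.Class))
g.add((EX.Analgesic, RDFS.subClassOf, EX.Drug))
g.add((EX.Ibuprofen, RDF.type, EX.Analgesic))
g.add((EX.Ibuprofen, RDFS.label, Literal("Ibuprofen")))

print(g.serialize(format="turtle"))
```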

What is a Knowledge Graph?

A graph is a set of objects (nodes) that relate to each other through connections (edges). Through this visual representation, graphs allow us to study the relationships that exist between units that interact with each other.

The nodes contain data, and their labels or metadata are related to each other through the edges. This allows you to maintain multiple and varied data schemes over time without the need for redesign. A Knowledge Graph unifies heterogeneous and distributed information and makes it possible for both machines and people to query it. It also allows the information to be visualized, making it understandable in its context.

They are also capable of storing structured data, including metadata that implicitly provides structure and context to the information.

Due to these characteristics, they are a useful solution for storing data extracted from documents; together with Machine Learning algorithms, they allow us to visualize all knowledge in a unified way, carry out advanced analysis of relationships, and search for patterns.

In short, Knowledge Graphs store data and give it structure and context. They also offer query, information retrieval, knowledge discovery, and analysis capabilities that make them essential in fields such as Natural Language Processing.
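A minimal sketch of these ideas using the open-source networkx library, with hypothetical entities:

```python
# A minimal knowledge-graph sketch using the open-source networkx library.
# Nodes carry data, edges carry labeled relationships; the entities
# here are hypothetical examples.
import networkx as nx

g = nx.DiGraph()
g.add_node("Ibuprofen", type="drug")
g.add_node("Pain", type="symptom")
g.add_edge("Ibuprofen", "Pain", relation="treats")

# The graph can be questioned: what does Ibuprofen treat?
for _, target, attrs in g.out_edges("Ibuprofen", data=True):
    print(f"Ibuprofen {attrs['relation']} {target}")
```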

What is a Transformer?

A Transformer is a Deep Learning model used mainly in natural language processing.

In addition to vectorizing words, it adds an attention mechanism to analyze the overall behavior of the sentence, both in encoding and decoding.

This mechanism helps to understand the context, just as we humans do, capturing the essence of each sentence rather than the meaning of each word.
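A hedged sketch of that attention mechanism (scaled dot-product attention, the standard formulation, shown here in NumPy with toy values):

```python
# Scaled dot-product attention, the core of a Transformer, in NumPy.
# Each word (row) attends to every other word; the weights say how much
# context each word draws from the rest of the sentence.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # similarity of queries and keys
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V                       # context-weighted mix of values

rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))   # 4 "words", 8-dimensional vectors
print(attention(Q, K, V).shape)       # (4, 8): each word now carries context
```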

What is a kernel?

A kernel is a mathematical method of classification that helps solve complex problems by allowing linear algorithms to learn non-linear decision boundaries.
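A minimal sketch of this "kernel trick" with scikit-learn: on circular, non-linearly separable data, a linear SVM fails while an RBF-kernel SVM separates the classes:

```python
# The kernel trick with scikit-learn: on circular data, a linear SVM
# cannot separate the classes, while an RBF-kernel SVM can.
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

print("Linear kernel:", SVC(kernel="linear").fit(X, y).score(X, y))  # ~0.5
print("RBF kernel:   ", SVC(kernel="rbf").fit(X, y).score(X, y))     # ~1.0
```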

What is dezzai capable of?

With enough medical documents and dictionaries, dezzai is able to create a medical ontology.

In the same way, an ontology created from the documentation and knowledge accumulated by a banking company can help both human agents and bots in the management of incidents.

These are just two examples of the power of dezzai. Want to read more cases? Click here.

What makes axon technology different?

The technology of dezzai goes far beyond gathering knowledge and creating summaries or texts.

Axons represent a new computational approach to semantics, giving machines an associative capacity that starts from the very meaning of words.

This allows concepts to be related, synonyms to be identified, knowledge to be structured automatically, and disambiguation to be performed; in general, it enables the construction of models that improve the artificial “understanding” of a text.

Why do we use graphs?

We store graphs in a database to show the relationships between concepts according to their meaning through a graphical representation.

In this way, and thanks to Machine Learning algorithms, it is possible to visualize all the knowledge in a unified way, perform advanced analysis of the relationships, and search for patterns.


