Cyber warfare: Italy must strengthen its defenses

L’Huffington Post

– A few days ago, a series of cyber attacks on the relatively unknown American company Dyn (which operates one of the main DNS "switchboards" that keep the Internet running in North America) brought the web to its knees in the US for several hours, blocking access to many major sites: Twitter, Spotify, Pinterest, parts of the Amazon galaxy, the New York Times, the BBC, CNN and several others.

The attacks themselves, though repeated and massive, were not particularly sophisticated. Indeed, the US government let it be known that the action could even have been carried out by teenagers, and that there was no need to attribute it to "other governments". There may have been some desire to play things down, but the US suggestion is perfectly plausible, and a clear indication of how vulnerable the computer systems that increasingly govern our lives really are.

Read the article by Eugenio Santagata, CEO of CY4Gate SpA, Executive Vice President and Director.

The opportunities offered by today’s big data and unstructured information are a strong impetus for companies to choose solutions based on text mining approaches and applications. Text content essential for business, both internal and external, is everywhere today: customer service records, insurance claims and clinical trial data, as well as emails, news and social media content. Text mining approaches and applications are essential for helping companies take advantage of this information to improve strategic business activities and boost decision making.

Unlike data mining, which is designed to work with structured data, text mining (or text analytics) focuses on text-heavy business data and is intended to handle full-text documents, emails and web content. In other words, text mining handles the most relevant part of today’s enterprise business data that is stored as unstructured content in the form of text (over 80% of business data is unstructured).

Text mining approaches

The most common text mining approach involves a representation of text that is based on keywords. A keyword-based methodology can be combined with other statistical elements (machine learning and pattern recognition techniques, for example) to discover relationships between different elements in text by recognizing repetitive patterns present in the content. These approaches do not attempt to understand language, and may only retrieve relationships at a superficial level.
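
The superficiality of the keyword-based approach is easy to see in a toy sketch. The function below (illustrative only, not any vendor’s implementation) surfaces repetitive patterns purely by counting terms and term pairs that recur across documents, with no understanding of what the words mean:

```python
from collections import Counter
from itertools import combinations

def keyword_patterns(documents, min_count=2):
    """Count keywords and keyword pairs that recur across documents.

    A purely statistical sketch: no linguistic understanding,
    just surface-level repetition.
    """
    stopwords = {"the", "a", "of", "and", "to", "in", "is"}
    term_counts = Counter()
    pair_counts = Counter()
    for doc in documents:
        # unique terms per document, sorted so pairs are canonical
        terms = sorted({w for w in doc.lower().split() if w not in stopwords})
        term_counts.update(terms)
        pair_counts.update(combinations(terms, 2))
    # keep only patterns that repeat across documents
    return (
        {t: c for t, c in term_counts.items() if c >= min_count},
        {p: c for p, c in pair_counts.items() if c >= min_count},
    )

docs = [
    "the customer reported a billing error",
    "billing error in the customer invoice",
    "customer praised the support team",
]
terms, pairs = keyword_patterns(docs)
```

Counting alone surfaces ("billing", "error") as a recurring pair, but it cannot tell a genuine billing error from a sentence that merely happens to contain both words; that is the superficial level mentioned above.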

Text mining based on intelligent technologies such as artificial intelligence and semantic technology can leverage an understanding of language to more deeply understand a text. This enables extraction of the most useful information and knowledge hidden in text content and improves the overall analysis and management of information.

Text mining applications

As a powerful approach that improves a number of activities, from information management, data analysis and business intelligence to social media monitoring, text mining is successfully applied in a variety of industries. Here are some examples of the most used text mining applications:

Knowledge management software

Text mining supports organizations in managing unstructured information, identifying connections and relationships in information, and in extracting relevant entities to improve knowledge management activities.

Customer intelligence software

Text mining helps companies get the most out of customer data by capturing new needs and opinions from text, improving customer support through the ability to understand what clients are saying (for example, via social media).

Entity extraction software

Text mining extracts the entities that matter most (people, places, brands, etc., as well as relevant relationships, concepts, facts and emotions) from text. Better analysis of text means better entity extraction results, which can be integrated into other platforms and applications to improve business intelligence activities, for example.
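
As an illustration of the gap a semantic engine closes, here is a deliberately naive entity extractor (a toy heuristic invented for this sketch, not how a production engine works) that guesses candidate entities from capitalization alone:

```python
import re

def extract_entities(text):
    """Naive entity extraction: runs of capitalized words that are not
    at the start of a sentence are treated as candidate named entities.
    A toy heuristic, nothing like a full semantic engine.
    """
    entities = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = sentence.split()
        i = 1  # skip the first word: capitalized by convention
        while i < len(words):
            if words[i][:1].isupper():
                j = i
                while j < len(words) and words[j][:1].isupper():
                    j += 1
                entities.append(" ".join(w.strip(".,") for w in words[i:j]))
                i = j
            else:
                i += 1
    return entities

text = "The claim was filed by John Smith in New York. It mentions Acme Insurance."
entities = extract_entities(text)
```

This heuristic finds "John Smith", "New York" and "Acme Insurance" here, but it would also misfire on any mid-sentence capitalized word and knows nothing about relationships, concepts or emotions; extracting those requires actual language understanding.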

Visit our Solutions page to discover Expert System’s text analytics solutions


– A few days ago, Microsoft and the major dotcoms announced a joint effort to develop and promote Artificial Intelligence (AI): we discussed it with Luigi Conti, VP Corporate Strategy & Development at Expert System.

Read the interview

Business France

– Listen to the talk by Maurizio Mencarini, EMEA Head of Sales – Intelligence at Expert System, at the event dedicated to the second edition of "The Month of Investment in France", organized by the French Embassy in Italy and Business France Italia.


With many years of experience in developing and implementing many different kinds of information management and process automation projects, we are in a unique position to share what we believe are the most effective text analytics best practices and the most innovative, value-rich text analytics applications.

Let’s start with Text Analytics best practices by focusing on a couple of areas that can have a significant impact on a project’s success, but that are often ignored by customers and/or consultants.

Text Analytics best practices

  1. In Big Data text analytics applications like innovation or security intelligence projects, you don’t really need to be selective in choosing the sources or content to include in the analysis. One of the biggest advantages of working with an advanced text analytics engine is that it does a lot of the heavy lifting for you! The engine can analyze content at scale, and it offers plenty of capabilities and criteria for intelligently filtering the content. This approach delivers results that are both manageable and actionable because you can accurately eliminate all of the irrelevant content in the post-acquisition phase. Given that extremely relevant data are often retrieved from the most unlikely, unexpected sources, this text analytics best practice is essential because it allows for more effective results and higher ROI.
  2. If you have advanced needs for information intelligence or process automation, make sure you include a semantic-based engine in the text analytics application selection process. In this way, you can compare and contrast the results with those of purely statistical or machine learning based engines. A semantic-based engine gives you the ability to understand the meaning of words in context from the beginning. These systems still need a bit of refinement to augment the knowledge with domain- or company-specific knowledge, but they do not require the cumbersome and resource-intensive training that pure machine learning systems require. While this text analytics best practice might require disclosure that our Cogito platform uses this exact approach, our many years of experience working on a wide variety of advanced projects supports its validity as a best practice. In the real world, for text analytics applications, the pure machine learning approach is simply too long and too expensive. Achieving significant ROI in a reasonable timeframe with such a heavy investment is very difficult.
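
The first best practice can be sketched in a few lines: acquire everything, then apply selectivity in the post-acquisition phase. The scoring function below is a hypothetical stand-in for the much richer filtering criteria a real engine offers:

```python
def relevance_score(doc, topic_terms):
    """Fraction of topic terms that appear in the document.
    A hypothetical, deliberately simple relevance criterion."""
    words = set(doc.lower().split())
    return sum(1 for t in topic_terms if t in words) / len(topic_terms)

def filter_post_acquisition(documents, topic_terms, threshold=0.5):
    """Ingest everything, then keep only documents scoring above the
    threshold: selectivity applied after acquisition, not before."""
    return [d for d in documents if relevance_score(d, topic_terms) >= threshold]

corpus = [
    "new malware campaign targets banking credentials",
    "quarterly earnings beat analyst expectations",
    "phishing kit reuses malware infrastructure for credentials theft",
]
relevant = filter_post_acquisition(corpus, ["malware", "credentials", "phishing"])
```

Because filtering happens after ingestion, a relevant document from an unexpected source (the third item here) is still caught rather than excluded up front.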

Text Analytics applications

Together with the explosion of available information we are all experiencing, the number of text analytics applications that provide business value to organizations is growing by the day. The three applications below, while perhaps less well known, offer organizations the highest ROI potential, based on our experience in the field:

  1. Automating the insurance claim process. This is a very expensive process for any organization in the sector. Outsourcing claims to external providers (as many companies have done) is only a partial solution. Often, the negative impact on customer satisfaction resulting from inexperienced employees working on claims offsets any initial savings. With an advanced engine, this activity can be automated with a text analytics application that automatically extracts data from all the forms submitted by customers, compares them with customer coverage data in the company DB, and recommends approval or rejection to the agent in charge of the final decision. In this scenario, you can expect savings of 50-60% compared to outsourcing, with less exposure to errors.
  2. Automating customer interactions. This text analytics application includes self-service apps accessible from a website or via text message, as well as the automatic categorization of emails or support to call center agents through natural language based question answering engines. The utility and popularity of intelligent smartphone agents is increasing both awareness and adoption of these types of text analytics applications in many types of organizations.
  3. Intelligence event tracking applications. It is increasingly important for organizations to be able to discover, anticipate or simply track events that impact their various activities. The cost of a partial understanding of the events around us can be catastrophic in certain scenarios. Just imagine the impact of not knowing what the competition is doing; of product development being behind the curve in learning about new market trends and disruptive technologies; or the operational risks of trusting the wrong supply chain partners. Surprisingly, companies are still using very manual processes to track these situations, often resulting in learning about a critical event too late. An advanced text analytics application, through fast and accurate understanding of information streams and content, can be your eyes and ears, your sensors in the world, working 24/7 to track what is happening in the ecosystem, isolating events and alerting you in real time.
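
To make the claims example concrete, here is a minimal sketch of the final decision step. The field names and the coverage record are hypothetical, and in a real deployment the data would be extracted from free-text forms by the text analytics engine rather than arrive pre-parsed:

```python
def recommend_claim(extracted, coverage):
    """Compare fields extracted from a claim form with the customer's
    coverage record and recommend approval or rejection to the human
    agent who makes the final decision. Hypothetical field names."""
    if extracted["policy_id"] != coverage["policy_id"]:
        return "reject", "policy number mismatch"
    if extracted["claim_type"] not in coverage["covered_risks"]:
        return "reject", "risk not covered"
    if extracted["amount"] > coverage["max_payout"]:
        return "reject", "amount exceeds coverage limit"
    return "approve", "claim within coverage"

claim = {"policy_id": "P-1001", "claim_type": "water damage", "amount": 4200}
policy = {"policy_id": "P-1001",
          "covered_risks": ["fire", "water damage"],
          "max_payout": 10000}
decision, reason = recommend_claim(claim, policy)
```

Note that the recommendation goes to an agent rather than being auto-executed, matching the human-in-the-loop design described above.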

– Prediction is tough and rarely a guarantee. If we were able to predict the future, there would be no Vegas, or at minimum a much poorer version. Weather would cause fewer surprises, and the World Series, the Davis Cup, the Stanley Cup, the World Cup, the Super Bowl (despite the not-so-predictable commercials) and the Daytona 500 would not be nearly as interesting to watch. In the world we live in, there are so many variables, nuances and unknowns that it is difficult, at best, to predict a specific outcome or what the future may hold. If we could consistently and accurately predict the future, it would be like having tomorrow’s Wall Street Journal, and those with access would retire rich the day after tomorrow.

Read the article on Dataversity

A technology that strives to understand human communication must be able to understand meaning in language. In this post, we take a deeper look at a core component of our Cogito technology, the semantic disambiguator, and how it determines word meaning and sentence meaning.

To start, let’s clarify our definitions of words and sentences from a linguistic point of view.

Word meaning and sentence meaning in semantics

A “word” is a string of characters that can have different meanings (jaguar: the car or the animal? driver: someone who drives a vehicle or a software component of a computer? rows: the plural noun or the third person singular of the verb to row?). A “sentence” is a group of words that express a specific thought: to capture it, we need to understand how words relate to one another (“Paul, Jack’s brother, is married to Linda”: Linda is married to Paul, not Jack).

Going back to school

To understand word meaning and sentence meaning, our semantic disambiguator engine must be able to automatically resolve ambiguities to understand the meaning of each word in a text.

Let’s consider this sentence:

John Smith is accused of the murders of two police officers.

To understand the word meaning and sentence meaning in any phrase, the disambiguator performs four consecutive phases of analysis:

  1. Lexical analysis

    This phase breaks up the stream of text into meaningful elements called tokens. The sequence of “atomic” elements resulting from this process will be further elaborated in the next phase of analysis.

John > human proper noun
Smith > human proper noun
is > verb
accused > noun
of > preposition
the > article
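
A rough approximation of this tokenization step can be written with a simple regular expression (a sketch, not the disambiguator’s actual lexer):

```python
import re

def lexical_analysis(text):
    """Break a stream of text into atomic tokens (the lexical phase).
    Words stay whole (including contractions); punctuation is split
    off as separate tokens."""
    return re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)

tokens = lexical_analysis(
    "John Smith is accused of the murders of two police officers."
)
```

The resulting token sequence, ending with the final period as its own token, is what the grammatical phase below receives as input.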

  2. Grammatical analysis

    During this phase, each token in the text is assigned a part of speech. The semantic disambiguator is able to recognize inflected forms and conjugations, and to identify nouns, proper nouns and so on. Starting from a mere sequence of tokens, the result of this elaboration is a sequence of elements, some of which have been grouped to form collocations (police officer); every token or group of tokens is represented by a block that identifies its part of speech.

John Smith > human proper noun
is accused > nominal predicate
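
The collocation grouping mentioned above can be sketched as follows; the tag names and the collocation dictionary are hypothetical stand-ins for the engine’s linguistic resources:

```python
def group_collocations(tagged, collocations):
    """Merge adjacent tokens that form a known collocation
    (e.g. 'police' + 'officers' -> 'police officers') so that later
    phases treat them as a single element."""
    out, i = [], 0
    while i < len(tagged):
        word, pos = tagged[i]
        if i + 1 < len(tagged):
            pair = f"{word} {tagged[i + 1][0]}"
            if pair in collocations:
                out.append((pair, collocations[pair]))
                i += 2
                continue
        out.append((word, pos))
        i += 1
    return out

tagged = [("two", "NUM"), ("police", "NOUN"), ("officers", "NOUN")]
merged = group_collocations(tagged, {"police officers": "NOUN"})
```

After merging, "police officers" is one block with one part of speech, which is exactly the representation the syntactical phase needs.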

  3. Syntactical analysis

    During this phase, the disambiguator performs several word-grouping operations on different levels in order to reproduce the way that words are linked to one another to form sentences. Sentences are further analyzed to attribute a logical role to each phrase (subject, object, verb, complement, etc.) and to identify relationships between verbs, subjects and objects, and between these and other complements whenever possible. In our example, the sentence is made of a single independent clause, where John Smith is recognized as the subject of the sentence.

John Smith > subject
is accused > nominal predicate

  4. Semantic analysis

    During the last and most complex phase, the tokens recognized during grammatical analysis are associated with a specific meaning. Each token can be associated with several concepts; the choice is made by considering the base form of each token with respect to its part of speech, the grammatical and syntactical characteristics of the token, the position of the token in the sentence and the relation of the token to the syntactical elements surrounding it.

Like the human brain, the disambiguator eliminates all candidate meanings for each token except one, which is definitively associated with the token. When the disambiguator comes across an unknown element in a text (for example, a human proper name), it tries to infer its meaning from the context in which the token appears.

is accused > to accuse > to blame
police officer > policeman, police woman, law enforcement officer
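
A heavily simplified stand-in for this sense-selection step is the classic Lesk-style overlap heuristic: pick the sense whose gloss shares the most words with the surrounding context. This is far cruder than a real semantic disambiguator (the senses and glosses below are invented for the sketch), but it shows the principle:

```python
def disambiguate(word, context, senses):
    """Pick the sense whose gloss shares the most words with the
    context: a simplified Lesk-style heuristic, far cruder than a
    full semantic disambiguator."""
    context_words = set(context.lower().split())
    def overlap(gloss):
        return len(set(gloss.lower().split()) & context_words)
    return max(senses, key=lambda s: overlap(senses[s]))

# toy sense inventory for the ambiguous word "jaguar"
senses = {
    "animal": "large wild cat found in the americas",
    "car": "luxury vehicle made by a british manufacturer",
}
sense = disambiguate("jaguar", "the jaguar is a wild cat that hunts at night", senses)
```

Here the words "wild" and "cat" in the context pull the decision toward the animal sense; a real disambiguator weighs far richer evidence (syntax, position, semantic relations) in the same spirit.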

Want to learn more about the disambiguation process? Try our demo



– While we will have to wait a few decades to see armies made up of robots, in the context of cyber warfare there are already systems capable of operating at a speed impossible for a human being. In the Pentagon’s intentions, these remain systems that do not aim to replace humans, but to support them effectively.

Read the article by Andrea Melegari


– Websim Action

«Artificial intelligence is a huge opportunity for the Italian hi-tech industry, a train our country cannot afford to miss». So says Stefano Spaggiari, founder and CEO of Expert System [EXSY.MI], the Modena-based company and leader in semantic artificial intelligence for Big Data analysis, who explains: «at the global level there is still no player established as the absolute leader; it is a race in which no one yet has a decisive commercial or technological advantage».

Our country has lost the race in many, perhaps too many, fields of information technology, but the artificial intelligence game is still open, even to companies that are still small but technologically cutting-edge, like Expert System, ranked in the global top 10 for semantic Big Data analysis technologies, alongside giants like IBM and HP.

Read the article

Expert System is the Platinum Sponsor of the 2016 KMWorld Taxonomy Boot Camp, November 14-17, 2016 in Washington DC. We are thrilled to have two sessions where you can see Expert System speak: CEO Daniel Mayer on “Cognitive Meets Taxonomy” and EVP Bryan Bell on “Context Navigation & Semantics”.

This year’s theme, Hacking KM: People, Processes, & Technologies, will look at novel ways to support knowledge sharing and organizational culture. It will examine processes and technologies that foster collaboration between self-organizing, cross-functional teams to promote early delivery, stimulate innovation and continuous improvement, and encourage rapid, flexible response to change. We look forward to seeing you at this year’s event!

Learn more