Expert IQ Report: Who Is Elena Ferrante?

The identity of the author Elena Ferrante remains, to this day, a mystery.

She is the author of best sellers such as L’amore molesto (1992) and the literary cycle L’amica geniale (2011-2014), which have sold millions of copies worldwide. Almost nothing is known about her: her name is in fact a pseudonym behind which she has concealed her true identity. The investigations continue, and in an attempt to solve the enigma several studies have been published using a range of criteria, both economic and text-analytical. But no name is yet certain.

Without setting out to uncover the identity hidden behind the alias “Elena Ferrante”, Expert System followed two leads.
The first is the less traveled one, and starts from the idea that the books published under the name Elena Ferrante were not all written by the same person.
The second lead is the “more traditional” one: it compares Ferrante’s language and writing style with those of the main suspects put forward over the years: Anita Raja; Domenico Starnone, Raja’s husband; the novelist Marco Santagata; and the essayist Goffredo Fofi. In addition, using a different methodology based on machine learning, a dedicated comparison was made between Ferrante, Raja and Starnone.

Download the report

Lettera 43

– The debate on how to protect the human race from the supremacy of androids has recently, and sensationally, reopened. The three rules that Asimov defined in 1942 – which established, in a precise order of priority, that a robot may not harm human beings, must obey them, and must protect its own existence – were deemed “no longer adequate” by a group of more than 700 researchers and artificial intelligence experts who gathered at a conference in California last January.

Read the article by Andrea Melegari, SEVP of Expert System

In recent years, the relevance of open source intelligence (OSINT) has grown within the field of information management. In fact, open source intelligence provides a large share of the information used by intelligence analysts and corporate security analysts to identify potential hidden risks or to make strategic decisions in time. While there are many advantages to using OSINT, there are also certain limitations and weaknesses that you’ll want to take into consideration when planning your strategy.

Let’s take a look at the advantages and disadvantages of OSINT.

OSINT Advantages

One of the biggest advantages of using OSINT is cost: OSINT is much less expensive than traditional information-collection tools. OSINT offers a potentially greater return on investment, which is particularly relevant for organizations with a tight intelligence budget.

In addition to the cost advantage, OSINT has many advantages when it comes to accessing and sharing information. Information can be legally and easily shared with anyone; open sources are always available and constantly up to date on any topic. Twitter, for example, is open and accessible.

Last but not least, information gathered from public sources is a great resource for national security intelligence and can be used to support the creation of long-term strategies for a variety of business goals.

OSINT Disadvantages and Weaknesses

On the other hand, OSINT is not without its limits. One of the biggest problems with OSINT is potential information overload; filtering insight from the “noise” can be difficult. In fact, without capable OSINT tools, finding and searching for the right information can be a time-consuming activity.
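As a toy illustration of the filtering problem described above, a few lines of Python can rank open-source items against an analyst-defined watchlist and discard the rest. The watchlist, feed and threshold below are invented for the example; real OSINT tooling is of course far more sophisticated.

```python
# A minimal sketch of filtering OSINT "noise": score each open-source item
# by how many terms from an analyst-defined watchlist it contains, and keep
# only items above a threshold. Watchlist and feed are illustrative.

def relevance_score(text: str, watchlist: set[str]) -> int:
    """Count how many watchlist terms appear in the text (case-insensitive)."""
    tokens = set(text.lower().split())
    return len(tokens & watchlist)

def filter_noise(items: list[str], watchlist: set[str], min_score: int = 1) -> list[str]:
    """Keep only items whose relevance score meets the threshold."""
    return [item for item in items if relevance_score(item, watchlist) >= min_score]

if __name__ == "__main__":
    watchlist = {"breach", "malware", "ransomware"}
    feed = [
        "new ransomware strain hits hospitals",
        "cute cat pictures trending today",
        "data breach reported at major retailer",
    ]
    print(filter_noise(feed, watchlist))
```

In practice the scoring function would be replaced by something far richer (entity recognition, source reliability weighting), but the shape of the pipeline, score then threshold, stays the same.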

OSINT is also not ready to use; it requires a large amount of analytical work from humans in order to distinguish valid, verified information from false, misleading or simply inaccurate news and information. OSINT must be validated.

Getting the most out of OSINT requires detailed analysis and an understanding of the requirements for using it. Moreover, the choice of using OSINT should not come down to cost versus traditional intelligence, because OSINT does not exclude the traditional intelligence gathered from classified sources. Combining OSINT and traditional intelligence sources is a powerful approach for business intelligence.

Expert System will take part in the annual ASIS European Security Conference & Exhibition, to be held from March 29 to 31 at Fiera Milano Congressi in Milan.

The event, organized by ASIS, the leading organization for security professionals worldwide, brings together Security-sector professionals and European technology vendors for debates, training sessions, keynotes and exhibitions addressing current topics in the Intelligence and Security fields.

During the Technology & Solutions Track, Maurizio Mencarini, EMEA Head of Sales – Intelligence Division, and Alessandro Monico, Italy Sales Director – Corporate Division at Expert System, will lead the session “Using Cognitive Computing to Gain Real-Time, Actionable Insights from Your Data”.

Visit Expert System’s stand #A9 for a live demo of the Cogito cognitive technology, or contact us to schedule a meeting during the event.

Visit the event website to learn more

The growth in big data for healthcare provides a real opportunity for improving the quality of diagnosis and treatment. However, unlocking the value of this information requires innovative solutions. Cognitive technology in healthcare is one such solution. By combining individual medical information with larger-scale statistics and scientific data, these applications allow doctors to identify targeted treatment by immediately accessing all of the available information about similar cases.

For doctors and medical professionals, research studies, patient medical records, demographic studies and medical literature are great resources for treatment and diagnosis—as long as they are easily and immediately accessible. Manual methods simply don’t cut it when it comes to the scale of information available, and human interpretation of this information can miss or overlook correlations in a patient’s symptoms or medical history.

Personalized Medicine: how cognitive technology in healthcare can improve it

Personalized Medicine is an approach based on individual medical information. According to Wikipedia, it is “a medical procedure that separates patients into different groups—with medical decisions, practices, interventions and/or products being tailored to the individual patient based on their predicted response or risk of disease”. New kinds of data and new information sources (for example wearable devices, digital medical records, doctor-patient chats, discussions on medical association websites, etc.) generate new and interesting information to empower Personalized Medicine.

The approach of cognitive technology in healthcare analyzes and understands all of the structured and unstructured information that is related to the patient’s condition. These applications understand words and sentences in the way that humans do. They allow systems to capture all of the useful information in a medical record or from health monitoring devices. This approach brings together a variety of information, from the number of daily steps tracked by a mobile app to lab results archived in the hospital, dietary habits and data recording previous surgeries or procedures described in the digital medical record. And it allows doctors to see the connections between diverse and complex information types.
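To make the "bringing together" concrete, here is a drastically simplified sketch, not Cogito's actual pipeline, of merging structured patient data with terms mined from a free-text clinical note. The toy vocabulary, field names and patient data are all invented for illustration.

```python
# A hypothetical sketch of combining structured and unstructured patient
# data: structured fields are kept as-is, while known medical terms are
# extracted from a free-text clinical note and attached to the record.

KNOWN_TERMS = {"diabetes", "hypertension", "appendectomy", "statin"}  # toy vocabulary

def extract_terms(note: str) -> set[str]:
    """Naively extract known medical terms from free text."""
    words = {w.strip(".,;").lower() for w in note.split()}
    return words & KNOWN_TERMS

def unified_record(structured: dict, note: str) -> dict:
    """Combine structured data with terms mined from the unstructured note."""
    record = dict(structured)
    record["note_findings"] = sorted(extract_terms(note))
    return record

if __name__ == "__main__":
    labs = {"patient_id": "P001", "hba1c": 7.9, "daily_steps": 4200}
    note = "History of hypertension; prior appendectomy. Currently on a statin."
    print(unified_record(labs, note))
```

A real system would replace the dictionary lookup with semantic analysis that disambiguates terms in context, but the output shape, one unified record per patient, is the point of the example.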

Access to this information through cognitive technologies helps improve the clinical decision process and allows doctors to compare similar cases and suggest the best treatment, drug therapy, proper dosage, etc. for each patient.

Cognitive technology in healthcare offers a range of benefits that support a Personalized Medicine approach:
– Improves communication between doctor and patient, especially through digital channels
– Efficiently manages content by extracting important medical knowledge from different sources (clinical trial reports, medical websites, etc.)
– Integrates structured data with unstructured information about patients, treatments, drugs, etc.
– Provides access to relevant information about new trends, clinical risks and innovations in healthcare
– Last but not least, combines scientific data with personalized medical information to make it accessible and available for medical consultation

Download our white paper “Empowering Personalized Medicine with Semantic Technology” to learn more

Improving our ability to find the right information at the right time is important for every business and every user. The constant growth in unstructured information makes text mining applications increasingly important in achieving this goal. In looking ahead to see how text mining will solve some of our biggest challenges in extracting value from large and noisy volumes of unstructured data, I thought I’d share some of the text mining research papers that I’ve recently come across. From clustering to entity-related problems and more, these text mining research papers focus on specific techniques and may prove helpful for others facing similar issues.

Starting from categorization and classification, “Support-vector networks,” an older but still relevant text mining research paper by Corinna Cortes and Vladimir Vapnik, is worth mentioning. Another paper on the same topic is “Text categorization with support vector machines: Learning with many relevant features” by Thorsten Joachims.

Moving on to clustering, the text mining research paper “A comparison of document clustering techniques” by Michael Steinbach, George Karypis and Vipin Kumar from the Department of Computer Science at the University of Minnesota provides the foundation for understanding how clustering algorithms work.
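To give a feel for what document clustering does, here is a toy illustration (not an algorithm from the Steinbach et al. paper): documents become sets of words, and a greedy single pass groups each document with the first cluster whose seed is similar enough under Jaccard similarity. The documents and threshold are invented for the example.

```python
# A toy illustration of one basic idea behind document clustering:
# represent documents as word sets and group those whose Jaccard
# similarity to a cluster's seed document exceeds a threshold.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(docs: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Greedy single-pass clustering: assign each doc to the first cluster
    whose seed document is similar enough, else start a new cluster."""
    clusters: list[list[str]] = []
    seeds: list[set] = []
    for doc in docs:
        words = set(doc.lower().split())
        for i, seed in enumerate(seeds):
            if jaccard(words, seed) >= threshold:
                clusters[i].append(doc)
                break
        else:
            clusters.append([doc])
            seeds.append(words)
    return clusters

if __name__ == "__main__":
    docs = [
        "stock markets rally on earnings",
        "markets rally as earnings beat estimates",
        "new vaccine trial shows promise",
    ]
    print(cluster(docs))
```

The techniques compared in the paper (k-means variants, agglomerative hierarchical clustering) are considerably more robust, but they share this core: a vector or set representation plus a similarity measure.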

Information scientist Don R. Swanson is known as one of the most respected scholars in text mining. His text mining papers are relevant for understanding the big-picture evolution of the field. Here is a collection of some of his work: (Don R Swanson – Google Scholar).

Among the text mining research papers focusing on the problem of entity linking – in other words, linking the entities in a document to, for example, the corresponding Wikipedia pages – I found a valuable resource in Heng Ji, an Associate Professor of Computer Science at Rensselaer Polytechnic Institute. You can take a look at several papers available in the publications section of her website by searching for entity linking.

It is also worth mentioning the text mining research paper “Local and Global Algorithms for Disambiguation to Wikipedia”. This paper, authored by Lev Ratinov, Dan Roth, Doug Downey and Mike Anderson from the University of Illinois at Urbana-Champaign, focuses on entity linking as an optimization problem.
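At its simplest, the "local" side of disambiguation can be sketched in a few lines: pick the candidate entity whose description shares the most words with the mention's context. The tiny two-entry knowledge base below is invented for illustration and is nothing like the scale these papers work at.

```python
# A minimal sketch of local disambiguation for entity linking: choose the
# candidate whose description best overlaps the mention's context words.

KB = {  # invented candidate entities for the ambiguous mention "jaguar"
    "Jaguar (animal)": "large wild cat native to the americas rainforest predator",
    "Jaguar (car)": "british luxury car manufacturer vehicles automotive brand",
}

def link_entity(context: str, candidates: dict) -> str:
    """Return the candidate whose description best overlaps the context."""
    ctx = set(context.lower().split())
    def overlap(entity: str) -> int:
        return len(ctx & set(candidates[entity].split()))
    return max(candidates, key=overlap)

if __name__ == "__main__":
    sentence = "the jaguar is an elusive predator of the rainforest"
    print(link_entity(sentence, KB))  # prints Jaguar (animal)
```

The "global" algorithms in the Ratinov et al. paper go further, jointly optimizing all the links in a document so that the chosen entities are coherent with one another.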

Finally, the last text mining research paper that I will include on entity linking is “A Neighborhood Relevance Model for Entity Linking” by Jeffrey Dalton and Laura Dietz from the University of Massachusetts. This paper focuses on the extremely important aspect of disambiguating context in information.

The last text mining technique that I would reference is pure extraction, a topic that offers many interesting research papers. For specific reasons related to the development of entity recognition in different languages, I would recommend the following paper about name recognition for Arabic, which is a real challenge: “A Rule Based Persons Names Arabic Extraction System,” authored by Ali Elsebai and Farid Meziane from the University of Salford and Fatma Zohra Belkredim from the Université Hassiba Benbouali in Chlef, Algeria.
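To show the flavor of a rule-based extractor, here is a simplified English analogue (not the Elsebai et al. system, which targets Arabic with much richer linguistic rules): a title trigger word signals that the following capitalized tokens form a person's name. The triggers and sample sentence are invented.

```python
import re

# A simplified rule-based person-name extractor: a title trigger word
# (Mr., Dr., ...) signals that the capitalized tokens after it are a name.

TRIGGERS = r"(?:Mr\.|Mrs\.|Dr\.|Prof\.)"
NAME_RULE = re.compile(TRIGGERS + r"\s+((?:[A-Z][a-z]+\s?)+)")

def extract_names(text: str) -> list[str]:
    """Return person names found after a title trigger word."""
    return [m.strip() for m in NAME_RULE.findall(text)]

if __name__ == "__main__":
    text = "Yesterday Dr. Ali Hassan met Prof. Farid Meziane in Salford."
    print(extract_names(text))
```

Rule-based systems like this trade recall for precision and transparency: every extraction can be traced back to a specific, human-readable rule, which is part of why they remain attractive for languages where annotated training data is scarce.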

In addition to the above text mining research papers, I would like to suggest a few books that I covered in a previous post:


Atomic Reach

– Generating content without knowing who your audience really is creates content waste and costs your business. To implement a true strategy where you know who is really engaged with your brand, personas expert Ardath Albee shares her insights on what she’s seen from Fortune 1000 companies. Get industry tips, best practices and mistakes to avoid when building your personas, in this episode of the Content Marketing for the Future podcast.

Listen to the podcast with Luca Scagliarini, CMO of Expert System

Cyber Defense Solutions Tips and Curiosities – Today’s cyber threats are becoming increasingly difficult to defend against, even for the strongest cyber defense solutions available. Digital innovation, connected products and devices (known as the Internet of Things), financial crises in an ever-changing regulatory environment, and digital attacks and other cyber crimes are just a few of the forces impacting organizations in every industry, from the public sector, which requires national cyber defense solutions, to private companies in finance, manufacturing, energy, utilities and beyond.

Not surprisingly, cyber security, which includes cyber defense solutions, is one of the fastest growing markets in the world. The Cybersecurity Market Report estimates that it will hit $170 billion by 2020, up from $75 billion in 2015 and $3.5 billion in 2004. As a result, worldwide spending on cybersecurity and cyber defense solutions is predicted to reach a cumulative $1 trillion over the five years from 2017 to 2021. In response, significant progress has been made to strengthen organizations’ internal cybersecurity capabilities (Path to cyber resilience: Sense, resist, react, EY’s 19th Global Information Security Survey 2016-2017).

How can we improve cyber defense by leveraging data in a timely manner?

As organizations work in an increasingly open and interconnected world, and threats constantly reshape the cyber defense landscape, a more effective data management and intelligence process helps analysts and businesses strike the most appropriate balance between risks, opportunities and benefits.

In fact, data is one of an organization’s most strategic assets and in this scenario, data analytics can effectively support cyber defense strategies by turning data into useful information (intelligence) needed to support more effective decision-making processes.

The problem is that the management of unstructured data remains one of the main issues in knowledge management. When it comes to unstructured texts expressed in natural language – news, articles, web pages, blog posts and comments, social media information streams, research, reports – common technologies and tools have fallen short in transforming this material into actionable intelligence.
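One small, concrete step from unstructured text toward intelligence can be sketched in a few lines: pulling machine-readable indicators out of a free-text threat report. The report string below is invented, and real cyber intelligence platforms go far beyond regular expressions, but the principle, structure out of free text, is the same.

```python
import re

# A hedged sketch of turning unstructured threat reports into structured
# indicators: regular expressions pull out IPv4 addresses and CVE IDs.

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
CVE = re.compile(r"\bCVE-\d{4}-\d{4,7}\b")

def extract_indicators(report: str) -> dict:
    """Return IP addresses and CVE identifiers mentioned in a report."""
    return {
        "ips": IPV4.findall(report),
        "cves": CVE.findall(report),
    }

if __name__ == "__main__":
    report = ("Attackers exploited CVE-2017-0144 and beaconed to 203.0.113.7 "
              "before pivoting to 198.51.100.23.")
    print(extract_indicators(report))
```

Pattern matching of this kind handles only the easiest layer; the semantic analysis the article goes on to describe is what is needed to understand who did what to whom in the surrounding prose.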

More intelligent capabilities are necessary, as well as a new approach that is successful in transforming unstructured information into intelligence.

Bringing text analytics and cognitive computing to cyber defense

A basic requirement for unstructured data is that it be understandable and findable. Advanced text analytics capabilities make this possible, and play a key role in responding to today’s new cyber threats by making sense of unstructured information.

Organizations are increasingly leveraging cognitive computing based on semantic technology to analyze and mitigate cyber risks. Thanks to the ability to find and monitor reliable information sources, cognitive cyber security and cyber intelligence software comprehends and correlates data in the most accurate and complete way. This allows users to leverage any and all available strategic data for actionable intelligence.

To learn more about cognitive intelligence, visit Expert System’s Cyber Security Software Solutions website section.

Lettera 43

– Two researchers, in 10 months and with limited funding, built a piece of software capable of defeating four Texas hold’em champions. A feat we have a duty not to underestimate. And to build on.

A few days ago, the “Brains vs. Artificial Intelligence” Texas hold’em tournament concluded at the Rivers Casino in Pittsburgh. Libratus, a piece of software, clearly prevailed over the best professional players. The event, however, did not make much of a splash. Perhaps it seemed a “non-story”, coming after cases such as Google’s AlphaGo, which recently beat the world Go champion, repeating what IBM Deep Blue did in 1997 and IBM Watson in 2011, when they respectively defeated a world chess number one (Garry Kasparov) and triumphed at the TV quiz show Jeopardy.

Read the article by Andrea Melegari, SEVP, Defense, Intelligence & Security, Expert System

Cogito 14 accelerates business transformation and simplifies the adoption of artificial intelligence

– Expert System continues its growth strategy centered on innovation and product differentiation. The new release of the Cogito technology further simplifies the development of advanced solutions for robotic process automation and information intelligence.

Thanks to the synergies among the “Cogito Labs” and to the resources recently acquired (in particular through TEMIS, a leading French text analytics company), the new release of Cogito adds new languages and innovative features, including:

Cogito Studio

A complete development environment that lets users customize the core Cogito technology directly, making project implementation easier and faster. Cogito Studio now includes Cogito Studio Express, a web application that makes it easier for users both to design and maintain domain-specific taxonomies/ontologies and to keep full control over text analytics activities.

Cogito Knowledge Graph

The Cogito Knowledge Graph is a rich knowledge base containing millions of concepts together with their lexical forms, properties and the relationships needed to understand and disambiguate the meaning of the words and sentences in a text. Enriched with new domain ontologies (finance, bio pharma, etc.), the Knowledge Graph now also exploits specific machine learning techniques that enrich its knowledge automatically from texts, either without supervision or under the supervision of subject matter experts. This makes it possible to develop, in very short time frames, the customizations required to meet the most advanced needs in process automation and information intelligence.
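To illustrate how a knowledge base of concepts, lexical forms and relationships can disambiguate a word, here is a drastically simplified, hypothetical sketch; it is not Cogito's actual Knowledge Graph, and the two-sense graph below is invented for the example.

```python
# A toy "knowledge graph": each concept lists the lexical forms that can
# express it and the related concepts that provide disambiguating context.

GRAPH = {
    "bank/finance": {"lemmas": {"bank"}, "related": {"loan", "account", "interest"}},
    "bank/river":   {"lemmas": {"bank"}, "related": {"river", "water", "shore"}},
}

def disambiguate(word: str, sentence: str) -> str:
    """Pick the concept for `word` best supported by the sentence's words."""
    words = set(sentence.lower().split())
    senses = [c for c, node in GRAPH.items() if word in node["lemmas"]]
    return max(senses, key=lambda c: len(GRAPH[c]["related"] & words))

if __name__ == "__main__":
    print(disambiguate("bank", "she opened an account at the bank for a loan"))
```

A production knowledge graph encodes far richer relationship types and millions of concepts, but the mechanism sketched here, letting related concepts in the context vote for a sense, conveys the basic idea.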

Cogito API

Through an extension of the APIs, developing and integrating Cogito products into other platforms or pre-existing architectures is now simpler, with clear advantages in terms of implementation time.

In short, the goal of the new Cogito release is to make artificial intelligence ever more accessible and affordable, offering organizations a higher ROI by combining maximum accuracy in information analysis with customization and implementation activities that are now heavily streamlined.

“Artificial intelligence is being talked about more and more, yet often in vague terms or in contexts far removed from the enterprise,” said Marco Varone, President and CTO of Expert System. “This is one more reason we are pleased with the new release of Cogito: for the concreteness it offers, saving time and resources to improve strategic processes and increasing the efficiency of activities based on the use of textual information.”