Friday, August 15, 2014

New LOD cloud draft includes 558 semantic web datasets


Chris Bizer announced a new draft version of the LOD cloud with 558 linked datasets connected by 2883 linksets. The last call for new datasets (submit them via DataHub) for this version is 2014-08-20.

Sunday, July 27, 2014

2013 Journal Metrics data computed from Elsevier's Scopus data


Eugene Garfield first published the idea of analyzing citation patterns in scientific publications in his 1955 Science paper, Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas. He subsequently popularized the impact factor metric for journals and many other bibliometric concepts, and founded the Institute for Scientific Information (ISI) to provide products and services around them.

In the last decade, digital libraries, online publishing, text mining and big data analytics have combined to produce new bibliometric datasets and metrics. Google's Scholar Metrics, for example, uses measures derived from the popular h-index concept. Microsoft's Academic Search uses a PageRank-like algorithm that weights each citation by the metric of its source. Thomson Reuters, which acquired Garfield's ISI in 1992, still relies largely on the traditional impact factor in its Citation Index. These new datasets and metrics have also stimulated a lively debate about the value of such analyses and the dangers of relying too heavily on them.

Elsevier's Journal Metrics site publishes journal citation metrics computed from their Scopus bibliographic database, which covers nearly 21,000 titles from over 5,000 publishers in the scientific, technical, medical, and social science fields. Last week the site added data for 2013, using three measures of a journal's impact based on an analysis of its papers' citations.
  • Source Normalized Impact per Paper (SNIP), a measure of contextual citation impact that weights citations based on the total number of citations in a subject field.
  • Impact Per Publication (IPP), an estimate of the average number of citations a paper will receive in three years (see the sketch after this list).
  • SCImago Journal Rank (SJR), a PageRank-like measure that takes into account the "prestige" of the citing sources.
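To make the IPP definition concrete, here is a minimal sketch (our own illustration with made-up counts; Elsevier's actual methodology differs in details such as document-type filtering) of a three-year citation-window calculation:

    # IPP-style calculation: citations received this year by papers
    # published in the three preceding years, divided by the number of
    # those papers. All counts below are hypothetical.
    def impact_per_publication(citations_in_year, papers_per_year, year):
        window = [year - 3, year - 2, year - 1]
        cites = sum(citations_in_year[(year, y)] for y in window)
        papers = sum(papers_per_year[y] for y in window)
        return cites / papers

    papers_per_year = {2010: 40, 2011: 45, 2012: 50}
    citations_in_year = {(2013, 2010): 120, (2013, 2011): 150, (2013, 2012): 130}
    print(impact_per_publication(citations_in_year, papers_per_year, 2013))  # ~2.96
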
We were happy to see that the metrics for the Journal of Web Semantics remain strong, with 2013 values for SNIP, IPP and SJR of 4.51, 3.14 and 2.13, respectively.  Our analysis, described below, shows that these metrics put the journal in the top 5-10% of a set of 130 journals in our "space".

To put these numbers in context, we wanted to compare them with those of other journals that regularly publish similar papers. The Journal Metrics site has a very limited search function, but you can download all of the data as a CSV file. We downloaded the data and used grep to select just the journals in the Computer Science category whose names contain any of the strings web, semantic, knowledge, data, intellig, agent or ontolo. The data for the resulting 130 journals for the last three years is available as a Google spreadsheet.
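
If you would rather repeat the exercise in Python than with grep, a rough equivalent of our filtering step is sketched below (the column names 'Source Title' and 'Subject Area' are assumptions; check the header of the actual CSV):

    # Filter the Journal Metrics CSV down to Computer Science journals
    # whose titles match any of the strings we searched for.
    import csv
    import re

    PATTERN = re.compile(r'web|semantic|knowledge|data|intellig|agent|ontolo',
                         re.IGNORECASE)

    with open('journal_metrics.csv', newline='', encoding='utf-8') as f:
        rows = [row for row in csv.DictReader(f)
                if 'Computer Science' in row['Subject Area']
                and PATTERN.search(row['Source Title'])]

    print(len(rows), 'matching journals')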

All of these metrics have shortcomings and should be taken with a grain of salt. Some, like Elsevier's, are based on data from a curated set of publications with several years (e.g., three or even five) of data available, so new journals are not included. Others, like Google's basic citation counts, weigh a citation from a paper in Science the same as one from an undergraduate research paper found on the Web. Journals that publish a handful of very high quality papers each year fare better on some measures but are dominated on others by publications that publish a large number of articles, from top quality to mediocre. Nonetheless, taken together, the different metrics offer insights into the significance and utility of a journal's published articles based on citations from the research community.

Sunday, July 13, 2014

Preprint: Tailored Semantic Annotation for Semantic Search


Rafael Berlanga, Victoria Nebot and Maria Pérez, Tailored Semantic Annotation for Semantic Search, Web Semantics: Science, Services and Agents on the World Wide Web, to appear, 2014.

Abstract: This paper presents a novel method for semantic annotation and search of a target corpus using several knowledge resources (KRs). This method relies on a formal statistical framework in which KR concepts and corpus documents are homogeneously represented using statistical language models. Under this framework, we can perform all the necessary operations for an efficient and effective semantic annotation of the corpus. Firstly, we propose a coarse tailoring of the KRs w.r.t. the target corpus with the main goal of reducing the ambiguity of the annotations and their computational overhead. Then, we propose the generation of concept profiles, which allow measuring the semantic overlap of the KRs as well as performing a finer tailoring of them. Finally, we propose how to semantically represent documents and queries in terms of the KRs concepts and the statistical framework to perform semantic search. Experiments have been carried out with a corpus about web resources which includes several Life Sciences catalogues and Wikipedia pages related to web resources in general (e.g., databases, tools, services, etc.). Results demonstrate that the proposed method is more effective and efficient than state-of-the-art methods relying on either context-free annotation or keyword-based search.
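
As a toy illustration of the language-model representation the abstract mentions (our own sketch, not the authors' implementation), a concept profile and a document can both be reduced to unigram term distributions and compared for overlap:

    # Represent a concept profile and a document as unigram language
    # models (term -> probability) and measure overlap with cosine
    # similarity. The texts are made up for the example.
    from collections import Counter
    import math

    def language_model(text):
        words = text.lower().split()
        return {w: c / len(words) for w, c in Counter(words).items()}

    def cosine(p, q):
        dot = sum(p[w] * q.get(w, 0.0) for w in p)
        norm = (math.sqrt(sum(v * v for v in p.values()))
                * math.sqrt(sum(v * v for v in q.values())))
        return dot / norm if norm else 0.0

    concept = language_model("protein sequence database search tool")
    document = language_model("a web tool for searching a protein sequence database")
    print(round(cosine(concept, document), 3))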


Wednesday, July 2, 2014

Preprint: Konclude: System Description


Andreas Steigmiller, Thorsten Liebig and Birte Glimm, Konclude: System Description, Web Semantics: Science, Services and Agents on the World Wide Web, to appear, 2014.

This paper introduces Konclude, a high-performance reasoner for the Description Logic SROIQV. The supported ontology language is a superset of the logic underlying OWL 2 extended by nominal schemas, which allows for expressing arbitrary DL-safe rules. Konclude's reasoning core is primarily based on the well-known tableau calculus for expressive Description Logics. In addition, Konclude also incorporates adaptations of more specialised procedures, such as consequence-based reasoning, in order to support the tableau algorithm. Konclude is designed for performance and uses well-known optimisations such as absorption or caching, but also implements several new optimisation techniques. The system can furthermore take advantage of multiple CPUs at several levels of its processing architecture. This paper describes Konclude's interface options, reasoner architecture, processing workflow, and key optimisations. Furthermore, we provide results of a comparison with other widely used OWL 2 reasoning systems, which show that Konclude performs eminently well on ontologies from any language fragment of OWL 2.
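
For readers who have not seen a tableau procedure before, here is a didactic sketch (ours, not Konclude's) that decides satisfiability for a tiny, TBox-free fragment of ALC with concepts in negation normal form; Konclude's calculus for SROIQV is, of course, vastly more elaborate:

    # Tableau-style satisfiability check for a toy ALC fragment.
    # Concepts are tuples: ('atom', name), ('not', name),
    # ('and', c1, c2), ('or', c1, c2), ('some', role, c), ('all', role, c).
    def sat(label):
        label = set(label)
        # Deterministic expansion: add both conjuncts of every 'and'.
        changed = True
        while changed:
            changed = False
            for c in list(label):
                if c[0] == 'and' and not ({c[1], c[2]} <= label):
                    label |= {c[1], c[2]}
                    changed = True
        # Clash: some atom occurs both positively and negated.
        if ({c[1] for c in label if c[0] == 'atom'}
                & {c[1] for c in label if c[0] == 'not'}):
            return False
        # Nondeterministic expansion: try both branches of an 'or'.
        for c in label:
            if c[0] == 'or' and not ({c[1], c[2]} & label):
                return sat(label | {c[1]}) or sat(label | {c[2]})
        # One successor per 'some', receiving the fillers of all
        # matching universal restrictions.
        for c in label:
            if c[0] == 'some':
                succ = {c[2]} | {d[2] for d in label
                                 if d[0] == 'all' and d[1] == c[1]}
                if not sat(succ):
                    return False
        return True

    # (some r A) and (all r (not A)) is unsatisfiable:
    print(sat({('some', 'r', ('atom', 'A')), ('all', 'r', ('not', 'A'))}))  # False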

Tuesday, July 1, 2014

Preprint: Everything you always wanted to know about blank nodes (but were afraid to ask)


Aidan Hogan, Marcelo Arenas, Alejandro Mallea and Axel Polleres, Everything You Always Wanted to Know About Blank Nodes, Web Semantics: Science, Services and Agents on the World Wide Web, to appear, 2014.

In this paper we thoroughly cover the issue of blank nodes, which have been defined in RDF as 'existential variables'. We first introduce the theoretical precedent for existential blank nodes from first order logic and incomplete information in database theory. We then cover the different (and sometimes incompatible) treatment of blank nodes across the W3C stack of RDF-related standards. We present an empirical survey of the blank nodes present in a large sample of RDF data published on the Web (the BTC-2012 dataset), where we find that 25.7% of unique RDF terms are blank nodes, that 44.9% of documents and 66.2% of domains featured use of at least one blank node, and that aside from one Linked Data domain whose RDF data contains many "blank node cycles", the vast majority of blank nodes form tree structures that are efficient to compute simple entailment over. With respect to the RDF-merge of the full data, we show that 6.1% of blank nodes are redundant under simple entailment. The vast majority of non-lean cases are isomorphisms resulting from multiple blank nodes with no discriminating information being given within an RDF document or documents being duplicated in multiple Web locations. Although simple entailment is NP-complete and leanness-checking is coNP-complete, in computing this latter result, we demonstrate that in practice, real-world RDF graphs are sufficiently "rich" in ground information for problematic cases to be avoided by non-naive algorithms.
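
As a small illustration of the kind of measurement the survey performs (our own sketch using the rdflib library, not the authors' code), here is one way to compute the share of blank nodes among the unique terms of an RDF graph:

    # Parse a tiny Turtle document and count what fraction of its unique
    # RDF terms are blank nodes.
    from rdflib import Graph, BNode

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:alice ex:knows [ ex:name "Bob" ] .
    """, format="turtle")

    terms = {term for triple in g for term in triple}
    bnodes = {term for term in terms if isinstance(term, BNode)}
    print(f"{len(bnodes)} of {len(terms)} unique terms are blank nodes")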

Sunday, June 29, 2014

JWS ranked highly in the 2014 Google Scholar Metrics data


Google has released its 2014 Google Scholar Metrics data, which estimates the visibility and influence of journals and selected conferences based on citations to articles published in 2009-2013 and indexed in Google Scholar as of mid-June 2014. The primary measure is a publication venue's h5-index, a variation on the popular h-index. Google defines a venue's h5-index as the largest number h such that h articles published in the last five years have at least h citations each. A related measure, the h5-median, is computed for a venue as the median number of citations for the articles that make up its h5-index.
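
Both definitions are easy to operationalize; here is a minimal sketch (with made-up citation counts) of computing the two measures from a venue's per-article citation counts:

    # h5-index: the largest h such that h articles from the last five
    # years have at least h citations each; h5-median: the median
    # citation count of those h articles.
    import statistics

    def h5_index(citations):
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    def h5_median(citations):
        top = sorted(citations, reverse=True)[:h5_index(citations)]
        return statistics.median(top)

    cites = [70, 62, 55, 40, 33, 20, 5]       # hypothetical counts
    print(h5_index(cites), h5_median(cites))  # 6 47.5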

The Journal of Web Semantics' 2014 h5-index was 36 and its h5-median was 62. This puts the JWS among the top venues in the Google-defined category Databases and Information Systems, as well as among the top venues whose names contain one of the words web, semantics, knowledge, intelligence or intelligent.

Here are the 36 articles that make up the JWS's h5-index for 2014.

Friday, June 20, 2014

JWS preprint: Identifying Relevant Concept Attributes to Support Mapping Maintenance under Ontology Evolution


Duy Dinh, Julio Cesar Dos Reis, Cédric Pruski, Marcos Da Silveira and Chantal Reynaud-Delaître, Identifying Relevant Concept Attributes to Support Mapping Maintenance under Ontology Evolution, Web Semantics: Science, Services and Agents on the World Wide Web, to appear, 2014.

Abstract: The success of distributed and semantic-enabled systems relies on the use of up-to-date ontologies and mappings between them. However, the size, quantity and dynamics of existing ontologies demand a huge maintenance effort, pushing towards the development of automatic tools to support this laborious task. This article proposes a novel method, investigating different types of similarity measures, to identify the concept attributes that served to define existing mappings. The experimental results reveal that our proposed method allows one to identify the attributes relevant for supporting mapping maintenance, since we found correlations between ontology changes affecting the identified attributes and mapping changes.
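
As a toy illustration of the general idea (ours, not the authors' method), one could score each attribute of a source concept by its string similarity to the mapped target concept's label and pick the attribute that best explains the mapping:

    # Score each (hypothetical) source-concept attribute by token-level
    # Jaccard similarity against the target concept's label.
    def jaccard(a, b):
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb)

    source_attributes = {
        "preferredLabel": "myocardial infarction",
        "synonym": "heart attack",
        "definition": "necrosis of heart muscle due to lack of blood flow",
    }
    target_label = "heart attack"

    scores = {attr: jaccard(value, target_label)
              for attr, value in source_attributes.items()}
    print(max(scores, key=scores.get), scores)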

Friday, June 6, 2014

CFP: Special issue on machine learning and data mining for the Semantic Web


The Journal of Web Semantics seeks submissions of original research papers for a special issue on machine learning and data mining for the Semantic Web, covering analytical, theoretical, empirical, and practical aspects of these techniques across all areas of the Semantic Web. Submissions are due by December 15, 2014.

In recent years, machine learning and data mining approaches have become the focus of many research efforts related to the Semantic Web and the Web of Data. The challenges posed by the large scale of Web data, the uncertainty arising from contradictory and incomplete information, and the particular properties and characteristics of Linked Data make this an interesting domain for emerging machine learning and data mining approaches.

For this special issue, we invite high quality contributions from all areas of research that address any aspect of the aforementioned challenges. Topics of interest include, but are not limited to, the following.
  • Ontology-based data mining
  • Automatic (and semi-automatic) ontology learning and population
  • Distant-supervision (or weak-supervision) methods based on ontologies and knowledge bases
  • Web mining using semantic information
  • Meta-learning for the Semantic Web
  • Cognitive-inspired approaches and exploratory search in the Semantic Web
  • Discovery science involving linked data and ontologies
  • Data mining and machine learning applied to information extraction in the semantic web
  • Big Data analytics involving linked data
  • Inductive reasoning on uncertain knowledge for the Semantic Web
  • Ontology matching and instance matching using machine learning and data mining
  • Data mining and knowledge discovery in the Web of data
  • Knowledge base maintenance using Machine Learning and Data Mining
  • Crowdsourcing and the Semantic Web
  • Mining the social Semantic Web
We solicit contributions that address these challenges, as well as reports on novel applications with the potential to push Semantic Web and machine learning/data mining cooperation forward.

Submission guidelines

The Journal of Web Semantics solicits original scientific contributions of high quality. Following the overall mission of the journal, we emphasize the publication of papers that combine theories, methods and experiments from different subject areas in order to deliver innovative semantic methods and applications. The publication of large-scale experiments and their analysis is also encouraged to clearly illustrate scenarios and methods that introduce semantics into existing Web interfaces, contents and services.

Submission of your manuscript is welcome provided that it, or any translation of it, has not been copyrighted or published and is not being submitted for publication elsewhere. Upon acceptance of an article, the authors will be asked to transfer copyright of the article to the publisher; this transfer will ensure the widest possible dissemination of information. Manuscripts should be prepared for publication in accordance with the instructions given in the JWS guide for authors. The submission and review process will be carried out using Elsevier's Web-based EES system. Final decisions on accepted papers will be approved by an editor-in-chief.

Final versions of accepted papers will appear in print and on the archival online server. Author preprints of the articles will be made freely accessible on the JWS preprint server.

Important Dates

  • Call for papers: June 2014
  • Submission deadline: 15 December 2014
  • Author notification: 15 April 2015
  • Submission deadline for revisions: 15 June 2015
  • Author notification: 1 August 2015

Special issue editors