Building metrics is key to measuring science and technology and to translating those measures into better practice. A metric is not confined to drawing numbers and analyzing them; selecting a metric depends on synthesizing multiple perspectives and evidence-based data. Producing benchmarks and standards for science and technology evaluation, and implementing them with real data, mark a full-fledged system. Recently, new metrics-based models have emerged that aim to reflect the growth of science and technology.


The proposed Science and Technology Metrics (STMet 2019) conference is concerned with how multiple metrics may be used to assess different aspects of science and technology, each best expressed through different approaches. Besides the standard conference track and the Doctoral Symposium, STMet will host a number of workshops and tutorials related to the theme of the conference. The purpose of the workshops is to provide participants with a friendly, interactive atmosphere for presenting novel ideas and discussing their applications of metrics in the science and technology system. The goal of the tutorials is to enable participants to familiarise themselves with both theoretical and practical aspects and applications of metrics.


The proposed conference consists of invited talks, presentations, tutorials, workshops and discussions. STMet has tracks on specialized topics and addresses the themes below, among others.

- Citation-based metrics
- Ranking Systems
- Ranking of Journals, Institutions, and Countries
- Discipline and Domain Analysis
- Economic factors
- Databases and datasets for evaluation
- Evaluation Tools and Indicators
- Web-based metrics
- Visibility and impact
- Text-based Metrics
- Innovation indicators
- Wearable Devices
- Open data
- Scientific Collaboration and cooperation analysis
- National Evaluation Systems
- Open access and open publishing
- New indices for evaluation
- Data Science and Digital Repositories
- Scientific Visualization
- e-Science in the Cloud
- Scientific Journalism
- Scientific Publications

Track 1: Open Science
Science relies on the sharing of ideas, research results, data and methods. Quite often, scientific
discoveries and innovations fail to reach society because of closed scientific publications. If
science is built on existing knowledge, it needs to reach everyone, and hence Open Science leads
the knowledge transaction. This track addresses the relevant challenges and issues and paves the
way to develop and nurture open science.

Track 2: Peer Review
Peer review is the process by which experts critically examine research contributions using
standard metrics. Peer review helps maintain and enhance quality directly, by detecting
weaknesses and errors in specific works. Reviews inform decisions on publication, research
grants, employment, promotion, and tenure. Peer review promotes accountability and improves
the quality of work.

Track 3: Content and Text mining based metrics
This track presents works that use numeric indices to process unstructured (textual)
information and to build various data mining (statistical and machine learning) algorithms.
Information can be extracted to derive summaries of the words contained in documents, or to
compute summaries of documents based on the words they contain. One can analyze words or
clusters of words used in documents, or analyze documents to determine similarities between
them or how they relate to other variables. Studies in text mining "turn text into numbers"
(meaningful indices), which can then be incorporated into other analyses.
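As a minimal illustration of what "turning text into numbers" can mean in practice, the sketch below (our own example, not a method prescribed by the track) represents documents as word-count vectors and compares them with cosine similarity:

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Turn a document into a word-frequency vector (bag of words)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

doc1 = vectorize("open science relies on sharing data and methods")
doc2 = vectorize("sharing research data supports open science")
doc3 = vectorize("wearable devices measure heart rate")

print(cosine_similarity(doc1, doc2))  # high: the documents share vocabulary
print(cosine_similarity(doc1, doc3))  # zero: no words in common
```

More elaborate variants of the same idea (TF-IDF weighting, clustering, topic models) follow the same pattern: documents become numeric vectors that can feed into statistical and machine-learning analyses.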

Track 4: Technology Transfer Metrics

Technology transfer enables research outcomes to reach a wider audience. Measures of technology transfer are of interest to practitioners, program managers, and policymakers in the evaluation of technology transfer programs. Building metrics can lead to better measurement of effectiveness, efficiency, and return on investment, and the introduction of newer metrics will help identify benchmarking tools. This track will discuss currently existing metrics, research on the development of new metrics, and their applications.

Chair: Mohammad Hassanzadeh, Tarbiat Modares University, Iran

Track 5: Open Science Metrics

The primary goal of this track is to enable the construction, identification, and specification of relevant metrics and indicators for open science. Measuring the extent of openness in science, and creating or using metrics for it, is important. Measuring open science helps to realize innovation: we can find how collaboration and knowledge sharing occur and how institutions work on them. Openness also yields enormous amounts of data and knowledge. This track gauges openness in science using available data.

Track 6: Role of Unique Identifiers in Scientific Research Metrics

Traditional discovery methods are insufficient in an environment overwhelmed by a high volume of researchers, institutions, research publications and data. Unique identifiers play a major role in distinguishing people, works and institutions and in optimizing their impact and discoverability. Especially in the e-world, they help reduce errors, omissions, and duplication; provide comprehensive information; and ensure that the digital footprints of interactions and networking across different platforms, databases and social networks are captured and preserved. Several unique identification systems have been introduced recently, such as DOI, ORCID, ROR, ISNI, DataCite, etc. Through this track, we propose to discuss analyses of data or case studies of different unique identifiers and their impact on research metrics and evaluation.

Co-Chairs: J.K. Vijayakumar, King Abdullah University of Science and Technology, Saudi Arabia


All submitted papers will undergo a double-blind, participative peer review with at least three
reviewers. The review process will determine which papers are accepted for presentation at the
international conference. Authors of accepted papers who register for the conference will have
access to the evaluations and feedback provided by the reviewers who recommended acceptance,
so that they can improve the final versions of their papers accordingly.

Mentoring support is available to young researchers and to authors from developing countries.

Modified post-conference versions of the papers will be published in the following journals:

1. Journal of Digital Information Management
2. Journal of Contemporary Eastern Asia
3. Special Section in Research Evaluation
4. Journal of Scientometric Research
5. Malaysian Journal of Library & Information Science
6. International Journal of Computational Linguistics Research
7. Webology

Important Dates

Full Paper Submission October 01, 2019
Notification of Acceptance/Rejection October 25, 2019
Registration Due November 23, 2019
Camera Ready Due November 23, 2019
Workshops/Tutorials/Demos December 03, 2019
Main conference December 02-04, 2019