Open Access
Bioinformatics, volume 16, issue 9, pages 776-785

MaxSub: an automated measure for the assessment of protein structure prediction quality

Publication type: Journal Article
Publication date: 2000-09-01
Journal: Bioinformatics
Quartile SCImago: Q1
Quartile WOS: Q1
Impact factor: 5.8
ISSN: 1367-4803, 1367-4811, 1460-2059
Biochemistry
Computer Science Applications
Molecular Biology
Statistics and Probability
Computational Mathematics
Computational Theory and Mathematics
Abstract
Evaluating the accuracy of predicted models is critical for assessing structure prediction methods. Because this problem is not trivial, a large number of different assessment measures have been proposed by various authors, and it has already become an active subfield of research. The CASP (Moult et al., 1997, 1999) and CAFASP (Fischer et al., 1999) prediction experiments have demonstrated that it has been difficult to choose one single, 'best' method to be used in the evaluation. Consequently, the CASP3 evaluation was carried out using an extensive set of especially developed numerical measures, coupled with human-expert intervention. As part of our efforts towards a higher level of automation in the structure prediction field, here we investigate the suitability of a fully automated, simple, objective, quantitative and reproducible method that can be used in the automatic assessment of models in the upcoming CAFASP2 experiment. Such a method should (a) produce one single number that measures the quality of a predicted model and (b) perform similarly to human-expert evaluations.

MaxSub is a new and independently developed method that further builds on and extends some of the evaluation methods introduced at CASP3. MaxSub aims at identifying the largest subset of C(alpha) atoms of a model that superimpose 'well' over the experimental structure, and produces a single normalized score that represents the quality of the model. Because there exists no evaluation method for assessment measures of predicted models, it is not easy to evaluate how good our new measure is. Even though an exact comparison of MaxSub and the CASP3 assessment is not straightforward, here we use a test-bed extracted from the CASP3 fold-recognition models. A rough qualitative comparison of the performance of MaxSub vis-à-vis the human-expert assessment carried out at CASP3 shows that there is good agreement for the more accurate models and for the better predicting groups. As expected, some differences were observed among the medium to poor models and groups. Overall, the top six predicting groups ranked using the fully automated MaxSub are also the top six groups ranked at CASP3. We conclude that MaxSub is a suitable method for the automatic evaluation of models.
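
The kind of computation the abstract describes can be illustrated with a small sketch: superimpose the model on the experimental structure, grow the largest subset of C(alpha) atoms that fits within a distance threshold, and normalize the resulting score by the model length. The Python code below is only an illustrative approximation of that idea, not the authors' published implementation; the helper names (kabsch_fit, maxsub_like_score), the seed length, the number of refinement iterations, and the simplified seed-and-extend loop are assumptions made for this sketch. Only the general shape of the score (a distance-weighted sum over the fitted subset, divided by the full model length) follows the description in the abstract.

# Minimal, illustrative MaxSub-style scorer (not the published implementation).
# Assumes `model` and `native` are (N, 3) NumPy arrays of already-aligned
# C-alpha coordinates (residue i of the model corresponds to residue i of
# the experimental structure).
import numpy as np


def kabsch_fit(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q."""
    Pm, Qm = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pm).T @ (Q - Qm)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Qm - R @ Pm


def maxsub_like_score(model, native, d0=3.5, seed_len=7, n_iter=4):
    """Return a normalized score in [0, 1] and the subset of residues used.

    For every short seed segment: superimpose the model on the native
    structure, keep residues whose C-alpha atoms land within d0 Angstroms,
    and iterate. The largest subset found defines the final superposition
    and a distance-weighted, length-normalized score.
    """
    n = len(model)
    best = np.zeros(n, dtype=bool)
    for start in range(max(1, n - seed_len + 1)):
        subset = np.zeros(n, dtype=bool)
        subset[start:start + seed_len] = True
        for _ in range(n_iter):
            if subset.sum() < 3:                # need at least 3 points to fit
                break
            R, t = kabsch_fit(model[subset], native[subset])
            dist = np.linalg.norm(model @ R.T + t - native, axis=1)
            subset = dist <= d0                 # keep only well-fitting residues
        if subset.sum() > best.sum():
            best = subset
    if best.sum() < 3:
        return 0.0, best
    R, t = kabsch_fit(model[best], native[best])
    dist = np.linalg.norm(model @ R.T + t - native, axis=1)
    score = float(np.sum(1.0 / (1.0 + (dist[best] / d0) ** 2)) / n)
    return score, best

With this form of normalization, a model identical to the experimental structure scores close to 1, while a short well-fitting fragment of a mostly wrong model is penalized because the sum is divided by the full model length rather than by the size of the fitted subset.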

Top-30

Journals

• Proteins: Structure, Function and Genetics (42 publications, 14%)
• Bioinformatics (27 publications, 9%)
• BMC Bioinformatics (24 publications, 8%)
• Protein Science (14 publications, 4.67%)
• Lecture Notes in Computer Science (8 publications, 2.67%)
• PLoS ONE (6 publications, 2%)
• Journal of Bioinformatics and Computational Biology (5 publications, 1.67%)
• Journal of Molecular Biology (4 publications, 1.33%)
• Journal of Computational Biology (4 publications, 1.33%)
• Scientific Reports (4 publications, 1.33%)
• Computational Biology and Chemistry (4 publications, 1.33%)
• IEEE/ACM Transactions on Computational Biology and Bioinformatics (4 publications, 1.33%)
• Nucleic Acids Research (4 publications, 1.33%)
• International Journal of Molecular Sciences (3 publications, 1%)
• Nature Methods (3 publications, 1%)
• Journal of Computer-Aided Molecular Design (3 publications, 1%)
• Molecular Biology (3 publications, 1%)
• PLoS Computational Biology (3 publications, 1%)
• Structure (3 publications, 1%)
• Protein Engineering, Design and Selection (3 publications, 1%)
• Molecules (2 publications, 0.67%)
• Biomolecules (2 publications, 0.67%)
• BMC Structural Biology (2 publications, 0.67%)
• Theoretical Biology and Medical Modelling (2 publications, 0.67%)
• Journal of Biomolecular NMR (2 publications, 0.67%)
• Biophysical Journal (2 publications, 0.67%)
• Journal of Computational Chemistry (2 publications, 0.67%)
• Journal of Chemical Information and Modeling (2 publications, 0.67%)
• RSC Advances (2 publications, 0.67%)

Publishers

• Springer Nature (66 publications, 22%)
• Wiley (65 publications, 21.67%)
• Oxford University Press (35 publications, 11.67%)
• Elsevier (26 publications, 8.67%)
• IEEE (17 publications, 5.67%)
• Multidisciplinary Digital Publishing Institute (MDPI) (10 publications, 3.33%)
• Public Library of Science (PLoS) (9 publications, 3%)
• Cold Spring Harbor Laboratory (9 publications, 3%)
• American Chemical Society (ACS) (6 publications, 2%)
• World Scientific (5 publications, 1.67%)
• Association for Computing Machinery (ACM) (5 publications, 1.67%)
• Mary Ann Liebert (4 publications, 1.33%)
• Pleiades Publishing (3 publications, 1%)
• Royal Society of Chemistry (RSC) (3 publications, 1%)
• Biophysical Society (2 publications, 0.67%)
• IGI Global (2 publications, 0.67%)
• Taylor & Francis (2 publications, 0.67%)
• American Association for the Advancement of Science (AAAS) (2 publications, 0.67%)
• Hindawi Limited (2 publications, 0.67%)
• American Institute of Physics (AIP) (2 publications, 0.67%)
• Bioscientifica (1 publication, 0.33%)
• International Union of Crystallography (IUCr) (1 publication, 0.33%)
• Pharmaceutical Society of Japan (1 publication, 0.33%)
• The Royal Society (1 publication, 0.33%)
• Science in China Press (1 publication, 0.33%)
• American Society for Biochemistry and Molecular Biology (1 publication, 0.33%)
• Chem-Bio Informatics Society (1 publication, 0.33%)
• American Society for Microbiology (1 publication, 0.33%)
• Proceedings of the National Academy of Sciences (PNAS) (1 publication, 0.33%)
  • We do not take into account publications without a DOI.
  • Statistics recalculated only for publications connected to researchers, organizations and labs registered on the platform.
  • Statistics recalculated weekly.

Cite this
GOST
Siew N. MaxSub: an automated measure for the assessment of protein structure prediction quality // Bioinformatics. 2000. Vol. 16. No. 9. pp. 776-785.
RIS
TY - JOUR
DO - 10.1093/bioinformatics/16.9.776
UR - https://doi.org/10.1093/bioinformatics/16.9.776
TI - MaxSub: an automated measure for the assessment of protein structure prediction quality
T2 - Bioinformatics
AU - Siew, N
PY - 2000
DA - 2000/09/01
PB - Oxford University Press
SP - 776-785
IS - 9
VL - 16
SN - 1367-4803
SN - 1367-4811
SN - 1460-2059
ER -
BibTeX
@article{2000_Siew,
author = {N Siew},
title = {MaxSub: an automated measure for the assessment of protein structure prediction quality},
journal = {Bioinformatics},
year = {2000},
volume = {16},
publisher = {Oxford University Press},
month = {sep},
url = {https://doi.org/10.1093/bioinformatics/16.9.776},
number = {9},
pages = {776--785},
doi = {10.1093/bioinformatics/16.9.776}
}
MLA
Siew, N. “MaxSub: an automated measure for the assessment of protein structure prediction quality.” Bioinformatics, vol. 16, no. 9, Sep. 2000, pp. 776-785. https://doi.org/10.1093/bioinformatics/16.9.776.