Proteomics: Pathways and Biomarkers
As Technical Barriers Fall, the Role of Proteins in Clinical Medicine Is Being Transformed
Fortunately, we are beginning to understand the differences between these cases, and have begun to adapt the technology and the way we use it to address the harder questions successfully.
For simplicity, I refer here to proteomics in the service of basic biology as Type 1 proteomics, and to population proteomics (mainly clinical applications) as Type 2. My belief, outlined in this article, is that we are at an inflection point in terms of what Type 2 proteomics can deliver in clinical applications.
Type 1 proteomics is making major contributions toward illuminating biological model systems for clues to basic mechanisms, with posttranslational modifications such as phosphorylation and glycosylation being mapped in increasing depth.
However, the price that has been paid to achieve deep proteome coverage has been the limited number of samples subjected to analysis, which represents a tradeoff that works well in pursuit of widely conserved, basic biological mechanisms, but necessarily slights the complexities of how such mechanisms behave in nonidentical individuals.
In a sense, the strengths and weaknesses of this approach to basic biology mirror those involved in the use of biological model systems generally: they represent a simplification needed to provide a clear view of some biological mechanism, but don’t necessarily ensure that the mechanism operates similarly “in the wild”.
Proteomics has had less success so far in finding clinical biomarkers, a classical Type 2 application and an area of biology in which population heterogeneity is a key limitation. Essentially, the quest for biomarkers is a search for biological mechanisms that are invariant, or nearly so, across a real population of individuals.
In principle, all mechanistic discoveries from Type 1 work can be considered candidate biomarkers to the extent that the mechanism in question is related to disease, drug treatment, etc.
Published biomarker studies have identified some disease association for almost 25% of the 20,000 human proteins. Unfortunately, none of these prospective “discoveries” has yet been confirmed at the level required to achieve FDA clearance for a clinically useful protein test.
That proteins can be excellent clinical biomarkers is indisputable: 109 proteins are measured by FDA-cleared tests and another 96 by generally available laboratory-developed tests (homebrews) through the efforts of a multibillion dollar in vitro diagnostics (IVD) industry. Cardiac troponin, for example, when measured in the blood, is the clinical definition of a heart attack.
Yet the rate at which new protein analytes are cleared has remained flat at about 1.5 new proteins per year for the last 15 years, much lower than the rate during the initial wave of monoclonal antibody-driven discoveries and far less than the number required to address critical diagnostically underserved indications such as Alzheimer disease, COPD, and stroke.
Unfortunately, overcoming this barrier has proven quite difficult using the favored tools of deep coverage (Type 1) proteomics, with their high cost per sample and limited quantitative precision.
What should be the central component of the “biomarker pipeline” is missing: an easily accessible capacity to accurately measure, in large clinical sample sets, the candidate biomarkers emerging from proteomic (or genomic) studies.
Werner Zolg crystallized this requirement by pointing out that good analytical data from at least 1,500 samples is required to support a convincing case for serious, i.e., commercial, interest in any protein biomarker. Of the thousands of papers in biomarker proteomics, I can only think of one that involved more than 1,000 samples. All the rest fall short of the Zolg number.
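The intuition behind a sample-size threshold of this magnitude can be sketched with a simple binomial confidence-interval calculation (illustrative only: the 95% specificity figure and the function below are assumptions for the sketch, not part of Zolg's argument, which also weighs pre-analytical variation and clinical subgroups):

```python
import math

def ci_halfwidth(p, n, z=1.96):
    """Approximate 95% confidence-interval half-width for a proportion p
    (e.g. a biomarker's estimated specificity) measured in n samples."""
    return z * math.sqrt(p * (1 - p) / n)

# Estimating a specificity of ~0.95: the uncertainty shrinks from
# roughly +/-0.04 at n=100 to roughly +/-0.01 at n=1500.
for n in (100, 500, 1500):
    print(n, round(ci_halfwidth(0.95, n), 3))
```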
This means that the biomarkers “discovered” in these studies have not been tested to a level that establishes real clinical utility (often referred to as “verification”). Absence of such data leaves us speculating as to the fraction of published candidates that ultimately ought to find use in medicine, but a persuasive case can be made that the failure rate is greater than that of drug candidates going into Phase I trials, and probably exceeds 95%.
Clinical verification of new protein biomarkers is constrained by several factors, including lack of grant funding available to “confirm the discoveries of others” and, until recently, the lack of a suitable technology base. Immunoassays, the default method of high-throughput protein quantitation, are difficult and expensive to construct and more difficult to multiplex in a reliable fashion as required in large-scale candidate verification.
Mass spectrometry has now emerged as the favored path for development of the targeted assays required for Type 2 research, largely as a result of applying to peptides the multiple reaction monitoring (MRM) technology long used by analytical chemists for quantitation of smaller molecules.
MRM measurements provide near-absolute structural specificity, true internal standardization and flexible multiplexing, none of which is available in conventional immunoassays.
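The internal standardization works by spiking a known amount of a stable-isotope-labeled ("heavy") version of the target peptide into the digest and reading the concentration off the light/heavy peak-area ratio. A minimal sketch (the function name and example numbers are illustrative, not from the source):

```python
def quantify_by_sis(light_area, heavy_area, sis_spike_fmol):
    """Amount of the endogenous ('light') peptide, computed from the
    light/heavy MRM peak-area ratio and the known amount of spiked
    stable-isotope-labeled ('heavy') internal standard."""
    return (light_area / heavy_area) * sis_spike_fmol

# e.g. light peak 2.4e5 counts, heavy peak 1.2e5 counts, 50 fmol spiked
print(quantify_by_sis(2.4e5, 1.2e5, 50.0))  # 100.0 fmol endogenous peptide
```

Because the heavy standard co-elutes with, and is chemically identical to, the endogenous peptide, losses during sample handling cancel out of the ratio.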
MRM has also overcome one of the long-standing criticisms of proteomics—reproducibility. At one point, in the wake of the SELDI debacle, it was believed, especially in the genomics community, that the methods of proteomics were simply not reliable enough to get the same result in different labs.
Multilaboratory efforts, largely spearheaded by the NCI’s CPTAC program, have now shown that peptide MRM measurements are accurate and consistent across different labs and instrument platforms, as analytical chemists knew they would be.
One such approach, SISCAPA (stable isotope standards and capture by anti-peptide antibodies), which enriches target peptides with anti-peptide antibodies prior to MRM analysis, provides a general platform for rapid biomarker assay development and has been proposed as a means of quantifying the entire human baseline proteome (the hPDQ project).
SISCAPA may shortly play a role in the clinical laboratory as well, as shown in Hoofnagle’s work on the thyroglobulin assay, where tryptic digestion removes most of the sources of interference that plague this clinical immunoassay (and many others).
Because the low-abundance target peptides are isolated to near purity, extensive LC separation prior to MS is no longer required, and very high throughput (12–60 samples per hour) can be achieved at sensitivities below 1 ng/mL for a protein in plasma.
Further gains in sensitivity occur with each improvement in mass spectrometer design (e.g., the recent introduction of ion funnel technology), suggesting that within two to four years MS-based assays will equal the best current immunoassays while using only 10 μL of plasma for up to 50 analytes.
However, a much more rational alternative path is emerging, based on the ability of automated affinity-MS platforms to fill roles simultaneously in Type 2 biomarker research, in clinical evaluation of candidates, and, finally, in the clinical laboratory itself. Common instrument platforms justify greater investment in performance and automation and, ultimately, a lower cost per result.
Current evidence supports the notion that the same assay reagents (antibodies and internal standards) may serve all the way from research to the clinic, albeit with increasing levels of quality control and regulatory documentation.
Thus, the technical barriers limiting translation of candidate biomarkers can be radically reduced, and several disconnected steps of the current process removed. With these improvements, and assuming the availability of stored samples appropriate to the specific clinical questions at hand, it will be feasible to test thousands of candidate biomarkers, and to translate the successes into clinical use in something on the order of five years.
Of even greater long-term significance is the change this could bring about in the economics of healthcare. MS-based analysis provides a very low incremental cost per protein analyte once a sample is in process, as opposed to testing of each protein in separate aliquots in current clinical-quality immunoassay instruments. This paradigm change makes the development of multiplex tests more practical and less expensive, opening the way to successful tests for more complex and heterogeneous diseases including cancer.
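The economic contrast can be sketched with a toy cost model (all dollar figures below are hypothetical placeholders, not from the source):

```python
def cost_per_analyte(prep_cost, incremental_cost, n_analytes):
    """Per-analyte cost when one prepared sample is shared across a
    multiplexed MS run; n_analytes = 1 models a single-analyte test
    run on its own aliquot."""
    return (prep_cost + incremental_cost * n_analytes) / n_analytes

# Hypothetical figures: $40 to prepare and digest one plasma sample,
# $2 of instrument time per additional MRM analyte.
print(cost_per_analyte(40.0, 2.0, 1))   # 42.0 per analyte, single test
print(cost_per_analyte(40.0, 2.0, 50))  # 2.8 per analyte, 50-plex panel
```

The fixed per-sample cost is amortized across the panel, which is what makes large multiplex tests progressively cheaper per result.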
The positive economic impact of early disease detection at reasonable cost is enormous, and justifies substantial efforts to re-engineer our currently unproductive biomarker pipeline.