Topics: Information retrieval / IR evaluation / Precision and recall / Learning to rank / Confidence interval / Relevance / Ranking function / Standard error / Discounted cumulative gain / Statistics / Information science / Science


Measuring the Reusability of Test Collections

Ben Carterette†, Evgeniy Gabrilovich‡, Vanja Josifovski‡, Donald Metzler‡
† Department of Computer & Information Sciences, University of Delaware, Newark, DE
‡ Yahoo! Research, Santa Clara, CA

Document Date: 2009-12-30 01:36:32


File Size: 550.81 KB

