The CVRank algorithm generates a comparable rating and ranking of achievements stored in a database. It allows a user (an individual professional, an enterprise member or talent manager, or a recruiter) to rate and rank persons in databases of achievements or merits, such as any database of documents containing resumes or curricula vitae, covering academic (exams, courses, certifications, titles) or professional (job positions, publications, works) fields.
CVRank provides a ranking method that is scalable and can be applied to large databases, such as those available through the Internet, offering search and comparison possibilities to all users, whether recruiters or career/job seekers.
The information contained herein is a derivative work from the original document disclosing the algorithm, the provisional patent application, filed (12-MAY-2011) in the US Patent and Trademark Office, which was officially published (28-MAR-2012) on CVRank.org, and is therefore not patentable, under the Creative Commons Attribution/Share-Alike License (CC-BY-SA). For more on the provisional patent application, read CVRank's history.
Large numbers of companies and institutions seeking talented students, researchers, and employees are confronted with the challenge of ranking candidates for an available position. One standard practice among human resource departments is to create a job description for each open position, then advertise the position along with the description. Recruiters and job seekers then have to review and analyze these descriptions in order to determine a match between job seekers and particular jobs. Due to the high number of applicants, it is necessary to short-list and rank submitted curricula vitae based on their suitability for the job requirements. To reduce cost, error, and time, companies have a strong interest in automating two processes: specifying the requirements criteria for a given job (experience, skills, etc.), and matching applicants' profiles against those requirements, so as to produce a ranking policy that gives consistent, fair, and legally justifiable results. Both processes, however, involve a high level of uncertainty, as they require the input of different occupation-domain experts in the decision making. These experts will have different opinions, expectations, and interpretations of the requirements specification, as well as of the matching and ranking criteria for applicants.
Computer-implemented methods and Internet services
Due to the developments in computer technology and its increase in popularity, a number of searching and ranking tools are available to a person searching computer-based private databases of resumes or job offers, as well as to recruiters and job seekers searching the Internet for the right job, based on matching and ranking of achievements. Such methods and processes are offered, for example, on well-known Internet Web sites including www.Monster.com, www.LinkedIn.com, www.Yahoo.com, and www.CareerBuilder.com. Searching tools and selection methods currently available require the job seeker to select various criteria in the form of keywords, such as desired locations, types of jobs, desired compensation levels, etc. Similarly, the employers provide, in addition to the job description, the levels of skill, education, years of experience, etc. required to be considered for a particular job, and some available tools help in rating such keywords. Searching tools then look up the seeker's keywords in a database of job descriptions and return, or display, those job descriptions that contain the job seeker's keywords. Other methods try to help recruiters automate the process of ranking and selection, or to determine the consistency and reliability of each expert's decision making, trying to ensure that experts' decisions are unbiased and correctly weighted according to their level of knowledge and experience; with such methods, standardized rating and ranking focuses on the individual expertise within human resource departments, rather than on the candidates.
Problems of the traditional approach
These known methods and processes, however, fail to adequately filter prospective candidates according to their achievements. As a result, the company or recruiter looking for prospective candidates may be inundated with resumes, many of which are not close to the quality of candidate the company or recruiter is looking for. Often, recruiters also need to personally rate and rank all achievements and merits of applicants, one after another, according to previously agreed, loose working and scoring schemes, in order to assign overall static ratings to each candidate. These ad hoc, highly personalized, and irreproducible methods tend to be highly inefficient and biased. In this human-based rating and ranking process, with or without the help of known automated methods and processes, many curricula from highly qualified candidates may be lost, while unwanted curricula may reach the later, more expensive stages of personal selection.
- Unlike static scales, CVRank is an algorithm that turns persons' achievements into a simple, comparable mathematical function.
- Unlike subjective experience and scaling, CVRank relies on documented data (grades obtained, wages, scores, ...), public comparisons (ratings and rankings of institutions and companies, purchasing power parity, ...), and international classifications (ISCED, ISCO, ...).
- Unlike other algorithms, CVRank's main objective is to allow the rating of persons according to the sum of their achievements as selected by the user, using the different types of this method.
- It may also:
- take into account differences in achievements by age,
- be summed up to obtain group achievements (or expectations),
- take into account friend contacts to adjust the social rating.
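As an illustration of the "documented data and public comparisons" point above, the sketch below shows how purchasing power parity could make wages from different countries comparable. The PPP factors and wage figures are invented placeholders, not real data or part of the published algorithm.

```python
# Hypothetical sketch: make wages from different countries comparable
# by converting them with purchasing-power-parity (PPP) factors.
# All conversion factors and wages below are illustrative placeholders.

PPP_TO_USD = {          # local currency units per international dollar
    "USA": 1.00,
    "ESP": 0.65,        # illustrative factor
    "IND": 21.0,        # illustrative factor
}

def comparable_wage(amount: float, country: str) -> float:
    """Convert a local yearly wage into PPP-adjusted international dollars."""
    return amount / PPP_TO_USD[country]

wages = [("USA", 60000), ("ESP", 39000), ("IND", 1260000)]
adjusted = {c: comparable_wage(w, c) for c, w in wages}
# All three wages now sit on a single comparable scale.
```

The same normalization idea applies to any documented quantity the algorithm consumes: once scores, wages, or grades are expressed on one scale, they can be compared and summed.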
To sum up, rather than determining relevance only from static rating schemes, or relying on the intuition or experience of those responsible for human resource or job recruitment departments, this invention assumes the validity of the measured scores or rewards obtained, of the generally accepted difficulty levels of academic and professional achievements, and of the performance ratings of entities. From these measurements, it derives the relative quantity of ability they demonstrate. That quantification can then be used more intuitively to compare achievements, which is ultimately what institutions and companies require in order to select the best suited candidates.
Item Response Theory
Item response theory is an established statistical framework in psychometrics and education whose models are used to develop and analyze single tasks called items (such as questions in exams), to validate tests formed by such items and their results, and to estimate a selected ability, or a group of abilities, from item scores. The intuitive relationship these statistical models establish between scores, items, and abilities is used in the CVRank algorithm to quantify the ability that persons' achievements demonstrate.
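The score-item-ability relationship can be sketched with the two-parameter logistic (2PL) model, one of the standard item response theory models. The item parameters and the crude grid-search ability estimate below are illustrative, not CVRank's actual procedure.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response model: probability that a person with ability
    theta answers correctly an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A harder item (higher b) demands more ability for the same probability.
easy = p_correct(theta=0.0, a=1.0, b=-1.0)   # ~0.73
hard = p_correct(theta=0.0, a=1.0, b=+1.0)   # ~0.27

def estimate_ability(responses, items, grid=None):
    """Crude maximum-likelihood ability estimate over a grid of theta values.
    responses: list of 0/1 item scores; items: list of (a, b) parameter pairs."""
    grid = grid or [t / 10 for t in range(-40, 41)]
    def loglik(theta):
        return sum(math.log(p_correct(theta, a, b)) if r
                   else math.log(1.0 - p_correct(theta, a, b))
                   for r, (a, b) in zip(responses, items))
    return max(grid, key=loglik)
```

This is the direction CVRank exploits: given observed scores on items of known difficulty, the models yield an estimate of the underlying ability that produced them.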
In this method, the generally accepted scores in evaluations, the academic or professional levels of achievement, and the performance rankings are assumed to measure what they intend to measure. The strength and validity of the algorithm therefore depends on those assumptions, on their improvement as measuring tools, and especially on the user's preferences, whereas the models CVRank uses to obtain the quantification remain anchored in strong scientific foundations. The algorithm then makes new assumptions that break certain constraints of the item response theory models, putting the emphasis on keeping a consistent and comparable rating of achievements rather than on scientific validity.
The original document, the provisional patent application, was intended as the first approach to the algorithm, and is therefore an early alpha version, which we call version 0.
We are currently working on version 1, whose main difference from the preceding, more theoretical version is its pragmatic approach: its dependence upon the output of software interaction (especially with CVRanking).
For discussion on the different aspects, see communication channels.
The CVR types are the different achievement fields to which the CVRank algorithm can be applied.
The education CVRank (EduCVR) provides a rating based on the scores obtained in evaluations of different academic exams, degree and non-degree courses, certifications, or titles obtained, and on the performance of the issuing institution.
The work CVRank (WorkCVR) provides a rating of job positions, based on the wage/salary obtained, on the professional level developed, and on the performance rating of the hiring entity.
The language CVRank (LangCVR) provides a rating based on the depth of knowledge achieved in the languages spoken by candidates, depending on the difficulty of the language, and on the performance of the evaluating institution.
The research CVRank (ResCVR) provides a rating of the different publications related to research, relying upon citations and impact (or similar) factors for its evaluation.
The publication CVRank (PubCVR) rates publications according to their (media) type, number of readers/viewers, and earnings obtained.
The skill CVRank (SkillCVR) rates other skills not rated elsewhere, especially those recognized officially through certifications, diplomas, examinations, etc., that are not included in the EduCVR.
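Since the user selects which achievement fields to rate, the per-field ratings above could be combined into one overall figure, for instance with a user-weighted sum. The sketch below is a hypothetical illustration: the weights, numeric ratings, and the `overall_cvr` helper are invented for this example and are not part of the published algorithm.

```python
# Hypothetical sketch: combine per-field CVR ratings with user-chosen weights.
# Field names follow the CVR types above; all numbers are invented examples.

def overall_cvr(ratings: dict, weights: dict) -> float:
    """Weighted sum of per-field ratings, restricted to the fields the
    user selected (i.e. the keys present in `weights`)."""
    return sum(weights[f] * ratings.get(f, 0.0) for f in weights)

candidate = {"EduCVR": 7.2, "WorkCVR": 6.5, "LangCVR": 8.0, "ResCVR": 3.1}
recruiter_weights = {"EduCVR": 0.4, "WorkCVR": 0.4, "LangCVR": 0.2}

# ResCVR is ignored because the recruiter did not select it.
score = overall_cvr(candidate, recruiter_weights)
```

A recruiter hiring for a multilingual role might raise the LangCVR weight; the same candidate data then yields a different, but still comparable, overall rating.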