NLSY79 Child and Young Adult

Gaps in Employment

Between-Job Gaps

Young Adult respondents provide dates for the beginning and ending of each job reported in a survey round or carried over from a previous round. From 1994 through 1998, the Young Adult survey included a section called "Gaps When Not Working or in Military." Respondents were asked the main reason they did not work during any gaps in employment: on strike, on layoff, quit job but returned to same employer, job ended for a period of time but began again, or some other reason. If the respondents were on unpaid vacation or leave, they provided details for that leave: going to school, in the Armed Forces, pregnancy, had health problems, had problems with child care, had other personal or family reason, school shut down (for school employees only), or did not want to work. In addition, respondents provided the number of weeks they were looking for work or on a layoff. If they were not looking for work during the gap, they provided the main reason (for instance: ill, in jail, transportation problems). The "Gaps When Not Working or in Military" section was eliminated in 2000. Researchers can still determine when such gaps appear, however, by using information collected on start and stop dates for both jobs and military service.
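Because the dedicated gaps section was dropped, identifying a between-job gap is now a matter of comparing the stop date of one employment or military spell with the start date of the next. The sketch below illustrates that arithmetic in Python under simplifying assumptions: spells are supplied as (start, stop) date pairs already extracted from the data, overlapping spells are merged, and any break of at least one week is treated as a gap. The variable names and the one-week threshold are illustrative, not part of the released data.

    from datetime import date, timedelta

    def find_gaps(spells, min_gap=timedelta(weeks=1)):
        # spells: list of (start_date, stop_date) pairs for jobs and military service
        spells = sorted(spells)
        merged = []
        for start, stop in spells:
            if merged and start <= merged[-1][1]:
                merged[-1][1] = max(merged[-1][1], stop)   # overlapping or adjacent spells
            else:
                merged.append([start, stop])
        return [(e1, s2) for (_, e1), (s2, _) in zip(merged, merged[1:])
                if s2 - e1 >= min_gap]

    # Example: a job ending June 30 and the next beginning September 1
    print(find_gaps([(date(2001, 1, 15), date(2001, 6, 30)),
                     (date(2001, 9, 1), date(2002, 5, 15))]))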

Within-Job Gaps

From 1994 to 1998, the Young Adult survey included the same detailed questions about within-job gaps as did the NLSY79. From 2000 to 2014, only two questions about within-job gaps were asked: whether the respondent had taken any unpaid leave of one week or more and, if so, the total number of weeks of unpaid leave.

Maternity Leave

Beginning in 2008, the Young Adult survey has included a series of questions for female respondents on work experiences around the birth of each child. These questions were modeled on the 1983 maternity leave questions in the NLSY79. In 2008, this series was asked retrospectively about leaves surrounding the births of all children. Beginning in 2010, maternity leave questions are asked only of YA mothers for whom these data have not previously been collected or who had not returned to work as of the date of the last interview.

Comparison to Other NLS Cohorts: The NLSY79 has collected detailed information about both within-job and between-job gaps. These time and tenure questions provide information on a respondent's time spent with an employer, time spent away from an employer during which the employment contract was maintained or renewed, and periods of time when the respondent was neither working for an employer nor serving in the active forces. Prior to 1988, with the exception of the 1983 maternity leave questions, maternity leave was not explicitly collected. Since 1988, only female respondents are asked for information on the total number of separate periods of paid leave from an employer that were taken due to either pregnancy or the birth of a child. Start and stop dates are collected for each period of leave.

NLSY97 respondents provide the start and stop dates of each employee and freelance job, as well as military service. The survey also collects information about periods of a week or more when the respondent was not working at a given job. Tenure at current or last job is available for the Older Men for 1966, 1967, 1968, 1969, and 1971, and for the Younger Men for 1967, 1969, and 1971. For the Mature and Young Women, users may be able to create tenure variables for the later survey years by combining start and stop dates and data on within-job gaps. For more precise details about the content of each survey, consult the appropriate cohort's User's Guide using the tabs above.

Survey Instruments: Employment-related questions are found in the Young Adult Instrument, Section 7, Jobs & Employers Supplements. For 1994 through 1998, see Section 8, Gaps When Not Working or in the Military.
Areas of Interest: YA Between Jobs

Fringe Benefits

Important Information About Using Fringe Benefits Data

These data do not reflect actual coverage by a specific benefit, but rather a respondent's reported knowledge of whether his or her employer made such a benefit available. Starting in 2000, respondents answered these questions only for Job #1, the current or most recent primary job.

From 1994 on, Young Adult respondents who have worked at least 20 hours a week as a regular employee have answered questions about employer-provided fringe benefits. These benefits include paid vacation and sick days, health insurance, life insurance, dental insurance, maternity/paternity leave, retirement, flexible hours, profit sharing, training or educational opportunities, and subsidized child care. The 1994 to 1998 survey years asked these fringe benefit questions for all jobs lasting 10 weeks or more. Starting in 2000, respondents answered these questions only for Job #1, the current or most recent primary job.

Comparison to Other NLS Cohorts: Data have been collected during each NLSY79 survey on the availability of benefits provided by employers. Questions on benefits for the NLSY97 cohort are only asked of respondents who report an employee job lasting at least 13 weeks that ended after the date of their 16th birthday, or who are age 16 and over and report an on-going employee job at which they have worked at least 13 weeks. Information on benefits has been collected for the Mature Women in 1977, 1982, 1987, 1989, and 1995-2003; for the Young Women in 1978 and each survey since 1983; and for the Young Men in 1976 and 1981. The exact categories of benefits for which information was recorded may vary; generally, less information was collected in earlier years. For more precise details about the content of each survey, consult the appropriate cohort's User's Guide using the tabs above.

Survey Instruments: Employment-related variables are found in the Young Adult Instrument, Section 7, Jobs & Employers Supplements.
Areas of Interest: YA Job Information

Training

The Young Adult survey originally collected information about training received outside of regular schooling or the military, as well as about certificates, licenses and journeyman's cards. 

Prior to 2000, this section included detailed questions about up to six training experiences. From 2000 through 2006, detailed questions were asked only about the current or most recent training program, if applicable. Respondents were asked to identify the type of training and the duration of the program, as well as the source of money used to pay for the training. Respondents were also asked for the total number of additional training programs they had attended either ever or since the date of last interview. Beginning in 2008, the only training questions retained were those pertaining to certificates, licenses and journeyman's cards for practicing professions and what professions these are for. Young Adults who are still in high school do not enter this section.

Users should note that the questions pertaining to certificates, licenses and journeyman's cards have been asked every round since 1994, and Census occupation codes are available for each certificate, license or journeyman's card reported.

Comparison to Other NLS Cohorts: Information has been collected during all NLSY79 survey years on the type of organization providing the training in which respondents participated. In addition to regularly fielded general training questions, special data collections have focused on government training administered in the early years of the NLSY79, high school courses, degrees and certifications, and time use.

The NLSY97 asks all respondents who are at least 16 years old whether they have ever participated in any occupational training programs outside of their regular schooling. For each program, the survey then collects basic information, including the type of program, start and stop dates, time devoted to the training, periods of nonattendance lasting a week or longer during training sessions of at least two weeks, and whether the program was completed (and if not, the reason). In each round, respondents were also asked whether they participated in government training programs, with additional questions about specific programs and their duration.

Original Cohort respondents were asked questions about training both on and off the job, with a focus on government training programs. For the Young Men, details concerning training received in the military (other than basic training) were gathered in the 1966, 1969, 1971, 1976, and 1981 surveys. In 1975, among other additions, a new provider, "government program or agency", was added to the "Training" section of the Young Women survey. Beginning in 1984, a new category, "government agency", was added to the "Training" section of the Mature Women survey. For more precise details about the content of each survey, consult the appropriate cohort's User's Guide using the tabs above.

Survey Instruments: Training-related questions are found in the Young Adult Instrument, Section 11, Young Adult Other Training.
Areas of Interest: YA Training

School Survey

Sample & Survey Design

A separate, one-time survey was conducted in 1995-1996 of the schools attended by NLSY79 children (over the age of five) in the 1994 and 1995 school years. The survey collected information about the characteristics of the school, graduation rate, ethnic and gender composition of the student body and staff, school policies and practices, and community involvement. Information was also obtained about each child's academic success, social adjustment, participation in school activities, grade level, attendance record, and involvement in special programs. A third component of the survey collected standardized test scores from student transcripts for each child.

Data Collection & Instrumentation

The Child School Survey data collection had several components. The Principal Questionnaire, completed by the principal of the school, included information about characteristics of the school, school policies and practices, and school-community interfaces. A second instrument, the Child Schooling Questionnaire, filled out by school office personnel for each child, included information on the child's grade level, attendance, and involvement in special programs. Requests for transcripts yielded standardized test scores for about 34 percent of the children.

The Child School Survey Data

The Child data file contains 375 Child School Survey variables for a sample of about 3,000 children. Due to confidentiality restrictions, not all the items that were asked in each Child School Survey questionnaire appear on the public file. The original eligible universe of children consisted of those enrolled between grades one and twelve in the 1994-1995 school year. For a few children, enrollment status referred to their 1993-1994 school year, but for most the reference period was the 1994-1995 school year. Children under the age of 15 as of the end of 1994 were eligible for data collection if they were living with their mother; older children could be living either with their mother or in other types of residence. Children also needed to be at least age 5 at the time of interview. An estimated 4,441 children met these eligibility criteria.

For 334 children, information was obtained from more than one school, since the child attended more than one school during the interview window. Additionally, some children were eligible for inclusion in only one of the two years, so the data collection window encompassed only that one school year. The data file includes information for these children for up to two schools. Information could be collected during the 1995-1996 survey only for schools the children had attended during the preceding two years, as the waiver form signed by the parent permitted access only to records available in those schools.

Documentation

The Child School survey variables are assigned to the CHILD SCHOOL SURVEY area of interest. Unlike all the other Child variables, the school survey variables are identified by reference numbers that begin with the letter "S." The question names for variables from the Child Schooling Questionnaire are prefixed with a "C" followed by the school number (1 or 2), while those from the Principal Questionnaire begin with a "P". The question items in the file are named according to the sequence in which they appeared in the field questionnaires. Users are encouraged to access copies of the actual instruments (see the Questionnaires page).

School Discipline

 

Child

School Behavior

Information about child behavior at school is collected from the mother, starting with the 1988 survey. The mother reports, for all school age children, whether the child has ever had any behavior problems at school resulting in a note or being asked to come in and talk to the teacher or principal. She also reports, in each survey round, whether the child has ever been suspended or expelled from school and the grade in which the event first happened. These items are assigned to the year-specific MOTHER SUPPLEMENT area of interest and can be found in the Child documentation as follows: 

Question Name   Variable Title  
MS880852 SCHOOL & FAMILY BACKGROUND: CHILD'S BEHAVIOR REQUIRED PARENT AT SCHOOL 1988
MS901447 SCHOOL & FAMILY BACKGROUND: CH'S BEHAVIOR REQUIRED PARENT AT SCHOOL 1990
MS921447 SCHOOL & FAMILY BACKGROUND: CHILD'S BEHAVIOR REQUIRED PARENT AT SCHOOL 1992
MS941611 SCHOOL & FAMILY BACKGROUND: CHILD'S BEHAVIOR REQUIRED PARENT AT SCHOOL 1994
MS961611 SCHOOL & FAMILY BACKGROUND: CHILD BEHAVIOR REQUIRED PARENT AT SCHOOL 1996
MS985007 SCHOOL & FAMILY BACKGROUND: CHILD BEHAVIOR REQUIRED PARENT AT SCHOOL 1998
BKGN-38 CHILD BACKGROUND: CHILD BEHAVIOR REQUIRED PARENT AT SCHOOL 2000-2004
MS-BKGN-38 CHILD BACKGROUND: CHILD BEHAVIOR REQUIRED PARENT AT SCHOOL 2006-2014
MS880855 SCHOOL & FAMILY BACKGROUND: CHILD EVER SUSPENDED FROM SCHOOL 1988
MS901451 SCHOOL & FAMILY BACKGROUND: CHILD EVER SUSPENDED FROM SCHOOL 1990
MS921451 SCHOOL & FAMILY BACKGROUND: WAS CHILD EVER SUSPENDED FROM SCHOOL? 1992
MS941615 SCHOOL & FAMILY BACKGROUND: WAS CHILD EVER SUSPENDED FROM SCHOOL? 1994
MS961631 SCHOOL & FAMILY BACKGROUND: WAS CHILD EVER SUSPENDED FROM SCHOOL 1996
MS985012 SCHOOL & FAMILY BACKGROUND: WAS CHILD EVER SUSPENDED FROM SCHOOL 1998
BKGN-40 CHILD BACKGROUND: WAS CHILD EVER SUSPENDED FROM SCHOOL 2000-2004
MS-BKGN-40 CHILD BACKGROUND: WAS CHILD EVER SUSPENDED FROM SCHOOL 2006-2014
MS880856 SCHOOL & FAMILY BACKGROUND: CHILD'S GRADE WHEN SUSPENDED 1988
MS901453 SCHOOL & FAMILY BACKGROUND: CHILD'S GRADE WHEN SUSPENDED 1990
MS921453 SCHOOL & FAMILY BACKGROUND: CHILD'S GRADE WHEN FIRST SUSPENDED 1992
MS941617 SCHOOL & FAMILY BACKGROUND: CHILD'S GRADE WHEN FIRST SUSPENDED 1994
MS961633 SCHOOL & FAMILY BACKGROUND: CHILD GRADE WHEN FIRST SUSPENDED 1996
MS985012A SCHOOL & FAMILY BACKGROUND: CHILD GRADE WHEN FIRST SUSPENDED 1998
BKGN-40A CHILD BACKGROUND: CHILD GRADE WHEN FIRST SUSPENDED 2000-2004
MS-BKGN-40A CHILD BACKGROUND: CHILD GRADE WHEN FIRST SUSPENDED 2006-2014

In 2000 only, the above items were asked in the Child Supplement and therefore assigned to the year-specific CHILD SUPPLEMENT area of interest. In all other survey rounds, they are found in the MOTHER SUPPLEMENT area of interest.

School Survey

As part of the 1995-1996 Child School Survey, there is information on the number of times the child was suspended in the 1993-1994 and 1994-1995 school years and the number of students suspended from the same school at the same grade level. The School Survey also reports the number of times the child was expelled and the number of children at the same grade level expelled in the child's school. These items, documented as follows, are assigned to the CHILD SCHOOL SURVEY area of interest:

Table 1. Variable titles and question names for school discipline questions from Child School Survey

Variable Title School #1 School #2
SCHOOLING #(1 OR 2): NUMBER TIMES CHILD SUSPENDED IN 1994-1995 YEAR C1Q15 C2Q15
SCHOOLING #(1 OR 2): NUMBER TIMES WAS CHILD SUSPENDED IN 1993-1994 YEAR C1Q31 C2Q31
SCHOOLING #(1 OR 2): NUMBER TIMES THIS CHILD HAS BEEN EXPELLED FROM SCHOOL C1Q39 C2Q39
SCHOOLING #(1 OR 2): NUMBER OF CHILDREN SUSPENDED AT THIS GRADE LEVEL C1Q49 C2Q49
SCHOOLING #(1 OR 2): NUMBER CHILDREN EXPELLED AT THIS GRADE LEVEL C1Q50 C2Q50

For some children, information was obtained from more than one school, since the child attended more than one school during the interview window. The question names for variables from the Child Schooling Questionnaire are prefixed with a "C" followed by the school number (1 or 2). Titles for questions from the first school begin with "SCHOOLING #1", and titles for questions from the second school begin with "SCHOOLING #2".

BPI Scale

As part of the antisocial subscale of the Behavior Problems Index, completed by mothers in all survey years for children ages 4 and older, there are two items related to school discipline:

BEHAVIOR PROBLEMS INDEX: CHILD IS DISOBEDIENT AT SCHOOL
BEHAVIOR PROBLEMS INDEX: CHILD HAS TROUBLE GETTING ALONG WITH TEACHER

The BPI items are assigned to the MOTHER SUPPLEMENT area of interest for each survey year. Since 2008, the BPI items are also assigned to the ASSESSMENT ITEMS area of interest.

Survey Instruments: Questions related to school discipline are found in the Mother Supplement. Questions about school suspensions and expulsions were asked as part of the Child School Survey.
Areas of Interest: MOTHER SUPPLEMENT
CHILD SCHOOL SURVEY
CHILD SUPPLEMENT
ASSESSMENT ITEMS

 

Young Adult

While Young Adult respondents are not asked explicitly about any school discipline, they do provide information about the main reason they left school, as well as the main reason for any gaps in secondary school attendance, with one of the answer categories for those questions being suspension/expulsion.

Comparison to Other NLS Cohorts: The 1980 NLSY79 survey included several questions on school discipline problems: whether respondents had ever been suspended or expelled from school, and if so, the number of times, date of most recent disciplinary action, and when/if the youth had returned to school. Similar to the Young Adult, the questions in the NLSY79 schooling section about reasons for nonenrollment include expulsion/suspension. Information was collected on behavior problems evidenced by children of NLSY79 respondents that resulted in either the parent's notification or disciplinary action. NLSY97 respondents are asked whether they have been suspended and, if so, for what periods. The Young Women and the Young Men surveys ask respondents whether they have ever been suspended or expelled from school. For more precise details about the content of each survey, consult the appropriate cohort's User's Guide using the tabs above.

Survey Instruments: Questions related to school discipline are found in the Young Adult Instrument, Section 4, Regular Schooling.
Areas of Interest: YA Schooling

Aptitude, Achievement & Intelligence Scores

The NLSY79 Child surveys contain a wide range of detailed assessment information about the children of female respondents. From 1986 through 2014, a battery of child cognitive, socio-emotional, and physiological assessments were administered biennially for age-appropriate children. Assessments related to aptitude, achievement, and cognitive ability are listed below. Each individual assessment is discussed in more detail in the Assessments section of the topical guide. Users may also wish to review the Introduction to the Assessments section, which contains general information about the administration of the child assessments.

  1. Peabody Individual Achievement Test (PIAT) Math - (American Guidance Service), a PIAT subtest that offers a wide-range measure of achievement in mathematics for children with a PPVT age of five years or older.
  2. PIAT Reading Recognition and Reading Comprehension - (American Guidance Service), PIAT subtests that assess the attained reading knowledge and comprehension of children with a PPVT age of five and older.
  3. Peabody Picture Vocabulary Test-Revised (PPVT-R), Form L - (American Guidance Service), a wide-range test used to measure the hearing vocabulary knowledge of children whose PPVT age is three and above.  Administered to children age 4 and 5 or 10 and 11 starting with the 1996 survey round.
  4. Parts of the Body - ten items, developed by Kagan, that measure the ability of children aged one or two to identify various parts of their bodies. This assessment was not administered after 1988.
  5. Memory for Locations - an assessment, developed by Kagan, that measures the ability of children eight months of age through three years to remember the location of an object which is subsequently hidden from view. This assessment was not used after 1988.
  6. Verbal Memory - a subtest of the McCarthy Scales of Children's Abilities (Psychological Corporation) that assesses the short-term verbal memory of children aged three through six years, that is, their ability to remember words, sentences, or major concepts from a short story. Part C, the story, was not used after the 1990 survey. This assessment was not administered after 1994.
  7. Memory for Digit Span - a component of the revised Wechsler Intelligence Scales for Children (Psychological Corporation) which assesses the ability of children seven through eleven years of age to remember and repeat numbers sequentially in forward and reverse order.

Peabody Picture Vocabulary Test - Revised (PPVT-R)

Created variables

  • PPVTyyyy. PEABODY PICTURE VOCABULARY TEST-REVISED FORM L (PPVT): TOTAL RAW SCORE
  • PPVTZyyyy. PEABODY PICTURE VOCABULARY TEST-REVISED FORM L (PPVT): TOTAL STANDARD SCORE
  • PPVTPyyyy. PEABODY PICTURE VOCABULARY TEST-REVISED FORM L (PPVT): TOTAL PERCENTILE SCORE
  • PPV_ERRORyyyy. PPVT: TOTAL # OF ERRORS BETWEEN BASAL AND CEILING (available 2000 - 2014)
  • PPV_BASALyyyy. PPVT: FINAL BASAL (available 2000 - 2014)
  • PPVTMOyyyy. PPVT AGE OF CHILD (IN MONTHS) AT CHILD ASSESSMENT DATE

The Peabody Picture Vocabulary Test, revised edition (PPVT-R) "measures an individual's receptive (hearing) vocabulary for Standard American English and provides, at the same time, a quick estimate of verbal ability or scholastic aptitude" (Dunn and Dunn, 1981). The PPVT-R was designed for use with individuals aged 2½ to 40 years. The English language version of the PPVT-R consists of 175 vocabulary items of generally increasing difficulty. The child listens to a word uttered by the interviewer and then selects one of four pictures that best describes the word's meaning. The PPVT-R was administered, with some exceptions, to NLSY79 children ages 3 to 18 until 1994, when children 15 and older moved into the Young Adult survey. Variations in the patterns of administration are somewhat complex for this assessment, so the user is encouraged to examine Table 4 in the Child Assessments—Introduction section in order to understand which samples of children took this test over the various survey years. The last survey round to include the PPVT-R was 2014.

Description of the PPVT

The PPVT-R consists of 175 stimulus words and 175 corresponding image plates. Each image plate contains 4 black-and-white drawings, one of which best represents the meaning of the corresponding stimulus word. There are also 5 training words and image plates. Readers who wish to examine more than a single example of the actual images (or "plates") presented to the child should access the PPVT-R Manual and materials (Dunn and Dunn, 1981) or contact NLS User Services. There are two parallel forms of the PPVT-R; Form "L" has been used by the NLSY79 Child survey at all assessment rounds. PPVT-R items are numbered in order of increasing difficulty.

In 1986, the PPVT assessment was administered only in English. A Spanish version of the PPVT-R, the Test de Vocabulario en Imagenes Peabody or "TVIP," was introduced into the child survey in 1988 and used through the 2000 survey round for a small number of children who preferred to answer in Spanish. For this reason, post-1986 assessment results may be less culturally biased than the 1986 results. After 2000, the Spanish version of the PPVT-R was no longer administered.

Administration of the PPVT

Prior to 2000, the child viewed the images on the PPVT easel. Starting in 2000, the interviewer read from a laminated PPVT word list while the child matched the word by selecting one of four on-screen images designed to reproduce the pictures from the original PPVT easel.

Five training items were administered at the beginning of the PPVT assessment in order to familiarize children with the task. The first item, or starting point, was determined based on the child's PPVT age. Starting at an age-specific level of difficulty is intended to reduce the number of items that are too easy or too difficult, in order to minimize boredom or frustration. The suggested starting points for each age can be found in the PPVT-R manual (Dunn and Dunn, 1981).  

Testing began at the starting point and proceeded forward until the child made an incorrect response. If the child made eight or more correct responses before the first error, a "basal" was established. The basal is defined as the last item in the highest series of eight consecutive correct answers. Once the basal was established, testing proceeded forward until the child made six errors in eight consecutive items. If, however, the child gave an incorrect response before eight consecutive correct answers had been made, testing proceeded backward, beginning at the item just before the starting point, until eight consecutive correct responses had been made. If a child did not make eight consecutive correct responses even after all of the items had been administered, he or she was given a basal of one. If a child had more than one series of eight consecutive correct answers, the highest basal was used to compute the raw score.

A "ceiling" was established when a child incorrectly identified six of eight consecutive items. The ceiling was defined as the last item in the lowest series of eight consecutive items with six incorrect responses. If more than one ceiling was identified, the lowest ceiling was used to compute the raw score. The assessment was complete once both a basal and a ceiling had been established. The ceiling was set to 175 if the child never made six errors in eight consecutive item

Scoring the PPVT

A child's raw score is the number of correct answers below the ceiling. Note that all answers below the highest basal were counted as correct, even if the child answered some of these items incorrectly. The raw score can be calculated by subtracting the number of errors between the highest basal and lowest ceiling from the item number of the lowest ceiling.
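For researchers who want to verify or reproduce a raw score from item-level data, the following sketch applies the basal, ceiling, and error-count rules described above. It assumes a complete 0/1 response vector indexed from item 1 and therefore ignores the starting-point and skip logic of the fielded instrument; it is an illustration of the arithmetic, not a reimplementation of the survey software.

    def ppvt_raw_score(responses):
        # responses: list of 1 (correct) / 0 (incorrect); item 1 = responses[0]
        n = len(responses)
        basal = 1                                   # default basal when no run of eight correct exists
        for i in range(8, n + 1):
            if all(r == 1 for r in responses[i - 8:i]):
                basal = i                           # last item of the highest run of eight correct
        ceiling = n                                 # 175 if six errors in eight never occur
        for i in range(8, n + 1):
            if responses[i - 8:i].count(0) >= 6:    # six errors within eight consecutive items
                ceiling = i
                break                               # keep the lowest ceiling
        errors = responses[basal:ceiling].count(0)  # errors after the basal, through the ceiling item
        return ceiling - errors

Items below the basal are implicitly counted as correct because they are excluded from the error count, consistent with the scoring rule described above.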

As with PIAT Math and Reading Comprehension, it was possible, primarily in the pre-CAPI years, to improve the overall quality and completion level by utilizing information on the actual responses where "correct-wrong" check items had inadvertently been skipped by the interviewer. For a precise statement of the scoring protocol and the norm derivations, the user should consult the PPVT-R Manual (Dunn and Dunn, 1981, pp. 96-110, 126).

Age eligibility for the PPVT

Variations in the patterns of administration are somewhat complex for this assessment, so the user is encouraged to examine Table 4 in the Child Assessments—Introduction section in order to understand which samples of children took this test over the various survey years.

In 1986, all children age three and over were given this assessment. In 1988, all ten- and eleven-year-olds (our "index" population) as well as other children age three and over who had not previously completed the assessment in 1986 were given this assessment. In 1990, all children age ten and eleven as well as all other children age four and over who had not previously completed the assessment were eligible for the PPVT-R assessment. In the 1992 survey round, all children age three and over were eligible to be assessed. Thus, there are at least two survey points (1986 and 1992) in which all age-eligible children who were still being interviewed had a PPVT-R score. Of course, many of these children may also have had an intervening (at age 10 or 11) PPVT-R score. Starting in 1998, the administration of the PPVT-R was largely limited to 4- and 5-year-old children who had not previously been administered the test, as well as to the index group of children 10-11 years old. In 2004, 2006 and 2010-2014, a small number of children outside the age range for PPVT assessment who did not have prior valid scores were administered the PPVT. The last survey round to include the PPVT-R was 2014.

Norms for the PPVT

The PPVT-R was standardized on a nationally representative sample of children and youth. The norming sample included 4,200 children in 1979, and norms development took place in 1980 (Dunn and Dunn, 1981). For a comprehensive discussion of this norming procedure, researchers should refer to the PPVT-R Manual for Forms L and M (Dunn and Dunn, 1981). Age-specific standard scores (with a mean of 100 and standard deviation of 15) and corresponding percentile ranks are provided in the PPVT-R Manual.

Beginning in 1990, the procedure used to create the NLSY79 Child PPVT-R normed scores was refined in two important ways. First, children with raw scores that translated into standard scores between 20 and 39 are now normed using the PPVT-R Supplementary Norms Tables (American Guidance Service, 1981).  Second, raw scores that would translate to normed standard scores above the maximum provided are assigned standard scores of 160, and raw scores translating to standard scores below the minimum are now assigned standard scores of 20. Prior to 1990, children with these scores were assigned a standard score of zero.  The revised 1986-1988 normed scores are available in the current public data release.
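A simplified illustration of the post-1990 floor and ceiling assignment is sketched below. The norms-table lookup itself (including the supplementary tables for standard scores of 20-39) is not reproduced here; the function simply shows how a candidate standard score that falls outside the published 20-160 range is recoded, in contrast to the pre-1990 assignment of zero.

    def recode_standard_score(candidate_std):
        # candidate_std: standard score implied by the norms lookup, possibly out of range
        if candidate_std is None:
            return None                      # no valid raw score to norm
        return max(20, min(160, candidate_std))

    print(recode_standard_score(172))  # above the published maximum -> 160
    print(recode_standard_score(12))   # below the published minimum -> 20 (0 prior to 1990)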

Users may note one important distinction between the PPVT-R and PIAT scores--a difference of particular interest to those who plan to use both assessments concurrently. Whereas the PIAT assessments show relatively high mean scores (see discussions of the PIAT in the PIAT Math and Reading sections of this users guide), the PPVT-R mean scores are more comparable to those of the norming sample.

Completion rates for the PPVT

Table 6 in the Child Assessments—Introduction section contains the completion rate for the PPVT-R in 2014, the last survey round to include the PPVT-R.

Validity and reliability of the PPVT

The PPVT-R is among the best-established indicators of verbal intelligence and scholastic aptitude across childhood. It is among the most frequently cited tests in Mitchell's (1983) "Tests in Print." Numerous studies have replicated the reliability estimates from the PPVT standardization sample. The NLSY Child Handbook: 1986-1990 synthesizes much of this work. This report also provides cross-year (1986-1990) reliability and validity evaluation using the NLSY79 Child data. The NLSY Children, 1992: Description and Evaluation contains an evaluation of the quality issues for the 1992 PPVT-R sample, which included the full spectrum of children age three and over. These analyses show strong associations between a full range of social and demographic priors and 1992 PPVT-R scores. The report also documents strong independent linkages between PPVT-R scores in 1986 and PPVT, PIAT Reading and Mathematics, and SPPC scores in 1992. Typically, stronger associations are found for white and Hispanic than for black children. Both of these documents are available on the Research/Technical Reports page.

Age and racial differences on the PPVT

The youngest children administered this test historically scored the poorest, probably reflecting their unfamiliarity with a testing environment. Their lower scores did not reflect lower socioeconomic status, as these younger children had parents with more education than did the older children.

More than for any of the other assessments, substantial racial and ethnic variations exist for the PPVT, and these variations remain in multivariate analyses even with demographic and socio-economic controls. The reader is referred to The NLSY Children, 1992: Description and Evaluation for a more comprehensive evaluation of racial, ethnic, and socio-economic differentials in PPVT-R scores using the 1992 NLSY79 data which included PPVT-R assessment scores for all children age 3 years and over. This document is available on the Research/Technical Reports page.

PPVT scores in the database

Three types of PPVT scores are provided in each survey round from 1986 through 2014: a raw score, a standard score, and a percentile score. Documentation of the PPVT scores for 2014, the most recent round to include the PPVT, can be found in Table 1 in the Child Assessments—Introduction section.

Areas of Interest: Assessment [scores]
Assessment Items
Child Supplement [PPVT items]
Child Background [PPVT age]

PIAT Reading (Reading Recognition/Reading Comprehension)

PIAT Reading Recognition

Created variables

  • RECOGyyyy. PIAT READING RECOGNITION: TOTAL RAW SCORE
  • RECOGZyyyy. PIAT READING RECOGNITION: TOTAL STANDARD SCORE
  • RECOGPyyyy. PIAT READING RECOGNITION: TOTAL PERCENTILE SCORE
  • PRR_ERRORyyyy. PIAT READING RECOGNITION: TOTAL # OF ERRORS BETWEEN BASAL AND CEILING (available 2000 - 2014)
  • PRR_BASALyyyy. PIAT READING RECOGNITION: FINAL BASAL (available 2000 - 2014)

The Peabody Individual Achievement Test (PIAT) Reading Recognition subtest, one of five in the PIAT series, measures word recognition and pronunciation ability, essential components of reading achievement. Children read a word silently, then say it aloud. PIAT Reading Recognition contains 84 items, each with four options, which increase in difficulty from preschool to high school levels. Skills assessed include matching letters, naming names, and reading single words aloud. To quote directly from the PIAT manual, the rationale for the reading recognition subtest is as follows:

"In a technical sense, after the first 18 readiness-type items, the general objective of the reading recognition subtest is to measure skills in translating sequences of printed alphabetic symbols which form words, into speech sounds that can be understood by others as words. This subtest might also be viewed as an oral reading test. While it is recognized that reading aloud is only one aspect of general reading ability, it is a skill useful throughout life in a wide range of everyday situations in or out of school" (Dunn and Markwardt 1970: 19-20). The authors also recognize that "performance on the reading recognition subtest becomes increasingly confounded with the acculturation factors as one moves beyond the early grades."

This assessment was administered, in the Child Supplement (available on the Questionnaires page), to children below young adult age who were five and over.  The scoring decisions and procedures were identical to those described for the PIAT Mathematics assessment. The last survey round to include the PIAT Reading Recognition Assessment was 2014.

Description of PIAT Reading Recognition

A description of the administration process and a list of the words uttered by the interviewer are included in the public user version of the Child Supplement. The only difference in the implementation procedures between the PIAT Mathematics and PIAT Reading Recognition assessments was that the entry point into the Reading Recognition assessment was based on the child's score in the Mathematics assessment, although entering at the correct point is not essential to the scoring.

Through 2008, Child respondents who terminated the PIAT Math prematurely began the PIAT Reading Recognition assessment with the same starting point question as PIAT Math, based on the respondent's grade in school. Beginning in 2010, children who terminated the PIAT Math assessment prematurely began PIAT Reading Recognition at question 19, regardless of grade in school. Children who terminated PIAT Reading Recognition early started the PIAT-Reading Comprehension assessment at question 19 as well.

Scoring the PIAT Reading Recognition

The scoring decisions and procedures were identical to those described for the PIAT Mathematics assessment.

Norms for PIAT Reading Recognition

The standard scores have a mean of 100 and a standard deviation of 15; they were normed against standards based on a national sample of children in the United States in 1968. As with PIAT Mathematics, it is important to note that the norming sample for Reading Recognition was selected, and the norming carried out, in the late 1960s. This has implications for interpreting the standardized scores of the children in the NLSY79 sample (see also the discussion of the PIAT Mathematics assessment in this User's Guide).

Scoring changes for PIAT Reading

Changes were introduced beginning with the 1990 PIAT norming scheme to improve the utility of these measures and to simplify their use. First, children between the ages of 60 and 62 months (for whom no normed percentile scores had been available in 1986 or 1988) were normed using percentile scores designed for children enrolled in the first third of the kindergarten year, the closest approximation available to ages 60 to 62 months.

Starting in 1994, children with raw scores translating to percentiles below the established minimum were assigned percentile scores of one; children with raw scores translating to percentile scores above the maximum are assigned percentile scores of 99. In prior years, the "out-of-range" children had been arbitrarily assigned scores of 0, which led to some inadvertent misuse of the data. (Through 1994, children more than 217 months of age were assigned normed scores of -4 since they were beyond the maximum ages for which national normed scores are available.)
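As a sketch of the 1994-and-later rule, the recode below floors out-of-range percentiles at 1, caps them at 99, and applies the -4 code that (through 1994) flagged children older than 217 months. The inputs are assumed to be the percentile implied by the norms lookup and the child's age in months; they are illustrative, not released variable names.

    def recode_percentile(candidate_pct, age_months):
        if age_months > 217:
            return -4                         # beyond the ages covered by the national norms (pre-1994 rounds)
        return max(1, min(99, candidate_pct)) # out-of-range values floored at 1, capped at 99

    print(recode_percentile(0, 130))    # -> 1
    print(recode_percentile(100, 130))  # -> 99
    print(recode_percentile(55, 220))   # -> -4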

Completion rates for PIAT Reading Recognition

Table 6 in the Child Assessments—Introduction section contains the completion rate for PIAT Reading Recognition in 2014, the most recent survey round to include the PIAT Reading Recognition assessment.

Most children with invalid Reading Recognition scores (assigned a value of -3) either did not enter the assessment or prematurely terminated the assessment. In some instances, a careful review of the individual responses in conjunction with an examination of the interviewer's actual scoring calculations permitted clarification, and ultimately scoring, of previously invalid cases. This type of data review and rescoring was more prevalent during the years prior to 1994 when the assessments were administered on paper without the benefit of CAPI scoring.

It is important to note, however, that while interviewers were able to record the actual response to each PIAT Math item, the nature of the PIAT Reading Recognition made this infeasible for each individual item. In contrast with the PIAT Mathematics assessment, it was not possible to rectify inadvertent skips for some children on the PIAT Reading Recognition assessment where the "correct-noncorrect" check item inadvertently was left blank. This is one reason why the overall response rate is slightly lower on the PIAT Reading Recognition assessment than the PIAT Math assessment in years prior to 1994. Researchers who plan to use the PIAT Reading Recognition assessment extensively are encouraged to examine the individual response patterns. Where a particular researcher does not require great precision on this particular outcome (e.g., a categorization of scores into a number of discrete categories being sufficient), it is possible to reduce the non-completion rate. In a number of cases, while an exact score may not be determined, an appropriate score determination (e.g., within two or three points, or a score of at least a certain level) may be possible.

Validity and reliability for PIAT Reading Recognition

As is true for the PIAT mathematics assessment, the recognition assessment is considered quite reliable and valid. The NLSY Child Handbook: 1986-1990 includes a comprehensive discussion of these issues, drawing on material from the PIAT Manual as well as a variety of research that has been completed using the NLSY79 Child PIAT reading data. This discussion also includes internal CHRR evaluation of the cross-year correlations with other NLSY79 PIAT scores and the full spectrum of other cognitive assessments. Analyses presented in The NLSY Children, 1992: Description and Evaluation offer evidence of strong longitudinal independent associations between PIAT reading and a full set of demographic and socio-economic priors. In general, this assessment, like the other Peabody assessments, is widely used and has a well-established record in research. These documents are available on the Research/Technical Reports page.

PIAT Reading Recognition scores in the database

Three scores are reported for the PIAT Reading Recognition assessment in the child data file for each survey round from 1986 through 2014:

  • an overall nonnormed raw score
  • two normed scores: a percentile score and a standard score

Question names for the PIAT Reading Recognition scores for 2014, the most recent round to include the PIAT Reading Recognition assessment, appear in Table 1 in the Child Assessments—Introduction section.

PIAT Reading Comprehension

Created variables

  • COMPyyyy. PIAT READING COMPREHENSION: TOTAL RAW SCORE
  • COMPZyyyy. PIAT READING COMPREHENSION: TOTAL STANDARD SCORE
  • COMPPyyyy. PIAT READING COMPREHENSION: TOTAL PERCENTILE SCORE
  • PRC_ERRORyyyy. PIAT READING COMPREHENSION: TOTAL # OF ERRORS BETWEEN BASAL AND CEILING (available 2000 - 2014)
  • PRC_BASALyyyy. PIAT READING COMPREHENSION: FINAL BASAL (available 2000 - 2014)

The Peabody Individual Achievement Test (PIAT) Reading Comprehension subtest measures a child's ability to derive meaning from sentences that are read silently. For each of 66 items of increasing difficulty, the child silently reads a sentence once and then selects one of four pictures that best portrays the meaning of the sentence.

"While understanding the meaning of individual words is important, comprehending passages is more representative of practical reading ability since the context factor is built in, which plays an important role, not only in deciphering the intended meaning of specific words, but of the total passage. Therefore, the format selected for the reading subtest is one of a series of sentences of increasing difficulty. The 66 items in Reading Comprehension are number 19 through 84, with item 19 corresponding in difficulty with item 19 in Reading Recognition." (Dunn and Markwardt, 1970, pp. 21-22). The last survey round to include the PIAT Reading Comprehension assessment was 2014.

Administration of PIAT Reading Comprehension

Children who scored less than 19 on Reading Recognition were assigned their Reading Recognition score as their Reading Comprehension score. If they scored at least 19 on the Reading Recognition assessment, their Reading Recognition score determined the entry point to Reading Comprehension. Entering at the correct location is, however, not essential to the scoring.  
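A minimal sketch of that assignment rule is shown below, with illustrative argument names rather than released question names.

    def comprehension_raw(recognition_raw, comprehension_raw_from_items):
        # Children scoring below 19 on Reading Recognition receive that score
        # as their Reading Comprehension score; otherwise the score computed
        # from the Comprehension items themselves is used.
        if recognition_raw < 19:
            return recognition_raw
        return comprehension_raw_from_items

    print(comprehension_raw(14, None))  # -> 14 (Recognition score carried over)
    print(comprehension_raw(42, 37))    # -> 37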

Scoring the PIAT Reading Comprehension

Basals and ceilings on PIAT Reading Comprehension and an overall nonnormed raw score were determined in a manner identical to the other PIAT procedures.  The only difference is that children for whom a basal could not be computed (but who otherwise completed the comprehension assessment) were automatically assigned a basal of 19. Administration instructions can be found in the assessment section of the Child Supplement.

Age eligibility for PIAT Reading Comprehension

In 1994 through 2014, the PIAT Reading Comprehension assessment was administered to all children below young adult age whose age was five years and over and who scored at least 19 on the Reading Recognition assessment. (From 1986 through 1992, PIAT Reading Comprehension was actually administered to all children who scored 15 or higher on Reading Recognition. This lowered threshold was used to maximize our ability to score the Reading Comprehension assessment for those cases where interviewers made minor addition errors in totaling the Reading Recognition test, computing actual scores of 19 or more as only being 15 through 18.)

Norms for PIAT Reading Comprehension

As with the other PIAT tests, norming was accomplished in the late 1960s with all of its attendant potential analytical problems. These are noted in more detail in the discussion above about the PIAT Mathematics subtest. For a precise statement of the scoring decisions and the norm derivations, the user should consult Dunn and Dunn (1981) and Dunn and Markwardt (1970).

Scoring Changes for PIAT Reading Comprehension

Changes were introduced beginning with the 1990 PIAT norming scheme to improve the utility of these measures and to simplify their use. First, children between the ages of 60 and 62 months (for whom no normed percentile scores had been available previously) were normed using percentile scores designed for children enrolled in the first third of the kindergarten year, the closest approximation available to ages 60 to 62 months.

As of the 1994 round, children with raw scores translating to percentiles below the established minimum were assigned percentile scores of one; children with raw scores translating to percentile scores above the maximum are assigned percentile scores of 99. In prior years, the "out-of-range" children had been assigned scores of 0, which led to some inadvertent misuse of the data. (Prior to 1994, children more than 217 months of age are assigned normed scores of -4 since they are beyond the maximum ages for which normed scores are available.)

Completion Rates for PIAT Reading Comprehension

Table 6 in the Child Assessments—Introduction section contains the completion rate for PIAT Reading Comprehension in 2014, the most recent survey round to include the PIAT Reading Comprehension assessment.

Reading Comprehension completion rates have typically been lower than those of many of the other assessments. In the earlier (particularly non-CAPI) survey period, several reasons may account for lower comprehension completion rates (as low as 86% in 1992). In some instances, the assessment was simply skipped over with no reason given. In other instances, a valid Reading Recognition score was available, but the interviewer neglected to assess the child on Reading Comprehension. More typically, the Reading Comprehension assessment was attempted, but the interviewer did not administer a sufficient number of items to attain a basal or ceiling. A common problem was that an interviewer entered Reading Comprehension at a fairly low level, apparently tested the child, but did not record all of the responses. As with all of the assessments, the researcher is encouraged to examine the scoring patterns for the invalid responses. Depending on one's research objectives, some flexibility in rescoring may be possible.

Validity and Reliability for PIAT Reading Comprehension 

As with the other PIAT assessments, Reading Comprehension is generally considered to be a highly reliable and valid assessment that has been extensively used for research purposes. This version was normed in the late 1960s and thus is subject to the same analytical constraints as the other PIAT assessments.

Readers interested in additional detail regarding specific research based on this NLSY79 assessment should examine the PIAT discussion in the NLSY Child Handbook: 1986-1990 and review the most recent articles based on the NLSY79 Child reading assessment data by accessing the NLS online bibliography. Additional information documenting the association between PIAT Comprehension and a full range of socio-economic and demographic maternal and family antecedents can be found in The NLSY Children, 1992: Description and Evaluation. Distributions of the PIAT Reading Comprehension scores are summarized in Table series 9 in the Selected Assessment Tables reports (Table series 8 in 2004). All of these documents are available on the Research/Technical Reports page.

PIAT Reading Comprehension Scores in the Database

The NLSY79 Child dataset provides the following PIAT Reading Comprehension scores in each survey round from 1986 through 2014: an overall nonnormed raw score that can range from 0 to 84, a normed percentile score, and a normed standard score. Question names for the PIAT reading comprehension scores for 2014, the most recent round to include PIAT Reading Comprehension, are listed in Table 1 in the Child Assessments—Introduction section. It should be noted that many younger children (aged seven years and below) who receive low raw scores cannot be given normed scores because their scores are out of the range of the national PIAT sample used in the norming procedure. These children have been assigned "-4" codes on the percentile and standard score variables. Researchers wishing to keep these children in their analyses will need to consider special decision rules. The way to identify these children is to cross-classify children by their raw score and standard score. These cases will have a raw score of zero or greater but a standard and percentile score of -4.
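A minimal sketch of that cross-classification using pandas is shown below; the column names are placeholders for whichever raw and standard score variables a researcher has extracted, not the released question names.

    import pandas as pd

    df = pd.DataFrame({
        "comp_raw":      [0, 12, 45, 7],     # hypothetical raw scores
        "comp_standard": [-4, -4, 101, 88],  # hypothetical standard scores
    })

    # Valid raw score (zero or greater) but no normed score (-4)
    out_of_range = (df["comp_raw"] >= 0) & (df["comp_standard"] == -4)
    print(df[out_of_range])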

If one is using the PIAT Reading Comprehension assessment for analyzing five- and six-year-olds, the proportion of children without a standard score is a major constraint that cannot be ignored. A large proportion of five- and six-year-olds with a valid raw score on Reading Comprehension could not be given a normed score. All of these children had raw scores below 19 and thus had their Reading Recognition score imputed as the Comprehension score; one solution for the youngest children (those with ages under 7) is to limit analyses to Reading Recognition. Another possible strategy is to use the raw score and to include an age control in one's equations.

By applying procedures parallel to those used with PIAT Mathematics, it was sometimes possible to clarify the score of a previously "unscorable" child by carefully examining the individual response patterns, particularly where the "correct-incorrect" check item had been left blank but the actual response had been recorded. This was more relevant in the 1986 to 1992 "pre-CAPI" administration survey rounds. In this way, we were able to retrieve a number of cases not previously scorable. Depending on a researcher's individual inclination or need for precision, it may be possible to score, in an approximate manner, a number of additional children. In order to accomplish this, the researcher will need to examine the individual PIAT comprehension items. Researchers who plan to use this outcome extensively are encouraged to examine the individual item responses.

Areas of Interest: Assessment [scores]
Assessment Items
Child Supplement

PIAT Mathematics

Created variables

  • MATHyyyy. PIAT MATH: TOTAL RAW SCORE
  • MATHPyyyy. PIAT MATH: TOTAL PERCENTILE SCORE
  • MATHZyyyy. PIAT MATH: TOTAL STANDARD SCORE
  • MAT_ERRORyyyy. PIAT MATH: TOTAL # OF ERRORS BETWEEN BASAL AND CEILING (available 2000 - 2014)
  • MAT_BASALyyyy. PIAT MATH: FINAL BASAL (available 2000 - 2014)

The Peabody Individual Achievement Test (PIAT) is a wide-range measure of academic achievement for children aged five and over. It is among the most widely used brief assessments of academic achievement, with high test-retest reliability and concurrent validity. The NLSY79 Child Supplement includes three subtests from the full PIAT battery: the Mathematics, Reading Recognition, and Reading Comprehension assessments. Many of the comments made here about the PIAT Math subtest apply equally to the other PIAT (as well as PPVT) assessments. The last survey round to include the PIAT Mathematics was 2014.

Description of the PIAT Math

The PIAT Mathematics assessment protocol used in the field is described in the documentation for the Child Supplement (available on the Questionnaires page). This subscale measures a child's attainment in mathematics as taught in mainstream education. It consists of 84 multiple-choice items of increasing difficulty. It begins with such early skills as recognizing numerals and progresses to measuring advanced concepts in geometry and trigonometry. The child looks at each problem on an easel page and then chooses an answer by pointing to or naming one of four answer options.

Administration of the PIAT Math

Administration of this assessment was relatively straightforward. Children entered the assessment at an age-appropriate item (although this is not essential to the scoring) and established a "basal" by attaining five consecutive correct responses. If no basal was achieved, a basal of "1" was assigned (see the PPVT discussion above). In 1986 and from 1996 to 2014, a "ceiling" was reached when five of seven items were answered incorrectly. From 1988 to 1994, a "ceiling" was reached when five items in a row were answered incorrectly. The non-normed raw score is equivalent to the ceiling item minus the number of incorrect responses between the basal and the ceiling.
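As with the PPVT sketch earlier in this section, the arithmetic can be illustrated in a few lines. The sketch below uses the five-consecutive-correct basal rule and the five-of-seven ceiling rule (1986 and 1996-2014); it assumes a complete 0/1 response vector indexed from item 1 and ignores the entry-point and skip logic handled by the CAPI instrument.

    def piat_raw_score(responses):
        n = len(responses)
        basal = 1                                   # default when no run of five correct exists
        for i in range(5, n + 1):
            if all(r == 1 for r in responses[i - 5:i]):
                basal = i                           # last item of the highest run of five correct
        ceiling = n
        for i in range(7, n + 1):
            if responses[i - 7:i].count(0) >= 5:    # five of seven items incorrect
                ceiling = i
                break
        errors = responses[basal:ceiling].count(0)  # incorrect responses after the basal, through the ceiling
        return ceiling - errors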

Age eligibility for the PIAT Math

The PIAT Mathematics assessment was administered to all children below young adult age whose age was five years and above in every survey round from 1986 to 2014.  

Norms for the PIAT Math

For a precise statement of the norm derivations, the user should consult the PIAT Manual (Dunn and Markwardt, 1970, pp. 81-91, 95). In interpreting the normed scores, the researcher should note that the PIAT assessments used in the NLSY79 Child were normed about 30 years ago. Social changes affecting the mathematics and reading knowledge of small children in recent years undoubtedly have altered the mean and dispersion of the reading distribution over this time period. In this regard, a revised version of the PIAT ("PIAT-R") was released in 1986, but this release occurred too late to incorporate as a 1986 child assessment. We opted to maintain internal continuity within the NLSY79 by continuing to use the 1968 version of the PIAT.

Normalized percentile and standard scores were derived on an age-specific basis from the child's raw score. The norming sample has a mean of 100 and a standard deviation of 15. The norming procedures essentially were a two-step process with the percentile scores being derived from the raw scores and the standard scores from the percentile scores. The question names for the raw and normed PIAT Math scores for 2014 (the last round the PIAT Math was administered) are listed in Table 1 in the Child Assessments—Introduction section.

The overall (weighted) standard score means for NLSY79 children completing the PIAT Mathematics have been higher compared to what one might expect from a full national cross-section. It is likely that this pattern at least partly reflects changes that have occurred in American society over the last 30 years. For example, it is very possible that factors such as child educational television viewing patterns or involvement in pre-school programs have improved younger children's readiness for mathematics and reading, if not their advanced capability.

Changes in PIAT norming scheme

Beginning with 1990, changes were introduced into the PIAT norming scheme to improve the utility of these measures and to simplify their use. First, children between the ages of 60 and 62 months (for whom no normed percentile scores had been available previously) were normed using percentile scores designed for children enrolled in the first third of the kindergarten year, the closest approximation available to ages 60 to 62 months.

Starting in 1994, children with raw scores translating to percentiles below the established minimum were assigned percentile scores of "1"; children with raw scores translating to percentile scores above the maximum were assigned percentile scores of 99. In prior years, the "out-of-range" children had been arbitrarily assigned scores of 0, which led to some inadvertent misuse of the data. (Prior to the 1994 period, children who were more than 217 months of age were assigned normed scores of -4, since they were beyond the maximum ages for which national normed scores are available.)

Completion rates for the PIAT Math

Between 1986 and 1992 when the survey was administered by paper and pencil, most invalidly skipped items in the PIATs fell into two categories. First, some children were inadvertently skipped over even though they were of an appropriate age. Second, a number of children could not be scored because the scoring decision rules were incorrectly followed so either a basal or ceiling could not be obtained. The introduction of computer-assisted personal interview (CAPI) technology in the 1994 child data collection prevented incorrect skips from occurring and also took the decision making regarding basal and ceiling procedures out of the hands of the interviewer. In the pre-CAPI survey years (1986-1992), when the child assessments were administered on paper, some cases had items with the "correct-incorrect" designation left blank by the interviewer. Since the actual responses to each item were recorded, scoring of these items was frequently possible.

Table 6 in the Child Assessments—Introduction section contains the completion rate for the PIAT Math in 2014, the last survey round to include the PIAT Math.

Validity and reliability for the PIAT Math

In general, the PIAT Math is a highly reliable and valid assessment. As detailed in the NLSY Child Handbook: 1986-1990 and The NLSY Children, 1992: Description and Evaluation, both available on the Research/Technical Reports page, the PIAT Math is closely correlated with a variety of other cognitive measures. It both predicts and is predicted by scores on a variety of the other NLSY Child assessments. A particular analytical strength of the PIAT assessments is that they have been administered repeatedly to children aged five and over. Many children in the sample completed these assessments more than three times, and most children in the Young Adult sample have multiple PIAT administrations in their NLSY79 history (see Tables 7-8 in the Child Assessments—Introduction section). This pattern of repeat assessment permits careful examination of children's developmental profiles in relation to school and early-career outcomes. A more detailed discussion can be found under Repeat Assessments in the Child Assessments—Introduction section.

PIAT Math scores in the database

Three types of scores are provided in each survey year from 1986 through 2014 for each assessed, age-eligible child: a raw score, a standard score, and a percentile score. Documentation for the PIAT Math scores for 2014, the most recent round to include the PIAT Math, is included in Table 1 in the Child Assessments—Introduction section.

Areas of Interest: Assessment [scores], Assessment Items, Child Supplement

Wechsler Intelligence Scale for Children - Memory for Digit Span

Created variables

  • DIGITyyyy. DIGIT SPAN: TOTAL RAW SCORE
  • DIGITFyyyy. DIGIT SPAN: DIGITS FORWARD RAW SCORE
  • DIGITByyyy. DIGIT SPAN: DIGITS BACKWARD RAW SCORE
  • DIGITZyyyy. DIGIT SPAN: TOTAL STANDARD SCORE

The Memory for Digit Span assessment, a component of the Wechsler Intelligence Scale for Children-Revised (WISC-R), is a measure of short-term memory for children aged seven and over (Wechsler 1974). The WISC-R is one of the best-normed and most highly respected measures of child intelligence (although it should be noted that the Digit Span component is one of the two parts of the Wechsler scale not used in establishing IQ tables). The last survey round to include the Memory for Digit Span was 2014.

Description of the Memory for Digit Span

There are two parts to the Memory for Digit Span assessment: Digits Forward and Digits Backward. The two parts tap distinct but interdependent cognitive functions: Digits Forward primarily taps short-term auditory memory, while Digits Backward measures the child's ability to manipulate verbal information while it is in temporary storage. In Digits Forward, the child listens to and repeats a sequence of numbers spoken aloud by the interviewer. In Digits Backward, the child listens to a sequence of numbers and repeats them in reverse order. In both parts, the length of each sequence of numbers increases as the child responds correctly. The precise instructions and items used in this assessment can be found in the Memory for Digit Span section of the NLSY79 Child Supplement, available on the Questionnaires page.

Administration of the Memory for Digit Span

The child was instructed to repeat a series of numbers (of increasing length) in the order given and a different series of numbers in reverse order. Each correct response was worth one point, with a maximum of 14 points for each subscale and hence 28 points for the total score. The forward digit sequence was completed before the backward digit sequence began; however, entry into the backward sequence was not contingent on successful entry into or completion of the forward sequence. Prior to 2002, where appropriate, this assessment was administered in Spanish.
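
A minimal scoring sketch appears below; it assumes each item has been recorded as a correct/incorrect flag, and the variable names are illustrative rather than NLSY79 question names.

```python
def score_digit_span(forward_items, backward_items):
    """Score the Digit Span: 1 point per correct item, at most 14 per subscale.

    Illustrative sketch; the item flags are hypothetical inputs, not the
    survey's question names or its internal scoring routine.
    """
    forward = sum(1 for correct in forward_items if correct)
    backward = sum(1 for correct in backward_items if correct)
    assert forward <= 14 and backward <= 14, "each subscale has at most 14 items"
    return {"forward": forward, "backward": backward, "total": forward + backward}

# Example: 9 of 14 forward items and 6 of 14 backward items correct -> total 15.
print(score_digit_span([True] * 9 + [False] * 5, [True] * 6 + [False] * 8))
```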

Age eligibility for the Memory for Digit Span

From 1996 through 2014, this assessment was administered to all children ages seven through eleven. In prior rounds, it was administered to children ages seven and over who had not previously received the assessment, and to all ten- and eleven-year-olds (see Table 4 in the Child Assessments—Introduction section).

Norms for the Memory for Digit Span

Whereas the normed scores for the other assessments are based on a mean of 100 and a standard deviation of 15, the Digit Span assessment was normed against a distribution that has a mean of 10 and a standard deviation of 3. Norms are only available for the total score. The norms are published in the WISC manual (Wechsler 1974: 118-150). 
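
Researchers who want to place the Digit Span standard score on the same metric as the other assessments can apply a simple linear rescaling, as in the sketch below; this is an analyst-side transformation, not a score released in the data.

```python
def rescale_digit_span(standard_score):
    """Convert a mean-10, SD-3 standard score to a mean-100, SD-15 metric.

    A linear z-score rescaling for comparability across assessments;
    not an NLSY79-provided score.
    """
    z = (standard_score - 10) / 3
    return 100 + 15 * z

print(rescale_digit_span(13))  # one SD above the mean -> 115.0
```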

Completion rates for the Memory for Digit Span

The overall completion rate for Digit Span in the most recent survey rounds is between 79 and 86 percent, a slight drop from previous rounds. This overall level of completion generally held across all three race/ethnicity categories in 2014, the last survey round to include the Digit Span (see Table 6 in the Child Assessments—Introduction section).

Validity and reliability for the Memory for Digit Span

In multivariate analyses of the 1992 data that controlled for a wide range of demographic and socio-economic antecedents, the scores of black and Hispanic children were not below those of non-Hispanic, non-black children on either the forward or backward assessment (The NLSY Children, 1992: Description and Evaluation). The same analyses found that the two 1986 Digit Span subscores, in particular the reverse-order Digits Backward score, were useful independent predictors of all of the PIAT scores for older children in 1992. Users who want more detailed information about the reliability and validity of these assessments, and a brief discussion of other studies that have used them, should consult the NLSY Child Handbook: 1986-1990 and The NLSY Children, 1992: Description and Evaluation, available on the Research/Technical Reports page.

Digit Span scores in the database

Three "raw" scores (one for each of the two subscales and one for the total score) are provided in each survey year from 1986 through 2014, along with one overall age-appropriate normed (standard) score. The complete listing of question names for assessment scores for 2014, the most recent survey round to include the Digit Span, can be found in Table 1 in the Child Assessments—Introduction section.

Areas of Interest: Assessment [scores], Assessment Items, Child Supplement