A Randomized Trial Comparison of the Effects of Verbal and Pictorial Naturalistic Communication Strategies on Spoken Language for Young Children with Autism
Laura E. Schreibman, Aubyn C. Stahmer

First, I would like to give you the essay I have typed out. Please use everything in my essay, because there are specific questions that need to be answered. Secondly, I will send you the questions my teacher requested; the paper has to be strictly about those. Thirdly, when I pasted the entire article into ChatGPT, it gave me answers to those questions; please make sure everything aligns with each question. I will also post the link to the article itself.

The study by Schreibman and Stahmer (2014) aimed to help young children with autism who were nonverbal or minimally verbal. Children were randomly assigned to one of two intervention groups to compare the effects of Pivotal Response Training (PRT) and the Picture Exchange Communication System (PECS). Although there was no consensus on a single best behavioral treatment, the goal was to evaluate a verbally based approach and a pictorial approach for validity and reliability. Past studies lacked direct comparisons because results differed across treatments. Additionally, PRT and PECS were both well-established behavioral interventions, but researchers wanted to identify which one best helps children build communication skills.

There were 34 males and 5 females included in this study. All 39 children, ages 20-45 months, were diagnosed with autism. Randomization was stratified by cognitive functioning, word use, and age to make certain there were no substantial baseline differences between the PECS and PRT groups (Koegel et al., 1987). Table 1 provided detailed demographic data, the assessment tools used for diagnosis, and the level of parental involvement. These naturalistic approaches matter because children are more motivated to open up and tend to learn faster and more naturally. The sample size was relatively small at only 39 participants, and a larger group might have revealed a wider range of responses. However, a small group is reasonable for a controlled behavioral intervention study (Bondy & Frost, 2002). The groups were split so the researchers could look for consistent patterns in the statistical analysis and track changes that happened over time.

The inclusion and exclusion criteria were justified: children had to have a diagnosis confirmed with the well-validated ADI-R and ADOS-G, no prior PRT or PECS therapy, an age under 48 months, and limited speech. Children were excluded if they had a severe cognitive disability, neurological problems, vision loss, or hearing difficulties. Children were assessed for how many words they could say and for how well they could think and learn. Assignment to groups was random to keep the comparison fair and unbiased (Koegel et al., 1989). The researchers maintained contact with the caregivers to monitor whether they were following the treatment plan properly. However, some assessors were aware of which group a child was in, which could potentially have affected how they judged the results and introduced bias. The sample also leaned toward children who were nonverbal rather than those who were more talkative. Despite these issues, the results are credible, although the article cautions that progress was highly variable and that neither treatment was superior. There was evidence that the study was ethical. The researchers always gained permission from the parents before testing and involved them in every detail concerning their child. To treat all families equally, parents were offered the alternative treatment after the study ended.
The most common tools used for testing were the Mullen Scales of Early Learning, the Autism Diagnostic Observation Schedule, the Vineland Adaptive Behavior Scales, and the Expressive One-Word Picture Vocabulary Test (Fenson et al., 2006; Gardner, 1990; Sparrow & Cicchetti, 1989). These tests were appropriate and trustworthy for children who do not talk much, and they align with the intervention objectives, target communication, and are age-appropriate. Calibration procedures made sure therapists and caregivers delivered the treatments in the correct format to maintain consistency. All sessions were recorded, and the videos were reviewed to confirm that therapists stayed on track; those who fell below the criterion were removed or retrained (Koegel et al., 1989). Therapists had to be trained to a standard of 80 percent correct implementation before delivering services. Additionally, fidelity checks were done by coders who did not know which group the child was in, to prevent bias. The tests were well validated and standardized (Sparrow & Cicchetti, 1989). Each measure served a different purpose: the MSEL (Mullen Scales of Early Learning) assessed learning and thinking skills; the Expressive One-Word Picture Vocabulary Test asked the child to name objects, measuring expressive language; the CDI (Communicative Development Inventories) was a checklist parents filled out about how their child communicates; the Vineland Adaptive Behavior Scales looked at how the child communicates and functions socially in everyday environments; the Autism Diagnostic Interview was a parent interview used to gather more information about the child; and the Autism Diagnostic Observation Schedule was an observation administered by trained professionals to look for signs of autism. These tests have been used with many children and are known to work well for young children with autism.

The independent variable was the assigned intervention, PRT (Pivotal Response Training) or PECS (Picture Exchange Communication System). The dependent variables were the children's spoken vocabulary and their expressive and adaptive communication skills. Measurements were taken before treatment, after treatment, and at follow-up to see whether the effects lasted, which strengthened the validity and reliability of the findings. The test environment consisted of well-equipped playrooms and natural spaces where the children could feel comfortable and stay engaged, and the materials were based on each child's preferences and developmental level. Treatment also continued at home, which allowed parents to stay involved in the interventions and receive knowledge and guidance. Parent educators guided caregivers through each step with manual readings, modeling of strategies, and feedback, and parents and caregivers had to meet the same 80 percent fidelity criterion. These criteria fit the design because the study was a randomized controlled trial comparing two active treatments: both groups received the same amount of intervention, approximately 247 hours over 23 weeks, and a three-month follow-up checked whether the results held. The goal was to see whether PRT and PECS improved spoken language in children who were nonverbal. The standardized tests, along with assessors who were unaware of the children's group assignments, increased internal validity, and PECS and PRT were implemented consistently across groups. Some assessors did know a child's group, which could have caused bias, but it did not affect the results much. Only two families dropped out of the intervention.
Some threats to validity were addressed, while others remain. Schreibman and Stahmer also wanted to make sure that PRT and PECS could work long term. Their evidence came from mixed-model ANOVAs, chi-square tests, and effect sizes comparing PRT and PECS across time, ensuring the results were statistically valid and meaningful. In conclusion, both PRT and PECS were effective for building spoken language in young children with autism who were nonverbal or minimally verbal, although progress varied widely and neither treatment was superior. Because both methods proved valid, future studies should focus on identifying which approach works best for which children with autism.

Article: https://pmc.ncbi.nlm.nih.gov/articles/PMC4005390/

Minor Clarifications / Gaps (but still aligned overall): ChatGPT says that I did everything as far as summarizing the article and following the questions my teacher told me to answer (there were 14 questions in all), but this is what it told me:

  • Spoken Language Gains: Your summary says “results are conclusive and trustworthy,” which might overstate it slightly. The full article is more cautious, noting “extreme variability” in progress and that neither treatment was superior.
  • Crossover Between Treatments: Your summary states that “every child received more than one treatment”, but the article clearly states that parents were asked to stick to the assigned treatment and were offered the alternative only after the study.
  • Tools Named in Summary: The summary merges tools and citations a bit loosely (e.g., “Fenson et al., 2006” cited with “One-Word Picture Vocabulary Test,” but that test is from Gardner, 1990). Minor, but worth noting.

OK, here are the questions my teacher would like me to answer. Remember, everything needs to be 100 percent original with no plagiarism. Also, here is the link to the article: https://pmc.ncbi.nlm.nih.gov/articles/PMC4005390/#S1 or https://pmc.ncbi.nlm.nih.gov/articles/PMC4005390/ (both links should work).

Subjects/Participants

1. Were the participants or specimens adequately described? Yes. The study included 39 children aged 20–45 months (M = 29.2, SD = 5.67), diagnosed with autism via the Autism Diagnostic Interview–Revised (ADI‐R) and the Autism Diagnostic Observation Schedule–Generic (ADOS‐G) (Lord et al., 1994, 2000). All participants were nonverbal or minimally verbal (fewer than ten functional words), consisting of 34 males and 5 females across two university-affiliated sites (Bondy & Frost, 2002). The sample was stratified and balanced by age, cognitive functioning, and word use, ensuring no significant baseline differences between the PECS and PRT groups (Koegel et al., 1987). Parental involvement was specified, noting 32 mothers and 7 fathers in the study. Demographic data including verbal ability and cognitive levels enhance transparency and allow replication.

2. Was the sample size appropriate? The trial began with 41 participants and concluded with 39, divided into 20 in the PRT group and 19 in the PECS group (Bondy & Frost, 2002). Though modest, this sample size is typical for early-intervention autism research and sufficient to identify significant time effects with medium-to-large effect sizes. However, large standard deviations indicate variability that a larger sample could mitigate. Attrition was low, supporting statistical power and internal validity. While the absence of a no-treatment control limits direct causal inferences, the sample size remains reasonable for the comparison being made.

3. Were inclusion and exclusion criteria clearly defined and justified? Yes. Criteria included age under 48 months, ≤9 intelligible words, confirmed ASD diagnosis, no severe sensory or neurological impairment, no prior exposure to PECS or PRT, and informed parental consent to avoid cross-treatment (Bondy & Frost, 2002; Koegel et al., 1987). These filters created a relatively homogeneous sample of minimally verbal young children suitable for direct treatment comparisons. Ongoing monitoring ensured that no outside augmentative interventions were introduced during the study. The rigorous criteria enhanced internal validity and grounded the study in a clearly defined population.

4. Was subject selection free from bias threatening internal or external validity? Yes. Stratified randomization across the key variables of age, cognitive function, and word use reduced selection bias (Koegel et al., 1989). Therapists and parents were uninformed about which sessions would be fidelity checked, helping to control expectancy effects. Attrition was minimal and balanced—one dropout per group—further reducing selection biases. While a minority of post-treatment assessments at Site 2 were unblinded, most remained blinded, and the authors transparently discuss this limitation. Overall, the design maintained strong internal and external validity protections.
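To make the stratification idea concrete, the sketch below shows one common way to implement stratified random assignment in Python. It is only an illustration of the general technique, not the authors' actual procedure: the field names, the age/score cutoffs, and the alternating-assignment rule are all assumptions made for the example.

    # Illustrative sketch of stratified random assignment (not the authors' code).
    # Field names and cutoffs below are assumed for the example.
    import random
    from collections import defaultdict

    def stratified_assign(children, seed=42):
        """Assign each child to 'PRT' or 'PECS', balancing within strata."""
        rng = random.Random(seed)
        strata = defaultdict(list)
        for child in children:
            key = (
                child["age_months"] < 30,   # younger vs. older toddlers (assumed cutoff)
                child["mullen_t"] < 30,     # lower vs. higher cognitive score (assumed cutoff)
                child["word_count"] == 0,   # nonverbal vs. minimally verbal
            )
            strata[key].append(child)
        assignments = {}
        for group in strata.values():
            rng.shuffle(group)
            # Alternate assignment within each stratum so the two groups stay balanced.
            for i, child in enumerate(group):
                assignments[child["id"]] = "PRT" if i % 2 == 0 else "PECS"
        return assignments

    # Example with made-up participants:
    kids = [
        {"id": 1, "age_months": 24, "mullen_t": 28, "word_count": 0},
        {"id": 2, "age_months": 25, "mullen_t": 27, "word_count": 0},
        {"id": 3, "age_months": 40, "mullen_t": 35, "word_count": 6},
        {"id": 4, "age_months": 42, "mullen_t": 36, "word_count": 8},
    ]
    print(stratified_assign(kids))

The point of the stratification key is that each stratum is split roughly evenly between PRT and PECS, which is what keeps baseline age, cognition, and word use comparable across the two groups.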

5. Was there evidence of ethical protection for participants? Yes. Ethical standards were upheld: parents provided informed consent, and participants were excluded if they had major comorbidities or prior exposure to tested interventions (Bondy & Frost, 2002). Parents were offered the alternative intervention post-study and continued other standard therapies, recognizing familial needs. Therapist and parent educator fidelity training protected participants from substandard implementation. Blind assessments for most ADOS administrations ensured unbiased data collection. These measures collectively emphasize participant safety and respect.

Materials

6. Were the tools, instruments, or materials used appropriate for the study? Absolutely. The study employed well-established instruments: the Mullen Scales of Early Learning (MSEL) for expressive communication, Expressive One-Word Picture Vocabulary Test–Revised (EOWPVT) for vocabulary, MacArthur–Bates Communicative Development Inventory (CDI) for parent-reported lexicon, and Vineland Adaptive Behavior Scales (VABS) for communication skills (Fenson et al., 2006; Gardner, 1990; Sparrow & Cicchetti, 1989). The PECS Phase scoring system was also utilized to measure augmentative communication progress (Bondy & Frost, 2002). These measures are age-appropriate, reliable, and valid for assessing communication development in young children with autism. They align directly with the intervention objectives and support construct validity.

7. Were calibration procedures described and adequate? Yes. Therapists were trained to a standard of 80% correct implementation before service delivery, and fidelity was assessed in videotaped sessions by blinded coders every 10 hours. Those falling below criterion were removed and retrained, ensuring consistency (Koegel et al., 1989). Parent educators also met the same fidelity standard. Blind fidelity checks prevented bias in selecting sessions for review. These rigorous calibration protocols ensure consistent and accurate application of PRT and PECS procedures.
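As a small illustration of how an 80% fidelity criterion can be checked, the sketch below scores a session from a list of correct/incorrect implementation intervals. The interval coding scheme and the retraining flag are assumptions for the example, not the study's exact coding system.

    # Hypothetical fidelity check against an 80% criterion (illustration only).
    def fidelity(intervals):
        """Proportion of coded intervals scored as correct implementation."""
        return sum(intervals) / len(intervals)

    def needs_retraining(intervals, criterion=0.80):
        """Flag a therapist or parent educator who falls below the criterion."""
        return fidelity(intervals) < criterion

    session = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 1 = correct, 0 = incorrect
    print(f"Fidelity: {fidelity(session):.0%}, retrain: {needs_retraining(session)}")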

8. Was there evidence of reliability and validity for the instruments? Yes. All measures used—MSEL, EOWPVT, CDI, VABS, ADI‐R, and ADOS‐G—are standardized and well-validated in young children with autism (Lord et al., 1994, 2000; Sparrow & Cicchetti, 1989). The majority of assessments, including ADOS scoring, were conducted blind to treatment condition, enhancing objectivity. Reliability checks for both staff administering assessments and fidelity coders were built into the protocol. Outcomes were consistently supported by effect sizes and p-values, demonstrating measurement integrity.

9. Were independent and dependent variables properly defined and measured? Yes. The independent variable was clearly defined as the type of intervention: PRT versus PECS. Dependent variables included expressive language ability (MSEL), vocabulary (EOWPVT, CDI), adaptive communication (VABS), phase of PECS acquisition, and parent satisfaction ratings. Measurements were taken at baseline, immediately post-intervention, and at three-month follow-up intervals. Statistical analyses—mixed-model ANOVA and ordinal modeling—were well-suited to these data structures. Variable operationalization aligns clearly with treatment goals and research hypotheses.

Procedures

10. Was the test environment appropriate and clearly described? Yes. Parent education occurred in structured lab-based playrooms containing appropriate toys and developmentally sensitive materials. Home-based intervention took place in natural environments where children typically function. Generalization settings were described but not used for intervention activities, ensuring controlled study conditions. Videotaping sessions supported fidelity assessments without disrupting routine. This thoughtful design fosters ecological validity and replicability.

11. Were subject instructions and researcher protocols standardized? Yes. Both interventions adhered strictly to established manuals (Koegel et al., 1989; Frost & Bondy, 2002). Parent educators guided families through structured manual readings, modeling, and feedback until fidelity was achieved. The same 80% fidelity criterion applied across therapists and parents. Both conditions received identical session structures and durations. The use of retraining protocols and consistent scheduling further fostered procedural uniformity.

12. Were procedures appropriate for the research design? Yes. A randomized control trial design, stratified by baseline characteristics, was optimal for comparing two active interventions. Treatment intensity was equivalent: approximately 247 hours over 23 weeks in both conditions. Therapist training, fidelity monitoring, and blind assessment preserved internal validity. A follow-up assessment at three months allowed examination of sustained effects. Analytical methods (mixed-model ANOVA and chi-square tests) were suitable for the longitudinal, group-comparison framework.

13. Were threats to internal and external validity adequately addressed (e.g., history, maturation, attrition, pretest effects)? Yes. The study minimized selection threat through stratified randomization and minimized performance bias via blinded fidelity checks. Attrition was low and balanced, reducing dropout bias. Tracking outside therapies reduced confounding from extraneous services. Blind assessments curbed detection bias, although some post-tests at Site 2 were not blind, an admitted limitation. In addition, the authors openly discuss maturation threats, measurement bias, and generalizability, providing a balanced evaluation of validity threats.

14. Were data analysis procedures clearly described and appropriate for the design? Yes. The authors conducted mixed-model repeated-measures ANOVAs (3 time points × 2 groups) with Greenhouse–Geisser correction to account for violations of sphericity. They also used ordinal mixed models and chi-square tests for EOWPVT data. This analytical approach was transparent, with F values, p-values, and effect sizes reported for significant findings. The absence of treatment × time interaction was explicitly stated. These methods are statistically sound and aligned with the study’s longitudinal comparative design.
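To show what this kind of analysis looks like in practice, here is a minimal sketch of a 3 (time) × 2 (group) mixed ANOVA in Python using the pingouin package, run on simulated data. The column names, the simulated scores, and the choice of pingouin are assumptions for illustration; this is not the authors' analysis script.

    # Minimal sketch of a 3 (time) x 2 (group) mixed ANOVA on simulated data.
    # The data, column names, and library choice (pingouin) are assumptions.
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)
    records = []
    for group in ("PRT", "PECS"):
        for i in range(10):                       # 10 simulated children per group
            baseline = rng.poisson(3)             # made-up baseline word count
            for time, gain in zip(("pre", "post", "followup"), (0, 8, 12)):
                records.append({
                    "child": f"{group}-{i}", "group": group, "time": time,
                    "words": baseline + gain + int(rng.integers(0, 5)),
                })
    df = pd.DataFrame(records)

    # Within-subject factor = time, between-subject factor = group.
    # correction=True requests the Greenhouse-Geisser adjustment when the
    # sphericity assumption is violated, mirroring the correction the authors report.
    aov = pg.mixed_anova(data=df, dv="words", within="time",
                         subject="child", between="group", correction=True)
    print(aov)

In output like this, a significant main effect of time with no significant time × group interaction corresponds to the pattern the authors describe: children improved over time, but neither treatment was superior.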
