APEAC 2017 - Spotlight Sessions



 


“Digging Deeper into PISA: The Challenges for Schools”

DR YURI BELFALI, ORGANISATION FOR ECONOMIC CO-OPERATION AND DEVELOPMENT (OECD)

This session examines the role of schools and school-level actors in enhancing the quality and inclusiveness of student learning. In light of the international results of PISA, participants will discuss lessons and practices to support learning improvement, drawing on the experiences of their own countries and schools.

  • What are the major challenges for schools, as seen in PISA results, in terms of quality and equity in education and students’ well-being?
  • How could international assessments inform policies and practices to address these challenges for schools?
  • What lessons and practices in Asia-Pacific could help policy makers and practitioners identify possible actions to address challenges for schools?


“Assessing Deep Learning and Engaging Students Effectively – Innovation and Challenge at School Level”

PROFESSOR MASAHIRO ARIMOTO, TOHOKU UNIVERSITY, JAPAN

DR JOAN HERMAN, UNIVERSITY OF CALIFORNIA, LOS ANGELES, UNITED STATES

This session will address how assessment can be used at the school level to engage students and promote student learning. The session will begin by considering the purpose of assessment and alternative models for using assessment to promote deeper learning, drawing on models used in Japan, the United States and Singapore. Among the questions the session could address are:

  • What is deeper learning? What are priority learning goals for achieving it?
  • Why assess deeper learning?
  • How are deeper learning goals best assessed? What are the strengths and weaknesses of different strategies?
  • What teacher capacities are needed?
  • What school level conditions and factors influence how well assessment is used to promote improvement?
  • What criteria should be used to assure high quality assessment of deeper learning?

“What Do Traditional Tests Tell Us About Students?”

PROFESSOR ROBERT COE, DURHAM UNIVERSITY, UNITED KINGDOM

DR MARTIN WALKER, DURHAM UNIVERSITY, UNITED KINGDOM

Have we measured what we think we have measured?

In many education systems, it has tended to be the case that significant emphasis is placed on the results of final examinations. Typical examples would be O-level and A-level examinations.

The importance of doing well in the final test can affect the way that teachers teach and students learn. This could involve the well-known “teaching to the test” effects or perhaps a more subtle introduction of bias towards the kinds of knowledge and skills that are required to do well in the test.

It is not unusual for traditional pencil-and-paper tests to require students to demonstrate considerable amounts of knowledge recall and simple understanding. This might not be a problem in itself, but if the results of such tests are taken to be measures of higher-order skills such as true problem solving, complex analysis or transference of knowledge from familiar to unfamiliar situations, what the test has actually measured might not fit very well with the desire to explore these higher-level skills in students.

This session will explore the extent to which traditional examinations provide useful information about students and will consider:

1. The educational purposes that the school system sought to attain

2. How learning experiences were selected which are likely to be useful in attaining these objectives

3. How learning experiences can be organized for effective teaching

4. How effective traditional assessments might be at evaluating the learning experiences

(Tyler, R. W., 1949)

We will look at some typical examination questions and explore the kinds of knowledge and skills that are required in order to achieve the marks in the mark scheme.

We will also consider what is meant by high order and low order skills and what the balance of these might look like in a typical examination.


“Recent Developments in the Measurement, Evaluation and Promotion of Educational Effectiveness”

PROFESSOR ROBERT COE, DURHAM UNIVERSITY, UNITED KINGDOM

PROFESSOR ELAINE CHAPMAN, UNIVERSITY OF WESTERN AUSTRALIA, AUSTRALIA

Part 1: What Do We Understand by Educational Effectiveness? – Professor Robert Coe

Educational effectiveness is the aim of every teacher and school principal and has been the subject of more than 50 years of research. Despite this, we know surprisingly little about it that is both solid and useful.

In this part of the session we will explore the following questions:

  • What does it mean for a teacher or school to be ‘effective’?
  • What sorts of caveats and confounds should be included in this understanding?
  • Are some practices known to be ‘effective’?
  • Can individual teachers and schools monitor their own effectiveness in ways that are useful?

 

Part 2: How Do We Measure and Assess Educational Effectiveness? – Professor Elaine Chapman

Whilst historically schools and other education institutions have relied primarily upon academic performance measures as indicators of educational effectiveness, recent decades have seen a shift toward a more holistic view of effectiveness. Many institutions, for example, now gather data on measures of student engagement, critical thinking and creativity within their ongoing quality improvement exercises. Singapore has been among the countries at the forefront of this shift, fueled by broad-scale initiatives such as its 1997 Thinking Schools, Learning Nation policy. This policy recognized, amongst other factors, the importance of learning process variables in preparing students for their future educational and employment pursuits. The policy also brought into sharper focus the need for schools to target 21st century skills (e.g. creative and critical thinking skills) within curricular and pedagogical developments. Whilst there appears to be general agreement that this ideological shift was a significant move forward, the implementation of the policy has proven more difficult, owing in large part to the difficulty of assessing these challenging constructs.

In this part of the session we will focus on:

(i) current views on the domains of assessment that are relevant for evaluating educational effectiveness in the 21st century;

(ii) current measures that can be used by schools for the purposes of quality improvement; and

(iii) potential barriers in the use of student assessment data for continuous quality improvement.


“Two Burning Issues - Assessing 21st Century Skills, and De-Stressing the Test: Preparing Students for High Stakes Assessments”

DR CECILIA CHAN, THE UNIVERSITY OF HONG KONG, HONG KONG

MS MARTHA KAUFELDT, UNITED STATES

Assessing 21st Century Skills – Dr Cecilia Chan

1. Should we assess/measure 21st century skills? And why?

2. Who should be the assessor(s) of these skills?

3. Given that 21st century skills are often graduate attributes of schools, how are school principals dealing with these as KPIs? In other words, what are the KPIs for schools when it comes to 21st century skills for student learning?

4. Should the KPI assessment of students be incorporated into their overall academic grades/transcripts? And how?

5. To what extent should the KPIs be based on outcomes versus behaviours displayed?

6. How are generic competencies currently measured in your schools? Do you have any best practice examples to share?

 

De-stressing the Test: Preparing Students for High Stakes Assessments – Ms Martha Kaufeldt

Preparing students to show what they know on high-stakes tests may have less to do with covering curriculum, tedious review and practice questions, and more to do with developing student self-efficacy and a growth mindset. Learn how a veteran PD specialist returned to the classroom and used problem-based experiential learning to focus on the application of content and skills in a brain-friendly environment. Opportunities to analyze problems, take initiative and successfully apply solutions improved students’ perception of self-efficacy. Encouraging students to work hard and persevere even when experiencing setbacks helped develop growth mindsets. Frequent exposure to academic vocabulary (often used in standardized tests) ensured that students were on a level playing field. Routine use of the technology employed during test administration reduced anxiety about the test experience. In addition, integrating mindfulness techniques as a powerful tool for success developed thoughtful, reflective test-takers. Learn strategies to set up your classroom so that your students can feel confident even during stressful exams!


“Using Item Response Theory to Create More Reliable School Examinations”

DR MARTIN WALKER, DURHAM UNIVERSITY, UNITED KINGDOM

What information could schools be given about test questions?

Many high-stakes final examinations consist of questions that are not pre-tested and examination papers vary little from year to year.

There has been a tendency to accept that the test writers know what they are doing and that the questions must be good questions because they are on the examination paper.

In this session, we will consider the kind of information that could be made available to schools about examination questions and ask whether such information about tests and test questions is readily available.

CEM at Durham University makes considerable use of item response theory (IRT) in analysing test questions and building tests which have known psychometric properties. We will look at the use of IRT models to explore question functioning and examine the following properties of some test questions:

a) the simple facility value (how many students got it right?)

b) item difficulty versus student ability (which students got which things right and wrong?)

c) whether the score on a question increases in line with ability

d) how much expected versus unexpected behavior is revealed by a question
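As background only (the session itself keeps to the graphical view), these properties rest on the dichotomous Rasch model (Rasch, 1960), in which the probability of a correct response depends on the gap between a student’s ability and the question’s difficulty:

P(correct) = e^(ability − difficulty) / (1 + e^(ability − difficulty))

When ability matches difficulty exactly, this probability is one half; the further a student’s ability sits above (or below) the question’s difficulty, the closer the probability moves to one (or zero).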

The session will not include the mathematics behind this information (Rasch, 1960). We will look at the kind of information that schools could have about their own tests and ask the question, “Is this information available for high-stakes tests?”

We will look at some graphical displays of test data with particular focus on the use of Wright maps to provide information about how well the difficulty of a test is aligned with the ability of the students taking the test.

This will be an interactive session, but you will not have to do any maths.