Tetra Tech International Development: Diamond Sponsor Exclusive Workshop Partner

Conference workshop program: Tuesday 17 September 2024

>>> DOWNLOAD a printable conference workshop program

View Monday conference workshop program here.

The following categories will help you select sessions best suited to your interests: Foundational – Intermediate – Advanced
For details on each category, see here.

8–9am REGISTRATION
9am–12:30pm WORKSHOP PROGRAM

Full day workshop:
Clearing the way for better evaluation – an introduction to practical ways of increasing credibility by identifying, managing and reducing error

Presented by Samantha Abbato

FOUNDATIONAL – INTERMEDIATE 

> Details
> Register

Full day workshop:
Planning impact evaluations of social policy: When to undertake, planning and avoiding common pitfalls

Presented by Zid Mancenido, Dan Carr

INTERMEDIATE – ADVANCED 

> Details
> Register

Full day workshop:
Scaling impact: New ways to plan, manage, and evaluate scaling

Presented by John Gargani

INTERMEDIATE – ADVANCED

> Details
> Register

Half day workshop:
Insights on Indigenous evaluation co-design in the Warlpiri Yitakimaninjaku, warrirninjaku, payirninjaku manu pina-jarrinjaku project

Presented by Marlkirdi Rose Napaljarri, Kathleen Gibson (MK) Napurrula, Tyrone Spencer Japaljarri, Jamie Gorman, Alex Gyles

FOUNDATIONAL – INTERMEDIATE

> Details
> Register

Full day workshop:
Using behavioural science in evaluation

Presented by Kizzy Gandy

INTERMEDIATE – ADVANCED 

> Details
> Register

Full day workshop:
Navigating AI in evaluation: From basics to advanced applications

Presented by Gerard Atkinson, Tuli Keidar

FOUNDATIONAL – INTERMEDIATE

> Details
> Register

12:30–1:30pm LUNCH
1:30–5pm WORKSHOP PROGRAM

Abbato workshop continued

Mancenido, Carr workshop continued

Gargani workshop continued

Half day workshop:
Evaluation and philanthropy: Designing funder evaluations to support strategic wayfinding

Presented by George Argyrous, Elizabeth Branigan

INTERMEDIATE

> Details
> Register

Gandy workshop continued 

Atkinson, Keidar workshop continued

Workshop details


Full day workshop – Clearing the way for better evaluation – An introduction to practical ways of increasing credibility by identifying, managing and reducing error

presented by Samantha Abbato  | FOUNDATIONAL – INTERMEDIATE

WORKSHOP DESCRIPTION

Almost fifty years ago, Michael Scriven (Scriven, 1975) called on us to consider and mitigate one of the two sources of error in evaluation: the role of bias. Yet, as Daniel Kahneman and others (Kahneman et al., 2021) have argued, noise is even more worthy of consideration in evaluative processes and judgements: firstly, because it is often a source of greater error than bias, and secondly, because it is easier to detect and reduce. Because evaluation is, above all else, a judgement-making activity that combines evidence with values, the stakes are high for identifying and minimising sources of noise and bias.

The presence of error in all aspects of evaluation activities can (and does) result in wrong decisions and ensuing actions. Our evaluative judgements, and ultimately the decisions based on them (e.g., whether an initiative should continue, be expanded, be adjusted and improved, or cease), are only as good as the systems, processes and methods we rely on in making them.

In this practical workshop, we provide a low-jargon introduction to understanding, identifying and managing the two components of error – noise and bias. We demonstrate, through relatable real-world evaluation examples, how unnoticed noise and bias seriously affect the credibility of evaluation findings, judgements and decisions. We provide an opportunity for evaluators to explore and discuss noise and bias in the evaluations they are involved in, supporting improved practice, increased credibility and ethical alignment.

Using an engaging cartoon field guide and card game approach, we help participants spot some of the most common sources of error in evaluation processes and judgements. Finally, we share practical strategies that participants can take back to their workplace and current projects to improve their own evaluations and to judge the credibility of evaluations reported to them. Practical tools introduced include a bias spotter's guide, a noise audit and an error observation checklist.

ABOUT THE FACILITATOR

Samantha Abbato is an experienced trainer with more than 25 years of health and community sector experience and strong methodological expertise across qualitative and quantitative disciplines, including public health, epidemiology, medical anthropology, biostatistics, and mathematics. 

Sam engages in a utilisation-focused approach, using mixed (qualitative and quantitative) methods and evaluation case studies and stories to build evaluation capacity and make a difference in the health and community sectors. Combining a rigorous scientific foundation with extensive practical experience of evaluation processes, methods and decision-making, Sam is well-positioned to provide practical tools and guidance that support evaluators and those commissioning evaluation to bolster their work by identifying, managing and reducing bias and noise.

> back to overview > register


Full day workshop – Planning impact evaluations of social policy: When to undertake, planning and avoiding common pitfalls 

presented by Zid Mancenido and Dan Carr  | INTERMEDIATE – ADVANCED     

WORKSHOP DESCRIPTION

Tailored for public servants and evaluators, this day-long workshop offers a deep dive into the nuances of thoughtfully implementing experimental and quasi-experimental evaluation methods to identify the impact of social policy programs. We will go beyond theories of cluster randomised trials, regression discontinuity designs, and difference-in-differences approaches to illustrate how to design rigorous impact evaluations tailored to program rollout, policy context and the intention of the evaluation itself.

The morning will explore fundamental principles underlying experimental and quasi-experimental methods, from which participants will grasp the significance of randomisation in establishing causality and mitigating bias, as well as the conditions under which these can (and cannot) be achieved when ‘pure randomisation’ is not possible. In exploring these principles, participants will be prompted to consider how the circumstances of their particular policy contexts may be leveraged to design more rigorous evaluations.

The afternoon will shift to practical impact evaluation challenges, many of which require considerable craft knowledge to address. Our experience is that when these challenges are not given sufficient thought in the design of impact evaluations, they can result in studies that offer statistically uninformative conclusions which, if acted on by policymakers, can lead to unjustified decisions to expand or close programs.

Topics that will be canvassed include: participant recruitment for generalisability, streamlining data collection, managing attrition, ensuring appropriate randomisation procedures, selecting appropriate outcome measures, and managing data security, privacy and ethics considerations. Throughout the workshop, emphasis will be placed on using real-world case studies to illustrate key points and facilitated group activities to apply what is learned to the work of participants. By the workshop's conclusion, participants will be more attuned to key considerations in commissioning and conducting impact evaluations, resulting in evaluation projects that are less likely to generate uninformative conclusions. 

A prerequisite is an understanding of the concept of statistical significance.

ABOUT THE FACILITATORS

Zid Mancenido is the Senior Manager, Research and Evaluation at the Australian Education Research Organisation (AERO), where he leads a team of researchers to generate high-quality evidence, make high-quality evidence accessible, and enhance the use of evidence in Australian education. He concurrently holds an appointment as a Lecturer on Education at the Harvard Graduate School of Education, where he teaches a foundation course on impact evaluation and evidence. Zid holds a Masters degree in Education Policy and Management and a PhD in Education Policy from Harvard.

Dan Carr is a Program Director at AERO where he oversees a range of research and evaluation projects. He has a decade of experience in evaluation and public policy, including working for the Behavioural Insights Team (commonly known as the ‘nudge unit’), where he conducted evaluations of social, health and consumer policy programs. He has delivered training on conducting randomised controlled trials (RCTs) and other evaluation methods for public servants in several countries, and acts as a peer reviewer for RCTs commissioned by the UK-based Education Endowment Foundation. He holds a first-class Honours degree in Economics from Monash University.

> back to overview > register


Full day workshop – Scaling impact: New ways to plan, manage and evaluate scaling

presented by John Gargani  | INTERMEDIATE – ADVANCED     

WORKSHOP DESCRIPTION

In this one-day workshop, participants will learn a new approach to scaling the social and environmental impacts of programs, policies, products, and investments. The approach is based on the book Scaling Impact: Innovation for the Public Good, written by Robert McLean and John Gargani, and it is grounded in their collaborations with social innovators in the Global South. The workshop goes beyond the book, reflecting the authors’ most recent thinking, and challenges participants to adopt a new scaling mindset. Participants will be introduced to the core concepts of the book and then put what they have learned into practice through small-group, hands-on activities. The workshop is intended as an introduction, and participants will be provided with free resources to continue their learning.

Participants should have an intermediate or advanced understanding of evaluation. They should know what a logic model is and recognize that programs, policies, and products create impacts in complex environments. Participants may come from any field, sector, or functional role. Program designers, managers, and evaluators are welcome.

By the end of the workshop, participants will be able to define impact, scaling, operational scale, and scaling impact; use the four principles of scaling; address scaling risks; and apply the dynamic evaluation systems model.

The workshop strengthens theoretical foundations by helping participants acquire a new body of knowledge and theory about scaling. It encourages them to pay close attention to culture, stakeholders, and context by developing a new ‘scaling mindset’ that puts the people affected in control.

ABOUT THE FACILITATOR

John Gargani has 30 years of experience as an evaluator, researcher, writer, speaker, and teacher. He is a former President of the American Evaluation Association, coauthor of the book Scaling Impact: Innovation for the Public Good, and a frequent international speaker on topics related to scaling, program design, and impact measurement. Currently, he is conducting research on new evaluation methods that integrate diverse understandings of impact and value. He holds a Ph.D. from the University of California, Berkeley, where he studied evaluation and measurement; an M.S. in Statistics from New York University; and an M.B.A. from the Wharton School of the University of Pennsylvania.

> back to overview > register


Half day (morning) workshop – Insights on Indigenous evaluation co-design in the Warlpiri Yitakimaninjaku, warrirninjaku, payirninjaku manu pina-jarrinjaku project

presented by Marlkirdi Rose Napaljarri, Kathleen Gibson (MK) Napurrula, Tyrone Spencer Japaljarri, Jamie Gorman, Alex Gyles | FOUNDATIONAL – INTERMEDIATE

WORKSHOP DESCRIPTION

In this workshop, a team of four Warlpiri community researchers will share their insights into the principles and practices of Indigenous-led evaluation, drawing on their experience of developing and implementing the Yitakimaninjaku, warrirninjaku, payirninjaku manu pina-jarrinjaku (YWPP) 'Tracking and learning' project. YWPP exemplifies creative and adaptive monitoring, evaluation and learning that is rooted in local context, culture and ways of knowing and valuing. 

By the end of the workshop attendees will be able to:

  • appreciate how evaluation approaches can be co-designed in ways that weave together Indigenous and non-Indigenous worldviews
  • analyse the challenges and opportunities of working in complex and evolving language ecologies and cultural protocols
  • apply good practice in community research with Indigenous communities in order to be sensitive to local culture and contribute towards self-determination. 

This workshop will adopt a dialogical teaching and learning strategy based on Indigenous pedagogic approaches. Learning resources will include a multimedia presentation, discussion and practical activities. 

The workshop is suitable for participants with any level of evaluation experience who are new to working with Indigenous communities. It may also be suitable for participants with experience working with Indigenous communities but who wish to gain an understanding of Indigenous evaluation approaches.

The workshop will support evaluators to build knowledge, skills and values in relation to the professional competency standards from Domain 1 (Evaluative Attitude and Professional Practice), Domain 3 (Attention to Culture, Stakeholders, and Context) and Domain 6 (Interpersonal Skills). 

As an exploration of an innovative approach to partnership and co-design in evaluation by Indigenous people, this workshop sits within the category of ‘New tools, approaches and ways of thinking for a transforming context’.

ABOUT THE FACILITATORS

Marlkirdi Rose Napaljarri is a senior researcher at the Institute for Human Security and Social Change at La Trobe University who hails from Lajamanu community in the Northern Territory. She has over 20 years’ experience as an educator, mentor, linguist and researcher. She plays an important role in translating materials and concepts for two-way learning in Warlpiri communities. Marlkirdi wants to be a positive role model for her community. She oversaw the creation of the YWPP framework and the WETT map, and provides guidance and mentoring support to the YWPP team of Warlpiri and non-Warlpiri practitioners.

Kathleen Gibson (MK) Napurrula is a Community Researcher from Nyirrpi. She attended school in both Nyirrpi and at Yirara Boarding School in Mparntwe (Alice Springs). She believes that boarding school can help young people learn new things and grow in confidence, and wants young Warlpiri people today to have the same opportunities. 

She started her career working at Nyirrpi Childcare, as she has a passion for working with children and young people. This passion led her into a long-term career working to support young people in her Community. This included working for the Warlpiri Youth Development Aboriginal Corporation (WYDAC), where MK first became a Committee Member and then went on to become a Cultural Advisor and Board Member. MK also has experience working for World Vision’s Unlock Literacy Program and at PAW Media. MK is currently working as a Local Language Assistant Teacher at Nyirrpi School, working with children one-on-one to support strong Warlpiri Language and Culture, under the school’s Bilingual Education program. MK was elected as a new GMAAAC Committee member at the start of 2023, and is on Nyirrpi Community’s Local Authority Board.

MK became interested in WETT’s YWPP Community Research Program because she believes it is important to listen to the needs and aspirations of young people today. She is keen for action, and wants to see young kids grow up strong and confident in themselves, their Culture, Language and Knowledge, so that they can follow their hearts and become her Community’s future leaders.

Tyrone Spencer Japaljarri is a community researcher from Yuendumu. After time serving in the Australian Army, he followed in the footsteps of his mother, who retired from teaching after forty years, and began working in the school and the Night Patrol at Yuendumu. He became interested in WETT’s community research program to help kardiya to understand Yapa ways. He has worked as a community researcher for several years and has good relationships in communities. He creates a comfortable and relaxed feeling for Yapa participating in communities. Tyrone has a strong understanding of Yapa culture that helps the research team to work in respectful ways. For Tyrone, the unity that comes when the community gets together for cultural business is really important.

Jamie Gorman is the Monitoring, Evaluation and Learning Co-ordinator for the Warlpiri Education and Training Trust, based in the Central Land Council. Jamie is an Irish community development practitioner who is passionate about community-driven research and evaluation to support social change. He has worked in community development and youth work practice, research and education roles at local, national and international levels for almost two decades.

Alex Gyles is a Research Fellow working in Monitoring, Evaluation and Learning (MEL) at the Institute for Human Security and Social Change, La Trobe University. He works closely with Marlkirdi Rose on the YWPP project, which he finds an exciting learning experience in MEL where local cultural protocols are fundamental to effective design and delivery. He has over ten years of experience supporting processes of social change, particularly in community development, governance, and MEL with Aboriginal land councils in the Northern Territory and Western Australia. He has a Bachelor of Arts with Honours in Anthropology and Politics, and a Master of Public Policy and Management.

> back to overview > register


Half day (afternoon) workshop – Evaluation and philanthropy: Designing funder evaluations to support strategic wayfinding

presented by George Argyrous and Elizabeth Branigan | INTERMEDIATE 

WORKSHOP DESCRIPTION

This workshop will build intermediate-level skills in designing evaluations that enable purposeful, strategic decision-making and action by funders. It assumes some general knowledge of measurement, evaluation, and learning, and will build on this to design evaluations tailored for Australian funders.

At the end of the 3.5-hour workshop, participants will be able to:

  • craft a framework for a funder evaluation
  • determine how to gather data with a lens on improving funder practices and strategy
  • frame an evaluation report that can directly contribute to improved strategic outcomes.

In the first half of the session, the co-facilitators will be joined by two Australian funders to explore case studies of recent evaluations that were designed to move beyond compliance and contribute to organisational learning and change. Rich, multi-directional discussions will be encouraged to engage with the nuance of what participants need to know and to understand how this could work for them.

In the second half, participants will work in small groups to design evaluations for their organisational contexts that yield findings that can inform organisational learning and strategic decision-making, fostering a shift from compliance-driven evaluations to those that actively inform program improvements and innovations. 

By the end of the workshop, participants will have the skills and knowledge necessary to design evaluations that enhance the credibility of their programs and provide meaningful insights for partners, stakeholders, and their own organisational learning. 

This workshop curates a dynamic range of skills, practice, and expertise from the Phil Eval network that will empower philanthropic funders through sharing knowledge, skills, connections, and resources to build their MEL capabilities and drive better philanthropy. It represents a unique opportunity for evaluators to advance the quality and impact of funders’ investments, fostering a culture of continuous improvement and learning within the Australian philanthropic and government funding sectors. 

ALIGNMENT TO LEARNING COMPETENCY FRAMEWORK

This workshop aligns with Domains 1, 3, 4, 6 and 7 of the AES Professional Learning Competency Framework.

ABOUT THE FACILITATORS

George Argyrous is currently the Head of Measurement, Evaluation, Research, and Learning at the Paul Ramsay Foundation. He has previously worked in the university sector in teaching and research, and with other organisations to improve their evidence-based decision-making. He has conducted many evaluations and has authored several high-level government evaluation frameworks in areas as broad as disaster recovery and countering violent extremism. He has a particular passion for capability building so that practitioners can become more critical users of evidence to inform their work.

Liz Branigan is an expert in facilitation and capability development, with more than 25 years’ senior experience in adult education. She is the Manager of the Australian Philanthropic Evaluation Network, a collaborative network of funder evaluators who share insights, challenges, and solutions for effective and ethical evaluation, currently hosted by Philanthropy Australia. This is a national initiative to build the evaluation capacity and practice of diverse funders and philanthropic organisations across Australia. Liz works purposefully in ways that empower funders to evaluate their impact, sustainability, and alignment with their mission and values.

> back to overview > register


Full day workshop – Using behavioural science in evaluation

presented by Kizzy Gandy  | INTERMEDIATE – ADVANCED

WORKSHOP DESCRIPTION

Behavioural science addresses the question ‘What causes people to do X and what can we do to influence those factors?’. Behavioural science is a natural fit with evaluation in two ways:

  1. Theory-based evaluation examines how mechanisms of change drive outcomes. Mechanisms of change are the causes of behaviour. Behavioural science gives us deeper insight into these mechanisms of change, and how to measure them.
  2. Utilisation-focused evaluation must identify practical strategies to influence behaviour in order to provide recommendations for program improvement. Behavioural science offers tools to develop behaviour change strategies and understand the contexts in which they are likely to work.

Few evaluators have the skills or confidence to integrate behavioural science into their work. This workshop provides an introduction to behavioural science, worked examples of applying behavioural science to theory-based evaluation, and guidance on putting the approach into practice.

At the end of the workshop, participants will know the history of behavioural science, how behavioural science is used globally, how to develop behaviour change strategies, how to integrate behavioural science into theory-based evaluation, and where to find additional resources. This is an intermediate workshop for those who already have an understanding of theory-based evaluation. The teaching strategies that will be used include:

  • games
  • presentation
  • group work
  • worked examples
  • handbook for future reference.

ABOUT THE FACILITATOR

Kizzy Gandy is an expert in evaluation, behavioural science, and public sector innovation. With 20 years of experience, she has overseen more than 70 evaluations and helped governments and NGOs in more than 20 countries address a range of policy challenges. Kizzy founded and directs Verian's evaluation practice. She has developed the application of behavioural science in evaluation to improve outcomes for public sector clients across Australia. She previously worked for the NSW Government’s Behavioural Insights Unit and the UK Behavioural Insights Team.     

> back to overview > register


Full day workshop – Navigating AI in evaluation: From basics to advanced applications

presented by Gerard Atkinson and Tuli Keidar  | FOUNDATIONAL – INTERMEDIATE

WORKSHOP DESCRIPTION

In the ever-evolving landscape of policy and program evaluation, this workshop aims to equip intermediate-level professionals with a comprehensive understanding of Artificial Intelligence (AI) and its strategic integration into the evaluation process. The workshop comprises five distinct subsessions, each addressing crucial aspects of AI in evaluation.

The purpose of the workshop is to demystify AI, offering participants: 

  • a foundational knowledge base in AI models and approaches
  • an appreciation of ethical considerations in applying AI
  • insights into the range of AI tools available to evaluators
  • practical experience in applying AI tools, including prompt engineering for evaluation contexts, and
  • resources to enable effective evaluation that leverages AI technologies. 

The instructional methods will include a blend of presentations with recap quizzes, case discussions to stimulate critical thinking, and hands-on practical exercises applying OpenAI's ChatGPT to real-world scenarios. This diverse approach ensures an engaging learning experience that caters to the different learning styles of participants.

TARGET AUDIENCE

The target group for this workshop is intermediate-level professionals in the field of policy and program evaluation. While no specific prerequisites are mandated, participants with a basic understanding of evaluation concepts will benefit most from the workshop. No prior experience of AI is required, though participants will need access to a computer, tablet or phone to complete exercises.

ALIGNMENT TO LEARNING COMPETENCY FRAMEWORK

This workshop aligns with domains 4 (Research Methods and Systematic Inquiry) and 5 (Project Management) of the Evaluators' Professional Learning Competency Framework. It caters to professionals seeking to enhance their evaluation practices by incorporating cutting-edge AI techniques.

ABOUT THE FACILITATORS

Gerard Atkinson is a Director at ARTD Consultants who leads the Victorian practice and oversees the Learning and Development program for the firm. He has worked with big data and AI approaches for over 20 years, originally as a physicist and then as a strategy consultant and evaluator. Gerard has delivered research on the applications of AI to evaluation, including in qualitative data analysis, natural language processing, and rubric synthesis and application. This work with AI and rubrics intersects with his research on the application of rubric approaches to program and policy evaluation, including as a method for characterising impact across policy portfolios.

Gerard has an MBA in Business Analytics from Southern Methodist University in Dallas, Texas, where he majored in the applications of machine learning to operational data. Gerard has previously presented at AES conferences on big data (2018) and on experimental tests of AI in evaluation (2023). As well as his affiliation with the AES, Gerard is a Qualified Professional Researcher of The Research Society (TRS), and a Graduate of the Australian Institute of Company Directors (AICD). He is also a non-executive director of the Social Impact Measurement Network Australia (SIMNA), and of Brophy Family and Youth Services, a place-based non-profit in Warrnambool, Victoria.

Tuli Keidar is an evaluation consultant at ARTD with experience in a wide range of sectors, including technology, health, education, and disaster response. He is a key member of the ARTD AI working group, which focuses on developing and implementing ethical AI practices such as machine learning and retrieval-augmented generation in the firm's consulting work. Tuli also contributed to the creation of ARTD's internal AI policy, which guides the ethical and safe use of AI technologies and ensures ongoing revision of the policy to reflect the latest AI advancements and changes in data security requirements. In addition to his professional role, Tuli uses his AI expertise to volunteer with disadvantaged youth, employing language models and AI image generators to create tailored educational content and interactive games.

Prior to working in research, Tuli worked as a company manager, operations manager and educator in the specialty coffee sector. Drawing on a modular education model, Tuli launched a barista and coffee roaster training school, developed the course structures and content, and managed its launch and marketing.

> back to overview > register

