Tetra Tech International Development: Diamond Sponsor Exclusive Workshop Partner

Conference workshop program: Monday 16 September 2024

>>> DOWNLOAD a printable conference workshop program

View the Tuesday conference workshop program here.

The following categories will help you select sessions best suited to your interests: Foundational – Intermediate – Advanced
For details on each category, see here.

8–9am REGISTRATION
9am–12:30pm WORKSHOP PROGRAM

Full day workshop:
Wayfinding in measurement, evaluation and learning – for new and emerging evaluators

Presented by Shani Rajendra, Jessica Suares

FOUNDATIONAL 

> Details
> Register

Full day workshop:
Beyond box-checking: Reflective practice and collaborative deliberation for socially conscious evaluation work

Presented by Ayesha Boyce, Tiffany Tovey, Neelakshi Rajeev Tewari, Stacy Huff

FOUNDATIONAL – INTERMEDIATE 

> Details
> Register

SOLD OUT

Full day workshop:
Evaluation and value for money – shifting the locus of power to affected communities

Presented by Julian King, Nan Wehipeihana

INTERMEDIATE – ADVANCED

> Details
> Register

Full day workshop:
Program logic and beyond: Whose is it and how do we engage with First Nations ways of learning, thinking and being?

Presented by Carol Vale, Marion Norton

FOUNDATIONAL – INTERMEDIATE

> Details
> Register

Half day workshop:
Evaluation and action learning to support implementation

Presented by Lauren Heery, Claire Jennings, Alice Ghazarian

INTERMEDIATE 

> Details
> Register

Half day workshop:
Designing and using surveys in realist (and other theory based) evaluation

Presented by Gill Westhorp, Cara Donohue

INTERMEDIATE – ADVANCED

> Details
> Register

12:30–1:30pm LUNCH
1:30–5pm WORKSHOP PROGRAM

Rajendra, Suares workshop continued

Boyce, Tovey, Tewari, Huff workshop continued

King, Wehipeihana workshop continued

Vale, Norton workshop continued

Half day workshop:
Qualitative causal analysis and impact evaluation using the QuIP

Presented by James Copestake

INTERMEDIATE

> Details
> Register

Half day workshop:
Using system dynamics for evaluations in complex systems

Presented by Jacquie Davison, Chris Browne and Miriam Spano

FOUNDATIONAL – INTERMEDIATE

> Details
> Register

Workshop details


Full day workshop – Wayfinding in measurement, evaluation and learning – for new and emerging evaluators

presented by Shani Rajendra and Jessica Suares | FOUNDATIONAL

WORKSHOP DESCRIPTION

The first steps into the world of evaluation can be daunting. This workshop will equip new and emerging evaluators with an understanding of Measurement, Evaluation and Learning (MEL) so they can engage confidently in the evaluation community, as well as in the forthcoming conference. 

The workshop will:

  • introduce attendees to the foundations of MEL, including how to develop a MEL plan and theory of change
  • answer common questions that trip up emerging evaluators, including the distinction between common types of evaluation
  • provide attendees with access to platforms and resources that will best meet their needs. 

Finally, the workshop will provide attendees with an opportunity to engage with the MEL community and discourse beyond the conference. Over the last two years, Clear Horizon has been engaging with around ten principles to guide contemporary MEL practice. We invite participants to unpack and critique these emerging principles, joining other evaluators in that discussion and providing feedback online.

This workshop will be interactive and firmly pitched at a foundational level, explicitly designed to foster collaboration and engage a non-expert audience. Attendees will work in small groups, use handouts and case studies to make concepts tangible, and be shown publicly available Clear Horizon Academy resources. This training draws on Clear Horizon’s highly regarded Academy, which has over 500 learners per year, as well as the development of hundreds of MEL plans and systems.

ALIGNMENT TO LEARNING COMPETENCY FRAMEWORK

By introducing the fundamentals of MEL in an environment that encourages enquiry and self-reflection, this workshop addresses Domains 1 and 2 of the Evaluators' Professional Learning Competency Framework. 

TARGET AUDIENCE

The workshop is categorised as building foundational evaluation skills and capabilities; to realise this goal, attendance will be restricted to those with two years of experience or less in the field of evaluation.

ABOUT THE FACILITATORS

Shani Rajendra is a Principal Consultant at Clear Horizon and has been working in the field of evaluation for over 6 years. Shani specialises in incorporating design thinking into evaluative practice and primarily works in community-led or systems change interventions as well as in organisational strategy. Shani has co-developed over 20 MEL plans, including with diverse community groups and in complex settings. Shani co-convenes the AES Design and Evaluation Special Interest Group (DESIG), and has presented and/or co-hosted the annual DESIG Learning Sprint Series since 2020. She has also presented on evaluation at other conferences such as AES 2023, the 2019 SDNOW 4 pre-conference event and the Social Enterprise Unconference 2021.

Jessica Suares is a Consultant at Clear Horizon and an emerging evaluator, with less than a year of experience in the field. Drawing on her own confusions and hurdles when entering the field, Jessica is passionate about accessible MEL education, creating safe spaces to ask questions, and creating pathways for emerging evaluators to engage more deeply with the wider evaluation community. Her background in science communication – teaching complex scientific concepts to non-technical audiences – means she understands the importance of jargon-free explanations in creating understanding. She has delivered presentations on behalf of Monash University and CSIRO. Jessica currently sits on the SIMNA board of directors.

> back to overview > register


Full day workshop – Beyond box-checking: Reflective practice and collaborative deliberation for socially conscious evaluation work 

presented by Ayesha Boyce, Tiffany Tovey, Neelakshi Rajeev Tewari and Stacy Huff  | FOUNDATIONAL – INTERMEDIATE     

WORKSHOP DESCRIPTION

Reflective practice and collaborative deliberation can serve as a compass to navigate the ever-present polycrisis facing our modern world. 

In a journey akin to the ancient art of wayfinding, evaluators must be intentional and thoughtful, aware of their positionalities, and interpersonally attuned as they traverse complex spaces to engage global and local wicked problems critically and systematically with a socially conscious perspective. Unfortunately, evaluators are often forced to check boxes, primarily focusing on accountability. This leaves us with few resources and little time and space to leverage the power of our reflective capabilities to make wise decisions and gain nuanced insights about how our programs and policies do or do not contribute to the common/social good.

This interactive workshop is based on evidence collected from a five-year research-on-evaluation project that examined how and in what ways evaluators and their clients defined, implemented (program), and collected data (evaluation) around the topics of diversity, equity, and inclusion (DEI). 

In this interactive workshop, we will 

  1. review and define the key concepts of reflective practice, dialogue, and deliberation, 
  2. share a framework and strategies for reflective practice that encourage deeper engagement with problems of practice in project and evaluation work, and, 
  3. utilizing vignettes and reflective prompts, engage attendees in dialogue regarding how they can use these tools in their own practice.

With more than 35 years of combined experience, Ayesha, Tiffany, Neelakshi, and Stacy have over 25 published manuscripts and over 50 conference presentations. They have also conducted 20 workshops on the topics of reflective practice, conflict resolution, compassion fatigue, facilitation, DEI, and/or social justice in evaluation.

ABOUT THE FACILITATORS

Ayesha Boyce is an associate professor in the Division of Educational Leadership and Innovation at Arizona State University. She also co-directs the STEM Program Evaluation Lab. Boyce’s scholarship focuses on attending to value stances and issues related to diversity, equity, inclusion, access, cultural responsiveness, and social justice within evaluation – especially in multi-site and STEM programs, and in contexts with historically and systematically marginalized populations. She also examines teaching, mentoring, and learning in evaluation. Boyce is a 2019 American Evaluation Association (AEA) Marcia Guttentag Promising New Evaluator Award recipient and an AEA Board member. She has presented in North American, Latin American, and African forums.

Tiffany Tovey is the Director of the UNC Greensboro Office of Assessment, Evaluation, and Research Services (OAERS) and a Clinical Assistant Professor in the Information, Library, and Research Sciences department. Since 2010, she has conducted evaluations in multiple sectors, including K12, higher education, STEM, and community initiatives. Her current research interests revolve around the topics of reflective practice (self and contextual awareness), interpersonal communication, evaluator responsibility, and the role of ignorance in evaluation practice. She leads numerous evaluation and research projects and teaches classes and workshops in evaluation, research methods, interpersonal skills, and reflective practice.

Neelakshi Rajeev Tewari, M.Ed., is a doctoral candidate in the Arizona State University Educational Policy and Evaluation Ph.D. program. She holds a Masters in Education from the University of Oxford and has previously worked with Google, the World Bank, and Ashoka University – India’s premier liberal arts university. She is a graduate researcher in the STEM Program Evaluation Lab. She is the graduate student lead on a project that examines how evaluators reflect on, define, and measure equity, diversity, and inclusion in US National Science Foundation-funded Advanced Technological Education projects. Her research interests include issues of access and equity in higher education, social transformation through higher education, and internationalization.

Stacy Huff, M.S. is a doctoral candidate in the University of North Carolina Greensboro Educational Research Methodology Ph.D. program on the Program Evaluation track, with a Peace and Conflict Studies certification. She is a graduate researcher in the STEM Program Evaluation Lab and Office of Assessment, Evaluation, and Research Services. She brings extensive experience in facilitation, having led various workshops, including a multi-week session on project planning and workshops focused on conflict resolution and compassion fatigue. Her dissertation examines the relationship between the role of an evaluator and compassion fatigue. 

> back to overview > register


SOLD OUT – Full day workshop – Evaluation and value for money – shifting the locus of power to affected communities

presented by Julian King and Nan Wehipeihana | INTERMEDIATE – ADVANCED

WORKSHOP DESCRIPTION

It is important for good resource allocation, accountability, learning and improvement that policies, programs and other initiatives undergo rigorous evaluations of value for money (VfM). Many evaluators, however, lack training and experience in this area. This workshop offers a set of techniques to address that gap. 

This full-day, interactive workshop places a spotlight on evaluative rubrics as a powerful tool for synthesising criteria, standards, and evidence to make warranted judgements about VfM. Participants will delve into key topics including the collaborative development of rubrics with stakeholders, utilising rubrics for evaluative judgements, and using mixed methods evidence in VfM assessment. The workshop aims to equip evaluators to conduct VfM assessments that share power with stakeholders, elevating the context-specific values of the groups and communities who are supposed to benefit from a policy, program or intervention. 

Key topics include: 

  • What is value for money? 
  • What are rubrics and why use them? 
  • A step-by-step process for developing and using rubrics in evaluation 
  • Co-developing context-specific VfM criteria with stakeholders 
  • Making sense of mixed methods evidence with stakeholders and rubrics 
  • Reporting findings that reflect local values and meet funders’ needs. 

The workshop will cover the practical steps involved in designing and implementing a VfM framework, as well as some core evaluation theory and conceptual principles underpinning the approach, along with tips and tricks for applying the approach in the real world. It includes a series of short PowerPoint presentations, group discussions and interactive exercises. Participants will receive optional pre-workshop reading and a post-workshop take-home pack including a copy of the slides and links to online resources. 

The workshop doesn’t provide detailed instruction in the design and implementation of economic evaluations. There are courses already on offer that focus on economic methods of evaluation. 

ABOUT THE FACILITATORS

Julian King specialises in evaluation and value for money, helping decision-makers, evaluation teams and stakeholders to use evidence and explicit values to provide evaluations that contribute to well-informed decisions and positive impacts for communities. Through PhD research, Julian developed the Value for Investment (VfI) system, which is now used worldwide to evaluate complex and hard-to-measure programs and policies (e.g. see King et al., 2023). In 2021 Julian received the AES Evaluation Systems Award in recognition of the innovative nature and widespread use of VfI. Julian has over 20 years of experience in workshop facilitation.

He has delivered workshops for evaluation associations, private companies and NGOs on every continent except Antarctica. He is the Director of Julian King & Associates Ltd, a member of the Kinnect Group, an Associate of Oxford Policy Management, an Honorary Fellow at the University of Melbourne and University Fellow at the Northern Institute. 

Nan Wehipeihana is an independent Māori evaluator from Aotearoa New Zealand and the Director of a bespoke kaupapa Māori company specialising in evaluation, research and policy – with a focus on Māori whānau (extended families), hapū (kinship groups), iwi (tribes) and Māori organisations and providers. She is passionate about protecting, evidencing and growing the space to be Māori in Aotearoa and is well known for her work in indigenous evaluation, developmental evaluation and culturally anchored frameworks. She is the co-editor of one of only two textbooks on Developmental Evaluation, and has run workshops and masterclasses on Developmental Evaluation in Australia, Canada, Indonesia, New Zealand and the United States.

An internationally leading Māori evaluator with more than 20 years’ experience, Nan was inducted as a Fellow of the Australian Evaluation Society and has twice been awarded the Australian Evaluation Society Policy and Systems award (2013 and 2000). She has keynoted on kaupapa Māori and Indigenous Evaluation in Canada (2018), New Zealand (2015), Australia (2014), South Africa (2011) and Germany (2011).

> back to overview > register


Full day workshop – Program logic and beyond: Whose is it and how do we engage with First Nations ways of learning, thinking and being?

presented by Carol Vale and Marion Norton | FOUNDATIONAL – INTERMEDIATE

WORKSHOP DESCRIPTION

This interactive, hands-on workshop will develop your skills and confidence to create, critique and cultivate program logic thinking in contexts you know well and in brand-new contexts you have just been introduced to. It will take away the mystery and you will be amazed how easy it is. After all, it is logical!

But what if the program logic is limiting our understanding of the real world? Why doesn’t our carefully constructed program logic work? Why does it change during the evaluation process? 

Using quotes and good news stories from stakeholders in Aboriginal and Torres Strait Islander communities, participants will adopt personas and develop a program logic from different perspectives. Concepts of listening and hearing, values and principles, will be foregrounded. First Nations people are encouraged to attend and share their voice.

Guided by descriptions of Indigenous ways of knowing, learning and being, participants will be invited to challenge the assumptions behind their own constructed logics and their world view, and to consider other ways of thinking with a cultural lens, a systems lens, a discipline/profession lens, a service provider lens and, importantly, a (service) receiver lens and a community lens.

TARGET AUDIENCE

The workshop is pitched at foundational and intermediate levels but advanced evaluators are welcome to attend to explore new approaches and ways of thinking about evaluation involving First Nations people.

ABOUT THE FACILITATORS

Carol Vale is a Dunghutti Woman from Armidale, NSW, and Co-Founder and CEO of Murawin. She is a sociologist, specialist inter-cultural consultant and facilitator. She specialises in working with clients across a range of industries, enhancing their organisational capacity through social research, community consultations, stakeholder engagement and evaluation services. Carol draws on her extensive career in the public sector and provides practical tools and professional services that are culturally sensitive and outcome-focused, while bringing her history and culture. Carol has worked in senior management in Aboriginal Affairs, education, housing, disabilities, and child protection.

Carol’s qualifications include a Graduate Diploma in Public Sector Leadership, Griffith University (2007); a Master of Indigenous Studies, Southern Cross University (2005); and a Bachelor of Arts, University of New England (1999).

Marion Norton has had a long career exploring ways to put the people in need at the centre of service provision. Over the last 30 years Marion has had many roles as an evaluator in Queensland, involving services for adults and young people across education, family and child services, health and criminal justice programs. Marion has worked in both government and non-government roles and enjoys mentoring evaluators in using quantitative and qualitative evaluation techniques to describe participant experiences and give evidence of program impacts.

> back to overview > register


Half day (morning) workshop – Evaluation and action learning to support implementation 

presented by Lauren Heery, Claire Jennings and Alice Ghazarian | INTERMEDIATE

WORKSHOP DESCRIPTION

As evaluators we are often commissioned to prove the value of programs and innovations by measuring program outcomes. However, implementation issues often get in the way of being able to provide this ‘proof’. Implementation, particularly in human services, is challenging, and changing practice is hard to do. Finding out at the end of an evaluation that implementation didn’t go as planned doesn’t help us improve. 

Our experience shows us that there is value in taking an iterative, action learning approach to evaluation. This involves sharing findings about implementation in real-time and supporting action to improve implementation. This work is further strengthened by an initial clarificatory evaluation phase, where we articulate the program’s logic or theory of change. 

The purpose of this workshop is to build participants’ capacity to use evaluation to enable real-time program implementation, improvement and adaptation. Participants will leave with:

  • an understanding of the value of clarificatory evaluation in setting implementation evaluation up for success, and how to approach this
  • an understanding of how to apply an action learning approach to evaluation to support program implementation and adaptation
  • an awareness of the enablers and challenges to taking an action learning approach and how to address these.

We will address the following:

  • Clarificatory evaluation and the use of program logic/theory of change
  • Action learning cycles, including collaborative sense-making of data and identification of actions.

The workshop will include presentations, facilitated reflective conversations, and applied group activities. The workshop is pitched to those new to action learning approaches, but assumes some basic knowledge of formative evaluation. 

ALIGNMENT TO LEARNING COMPETENCY FRAMEWORK

The workshop will cover the following Professional Learning Competency Framework domains: 1. Evaluative Attitude and Professional Practice; 2. Theoretical Foundations; 6. Interpersonal Skills; 7. Evaluation Activities.

ABOUT THE FACILITATORS

Lauren Heery (MPH, BPhysio(Hons)) is an evaluator and facilitator with nearly 15 years’ experience at the Centre for Community Child Health (at the Murdoch Children’s Research Institute) supporting organisations, service systems and communities to use and generate evidence to improve services and conditions for children and families. Lauren has managed complex formative and summative mixed methods evaluations; regularly delivers evaluation and action learning training and coaching; has presented at several conferences; and has been published in EJA. 

Alice Ghazarian (MPH, BSci) is an evaluator and researcher with over a decade of experience at the Centre for Community Child Health. Alice has led a range of evaluation projects, and is skilled in action learning, process evaluation, outcomes evaluation and developmental evaluation. Alice has presented her work several times at national conferences.  

Claire Jennings (BBSc(Hons)) has worked in the field of evaluation since joining the Centre for Community Child Health in 2010. She has managed several formative and summative mixed methods evaluations for both government and not-for-profit organisations, primarily in the health and education sectors. Claire has presented at conferences several times, most recently in late 2023 at the Early Childhood Australia Conference (Adelaide) and the Evidence and Implementation Summit (Melbourne).    

> back to overview > register


Half day (afternoon) workshop – Qualitative causal analysis and impact evaluation using the QuIP

presented by James Copestake  | INTERMEDIATE 

WORKSHOP DESCRIPTION

The workshop introduces and reflects upon qualitative (including theory-led) approaches to generating credible and cost-effective evidence of the causal effects of health, education, income generation, social and community development activities in complex contexts. 

Building on a decade of designing and using the Qualitative Impact Protocol (QuIP) in more than thirty countries (see www.bathsdr.org), the workshop will provide a step-by-step outline of how to conduct a QuIP study. This includes initial consultations, design, data collection, coding, analysis, interpretation and use.

Real-world examples will be used to explore how to address key methodological challenges, including how to address potential cognitive biases, ensure cost-effective case/source selection, code and analyse data rigorously and transparently, integrate multiple methods effectively, and make use of visual mapping to communicate findings clearly and ensure their effective application. In addition to addressing how to conduct a QuIP study and why, the workshop will throw light on related approaches including contribution analysis, outcome mapping, process tracing, most significant change and realist evaluation. 

James Copestake, the convener, led the action research behind the design and development of the QuIP. Having started out as an economist, he is well placed to compare qualitative and quantitative approaches to impact evaluation, as well as their integration.

By the end of the session participants should have a richer understanding of threats to generating evidence of causation and how to address them, and the strengths and limitations of using the QuIP compared to other approaches.

ABOUT THE FACILITATOR

James Copestake is Professor of International Development at the University of Bath in the UK, where he is also Director of Studies for the Doctorate in Policy Research and Practice at the Institute of Policy Research.

His publications range broadly across international development evaluation, finance, management, political economy and theory. They draw on collaborative research conducted particularly in Bolivia, Britain, Ethiopia, Ghana, India, Malawi, Peru and Zambia.

His recent work has focused on mixed methods impact evaluation, particularly incorporating use of the Qualitative Impact Protocol (QuIP) and based on evaluation work conducted through Bath Social and Development Research Ltd (www.bathsdr.org), a social enterprise he cofounded in 2016. He is also currently a trustee of Opportunity International UK.     

> back to overview > register


Half day (morning) workshop – Designing and using surveys in realist (and other theory based) evaluation

presented by Gill Westhorp and Cara Donohue  | INTERMEDIATE – ADVANCED

WORKSHOP DESCRIPTION

Realist Evaluation (RE) is a type of theory based evaluation. By explaining how, why, and in what contexts programmes are, and are not, effective, it supports programme personnel to ‘find their way’ to effective outcomes – including for those not well-served by mainstream programmes.

Although RE was intended to be mixed-methods, most published realist evaluations use only qualitative methods. Well-designed surveys can considerably strengthen realist (and other theory based) evaluations. Differences between realist and other surveys relate to survey samples, content, and analysis. However, there is no guidance for developing surveys to be “fit for realist purpose”.

This workshop aims to fill that gap. It will enable participants to understand:

  • purposes for surveys in RE
  • methodological issues 
  • sampling design for realist surveys 
  • how to use surveys to test theories 
  • analytic methods.

The workshop structure will follow the process of using a survey in a realist evaluation, with exercises related to the decision to use a survey, developing the sample, developing the content, and planning for analysis. Key ideas will be supported with practical examples from real evaluations, with time for questions and discussion. It will enable evaluators to ‘find their way’ in using surveys in realist and theory-based evaluation.

TARGET AUDIENCE

This workshop is intended for participants with a working understanding of realist evaluation, and at least an introductory understanding of survey design and analysis. Sophisticated understanding of quantitative analytic methods is not required.

ALIGNMENT TO LEARNING COMPETENCY FRAMEWORK

The workshop aligns to Evaluator Competency 4, Research Methods and Systematic Inquiry, in particular:

  • prepare a research design that provides a coherent link to the objectives of the evaluation 
  • identify appropriate evaluative criteria and measures likely to generate valid findings 
  • design appropriate sampling methods to maximise learning and avoid bias 
  • employ valid quantitative methods with rigour, and where possible to statistical confidence levels.

ABOUT THE FACILITATORS

Gill Westhorp has over 20 years' experience in realist evaluation in international development, health and human services and community development. 

She leads a realist research and evaluation team at Charles Darwin University, is an adjunct Professor at RMIT University and a Visiting Professor at Northumbria University in the UK. She has conducted, led or advised over 70 realist research and evaluation projects and presented over 70 training workshops in 15 countries, and presented at many conferences. She is a Fellow of the AES, and in 2022 and 2023, she was named in the Stanford University list of the top 2% of most influential scientists in her field. 

Publications include the RAMESES standards for realist evaluation and realist review; Tilley, N. and Westhorp, G. (2020) Quantitative Evaluation Methods, a 10,000-word invited entry in the SAGE Encyclopedia, Foundations of Social Research Methods; and Westhorp, G. (2018) Revisiting mechanisms in realist research and evaluation, in Emmel, N., Greenhalgh, J., Manzano, A., Monaghan, M. and Dalkin, S. (Eds.) Doing Realist Research. London: SAGE.

Cara Donohue has been a research fellow with the Realist Research, Evaluation and Learning (RREALI) group at CDU’s Northern Institute since 2020 and in the evaluation field for 13 years. She specialises in evaluation, research, and program design, particularly using realist and theory-based approaches. Cara has a work background in the international and community development fields, and has worked in university, international and domestic NGO settings. She has experience with programs serving at-risk youth, low-income, disabled, refugee, Indigenous, and rural populations.

> back to overview > register


Half day (afternoon) workshop – Using system dynamics for evaluations in complex systems

presented by Jacquie Davison, Chris Browne and Miriam Spano  | FOUNDATIONAL – INTERMEDIATE 

WORKSHOP DESCRIPTION

The resurgent call for systemic approaches to complex problems in business, research and policy is well documented. This workshop will build awareness of the benefits of systemic approaches to evaluation. Introducing systems thinking and modelling tools, we will explore the commissioning and conduct of evaluations across a diversity of policy areas.

The workshop will provide participants with an introduction to systems thinking and system dynamics methods, applications, and available software solutions. Participants will be guided through the application of three useful system dynamics practices to the evaluative process.

Three key system dynamics practices addressed will be:

  1. identification of system features that can be used to describe a problem and a Theory of Change
  2. the practice of participatory modelling/group model building to bring diverse stakeholders together to build a dynamic hypothesis of a complex problem, and  
  3. applying causal analysis (through causal loop diagrams) to understand system leverage points and develop insights for ex-ante and ex-post evaluation.

ABOUT THE FACILITATORS

Jacquie Davison works as a research officer and project coordinator for the Sax Institute. She uses participatory systems modelling approaches to support policy agencies to explore complex public health problems and identify solutions through simulation, supporting policy decision making. Jacquie has a background in public health and healthcare policy in the Commonwealth government and in global health policy and programs in the Indo-Pacific, including the design and monitoring of health and education aid projects, and holds a Master of International Public Health (USyd).

Chris Browne is an academic focussing on building literacy in systems approaches at The Australian National University. He holds a PhD (ANU) in systems thinking and his facilitative teaching approach has been recognised with an Australian Award for Teaching Excellence. Chris's teaching and research include investigating conceptual models of complex systems, the methodology of problem-solving processes, strategies for developing intuition of dynamic systems, and processes for constructing and integrating shared conceptual models of systems, with application spaces in healthcare, climate change and education.

Miriam Spano is an accomplished professional in organisational development and management consulting. She holds degrees in international business administration (BBus) and marketing (BA), and is a graduate of the European Master in System Dynamics (University of Bergen). She is currently pursuing a PhD at Monash University, investigating how behavioural sciences support embedding systems thinking in public sector organisations.

> back to overview > register

