AlphaPlus is delighted to announce the opening of a new office branch in Bahrain. This strategic move underlines our commitment to providing high quality curriculum, qualification and assessment services to governments and other organisations throughout the Middle East.
The establishment of the Bahrain office will allow AlphaPlus to work more closely with our regional partners, ensuring that we can deliver our services with greater efficiency and responsiveness.
Commenting on the new office, Romy Short, Managing Director of AlphaPlus, said, “The opening of our Bahrain branch marks an exciting new chapter for AlphaPlus. This expansion allows us to strengthen our presence in the Middle East, and we are looking forward to collaborating even more closely with our clients and partners in the region to drive educational excellence.”
With offices in both the UK and Bahrain, AlphaPlus continues to set the standard for excellence in education. We look forward to the opportunities that this new office will bring and to the continued success in supporting our customers’ educational goals.
AlphaPlus has worked with a number of government agencies in the Middle East and North Africa (MENA) that want to align their Technical and Vocational Education and Training (TVET) with the demands of the local labour market. The aim is always the same: to provide TVET agencies with the information they need to invest in the right skills to meet national and local employer needs. Ultimately, the goal is to support economic growth.
Typically, the first phase of work will proceed in two parallel strands:
The outcome of the first phase of work is usually a report, summarising the work carried out and providing key findings and recommendations for the next phase of work.
International best practice is often a key influence. For example, in a recent project we produced a report which included a number of country case studies, where each case study covered:
Such a report lays out the landscape of best practice and how it relates to the needs of the client, and informs future stages of development.
Depending on client requirements, subsequent phases of work may include support for setting up an organisation or group similar in scope to the Unit for Future Skills in England. Such organisations gather labour market intelligence and data from across industry sectors, and provide data sets and visualisations to support a better understanding of current skill mismatches and future demand.
To help set up such a capability, AlphaPlus has worked with clients to propose and lead consultations on appropriate business, operational, and governance models. The proposed models are informed by the findings of the international review (see above), but also by the local context, as understood through the aforementioned stakeholder consultation.
AlphaPlus also helps organisations set up new capabilities by scoping, defining and documenting operational processes. For a recent MENA client, we defined key processes for a new skills organisation, and produced and consulted on accompanying procedural manuals, providing the start-up organisation with a solid foundation to grow from.
Tracking and acting on labour market information is, these days, fundamentally underpinned by technology and data. AlphaPlus has a long history of helping clients to scope and either develop or procure technology to meet business needs. Our work has ranged from carrying out consultations to understand and document the needs of training providers and employers for a digital learning and assessment platform, to leading on procurement for new digital platforms, to scoping and specifying systems to collect and visualise labour market intelligence and related vocational training. In a recent system design we produced for a TVET client, we identified:
We subsequently worked with the client IT department to advise them on more detailed design and implementation.
Consult, consult, consult. Fully engage with the client and their stakeholders to gain a full understanding of their operating environment, their requirements, and any constraints. Every skills ecosystem is different.
Take the client along with you. You have to know your stuff. If the client is not convinced of the expertise of your team, then any recommendations you make are unlikely to be well received. We only deploy people who have both expertise and experience in the area.
Deploy a multi-disciplinary team. A wide range of different skills and experience are required to lead consultations, carry out research, develop governance models, and understand data and technology. We deploy an “on the ground” team for consultations, but ensure they are backed up by a pool of other specialists who can be rapidly deployed as required.
Sample-based approaches to educational monitoring are assessments designed to measure the educational achievements of a representative sample of students across a country or jurisdiction. Unlike large-scale standardised tests that assess every student, these assessments aim to provide data on overall educational quality and trends at the national level without testing every individual. This approach is cost-effective, manageable, and allows for in-depth data collection while minimising testing burdens.
Sample-based national monitoring assessments typically measure a range of skills and knowledge areas essential to understanding student achievement, educational quality, and curriculum effectiveness. While the specific focus varies by jurisdiction, these assessments generally target core academic subjects, higher-order skills, and competencies relevant to students’ long-term success.
Sample-based assessments are widely used around the world to monitor national educational progress. These assessments help countries track trends in student achievement, inform educational policies, and provide insights into curriculum effectiveness. Notably, such assessments are used in jurisdictions such as the United States, Australia and New Zealand, although with varying approaches:
| Jurisdiction | Assessment | Features |
| --- | --- | --- |
| United States | National Assessment of Educational Progress (NAEP) | The NAEP is the largest ongoing sample-based assessment in the United States, administered by the National Center for Education Statistics (NCES). It measures student performance in subjects like reading, mathematics, science, and writing for students in grades 4, 8, and 12. NAEP results are used to track academic progress over time, make comparisons among states, and inform educational policy at the national and state levels. |
| Australia | National Assessment Program: Literacy and Numeracy (NAPLAN) Sample Assessments | Australia conducts sample-based assessments in specific curriculum areas such as science, information and communication technology (ICT), and civics and citizenship. These assessments provide insights into students’ proficiency in these domains, with results used to monitor trends and inform educational policy and curriculum design. |
| New Zealand | National Monitoring Study of Student Achievement (NMSSA) | New Zealand’s NMSSA assesses a nationally representative sample of students in Years 4 and 8 across learning areas of the New Zealand Curriculum. Results are used to monitor trends in achievement over time and to inform educational policy and curriculum design. |
These assessments use a statistically representative sample of students to estimate national performance levels, often based on demographics such as age, grade level, location, and socioeconomic factors.
For example, in a national assessment of 4th-grade mathematics, a sample of schools and classrooms from urban, suburban, and rural areas would be selected to mirror the national student population.
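The idea of mirroring the national population can be sketched as a proportional stratified allocation: each stratum receives a share of the total sample matching its share of the population. The strata names and counts below are hypothetical, purely for illustration:

```python
# Proportional stratified sample allocation (illustrative sketch).
# Each stratum's sample size is proportional to its population share.

def allocate_sample(population_by_stratum, total_sample):
    """Return per-stratum sample sizes proportional to population shares."""
    total_pop = sum(population_by_stratum.values())
    return {
        stratum: round(total_sample * count / total_pop)
        for stratum, count in population_by_stratum.items()
    }

# Hypothetical national counts of 4th-grade students by school location.
population = {"urban": 450_000, "suburban": 350_000, "rural": 200_000}

print(allocate_sample(population, total_sample=3_000))
# → {'urban': 1350, 'suburban': 1050, 'rural': 600}
```

In practice, national programmes layer further refinements on top of this (two-stage school-then-student sampling, oversampling of small groups, survey weights), but the proportional allocation is the starting point.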
Sample-based assessments are typically conducted periodically (e.g., every few years), allowing for trend analysis. By administering similar or comparable tests over time, policymakers and educators can observe changes in educational outcomes, track the effectiveness of reforms, and identify areas needing improvement.
Sample-based assessments often align with national or international educational standards, helping to determine if students are meeting expected competencies. Results can sometimes be benchmarked against international comparative assessments like PISA or TIMSS to see how national performance compares globally. These assessments also employ a sample-based approach.
The results of a national sample-based monitoring programme can inform policy decisions and curriculum adjustments by identifying strengths and weaknesses in the educational system. Results help in allocating resources more effectively, guiding professional development, and shaping educational initiatives.
Since only a sample of students participate, the testing burden on schools, teachers, and students is significantly reduced. This approach can be especially valuable in large countries or in systems with limited testing resources.
As with any assessment, designing an effective and valid assessment is strongly linked to the clarity of the purpose of the assessment. At an early stage, it should be established whether the programme aims to measure national trends, evaluate the impact of curriculum changes, identify disparities or provide insights into specific skill areas. A further important consideration is the intended use of the assessment outcomes and data – will results be used to inform policy, guide curriculum adjustments, improve teaching and learning practices or address equity issues? All decisions about assessment design must be driven by the assessment purpose to ensure validity.
At the outset, it will be necessary to determine an appropriate sample size that balances cost-effectiveness and manageability considerations with the need for robust and reliable data. A sufficiently large and diverse sample allows for robust statistical analysis while ensuring that key groups (e.g. particular demographic groups) are represented and that it is possible to stratify by characteristics such as school type, region or socioeconomic status if the assessment programme is intended to measure such aspects.
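As a rough illustration of this cost-versus-precision balance, the standard sample-size formula for estimating a proportion can be sketched as follows, inflated by a design effect to reflect the clustering of students within sampled schools. The figures are illustrative, not drawn from any real programme:

```python
import math

def required_sample_size(margin_of_error, confidence_z=1.96, p=0.5, design_effect=1.0):
    """Minimum sample size for estimating a proportion to within the given
    margin of error at the stated confidence level. A design_effect > 1
    inflates the size to account for clustering of students within schools."""
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n * design_effect)

# Simple random sample, ±3% at 95% confidence:
print(required_sample_size(0.03))                      # → 1068
# Sampling whole classes within schools typically needs far more:
print(required_sample_size(0.03, design_effect=2.5))   # → 2668
```

The design effect itself has to be estimated from prior cycles or pilot data, which is one reason trial administrations matter in programme design.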
A related task is to identify potential sources of bias in the sampling or test design. An important aspect of the statistical analysis will be to check that items are fair to all test-takers.
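One common statistical check on item fairness is differential item functioning (DIF) analysis. As a minimal sketch (with fabricated counts), the Mantel-Haenszel procedure compares two groups’ odds of answering an item correctly within strata of matched overall ability:

```python
# Mantel-Haenszel DIF sketch: compare the odds of a correct response for a
# reference and a focal group, within strata of matched ability (total score).
# All counts below are fabricated for illustration.

def mantel_haenszel_odds_ratio(strata):
    """strata: list of (ref_correct, ref_wrong, focal_correct, focal_wrong)
    tuples, one 2x2 table per ability stratum. Returns the common odds
    ratio; values far from 1 flag potential DIF on the item."""
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Three ability strata; the focal group succeeds at similar rates within
# each stratum, so the item shows little evidence of DIF.
strata = [(40, 60, 20, 30), (70, 30, 35, 15), (90, 10, 45, 5)]
print(round(mantel_haenszel_odds_ratio(strata), 2))  # → 1.0
```

Operational programmes would pair this with a significance test and effect-size classification before flagging items for review, rather than reading the raw odds ratio alone.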
In addition to academic skills, some sample-based assessments incorporate surveys or questionnaires that assess students’ attitudes toward learning, motivation, self-efficacy, and social-emotional skills such as perseverance, teamwork, and empathy.
These measures are typically collected through student self-reports or teacher surveys, and they provide valuable insights into non-cognitive factors that impact learning and academic performance.
At AlphaPlus, we have extensive experience designing and delivering approaches for sample-based assessments, both in the UK and internationally. We also have the expertise to digitise these assessments, and to analyse and report assessment data so that it is accessible and useful to policymakers and, where appropriate, to teachers and learners.
AlphaPlus is part of AQA Global Assessment Services. AQA are the UK’s leading examination board, with over one million learners completing AQA qualifications annually. This combined experience makes us experts in the practical application of sample-based assessments.
If you would like support constructing a sample-based assessment approach or you have any questions about this article or how else we can support you, please get in touch with us here.
Gemma O’Brien – National Programme Lead for Personalised Assessments in Wales
AlphaPlus is proud to have won the 2024 Association for Project Management (APM) PMO (Project Management Office) of the Year Award at the APM Project Management Awards held on Monday 18th November.
This award reflects the commitment of Nick Karamanis, Head of PMO, in developing and leading a small but perfectly formed team. The judges liked the fact that AlphaPlus has achieved a lot in a short space of time. We were commended for balancing innovation with the need to meet delivery deadlines in multi-million-pound programmes, and for building a remarkably positive culture through our development of people, systems and processes. The judges were particularly impressed by our work on Diversity, Educational Assessment, and Sustainability and Economic Assessments.
Gavin Busuttil-Reynaud (Director of Operations) emphasised that winning the award is recognition of our PMO and extended Project Management community for driving project success and demonstrating project management maturity. This capability is a key enabler in our organisation delivering successful and sustainable change, developing our project community and being trusted assessment partners to a range of clients worldwide. We are particularly proud of how a small organisation is recruiting and nurturing young professionals through both apprenticeship and graduate pathways to build a diverse and talented workforce.

This is the second consecutive year that the AlphaPlus Project Management Community has been recognised by APM, reflecting our commitment to excellence in project management.
Earlier in 2024, we were the Winner of the Developmental Programme of the Year Award (APM, Festival of Education and Research), whilst, in 2023, Nesta Shingler (Project Manager, APM PMQ) was a finalist for the Graduate of the Year Award (APM, Festival of Education and Research).
Objective Structured Clinical Examinations (OSCEs) are a common method of assessment for medical and healthcare students and professionals. They are typically used to assess clinical competence.
An OSCE assesses performance in a simulated clinical environment. Typically, candidates complete a series of timed activities (OSCE stations) in a circuit, an assessment format that aims to ensure a consistent (or ‘objective’) experience for candidates. At each station, a trained assessor marks performance against standardised assessment criteria.
OSCEs mainly focus on the assessment of practical clinical skills, such as taking a clinical history or administering an injection. However, theoretical knowledge also plays an important role.
Miller’s (1990)[1] pyramid of clinical competence is a framework used in medical education to describe the progression of a healthcare professional’s skills and abilities. It consists of four levels, arranged in a hierarchy, with each level building upon the one below.

Miller’s pyramid is used to emphasise the importance of progression from knowledge acquisition (‘knows’) to practical application (‘shows how’) and ultimately achieving clinical proficiency (‘does’).
At the lower levels of the pyramid, learners understand the theory that is the foundation of clinical competence. At the upper levels, learners integrate theory, psychomotor skills and professional attitudes, to perform as health professionals in different contexts. An example of this might be a scenario in which candidates have to assess a ‘patient’ with minimal information, and ask appropriate questions in order to elicit the diagnosis and show they know how to manage it appropriately.
Khan et al (2013)[2] draw a distinction between clinical ‘competency’ and ‘competence’. Competency is the combination of appropriate cognitive, psychomotor and affective skills, whilst competence is an attribute of a person. Khan et al (2013) argue that the performance of an individual on identical clinical tasks can vary considerably depending on the context of the assessment, and that a candidate’s performance in an OSCE might not match their performance on identical tasks in the workplace. OSCEs should therefore be considered a tool that provides a snapshot of candidates’ demonstrated performance in a particular area in a simulated environment. In real life, non-clinical skills such as leadership and team working play an important role in determining overall performance. An OSCE therefore assesses whether a candidate is able to ‘show how’ (Miller, 1990) they would perform in a simulated clinical environment.
While it may not be possible to simulate all aspects of a real clinical environment, OSCEs allow for the practical assessment of clinical skills, problem-solving abilities and knowledge whilst avoiding the inherent variation in real-life clinical contexts. The structured nature of the assessment goes some way towards avoiding bias and ensures a consistent and, ultimately, fair assessment experience for candidates. OSCEs also allow candidates to be assessed on skills which may not occur in a predictable way in real life, such as resuscitating a patient in cardiac arrest.
Validity refers to the extent to which an assessment measures what it is intended to measure. In other words, whether the OSCE effectively evaluates the clinical skills, knowledge and competencies it is designed to assess and is fair to candidates.
In the context of OSCEs, the following are crucial for effective assessment design:
A well-designed OSCE can drive learning and have a positive educational impact. However, if OSCE stations are not designed to authentically recreate clinical scenarios, or the tasks are compartmentalised or driven by checklist scoring, there is a risk that performance is not assessed holistically. Candidates can become overly focused on learning skills to pass examinations, rather than on increasing their clinical competence (Miller, 1990; Shumway and Harden, 2003[3]; Khan et al, 2013).
To avoid this, scoring approaches for OSCEs can use global rating approaches that focus on broad categorisations such as communication and patient safety alongside the specific aspects of clinical procedures being assessed (Khan et al, 2013). In addition, the supporting materials provided to candidates to prepare for the assessment should also be carefully considered to avoid rote learning of specific clinical scenarios.
An important facet of validity is reliability, that is, the extent to which an OSCE provides consistent or dependable results. Reliability must be carefully considered in the design of OSCE circuits as well as individual stations. Standardisation and training of markers and robust quality assurance and monitoring processes must be in place to ensure consistency of marking across candidates. Another aspect of reliability and consistency relates to the use of actors in OSCE stations. Actors can increase the authenticity of assessment by allowing candidates to interact with a real person in order to take a patient history, but actors must be carefully prepared and trained so that they respond to every candidate in a consistent way and do not inadvertently provide information that could alter the demand of the assessment.
Testing candidates across a large sample of clinical cases can provide insight into the breadth of their clinical competence and can increase the reliability of measurement. However, this must be carefully balanced against the need to ensure that the assessment duration is manageable for candidates and to ensure that success in the assessment does not become unduly time-bound.
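This trade-off between circuit length and reliability can be quantified with the Spearman-Brown prophecy formula, a standard psychometric result. The sketch below is illustrative; the reliability figures are made up, not taken from any real OSCE:

```python
def spearman_brown(reliability, length_factor):
    """Predicted reliability when a test is lengthened (or shortened) by
    length_factor, e.g. 2.0 doubles the number of comparable OSCE stations."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Suppose a 10-station circuit has reliability 0.70:
print(round(spearman_brown(0.70, 2.0), 2))   # doubling to 20 stations → 0.82
print(round(spearman_brown(0.70, 0.5), 2))   # halving to 5 stations  → 0.54
```

The diminishing returns are visible directly: each additional station buys less reliability than the last, which is why circuit length is ultimately a judgement balancing measurement precision against candidate burden.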
We have wide experience of working on medical assessments for a variety of different organisations including designing and creating OSCE assessments for the NMC Test of Competence.
For more information and advice about how we can help you with OSCE assessments, please click here to contact us.
[1] Miller, G. E. (1990). The assessment of clinical skills/competence/performance. Academic Medicine, 65(9), S63–S67.
[2] Khan, K.Z. et al. (2013). The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part II: Organisation & Administration. Medical teacher, 35(9), pp.e1447–e1463.
[3] Shumway JM, Harden RM (2003). AMEE Guide No. 25: The assessment of learning outcomes for the competent and reflective physician. Med Teach 25:569–584.
When embarking on a qualification project, one of the best questions to start with is: ‘what do you want to use it for?’. Sometimes it transpires that what is really wanted or needed is a training programme, a teaching curriculum or lesson plans, and a qualification is not the best fit for the need. Sometimes the possession of a qualification may be the end goal, but there is a journey first, in terms of standardising training or ensuring industry standards are pitched right, before starting a qualification development.
A qualification is important when you want to set the standard at which employees/students should be able to perform. A qualification also provides an independent and objective benchmark for performance. It is hardwired, as opposed to a training programme, which has flexibility in what is taught and how. What’s included in the qualification standards needs to be right too, appropriately covering all the requirements you have and written in a way that is understandable.
Qualifications also include assessment, to check that employees/students can perform to the standard required across the chosen discipline(s). Training or curriculum projects, by contrast, tend to be more about the process – the learning – rather than the output. Training tends not to focus heavily on checking an individual’s performance level through assessment. It’s true that some training can and does include assessment, but it tends to be done locally: what is tested, and to what standard, is often decided and carried out by the trainer, which can make it more open to their opinions and biases. A qualification, with pre-determined approaches and controls around assessment using clearly defined standards, provides a more reliable gauge of employees’/students’ knowledge, abilities and skills.
A key thing about qualifications is reliability – conformity in what is assessed and how, and ensuring that each time an assessment is carried out the results are dependable and valid. So the next question, after deciding that a qualification is what is needed, is: what do you want to know about your employees’/students’ performance in that area? This is an important starting point for qualification development: do you want to be sure about practical skills, competency, the ability to understand or explain things, or maybe it’s about operating at a safe level?
Setting some time aside at the front end of a qualification project to check and challenge the purpose(s), what will be important to assess, and the high-level strategy for the development is recommended. This can help to avoid disappointment later down the line, where perhaps a focus on particular skills, or testing at a higher level, turns out to be what’s needed and is missing from the final draft qualification.
How best to assess skills, competency or knowledge and understanding also requires specialist advice. Working with those who have experience of designing and developing assessments that actually test what you want to test, in a way that is manageable and realistic, will save time and effort when it comes to delivering assessments for your employees/students.
Having the right help on assessment design can also help when it comes to questions about whether you want to look in more detail at employee/student performance. Perhaps you might want to incentivise them to perform well, or to separate out those who may have the capabilities to go on to make a real difference, or be promoted. In these circumstances, a grading model to define merit or distinction candidates needs careful development. What employees/students actually do in their assessment(s) also needs to draw out their current and true levels of performance, to ensure a reliable assessment outcome.
A qualification should provide you with a valid and reliable way to understand the capability of your employees/students. Assessment needs to draw out sufficient evidence of the skills or competency you are looking for, too. To make this an efficient process and gain all the positives from going down a qualification route, talk to us about the best choice for you. AlphaPlus has experience of creating qualifications in different countries with different regulations and policies, across many industry sectors, and for different client types, from multinational businesses to governments and the military.
To find out more about how you can turn your learning programme into a qualification, or for more information about our expertise in qualifications, please contact Fraser Talbot at [email protected].
Training is crucial in many education and assessment organisations:
AQA AlphaPlus is recognised as a leading provider of training related to assessment, e-assessment and associated processes. We have delivered assessment-related training to a range of organisations in the UK and internationally since 2006. Our customers include awarding bodies, ministries of education, regulatory authorities and professional bodies.
AlphaPlus is a leading service provider in the planning, development and implementation of assessments in the UK. We spend our days working with assessment agencies from an assessment’s conception, through the specification and question-and-task writing, trialling and performance analysis, to live use and monitoring. As such, we live and breathe assessment.
We are not a publisher – all the intellectual property we create is owned by our clients. As part of this, we believe in supporting our clients to develop in-house capability and capacity, and this applies particularly to the complex area of assessment. Our internal team regularly draw on their experience to create bespoke training on this topic. They work closely with client teams in skills building and co-development. We have supported work shadowing and other deeply integrated forms of capacity building with clients in the UK and internationally.
Our programmes are typically bespoke courses, with an emphasis on workshop and practical activity, delivered on site to awarding organisation staff teams. Training is delivered by senior staff and directors at AlphaPlus, who are able to draw on their extensive experience from supporting awarding organisations across the sector.
Our programmes are modular, which allows us to customise them to meet your requirements.
AlphaPlus offers a range of standard modules which are then customised to suit the needs of our customers. Topics include:
Training can be delivered on site or at an external location. Delegates receive a complete training pack comprising the materials covered, a handbook covering the subject, copies of the exercises, and a targeted reading list for further study.
Our expert assessment managers and training team will design a programme to suit your staff, your specific needs and your budget.
To find out more about how AlphaPlus can assist you with your training needs, please contact:
UK – Fraser Talbot [email protected]
International – Syed Shah [email protected]
The International Early Learning and Child Well-being Study (IELS) is an international survey that assesses children at age 5 who attend Early Childhood Education and Care centres and/or schools. It measures Emergent Literacy, Emergent Numeracy, Self-regulation, Empathy & Trust and Pro-social behaviour1.
Empathy is defined as ‘the ability to imagine and understand the thoughts, perspectives and emotions of another person’2. Research exploring empathy in childhood is divided on whether it is an ability you are born with or something that develops over time. Some argue that genes play a role in the development of empathy3, whilst others argue it is learnt through experience4. Either way, research on the development of empathy in childhood consistently suggests it is a skill which can be enhanced during a child’s early years5.
Firstly, according to Hoffman’s stages of empathy development, children start to understand that other people’s feelings and perspectives may be different to their own in nursery or primary school, between the ages of two and eight years old. Secondly, experts highlight that whilst academic skills are key outcomes of the education system, success in a world biased towards neurotypical thinking is not attainable in the absence of sufficient social and emotional skills6. For example, some view empathy as a precursor for effective cooperation, collaboration and communication upon entry to the labour market7. Furthermore, a longitudinal study found that those who were high in empathy in childhood and adolescence tended to have more constructive communication skills, higher levels of resilience and were better at integrating within social networks in adulthood8.
The role of schools has been shown to be important in fostering empathy as a skill4 and primary schools have an opportunity to encourage its development. According to the Consortium on the School-Based Promotion of Social Competence “Schools are widely acknowledged as the major setting in which activities should be undertaken to promote students’ competence and prevent the development of unhealthy behaviours. In contrast to other potential sites for intervention, schools provide access to all children on a regular and consistent basis over the majority of their formative years”9.
Empathy is one of four key developmental domains which are widely recognised as key early learning and developmental skills that childhood education programmes aim to develop10. For this reason, IELS – an international survey that assesses children at age five and identifies key factors that drive or hinder the development of early learning1 – gathers information on the development of children’s empathy at this age.
Currently, empirical research exploring how empathy is interconnected to other competencies, such as emergent literacy, numeracy and self-regulation, is lacking for this age group. There is no common framework which pulls together information at a national scale. As a country, we do not fully understand how empathy and other competencies are developing in our children, and what we can do to promote our children’s development.
By taking part in IELS, your child or school will be contributing to a piece of research that will inform international understanding of children’s development. More than that, our research so far shows that children love doing the games and jump at the opportunity to have 30 minutes out of the classroom!
For more information, please visit the International Early Learning and Child Well-being Study’s website: https://www.oecd.org/education/school/early-learning-and-child-well-being-study/
]]>
Great Place to Work® is the global authority on workplace culture. They help organisations quantify their culture and produce better business results by creating a high-trust work experience for all employees. Backed by 30 years of data, Great Place To Work surveys over 10 million employees annually, supporting more than 10,000 companies in over 98 countries.
In November 2023, AlphaPlus ran the Great Place To Work survey to gather anonymous feedback from our staff about their experience of AlphaPlus as an employer. Questions covered topics such as fairness, support and leadership.
Key highlights from the results include:
| Score | Topic | Detail |
| --- | --- | --- |
| 97% | Justice | Employees perceive that management promotes inclusive behaviour, avoids discrimination and is committed to ensuring fair appeals. |
| 95% | Leadership | Employees’ experience of leaders’ behaviour and how it resonates with the company’s strategy and values. A positive experience of these behaviours for employees at all levels of the organisation is a key differentiator among the best workplaces and enables companies to execute their strategy consistently. |
| 94% | Community | Community reflects the deepest level of camaraderie developed within a group, and measures the extent to which employees feel there is a sense of ‘family’ or ‘team’. |
| 93% | Fairness | The extent to which employees feel that management practices are fair, assessing the equity, impartiality and justice employees perceive in the workplace. |
| 94% | Support | The provision of training opportunities, resources and equipment, as well as appreciation of professional accomplishments. |




Andrew Boyle, AlphaPlus’ Director of HR, says, “We’re pleased we have such a positive Great Place To Work score. AlphaPlus works in the field of educational assessment and standards. As a growing and relatively new business, we believe that we are making our reputation every day. Our commitments to equality and ethical working are real and sustained. We’ve worked hard to make sure AlphaPlus is such a positive place to work, and are glad our employees agree.”
For more information about our Great Place To Work Certification click here.
]]>When teachers plan their lessons, they set out what they expect their class to know or be able to do by the end of the lesson. However, just because a teacher has taught a topic or skill, it doesn’t necessarily mean that the learner has learnt the knowledge or developed the skill successfully.
The difference between teaching and learning is illustrated in the conversation below where a split class is taught by two teachers who are coordinating their work:
Teacher 1: What did you do in class today?
Teacher 2: I did the past tense but I am not sure what the students did!
Whatever is taught by the teacher isn’t necessarily learned by the learners. Barriers in communication and understanding can get in the way of learning, and learners often need consolidation to embed what has been taught and make sense of it. Good classroom practice should help with all of this, but a significant barrier remains: it is often not clear to the teacher whether the learners have actually learnt what was taught.
Formative assessments are used to understand what learners know and can do, and to identify areas of learning which are strengths and areas where additional support, through some sort of intervention, may be needed. A well-designed formative assessment will help the teacher target the gaps in learning that they need to address in their teaching. These formative assessments can be short quizzes or tests in class, worksheets or online activities for homework, or more structured assessments at the end of a topic, term or year. They all give the teacher feedback about individual learners’ abilities.
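The idea of turning formative-assessment results into targeted gaps can be sketched as a toy calculation. The topic names, learner data and the 60% threshold below are all invented for illustration; this is not a description of any particular assessment product:

```python
# Hypothetical sketch: flag topic-level gaps from one learner's quiz results.
# Topics, scores and the 60% facility threshold are illustrative only.

def find_gaps(results, threshold=0.6):
    """results maps topic -> (correct, attempted) for one learner.
    Returns the topics where the facility rate falls below the threshold."""
    gaps = []
    for topic, (correct, attempted) in results.items():
        if attempted and correct / attempted < threshold:
            gaps.append(topic)
    return gaps

quiz = {"fractions": (2, 5), "decimals": (4, 5), "percentages": (1, 4)}
print(find_gaps(quiz))  # fractions (0.4) and percentages (0.25) fall below 0.6
```

In practice the useful part is not the arithmetic but the mapping from assessment items to topics, which is where careful assessment design comes in.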
The diagram below illustrates the idealised feedback process where the formative assessment provides the feedback on the learning to allow the teacher to identify and fill any gaps in learning.

However, using this information can be a significant challenge for the teacher if the learners are of different abilities (that is, they know and can do different things). Delivering personal intervention to a class of 30 learners is not easy to manage given the resource and time constraints in classrooms.
In the different national adaptive assessments in literacy and numeracy that we deliver for two UK governments, we see a far greater spread in the ability of the learners (i.e. what they know and can do) than the difference in ability from one year group to the next (as illustrated in the diagram below). In fact, the ability of the learners at the top of a year will be greater than the ability of average learners five years above. Whilst an individual school or class may not reflect the full spread of ability in the national picture, the spread of ability in a class will still be far wider than the difference in ability between year groups in a school in almost every case.

Feedback from formative assessments is most useful when it is highly specific to the individual learner. However, in many instances, teachers are only able to apply interventions to a whole class of learners due to limitations in the time and resource available. When the variation of the ability of the class is as wide as it can be (see diagram above), then a whole class intervention is unlikely to succeed in addressing the needs of all of the learners. There is a danger that the lowest ability learners get left behind and the highest ability learners become disengaged as they already know or can do what is required.
The challenge is therefore how the feedback loop can be closed successfully. There are a number of possible solutions to this which have been applied in different circumstances. For example:
There is no simple answer to this problem if the role of the teacher is as the provider of knowledge that the learner then learns (as is traditional). Personalised learning that aims to help learners address their areas of weakness and build on areas of strength is difficult to deliver in this way, particularly when a teacher has 30 learners, each with their own educational needs.
So are there any alternatives?
A possible alternative is to have a set of topic-linked online resources which contain clear learner journeys. A diagnostic assessment can ascertain the learners’ strengths and weaknesses and they can engage with the online teaching and learning materials, either individually or in small groups of similar ability learners. The teacher guides the students along their pathways, coaches them if they get stuck and encourages the development of team working, self-reliance and self-regulation and resilience. The learning can be assessed through small, formative end of lesson and end of topic online assessments and interventions identified which the teacher or the learning programme can then deliver to address gaps.
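The diagnostic-then-pathway idea described above could be sketched along these lines. The pathway names, score bands and group size are all hypothetical, chosen only to show the shape of the approach:

```python
# Hedged sketch: route learners to pathways from a diagnostic score, then
# form small similar-ability groups. All bands and names are invented.

def assign_pathway(score):
    """Map a diagnostic score (0-100) to a hypothetical learning pathway."""
    if score < 40:
        return "foundation"
    elif score < 70:
        return "core"
    return "extension"

def group_learners(diagnostic, group_size=4):
    """diagnostic maps learner -> score; returns (pathway, members) groups."""
    by_pathway = {}
    for learner, score in sorted(diagnostic.items(), key=lambda kv: kv[1]):
        by_pathway.setdefault(assign_pathway(score), []).append(learner)
    groups = []
    for pathway, learners in by_pathway.items():
        for i in range(0, len(learners), group_size):
            groups.append((pathway, learners[i:i + group_size]))
    return groups

scores = {"Asha": 35, "Ben": 62, "Cara": 88, "Dev": 45, "Ema": 71, "Fin": 30}
for pathway, members in group_learners(scores):
    print(pathway, members)
```

A real system would, of course, diagnose per topic rather than with a single score, and re-group learners as the end-of-topic assessments come in.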
This would be a fundamental change to the way that much teaching and learning takes place and would require significant resourcing in the underpinning infrastructure. However, unless we are able to move away from the teacher providing the instruction to the whole class, then we are unlikely to be able to fully solve the problem of successfully using information from formative assessment and close the virtuous feedback loop of teaching, learning and formative assessment.
If you would like to know more about our experiences of delivering formative assessments or have any thoughts on what could be done to help close the virtuous circle of teaching, learning and assessment then please do get in contact with us.
]]>