Everything you need to know for the 2026 May-June Examinations
This guide consolidates: Standards and Guidelines for the Use of AI in Assessments (April 2025), Responsible Generative AI Policy Framework (November 2024), FAQ for Standards and Guidelines (2025), and Memo: Managing AI Checkers in SBAs (March 10, 2026).
Before 2025, CXC had no published rules governing how students could use AI in their SBAs. No scale levels, no penalties, no formal detection process. That changed in three steps.
Most schools in the region are still catching up to rules that are already in effect. These rules apply whether or not your school has communicated them yet.
CXC classifies AI use into four levels. Understanding these is key to staying compliant.
"At present, CXC limits the use of AI to Level 3."
— Standards and Guidelines, p. 6
This is the most common area of confusion. The key question: Did AI create any written content that ended up in your final submission? If yes, that is Level 4, regardless of how much you edited it afterwards.
If you have heard that your SBA is safe as long as it scores below 20% on an AI detection tool, that is not what CXC's own documents say. The 20% does appear in CXC's FAQ. It is a real number in a real CXC document. But the way students and teachers are repeating it does not reflect what the FAQ actually says.
What you need to know: The 20% is not what protects your SBA. The compliance process is. Declare your AI use, cite it, document your prompts, run an originality checker, and submit the report. That process comes from CXC's governing document and was confirmed by the March 2026 memo. Follow the process in Section 4 of this guide and you are covered. Everything below explains why.
Two questions in the FAQ mention the 20%. The framing is not what most people think, and it matters for how you prepare your SBA.
The FAQ does not say "under 20% means you are safe." On Q19's own language, a score above 20% triggers a justification requirement from the teacher, and even with that written justification the student can still be penalised. In CXC's wording, the 20% functions as a penalty trigger, not a guarantee of safety.
The concern is that students are hearing "20%" and interpreting it as "under 20% means I'm fine." But if even a teacher's written override does not guarantee safety, the threshold itself is not solid ground to build your compliance on.
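This reading of Q19 can be sketched as a simple decision rule. To be clear, this is an illustration of the interpretation above, not CXC's actual procedure; the function name and outcome strings are ours, and only the 20% figure and the "justification does not guarantee safety" point come from the FAQ.

```python
def q19_outcome(ai_score_percent: float, teacher_justification: bool) -> str:
    """Sketch of our reading of FAQ Q19 (illustration only, not CXC's procedure)."""
    if ai_score_percent > 20:
        if not teacher_justification:
            # Above the threshold, the teacher must justify the score.
            return "justification required from teacher"
        # Even with a written justification, a penalty remains possible.
        return "penalty still possible despite justification"
    # Below the threshold there is no automatic trigger,
    # but compliance still depends on following the declared process.
    return "no automatic trigger; not a guarantee of safety"

print(q19_outcome(25.0, True))
print(q19_outcome(10.0, False))
```

Notice that no branch of this rule ever returns "safe": that is the whole point of the section above.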
This number does not appear in the Standards and Guidelines. It does not appear in the March 2026 memo. It appears only in the FAQ.
This is important context. Look at the rest of the FAQ:

- Can AI be used in exams? No. That aligns with Level 1.
- Can AI generate code for IT SBAs? No. That aligns with Level 3.
- Can AI suggest a topic? Yes. That aligns with Level 2.
- Can AI generate graphs? Yes, if the rubric does not allocate marks for them. That is Level 3 in action.

Every answer traces back to a scale level in the governing document. Every answer except the 20%.
Students hear "20%" and think: under means safe, over means caught. Neither assumption holds up when you look at how compliance actually works.
A percentage does not tell CXC whether you crossed the line. The process does.
This is what CXC said most recently. Without ever mentioning the 20%, the March 2026 memo (quoted in full later in this guide) made it clear that no single number will decide your SBA's outcome. The FAQ itself says the same:
FAQ Q21: "If a candidate's original work is flagged as AI-generated, the consequences can include academic penalties. However, CXC also requires human review and supporting evidence before finalising such accusations, and candidates have the right to appeal or defend their work."
This is CXC, in the same FAQ, confirming that no score on its own determines the outcome. Human review, supporting evidence, and the right to defend your work. That is process-based evaluation.
Peer-reviewed studies consistently find that AI detection tools produce false positives at rates that make hard percentage thresholds unreliable, particularly for non-native English speakers. A 2025 evidence synthesis published in MDPI's Information journal, covering research from 2021 to 2024, concluded that these tools "frequently produce false positives and lack transparency." A Stanford study of over 10,000 text samples found false positive rates exceeding 20% for non-native English speakers. Turnitin's own guidance treats scores below 20% as too unreliable to be used as evidence.
Major institutions that initially adopted percentage thresholds are now moving toward process-based evaluation, which is exactly what CXC's Standards and Guidelines already describe. The process-based approach is the stronger position internationally.
Sources: Weber-Wulff et al. (2023), International Journal for Educational Integrity. MDPI Information evidence synthesis (2025). Stanford false positive study (2025). MLA-CCCC Joint Task Force on Writing and AI.
The 20% appears in CXC's FAQ. It is a real number in a real document.
But the governing document, the Standards and Guidelines, uses a process to evaluate compliance, not a percentage. The March 2026 memo confirmed this. The FAQ's own Q21 confirms this.
Following the process in Section 4 of this guide is what protects you. That process comes directly from CXC's own published standards.
We are not saying ignore the 20%.
We are not saying the FAQ does not matter.
We are not saying CXC got it wrong.
We are saying that a number in the FAQ should not replace the compliance process that CXC's governing document requires. CXC's own memo and FAQ Q21 support this reading.
A memo is not a policy, and a percentage is not a standard.
If you used AI, even just for brainstorming, you must cite it in CXC's required format.
Include the name of the AI platform and the year it was accessed.
Your AI citations go in your Appendix. Not your bibliography. Not a footnote. Your Appendix.
Source: Appendix A, Standards and Guidelines, p. 22
You may also be required to sign a declaration.
— Appendix E, Standards and Guidelines, p. 26
No AI used at all? CXC's March 2026 memo confirmed: if no AI was used, the Disclosure Form and Originality Report are not required.
| Violation | Penalty |
|---|---|
| Full use of AI in the completion and submission of the SBA | 0 marks awarded |
| Violating the expected Scale Level (e.g. Level 4 when limited to Level 3) | 50% deduction of the earned mark (e.g. 50/60 becomes 25/60) |
"Violations of the expectations of the use of AI according to the Scale Levels will attract 50% deduction of the earned mark." — Standard 4.1(b), p. 17
If your SBA counts for 20% of the subject result, a 50% deduction of your earned mark can cost you up to 10% of your final result. One mistake.
A scale level violation does not just reduce your SBA mark. It reduces the percentage that SBA contributes to your entire CSEC result.
The penalty does not stop at your SBA. It pulls down your overall CSEC percentage, and that can be enough to move you from one grade boundary to the next. One violation, one subject, one grade.
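The arithmetic above can be made concrete. As an illustration, assume the SBA is worth 20% of the subject result (a common weighting, but it varies by syllabus; check yours). The function below is our sketch, not a CXC formula; only the 50% deduction rule comes from Standard 4.1(b).

```python
def final_contribution(sba_earned: float, sba_max: float, sba_weight: float,
                       violation: bool) -> float:
    """Percentage points the SBA contributes to the overall subject result.

    A scale-level violation applies a 50% deduction to the earned mark
    (Standard 4.1(b)). sba_weight is the SBA's share of the subject,
    e.g. 0.20 for 20% -- an assumed value; weightings vary by syllabus.
    """
    mark = sba_earned / 2 if violation else sba_earned
    return (mark / sba_max) * sba_weight * 100

# The table's example: a 50/60 SBA, with and without a violation.
clean = final_contribution(50, 60, 0.20, violation=False)
penalised = final_contribution(50, 60, 0.20, violation=True)
print(round(clean, 2), round(penalised, 2), round(clean - penalised, 2))
# prints: 16.67 8.33 8.33
```

So in this worked case the violation erases about 8.33 percentage points of the final subject result, and a student who had earned full SBA marks would lose the full 10 points. That is how one SBA mistake moves a whole grade boundary.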
CXC uses a multi-layer detection process. No single tool decides your outcome. It is a combination of human review and technology.
A teacher's approval alone does not guarantee the SBA will pass CXC's moderation. The final decision rests with CXC, not the school.
NEW: CXC Memo dated March 10, 2026
"Managing Artificial Intelligence (AI) Checkers in School-Based Assessments (SBAs)" — Dr. Nicole Manning, Director of Operations
"We wish to assure you that decisions will not be based solely on the AI Originality Report."
"We also recognise the variability of some AI detection tools."
CXC will rely on the Disclosure Form details, and concerns flagged during marking will be reviewed using human judgement and supporting evidence.
If no AI was used, the Disclosure Form and Originality Report are not required.
If CXC needs more information, they will reach out to Centres and/or the Local Registrar ahead of releasing preliminary results.
CXC committed to further engaging stakeholders in focus group sessions during July–August 2026.
Key Takeaway: The Standards and Guidelines outline a process: declare your AI use, cite it, document your prompts, run an originality checker, submit the report. CXC's March 2026 memo confirmed that this process-based approach is how they will evaluate SBAs. Follow the process and you are covered.
CXC specifies which AI platforms and originality checkers are approved for use.
"This list may be updated from time to time through the advisement of Ministries of Education." CXC's March 2026 memo also noted that CoPilot (Microsoft) is one example, not the only option. Students and teachers are encouraged to use any available AI detection tool accessible to them.
These answers come from CXC's FAQ and are consistent with the Standards and Guidelines.
"Teachers and parents alike play a vital role in ensuring the integrity of the submissions of the candidates for the examinations." — Standard 4.0, p. 16
CXC explicitly names parents in the responsibility chain. This means CXC expects you to be aware of the rules and to help ensure your child's SBA follows them.