The growth of AI in migration and border control makes its application to refugee status determination (RSD) a real possibility. However, flawed systems can set traps for new technologies. AI may improve the consistency, efficiency and accuracy of RSD, but as long as the ‘well-founded fear’ standard remains bipartite, it is unlikely to resolve the issues that vex credibility assessments. AI will struggle to support the determination of subjective fear, which is already hampered by the limited human capacity to judge the credibility of others. If training data carries the unconscious biases of its developers, the machine will learn to replicate them. AI’s limited ability to read emotions presents challenges in a context defined by vulnerability. The prospective nature of fear is counterintuitive for algorithms that learn from historical data. Only if a ‘well-founded fear of being persecuted’ were based on objective risk alone could AI’s place within RSD be justified.