
What We Learned from Our Shrimp Welfare Research Position Selection Process
As the Welfare Footprint Institute grows, we are slowly moving from being just scientists running projects to being organizers of a broader effort. That comes with responsibility: the mission to measure suffering so that it can be reduced on a global scale is not a small task. To do it well, we need more than methods and results—we need people. And bringing in the right people is one of the hardest and most important decisions any organization can make.
Our recent hiring process for a research position in shrimp welfare brought all of this into focus. We received far more interest than we expected, and we also faced a challenge that is becoming common everywhere: AI now shapes how people apply for jobs, often in ways that are hard to recognize and harder to judge. We thought it was worth writing down what we learned—both to be transparent with the many excellent candidates we couldn’t hire, and to share insights that might help others who are building teams in this changing environment.
When we opened applications, 247 people from around the world applied (!). For such a specialized topic, that number surprised us. What stood out was not just the size of the pool, but the fact that so many applicants were ready to engage seriously with the welfare of arthropods—an area that, until recently, few would have considered part of mainstream animal welfare science. It was encouraging to see so much knowledge, interest, and passion directed toward this novel field.
We also recognize that most candidates will use AI tools—and we're not only okay with that, we see it as increasingly essential. Fluency in leveraging AI can expand one's capacity for reasoning, literature reviews, hypothesis generation, and more. In fact, as highlighted in this video, the real differentiator is not whether someone used AI, but how they used it: did it support their thinking or merely replace it? We knew what the AI answers would look like (we ran several potential questions through ChatGPT, Claude, Gemini, Grok, Perplexity, and Consensus), so we designed our evaluations to let genuine thought emerge, rather than just carefully constructed, AI-assisted sentences.
To keep the evaluation fair, we ran the first two stages blind. Reviewers did not see names, institutions, or CVs—only answers to the questions. Candidates’ backgrounds were revealed only after Stage 2, once they had also completed a paid task.
This decision mattered. Although we had no specific expectations or requirements in mind when we first started, it became clear through the selection process that the ability to reason well about shrimp welfare did not necessarily depend on having a predictable academic or career profile. If we had started with CVs, we probably would have filtered out strong candidates who didn’t fit the “expected” mold.
The first stage was a short written exercise: five questions on shrimp neurobiology, allostatic overload, the limits of mortality as a welfare measure, vulnerability traits in shrimp farming, and decision-making under conflicting evidence. Out of 247 candidates, 24 advanced (about 10%).
The difference wasn't simply right versus wrong. Many candidates gave accurate answers, but the strongest stood out for their originality and depth. They cited relevant papers, drew on personal observations, questioned the framing of the questions, proposed original approaches, and were specific about parameter values. They acknowledged gaps in the literature and pointed to practical constraints likely to emerge in commercial farming operations. That extra layer of reasoning was what made the difference.
In Stage 2, candidates designed welfare indicators for shrimp in biofloc systems and created a classification system for pain caused by skin injuries in salmon. These questions pushed people to move from describing problems to proposing solutions that could actually be used in practice.
Most submissions were strong. What set the finalists apart was how thoroughly they specified the practical details of their proposals.
By contrast, some otherwise excellent answers left key elements underspecified—for example, naming a promising indicator but not explaining how to measure it. At this stage, such details made the difference.
Still, the decision was painful. Many of the 21 candidates who didn’t advance had PhDs, long lists of publications, or substantial field experience. Their answers were thoughtful, and we would have been glad to work with them if we had more positions available.
A few things became clear from this process.
This won’t be our last hire, and we’ll build on what we learned here. Above all, we remain grateful to everyone who applied. This was our first major recruitment process, and the experience taught us valuable lessons about both evaluation and logistics that will benefit future applicants. The interest and quality we saw also reminded us that the field of animal welfare is rich with talent and commitment. Deciding among so many strong applications was one of the hardest things we’ve had to do—and we deeply respect those who joined us in this process, even if only one person could be chosen.
For those still interested in this work, stay connected. We’ll have more positions, and the field itself is expanding beyond just our institute. The fact that 247 people wanted to work on shrimp welfare—something that barely existed as a research area five years ago—suggests this community will keep growing. We hope to be part of making space for more of this talent, whether directly through our own hiring or by helping demonstrate that this work matters.