Suppose we have a process that attaches to every model of a first-order theory a (permutation) representation of its automorphism group, compatibly with elementary embeddings. How can we tell whether this process is "definable", i.e. whether it is really just given by taking the points, in each model, of some imaginary sort of our theory?
In the '80s, Michael Makkai provided the following answer to this question: a functor Mod(T) → Set is definable if and only if it preserves all ultraproducts and all "formal comparison maps" between them (generalizing e.g. the diagonal embedding into an ultrapower). This is known as strong conceptual completeness; formally, the statement is that the category Def(T) of definable sets can be reconstructed up to bi-interpretability as the category of "ultrafunctors" Mod(T) → Set.
Now, any general framework which reconstructs theories from their categories of models should be considerably simplified for ℵ0-categorical theories. Indeed, we show:
If T is ℵ0-categorical, then X : Mod(T) → Set is definable, i.e. isomorphic to (M ↦ Φ(M)) for some formula Φ in the language of T, if and only if X preserves ultraproducts and diagonal embeddings into ultrapowers. This means that the preservation requirements for ultramorphisms, which a priori become unboundedly complicated, collapse to just diagonal embeddings when T is ℵ0-categorical. We show this definability criterion fails without the ℵ0-categoricity assumption, by constructing theories T and non-definable functors Mod(T) → Set which nonetheless preserve ultraproducts and diagonal embeddings.
Time permitting, I will discuss what the fact that ev_A : Mod(T) → Set is a (pre)ultrafunctor allows us to deduce about an arbitrary object A of the classifying topos E(T).