There's a dimension of task-sets that's something like: to what extent does solving tasks like this basically boil down to "crunching the numbers"?

We could call this "combinatoricness" or "algebraicness". Algebraic things are soulless, calculative. More to the point, compared to more structure-rich tasks, it's less the case that being a mind implies being good at algebraic tasks, and less the case that being good at algebraic tasks implies being a mind.

Example:

Humans tend to find it fun when there's a continuum of combinatorics/algebra in a domain. In chess, very crudely speaking, you could look one move ahead, two moves ahead, and so on. (That's not really how it works, but a similar spirit does hold true: stronger players will tend to handle longer, more complex, more contingent, more counterintuitive combinations. Even Karpov, I assume without knowing, would be playing strategically/positionally in a way tuned to shut down the tactics of such calculating opponents.)
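
To make the "one move ahead, two moves ahead" dial a bit more concrete, here's a minimal sketch of depth-limited lookahead (plain negamax) over a made-up game interface. `GameState`, `legal_moves`, `apply`, and `evaluate` are hypothetical names, not any real chess library, and the sketch ignores pruning and everything else real engines do.

```python
from typing import Iterable, Protocol


class GameState(Protocol):
    """Hypothetical game interface; any concrete game could implement it."""

    def legal_moves(self) -> Iterable[object]: ...
    def apply(self, move: object) -> "GameState": ...
    def is_terminal(self) -> bool: ...
    def evaluate(self) -> float: ...  # static "positional" score, from the side to move's view


def lookahead_value(state: GameState, depth: int) -> float:
    """Score `state` for the side to move, crunching `depth` plies of combinations.

    depth=0 is pure static (positional) judgment; each extra ply is more
    brute combinatorial calculation layered on top of the same evaluation.
    """
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    # Negamax: our best move is the one that minimizes the opponent's best reply.
    return max(-lookahead_value(state.apply(m), depth - 1) for m in state.legal_moves())
```

The `depth` knob is the pure "crunch the numbers" direction of the dimension above; enriching `evaluate` with more recognized structure is the other direction.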

We can ask different things. We can ask "how much explicit structure is there to be recognized in the world of this task-set?"; the answer could range from zero up to the whole cosmos. We can also ask "how much explicit structure is there, relative to the less thing-like combinatorics?". That's an ambiguous question, but for example we could be asking: if there's an agent that performs at such-and-such level when predicting / manipulating / planning / designing stuff in this area, what would the ratio of explicit structure to combinatorics tend to be? A fuller analysis would tease these things apart more.
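
One way (my gloss, not anything pinned down above) to make that last question slightly more precise: condition on agents that hit a given performance level on the task-set, and ask how much recognized explicit structure they tend to exploit per unit of brute combinatorial work. Here S, C, and perf are placeholder quantities, not defined anywhere in this text:

```latex
% A sketch, not a definition: S, C, and perf are placeholder quantities.
% R_T(p): among agents A reaching performance level p on task-set T,
% the expected ratio of exploited explicit structure S to brute combinatorial work C.
\[
  R_T(p) \;=\; \mathbb{E}\!\left[\, \frac{S(A, T)}{C(A, T)} \;\middle|\; \mathrm{perf}(A, T) = p \,\right]
\]
```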

See sophistication.