There's the possibility of having a community of truth: minds (or parts of minds, or minds at different times) speaking to each other in order to have accurate propositional-beliefs. This is a possible thing for a mind or a group of minds to do. Participating in a community of truth is powerful: when you know what's true, you get science, sane decision-making, and so on. It's also natural, even the default: kids take things literally (both in what they believe and in what they disbelieve).

The community of truth requires hygiene: both the power and the vulnerability of a system of propositional-beliefs come from the fact that subtle differences (and subtle distortions) can propagate and ramify into important differences. In particular, instrumentality (other than instrumentality that factors through "have accurate beliefs") can't be applied to propositional-beliefs without distorting them. In particular in particular: since a community of truth is powerful and therefore somewhat controls decisions, there's an incentive to distort propositional-beliefs in order to control decisions (rendering them less like propositional-beliefs).

Some mental elements are not perfectly separated out into propositional-beliefs vs. other sorts of elements, e.g. action-stances, action-patterns, skills, or goals.

As an example: "The enemy's gate is down." This is arguably either false or meaningless, but it's hard to argue that it's true. However, it's also useful: it's better to "think of it as down" than to "think of it as forward". To "think of it as down" is, in particular, to "act as though" there is, strategically / tactically speaking, a rotational symmetry about the axis between your gate and the enemy's gate. It would be worse to "act as though" the left-right direction (relative to the entry hallway) is privileged over the up-down direction as a space to maneuver in, the way it would be on a terrestrial battlefield.

Since there are many mental elements that we'd naturally call "beliefs" but that aren't strictly propositional-beliefs, it often makes sense to push on "beliefs" using instrumental criteria other than "have accurate beliefs". It makes more sense to separate out the propositional-belief subelements from the actionful subelements; and when interoperability is required (e.g. when talking to someone else), applying non-belieflike instrumental criteria to "beliefs" is especially damaging; but still, sometimes the push is worth it.

Some such mixed mental elements are also self-fulfilling. E.g. acting as though you expect someone else to help you with X can cause them to help you with X. Exploiting this is often a form of lying, i.e. untruthtelling at the level of non-propositional-belief elements, which cuts against attempts to separate out and truth-track belieflike beliefs; it's very rarely worth it, because it breaks the community of truth.

But it's not always lying. For example: should you get out of bed today? Maybe nothing good would happen. But if you don't get out of bed, then definitely not much good will happen. If you act as though you expect something good to happen, then it's more likely that something good will happen.

If your belieflike elements and actionful elements aren't separated out, maybe you don't have an immediately available choice that's better than "believe, unepistemically, that something good will happen". It's certainly much better, especially in the long run and in communities, if you can separate this out (e.g. into a choice like "I'll whole-assedly try out those actions I'd've taken if I expected this to work"), but it's not so rare for the mixed-up thing to be worth it.