I don't think assuming we will only ever have a few buffer/renderer types (GL, Vk, Shm) is the right approach, IMHO. 2D engines, for example, have proprietary low-level APIs, so supporting one _is_ akin to supporting a new graphics API. Each graphics vendor would potentially have its own, and they would want to use their own allocators, since only they know what kind of buffers work with their hardware (pixel format, alignment, etc.). They would also want to do things like bind_to_render_image(), just as GL does.
So keeping things generic in core, and letting the platforms decode them, would be preferable.