What I said is perfectly aligned with what I said originally: 2 and 3 at once, considering you'll have a game developed normally for PS5 and then ported to PC, running exactly as it should on new hardware, AND the game can have its visual quality downgraded to run on low-end PCs - regardless of whether this downgrade involves running it through a different .exe that uses last-gen tech, or just dropping down some sliders in the new version.
Unless in the third option you're already implying the PC would get both the 'next-gen' version and the downgraded version, letting players switch between them - though I still wouldn't fully agree, as it's not worth the trouble. Especially considering you could still run the game on last-gen PCs by dropping the visual quality, as I've already shown.
It doesn't make sense, because they are mutually exclusive: the UE4 lighting path means lots and lots of iterative work, since it typically takes - IIRC around 10 hours - to bake a level change for previous-gen-designed games, from what Daniel said in the UE5 Lumen video. Lightmass gives them real-time previewing, but it's only indicative and won't catch every situation, like small light bleeding that can destroy the look of a scene and then needs to be fixed and re-baked.
Designing with UE5's real-time SW GI/RT, with HW RT for close-up detail, gives developers instant feedback to iterate on level designs. And you can't build levels designed for UE4 lighting using Lumen as a previewer - per the answers in the Q&A at the end of the Lumen video - so every level would need to be baked, adjusted, and re-baked, but using proxy meshes (traditional polygon meshes) instead of the Nanite geometry, which might be Megascans that far exceed the traditional HW T&L pipeline's capabilities even on cards more powerful than a 1060.
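Just as a back-of-envelope on why the iteration cost is the killer here - the ~10 hour bake figure is the one recalled from the Lumen video above, while the iteration and level counts are purely hypothetical numbers I've picked for illustration:

```python
# Rough iteration-cost sketch for a baked-lighting workflow.
# The ~10h per-bake figure is recalled from the Lumen video Q&A;
# iteration and level counts are hypothetical, for illustration only.
BAKE_HOURS_PER_CHANGE = 10   # full lighting bake after a level change
ITERATIONS_PER_LEVEL = 20    # hypothetical design tweaks per level
LEVELS = 15                  # hypothetical level count

baked_hours = BAKE_HOURS_PER_CHANGE * ITERATIONS_PER_LEVEL * LEVELS
print(f"Baked-lighting iteration time: {baked_hours} h "
      f"(~{baked_hours / 24:.0f} machine-days)")
# With real-time Lumen GI those same iterations are effectively free:
# the designer sees final lighting while editing.
```

Even if those made-up numbers are off by a factor of a few, the bake loop is machine-months of turnaround that the real-time path simply doesn't pay.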
Let's take a look at option (2) done to its fullest, and what that really means for PC hardware to do the same but better.
6 CPU cores for game engine logic
2 CPU cores for additional SW RT using AVX
IO complex and SSD heavily used for streaming data like 8K textures
GPU used for real-time SW GI/RT, with BVH accelerators used for some HW RT, using the full fill-rate of the GPU.
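The split above can be written down as a toy budget. To be clear, the core/role split just mirrors the list here and is my assumption about an option-(2) title, not an official PS5 spec - and the PC-side check is a hypothetical sketch:

```python
# Toy resource budget for an option-(2) title. The core/role split mirrors
# the list above and is an assumption, not an official PS5 spec.
console_budget = {
    "cpu_cores_game_logic": 6,   # engine/game logic
    "cpu_cores_sw_rt_avx": 2,    # additional SW RT via AVX
    "io": "IO complex + SSD streaming (e.g. 8K textures)",
    "gpu": "real-time SW GI/RT + BVH units for some HW RT, full fill-rate",
}

def pc_gaps(cpu_cores: int, has_hw_rt: bool, has_fast_io: bool) -> list:
    """Which console roles a hypothetical PC build fails to cover."""
    gaps = []
    if cpu_cores < 8 and not has_hw_rt:
        gaps.append("SW RT cores")  # too few cores, nothing to offload RT to
    if not has_fast_io:
        gaps.append("IO complex")   # no DirectStorage-style streaming path
    return gaps

# A 4-core box with a GTX 1060 and no fast IO misses both sides:
print(pc_gaps(cpu_cores=4, has_hw_rt=False, has_fast_io=False))
```

The point of the sketch: dropping sliders only scales the GPU line, while the CPU and IO lines of the budget don't have a slider at all.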
Even if you can use 4 stronger CPU cores on a PC instead of 6 mobile ones on a console, and rely on more HW RT on the GPU to avoid the AVX need, there's still no way around the IO complex without DirectStorage and an RTX IO-style setup with a fast NVMe SSD. And in this day and age, going below native 1080p30 in a PC game to get around the fill-rate just isn't going to help the perception of a full-price AAA game, IMO.
Nvidia themselves have made RTX IO usable only with RTX GPUs - probably because lesser cards don't have enough cache bandwidth and spare compute to do their normal job and real-time decompression at the same time. So if PlayStation first-party games are built with option (2)-style requirements, it's effectively another development cycle to get the game working on a 4-core CPU and a GTX 1060, and without GPU decompression. And that's assuming the real-time SW RT isn't integral to how levels play by being dynamic GI.
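To put a rough number on why CPU-side decompression doesn't save the 4-core box: the 5.5 GB/s raw SSD rate is the figure Sony gave in the Road to PS5 talk, but the per-core software decompression throughput below is an assumed ballpark on my part, not a benchmark:

```python
# Back-of-envelope: can a 4-core CPU stand in for a HW decompressor?
# 5.5 GB/s raw SSD rate is from Sony's Road to PS5 talk; the ~1 GB/s
# per-core decompression throughput is an assumed ballpark, not measured.
RAW_STREAM_GBPS = 5.5        # compressed data coming off the SSD at peak
DECOMP_GBPS_PER_CORE = 1.0   # assumed software decompression rate per core
TOTAL_CORES = 4

cores_needed = RAW_STREAM_GBPS / DECOMP_GBPS_PER_CORE
cores_left = TOTAL_CORES - cores_needed
print(f"Cores needed just to keep up with the stream: {cores_needed:.1f}")
print(f"Cores left over for game logic: {cores_left:.1f}")
```

Even if my per-core number is pessimistic by 2x, the streaming alone eats most of the CPU before a single frame of game logic runs - which is exactly why the port becomes its own development cycle rather than a settings preset.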