This is the symbol grounding problem. A crude metaphor is the closed circularity of a dictionary, where the meaning of each term consists of references to other terms and their word-combination descriptions. The definitions never capture the original, phenomenal things and events of the world except when a person reads them, correlating the symbols to stored experiences or to live ones occurring in the sensed environment.
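A minimal sketch of that circularity, using a hypothetical mini-lexicon invented purely for illustration: every "definition" expands only into other symbols from the same closed vocabulary, so no amount of expansion ever bottoms out in anything non-symbolic.

```python
# Toy dictionary: each term is "defined" only by other in-vocabulary terms.
# (Hypothetical mini-lexicon, invented for this sketch.)
lexicon = {
    "water": ["clear", "liquid"],
    "clear": ["transparent"],
    "transparent": ["clear"],        # circular: defined via "clear"
    "liquid": ["water", "flowing"],  # circular: loops back to "water"
    "flowing": ["liquid"],
}

def expand(term, depth):
    """Recursively unfold a term into its defining symbols."""
    if depth == 0 or term not in lexicon:
        return term
    parts = (expand(t, depth - 1) for t in lexicon[term])
    return term + " -> (" + ", ".join(parts) + ")"

# However deep we expand, the output contains only more lexicon entries;
# nothing ever points outside the symbol system to the world itself.
print(expand("water", 3))
```

Only a reader who already grounds "water" or "clear" in experience breaks the loop; the structure itself never does.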
However, "smart" machines are tools -- they usually don't have to apprehend the [information] "objects" they manipulate the way humans represent them. They need only produce the same responses/actions as people, or yield the results that we want.
Smart machines could potentially survive and evolve after we perished, as long as they harbored a method of "invisible" representation (descriptive symbolism) that successfully jibed with or correlated to our experienced ("shown") environment, and provided they belonged to some future self-replicating, autonomous category.
Since the world of scientific realism revolves heavily around the abstract representations of physics, there arguably shouldn't even be much of a drop-off in that respect: even we can't directly perceive that reality as it is. For the routines of everyday life, we are still stuck in the "commonsense realism" of our ancestors. Smart machines might be designed that "live" entirely within a realm of physics conceptions (setting aside how brute survival might be compromised).
So the "descriptive knowledge" of [artificial] philosophical zombies (and the technological or syn-biological substrate that such descriptive information reduces to in their AI brains) can still potentially engage with the world very effectively. Motives stemming from qualitative feelings and sensations will be absent, though, unless engineers have substituted physical dynamics and structural relationships that stimulate analogous behavior and goals.