M E T A D R E A M S
With an Investigation into their Potential and Ultimate Future
by Benjamin D. Cooley

"If a tree falls in an empty forest, does it make a sound?" We've all heard the question, and inevitably, most of us come up with the same answer. Since the universe is consistent as far as each of us has ever experienced, we assume that its laws apply everywhere, and that they continue to apply at all times. So when the tree falls in the empty forest we must believe, based on our own experience, that the answer is yes, the tree does make a sound.

But suppose the universe isn't as consistent as we believe it to be. Suppose it only extends to the limits of our perceptions, or to the limits of the perceptions of all the intelligent beings on the planet. Then the tree, if no one is there to see it fall, doesn't actually fall, and therefore makes no sound at all.

As an answer to the tree question, it's an interesting perspective, and one that comes up in most discussions of the puzzle, but how is it related to real life? It obviously doesn't affect our day-to-day lives, where cause and effect seem perfectly consistent, nor has it ever come up in research, where, so far, every incongruous result has had a definite cause. Since there is no way to objectively test this hypothesis one way or the other (at least as far as we know), it seems we might as well stick with the conventional answer to the tree question and forget this one.

So why did I bring it up at all? Because this way of looking at the tree question, or at reality in general, provides a solution to a very difficult problem in implementing virtual reality simulations that are meant to be indistinguishable from ordinary reality. Maybe I should start with the problem, so that the solution, when explained, makes sense.

The problem is that as the size or resolution of a virtual reality simulation increases, the storage and processing power required to run it also increase. And since the volume being simulated grows with the cube of its linear dimensions (a cubic foot, then a cubic mile, then a cubic light-year, and so on), the storage required to simulate the space explodes along with it. Now, I'm talking about simulations whose participants are supposed to be unable to distinguish them from reality, not the simple simulations that are easily discovered to be fake, though even artificial-looking simulations suffer from this growth in storage to a lesser degree. The storage and processing requirements of the realistic simulations are enormous, since they must simulate everything to such a high degree of accuracy, and so the need to reduce those requirements is all the greater.

An easy way to decrease the amount of storage needed is to remove redundant information: empty space, straight walls, and so on. But as the redundant information is removed, the resolution of the simulation inevitably suffers. For instance, suppose we tried to reduce the resources needed by declaring that empty space is perfectly uniform. The only problem is that real space isn't like that, so the user who investigates finds that space, which should be filled with the occasional atom in varying distributions, is inexplicably empty, or else filled with an unbelievably uniform density of atoms. So we're stuck: either simulate everything, or be satisfied with the occasional inconsistency.
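To put rough numbers on that growth, here is a back-of-envelope sketch. The one-byte-per-voxel cost and the one-millimeter resolution are my own illustrative assumptions, not anything the argument depends on:

    # Brute-force volumetric storage: voxel count grows with the CUBE
    # of the linear dimension. One byte per 1 mm voxel is an arbitrary,
    # optimistic assumption used purely for scale.

    MM_PER_FOOT = 304.8
    MM_PER_MILE = 1_609_344.0

    def bytes_needed(linear_size_mm: float) -> float:
        return linear_size_mm ** 3      # one byte per cubic millimeter

    print(f"cubic foot: {bytes_needed(MM_PER_FOOT):.2e} bytes")  # ~2.8e7  (~28 MB)
    print(f"cubic mile: {bytes_needed(MM_PER_MILE):.2e} bytes")  # ~4.2e18 (~4 exabytes)

A thousand-fold increase in linear size is a billion-fold increase in storage, which is why the fudging has to happen somewhere.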
But suppose we divided the data model used in generating the simulation into two distinct categories: absolute modeling and perception modeling. We'll define absolute modeling as modeling reality by storing all the information needed to generate it at the desired resolution and size. And we'll define perception modeling as storing only those components of the reality that can be perceived by the users within it. Obviously the storage requirements of the absolute method tend to be greater than those of the perception method, so the perception method seems like the better route. Notice, by the way, that the two methods are not mutually exclusive. The perception method is actually built on top of the absolute method: it uses the absolute method to simulate reality, but only for those components of the reality which the users will actually perceive.

Although the perception method seems much better at first glance, there are obvious problems. Some components of real reality are easy to eliminate using perception modeling. For instance, if none of the participants have electron microscopes or radios, the simulation can dispense with simulating most of the electromagnetic spectrum. Likewise, if certain solid objects will never be broken open by the participants, their internal structure may not need to be simulated. Beyond these obvious examples, however, picking the parts of reality the participants won't notice have been fudged becomes a lot harder. Suppose we wanted to take a freshly plastered virtual wall and represent it as a random distribution of lumpy blobs on a flat surface. If the wall was originally plastered by someone in the reality, he will come back and notice that the wall now looks different from how it looked when he plastered it. So the system would have to remember the strokes of the plasterer if it wanted the wall to look realistic, or perhaps even the motions of his body as he plastered, and then map the random distribution of plaster blobs onto those motions to make it look right; even that might not convince him, and if he remembered one particular blob on the wall, the system is screwed. So eventually, just to make sure it hadn't missed anything, the system would have to record the entire wall at a fairly detailed resolution in an absolute model. As it turns out, in situations like these, perception-based modeling really doesn't provide much savings in storage and processing. The growth in storage required for increasingly large simulated areas is still very steep.

Of course there is another solution. You can abandon realistic simulation altogether and create a cardboard mock-up simulation. This type of simulation uses predefined cut-out algorithms, surfaces, and models to decrease the amount of storage required. All plastered wall surfaces in this kind of reality are either random distributions of blobs, or based on the same surface map, or combinations of the two. What you get is a cookie-cutter world where every surface looks like every other and every object works the same. The simulation just pays lip service to working like the real world; it makes no attempt to be consistent or detailed. In our universe everything, even things that seem identical, like mass-produced goods, is subtly different. In the cardboard virtual reality, everything might be dressed up to look different, but underneath it would all be identical.
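To make the contrast concrete, here is a minimal sketch of a perception-modeled store, under assumptions of my own: a cell-based world, with a position-seeded random number standing in for "surface detail". Cells are created lazily the first time someone perceives them and remembered afterward, where an absolute model would allocate every cell up front:

    # Perception modeling in miniature: only cells somebody has actually
    # perceived are ever stored. An absolute model of the same world
    # would allocate all of them up front (a 1 km cube at 1 m resolution
    # is already 1000**3 = 1e9 cells, observed or not).
    import random

    class PerceptionWorld:
        def __init__(self, seed=0):
            self.seed = seed
            self.observed = {}                  # (x, y, z) -> contents

        def perceive(self, pos):
            # Generate a cell on first observation, then remember it so
            # later observations stay consistent with the first.
            if pos not in self.observed:
                rng = random.Random(hash((self.seed, pos)))
                self.observed[pos] = rng.random()   # stand-in for detail
            return self.observed[pos]

    world = PerceptionWorld()
    print(world.perceive((3, 1, 4)))   # created the moment it is seen
    print(world.perceive((3, 1, 4)))   # identical the second time
    print(len(world.observed))         # storage grows only with perception

The plastered wall is exactly the case where this dictionary quietly balloons: to stay consistent with one observant participant, the observed store ends up holding nearly as much as an absolute model would.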
However, there is another alternative. We can use perception modeling to decrease the amount of storage and processing power needed if we add one more component to the virtual reality computer: the minds of those who are to use the simulation. If the computer knows exactly what the participants are going to notice and not notice, it can generate the important things to the correct level of detail and fudge the rest, and the participants will never know the difference. Take our plastered wall. The plasterer certainly remembers, perhaps subconsciously, some of the details of the wall, and he also knows generally how the wall should look. So the computer takes that information from the plasterer's mind, combines it with its own absolute information as to the position and size of the wall, and so on, and recreates the wall to a level sufficient to convince the plasterer that it is real and consistent with what he remembers. With access to the participants' minds, the computer knows exactly what it can fudge and what it can't.

And not only that. From the information in the users' minds, the computer can actually generate most of the reality without needing much additional information at all! What this means is that the storage space and processing power required to generate realistic virtual reality are greatly reduced. In fact, the amount of processing power and storage is no longer primarily related to the amount of space simulated, but to the number of people taking part in the reality, and that relationship is linear, not geometric: add one person, and you add the storage for that person plus the overhead that his presence brings. The size of the universe, and more importantly its complexity, still play a part, but the primary components of this type of reality are the people themselves.

This type of simulation I will call a METADREAM, since it really can be described as a computer-assisted group dream shared by all the participants in the simulation. The metadream model of virtual reality has several important components: the participants, the reality generator, the reality creator, and the consistency auditor. The participants are the largest component of the system, providing the primary storage for the reality and also the rules and expectations by which the reality is generated. The reality generator extracts the necessary information from the minds of the participants to produce what the participants will perceive through their senses: the reality. The reality generator also calls on the reality creator when there are gaps in the information it retrieves from the participants (for example, the distribution of blobs on the plastered wall). The reality creator takes the relevant information from the reality generator (which came originally from the participants), combines it with its own internal models and sets of physical laws, and returns sensory information to fill in the gaps. The consistency auditor catches the moments when the other two systems have made a mistake, and either rolls the metadream back to a previous state in the hope of avoiding the mistake the second time around, or unobtrusively manipulates the metadream to remove the mistake, and all evidence of it, from the current point. Together these three systems make up an AI-level program which I like to call "The Mother of all Dungeon Masters". Now you might be able to understand why I used the tree question to introduce my metadream concept.
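Here is a skeletal sketch of that division of labor, using the essay's own component names. Every data structure and method body below is a stand-in I've invented to show the flow (mind-reading conveniently reduced to a dictionary lookup), not a workable design:

    # Metadream control flow: the generator consults minds, the creator
    # fills gaps, and the auditor vetoes contradictions.
    class Participant:
        def __init__(self, memories):
            self.memories = memories        # thing -> remembered detail

    class MetaDream:
        def __init__(self, participants):
            self.participants = participants
            self.rendered = {}              # everything shown so far

        def generate(self, thing):
            """Reality generator: pull detail straight from a mind."""
            for p in self.participants:
                if thing in p.memories:
                    return p.memories[thing]
            return None                     # a gap: nobody remembers it

        def create(self, thing):
            """Reality creator: invent a plausible filler from models."""
            return f"plausible default {thing}"

        def audit(self, thing, detail):
            """Consistency auditor: no participant's memory disagrees."""
            return all(p.memories.get(thing, detail) == detail
                       for p in self.participants)

        def perceive(self, thing):
            detail = self.generate(thing)
            if detail is None:
                detail = self.create(thing)
            if not self.audit(thing, detail):
                raise RuntimeError("roll back or patch the dream here")
            self.rendered[thing] = detail
            return detail

    dream = MetaDream([Participant({"wall": "lumpy, one big blob low left"})])
    print(dream.perceive("wall"))           # recalled from the plasterer
    print(dream.perceive("sky"))            # invented by the creator

Note that storage here grows with the number of participants and what they have seen, never with the volume of the world, which is the linear scaling claimed above.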
In a metadream, everything is entirely perception based. Only those things that are currently being experienced are actually real. Objects such as trees, houses, and other inanimate things simply don't exist, except as memories or abstract models, until someone is around to experience them. The tree falling in the empty forest of a metadream therefore makes no sound; it doesn't even fall until the reality creator needs to make a forest and decides that a certain number of trees, based on the climate of the area, the rainfall, and so on, have fallen (a toy sketch of this on-demand decision appears at the end of this section).

The metadream is a powerful concept, but it's not the limit of the potential of virtual reality. For the be-all and end-all of virtual reality I'll coin another term: the virtual universe. A virtual universe is a metadream on a grand scale: the simulation of an entire population of people to the limits of their perception. In our case, the metadream would focus on our own planet, with only minor detail on the rest of the universe to keep everything looking consistent. It would consist of six (or seven?) billion intelligent minds, plus numerous semi-intelligent minds (which, by the way, can be statistically simulated as their intelligence and their direct interaction with the intelligent population decrease). Of course, this particular metadream would require a massive control system to keep track of the different reality models needed across the planet and apply them when needed. However, since most of the models required are localized, the control system can remain relatively decentralized, working perhaps with each continent, each city, and each house, with different levels of detail and different models. Once the planet is successfully simulated, the rest of the universe is easy, since the inhabitants' perceptions of it are very limited.

So basically, using the metadream form of virtual reality, you can conceivably simulate an ENTIRE UNIVERSE (more or less; i.e., don't look behind those curtains, Dorothy!). Maybe not a universe as we understand the concept, a huge, infinitely complex, infinitely detailed, expansive cosmos, but still a universe that could fool us if we didn't know the truth. Perhaps, if the Grand Dungeon Master of the virtual universe is smart enough, we could be fooled indefinitely. At the very least, metadreams let us escape the cardboard virtual reality simulations and get real, imperfection-rich, detailed simulations.

Metadreams and virtual universes will probably be an inevitability in the evolution of our species. They won't mean instant utopia, but they may aid us in reaching our full human potential. Besides, they would be a blast. Think of it: there can be as many types of universes as can be imagined and built. Pern, Middle Earth, Dune, all these places can exist, or at least places like them, modified by the fact that real people inhabit them rather than fictional characters. In many universes, magic will probably be a reality. Perhaps all those heroic fantasy stories of Tolkien, Feist, Moorcock, and others could actually take place some day. I believe that over time the real universe will fill up with these things, each planet containing millions. Imagine the planet Earth with a million subuniverses dotted across its surface. An incredible density of life! Perhaps trillions of intelligent beings sharing the same planet, with maybe no more than a billion actually on its surface as living beings at any given time.
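Circling back to the tree at the top of this section, here is the promised toy version of that on-demand decision. The rainfall figure, the fall-probability formula, and the tile scheme are all invented for illustration; only the laziness and the seeding-for-consistency are the point:

    # No tree "falls" until a forest tile is first observed; then the
    # outcome is fixed forever by seeding on the tile's coordinates.
    import random

    def fallen_trees(tile, trees=500, annual_rainfall_mm=900, seed=42):
        """Decide, only now, how many trees have already fallen here."""
        rng = random.Random(hash((seed, tile)))
        p_fallen = min(0.05 + annual_rainfall_mm / 20_000, 0.25)  # made up
        return sum(rng.random() < p_fallen for _ in range(trees))

    print(fallen_trees(tile=(12, 7)))   # the first look decides the history
    print(fallen_trees(tile=(12, 7)))   # every later look agrees with it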
A million subuniverses would mean overcrowding, but without any ecological disaster, and inside them there would be elbow room for everyone. The systems wouldn't be unlimited; the planet would still only support a certain number of people, but that number would be greatly increased, perhaps a thousand, perhaps even a million times. So take the arguments for digital people, immortality, increased intelligence, body independence, and faster and cheaper travel, combine them with the added possibilities raised by the metadream, and you have a pretty persuasive argument that humanity's future will be a digital one.

END.