Simulated reality is the skeptical hypothesis that reality could be simulated—perhaps by computer simulation—to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation.
This is quite different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of actuality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to separate from "true" reality.
There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.
In brain-computer interface simulations, each participant enters from outside, directly connecting their brain to the simulation computer. The computer transmits sensory data to the participant, reads and responds to their desires and actions in return; in this manner they interact with the simulated world and receive feedback from it. The participant may be induced by any number of possible means to forget, temporarily or otherwise, that they are inside a virtual realm (e.g. "passing through the veil", a term borrowed from Christian tradition, which describes the passage of a soul from an earthly body to an afterlife). While inside the simulation, the participant's consciousness is represented by an avatar, which can look very different from the participant's actual appearance.
In a virtual-people simulation, every inhabitant is a native of the simulated world. They do not have a "real" body in the external reality of the physical world. Instead, each is a fully simulated entity, possessing an appropriate level of consciousness that is implemented using the simulation's own logic (i.e. using its own physics). As such, they could be downloaded from one simulation to another, or even archived and resurrected at a later time. It is also possible that a simulated entity could be moved out of the simulation entirely by means of mind transfer into a synthetic body.
In an emigration simulation, the participant enters the simulation from the outer reality, as in the brain-computer interface simulation, but to a much greater degree. On entry, the participant could use a variety of hypothetical methods to participate in the simulated reality including mind transfer to temporarily relocate their mental processing into a virtual-person. After the simulation is over, the participant's mind is restored along with all new memories and experience gained within (as in the movie The Thirteenth Floor, or when one flatlines in Neuromancer).
Finally, there is the option of a simulated reality being dynamically constructed and modified using real-world matter and energy within an enclosing container or room, such as the "Holodeck" in Star Trek. Upon entering such a space, the real-world person would effectively feel immersed in the simulated environment, with a variety of potential methods being used to convince the user of the presence of motion, gravity, environments, and so on, and with the user presumably able to interact (or not) with the simulated reality.
An intermingled simulation supports both types of consciousness: "players" from the outer reality who are visiting (as a brain-computer interface simulation) or emigrating, and virtual-people who are natives of the simulation and hence lack any physical body in the outer reality.
The Matrix movies feature an intermingled type of simulation: they contain not only human minds (with their physical bodies remaining outside), but also sentient software programs that govern various aspects of the computed realm.
Ten years after Hans Moravec first published the simulation argument (and three years after its update in Moravec's second full-length popular science book), the philosopher Nick Bostrom developed an argument, distinct from the skeptical hypothesis, that we may be living in a simulation. Roughly, his argument proceeds as follows: at least one of the following propositions is almost certainly true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage capable of running high-fidelity ancestor simulations; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history; or (3) we are almost certainly living in a computer simulation.
In greater detail, Bostrom is attempting to prove a tripartite disjunction: that at least one of these propositions must be true. His argument rests on the premise that, given sufficiently advanced technology, it is possible to represent the populated surface of the Earth without recourse to quantum simulation; that the qualia experienced by a simulated consciousness are comparable or equivalent to those of a naturally occurring human consciousness; and that one or more levels of simulation within simulations would be feasible given only a modest expenditure of computational resources in the real world.
If one assumes that humans will not be destroyed or destroy themselves before developing such a technology; and if one assumes that human descendants will have no overriding legal restrictions or moral compunctions against simulating their ancestors; it would be unreasonable to count ourselves among the small minority of genuine ancestors who, sooner or later, will be vastly outnumbered by artificial simulations.
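The counting step behind this conclusion can be sketched numerically. The figures below are illustrative assumptions, not Bostrom's own; the point is only that once simulated observers outnumber genuine ancestors, a randomly chosen observer is almost certainly simulated:

```python
# Toy version of the counting argument. All numbers are illustrative assumptions.
real_civilizations = 1
simulations_per_civilization = 1000  # hypothetical ancestor-simulations each runs

simulated_populations = real_civilizations * simulations_per_civilization

# A randomly chosen observer belongs to the simulated majority with this probability:
fraction_simulated = simulated_populations / (simulated_populations + real_civilizations)
print(round(fraction_simulated, 4))  # 0.999
```

With even a modest number of simulations per civilization, the fraction of simulated observers approaches one, which is the arithmetic core of the "vastly outnumbered" claim.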
Epistemologically, it is not impossible to tell whether we are living in a simulation. For example, Bostrom suggests that a window could pop up saying: "You are living in a simulation. Click here for more information." However, imperfections in a simulated environment might be difficult for the native inhabitants to identify, and for purposes of authenticity, even the simulated memory of a blatant revelation might be purged programmatically. Nonetheless, should any evidence come to light, either for or against the skeptical hypothesis, it would radically alter the aforementioned probability.
As to the question of whether we are living in a simulated reality or a 'real' one, the answer may be 'indistinguishable' in principle. In a commemorative article dedicated to the World Year of Physics 2005, the physicist Bin-Guang Ma proposed a theory of the 'relativity of reality'. The notion appears in ancient philosophy (Zhuangzi's 'Butterfly Dream') and in analytical psychology: without special knowledge of a reference world, one cannot say with absolute certainty that one is experiencing "reality".
Computationalism is a theory in the philosophy of mind which states that cognition is a form of computation. It is relevant to the simulation hypothesis because it illustrates how a simulation could contain conscious subjects, as required by a "virtual people" simulation. It is well known, for example, that physical systems can be simulated to some degree of accuracy. If computationalism is correct, and if there is no obstacle to generating artificial consciousness or cognition, then a simulated reality is at least theoretically possible. However, the relationship between cognition and the phenomenal qualia of consciousness is disputed. It is possible that consciousness requires a vital substrate that a computer cannot provide, and that simulated people, while behaving appropriately, would be philosophical zombies. This would undermine Nick Bostrom's simulation argument: we cannot be simulated consciousnesses if consciousness, as we know it, cannot be simulated. The skeptical hypothesis remains intact, however: we could still be envatted brains, existing as conscious beings within a simulated environment, even if consciousness cannot be simulated.
Some theorists have argued that if the "consciousness-is-computation" version of computationalism and mathematical realism (or radical mathematical Platonism) are both true, then consciousness is computation, which is in principle platform-independent and thus admits of simulation. This argument states that a "Platonic realm" or ultimate ensemble would contain every algorithm, including those which implement consciousness.
A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result the "dream hypothesis" cannot be ruled out, although it has been argued that common sense and considerations of simplicity rule against it. One of the first philosophers to question the distinction between reality and dreams was Zhuangzi, a Chinese philosopher from the 4th century BC. He phrased the problem as the well-known "Butterfly Dream," which went as follows:
Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn't know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn't know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things. (2, tr. Burton Watson 1968:49)
The philosophical underpinnings of this argument are also brought up by Descartes, who was one of the first Western philosophers to do so. In Meditations on First Philosophy, he states "... there are no certain indications by which we may clearly distinguish wakefulness from sleep", and goes on to conclude that "It is possible that I am dreaming right now and that all of my perceptions are false".
Chalmers (2003) discusses the dream hypothesis and notes that it comes in two distinct forms: the hypothesis that one is currently dreaming, in which case many of one's beliefs about the world are false, and the hypothesis that one has always been dreaming, in which case the objects one perceives actually exist, albeit as constituents of one's imagination.
Both the dream argument and the simulation hypothesis can be regarded as skeptical hypotheses. However, in raising these doubts, just as Descartes noted that his own thinking led him to be convinced of his own existence, the existence of the argument itself testifies to the possibility of its own truth.
Psychosis is another state of mind in which, some argue, an individual's perceptions have no physical basis in the real world; however, psychosis may have a physical basis in the real world, and explanations vary.
A decisive refutation of any claim that our reality is computer-simulated would be the discovery of some uncomputable physics, because if reality is doing something that no computer can do, it cannot be a computer simulation. (Computability here means computability by a Turing machine; hypercomputation, or super-Turing computation, introduces other possibilities, which are dealt with separately below.) In fact, known physics is held to be (Turing) computable, but the statement "physics is computable" needs to be qualified in various ways. In computability theory, a real number, one with an infinite number of digits, is said to be computable if a Turing machine can go on emitting its digits endlessly, never reaching a "final digit". This runs counter, however, to the idea of simulating physics in real time (or any plausible kind of time). Known physical laws (including those of quantum mechanics) are deeply infused with real numbers and continua, and the universe seems able to decide their values on a moment-by-moment basis. As Richard Feynman put it:
"It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of space/time is going to do? So I have often made the hypotheses that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities".
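The digit-by-digit notion of a computable real described above can be made concrete. The sketch below emits any desired prefix of the decimal digits of √2 using only integer arithmetic; the procedure never produces a "final digit", since a larger n always yields more:

```python
from math import isqrt  # integer square root, exact for arbitrarily large ints

def sqrt2_digits(n):
    # First n decimal digits of sqrt(2) after the leading '1':
    # floor(sqrt(2) * 10**n) computed exactly with integer arithmetic,
    # illustrating a computable real emitted digit by digit.
    return str(isqrt(2 * 10 ** (2 * n)))[:n + 1]

print(sqrt2_digits(10))  # '14142135623'
```

Each call does finitely much work, but no call ever exhausts the number — which is precisely the tension with simulating continuum physics in any finite time.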
The objection could be made that the simulation does not have to run in "real time". This misses an important point, though: the shortfall is not linear; rather, it is a matter of performing an infinite number of computational steps in a finite time.
Note that these objections all relate to the idea of reality being exactly simulated. Ordinary computer simulations as used by physicists are always approximations.
These objections do not apply if the hypothetical simulation is being run on a hypercomputer, a hypothetical machine more powerful than a Turing machine. Unfortunately, there is no way of determining whether the computers running a simulation are capable of doing things that computers within the simulation cannot do. No one has shown that the laws of physics inside a simulation and those outside it must be the same, and simulations of different physical laws have been constructed. The problem is that no evidence could conceivably be produced to show that the universe is not any kind of computer, making the simulation hypothesis unfalsifiable and therefore scientifically unacceptable, at least by Popperian standards.
All conventional computers, however, are less than hypercomputational, and the simulated reality hypothesis is usually expressed in terms of conventional computers, i.e. Turing machines.
Roger Penrose, an English mathematical physicist, argues that human consciousness is non-algorithmic and thus cannot be modeled by a conventional Turing-machine-type digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness, with the collapse of the quantum wavefunction seen as playing an important role in brain function. (See quantum mind-body problem.)
In his book The Fabric of Reality, David Deutsch discusses how the limits to computability imposed by Gödel's incompleteness theorem affect the virtual-reality rendering process. To do this, Deutsch invents the notion of a CantGoTu environment (named after Cantor, Gödel, and Turing), using Cantor's diagonal argument to construct an 'impossible' virtual reality which no physical VR generator could generate. Imagine that all VR environments renderable by such a generator are enumerated and labelled VR1, VR2, and so on. Slicing time into discrete chunks, we can create an environment which is unlike VR1 in the first timeslice, unlike VR2 in the second timeslice, and so on. This environment is not in the list, and so it cannot be generated by the VR generator. Deutsch then goes on to discuss a universal VR generator, which as a physical device would not be able to render all possible environments, but would be able to render those environments which can be rendered by all other physical VR generators. He argues that 'an environment which can be rendered' corresponds to a set of mathematical questions whose answers can be calculated, and discusses various forms of the Turing principle, which in its initial form states that it is possible to build a universal computer that can be programmed to execute any computation any other machine can perform. Attempts to capture the process of virtual-reality rendering yield a version which states: "It is possible to build a virtual-reality generator whose repertoire includes every physically possible environment." In other words, a single, buildable physical object can mimic all the behaviours and responses of any other physically possible process or object. This, it is claimed, is what makes reality comprehensible.
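The diagonal construction behind the CantGoTu environment can be sketched on a finite example. Here each "environment" is represented, purely for illustration, as a list of per-timeslice states; the diagonal environment is built to differ from the i-th environment in timeslice i, so it cannot appear anywhere in the enumeration:

```python
# A finite sketch of the CantGoTu diagonal argument (illustrative data, two states
# per timeslice). Each row is one enumerated renderable environment VRi.
renderable = [
    [0, 0, 0, 0],  # VR1
    [1, 1, 0, 1],  # VR2
    [0, 1, 1, 0],  # VR3
    [1, 0, 0, 1],  # VR4
]

# Flip the diagonal: differ from VRi exactly in timeslice i.
cantgotu = [1 - env[i] for i, env in enumerate(renderable)]

print(cantgotu)                 # [1, 0, 0, 0]
print(cantgotu in renderable)   # False: not in the enumeration, hence unrenderable
```

The same flip applied to any enumeration, finite or infinite, always produces an environment missing from the list, which is the force of Cantor's argument.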
Later in the book, Deutsch argues for a very strong version of the Turing principle, namely: "It is possible to build a virtual reality generator whose repertoire includes every physically possible environment." However, to include every physically possible environment, the computer would have to be able to run a full simulation of the environment containing itself. Even so, a computer running a simulation need not compute every possible physical moment to be plausible to its inhabitants.
The computational requirements for molecular dynamics illustrate the scale of the problem: in 2002, while the fastest proteins fold on the order of tens of microseconds, single computer processors could only simulate on the order of a nanosecond of real-time folding in full atomic detail per CPU day. Simulating an entire galaxy would require more computing power than can presently be envisioned, assuming that no shortcuts are taken when simulating areas that nobody is observing.
In answer to this objection, Bostrom calculated that simulating the brain functions of all humans who have ever lived would require roughly 10^33 to 10^36 operations. He further calculated that a planet-sized computer built with computronium using known nanotechnological methods would perform about 10^42 operations per second, and a planet-sized computer, or an even larger stellar-system-sized computer, is not inherently impossible to build (although the speed of light could severely constrain the speed at which its subprocessors share data). In any case, a simulation need not compute every single molecular event that occurs inside it; it may only process events that its participants can actively perceive. This is particularly the case if the simulation contained only a handful of people; far less processing power would be needed to make them believe they were in a "world" much larger than was actually the case. A real-world analogue could be the observer effect or the Heisenberg uncertainty principle: if an unobserved region of space is indeterminate until observed, this could be because the simulating computer is not simulating it until it needs to.
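The gap between these two figures is worth making explicit. Taking Bostrom's upper estimate at face value, the whole computation fits into a vanishingly small slice of machine time:

```python
# Rough check of Bostrom's figures as quoted above.
ops_needed = 10 ** 36       # upper estimate: all human brain-history
ops_per_second = 10 ** 42   # planet-sized computronium computer

seconds = ops_needed / ops_per_second
print(seconds)  # 1e-06 -- about a microsecond of machine time
```

In other words, on these figures a single such machine could rerun the entire mental history of humanity a million times per second, which is why the argument treats the supply of ancestor-simulations as effectively unbounded.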
The existence of simulated reality is unprovable in any concrete sense: any "evidence" that is directly observed could be another simulation itself. In other words, there is an infinite regress problem with the argument. Even if we are a simulated reality, there is no way to be sure the beings running the simulation are not themselves a simulation, and the operators of that simulation are not a simulation, ad infinitum. Given the premises of the simulation argument, any reality, even one running a simulation, has no better or worse a chance of being a simulation than any other.
It is perhaps erroneous to apply our current sense of feasibility to projects undertaken in an outer reality, where resources and physical laws may be very different. It also assumes designers would need to simulate reality beyond our natural senses.
Also, a simulated reality need not run in real time, which further relaxes the computational constraints. The inhabitants of a simulated universe would have no way of knowing if one day of subjective time actually required much longer to calculate in their host computer, or vice versa; or if the simulation is run in pieces on different computers; or by a million generations of monks working weekends on abacuses, all without the simulation missing a beat 'in simulation time'.
A computed simulation may have voids or other errors that manifest inside. As a simple example of this, when the "hall of mirrors" effect occurs in the first person shooter Doom, the game attempts to display "nothing" and obviously fails in its attempt to do so. If a void can be found and tested, and if the observers survive its discovery, then it may reveal the underlying computational substrate. However, lapses in physical law could be attributed to other explanations, for instance inherent instability in the nature of reality.
In fact, bugs could be very common. An interesting question is whether knowledge of bugs or loopholes in a sufficiently powerful simulation is instantly erased the minute it is observed, since presumably all thoughts and experiences in a simulated world could be carefully monitored and altered. This would, however, require enormous processing capability in order to monitor billions of people simultaneously. If this is the case, we would never be able to act on the discovery of bugs. Indeed, any simulation sufficiently determined to protect its existence could erase any proof that it was a simulation whenever such proof arose, provided it had the enormous capacity necessary to do so.
To take this argument to an even greater extreme, a sufficiently powerful simulation could make its inhabitants think that erasing proof of its existence is difficult. The computer would then actually have an easy time erasing glitches, while we all believe that changing reality requires great power. One could regard miracles and paranormal activity as software bugs, especially those which seem to have a negative effect on the observer. This notion has been explored in The Matrix, where déjà vu is considered a sign of a crude alteration to the system, and in The Animatrix, where software glitches are concentrated in a house the neighbors call "haunted", subsequently corrected by the Agents. A related possibility would regard demons and evil spirits as 'hackers' who attempt to exploit the system.
Additionally, it can be argued that what are in fact errors in the software, we perceive as part of the "proper" reality. For example, it may be that tornadoes were never meant to exist in this simulation but came to be through a programming error. Removing them would then only arouse suspicion and raise more questions among the inhabitants; in such an instance, it would make more sense to leave the "error" in place.
The simulation may contain hidden or secret messages or exits, placed there by the designer or by other inhabitants who have solved the riddle, in the way that Easter eggs in computer games and other media sometimes do. People have already spent considerable effort searching for patterns or messages within the endless decimal places of fundamental constants such as e and pi. In Carl Sagan's science fiction novel Contact, Sagan contemplates the possibility of finding a signature embedded in pi (in its base-11 expansion) by the creators of our reality.
However, no such messages have been made public, if any have been found, and the argument relies on the messages being truthful. As usual, other hypotheses could explain the same evidence. In any case, if such constants are in fact normal numbers, then an apparently meaningful message is bound to appear in them eventually (a consequence of the infinite monkey theorem), not necessarily because it was placed there.
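The normal-number point can be quantified with a back-of-envelope calculation. Treating the digits as independent uniform draws (which is what normality makes them resemble), the expected number of occurrences of a fixed k-digit "message" among N digits is about (N − k + 1) · 10^−k:

```python
# Expected chance occurrences of a fixed k-digit message among N digits of a
# normal number, modelling digits as independent uniform draws (an assumption).
def expected_hits(message_len, n_digits):
    return (n_digits - message_len + 1) * 10 ** (-message_len)

# A 6-digit 'message' is expected to appear ~1000 times in the first billion digits:
print(expected_hits(6, 10 ** 9))
```

So short "signatures" appear constantly by chance, and only a very long, very structured message found unexpectedly early would carry any evidential weight.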
The Easter-egg theory also assumes that a simulation would want to inform its inhabitants of its real nature; it may not. Conversely, if the human race eventually becomes capable of creating intelligent programs (i.e. machines) living inside a virtual subspace of our "real" world, an interesting question is whether we could prevent such sentient creations from discovering their artificial nature (see Philip K. Dick's Do Androids Dream of Electric Sheep?).
A computer simulation would be limited to the processing power of its host computer, and so there may be aspects of the simulation that are not computed at a fine-grained (e.g. subatomic) level. This might show up as a limitation on the accuracy of information that can be obtained in particle physics.
However, this argument, like many others, assumes that accurate judgments about the simulating computer can be made from within the simulation. If we are being simulated, we might be misled about the nature of computers.
Taken one step further, the "fine-grained" elements of our world could themselves be simulated, since we never see sub-atomic particles directly, owing to our inherent physical limitations. To see such particles we rely on instruments which magnify or translate that information into a format our limited senses can view: a computer printout, the lens of a microscope, and so on. We therefore essentially take on faith that these are an accurate portrayal of a fine-grained world which appears to exist in a realm beyond our natural senses. Assuming the sub-atomic level could also be simulated, the processing power required to generate a realistic world would be greatly reduced.
In theoretical physics, digital physics holds the basic premise that the entire history of our universe is computable in some sense. The hypothesis was pioneered in Konrad Zuse's book Rechnender Raum (translated into English as Calculating Space, MIT, 1970), which focuses on cellular automata. Juergen Schmidhuber suggested that the universe could be a Turing machine, because there is a very short program that outputs all possible programs in an asymptotically optimal way. Other proponents include Edward Fredkin, Stephen Wolfram, and Nobel laureate Gerard 't Hooft. They hold that the apparently probabilistic nature of quantum physics is not incompatible with the notion of computability. A quantum version of digital physics has been proposed by Seth Lloyd. None of these suggestions has been developed into a workable physical theory.
It can be argued that the use of continua in physics constitutes a possible argument against the simulation of a physical universe. Removing the real numbers and uncountable infinities from physics would counter some of the objections noted above, and at least make computer simulation a possibility. However, digital physics must overcome these objections. For instance, cellular automata would appear to be a poor model for the non-locality of quantum mechanics.
Some of the people in a simulated reality may be automatons, philosophical zombies, or 'bots' added to the simulation to make it more realistic or interesting or challenging. Indeed, it is conceivable that every person other than oneself is a bot. Bostrom called this a "me-simulation", in which oneself is the only sovereign lifeform, or at least the only inhabitant who entered the simulation from outside.
Bostrom further elaborated on the idea of bots:
In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or "shadow-people" – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much [computationally] cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience.
The idea of "zombies" has a well-known analogue in the video game industry, where computer-controlled characters are known as non-player characters ("NPCs"). The term 'bots' is short for 'robots'; the usage originated as the name given to the simple AI opponents of modern video games.
A brain-computer interface simulated reality may be required to progress at a rate that is near realtime; that is, time within it may be required to pass at approximately the same rate as the outer reality which contains it. This might be the case because the players are interacting with the simulation using brains which still reside in the outer reality. Therefore, if the simulation were to run faster or slower, those brains could notice because they were not contained within it.
It is possible that time passes more slowly or quickly for brains in a dream state (i.e. in a brain-computer interface trance); the point, however, is that they still function at a finite, biological speed, and the simulation must keep pace with them, unless those interacting with the simulation are augmented and capable of processing information at the same rate as the simulation itself.
A virtual-people or emigration simulated reality, on the other hand, need not progress in near real time. This is because its inhabitants use the simulation's own physics to experience, think, and react. If the simulation were slowed down or sped up, so too would be the inhabitants' own senses, brains, and muscles, along with every other molecule inside. The inhabitants would perceive no change in the passage of time, simply because their means of measuring time depends on the very cosmic clock they would be seeking to measure. (They could detect the difference only if they had some access to data from the outer reality.)
For that matter, they could not even detect whether the simulation had been completely halted: a pause in the simulation would pause every life and mind within it. When the simulation was later resumed, the inhabitants would continue exactly as they were before the pause, completely unaware that (for example) their cosmos had been paused and archived for a billion years before being resumed. A simulation could also be created with its inhabitants already possessing memories as though they had already lived part of their lives before; said inhabitants would not be able to tell the difference unless informed of it by the simulation. (Compare with the five minute hypothesis and Last Thursdayism).
One practical implication of this is that a virtual-people or hybrid simulation does not require a computer powerful enough to model its entire cosmos at full speed. Because a universal computer can simulate any other computer, if only more slowly, a simulation can progress at whatever speed its host computer can manage; it would be constrained by available memory, but not by computation rate.
Recursive simulation involves a simulation, or an entity in a simulation, creating another simulation within a simulated environment. The 'parent' simulator would be simulating all of the atoms of a computer, atoms which happen to be calculating a 'child' simulation. By way of illustration: in Fallout 3, Metal Gear Solid 2, and Xenosaga, the player character at one point must enter a virtual-reality simulation within the game. Alternatively, imagine a Java Runtime Environment running a virtual machine on a "real-world" computer that is itself located within a simulation.
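The nesting can be sketched with an interpreter hosting another interpreter. Here Python's `exec` stands in, purely as an illustration, for a "child machine": the top level runs a program whose only job is to run another program on a machine of its own:

```python
def run(program, env=None):
    # A minimal 'simulation level': execute source text in its own namespace
    # and return that namespace as the level's final state.
    env = {} if env is None else env
    exec(program, env)
    return env

# The innermost (level-2) world just computes a value.
level2 = "result = 6 * 7"
# The level-1 world spins up a child machine to run level2, then reads its state.
level1 = f"child = run({level2!r}); result = child['result']"

top = run(level1, {"run": run})
print(top["result"])  # 42: a result computed two interpreter levels down
```

Each level is opaque to the one below it: the level-2 program has no way to tell, from inside its namespace, how many `run` calls sit above it.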
This recursion could continue through infinitely many levels: a simulation containing a computer running a simulation containing a computer running a simulation, and so on. Assuming no level has infinite computational power, the recursion is subject to one constraint: each 'nested' simulation must be smaller, in information content, than the simulation that contains it, and must run either more slowly than its parent or at a coarser grain of detail.
The latter is the basis of the idea that quantum uncertainties are circumstantial evidence that our own reality is a simulation. However, this assumes that there is a finite limitation somewhere in the chain. Assuming an infinite number of simulations within simulations, there need not be any noticeable difference between any of the subsets.
Depending on the nature of the simulation, it may be possible to exit by several methods, among them waking up in the parent world (brain-computer interface) or mind transfer from the simulated world into a biological or cybernetic body that exists in the parent world. Simulants could also be transferred into other simulated worlds upon certain events within the simulated world (e.g. death). Several of these scenarios have been portrayed in popular fiction and motion pictures.
Virtual reality, and to a lesser extent simulated reality, are key facets of the cyberpunk genre, regardless of format.