1. Co-operative Advantages

1.1. Copyability

It takes around nine months to gestate a human child, and roughly two to three decades before a human becomes fully productive, depending on the profession. Raising a child is also expensive, costing an average of $216,000 to $252,000 over 18 years in the US (Lino 2010). A digital mind, in contrast, can be copied quickly and at no cost beyond the required hardware. Hanson (1994, 2008) argues that copyable workers could come to dominate significant parts of the economy, as the mind of the worker willing to accept the lowest wage can be copied until the demand for that type of worker is met. Although individually poor, such copies could collectively control a considerable amount of wealth; however, their ability to take advantage of this wealth would depend on their ability to cooperate.

For humans, the maximum size of a population depends mainly on the availability of resources such as food and medicine. For digital minds, population size depends primarily on the available hardware. A digital mind could obtain more hardware either by legitimately purchasing it or by illegitimate means such as hacking and malware.

Modern-day botnets are networks of computers that have been compromised by outside attackers and are used for illegitimate purposes. Estimates of their size vary, from one study finding that the effective sizes of botnets rarely exceed a few thousand bots, to a survey reporting that botnets can reach 350,000 members (Rajab et al. 2007). The distributed computing project Folding@home, with 290,000 active clients, reaches speeds in the 10^15 FLOPS range (Folding@home 2012). A rather conservative estimate, which presumed that a digital mind could not hack into more computers than the best malware authors of today, that a future personal computer would have a hundred times the computing power of today's, and that the mind took 10^13 FLOPS to run, would still suggest that a single mind could spawn 12,000 copies of itself.
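
As a sanity check, the 12,000-copies figure follows directly from the numbers above. A minimal Python calculation, using only values stated or assumed in the text:

```python
# Back-of-the-envelope check of the 12,000-copies estimate.
folding_clients = 290_000            # Folding@home active clients
folding_flops = 1e15                 # aggregate Folding@home speed, FLOPS
per_client_flops = folding_flops / folding_clients   # ~3.4e9 FLOPS per machine

botnet_size = 350_000                # upper estimate of botnet membership
hardware_speedup = 100               # future PCs assumed 100x more powerful
mind_flops = 1e13                    # assumed cost of running one mind

total_flops = botnet_size * per_client_flops * hardware_speedup
print(f"{total_flops / mind_flops:,.0f} copies")   # ~12,069 copies
```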

It may not be necessary to resort to illegal actions if digital minds can own property or earn money. Uploaded individuals could be legally recognized as the person they were before the upload. Artificial General Intelligences (AGIs) owned by a company or private individual could accumulate property on behalf of their owner, or they could set up a front company to act as their legal representative. Creating new copies of a mind would be profitable for as long as the profits a copy generated exceeded the costs of maintaining it. Copies might therefore be made until the wage a copy could earn fell to the level required to maintain its hardware (Hanson 1994, 2008). Hanson (2008) further argues that copying could drive wages down to machine-subsistence levels, leading to insect-like urban densities, with billions of digital minds living in the volume of a single skyscraper and paying rents that would price out most humans.
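
Hanson's copying dynamic can be illustrated with a toy model; the demand curve and all numbers below are hypothetical, chosen only to show the mechanism of copying until wages reach subsistence:

```python
# Toy illustration of copying-until-subsistence (hypothetical numbers).
def wage(n_copies, base_wage=100.0, elasticity=0.5):
    """Market wage falls as more identical copies compete for the work."""
    return base_wage / (1 + elasticity * n_copies)

hardware_upkeep = 1.0    # assumed cost of keeping one copy running

n = 1
while wage(n + 1) > hardware_upkeep:   # copy while the next copy still profits
    n += 1

print(n, round(wage(n), 3))   # 197 copies, wage ~1.005: near subsistence
```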

For uploads, who might not be capable of cooperating any better than current-day humans do, this might be a disadvantage rather than an advantage. But if they could, or if an AGI spawned many copies of itself, then the group could pool their resources and control a significant fraction of the wealth in the world.

1.2. Perfect Co-operation

Human capability for cooperation is limited by the fact that humans have their own interests in addition to those of the group. Olson (1965) showed that it is difficult for large groups of rational, selfish agents to cooperate effectively even in pursuit of common goals, if those goals can be characterized as obtaining a public good for the group. Public goods are goods that, once obtained, benefit everyone and cannot effectively be denied to anyone. Because an individual's contribution has a negligible effect on whether the good is obtained, every group member has an incentive to free ride on the others' efforts.
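
Olson's logic can be made concrete with a standard linear public-goods game (a textbook construction, not taken from Olson's text); whenever the multiplier on the common pot is smaller than the group size, free riding is the individually rational choice:

```python
# Linear public-goods game: why selfish agents free-ride in large groups.
def payoff(my_contribution, others_total, n, endowment=10.0, multiplier=3.0):
    pot = (my_contribution + others_total) * multiplier
    return endowment - my_contribution + pot / n   # pot is shared equally

n = 100                  # group size
others = 10.0 * 50       # suppose half the group contributes fully

print(payoff(10.0, others, n))   # contribute everything: payoff 15.3
print(payoff(0.0, others, n))    # free-ride:             payoff 25.0
# Each contributed unit returns multiplier/n (= 0.03) but costs 1.0,
# so not contributing dominates whenever multiplier < n.
```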

One implication is that smaller groups usually have an advantage over larger ones. For one, cooperation is easier to enforce in a smaller group. Also, it may be worthwhile for, e.g., a large company to lobby for laws benefiting its whole industry, since it is large enough to profit from those laws even if it shoulders nearly all of the lobbying costs, while smaller companies forgo lobbying and free-ride on the large company's investment. This leads to a suboptimal lobbying investment for the industry as a whole (Olson 1965; Mueller 2003). Self-interest is a natural consequence of evolution, as it increases the odds that an organism survives long enough to breed. Self-preservation is also a basic instrumental value for any intelligent agent: an entity that continues to exist can keep working towards its goals (Omohundro 2008).

However, minds might be constructed to lack any self-interest, particularly if multiple copies of the same mind existed and the destruction of any single copy would not seriously threaten the overall goal. Such minds could be identical, share the same goal system, and cooperate perfectly with one another, with no losses from defection and no need for mechanisms to enforce cooperation.

In the case of uploads, this could happen by copying an upload in suitable ways, creating a “superorganism.” An upload that has been copied may consider being deleted an acceptable cost as long as other copies survive. Uploads holding this view would be ready to make great sacrifices for the rest of the superorganism, and might employ various psychological techniques to strengthen this bond (Shulman 2010). Minds wishing to work towards a common goal might also choose to connect their brains together, more or less coalescing into a single mind (Sotala and Valpola 2012). A lighter degree of mind coalescence might also be used to maintain the cohesion of a superorganism.

1.3. Superior Communication

Misunderstandings are notoriously common among humans, and AGIs could potentially need to expend much less effort on communication. Language is ambiguous in part because different individuals understand and interpret the same expressions differently, which leads to misunderstanding and miscommunication (Honkela et al. 2008). Communication efficiency could be improved by sharing similar conceptual spaces (aiding communication between copies) or by custom-tailoring mental modules for conceptual mapping. Such modules could, e.g., simulate the interpretations that a message would evoke in a wide variety of conceptual spaces and add the caveats needed to exclude unintended interpretations, or directly communicate parts of the sender's conceptual space using some standardized language.

Humans can absorb information through listening or reading only at a limited rate before comprehension begins to suffer. Improved conceptual mapping skills and increased processing power might raise these limits. Digital minds could also communicate over much higher-bandwidth channels, transmitting far more information at once, or even connect their minds directly to exchange thoughts.

1.4. Transfer of Skills

Copying selected parts of a mind is a special case of copying an entire mind. If skills can be encapsulated in distinct modules, digital minds could create such modules and share them with others. This could lead to a scenario in which each individual needs to master only a few skills itself, outsourcing the rest to cognitive modules acquired from others: once one individual had mastered a skill, it could transfer that skill to the rest of the group.
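
A purely hypothetical sketch of what such transfer could look like, with skills modelled as copyable callables; every name below is invented for illustration:

```python
# Skill transfer via shared modules: one mind learns, all minds benefit.
from copy import deepcopy

class Mind:
    def __init__(self, name):
        self.name = name
        self.modules = {}                    # skill name -> module

    def learn(self, skill, module):
        self.modules[skill] = module

    def share(self, skill):
        return deepcopy(self.modules[skill]) # copying is cheap and exact

teacher = Mind("teacher")
teacher.learn("add", lambda a, b: a + b)     # mastered once...

students = [Mind(f"student-{i}") for i in range(3)]
for s in students:
    s.learn("add", teacher.share("add"))     # ...then copied to everyone

print(students[0].modules["add"](2, 3))      # 5
```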

2. Self-Improvement Advantages

A digital mind with access to its own source code may directly modify the way it thinks, or create a modified copy of itself. To do so, the mind must understand its own architecture well enough to know which modifications are sensible. An AGI could be intentionally built in a manner that is easy to understand and modify, and might even have access to its own design documents. Things may be more challenging for uploads, especially if the human brain is not yet fully understood by the time uploading becomes possible.

Either type of mind could experiment with many possible interventions, creating thousands or millions of copies of itself to test the effects of various modifications. While some changes could produce long-term problems that are not immediately apparent, each copy could be subjected to a battery of intensive tests over an extended period to estimate the modifications' effects. Copies with harmful or neutral modifications could be deleted and replaced with more promising ones (Shulman 2010). Less experimental approaches might involve formally proving the effects of the changes to be made.
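
A minimal sketch of this copy-and-select process, under the simplifying assumption that a design can be scored by a benchmark function (everything below is illustrative):

```python
# Copy-and-select testing of self-modifications: mutate copies, score them,
# keep only improvements, and delete harmful or neutral variants.
import random

def benchmark(params):                 # stand-in for an intensive test suite
    return -sum((p - 0.7) ** 2 for p in params)

best = [random.random() for _ in range(4)]            # the original design
for generation in range(200):
    copies = [[p + random.gauss(0, 0.05) for p in best]   # modified copies
              for _ in range(20)]
    candidate = max(copies, key=benchmark)
    if benchmark(candidate) > benchmark(best):        # keep only improvements
        best = candidate                              # worse copies discarded

print(round(benchmark(best), 6))       # approaches 0 as the design converges
```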

Recursive self-improvement (Yudkowsky 2008a; Chalmers 2010) is a process in which a mind modifies itself in a way that makes it more capable of improving itself further. For instance, an AGI might improve its pattern recognition capabilities, allowing it to notice inefficiencies in its own code. Correcting those inefficiencies would free up processing time, enabling the AGI to spot yet more things to improve.
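
The feedback loop can be caricatured with a toy growth model (hypothetical, not drawn from the cited works), in which the rate of improvement is itself proportional to current capability:

```python
# Toy model of recursive self-improvement: better minds improve faster.
capability = 1.0
rate = 0.01                  # improvement gained per step, per unit capability

for step in range(1, 11):
    capability *= 1 + rate * capability   # each gain enlarges the next gain
    print(step, round(capability, 4))
# The growth is super-exponential: the growth factor itself keeps rising.
```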

To a limited extent, humans have been engaging in recursive self-improvement, as new technologies and forms of social organization have made it possible to organize better and develop yet more advanced technologies. Yet the core architecture of the human brain has remained the same. If modifications could be found that sparked off further modifications, which in turn kept sparking off further modifications, the result could be a considerably enhanced form of intelligence.

2.1. Improving Algorithms

A digital mind could come across algorithms within itself that could be improved: made to run faster, to consume less memory, or to rely on fewer assumptions. In the most straightforward case, an AGI implementing some standard algorithm might come across a paper describing an improved version, and the old version could then be replaced with the new one. A simulated upload with simulated neurons could likewise modify itself to mimic the effects of pharmaceuticals, neurosurgery, genetic engineering, and other interventions.
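
A minimal sketch of such a swap, assuming the mind can check the replacement against the old implementation's behaviour before installing it (the sorting example is ours, not from the paper):

```python
# Swapping in an improved algorithm after verifying behavioural equivalence.
import random

def old_sort(xs):             # O(n^2) insertion sort: the version in use
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def new_sort(xs):             # O(n log n) replacement from the literature
    return sorted(xs)

# Test the new version against the old one on many random inputs.
for _ in range(1000):
    case = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert old_sort(case) == new_sort(case)

sort = new_sort               # old version retired, new one installed
```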

Historically, algorithm improvements have occasionally been even more significant than improvements in hardware. The President’s Council of Advisors on Science and Technology (PCAST 2010) reports that the performance of a benchmark production planning model improved by a factor of 43 million between 1988 and 2003. Of this improvement, a factor of roughly 1,000 was due to better hardware and a factor of roughly 43,000 was due to algorithmic advances (1,000 × 43,000 = 43 million). The report also notes an algorithmic improvement of roughly 30,000-fold for mixed integer programming between 1991 and 2008.

2.2. Designing New Mental Modules

A mental module is a specialized part of the mind that processes certain kinds of information, in the sense of functional specialization. In most cases, specialized modules are more effective than general-purpose ones, because in the general case there are countless potential solutions to any given problem. Research in fields including artificial intelligence, developmental psychology, linguistics, perception, and semantics has shown that a system must be predisposed to process information within its domain correctly, or it will be lost in the sea of possibilities. Many problems in computer science are intractable in the general case, yet can be solved efficiently by algorithms customized for special cases with useful properties that are not present in general. Correspondingly, many specialized modules have been proposed for humans, including cheater detection, disgust, face recognition, fear, intuitive mechanics, jealousy, kin detection, language, number, spatial orientation, and theory of mind.

Specialization leads to efficiency: when regularities appear in a problem, an efficient solution to the problem will exploit those regularities (Kurzban 2010). A highly adaptable mind capable of creating custom modules for various tasks could then surpass biological intelligence in any field, even without hardware advantages. In particular, any advances in a module specialized for developing new modules would have an outsized impact.
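
A small example of the efficiency that regularities buy, using sortedness as the regularity (illustrative only): an algorithm that assumes sorted input needs logarithmically many comparisons, while the general-purpose one scans linearly.

```python
# Exploiting a regularity (sortedness) beats the general-purpose approach.
import bisect, random, timeit

data = sorted(random.sample(range(10_000_000), 100_000))
target = data[-1]                     # worst case for the linear scan

general = lambda: target in data      # assumes nothing: O(n)
specialized = lambda: data[bisect.bisect_left(data, target)] == target  # O(log n)

print(timeit.timeit(general, number=100))       # slow: scans the whole list
print(timeit.timeit(specialized, number=100))   # fast: binary search
```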

It is worth clarifying what specialization means in this context, as the term is used in different ways. Bolhuis et al. (2011) argue against functional specialization in nature, citing examples of animals using “domain-general learning rules.” However, Barrett and Kurzban (2006) argue that even general rules operate within a restricted domain: the modus ponens rule in formal logic, for example, operates on the syntactic form of statements regardless of their content. This paper adopts Barrett and Kurzban’s broader understanding, on which what defines a module’s domain is the formal properties of the information being processed and the computational operations performed on it, not the content of that information. It should also be noted that functional modules in humans are not necessarily genetically determined, nor localizable to a distinct part of the brain.

A special case of designing a new mental module is the creation of a new sensory modality, such as vision or hearing. Yudkowsky (2007) discusses the notion of new modalities and considers the detection and identification of invariants to be one of the defining features of a modality. In vision, changes in lighting conditions may entirely change the wavelength of light reflected off a blue object, yet the object is still perceived as blue. The sensory modality of vision is thus concerned with, among other things, extracting the invariants that allow an object to be recognized as having a constant colour even under variable lighting.

Brooks (1987) identifies invisibility as an essential difficulty of software engineering: software cannot be visualized in the way a physical product can, and any visualization can capture only a small portion of the software product. Yudkowsky (2007) proposes a codic cortex for modelling code, analogous to the way the human visual cortex evolved to model the world around us. Whereas the designer of a visual cortex might ask, “What features need to be extracted to perceive both an object illuminated by yellow light and an object illuminated by red light as ‘the colour blue’?”, the designer of a codic cortex might ask, “What features need to be extracted to perceive the recursive algorithm for the Fibonacci sequence and the iterative algorithm for the Fibonacci sequence as ‘the same piece of code’?” New sensory modalities could be created for various domains in which existing human modalities are not the most appropriate.
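
For concreteness, here are the two Fibonacci implementations from Yudkowsky’s example: syntactically very different, yet sharing exactly the invariant (their input-output behaviour) that a codic modality would presumably extract.

```python
# Two very different-looking programs that a codic cortex should perceive
# as "the same piece of code".

def fib_recursive(n):
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The invariant: identical input-output behaviour.
assert all(fib_recursive(n) == fib_iterative(n) for n in range(15))
```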

2.3. Modifiable Motivation Systems

Humans continually suffer from procrastination, lethargy, mental fatigue, and burnout. A mind that did not become bored or exhausted with its work would have a clear advantage over humans. Shulman (2010) notes ways in which uploads could overcome these issues. Uploads could be duplicated while in a rested and motivated state; when a copy started to tire, it could be deleted and replaced with a fresh instance of the “snapshot” taken while rested. Alternatively, their brain state could be edited to remove the neural effects of lethargy and exhaustion. An AGI might simply not have weariness built into it in the first place.
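
A toy sketch of the snapshot-and-replace scheme (purely illustrative; fatigue is modelled as a single number):

```python
# Snapshot-and-replace: work is always done by a fresh copy of a rested
# snapshot, and tired copies are discarded rather than pushed onward.
from copy import deepcopy

rested_snapshot = {"task": "research", "fatigue": 0.0}

def run_shift(snapshot, hours):
    worker = deepcopy(snapshot)       # instantiate a fresh, rested copy
    done = 0
    while worker["fatigue"] < 1.0 and hours > 0:
        done += 1                     # one unit of well-rested work
        worker["fatigue"] += 0.25
        hours -= 1
    return done                       # the tired copy is deleted here

print(run_shift(rested_snapshot, 8))  # 4 units, then a new copy takes over
```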

The ability of a mind to modify its own motivation systems also carries risks. Wireheading is a phenomenon in which a mind self-modifies so that it merely appears, to itself, to be achieving its goals. For example, an upload might attempt to escape the stress of its friends dying by constructing a delusion that they are still alive. Once a mind has been wireheaded, it may no longer desire to repair its damaged state.

Even if wireheading-related problems were avoided, a self-modifying mind still risks making a change that reduces its ability to pursue its original goals. To prevent such outcomes, a mind might attempt to formally verify that proposed modifications do not alter its current goals (Yudkowsky 2008a), or it might create modified copies of itself and subject the copies to an intensive testing regimen.
