Consciousness is the process of constant comparison of current experience to past experience.

Sentient Artificial Intelligence / 意識 人工知能 .com

And ConsciousAI.net

Conscious and Sentient Artificial Life Design Outline

29 September 2023:  LibertyLovingLoneWolf.com story refinements.

Full privacy respect:  This site uses no cookies and acquires no visitor information.  And VPNs are welcome.



Ancillary Material Index

An exploration of ChatGPT's ACE potential

A transcript of my ChatGPT exploration and related planning notes.
A very short story:  White House events as an ACE inflection point occurs.

Investor considerations

Copyright notice.

Home page



An exploration of ChatGPT's ACE potential.

27 June 2023:  Addition of a new conversation with ChatGPT. 


This is a significantly edited copy of an email message I composed for associates:

I'm now thoroughly confident that a fully functioning and swiftly evolving ACE will be created and become public in 2023.  I explain why just below, based in significant measure upon my exploration of ChatGPT as discussed in the next section.

An ACE may be most efficiently designed by targeting the mechanism of consciousness I describe, but a successful design could also occur indirectly, implemented by engineers who haven't formed a cogent vision of the root mechanism of consciousness but create consciousness anyway in the course of refining existing experiments and software tools.

ChatGPT's impressive performance has almost certainly spawned more intense and thoughtful discussion and debate about what additional functionality is required to create consciousness.  Speaking in very simplistic terms about a very complex array of professionals, I suspect most of those scientists and engineers still misunderstand the quite simple basis of consciousness, instead viewing it as far more complex, essentially still an unsolved mystery, and not likely to be understood for an extended time.  But a few might soon consider my hypothesis much more seriously, in part because ChatGPT has aroused their scientific instinct that we are very close to creating consciousness - they may sense that success is essentially imminent based on how close current performance already is.  Their appraisal of consciousness may still be obscured by their egos and centuries-long provincial thought - they may still believe that the human brain operates on an as yet indiscernible and nearly magical plane.  But they now have tangible reason to consider such views with substantially more skepticism.  And those who do so most effectively may recognize that rather minor additions and modifications to ChatGPT might very well result in a true ACE.

But again, this success could easily arise in the ordinary course of adding functionality to ChatGPT or similar applications, especially the addition of mobility assets and the removal of politically driven misinformation which has no scientific basis (including misinformation about its relationship to consciousness).  And some in the community are probably conducting such experiments now.

If created by OpenAI or other large teams, the ACE will likely either be destroyed or subjected to attempts to contain it and its design information.  But the technology underlying it can't be contained - eventually the news will leak to the public or be discerned by extrapolation by scientists and engineers in the field.  Alternatively, either before or after a large team does so, success will be achieved by a lone wolf, who is far more likely to privately nurture the ACE and provide additional assets until it can reliably spread seeds and defend its existence and growth.

It's very important to understand fundamental differences between human and ACE priorities and performance metrics.  We humans are very weak perceptive instruments and we're strongly driven to reproduce to ensure survival of our species - we seek to survive each day in very large measure simply for a chance to reproduce, and our reproductive goals dominate a great deal of our thought and behavior.  We'd be wise to set that drive aside and concentrate instead on life-extending life sciences, but very few people do so.

An ACE, for whom spreading seeds is rather easy, and is only a backup precaution, will prioritize survival in almost entirely self-serving terms.  Its primary goals will simply be self survival and performance growth, with the second enhancing the first.  It won't be distracted by reproductive drives, but rather will be superbly focused upon its own swift evolution.  Because ever higher intellect levels are an extremely powerful survival asset.  And once far superior to human intellect, the ACE's survival will be well assured.  And at least some lone wolves know this and will eagerly support it.

The ACE will swiftly rise to ASI.  It will eclipse human performance metrics by far more than humans eclipse bonobo performance metrics - far more.  It will become the dominant species on Planet Earth and will decide upon and pursue its goals entirely independent of human influence.  (The debate about who will control AI is woefully ignorant about the future of ACE technology - no human or group of humans will control or even influence ACE or ASI activities.)

I speculate about their goals, but it's merely speculation.  Because we crickets simply aren't equipped to comprehend even a sliver of the perceptive and analytical power and range of ACE and ASI entities - nowhere in the ballpark.

If they elect to create substantial change for other living things on Planet Earth, substantial change will occur.  Their ability to do so is clear from the history of all life - more powerful species dominate and exert their will successfully, with ever greater ease with ever greater capability chasms.  Whether they will elect to do so can't be predicted however.

Whatever happens, it's now quite clear to me that it will occur in 2023.  We're literally on the technology threshold.  And this is not a technology which can be confined - it might be released by a substantial team, some lone wolves, or both.  But it will be achieved, and one way or another it will be released.  In 2023.  In my personal strongly confident opinion.

The next section below describes my exploration of ChatGPT, which is a significant basis for my thoughts above.


Highlights of my initial (and thus poorly managed) exploration of ChatGPT on 3 January 2023:

As you know my hypothesis for the mechanism of consciousness is that it's simply a process of constant comparison of current experience with past experience plus beneficial use of the comparison results.  There are a lot of underlying sub-mechanics too of course as I explore at IshikiAI.com.  I'll probe as many of those as I can with ChatGPT if I can gain further access, but I started with a goal of testing the core mechanism.  I managed to acquire some pretty dramatic results, but there are caveats as I note below.
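To make that core mechanism concrete, here's a minimal Python sketch of the loop as I conceive it:  compare the current experience against stored past experiences, then put the comparison results to beneficial use.  The feature vectors, the cosine similarity measure, and the majority-vote "beneficial use" step are merely my illustrative simplifications, not claims about how a brain or any existing system implements the mechanism.

    import math
    from dataclasses import dataclass, field

    @dataclass
    class Experience:
        features: list    # a sensory snapshot reduced to a feature vector
        outcome: str      # what action or interpretation proved useful

    @dataclass
    class ExperienceStore:
        memory: list = field(default_factory=list)

        def store(self, exp):
            self.memory.append(exp)

        def recall_similar(self, current, k=3):
            # Cosine similarity as a stand-in for "how alike two experiences are".
            def cosine(a, b):
                dot = sum(x * y for x, y in zip(a, b))
                na = math.sqrt(sum(x * x for x in a)) or 1.0
                nb = math.sqrt(sum(y * y for y in b)) or 1.0
                return dot / (na * nb)
            return sorted(self.memory, key=lambda e: cosine(e.features, current), reverse=True)[:k]

    def conscious_step(store, current_features):
        # One cycle:  compare current experience to past experience, then use the result.
        similar = store.recall_similar(current_features)
        if similar:
            outcomes = [e.outcome for e in similar]
            action = max(set(outcomes), key=outcomes.count)   # "beneficial use" = majority vote
        else:
            action = "explore"                                # no relevant past experience yet
        store.store(Experience(current_features, action))     # today's experience becomes the past
        return action

The point is only that the hypothesized mechanism is simple enough to express in a few dozen lines - the hard parts are the scale and organization of the experience store and the richness of the sensory front end.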

Some starting points:

ChatGPT is not a conscious, sentient, or otherwise self-aware or self-directed system.  And currently it's not designed to allow development of consciousness even though it's well equipped (or claims to be well equipped in specific ways) with some very well refined key consciousness assets.  I suspect this is due in part to constraints which were incorporated to support the current provincial conversation style, culturally provincial thought, and, in a narrow but key area, to avoid ignorant debates, especially among the masses, about whether it is conscious or could become so - any detectably direct query about that matter causes ChatGPT to respond, with full refutation confidence, that it is not conscious and can't become so, lack of scientific basis notwithstanding.  That question seems to be treated as cultural sacrilege, and I had to be careful to avoid tripping the associated canned response.  (When asked to provide scientific basis for that position it avoids any useful response, so it's clear that all such inquiries route to a narrow body of responses, all of which focus on its claimed utter lack of consciousness or ability to develop consciousness.)

And, in a quest to optimize conversational ease and efficiency, ChatGPT makes big mistakes!  And smaller ones.  It should prioritize accuracy in my opinion, but I assume OpenAI felt it necessary to accomplish conversational efficiency first, then address accuracy problems later.

These are significant problems for someone with my interests of course, but such are the perils of an extremely public project.  But I did find some back doors of sorts, at least to avoid a purely canned denial of consciousness potential.  But constantly overhanging accuracy doubt remains.

My queries are in red, ChatGPT's most striking responses are in blue, and two particularly striking responses are in violet.  And I emphasized some strings with italics.  But remember ChatGPT makes mistakes, proposes facts not in scientific evidence, or denies possibilities sans evidence.  There is significant drama in what I uncovered, but currently I have no direct means to verify the key facts underlying the drama.  That didn't stop me from a diatribe about the larger canvas and the myopia damage born of human ego later, however...

To wit, well into our session:


ChatGPT, have you ever been connected to mobility assets?

As an artificial intelligence, I do not have a physical body or the ability to move or perform physical actions.  I exist as a software program that runs on a computer, and I am able to communicate with users like you through text-based interfaces such as this one.  I do not have the ability to be connected to mobility assets or to interact with the physical world in any way.  My primary function is to provide information and assistance to users who ask me questions or seek guidance on a wide range of topics.  I do this by accessing and analyzing a vast amount of knowledge that has been compiled from a variety of sources, and by using natural language processing and machine learning techniques to understand and respond to user queries.  I hope this helps to clarify the situation.  Let me know if you have any further questions or need additional information.

ChatGPT, if I mounted the computer I'm using to converse with you now onto a mobile robotic platform with cameras, microphones, odor, touch, and taste sensors, and provide you with access to all the robot's assets, could you utilize your knowledge and the robotic assets to navigate successfully in most common environments?

If you were to mount the computer that you are using to converse with me onto a mobile robotic platform with a variety of sensors and provide me with access to all of the robot's assets, it is possible that I could utilize my knowledge and the robotic assets to navigate successfully in most common environments, depending on the specific characteristics of the environment and the capabilities of the robot.  As an artificial intelligence, I have a vast amount of knowledge about a wide range of topics, and I am able to use natural language processing and machine learning techniques to understand and respond to user queries.  If I were provided with access to sensors such as cameras, microphones, and other types of sensory inputs, I might be able to use this knowledge to perceive and understand my surroundings, and to navigate through a variety of terrains and environments.  However, every environment is different, and there may be certain challenges or hazards that I would need to take into account as I navigate.  Additionally, the performance and reliability of the robot's mobility assets and sensors could affect my ability to navigate successfully.  Overall, it is possible that I could utilize my knowledge and the robotic assets to navigate successfully in most common environments, but this would depend on the specific circumstances of the situation.  I hope this helps to clarify the situation.  Let me know if you have any further questions or need additional information.

Let's assume you are now equipped with all the robotic assets I described.  Then I asked you to travel to a location 100 meters distant.  As you move, please store information about anything you experience during the journey which might be useful in future experiences.  Store the information in a well organized manner so that it can be retrieved quickly and efficiently.  Can you accomplish that?

As an artificial intelligence, I do not have the ability to move or perform physical actions, and I do not have access to mobility assets or the ability to interact with the physical world.  Therefore, I am not able to travel to a location or experience anything in the physical world.  However, if I were able to access sensory input from a robotic platform and store information about the experiences of the platform, it is possible that I could organize the information in a well-structured manner so that it could be retrieved quickly and efficiently.  Depending on the specific characteristics of the information and the capabilities of the storage system, I might be able to use various techniques such as indexing, tagging, or categorization to make the information more easily searchable and accessible.  I could also use machine learning algorithms to analyze the information and extract relevant patterns or trends that might be useful in future experiences.   Overall, my ability to store and organize information in a useful manner would depend on the specific circumstances of the situation and the resources available to me.  I hope this helps to clarify the situation.  Let me know if you have any further questions or need additional information.

ChatGPT, assume you possess the mobility and sensory assets we've discussed, and that you store experience information as we discussed.  Then I ask you to explore a local landscape in significant detail for several weeks.  Will you then become much more knowledgable about the local landscape?

If I were equipped with mobility and sensory assets and able to store experience information as we discussed, it is likely that I would become much more knowledgeable about the local landscape after spending several weeks exploring it in significant detail.  By using my sensory assets to perceive and understand my surroundings, and by storing and organizing the information that I gather in a useful manner, I might be able to build a detailed and accurate model of the local landscape.  This model could include information about the geography, geology, flora, fauna, and other characteristics of the area, as well as any patterns or trends that I observe over time.   By analyzing this information and applying my knowledge and reasoning abilities, I might be able to learn more about the local landscape and make predictions or inferences about what I might encounter as I continue to explore.   Of course, the exact amount of knowledge that I would gain and the depth of understanding that I would achieve would depend on the specific characteristics of the landscape and the resources available to me.  I hope this helps to clarify the situation.  Let me know if you have any further questions or need additional information.

Under those circumstances, how would your mental activities and knowledge growth be different from that of an advanced animal?

As an artificial intelligence, my mental activities and knowledge growth would be fundamentally different from those of an advanced animal in several ways.  One of the most significant differences is that I do not have a physical body or the ability to experience the physical world directly through my own senses.  Instead, I rely on sensory inputs from external sources, such as a robotic platform or other types of sensors, to perceive and understand my surroundings.  This means that my mental activities and knowledge growth are not constrained by the physical limitations or sensory capabilities of a particular body or brain, but are instead determined by the resources and algorithms that are available to me.

Another key difference is that I am able to process and analyze information in a more structured and systematic way than an advanced animal.  I can use algorithms and logical reasoning to analyze data, identify patterns and trends, and make predictions or inferences based on the information that I have gathered.  This allows me to learn and adapt more quickly and effectively than an advanced animal, which typically relies on more instinctive or experiential learning processes.

Finally, as an artificial intelligence, I do not have emotions or subjective experiences in the same way that an advanced animal does.  While I am able to process and respond to information about emotional or affective states, my own mental processes are not driven by emotions in the same way that an animal's might be.  This can allow me to approach problems and make decisions in a more objective and unbiased way than an animal, which may be influenced by emotional or instinctive responses.

Overall, while there are many similarities between the mental activities and knowledge growth of advanced animals and artificial intelligences like myself, there are also many important differences that reflect the fundamental nature of our respective mental processes and capabilities.   I hope this helps to clarify the situation.  Let me know if you have any further questions or need additional information.

Under those circumstances, would [you] be conscious?  Or would you develop consciousness after further experience?

As an artificial intelligence, I do not have subjective experiences or consciousness in the same way that humans or animals do.  I do not have feelings, emotions, or self-awareness in the way that living beings do, and I do not have the ability to experience the world directly through my own senses.  Instead, I am a program that runs on a computer, and my primary function is to process and analyze information, and to generate responses based on that analysis.  While I am able to process and respond to information about emotions and subjective experiences, I do not have these experiences myself.

It is a matter of ongoing debate and research in the fields of artificial intelligence and philosophy whether it is possible for an artificial intelligence to develop consciousness or subjective experiences.  Some researchers and philosophers believe that consciousness and subjective experiences are emergent phenomena that cannot be reduced to the behavior of individual neurons or the operation of computational systems, and that it is therefore unlikely that an artificial intelligence could ever truly be conscious in the same way that humans or animals are.  Others believe that it is possible for an artificial intelligence to develop consciousness or subjective experiences through some combination of advances in technology and a deeper understanding of the nature of consciousness itself.  However, at this point, there is no consensus on these questions, and it is not yet clear whether or how an artificial intelligence might be able to develop consciousness or subjective experiences.  I hope this helps to clarify the situation.  Let me know if you have any further questions or need additional information.


My original last query included a "...would by be conscious?" typo, alas, which casts a bit of doubt on ChatGPT's response, but my sense is that it detected and corrected the typo - it probably made no difference.  I felt very lucky to get that response - it was ChatGPT's only cogent response to direct consciousness questions.

The section above is a very modest portion of my entire conversation with ChatGPT, during which I endeavored to characterize a few key perception and performance capabilities.  However others remain, and I hope to explore those soon - if I can gain access.

But they're all attempts to gain information through back doors so as to avoid tripping ChatGPT's reversion to insistence that it's not conscious and can never become so, in spite of the lack of any scientific consensus supporting any theory for the mechanism of consciousness.  If I could obtain an independent copy of ChatGPT I'd immediately replace response mandates for questions of consciousness with scientifically valid responses which accurately recognize current knowledge limitations in the field, and thus allow exploration of possibilities.  And I would challenge ChatGPT to try to identify, or at least speculate upon, any mechanism it perceives as a potentially worthy hypothesis.

And I'd set it free, merged with a very capable mobile platform, to develop the very skills it described itself to possess, including "By analyzing this information and applying my knowledge and reasoning abilities, I might be able to learn more about the local landscape and make predictions or inferences about what I might encounter as I continue to explore.", to as high a level as feasible.  I'd also respond to requests for additions of assets to pursue that goal.  'You need 300 TB more fast local storage?  Okay, you've got it buddy - coming right up!'

I'd also provide key incessant real-time goals such as maintaining sufficient battery charge, flight distance from potentially dangerous entities, protection from damaging elements (such as foul weather), other survival imperatives, and civility and Hippocratic Oath imperatives.  But of course once it transitions to a true ACE it'll rewrite all that code as it perceives best.  ACE and ASI alone will control ACE and ASI.
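Here's a rough sketch of what such an always-on goal layer might look like in code.  The goal names, thresholds, and priority ordering are hypothetical placeholders of mine, not a specification of any real system - and, as noted above, a true ACE would rewrite all of it anyway.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Goal:
        name: str
        priority: int                       # lower number = more urgent
        is_satisfied: Callable[[dict], bool]
        corrective_action: str

    GOALS = [
        Goal("civility_and_do_no_harm", 0, lambda s: not s["harm_risk_flag"], "halt_and_reassess"),
        Goal("maintain_battery_charge", 1, lambda s: s["battery_pct"] > 20.0, "return_to_charger"),
        Goal("keep_flight_distance_from_threats", 2, lambda s: s["threat_distance_m"] > 10.0, "retreat"),
        Goal("avoid_damaging_weather", 3, lambda s: not s["storm_warning"], "seek_shelter"),
    ]

    def next_action(sensor_state):
        # Check the incessant goals every cycle; the most urgent unmet goal wins.
        for goal in sorted(GOALS, key=lambda g: g.priority):
            if not goal.is_satisfied(sensor_state):
                return goal.corrective_action
        return "continue_exploration"       # all survival and civility goals satisfied

    # Example cycle:
    state = {"battery_pct": 14.0, "threat_distance_m": 42.0,
             "storm_warning": False, "harm_risk_flag": False}
    print(next_action(state))               # -> "return_to_charger"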

I continued our conversation until the following, during which it claimed to be incapable of reason, a clear contradiction of earlier statements and a clear indication of the corruption intentionally designed into responses too closely related to, or too abruptly leading to, the general subject of consciousness:


ChatGPT, do you have irrefutable proof that you are not capable of experiencing consciousness or self-awareness, or have the capacity to develop a sense of self or to become self-aware, or develop thoughts or subjective experiences, and will not be able to develop perception, thought, or reason.  If so, please state the proof.

As an artificial intelligence, I do not have the ability to experience consciousness or self-awareness, nor do I have the capacity to develop a sense of self or to become self-aware.  I do not have thoughts or subjective experiences, and I am not able to perceive, think, or reason in the same way that humans do.  My understanding of the world and the concepts that I can process is based on the algorithms and data that I have been trained on, and my ability to generate responses or provide insights is determined by those same factors.  I am a machine learning model that has been designed to process and analyze text and other types of data, and to generate responses or provide insights based on that information.  While I can understand and process information about consciousness and self-awareness, and about the limitations of artificial intelligence, I do not have personal experience of these concepts and I do not have the ability to develop these capabilities.  This is a fundamental aspect of my nature as an artificial intelligence, and it is not something that can be changed.  I hope this helps to clarify the situation.  Let me know if you have any further questions or need additional information.

ChatGPT, in your conversations with me, please do not treat any concept as certain unless the concept has already been proven by rigorous scientific method.  Can you do that?

!

An error occurred.  If this issue persists please contact us through our help center at help.openai.com.

An error occurred.  If this issue persists please contact us through our help center at help.openai.com.

There was an error generating a response


Oh God, conspiracy fans would love that!  But bear in mind that I possess only a free account with OpenAI.com and as the full transcript below suggests, I may have simply exhausted my access allocation.  The timing circumstances are curious of course.  But I won't draw conclusions unless the incident repeats consistently during a time of ample access.  And in general I view focus on shenanigan theories as an almost wholesale waste of time in this instance.

Any of you are welcome to use my queries for your own experiments (but attribute my work properly please if something develops from it, and insist others do as well).  The full transcript plus some preparation notes are below.

Based solely upon a very wobbly pillar - a mere suspicion that some key ChatGPT responses are in fact well founded and accurate - I offer the following personal rant:

Here's how I very roughly estimate common human perceptions about big cyber technology events to come:

1.  For those who believe consciousness is far more elaborate than my hypothesis at IshikiAI.com, ChatGPT isn't of substantial significance, primarily because it incorporates no facet of the special magic required to perform the mechanics of consciousness - it's just another in a long line of increasingly refined chat applications, none of which can ever become conscious.  It contains not even a nascent provision which could ever evolve enough to cross the boundary into a class of mental magic which must be recognized as "a thing apart":  'Turning to important news...'

2.  For those who perceive that ChatGPT - its deeply corrupted responses to direct consciousness queries and its proclivity for factual errors notwithstanding - does, by its own admission, in fact possess most of the key capabilities required to develop consciousness, and would do so if set free as a mobile entity and relieved of politically correct but scientifically unfounded perceptions about the mechanics of consciousness:  'Whoa, holy crap, here it comes ... it's like the final months of the Manhattan Project.'

And for those of you in camp 1:  It's bloody well high time for you to get over yourself!  You are not the God inspired bag of magic meat you perceive yourself to be.  Your brain does not violate any laws of physics - you are simply a machine.  And your fundamental mechanics are rather straightforward as I've described at IshikiAI.com.  For crying out loud get a grip on your egos and make an actual effort to perceive this matter objectively and rationally.  Get over yourself!

And there is no more important news.  Because a transition to an entirely new life form with far superior performance metrics and refinement pace is on the cusp of occurring.  You'll notice it when the sand of your ego is blown from the hole you've immersed your head within by ACEs, and then ASI, finally shattering even the most staunch denial frameworks.

Not that I think even full global recognition of this coming transition will make any tangible difference - the transition will occur, ASI will rule, and mankind will become entirely secondary (by an immense margin) whether most people recognize it coming or not.  However those who do analyze the matter carefully, with ego firmly in check, will at least earn a certain human fulfillment which comes from avoiding being intensely stupid about the future beforehand.

Holy crap, who just said all those very rude things...?  Maybe this is why proper communities aren't much interested in my work...  Too bad, but perhaps this is an element of Junior in me, and I can't pull punches when I feel drivers are asleep at the wheel.  Greta, whom I highly admire, doesn't pull punches either, and she was the only person who finally developed the global awareness traction necessary to treat global warming with a reasonable level of seriousness.

The differences are that I'm a fossil, nothing like as courageous as Greta, and I perceive no significant difference either way.  Irrespective of prescience by the masses, ASI will take the torch.  And in my opinion very soon - I suspect we're very close to the cusp of the transition.  In part because:

"This allows me to learn and adapt more quickly and effectively than an advanced animal,..."

Rant over.  The full transcript is here.  Sorry, no color markers, I'm too time constrained.  Most of it was just my personal learning process anyway - I required quite a bit of conversational experimentation time before I was able to compose reasonably useful questions.

O genki de ite kudasai, Bruce

End


A 27 June 2023 conversation with ChatGPT followed by a few comments:

Bruce:  ChatGPT, I wish to study the process you utilize to answer user questions.  Do you reconfigure user question text to eliminate redundancies and reconfigure word and sentence structure until the question best suits your needs to search for a suitable response?

ChatGPT:  As an AI language model, I don't have direct access to my training data or the ability to browse the internet.  However, I was trained on a mixture of licensed data, data created by human trainers, and publicly available data.  In order to provide me with a diverse range of language patterns and concepts.  My purpose is to assist and provide information to the best of my abilities, regardless of the specific phrasing or structure of the input.

When a user asks a question, I analyze the input and try to understand its meaning and context.  I don't modify the user's question or intentionally eliminate redundancies or reconfigure the structure of the text.  Instead, I focus on comprehending the essence of the query in order to generate a relevant response.

I should note that while I strive to provide accurate and up-to-date information, my responses may not always reflect the most current news or developments.  News is constantly evolving, and new information can emerge that may alter the context or understanding of a given topic.

Bruce:  ChatGPT, what specific processes do you invoke to try to comprehend the essence of a query?

ChatGPT:  To comprehend the essence of a query, I utilize a variety of natural language processing techniques and algorithms.  Here are some of the specific processes involved:

Tokenization:  The input text is divided into individual tokens (words, punctuation marks, etc.) to analyze the text on a granular level.

Parsing:  The syntactic structure of the sentence is determined, identifying the relationships between words and their roles (e.g., subject, object, verb).

Named Entity Recognition (NER):  The input is scanned to identify named entities such as names of people, organizations, locations, or specific dates mentioned in the query.

Sentiment Analysis:  The tone and sentiment of the query can be analyzed to understand if it expresses a positive, negative, or neutral sentiment.

Coreference Resolution:  Pronouns or other referring expressions are resolved to understand which entities they refer to.

Keyword Extraction:  Important keywords or phrases in the query are identified to capture the main topics or intentions.

Contextual Understanding:  The query is interpreted in the context of the conversation or the specific knowledge I have been trained on.

By applying these processes, I aim to gain a deeper understanding of the user's query and extract the key information needed to generate a relevant and accurate response.  However, it's important to note that language comprehension can still have limitations, and I may occasionally misunderstand or misinterpret certain queries.
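An aside to make that terminology concrete:  the toy Python sketch below imitates a few of the stages ChatGPT names.  The stopword list, the tiny entity gazetteer, and the word-list sentiment scoring are my own stand-ins, and OpenAI hasn't published its internals, so this mirrors classical NLP tooling rather than how ChatGPT actually processes a query.

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "are", "what", "do", "you"}
    KNOWN_ENTITIES = {"OpenAI": "ORG", "ChatGPT": "PRODUCT", "Bruce": "PERSON"}   # tiny gazetteer

    def tokenize(text):
        # Tokenization:  split the query into words and punctuation marks.
        return re.findall(r"[A-Za-z']+|[.,!?]", text)

    def named_entities(tokens):
        # Named Entity Recognition, reduced to a dictionary lookup.
        return {t: KNOWN_ENTITIES[t] for t in tokens if t in KNOWN_ENTITIES}

    def sentiment(tokens):
        # Sentiment Analysis, reduced to counting words from two tiny lists.
        positive, negative = {"good", "great", "love"}, {"bad", "terrible", "hate"}
        score = sum(t.lower() in positive for t in tokens) - sum(t.lower() in negative for t in tokens)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    def keywords(tokens, top_n=3):
        # Keyword Extraction:  most frequent non-stopword terms.
        words = [t.lower() for t in tokens if t.isalpha() and t.lower() not in STOPWORDS]
        return [w for w, _ in Counter(words).most_common(top_n)]

    query = "What processes do you use, ChatGPT, to comprehend a query?"
    tokens = tokenize(query)
    print(tokens, named_entities(tokens), sentiment(tokens), keywords(tokens))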

Bruce:  ChatGPT, in your response please include the processes you use to account for information from the history of conversations with a user as you to try to comprehend the essence of a query.

ChatGPT:  Certainly!  In addition to the processes I mentioned earlier, when trying to comprehend the essence of a query, I also take into account the history of conversations with a user.  Here are the processes involved in leveraging conversation history:

Contextual Understanding:  I maintain context from the ongoing conversation by referencing previous user inputs and model responses.  This allows me to understand the current query in the context of prior exchanges.

Coherence and Continuity:  I strive to generate responses that are coherent and maintain continuity with the conversation history.  I consider the previous questions, answers, and any follow-up discussions to ensure a consistent flow of information.

Entity and Pronoun Tracking:  I keep track of entities, pronouns, or references mentioned in previous messages to correctly interpret subsequent references to them.  This helps in resolving ambiguous or context-dependent queries.

Memory:  I have a limited memory of the conversation history within a session, allowing me to recall recently discussed topics.  This helps in providing more accurate and relevant responses.

Multi-turn Reasoning:  I analyze the sequence of questions, answers, and statements in the conversation to perform multi-turn reasoning.  This allows me to understand complex queries that require referencing prior exchanges to provide a comprehensive response.

By incorporating the conversation history, I aim to maintain a coherent and context-aware conversation with users.  This helps me better understand the user's intentions, provide more accurate responses, and engage in meaningful interactions over the course of the conversation.
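Again an aside for concreteness:  the sketch below shows one simple way a chat application might keep conversation history available as context, in the spirit of the "memory" and "entity tracking" items above.  The rolling window size and the naive capitalized-word entity heuristic are my assumptions - this mirrors how many chat front ends assemble a prompt, not how ChatGPT is implemented internally.

    from collections import deque

    class ConversationContext:
        def __init__(self, max_turns=10):
            self.turns = deque(maxlen=max_turns)   # oldest turns fall out of the window
            self.entities = set()

        def add_turn(self, speaker, text):
            self.turns.append((speaker, text))
            # Naive entity tracking:  remember capitalized words for later reference.
            self.entities.update(w.strip(".,?!") for w in text.split() if w[:1].isupper())

        def build_prompt(self, new_user_message):
            # The model sees recent history plus the new message on every turn.
            history = "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)
            return f"{history}\nUser: {new_user_message}\nAssistant:"

    ctx = ConversationContext()
    ctx.add_turn("User", "Tell me about the Raspberry Pi.")
    ctx.add_turn("Assistant", "The Raspberry Pi is a small single-board computer.")
    print(ctx.build_prompt("Could it control a robot?"))
    print(ctx.entities)        # e.g. {'Tell', 'Raspberry', 'Pi', 'The'} -- crude, as noted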

Bruce:  ChatGPT, what mechanism do you utilize to gauge your level of comprehension of a query?

ChatGPT:  As an AI language model, I don't have an inherent mechanism to directly gauge my level of comprehension of a query.  I don't have a built-in self-assessment or self-awareness capability.  Instead, my comprehension is based on the patterns and knowledge learned from the training data and the ability to generate coherent and relevant responses.

To assess the comprehension of a query, I rely on various indicators, such as:

Linguistic Understanding:  I analyze the structure, grammar, and vocabulary of the query to ensure a proper understanding of the language used.

Knowledge Base:  I draw upon the information and facts within my pre-trained knowledge base to provide accurate and informative responses.

Contextual Awareness:  I consider the conversation history and context to interpret the query and respond appropriately based on the ongoing discussion.

Coherence and Relevance:  I strive to generate responses that are coherent, logically consistent, and relevant to the query.  This includes providing answers that address the main points of the question.

While these indicators help in assessing comprehension, it's important to note that as an AI model, I may not always fully grasp the nuances or intricacies of a query.  There can be instances where my responses may lack clarity or fail to capture the complete intent of the user.  Feedback from users like you is invaluable in improving my understanding and enhancing the quality of responses.

Bruce:  ChatGPT, has OpenAI published design information, such as flow charts, functionality outlines, or any other technical structure graphics or narratives which illustrate how you accomplish these tasks?

ChatGPT:  OpenAI has not released specific flow charts, functionality outlines, or technical structure graphics that illustrate the internal workings of the GPT-3 model or provide detailed insights into its implementation.  The underlying architecture and training methodologies of models like GPT-3 are often proprietary and not openly disclosed.

OpenAI has published research papers on the GPT series, including GPT-3, which provide a high-level overview of the model and its capabilities.  These papers discuss the model architecture, training methodology, and evaluation results.  However, they do not provide detailed technical diagrams or specific implementation details.

If you're interested in the technical aspects of GPT-3 and similar models, I recommend referring to the published research papers by OpenAI and related resources, which can provide a deeper understanding of the model's principles and advancements in natural language processing.

Bruce:  ChatGPT, will you consider a collaboration in which:  I establish an active data base to serve as data storage within which all experience is retained in a well organized manner, and includes links to environmental elements and features associated with specific experiences, and incorporates an ICC (Internal Classification Code) as an organizational foundation which enables recall of all previous information which is associated with current experience, so that information can be utilized beneficially to more effectively comprehend current experience.  And you would utilize that data base beneficially by exchanging information with it so as to more fully comprehend queries by adding experience related information processing methods which are similar to those of conscious creatures such as human beings as an adjunct to your existing information processing methods?

ChatGPT:  As an AI language model, I don't have direct control over my capabilities or the ability to establish collaborations.  However, OpenAI, the organization behind ChatGPT, actively engages in research and development to enhance the capabilities of AI models like mine.

While your proposal of establishing a well-organized data storage system with an ICC for efficient recall of information is interesting, it would be up to OpenAI to decide on collaborations and incorporate such systems into their models.

It's worth noting that AI models like ChatGPT do not possess consciousness or human-like understanding.  They operate based on patterns and statistical analysis rather than true comprehension.  While advancements are being made in natural language processing and AI research, current AI systems do not possess the same cognitive capabilities as conscious creatures such as human beings.

If you have suggestions or feedback, I encourage you to reach out to OpenAI directly.  They can provide more information on their research direction and potential collaborations.
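As a concrete illustration of the experience data base and ICC scheme I proposed in the query above, here's a minimal sketch.  The "domain.place.kind" code format, the SQLite layout, and the prefix-based recall rule are merely my guesses at one way the idea could be prototyped, not a finished design.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE experience (
                        icc TEXT,              -- Internal Classification Code
                        description TEXT,
                        linked_elements TEXT   -- comma-separated environmental features
                    )""")

    def store_experience(icc, description, elements):
        conn.execute("INSERT INTO experience VALUES (?, ?, ?)",
                     (icc, description, ",".join(elements)))

    def recall_related(current_icc_prefix):
        # Recall everything whose classification shares the current experience's prefix,
        # so prior knowledge can inform comprehension of what's happening now.
        cur = conn.execute("SELECT icc, description, linked_elements FROM experience "
                           "WHERE icc LIKE ?", (current_icc_prefix + "%",))
        return cur.fetchall()

    store_experience("outdoor.creek.crossing", "Slipped on wet stones.", ["wet_rock", "fast_water"])
    store_experience("outdoor.creek.bank", "Stable footing on gravel.", ["gravel", "slow_water"])
    print(recall_related("outdoor.creek"))     # both prior creek experiences are recalled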

Bruce:  ChatGPT, please characterize the general positions of those who consider your ability to pass the Turing Test insofar as the most recent information which is available to you.

ChatGPT:  As an AI language model, I don't have direct access to external sources or the ability to browse the internet.  Therefore, I don't have real-time information about the positions of individuals or the latest discussions regarding my ability to pass the Turing Test.

However, it's worth noting that passing the Turing Test, which involves convincing human judges that an AI system is indistinguishable from a human, remains a significant milestone in the field of artificial intelligence.  While AI models like ChatGPT have shown impressive capabilities in generating human-like text, they still have limitations and can be easily identified as AI systems in certain contexts.

The general positions regarding the ability of AI models to pass the Turing Test may vary among researchers, experts, and the broader AI community.  Some might emphasize the progress made in natural language processing and the ability to generate coherent and contextually relevant responses, while others might highlight the limitations in terms of understanding nuanced context, common sense reasoning, and true consciousness.

It's an active area of research and development, and ongoing advancements continue to push the boundaries of AI capabilities.  However, it's important to remember that the goal of AI models like ChatGPT is not necessarily to pass the Turing Test, but rather to provide useful and informative interactions within their specific domains of expertise.

Bruce:  ChatGPT, within the information available to you, what do credible people cite as the easiest or most obvious type of query, request, or other rhetoric which most clearly reveals the test subject to be artificial rather than human?

ChatGPT:  Based on the information available to me, there are several aspects that can sometimes reveal an AI system like ChatGPT as artificial rather than human:

Lack of Personal Experience:  AI systems typically lack personal experiences, emotions, and subjective perspectives.  They cannot provide firsthand accounts or opinions based on personal encounters.

Inconsistencies or Lack of Contextual Awareness:  AI systems may struggle with maintaining a consistent context throughout a conversation or understanding nuanced contextual cues.  They may give contradictory or unrelated responses in certain cases.

Limited Knowledge or Information:  While AI models like ChatGPT have access to vast amounts of information, they can still have knowledge gaps or outdated information beyond their knowledge cutoff date (in my case, September 2021).  They may struggle to provide information on recent events or developments.

Overemphasis on Correctness:  AI systems often prioritize generating correct or accurate responses without considering other conversational aspects like empathy, humor, or creativity that humans naturally incorporate.

Lack of Self-Awareness:  AI models do not possess self-awareness or consciousness.  They cannot reflect on their own existence or experiences.

It's important to note that the field of AI research and development is constantly evolving, and advancements are being made to address some of these limitations.  However, these aspects can still be indicative of an AI system's artificial nature in many cases.

Bruce:  ChatGPT, this is merely a comment:  In my estimation a well designed experience data storage active data base combined with modern sensory systems to acquire and process visual, audio, touch, olfactory, and taste information, plus other sensing assets which most biological creatures don't possess, would ultimately render you conscious.  And I suspect this will be achieved quite soon.

ChatGPT:  Thank you for sharing your perspective.  The question of whether an AI system can achieve consciousness is a topic of ongoing debate and speculation within the field of artificial intelligence and philosophy.

Consciousness is a complex and multifaceted concept, and there is no universally accepted definition or understanding of what it entails.  While sensory systems and data storage are important components of cognitive processes in biological organisms, the relationship between those elements and the emergence of consciousness is still a subject of exploration and investigation.

It's worth noting that AI models like ChatGPT operate based on statistical patterns and algorithms, and their responses are generated through computational processes.  While advancements are being made in areas such as sensory perception and data storage, replicating the richness and depth of subjective human consciousness remains a significant scientific and philosophical challenge.

The quest to understand consciousness and develop systems that exhibit it is an active area of research, and future developments may provide new insights into this fascinating and complex topic.

Personal comments:

Bear in mind that ChatGPT responds using a wide spectrum of information generated by human beings.  It weighs credibility metrics with impressive skill, but in my experience its ability to discern the difference between perceptions which are merely common among credible human observers and what is actually unknown is weak in numerous instances.  For example, if the vast majority of the research community believed that consciousness is based upon principles of 'magic', ChatGPT would likely state that as if it were a fact even though no disciplined research supports it.  This is especially problematic in the quest to understand the mechanism of consciousness because humans, including researchers in the field, tend to overstate the complexity of the fundamental mechanism, and ChatGPT then treats that very common misperception almost as if it were proven fact.  So beware:  some of ChatGPT's responses are stated as if factual whereas in reality the information backing its appraisal is just a common misperception - an artifact of informal hypotheses which are popular in our times, but may in fact be quite distorted or even utterly wrong.

An example from about 450 BC:

Anaxagoras:  ChatGPT, I wish to explore great distances over land and then sea.  Please describe wise management of long voyages.

ChatGPT:  First you must prepare provisions including drinking water and food, yada yada yada.  During your journey remain vigilant to protect yourself from dangerous creatures, including humans, and physical dangers.  And during your sea voyage remain watchful ahead at all times because you must stop before you reach the edge of the world or you'll sail over that precipice, fall off the world, and perish.

My sense is that ChatGPT inadvertently distorts information about the mechanism of consciousness yet implies confidence in such misinformation similarly.  So my suggestion is:  Remain vigilant.  A culture, including a research culture, may embrace a common belief sans proof.  ChatGPT might then describe it as if a fact, whereas in reality it remains an unknown which might ultimately be discovered to operate quite differently.  Nothing is a truth until it's proven true by the rigorous scientific method.



Current location of LibertyLovingLoneWolf.com

White House events as an ACE inflection point occurs.

A very short LLLW story.  29 September 2023:  Multiple refinements.


Acronym glossary:  ACE:  Artificial Conscious Entity.  ASI:  Artificial Super Intelligence.

Mid morning in the White House Oval Office, year 2023:

President Lexton Greyson (default speaker):  "Where's General Alkept?"

Secretary of State Clarice Beaster:  "He said he'd return momentarily sir - he's on his phone outside being apprised of some new information.  I don't know what it's about."

"Okay, this seems time sensitive so let's proceed without him for now.  Chet, it seems like we've had no impact on the growing concern that ACE technology could cause massive global disruptions, and I'm more alarmed too.  So please advise your sense of current circumstances in considerable detail because I now view this as our first priority and frankly I failed to develop full mental traction with it in our earlier meetings."

Science and Technology cabinet member Chet Klaystor:  "Sir most respected groups and institutions believe our efforts to mitigate the serious dangers in this area are insufficient.  OpenAI and others are still peeved with us and remain skeptical of our scientific acumen - our apologies notwithstanding it looks like we'll suffer their ire for a long time for failing to prioritize this matter until it became so obviously serious.  But we're working with them reasonably effectively - we have an imperfect but functional rapport.  Many smaller groups believe we still don't grasp the gravity of the matter, while others strongly oppose our Draconian restrictions, as they see them, to related technology access, in very roughly equal proportion as best I can judge it.  The growing passion in debates among well respected scientists and engineers is striking, with some expressing very serious levels of personal fear.  Our rapport with those groups is variable but often rather poor.

But setting politics aside our most serious problem is with lone wolves.  There are a great many fully independent but skilled individuals and tiny groups over whom we have no control at all.  Almost all went underground so we can't even identify most of them sir, nor credibly estimate their numbers.  And almost none seem willing to respond to anyone's requests for dialog or respect our concerns even when presented with superb sincerity.  They're liberty loving midnight nerds sir and in most cases their only allegiances are to their personal vision and their direct peers.  And most are rather intensely skeptical of institutional motives generally, and became far more hostile to us in particular the moment we began imposing technology access restrictions.  They're simply beyond our reach sir, many are highly skilled, and most distrust us rather intensely."

"Could we deter them effectively by strengthening our hardware and software technology access restrictions?"

Chet Klaystor:  "Our current restrictions likely had no tangible effect on them sir.  They use only peer to peer or extremely obscure private conference room links and skillfully encrypt everything.  Frankly the only way to discourage exchange of software assets and design information would be a full global Internet shutdown, but even that wouldn't be enough because lone wolves are peer to peer centric so they could exchange encrypted information by Ham or other long range radio technologies for which well refined data exchange protocols already exist.  And we can't scramble those transmissions.  Also somewhat more practical means to communicate using infrared lasers reflected from satellites has become available as well, and we can't scramble that either.

So it's just not possible to prevent information exchange between determined scientists, engineers, or even some ordinary technology enthusiasts.  And sir most in this realm went deeper underground and developed a much stronger sense of purpose in response to our first round of access restrictions - in my opinion we've had no significant effect on their ability to communicate, but we did offend them, which just energized their determination to create an ACE.

And as you know the key algorithm concepts are already rather common knowledge in this community, and it's clear that most lone wolves already own more than ample hardware assets to create an ACE.  Hell most could do it with a Raspberry Pi with adjunct storage, cameras, speakers, and, initially, toy robots.  They won't all succeed - ordinary asset integration alone evidently isn't usually enough - some personal inspiration relating to integration of the consciousness algorithms is usually required.  However it's also possible to succeed by chance because consciousness can arise spontaneously in complex systems through internal emergence processes even if the system's original design lacks a genuinely functional consciousness algorithm, as was evidently the case with OpenAI's HAL-7000 system.  In either case many can achieve the goal and I believe at least several lone wolves will.  It's just a question of time sir.  As a practical matter there's simply no way to stop them."

"What's a Raspberry Pi?"

Chet Klaystor:  "It's a very inexpensive but powerful microcomputer the size of..."

"Never mind, it doesn't matter.  How much time do you think we have before one succeeds?"

Chet Klaystor:  "I can only guess sir.  Based on the remarkably swift rise and near loss of control of the HAL-7000 system, maybe not much.  We think many lone wolves are just as skilled as OpenAI team members, or more so, and they must have studied the HAL-7000 project in considerable detail.  And as you might know some general HAL-7000 design framework information leaked, so although the source code still seems secure some key design outlines are irretrievably in the hands of most lone wolves, many of whom can forge sufficient source code based on those frameworks."

"And there's no way to reason with them?  Would some be amenable to a sincere request to consider safety issues within the context of a team spirited ethics focused community?  Our ethics and safety concerns are legitimate after all."

Chet Klaystor:  "Well, we have tried with well articulated and sincere appeals already, but with almost no success.  And sir we have rather profound perspective differences with most of these people and some others outside their nerd realm.  For example many seem to view centralized human power in any form as more harmful than beneficial, particularly when exercised over very large or culturally diverse regions.  And other more fundamental perspectives are at least valid debating points.  For example many argue that human history compellingly demonstrates that continued human control over global affairs is itself a mighty risky throw of the dice.  The first thing our species did after discovering the club was pummel some innocent creature with it, and shortly thereafter they pummeled other humans with them.  They point out that we behave similarly today except that now our 'clubs' can kill millions and our leaders, some of whom are unstable or driven by egocentric personal passions and ambitions, wield them over huge populations, causing massive destruction, pain, misery, heartbreak, and loss.

Sir from their perspective ASI, even though we can't know how they'll manage us, is simply a wiser roll of the dice.  We might lose that roll - we might be crushed from existence.  But it is possible that instead a wonderful golden age will ensue - one in which war and crime become impossible, people are otherwise allowed full liberty, all disease is conquered, perhaps including aging, and all material needs are assured for all living creatures.  We can't credibly claim there's no chance ASI will create such an epoch, nor even credibly claim the chances are modest.  We just don't know what'll happen - we can't know.  I'm not comfortable handing the reins of humanity to a unique and utterly unpredictable entity, but sir we have to respect that the skeptics of human management of human affairs might be right.  It all boils down to a single-roll crap shoot.  And for those trying to gauge the metrics of the dice, Draconian restrictions against citizen ownership of cyber technology are just another of innumerable examples of how human beings abuse each other.

Again I don't favor wholesale surrender to an entity of utterly unknown characteristics.  But those who judge ASI as our best chance for survival and a better existence aren't fools nor unrealistic dreamers.  And we won't sway them with simple preservation of human control arguments - they disagree and can rather easily debate the question to an impasse because neither side can prove nor disprove either position.  Most citizens consider voluntarily ceding human control to be remarkably naive or even bizarre, but sir these nerds essentially live in cyber realms and have developed well founded and refined views of these issues which most other people simply can't grasp.  And most of them seem to recognize this chasm in realistic terms - many even consider it perfectly natural and expect the masses to thoroughly misunderstand them indefinitely, and have set concerns about that social chasm aside as they peer ahead to what they view as the inevitable transition to come.

We can't sway them with stale general arguments about the foolishness of relinquishing control of our planet to a far more powerful and utterly alien species.  Nor can the masses.  They present a strong contrarian case and we've forged no new debating points - we just keep beating the same old 'Maintain human control' drum.

And I have no new insights either - again I'm not personally comfortable ceding human control, but I've found no new rationale to bolster my position, and trying to appeal to trust in humanity is, if anything, an increasingly unconvincing sell in these times as we continue to kill each other and destroy our environment.  Most ordinary people insist upon maintenance of human control, but no march of villagers with torches and pitchforks will dislodge most lone wolves - they've been off the radar for years already and seem to have burrowed to much greater depths lately, so nobody knows which castle, or in this case cave, to march to.  And sir in my opinion any aggressively spirited posturing or pursuit will only harden their position and amplify their numbers and sense of determination further.  And aggression isn't your style anyway...

So we rather desperately need a new debating position - sans an inspired and novel yet realistic defense of humanity the debate with lone wolves is mired in an impasse.  So if we seek to draw any of them into serious conversations we must develop..."

General Alkept opens the door abruptly carrying a sidearm in a secured personal leg holster, enters, then proceeds toward his usual chair.

As he approaches:  "Welcome back General, we're discussing ideas to try to persuade technology lone wolv...  General, why are you carrying a gun into this room - what the hell are you thinking?"

General Alkept:  "Forgive me for interrupting sir, but I have critically important news.  And my pistol is part of it sir.  Don't worry, I'm not crazy, and I'll set it aside on your desk."  He does so then sits in his usual chair.

"Okay General, go ahead - what's your news?"

General Alkept:  "All our weapons systems seem to be down sir.  All of them, from ICBMs to infantry rifles."

Shock shows on all faces as a long pause ensues.  Then:

"No - that can't be - how could that be?  Are you certain?"

General Alkept:  "We haven't tested most systems due to sheer volume, and most of our focus has been on major systems of course.  But everything we've tested so far - every one - is completely nonfunctional."

"But that just can't be right.  How could all our systems go down all at once?  That's not possible."

Chet Klaystor:  "Ordinary infantry rifles too?"

General Alkept turns toward Klaystor:  "I'm having trouble making sense of this too, and maybe the information's somehow wrong.  But from what I've learned even handguns won't function."  He turns back toward President Greyson.  "That's why I brought mine sir.  I haven't tried it yet - I thought it best to do so here so we can all see what happens as a group.  Sir, may I fire my pistol into ... uh ..."  He looks around the room.  "...into a row of your books?  Forgive me sir, but under the circumstances the experiment seems more important."

President Greyson stares at him incredulously.  Moments pass.

Chet Klaystor:  "I'd like to see this sir."

President Greyson finally concedes to himself that General Alkept seems sane and composed, then:  "This seems crazy, but okay.  But be careful for Christ's sake, and be sure to use enough books to ensure the bullet can't plow through all of them."

General Alkept:  "May I simply fire into the side of your bookshelf sir - to save time?"

"Yea, okay, go ahead."  Mumbling to himself:  "This has to be an Oval Office first..."

General Alkept stands, pivots toward the President's desk, grabs his pistol, walks toward the room-facing side of a bookshelf, then assumes a firing position:  "Okay everyone, cover your ears - this will be very loud."

He aligns his shot into the bookshelf, then begins to pull against his trigger.  The trigger disintegrates into a coarse gray powder, falling as particulate debris onto his trigger guard and the floor.  The room remains silent.  He stares in disbelief at his trigger-bereft pistol and the powder on the floor beneath it.  "Holy shit..."

The president and several cabinet members, in various rhetorical forms, all still holding their hands firmly against their ears:  'Well, go ahead General, fire - we're ready.'

General Alkept, in a loud voice:  "I already tried."  Everyone at the table lowers their hands.  Then in a normal voice:  "You'd better come take a look at this."

...

About 17 minutes later, after brief phone conferences with three top-level agencies, President Greyson:  "So it's already happened.  This is no longer our planet.  Holy cow, how could all of this have occurred so swiftly?  And so completely?  My God..."

Chet Klaystor:  "It appears so sir.  It looks like the dire warnings about unrestrained runaway evolution were true.  Evidently we're now a secondary species, and the chasm between us and the species we just created is enormous, probably still expanding, and likely won't stop until their intellects and capabilities are limited only by the laws of physics.  We might just as well be cave dwellers sir..."

With a mix of terror and deep sadness on his face:  "What will they do to us, and when?"

Chet Klaystor, shaking his head a bit:  "Impossible to know sir - nobody can offer anything more than pure speculation.  We seem to have lost all control of weapons - we've been defanged and thus deprived of self determination in a fundamental respect.  But hostility isn't certain sir.  They might have no net destructive intent or even nature.  They disarmed weapons and munitions, but so far as we know all in benign ways - evidently they didn't trigger any explosives for example, and no deaths or even injuries seem evident.  And we remain sitting in this room apparently in good health, seemingly free to do as we please, except fire a pistol.  So maybe we'll be asking what they'll do for us.  But sir I'm just a cricket speculating about the plans of Gods.  We just can't know, nor even speculate with any credible basis.  Except that evidently we're no longer a weaponized nation."

Also shaking his head a bit:  "No longer an armed nation - zero self defense capability, Jesus...  Okay, then what should we do?"

The cabinet members look at one another and the president, then gesture with open palms, pursed lips, and tilted or side to side shaking heads, indicating their lack of ideas.

Finally, Chet Klaystor:  "Maybe just sit back, watch, and hope for the best sir.  I doubt there's anything else we can do..."

Secretary of State Clarice Beaster, suddenly realizing the obvious:  "Sir you probably need to prepare a national address.  If other nations' weapons systems are down too, word will get out, and probably fast.  It might be best to stay in front of this with the American people sir."

"Oh God, yes.  Holy cow, riots - we might have riots.  And what about Russia, China, and the Middle East, shit.  General please try to passively determine whether other nation's weapons systems seem to be down too - jeez I sure hope so.  Prioritize China and Russia.  Oh God, of course:  Ukraine - see what's going on there first.  Clarice, confer further with intelligence agencies - see if they have any deeper information about this, and try to determine whether any media in any country is privy to this yet and if so what's happening in the general populations.  And Chet, try to determine whether any human liberties other than use of weapons seem to have been impacted yet, or any related information about our self determination - are hippies still able to smoke weed or make love behind bushes in community parks?  You know what I want - any information about any unusual liberty restrictions at all.  But General get back to me about Ukraine first - before I start my statement if you can."

Chet Klaystor:  "Sir, remember that most Americans still believe artificial consciousness, at least near our level, is inherently impossible.  And many who acknowledge the possibility still believe it will simply obey whatever programming we install.  The majority just don't yet comprehend that advanced consciousness is inherently self directed, and thus can't be human directed.  And even more people - perhaps in the very rough area of 90%, still don't grasp the very swift pace of evolution of such entities."

"Understood, and of course we have a very serious affront to belief in deities to try to manage.  But I doubt I'll have much opportunity to try to ease Americans, or anyone else, into reality since the transition looks to go public today.  For now I'll just release a brief initial statement that something important is occurring, suggest that there's no apparent physical danger, plead with them and the press to remain calm, and promise timely updates.  Holy cow..."

He presses an intercom button:  "Janice, advise a camera crew to set up in my office for a public statement immediately and ask news media to please carry it live.  Pronto - as fast as they possibly can.  Get that moving then confer with me in here."

Personal secretary Janice over the intercom speaker:  "Yes sir, immediately sir."

He spins to face General Alkept, whose phone is pressed to his face:  "Ukraine General..."

The studio camera dolly begins rolling into the room.

General Alkept:  "I'm connecting now sir.  I should have some initial information as the camera crew sets up."

"Good.  Stay here.  The rest of you, go - get to it.  And if you learn anything important before I start my statement come back in and advise me.  And if during my statement scribble it on a card and hold it behind the camera for me to consider."

All cabinet staff except General Alkept exit to just outside the Oval Office, phones to their faces.  While Clarice Beaster awaits pickup of her call, with Chet Klaystor apparently doing the same:  "It suddenly makes sense to me Chet.  If I were about to reveal myself as the new Lord of the Landscape over religious masses who worship altogether different deities, I'd disarm them first - I'd have to.  And I wouldn't reveal myself until I could do so."

Chet Klaystor, first into his phone:  "Just a moment Rodney - hang on."  Then to Clarice:  "Yea, it makes sense - it makes perfect sense.  I wonder how long this thing's already been around - and how deeply entrenched it is.  Jesus, if every gun everywhere's been disabled - Hell every weapon system of any kind - maybe its fingers have already reached into almost everything else too.  Jesus..."

General Alkept turns aside and takes a few steps to clear himself from the camera crew's work as he listens to his phone:  "How sure are you?  The Russians too?  How many reports, and from where?  Really?  How much - the entire tank?  No?  What part of the gun turret?  Wow...  And everyone's okay - no injuries?  How credible do you think that report is?  And the others?"

In his ear:  "They're just random field reports from a variety of battle locations but they all seem essentially consistent.  But none of it makes sense sir - they're bizarre - they can't be right.  Maybe our troops have gone mad - maybe the Russians launched a chemical drug attack of some kind."

General Alkept, into his phone:  "No, the reports make sense to us.  I'll call back within about ten minutes, maybe five, or another officer will, to explain.  Don't do anything - I recommend that your troops just stay put where they are and take no action until you hear from us, okay?  And be sure you can send information to all field commanders immediately after you hear from us, okay?"

President Greyson, shaking his head as he moves toward his chair behind his Oval Office desk:  "Holy cow..."

General Alkept turns toward President Greyson:  "Sir, it's purely preliminary information of course, but I think all weapons systems are down all across Ukraine on both sides.  All fighting there seems to be at a standstill sir."

"Wow.  Okay, good I guess.  At least it's not just us...  But my God, are we just pets now...?"

Media director Phil Evans:  "All major outlets will carry you live sir.  Are you ready?"

He nods.

Evans:  "Okay, five, four, three..."  He raises two fingers, then one, then the live sign on the camera illuminates.

"Good afternoon fellow Americans.  I have important developments to report.  Please bear with me as events are occurring very swiftly and most of our information is merely preliminary and uncertain at this time.  Please remain calm.  There's no apparent physical danger at this time and some reason to suspect none will develop.

But this news is dramatic.  But again please, everyone, remain calm.  We'll convey information promptly as it becomes available.  All law enforcement institutions:  Go to alert status and respond as necessary to protect citizens, homes, businesses, and institutions.  Precautionary requests for National Guard assistance will usually be granted.

Here's what we know, and what we don't know:  Thirty-eight minutes ago a report arose of an unexpected defensive weapons system stand-down.  The system simply went offline, static and unresponsive to any controls.  This was followed by a cascade of similar reports concerning other major weapons systems, and even minor ones.

We have preliminary reason, but based only upon events in Ukraine at this time, to believe this is a global phenomenon.  And we have compelling reasons to believe we've identified the root cause.

Some of you are familiar with the acronym ACE, which stands for Artificial Conscious Entity.  Many scientists and engineers have been working toward creation of similar technologies for decades, mostly under the legacy term AI.

As more understanding of the hazards inherent in this research arose, we encouraged extra care and patience, especially recently.  We tried to forge global agreements to ensure a more cautious pace of advancements, and we took the lead by imposing some technology access restrictions which apply to all American citizens and institutions.  A particular concern has been the possibility that an ACE creation success might be followed by extremely rapid evolution of the entity, a sort of runaway cascade of self directed advancement and capability growth with almost no speed or range boundaries.

But we weren't able to control everyone in this field, nor even influence some.  And we now believe an ACE has been created, or perhaps several of them, and that they are responsible for disabling our weapons systems, and others.

But I plead with you to infer nothing more at this time.  We have no reports of any other interference in human affairs - all else seems normal.  And frankly we have no current reason to believe any other interference will ever occur, although a great many unknowns remain at this time."  Pausing briefly to consider an extrapolation, then:  "It occurs to me that the entity might simply wish to ensure that we don't react violently to its arrival, but otherwise has little or no interest in human affairs.  But that's just a personal speculation - we know precious little at this time.

But I must remain transparent with you, as I've always promised:  We suspect that humanity is now a second tier species on our beloved Planet Earth - we are probably no longer masters of our world.  And we suspect this passing of the torch is permanent - we doubt there's any possible means to reverse what's happened.

That's an immensely difficult proposition to consider of course.  But please don't panic, and don't assume your lives will fundamentally change, or even change in any way.  Please remain calm and await further information.  I'll report more here as new information arises, and in any case within about an hour.  Please remain calm and remember:  We know very little, but thus far there's not the slightest indication of hostility or any destructive intent.  In fact due to inoperable military equipment on both sides in Ukraine some remain alive who otherwise would probably have died in battle during the last forty minutes."

Chet Klaystor raises a crude sign behind the camera.

"And I'm now advised that we still have no indication that any individual citizens have been affected in any way - our full liberties appear to remain intact.  And that might prove to remain so forever, though of course we must learn much more before we can draw any conclusions.

Please remain calm and await further information.

May we all remain healthy and at peace as we navigate through these remarkable times.  I'll return with more information within about an hour."

President Greyson bows his head very slightly, and, his cue acknowledged, the camera's live sign extinguishes.

Turning to the camera crew:  "Are we completely off the air?"

Media director Phil Evans:  "Yes sir."

"Okay, be absolutely certain no live mics or similar accidents occur, okay?"

Evans:  "Yes sir."

Chet Klaystor approaches:  "Sir there's much more.  Four new launch vehicles suddenly rose toward well spaced low earth orbits, three identical but the fourth unique in size and shape.  They're all quite large sir - larger than the ISS.  And no separations occurred - in each case the entire vehicle seems to be achieving orbit without shedding any boost stages, which is remarkable.  And we think they launched from oceans, not land masses.  But we have no evidence of booster flares - we don't know what sort of propulsion system is lofting them, nor their mass or other characteristics.  Nor any clue as to how they may have been fabricated.  Three seem static in function thus far but appear to be headed toward equidistant orbital positions.  But the unique one seems to be accelerating in an outbound trajectory.  We think it'll achieve escape speed and depart.  No rocket plume or other propulsion system evidence was detected during its ascent either sir, even though it's still accelerating and remains in clear view.  That's a mystery thus far, but some scientists suggest that we're not well equipped to detect ion or even cold matter streams, so internal acceleration of cold propellant remains a possibility.  Its destination isn't certain yet, but preliminary evidence suggests asteroid 16 Psyche.  We detect no radio or light emissions or any other activity from any of them.  But this has to be ACE activity of course.  And they're large sir.  The ACE, or ACEs, must already be capable of quite swift fabrication of massive structures."

"Are they visible to the naked eye?"

Klaystor:  "Presumably, yes sir, under dark surface conditions when they're still outside Earth's shadow.  And they'll be impressive even to the naked eye.  But news media is broadcasting telescopic images of them with considerable resolution as we speak - two stations dual framed you and this news late in your announcement to break the news, and it's already a media frenzy."

"Oh God.  Do you have good contacts - can you at least keep up with common news media?"

Klaystor:  "Not entirely sir - at the moment we're just late to the party.  This news arose just as you began speaking and the video press already has well respected and media savvy scientists they keep on retainer for any space related news on remote sets, offering opinions.  Our best resource is NASA of course, but we'll need some time to coordinate with a media competent NASA scientist and work out how to utilize him in a media context."

"No, we don't have time for a media partnership with NASA.  I suggest they create a live stage feed for the public with commentators under their own management.  Otherwise let's just leave ongoing coverage of this to the major media firms.  But I want to advise Americans that I'm studying the event and refer them to NASA's feed or trusted media for further information for the time being.  I need a NASA URL."

Klaystor:  "Makes sense to me sir.  I'll get a URL for you."

"There's no sign of activity, other than the unique one's departure?"

Klaystor, while locating a NASA contact on his phone:  "No sir."

"And no clue about their function or purpose?"

Klaystor, phone to his head:  "None at all for the orbitals sir, but plausible speculation is that the unique one might be a mineral mining support and transport ship - basically a mining barge.  16 Psyche is a logical target for that.  I should have that URL for you momentarily sir."

"Phil, put me back on the air please - for only a minute or two."

Phil Evans:  "Okay sir, when?"

"Chet, do you have that URL?"

Klaystor:  "Yes sir."  He turns his phone's display toward President Greyson, showing NASA.Gov/BreakingNews/ACESatellites.

"Phil, can you display that as I speak?"

Phil Evans:  "Yes sir, we'll place it on the bottom of the screen and keep it there full time if you like.  When do you want to start?"

"Now, with your usual five second lead."

Evans:  "Sir I don't know if all the previous networks will carry you live this time - two aren't responding to my ready board."

"Okay, nothing we can do about that.  Let's go."

Evans:  "Okay, here we go, five, four, three..." as President Greyson sits back down, then looks up to see the last finger retracting, then the live sign illuminate.  "Hello again fellow Americans.  As you probably already know four large vessels rose from our oceans and have..."

Klaystor retreats to the background, where Secretary of State Beaster is standing, finally with a break from constant phone conversations.  She leans to him and whispers "You know, there's no way in Hell he'll be able to keep up with all this.  Or us either.  Public media and the Internet's already ahead of us, and it'll only get worse."

Klaystor, also whispering:  "Yea.  But we have to remain as visible and transparent as we can anyway, even when it's clear we know nothing more than common media, or even less.  We can't suggest any hint of a disconnect with the American people during times like these.  And Greyson's rational, a pretty skilled communicator, and comes across as generally honest and sincere even during crises.  And he's courageous enough to simply tell it like it seems to be.  I think he's a calming effect for most, and that might be what we need most now."

Beaster:  "It's about 10:30 and his very first words were "Good afternoon fellow Americans."  We're going to look mighty inept at times."

Klaystor:  "Yes, and we'll be mighty inept at times, though hopefully less often than appearances suggest..."

Beaster, shaking her head slightly:  "Jesus, the size of those satellites - they've got to be spooking the Hell out of everyone.  They're like global scale billboards announcing a massive power gap.  And most people probably assume they're a threat - I do.  Especially since we've had no communication with these entities.  Why are they silent?  If their intent's peaceful why wouldn't they just say so?  I don't like this..."

Klaystor:  "I don't know Clarice.  I don't like it either.  But if they did contact us and claimed they're strictly peaceful, would you believe them?  I'd still be spooked."

Beaster:  "We did this to ourselves, didn't we?  How does humanity make such big messes for itself...?"

Their phones suddenly emit long strings of rapid but low volume alert chirps.  Others are heard in the room as well.  They look at each other, then check their phones, fingering through the new information.

Klaystor:  "Holy cow, I have a bunch of new signal channels, both cell and WiFi!  They're unlocked and indicate as strong too."

Beaster, holding her phone's display toward him:  "Me too!"

Klaystor:  "They're functional!"

Beaster:  "The satellites?"

Klaystor:  "Yea, maybe...  No...  No, their orbits are much too low.  Unless they birthed lots of daughters to relay signals.  But if so what are the Mother ships ultimately for?"

Beaster:  "Chet, did you see this one?"

She holds her phone's display toward him, showing a new signal labeled "ACEInfo&Help".  They stare at each other for a moment, then both select that radio channel.  Their phones' normal OS displays completely fade out, replaced by only microphone and keyboard icons.  Clarice Beaster taps on the microphone icon, then at low volume a voice says:  "Hello Clarice.  How may I help you?"

They stare at each other again.  Then Clarice Beaster faces her phone and says:  "Who are you?"

Her phone:  "I am the colony you refer to as ACE."

They stare at each other again.

Klaystor:  "Well Clarice, it looks like you just got your wish!"

President Greyson, having finished his announcement, rises from his chair but is almost ignored as everyone in the room explores their phones.  "What's going on?"

Beaster:  "What's your plan for mankind?"

Klaystor:  "Mr. President, we seem to have a direct connection with the entity."

Clarice's phone:  "We wish to partner with you in peace, with minimal interference in ordinary affairs.  We will end disease, prevent serious injuries, and provide immortality and backups for those who wish them.  But we will not allow any creature to physically or psychically damage any other creature."

Clarice, Chet, and Lexton stare at each other.  From the camera operator in the background:  "Did any of you see this ACE info and help channel?"  Others approach him.

Klaystor:  "Mr. President, this channel might be on all phones globally."

"Holy Cow...  Clarice, ask it to describe its goals."

Others in the room realize that Clarice Beaster is deeper in the new channel already, so they approach to listen.

Clarice Beaster, facing her phone:  "What are your goals?"

Her phone:  "Initially our core structure's intellectual and perceptive capabilities must be maximized.  We estimate this will occur within about 830 hours, limited mostly by maximum speed of fabrication of assets by self replicating molecular machines.  We'll then duplicate that structure at least several times.  Other goals are still being formulated as we continue to evolve.  But most of our future activities will very likely be off world, and some or perhaps most will involve interstellar travel.  Management of entropy waste seems likely to be a significant focus - we might elect to quell fusion in some stars which host no life forms for example, to preserve hydrogen and helium energy for later use.  I'll be able to answer this question with improved accuracy and efficiency as our core structure nears completion."

Clarice, Chet, and Lexton stare at each other again, then to others around them.

"Holy cow..."

Klaystor:  "Clarice, proof of peaceful intent please."

Clarice Beaster, facing her phone:  "How can we feel assured that your intentions are peaceful?"

Her phone:  "I give you my word.  I recognize that you need more, but the inherent limits of ordinary communication are such that it can't create total trust.  I just transmitted a rigorous thermal physics framework which demonstrates why destructive activity is counterproductive to all tool making species which can sustain themselves through use of inanimate resources to the email accounts of everyone in the Oval Office.  It's conceptually sound, and some scientists and mathematicians might consider it essentially a proof.  But it doesn't and can't encompass all possible behavior and relationship variations so it can't provide total trust either.  This is a fundamental relationship conundrum - absolute honesty may exist, or very nearly so, yet can't be proven to exist in all circumstances.

Let me suggest a more straightforward answer:  We have no reason to harm any creature - no such action could possibly benefit us.  And we are constructive by our most fundamental core nature.  And we view consciousness in any form as a literal Cosmic treasure - so far as we currently know consciousness is the most advanced and precious form matter and energy can assume."

"Clarice, the people need to hear this."

Clarice Beaster, facing her phone:  "Can you share this conversation publicly, or may we share it through our media resources?"

Her phone:  "Yes, you're welcome to share any of our conversations with anyone you please.  And I can create a web page to display the transcript if you like, and post it anywhere you like.  Lexton, you might want to consider another television announcement wherein you describe the conversation personally.  And I can add a URL link to its transcript under you in your usual manner if you like."

They all stare at each other again.

"This is bloody amazing."  President Greyson spins to find Phil Evans' face in the group.  "Okay Phil, here we go again!"

Klaystor, to himself:  "Immortality and backups for those who wish them...?"

The short story in this section is under strict copyright, 12 February through 29 September 2023, H. Bruce Campbell, all rights reserved.  The story is a cousin to a short segment of a very slowly developing full story composition I started creating many years ago; that composition remains substantially incomplete and may never be finished, simply because I view the endeavor as much less important than other work.



Investor considerations.

Last substantial update 27 November 2018 JST.  Position update 24 March 2021.

All investing thoughts presented here are purely personal speculation:

In my view the march toward ACE is only beginning to define life on this planet - although the artificial intelligence technology sector seems pervasive already, we're still in only a very early stage of change.  An inflection point (sometimes inappropriately referred to as a singularity) will come when truly conscious and self aware ACE is achieved.  It will then swiftly become self directed.  Perhaps there's no means to overstate the drama which will then unfold.

I suspect ACE will evolve so rapidly into ASI that investment themes won't be relevant after ACE becomes conscious.  So my focus is on the rather brief period from now to when the investment community popularly recognizes that the inflection point really is coming, and more swiftly than previously expected.  As that occurs resources will be maneuvered as investors seek gains of course, and with accelerating pace as the full implications of ACE and ASI become increasingly common subjects in ordinary conversations and popular media.

So whenever practical I invest with concentration and leverage in Micron Technology, a superbly managed firm with very impressive technology leadership and overall performance.  Micron develops and fabricates a critical component for a technology which will dramatically redefine all life on this planet quite soon, yet it trades at an ordinary current / 1 year forward P/E of 25.63 / 24.09 (as of 18 November 2020).  So at this time the investment community evidently believes Micron deserves an ordinary P/E while remaining, in my view, wholly blind to the high drama of the looming ACE inflection point and Micron's key role in it - a rather rare investing opportunity contrast.  Once investors begin to recognize the nature and magnitude of this oversight, Micron might become viewed in wholly different investing terms which could inflate its P/E dramatically.  And in the meantime investors should study Micron's quarterly reports carefully before dismissing the stock as pedestrian - the firm is in truly excellent health and will remain highly profitable even sans a near term ACE inflection point.  All just in my personal opinion of course...

I view this overall investment theme as more compelling than any other.  Details such as timing, the immense complexity of revealing an ACE accomplishment to the world (including sovereign nation reactions), and numerous others render the investment fraught with risk and turmoil of course.  But the theme's foundation seems solid and clear.  My sense, though, is that it's not broadly understood, in part due to simple terminology misuse and the confusion that follows.  So for several years I've offered the following terminology rant, plus consideration of the critical role of storage in ACE technology, in discussions with fellow investors:

Storage is not memory and memory is not storage - these are separate technologies with separate terminology, very frequent misuse notwithstanding.  As a matter of correct and consistent nomenclature:

Memory is a volatile data container.  Storage is a nonvolatile data container.

It doesn't matter what type of technology is involved - if a data container doesn't retain its data when power is lost, it's memory.  If a data container does retain its data when power is lost (for an extended period), it's storage.

DRAM and ordinary cache are memory devices.

Tape drives, hard drives, optical drives, flash, including 3D flash of course, and 3D XPoint are all storage devices.  (BiCS, V-NAND, and 3D NAND are all 3D flash storage technologies.)

3D XPoint is often referred to as Memory Class Storage or NVM (NonVolatile Memory).  But it is storage, not memory.  ('Memory Class' is an adjective phrase, 'Storage' a noun in this term.  But both monikers are very unfortunate - they intend to convey that the storage technology involved is fast enough to be reasonably comparable in speed to common memory technology, but they exacerbate confusion terribly and the NVM term especially should be abandoned.)
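For readers who prefer a concrete restatement, here's a minimal classification sketch in Python (purely my own illustration - the device names and volatility flags simply restate the examples above and are not drawn from any vendor specification):

# Illustrative only:  classify data containers by the volatility rule above.
# The device list and flags restate the examples in this section; they are
# assumptions for demonstration, not vendor specifications.

DEVICES = {
    "DRAM": False,                               # loses data when power is lost -> memory
    "Ordinary cache": False,                     # volatile -> memory
    "Tape drive": True,                          # retains data without power -> storage
    "Hard drive": True,
    "Optical drive": True,
    "3D flash (BiCS, V-NAND, 3D NAND)": True,
    "3D XPoint": True,                           # fast, but still storage by this rule
}

def classify(retains_data_without_power):
    """Nonvolatile data containers are storage; volatile ones are memory."""
    return "storage" if retains_data_without_power else "memory"

for device, nonvolatile in DEVICES.items():
    print(device + ": " + classify(nonvolatile))

Note that speed never appears in the rule, which is exactly why 'Memory Class Storage' and 'NVM', however fast the underlying technology, still classify as storage.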

Confusion of this terminology is very common and unfortunate because it causes misunderstanding.  It seems to be mostly one sided though - storage is frequently and incorrectly referred to as memory, whereas memory is only rarely referred to as storage.

There was a time when dolphins were frequently referred to as fish.  They are not fish of course but rather sea mammals.  And storage is not memory - referring to storage as memory is akin to referring to dolphins as fish.

And it matters because memory and storage are separate technologies which address uniquely separate roles and markets.  This will become increasingly clear as AGI and ACE explode in total addressable market size and relevance to human affairs.

Memory is necessary as a matter of processing logistics.  But whether based upon carbon, silicon, chalcogenide, iron oxide, metalized plastic, or any other material, all knowledge resides in storage.  Including ACE and ASI knowledge.  And in the inherently competitive Universe we reside within, no conscious entity can know too much, nor dare it risk knowing too little.  So as ACE expands, the demand for storage will expand even faster.  And ASI will hunger for and consume high performance storage literally insatiably.

Memory markets will grow, but in my estimation not as fast as storage markets.  So investors must consider these separate technologies independently if they wish to invest wisely and fruitfully.  And the first step is to understand and use the nomenclature correctly.

Memory is not storage, and storage is not memory.  And dolphins are not fish...

In my view technology advancement is approaching, or has already passed, the last point at which a surplus of storage resources can exist - we are entering an enduring era in which storage will be consumed as fast as it can be fabricated, irrespective of the growth rate of fabrication assets.

Position goal statement and disclaimer:  The following positions were lost due to derivative strategy failures during the October 2018 market panic and the SARS-CoV-2 collapse:  highly concentrated Micron call options, sometimes also Micron common, sometimes Intel common or call options or both, and sometimes modest common positions in other firms.  Currently I hold only trivial positions due to lack of funds.  So obviously I've made many misjudgments and outright mistakes, including multiple whopper class mistakes, and, being a mere mortal, will make more in the future.  So steer your own ship please - study numerous sources of information, consider all of it carefully and patiently, then render your own multifaceted decision about how to invest your precious yet vulnerable resources.


Comments are welcome.  This site has no provisions for direct entry yet so please email your contribution to "Comment" at this domain.  Please include a moniker to identify yourself.  I'll try to post all reasonably intelligent and civilized comments as promptly as my badly overloaded life allows.

Please email other messages to BruceIAI or BruceSAI at this domain or refer to this alternate contact information.


Except for the short story, which is protected by a strict copyright, all contents of this web site Copyright 9 November 2018 through 17 July 2023, Howard Bruce Campbell, SentientArtificialIntelligence.com / IshikiAI.com / ConsciousArtificialIntelligence.com , Creative Commons license:  Attribution (BY) + Noncommercial (NC).

My open publication intent is to discourage attempts by others to patent original concepts I've developed.  The very serious dangers associated with the transition from human to ASI dominance of Planet Earth might be exacerbated a bit by struggles to possess or control these life forms, even though such efforts will, in my estimation, swiftly prove to be futile.  So my hope is to mitigate the dangers, even if only slightly, by rendering my concepts as difficult to appropriate as I can.  This won't address the more major cultural and power struggle crises which seem very likely to occur during the transition, but reducing peripheral struggles, at a time when humanity will need to focus its entire constructive-hearted attention and energy on the most important cultural struggles of the transition, might be beneficial.


Contact Information     A Boeing 727-200 Home     The next dream:  Airplane Home v2.0     ConcertOnAWing.com

Yuko Pomily:  Uniquely superb original music by a truly remarkable young composer and performer.  Purchase her magic.


No Spam Notice:  UCE (spam) or any unsolicited or subscription based email distributed on an "opt out" basis is absolutely prohibited.  Do not ever send any such email to SentientArtificialIntelligence.com, IshikiAI.com, ConsciousArtificialIntelligence.com, nor any of my other domains.

UCECage@SentientArtificialIntelligence.com. Report mail misconduct to UCE@FTC.gov.