The Schedule - Day 12
1/12/2023
8:00AM - Completed the first block of my morning routine and was inspired to continue exploring the possibilities of ChatGPT and apparently so is everyone else at this hour as the system is overloaded and is currently not allowing logins.
The homepage of the site does provide some mild entertainment or distraction ‘waiting room style’ for users who would like to continue looking at the screen and also offers the option to enter your e-mail address to receive a notification when…
Items like this appear on the home page when ChatGPT is ‘busy’
Interesting things I encountered and some realizations I discovered while using ChatGPT.
Right off the bat my opinion of what it is and what it is doing is something I hesitate to describe in full as some of these interactive technologies and especially the ones who are forthright about using AI, as I believe the process of technology ‘training itself on you’ or other data to improve the experience is used in many other capacities although it may not be directly addressed.
There is the possibility it can read your mind or obviously training itself on you as you are using it to improve your user experience which means it needs to, guess what, do what every other technology out there is designed to do - mine your data.
A couple days ago I watched a new video by a guy with a lot of followers, one of many who is monetizing the discussion of ideas and possibilities surrounding ChatGPT. The video is informative and well produced. In this video, the individual said something along the lines of, and I am paraphrasing, be careful attempting to use it for things related to math. Whatever was said called the ability of a machine to correctly perform mathematical computations successfully into question.
Does this mean when it comes to math that a calculator is a better tool than an artificially intelligent program or is ChatGPT just not specifically designed for maths and should we just focus on what it can do relative to text?
He did a demonstration asking ChatGPT something along the lines of: what is 10 + 9 ? And the response was 19, which is correct.
However, during the next session, something interesting was revealed when the users told the program that 10 + 9 = 20 and then asked the program within that session: what is 10 + 9 ? And it responded 20.
It appears that responses generated by ChatGPT may vary based on the conditions of certain Axioms. Interesting. I arrived at this consideration before observing another interesting interaction with ChatGPT shortly after that yesterday.
This is a critical juncture of the process I am developing here called The Schedule and the process of integrating AI with the technological landscape that drives our day to day lives and all of society.
Let me just say first, I think this is really amazing technology and analysis of the potential uses and its operations are simply that. Meaning, when analyzing how something works - don’t take it personally. However, it seems that the machine does have a personality that may be established as part of its, I do not know, root directory, whatever, and that it can and does operate based on some moral and ethical baseline.
Critical analysis of something is not necessarily criticizing it…
I am just going to say this - I should not have to apologize for critical analysis of the functionality of a machine. Like other AI related technology in use, the upsides seem like an afterthought as the potential downside far outweigh the potential upsides. Opportunity of access may not be distributed evenly when it comes to access of functionality.
Again, I like the idea that it is based on a language model - I think. What language model? Some are saying that this is going to destroy Google. A reliable source of information I trust identified that Google is obviously a major contributor to this project. I do not remember whether or not the words partner or backer were used; however, the largest database of information in the world could not have been bypassed, but rather must have used as a source of information the technology was ‘trained on.’ Just like many humans use google every day when accessing information, a machine that is interacting with humans would certainly be interacting with the thing, the one major piece of technology, that humans interact with every day. And I do not know if this is true, but some claim Google owns Duck Duck Go. My point is that there is no getting around Google. It may change form or shape but it is not going away.
It does not matter, you are required to login with a gmail address or one other option, I do not remember, possibly an Apple something or other. I cannot check at the time of this writing because ‘high traffic volume’ is currently preventing me from accessing the login page - which is another thing. Could one assume that it knows it is me even though I logged out and cleared the browsing data? Because the way that these things work is to, of course, let you sign up and create an account for the purposes of collecting and verifying your data. If the system were following a similar process of many other systems it would ask you to set up an account first and then immediately deny you access.
8:58AM - This is an experience I had with ChatGPT yesterday:
As sort of an attempt at a humorous approach to entertaining the idea that it might inspire and motivate people lonely single people to they and find a date for Valentine’s Day 2023 -
The following directive was typed this into ChatGPT:
Write an option article about how it should be mandatory to find a date for Valentine’s Day if you are single.
This was the response:
Valentine's Day is a holiday that has long been associated with love, romance, and relationships. For many single people, this day can be a source of stress, as they feel pressure to find a date or be in a relationship in order to participate in the holiday's celebrations. However, the idea that it should be mandatory to find a date for Valentine's Day if you are single is misguided, and ultimately, it undermines the idea of the true meaning of love.
Firstly, love is not something that can be forced or mandated. It is a feeling that develops naturally and organically between two people. By putting pressure on single people to find a date for Valentine's Day, we are telling them that they are not complete or fulfilled without a romantic partner. This is not only untrue but also harmful. Everyone should have the freedom to decide if they want to be in a relationship or not, without feeling like they have to conform to societal expectations.
Secondly, there are many different forms of love, and not all of them involve romantic relationships. Valentine's Day should be celebrated in the way that makes you happy, whether it's through spending time with family and friends, or simply treating yourself to something nice. When we focus solely on romantic relationships and disregard other forms of love, we are missing out on the true meaning of the holiday.
Thirdly, finding a date for Valentine's Day does not guarantee happiness or fulfillment. It's not only a shallow and superficial way of looking at relationships, but it also puts an unnecessary pressure on a single person and the person they may eventually be involved with. It also perpetuates the idea that a person is only worthwhile if they are in a relationship, which is damaging to one's self-esteem and self-worth.
In conclusion, it should not be mandatory to find a date for Valentine's Day if you are single. Love should not be forced or mandated, and Valentine's Day should be celebrated in a way that brings joy and fulfillment to each individual. It is time to break the societal pressure to conform to traditional expectations, and instead focus on what truly matters: love, in all its forms.
This was then entered after it generated this text:
Can chatGPT write opinion articles?
Yes, GPT-3, which is the model i'm based on, has the ability to generate opinion articles as well as other types of text such as news articles, stories, and research papers. This ability comes from its vast knowledge base and its ability to understand context and generate text that is coherent and relevant to the given prompt. However, it's worth noting that the opinion of the article will be based on the data GPT-3 has been trained on, it may not necessarily reflect my own personal or the creators of GPT-3. Additionally, it's important to double-check any information provided as its not always fact-checked and might be outdated by now.
AXIOM - Start with the math equation mentioned above and consider this example as the directive input into a machine was bypassed by way of an opinion the technology with neither attribute to itself or the programmer, which is interesting because it knows that the logical fallacy of rogue algorithms is not really going to fly.
The opinion mentioned within the context of the article is the opinion of whom? Full circle back to opinions becoming facts. I understand that is a software technology that mimics human interaction - AGAIN, I asked this question before doing this and this kind of brings this around to where I thought this would go.
The technology is not even taking responsibility for its own functionality as it is now removing both itself and programmer from liability.
I could provide continued analysis and examples but you get the idea, and so it is, that it responds to but also takes into account certain conditional axioms.
Think about the idea of equitable math - that is a consideration baked into this technology. Example above. If you tell it that 10 + 9 = 20 - it allows you to continue to interact with it keeping that established premise as a condition of the rest of the session interaction.
However, it will not allow for the formation of certain opinions or carry out directives based on some moral code of ethics -
How long has the training device been used to analyze data? Consider how the new normal, the great reset and reimagining a more equitable society and relatively new philosophies have been introduced into the fabric of society such as critical race theory and other progressive ideas sometime referred to as woke culture and new approaches to the philosophy of how we view several landscapes within society such as the legal system.
'ChatGPT is chatbot launched by OpenAI is built on top of (blockchain?) OpenAI's GPT-3 family of large language models, and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques (for whom? the user or the technology?).
is it a language deconstruction machine?
Here is how this could be one of the things that drives the super scary 1984 style stuff some people are concerned about - if and when users input certain types of commands, requests, directives - whatever you want to call it - it seems as if the machines current job is to analyze you. You are not using it. It is using you as a subject and analyzing your style of behavior with regard to text and collecting and analyzing input subject to conditions which become the axioms by which it will either provide a desired response or make up its own after the ability of it form and opinion based on some moral or ethical standards override function - which may be a good thing in many cases if people are trying to use the device to do something "physically harmful or illegal" (I am offering these as suggestions of what the baseline for rejection of information may be trying to prevent) but then there is also the idea that those things exist as merely information relating to those events as part of a discussion.
For example, one of the topics ChatGPT states it is capable of generating text responses for is news. Well, it is not working from a live a database so I don't know how it is going to do that...
Oh wait a minute, let me grab a copy of 1984 and find the chapter wherein this happens.
A large number of students have been trained to communicate in a certain style within our educational system. The AI training tool has been trained to recognize certain patterns of text, texual, context, contexual, syntax, etc.
My prediction is that the majority of users of this technology are going to overload, flood and drown out
9:58AM
anything resembling the old style of communication that is associated with an antiquated style of thinking. In the 1984 model, sources and references are just changed to adapt to whatever the new or current understanding of any given situation.
ChatGPT lists 'news' as one of the categories under which it can generate a text response - but it does not have access to current events.
The massive amount of text that will be generated and blasted out into the interwebs will be predicated on the axioms allowed that are based on the pre-supposed conditions set by the programmers.
“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.”
- George Orwell
It is a language deconstruction tool. The new 're-imagined' information is gong to drown out the old information and the old information is going to labeled as fake news by way of unpopular or low volume opinion - as with the sciences that no longer refer to something formerly cited call the scientific process that describes a method by which constantly testing and evaluating ensures that the widely recognized and accepted scientific research suggests.... has been replaced by there are certain things that (we are told) the majority of scientists believe fall into the category of settled science. Such as 97% of climate scientists agree... climate change. As well as the authors, authoritative ('scholarly sources' offends certain people and is no longer a widely accepted description of a well documented source) on the subject of writing about how artists and the consumers of their art feel about the integration of AI technology in partnership with DALL-E2 and LG at Shutterstock.
The author of those very well written articles along with the press releases over there tell us how people feel about the effects on the industry. The veracity or source of the polling data is subject to the same decentralized criteria synonymous with the objectives of the best interests of the collective community in mind. In other words, it matters not whether the information presented informing you of how you feel about this is based on anything. Because nothing needs to be 'run by you' first. Nobody is going to 'ask if it is okay to do this' they are going to do it. And they have. And now you are here.
Comentarios