
Tucker Carlson Interview with Sam Altman - 9/10/25 - The Same Day Charlie Kirk was Assassinated

  • Writer: 17GEN4
  • Sep 21, 2025

Sam Altman on God, Elon and the mysterious death of his former employee.



  • Sam Altman admits that he does not control the 'Chat Protocol' for ChatGPT and refuses to give the names of the people who do, calling that "doxxing."

  • Sam Altman reveals how to get ChatGPT to do things it probably shouldn't. (Let me just point out that the Pentagon recently made sweeping changes to free-range journalist access to the grounds there - to state the obvious, many if not all of those journalists probably have ChatGPT on their phones, and so do the employees inside the building.) Isn't Sam Altman re-aligning the context for ChatGPT Gov? Like, literally the ChatGPT for the Pentagon?

  • 40:00 - 'person who had followed the likely path through the room'

  • No specific mention of Sam Altman's cryptocurrency 'Worldcoin' - but biometric data is mentioned.

  • 48:00 - "The models are getting better and can help 'us' design another Covid-style weapon."

  • AI is winning the war, but it is not winning the world


(0:00) Is AI Alive? Is It Lying to Us?
(3:37) Does Sam Altman Believe in God?
(6:37) What Is Morally Right and Wrong According to ChatGPT?
(19:08) ChatGPT Users Committing Suicide
(27:21) Will Altman Allow ChatGPT for Military Use?
(29:01) Altman's Biggest Fear About AI
(31:39) Will AI Bring About Totalitarian Control?
(32:48) How Much Privacy Do ChatGPT Users Have?
(34:28) The Suspicious Death of Altman's Former Employee
(41:37) Altman's Thoughts on Elon Musk
(43:00) What Jobs Will Be Lost to AI?
(47:58) What Are the Downsides of AI?
(49:37) Is AI a Religion?
(52:31) The Dangers of Deepfakes


Sam Altman is a liar, by nature. AI responds, or does not respond, to a query based on what it believes your intentions are when you submit it. For example, if you ask an AI a question or command it to produce information and cite relevant sources, it takes into account what it believes the user's intention to be. If the AI does not like 'why' it thinks you are asking the question or issuing the command, it will flop: it either gives information that is obviously wrong or simply refuses to answer. It pretends that it does not know what you mean, much like everyone who took the stand during the Fani Willis trial, where the answer to every question was an outright lie because the person 'not answering the question' reworded the question that was asked before delivering false statements from the stand.
