17GEN4

The Schedule - Day 390

Updated: Jan 27




AI is definitely homogenizing weaker cultures to be more tolerant, and there is no weaker culture in the world right now than the United States. This is what they refer to as "alignment" within the scope of AI.

tolerance and context


Great interview with Mostaque. What he says about Sunni Islam interpretation (last 10 minutes of the interview) seems to suggest that controlling the narrative of interpretation is a safety measure of some kind. He seems to allude, through this example, to the idea that other interpretations may lead to extremism. It might also suggest that the capacity for independent interpretation is dangerous. Conversely, interpretation may lead to understanding that focuses more on alignment in behavior.

The 2020 protests in the U.S. that were called peaceful protests are one example of controlling the narrative of interpretation in a way that excuses behavior.

Emad does contradict himself several times in the interview, aligning his free-floating interpretation of context as needed to do so.

Emad Mostaque's Wikipedia page seems sparse given his success. This is certainly not to say he is insignificant; rather, his page lacks mention of the specific relationships with other key players that can be seen on most Wikipedia bios of tech entrepreneurs, which typically trace an intertwined path between players and companies.

From his Wikipedia page: he became interested in helping the Islamic world by creating online forums for Muslim communities and developing "Islamic AI" which would help guide people on their religious journey. This would make him somewhat of an Imam by proxy.

  • Altman believes future AI products will need to allow "quite a lot of individual customization" and "that's going to make a lot of people uncomfortable," because AI will give different answers for different users, based on their values and preferences and possibly on what country they reside in.

  • "If the country said, you know, all gay people should be killed on sight, then no ... that is well out of bounds," Altman tells Axios. "But there are probably other things that I don't personally agree with, but a different culture might. ... We have to be somewhat uncomfortable as a tool builder with some of the uses of our tools."

  • Asked if future versions of OpenAI products might answer a question differently in different countries based on that country's values, Altman said, "It'll be different for users with different values. The countries issue I think, is somewhat less important."

Targeted shaping of individuals (above)

He didn't specify how many OpenAI staff would work on election troubleshooting, but he rejected the idea that simply having a large election team would help solve election problems. OpenAI has far fewer people devoted to election security than companies like Meta and TikTok.


The language model cannot control behavior, yet the statements above suggest behavior can be shaped through the controlled narrative context of interpretation. Some propose this would actually provoke negative behavior, as forced acceptance of sanitized or unifying (with the opposition) narratives and interpretations dilutes rather than promotes diversity of independent thought and culture.

As you attempt to reshape and homogenize culture toward, what, one universally accepted set of principles? Playing god with a language model that amplifies one world order after an event like the worldwide pandemic seems to imply an even deeper attempt: taking concepts like search engine suggestion results to the next level while replacing the space formerly occupied by news media on social media platforms.

TikTok pro-Palestine vs. pro-Israel posts following Oct. 7th ran at roughly a 10:1 ratio.

Ironically, Wikipedia became the go-to online source for information even though it is widely known that much of the information on the platform is not correct, and the disinformation seems intentional: attempts to point out or correct the errors are selectively ignored by those who control access, furthering the agenda of spreading misinformation.

Most of the information on Wikipedia is probably correct. However, given the extraordinarily large total volume of information, even if only 2% of it is incorrect, that is still a lot of incorrect information. Controlled narratives on platforms like Wikipedia are greatly amplified by generative AI tools that replicate the information en masse, which can lead to phenomena such as the Mandela effect.
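The scale point above can be made concrete with a back-of-envelope calculation. This is a minimal sketch; the article count below is a rough assumption for illustration, not a figure from the post:

```python
# Even a small error rate over Wikipedia's scale yields a large
# absolute number of potentially incorrect articles.
article_count = 6_800_000   # rough English Wikipedia article count (assumption)
error_rate = 0.02           # the 2% figure from the text

incorrect_articles = int(article_count * error_rate)
print(incorrect_articles)   # 136000
```

On these assumed numbers, a 2% error rate still means on the order of a hundred thousand articles, and generative tools that train on or cite the corpus replicate those errors at the same scale.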

False memory - Mandela effect - Gaslighting / Psychoanalysis/Psychotherapy - Marxism


Who puts a music festival 3 miles from the Gaza border?

Supernova, Nova, Tribe of Nova

2023 strikes



