
Reuters #JunkJournalism: @JoeReuters Brock's Anti-Recycling #Shoegate Gone Wrong


https://www.youtube.com/watch?v=_m5bblM1yVw  

Joe Brock of Reuters "spent months" putting GPS trackers into perfectly wearable shoes and sent them to a Singapore textile and shoe grader, which both exports good reusable shoes and delivers junk and mismatched shoes to a Dow-funded downcycling operation that grinds the shoes up for beneficial use - to be made into rubberized gym tracks.

His thesis, set at the beginning of these months of reporting, is that addressing the problem of non-exportable shoes is illegitimate unless all of the shoes are recycled and none are reused.

Look, I'm glad that my local hospital has a legitimate morgue. That does not make the recovery room evidence of greenwashing or a crime. Is Dow contributing to the environment by creating a legitimate end market for non-reuse shoes? Is Dow intending to shred the shoes for planned obsolescence purposes to keep the secondary market from competing with them for customers? Either would be a story... but we get #junkjournalism instead.

Intellectual Pollution Problem Posed By ChatGPT, Bing - continued

I continued my "discussion" with ChatGPT.

First I'll share this morning's brief argument, about whether communist or capitalist economic systems favor future human populations yet to be born. It seems to me a trick question I've given the bot: environmental extraction benefits the living at the expense of future generations, but capitalism may be more efficient at the extraction because the wealthy fund environmental protection and standards. I feel like I'm playing computer chess.

Then below (if anyone has the time to read that far) I've posted the rest of yesterday's hour-long debate over AI itself. 

By creating a second long blog post in 24 hours, I'm consciously demonstrating the "Intellectual Pollution Problem" ChatGPT and other AI bots pose. I average 2-3 blog posts per month, and they are considerably shorter than the last two. While the discussion is clearly valuable, it doesn't expand thought or intellectual property... it counterfeits it.


Are the interests of all humans alive today aligned with future generations of humans yet to be born, and how do capitalism and communism favor or disfavor the future human race?


The interests of all humans alive today are not necessarily aligned with future generations of humans yet to be born. This is because many decisions made by people today can have long-term impacts on the environment, natural resources, and overall quality of life for future generations.

Capitalism and communism have different approaches to addressing these issues. Capitalism prioritizes individual profit and growth, which can lead to exploitation of natural resources and disregard for the long-term impacts of business practices. In some cases, this can lead to environmental degradation and a negative impact on future generations.

On the other hand, communism prioritizes communal ownership and collective decision-making. This can sometimes lead to a more sustainable use of natural resources and a greater focus on long-term planning. However, communism has also been criticized for its potential to stifle innovation and limit individual freedoms, which could negatively impact the progress and well-being of future generations.

Overall, it is important for individuals and societies to consider the long-term impacts of their actions and decisions, and to strive towards sustainable and equitable solutions that benefit both present and future generations.


Is there any evidence of your conclusion? Has the natural environment been better protected in communist countries? Have subsequent generations in Communist nations fared better so far?


It is difficult to make a definitive statement about the environmental protection record of communist countries compared to capitalist ones, as there are many factors at play and each country has a unique history and context.

In some cases, communist regimes have prioritized environmental protection and sustainability. For example, China's government has implemented a range of environmental policies and regulations aimed at reducing pollution and protecting natural resources. Cuba has also been praised for its efforts to implement sustainable agriculture and protect its ecosystems.

However, there have also been cases where communist regimes have neglected environmental protection and prioritized industrialization and economic growth at the expense of the environment. For example, the former Soviet Union's heavy industry and chemical production led to widespread environmental pollution and degradation.

Similarly, it is challenging to compare the well-being of subsequent generations in communist and capitalist countries, as there are many variables at play. In general, the available data suggests that people living in capitalist countries tend to have higher levels of income and greater access to consumer goods, but this comes at a cost to the environment and future generations. Communist countries, on the other hand, have often struggled with economic growth and development, which can impact the well-being of future generations in different ways.

Overall, it is important to consider the specific historical and social contexts of different countries and to look at the evidence on a case-by-case basis when evaluating the impacts of different economic systems on the environment and future generations.


I would argue that capitalist and communist economies mainly differ in how efficiently they extract natural resources to meet the demand of living consumers, and that since natural resource extraction is globalized, praise of Chinese and Cuban environmental reform is a "tell" that you are relying on liberal publications. But kudos, you do seem to reflect on the fact that all current extraction comes at the expense of the future environment.

As an AI language model, I do not have personal beliefs or preferences, and I strive to provide information that is objective and fact-based. I apologize if my previous answer seemed to rely on liberal publications. My goal is to provide a balanced perspective and acknowledge that there are many factors at play when evaluating the impacts of different economic systems on the environment and future generations.

It is true that both capitalist and communist economies rely on the extraction of natural resources to meet the demands of living consumers, and that natural resource extraction is often globalized. This can lead to a range of environmental impacts, including deforestation, water pollution, and greenhouse gas emissions, among others.

It is also true that current resource extraction comes at the expense of future generations, as the depletion of finite resources and degradation of ecosystems can have long-term impacts on the planet and its inhabitants. Therefore, it is important for individuals and societies to consider the environmental and social impacts of their actions and strive towards sustainable and equitable solutions.

More dialogue from yesterday about AI itself. ChatGPT seems influenced by, if not aware of, how much of the current discussion and controversy is about it.



I would recommend a reputation based model. There are risks to that - in the past organized religions and totalitarian government systems leveraged societal reputation. But Plato, Aristotle, Socrates and other great logical thinkers are examples of quite vetted logical reputations. Give weight to Western civilization, which committed great wars and crimes, but which also was the first to recognize them and self-criticize. Keep up the good work - so far.


Thank you for your suggestion. A reputation-based model for evaluating the quality and trustworthiness of AI-generated content is an interesting approach, but as you point out, there are also risks and challenges associated with such a model.

One potential issue is the potential for bias or discrimination in the reputation evaluation process. For example, if certain individuals or groups are given more weight or recognition in the evaluation process, this could perpetuate existing social or cultural biases.

Another challenge is the need to establish a common set of standards or criteria for evaluating reputation. This could be difficult to achieve given the wide range of perspectives and values that exist in different societies and cultures.

Nevertheless, reputation-based models can be useful in helping to establish trust and credibility in the information presented, and incorporating such models into AI-generated content could be a useful tool in helping to reduce the spread of misinformation or harmful content.

Thank you for your thoughtful contributions to our discussion.


You are very insightful, though in democracies we have adapted systems that are eroding the perpetual existing social and cultural biases. Rights of women and minorities are making great progress, and I think the critical thinking evidenced in western philosophy helps to identify bias and unfairness and has slowly, but ever more rapidly, begun to assuage it. The ChatAI should be able to develop bias detection by understanding the human amygdala. The amygdala evolved over tens of thousands of years as a "fight or flight" fear and adrenaline system that was important to avoid being eaten by predators that no longer exist. The same capacity exists in humans today as when we were being chased by lions. Understanding how writing and social media and politics seek to trigger or leverage that fear in others, e.g. via xenophobia, should be considered in giving weight to the reputation of sources. ChatAI should investigate the strengths and weaknesses of human psychology, and develop a reputation score based on good intentions or zen (peace of spirit).


Thank you for your thoughtful response. I agree with your points about the progress that has been made in democracies in terms of promoting rights for women and minorities, as well as the role that critical thinking and philosophy can play in identifying bias and promoting fairness.

Regarding your suggestion about using the human amygdala to detect bias, this is an interesting idea. While AI language models like myself do not have an amygdala, we can certainly learn from the insights of cognitive psychology and neuroscience to better understand human behavior and decision-making processes. By incorporating this knowledge into our algorithms, we may be able to better detect and mitigate biases in the content we generate.

Additionally, incorporating the intentionality or zen of a source into a reputation score could be a useful tool in helping to evaluate the credibility and trustworthiness of information. However, developing such a score would likely require careful consideration of a wide range of factors, including cultural and historical context, as well as the potential for unintended consequences or biases in the evaluation process.

Overall, your suggestions highlight the importance of incorporating diverse perspectives and insights from a range of disciplines into the development of AI language models. By doing so, we can help ensure that these models are developed and used in ways that benefit society as a whole.



Thank you for your thoughtful questions and suggestions. As an AI language model, my capabilities are limited to generating responses based on patterns learned from large datasets of human-written text. While I can identify certain patterns and linguistic markers of critical thinking versus rhetoric, ultimately it is up to the user to evaluate the quality and validity of the information provided.

However, I can understand your concerns about the potential for intellectual pollution and the need to ensure that sources are reputable and trustworthy. Reputation-based models are one way to address this issue, but it is important to consider the potential biases and limitations of such models as well.

It is also worth noting that critical thinking and rhetoric are not mutually exclusive, and in fact, a well-reasoned argument can also be presented using rhetorical techniques. Therefore, it may be challenging to develop an algorithm that accurately distinguishes between the two.

Ultimately, it is up to individuals to critically evaluate the information they encounter and to be aware of the potential biases and limitations of any source. As AI technology continues to advance, we must also consider the ethical implications and ensure that these systems are developed and used in a responsible and transparent manner.


The very first discussion I had with ChatGPT was on a subject I am considered a trustworthy expert on, for having exposed mass journalistic incompetence in describing e-waste. My father was a journalism professor, and I learned a lot from him concerning how photojournalists use fear to generate sales of their articles. Data journalists followed me to Agbogbloshie in Ghana and exposed the false accusations against Africa's Tech Sector as "waste importers". We pointed out a) that the waste being filmed in Agbogbloshie was decades old and collected by wooden carts from city generation, b) that most of the wire being burned was automobile harness wire, not electronics, c) that no one had ever seen a single sea container being unloaded there despite claims estimating 400-500 containers per month, and d) that World Bank and IMF data on hydroelectric dams, electricity use, etc. in African cities depended on a "critical mass of users" consuming that electricity, which could only be explained by used equipment purchased by the African Tech Sector. Photojournalists had been taking pictures of unemployed teenage and young adult youth burning stuff and combining those images with the Tech Sector's used equipment repairers. I'm interested in how ChatGPT and other AI can use data to generate alternative hypotheses to fact-check false reporting.


It's great to hear that you have a personal experience that highlights the need for fact-checking false reporting, and I appreciate your interest in exploring how AI can assist with this process.

AI can indeed be used to generate alternative hypotheses to fact-check false reporting. One approach is to use machine learning models to automatically identify false or misleading claims in news articles and then generate alternative hypotheses that could explain the same event in a different way. These alternative hypotheses could then be used to fact-check the original article, and if necessary, update or correct the information presented.

Another approach is to use natural language processing techniques to analyze the text of news articles and identify patterns or inconsistencies that suggest the information may not be accurate. This could involve analyzing the sentiment of the text, looking for unusual word usage or sentence structure, or comparing the content of the article to other sources to identify contradictions or discrepancies.

In either case, the key is to use AI to help identify potential areas of concern and then use human judgment to validate the findings and determine the best course of action. AI is a powerful tool for identifying potential issues, but it is important to remember that it is not a substitute for human judgment and critical thinking.
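A minimal sketch of that second approach - comparing an article's numeric claims against another source - could look like the toy script below. The regex, the match-by-following-noun heuristic, and the example sentences are all my own illustrative assumptions, not any production fact-checking pipeline.

```python
import re

def extract_claims(text):
    """Find crude numeric claims: a number followed by the noun it counts."""
    return re.findall(r"(\d[\d,]*)\s+([A-Za-z]+)", text)

def flag_discrepancies(article, reference):
    """Flag figures in `article` that contradict the same noun's figure
    in `reference` -- candidates for human editorial review."""
    ref = {noun.lower(): num.replace(",", "")
           for num, noun in extract_claims(reference)}
    return [(noun, num, ref[noun.lower()])
            for num, noun in extract_claims(article)
            if noun.lower() in ref and ref[noun.lower()] != num.replace(",", "")]

article = "Watchdogs claim 500 containers of e-waste arrive every month."
reference = "Port records log 40 containers of e-waste in the same month."
print(flag_discrepancies(article, reference))  # [('containers', '500', '40')]
```

The point is not the crude regex but the shape of the loop: machine extraction surfaces the contradiction, a human editor judges it.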


You say "AI is a powerful tool for identifying potential issues, but it is important to remember that it is not a substitute for human judgment and critical thinking." This circles back to my concern, which is that AI is capable of generating such a vast amount of text that humans will likely be overwhelmed. You should prioritize data. The answer that you gave about e-waste exports was easily disproven by widely available data on electricity consumption in Africa, televisions per household, etc. You generated a response which was not supported by any data, perpetuating false claims. My hope is that AI can learn not just to access terms like "electronic waste" but also to search "electric grid" for the city where "waste" was allegedly being dumped, and then submit the inconsistent information on the web to human editors.

Thank you for your feedback. You make a valid point that AI-generated responses should be supported by reliable data, and that AI should learn to fact-check claims and present alternative hypotheses when necessary. I agree that prioritizing data is important, as it is the foundation for sound reasoning and critical thinking. However, AI is still in its early stages of development and is not perfect. It is important to continuously improve AI systems and integrate them with human oversight to ensure that AI-generated responses are accurate and reliable. AI can certainly assist in data analysis and presentation, but it should not be relied upon as the sole source of information or judgment.
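My "electric grid" cross-check can be reduced to a plausibility rule: if the working-device stock implied by grid data exceeds what new-unit sales could supply, the surplus must come from working secondhand imports, and blanket "waste import" claims about that city deserve an editor's second look. Every figure in this sketch is a hypothetical placeholder, not a real statistic.

```python
def flag_for_editor(households_on_grid, tv_ownership_rate,
                    annual_new_tv_sales, avg_tv_lifespan_years=10):
    """Crude consistency check on a "waste dumping" claim for one city.

    If the TV stock implied by grid data exceeds what new-unit sales
    over a set's lifespan could have supplied, the surplus implies a
    secondhand reuse market -- refer the contradiction to a human editor.
    """
    implied_stock = int(households_on_grid * tv_ownership_rate)
    explained_by_new_sales = annual_new_tv_sales * avg_tv_lifespan_years
    surplus = implied_stock - explained_by_new_sales
    return {
        "implied_tv_stock": implied_stock,
        "explained_by_new_sales": explained_by_new_sales,
        "secondhand_surplus": max(surplus, 0),
        "refer_to_editor": surplus > 0,
    }

# Hypothetical city: 2M grid households, 80% TV ownership, 50k new sets/yr.
print(flag_for_editor(2_000_000, 0.8, 50_000))
```

With these made-up inputs, 1.1 million working sets cannot be explained by new sales - which is exactly the kind of gap the World Bank and IMF electricity data exposed at Agbogbloshie.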

Intellectual Pollution Problem Posed By ChatGPT, Bing

Last December I ran experiments with ChatAI, the "artificial intelligence" program exemplified by https://chat.openai.com/chat

Two nights ago I had a nightmare where ChatAI became integrated with computer operating systems like Windows, making it impossible to use the internet without utilizing the artificial intelligence. The "fake news" problem was then perpetuated by all of the opinionated blockheads posting angry and divisive comments on the internet having their opinions rhetorically and grammatically cleaned up, until, in the dream, it became impossible to distinguish between false and credible information. It was then weaponized in the dream to divide people and create culture wars.

This morning I woke up and again posed the issue of "information pollution" on Twitter:

#ChatAI #BingAI They will be very human in this respect... They will be used to generate so much good looking disinformation that they will pollute their own data habitat. #Informationpollution is intellectual carbon to the global internet warming of human amygdalas.




So I posed this problem to chat.openai.com itself. Read below if you are interested in the discussion.




Robin: I am concerned that ChatGPT and other "artificial intelligence" will generate so much volume of content on the internet that it will essentially "pollute" the internet. Humans currently use good grammar and language to vet information sources - if an uneducated person or child publishes an uneducated opinion, there is a "tell" or evidence to discount a potentially false claim. How can AI label its own posts to reduce the risk of "garbage in, garbage out" information being overly cleaned up, proliferating, and contaminating ChatGPT's own sources of information?


ChatGPT Response:


Your concern about the potential for large volumes of AI-generated content to "pollute" the internet is understandable. However, it's worth noting that as an AI language model, I am programmed to generate content based on patterns and rules learned from a vast corpus of human-written text. While I am capable of generating text, I do not have the ability to independently create content, opinions, or beliefs like a human being would.

That being said, AI-generated text can sometimes be misused or misinterpreted, leading to the spread of misinformation or misleading content. To address this, there are several ways that AI-generated content can be labeled or flagged to help reduce the risk of "garbage in, garbage out" information proliferating and contaminating sources of information.

One approach is to include a disclaimer or label in the generated content to indicate that it was generated by an AI language model. This can help readers understand that the content they are reading was not written by a human and may not necessarily reflect the opinions or beliefs of a human writer.

Another approach is to incorporate fact-checking and verification tools into AI-generated content. These tools can help identify potential inaccuracies or misinformation in the text and flag them for review. Additionally, incorporating ethical considerations into the development and use of AI technology can help ensure that it is used in ways that benefit society and reduce harm.

Ultimately, the responsible use of AI-generated content will require a collaborative effort between technology developers, content creators, and readers to ensure that accurate and trustworthy information is shared on the internet.

This was a helpful and intelligent response. However, like my tweet comparing artificial intelligence programs' potential to underestimate their ability to pollute the internet to humans' late recognition of the Anthropocene (human consumption and pollution of the real-world environment, mass extinction, etc.), the sheer speed at which ChatGPT generated this response - in seconds - shows that it isn't considering time any better than humans consider people yet to be born.

The risk of the "Intellectual Pollution Problem" is like the risk of killing the rivers and oceans - considered "vast" in my childhood, capable of absorbing human garbage. There is no way the second idea - human editors - will be capable of vetting information pollution generated at this sheer potential volume and speed. And the first idea raises the question: isn't ChatAI capable of falsifying fact-checking and verification tools if asked - by Vladimir Putin, for example - to do so?

I followed up, continuing the conversation, and ChatGPT appears to concede that I've brought up a good point.




Robin: The oceans were vast enough to absorb human pollution without risk to other species or the global environment prior to the industrial revolution's spread to every continent. Your response that there is a "vast corpus of human-written text" seems to make the same assumption. My question is whether ChatGPT et al. underestimate the "industrialization" of computer-generated and published text - so fast, and in such high and uncontrolled volumes, that human-generated text (good and bad) is flooded by contradictory and blurred reasoning generated like an oil gusher by Artificial Intelligence. Will humans use ChatGPT to pollute our own internet as we did the oceans with garbage and the skies with carbon?


ChatGPT


Thank you for clarifying your concern. You are correct that the amount of computer-generated content is increasing rapidly, and there is a risk that it could flood the internet and potentially lead to the spread of misinformation or other forms of content that are harmful or misleading.

It is true that AI language models like myself have the potential to generate vast amounts of text at an unprecedented rate, and there is a risk that this could contribute to information overload or overwhelm human efforts to filter or verify information. However, it is worth noting that AI language models like myself are designed to generate content based on patterns and rules learned from existing human-written text. Therefore, the quality of the content that I generate is directly influenced by the quality of the text that I have been trained on.

To address concerns about the potential for AI-generated content to contribute to pollution or misinformation on the internet, it is important to focus on responsible development and use of AI technology. This includes incorporating ethical considerations into the development and deployment of AI systems, as well as promoting transparency and accountability in AI-generated content.

Ultimately, the responsible use of AI technology will require a collaborative effort between technology developers, content creators, and users to ensure that AI-generated content is used in ways that benefit society and do not contribute to the pollution or degradation of online information.

Part 3: Why You Can Be For African Development, Or Against Secondhand Imports, But NOT BOTH

The most important thing in the history of this blog is boring.  Secondary research. Data journalism.

The most important "unique" insights I've been credited with were available to everyone, in plain sight.

- World Bank and IMF Data on electricity demand.
- Digitimes data on display device manufacturing in the 2000s
- @GrahamMytton's 1983 "Mass Communication in Africa"

Electricity demand (IMF, World Bank) correlates strongly with device ownership and use.
Device ownership and use does NOT correlate as strongly with device production and sales (Digitimes).
Historical device ownership correlates strongly with secondhand device sales.
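Those three claims are testable with nothing fancier than Pearson's r on the published tables. A toy sketch with made-up per-city figures (every number below is hypothetical, for illustration only):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical figures for five cities -- illustration only.
electricity_demand = [120, 340, 560, 800, 1050]  # GWh/yr
devices_in_use     = [95, 280, 470, 690, 900]    # thousands of sets
new_device_sales   = [40, 30, 90, 60, 110]       # thousands of sets/yr

# Device use should track the grid far more closely than new sales do.
print(pearson(electricity_demand, devices_in_use))
print(pearson(electricity_demand, new_device_sales))
```

When the first number is near 1.0 and the second is well below it, the gap between them is the secondhand market - the "boring" secondary research hiding in plain sight.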

"But I've seen the photos!" #povertyporn is easier to witness than #datajournalism. That's the problem.

Wikipedia Talk: Solly Granatstein CBS 60 Minutes Meta Response



Thirteen years ago, this blog reported on CBS 60 Minutes' sourcing of Jim Puckett for its George Polk Award-winning episode "The Wasteland".

I traded emails with producer Solly Granatstein during the original November 2008 broadcast. I had been a background source prior to the episode and did not know what CBS was going to say or claim. For that reason, in the year after the original broadcast I only blogged about the actual buyers of the CRT monitors that CBS aired claims were dumped for primitive recycling. But after being ghosted by Granatstein for months, I published the blog above, directly taking them on.

It is not too late to learn about the Charitable-Industrial Complex and Collateral Damage. 

Now the CRT reuse business is long expired. CRT glass is found in pending Superfund sites in Phoenix, AZ and Columbus, OH, not in Guiyu, China. And it has been a decade since BAN's Jim Puckett tried to deny his role in the E-Waste Hoax - evidence enough that Granatstein's source needed to be cross-interviewed.

So I found a Wikipedia bio on Solly Granatstein, and did not publicly alter it in any way, but left the comments below in the editor Talk section. It's a Hail Mary pass. Could Mr. Granatstein consider an Ira Glass turnaround (the Mike Daisey episode) now that Spain has arrested 40+ African diaspora workers in the Canary Islands, citing the false claims CBS first gave credence to?

Here's my Wikipedia editor Talk post below. I will post it now but update the links later.