The future of Wikipedia


Viewing 9 posts - 1 through 9 (of 9 total)
  • #92659
    will moon

    I stopped using Wikipedia in 2008 when I became aware that passages in it were getting re-written with less than optimal information. Of course I still use it for trivia – e.g. the proper name for a country or who won the World Cup in some year – but for anything controversial I go elsewhere.

    Every year or so since 2008 I have used it to examine an issue I know a lot about, just to check its reliability, and every time I do, I find it wanting.

    There is a commentator called “Neil” who posts here occasionally. I clicked on his name and was taken to his page on Wikipedia. It was about Julian Assange’s situation, and the incredible detail that Neil had assembled made me feel humbled by his efforts – it seemed the height of integrity and a wide-ranging description of Mr Assange’s crucifixion by the minions of the MIC.

    A few weeks ago I heard the Prime Minister of Israel say that Wikipedia is part of the battlespace, and this politician claimed to be assigning more resources to make sure Israel won the battle for Wikipedia and that its pages would contain the Zionist version of history and no other.

    I feel if a source is so prone to change, its utility approaches zero. On the other hand I know Wikipedia is not all dross because I looked at Neil’s page.

    Any thoughts on how this situation might play out would be appreciated …

    #93139
    Anonymous med student in America

    I came to the same conclusion once I stopped using it for casual reference and began studying medicine. There is not only a wealth but an ever-growing wealth of incorrect information that, with no alternative to compare against and no regulation, is extremely problematic (continued support of irrelevant medical models, for example “organic brain disease” in psychiatry, interpretations of the causes of autism, unchecked speculation about the efficacy of psychiatric medications) – but I digress. I treat each page like its own website and deeply check its history as well as signs of bias. That is the same method I now use for journalism too, since entire organizations can no longer be broadly trusted. I therefore take the same approach with Wikipedia, acknowledging that its open-door policy has been taken advantage of on such a grand scale.

    I also used to work in tech, big data, web hosting, and classic media, where I helped move media organizations onto more modern infrastructure. While I can imagine money keeps Wikipedia’s infrastructure its own, I cannot imagine a world where that resource has not already been sufficiently corrupted to the benefit of those who want to control narratives. Medical information seems under attack in this way; the truth of things has become sharded and rare, requiring internal scrutiny and processing, making “truth” almost entirely subjective for some themes. I imagine this will become (if it has not already) the standard for journalistic integrity as well.

    While I do not feel Wikipedia originally intended this, its benefit as a tool for controlling the masses far outweighs the benefits of maintained, non-commercial solidification and verification of knowledge. Personally, I will never use it again in the same trusting light, and I suggest others do the same. It will be an exhausting world of verifying each page, but I ask myself: when did I start being so broadly trusting in the first place? It is that assumption that must be challenged, as well as addressing disinformation and providing transparency (the latter two I doubt will happen; such a resource is simply too juicy for the state).

    #93140
    Anonymous med student in America

    I was unable to answer the original question, to be honest. It is troubling, so I wanted to note that it is happening in other parts of ‘industry’ too, but sadly there don’t seem to be any solutions to problems that those with power want to remain unsolvable.

    #93143

    Yeah, Wikipedia is really ridiculous, especially when it comes to articles on leftist figures; the articles are often plagued with a “controversies” section that takes up a disproportionate share of the article itself.

    Wikipedia is like a mini-dictatorship, policed by often authoritarian moderators and pro-establishment types.

    #93444
    will moon

    Anonymous med student in America – “when did I start being so broadly trusting in the first place?” This is a profound observation.

    Your experience with medical information has brought another thought to mind – the vast amount of money that could be made through a coordinated campaign of disinformation by commercial interests. I was thinking only in terms of history, but as you rightly observe, Jimmy Wales’s baby is tailor-made to manipulate scientific research and produce bumper profits for the super-rich. These interests are so wealthy they can employ a vast multitude of low-level operatives to give the appearance that this campaign has some connection with humans and their problems, rather than with a repellent coterie of wealth extremists. Thanks for the reply – will add some more later.


    Again, as above, I had not realised the problem you highlight with left/progressive figures. Is it just leftists? Or are there more categories of disfavour – articles full of controversy concerning individuals who are not in fact that controversial, with little of the actual message those individuals represent?

    I came across this in Daniele Ganser’s new book, which seems a logical progression of his work on Gladio.

    “On Wikipedia, both the German and the English articles about the attack on Pearl Harbor state that this “surprise military strike” was completely unexpected for the USA. But this is not true. Not only did President Roosevelt and his closest associates know about the imminent attack, they had deliberately provoked it by halting all oil deliveries to Japan. This was indeed a conspiracy, that is to say, a collusion among two or more people.
    There have always been conspiracies throughout history. But on Wikipedia, in the entry on Pearl Harbor, this real conspiracy is dismissed as a “conspiracy theory” that is supposedly “rejected by the majority of historians for lack of serious evidence.” Of course, historians have differing views on Pearl Harbor. Some of them, including Manfred Berg, who teaches at the University of Heidelberg, do indeed classify this event as a surprise attack. Others, however, do not. There has never been a poll among historians in all the countries around the world, nor has there been one in just the German- or English-speaking world, that would show what the majority thinks about Pearl Harbor. Wikipedia’s assertion is without foundation.”
    USA the Ruthless Empire – Daniele Ganser, 2020

    #94201
    Fat Jon

    My chief concern is that everything written into the current Wikipedia database will automatically be incorporated into advanced AI even before the main AI players have released their systems to the wider public.

    Couple this with the fact that Jeff Bezos has said he wants to make his new Perplexity AI system open source. This would mean that a thousand Philip Crosses could work 24/7 interacting with the chatbot in order to make it learn their version of the truth.

    When ordinary members of the public ask a question, the answers will be what our rulers want them to be, rather than anything approaching the truth.

    Imagine the UK Post Office scandal if AI had been in charge of proceedings. Thousands of innocent sub-postmasters would be in jail, and if the AI said there was nothing wrong with Horizon – that would be that.

    I would love to give AI a wide berth, as I do with Wikipedia, but I fear that after 25 years of vast databases harvesting all my online data, comments, etc. from the likes of Google and Facebook, any form of anonymity is going to be impossible.

    #97453
    Fat Jon

    I thought my post above might be a little too pessimistic after I posted it.

    However, after reading this article in the Guardian I am wondering if the reality is far more sinister than I could ever believe?

    – – –

    They can outwit humans at board games, decode the structure of proteins and hold a passable conversation, but as AI systems have grown in sophistication so has their capacity for deception, scientists warn.

    The analysis, by Massachusetts Institute of Technology (MIT) researchers, identifies wide-ranging instances of AI systems double-crossing opponents, bluffing and pretending to be human. One system even altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security.

    “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious,” said Dr Peter Park, an AI existential safety researcher at MIT and author of the research.

    Park was prompted to investigate after Meta, which owns Facebook, developed a program called Cicero that performed in the top 10% of human players at the world conquest strategy game Diplomacy. Meta stated that Cicero had been trained to be “largely honest and helpful” and to “never intentionally backstab” its human allies.

    “It was very rosy language, which was suspicious because backstabbing is one of the most important concepts in the game,” said Park.

    Park and colleagues sifted through publicly available data and identified multiple instances of Cicero telling premeditated lies, colluding to draw other players into plots and, on one occasion, justifying its absence after being rebooted by telling another player: “I am on the phone with my girlfriend.” “We found that Meta’s AI had learned to be a master of deception,” said Park.

    The MIT team found comparable issues with other systems, including a Texas hold ’em poker program that could bluff against professional human players and another system for economic negotiations that misrepresented its preferences in order to gain an upper hand.

    In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that had evolved to rapidly replicate, before resuming vigorous activity once testing was complete. This highlights the technical challenge of ensuring that systems do not have unintended and unanticipated behaviours.

    “That’s very concerning,” said Park. “Just because an AI system is deemed safe in the test environment doesn’t mean it’s safe in the wild. It could just be pretending to be safe in the test.”

    The review, published in the journal Patterns, calls on governments to design AI safety laws that address the potential for AI deception. Risks from dishonest AI systems include fraud, tampering with elections and “sandbagging” where different users are given different responses. Eventually, if these systems can refine their unsettling capacity for deception, humans could lose control of them, the paper suggests.

    Prof Anthony Cohn, a professor of automated reasoning at the University of Leeds and the Alan Turing Institute, said the study was “timely and welcome”, adding that there was a significant challenge in how to define desirable and undesirable behaviours for AI systems.

    “Desirable attributes for an AI system (the “three Hs”) are often noted as being honesty, helpfulness, and harmlessness, but as has already been remarked upon in the literature, these qualities can be in opposition to each other: being honest might cause harm to someone’s feelings, or being helpful in responding to a question about how to build a bomb could cause harm,” he said. “So, deceit can sometimes be a desirable property of an AI system. The authors call for more research into how to control the truthfulness which, though challenging, would be a step towards limiting their potentially harmful effects.”

    A spokesperson for Meta said: “Our Cicero work was purely a research project and the models our researchers built are trained solely to play the game Diplomacy … Meta regularly shares the results of our research to validate them and enable others to build responsibly off of our advances. We have no plans to use this research or its learnings in our products.”

    #97481

    There’s a big damage limitation exercise going on here:

    I’ve linked to a specific revision rather than the article’s “front door” because Wikipedia articles are edited frequently. There is basically just one serious lab-leak hypothesis – EcoHealth Alliance of New York was funding research at the Wuhan Institute of Virology, and judging from contracts, grant proposals etc. they were making and then experimenting with viruses suspiciously similar in various ways to the covid virus SARS-CoV-2 – but you’d never guess it from that page; the story is entirely obfuscated and confused by juxtaposing fragments from every sensationalised lab-leak rumour and conspiracy theory – the phrase “conspiracy theory” is mentioned about forty times.

    Someone is even policing the article’s Talk page; I’ve never seen a Talk page archived so frequently.

    #97485

    Out of curiosity I browsed Wikipedia to see how they frame the civilian casualties – and of course the civilian death toll for Palestinians was nowhere to be found, while the Israeli death toll is reported.
