Do We Need a Trustpilot for Social Media – and What Would It Mean?

We have started laying the foundation for Trustpilot for Social Media.

The idea is simple: people should get better help understanding whether a website, link, post, or source seems trustworthy, questionable, or worth checking more carefully.

Today, we often meet information without any useful context. A link appears in a feed. A post gets shared. A website looks serious enough. A claim is repeated often enough. And suddenly, people are expected to decide for themselves whether it deserves trust.

That is not always easy.

A trust layer for the web

Trustpilot for Social Media is meant to become a kind of reputation layer for the web.

Not a system that decides truth for everyone. Not a censorship tool. Not an automatic judge.

More like a warning light.

If a site has a long history of misleading content, scams, conspiracy material, or other serious problems, the user should be able to see that before trusting it. If a source has a stronger reputation, that should also be visible. If nothing is known, the system should simply say that.

The browser could show a small label or icon when visiting a website. The user could click it to read more, report a page, or see whether others have flagged the same source.

That kind of context could help people slow down before sharing, reacting, or believing something too quickly.
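As a rough sketch of how such a warning light could work (everything here is hypothetical – the categories, the sample data, and the function name are illustrative, not an actual implementation), a lookup would map a source to a known reputation, or explicitly answer that nothing is known:

```python
# Hypothetical sketch of a reputation lookup. The statuses and the
# sample data are illustrative only, not a real reputation database.

REPUTATION_DB = {
    "example-scam.test": {"status": "warning", "reason": "history of scam reports"},
    "well-known-news.test": {"status": "trusted", "reason": "long positive track record"},
}

def lookup_reputation(domain: str) -> dict:
    """Return a reputation signal, or say explicitly that nothing is known."""
    entry = REPUTATION_DB.get(domain)
    if entry is None:
        # "Unknown" is a valid answer: the system should not guess.
        return {"status": "unknown", "reason": "no reports on record"}
    return entry
```

The important design point is the last branch: the absence of reports is reported as exactly that, rather than being silently treated as either trust or suspicion.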

But reputation systems are risky

A system like this also comes with problems.

A report is not the same thing as a fact. People can misunderstand things. They can also abuse reporting systems on purpose. Competitors, political groups, trolls, and angry users could all try to damage someone else’s reputation.

That means reports must be handled carefully.

A single report should not become a public warning. Community signals should not automatically become truth. Admin review, correction options, and transparency need to be built in from the start.

The system should help people think, not tell them what to think.

GDPR cannot be an afterthought

There is also a privacy side to this.

Reporting a website is one thing. Reporting a person, a social media profile, a comment, or behavior connected to an account is something else.

That can become personal data very quickly.

Because of that, GDPR has to be part of the design from the beginning. The system should collect as little data as possible, avoid storing unnecessary personal details, and clearly separate website reputation from anything related to individuals.

Users must understand what happens when they report something. Is the report private? Can it become part of community data? Will an admin review it? Could AI be used to help analyze it? Can it be corrected or removed later?

Those answers need to be clear.

A reputation system without a correction process is dangerous. Websites can improve. Reports can be wrong. Context can change. People must be able to challenge, correct, or remove bad information where appropriate.

AI can help, but should not decide

AI can be useful in a project like this. It can help summarize reports, compare sources, detect patterns, and support fact-checking.

But AI should not become the final judge.

If AI is used, it should be clearly marked as support. It should also be optional, because every AI-assisted check costs money and may involve sensitive context.

The default should be cautious and privacy-friendly.

Starting small

The first version should not try to classify the entire internet.

A better start is to support moderation and basic source reputation. For example, helping admins understand whether a user, comment, post, or link needs a closer look before approval.

From there, the system can grow into browser warnings, community reports, public reputation pages, and deeper fact-checking tools.

So, do we need it?

Probably.

The web has a trust problem. People are constantly asked to judge sources, claims, links, and posts without enough context.

A careful reputation layer could help.

But it has to be built with limits. It needs transparency, privacy, GDPR-aware design, human review, and a way to fix mistakes.

The goal is not to control what people read.

The goal is to help people understand what they are looking at before they trust it.

Fact-checking Tools for Chrome

We are continuing to develop Tornevall Networks Toolbox for Social Medias with a clear focus on fact-checking.

The idea is simple: to gather information from established fact-checking organizations such as Snopes, Källkritikbyrån, Motargument and others, and use that to create a clearer picture of what is actually accurate in what we see online.

Rather than pointing out individual posts, the goal is to improve the overall understanding of how information spreads. By following different sources over time, it becomes easier to see how topics change, grow, or shift direction.

This is especially relevant during election periods, when the amount of information increases and it becomes harder to separate facts from misleading claims.

By combining multiple sources, we want to make it easier to see the bigger picture without relying on any single actor.

This is still a work in progress, but the ambition is straightforward: to make it easier to understand the flow of information online.

If you have suggestions for fact-checkers we should include, feel free to reach out.


Things are moving again

It has been a while since there was any real movement across the wider Tornevall Networks ecosystem.

That was not because everything had stopped. Most of it kept running just fine. But like many privately maintained projects, a lot of ideas ended up sitting in the background for far too long simply because life, time, and energy had to go elsewhere.

That has started to change.

Over the last few weeks, several parts of the platform have begun moving again – not just in maintenance terms, but in actual development. Some older services are being cleaned up, some tools are being rebuilt properly, and a few things that had been sitting half-finished for too long are finally getting the attention they should have had earlier.

One of the biggest shifts is happening around tools.tornevall.net, where a larger rebuild has made it possible to modernize parts of the ecosystem that had become too slow, too fragmented, or simply too outdated to keep patching forever. DNS-related tooling is being refreshed, documentation is being brought closer to reality, and a number of internal and public-facing interfaces are becoming more usable than before.

This also connects with changes already visible on the site. SocialGPT marked one kind of step forward. The ongoing DNSBL removal rebuild marks another. Older infrastructure is not being thrown away for the sake of it, but where something needs a cleaner structure, it is now being rebuilt with that in mind.

So while this is not a grand relaunch of everything at once, it is a very real shift in direction.

The platform is active again. Development is active again. And several long-running ideas are finally starting to look like real, usable systems instead of permanent work in progress.

Current areas of focus include

  • Rebuilding and modernizing tools.tornevall.net
  • Refreshing DNS-related tooling and removal workflows
  • Cleaning up older services and legacy structure
  • Improving documentation so it better reflects reality
  • Making internal and public-facing interfaces more usable
  • Bringing long-running ideas closer to fully usable systems
  • And much, much more not even written down yet

DNSBL Removal Tool Upgrade in Progress

The DNSBL and FraudBL page is currently undergoing an upgrade as part of a broader rebuild of our DNS-related tooling. The removal functionality applies to entries listed in DNSBL and FraudBL, which were long handled through a traditional database but have for a while been managed via direct access to the underlying zone files.

Since that setup no longer works properly, the removal service is being modernized and restructured to improve reliability, security, and long-term maintainability. During this period, the existing web-based removal interface is partially offline (removal requests have been reported as dysfunctional).

The rebuilt system will introduce a cleaner separation between web tools and API functionality. Removal requests will be handled through a dedicated API endpoint available at https://tools.tornevall.net, allowing for more predictable behavior and better automation support. The DNSBL plugin for WordPress will also be upgraded and refreshed.

The upcoming implementation focuses on proper CIDR handling, accurate single-IP removals, and support for server-side usage through a CLI endpoint. Access to CLI functionality will require a manually generated token to ensure controlled and auditable use.
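As an illustration of what proper CIDR handling involves (a sketch using Python's standard `ipaddress` module; the function name is hypothetical and not the actual removal API), a removal request needs to distinguish a single-IP removal from a whole network, and normalize the input before any zone data is touched:

```python
import ipaddress

def parse_removal_request(value: str):
    """Classify a removal request as a single IP or a CIDR network.

    Returns a tuple (kind, normalized) where kind is 'ip' or 'cidr'.
    Raises ValueError for malformed input.
    """
    if "/" in value:
        # strict=False normalizes host bits, e.g. 192.0.2.1/24 -> 192.0.2.0/24
        network = ipaddress.ip_network(value, strict=False)
        if network.num_addresses == 1:
            # A /32 (or /128) is really a single-IP removal.
            return ("ip", str(network.network_address))
        return ("cidr", str(network))
    return ("ip", str(ipaddress.ip_address(value)))
```

Normalizing early like this keeps the difference between "remove one address" and "remove an entire range" explicit before anything reaches the backend.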

The web interface will return in a new form, protected by modern verification mechanisms such as Captchas and Cloudflare Turnstile. The goal is to reduce abuse while keeping legitimate self-service removals straightforward.

Why the Page Is Changing

Earlier versions of this page relied on legacy components that no longer met technical or security requirements. Rather than patching outdated functionality, the decision was made to rebuild the removal system and related DNS tools as a coherent package.

The new solution will be delivered together with updated DNS utilities and an improved DNSBL Removal Kit, replacing older integrations.

About Availability and Support

The site-based tools are maintained as a self-service resource. Response times and availability may vary due to private-life constraints. For this reason, the tooling is designed to minimize the need for manual intervention wherever possible.

For common questions and background information, please refer to our documentation and/or FAQ.

The site is privately maintained, not owned by any company or organization, and operates without commercial funding. All development and maintenance are done in spare time.

If you wish to support continued development, optional donation alternatives are available on the support page.

Current Status

The removal tool is under active redevelopment and will return as part of a consolidated DNS toolset with a fully functional DNSBL removal workflow.

Introducing SocialGPT

SocialGPT is a lightweight but powerful Chrome extension that integrates ChatGPT directly into your social media experience. For years, I’ve wanted a tool that would let me write smarter, sharper, more context-aware replies without opening new tabs or juggling windows. Every time I needed to draft a rebuttal or clarify a point, I wished for something embedded – something right there on the page.

Now it exists.

With SocialGPT, you can mark comment threads, automatically pull their content into an in-page editor panel, and generate AI-assisted replies in seconds – no reloads, no switching, no bullshit.

Source code: Bitbucket (GitHub mirror)

Key Features

  • Context Marking – Highlight any number of elements in a thread to build a structured conversation context with block indexes (like [1], [2], etc).
  • Floating Reply Panel – A modal-less in-page editor where you can:
    • choose tone (e.g. cynical, friendly, brutally honest)
    • select response length (short, micro, extended)
    • switch models (GPT-4o, GPT-4, o3-mini)
    • input modifiers and custom instructions
  • Facebook-Aware – Automatically detects your profile name and injects it into the prompt for authentic replies.
  • Right-Click Access – Mark content or open the reply interface with a right-click.
  • Mark Mode Toggle – One-click switch to enable or disable GPT reading mode.
  • Response Modification – Use the Modify button to rework or fine-tune previous replies with new tone, instructions or shortened versions. This is especially useful after generation, since context and prompt fields are cleared upon reply.
  • Visual Loader – Subtle spinning loader shows when ChatGPT is generating content.
  • Fact Check Reminder – Prompts include reminders to validate and cross-reference controversial claims or disputed data before producing a final draft. Designed to prevent regurgitation of unchecked social media noise.
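The context-marking idea above can be sketched roughly like this (a simplified Python illustration rather than the extension's actual JavaScript; the function name is hypothetical): marked elements are collected in order and rendered as indexed blocks that a prompt can refer back to:

```python
def build_context(marked_texts):
    """Join marked comment texts into an indexed context block.

    Each marked element becomes a numbered block such as [1], [2], ...
    so a generated reply can reference specific parts of the thread.
    """
    lines = []
    for i, text in enumerate(marked_texts, start=1):
        lines.append(f"[{i}] {text.strip()}")
    return "\n".join(lines)
```

A prompt built this way can then say things like "rebut the claim in [2]" without pasting the whole thread again.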

Tone Profiles

Organized into four categories:

  • Objective & Informative – neutral and formal, fact-based and concise, academic and precise, analytical and critical
  • Confrontational & Direct – critical and direct, cynical and sharp, aggressive and unapologetic, brutally honest
  • Satirical & Sarcastic – sarcastic and dry, snarky and dismissive, satirical and ironic, witty and clever
  • Approachable & Light – friendly and casual, conversational and soft

Ideal Use Cases

  • Rebuttals in comment sections
  • High-speed debate replies
  • Satirical or snarky thread injections
  • Public moderation with edge
  • Clarifying academic-style posts

Requirements

  • OpenAI API key (GPT-4 or GPT-4o recommended)
  • Chrome browser with extensions enabled

Created by Thomas Tornevall for real-world online interaction. Feedback and pull requests are welcome at either GitHub or Bitbucket.

Stay sharp – speak smart – strike fast.

Tornevall Networks changing some structure just slightly – The birth of Sonic Syndicate

We are still here! But there have been quite a few changes recently, both major and minor. NetCurl, which has long been a key feature of this website (frequently highlighted), reached its peak with the release of version 6.1, which introduced a range of improvements and functional updates. Earlier versions like 6.0.26 played some role, but the main focus was on refining and stabilizing the 6.1 series framework due to 6.0’s structurally poor and non-PSR-compliant foundation. The underlying TorneLIB framework was particularly affected, resulting in a chaotic structure.

In terms of marketing, NetCurl was never particularly prominent, despite being introduced via network components in certain Resurs Bank plugins (notably in version 1 of the ecomphp project, which initially required support for both SOAP and REST). That project also transitioned to a REST-only approach. Development of ECom2 began in early May 2022, rendering NetCurl largely obsolete. ECom2 includes its own integration inspired by NetCurl but designed to operate without external dependencies.

As stated: what was relevant then may no longer be. NetCurl’s domain has been relinquished and the project is no longer actively developed. The most active phase occurred in October 2021, followed by another 20 or so commits through spring 2022, during the era of PHP 7.x. On March 27, 2025, the commit message “netcurl is dead, fraudbl is new” appeared, intended as a nod to the preexisting fraudbl rather than indicating a shift. No further development is planned. However, NetCurl remains available as an LTS-based product and is still occasionally maintained, especially in connection with ecomphp.

Why the shift? Life circumstances evolve, and time for personal projects must be reevaluated. NetCurl was created during a time of ample free time. Today, the focus has shifted, and while development continues, it happens on a smaller scale. In addition, more energy is now being directed toward music (Thomas Tornevall’s second hobby), limiting the ability to maintain codebases as before.

But enough about that!

The platform “Sonic Syndicate” (https://artists.tornevall.net/) has been launched to support artists in need of a website presence. It is especially useful for streamlining the Spotify artist verification process.

And what about everything else? Well, most services are still running just fine. Tornevall Networks’ DNSBL is not currently under active development, as it remains functionally valid and stable. There are no plans to decommission it, and the fraudbl project continues in parallel. Mail services are fully operational, and there’s no intention to discontinue them. Since these systems are stable, they’re here to stay. The website will likely see further updates in the future – perhaps even more regularly, given the growing involvement in the music scene.

Preparing for netcurl 6.1.5

We’ve been holding off on netcurl 6.1.5 for a while now, since the package has been a bit too small to be considered release-worthy. However, the last few days changed this, and two bigger features have now been completed: SoapClient timeouts and the ability to disable SSL certificate verification (which has been planned for a long time).

SSL Verification becomes configurable

Configurable verification has been added because requests to self-signed sites are sometimes necessary, while netcurl’s security defaults never allow self-signed or invalid SSL certificates. This release also contains a fix that makes the stream wrapper handle errors better, since we need to be able to catch SSL request exceptions.
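In generic terms (a Python sketch of the concept, not netcurl's PHP API), making verification configurable means the secure default stays on unless the caller explicitly relaxes it:

```python
import ssl

def make_ssl_context(verify: bool = True) -> ssl.SSLContext:
    """Build an SSL context; certificate verification stays on by default.

    Disabling verification is an explicit opt-in, e.g. for self-signed
    certificates on hosts the caller already trusts.
    """
    context = ssl.create_default_context()
    if not verify:
        context.check_hostname = False
        context.verify_mode = ssl.CERT_NONE
    return context
```

The point is the default: nothing about a request becomes less safe unless the integrator has deliberately asked for it.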

SoapClient Timeouts

The SoapClient timeout handling has only recently become a problem. When a site times out during the connection or the response, the SoapWrapper (and actually SoapClient itself) has never been able to determine whether an exception was thrown because of a site error or a timeout – except through the error message.

With the new fix, a different exception will be produced under very specific circumstances: if the SoapClient throws an exception with error code 2 (E_WARNING), and the initial request time has exceeded the timeout configuration (from WrapperConfig), the exception is considered a timeout rather than a code 2 error. In such cases, the exception will be rethrown with code 1015 instead, while keeping the error message produced during the SOAP request.

const LIB_NETCURL_SOAP_TIMEOUT = 1015;
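The decision rule can be expressed like this (a Python sketch of the logic described above, not netcurl's actual PHP code; the function name is hypothetical):

```python
LIB_NETCURL_SOAP_TIMEOUT = 1015  # the constant from the release above
E_WARNING = 2  # PHP's E_WARNING error level

def effective_soap_error_code(code, elapsed_seconds, timeout_seconds):
    """Reclassify an E_WARNING-level SOAP error as a timeout.

    When the exception carries code 2 (E_WARNING) and the request ran
    longer than the configured timeout, report 1015 instead; the original
    error message is kept either way, only the code changes.
    """
    if code == E_WARNING and elapsed_seconds > timeout_seconds:
        return LIB_NETCURL_SOAP_TIMEOUT
    return code
```

This keeps the heuristic narrow: a slow request that still succeeded, or a warning that occurred within the timeout window, is left untouched.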

A major update for DOM Documents

DOMDocument handling has always been of interest for netcurl, especially since we use those features to fetch data for RSS feeds. The most recent fixes for handling DOM data with netcurl will now become available. However, a continued integration with Laminas may follow here, to make requests even more stable.

Below, you can see what’s been updated.

    Release Notes - PHP_NETCURL - Version 6.1.5
  • [NETCURL-338] – Docblock classes are not properly defined
  • [NETCURL-330] – Allow manipulation of SSL Verification settings
  • [NETCURL-335] – PHP 8.1 Tests
  • [NETCURL-339] – Use xpath to fetch rendered elements
  • [NETCURL-340] – xpath automation of the otherwise manual handling
  • [NETCURL-341] – Separate DOMHandler from GenericParser
  • [NETCURL-343] – Try to verify soap timeouts.
  • [NETCURL-345] – Remove SSL verification configuration for older PHP

RSS Feed is no longer in beta state

Documentation about the /rss resource can now be found here!

Tornevall Networks has, for a while now, been running an RSS feed agent in a kind of beta mode. This period has been used to make sure that the services are really running properly, fetching data, and providing correct data in the feed. So far, the flaws have been very few, so the service is no longer considered to be in beta.

We are also monitoring RSS feeds, which may seem a bit counterproductive. However, there are purposes for this too. Among other things, it makes it possible for us to get a compiled view of multiple sites that handle the same kinds of categories. For example, if you have several RSS feeds for Marvel content, this also makes it possible to merge those feeds into one.
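Merging several feeds of the same category can be sketched like this (a simplified illustration; the real agent's internals are not shown here): entries from all feeds are combined, deduplicated by link, and sorted by publication time:

```python
def merge_feeds(feeds):
    """Merge several feeds (lists of entry dicts) into one.

    Entries are deduplicated by their 'link' and sorted newest-first
    by 'published' (assumed to be a sortable timestamp).
    """
    seen = set()
    merged = []
    for feed in feeds:
        for entry in feed:
            if entry["link"] not in seen:
                seen.add(entry["link"])
                merged.append(entry)
    merged.sort(key=lambda e: e["published"], reverse=True)
    return merged
```

Deduplicating by link matters when several monitored sites syndicate the same article, so the compiled view does not repeat it.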

Furthermore, we also monitor sites where no RSS feeds are available. You can read about the feeder above, at the first link. We also keep track of the feed agents that help collect data, and their status, at https://auth.tornevall.net/portal/.

To get a full list of currently available RSS feeds, you can read it as JSON directly from the URL below. If you want to add a new feed to this monitor, feel free to contact us via the Contact page.

https://tools.tornevall.net/api/rss